🏡


to read (pdf)

  1. EXHIB: A Benchmark for Realistic and Diverse Evaluation of Function Similarity in the Wild
  2. Neobrutalism components - Start making neobrutalism layouts today
  3. Debunking zswap and zram myths
  4. Building a Pipeline for Agentic Malware Analysis | Tim Blazytko
  5. Study of Binaries Created with Rust through Reverse Engineering - JPCERT/CC Eyes | JPCERT Coordination Center official Blog

  1. April 09, 2026
    1. 🔗 r/Leeds Slimy facial recognition van set up on New York Street rss

      Disgusting to see. When did such surveillance tools and technology become so normalised?

      Not gonna claim this area doesn't have its issues, it sure does. But lazily plopping a facial recognition van here is absolutely not the way to deal with this. It's the same characters round here committing most of the crimes, all it takes is a bit of actual policing. Not this gross overreach.

      A little afraid I'm gonna find out in the comments that most people are fine to see this

      submitted by /u/semaphoreslimshady42
      [link] [comments]

    2. 🔗 r/reverseengineering Porting Mac OS X to the Nintendo Wii rss
    3. 🔗 r/york The York food waste "saints" who feed city's children rss

      submitted by /u/Kagedeah

    4. 🔗 r/Yorkshire 19 new family hubs open in Yorkshire and the Humber to support parents rss

      More families are set to benefit from extra support on their doorstep. Nineteen new Best Start Family Hubs have opened across Yorkshire and the Humber, forming part of a new national network to support parents and children from pregnancy through to school age. The hubs offer free stay and play sessions, infant feeding advice, parenting support, and early help for children with special educational needs and disabilities (SEND), aiming to provide easier access to vital services under one roof.

      Hubs are designed to help families save up to £200 per year by reducing the need for private classes and offering free services such as stay and play. Additional support on debt and welfare is also available. Each hub includes outreach workers and practitioners to help identify and address additional needs early, ensuring families are not bounced between different services. Healthy Babies support will be integrated into the hubs, including midwifery, health visiting, infant feeding, and perinatal mental health support.

      The government plans to deliver up to 2,000 satellite locations by the end of 2028, offering family services from health centres, libraries, leisure centres, and churches to improve access. Across Yorkshire and the Humber, newly opened hubs include New Bewerley Children's Centre Family Services in Leeds, sites across Kirklees, 12 locations in East Riding of Yorkshire, and Scarborough Library. More than 200 hubs are now open nationally, with the government pledging to have 1,000 in operation by the end of 2028.

      submitted by /u/coffeewalnut08

    5. 🔗 r/wiesbaden Jetzt auch in Wiesi rss
    6. 🔗 pydantic/monty v0.0.10 - 2026-04-09 release

      What's Changed

      New Contributors

      Full Changelog : v0.0.9...v0.0.10

    7. 🔗 r/Yorkshire Gannet launching gracefully from a rock at RSPB Bempton Cliffs rss
    8. 🔗 r/Leeds Recruitment agencies that aggressively help you find a role? rss

      I haven’t found my niche yet and am looking to try new things, so temporary contract work would be good for me atm. I have some qualifications and experience in a professional setting, but I'm awful at interviews and not getting enough of them to improve. I am currently feeling the pressure to find something quickly and was hoping someone could help with the names of some recruitment firms that secure interviews on your behalf… Cheers.

      submitted by /u/becordisman00

    9. 🔗 r/LocalLLaMA It's insane how lobotomized Opus 4.6 is right now. Even Gemma 4 31B UD IQ3 XXS beat it on the carwash test on my 5070 TI. rss
    10. 🔗 Console.dev newsletter Skir rss

      Description: Declarative language for types & APIs.

      What we like: Define the schema then generate idiomatic, type-safe code (Python, Go, Java, Rust, TS, etc). Dynamic serialization options e.g. binary or JSON. VS Code extension & LSP tooling. Supports backwards compatibility when changing schemas.

      What we dislike: Compile-time errors if you add a field but don't regenerate the code. This is the opposite of protobuf, which handles it by defaulting values. This may be better, depending on your philosophy.

    11. 🔗 Console.dev newsletter Atomic rss

      Description: Knowledge graph.

      What we like: Local, open source, markdown-based knowledge base. Auto-generates wiki articles from tags. Built-in semantic search with vector embeddings. Supports different types e.g. markdown notes, web clips, articles. Supports MCP. Can be an HTTP server without a desktop UI.

      What we dislike: Everything is self-hosted, so you need to do some setup work. This is also a benefit.

  2. April 08, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-08 rss

      IDA Plugin Updates on 2026-04-08

      Activity:

      • python-elpida_core.py
        • 8dc18b89: feat: remove code_edit exclusion from Stage 2 — the system speaks its…
        • a9aec531: fix: desperation guard — cycle 52 heartbeat is run-complete, not crisis
        • b8154d26: feat: Parliament rebalancing — evidence-based provider reassignment
    2. 🔗 r/Yorkshire Can’t have anything these days rss
    3. 🔗 Simon Willison Meta's new model is Muse Spark, and meta.ai chat has some interesting tools rss

      Meta announced Muse Spark today, their first model release since Llama 4 almost exactly a year ago. It's hosted, not open weights, and the API is currently "a private API preview to select users", but you can try it out today on meta.ai (Facebook or Instagram login required).

      Meta's self-reported benchmarks show it competitive with Opus 4.6, Gemini 3.1 Pro, and GPT 5.4 on selected benchmarks, though notably behind on Terminal-Bench 2.0. Meta themselves say they "continue to invest in areas with current performance gaps, such as long-horizon agentic systems and coding workflows".

      The model is exposed as two different modes on meta.ai - "Instant" and "Thinking". Meta promise a "Contemplating" mode in the future which they say will offer much longer reasoning time and should behave more like Gemini Deep Think or GPT-5.4 Pro.

      A couple of pelicans

      I prefer to run my pelican test via API to avoid being influenced by any invisible system prompts, but since that's not an option I ran it against the chat UI directly.

      Here's the pelican I got for "Instant":

      This is a pretty basic pelican. The bicycle is mangled, the pelican itself has a rectangular beak albeit with a hint of pouch curve below it. Not a very good one.

      And this one for "Thinking":

      Much better. Clearly a pelican. Bicycle is the correct shape. Pelican is wearing a blue cycling helmet (albeit badly rendered). Not a bad job at all.

      Both SVGs were rendered inline by the Meta AI interface. Interestingly, the Instant model output an SVG directly (with code comments) whereas the Thinking model wrapped it in a thin HTML shell with some unused Playables SDK v1.0.0 JavaScript libraries.

      Which got me curious...

      Poking around with tools

      Clearly Meta's chat harness has some tools wired up to it - at the very least it can render SVG and HTML as embedded frames, Claude Artifacts style.

      But what else can it do?

      I asked it:

      what tools do you have access to?

      And then:

      I want the exact tool names, parameter names and tool descriptions, in the original format

      It spat out detailed descriptions of 16 different tools. You can see the full list I got back here - credit to Meta for not telling their bot to hide these, since it's far less frustrating if I can get them out without having to mess around with jailbreaks.

      Here are highlights derived from that response:

      • Browse and search. browser.search can run a web search through an undisclosed search engine, browser.open can load the full page from one of those search results and browser.find can run pattern matches against the returned page content.

      • Meta content search. meta_1p.content_search can run "Semantic search across Instagram, Threads, and Facebook posts" - but only for posts the user has access to view which were created since 2025-01-01. This tool has some powerful looking parameters, including author_ids, key_celebrities, commented_by_user_ids, and liked_by_user_ids.

      • "Catalog search" - meta_1p.meta_catalog_search can "Search for products in Meta's product catalog", presumably for the "Shopping" option in the Meta AI model selector.

      • Image generation. media.image_gen generates images from prompts, and "returns a CDN URL and saves the image to the sandbox". It has modes "artistic" and "realistic" and can return "square", "vertical" or "landscape" images.

      • container.python_execution - yes! It's Code Interpreter, my favourite feature of both ChatGPT and Claude.

        Execute Python code in a remote sandbox environment. Python 3.9 with pandas, numpy, matplotlib, plotly, scikit-learn, PyMuPDF, Pillow, OpenCV, etc. Files persist at /mnt/data/.

        Python 3.9 is EOL these days but the library collection looks useful.

        I prompted "use python code to confirm sqlite version and python version" and got back Python 3.9.25 and SQLite 3.34.1 (from January 2021).

      • container.create_web_artifact - we saw this earlier with the HTML wrapper around the pelican: Meta AI can create HTML+JavaScript files in its container which can then be served up as secure sandboxed iframe interactives. "Set kind to html for websites/apps or svg for vector graphics."

      • container.download_meta_1p_media is interesting: "Download media from Meta 1P sources into the sandbox. Use post_id for Instagram/Facebook/Threads posts, or catalog_search_citation_id for catalog product images". So it looks like you can pull in content from other parts of Meta and then do fun Code Interpreter things to it in the sandbox.

      • container.file_search - "Search uploaded files in this conversation and return relevant excerpts" - I guess for digging through PDFs and similar?

      • Tools for editing files in the container - container.view, container.insert (with new_str and insert_line), container.str_replace. These look similar to Claude's text editor tool commands, which are becoming a common pattern across file-equipped agent harnesses.

      • container.visual_grounding - see below, this one is fun.

      • subagents.spawn_agent - the sub-agent as a tool pattern. "Spawn an independent sub-agent for research, analysis, or delegation. It returns its final text response."

      • third_party.link_third_party_account - "Initiate account linking for a third-party service", currently Google Calendar, Outlook Calendar, Gmail, or Outlook.
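      The container.python_execution sandbox is easy to probe from a prompt. A minimal sketch of what the "confirm sqlite version and python version" check above presumably boils down to (nothing Meta-specific is assumed, this is plain stdlib Python):

```python
import sqlite3
import sys

# Report the interpreter version and the SQLite library bundled with it,
# mirroring the "confirm sqlite version and python version" prompt above.
print("Python", sys.version.split()[0])
print("SQLite", sqlite3.sqlite_version)
```

      Run inside the sandbox this reported Python 3.9.25 and SQLite 3.34.1.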

      Image analysis in the container

      Let's talk about that container.visual_grounding one. Here's the description in full:

      Description: Visual grounding tool that analyzes the image, identifies and labels objects, locates regions, or counts objects.

      Parameters:

      • object_names (array, required)
      • image_path (string, default "")
      • format_type ("bbox"|"point"|"count"|null, default "bbox")
      • title (string|null)

      So it takes a required list of object names plus an optional image path, and returns results in bbox, point or count format.

      Sounds to me like Meta's Segment Anything. I decided to take it for a spin.

      Prompt:

      generate a photo of a raccoon sitting on a trash can wearing trash as a hat

      Response:

      Here you go — king of the alley, in full regalia:

      It's a photo of a raccoon wearing a disposable coffee cup and some newspaper and banana peel as a hat. It's pretty good.

      Coffee cup crown, banana peel brim, newspaper feather. Peak raccoon fashion. Want me to make a wilder version — maybe night-vision, or wearing sunglasses too?

      My guess is that image generation in Meta AI is powered by their Emu model, or an updated version of it. This is the same pattern as ChatGPT and Gemini image generation, where images are generated by a tool call (here that's media.image_gen).

      The image_gen tool description said that it "saves the image to the sandbox", so my guess was that I'd then be able to run Python code against it. I tried:

      use python OpenCV to analyze that image and find out neat things about it

      Annoyingly it did not show me the code it ran, but it did create this image:

      OpenCV image analysis dashboard titled "OpenCV Analysis: Raccoon Trash Hat" with four panels: top-left shows the original 1280x1280 photo of a raccoon wearing a trash-lid hat with a coffee cup and newspaper on top in an alley; top-right shows a Canny edge detection version with white edges on black background labeled "Canny Edges - 4.6% density"; bottom-left shows "Top 5 Dominant Colors" as swatches with values RGB(20, 19, 24): 35.3%, RGB(43, 43, 47): 34.2%, RGB(84, 80, 79): 18.3%, RGB(139, 135, 131): 6.3%, RGB(210, 213, 208): 5.8%; bottom-right shows "Color Histograms" line chart with Red, Green, and Blue channels plotted from 0-250 on x-axis and 0-35000 on y-axis, peaking around value 25-50.

      Pretty neat! We can both generate images and then use them with Python tools.

      Next, the visual_grounding tool. Knowing the names of the tools makes it easy to refer to them in prompts:

      Now try the container.visual_grounding tool

      The response (full response here):

      Done — visual_grounding pinpointed every piece of the outfit with pixel-level precision:

      Visual grounding analysis of a raccoon photo titled "Visual Grounding: Found 8 objects" with colored dot annotations on the image marking each detected object: coffee cup (515,205) in yellow, banana peel (569,261) in light yellow, newspaper (399,279) in blue, right eye (461,396) in magenta, left eye (582,397) in pink, nose (514,472) in cyan, raccoon (553,548) in green, and trash can lid (550,892) in red. A legend at the bottom lists all objects with their normalized 0-1000 coordinates. Footer text reads "Coordinates are 0-1000 normalized. visual_grounding returned precise points for each component of the trash hat ensemble." Watermark reads "Meta AI".

      [...]

      Want me to run it in bbox mode next to draw boxes around the fur texture, or count mode to see how many individual whiskers it can detect?

      Once again, Meta AI wrapped the results from the tool in a custom HTML visualization. That's clearly point mode, but how about the others?

      I prompted:

      yes draw bounding boxes, can it do masks too?

      Response:

      Yes — bounding boxes work great. Masks? Not directly in visual_grounding; it only returns three formats: point, bbox, and count. No pixel-level segmentation masks.

      Here's the bbox output:

      Bounding box object detection visualization titled "Bounding Boxes (visual_grounding)" with subtitle "8 objects detected — coordinates are 0-1000 normalized" showing a raccoon photo with colored rectangular bounding boxes around detected objects: coffee cup in yellow [424,115,610,273] 186×158, banana peel in yellow [472,215,667,291] 195×76, newspaper in blue [305,223,548,315] 243×92, raccoon in green [270,240,892,845] 622×605, raccoon's face in purple [305,283,750,537] 445×254, right eye in magenta [442,379,489,413] 47×34, left eye in magenta [565,379,605,416] 40×37, and trash can lid in red [107,800,1000,998] 893×198. A legend at the bottom shows each object's name, coordinates, and pixel dimensions in colored cards. Watermark reads "Meta AI".

      What you’re seeing:

      • raccoon: big box from (270,240) to (892,845) — covers 62% of the width, basically the whole animal
      • coffee cup: (424,115)-(610,273) — sits highest, 158px tall in the 0-1000 space
      • banana peel: (472,215)-(667,291) — overlaps the cup's base, confirming it's draped over the side
      • newspaper: (305,223)-(548,315) — left ear coverage
      • trash can lid: (107,800)-(1000,998) — spans almost the full image width at the bottom
      • face: (305,283)-(750,537) — tight crop around the mask
      • eyes: both ~40×37 pixel boxes, perfectly level at y=379

      Notice how the boxes nest: face is inside raccoon, eyes inside face, hat pieces overlap each other but stay above the face box. The tool is doing real object localization, not just guessing centers.
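      That nesting claim is easy to verify mechanically from the 0-1000 normalized coordinates in the response:

```python
def contains(outer, inner):
    """True if inner box [x1, y1, x2, y2] lies fully inside outer."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

# Boxes from the visual_grounding bbox response (0-1000 normalized)
raccoon   = [270, 240, 892, 845]
face      = [305, 283, 750, 537]
right_eye = [442, 379, 489, 413]
left_eye  = [565, 379, 605, 416]

print(contains(raccoon, face))                                   # → True
print(contains(face, right_eye) and contains(face, left_eye))    # → True
```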


      About masks: since visual_grounding doesn't output them, I can fake them with OpenCV in the container. Two quick options:

      1. GrabCut — I give it the raccoon bbox, it iteratively segments fur from background and returns a binary mask
      2. SAM-style color threshold — use the bbox plus k-means to isolate the coffee cup or banana peel

      Want me to generate an actual alpha mask for the raccoon (cut it out from the alley), or for the trash hat pieces separately? Takes about 10 seconds and I can give you a transparent PNG.

      Here's the HTML for that bbox visualization.

      Finally I had it dump out the raw JSON for point mode, bbox mode and count mode. You can see that all here. The count mode is particularly interesting - here's that in full:

      [
        {
          "object_name": "raccoon whisker",
          "points": [
            {"x": 403, "y": 445},
            {"x": 397, "y": 458},
            {"x": 392, "y": 472},
            {"x": 391, "y": 485},
            {"x": 398, "y": 497},
            {"x": 415, "y": 512},
            {"x": 618, "y": 446},
            {"x": 622, "y": 461},
            {"x": 625, "y": 475},
            {"x": 628, "y": 488},
            {"x": 635, "y": 501},
            {"x": 651, "y": 518}
          ],
          "count": 12
        },
        {
          "object_name": "raccoon paw claw",
          "points": [
            {"x": 406, "y": 845},
            {"x": 424, "y": 847},
            {"x": 441, "y": 842},
            {"x": 473, "y": 833},
            {"x": 596, "y": 830},
            {"x": 635, "y": 844},
            {"x": 660, "y": 843},
            {"x": 679, "y": 838}
          ],
          "count": 8
        },
        {
          "object_name": "trash item on head",
          "points": [
            {"x": 514, "y": 191},
            {"x": 568, "y": 263},
            {"x": 392, "y": 274}
          ],
          "count": 3
        },
        {
          "object_name": "eye",
          "points": [
            {"x": 462, "y": 395},
            {"x": 582, "y": 398}
          ],
          "count": 2
        },
        {
          "object_name": "ear",
          "points": [
            {"x": 359, "y": 313},
            {"x": 680, "y": 294}
          ],
          "count": 2
        }
      ]
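      Each entry in the count-mode response pairs a count with the located points, which makes the output trivial to sanity-check. A small sketch against an excerpt of the data above:

```python
import json

# Excerpt of the count-mode response shown above
response = json.loads("""[
  {"object_name": "eye",
   "points": [{"x": 462, "y": 395}, {"x": 582, "y": 398}],
   "count": 2},
  {"object_name": "ear",
   "points": [{"x": 359, "y": 313}, {"x": 680, "y": 294}],
   "count": 2}
]""")

for obj in response:
    # The count field should agree with the number of returned points
    assert obj["count"] == len(obj["points"])
    print(obj["object_name"], obj["count"])
```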

      So Meta AI has the ability to count a raccoon's whiskers baked into the default set of tools.

      Which means... it can count pelicans too!

      Screenshot of a Meta AI conversation where the user uploaded a photo of pelicans on rocks and asked "Count the pelicans". response reads: "That rocky haul-out is packed — I count 25 brown pelicans in your photo. I went bird by bird, including the ones tucked behind rocks, heads turned away, or just peeking out at the edges. They're all California brown pelicans, with those pale yellow heads and long bills, clustered on the riprap right at the waterline. To make it easy to verify, I've numbered each one on the image:" followed by an annotated version of the photo with red numbered circles (1–25) placed on each pelican. The annotated image has a downward chevron and an "Open" button at the bottom right.

      Here's that overlay exported as HTML.

      Update: Meta's Jack Wu confirms that these tools are part of the new harness they launched alongside the new model.

      Maybe open weights in the future?

      On Twitter Alexandr Wang said:

      this is step one. bigger models are already in development with infrastructure scaling to match. private api preview open to select partners today, with plans to open-source future versions.

      I really hope they do go back to open-sourcing their models. Llama 3.1/3.2/3.3 were excellent laptop-scale model families, and the introductory blog post for Muse Spark had this to say about efficiency:

      [...] we can reach the same capabilities with over an order of magnitude less compute than our previous model, Llama 4 Maverick. This improvement also makes Muse Spark significantly more efficient than the leading base models available for comparison.

      So are Meta back in the frontier model game? Artificial Analysis think so - they scored Muse Spark at 52, "behind only Gemini 3.1 Pro, GPT-5.4, and Claude Opus 4.6". Last year's Llama 4 Maverick and Scout scored 18 and 13 respectively.

      I'm waiting for API access - while the tool collection on meta.ai is quite strong, the real test of a model like this is still what we can build on top of it.


    4. 🔗 badlogic/pi-mono v0.66.1 release

      Changed

      • Changed the Earendil announcement from an automatic startup notice to the hidden /dementedelves slash command.
    5. 🔗 r/wiesbaden Dein erster Pulli: Strickworkshop in Wiesbaden Westend rss

      Hey everyone 🤍

      at our KNIT&MEET events the question of how you actually knit your first sweater came up quite often. So we've organised a small workshop on that for Saturday in Wiesbaden.

      The first half covers the basics: how sweaters are constructed, how to read patterns, and what to look out for when starting. In the second half we'll also look at patterns people bring along, answer questions, and help everyone get started.

      If this is interesting to anyone here, I'm happy to send the details via DM or post the link in the comments :)

      submitted by /u/authentic_n_creative

    6. 🔗 r/wiesbaden Yoga in Kastel rss

      Is anyone up for outdoor yoga in Mainz-Kastel by the Rhine? Everyone is welcome. I'm a yoga teacher and would love to build a community here ✨

      submitted by /u/flowyogamarie

    7. 🔗 r/york York therapists rss

      I'm looking for recommendations for therapists in the York area who specialise in adults dealing with the effects of being abused as a child/teen by a parent, and who ideally have experience with Autistic people. Who have you really clicked with? I'm particularly interested in people who are clear and concise, and who don't talk in circles or metaphors.

      Ideally based in the city centre, or Acomb.

      Thank you!

      submitted by /u/I-am-a-commotion

    8. 🔗 r/LocalLLaMA It finally happened, I actually had a use case for a local LLM and it was brilliant rss

      I've had aerosinusitis a few times before in my life and it was fairly painful, but not something that happens often. Today on a flight I had an overwhelming bout of it, the pressure was genuinely unbearable, and I had no painkillers with me. I was on a cheap flight, in the cheap seats, so no Wifi.

      I've been playing around with local LLMs on my laptop for a year or so, but it's always been pure novelty. It suddenly dawned on me that I could use Gemma 4 mid-air, so I pulled out my laptop and asked for any way I could possibly reduce the pain. The Toynbee Maneuver, which I had never in my life heard of, slowly but surely relieved the pressure. Within 10 mins I felt completely fine.

      It may sound trivial, but without local AI I would have been in blinding pain for probably 90 mins – so it was a rare moment when new technology actually makes a palpable difference to your life. Sharing this here because my wife didn't care and I felt if anyone would appreciate this small win it would be this community.

      submitted by /u/EntertainerFew2832

    9. 🔗 r/wiesbaden The vendor at the newspaper stand just kind of accepted it rss
    10. 🔗 badlogic/pi-mono v0.66.0 release

      New Features

      • Earendil startup announcement with bundled inline image rendering and a linked blog post for April 8 and 9, 2026.
      • Interactive Anthropic subscription auth warning when Anthropic subscription auth is active, clarifying that Anthropic third-party usage draws from extra usage and is billed per token.

      Fixed

      • Fixed bare readline import to use node:readline prefix for Deno compatibility (#2885 by @milosv-vtool)
      • Fixed auto-retry to treat stream failures such as "request ended without sending any chunks" as transient errors (#2892)
      • Fixed interactive startup notices to render after the initial resource listing, and moved the Earendil blog link above the inline image to avoid overlap with terminal image rendering.
    11. 🔗 r/Leeds Rhinos fan to take on 108-mile ultra-marathon for MNDA rss
    12. 🔗 r/LocalLLaMA kepler-452b. GGUF when? rss
    13. 🔗 r/reverseengineering Runtime JavaScript instrumentation via CDP (no monkeypatching, works inside closures) rss
    14. 🔗 r/LocalLLaMA It looks like we’ll need to download the new Gemma 4 GGUFs rss

      https://huggingface.co/unsloth/gemma-4-E2B-it-GGUF

      https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF

      by u/danielhanchen:

      We just updated them again in response to:

      1. kv-cache : support attention rotation for heterogeneous iSWA https://github.com/ggml-org/llama.cpp/pull/21513
      2. CUDA: check for buffer overlap before fusing - CRITICAL fixes <unused24> tokens https://github.com/ggml-org/llama.cpp/pull/21566
      3. vocab : add byte token handling to BPE detokenizer for Gemma4 https://github.com/ggml-org/llama.cpp/pull/21488
      4. convert : set "add bos" == True for Gemma 4 https://github.com/ggml-org/llama.cpp/pull/21500
      5. common : add gemma 4 specialized parser https://github.com/ggml-org/llama.cpp/pull/21418
      6. llama-model: read final_logit_softcapping for Gemma 4 https://github.com/ggml-org/llama.cpp/pull/21390
      7. llama: add custom newline split for Gemma 4 https://github.com/ggml-org/llama.cpp/pull/21406

      submitted by /u/jacek2023

    15. 🔗 r/Harrogate Window cleaners rss

      Best window cleaners in Harrogate? Preferably via ladder rather than telescopic pole. Thanks.

      submitted by /u/Outrageous-Square-21

    16. 🔗 r/reverseengineering [Tool] Rikune: A reverse-engineering MCP server that turns PE triage, Ghidra analysis, runtime evidence, and source-like reconstruction into reusable tools for any LLM rss
    17. 🔗 r/wiesbaden Gravity Trail rss

      I've heard so many myths and tall tales, so what actually happened?

      submitted by /u/pathetichypocrit

    18. 🔗 r/reverseengineering Reverse-engineered the Whoop 4.0 BLE protocol — CRC-32 with non-standard polynomial, 96-byte real-time data packets rss
    19. 🔗 Pagefind/pagefind v1.5.1 release

      v1.5.1

    20. 🔗 r/reverseengineering From UART to Root: Vendor Shell Escape on a Uniview IP Camera rss
    21. 🔗 r/Leeds "What's on?" Wednesday - April 08, 2026 rss

      "What's on?" Wednesday

      Our weekly post where people can highlight local events or interesting things that are coming up in the next week. It doesn't have to be your event, but this is a place where reasonable exceptions to our self-promotion rules are allowed. Please check whether the event you're posting about is already mentioned. If you're adding your own event, please try to answer any questions people may have in the comments.

      You can also ask questions if you're looking for something to do, e.g. "Does anyone know of a good pub quiz in the centre on Thursday?"

      "What's on?" rules:

      1. Leeds based events only
      2. Must be taking place in the next 7 days (next Wednesday included)
      3. Posts highlighting an event should include:
        • Event name
        • Date and time
        • Location
        • Price(s) (if applicable)
      4. A description of the event and a relevant link is recommended but not required

      The rules may evolve over time as we find out how things work. Please send any feedback to the mod team.

      submitted by /u/AutoModerator

    22. 🔗 Mario Zechner I've sold out rss

      I've sold out

    23. 🔗 Armin Ronacher Mario and Earendil rss

      Today I'm very happy to share that Mario Zechner is joining Earendil.

      First things first: I think you should read Mario's post. This is his news more than it is ours, and he tells his side of it better than I could. What I want to do here is add a more personal note about why this matters so much to me, how the last months led us here, and why I am so excited to have him on board.

      Last year changed the way many of us thought about software. It certainly changed the way I did. I spent much of 2025 building, probing, and questioning how to build software, and more broadly what I want to do. If you are a regular reader of this blog you were along for the ride. I wrote a lot, experimented a lot, and tried to get a better sense for what these systems can actually do and what kinds of companies make sense to build around them. There was, and continues to be, a lot of excitement in the air, but also a lot of noise. It has become clear to me that it's not a question of whether AI systems can be useful but what kind of software and human-machine interactions we want to bring into the world with them.

      That is one of the reasons I have been so drawn to Mario's work and approaches.

      Pi is, in my opinion, one of the most thoughtful coding agents and agent infrastructure libraries in this space. Not because it is trying to be the loudest or the fastest, but because it is clearly built by someone who cares deeply about software quality, taste, extensibility, and design. In a moment where much of the industry is racing to ship ever more quickly, often at the cost of coherence and craft, Mario kept insisting on making something solid. That matters to me a great deal.

      I have known Mario for a long time, and one of the things I admire most about him is that he does not confuse velocity with progress. He has a strong sense for what good tools should feel like. He cares about details. He cares about whether something is well made. And he cares about building in a way that can last. Mario has been running Pi in a rather unusual way. He exerts back-pressure on the issue tracker and the pull requests through OSS vacations and other means.

      The last year has also made something else clearer to me: these systems are not only exciting, they are also capable of producing a great deal of damage. Sometimes that damage is obvious; sometimes it looks like low-grade degradation everywhere at once. More slop, more noise, more disingenuous emails in my inbox. There is a version of this future that makes people more distracted, more alienated, and less careful with one another.

      That is not a future I want to help build.

      At Earendil, Colin and I have been trying to think very carefully about what a different path might look like. That is a big part of what led us to Lefos.

      Lefos is our attempt to build a machine entity that is more thoughtful and more deliberate by design. Not an agent whose main purpose is to make everything a little more efficient so that we can produce even more forgettable output, but one that can help people communicate with more care, more clarity, and joy.

      Good software should not aim to optimize every minute of your life, but should create room for better and more joyful experiences, better relationships, and better ways of relating to one another. Especially in communication and software engineering, I think we should be aiming for more thought rather than more throughput. We should want tools that help people be more considerate, more present, and more human. If all we do is use these systems to accelerate the production of slop, we will have missed the opportunity entirely.

      This is also why Mario joining Earendil feels so meaningful to me. Pi and Lefos come from different starting points, and there was a year of collaborating at a distance, but they are animated by a similar instinct: that quality matters, that design matters, and that trust is earned through care rather than captured through hype.

      I am very happy that Pi is coming along for the ride. Colin and I care a lot about it, and we want to be good stewards of it. It has already played an important role in our own work over the last months, and I continue to believe it is one of the best foundations for building capable agents. We will have more to say soon about how we think about Pi's future and its relationship to Lefos, but the short version is simple: we want Pi to continue to exist as a high-quality, open, extensible piece of software, and we want to invest in making that future real. For our thoughts on Pi's license, read more here, and see our company post here.

  3. April 07, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-07 rss

      IDA Plugin Updates on 2026-04-07

      Activity:

      • augur
      • capa
        • 0798528b: ci: use explicit and per job permissions (#3002)
        • c55b0686: ci: fix web rules failure (#3003)
        • ed7e0cd7: lint: replace black/isort/flake8 with ruff (#2992)
        • ac1cba74: feat: update vivisect to 1.3.2 (#3001)
        • ed6b40e9: Merge pull request #3000 from mandiant/dependabot/npm_and_yarn/web/ex…
      • DriverBuddy-7.4-plus
        • f87787e9: Sync security-review.yml from .github repo
        • 583074d2: Sync auto-copilot-code-cleanliness-review.yml from .github repo
        • f99d929e: Sync auto-llm-issue-review.yml from .github repo
        • 1e2b0ece: Sync trigger-all-repos.yml from .github repo
        • 29d1ff04: Sync auto-llm-pr-review.yml from .github repo
        • 4cc9e38a: Sync auto-label-comment-prs.yml from .github repo
        • a3cd8c0b: Sync auto-label.yml from .github repo
        • 74620e5b: Sync auto-sec-scan.yml from .github repo
        • eb8c7fcd: Sync auto-gpt5-implementation.yml from .github repo
        • 34c2593c: Sync swarm-mode.yml from .github repo
        • dbf9f7a4: Sync auto-assign-copilot.yml from .github repo
        • bc0b1b03: Sync auto-assign-pr.yml from .github repo
        • f72ce60b: Sync workflows-sync-template-backup.yml from .github repo
        • ff1522b2: Sync auto-tag-based-review.yml from .github repo
        • 036f8459: Sync runner-smoke-test.yml from .github repo
        • 1620d21f: Sync daily-continuous-progress.yml from .github repo
        • dcb59862: Sync auto-close-issues.yml from .github repo
        • 948f1658: Sync auto-copilot-functionality-docs-review.yml from .github repo
        • ce3c09ec: Sync oz-pr-review.yml from .github repo
      • ghidra
        • 505a59ff: Merge remote-tracking branch 'origin/GP-0-dragonmacher-test-fixes-4-7…
        • 44bd56bb: Test fixes
      • haruspex
      • ida-structor
        • 1afadd01: feat: Materialize deferred symbolic accesses when index bounds are found
        • de5cbc3e: feat: Improve subobject detection using pointer flow edges
        • b4e63d35: feat: Track symbolic array accesses and support named variable synthesis
        • 5afb49b1: feat: Expand bounded array accesses and improve alias offset tracking
        • 1acb065e: chore: Add debug logging for constant tracking in access collector
      • idamcp
      • IDAPluginList
        • 3b6b426c: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
      • IDEA
        • 4c7e96ac: Refine typed refresh and add type workflow regression coverage
        • 6c35fdbc: Downgrade struct field access classification to hints
        • d9d945c1: Add struct workflow helpers and typed export tools
        • f92a527e: Slim search output and implement local/stack rename
        • 7978d76c: Normalize export paths and persist staged IDBs
      • pharos
        • 217bf2a0: Merge pull request #303 from sei-eschwartz/fix-cycle
        • 4fd85d2f: Refactor reasonClassAtOffset_int to avoid cycles
      • python-elpida_core.py
        • 67b98628: fix: desperation guard reads stale MIND heartbeat as live crisis
      • rhabdomancer
      • Rikugan
        • 84f00d2d: Merge pull request #35 from buzzer-re/dev
        • 90146269: perf(ui): reduce restore, markdown, and bulk renamer overhead
        • 767fd59b: fix(ui): align Qt test stubs and qt_compat typing with runtime usage
        • d5d3d078: fix(tests): isolate stubbed modules to prevent CI order-dependent fai…
      • sighthouse
        • 1f98a8ca: Fix missing dependency inside sighthouse-client
        • a24a2dd7: CI github fixed order for documentation after pypi push
        • a2bfe738: Simply/fix the ghidra installation and improve [+] Bump package versi…
        • 3103903a: Simply/fix the ghidra installation and improve [+] Bump package versi…
    2. 🔗 r/Leeds Guitar Pedal shops in Leeds? rss

      Hi, I'm looking to shop around for some guitar pedals (like phasers etc) and I'd love to demo some. I'm aware PMT has now unfortunately closed, and I do miss the days of Big Deal and Dawsons. I love Hobgoblin and still frequent the store there, and I'm wondering where else in Leeds I can go to browse?

      York has some great shops to visit too but I'd like to look more locally.

      Thank you!

      submitted by /u/blissedandgone
      [link] [comments]

    3. 🔗 r/wiesbaden Mountainbike/Downhill Trails rss

      Where can I find good trails for mountain biking and/or downhill riding in Wiesbaden and the surrounding area?

      Is there perhaps even a dedicated local community?

      I'm fairly new to this and grateful for any suggestions :-)

      submitted by /u/Excellent_Scheme_247
      [link] [comments]

    4. 🔗 r/LocalLLaMA Gemma4-31B worked in an iterative-correction loop (with a long-term memory bank) for 2 hours to solve a problem that baseline GPT-5.4-Pro couldn't rss
    5. 🔗 Simon Willison Anthropic's Project Glasswing - restricting Claude Mythos to security researchers - sounds necessary to me rss

      Anthropic didn't release their latest model, Claude Mythos (system card PDF), today. They have instead made it available to a very restricted set of preview partners under their newly announced Project Glasswing.

      The model is a general purpose model, similar to Claude Opus 4.6, but Anthropic claim that its cyber-security research abilities are strong enough that they need to give the software industry as a whole time to prepare.

      Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely.

      [...]

      Project Glasswing partners will receive access to Claude Mythos Preview to find and fix vulnerabilities or weaknesses in their foundational systems—systems that represent a very large portion of the world’s shared cyberattack surface. We anticipate this work will focus on tasks like local vulnerability detection, black box testing of binaries, securing endpoints, and penetration testing of systems.

      There's a great deal more technical detail in Assessing Claude Mythos Preview’s cybersecurity capabilities on the Anthropic Red Team blog:

      In one case, Mythos Preview wrote a web browser exploit that chained together four vulnerabilities, writing a complex JIT heap spray that escaped both renderer and OS sandboxes. It autonomously obtained local privilege escalation exploits on Linux and other operating systems by exploiting subtle race conditions and KASLR-bypasses. And it autonomously wrote a remote code execution exploit on FreeBSD's NFS server that granted full root access to unauthenticated users by splitting a 20-gadget ROP chain over multiple packets.

      Plus this comparison with Claude 4.6 Opus:

      Our internal evaluations showed that Opus 4.6 generally had a near-0% success rate at autonomous exploit development. But Mythos Preview is in a different league. For example, Opus 4.6 turned the vulnerabilities it had found in Mozilla’s Firefox 147 JavaScript engine—all patched in Firefox 148—into JavaScript shell exploits only two times out of several hundred attempts. We re-ran this experiment as a benchmark for Mythos Preview, which developed working exploits 181 times, and achieved register control on 29 more.

      Saying "our model is too dangerous to release" is a great way to build buzz around a new model, but in this case I expect their caution is warranted.

      Just a few days ago (last Friday) I started a new ai-security-research tag on this blog to acknowledge an uptick in credible security professionals raising the alarm on how good modern LLMs have got at vulnerability research.

      Greg Kroah-Hartman of the Linux kernel:

      Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality. It was kind of funny. It didn't really worry us.

      Something happened a month ago, and the world switched. Now we have real reports. All open source projects have real reports that are made with AI, but they're good, and they're real.

      Daniel Stenberg of curl:

      The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.

      I'm spending hours per day on this now. It's intense.

      And Thomas Ptacek published Vulnerability Research Is Cooked, a post inspired by his podcast conversation with Anthropic's Nicholas Carlini.

      Anthropic have a 5 minute talking heads video describing the Glasswing project. Nicholas Carlini appears as one of those talking heads, where he said (highlights mine):

      It has the ability to chain together vulnerabilities. So what this means is you find two vulnerabilities, either of which doesn't really get you very much independently. But this model is able to create exploits out of three, four, or sometimes five vulnerabilities that in sequence give you some kind of very sophisticated end outcome. [...]

      I've found more bugs in the last couple of weeks than I found in the rest of my life combined. We've used the model to scan a bunch of open source code, and the thing that we went for first was operating systems, because this is the code that underlies the entire internet infrastructure. For OpenBSD, we found a bug that's been present for 27 years, where I can send a couple of pieces of data to any OpenBSD server and crash it. On Linux, we found a number of vulnerabilities where as a user with no permissions, I can elevate myself to the administrator by just running some binary on my machine. For each of these bugs, we told the maintainers who actually run the software about them, and they went and fixed them and have deployed the patches so that anyone who runs the software is no longer vulnerable to these attacks.

      I found this on the OpenBSD 7.8 errata page:

      025: RELIABILITY FIX: March 25, 2026 All architectures

      TCP packets with invalid SACK options could crash the kernel.

      A source code patch exists which remedies this problem.

      I tracked that change down in the GitHub mirror of the OpenBSD CVS repo (apparently they still use CVS!) and found it using git blame:

      Screenshot of a Git blame view of C source code around line 2455 showing TCP SACK hole validation logic. Code includes checks using SEQ_GT, SEQ_LT macros on fields like th->th_ack, tp->snd_una, sack.start, sack.end, tp->snd_max, and tp->snd_holes. Most commits are from 25–27 years ago with messages like "more SACK hole validity testin..." and "knf", while one recent commit from 3 weeks ago ("Ignore TCP SACK packets wit...") is highlighted with an orange left border, adding a new guard "if (SEQ_LT(sack.start, tp->snd_una)) continue;"

      Sure enough, the surrounding code is from 27 years ago.

      I'm not sure which Linux vulnerability Nicholas was describing, but it may have been this NFS one recently covered by Michael Lynch .

      There's enough smoke here that I believe there's a fire. It's not surprising to find vulnerabilities in decades-old software, especially given that they're mostly written in C, but what's new is that coding agents run by the latest frontier LLMs are proving tirelessly capable at digging up these issues.

      I actually thought to myself on Friday that this sounded like an industry-wide reckoning in the making, and that it might warrant a huge investment of time and money to get ahead of the inevitable barrage of vulnerabilities. Project Glasswing incorporates "$100M in usage credits ... as well as $4M in direct donations to open-source security organizations". Partners include AWS, Apple, Microsoft, Google, and the Linux Foundation. It would be great to see OpenAI involved as well - GPT-5.4 already has a strong reputation for finding security vulnerabilities and they have stronger models on the near horizon.

      The bad news for those of us who are not trusted partners is this:

      We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale—for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring. To do so, we need to make progress in developing cybersecurity (and other) safeguards that detect and block the model’s most dangerous outputs. We plan to launch new safeguards with an upcoming Claude Opus model, allowing us to improve and refine them with a model that does not pose the same level of risk as Mythos Preview.

      I can live with that. I think the security risks really are credible here, and having extra time for trusted teams to get ahead of them is a reasonable trade-off.


    6. 🔗 r/york First trip to your wonderful city rss

      First trip to your wonderful city | A few of my favourite shots. Not so much time for pictures with the children in tow these days but still managed a couple keepers. submitted by /u/MicroWave
      [link] [comments]

    7. 🔗 r/reverseengineering ida-mcp 2.1: Progressive Tool Discovery, Background Analysis, and Batch Operations rss
    8. 🔗 r/Leeds Cherry blossoms in Leeds 🌸 rss

      Finally have a day off after 11 days working straight so I’m planning to go touch some grass tomorrow (yay!).

      Any suggestions on places with beautiful cherry blossoms in bloom in a spot that I can also do a little picnic, and not too far away from city centre?

      Heard someone say Temple Newsam or Roundhay Park, but I'm not sure if there are any blossoms blooming there yet.

      Any suggestions would be greatly appreciated. Thank you x

      submitted by /u/AnOtterInAMug
      [link] [comments]

    9. 🔗 r/Yorkshire Rivers of Yorkshire Rhyme rss

      My grandma used to recite a poem or nursery rhyme about the rivers of Yorkshire. 45 years later I can't remember it! Does anyone know what this rhyme could be?

      submitted by /u/Lumpy_Geologist7644
      [link] [comments]

    10. 🔗 r/Leeds North Street LS2 rss

      Headed up to North Street today. Loads going on, seems like a nice part of town. I always avoided that area as you head up to Little London but almost feels nicer and safer now than other areas of town (city centre). Is this a recent boom or have I just had my blinkers on? Anyone know the plans on what they are doing around that area?

      submitted by /u/Olivrrpb
      [link] [comments]

    11. 🔗 r/Yorkshire Yorkshire Devolution rss

      Something a little different for the community here. What's your thought on Yorkshire devolution to the historic county?

      I've heard that there was recently a One Yorkshire deal that the Tories rejected and that the Yorkshire mayors work pretty closely with each other now. I also know we have a really strong identity, proud folk and unique customs and dialect that make us an ideal group for one devolution. But I just wanted to know what people think? Do you want a Yorkshire Parliament like Wales's has? Or are you not that fussed? Lerrus av a kall abaht it!

      P.s. I've always thought us Northerners squabble too much anyway and that's why Westminster always gets its way!

      submitted by /u/masa8910
      [link] [comments]

    12. 🔗 @binaryninja@infosec.exchange Join us tomorrow, Wednesday April 8th @ 3pm ET, for a sneak peek at Binary mastodon

      Join us tomorrow, Wednesday April 8th @ 3pm ET, for a sneak peek at Binary Ninja 5.3!

      We'll take a look at all the major new features coming to 5.3, from expanded architecture and platform support to core analysis features, new debugger features, brand new UIs, and so much more. Follow along with the latest dev, or join us tomorrow to see what's worth all the hype: https://youtube.com/live/iD8UidhFbhg

    13. 🔗 r/LocalLLaMA Every day I wake up and thank God for having me be born 23 minutes away from a MicroCenter rss

      Every day I wake up and thank God for having me be born 23 minutes away from a MicroCenter | submitted by /u/gigaflops_
      [link] [comments]

    14. 🔗 r/Leeds Good Boy (the Stephen Graham film, not the dog one) currently in cinema was filmed in Leeds during freshers after a Leeds game at 10pm. It's pretty hectic and shows off some bits of Leeds including lower Briggate. rss

      87% on Rotten Tomatoes and very good. A bit of a strange one, kinda like a Yorgos Lanthimos film.

      If anyone has watched and can identify all the Leeds/Yorkshire locations I'd be interested.

      submitted by /u/montfree
      [link] [comments]

    15. 🔗 r/Harrogate Best bars for a date night rss

      hi all,

      i am taking my girlfriend out to Harrogate tmrw for her birthday! I have booked La Feria for our dinner in the evening, but wondered what the best bars are to go to before and after our meal.

      just to note, we are dressing up nicely - she’ll be in a dress and I’ll be in a shirt. just wondered if that fits the vibe of La Feria, and if you know any nice bars that suit it for the evening!!

      submitted by /u/GarlicCharacter3247
      [link] [comments]

    16. 🔗 r/york Scenic Drives (ideally circular) - Leaving York rss

      Hi

      Can anyone recommend any drives that you can easily do within an hour or so (total drive time) with the following ideal criteria:

      - A scenic and reasonably quiet route

      - Ideally circular (leave York - head out and back to York eventually)

      - Something that's not necessarily well known

      - A nice village pub to call in at along the way

      - That's all!

      All recommendations welcome, thank you.

      submitted by /u/HeroRon
      [link] [comments]

    17. 🔗 r/LocalLLaMA GLM-5.1 rss

      GLM-5.1 | submitted by /u/danielhanchen
      [link] [comments]

    18. 🔗 r/Leeds Trying to get to Leeds station in the morning from Bramley with all the road works rss

      There are a lot of road works in Kirkstall and people have been delayed. My flight is at 12pm and I have to get all the way to Manchester.

      Thinking of setting off between 6:00 and 7:00 a.m.

      Would you recommend an Uber? I think the buses might be out of whack that morning with all the road works. I did speak to someone at the road works and they said all the road work starts around 9:00, which means there's no room for error in me getting up early.

      submitted by /u/Crazy_Screen_5043
      [link] [comments]

    19. 🔗 r/Leeds Anyone near Headingley able to offer an emergency puppy cuddle for a very overdue pregnant woman? rss

      Bit of an unusual one, but my sister-in-law is very overdue with her first baby, desperate to avoid induction, and absolutely mad about dogs, while sadly unable to own one.

      I wondered whether anyone near Headingley has a friendly puppy and might be willing to let her have a short, supervised interaction, just some gentle fuss, cuddles, and a bit of puppy time. The hope, honestly, is to get a bit of oxytocin flowing and lift her spirits, as she completely melts around dogs.

      We are not looking to borrow a dog or put anyone out, just a brief visit with the owner present, ideally somewhere convenient and public if that suits you better.

      Bit of a long shot, but hey ho.

      Thank you!

      submitted by /u/Desecron
      [link] [comments]

    20. 🔗 syncthing/syncthing v2.0.16 release

      Major changes in 2.0

      • Database backend switched from LevelDB to SQLite. There is a migration on
        first launch which can be lengthy for larger setups. The new database is
        easier to understand and maintain and, hopefully, less buggy.

      • The logging format has changed to use structured log entries (a message
        plus several key-value pairs). Additionally, we can now control the log
        level per package, and a new log level WARNING has been inserted between
        INFO and ERROR (which was previously known as WARNING). The INFO level
        has become more verbose, indicating the sync actions taken by Syncthing. A
        new command line flag --log-level sets the default log level for all
        packages, and the STTRACE environment variable and GUI have been updated
        to set log levels per package. The --verbose and --logflags command
        line options have been removed and will be ignored if given.

      • Deleted items are no longer kept forever in the database, instead they are
        forgotten after fifteen months. If your use case requires deletes to take
        effect after more than a fifteen-month delay, set the
        --db-delete-retention-interval command line option or corresponding
        environment variable to zero, or to a longer time interval of your choosing.

      • Modernised command line options parsing. Old single-dash long options are
        no longer supported, e.g. -home must be given as --home. Some options
        have been renamed, others have become subcommands. All serve options are
        now also accepted as environment variables. See syncthing --help and
        syncthing serve --help for details.

      • Rolling hash detection of shifted data is no longer supported as this
        effectively never helped. Instead, scanning and syncing is faster and more
        efficient without it.

      • A "default folder" is no longer created on first startup.

      • Multiple connections are now used by default between v2 devices. The new
        default value is to use three connections: one for index metadata and two
        for data exchange.

      • The following platforms unfortunately no longer get prebuilt binaries for
        download at syncthing.net and on GitHub, due to complexities related to
        cross compilation with SQLite:

        • dragonfly/amd64
        • solaris/amd64
        • linux/ppc64
        • netbsd/*
        • openbsd/386 and openbsd/arm
        • windows/arm
      • The handling of conflict resolution involving deleted files has changed. A
        delete can now be the winning outcome of conflict resolution, resulting in
        the deleted file being moved to a conflict copy.

      This release is also available as:

      • APT repository: https://apt.syncthing.net/

      • Docker image: docker.io/syncthing/syncthing:2.0.16 or ghcr.io/syncthing/syncthing:2.0.16
        ({docker,ghcr}.io/syncthing/syncthing:2 to follow just the major version)

      What's Changed

      Fixes

      • fix(protocol): verify compressed message length before decompression by @calmh in #10595
      • fix(systemd): support overrides for syncOwnership by @Valloric in #10602
      • fix(systemd): add back chown allowed syscalls by @Valloric in #10605

      Other

      • chore(config, connections): use same reconnection interval for QUIC and TCP (fixes #10507) by @marbens-arch in #10573
      • build(deps): update dependencies by @calmh in #10588
      • chore(sqlite): reduce max open connections, keep them open permanently (fixes #10592) by @calmh in #10596

      Full Changelog : v2.0.15...v2.0.16

    21. 🔗 r/LocalLLaMA You can now fine-tune Gemma 4 locally 8GB VRAM + Bug Fixes rss

      You can now fine-tune Gemma 4 locally 8GB VRAM + Bug Fixes | Hey guys, you can now fine-tune Gemma 4 E2B and E4B in our free Unsloth notebooks! You need 8GB VRAM to train Gemma-4-E2B locally. Unsloth trains Gemma 4 ~1.5x faster with ~60% less VRAM than FA2 setups: https://github.com/unslothai/unsloth We also found and did bug fixes for Gemma 4 training:

      1. Grad accumulation no longer causes losses to explode - before you might see losses of 300 to 400 - it should be 10 to 15 - Unsloth has this fixed.
      2. Index Error for 26B and 31B for inference - this will fail inference for 26B and 31B when using transformers - we fixed it.
      3. use_cache=False had gibberish for E2B, E4B - see https://github.com/huggingface/transformers/issues/45242
      4. float16 audio -1e9 overflows on float16
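      The gradient-accumulation fix in point 1 comes down to loss normalization. A minimal sketch in plain Python (a toy illustration, not Unsloth's actual code; the numbers are made up) shows why the buggy pattern distorts the reported loss:

```python
# Toy illustration of the gradient-accumulation loss bug class (NOT
# Unsloth's actual code). Each microbatch contributes a summed
# cross-entropy loss over n tokens. Averaging per-microbatch means
# mis-weights uneven microbatches and distorts the reported loss;
# dividing total loss by total token count is invariant to the split.

def naive_accumulated_loss(microbatches):
    """Average of per-microbatch mean losses (the buggy pattern)."""
    return sum(loss / n for loss, n in microbatches) / len(microbatches)

def normalized_accumulated_loss(microbatches):
    """Total loss divided by total token count (the fixed pattern)."""
    total_loss = sum(loss for loss, _ in microbatches)
    total_tokens = sum(n for _, n in microbatches)
    return total_loss / total_tokens

# The same 1000-token batch split into uneven microbatches,
# each given as (summed_loss, token_count):
batch = [(1800.0, 900), (50.0, 100)]
print(naive_accumulated_loss(batch))       # 1.25 - depends on the split
print(normalized_accumulated_loss(batch))  # 1.85 - true per-token loss
```

      With equal-sized microbatches the two agree, which is why the bug only shows up on real, ragged batches.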

      You can also train 26B-A4B and 31B, or train via a UI with Unsloth Studio. Studio and the notebooks work for Vision, Text, Audio and inference. For bug fix details and tips and tricks, read our blog/guide: https://unsloth.ai/docs/models/gemma-4/train Free Colab Notebooks: E4B + E2B (Studio web UI), E4B (Vision + Text), E4B (Audio), E2B (Run + Text)

      Thanks guys!

      submitted by /u/danielhanchen
      [link] [comments]

    22. 🔗 r/york casualties of the storm :( rss

      casualties of the storm :( | on the riverbank near cinder lane and in homestead park submitted by /u/whtmynm
      [link] [comments]

    23. 🔗 r/wiesbaden FDP and Pro Auto hyperventilate rss
    24. 🔗 r/reverseengineering AI just hacked one of the world's most secure operating systems in four hours. rss
    25. 🔗 r/Yorkshire World Coal Carrying Championship 2026! rss

      World Coal Carrying Championship 2026! | A beloved tradition in the village of Gawthorpe. The competition takes place annually with men, women and children taking part (the kids don't carry coal!) This year we had good weather and a turnout of many hundreds of spectators to cheer along the runners. Congratulations to all those taking part, where even completing the race is a feat in itself. submitted by /u/Paradoxbox00
      [link] [comments]

    26. 🔗 r/york Jorvik Tickets for Today or early tomorrow? rss

      We’re visiting York- arrived yesterday and leave midday tomorrow. I’m an idiot and didn’t prebook tickets to the Jorvik museum and they’re sold out. By any small chance is there anyone who has tickets they can’t use that we could buy for today or early tomorrow?

      submitted by /u/SnooCats1465
      [link] [comments]

    27. 🔗 gchq/CyberChef v10.23.0 release

      See the CHANGELOG and commit messages for details.

    28. 🔗 r/LocalLLaMA Turns out Gemma 4 had MTP (multi token prediction) all along rss

      Turns out Gemma 4 had MTP (multi token prediction) all along | Hey everyone, while I was trying to use Gemma 4 through the LiteRT API in my Android app, I noticed that Gemma 4 was throwing errors when loading on my Google Pixel 9 test device about the "mtp weights being an incompatible tensor shape". I did some digging and found that there are additional MTP prediction heads within the LiteRT files for speculative decoding and much faster outputs. Well, turns out I got confirmation today from a Google employee that Gemma 4 DOES INDEED have MTP, but it was "removed on purpose" for "ensuring compatibility and broad usability". It would've been great, to be honest, if they had released the full model instead, considering we already didn't get the Gemma 124B model leaked in Jeff Dean's tweet by accident. It would've been great to have much faster Gemma 4 generation outputs, ideally on the already fast MoE. Maybe someone can reverse engineer and extract the tensors and the math based on the compute graph in LiteRT? Here's a link to the conversation: https://huggingface.co/google/gemma-4-E4B-it/discussions/5 submitted by /u/Electrical-Monitor27
      [link] [comments]
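      The reason MTP heads matter for speed is speculative decoding, which can be sketched as a draft-then-verify loop (purely illustrative - this is not LiteRT's or Gemma's implementation, and every name here is made up):

```python
# Toy draft-then-verify loop illustrating why MTP heads speed up decoding
# (NOT LiteRT's or Gemma's actual implementation; all names are made up).
# The cheap MTP heads propose several tokens in one step; the main model
# checks them and keeps the longest agreeing prefix, so one verification
# pass can emit multiple tokens at once.

def speculative_step(draft_tokens, main_model_token_at):
    """draft_tokens: tokens proposed by the draft (MTP) heads.
    main_model_token_at(i): the main model's token at position i, given
    that positions 0..i-1 already matched the draft.
    Returns the tokens accepted in this step."""
    accepted = []
    for i, token in enumerate(draft_tokens):
        expected = main_model_token_at(i)
        if token == expected:
            accepted.append(token)        # draft agreed: a "free" token
        else:
            accepted.append(expected)     # mismatch: take the main model's
            break                         # token and stop accepting
    return accepted

# Even when the draft gets only 2 of 3 tokens right, one verification
# pass still yields 3 tokens instead of 1.
main = lambda i: ["the", "quick", "red"][i]
print(speculative_step(["the", "quick", "brown"], main))
```

      Acceptance here is exact-match for simplicity; real implementations accept probabilistically against the main model's distribution.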

    29. 🔗 r/Yorkshire Yorkshire, UK 🇬🇧 rss
    30. 🔗 r/reverseengineering DeepZero: An automated, agentic vulnerability research pipeline for finding kernel zero-days rss
    31. 🔗 r/LocalLLaMA Gemma 4 26b A3B is mindblowingly good , if configured right rss

      For the last few days I've been trying different models and quants on my RTX 3090 in LM Studio, but every single one glitches on tool calling - an infinite loop that doesn't stop. But I really liked the model because it is really fast, like 80-110 tokens a second, and even at high context it still maintains very high speeds.

      I had great success with tool calling in the Qwen3.5 MoE model, but the issue I had with Qwen models is that there is some kind of bug in Win11 and LM Studio that makes prompt caching not work, so when the convo hits 30-40k context it is so slow at processing prompts it just kills my will to work with it.

      Gemma 4 is different: it is much better supported in llama.cpp and the caching works flawlessly. I'm using flash attention + Q4 quants, and with this I can push it to literally a maximum of 260k context on the RTX 3090, and the model performs just as well.

      I finally found the one that works for me: the Unsloth Q3_K_M quant, temperature 1 and top-k sampling 40. I have a custom system prompt that I'm using which also might be helping.

      I've been testing it with opencode for the last 6 hours and I just can't stop - it cannot fail. It explained the whole structure of OpenCode itself, and it is huge: the whole repo is 2.7GB, so many lines of code, and it has no issues traversing around and reading everything, explaining how certain things work. I think I'm gonna create my own version of OpenCode in the end.

      It honestly feels like Claude Sonnet level of quality, and it never fails to do function calling. I think this might be the best model for agentic coding / tool calling / open claw or search engine.
      I prefer it over Perplexity; in LM Studio connected to a search engine via a plugin it delivers much better results than Perplexity or Google.

      As for VRAM consumption it is heavy. It could probably work on 16GB if not for tool calling or agents - you need 10-15k context just to start it. My GPU has 24GB so it can run it at full context with no issues at Q4_0 KV.

      submitted by /u/cviperr33
      [link] [comments]

    32. 🔗 Jessitron Adding Correctness Conditions to Code Changes rss

      Today I looked at the first PR on our new project repo. It added a new run script, but the README didn’t mention it. The proposed change was incomplete, because the documentation was out of sync.

      Did I comment on the PR? heck no. I want to fix this problem for all PRs, not just this one. We can automate this stuff now.

      Correctness condition: All PRs include updates to all relevant documentation files.

      How can we make this true?

      Instructions - We can change AGENTS.md to instruct our coding agent to look for documentation files and update them.
      Verification - We can add a reviewer agent to check each PR for missed documentation updates.

      This is two changes, so I can break this work into two parts. Which of these should we do first?

      Changing the instructions is easy.

      If we do instructions first, it’s easy. It will work most of the time. When I try it on this little PR, it will certainly work, and then I can claim victory and move on to the next feature.

      Then later, on some future PR, the agent will miss updating some documentation. Will I notice? No. In fact: I hope not. If I’m looking through PRs to a level of detail that includes all documents in the PR and also documentation files not in the PR, then we have failed to automate enough of this project. (This project does not deserve that level of scrutiny.)

      Changing instructions without verification gives me no guarantee of my correctness condition.

      Adding validation is sufficient.

      If we do validation first, then every PR will be checked for missed documentation updates. Incorrect PRs will be rejected, so the coding agent will have to update the documentation.
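      As a sketch, the check could even start life as a deterministic gate before graduating to a reviewer agent (the file patterns here are hypothetical, not from this project):

      ```typescript
      // Hypothetical sketch of the correctness condition as a deterministic
      // PR gate: flag PRs that change code but touch no documentation.
      function missesDocUpdate(changedFiles: string[]): boolean {
        const codeChanged = changedFiles.some((f) => /\.(ts|js|py|sh)$/.test(f));
        const docsChanged = changedFiles.some((f) => /\.(md|rst|txt)$/i.test(f));
        return codeChanged && !docsChanged;
      }

      // Today's PR: a new run script, no README update -> flagged.
      console.log(missesDocUpdate(["scripts/run.sh"])); // true
      console.log(missesDocUpdate(["scripts/run.sh", "README.md"])); // false
      ```

      A gate like this can't judge whether the documentation change is actually correct, which is what the reviewer agent is for; it just shows where verification sits in the loop.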

      My correctness condition will be guaranteed. Well, as guaranteed as I can get it with this nondeterministic automation. The reviewing agent will have only one task, so it won’t forget to check for needed documentation updates. If we ever catch it being wrong, then we must update its instructions.

      If we never implement the instructions change, then PRs will take longer, because some agent has to respond to the PR comments, and then the feedback loop runs again.

      With verification in place, the instructions change is an optimization!

      Validation before implementation.

      It’s a little like test-first development, but at a higher level. We’re adding a check to every feature implementation, not just one.

      It’s more like property testing than unit testing. We aren’t hard-coding “every feature should update the README.” We’re stating a property: the documentation should be up-to-date after every feature change.

      Now my PR reviews are also system reviews: what about this PR should have been different? How can we change the agent’s context and feedback to make that different? Now test that system change on this PR before we fix it.

      This is the new Boy Scout Rule. It went from “leave the codebase cleaner than I found it” to “make the whole development system stronger than it was.”

      It’s all part of programming the agents to program our software.

    33. 🔗 Jamie Brandon 0058: borrow-checking surprises, books, links rss
      (empty)
    34. 🔗 Jamie Brandon Borrow-checking surprises rss
      (empty)
    35. 🔗 Mitchell Hashimoto The Building Block Economy rss
      (empty)
  4. April 06, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-06 rss

      IDA Plugin Updates on 2026-04-06

      Activity:

      • augur
        • aa5847a0: feat: update ida plugin stub and metadata
        • 9ff757e1: doc: improve compatibility info
      • capa
        • 70f275ac: build(deps-dev): bump types-protobuf (#2994)
        • 63aa5729: build(deps-dev): bump mypy from 1.19.1 to 1.20.0 (#2993)
        • 63edbedb: build(deps-dev): bump lodash from 4.17.23 to 4.18.1 in /web/explorer …
      • efiXplorer
        • 29960936: update guids submodule (#139)
      • Greffe
        • 19e36ac1: Merge pull request #72 from Lixhr/70-core-avoid-overwriting-instrumen…
        • 9458749b: add branch overlap detection on close targets
        • 17aa102d: Merge pull request #71 from Lixhr/69-test-instrument-every-instructions
        • 497cca77: Add batch adds
        • 90f54e65: Merge pull request #68 from Lixhr/65-core-set-and-call-handler
        • 638e8906: fix wrong register saving order
        • 11d9a781: fix segfault on targets added from config file
        • d6882068: Fix non-thumb branch / wrong ret sp offset
      • haruspex
        • dcf1bcba: feat: improve ida plugin stub and metadata
        • 9d2107db: doc: improve compatibility info
        • 2fa79f55: doc: improve compatibility info
      • ida-pro-mcp
        • 9f489ca3: Merge pull request #345 from JohnsterID/test/pr335-unsafe-gating-cove…
        • 66af3ff6: Merge pull request #337 from JohnsterID/fix/ida-rpc-query-params
        • cb6e84cd: Restrict GHA to this repo
        • 29f6ae93: Merge pull request #346 from hzqst/main
        • 30774f3a: Merge pull request #341 from ZehMatt/token-optimizations
        • bbca7351: Fix [MCP] « notifications/initialized (0.0ms) ERROR: Method 'notific…
        • 256cc92e: Merge pull request #343 from hzqst/main
        • b8be0301: Use better approach to detect idalib headless mode: ida_kernwin.is_id…
        • 779d707d: Fix https://github.com/mrexodia/ida-pro-mcp/issues/342
        • a0bd04db: test: add coverage for @unsafe/@ext decorator sets and extension gating
        • c5360f62: fix: preserve ?ext= query params from -ida-rpc URL
      • python-elpida_core.py
        • fe666cbd: fix: add CONVERGENCE to Rhythm enum — ECS crash on cycle 1
        • 7350af96: feat: D16 Stage 2 — Witnessed Agency + Stage 1 gap closure
      • rhabdomancer
      • UltraKernelDumper
        • ea1cae2c: Add full project source excluding large target folder and build artif…
    2. 🔗 r/Yorkshire Richmond gleaming in the spring sunshine today. rss
    3. 🔗 r/Yorkshire No better place.. rss

      Average photo.

      submitted by /u/Melodic_Position_590
      [link] [comments]

    4. 🔗 r/LocalLLaMA What it took to launch Google DeepMind's Gemma 4 rss

      💎💎💎💎

      submitted by /u/jacek2023
      [link] [comments]

    5. 🔗 r/york What’s the name of the trio who play in York? rss

      They are a three-piece (violin, guitar and double bass) and they play covers in York. They're bloody fantastic but I cannot remember their name.

      submitted by /u/rjle_x
      [link] [comments]

    6. 🔗 @binaryninja@infosec.exchange Tired of unzipping your password-protected malware samples just to analyze mastodon

      Tired of unzipping your password-protected malware samples just to analyze them? We've got you covered.

      Our latest blog post covers Container Transforms and how Binja now handles nested binary formats with structure and provenance intact.

      Read it here: https://binary.ninja/2026/03/31/container-transforms.html

    7. 🔗 r/york Wedding Venues in and around York rss

      Hi all. My fiancé and I are looking for twilight wedding packages (late afternoon to evening) in and around York for maybe a year from now. We are looking for a low to medium price point and we'd love to hear what venues are a good place to get married at.

      submitted by /u/HarryBuckley49
      [link] [comments]

    8. 🔗 r/wiesbaden Sprach Schule in Frankfurt/Wiesbaden rss
    9. 🔗 r/Yorkshire Hand painted Yorkshire artworks by Paul Halmshaw. rss
    10. 🔗 r/york Catering Rec's rss

      Hi, looking for recommendations for a caterer/someone to do food for my son's Christening. I quite like the idea of a graze table style spread, but not set on this.

      Thanks!

      submitted by /u/aprodog
      [link] [comments]

    11. 🔗 r/york My Visit To The City Today - lots of photos. rss

      submitted by /u/danum1962
      [link] [comments]

    12. 🔗 r/york Original Ghost Walk (1973) vs. Mad Alice, which one should I book ? rss

      Hi all, I'll be visiting York soon and I badly want to do a ghost tour. I've been looking at the choices and I'm torn between two.

      I really love the fact that the Original Ghost Walk is the oldest in the world, that authenticity is pulling me.

      But I see everyone raving about Mad Alice (The Bloody Tour) for the performance. For those who have done both, which one feels more like a genuine dive into York's history ? (or) should I even care about history and just look to have fun ?

      I’m staying overnight specifically to do one of these, so I want to make sure I pick the one that actually feels worth it after dark.

      submitted by /u/Lanky_Cartoonist_743
      [link] [comments]

    13. 🔗 r/Leeds 18f in leeds wanting a creative circle rss

      Hi, I’m 18 and based in Leeds. I’m really into the idea of filmmaking and creative stuff in general (making videos, trying out ideas, etc.), and I’d love to meet people around my age who are into the same kind of thing.

      I’m still pretty new to it and trying to build a creative circle, so I’m super desperate for people who want to make things together, collaborate, or just chat about creative ideas.

      If anyone knows of any places, groups, or communities in Leeds where people like this hang out, I’d really appreciate any suggestions too.

      Feel free to message me if you’re interested 😭🙏🏾

      submitted by /u/Sufficient_Leg_5141
      [link] [comments]

    14. 🔗 sacha chua :: living an awesome life YE12: Categorizing Emacs News, epwgraph, languages rss

      View in the Internet Archive, watch or comment on YouTube, or email me.

      Chapters:

      • 00:41:21 epwgraph
      • 00:54:56 learning languages

      Thanks for your patience with the audio issues! At some point, I need to work out the contention between all the different processes that I want to be listening to the audio from my mic. =)

      In this livestream, I categorize Emacs News for 2026-04-06, show epwgraph for managing Pipewire connections from Emacs, and share some of my language learning workflows.

      You can e-mail me at sacha@sachachua.com.

    15. 🔗 r/Leeds Sharps bin disposal in Leeds? rss

      Hi there,

      Does anyone know where I can dispose of a sharps bin in Leeds?

      It’s for syringes and needles for a medication I am prescribed by an online company.

      Thanks in advance!

      submitted by /u/No-Stick9557
      [link] [comments]

    16. 🔗 r/york Jumble sale! rss

      🛍️ Jumble Sale – Saturday 11th April! 🛍️

      A fantastic jumble sale will be taking place on Saturday 11th April, 2pm – 4pm at the Sheriff Hutton Village Hall, in support of Shopmobility York.

      The wonderful Sheriff Hutton Jumblies will be running the sale on our behalf – and if you’ve been before, you’ll know it’s always a brilliant event with plenty of bargains to be found!

      ✨ Details:

      • ⏰ Time: 2pm – 4pm

      • 📍 Location: Village Hall, Sheriff Hutton Road, York YO60 6RA

      • 💷 Entry: Just 50p

      • 🚶 It’s always popular – arriving early to join the queue is highly recommended!

      🎟️ Don’t miss the tombola, and be sure to visit the cake stall for some delicious homemade treats!

      🙏 Donations still welcome! If anyone is still wanting to donate items, please contact to arrange collection or drop off.

      Come along, grab a bargain, and support a great cause – we’d love to see you there!

      JumbleSale #ShopmobilityYork #CommunityEvent

      submitted by /u/Single-Ad-5317
      [link] [comments]

    17. 🔗 r/reverseengineering Cracking a Malvertising DGA From the Device Side rss
    18. 🔗 r/york Walking into York by the Ouse rss

      submitted by /u/York_shireman
      [link] [comments]

    19. 🔗 sacha chua :: living an awesome life 2026-04-06 Emacs news rss

      There's a lot of buzz around the remote code execution thing that involves Git, but it seems to be more of a Git issue than an Emacs one. This might be a workaround if you want, and in the meantime, don't check out git repositories you don't trust. There's no page for the Emacs Carnival for April yet, but you can start thinking about the theme of "newbies/starter kits" already, and I'm sure Cena or someone will round things up afterwards. Enjoy!

      Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

      You can e-mail me at sacha@sachachua.com.

    20. 🔗 r/Yorkshire I don’t think any place can match this vibe that Yorkshire has✨ rss

      @travelandchill1

      submitted by /u/ScrollAndThink
      [link] [comments]

    21. 🔗 r/Leeds Few more photos rss

      Couple more photos this morning, although I got told off. Apparently Wellington Place don't permit commercial photography without prior agreement.

      I'm in my work clothes with a Red S9 posting pics on Reddit lol.

      submitted by /u/Phil-pot
      [link] [comments]

    22. 🔗 Pagefind/pagefind v1.5.0 release

      Hey! This is a big one. Pagefind 1.5.0 has been fermenting for a while, and addresses a lot of long-standing issues and feature requests. This release brings an entirely new search UI built on web components, major improvements to search relevance and ranking, diacritics support, automatic CJK segmentation, Web Worker search, notably smaller indexes, and a much faster indexing binary. Enormous thanks to everyone who contributed features and fixes, as well as to everyone who tested the beta releases and provided feedback ❤️ - @bglw

      If you only read this far, I should mention up front: The existing Default UI and Modular UI remain available and supported for now, so you can upgrade your sites to Pagefind v1.5.0 without migrating to the Component UI.

      Pagefind Component UI

      Pagefind ships a brand new UI system built entirely on web components. The Component UI gives you searchboxes, modals, result lists, and filter controls as composable <pagefind-*> elements that you can mix, match, and style with CSS variables.

      The Component UI is available as vendored files in your /pagefind/ output directory, or as an npm package to install and import.

      The best way to get a feel for the new components is on the 📘 Pagefind Component UI page of the docs, where interactive examples of various components are shown.

      Extra goodies with the Component UI:

      • Greatly improved accessibility over the Default UI
      • Keyboard navigation through search results
      • Configurable keyboard shortcuts (thanks @miketheman !)
      • Full custom templates for rendering results and placeholders
      • Exported types for Component UI npm consumers (thanks @vanruesc !)
      • Support for multiple scoped Pagefind instances on one page
      • A range of CSS variables available for light-touch customization (thanks @miketheman for some of these!)
      • Improved RTL and locale-specific rendering

      Search Relevance, and Searching Metadata

      Pagefind now searches metadata by default! Importantly, this means it now searches the title metadata. Matches in titles are now taken into account, and search results are very hard to shake from prime positions if all (or much) of the title matches the search query.

      You can configure the weight of any metadata field. See 📘 Configuring Metadata Weights to change the title boost or apply custom weights to your own metadata fields.

      Beyond metadata searching, a bunch of weird and wonderful ranking bugs were resolved:

      • Metadata-only matches now return results. Previously, if a page matched the search query only in its metadata (e.g. the title) but not in the body content, it would be missed. These pages now correctly appear in results.
      • Word splitting and indexing was revisited to properly handle diacritics, stemming, and compound words together. This fixes a broad set of edge cases where compound word parts weren't indexed correctly.
      • Loading index chunks now correctly uses stemmed terms. This was a discrepancy in how chunks were identified, and could cause some hard to pin down issues where the wrong chunk would be loaded for a search term, leaving you with no (or fewer) results.
      • A couple of pathways left you with only the first matching chunk loaded, which would also give you fewer results. Words that straddle multiple chunks now behave better.
      • Fancy-pants unicode characters in words could really mess up the chunk loading, which has been fixed.

      Diacritics Support

      We finally properly support matching across diacritics. You can now find your cafés without remembering how to type é.

      By default, exact diacritic matches are preferred. So if you're searching "cafe", pages with "cafe" will rank higher than pages with "café". Getting this relevance right by default was the final piece of the puzzle for shipping this, which is why it took a while to land. See 📘 Configuring Diacritic Similarity to adjust how this plays out on your site.

      If you need strict matching, set exactDiacritics: true to disable normalization entirely — "cafe" will only match "cafe", and "café" will only match "café". 📘 Exact Diacritics
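      The default folding behaviour is essentially Unicode normalization; a minimal sketch of the idea (not Pagefind's actual implementation):

      ```typescript
      // Minimal sketch of diacritic folding: decompose to NFD, then strip
      // combining marks. Not Pagefind's actual implementation.
      const foldDiacritics = (s: string): string =>
        s.normalize("NFD").replace(/\p{M}+/gu, "");

      console.log(foldDiacritics("café")); // "cafe"
      console.log(foldDiacritics("naïve")); // "naive"
      ```

      With exactDiacritics: true you would compare the raw strings instead of their folded forms.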

      Multilingual Improvements

      Thanks browsers! Pagefind now taps into Intl.Segmenter to chop search queries in CJK (Chinese, Japanese, Korean) non-whitespace-delimited languages. This was already done during indexing by Pagefind, but users searching still had to delimit their queries. Now searching "这是一段简单的测试文本" searches for the words "这", "是", "一段", "简单", "的", "测试", and "文本", which is also how that sentence was indexed.
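      Intl.Segmenter is a standard JavaScript API, so the query-side segmentation described above can be sketched as:

      ```typescript
      // Word-segment a CJK query with the standard Intl.Segmenter API
      // (needs a runtime with full ICU data, e.g. modern browsers or Node).
      const segmenter = new Intl.Segmenter("zh", { granularity: "word" });
      const query = "这是一段简单的测试文本";
      const words = [...segmenter.segment(query)]
        .filter((s) => s.isWordLike)
        .map((s) => s.segment);
      console.log(words); // word-level pieces such as 这 / 是 / 一段 / ...
      ```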

      We also updated the underlying stemming library (thanks @uded !) which brings stemming support for Polish and Estonian (and Esperanto, if anyone is out there indexing some lang="eo" pages). The Snowball upgrade also improves stemming quality across many already-supported languages.

      Indexing Performance

      The indexing binary (the one you install through npx or your wrapper of choice) is now both smaller (so, faster to download) and faster to run, by quite a lot on both fronts. On some sites, indexing is more than twice as fast. Thanks to @zmre for much of this!

      Search Performance

      Pagefind's search now runs in a Web Worker automatically. This doesn't make the search faster, per se, but it dramatically improves perceived performance on large websites by keeping the main thread responsive. If Web Workers are unavailable, it falls back to the main thread automatically.

      Plus: Some low-hanging fruit was picked off, and Pagefind's index chunks are now ~45% smaller thanks to delta-encoding page numbers and word locations.
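      Delta-encoding here means storing the gaps between successive sorted integers rather than their absolute values, so most stored numbers are small and pack tightly; a generic sketch of the idea (not Pagefind's actual on-disk format):

      ```typescript
      // Generic delta-encoding sketch (not Pagefind's on-disk format):
      // sorted page numbers / word locations become small gaps.
      const deltaEncode = (sorted: number[]): number[] =>
        sorted.map((v, i) => (i === 0 ? v : v - sorted[i - 1]));

      const deltaDecode = (deltas: number[]): number[] => {
        let acc = 0;
        return deltas.map((d) => (acc += d));
      };

      console.log(deltaEncode([3, 7, 8, 15])); // [3, 4, 1, 7]
      console.log(deltaDecode([3, 4, 1, 7])); // [3, 7, 8, 15]
      ```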

      New Search Options

      • metaCacheTag — Allows you to configure the cache-busting tag on the metadata file (which is fetched fresh on every page load). For offline/PWA scenarios where assets need to be served with service workers, this can now be overridden.
      • plain_excerpt — Search results and sub-results now include a plain_excerpt field containing the excerpt text without highlight mark tags, for those who want to handle highlighting themselves (or don't want it at all).
      • matchedMetaFields — Search results now include a matchedMetaFields field listing which metadata fields matched the search query.
      • includeCharacters is now available in the Node and Python wrapper APIs.

      UI Translations

      • Added Greek (el) translations. (PR #1019 — thanks @Yoda-Soda !)
      • Improved Chinese Traditional (zh-TW) translations. (PR #990 — thanks @510208 !)
      • Improved German (de) translations. (PR #953 — thanks @randomguy-2650 !)
      • Added translations for new Component UI strings across all existing languages.

      Other bits and bobs

      • Fixed relative image URLs (e.g. ./image.png) breaking when displayed in search results. (PR #1087)
      • Fixed Python x86_64 macOS wheel being incorrectly tagged as arm64. (PR #950 — thanks @lioman !)
      • Fixed Python wheel tags being written in compressed form. (PR #989 — thanks @ichard26 !)
      • Excluded the vendor directory from the main pagefind PyPI package. (PR #991)
      • Migrated Python wrapper build tooling from Poetry to uv. (PR #934 — thanks @SKalt !)
      • Fixed subresult URLs ignoring page meta URL overrides. (PR #1076)
      • Fixed subresult highlight mark color. (PR #1024)
      • Index chunk fetches are now throttled to avoid overwhelming the network on large sites. (PR #1071)
      • Added Windows ARM64 (aarch64-pc-windows-msvc) as a supported platform. (PR #1079)
      • For crate consumers: Moved actix-web and related serving dependencies behind a serve feature flag (PR #1023)

      Looking Forward

      The Component UI is the new recommended way to add search to your site, and future UI work will focus there. The Default UI and Modular UI are sticking around for now, but the Component UI is where new features will land first.

      Thanks again to everyone who contributed to this release!

    23. 🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator
      [link] [comments]

    24. 🔗 r/wiesbaden Karosseriebauer / Lackierer rss

      Moin,

      mir ist jemand gegen das geparkte Auto gedonnert.

      Verkleidung der Frontstoßstange und Kotflügel müssen gemacht werden.

      Der Unfallverursacher ist bekannt und seine Versicherung zahlt.

      Habt ihr Tipps für einen wirklich guten Karosseriebauer / Lackierer?

      Und eventuell auch einen gescheiten Anwalt für Verkehrsrecht?

      submitted by /u/BabaJoe
      [link] [comments]

    25. 🔗 r/LocalLLaMA I technically got an LLM running locally on a 1998 iMac G3 with 32 MB of RAM rss

      Hardware:

      • Stock iMac G3 Rev B (October 1998). 233 MHz PowerPC 750, 32 MB RAM, Mac OS 8.5. No upgrades.
      • Model: Andrej Karpathy’s 260K TinyStories model (Llama 2 architecture). ~1 MB checkpoint.

      Toolchain:

      • Cross-compiled from a Mac mini using Retro68 (GCC for classic Mac OS → PEF binaries)
      • Endian-swapped the model + tokenizer from little-endian to big-endian for PowerPC
      • Files transferred via FTP to the iMac over Ethernet

      Challenges:

      • Mac OS 8.5 gives apps a tiny memory partition by default. Had to use MaxApplZone() + NewPtr() from the Mac Memory Manager to get enough heap
      • RetroConsole crashes on this hardware, so all output writes to a text file you open in SimpleText
      • The original llama2.c weight layout assumes n_kv_heads == n_heads. The 260K model uses grouped-query attention (kv_heads=4, heads=8), which shifted every pointer after wk and produced NaN. Fixed by using n_kv_heads * head_size for wk/wv sizing
      • Static buffers for the KV cache and run state to avoid malloc failures on 32 MB

      It reads a prompt from prompt.txt, tokenizes with BPE, runs inference, and writes the continuation to output.txt. Obviously the output is very short, but this is definitely meant to just be a fun experiment/demo!

      Here’s the repo link: https://github.com/maddiedreese/imac-llm

      submitted by /u/maddiedreese
      [link] [comments]
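      The endian-swap step mentioned in the post amounts to reversing the 4 bytes of every little-endian float32 before the big-endian PowerPC reads them; a generic sketch (an illustration, not the repo's actual conversion script):

      ```typescript
      // Byte-swap a buffer of little-endian float32 values to big-endian
      // (illustration only, not the repo's actual conversion script).
      function swapFloat32Endianness(src: Uint8Array): Uint8Array {
        if (src.length % 4 !== 0) throw new Error("length must be a multiple of 4");
        const out = new Uint8Array(src.length);
        for (let i = 0; i < src.length; i += 4) {
          out[i] = src[i + 3];
          out[i + 1] = src[i + 2];
          out[i + 2] = src[i + 1];
          out[i + 3] = src[i];
        }
        return out;
      }

      // bytes [1, 2, 3, 4] come out reversed per float: [4, 3, 2, 1]
      console.log(swapFloat32Endianness(new Uint8Array([1, 2, 3, 4])));
      ```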

    26. 🔗 r/Yorkshire Anybody here ever been to Market Weighton? Easily one of the nicest small towns in East Yorkshire in my opinion. rss

      I haven't been to Market Weighton since around 2012 but plan on visiting again when I'm next in Hull again, always loved visiting Market Weighton when I lived in East Yorkshire.

      submitted by /u/AcadiaNo1039
      [link] [comments]

    27. 🔗 badlogic/pi-mono v0.65.2 release

      No content.