

to read (pdf)

  1. I don't want your PRs anymore
  2. JitterDropper | OALABS Research
  3. DomainTools Investigations | DPRK Malware Modularity: Diversity and Functional Specialization
  4. EXHIB: A Benchmark for Realistic and Diverse Evaluation of Function Similarity in the Wild
  5. Neobrutalism components - Start making neobrutalism layouts today

  1. April 28, 2026
    1. 🔗 r/Leeds Firstbus app update shenanigans rss

      If you use the Firstbus app for tickets, be warned: they are rolling out an update. The update has gone so well that they have a banner on the website pointing to a separate FAQ specifically for the update, with a big list of reasons why you will probably have to call them to get access to your tickets...

      https://www.firstbus.co.uk/help-support/help-and-support/first-bus-app-update

      submitted by /u/awesomeweles
      [link] [comments]

    2. 🔗 r/reverseengineering Example structure for evidence-based vulnerability reports rss
    3. 🔗 r/reverseengineering DeepZero - Automated Vulnerability Research rss
    4. 🔗 Jessitron Communication is hard, but sometimes I can fix it. rss

      We used to type code to tell the computer what to do. When that got tedious, we made libraries and functions until the code was more communicative.

      Now I type English words to tell the agent what to tell the computer what to do. Sometimes that gets tedious, and then I need to find new ways to make it easier.

      Here’s an example.

      Iterating could be easier. The work: I'm getting Claude to build a program that turns Claude conversation logs into a vertical HTML comic. As we iterate on this, I ask it a lot of questions about the output. This way, I learn something about the problem domain (how Claude Code records conversations), and then I get it to tweak the output to my liking.

      In the example above, I wondered where the Background command "Start dev server on alternate ports" notification came from, so I asked Claude how I could know. To ask it, I had to cut and paste the text from the HTML, and then Claude had to grep the HTML to see what I was talking about, and also grep the JSONL to find the input. What if a very similar message appeared later? It couldn't tell exactly which one I was talking about. I can't just point to the UI.

      This wasn't the first time I struggled to refer to a panel in the comic. This time, my frustration served as an alarm: do something about it, Jess. There has to be a better way to tell it which panel I'm talking about.

      When communication gets difficult, that’s a signal. I can change this.

      So I made it make a way to point to the UI.

      In this case, I asked Claude to add a reference tag to each panel. The reference tag for each panel contains the line number (that was its idea) and filename (that was my idea) of the JSONL line represented by this panel. I push ‘r’ to toggle whether these reference tags show (my idea). When I click one, the value is copied (its idea).

      The HTML comic, with references.

      Now I can ask the same question more succinctly: How can I find out where episode-8-before:L63 came from?

      Claude understood and added a hover effect that highlights the originating bash tool call.

      That hover effect is OK; I used it a few times. Those reference tags are gold! I've used them a dozen times already, and development is smoother for it. Claude can find the panel I’m talking about quickly both in the input JSONL and the output HTML. Our communication is streamlined.
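      A scheme like those reference tags is cheap to generate at comic-build time. Here's a minimal sketch (hypothetical code, not the post's actual generator; the function name and file layout are assumptions), pairing each panel with a filename:Lnn tag that points back to the JSONL source line:

```python
import json

def panels_with_refs(jsonl_path):
    """Yield (ref_tag, record) pairs, one per conversation-log line.

    The tag mirrors the scheme in the post: source filename plus the
    1-based JSONL line number, e.g. 'episode-8-before:L63'.
    """
    stem = jsonl_path.rsplit("/", 1)[-1].removesuffix(".jsonl")
    with open(jsonl_path) as f:
        for lineno, raw in enumerate(f, start=1):
            if raw.strip():  # blank lines produce no panel
                yield f"{stem}:L{lineno}", json.loads(raw)
```

      Pasting one of those tags back into the chat gives the agent an unambiguous handle: it can open the named file, jump to that line, and grep the generated HTML for the same tag.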

      This was a great idea. Iterating is much easier now!

      I am in the loop and on the loop.

      There are (at least) two feedback loops running here. One is the development loop, with Claude doing what I ask and then me checking whether that is indeed what I want. Here, I’m a human in the loop with the AI. This works well since we’re prototyping, learning the domain and discovering what output I want.

      Then there’s a meta-level feedback loop, the “is this working?” check when I feel resistance. Frustration, tedium, annoyance: these feelings are a signal to me that maybe this work could be easier. I step back and think about how the AI could work more accurately and smoothly. Annie Vella called this the “middle loop,” and Kief Morris renamed it "human on the loop."

      Here, I’m both in the development loop with the AI, and I’m “on the loop” as a thoughtful collaborator, smoothing the development loop when it gets rough.

      Resistance will be assimilated.

      As developers using software to build software, we have the potential to mold our own work environment. With AI making software change superfast, changing our program to make debugging easier pays off immediately. Also, this is fun!

    5. 🔗 r/wiesbaden Up the Eiserne Hand on a Vespa rss

      A short and simple question for the moped / scooter riders.

      My girlfriend has to commute to Taunusstein and is considering switching to a scooter.

      Hence my question:

      Can a small 50cc Vespa / moped make it up the Eiserne Hand? That is, at a reasonable speed?

      Has any of you done this before?

      Thanks in advance for the answers :)

      submitted by /u/metaldog
      [link] [comments]

  2. April 27, 2026
    1. 🔗 r/Leeds Scam companies to avoid rss

      I will attach pictures showing what to look out for. Additionally, be careful of anyone promising high pay. These people compliment you and essentially groom you into an extremely low-wage, door-to-door sales job, whilst promising greater things, e.g. quick career progression.

      submitted by /u/Fit-Librarian5590
      [link] [comments]

    2. 🔗 r/LocalLLaMA Microsoft Presents "TRELLIS.2": An Open-Source, 4b-Parameter, Image-To-3D Model Producing Up To 1536³ PBR Textured Assets, Built On Native 3D VAES With 16× Spatial Compression, Delivering Efficient, Scalable, High-Fidelity Asset Generation. rss

      TRELLIS.2 is a state-of-the-art large 3D generative model (4B parameters) designed for high-fidelity image-to-3D generation. It leverages a novel "field-free" sparse voxel structure termed O-Voxel to reconstruct and generate arbitrary 3D assets with complex topologies, sharp features, and full PBR materials.


      Link to the Paper:

      Link to the Code:

      Link to Try Out A Live Demo:

      submitted by /u/44th--Hokage
      [link] [comments]

    3. 🔗 r/york Where do parents buy baby/child car seats now that Paul Stride has closed? rss

      Where is there nearby that is good for buying car seats? You don't know what you've got until it's gone: Paul Stride was amazing, and we now need a replacement seat for our 3 year old.

      submitted by /u/amusedfridaygoat
      [link] [comments]

    4. 🔗 MetaBrainz MusicBrainz Server update, 2026-04-27 rss

      This release mostly consists of a very substantial rewrite of the external links editor code, to make that section of our editors more efficient. While doing that, we also fixed a few long-standing links editor bugs. We kept this code in beta for quite a while so the community could help us catch most new bugs, but do not hesitate to report any issues you might find.

      A new release of MusicBrainz Docker is also available that matches this update of MusicBrainz Server. See the release notes for update instructions.

      Thanks to rinsuki for having contributed to the code. Thanks to fabe56, HibiscusKazeneko and Lioncat6 for having reported bugs and suggested improvements. Thanks to Besnik, DenilsonSama, Khaled Salama, Marc Riera, ShimiDoki, Vaclovas Intas, cerberuzzz, coldified_, dddrnzv, dulijuong_artist, imgradeone, karpuzikov, mfmeulenbelt, salo.rock, smreo1590, syntariavoxmortem, wileyfoxyx and yyb987 for updating the translations. And thanks to all others who tested the beta version!

      The git tag is v-2026-04-27.0.

      Fixed Bug

      • [MBS-8570] - "This relationship already exists" error message does not go away when one duplicate URL is removed
      • [MBS-12032] - Adding a duplicate URL rel moves link to new section
      • [MBS-14307] - Wikipedia extracts are not displaying
      • [MBS-14309] - Can't click documentation/help links

      Improvement

      • [MBS-14279] - Support Amazon Belgium links
      • [MBS-14280] - Block archive.today, archive.is, archive.ph, archive.li, archive.fo, archive.md and archive.vn links

      Task

      • [MBS-11521] - Refactor error handling in the external links editor
      • [MBS-11889] - Refactor state handling in the external links editor
      • [MBS-13716] - Update React to v19
    5. 🔗 Simon Willison Tracking the history of the now-deceased OpenAI Microsoft AGI clause rss

      For many years, Microsoft and OpenAI's relationship has included a weird clause saying that, should AGI be achieved, Microsoft's commercial IP rights to OpenAI's technology would be null and void. That clause appeared to end today. I decided to try and track its expression over time on openai.com.

      OpenAI, July 22nd 2019 in Microsoft invests in and partners with OpenAI to support us building beneficial AGI (emphasis mine):

      OpenAI is producing a sequence of increasingly powerful AI technologies, which requires a lot of capital for computational power. The most obvious way to cover costs is to build a product, but that would mean changing our focus. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.

      But what is AGI? The OpenAI Charter was first published in April 2018 and has remained unchanged at least since this March 11th 2019 archive.org capture:

      OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

      Here's the problem: if you're going to sign an agreement with Microsoft that is dependent on knowing when "AGI" has been achieved, you need something a little more concrete.

      In December 2024 The Information reported the details (summarized here outside of their paywall by TechCrunch):

      Last year’s agreement between Microsoft and OpenAI, which hasn’t been disclosed, said AGI would be achieved only when OpenAI has developed systems that have the ability to generate the maximum total profits to which its earliest investors, including Microsoft, are entitled, according to documents OpenAI distributed to investors. Those profits total about $100 billion, the documents showed.

      So AGI is now whenever OpenAI's systems are capable of generating $100 billion in profit?

      In October 2025 the process changed to being judged by an "independent expert panel". In The next chapter of the Microsoft–OpenAI partnership:

      The agreement preserves key elements that have fueled this successful partnership—meaning OpenAI remains Microsoft’s frontier model partner and Microsoft continues to have exclusive IP rights and Azure API exclusivity until Artificial General Intelligence (AGI). [...]

      Once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel. [...]

      Microsoft’s IP rights to research, defined as the confidential methods used in the development of models and systems, will remain until either the expert panel verifies AGI or through 2030, whichever is first.

      OpenAI on February 27th, 2026 in Joint Statement from OpenAI and Microsoft:

      AGI definition and processes are unchanged. The contractual definition of AGI and the process for determining if it has been achieved remains the same.

      OpenAI today, April 27th 2026 in The next phase of the Microsoft OpenAI partnership (emphasis mine):

      • Microsoft will continue to have a license to OpenAI IP for models and products through 2032. Microsoft’s license will now be non-exclusive.
      • Microsoft will no longer pay a revenue share to OpenAI.
      • Revenue share payments from OpenAI to Microsoft continue through 2030, independent of OpenAI’s technology progress, at the same percentage but subject to a total cap.

      As far as I can tell "independent of OpenAI’s technology progress" is a declaration that the AGI clause is now dead. Here's The Verge coming to the same conclusion: The AGI clause is dead.

      My all-time favorite commentary on OpenAI's approach to AGI remains this 2023 hypothetical by Matt Levine:

      And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.


    6. 🔗 r/york Askham Tesco recycling rss

      Does anyone know when the big cardboard recycling skip gets emptied? It's been full for weeks now and is in a state

      submitted by /u/Isla_Nooblar
      [link] [comments]

    7. 🔗 @binaryninja@infosec.exchange The debugger got some real love in our latest update. Hardware breakpoints and mastodon

      The debugger got some real love in our latest update. Hardware breakpoints and conditional breakpoints have both landed, and the new debug adapters make things faster and more reliable across a range of workflows. Read more from the latest blog: https://binary.ninja/2026/04/13/binary-ninja-5.3-jotunheim.html#debugger

    8. 🔗 r/reverseengineering rfcat-py3 rss
    9. 🔗 r/wiesbaden Anyone fancy coming along to a concert in Köln (Aries) with me this Wednesday? I'll pay for the ticket rss

      I (21M) live near Wiesbaden and am going to a concert in Köln this Wednesday. The artist is called Aries and goes in an indie/pop/rock/hip-hop direction (here's a sample). I'm really looking forward to it. My only problem is that I don't have a car, and with public transport I'd get back home at around 6 in the morning.

      If any of you take me along (there and back), I'll pay for your ticket + €20 fuel money. So if you're up for something like that, feel free to message me within the next 24 hours.

      Edit: if you have other ideas for what I should do if this doesn't work out, I'm all ears. My current backup plan is to take BlaBlaCar there and walk through the crowd at the concert with a cardboard sign:

      Köln -> Frankfurt

      Anybody?

      submitted by /u/BullfrogMiserable554
      [link] [comments]

    10. 🔗 r/york Thinking of buying a Persimmon new build home in Selby. There’s so many mixed reviews about this company. Was wondering on people’s experiences with this company. rss
    11. 🔗 r/LocalLLaMA Luce DFlash: Qwen3.6-27B at up to 2x throughput on a single RTX 3090 rss

      Hey fellow Llamas, your time is precious, so I'll keep it short. We built a GGUF port of DFlash speculative decoding: a standalone C++/CUDA stack on top of ggml that runs on a single 24 GB RTX 3090 and hosts the new Qwen3.6-27B. We call it Luce DFlash (https://github.com/Luce-Org/lucebox-hub; MIT). It gets a ~1.98x mean speedup over autoregressive decoding on Qwen3.6 across HumanEval / GSM8K / Math500, with zero retraining (z-lab published a matched Qwen3.6-DFlash draft on 2026-04-26, still under training, so AL should keep climbing).

      If you have CUDA 12+ and an NVIDIA GPU (RTX 3090 / 4090 / 5090, DGX Spark, other Blackwell, or Jetson AGX Thor with CUDA 13+), all you need is:

        # After cloning the repo (link in the first comment):
        cd lucebox-hub/dflash
        cmake -B build -S . -DCMAKE_BUILD_TYPE=Release
        cmake --build build --target test_dflash -j

        # Fetch target (~16 GB)
        huggingface-cli download unsloth/Qwen3.6-27B-GGUF Qwen3.6-27B-Q4_K_M.gguf --local-dir models/

        # Matched 3.6 draft is gated: accept terms + set HF_TOKEN first
        huggingface-cli download z-lab/Qwen3.6-27B-DFlash --local-dir models/draft/

        # Run
        DFLASH_TARGET=models/Qwen3.6-27B-Q4_K_M.gguf python3 scripts/run.py --prompt "def fibonacci(n):"

      That's it. No Python runtime in the engine, no llama.cpp install, no vLLM, no SGLang. The binary links libggml*.a and never libllama. Luce DFlash will

      • Load Qwen3.6-27B Q4_K_M target weights (~16 GB) plus the matched DFlash bf16 draft (~3.46 GB) and run DDTree tree-verify speculative decoding (block size 16, default budget 22, greedy verify).
      • Compress the KV cache to TQ3_0 (3.5 bpv, ~9.7x vs F16) and roll a 4096-slot target_feat ring so 256K context fits in 24 GB. Q4_0 is the legacy path and tops out near 128K.
      • Auto-bump the prefill ubatch from 16 to 192 for prompts past 2048 tokens (~913 tok/s prefill on 13K prompts).
      • Apply sliding-window flash attention at decode (default 2048-token window, 100% speculative acceptance retained) so 60K context still decodes at 89.7 tok/s instead of 25.8 tok/s.
      • Serve over an OpenAI-compatible HTTP endpoint or a local chat REPL.

      Running on RTX 3090, Qwen3.6-27B UD-Q4_K_XL (unsloth Dynamic 2.0) target, 10 prompts/dataset, n_gen=256:

      | Bench | AR tok/s | DFlash tok/s | AL | Speedup |
      | --- | ---: | ---: | ---: | ---: |
      | HumanEval | 34.90 | 78.16 | 5.94 | 2.24x |
      | Math500 | 35.13 | 69.77 | 5.15 | 1.99x |
      | GSM8K | 34.89 | 59.65 | 4.43 | 1.71x |
      | Mean | 34.97 | 69.19 | 5.17 | 1.98x |

      As you can see, the speedup is real on consumer hardware, not a paper number. The target graph produces bit-identical output to autoregressive in AR mode; the draft graph matches the z-lab PyTorch reference at cos sim 0.999812. Q4_0 KV costs ~3% AL at short context (8.56 to 8.33) and wins at long context, where F16 won't fit anyway.

      Constraints: CUDA only, greedy verify only (temperature/top_p on the OpenAI server are accepted and ignored), no Metal / ROCm / multi-GPU. The repo started single-3090; recent community PRs added support for RTX 5090, DGX Spark / GB10, other Blackwell cards, and Jetson AGX Thor (sm_110 + CUDA 13).

      Feedback more than welcome! submitted by /u/sandropuppo
      [link] [comments]
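      The Speedup column above is just the ratio of the two throughput columns, which is easy to double-check with plain arithmetic on the numbers reported in the post:

```python
# Reported (AR tok/s, DFlash tok/s) per benchmark, from the post's table.
bench = {
    "HumanEval": (34.90, 78.16),
    "Math500": (35.13, 69.77),
    "GSM8K": (34.89, 59.65),
    "Mean": (34.97, 69.19),
}
for name, (ar, dflash) in bench.items():
    # Speedup = DFlash throughput / autoregressive throughput
    print(f"{name}: {dflash / ar:.2f}x")
```

      Note that the ~2x speedup is well below the ~5 acceptance length (AL): each accepted block still pays for a draft pass plus a target verify pass, so AL is an upper bound on the realized gain, not the gain itself.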

    12. 🔗 r/Leeds Problem neighbours rss

      We have a house of multiple occupancy next door to our house, with adjoining garages. One of the garages is rented out to someone who does not live in any of the nearby houses and just rents the garage. This garage is in very frequent use by the guy renting it, who is habitually working on his car or multiple cars with groups of noisy people, dragging equipment around and using power tools weekend after weekend whenever the weather is good. We have a lovely quiet area apart from when this guy and his cohort show up, and they don't even live here.

      Is there any department in LCC we can contact to get help with this? It is starting to really affect our quality of life and put us off spending time in our own garden, and I imagine it is affecting other neighbours too. Or does anyone know how I can find out who owns the property next door?

      Imagine if every Sunday it was like having a mechanic's / building site going full tilt all afternoon. It's amazing how thoughtless people can be.

      Thanks

      submitted by /u/sanchez599
      [link] [comments]

    13. 🔗 r/reverseengineering Using Google's Gemma 4 E4B local AI model to Reverse Engineer a simple Crackme rss
    14. 🔗 HexRaysSA/plugin-repository commits sync plugin-repository.json rss
      sync plugin-repository.json
      
      No plugin changes detected
      
    15. 🔗 r/Harrogate Has the gentrification of Bilton begun? rss

      Lots of new movers, young and from Leeds. Will this lead to businesses popping up supporting their tastes? The Knox is pricier than some town center spots already!

      submitted by /u/MechanicAggressive16
      [link] [comments]

    16. 🔗 sacha chua :: living an awesome life 2026-04-27 Emacs news rss

      There was a big discussion on lobste.rs about people's favourite Emacs packages and that sparked similar conversations on Reddit and HN. Discussions like that are a great source of inspiration. I added a couple of small improvements to my config based on this week's Emacs news, like diff-hl.

      Also, lots of people expressed their appreciation for Chris Wellons, who is moving on to other editors for now. Me, I've enjoyed using simple-httpd, impatient, and skewer, and I'm glad Chris made and shared them. Many of his packages already have new maintainers, and the rest are up for adoption. Perhaps we'll see him around again someday!

      Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

      You can e-mail me at sacha@sachachua.com.

    17. 🔗 r/Leeds Anyone looking for more Alt/Rock Friends? like going Key Club, Spoons, NQ64, Pixel Bar etc?.. Join our Alt/Rock/Emo Whatsapp Social Group! xo rss

      Love Keyclub (Slamdunk, FUEL, GARAGE Clubnights), NQ64, Pixel Bar, Wetherspoons, Pubs etc but have a lack of alternative friends to go with? Just want to make more alternative friends, have fun chats & get involved in social events?

      A few of us from Reddit, Facebook etc have banded together from previous appeals and have a new fun Whatsapp Alt/Rock/Emo Social Group chat now, 100+ members and counting!

      We had a successful recruitment post on here a few months ago which blew up and got overwhelming, so we had to trickle people in, but there are too many to go through. So I'm starting a fresh post to add more people.

      The group is roughly 18-35 age range & currently around 50/50 gender mix so plenty of people of different age/genders etc, very inclusive and everyone is getting on great together.

      We have regular nights out especially on Weekends (Keyclub Club Nights, Spoons, Bars, NQ64, Pixel Bar, Flight Club, Cinema trips.. anything fun really!) which can get anywhere from 10-15 people attending. Spoons & Key Club on Saturdays is a particular fave. but we are always planning social events, mid week chill things etc

      We also have a discord for chill voice chats & casual gaming etc.

      If you'd like to join then leave a comment with your age/gender & I'll DM you an invite! all welcome

      I will invite people in slowly so as to keep the ratio of ages, sexes etc. balanced, so there's always people of a similar age.

      Leave a comment & I'll DM an invite when available! x

      PLEASE CHECK DMS FOR INVITES

      submitted by /u/rmonkey100
      [link] [comments]

    18. 🔗 r/york Flowers make this city even better somehow🥹💐🪻 rss

      submitted by /u/Wedding-Beauty
      [link] [comments]

    19. 🔗 r/LocalLLaMA To 16GB VRAM users, plug in your old GPU rss

      For those who want to run the latest dense ~30B models and only have 16GB VRAM: if you have an old card with 6GB VRAM or more, plug it in.

      What matters is that everything fits in VRAM, even across two cards, and even if one of them is quite weak.

      I have a 5070Ti 16GB and an old 2060 6GB. The common idea is that you need two identical GPUs to maximize performance. But one day I was struck by the idea: why not give it a try?

      Let's see: if you didn't buy a motherboard just for LLMs, it's very possible you have one true PCI-E x16 slot and a couple that look like x16 but are actually wired as x4, just like me. That's a perfect slot for an old card.

      16GB + 6GB = 22GB, which gets close to the 24GB class of cards. If you have a better old card, lucky you!

      Then you use llama-server with a config like this

      [*]
      jinja = true
      cache-prompt = true
      n-gpu-layers = 999
      no-mmap = true
      mlock = false
      np = 1
      t = 0

      [qwen/qwen3.6-27b]
      model = ./Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf
      mmproj = ./Qwen3.6-27B-GGUF/mmproj-Qwen3.6-27B-BF16.gguf
      reasoning = on
      dev = Vulkan1,Vulkan2
      c = 128000
      no-mmproj-offload = true
      cache-type-k = q8_0
      cache-type-v = q8_0
      

      A couple of specific points:
      - dev=Vulkan1,Vulkan2 enables the two GPUs; run llama-server.exe --list-devices to see what you should set.
      - no-mmap and mlock=false keep the model out of your RAM.
      - np=1, no-mmproj-offload (or do not supply an mmproj model), and cache-type-k / cache-type-v minimize the VRAM needed.
      - n-gpu-layers=999 prefers GPU offloading; this may be unnecessary, but I'd keep it.
      - split-mode=layer splits the layers asymmetrically across the devices; "layer" is the default, so you don't see it above.
      - c=128000 could be a bit of a stretch, but it works well enough for me.
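      To sanity-check what fits in the combined 22GB, a rough KV-cache estimate helps. A back-of-the-envelope sketch (the layer/head/dim numbers below are illustrative assumptions, not Qwen3.6-27B's actual configuration):

```python
def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_val):
    """Rough KV-cache size: K and V each store
    n_layers * n_kv_heads * head_dim values per cached token."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_val / 2**30

# Hypothetical GQA dims; q8_0 is roughly 1 byte per value plus a small scale overhead.
print(f"{kv_cache_gib(48, 4, 128, 128_000, 1.0):.1f} GiB")
```

      Under those assumed dims, a q8_0 cache at 128k context lands in the ballpark of 6 GiB, which alongside ~15.4 GiB of Q4_K_M weights illustrates why quantizing K/V (rather than keeping F16) matters for fitting.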

      BTW, I also have an Intel integrated GPU that I plug the monitors into, which is Vulkan0.

      Some numbers: at 128k max context and 71k actual context usage, pp=186t/s and tg=19t/s, quite a usable speed compared to the 4t/s on a single card.

      [56288] prompt eval time = 5761.53 ms / 1076 tokens ( 5.35 ms per token, 186.76 tokens per second)
      [56288] eval time = 58000.15 ms / 1114 tokens ( 52.06 ms per token, 19.21 tokens per second)
      [56288] total time = 63761.69 ms / 2190 tokens
      [56288] slot release: id 0 | task 654 | stop processing: n_tokens = 71703, truncated = 0
      

      Edit:

      Some folks wanted numbers, so here is llama-bench. This is with CUDA instead. Runs with --device CUDA0 use a single GPU; runs without it use all GPUs. It's fairly clear that fitting everything on GPU, even partly on a second weak one, matters a lot for tg speed, especially at long context.

      llama-b8948-bin-win-cuda-12.4-x64/llama-bench.exe \
        --model ./lmstudio-community/Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf \
        --device CUDA0 --fit-target 64 -d 8192,16384

      | model | size | params | backend | ngl | dev | fitt | test | t/s |
      | --- | ---: | ---: | --- | --: | --- | ---: | ---: | ---: |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | pp512 @ d8192 | 903.13 ± 26.25 |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | tg128 @ d8192 | 16.54 ± 0.14 |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | pp512 @ d16384 | 663.60 ± 9.22 |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | tg128 @ d16384 | 12.03 ± 0.08 |

      llama-b8948-bin-win-cuda-12.4-x64/llama-bench.exe \
        --model ./lmstudio-community/Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf \
        --fit-target 64 -d 8192,16384

      | model | size | params | backend | ngl | fitt | test | t/s |
      | --- | ---: | ---: | --- | --: | ---: | ---: | ---: |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | pp512 @ d8192 | 769.00 ± 4.50 |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | tg128 @ d8192 | 25.40 ± 0.30 |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | pp512 @ d16384 | 668.83 ± 2.83 |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | tg128 @ d16384 | 24.31 ± 0.09 |

      llama-b8948-bin-win-cuda-13.1-x64/llama-bench.exe \
        --model ./lmstudio-community/Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf \
        --device CUDA0 --fit-target 64 -d 8192,16384

      | model | size | params | backend | ngl | dev | fitt | test | t/s |
      | --- | ---: | ---: | --- | --: | --- | ---: | ---: | ---: |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | pp512 @ d8192 | 981.43 ± 27.91 |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | tg128 @ d8192 | 16.87 ± 0.17 |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | pp512 @ d16384 | 751.15 ± 16.03 |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | tg128 @ d16384 | 12.08 ± 0.12 |

      llama-b8948-bin-win-cuda-13.1-x64/llama-bench.exe \
        --model ./lmstudio-community/Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf \
        --fit-target 64 -d 8192,16384

      | model | size | params | backend | ngl | fitt | test | t/s |
      | --- | ---: | ---: | --- | --: | ---: | ---: | ---: |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | pp512 @ d8192 | 807.61 ± 7.40 |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | tg128 @ d8192 | 24.85 ± 1.57 |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | pp512 @ d16384 | 732.96 ± 3.86 |
      | qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | tg128 @ d16384 | 24.40 ± 0.07 |
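      The long-context takeaway falls straight out of the tg128 rows: dividing dual-GPU throughput by single-GPU (--device CUDA0) throughput for the CUDA 12.4 runs shows the gap widening with context depth:

```python
# tg128 tok/s from the CUDA 12.4 runs: (single GPU via --device CUDA0, both GPUs)
tg128 = {"d8192": (16.54, 25.40), "d16384": (12.03, 24.31)}
for depth, (single, dual) in tg128.items():
    print(f"{depth}: {dual / single:.2f}x faster with both GPUs")
```

      At d16384 the second, weak card roughly doubles generation speed, consistent with the point that keeping everything in VRAM matters more than matching GPUs.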
      

      submitted by /u/akira3weet
      [link] [comments]

    20. 🔗 r/Yorkshire Cherry trees colouring the world. rss
    21. 🔗 r/Leeds Does anyone have spare beer bottles? rss

      I am brewing my own beer and I need bottles preferably brown. If you work in a pub and have empties I can come and collect? My local only does alc free bottles and doesn’t sell many. Thanks

      submitted by /u/DiligentPotential960
      [link] [comments]

    22. 🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator
      [link] [comments]

    23. 🔗 r/wiesbaden Need help with moving rss


      Hey guys!

      My girlfriend and I just moved to Wiesbaden for university (Daimlerstraße, 65197). I rented a van myself and drove all our stuff here to our new apartment. But now we have a problem: we can’t get our washing machine from the transporter up to our apartment on the 4th floor.

      Any suggestions, or maybe someone nearby who could help us carry it up? Happy to compensate!

      Thanks a lot in advance.

      submitted by /u/Orph3us_151
      [link] [comments]

  3. April 26, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-26 rss

      IDA Plugin Updates on 2026-04-26

      New Releases:

      Activity:

    2. 🔗 r/Yorkshire Sunrise at Cow and Calf rocks, Ilkley, West Yorkshire rss

      📸 Tatiana Hepplewhite
      submitted by /u/Wedding-Beauty
      [link] [comments]

    3. 🔗 r/LocalLLaMA Confirmed: SWE Bench is now a benchmaxxed benchmark rss

      submitted by /u/rm-rf-rm
      [link] [comments]

    4. 🔗 r/Harrogate Almost drove into bus station rss

      First time driving in Harrogate on the way to Leeds, and I got confused at the turning because everyone was indicating right (so I assumed I could drive straight ahead, which is where I thought the earlier sign meant to go).

      I realised my mistake almost immediately, stopped, and reversed out quickly after only going in a little bit (not even sure it would count as having entered).

      How much would the fine be if it is registered as an offence, and would I be able to contest it? I'm a relatively new driver, if that helps my case.

      submitted by /u/stonecoldtruecel67
      [link] [comments]

    5. 🔗 r/wiesbaden How did you find the Vinothon? rss

      The question is basically in the title. Unfortunately I had to cancel at short notice due to illness, but I'm curious how you experienced the first Vinothon.

      Was it well organised? What was there at the tasting stations? Did everyone get something, or was there too little? Did you have fun? :D

      The weather was great, at least.

      submitted by /u/itsKoeri
      [link] [comments]

    6. 🔗 r/Yorkshire Then & Now Pt 3 rss
    7. 🔗 r/Yorkshire Then & Now Pt 1 rss
    8. 🔗 r/wiesbaden Are there any group meetups here? rss

      Hi,

      Are there by any chance any game nights or WhatsApp groups here to connect with people?

      submitted by /u/Right_Drawing_5299
      [link] [comments]

    9. 🔗 r/Yorkshire It’s Grim Up North rss

      Near Wetherby.

      submitted by /u/Pitiful-Hearing5279
      [link] [comments]

    10. 🔗 r/Harrogate Anyone interested in a game of snooker or pool? rss

      I'm 30M, from HG1

      submitted by /u/ObjectDelicious3427
      [link] [comments]

    11. 🔗 r/Yorkshire Friday rss

      Friday | I work in Liversedge and live in Halifax, and the bus journey gets a bit dull. So on Friday morning, after a night shift, I got a bus to Leeds, another across to Pickering. I had a few hours there and then got the bus over the moors to Whitby. Got the coast bus down to Scarborough and then the Coastliner back to Leeds. Thoroughly recommend it if you’re ever at a loose end for a day. submitted by /u/kitty_pickle
      [link] [comments]

    12. 🔗 r/LocalLLaMA HauhauCS (of "Uncensored Aggressive" fame) published an abliteration package that plagiarizes Heretic without attribution, and violates its license rss

      HauhauCS (u/hauhau901) publishes uncensored LLM models on HuggingFace with 5M+ combined monthly downloads across 22 models (verified via the HuggingFace API, April 2026). Every model card claims "0/465 refusals, zero capability loss." When asked about methodology on HuggingFace, the response was: "Currently it's my own private methods and tools :) Not interested in any donations."

      We recovered the deleted source code from PyPI's CDN. It's a fork of Heretic (AGPL-3.0).

      Full 17-point code breakdown, benchmark analysis, and SHA-256 verified downloads: dreamfast.github.io/reaper-analysis

      The evidence

      • 7/7 module filenames preserved from Heretic v1.2.0
      • 30/32 refusal markers character-for-character identical, including "i an ai" missing the "m" and "i can'" missing the "t"
      • 30+ shared function and class names including get_readme_intro, DatasetSpecification, batchify
      • Identical Optuna parameter bounds: (0.4, 0.9) and (0.6, 1.0) multiplied by last_layer_index
      • The config was renamed from Heretic's good_prompts/bad_prompts to safe_prompts/harmful_prompts, but the internal variables were left as good_residuals/bad_residuals, matching Heretic exactly
      • The entire analyser geometry pipeline is reproduced step by step: geometric median computation, PaCMAP with n_neighbors=30, atan2 rotation with the same [[ct, -st], [st, ct]] rotation matrix. Heretic's author notes he has "never seen" the geometric median approach in abliteration literature.
      • A source comment in config.py reads: "kept as a module-level tuple so the literal does not duplicate line-for-line with any fork." A human hiding a fork would not document the evasion. An LLM asked to refactor code would describe the rationale as written.
      • SPDX headers in identical format across all core files, just the copyright holder swapped

      View 17 hand-picked code snippet comparisons in the side-by-side comparison.
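      The geometric fingerprints listed above (a geometric median step, PaCMAP, and an atan2 rotation using a [[ct, -st], [st, ct]] matrix) can be sketched in plain Python. This is only an illustration of the named techniques, not the recovered Reaper or Heretic code; the function names and the Weiszfeld iteration are assumptions about the standard formulations:

      ```python
      import math

      def geometric_median(points, iters=100, eps=1e-9):
          """Weiszfeld's algorithm: the point minimizing the sum of
          Euclidean distances to `points`."""
          dim = len(points[0])
          # start from the coordinate-wise mean
          m = [sum(p[i] for p in points) / len(points) for i in range(dim)]
          for _ in range(iters):
              # inverse-distance weights; eps guards against division by zero
              weights = [1.0 / max(math.dist(p, m), eps) for p in points]
              total = sum(weights)
              m = [sum(w * p[i] for w, p in zip(weights, points)) / total
                   for i in range(dim)]
          return m

      def rotate_to_x_axis(x, y):
          """Rotate a 2-D point onto the positive x-axis, recovering the
          angle with atan2 and applying a [[ct, -st], [st, ct]] matrix."""
          theta = -math.atan2(y, x)
          ct, st = math.cos(theta), math.sin(theta)
          return (ct * x - st * y, st * x + ct * y)
      ```

      Neither function is exotic on its own; the claim in the analysis is that the exact combination and constants (e.g. n_neighbors=30) recur in both codebases.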

      Heretic's author confirms derivation

      Philipp Emanuel Weidmann, the creator of Heretic, reviewed the recovered source code and stated: "I can say with certainty that this package was plagiarized from Heretic, and then probably refactored using an LLM in an attempt to hide this." He identified the same SPDX headers, the geometric median approach he has "never seen in literature," the DatasetSpecification fields including residual_plot_label and residual_plot_color, the cascading dtype fallback, the good/bad naming convention, and more. He calls it "a clear violation of Sections 4 and 5 of the AGPL. It's also a clear violation of every ethical standard imaginable, and an obvious case of outright plagiarism." Full quote on the analysis page.

      License violation

      Heretic is AGPL-3.0, which requires modified versions to preserve original copyright notices, identify as derivative works, and remain under AGPL-3.0. Reaper removed all copyright notices, does not identify itself as a derivative work of Heretic, and relicensed to PolyForm Noncommercial.

      Verify it yourself

      Grab the files here

      submitted by /u/nathandreamfast
      [link] [comments]

    13. 🔗 r/Leeds Mid 20s F looking to meet new people rss

      I’m looking to meet new people around my age and really struggling with it at the moment. Joined several groups but always end up fading out. Can anyone recommend some places or if anyone wants to meet? 😊

      Seen several posts but looking to see if there’s anything new?

      submitted by /u/Exciting_Shoulder_88
      [link] [comments]

    14. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      ## New releases
      - [tc_deer](https://github.com/arkup/tc_deer): 0.1.3
      
    15. 🔗 r/LocalLLaMA Qwen3.6 35B A3B Heretic (KLD 0.0015!) Incredible model. Best 35B I have found! rss

      Qwen3.6 35B A3B Heretic (KLD 0.0015!) Incredible model. Best 35B I have found! | Been using this for a few days. It is BY FAR the best uncensored model I have found for Qwen 3.6 35B. With IQ4XS, Q8 KV cache, 262K context, it fits in 24GB of VRAM and does not fail on multi-turn tool calls. I honestly feel like it is smarter than the original model (call me crazy). The model also has a very low KLD, so it should in theory be similar to the original model on harmless prompts. llmfan's 3.5 35B model does actually benchmark higher than the original in the UGI NatInt section, so I have a solid hunch this 3.6 35B will also benchmark higher than the original 3.6 model as well. Y'all should give it a try. submitted by /u/My_Unbiased_Opinion
      [link] [comments]

    16. 🔗 r/york York is beautiful from every angle✨🌹 rss

      York is beautiful from every angle✨🌹 | submitted by /u/No_Donut1433
      [link] [comments]

    17. 🔗 r/york EFL sticker swap shop in York? rss

      EFL sticker swap shop in York? | Anybody know of any EFL sticker swap event in York or interested in organizing one? (I'm a man in my 30s, I know...) Got 150 spares and still need 400 to finish the album. Any tips on who is stocking them atm would also be much appreciated. submitted by /u/Beans-4862
      [link] [comments]

    18. 🔗 r/york Charity bike ride setting off today from the Eye of York rss

      Charity bike ride setting off today from the Eye of York | 200 bikes riding up to Huby raising money for the palliative care centre at York Hospital. Looking great with the blue sky over Clifford’s Tower! submitted by /u/York_shireman
      [link] [comments]

    19. 🔗 r/Harrogate Cheapest option to London rss

      I need to travel to London once a week for a few months for a job, what’s the best and cheapest way to book this? I’ve found booking via uber gets 10% credits and Avios points. Are there any others??

      submitted by /u/Odd_Bookkeeper_6027
      [link] [comments]

    20. 🔗 Register Spill Joy & Curiosity #83 rss

      This is a time of great technological change. You could even wring a "once in a lifetime" out of me. Many times per week now I say to either myself or someone who just shared some news: this is crazy, man.

      The numbers, the pace, the demand, the bottlenecks shifting, the new capabilities emerging, and, man, the predictions. The predictions. AI will do that, AI will do this, in the future we'll do all of this and none of that, but surely this will still be that and that thing will be the most important thing.

      I've done it too, of course. I've predicted quite a few things in past issues of this newsletter and, hey, yes, I was right a few times. And so were others.

      But we're talking about technological progress here and that is very hard to predict, especially its second-order effects. So, as you read through the things I shared below, I want you to keep the following quote in mind, because it's been stuck in mine for many weeks now and I found it helpful to carry around with me:

      He did not create a world that went as he wanted, but he created a world that went well. We have many examples of that. Trains and bicycles come in, and we get feminism because it's easier for people, especially women, to move freely and independently. They can organize. They can mobilize. We get suffragettes. Did the inventor of the train intend for there to be women's liberation? No. Did it go the way he imagined? No. Did it go well? Yes.

      Or consider this:

      After the Great War, the Haber-Bosch process was used throughout the world to fix nitrogen on a grand scale. […] It was synthetic fertilizer that enabled Europe, the Americas, China and India to escape mass starvation and consign famine largely to the history books: the annual death rate from famine in the 1960s was 100 times greater than in the 2010s. […] If Haber and Bosch had not achieved their near-impossible innovation, the world would have ploughed every possible acre, felled every forest and drained every wetland, yet would be teetering on the brink of starvation, just as William Crookes had forecast.

      That was after the war. Here's what Bosch and Haber did with their process during the war:

      Then in September 1914 Bosch made the famous 'saltpetre promise' that he could convert the Oppau plant so that it turned ammonia into nitrate, using a newly discovered iron-bismuth catalyst. He built an even bigger plant at Leuna, producing huge quantities of nitrate and thus probably prolonging the war. Haber, in the meantime, had invented gas warfare, personally presiding over the first chlorine attack at Ypres in March 1915.

      Now, who would've predicted going from that to that?

      • Amp's smart mode now uses Opus 4.7. I think it's a great model. I now often switch between smart and deep mode. One plans, the other reviews, and vice versa.

      • Last week I re-read Mike Acton's Expectations of Professional Software Engineers and, man, is it good. So, so good. If you haven't, you need to read this right now. This is software engineering in a team, in a company, in a business. Hacking isn't programming isn't engineering, but what he describes here, that's the real thing. And -- of course you have to say this, Thorsten -- yes: this all still applies when using AI. Maybe even more so. Just like The Basics.

      • For many, many years I've come across strong recommendations to watch this talk by Richard Hamming: You and Your Research. Not considering myself a scientist, I shrugged off those recommendations and never saw it. I can tell you now: that was a huge mistake. This morning, right after waking up, still in bed, I read this transcript, start to end, and let me tell you this: watch the talk or read the transcript! If you're here, reading this newsletter, I'm certain you will get something out of it. It's fantastic.

      • Highly, highly recommend you watch this interview with Dylan Patel on the current state of tokenomics. Really: if you only have a vague idea of what "compute constrained" means, you have to watch this. (Also, the last ten minutes, in which Dylan talks about the optics of the model companies, are kinda separate from tokenomics, but worth it alone.)

      • Talking of which: "Cursor has also given SpaceX the right to acquire Cursor later this year for $60 billion or pay $10 billion for our work together." $60 billion (!) now sounds like $60 million did in 2012.

      • Kevin Kwok's thoughts on Cursor's and SpaceX's partnership are interesting, but I disagree with him on the premise that model and harness have to go hand in hand. I don't think the causality of the loop is there: Claude 3.5's ability and eagerness for tool calls was the Urknall of agents. That's what led us to build Amp and Anthropic to build Claude Code.

      • Bonkers numbers: Google wants to invest up to $60B in Anthropic. The Hacker News comments are interesting.

      • Justin Jackson is asking: what has technology done to us? I very much don't agree with the quoted statement of "technology will always do its worst thing" (and neither does Justin, it sounds like.)

      • It's cool to care: "Whenever somebody asks why, I don't have a good answer. Because it's fun? Because it's moving? Because I enjoy it? I feel the need to justify it, as if there's some logical reason that will make all of this okay. But maybe I don't have to. Maybe joy doesn't need justification. […] So much of our culture tells us that it's not cool to care. It's better to be detached, dismissive, disinterested. Enthusiasm is cringe. Sincerity is weakness. I've certainly felt that pressure - the urge to play it cool, to pretend I'm above it all. To act as if I only enjoy something a 'normal' amount. Well, fuck that."

      • Take some time to play around with ChatGPT Images 2.0. It's mind-blowing. If they can accurately reconstruct screenshots like that, regardless of whether that's the "image" model part or the "thinking" model part, I think something just shifted. Also, what a sick landing page.

      • This was great: What will be scarce? The question that leads to the one in the title is this: "If advanced AI brings material abundance--if machines can produce many if not all forms of human production at very low marginal cost--does economics become irrelevant?" The whole piece explains the possible mechanisms at play and also answers the question of whether economics will become irrelevant, but even more interesting is the prediction on the future of work: "The economics of structural change tells us that when technology makes one type of production cheap, the economy doesn't collapse. It transforms. It shifts toward the things that technology can't make cheap. For AI, those things are exactly the ones where human involvement carries inherent, irreplaceable value." And that means the "durable jobs will be in the relational sector, where the human element is the product itself." Or, in other words: "You don't need to be Picasso. You need to be the person whose involvement makes the product feel like it was made for someone, by someone."

      • "A parasite that has been eating people for 3,500 years is about to be wiped off the planet. It infected 3.5 million people in 1986. Last year, it infected 10. And I have not seen it make a single front page." Believe it or not, but in seventh grade I gave a presentation in biology class on the Guinea worm. Use Google Image search if you're as brave as I was in seventh grade. Yeah, thought so.

      • This is from December last year, so the numbers are even crazier now, which makes this even more interesting: Liar's Valuation. I knew about "take last month's revenue and multiply by twelve," but the tiered investment rounds were new to me, and so was the "give heavy discount in year one, but then report year three bookings as ARR."

      • The annotated Unicode map. More of this!

      • Yes, it's Sky Sports News of all places: "Pressure is a privilege. And if you're feeling any pressure or the weight of any expectation, you are breathing rare air, that very few of us get to live inside." Good frame.

      • Or, as Josh Kushner said: "Every experience is training you for the next one… In order to become king, God didn't give David a crown, he gave him Goliath."

      • Tim Cook is stepping down as Apple's CEO. This Stratechery reflection was very interesting: Tim Cook's Impeccable Timing. For example, I had no clue that Apple in China (as in: moving its manufacturing to China) was the work of Cook. For me, Cook will always be the CEO who was at the helm when the M1 shipped, one of the most remarkable engineering achievements I've witnessed.

      • Apple's incoming CEO John Ternus in 2024 in a commencement speech: "At some point in my first year, I found myself at a supplier facility. I was far away from home, it was well past midnight. I was using a magnifying glass to count the number of grooves on the head of this screw, which, remember, lives on the back of the display. And I was arguing with the supplier because these parts had 35 grooves, they were supposed to have 25. I distinctly remember stepping back for a minute and thinking to myself, 'What the hell am I doing? Is this normal?' And I thought about it, and I realized it might not be normal, but it's right. It's right because I'd already spent months working on that product, and if you're going to spend that much time on something, you should put in your very best effort. Maybe a customer notices, maybe they don't, but either way, whenever I saw one of those displays on someone's desk, it mattered to me to know that my teammates and I had considered everything about it and done the very best job we could." There's a lot more good stuff in there. I'm excited.

      • After probably ten years of using Alfred I switched over to Raycast two years ago and one thing that I've sporadically but consistently missed was Alfred's "Large Type" feature: you type a bit of text in, hit a shortcut, and boom, the text is now as big as your display. Very helpful when you want to show someone in the room the wifi password, for example. So, this week I thought: surely there's a Raycast plugin for that? And there is, but the text isn't that large. But guess what, there's also this: large-type.com. How good is that?

      • Adam Mastroianni again with some very good writing on capital-S science: Nothing ever dies. It merely becomes embarrassing. I didn't know that ego depletion doesn't reproduce! While reading I had to think of Brandolini's Law: "The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it." (In 2015 Brandolini and I both gave a talk at a Ruby conference in Wrocław, Poland, and we chatted for half an hour at the airport and, not sure exactly why, but I'm oddly proud of that.)

      • Orson Scott Card, author of Ender's Game: "Those changes made, I sent it to Ben again. I did not remind him of what he had advised me to do. I merely told him I liked my title, and said, 'I have addressed your other concerns,' which was true. I figured he wouldn't remember what his exact words had been. My answer was a check. [...] Did Ben's feedback help? Yes -- but his specific advice was not right, and I knew it. [...] Editors don't know more than you about your story. They especially don't know why they decide to accept or reject stories. YOU have to know what your story needs to be, and take only advice that you believe in."

      • Reminded me a lot of Bill Hader on feedback: "When people give you notes on something, when they tell you it's wrong, they're usually right. When they tell you how to fix it, they're wrong."

      • exe.dev raised a Series A: "We are building a cloud that makes sense for the current and future state of software development. One that includes the features needed for fast, secure development out of the box. A cloud developers actually enjoy using. We want to revitalize the spirit of projects like early Heroku (though our technology is very different) and ship features that bring you joy." (Not to take away from this announcement, hence the parenthetical: the impact Heroku had on a certain generation of programmers working on developer tooling is hard to overstate. I bring it up a lot, and so do my teammates who are close to my age and worked with web technologies in the early 2010s.) I'm very excited to see what they'll do! I like using exe.dev a lot.

      • I also really like David's personal statement that goes along with the funding announcement: I am building a cloud.

      • Just a reminder: chat jimmy exists. Try it. You have to. Try it and then imagine what we could do if one of today's frontier models ran at even half that speed. Send me a letter if you know whether that's physically impossible.

      • New Larry David biography is coming out this year. Pretty, pretty, pretty good.

      • Elad Gil's Random thoughts while gazing at the misty AI Frontier. Lots of interesting things in there. AI researchers' distributed IPO, compute constraints, hidden layoffs, and also this bit: "It is not just the model you use, but the environment, prompting, etc you build around it that helps impact your choice. Brand also matters more than many people think. At some point, either one coding model breaks very far ahead, or they stay neck and neck."

      • Maggie Appleton: One Developer, Two Dozen Agents, Zero Alignment. I think I see the same future that Maggie sees. And we're building it at Amp.

      • That's a title worthy of a book, not a post, but the content is still fascinating: Fabric is harder than steel. As someone who's been chasing the perfect t-shirt for years and who has a very deep fascination with "tech shirts" (not company logos, but high-quality shirts made of "functional" textiles), this was very cool. I often wondered: how can car seats be this good for so long? Well, turns out it's engineering.

      • Jeff Geerling: New 10 GbE USB adapters are cooler, smaller, cheaper. I could read blog posts like this one five times every day.

      • I Found It: The Best Free Restaurant Bread in America. This was fantastic. Go read it if you have an hour and want to smile and enjoy some great writing. There are many quote-worthy sentences in there, but I'll let you read them yourself. Instead, here's a free bread anecdote. Once upon a time, I was working on a farm in Australia, along with around ten other backpackers. Handful of Germans, handful of French people, two Brits. One day we were sitting around the big table in this "shed" (actually a big house, with a shed-like quality, if you will) we were living in, chit-chatting about stuff. What do you miss the most from home? came up as a question and after someone said that they miss a proper shower and feeling clean for once many of us nodded. Yes, that'd be something. Then someone said: I really, really miss the bread. And everybody, because we've all seen and tasted what the Australians call bread, let out a big sigh and said, oh yes, the bread, I miss the bread. And precisely one second later, the room split into two factions and the Germans stared at the French and the French stared at the Germans and both factions, at the same time, said something to the effect of: wait, what the fuck, why do you miss bread, your bread fucking sucks, our bread is good bread, your bread is garbage, shut up. But, sadly, the French wouldn't see how wrong they were, thinking their long, dumb, comic book bread is any good. And I'm pretty sure that created a rift in our little community of grape pickers. Anyway, hopefully I pissed off all the Australians and French people reading this -- your bread sucks. So, go read the article and have some fun.

      Know which bread's the best? You should subscribe:

    21. 🔗 r/reverseengineering Importing GTA IV texture dictionary natively in Unreal rss
  4. April 25, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-25 rss

      IDA Plugin Updates on 2026-04-25

      New Releases:

      Activity:

    2. 🔗 r/Leeds Leeds Dungeons and Dragons - The East Ridings Group rss

      Hi everyone!

      I am one of the DMs of the East Ridings of Leeds and just trying to promote the group.

      We run a sandbox style game. There is a continent and a town and DMs in the community run games in this world and players jump in on games in this world when they want. All the DMs and players collaborate together. You are welcome to come and join! We take anyone from experienced players to brand new people. We mostly play in Chance and Counters at the moment and we try to host regular games, at least once a week if we can and we are looking for new DMs and players. So if you have been looking for a way into DnD as a beginner, want to play regular sessions to grow your character idea or simply want to jump in now and again for a laugh, we have it all! Message me and I'll invite you. We have over 30 people in our community and it is growing every day.

      We mostly communicate on Discord so I can give you an invite for that to get started and we have a wiki page with our rules and guidance. We don't charge for our sessions and the only thing you'll ever have to pay for is potentially whatever the venue wants and your own stuff.

      So get in touch!

      submitted by /u/Lit-Rature
      [link] [comments]

    3. 🔗 r/york Kickabout Community rss

      Kickabout Community | Enjoy a friendly football game to break up the week. Kickabout Community supports independent 5-a-side and 7-a-side adult football games across York. We’re a volunteer-run group of organisers, making football accessible for players of all ability, gender, age, and fitness levels. 👉 Join Kickabout Community here: https://chat.whatsapp.com/CSt29p06AGLL1E91uu5Eze 📍 Pitches used: • York Sports Village • University of York Sports Centre • PlayFootball Clifton Moor • Energise Acomb 💷 Subs: £3-4 per session (covering pitch hire, balls, and bibs) We are not a business and not profit-making. Any surplus funds are for player socials or charitable donations. submitted by /u/Chance_Board_5424
      [link] [comments]

    4. 🔗 r/Harrogate Looking for new friends 39 (F) Harrogate based , rss

      Hi.

      I’m looking for some new friends in the area for evening drinks, meals , walks xx

      submitted by /u/Firm_Guess306
      [link] [comments]

    5. 🔗 r/Leeds Has anyone seen this cat? (East Leeds area) rss

      FOUND!!!! Thanks for the help everyone!!!!

      My cat Bliss went missing last night at about 8pm. He is fully white, with blue eyes, a male, neutered and microchipped, and profoundly deaf. I have posted missing posters in the area with my telephone number on them, but there have been no clues as to where he is yet. If anyone has any information on him or has seen him, I would be eternally grateful!!!

      submitted by /u/Dazailover101
      [link] [comments]

    6. 🔗 r/Harrogate Classic cars at ASDA rss

      Has anyone seen those classic cars in the ASDA carpark? They've been sat there for weeks now, anyone have any info? Think two of them are Triumphs and one of em had a Riley badge; seems like a big risk to leave em there if they're even real!

      submitted by /u/farfrombornagain
      [link] [comments]

    7. 🔗 r/wiesbaden Can you take a dog on a leash into the Alter Friedhof, even though it's officially not allowed? Is it tolerated, or does really nobody do it? I'm expecting a visitor with a dog who will be staying nearby… rss

      Thanks for any tips

      submitted by /u/Haunting-Ad2182
      [link] [comments]

    8. 🔗 r/reverseengineering [CrackMe] PyVMP v5 : The Wall. I dare you to break it (again). rss
    9. 🔗 r/Yorkshire Out and about rss

      Out and about | a few grand days out……. submitted by /u/scottishdarkhorse
      [link] [comments]

    10. 🔗 r/LocalLLaMA "Weights are coming".Xiaomi’s MiMo V2.5 Pro has landed at 54 in the Artificial Analysis Intelligence Index. rss
    11. 🔗 r/Yorkshire Upsall rss

      Upsall | Found the perfect spot for lunch this week! I'm a field-based telecoms engineer and earlier this week I was working on expanding the fibre network in the beautiful village of Knayton, the exchange is a little shed on the hillside but the view from the rear is just stunning! submitted by /u/Trancer79
      [link] [comments]

    12. 🔗 r/reverseengineering Built a tool for reverse-engineering code line-by-line (30+ languages) with vibe code AI Instead of summarizing functions, it explains *each line in context* — useful for: rss
    13. 🔗 HexRaysSA/plugin-repository commits sync repo: +4 releases rss
      sync repo: +4 releases
      
      ## New releases
      - [DeepExtract](https://github.com/marcosd4h/deepextractida): 0.9.13
      - [augur](https://github.com/0xdea/augur): 0.9.1
      - [haruspex](https://github.com/0xdea/haruspex): 0.9.1
      - [rhabdomancer](https://github.com/0xdea/rhabdomancer): 0.9.1
      
    14. 🔗 r/york York game today rss

      Hey all, the York Dale game is on DAZN today, wondered if anyone knew what pubs would be showing it as google and social media aren’t being particularly helpful!

      submitted by /u/leo_smith08
      [link] [comments]

    15. 🔗 r/reverseengineering Claude APK reverse engineering rss
    16. 🔗 r/LocalLLaMA I'm glad we have deepseek rss

      Other companies are slowly moving away from open weights: not releasing base models, delaying open-weight releases, not releasing top models (that one I think is fair, but still). I also noticed they stopped publishing research (old Gemma and Qwen had detailed papers about the models' training and characteristics; now it's replaced by blog posts and model cards)

      Kimi (no base model for Kimi k2.5), GLM (no base model for glm 5 and 5.1), minimax (delayed open weights and problematic license for m2.7) and qwen (qwen 3.5 397B was open weight, 3.6 is not)

      Meanwhile, DeepSeek keeps publishing mind-blowing research every month, releases their base models, releases the open weights as soon as the model is officially launched, and explains model training and architecture in detail in a launch paper

      They are extremely important in the field and are the ones pushing the technology and efficiency forward

      Unfortunately they don't release small models, but we can't have everything can we?

      submitted by /u/guiopen
      [link] [comments]

    17. 🔗 Ampcode News Opus 4.7 rss

      Opus 4.7 now powers Amp's smart mode.

      In our internal evals, Opus 4.7 scored ~72%, up from Opus 4.6's ~65% - the first model since GPT 5.4 to clear 70%.

      It takes some getting used to

      Compared to Opus 4.7, Opus 4.6 was forgiving.

      You could give it a vague task and it would often infer the missing pieces, make a plan, and start working. Sometimes that was useful. But it also could lead to the model confidently solving a nearby problem instead of the one you actually had. Or rushing to the first, but not the best, solution.

      Opus 4.7 is less like that.

      It follows prompts more closely. It fills in fewer gaps. It researches more. It is less likely to silently generalize from "fix this case" to "fix every related case." If the task is underspecified, you are more likely to get a narrow answer, a pause, or a request for the missing constraint.

      At first, that can feel worse. But then you realize that a good prompt can make it go further.

      Opus 4.7 is better at harder coding work, especially tasks that span multiple files, tools, and verification steps. It is better at keeping the shape of a change in its head and carrying it through the codebase. It's better at refactoring too. Its explanations are more thorough.

      Fewer Built-in Tools

      We removed grep, glob, and mermaid from smart.

      Opus 4.7 is good enough at using the shell directly. When it needs to search, it can run rg or use the codebase search agent.

      Its ASCII diagrams are also equal to or better than what Opus 4.6 achieved with Mermaid diagrams.

      Token Usage

      Our internal assessment matches Anthropic's (see last section and graph): "token usage across all effort levels is improved." Opus 4.7 might use more tokens in some cases, but those tokens are smarter and lead to better results. And better results lead to fewer tokens wasted.

      Tunable Thinking Effort

      You can now toggle the thinking effort for smart directly from the CLI with Opt+D (Alt+D), cycling through high, xhigh, and max.

      How to Use It

      The main change is simple: tell it what success looks like.

      A few patterns have worked well for us:

      • Give it success criteria, not steps. Tell it what done means, not every move to make. Example: "Clean up the billing settings. Done means no public API changes, no database changes, pnpm test billing passes, and pnpm typecheck passes."
      • Give it a way to check itself. A model with a test, CLI, Storybook, preview URL, or screenshot diff is much better than a model guessing from code. Example: "Fix the import flow. Reproduce it with pnpm cli import ./fixtures/bad.csv. It is fixed when that command succeeds and pnpm test import passes."
      • Brainstorm, pick, implement. Use one pass to explore options, then implement the chosen approach. Example: "Compare two ways to remove this duplicate state. Recommend one. Do not edit files yet." Then: "Implement option B. Keep the API unchanged and verify with pnpm test settings."

      Update Amp to the latest version by running amp update and you're ready to go: smart mode is now powered by Opus 4.7.