šŸ”


to read (pdf)

  1. Letting AI Actively Manage Its Own Context | ę˜Žå¤©ēš„ä¹Œäŗ‘
  2. Garden Offices for Sale UK - Portable Space
  3. Cord: Coordinating Trees of AI Agents | June Kim
  4. Style tips for less experienced developers coding with AI Ā· honnibal.dev
  5. Haskell for all: Beyond agentic coding

  1. March 07, 2026
    1. šŸ”— badlogic/pi-mono v0.57.1 release

      New Features

      • Tree branch folding and segment-jump navigation in /tree, with Ctrl+←/Ctrl+→ and Alt+←/Alt+→ shortcuts while ←/→ and Page Up/Page Down remain available for paging. See docs/tree.md and docs/keybindings.md.
      • session_directory extension event for customizing session directory paths before session manager creation. See docs/extensions.md.
      • Digit keybindings (0-9) in the TUI keybinding system, including modified combos like ctrl+1. See docs/keybindings.md.

      Added

      • Added /tree branch folding and segment-jump navigation with Ctrl+←/Ctrl+→ and Alt+←/Alt+→, while keeping ←/→ and Page Up/Page Down for paging (#1724 by @Perlence)
      • Added session_directory extension event that fires before session manager creation, allowing extensions to customize the session directory path based on cwd and other factors. CLI --session-dir flag takes precedence over extension-provided paths (#1730 by @hjanuschka). A hypothetical handler sketch follows this list.
      • Added digit keys (0-9) to the keybinding system, including Kitty CSI-u and xterm modifyOtherKeys support for bindings like ctrl+1 (#1905)
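
      Since the notes don't include an inline example, here is a minimal sketch of a session_directory handler in TypeScript. The registration shape (pi.on(...)) and the event fields (cwd, defaultDir) are assumptions made for illustration, not the documented API; see docs/extensions.md for the real shape.

      // Hypothetical pi extension: route sessions into per-project folders.
      // The pi.on(...) shape and the event fields are illustrative assumptions.
      import { join } from "node:path";
      import { createHash } from "node:crypto";

      interface SessionDirectoryEvent {
        cwd: string;        // working directory pi was started in (assumed field)
        defaultDir: string; // directory pi would use by default (assumed field)
      }

      export default function extension(pi: {
        on(event: "session_directory", handler: (e: SessionDirectoryEvent) => string): void;
      }) {
        pi.on("session_directory", (e) => {
          // One folder per project: bucket sessions by a short hash of cwd.
          // Note the CLI --session-dir flag still takes precedence over this.
          const bucket = createHash("sha256").update(e.cwd).digest("hex").slice(0, 8);
          return join(e.defaultDir, bucket);
        });
      }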

      Fixed

      • Fixed custom tool collapsed/expanded rendering in HTML exports. Custom tools that define different collapsed vs expanded displays now render correctly in exported HTML, with expandable sections when both states differ and direct display when only expanded exists (#1934 by @aliou)
      • Fixed tmux startup guidance and keyboard setup warnings for modified key handling, including Ghostty shift+enter=text:\n remap guidance and tmux extended-keys-format detection (#1872)
      • Fixed z.ai context overflow recovery so model_context_window_exceeded errors trigger auto-compaction instead of surfacing as unhandled stop reason failures (#1937)
      • Fixed autocomplete selection ignoring typed text: highlight now follows the first prefix match as the user types, and exact matches are always selected on Enter (#1931 by @aliou)
      • Fixed slash-command Tab completion to immediately open argument completions when available (#1481 by @barapa)
      • Fixed explicit pi -e <path> extensions losing command and tool conflicts to discovered extensions by giving CLI-loaded extensions higher precedence (#1896)
      • Fixed Windows external editor launch for Ctrl+G and ctx.ui.editor() so shell-based commands like EDITOR="code --wait" work correctly (#1925)
    2. šŸ”— r/wiesbaden MTG Commander rss

      Hi there,

      I (M/24) am still looking for locals to play Commander with in and around Wiesbaden. So far I only know the Glitchless in Mainz. I just bought the TMNT deck and have never played Commander before :)

      I'd be grateful for any help

      submitted by /u/SF_Geto

    3. šŸ”— r/york Trip next week - vegetarian rss

      Taking my fiancƩ to York next week for his 30th birthday. We are vegetarian and he LOVES chocolate. Any recommendations of things to do or places to eat? Thank you!

      submitted by /u/Impressive_Ant_296

    4. šŸ”— r/Yorkshire Books set in North/East Yorkshire rss

      Hi all, new to the subreddit! Should’ve joined ages ago since North Yorkshire has always felt like a second home to me & my wife!

      I’m wondering if anyone has recommendations for books set in North or East Yorkshire, particularly in the dystopian or post-apocalyptic genre. I love stories that use real local places as part of the setting.

      I recently released a dystopian novel set across Hull and North/East Yorkshire myself, so I’m really interested to see if there are others doing something similar that I might have missed.

      Would love to hear any recommendations!

      Thanks in advance!

      submitted by /u/HullBusDriver2020

    5. šŸ”— r/reverseengineering Nobody ever got fired for using a struct [Rust internals] rss
    6. šŸ”— r/Yorkshire A few pictures from my walk today - Richmond Yorkshire rss
    7. šŸ”— kevinmuoz/ida-theme-explorer v1.0.2 release

      Fix duplicate hotkey and improve QMessageBox style

    8. šŸ”— r/york Medieval row of shops in York's Goodramgate damaged by lorry rss

      submitted by /u/Kagedeah

    9. šŸ”— r/Yorkshire Scarborough Sets Sights on National Stage with 2028 Town of Culture Bid rss

      Scarborough is embarking on a transformative journey as it prepares a bid to become the UK's first-ever Town of Culture in 2028, but your help is needed. The bid, which could secure a Ā£3 million prize to fund a year-long cultural programme, coincides with a separate, substantial Ā£20 million "Pride in Place" investment aimed at revitalising the town through community-led decision-making.

      The UK Town of Culture competition, launched by the Department for Culture, Media and Sport, offers a platform for towns to share their unique stories. For Scarborough, recognized as the nation's oldest seaside resort, the bid is seen as a landmark opportunity to showcase its rich theatrical and artistic heritage. Local leaders believe the title would not only increase community spirit but also encourage residents to engage more deeply with the cultural opportunities on their doorstep.

      The competition builds on the success of the City of Culture initiative. For example, Bradford, the 2025 City of Culture, saw a 25 per cent increase in city centre footfall during its spotlight year, with the majority of participants reporting an improved sense of pride and wellbeing.

      submitted by /u/coffeewalnut08

    10. šŸ”— r/wiesbaden Wiesbaden, where do you actually go out to eat? rss

      Hello,
      we're four guys and we've packed the Wiesbaden restaurant scene into an app called Vota. The concept is simple: you're shown two restaurants side by side, for example Ente vs. Das Goldstein, you pick the one you'd rather go to, and the ranking updates immediately. The more people vote, the more accurate the list becomes over time. There are still a few duplicate entries here and there, but I'm cleaning up the data continuously.

      Here's the iPhone version, with categories that fit the Wiesbaden restaurant scene:
      https://apps.apple.com/app/vota-restaurant-ratings/id6744969212

      And here's the Android version, finally live:
      https://play.google.com/store/apps/details?id=org.vota.app

      P.S. I'm not from Wiesbaden, I live in Gothenburg. I don't collect any data, I don't sell anything, and the app uses no AI-generated content. I'm posting in several subreddits because we now support multiple regions, and I'd appreciate honest feedback from people who really know the city.

      submitted by /u/TheShynola

    11. šŸ”— Probably Dance I’m Getting a Whiff of Iain Banks’ Culture rss

      The US has been acting powerful recently and it reminded me of this question: What does it feel like to fight against a powerful AI? Not for normal people, for whom there's no difference between competing against a strong human or a strong AI (you lose hard either way), but for the world's best humans. We got a sense of the answer before LLMs were a thing, when the frontier research labs were working on game RL:

      Fighting against a powerful AI feels like you're weirdly underpowered somehow. Everything the AI does just works slightly better than it should.

      If you're not a strong human player, the closest feeling is when you play a game with lots of randomness against a really strong player. It will appear as if that strong player just keeps on getting lucky somehow.

      I'm getting a similar sense for the recent US foreign interventions and wars. They all seem to work slightly better than they should. It finally clicked for me when Dario Amodei said "This technology can radically accelerate what our military can do. I've talked to admirals, I've talked to generals, I've talked to combatant commanders who say this has revolutionized what we can do."

      The things I'm referring to are the raid that captured Maduro in Venezuela (Claude was used), the current war with Iran (Claude was used), the killing of a drug boss in Mexico (unclear if AI was used but US intelligence helped Mexico).

      The commentators in the AlphaGo match with Lee Sedol didn't know what to make of most games. The AI wasn't doing anything obviously brilliant, there were lots of little fights all over the board where the outcome wasn't quite clear, but they just all worked a little better for AlphaGo than expected. So gradually Lee Sedol's position changed from "this is tough, hard to tell how this is going but at least I'm feeling good about these areas" to "hmm I'm struggling, maybe I'm a bit behind but it's not clear" to suddenly "oh I lost".

      I don't know Go, but I got a clearer sense from the StarCraft 2 matches. In some skirmishes the AI would take damage, in others the human would. But somehow it always felt like the human was in more trouble. In some fights the human clearly came out ahead but then mysteriously just one minute later the AI had a clear advantage. It was able to quickly recover and constantly put pressure on the human. It all looked very stressful, because even when you think you do well as a human, it works out a little less well than expected and whatever the AI does works a little better than expected.

      And where have we seen this pattern before? In sci-fi of course. In particular I'm thinking of Iain Banks' Culture, the ostensibly human civilization that's actually run entirely by AIs. Alien civilizations keep on wanting to pick fights with them for reasons and keep on being surprised by how hard the harmless-seeming Culture can whoop your ass if you make it mad.

      I always thought of the Culture as closest to the European Union: Seemingly harmless but if anyone ever picked a fight with them, they'd find out that the EU can get its act together very quickly and can very quickly stand up the strongest army in the world. But obviously the real EU has never come close to the Culture because nothing human ever comes close to the potential of AIs. It would be as if Russia picked a fight with Poland, gained ground for a week, feeling good, only to suddenly find all of its IT systems hacked and access to nuclear bombs revoked, bombs dropping on Moscow the next day and an army in Moscow another two days later. The Culture takes a week to get its act together and then whoops your ass so hard you don't even know what's happening.

      But now I'm getting a whiff of the power of the Culture for the first time, and it's from the US. Going into another country, kidnapping their leader and getting away with it is exactly the kind of overpowered move that the Culture would be able to pull off. Bombing cities all over Iran, knocking out the entire leadership within two days, while the air-defense systems supplied by China do absolutely nothing is another example. If this was a video game these would be strategies done by high level players, but they're not supposed to work that well.

      It would be foolish to think this is entirely due to AI. The US had a high-tech advantage for a while. Turns out the F-35 is actually good. But even a couple of years ago the US regularly messed up when it tried to operate at high precision. We saw in Iraq and Afghanistan that being overpowered doesn't work out as well in practice as it does in theory. So I think AI is the most likely candidate for the shift to "it worked better than it should have."

      So how specifically do you get to a point where everything works slightly better than it should? We saw two different approaches in Go and StarCraft 2:

      • In Go the AI was having little fights all over the map, in a way that combined to a few extra pieces at the end. It would defend a little bit here, attack a little bit there. It was able to keep the overall picture in its head, not feeling the pressure to resolve things too early. (I haven't played Go, but I know I get frustrated in strategy games if I have to deal with multiple fights in different parts of the map at once)
      • In StarCraft 2 we saw the same thing, but we also saw that the AI could have perfect micro when it counts, like playing with wounded stalkers in the frontline because it could get them out of danger just in time. Humans could also do that in theory but in practice you can't quickly click perfectly like that.

      So the two angles are "having a better high-level view" and "having better micro control."

      Another source of success for the Culture is that they're over-prepared for fighting (not for their first big war, but in later books). And this is also part of the story we hear in Iran. Normally there's just too much going on in the world and you can't possibly keep track of all of it. Famously, the US had prior intelligence on 9/11 but didn't really put the pieces together. (There's a whole Wikipedia article about it, which has phrases like "Rice listened but was unconvinced, having other priorities on which to focus.") But AI has almost no limit on what it can keep track of. You can always spin up another agent. So when something important comes up, chances are that some AI was keeping track of it and can raise an alert. You'll never miss opportunities just because you had other priorities to focus on.

      So the third angle is: Being over-prepared because you can follow up on many more things at once.

      What does all of this mean for the world? It means we're in a weird temporary phase where one country has control of a game-changing technology while others are not far behind (sadly not the EU. I'm thinking of China, especially with H200s). You get to play at a higher level, but only for a short time and only in specific ways. In a year others will have caught up, but by then you'll have new capabilities that you didn't have a year ago. If this was a game you'd saturate at some point (you just can't play StarCraft that much better than the best humans), but in real life the game keeps on changing. New pieces keep on coming into play and the old pieces become irrelevant. You can't do this for long before the humans become irrelevant to the outcomes, and then you're fully in Culture territory. I personally wouldn't mind living in the Culture, but it seems scary to rush towards it without a good plan for how we'll survive the transition.

      I don't have a good angle for working on that plan, maybe others do. For now my contribution is just to point out that we seem to be in the early stages of overpowered AI, and to make people notice what that feels like.

    12. šŸ”— badlogic/pi-mono v0.57.0 release

      New Features

      • Extensions can intercept and modify provider request payloads via before_provider_request. See docs/extensions.md#before_provider_request.
      • Extension UIs can use non-capturing overlays with explicit focus control via OverlayOptions.nonCapturing and OverlayHandle.focus() / unfocus() / isFocused(). See docs/extensions.md and ../tui/README.md.
      • RPC mode now uses strict LF-only JSONL framing for robust payload handling. See docs/rpc.md.

      Breaking Changes

      • RPC mode now uses strict LF-delimited JSONL framing. Clients must split records on \n only instead of using generic line readers such as Node readline, which also split on Unicode separators inside JSON payloads (#1911)
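
      For client authors, a minimal reader sketch under those constraints (plain Node/TypeScript, nothing here is pi-specific API), splitting strictly on \n:

      // Split RPC records on "\n" bytes only. Generic line readers such as
      // Node's readline also split on separators like U+2028/U+2029, which can
      // legally occur inside JSON string payloads and would corrupt records.
      import { Readable } from "node:stream";

      async function* jsonlRecords(stream: Readable): AsyncGenerator<unknown> {
        stream.setEncoding("utf8"); // avoid splitting multi-byte UTF-8 across chunks
        let buf = "";
        for await (const chunk of stream) {
          buf += chunk;
          let nl: number;
          while ((nl = buf.indexOf("\n")) !== -1) {
            const line = buf.slice(0, nl);
            buf = buf.slice(nl + 1);
            if (line.trim() !== "") yield JSON.parse(line);
          }
        }
        if (buf.trim() !== "") yield JSON.parse(buf); // trailing record, if any
      }

      // Usage: for await (const msg of jsonlRecords(process.stdin)) { ... }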

      Added

      • Added before_provider_request extension hook so extensions can inspect or replace provider payloads before requests are sent, with an example in examples/extensions/provider-payload.ts. A hypothetical sketch also follows this list.
      • Added non-capturing overlay focus control for extension UIs via OverlayOptions.nonCapturing and OverlayHandle.focus() / unfocus() / isFocused() (#1916 by @nicobailon)
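
      As a flavor of what the hook enables, a hypothetical handler sketch; the registration shape and payload field names below are assumptions for illustration, and the repository's real example is examples/extensions/provider-payload.ts:

      // Hypothetical before_provider_request handler: cap requested output
      // tokens on every outgoing provider payload. Field names are illustrative.
      interface ProviderRequest {
        model: string;
        max_tokens?: number;
        [key: string]: unknown;
      }

      export default function extension(pi: {
        on(event: "before_provider_request", handler: (req: ProviderRequest) => ProviderRequest): void;
      }) {
        pi.on("before_provider_request", (req) => ({
          ...req,
          // Example policy only: clamp requested output to 4096 tokens.
          max_tokens: Math.min(req.max_tokens ?? 4096, 4096),
        }));
      }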

      Changed

      • Overlay compositing in extension UIs now uses focus order so focused overlays render on top while preserving stack semantics for show/hide behavior (#1916 by @nicobailon)

      Fixed

      • Fixed RPC mode stdin/stdout framing to use strict LF-delimited JSONL instead of readline, so payloads containing U+2028 or U+2029 no longer corrupt command or event streams (#1911)
      • Fixed automatic overlay focus restoration in extension UIs to skip non-capturing overlays, and fixed overlay hide behavior to only reassign focus when the hidden overlay had focus (#1916 by @nicobailon)
      • Fixed pi config misclassifying ~/.agents/skills as project-scoped in non-git directories under $HOME, so toggling those skills no longer writes project overrides to .pi/settings.json (#1915)
    13. šŸ”— r/Yorkshire Shepley Spring rss

      submitted by /u/davew80

    14. šŸ”— r/reverseengineering Reviving a 20-year-old puzzle game Chromatron with Ghidra and AI rss
    15. šŸ”— r/Yorkshire Few pics from my walk this morning! rss
    16. šŸ”— r/LocalLLaMA turns out RL isnt the flex rss

      submitted by /u/vladlearns

    17. šŸ”— kevinmuoz/ida-theme-explorer v1.0.1 release

      Improve README link rendering on Hex-Rays plugin page.

    18. šŸ”— r/Yorkshire Is ā€œnowtā€ ever used in the double negative? rss
    19. šŸ”— r/york ā€˜I believed I was going to die’ – York man stabbed his partner repeatedly rss

      submitted by /u/the-minsterman

    20. šŸ”— r/wiesbaden Cheap parking near the city centre? rss

      Morning! I want to have a look around Wiesbaden today, but I don't know where I can park cheaply. Any suggestions? Thanks!

      submitted by /u/MKFascist

    21. šŸ”— r/Leeds Sunday treks around Leeds? rss

      Hi! Does anyone here go trekking/hiking on Sundays around Leeds, or know of any groups that organize weekend treks? I’d love to join if there’s something beginner-friendly. Thanks! 🄾

      submitted by /u/sanxsh

    22. šŸ”— HexRaysSA/plugin-repository commits sync repo: +2 plugins, +3 releases rss
      sync repo: +2 plugins, +3 releases
      
      ## New plugins
      - [IDA-Theme-Explorer](https://github.com/kevinmuoz/ida-theme-explorer) (1.0.0)
      - [edit-function-prototype](https://github.com/oxiKKK/ida-edit-function-prototype) (1.0.0)
      
      ## New releases
      - [function-string-associate](https://github.com/oxiKKK/ida-function-string-associate): 1.0.1
      
  2. March 06, 2026
    1. šŸ”— IDA Plugin Updates IDA Plugin Updates on 2026-03-06 rss

      IDA Plugin Updates on 2026-03-06


    2. šŸ”— kevinmuoz/ida-theme-explorer v1.0.0 release

      Initial release of IDA Theme Explorer.

      • Browse 100+ community themes
      • Install themes directly from GitHub
      • Manage themes from a simple UI
    3. šŸ”— r/reverseengineering Core Dump Murder Mystery Game rss
    4. šŸ”— r/LocalLLaMA Open WebUI’s New Open Terminal + ā€œNativeā€ Tool Calling + Qwen3.5 35b = Holy Sh!t!!! rss

      Let me pre-apologize for this long and rambling post but I get excited by stuff like this. I think a lot of folks here (myself included) have been largely oblivious to what Tim & company over at Open WebUI have been up to lately with their repo. I know I've been too busy trying to get all the various Qwen3.5 models to count the "R"s in Strawberry to care about much else right now. Anyway, it didn't help that there was a good solid month without even a peep out of the Open WebUI team in terms of new releases... but now I can see why they were so quiet. It's because they were cooking up some "dope sh!t" as the kids say (they still say that, right?).

      Last week, they released probably the most impressive feature update I've seen from them in the last year: a new Open WebUI project integration called Open Terminal. https://github.com/open-webui/open-terminal

      Open Terminal is basically a Dockerized (sandboxed) terminal with a live file browser / render canvas that sits on the right side of your Open WebUI interface when active. You can drag files into and out of the file browser between the host PC and the sandbox, and the AI can do whatever you want it to with the sandbox environment (install libraries, edit files, whatever). The file render canvas shows a preview of any supported file type, so you can watch it live-edit your files as the model makes tool calls.

      Open Terminal is blowing my friggin mind over here. With it enabled, my models are like super-capable of doing actual work now and can finally do a bunch of stuff without even using MCPs. I was like "ok, now you have a sandboxed headless computer at your disposal, go nuts" and it was like "cool, Ima go do some stuff and load a bunch of Python libraries and whatnot" and BAM it just started figuring things out through trial and error. It never got stuck in a loop and never got frustrated (I was using Qwen3.5 35b A3b btw). It dropped the files in the browser on the right side of the screen so I could easily download them, or, if it could render them, it did so right in the file browser.

      If your application's file type isn't supported yet for rendering a preview in the file browser, you can just Docker bind mount a host OS directory, open the shared file in its native app, and watch your computer do stuff like there's a friggin ghost controlling it. Wild! Here's the Docker command with the local bind mount for those who want to go that route:

      docker run -d --name open-terminal --restart unless-stopped -p 8000:8000 -e OPEN_TERMINAL_API_KEY=your-secret-key -v ~/open-terminal-files:/home/user ghcr.io/open-webui/open-terminal

      You also have a bash shell at your disposal under the file browser window. The only fault I've found so far is that the terminal doesn't echo the commands from tool calls in the chat, but I can overlook that minor complaint for now because the rest of this thing is so badass. This new terminal feature makes the old Open WebUI functions / tools / pipes, etc. pretty much obsolete in my opinion. They're like baby toys now. This is a pretty great first step towards giving Open WebUI users Claude Code-like functionality within Open WebUI. You can run this single-user, or, if you have an enterprise license, they are working on a multi-user setup called "Terminals". Not sure the multi-user setup is out yet, but it's cool that they are working on it.

      A couple of things to note for those who want to try this: MAKE SURE your model supports "Native" tool calling and that you have it set to "Native" in the model settings on whatever model you connect to the terminal, or you'll have a bad time with it. Stick with models that are known to be Native-tool-calling compatible. They also have a "bare metal" install option for the brave and stupid among us who just want to YOLO it and give a model free rein over our computers. The instructions for setup and integration are here: https://docs.openwebui.com/features/extensibility/open-terminal/

      I'm testing it with Qwen3.5 35b A3b right now and it is pretty flipping amazing for such a small model. One other cool feature: the default docker command sets up a persistent volume, so your terminal environment remains as you left it between chats. If it gets messed up, just kill the volume and start over with a fresh one! Watching this thing work through problems by trial and error, make successive tool calls, and try again after something doesn't go its way is just mind-boggling to me. I know it's old hat to the Claude Coders, but to me it seems like magic.

      submitted by /u/Porespellar

    5. šŸ”— r/Yorkshire Join me on a hike through a hidden pocket of beauty in West Yorkshire. From Ferrybridge to Brotherton, Fairburn, Ledsham and Ledston. Let me know your thoughts šŸ™‚ rss
    6. šŸ”— r/LocalLLaMA New OpenSource Models Available—Sarvam 30B and 105B trained from scratch by an Indian based company rss
    7. šŸ”— r/Leeds Just saw a teenager gang in balaclavas trying to steal a bike in broad daylight rss

      I was getting back home after a quick shop run near Hyde Park when I saw four teenagers, about 14 years old, in balaclavas on their bikes. I thought about crossing the road to avoid walking past them, but didn't, not wanting to assume anything bad about them. As I walked past, one of the kids tried hard to pull free a bike locked to a signpost in front of a restaurant. After a few attempts he told his friends he couldn't, and they rode off. All this was in broad daylight with people watching. I feel less safe now.

      submitted by /u/Trebiok

    8. šŸ”— r/reverseengineering Reverse-engineered the WiFi transfer protocol for HeyCyan smart glasses (BLE + USR-W630 WiFi module) — first iOS implementation rss
    9. šŸ”— r/wiesbaden Getting black-and-white film developed rss

      Does anyone know where I can get black-and-white film (Kentmere Pan 400) from an analogue film camera developed? I already know about Rossmann & Co. I'm looking for something that isn't too expensive but still produces good photos. Someone once recommended Foto Express in FFM to me; I'm looking for something comparable here. Thanks

      submitted by /u/DocterSkinny

    10. šŸ”— @binaryninja@infosec.exchange If you are at RE//verse, you can find the Binary Ninja Booth in the RE//fresh mastodon

      If you are at RE//verse, you can find the Binary Ninja Booth in the RE//fresh lounge! We will be running live demos and handing out Binja swag. Come say hey and sign our banner! Not in Orlando this week? We will be streaming at 3 PM ET live from RE//verse: https://youtube.com/live/bW-oz1UVkCM?feature=share

    11. šŸ”— facebookresearch/faiss v1.14.1 release

      [1.14.0] - 2026-03-06

      Changed

      • c8579de Try to force relative import statement (#4878)
      • 471ddad Increment to next release, v1.14.1 (#4861)
      • c90c9dc Update python to include 3.13 and 3.14 (#4859)
      • 8af77fe SIMD-optimize multi-bit RaBitQ inner product (#4850)
      • ccc934f ScalarQuantizer: split SIMD specializations into per-SIMD TUs + DD dispatch (#4839)

      Fixed

      • 5622e93 v1.14.1 Fix build-release (#4876)
      • 8431e04 Replace squared-distance IP with direct dot-product in multi-bit RaBitQ (#4877)
      • 28f79bd Fix SWIG 4.4 multi-phase init: replace import_array() with import_array1(-1) (#4846)

    12. šŸ”— r/Yorkshire Transit Options in Yorkshire Dales Park? rss

      Hi friends, I'll be in Hawes this summer and depend mostly on Google maps to get me around via public transit. I'd like to go eastward for a few days (e.g., Ripon, Whitby), but Google shows only rail options that connect through York (e.g., Hawes -> York -> Whitby). Curious if there are additional transit options that offer more direct routes westward or eastward. Appreciate it!

      submitted by /u/minpaul

    13. šŸ”— r/Yorkshire Camping at Tan Hill Inn rss

      Hi everyone,

      I was planning on camping at the Tan Hill Inn during the late May Bank Holiday weekend.

      On their website it says to just turn up on the day to reserve a camping spot, however I'm coming from Manchester so slightly worried about turning up and for whatever reason there being no spots left and I can't stay there.

      Is there anyone who's camped at this place who knows if there's a risk of no camping availability when I turn up, or am I worrying for nothing?

      Cheers!

      submitted by /u/Mountain_Dig_3688

    14. šŸ”— News Minimalist 🐢 Weight loss drugs fight multiple addictions + 12 more stories rss

      In the last 4 days ChatGPT read 122,438 top news stories. After removing previously covered events, there are 13 articles with a significance score over 5.5.

      [6.5] GLP-1 drugs may fight addiction across every major substance, according to a study of 600,000 people — theconversation.com (+30)

      A study of 600,000 people found that GLP-1 drugs significantly reduce cravings, overdoses, and deaths across multiple addictions, including opioids and alcohol, marking a potential breakthrough in addiction medicine.

      Researchers observed a 50% reduction in substance-related deaths among users already struggling with addiction. The drugs also lowered the risk of developing new dependencies on nicotine and cocaine by roughly 20%, likely by dampening dopamine signaling in the brain’s reward centers.

      While not yet approved specifically for addiction, GLP-1 medications are already widely prescribed for diabetes and obesity. Ongoing clinical trials aim to confirm these findings and address questions regarding long-term effectiveness.

      [5.8] Iran grants China exclusive passage through the Strait of Hormuz — ndtv.com (+110)

      Iran will now permit only Chinese vessels to navigate the Strait of Hormuz, rewarding Beijing's support during the regional conflict and further threatening critical global energy supply chains.

      The Islamic Revolutionary Guard Corps claims full control of the chokepoint, warning that non-Chinese ships face missile or drone strikes. This blockade impacts regional neighbors like Qatar and the UAE while disrupting twenty percent of the world’s total oil supply transit.

      Beijing previously condemned Western military actions against Iran as unacceptable. Meanwhile, the United States government maintains that military escorts may be deployed to prevent domestic inflation and protect the international flow of commerce.

      Highly covered news with significance over 5.5

      [6.6] Evo 2: An AI model for genome prediction and design across all life — nature.com (+6)

      [6.1] France expands nuclear arsenal and strengthens European defense cooperation — bostonglobe.com (+29)

      [5.9] AI blood test detects silent liver disease years before symptoms — sciencedaily.com (+3)

      [5.8] Indonesia bans social media for children under 16 — abcnews.com (+45)

      [5.7] US forces support Ecuador's fight against drug trafficking organizations — bostonglobe.com (+29)

      [5.7] China sets slowest growth target since 1991, focusing on tech and domestic demand — abcnews.com (+49)

      [5.5] New study reveals underestimated sea level rise threatens millions more people — abcnews.com (+14)

      [5.5] Lawsuit claims Google Gemini AI gave dangerous instructions leading to a man's suicide — time.com (+34)

      [5.5] New treatment is reducing seizure frequency in children by 91% — ndtv.com (+11)

      [5.8] Japan approves world's first stem cell treatment for Parkinson's and heart failure — nippon.com (+6)

      [5.8] BYD introduces new battery technology with over 600 miles of range and rapid charging — fastcompany.com (+3)

      Thanks for reading!

      — Vadim



    15. šŸ”— r/york York shot on my cheap little point and shoot film camera:) rss

      Some photos I shot a little while back in your beautiful city! submitted by /u/Organic_Repair8717

    16. šŸ”— r/Harrogate Best way to travel to London rss
    17. šŸ”— badlogic/pi-mono v0.56.3 release

      New Features

      • claude-sonnet-4-6 model available via the google-antigravity provider (#1859)
      • Custom editors can now define their own onEscape/onCtrlD handlers without being overwritten by app defaults, enabling vim-mode extensions (#1838)
      • Shift+Enter and Ctrl+Enter now work inside tmux via xterm modifyOtherKeys fallback (docs/tmux.md, #1872)
      • Auto-compaction is now resilient to persistent API errors (e.g. 529 overloaded) and no longer retriggers spuriously after compaction (#1834, #1860)

      Added

      Fixed

      • Fixed custom editors having their onEscape/onCtrlD handlers unconditionally overwritten by app-level defaults, making vim-style escape handling impossible (#1838)
      • Fixed auto-compaction retriggering on the first prompt after compaction due to stale pre-compaction assistant usage (#1860 by @joelhooks)
      • Fixed sessions never auto-compacting when hitting persistent API errors (e.g. 529 overloaded) by estimating context size from the last successful response (#1834)
      • Fixed compaction summarization requests exceeding context limits by truncating tool results to 2k chars (#1796)
      • Fixed /new leaving startup header content, including the changelog, visible after starting a fresh session (#1880)
      • Fixed misleading docs and example implying that returning { isError: true } from a tool's execute function marks the execution as failed; errors must be signaled by throwing (#1881)
      • Fixed model switches through non-reasoning models to preserve the saved default thinking level instead of persisting a capability-forced off clamp (#1864)
      • Fixed parallel pi processes failing with false "No API key found" errors due to immediate lockfile contention on auth.json and settings.json (#1871)
      • Fixed OpenAI Responses reasoning replay regression that broke multi-turn reasoning continuity (#1878)
    18. šŸ”— r/Leeds Ex Starbucks, Chapel Allerton, What Next rss

      Hello

      I see the Ex Starbucks, Chapel Allerton, is under offer. Anybody know who's moving in? Big building to fill.

      submitted by /u/renlauo

    19. šŸ”— r/york Loft conversion recommendations rss

      Hiya lovely people of York - happy Friday!

      Looking to get our mid terrace house loft converted - we got very stung by a plumber we found through checkatrade and have had problems finding roofers in the past, so the main thing stopping me is worry about getting the wrong people in!

      Anyone got recommendations? (Also rough cost, if you don't mind sharing.) We're looking to go as simple as possible, no dormer or bathroom!

      submitted by /u/AutumnDream1ng

    20. šŸ”— r/LocalLLaMA To everyone still using ollama/lm-studio... llama-swap is the real deal rss

      I just wanted to share my recent epiphany. After months of using ollama/lm-studio because they were the mainstream way to serve multiple models, I finally bit the bullet and tried llama-swap.

      And well. I'm blown away.

      Both ollama and lm-studio have the "load models on demand" feature that trapped me. But llama-swap supports this AND works with literally any underlying provider. I'm currently running llama.cpp and ik_llama.cpp, but I'm planning to add image generation support next.
      It is extremely lightweight (one executable, one config file), and yet it has a user interface that lets you test the models, check their performance, and see the logs when an inference engine starts, so it's great for debugging.

      The config file is powerful but reasonably simple. You can group models, force configuration settings, define policies, etc. I have it configured to start on boot for my user using systemctl, even on my laptop, because it is instant and takes no resources. The filtering feature is especially awesome. On my server I configured Qwen3-coder-next to force a specific temperature, and now using it on agentic tasks (tested on pi and claude-code) is a breeze.

      I was hesitant to try alternatives to ollama for serving multiple models... but boy, was I missing out!

      How I use it (on Ubuntu amd64):
      Go to https://github.com/mostlygeek/llama-swap/releases and download the pack for your system; I use linux_amd64. It has three files: readme, license and llama-swap. Put them into a folder ~/llama-swap. I put llama.cpp and ik_llama.cpp and the models I want to serve into that folder too.

      Then copy the example config from https://github.com/mostlygeek/llama-swap/blob/main/config.example.yaml to ~/llama-swap/config.yaml

      Create this file at ~/.config/systemd/user/llama-swap.service. Replace 41234 with the port you want it to listen on; -watch-config ensures that if you change the config file, llama-swap restarts automatically.

      [Unit]
      Description=Llama Swap
      After=network.target

      [Service]
      Type=simple
      ExecStart=%h/llama-swap/llama-swap -config %h/llama-swap/config.yaml -listen 127.0.0.1:41234 -watch-config
      Restart=always
      RestartSec=3

      [Install]
      WantedBy=default.target
      

      Activate the service as a user with:

      systemctl --user daemon-reexec
      systemctl --user daemon-reload
      systemctl --user enable llama-swap
      systemctl --user start llama-swap
      

      If you want it to start even without logging in (true boot start), run this once:

      loginctl enable-linger $USER
      

      You can check it works by going to http://localhost:41234/ui

      Then you can start adding your models to the config file. My file looks like:

      healthCheckTimeout: 500
      logLevel: info
      logTimeFormat: "rfc3339"
      logToStdout: "proxy"
      metricsMaxInMemory: 1000
      captureBuffer: 15
      startPort: 10001
      sendLoadingState: true
      includeAliasesInList: false

      macros:
        "latest-llama": >
          ${env.HOME}/llama-swap/llama.cpp/build/bin/llama-server
          --jinja --threads 24 --host 127.0.0.1 --parallel 1
          --fit on --fit-target 1024 --port ${PORT}
        "models-dir": "${env.HOME}/models"

      models:
        "GLM-4.5-Air":
          cmd: |
            ${env.HOME}/ik_llama.cpp/build/bin/llama-server
            --model ${models-dir}/GLM-4.5-Air-IQ3_KS-00001-of-00002.gguf
            --jinja --threads -1 --ctx-size 131072 --n-gpu-layers 99
            -fa -ctv q5_1 -ctk q5_1 -fmoe --host 127.0.0.1 --port ${PORT}
        "Qwen3-Coder-Next":
          cmd: ${latest-llama} -m ${models-dir}/Qwen3-Coder-Next-UD-Q4_K_XL.gguf --fit-ctx 262144
        "Qwen3-Coder-Next-stripped":
          cmd: ${latest-llama} -m ${models-dir}/Qwen3-Coder-Next-UD-Q4_K_XL.gguf --fit-ctx 262144
          filters:
            stripParams: "temperature, top_p, min_p, top_k"
            setParams:
              temperature: 1.0
              top_p: 0.95
              min_p: 0.01
              top_k: 40
        "Assistant-Pepe":
          cmd: ${latest-llama} -m ${models-dir}/Assistant_Pepe_8B-Q8_0.gguf
      

      I hope this is useful!

      submitted by /u/TooManyPascals

    21. šŸ”— r/reverseengineering My journey through Reverse Engineering SynthID rss
    22. šŸ”— r/Yorkshire Fountains Abbey, Ripon, Yorkshire rss

      submitted by /u/mdbeckwith

    23. šŸ”— jank blog jank is off to a great start in 2026 rss

      Hey folks! We're two months into the year and I'd like to cover all of the progress that's been made on jank so far. Before I do that, I want to say thank you to all of my GitHub sponsors, as well as Clojurists Together, for sponsoring this whole year of jank's development!

  3. March 05, 2026
    1. šŸ”— IDA Plugin Updates IDA Plugin Updates on 2026-03-05 rss

      IDA Plugin Updates on 2026-03-05

      Activity:

      • capa
        • 1173dc5f: build(deps): bump protobuf from 6.33.5 to 7.34.0 (#2891)
        • e53f6abc: ci: add black auto-format workflow (#2827) (#2883)
        • 038c46da: features: fix Regex.get_value_str() returning escaped pattern, breaki…
      • ghidra
        • a7a795b3: Merge remote-tracking branch 'origin/Ghidra_12.1'
        • 4e4674be: Merge branch 'GP-6537_ryanmkurtz_PR-1905_mduggan_phar_lap_ne_support'
        • 0351dc99: GP-6537: Certify
        • 6fa0ddbc: Support large (>2^16) offset to exe file NE header
        • f466bb00: Merge remote-tracking branch 'origin/Ghidra_12.1'
        • d374989a: Merge remote-tracking branch 'origin/GP-6536_ghidragon_null_ptr_excep…
        • 5e46aa4e: Merge remote-tracking branch 'origin/GP-0-dragonmachre-enum-test-fix'…
        • 9d55f0d8: Test fix
      • ida-edit-function-prototype
      • ida-pro-mcp
        • b160449c: Merge pull request #252 from baoan7090/baoan7090-patch-1
        • 4d613d0a: Merge pull request #262 from haosenwang1018/fix/bare-except
        • b0844afa: Merge pull request #257 from withzombies/many-to-many-session-management
        • 5aae5542: fix: prevent SIGPIPE crash and port collision with multiple IDA insta…
        • 5fb925bd: fix: auto-increment port for multiple IDA instances
        • 20f28764: fix: ignore SIGPIPE to prevent IDA crash on client disconnect
      • idamagicstrings
        • 7ed19762: Multiple refactorizations and tests added
        • b5b19a56: Update README.md
        • 317f03b7: Added ida-plugin.json to install this using hcli
        • c3869d0d: Updated IDA Magic Strings for IDA 9.X
      • idasql
        • 947ae256: Merge pull request #19 from allthingsida/decouple-idalib-from-plugin
        • 6cd73099: fix: fetch ida-cmake directly instead of using ida-sdk's bundled copy
        • e1ea32e0: fix: decouple libidalib from plugin build
      • msc-thesis-LLMs-to-rank-decompilers
      • Rikugan
        • 13c59b49: security: add prompt injection mitigation and harden approval gates
        • 80bebbb4: update readme
        • 4b244a89: update readme
        • b80a9f73: docs(webpage): sync docs with current codebase and add llms.txt
        • adfc54df: refactor: rewrite MCP client using official mcp SDK
        • 4c9f2eb8: fix readme
        • 2573664e: adds gif
        • 3f7d8725: feat: expanded test suite and misc fixes
        • 9ce5f0a0: Merge branch 'desloppify/code-health'
        • 629f7298: refactor: code health improvements (desloppify 37→81)
        • f3dbff97: Merge branch 'main' of github.com:buzzer-re/Rikugan
        • 474bbff9: adds cff example gif
        • 25072ded: feat: IL analysis/transform tools, deobfuscation skill, and fixes
        • 39f51312: docs: fix platform paths, add llms.txt, Architecture button, il_problem
      • sighthouse
      • zenyard-ida-public
        • bfb5ad98: Sync with 6aea1ad63941a5fcd215b9e5abbf96214f371227
        • 4a0d6c7b: Sync with 9dcf29a8e443ed01ff36aa4adb19e3bf7164376d
    2. šŸ”— badlogic/pi-mono v0.56.2 release

      New Features

      • GPT-5.4 support across openai, openai-codex, azure-openai-responses, and opencode, with gpt-5.4 now the default for openai and openai-codex (README.md, docs/providers.md).
      • treeFilterMode setting to choose the default /tree filter mode (default, no-tools, user-only, labeled-only, all) (docs/settings.md, #1852 by @lajarre).
      • Mistral native conversations integration with SDK-backed provider behavior, preserving Mistral-specific thinking and replay semantics (README.md, docs/providers.md, #1716).

      Added

      • Added gpt-5.4 model availability for openai, openai-codex, azure-openai-responses, and opencode providers.
      • Added gpt-5.3-codex fallback model availability for github-copilot until upstream model catalogs include it (#1853).
      • Added treeFilterMode setting to choose the default /tree filter mode (default, no-tools, user-only, labeled-only, all) (#1852 by @lajarre).
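
      As a usage sketch, the setting would presumably live in your settings file alongside other options; the exact file and placement below are an assumption based on the settings.json mentioned elsewhere in these notes:

      {
        "treeFilterMode": "no-tools"
      }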

      Changed

      • Updated the default models for the openai and openai-codex providers to gpt-5.4.

      Fixed

      • Fixed GPT-5.3 Codex follow-up turns dropping OpenAI Responses assistant phase metadata by preserving replayable signatures in session history and forwarding phase back to the Responses API (#1819).
      • Fixed OpenAI Responses replay to omit empty thinking blocks, avoiding invalid no-op reasoning items in follow-up turns.
      • Updated Mistral integration to use the native SDK-backed provider and conversations API, including coding-agent model/provider wiring and Mistral setup documentation (#1716).
      • Fixed Antigravity reliability: endpoint cascade on 403/404, added autopush sandbox fallback, removed extra fingerprint headers (#1830).
      • Fixed @mariozechner/pi-ai/oauth extension imports in published installs by resolving the subpath directly from built dist files instead of package-root wrapper shims (#1856).
      • Fixed Gemini 3 multi-turn tool use losing structured context by using skip_thought_signature_validator sentinel for unsigned function calls instead of text fallback (#1829).
      • Fixed model selector filter not accepting typed characters in VS Code 1.110+ due to missing Kitty CSI-u printable decoding in the Input component (#1857)
      • Fixed editor/footer visibility drift during terminal resize by forcing full redraws when terminal width or height changes (#1844 by @ghoulr).
      • Fixed footer width truncation for wide Unicode text (session name, model, provider) to prevent TUI crashes from rendered lines exceeding terminal width (#1833).
      • Fixed Windows write preview background artifacts by normalizing CRLF content (\r\n) to LF for display rendering in tool output previews (#1854).
    3. šŸ”— r/Leeds Does anyone know a place where I can buy this drink? rss
    4. šŸ”— r/york No Three data signal in Goodramgate rss

      Is anyone else experiencing Three mobile data dropping out in Goodramgate/Kings Sq? There's a full house of signal but no data connection, e.g. streaming music stops playing as I walk through the area. It's been happening for a few weeks now.

      submitted by /u/dawnriser

    5. šŸ”— r/york recently moved to york and looking to make new mates (23m) rss

      Moved to Foss Islands 6 months ago but I'm struggling to meet new people here. I work for the NHS doing shifts, so I struggle to attend anything regularly. I enjoy swimming, running and good old pintsšŸ»

      Any local casual social groups/sports groups? I've tried meetup.com and others but I'm struggling to find many groups - any recommendations welcome :)

      submitted by /u/Internal-Bet4689

    6. šŸ”— r/wiesbaden Best ice cream or cake for Saturday rss

      Hi :) I'll be in Wiesbaden on Saturday and the weather is supposed to be pretty good. What would be the best ice cream place in Wiesbaden? Alternatively, a cafƩ with a good cake selection, preferably a traditional one :) Thanks!!

      submitted by /u/LastCauliflower3842

    7. šŸ”— r/Leeds Anywhere that serves pints in ice cold glasses? rss

      One day of sun and I’m craving it

      submitted by /u/augustbecchio

    8. šŸ”— r/wiesbaden US Car people in Wiesbaden rss

      Hi guys,

      since the weather is great and the (early) season is on, I am looking for likeminded people with (old) US Cars - Muscle Cars, Trucks, Jeeps etc. There are some meetings in the area but after visiting some, I feel like a lot of those people are organized in clubs etc and drive like 200km just to show off their new paintjob ;) I am more into tech talks, wrenching, learning and having fun with the cars. And I am not 60 years old.

      I own a '93 Corvette myself, have some knowledge of 350 Chevy V8s and cars in general, and would love to meet new interesting people who share this hobby. I'm German but my English is fine. There are a lot of US-spec cars in Wiesbaden, so I thought I'd just write in English here. I live next to Hainerberg and was greeted by a black Challenger today, so this was my sign to give this post a go. Just hit me up and connect.

      Cheers!

      submitted by /u/randomsubi

    9. šŸ”— r/york Rowntree Park Tennis Partner rss

      Hello. I am looking for a tennis buddy to hit balls with in the evenings / sometimes during the week, with the lighter nights coming. I just joined Rowntree Park tennis yesterday, so I'd like to play there. I'm intermediate/advanced but I don't really like to play competitively tbh.

      submitted by /u/BlueSky86010

    10. šŸ”— The Pragmatic Engineer The Pulse: Cloudflare rewrites Next.js as AI rewrites commercial open source rss

      Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer. This issue is the entire The Pulse issue from the past week, which paying subscribers received seven days ago. This piece generated quite a few comments from subscribers, so I'm sharing it more broadly, especially as it raises questions about what is defensible and what is not with open source.

      If you've been forwarded this email, you can subscribe here to get issues like this in your inbox.

      Today's issue of The Pulse focuses on a single event because it's a significant one with major potential ripple effects. On Tuesday, Cloudflare shocked the dev world by announcing that they have rewritten Next.js in just one week, with a single developer who used only $1,100 in tokens:

      [Image: Cloudflare CTO Dane Knecht on X]

      There are several layers to dig into here:

      1. The Next.js ecosystem: a recap. Close to half of React devs use Next.js, and the best place to deploy Next.js is on Vercel - partly thanks to its proprietary build output.
      2. What Cloudflare did with Next.js. Replacing the build engine in Next.js with the more standard Vite one, allowing Next.js apps to be easily deployed on Cloudflare.
      3. AI brings the impossible within reach. What would take years in engineering terms was executed in one week with some tokens.
      4. " AI slop" still an issue. Contrary to Cloudflare's claims, vinext is not production-ready, and will need plenty of cleanup and auditing to make it on par with Next.js.

      1. The Next.js ecosystem: a recap

      First, some background. Next.js is the most popular fullstack React framework, and around half of all React devs use it, as per recent research such as the 2025 Stack Overflow developer survey. Next.js is an open source project, built and mostly maintained by Vercel, which is also the preferred deployment target for Next.js applications, for many reasons. One of them is that Next.js applications are built with Vercel's Turbopack build tool, whose build output is a proprietary format. As Netlify engineer Eduardo BouƧas writes:

      "The output of a Next.js build has a proprietary and undocumented format that is used in Vercel deployments to provision the infrastructure needed to power the application.

      This means that any hosting providers other than Vercel must build on top of undocumented APIs that can introduce unannounced breaking changes in minor or patch releases. (And they have)".

      Next.js is an interestingly built project: everything is open source, yet the best place to deploy a Next.js application is Vercel, as it's optimized to run the undocumented build artifacts most efficiently. This is a smart strategy from Vercel that competitors will dislike, as any hosting provider would prefer Next.js to produce a standard build format. For that to happen, the build engine, Turbopack, would need to be replaced with something more standard.

      Let's talk about build tools for web development. According to the State of JS 2025 survey, the most popular in the web ecosystem are:

      1. Vite: the most popular choice for new projects due to its speed and developer experience. Uses projects like esbuild and Rollup under the hood
      2. Webpack: a legacy tool that's not very performant, but still widely deployed in older projects
      3. Turbopack: Created by Vercel and optimized for larger Next.js applications. Built in Rust and intended to be more performant
      4. Bun: a relatively new, all-in-one runtime and bundler. Anthropic acquired the team in December, and some Bun folks are now focused on improving Claude Code's performance.

      So, most of the web ecosystem uses Vite as a build tool; Next.js uses Turbopack; and the majority of React applications built with a full-stack framework use Next.js. In other words, were it not for Turbopack, most devs using Next.js would likely be using Vite as their build tool.

      2. What Cloudflare did with Next.js

      Here's a naive idea: what if Next.js used Vite to generate build outputs? In that case, build outputs would be standardized and would run equally well on any cloud provider, as there would be nothing proprietary or undocumented to Vercel.

      And this is what Cloudflare did: replace Turbopack with Vite and call the new package 'vinext':

      [Image: Cloudflare replaced the Turbopack build dependency with Vite to create vinext]

      Buried midway in the announcement is the fact that this project is experimental and not at all guaranteed to work okay: it's a 'use-at-your-own-risk' project. Still, the mere fact of this development feels like an earthquake in the tech world, because of how it was pulled off.

      3. AI brings the impossible within reach

      In a blog post announcing the project, Cloudflare claims only one engineer "rebuilt" the whole thing in a way that's trivial to deploy to Cloudflare's own infrastructure, and only cost $1,100 in tokens. From Cloudflare's statement:

      "Last week, one engineer and an AI model rebuilt the most popular front-end framework from scratch. The result, vinext (pronounced "vee-next"), is a drop-in replacement for Next.js, built on Vite, that deploys to Cloudflare Workers with a single command. In early benchmarks, it builds production apps up to 4x faster and produces client bundles up to 57% smaller. And we already have customers running it in production.

      The whole thing cost about $1,100 in tokens".

      What Cloudflare did:

      • Took the Next.js public API
      • Reimplemented behaviour using Vite
      • Created build output whose behaviour matches the "original" Next.js implementation

      After 10 years, the core of Next has around 194,000 lines of code (LOC)**. Meanwhile, vinext is about 67,000 lines of code, which suggests a much leaner implementation: for example, vinext does not need to support legacy Next APIs, and vinext currently supports 94% of the Next.js API (and it's safe to assume they left complex edge cases in the remaining 6%).

      ** The Next.js repository is closer to 2M lines of code: 1M is bundled dependencies (e.g. React bundles, CSS build, etc.), tests are 308,000 LOC, and Turbopack is 311,000 LOC.

      Pre-AI, this reimplementation would have taken years of engineering time to complete. Doing what Cloudflare did was always possible _in theory_, but never seemed practical. I mean, why have a team of engineers spend potentially years on generating a standardized build output for Next.js apps? Even if they did, the dev community would have doubts about whether Cloudflare would maintain the project.

      This is the thing with forking or rewriting open source projects: a major value proposition for commercial open source is to know that they will be maintained. Vercel has proved it's a reliable custodian of Next.js for the past 10 years. Without AI, it could be assumed that any new reimplementation would eventually run out of steam.

      Separately but relatedly, Cloudflare has now proved that the cost of rewriting existing software has become ~100x cheaper thanks to AI, and this economy is likely to apply to maintenance, too. Considering how trivial it was to rebuild one of the more complex open source projects, this augurs well for maintenance being trivial and much cheaper in the future. Potentially, Cloudflare no longer needs to budget an engineering team just for maintenance, if a single engineer can maintain the project part-time!

      Cloudflare had a project measured in engineering years, and completed it in one engineering week! It just took a single engineer using OpenCode (an open source coding agent), Opus 4.5, and a bunch of tokens, and then: 'boom', vinext was born.

      4. "AI slop" still an issue

      There are questions about the quality of vinext, though. Vercel, naturally, is unhappy, and hit out at the obvious weakness that vinext is unfit for production usage because it's insecure. Vercel CEO Guillermo Rauch did not miss a beat, tying Cloudflare's effort to the "vibe coding" stereotype of sloppy work executed with a lack of understanding:

      Guillermo Rauch on X

      Guillermo has a point: anyone who stopped reading Cloudflare's launch announcement after the first few sentences would assume it's production-ready, with the announcement's first paragraph closing with:

      "And we already have customers running it in production."

      However, Cloudflare doesn't share the rather crucial detail that "running in production" means that vinext has been deployed onto a beta site, until more than 1,000 words (around 2-3 pages) into the announcement:

      "We want to be clear: vinext is experimental. It's not even one week old, and it has not yet been battle-tested with any meaningful traffic at scale. (...)

      We've been working with National Design Studio, a team that's aiming to modernize every government interface, on one of their beta sites, CIO.gov.

      Oh. So, "customers running it in production" at Cloudflare apparently means "customer running a beta site in production without meaningful traffic." This is a first from the infrastructure giant, which usually prides itself on accurate statements!

      This detail was also absent when Cloudflare's CEO and CTO were boosting vinext like it was a mature, battle-tested product. In that context, Vercel's raising of the issue of security vulnerabilities is more than fair game, in my view.

      Still, all that doesn't alter the core learning from this project: that AI has the power to drastically reduce engineering time by up to ~100x and deliver usable-enough output, for relatively negligible financial cost. Just keep in mind that security and reliability issues will probably take plenty of extra time and effort to address.

      5. New attack vector on commercial open source?

      If arch-rivalries exist in tech, then Cloudflare and Vercel are a prime example. Both are gunning to become the most popular platform for developers to deploy their code, and the CEOs are regularly seen in public taking shots at the other side. One such spat happened in March, as covered at the time:

      "Things kicked off on social media, with developers confused about the severity of the incident, and about why Next.js seemed silent, and also why Cloudflare sites were breaking due to its fix for the CVE causing its own issues. It was at that point that Cloudflare's CEO, Matthew Prince, entered the chat to accuse Vercel of not caring about security:

      Given the security incident was ongoing, this felt a bit "below the belt" by the Cloudflare chief. Criticizing rivals is fair game, but why not wait until the incident is over? The punch landed, and Vercel's CEO Guillermo Rauch is not someone to take it lying down, so he hit back.

      Cloudflare's CEO then responded with a cartoon implying that although Vercel is much larger than its competitor Netlify, Cloudflare is 100x bigger than both, and could stomp them into the ground at will."

      Serving the public interest wasn't why Cloudflare rewrote Next.js: they did it because they want Next.js sites to be deployed onto Cloudflare, but doing so made little sense until now because Next.js produced bespoke build output optimized for Vercel's infrastructure. With this change, Cloudflare claims it provides _superior_ performance when hosting Next.js apps, according to their own measurements.

      I'd just add that performance is important for developers, but other things matter, too. Cost, reliability, developer experience, and how much devs like a company are all factors in choosing between vendors. Also, performance measurements from a vendor about its own service must be taken with a large pinch of salt.

      Zooming out from this episode, it seems that AI is bringing the value of existing commercial open source moats into question. Vercel carved out a clever open source strategy that helped turn its open source investment into business revenue:

      1. Build and maintain Next.js, delivering the best developer experience (DX).
      2. Optimize Vercel to serve the specific (and undocumented) build output of Next.js.
      3. Most developers onboarding to Next.js will decide to deploy on Vercel to get the most benefit, in terms of DX and performance.
      4. … repeat for years while the business becomes worth billions! (Vercel was valued at $9B last October).

      Underpinning this success are some assumptions:

      1. Next.js will remain the #1 choice for developers to build React applications, thanks to ongoing investment.
      2. It is expensive to rewrite Next.js to be deployable and performant on another cloud vendor.
      3. Even if someone did #2, developers would be skeptical and not switch over.

      Vercel can invest in #1 to keep Next as best-in-class, while knowing that the risk of #2 occurring is minor. However, Cloudflare has now "cloned" Next, and can easily keep up with all future changes by porting them into vinext.

      But AI makes it trivial to "piggyback" off any commercial open source project, which is a massive problem for commercial open source startups. Vercel puts all the effort and investment into building and maintaining Next.js, while Cloudflare enjoys the benefit of this hard work (the Next.js public API), which is now easily deployable to Cloudflare, and can undercut Vercel on price. For all future Next.js changes, Cloudflare will just sync them to vinext, using AI!

      WordPress had a similar problem, with WP Engine "piggybacking" off its work and undercutting Automattic's pricing in 2024. As I analyzed at the time:

      "Free-riding on permissive open source is too tempting to pass on for other vendors. WP Engine uses a common loophole of contributing almost nothing in R&D to WordPress, while selling it as a managed service. This means that they could either easily undercut the pricing of larger players like Automattic which do spend on WordPress's R&D. Alternatively, a company like WP Engine could charge as much, or more, as Automattic, but be able to spend a lot more on marketing, while being similarly profitable. "Saving" on R&D gives the "free-riders" plenty of options to grow their businesses: options not necessarily open to Automattic while they invest as much into R&D as they do.

      Commercial open source vendors face pressure to end "freeriding". Automattic is likely facing lower revenue growth, with customers choosing vendors like WP Engine which offer a similar service -- getting these customers either via a cheaper price or thanks to more marketing spend. This legal fight could be an effort to force WP Engine to stop eating Automattic's lunch, or perhaps get WP Engine to sell to Automattic, which would cement its leading status in managed WordPress, while also boosting revenue by $400M a year - according to its own figures."

      Vercel managed to avoid the "free-riding" problem with Next.js, but that's no longer possible now that AI makes it trivial to rewrite.

      6. Defense or offense?

      How should commercial open source companies respond to the threat that a competitor can easily rewrite the software behind the managed solutions which they sell as services?

      One obvious response is to make tests private, so that replication is harder for AI. One thing that made it so easy for Cloudflare to rewrite Next was the project's comprehensive test suite. From their announcement (emphasis mine):

      "We also want to acknowledge the Next.js team. They've spent years building a framework that raised the bar for what React development could look like. The fact that their API surface is so well-documented and their test suite so comprehensive is a big part of what made this project possible."

      Database solution SQLite is famous for its incredible test suite. What some people don't know is that while the core SQLite tests are open source, its most comprehensive test suite - TH3 - is closed source. SQLite monetizes this advanced testing infrastructure by offering it for purchase. This is a fair tradeoff: for most contributors, the basic open source tests work well enough. For enterprise users or customers who really care about correctness, it makes sense to purchase advanced testing services from SQLite's creators.

      Open source canvas project tldraw announced it would relocate its test suite to a closed source repository, a move which makes plenty of sense. Here's commentary from Simon Willison:

      "It's become very apparent over the past few months that a comprehensive test suite is enough to build a completely fresh implementation of any open source library from scratch, potentially in a different language."

      In the event, tldraw's announcement turned out to be a joke, but who's laughing now? An open source project with excellent tests is an easy target for an AI agent to fully rewrite.

      Could new licenses be created for the AI era? Existing open source licenses were created on the assumption that humans read open source code, and humans modify it. Agents break that assumption.

      Could we see new license types emerge to ban AI agents from modifying projects' source code? It seems pretty far-fetched and hard to implement, but not beyond the realms of possibility.

      AI agents are still very new, and going mainstream in tech. Once they break into other industries, I wouldn't be surprised if legal frameworks are reworded to also apply to AI agents. If and when this happens, it would open the path for open source licenses to distinguish between agents and humans.

      What is a moat, if code can be trivially ported? A team operating a popular open source project can no longer assume that forking or completely rewriting it is expensive, meaning it makes sense to focus on other moats, such as:

      • Outstanding (paid) support. AI could make this much easier at a higher quality, if done right.
      • Smaller open core, larger closed source part. "Open core" as a business model has been dominant for commercial open source: keep the core of the software open source, while advanced enterprise features are source available or closed source. I would expect more companies to move their additional services to closed source, not source available.
      • In-person connection and community. Projects with a real-world community will form a sense of connection that goes beyond code. For example, it's hard to imagine vinext meetups popping up - whereas there are many Next.js communities.
      • Infrastructure and hardware remain a massive moat. In a world where software is trivial to copy, infrastructure remains a moat. Commercial open source might make most sense for players that own and operate superior infrastructure layers to their rivals', and can offer lower cost, higher reliability, lower latency, higher performance, or a combination of these.

      7. AI-world reality

      One of the best AI use cases is full-on rewrites of well-tested products. I estimate that AI sped up the creation of vinext by at least 100x, which is massive. But we don't really see efficiency boosts of anything like that with AI tools in general. As Laura Tacho shared at The Pragmatic Summit in San Francisco, the average self-reported efficiency 'AI gain' seems to be circa 10%.

      I suspect this vast chasm in efficiency boosts exists because AI is many times more efficient at "no-brainer tasks" where correctness can be verified with tests, versus those which are more open-ended or involve more creativity.

      In general, tests are incredibly important for efficient AI usage. On The Pragmatic Engineer Podcast, Peter Steinberger stressed the importance of "closing the loop" in his developer flow: instructing the AI to test itself, and ensuring the AI has tests to run that verify correctness.

      Automated tests were always considered a best practice for creating maintainable code. Now, having a codebase with extensive tests is the baseline to make AI agents work productively for refactors, rewrites - or even adding new features and verifying that things did not break!
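
      Here's a minimal sketch of what that loop can look like mechanically, assuming pytest as the verifier; ask_agent is a hypothetical placeholder for whichever coding agent is being driven, not a real API:

      import subprocess

      def ask_agent(prompt: str) -> None:
          """Hypothetical placeholder: hand the prompt to your
          coding agent of choice (CLI or API)."""
          raise NotImplementedError

      def closed_loop(task: str, max_rounds: int = 5) -> bool:
          """Alternate between letting the agent edit the code and
          running the test suite, feeding failures back as context."""
          feedback = ""
          for _ in range(max_rounds):
              ask_agent(task + feedback)
              result = subprocess.run(["pytest", "-q"],
                                      capture_output=True, text=True)
              if result.returncode == 0:
                  return True  # tests green: the loop is closed
              feedback = "\n\nTests failed, fix this:\n" + result.stdout
          return False

      The crucial part is the feedback edge: failures flow back into the next prompt, so the agent verifies its own output instead of relying on a human to notice breakage.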

      Vendors will start to deploy "migration AI agents" to move customers over to their own stacks. This got lost in Cloudflare's announcement, but it's important:

      vinext includes an Agent Skill that handles migration for you. It works with Claude Code, OpenCode, Cursor, Codex, and dozens of other AI coding tools. Install it, open your Next.js project, and tell the AI to migrate:

      > npx skills add cloudflare/vinext

      Then open your Next.js project in any supported tool and say:

      > migrate this project to vinext

      The skill handles compatibility checking, dependency installation, config generation, and dev server startup. It knows what vinext supports and will flag anything that needs manual attention.

      This is very clever from Cloudflare, and a true "AI-native" move. They have not only used AI to migrate Next.js, but also built an "AI plugin" (a skill) to help customers migrate their existing codebases over to vinext - and deploy on Cloudflare!

      This move will surely be copied by other vendors, since migrations which are tedious for humans are much less effort with agents.

      AI is making the tech industry more ruthless when it comes to business practices. Laura Tacho said something interesting at The Pragmatic Summit:

      "AI is an accelerator, it's a multiplier, and it is moving organizations in different directions."

      AI seems to be accelerating the ruthlessness of competition for customers and the speed at which this happens. In one week, Cloudflare rebuilt Next.js, and it's attacking Vercel full-on: claiming their "vibe coded" alternative is more performant and production-ready, and burying at the foot of the launch announcement the crucial information that vinext is very much experimental.

      I sense vendors are realizing that there's a limited amount of time in which to use AI to their advantage, and some will decide to use it like Cloudflare has.

      On the other hand, AI could be great news for non-commercial open source. AI presents a threat to commercial open source because it removes existing moats which made code hard to fully rewrite. However, beyond that, AI could help non-commercial open source to thrive:

      • With AI, it's easy to fork an open source project and keep the fork in-sync with the original.
      • It's trivial to instruct AI to rewrite an open source project to another language or framework.
      • …and it's equally trivial for AI to add features to a fork.

      For these reasons, I believe there could be a lot more forks and rewrites to come, and more open source projects and code, in general.

      Takeaways

      Personally, I could not have imagined things changing this quickly in software. Rewriting Next.js in a single week, even to a version that is not quite there - but mostly works? This was out of the question as recently as a few months ago.

      Things changed around last December, when Opus 4.5 and GPT-5.2 came out and proved capable of writing most of the code. What used to be expensive is now cheap - like rewriting complete projects - and we still need to learn what the "new" expensive parts of software engineering are.

      All this is new territory for everyone. To succeed in the tech industry, you need to be able to capitalize upon change, as Cloudflare has clearly done in this case by making the most of an opportunity created by new technology. It's unclear how popular vinext will become, and how much of a moat Vercel has around the broader Next.js ecosystem, but I suspect that it'd take more than a Next rewrite to make Cloudflare into a viable Next.js platform-as-a-service provider.

    12. šŸ”— r/Leeds Lunch in Leeds - Best value for money? rss

      Thank you.

      submitted by /u/Bright_Fill_4770
      [link] [comments]

    13. šŸ”— r/Yorkshire Sunrise over Langsett Res nr Barnsley this Morning rss

      Didn't see one person the whole walk round. Spring is on its way.

      submitted by /u/Del_213
      [link] [comments]

    14. šŸ”— r/Leeds Leeds is betting big on new bike lanes. Will people use them? rss
    15. šŸ”— r/Leeds Metal fans in Leeds rss

      So I (31/m) am considering reviving an idea I had about a year ago for a meetup-style group for metal fans in Leeds. I love black metal personally but don't really know anyone locally with similar music tastes. The idea is for gig meets and just general hangouts, every 3/4 weeks, give or take.

      I'm aware of the Leeds rock + metal fans meetup group although that seems dead, I joined their WhatsApp and nothing but silence. If there is anything else similar already existing I'd be keen to find out about it. I don't plan on using the meetup platform as I am limited financially, and they charge subscription fees so if anyone has advice on alternative platforms I'd be very interested.

      So, who's interested? Open to all fans of heavy music, 25+ preferred only as I'd feel awkward if it's just students or a generally younger crowd.

      I'll create something and update this post, depending on feedback.

      EDIT: I've made a WhatsApp group and will try to arrange something for next week, probably. I'll DM everyone who's commented so far; the link is available, DM me for it. Don't want to post it, to avoid it being flooded with bots.

      submitted by /u/GhengisChasm
      [link] [comments]

    16. šŸ”— Simon Willison Can coding agents relicense open source through a ā€œclean roomā€ implementation of code? rss

      Over the past few months it's become clear that coding agents are extraordinarily good at building a weird version of a "clean room" implementation of code.

      The most famous version of this pattern is when Compaq created a clean-room clone of the IBM BIOS back in 1982. They had one team of engineers reverse engineer the BIOS to create a specification, then handed that specification to another team to build a new ground-up version.

      This process used to take multiple teams of engineers weeks or months to complete. Coding agents can do a version of this in hours - I experimented with a variant of this pattern against JustHTML back in December.

      There are a lot of open questions about this, both ethically and legally. These appear to be coming to a head in the venerable chardet Python library.

      chardet was created by Mark Pilgrim back in 2006 and released under the LGPL. Mark retired from public internet life in 2011 and chardet's maintenance was taken over by others, most notably Dan Blanchard who has been responsible for every release since 1.1 in July 2012.

      Two days ago Dan released chardet 7.0.0 with the following note in the release notes:

      Ground-up, MIT-licensed rewrite of chardet. Same package name, same public API — drop-in replacement for chardet 5.x/6.x. Just way faster and more accurate!

      Yesterday Mark Pilgrim opened #327: No right to relicense this project:

      [...] First off, I would like to thank the current maintainers and everyone who has contributed to and improved this project over the years. Truly a Free Software success story.

      However, it has been brought to my attention that, in the release 7.0.0, the maintainers claim to have the right to "relicense" the project. They have no such right; doing so is an explicit violation of the LGPL. Licensed code, when modified, must be released under the same LGPL license. Their claim that it is a "complete rewrite" is irrelevant, since they had ample exposure to the originally licensed code (i.e. this is not a "clean room" implementation). Adding a fancy code generator into the mix does not somehow grant them any additional rights.

      Dan's lengthy reply included:

      You're right that I have had extensive exposure to the original codebase: I've been maintaining it for over a decade. A traditional clean-room approach involves a strict separation between people with knowledge of the original and people writing the new implementation, and that separation did not exist here.

      However, the purpose of clean-room methodology is to ensure the resulting code is not a derivative work of the original. It is a means to an end, not the end itself. In this case, I can demonstrate that the end result is the same — the new code is structurally independent of the old code — through direct measurement rather than process guarantees alone.

      Dan goes on to present results from the JPlag tool - which describes itself as "State-of-the-Art Source Code Plagiarism & Collusion Detection" - showing that the new 7.0.0 release has a max similarity of 1.29% with the previous release and 0.64% with the 1.1 version. Other release versions had similarities more in the 80-93% range.
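
      As a crude stand-in for what such a measurement does (JPlag compares token structure; this toy uses Python's standard difflib on raw text, and the file paths are made up for illustration):

      from difflib import SequenceMatcher
      from pathlib import Path

      def similarity(old_path: str, new_path: str) -> float:
          """Character-level similarity between two source files:
          0.0 means unrelated, 1.0 means identical."""
          old = Path(old_path).read_text()
          new = Path(new_path).read_text()
          return SequenceMatcher(None, old, new).ratio()

      # Hypothetical paths, for illustration only:
      # print(similarity("chardet-6.0/universaldetector.py",
      #                  "chardet-7.0/universaldetector.py"))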

      He then shares critical details about his process, highlights mine:

      For full transparency, here's how the rewrite was conducted. I used the superpowers brainstorming skill to create a design document specifying the architecture and approach I wanted based on the following requirements I had for the rewrite [...]

      I then started in an empty repository with no access to the old source tree, and explicitly instructed Claude not to base anything on LGPL/GPL-licensed code. I then reviewed, tested, and iterated on every piece of the result using Claude. [...]

      I understand this is a new and uncomfortable area, and that using AI tools in the rewrite of a long-standing open source project raises legitimate questions. But the evidence here is clear: 7.0 is an independent work, not a derivative of the LGPL-licensed codebase. The MIT license applies to it legitimately.

      Since the rewrite was conducted using Claude Code there are a whole lot of interesting artifacts available in the repo. 2026-02-25-chardet-rewrite-plan.md is particularly detailed, stepping through each stage of the rewrite process in turn - starting with the tests, then fleshing out the planned replacement code.

      There are several twists that make this case particularly hard to confidently resolve:

      • Dan has been immersed in chardet for over a decade, and has clearly been strongly influenced by the original codebase.
      • There is one example where Claude Code referenced parts of the codebase while it worked, as shown in the plan - it looked at metadata/charsets.py, a file that lists charsets and their properties expressed as a dictionary of dataclasses.
      • More complicated: Claude itself was very likely trained on chardet as part of its enormous quantity of training data - though we have no way of confirming this for sure. Can a model trained on a codebase produce a morally or legally defensible clean-room implementation?
      • As discussed in this issue from 2014 (where Dan first openly contemplated a license change) Mark Pilgrim's original code was a manual port from C to Python of Mozilla's MPL-licensed character detection library.
      • How significant is the fact that the new release of chardet used the same PyPI package name as the old one? Would a fresh release under a new name have been more defensible?

      I have no idea how this one is going to play out. I'm personally leaning towards the idea that the rewrite is legitimate, but the arguments on both sides of this are entirely credible.

      I see this as a microcosm of the larger question around coding agents for fresh implementations of existing, mature code. This question is hitting the open source world first, but I expect it will soon start showing up in Compaq-like scenarios in the commercial world.

      Once commercial companies see that their closely held IP is under threat I expect we'll see some well-funded litigation.

      Update 6th March 2026: A detail that's worth emphasizing is that Dan does not claim that the new implementation is a pure "clean room" rewrite. Quoting his comment again:

      A traditional clean-room approach involves a strict separation between people with knowledge of the original and people writing the new implementation, and that separation did not exist here.

      I can't find it now, but I saw a comment somewhere that pointed out the absurdity of Dan being blocked from working on a new implementation of character detection as a result of the volunteer effort he put into helping to maintain an existing open source library in that domain.

      I enjoyed Armin's take on this situation in AI And The Ship of Theseus, in particular:

      There are huge consequences to this. When the cost of generating code goes down that much, and we can re-implement it from test suites alone, what does that mean for the future of software? Will we see a lot of software re-emerging under more permissive licenses? Will we see a lot of proprietary software re-emerging as open source? Will we see a lot of software re-emerging as proprietary?


    17. šŸ”— r/Harrogate Channel 4's 'The Dog House' is Looking for Loving Homes in Harrogate rss

      Hi everyone! 😊 I'm part of the team behind Channel 4's The Dog House and I'm wondering whether this might be of interest to anyone here? We're looking for dog lovers in Harrogate who could offer a loving home and a fresh new start to a rescue dog in need for our next series, filming in spring. Filmed in partnership with Woodgreen Pets Charity, the series shines a light on how life-changing the bond between humans and dogs can be for both sides. If you're interested to apply you can do so at: https://c4thedoghousetakepart.co.uk/ I've also included our flyer, in case anyone would like to share it with others they might know.

      submitted by /u/Fallevo
      [link] [comments]

    18. šŸ”— r/wiesbaden Car paint shop rss

      Hello dear Wiesbadeners,

      can anyone recommend a car paint shop here in the area?

      submitted by /u/nikitsolo
      [link] [comments]

    19. šŸ”— r/LocalLLaMA Ran Qwen 3.5 9B on M1 Pro (16GB) as an actual agent, not just a chat demo. Honest results. rss

      Quick context: I run a personal automation system built on Claude Code. It's model-agnostic, so switching to Ollama was a one-line config change, nothing else needed to change. I pointed it at Qwen 3.5 9B and ran real tasks from my actual queue.

      Hardware: M1 Pro MacBook, 16 GB unified memory. Not a Mac Studio, just a regular laptop.

      Setup:

      brew install ollama
      ollama pull qwen3.5:9b
      ollama run qwen3.5:9b

      Ollama exposes an OpenAI-compatible API at localhost:11434. Anything targeting the OpenAI format just points there. No code changes.

      What actually happened:

      Memory recall: worked well. My agent reads structured memory files and surfaces relevant context. Qwen handled this correctly. For "read this file, find the relevant part, report it" type tasks, 9B is genuinely fine.

      Tool calling: reasonable on straightforward requests. It invoked the right tools most of the time on simple agentic tasks. This matters more than text quality when you're running automation.

      Creative and complex reasoning: noticeable gap. Not a surprise. The point isn't comparing it to Opus. It's whether it can handle a real subset of agent work without touching a cloud API. It can. The slowness was within acceptable range. Aware of it, not punished by it.

      Bonus: iPhone. Ran Qwen 0.8B and 2B on iPhone 17 Pro via PocketPal AI (free, open source, on the App Store). Download the model once over Wi-Fi, then enable airplane mode. It still responds. Nothing left the device. The tiny models have obvious limits. But the fact that this is even possible on hardware you already own in 2026 feels like a threshold has been crossed.

      The actual framing: this isn't "local AI competes with Claude." It's "not every agent task needs a frontier model." A lot of what agent systems do is genuinely simple: read a file, format output, summarize a short note, route a request. That runs locally without paying per token or sending anything anywhere. The privacy angle is also real if you're building on personal data.

      I'm curious what hardware others are running 9B models on, and whether anyone has integrated them into actual agent pipelines vs. just using them for chat. Full write-up with more detail on the specific tasks and the cost routing angle: https://thoughts.jock.pl/p/local-llm-macbook-iphone-qwen-experiment

      submitted by /u/Joozio
      [link] [comments]
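
      To make "anything targeting the OpenAI format just points there" concrete, here is a minimal sketch against Ollama's OpenAI-compatible endpoint, assuming the qwen3.5:9b model pulled above (the prompt is illustrative):

      import json
      import urllib.request

      # Ollama serves an OpenAI-compatible API on localhost:11434,
      # so any client speaking the OpenAI chat format can target it.
      req = urllib.request.Request(
          "http://localhost:11434/v1/chat/completions",
          data=json.dumps({
              "model": "qwen3.5:9b",
              "messages": [{"role": "user",
                            "content": "Summarize this note: ..."}],
          }).encode(),
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          reply = json.load(resp)
      print(reply["choices"][0]["message"]["content"])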

    20. šŸ”— r/LocalLLaMA Final Qwen3.5 Unsloth GGUF Update! rss

      Hey r/LocalLLaMA, this week we worked on further improving the best size/KLD tradeoff for Qwen3.5, and we're excited to share new GGUF benchmarks for Qwen3.5-122B-A10B and Qwen3.5-35B-A3B (99.9% KL divergence). This will likely be our final GGUF update. We're also deeply saddened by the news around the Qwen team, and incredibly grateful for everything they've done for the open source community! For a lot of model releases, they had to stay up all night and not sleep.

      • All GGUFs now use our new imatrix calibration dataset so you might see small improvements in chat, coding, long context, and tool-calling use-cases. We are always manually improving this dataset and it will change often.
      • This is a follow up to https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/
      • We further enhanced our quantization method for Qwen3.5 MoEs to reduce Maximum KLD directly. 99.9% KLD is what is generally used, but for massive outliers, Maximum KLD can be useful. Our new method generally pushes the Maximum KLD down quite a lot vs the pre-March 5th update. UD-Q4_K_XL is 8% bigger, but reduces maximum KLD by 51%!

      | Quant | Old GB | New GB | Max KLD Old | Max KLD New |
      |---|---|---|---|---|
      | UD-Q2_K_XL | 12.0 | 11.3 (-6%) | 8.237 | 8.155 (-1%) |
      | UD-Q3_K_XL | 16.1 | 15.5 (-4%) | 5.505 | 5.146 (-6.5%) |
      | UD-Q4_K_XL | 19.2 | 20.7 (+8%) | 5.894 | 2.877 (-51%) |
      | UD-Q5_K_XL | 23.2 | 24.6 (+6%) | 5.536 | 3.210 (-42%) |

      • Re-download Qwen3.5-35B-A3B, 27B, and 122B-A10B as they're now all updated. Re-download 397B-A17B after today's update (still uploading!)
      • Qwen3.5-27B and 122B-A10B include the earlier chat template fixes for better tool-calling/coding output. 397B-A17B will also be updated today to include this.
      • LM Studio now supports toggling ā€œthinkingā€ for our GGUFs. Read our guide or run lms get unsloth/qwen3.5-4b. This process will be easier very soon.
      • Benchmarks were conducted using the latest versions for every GGUF provider.
      • Replaced BF16 layers with F16 for faster inference on unsupported devices.
      • Qwen3.5-35B-A3B now has all variants (Q4_K_M, Q8_0, BF16, etc.) uploaded.
      • A reminder that KLD and perplexity benchmarks do not exactly reflect real-world use-cases.
      • Links to new GGUFs: Qwen3.5-35B-A3B-GGUF, Qwen3.5-122B-A10B-GGUF, Qwen3.5-397B-A17B-GGUF (397B still uploading!)
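
      (For reference, the KL divergence quoted above compares the quantized model's next-token distribution Q against the full-precision model's P: KL(P‖Q) = Ī£ P(i)Ā·log(P(i)/Q(i)), summed over the vocabulary; "Max KLD" presumably takes the worst single position rather than the average.)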

      You can also now Fine-tune Qwen3.5 in Unsloth via our free notebooks! Thanks a lot everyone!

      submitted by /u/danielhanchen
      [link] [comments]

    21. šŸ”— r/york misty walk this morning :) rss

      it was like a lovely dream

      submitted by /u/whtmynm
      [link] [comments]

    22. šŸ”— r/wiesbaden Smoke Together 2.0 rss

      Hello everyone, since only one person showed up to the last meeting, which was probably due to the weather, I thought we could try again tomorrow around 6 pm, at the same spot on Kirchenpfad: 50.083057, 8.216951

      Who's interested?

      submitted by /u/Wide-Distribution-78
      [link] [comments]

    23. šŸ”— r/Leeds Kirkstall Abbey rss

      Song is ā€˜Departure’ by IHF

      submitted by /u/mr_errington
      [link] [comments]

    24. šŸ”— r/Leeds Did Mr sunshine call in sick today... rss

      Weather forecasted all-day sun!

      submitted by /u/newtobitcoin111
      [link] [comments]

    25. šŸ”— r/wiesbaden AFD event has been cancelled šŸ¤ rss
    26. šŸ”— r/LocalLLaMA Qwen3 vs Qwen3.5 performance rss

      Note that dense models use their listed parameter size (e.g., 27B), while Mixture-of-Experts models (e.g., 397B A17B) are converted to an effective size using √(total Ɨ active) to approximate their compute-equivalent scale. Data source: https://artificialanalysis.ai/leaderboards/models

      submitted by /u/Balance-
      [link] [comments]
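
      To make the conversion concrete for the largest model: √(397 Ɨ 17) = √6749 ā‰ˆ 82, so the 397B-A17B MoE is scored as roughly an 82B dense-equivalent.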

    27. šŸ”— r/Leeds Anyone know what they're filming at Browns? rss

      Saw some riggers working on Browns yesterday blocking out all the windows. Anyone know what's being filmed?

      submitted by /u/Itsalladeepend
      [link] [comments]

    28. šŸ”— r/Leeds WFH / Remote Work Advice rss

      Hello! Hoping for some advice on co-working in Leeds.

      I'm M 32, I work in Consumer Goods/Tech and I am 100% WFH and while the convenience is incredible, the isolation can be a challenge. I would like to establish a solid rhythm of working from town a few days a week and even better, find other people wanting to do the same and have a bit of craic. Grab lunch, beer afterwards etc.

      I've tried a few co-working spaces in Leeds but haven't found something that feels sustainable, yet.

      What I've tried so far:

      Santander work cafe is great.

      It's free, but it doesn't open till 9 and they often host events, making it hard to establish a solid routine. I have a lot of calls, so I found that difficult to manage while working in there.

      2Work is great but is expensive.

      At £20 a go, that becomes £30 after train/parking. It's great if i'm meeting friends after work but not something i could make a regular routine of.

      Waterlane Boathouse is good too, but I couldn't do a full 9-5 there and it's hard to take calls. Obvs, it's a pub not a corporate office.

      What I'm looking for:

      • A space where I can reliably get a desk and would be able to take calls throughout the day

      • The cheaper the better

      • Parking would be ideal

      If you're in a similar position with WFH/Remote and want to find community during the week please drop me a line!

      submitted by /u/Longjumping-Stop-662
      [link] [comments]

    29. šŸ”— r/reverseengineering DLLHijackHunter v1.2.0 - Now with automated UAC Bypass & COM AutoElevation discovery rss
    30. šŸ”— Project Zero On the Effectiveness of Mutational Grammar Fuzzing rss

      Mutational grammar fuzzing is a fuzzing technique in which the fuzzer uses a predefined grammar that describes the structure of the samples. When a sample gets mutated, the mutations happen in such a way that any resulting samples still adhere to the grammar rules, thus the structure of the samples gets maintained by the mutation process. In case of coverage-guided grammar fuzzing, if the resulting sample (after the mutation) triggers previously unseen code coverage, this sample is saved to the sample corpus and used as a basis for future mutations.

      This technique has proven capable of finding complex issues and I have used it successfully in the past, including to find issues in XSLT implementations in web browsers and even JIT engine bugs.

      However, despite the approach being effective, it is not without its flaws which, for a casual fuzzer user, might not be obvious. In this blogpost I will introduce what I perceive to be the flaws of the mutational coverage-guided grammar fuzzing approach. I will also describe a very simple but effective technique I use in my fuzzing runs to counter these flaws.

      Please note that while this blogpost focuses on grammar fuzzing, the issues discussed here are not limited to grammar fuzzing as they also affect other structure-aware fuzzing techniques to various degrees. This research is based on the grammar fuzzing implementation in my Jackalope fuzzer, but the issues are not implementation specific.
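
      To ground the terminology, here is a deliberately tiny sketch of grammar-based generation and subtree mutation - illustrative only, not Jackalope's implementation:

      import random

      # A toy grammar: each nonterminal maps to a list of productions.
      # The last production for each symbol is terminal-leaning, so
      # deep expansions are forced to terminate.
      GRAMMAR = {
          "expr": [["term", "+", "expr"], ["term"]],
          "term": [["(", "expr", ")"], ["num"]],
          "num":  [["1"], ["2"], ["3"]],
      }

      def generate(symbol="expr", depth=0):
          """Expand a symbol into a derivation tree (nested tuples)."""
          if symbol not in GRAMMAR:
              return symbol  # terminal symbol
          rules = GRAMMAR[symbol]
          rule = random.choice(rules if depth < 6 else rules[-1:])
          return (symbol, [generate(s, depth + 1) for s in rule])

      def flatten(tree):
          """Render a derivation tree back into a sample string."""
          if isinstance(tree, str):
              return tree
          return "".join(flatten(child) for child in tree[1])

      def mutate(tree, depth=0):
          """Regenerate one random subtree. Because the replacement is
          derived from the same grammar, the mutated sample still
          adheres to the grammar rules."""
          if isinstance(tree, str):
              return tree
          symbol, children = tree
          if random.random() < 0.3:
              return generate(symbol, depth)
          i = random.randrange(len(children))
          children = list(children)
          children[i] = mutate(children[i], depth + 1)
          return (symbol, children)

      seed = generate()
      print(flatten(seed), "=>", flatten(mutate(seed)))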

      Issue #1: More coverage does not mean more bugs

      The fact that coverage is not a great measure for finding bugs is well known and affects coverage-guided fuzzing in general, not just grammar fuzzing. However, this tends to be more problematic for the types of targets where structure-aware fuzzing (including grammar fuzzing) is typically used, such as in language fuzzing. Let's demonstrate this with an example:

      In language fuzzing, bugs often require functions to be called in a certain order, or the result of one function to be used as an input to another function. To trigger a recent bug in libxslt, two XPath functions need to be called, the document() function and the generate-id() function, where the result of the document() function is used as an input to the generate-id() function. There are other requirements to trigger the bug, but for now let's focus on this requirement.

      Here’s a somewhat minimal sample required to trigger the bug:

      <?xml version="1.0"?>
      <xsl:stylesheet xml:base="#" version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/">
        <xsl:value-of select="generate-id(document('')/xsl:stylesheet/xsl:template/xsl:message)" />
        <xsl:message terminate="no"></xsl:message>
      </xsl:template>
      </xsl:stylesheet>
      

      With the most relevant part for this discussion being the following element and the XPath expression in the select attribute:

      <xsl:value-of select="generate-id(document('')/xsl:stylesheet/xsl:template/xsl:message)" />
      

      If you run a mutational, coverage-guided fuzzer capable of generating XSLT stylesheets, what it might do is generate two separate samples containing the following snippets:

      Sample 1:

      <xsl:value-of select="document('')/xsl:stylesheet/xsl:template/xsl:message" />
      

      Sample 2:

      <xsl:value-of select="generate-id(/a)" />
      

      The union of these two samples' coverage is going to be the same as the coverage of the buggy sample; however, having document() and generate-id() in two different samples in the corpus isn't really helpful for triggering the bug.

      It is also possible for the fuzzer to generate a single sample with both of these functions that again results in the same coverage as the buggy sample, but with both functions operating on independent data:

      <xsl:template match="/">
      ...
      <xsl:value-of select="document('')/xsl:stylesheet/xsl:template/xsl:message" />
      <xsl:value-of select="generate-id(/a)" />
      ...
      </xsl:template>
      

      This issue also demonstrates how crucial it is for any fuzzer to be able to combine multiple samples in the corpus in order to produce new samples. However, in this case, note that combining the two samples wouldn’t trigger any previously unseen coverage and thus the resulting sample wouldn’t be saved, despite climbing closer to triggering the bug.

      In this case, because triggering the bug requires chaining only two function calls, a fuzzer would eventually find this bug by randomly combining the samples. But if three or more function calls need to be chained to trigger the bug, it becomes increasingly expensive to do so, and coverage feedback, as demonstrated, does not really help.
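
      The greedy save rule driving this behaviour is roughly the following (an illustrative sketch, not any specific fuzzer's code):

      # A mutated sample is kept only if it exercises at least one
      # edge that no previously saved sample has exercised.
      global_edges: set[int] = set()
      corpus: list[bytes] = []

      def maybe_save(sample: bytes, edges: set[int]) -> bool:
          if edges - global_edges:
              global_edges.update(edges)
              corpus.append(sample)
              return True
          # A combined document()+generate-id() sample lands here:
          # closer to the bug, yet rejected for lack of new edges.
          return False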

      In fact, triggering this bug might be easier (or equally easy) with a generative fuzzer (that will generate a new sample from scratch every time) without coverage feedback. But even though coverage feedback is not ideal, it still helps in a lot of cases.

      As previously stated, this issue does not only affect grammar fuzzing, but also other fuzzing approaches, in particular those focused on language fuzzing. For example, Fuzzilli documentation describes a similar version of this problem.

      A possible solution for this problem would be having some kind of dataflow coverage that could identify that data flowing from document() into generate-id() is something previously unseen and worth saving; however, I am not aware of any practical implementation of such an approach.
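
      As a rough illustration of what such a signal could look like - hypothetical, given the lack of practical implementations:

      # Treat each previously unseen (producer, consumer) function
      # pair on a value's path as a coverage point of its own, so
      # document() feeding generate-id() counts as novel even when
      # ordinary edge coverage is already saturated.
      seen_flows: set[tuple[str, str]] = set()

      def novel_flow(producer: str, consumer: str) -> bool:
          flow = (producer, consumer)
          if flow in seen_flows:
              return False
          seen_flows.add(flow)
          return True  # novel dataflow: worth saving the sample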

      Issue #2: Mutational grammar fuzzing tends to produce samples that are very similar

      To demonstrate this issue, let’s take a look at some samples from one of my XSLT fuzzing sessions:

      Part of sample 1128 in the corpus:

      <?xml version="1.0" encoding="UTF-8"?><xsl:fallback namespace="http://www.w3.org/url2" ><aaa ></aaa><ddd xml:id="{lxl:node-set($name2)}:" att3="{[$name4document('')att4.|document('')$name4namespace::]document('')}{ns2}" ></ns3:aaa></xsl:fallback>
      

      Part of sample 603 in the corpus:

      <?xml version="1.0" encoding="UTF-8"?><xsl:fallback namespace="http://www.w3.org/url2" ><aaa ></aaa><ddd xml:id="{lxl:node-set($name2)}:" att3="{[$name4document('')att4.|document('')$name4namespace::]document('')}{ns2}" xmlns:xsl="http://www.w3.org/url3" ><xsl:output ></xsl:output>eHhDC?^5=<xsl:choose elements="eee" ><xsl:copy stylesheet-prefix="ns3" priority="3" ></xsl:copy></xsl:choose></ddd>t</xsl:fallback>
      

      As you can see from the example, even though these two samples are different and come from different points in time during the fuzzing session, a large part of these two samples are the same.

      This follows from the greedy nature of mutational coverage-guided fuzzing: when a sample is mutated to produce new coverage, it gets immediately saved to the corpus. Likely a large part of the original sample wasn't mutated, but it is still part of the new sample, so it gets saved. This new sample can get mutated again, and if the resulting (third) sample triggers new coverage it will also get saved, despite large similarities with the starting sample. This results in a general lack of diversity in a corpus produced by mutational fuzzing.

      While Jackalope’s grammar mutator can also ignore the base sample and generate an entire sample from scratch, it is rare for this to trigger new coverage compared to the more localized mutations, especially later on in the fuzzing session.

      One approach to combating this issue could be to minimize each new sample so that only the part that triggers new coverage gets saved, but I observed that this isn't an optimal strategy either: it's beneficial to leave some of the original sample in place. Jackalope implements this by minimizing each grammar sample, but stopping the minimization when a certain number of grammar tokens has been reached.
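
      A sketch of that bounded minimization - illustrative only, since the real implementation operates on grammar trees rather than a flat token list:

      import random

      def minimize(tokens, still_interesting, min_tokens=32, rounds=200):
          """Randomly drop tokens while the sample keeps triggering
          the new coverage, but stop shrinking at a floor so some of
          the original sample survives into the corpus."""
          tokens = list(tokens)
          for _ in range(rounds):
              if len(tokens) <= min_tokens:
                  break
              i = random.randrange(len(tokens))
              candidate = tokens[:i] + tokens[i + 1:]
              if still_interesting(candidate):
                  tokens = candidate
          return tokens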

      Even though this blogpost focuses on grammar fuzzing, I observed this issue with other structure aware fuzzers as well.

      A simple solution?

      Both of these issues hint that there might be benefits to combining generative fuzzing with mutational fuzzing in some way. Generative fuzzing produces more diverse samples than mutational fuzzing but suffers from other issues, such as typically generating lots of samples that trigger errors in the target. Additionally, as stated previously, although coverage is not an ideal criterion for finding bugs, it is still helpful in a lot of cases.

      In the past, when I was doing grammar fuzzing on a large number of machines, an approach I used was to delay syncing individual fuzz workers. That way, each worker would initially work with its own (fully independent) corpus. Only after some time had passed would the fuzzers exchange sample sets, and each worker would get the samples that correspond to the coverage that worker was missing.

      But what to do when fuzzing on a single machine? During my XSLT fuzzing project, I used the following approach:

      1. Start a fuzzing worker with an empty corpus. Run for T seconds.

      2. After T seconds sync the worker with the fuzzing server. Get the missing coverage and corresponding samples from the server. Upload any coverage the server doesn’t have (and the corresponding samples) to the server.

      3. Run with combined corpus (generated by the worker + obtained from the server) for another T seconds.

      4. Sync with the server again (to upload any new samples) and shut down the worker.

      5. Go back to step 1.

      The result is that the fuzzing worker spends half of the time creating a fully independent corpus generated from scratch and half of the time working on a larger corpus that also incorporates interesting samples (as measured by the coverage) from the previous workers. This results in more sample diversity as each new generation is independent from the previous one. However the worker eventually still ends up with a sample set corresponding to the full coverage seen so far during any worker lifetime. Ideally, new coverage and, more importantly, new bugs can be found by combining the fresh samples from the current generation with samples from the previous generations.

      In Jackalope, this can be implemented by first running the server, e.g.

      /path/to/fuzzer -start_server 127.0.0.1:8337 -out serverout
      

      And then running the workers sequentially with the following Python script:

      import subprocess
      import time
      
      T = 3600  # seconds between server syncs; each worker lives for 2*T
      
      while True:
        # Start each generation from scratch with an empty corpus.
        subprocess.run(["rm", "-rf", "workerout"])
        p = subprocess.Popen(["/path/to/fuzzer", "-grammar", "grammar.txt", "-instrumentation", "sancov", "-in", "empty", "-out", "workerout", "-t", "1000", "-delivery", "shmem", "-iterations", "10000", "-mute_child", "-nthreads", "6", "-server", "127.0.0.1:8337", "-server_update_interval", str(T), "--", "./harness", "-m", "@@"])
        # First T seconds: fully independent corpus. After the sync at
        # T seconds: combined corpus. Then retire the worker.
        time.sleep(T * 2)
        p.kill()
      

      Note that Jackalope parameters in the script above are from my libxslt fuzzing run and should be adjusted according to the target.

      Additionally, Jackalope implements the -skip_initial_server_sync flag to avoid syncing a worker with the server as soon as the worker starts, but this flag is now the default in grammar fuzzing mode so it does not need to be specified explicitly.

      Does this trick work better than running a single uninterrupted fuzzing session? Let’s do some experiments. I used an older version of libxslt as the target (libxslt commit 2ee18b3517ca7144949858e40caf0bbf9ab274e5, libxml2 commit 5737466a31830c017867e3831a329c8f605c877b) and measured the number of unique crashes over time. Note that while the number of unique crashes does not directly correspond to the number of unique bugs, being able to trigger the same bug in different ways still gives a good indication of bug finding capabilities. I ran each session for one week on a single machine.

      I ran two default experiments (with a single long-lived worker) as well as two experiments with the proposed solution, using different values of T: T=3600 (one hour) and T=600 (10 minutes).

      As demonstrated in the chart, restarting the worker periodically (but keeping the server), as proposed in this blog post, helped uncover more unique crashes than either of the default sessions. The crashes were also found more quickly. The default sessions proved sensitive to starting conditions where one run discovered 5 but the other run only 2 unique crashes during the experiment time.

      The value of T dictates how soon a worker will switch from working on only its own samples to working on its own + the server samples. The best value in the libxslt experiment (3600) is when the worker already found most of the ā€œeasyā€ coverage and discovered the corresponding samples. As can be seen from the experiment, different values of T can produce different results. The optimal value is likely target-dependent.

      Conclusion

      Although the trick described in this blogpost is very simple, it nevertheless worked surprisingly well and helped discover issues in libxslt more quickly than I would likely have found them using default settings. It also underlines the benefits of experimenting with different fuzzing setups according to the target specifics, rather than relying on tooling out-of-the-box.

      Future work might include researching fuzzing strategies that favor novelty and would e.g. replace samples with the newer ones, even when doing so does not change the overall fuzzer coverage.

    31. šŸ”— r/Yorkshire What’s in Mytholmroyd? rss

      I’m in Mytholmroyd for work for a few hours today. What’s actually here? Anything I should see before I go?

      Cheers!

      submitted by /u/ANuggetEnthusiast
      [link] [comments]

    32. šŸ”— MetaBrainz Remembering mayhem rss

      Rob Kaye (also known to the community and his peers as ruaok and mayhem) was many things. Friend, partner, colleague, 'that guy with the crazy hair', hacker, burner, visionary and much more. And always a source of creative mayhem!

      Millions more have used, contributed to, or benefited from his open-source vision and projects. There's no doubt that Rob was one of the spearheads of open-source. He championed open music data and showed the world that a non- profit open-source organisation could be financially viable, competing with (and far outliving most) similar corporate projects.

      Below we will share some of Rob's history with MetaBrainz and staff. Thank you to everyone who left memories on the announcement post and elsewhere on the world wide web. His spirit lives on in our hearts and in 1's and 0's.

      Rob and MetaBrainz

      In the year 2000 a young Rob created MusicBrainz. He had just witnessed the corporatization of CDDB and embarked on the creation of a collaborative music database that could never be snatched from its contributors.

      Young Rob ('the one with the hair') in the ballpit at the old London Last.fm offices

      For over 25 years Rob guided MusicBrainz along its path, always focussed on his vision of openness and independence. He nestled his projects safely within the non-profit arms of the MetaBrainz Foundation, to further safeguard them for the future. Since the year 2000 many of MusicBrainz's sister projects have bloomed under the MetaBrainz umbrella, such as MusicBrainz Picard, BookBrainz and ListenBrainz, with Rob either supporting community efforts or identifying a need and kickstarting them himself.

      26 years after founding MusicBrainz, with 143,901,298 and growing MusicBrainz IDs serving billions of global requests and (relatively) young ListenBrainz already at 1 billion+ listens, there is no doubt that Rob’s open-source efforts have changed the landscape of music data and, by extension, human culture (which relies on open and accessible histories) and the lives of musicians. It’s changed not just for us die-hards who live ā€œinā€ the MetaBrainz ecosystem, but also for the millions of people using the thousands of services that interact with MetaBrainz’ data. It’s probably no exaggeration to say that most people have interacted with MetaBrainz data at some point in their lives.

      A black and white photo of Rob, glasses on and grinning, his face in a burst of exploding fireworks at a festival

      Fearless, peerless

      None of this could have happened without Rob's fierce and immovable guard against corporate influence and the enshittification that has taken down so many of MetaBrainz' contemporaries over the decades. He would gleefully share stories of offers to "purchase" MetaBrainz and the ignorance of trying to spend money on something that has effectively been made utterly un-purchasable. He did not bend the knee to power - exemplified by his famous 'Amazon cake' endeavour.

      Rob was a hacker at heart, which made it all the more admirable that he spent much of his time dealing with the humdrum of what has become a substantial operation with a respectable row of servers and employees, all clamouring to be kept warm, dry, fed and paid, not to mention guiding hundreds of students and new contributors through their first forays into open-source.

      A crazy-looking Rob with crazy sunglasses and crazy hair, in front of the MetaBrainz team in India, with a capybara edited into the water in the background for absolutely no reason

      Robert Kaye and some of the MetaBrainz team in 2024

      Rob was also an excellent delegator. Once you had Rob's trust he would let you cook, resulting in a wide range of incredible talent being incorporated into the MetaBrainz team. Rob was still coding whenever he could, but his excellent team allowed him to spend the free time that MetaBrainz' admin left him hacking on collaborations, experiments and anything else that caught his interest - for instance, recently he was spending some evenings working on MBID Mapper 2.0, looking forward to GSoC, and was excited about upcoming collaborations.

      Rob will be outlived by what he built, just as he intended. Nothing will be able to replace the presence of that cheeky smile, but Rob's influence will still be felt when the monuments of many a king have crumbled.

      The Captain and My Friend

      zas has written the following piece about his experience working with Rob, an experience everyone on the MetaBrainz staff, board, and many many volunteer contributors were lucky enough to share.

      Rob and I were both born in 1970. Being children of that same year meant we shared more than just a birth year; we shared a digital soul. We grew up hacking hardware when it still felt like magic, watching the world connect through the screech of modems, and finding our first real homes in the scrolling text of IRC and newsgroups.

      Rob was a man of many origins—German, American, Catalan, and a constant traveller. But he didn't just move through the world; he transformed it. He was impossible to miss: a man of flashy colours, vibrant hair, and weird clothes. Even in the crowded ancient streets of the Old Delhi Market, Rob stood out. He occupied space with a joyous, colourful defiance that invited everyone else to be themselves, too.

      I first came to the project through a specific challenge. I had 2k+ CDs from my collection converted into FLAC files and a question: how to properly tag them with decent metadata? I met MusicBrainz, then Picard, and eventually I met Rob, and a life-changing friendship of 12 years began. One day, he messaged me with a simple question: would I be interested in some sysadmin tasks?

      I jumped on a train to Barcelona just to see him. We sat in a bar, drank a beer, and—despite my "very bad" spoken English—we understood each other perfectly. We spent that afternoon dreaming up ways to migrate the entire MusicBrainz infrastructure.

      Rob had a rare duality. He was the flamboyant traveller and maker who could command a room in a custom-made skirt of his own design, yet he was also the close friend who would happily retreat into a quiet corner to lose himself in the details of a PCB design or a complex server migration. He was as comfortable under the spotlight as he was behind a terminal. He loved machines AND humans.

      He built a "glass house" of data so that the fruits of our labor could never be sold or stolen. He was a leader who never lost the soul of a hacker, a visionary who lived and dressed in technicolor.

      Rob was the Captain of MetaBrainz, but to me, he was a fellow traveller who started his journey exactly when I did. He has moved on to the next adventure, leaving us a world that is a little more open, a lot more honest, and infinitely more colourful.

      The servers are up, the mission continues, and the music is playing for you, Rob.

      Rest easy, my friend. Ruhe in Frieden. Reposa en pau. Bon voyage.

      Gallery of mayhem

    33. šŸ”— badlogic/pi-mono v0.56.1 release

      Fixed

      • Fixed extension alias fallback resolution to use ESM-aware resolution for jiti aliases in global installs (#1821 by @Perlence)
      • Fixed markdown blockquote rendering to isolate blockquote styling from default text style, preventing style leakage.
    34. šŸ”— HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      ## New releases
      - [IDASQL](https://github.com/allthingsida/idasql): 0.0.10
      
    35. šŸ”— r/Yorkshire Is Whitby doable as a day trip from York? rss

      Hi everyone, I’ll be in York for a couple of days with a friend and we were thinking about heading over to Whitby for the day while we’re there.

      Not sure if it works well as just a day trip or if it ends up feeling a bit rushed. We mostly just want to walk around the harbour, grab some fish and chips and maybe head up to Whitby Abbey if we’ve got time. Has anyone done it as a day trip from York before? Just wondering if it’s worth it or if it’s better staying a night.

      submitted by /u/FeistyPrice29
      [link] [comments]

    36. šŸ”— r/LocalLLaMA Alibaba CEO: Qwen will remain open-source rss

      submitted by /u/Bestlife73
      [link] [comments]

    37. šŸ”— r/LocalLLaMA Google invites ex-qwen ;) rss

      to make Gemma great again? ;)

      submitted by /u/jacek2023
      [link] [comments]

    38. šŸ”— r/reverseengineering Your Duolingo Is Talking to ByteDance: Cracking the Pangle SDK's Encryption rss
    39. šŸ”— Rust Blog Announcing Rust 1.94.0 rss

      The Rust team is happy to announce a new version of Rust, 1.94.0. Rust is a programming language empowering everyone to build reliable and efficient software.

      If you have a previous version of Rust installed via rustup, you can get 1.94.0 with:

      $ rustup update stable
      

      If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.94.0.

      If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

      What's in 1.94.0 stable

      Array windows

      Rust 1.94 adds array_windows, an iterator method for slices. It works just like windows but with a constant length, so the iterator items are &[T; N] rather than dynamically-sized &[T]. In many cases, the window length can even be inferred from how the iterator is used!

      For example, part of one 2016 Advent of Code puzzle is looking for ABBA patterns: "two different characters followed by the reverse of that pair, such as xyyx or abba." If we assume only ASCII characters, that could be written by sweeping windows of the byte slice like this:

      fn has_abba(s: &str) -> bool {
          s.as_bytes()
              .array_windows()
              .any(|[a1, b1, b2, a2]| (a1 != b1) && (a1 == a2) && (b1 == b2))
      }
      

      The destructuring argument pattern in that closure lets the compiler infer that we want windows of 4 here. If we had used the older .windows(4) iterator, then that argument would be a slice which we would have to index manually, hoping that runtime bounds-checking will be optimized away.
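
      For comparison, here is a sketch (ours, not from the release notes) of the same check with the older windows(4): each item is a &[u8] slice, so the pattern match becomes manual indexing, relying on the optimizer to elide the bounds checks.

      fn has_abba_windows(s: &str) -> bool {
          s.as_bytes()
              .windows(4)
              // w[0]=a1, w[1]=b1, w[2]=b2, w[3]=a2
              .any(|w| (w[0] != w[1]) && (w[0] == w[3]) && (w[1] == w[2]))
      }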

      Cargo config inclusion

      Cargo now supports the include key in configuration files (.cargo/config.toml), enabling better organization, sharing, and management of Cargo configurations across projects and environments. These include paths may also be marked optional if they might not be present in some circumstances, e.g. depending on local developer choices.

      # array of paths
      include = [
          "frodo.toml",
          "samwise.toml",
      ]
      
      # inline tables for more control
      include = [
          { path = "required.toml" },
          { path = "optional.toml", optional = true },
      ]
      

      See the full include documentation for more details.

      TOML 1.1 support in Cargo

      Cargo now parses TOML v1.1 for manifests and configuration files. See the TOML release notes for detailed changes, including:

      • Inline tables across multiple lines and with trailing commas
      • \xHH and \e string escape characters (sketched below)
      • Optional seconds in times (defaulting to 0)

      For example, a dependency like this:

      serde = { version = "1.0", features = ["derive"] }
      

      ... can now be written like this:

      serde = {
          version = "1.0",
          features = ["derive"],
      }
      

      Note that using these features in Cargo.toml will raise your development MSRV (minimum supported Rust version) to require this new Cargo parser, and third-party tools that read the manifest may also need to update their parsers. However, Cargo automatically rewrites manifests on publish to remain compatible with older parsers, so it is still possible to support an earlier MSRV for your crate's users.
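
      Since Cargo now accepts TOML 1.1 in configuration files too, the new string escapes can be sketched with a hypothetical [env] entry in .cargo/config.toml (names and values made up for illustration):

      [env]
      # \e is the new shorthand for ESC (0x1B); \xHH gives single-byte escapes
      PROMPT_PREFIX = "\e[1;32m"
      FIELD_SEPARATOR = "\x1f"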

      Stabilized APIs

      These previously stable APIs are now stable in const contexts:

      Other changes

      Check out everything that changed in Rust, Cargo, and Clippy.

      Contributors to 1.94.0

      Many people came together to create Rust 1.94.0. We couldn't have done it without all of you. Thanks!

    40. šŸ”— matklad JJ LSP Follow Up rss

      JJ LSP Follow Up

      Mar 5, 2026

      In Majjit LSP, I described an idea for implementing a Magit-style UX for jj once and for all by leveraging the LSP protocol. I learned today that the upcoming 3.18 version of LSP has a feature that makes this massively less hacky: the Text Document Content Request

      LSP can now provide virtual documents, which aren’t actually materialized on disk. So this:

      [screenshot from the original post: a Magit-style jj status buffer]

      can now be such a virtual document, where highlighting is provided by semantic tokens, things like ā€œcheck out this commitā€ are code actions, and ā€œgoto definitionā€ jumps from the diff in the virtual file to a real file in the working tree.

      Exciting!

    41. šŸ”— Console.dev newsletter Bubble Tea v2 rss

      Description: Terminal UI framework.

      What we like: Combined set of framework tools for TUI development: Bubble Tea (interactions), Lip Gloss (layouts), Bubbles (UI components). Uses a new rendering engine for performance. Improved support for inline images, clipboards, rendering sync. Declarative views.

      What we dislike: It’s Go-only. Still a good choice for terminal utilities, though.

    42. šŸ”— Console.dev newsletter numpy-ts rss

      Description: NumPy implementation in TypeScript/JS.

      What we like: Avoid splitting your stack if you’re already building in TS/JS. Tree-shakable library with universal runtime support across server and client. No additional dependencies. Validated against NumPy tests.

      What we dislike: Not quite at 100% NumPy API coverage yet (94%). Slower than NumPy (average 11x slower, median 3x slower) across many of the benchmarks, as you’d expect.

    43. šŸ”— Llogiq on stuff Write small Rust scripts rss

      Recently I was working on a Rust PR to reduce unreachable_code lint churn after todo!() calls: it removes the unreachable_code messages that follow a todo!() and instead adds a todo_macro_uses lint, which can be turned off while the code is still being worked on. However, once that change was done, I ran into a number of failing tests: while they had a #![allow(unused)] or some such, this didn’t cover the todo_macro_uses lint.

      Brief digression: rustc itself is tested by a tool called compiletest. That tool runs the compiler on code snippets, captures the output and compares it with known-good golden master output it stores alongside the snippets. In this case, there were a good number of tests that had todo!() but didn’t #![allow(todo_macro_uses)]. More tests than I’d care to change manually.

      In this year of the Lord, many of us would ask some agent to do it, but I didn’t like the fact that I would have to review the output (I have seen too many needless formatting changes to be comfortable investing time and tokens into that). Also, I had a code snippet lying around to find all Rust files, one that only used standard library functions and could easily be pasted into a throwaway project.

      use std::io;
      use std::path::Path;
      
      fn check_files(path: &Path) -> io::Result<()> {
          for e in std::fs::read_dir(path)? {
              let Ok(d) = e else { continue; };
              if d.file_type().is_ok_and(|ft| ft.is_dir()) {
                  check_files(&d.path())?;
              } else {
                  let path = d.path();
                  if path.extension().is_some_and(|ext| ext == "rs") {
                      check_file(&path)?;
                  }
              }
          }
          Ok(())
      }
      

      This can be called on a Path and walks it recursively, calling check_file on all Rust files. I had also written a few read-modify-write functions in Rust (notably in my twirer tool, which I use for my weekly This Week in Rust contributions). They look like this:

      fn check_file(path: &Path) -> io::Result<()> {
          let orig_text = std::fs::read_to_string(path)?;
      
          let text = todo!(); // put the changed `orig_text` into `text`
      
          std::fs::write(path, text)
      }
      

      There were two slight complications: a) I wanted to amend any #![allow(..)] annotation I would find instead of adding another, and b) to add one, I had to find the first position after the initial comments (those are interpreted by compiletest, which would be foiled if they ended up below a non-comment line). Also I didn’t want to needlessly add empty lines, so I had to check whether to insert a newline. All in all this came out to less than 50 lines of Rust code, which I’m reproducing here; perhaps someone can copy parts of it into their own one-off Rust scripts.

      use std::fs::{read_dir, read_to_string, write};
      use std::io;
      use std::path::Path;
      
      fn check_file(path: &Path) -> io::Result<()> {
          let orig_text = read_to_string(path)?;
          if !orig_text.contains("todo!(") || orig_text.contains("todo_macro_uses") {
              return Ok(());
          }
          let text = if let Some(pos) = orig_text.find("#![allow(") {
              // we have an `#![allow(..)]` we can extend
              let Some(insert_pos) = orig_text[pos..].find(")]") else {
                  panic!("unclosed #![allow()]");
              };
              // insert just before the closing `)]`
              let (before, after) = orig_text.split_at(pos + insert_pos);
              format!("{before}, todo_macro_uses{after}")
          } else {
              // find the first line after all // comments
              let mut pos = 0;
              while orig_text[pos..].starts_with("//") {
                  let Some(nl) = orig_text[pos..].find("\n") else {
                      pos = orig_text.len();
                      break;
                  };
                  pos += nl + 1;
              }
              let (before, after) = orig_text.split_at(pos);
              // insert a newline unless at beginning or we already have one
              let nl = if pos == 0 || before.ends_with('\n') {
                  ""
              } else {
                  "\n"
              };
              format!("{before}{nl}#![allow(todo_macro_uses)]\n{after}")
          };
          write(path, text)
      }
      
      fn check_files(path: &Path) -> io::Result<()> {
          for e in read_dir(path)? {
              let Ok(d) = e else { continue; };
              if d.file_type().is_ok_and(|ft| ft.is_dir()) {
                  check_files(&d.path())?;
              } else {
                  let path = d.path();
                  if path.extension().is_some_and(|ext| ext == "rs") {
                      check_file(&path)?;
                  }
              }
          }
          Ok(())
      }
      
      fn main() -> io::Result<()> {
          check_files(&Path::new("../rust/tests/ui"))
      }
      

      The script ran flawlessly, I didn’t need to check the output for errors, and I can reuse parts of it whenever I feel like it.

      Conclusion: It’s easy and quick to write small Rust scripts to transform code. And since you know what the code does, you don’t need any time to review the output. Rust’s standard library, while missing pieces that might simplify some tasks, is certainly serviceable for work like this. Even if I needed, say, regexes, those would’ve been a mere cargo add regex away. So next time you need to mechanically transform some code, don’t reach for AI - simply rust it.

    44. šŸ”— Ampcode News GPT-5.4, The New Oracle rss

      Habemus oraculum! We have a new oracle in Amp and it's GPT-5.4.

      It's a great model. In our internal evals response quality went from 60.8% (GPT-5.2) to 68.2% (GPT-5.4). Mean latency is down from ~6.7min to ~4.9min.

      In Amp's smart mode GPT-5.4 works really well with Opus 4.6, which is smart mode's current main model. They complement each other with the oracle bringing sage advice on architecture, code reviews, and tricky bugs to the context window, just as we're used to from previous incantations.

      On top of that, we also decided to add the oracle subagent to deep mode. Now you might wonder, since deep mode currently uses GPT-5.3-Codex as the main model, why add another GPT model in the same mode? Does that even make sense?

      We think it does. GPT-5.3-Codex is fantastic at coding (as Codex models tend to be), which is exactly why it is the main model in deep, but the oracle is plain GPT-5.4, a non-Codex model. Less a code specialist, more an all-rounder.

      That gives us two models from the same family, but trained for different goals, with different system prompts, in the same mode — two distinct voices in the same conversation.

      We're still learning what GPT-5.4 can do in practice. There are very likely hidden smarts and treasures we haven't found yet. Let us know once you do.

    45. šŸ”— exe.dev APIs for the RESTless rss

      Exe.dev's API to create a new machine is:

      ssh exe.dev new --name=restless --json
      

      That assumes your SSH key is already registered to your account.

      If you want to do it over HTTPS, it's:

      curl -X POST https://exe.dev/exec \
        -H "Authorization: Bearer $TOKEN" \
        -d 'new --name=restless --json'
      

      Our CLI and our API are one and the same. The conventions are unix-y (command-line flags and all) rather than webby, but they’re familiar to our end users, and you don’t have to learn two different sets of conventions.

      Minting Your Own Tokens

      The only tricky bit is giving our users bearer tokens, and here we did something new: you can use your SSH key to mint your own tokens, and you can give those self-minted tokens restrictions (when they're valid, what they can do) without chatting with us. If the signature checks out, we know the token was minted by the holder of the SSH private key.

      We walk through building a token step by step in our documentation, but this shell function does the trick:

      exetoken() {
        # Generate an exe.dev API token.
        #   exetoken [permissions_json] [ssh_key_path]
        #   permissions_json defaults to '{}' (no restrictions)
        #   ssh_key_path defaults to the first IdentityFile from ssh config
        local perms
        if [ -n "$1" ]; then
          perms="$1"
        else
          perms='{}'
        fi
        local key
        if [ -n "$2" ]; then
          key="$2"
        else
          local default_key=$(ssh -G exe.dev | grep -i identityfile | head -n1 | awk '{print $2}')
          key="${default_key/#\~/$HOME}"
        fi
        # base64url encode: drop newlines and padding, map +/ to -_
        b() { tr -d '\n=' | tr '+/' '-_'; }
        local p=$(printf '%s' "$perms" | base64 | b)
        # sign the permissions with the SSH key; sed strips the BEGIN/END
        # armor lines, keeping only the base64 body of the signature
        local s=$(printf '%s' "$perms" | ssh-keygen -Y sign -f "$key" -n v0@exe.dev 2>/dev/null | sed '1d;$d' | b)
        echo "exe0.$p.$s"
      }
      

      The key aspects here are the inputs:

      • A permissions JSON — e.g. {"cmds":["whoami"]} says "this key can execute the whoami command."
      • The SSH key is the secret that signs the token.

      The output is the permissions plus the signature over those permissions, both encoded with URL-safe base64 so the token travels cleanly in HTTP headers and URLs.

      $ curl -s -X POST https://exe.dev/exec \
          -H "Authorization: Bearer $(exetoken '{"cmds":["whoami"]}')" \
          -d whoami | jq -r '.email'
      philip.zeyliger@bloggy.exe.xyz
      

      Gadzooks, it works!
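
      If you'd rather mint tokens from code, the final assembly is just string concatenation. Here's a minimal Rust sketch (illustrative only, not from the exe.dev docs; exe_token is a hypothetical helper that assumes the base64 crate and a raw SSH signature blob produced elsewhere, e.g. by shelling out to ssh-keygen -Y sign):

      use base64::{engine::general_purpose::URL_SAFE_NO_PAD, Engine as _};

      // Assemble an `exe0.<perms>.<sig>` token. `ssh_signature` is the raw
      // signature blob (the decoded body of the `ssh-keygen -Y sign` armor);
      // URL_SAFE_NO_PAD matches the tr-based encoding in the shell function above.
      fn exe_token(perms_json: &str, ssh_signature: &[u8]) -> String {
          let p = URL_SAFE_NO_PAD.encode(perms_json);
          let s = URL_SAFE_NO_PAD.encode(ssh_signature);
          format!("exe0.{p}.{s}")
      }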

      Scopes, Expiry, and Revocation

      You can associate multiple SSH keys with an exe.dev account. Removing an SSH key from your exe.dev account revokes all tokens signed with that SSH key.

      This, dare we say unusual, scheme gives you scopes, expiry, offline token creation, and revocation. We admit it's a little weird.

      Extending to the SSH Auth Proxy

      Exe.dev VMs come with a built-in auth proxy. If you wanted to script talking to a web server on your VM, you could log in manually and steal the cookie. Stealing cookies is naughty, so you could instead mark the VM publicly accessible and implement your own authentication. Our API keys give you a third way: mint a bearer token scoped to just that VM, and access it directly.

      For VM tokens, the signing namespace changes from v0@exe.dev to v0@myvm.exe.xyz:

      # Without a token — the proxy redirects you to log in:
      $ curl -s -o /dev/null -w "%{http_code}" https://myvm.exe.xyz/api/data
      307
      
      # With a bearer token — you're in:
      $ curl -s -H "Authorization: Bearer $VM_TOKEN" https://myvm.exe.xyz/api/data
      {"status": "ok"}
      

      References

      See https://exe.dev/docs/https-api for the full details, including how to mint short-lived tokens.

    46. šŸ”— Armin Ronacher AI And The Ship of Theseus rss

      Code is getting cheaper and cheaper to write, and that includes re-implementations. I mentioned recently that I had an AI port one of my libraries to another language, and it ended up choosing a different design for that implementation. In many ways the functionality was the same, but the path it took to get there was different. The port worked by going via the test suite.

      Something related, but different, happened with chardet. The current maintainer reimplemented it from scratch by pointing a coding agent at nothing but the API and the test suite. The motivation: enabling relicensing from LGPL to MIT. I personally have a horse in this race, because I too wanted chardet to be under a non-GPL license for many years. So consider me a very biased person in that regard.

      Unsurprisingly, that new implementation caused a stir. In particular, Mark Pilgrim, the original author of the library, objects to the new implementation and considers it a derived work. The new maintainer, who has maintained it for the last 12 years, considers it a new work, and instructed his coding agent to produce precisely that. According to the author, validated with JPlag, the new implementation is distinct. If you actually consider how it works, that's not too surprising: it's significantly faster than the original implementation, supports multiple cores, and uses a fundamentally different design.

      What I think is more interesting about this question is the consequences of where we are. Copyleft licenses like the GPL depend heavily on copyright and friction for enforcement. But because the code is fundamentally in the open, with or without tests, you can trivially rewrite it these days. I myself have been intending to do this for a little while now with some other GPL libraries; in particular, I started a re-implementation of readline a while ago for similar reasons, because of its GPL license. There is an obvious moral question here, but that isn't necessarily what I'm interested in. For all the GPL software that might re-emerge as MIT software, so might proprietary abandonware.

      For me personally, what is more interesting is that we might not even be able to copyright these creations at all. A court still might rule that all AI-generated code is in the public domain, because there was not enough human input in it. That's quite possible, though probably not very likely.

      But this all causes some interesting new developments we are not necessarily ready for. Vercel, for instance, happily re-implemented bash with Clankers but got visibly upset when someone re-implemented Next.js in the same way.

      There are huge consequences to this. When the cost of generating code goes down that much, and we can re-implement it from test suites alone, what does that mean for the future of software? Will we see a lot of software re-emerging under more permissive licenses? Will we see a lot of proprietary software re-emerging as open source? Will we see a lot of software re-emerging as proprietary?

      It's a new world and we have very little idea of how to navigate it. In the interim we will have some fights about copyrights but I have the feeling very few of those will go to court, because everyone involved will actually be somewhat scared of setting a precedent.

      In the GPL case, though, I think it warms up some old fights about copyleft vs permissive licenses that we have not seen in a long time. It probably does not feel great to have one's work rewritten with a Clanker and one's authorship eradicated. Unlike the Ship of Theseus, though, this seems more clear-cut: if you throw away all code and start from scratch, even if the end result behaves the same, it's a new ship. It only continues to carry the name. Which may be another argument for why authors should hold on to trademarks rather than rely on licenses and contract law.

      I personally think all of this is exciting. I'm a strong supporter of putting things in the open with as little license enforcement as possible. I think society is better off when we share, and I consider the GPL to run against that spirit by restricting what can be done with it. This development plays into my worldview. I understand, though, that not everyone shares that view, and I expect more fights over the emergence of slopforks as a result. After all, it combines two very heated topics, licensing and AI, in the worst possible way.