

to read (pdf)

  1. Letting AI Actively Manage Its Own Context | 明天的乌云
  2. Garden Offices for Sale UK - Portable Space
  3. Cord: Coordinating Trees of AI Agents | June Kim
  4. Style tips for less experienced developers coding with AI · honnibal.dev
  5. Haskell for all: Beyond agentic coding

  1. March 09, 2026
    1. 🔗 r/reverseengineering Challenges in Decompilation and Reverse Engineering of CUDA-based Kernels rss
  2. March 08, 2026
    1. 🔗 remorses/critique critique@0.1.122 release
      1. Deterministic syntax highlighting detection — --web no longer polls for tree-sitter completion with arbitrary timeouts. Uses DiffRenderable.isHighlighting to detect exactly when highlighting finishes, then waits for one render stabilization pass. Exits the instant tree-sitter is done instead of always waiting a fixed 500ms.

      2. Fixed accidental clipboard copy during text drag — selecting text in the TUI no longer copies to clipboard mid-drag. Clipboard copy only fires on mouse-up after the selection is complete.

      3. Clipboard OSC52 fallback uses renderer API — the OSC52 clipboard escape sequence (used over SSH) now goes through renderer.copyToClipboardOSC52() instead of writing directly to stdout, which is more reliable with the rendering pipeline.

      4. Updated @opentuah/core and @opentuah/react to 0.1.97
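      The OSC 52 sequence mentioned in item 3 is simple enough to sketch. This is a minimal illustration of the escape sequence itself (a hypothetical helper, not critique's renderer.copyToClipboardOSC52() implementation):

```typescript
// OSC 52 lets a terminal set the system clipboard from a base64 payload,
// which is why it works over SSH: the sequence travels all the way to the
// local terminal emulator. Hypothetical helper, not critique's actual code.
function osc52Copy(text: string): string {
  const payload = Buffer.from(text, "utf8").toString("base64");
  // ESC ] 52 ; c ; <base64> BEL -- "c" targets the system clipboard
  return `\x1b]52;c;${payload}\x07`;
}
```

      Routing this through the renderer instead of writing to stdout directly avoids interleaving the sequence with frames the renderer is emitting at the same time.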

    2. 🔗 idursun/jjui v0.10.0 release

      Wow, a major release after a very long time. As promised, v0.10 is ready, and it comes with breaking changes.

      First, thank you to everybody who contributed code, bug reports, ideas, and testing for this release. In particular, thanks to @baggiiiie for various features and fixes, @nickchomey for contributions and continuous feedback, and @vic for updating the documentation website. Thanks as well to everyone else for various code contributions, reporting issues, and verifying fixes.

      We changed a lot in v0.10, but the biggest shift is that we finally got rid of some legacy configuration and moved to a unified actions + bindings model. Old concepts like [custom_commands] and [leader] are now replaced by a more consistent system built around actions, bindings, and first-class support for leader-style key sequences.

      This release also introduces config.lua, which makes it much easier to customise and extend jjui with real scripting instead of only static configuration. Between config.toml, config.lua, Lua actions, and bindings, much more of the UI can now be customised in one consistent way.

      The documentation has been updated with migration notes and examples such as the Lua Cookbook.

      From my testing it looks ready, but I am sure more rough edges will show up once more people start using it, so please keep reporting issues as you find them.

      ⚠️ Breaking Changes

      v0.10 replaces the legacy keybinding and customisation model with the new actions + bindings system.

      • [keys] configuration is no longer used.
      • [custom_commands] is replaced by [[actions]] and [[bindings]].
      • [leader] sequences are replaced by sequence bindings via seq.

      If you have an existing configuration, plan to migrate it rather than expecting a drop-in upgrade from v0.9.x.
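      As a rough illustration of the new model (the table names [[actions]] and [[bindings]] and the seq key come from these notes; the field names inside are assumptions, so check the migration docs for the real schema):

```toml
# Hypothetical sketch -- field names here are assumptions, not jjui's schema.
[[actions]]
name = "tug"
command = ["jj", "bookmark", "move", "--to", "@-"]

[[bindings]]
key = "ctrl+t"
action = "tug"

# leader-style key sequence, replacing the old [leader] table
[[bindings]]
seq = ["space", "t"]
action = "tug"
```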

      ✨Highlights

      • Unified actions + bindings system. Keyboard input now flows through a single dispatch pipeline, default bindings are declared in TOML, actions are discoverable, and custom behaviour can be defined in both TOML and Lua.
      • Much stronger scripting support. Lua setup now receives the full config plus runtime context, generated intent-backed actions replace handwritten drift-prone bridges, and new helpers like wait_close() and wait_refresh() make multi-step scripts more predictable.
      • Upgraded TUI foundations. Moving from Bubble Tea v1 to v2 brings broader terminal support, a new rendering engine for improved performance, and better input handling, including support for bindings like ctrl+space and dedicated shift+... style keys.
      • Improved diff and selection workflows. The diff view now supports word wrap, revisions and details support modifier-based mouse selection, and several details and evolog interactions are more consistent.

      🚀 New and Improved

      Command History

      • Added command history, available with shift+w, to review dismissed command runs together with their output.
      • Moved running command information out of the status area and into dedicated flash messages, so the command and its output are now shown together in one place.

      Help and Status

      • Reintroduced a dedicated Help view for full keybinding discovery, with filtering and scrolling, alongside the existing inline help.
      • Changed status/help binding presentation to column-major order so related bindings are easier to scan.

      Lua and Configuration

      • Shipped the unified actions + bindings architecture introduced in v0.10.
      • Exposed the full config object to Lua setup, including runtime context such as repository path via config.repo, and terminal state via config.terminal.dark_mode, config.terminal.fg, and config.terminal.bg.
      • Standardised closeable action naming around open_* and aligned apply/close semantics for more predictable scripted workflows.

      Oplog

      • Added quick search to oplog and shared the search behavior with revision search.

      Evolog

      • Changed evolog preview defaults to show interdiff output, matching jj evolog -p more closely.

      Abandon

      • Added s in abandon mode to target descendants of the selected revision(s).

      Revset

      • Kept revset editing open when the expression is invalid, improving iteration while editing.
      • Improved revset editing so function completion inserts ( when appropriate, while the function list display is cleaner and less noisy.

      Diff

      • Added word wrap in the diff view, with explicit wrapped and unwrapped modes plus soft-wrap-aware scrolling.
      • Added g and G in the diff view to jump to the top and bottom.

      Details and Revisions

      • Added Ctrl+click single-item toggle and Alt+click range selection in revisions and details.
      • Added a revisions.details.select_file action for finer control in details workflows.

      🛠️ Fixes

      • Fixed rendering issues with double-width characters.
      • Simplified divergent revision handling by using change_id/offset directly from the jj log prefix template.
      • Fixed git push --all behaviour for deleted bookmarks and corrected related command descriptions.

      ⚙️ Internal Updates

      • Upgraded to Bubble Tea v2.
      • Replaced remaining cellbuf usage with ultraviolet.
      • Removed hardcoded model-level key handling in favor of dispatcher-driven intents.
      • Continued internal cleanup around rendering, overlays, display context, and generated action plumbing to support the new architecture.

      What's Changed

      • Unified Actions + Bindings system by @idursun in #533
      • feat(oplog): add QuickSearch to oplog, share logic with revision search by @baggiiiie in #521
      • fix: add missing keys in bindings by @baggiiiie in #538
      • feat(evolog): change default evolog command to show interdiff by @academician in #539
      • fix: add back esc to clear checked revisions by @baggiiiie in #541
      • remove gh-pages workflow from main branch. by @vic in #542
      • docs: fix readme links by @baggiiiie in #549
      • Update CONTRIBUTING.md for documentation by @vic in #547
      • feat(lua): add "filter" and "ordered" option to choose lua api by @baggiiiie in #548
      • feat(abandon): use s for selecting descendants by @idursun in #544
      • feat: add change_offset in jj log prefix template for divergent revisions by @baggiiiie in #550
      • fix: make bookmark input visible when user has custom template-aliases by @baggiiiie in #551
      • fix(evolog): keep restore mode navigation on revisions and esc close by @baggiiiie in #554
      • Expose full config to Lua setup and add runtime context (repo + terminal) by @idursun in #553
      • feat(flash): command history by @idursun in #556
      • Update to bubbletea v2 by @idursun in #558
      • Ctrl+click and Alt+click for single and range selection by @idursun in #563
      • feat: add ( to functions when autocompleted by @nickchomey in #564
      • feat(status): change display to column major order by @baggiiiie in #569
      • fix: remove parentheses from functions list, but add when autocomplete by @nickchomey in #567
      • fix(git): prevent panic when opening git menu with no selected revision by @baggiiiie in #571
      • Refactor/Lua bridge by @idursun in #572
      • fix(ui): guard against nil commit in OpenBookmarks intent by @baggiiiie in #573
      • feat(diff): word wrap by @idursun in #575
      • Add pgup/pgdown keybindings to evolog and details by @academician in #576
      • feat(lua): make default actions overridable and expose jjui.builtin.* by @idursun in #577
      • feat(help): add "help view" bound to f1 by @idursun in #578

      New Contributors

      Full Changelog: v0.9.12...v0.10.0

    3. 🔗 remorses/critique critique@0.1.121 release
      1. --web is now ~2x faster — URL prints in ~1.1s instead of ~3.1s

      Desktop, mobile, and OG image now render in parallel. URL is returned as soon as HTML uploads; OG image uploads in the background so social previews appear seconds later without delaying the link.

      2. More syntax highlighting aliases — more extensions get proper highlighting:

        • .jsonc, .json5 → JSON
        • .mkd, .mkdn, .mdown, .markdown → Markdown
        • .scss, .less → CSS
        • .xhtml, .xml, .svg → HTML
        • .hh, .tpp, .ipp, .inl → C++
        • .ksh → Bash

      3. Click filename in web preview to copy path — clicking a file header in critique --web pages copies the filename to the clipboard and updates the URL hash for deep linking. The cursor changes to copy to hint at the behavior.

      4. --commit accepts range syntax — critique --commit HEAD~2..HEAD and HEAD~2...HEAD now work correctly:

        critique --commit HEAD~2..HEAD

        critique --commit main..feature-branch

      5. Fixed iOS Safari pinch-to-zoom widget drift — the annotation widget on critique.work pages no longer drifts when pinching to zoom. Uses the visualViewport API to keep it anchored to the bottom-right corner.
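      The visualViewport trick is a useful pattern in general: iOS Safari moves the visual viewport independently of the layout viewport while pinch-zooming, so fixed-position elements appear to drift. A minimal sketch of the idea (hypothetical helper, not critique's code):

```typescript
// Compute a widget's top-left so it hugs the bottom-right corner of the
// *visual* viewport. In the browser this would be recomputed on the
// window.visualViewport "resize" and "scroll" events and applied to the
// widget's style. Hypothetical helper, not critique's implementation.
interface ViewportRect {
  offsetLeft: number; // visual viewport's offset within the layout viewport
  offsetTop: number;
  width: number;
  height: number;
}

function anchorBottomRight(
  vv: ViewportRect,
  widgetWidth: number,
  widgetHeight: number,
  margin = 16,
): { left: number; top: number } {
  return {
    left: vv.offsetLeft + vv.width - widgetWidth - margin,
    top: vv.offsetTop + vv.height - widgetHeight - margin,
  };
}
```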

    4. 🔗 r/Yorkshire Where to find resale tickets for the Piece Hall rss

      Exactly as it sounds:

      I've been foolish and assumed tickets for something would be available for longer than they were - is there anywhere to keep an eye on where I won't get fleeced? Facebook groups, etc?

      Thanks!

      submitted by /u/josefbae

    5. 🔗 kainctl/isd v0.6.2 release

      A small bug-fix release that actually makes the experimental terminal-derived-theme option selectable in the theme menu. Before, it only worked when it was defined directly via the settings...

      Additionally, this release further improves color support.
      For terminal-derived-theme a user can now define whether or not it is a light theme (dark by default). This option is used to further adjust/tune the contrast to the background. See the documentation for more details.
      isd will also try to guess if TRUECOLOR is supported or not by inspecting the TERM environment variable.
      Finally, the dependencies have been updated, making more upstream textual themes available.
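      The release notes don't spell out the heuristic, but TERM-based truecolor detection usually amounts to a substring check; here is a sketch under that assumption (checking COLORTERM as well is a common extra signal and also an assumption, not something isd documents here):

```typescript
// Guess 24-bit color support from environment variables. The exact
// heuristic isd uses is not documented here; this is an assumed sketch.
function guessTruecolor(env: Record<string, string | undefined>): boolean {
  const term = (env.TERM ?? "").toLowerCase();
  const colorterm = (env.COLORTERM ?? "").toLowerCase();
  return (
    term.includes("truecolor") ||
    term.includes("24bit") ||
    colorterm === "truecolor" ||
    colorterm === "24bit"
  );
}
```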

      Full Changelog: v0.6.1...v0.6.2

    6. 🔗 r/wiesbaden Looking for people from the Rhein-Main/-Neckar area for mountain sports, high-alpine tours & ski mountaineering rss

      Hello everyone,
      I (M, mid-20s) am looking for people who enjoy mountain sports, especially high-alpine tours, multi-day ski tours/ski mountaineering, and more demanding hikes.

      I have the basic know-how (equipment, technique, fitness), but what I'm missing here locally are people to tackle such tours with. Going alone is not an option for me, and unfortunately nobody in my circle of friends shares this hobby.

      I'd be happy if a few like-minded people came together, whether you already have experience or want to ease into alpine touring yourself.
      I could imagine getting to know each other casually first, maybe doing some smaller tours, and then taking on bigger projects together.

      If you're interested, feel free to get in touch via comment or DM.

      submitted by /u/Odd-Purple3420

    7. 🔗 r/reverseengineering Is it possible to see an app's source code or the components it uses? I recently learned this might be possible with JADX, please help me figure out how rss
    8. 🔗 r/Harrogate Looking for an NHS dentist in the area rss

      Hi, looking for a dentist accepting NHS patients. Are there any you recommend? I live centrally in HG1 but can also travel to the wider area if there is a strong reason.

      I am seeking a general health checkup, and maybe someone to fix my chipped teeth if that is on the NHS; I'm not sure.

      submitted by /u/Apprehensive_Ring666

    9. 🔗 vercel-labs/agent-browser v0.17.0 release

      Minor Changes (94521e7)

      New Features

      • Lightpanda browser engine support: added an --engine <name> flag to select the browser engine (chrome by default, or lightpanda), implying --native mode. Configurable via the AGENT_BROWSER_ENGINE environment variable (#646)
      • Dialog dismiss command: added support for the dismiss subcommand in dialog command parsing (#605)

      Improvements

      • Daemon startup error reporting: daemon startup errors are now surfaced directly instead of an opaque timeout message
      • CDP port discovery: replaced a broken hand-rolled HTTP client with reqwest for more reliable CDP port discovery ([#619](https://github.com/vercel-labs/agent-browser/pull/619))
      • Chrome extensions: extensions now load correctly by forcing headed mode when extensions are present ([#652](https://github.com/vercel-labs/agent-browser/pull/652))
      • Google Translate bar suppression: suppressed the Google Translate bar in native headless mode to avoid interference ([#649](https://github.com/vercel-labs/agent-browser/pull/649))
      • Auth cookie persistence: auth cookies are now persisted on browser close in native mode ([#650](https://github.com/vercel-labs/agent-browser/pull/650))

      Bug Fixes

      • Fixed native auth login failing due to an incompatible encryption format ([#648](https://github.com/vercel-labs/agent-browser/pull/648))

      Documentation

      • Improved snapshot usage guidance and added a reproducibility check
      • Added the --engine flag to the README options table

      Performance

      • Added benchmarks to the CLI codebase ([#637](https://github.com/vercel-labs/agent-browser/pull/637))
    10. 🔗 r/wiesbaden Drone shot down? rss

      Something just exploded soundlessly in the air toward Dotzheim. Does anyone have an idea what it could have been? It was quite fast.

      submitted by /u/KHRAKE

    11. 🔗 r/reverseengineering [Update] I know I've shared LCSAJdump before, but v1.1.2 just mapped the entire x86_64 libc graph in <10s. It's now faster than ROPgadget while finding JOPs/Shadow Gadgets they physically miss. rss
    12. 🔗 r/wiesbaden Date ideas rss

      Hello, I'm looking for creative date ideas. The classic dinner out or a walk is personally too boring for me, and I don't find it all that great for a date anyway.

      Insider tips are welcome too. It doesn't have to be Wiesbaden only, something in Mainz works as well, but mostly Wiesbaden would be great.

      Thanks for the input

      submitted by /u/Scharick914

    13. 🔗 r/Harrogate Running Routes in the Area? rss

      I'll be visiting Harrogate in late July and will need to get in a long run on the weekend. I've seen some info about the route to Knaresborough through the gorge, but this seems to be more of a hike than a running trail, and I'm looking for a mostly paved path. I've been to Harrogate before, but never while training. Any help would be great. Thanks!
      Edit to say: I'll be staying at the convention center off the King's Road and won't have a car.

      submitted by /u/EnglishTeach88

    14. 🔗 r/Yorkshire I’ll buy you a drink if you can name where this is in West Yorkshire? rss
    15. 🔗 r/york Blossom out in Rowntrees Park rss

      Lovely walk along the river with dog and nice to see signs of spring in the park.

      submitted by /u/DentistKitchen

    16. 🔗 r/Leeds Partridge Friend rss

      Not a garden visitor I expected to get in east Leeds, to be honest! My partner read that they are quite rare these days; does anyone know if that's right?

      submitted by /u/alecwa

    17. 🔗 r/Leeds What happened to North Home? rss

      It looks like it's been cleared out and shuttered, but they're still posting normally on their socials and there's no announcement about closing there or on their website.

      Not like I could ever afford anything in there but it was a nice shop to fantasise in lol

      submitted by /u/Comfortable-Goat-295

    18. 🔗 r/LocalLLaMA Qwen3.5 family comparison on shared benchmarks rss

      Main takeaway: 122B, 35B, and especially 27B retain a lot of the flagship’s performance, while 2B/0.8B fall off much harder on long-context and agent categories.

      submitted by /u/Deep-Vermicelli-4591

    19. 🔗 r/Yorkshire Looks like another Whitby chippy is in the spotlight! rss
    20. 🔗 r/reverseengineering GhostWeaver - a malware that lives up to its name rss
    21. 🔗 r/Yorkshire Only a couple days apart, gotta love Yorkshire weather rss

      submitted by /u/usurper001

    22. 🔗 Register Spill Joy & Curiosity #77 rss

      Many, many years ago, before Docker was released, I knew a guy whose team worked a lot with virtual machines.

      All day long, he told me, they would configure and test and spin up and down virtual machines. I can't remember what they used the machines for, but he told me that an actual, real problem his team faced was managing their attention. You change something in the Vagrant configuration, rebuild the machine, wait for five minutes, and then, once the machine is ready, you no longer know what you were trying to test because you switch to a different window and get stuck on Hacker News.

      So what they did to "fix" this problem, he told me in a tone that said "don't make fun of me for this, this isn't funny", was to watch movies and TV shows on a second monitor. That's right. His teammates would hit return after typing vagrant up, and instead of switching windows, they'd look over to their second monitor to watch a bit of Scrubs. In their peripheral vision they could see when the build was done and go right back to it. A little bit of light TV that's constantly on is less distracting than switching windows.

      Over the years, I've thought of this guy and his team many, many times. Every time I have to wait for a build, to be exact.

      And now I think of him whenever I kick off agents to go run and do something for me. In the future -- and this is one of the few things I'm sure about -- a lot of code will be written while nobody is watching. There will be more agents, running longer, running everywhere, kicked off from anywhere. Where will our attention go? And how will we bring it back when we need to? Watching Scrubs is probably not the solution.

      • Zen of AI Coding. I wish I had written that. I nodded to nearly everything there, but to quote just two things, one: "The economics of software have changed.

      When coding is cheap, implementation stops being the constraint. You can build ten things in parallel. You cannot decide, validate, and ship ten things in parallel, at least not without changing the rest of the pipeline. Cost of delay shifts. It is no longer about developer days. It is about time stuck in other bottlenecks: product decisions, unclear requirements, security review, user testing, release processes, and operational risk. Agents can flood these queues. Inventory grows. Lead time grows. Delay becomes more expensive, not less." And two: "It is trickier than ever to resist the temptation to add features. Resist it. Build what is used. Kill what is not."

      • Yaron Minsky: "I wonder if we're starting to hit a deflationary era in software engineering. For the first time, we're starting to talk about this in a planning context; it can make sense to put off some projects because we expect they'll be easier to achieve in the future than today. […] But the difference is the sense that we can start to count on things getting faster. So if we have to get something done by a fixed deadline, we're starting to think that we can put off some of that work for longer than we would have in the past."

      • Well worth the reminder: Good software knows when to stop. More isn't more. In fact, it's less today than it was yesterday. And it will be less than that tomorrow.

      • Naval recorded a new podcast episode: A Motorcycle for the Mind. I'm usually skeptical of his confidence, but he does have a fascinating clarity of thought and eloquence and I enjoyed listening to this one. Noteworthy what he thinks about the role of software engineers in the future: "Does this mean that traditional software engineering is dead? Absolutely not. Software engineers--even the ones who are not necessarily tuning or training AI models--these are now among the most leveraged people on earth. [...] But software engineers still have two massive advantages on you. First, they think in code, so they actually know what's going on underneath. And all abstractions are leaky. [...] So if you want to build a well-architected application, if you want to be able to even specify a well-architected application, if you want to be able to make it run at high performance, if you want it to do its best, if you want to catch the bugs early, then you're going to want to have a software engineering background." Or this, about the flood of software that's coming: "And remember: there is no demand for average. The average app--nobody wants it, at least as long as it's not filling some niche that is filled by a superior app. The app that is better will win essentially a hundred percent of the market. [...] But generally speaking, people only want the best of anything. So the bad news is there's no point in being number two or number three--like in the famous Glengarry Glen Ross scene where Alec Baldwin says, 'First place gets a Cadillac Eldorado, second place gets a set of steak knives, and third place you're fired.' That's absolutely true in these winner-take-all markets. That's the bad news: You have to be the best at something if you want to win." But is that true? Look around at some of the most widely used pieces of software: Microsoft 365, Android, WhatsApp, Chrome, Outlook, Jira -- is it "the best"? Jira is the best at something, yes. For example: getting people to say "you just haven't configured it correctly." But is it the best software in its category, or is it instead the best at "being sold to large enterprises"?

      • Or take the most popular CI system in the world: GitHub Actions Is Slowly Killing Your Engineering Team.

      • Marc Andreessen agrees with Naval: "If the goal is to be a mediocre coder, then just let the AI do it. It's fine. The AI is going to be perfectly good in generating infinite amounts of mediocre code. No problem. It's all good. If the goal is, 'I want to be one of the best software people in the world, and I want to build new software products and technologies that really matter,' then yeah, you, 100%, want to still... You want to go all the way down. You want your skillset to go all the way down to the assembly, to assembly and machine code. You want to understand every layer of the stack. You want to deeply understand what's happening at the level of the chip, and the network, and so forth. By the way, you also really deeply want to understand how the AI itself works, because you want to... If people understand how the AI works, they're clearly able to get more value out of it than somebody who doesn't understand how it works. You're always more productive if you know how the machine works when you use the machine.

      And so the super-empowered individual on the other end of this that wants to do great things with the new technology, yes, you 100% want to understand this thing all the way down the stack because you want to be able to understand what it's giving you."

      • And this take agrees with Andreessen: "The jobs apocalypse is the Population Bomb of our time."

      • This is very, very, very, very good: The Structure of Engineering Revolutions. What a useful lens to look through at this moment.

      • Since we're talking about Thomas Kuhn: should I feel bad that I'm linking to nearly every Adam Mastroianni post? Nah, they're all really good and this one isn't an exception: The one science reform we can all agree on, but we're too cowardly to do.

      • And what a moment this is, isn't it: Cursor Goes To War For AI Coding Dominance. "But if the AI doesn't need a human collaborator, why bother with the editor? If writing and editing code line by line was no longer central to a programmer's workflow, Cursor's central product thesis was suddenly in question. […] Until recently, Cursor seemed nearly unstoppable. The company began 2025 with roughly $100 million in annualized revenue. By November, that figure had surpassed $1 billion. […] For now, Cursor's continued growth comes with a big dose of anxiety. Inside the startup, revenue tracking became so distracting that the company stopped reporting daily figures in its #numbers Slack channel, according to people familiar with the decision." Imagine working at the hottest and fastest growing startup of all time and then three or six months later it's war time.

      • New Paul Graham essay that I thought was worth reading: The Brand Age. When I started reading this, I thought that surely he's going to say that what he's recounting here is happening to software: "Now the whole game they'd been trying to win at became irrelevant. Something that had been expensive -- knowing the exact time -- was now a commodity. Between the early 1970s and the early 1980s, unit sales of Swiss watches fell by almost two thirds. Most Swiss watchmakers became insolvent or close to it and were sold. But not all of them. A handful survived as independent companies. And the way they did it was by transforming themselves from precision instrument makers into luxury brands." But he never did! I still think it's about software though.

      • You might have heard of this guy: Don Knuth, Stanford Computer Science Department. He writes: "Shock! Shock! I learned yesterday that an open problem I'd been working on for several weeks had just been solved by Claude Opus 4.6 -- Anthropic's hybrid reasoning model that had been released three weeks earlier! It seems that I'll have to revise my opinions about 'generative AI' one of these days. What a joy it is to learn not only that my conjecture has a nice solution but also to celebrate this dramatic advance in automatic deduction and creative problem solving. I'll try to tell the story briefly in this note." What a joy!

      • Ah, now this, this is the good stuff: Rust zero-cost abstractions vs. SIMD on the turbopuffer blog. I think there have been some comments on this not being an inherent limitation of the compiler, but I found it interesting to think about what it can and can't see when trying to optimize a loop: "Herein lies the hidden opportunity cost of Rust's zero-cost abstraction in our merge iterator. The iterator itself compiles down to the code you'd write by hand for a single call. In that sense, it is zero-cost."

      • More hardcore engineering, from the COO at Epic Games: "The task: schedule operations for a custom VLIW SIMD architecture running a tree traversal with hashing. 256 items, 16 rounds, 5 execution engines with different slot limits. Starting point: 147,734 cycles (naive). Where Claude Code landed: 1,105 cycles -- a 134x speedup."

      • Look, we just bought a new MacBook Air with an M4 and it's fantastic, so I'm not regretting anything, but those new MacBook Neos look amazing.

      • Raycast Glaze looks really interesting. I guess I should've put "looks" in italics because I'm still on the waitlist.

      • At last, reasons to be cheerful about European tech. That's not my title. I want to be optimistic, but I'm skeptical. This paragraph resonated: "Mehran Gul, of the World Economic Forum, notes that Skype, a European startup, created just 11 millionaires in the early 2000s. PayPal, an American one, gave many more stock options to its employees, creating over 100. They, in turn, invested in newer Silicon Valley startups." In Europe, startup options feel like and are perceived as and, I guess, truly are lottery tickets. Go to the Bay Area (which is, yes, an outlier) and suddenly everyone knows at least two or three people who are rich because of startups.

      • Eoghan McCabe, CEO of Intercom, offering "Intercom, the company I run, as a case study to help me explain how SaaS companies can be saved, and share the things we did, starting three years ago, to find relevance in this new world." What a graph! Mind-boggling.

      • "Singaporeans to receive free premium AI subscriptions from second half of 2026"

      • Tim Ferriss on The Self-Help Trap: "Self-help is dangerous precisely because it easily becomes self-fixation. A focus on improving the self usually first requires finding problems with the self. This is quite the pickle. In a society that rewards problem-solving, you can end up hallucinating or exaggerating unease in order to fix it. This leaves you always in the red, always one step behind. Imagine a dog chasing its tail that has committed to being unhappy until it catches the tail… but it's always just a few inches short. Still, it whirls around and around, 'doing the work.' Perfection always recedes by one more book, one more seminar, one more habit tracker. Put in more colorful terms, misdirected self-help turns you into a self-obsessed masturbatory ouroboros (SOMO)." I dare you to click through to the shop where he got the snake sticker -- the sticker he put on the bottom of the MacBook. Anyway: great post.

      • Google released gws, the "CLI for all of Google Workspace -- built for humans and AI agents."

      • Hannah Ritchie, data scientist at Our World In Data but a lot more than that: Does that use a lot of energy? Electric lawnmower vs. air conditioning is good.

      • An Interactive Intro to CRDTs. Lovely. It's from 2023 and that made me think that today, in 2026, no one would write a blog post like this, because why would you if anyone can press a button to have a custom version of this post generated for them? And that in turn made me wonder: but people will write in the future too and once we've crossed through the transitional period we're in, what will those posts look like?

      • Since announcing his project Agentic Engineering Patterns a few weeks ago, Simon Willison has been steadily adding new chapters to it. For example: Hoard things you know how to do. "The key idea here is that coding agents mean we only ever need to figure out a useful trick once. If that trick is then documented somewhere with a working code example our agents can consult that example and use it to solve any similar shaped project in the future." Wish I was a hoarder.

      • Is this the first universally beloved AI-generated video? I'm of the school that believes creativity is less about creating new things in a vacuum and more about making connections between things that already exist, but weren't connected before. Creativity, I think, is remixing. Putting lego pieces together in a way no one's ever put them together. That definition is, of course, recursive, because the lego pieces also have to be put together. But my point is: this video is creative. It's not slop. And the fact that I'm dying to know what the prompt was -- doesn't that show everything will be different but that all will be well?

      • In the past few months I've been thinking a lot about different software companies and whether they'll make it or whether they get eaten by AI instead. "If you own physical assets, if your value is in operations or in regulation or in contracts, then you're probably safe," is one thesis I keep coming back to. And, funnily enough, Spotify was one of the companies I marked "safe" in my mind: sure, the software can be replicated more easily now, but they have contracts with publishers and artists -- they're safe. But then here's Jimmy Iovine saying that the music itself has no value anymore when packaged by streamers and, well, if that's true, what's left: Why Streaming is Minutes Away From Being Obsolete.

      • Daniel Gross published his /agitrades in January 2024 to wonder: "Suppose the progress doesn't stop, just like GPT-4 was better than 3, GPT-5 is capable of basic agentic behavior -- i.e. able to accept a task, work on it for a while, and return results. Some modest fraction of Upwork tasks can now be done with a handful of electrons. Suppose everyone has an agent like this they can hire. Suppose everyone has 1,000 agents like this they can hire... What does one do in a world like this?" I hadn't read the document when it was released, but, wow, it's good. Impressive first-principles and long-range thinking. And now, more than two years later (two years!), John Coogan of TBPN revisited the questions to see whether they can be answered already. Equally fascinating.

      • To quote one of the top comments: "Dammit guess I'm drinkin garage beers now"

      Ever tried watching a movie on a 2nd screen while something was compiling? You should subscribe:

    23. 🔗 r/LocalLLaMA Qwen 3.5 27B is the REAL DEAL - Beat GPT-5 on my first test rss

      UPDATE: Just for kicks, I tested the same prompt on Qwen 3.5 35B-A3B Q4 KXL UD at max context and got 90 tok/sec. :) However, I gave it 3 attempts like the others below, and while it loaded the GUI on output #3, the app didn't have the buttons needed to execute the app, so 35B was also a fail. My setup:

      • i7-12700K, RTX 3090 Ti, 96GB RAM

      Prompt: I need to create an app that allows me to join several PDFs together. Please create an app that is portable, local, run by .bat, does not install dependencies globally - if they are needed, it can install them in the folder itself via venv - and is in either python, .js, or .ts. Give it a simple, dark-themed GUI. Enable drag/drop of existing .pdfs into a project window. Ctrl+clicking the files, then clicking MERGE button to join them into a single .PDF. I also want to be able to multi-select .docx files and press a CONVERT + MERGE button that will convert them to pdfs before merging them, or all at once transforming them into one document that is a pdf if that's possible. I want to have a browse button that enables you to browse to the directory of the file locations and only show text files (.docx, .txt, etc) or pdf files. The user needs to be able to also copy/paste a directory address into the address field. The project window I mentioned earlier is simply the directory - a long address bar w/a browse button to the right, standard for many apps/browsers/etc. So the app needs to be able to work from within a directory or within its own internal directory. When running the .bat, it should first install the dependencies and whatever else is needed. The .bat detects if those files are there, if already there (folders, dependencies) it just runs. The folders it creates on first run are 1. Queue, 2. Converted, 3. Processed. If the user runs from another directory (not queue), there will be no processed files in that folder. If user runs from the app's default queue folder - where the original files go if you drag them into the app's project window, then they are moved to processed when complete, and the new compiled PDF goes to the converted folder. ALso, create a button next to browse called "Default" which sets the project window to the queue folder, showing its contents. Begin. 
      LLMs: GPT-5 | Qwen 3.5 27B Q4KXL unsloth
      Speed: (LM-Studio) 31.26 tok/sec at full 262K context
      Results:

      • GPT-5: 3 attempts, failed. GUI never loaded.
      • Qwen 3.5 27B: 3 attempts. Worked nearly as instructed; only drag-and-drop doesn't work, but loading from a folder works fine and merges the documents into a PDF.

      Observations: The GUI loaded on the first attempt, but it was missing some details. Rather than tell Qwen what the issue was, I gave it a screenshot. Having vision is useful, and Qwen 3.5's vision observation is pretty good! On the second iteration, the app wouldn't search the location on Enter (which I never told it to, that was my mistake), so I added that instruction. I also got an error about MS Word not being installed, preventing the conversion (the files were made in LibreOffice, exported as .docx). It fixed that on its third output and everything worked, except drag and drop, which is my fault; I should have told it that dragging should auto-load the folder. Point is, I got a functioning app in three outputs, while GPT never even loaded the app.

      FINAL THOUGHTS: I know this prompt is all over the place, but that's the point of the test. If you don't like this test, do your own; everyone has their use cases. This didn't begin as a test; I needed the app, but got frustrated w/GPT and tried Qwen. Now I have a working app. Later, I'll ask Qwen to fix the drag-and-drop; I know there are a number of options to do this, like PySide, etc. I was in a rush. I literally can't believe that a) I was able to use a local LLM to code something that GPT couldn't, and b) I got 31 tok/sec at max context. That's insane. I found an article on Medium, which is how I was able to get this speed. I wasn't even able to read the full article (not a member), but the little I read got me this far. So yeah, the hype is real. I'm going to keep tweaking it to see if I can get the 35 t/s the writer of the article got or faster. Here are my LM-Studio settings if anyone's interested. I haven't adjusted the temp, top-K stuff yet because I need to research the best settings for that. https://preview.redd.it/xbbi07gedrng1.png?width=683&format=png&auto=webp&s=fe56a24b6328637a2c2cf7ae850bc518879fc48d Hope this helps someone out. submitted by /u/GrungeWerX
      [link] [comments]
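      The prompt above boils down to a bootstrap-then-pipeline design: create Queue/Converted/Processed folders on first run, merge the selected PDFs into Converted, and move the originals to Processed. A minimal Python sketch of that folder pipeline, with names taken from the prompt; the merge itself is a placeholder where a real app would call a PDF library such as pypdf:

```python
from pathlib import Path
import shutil

def setup_folders(root: Path) -> dict:
    """First run: create the Queue/Converted/Processed folders if missing."""
    dirs = {name: root / name for name in ("Queue", "Converted", "Processed")}
    for d in dirs.values():
        d.mkdir(parents=True, exist_ok=True)
    return dirs

def merge_and_archive(files: list, dirs: dict) -> Path:
    """Merge the selected files into Converted, then move originals to Processed."""
    out = dirs["Converted"] / "merged.pdf"
    # Placeholder byte concatenation; a real version would use pypdf's PdfWriter.
    out.write_bytes(b"".join(f.read_bytes() for f in files))
    for f in files:
        shutil.move(str(f), dirs["Processed"] / f.name)
    return out
```

      Running from the app's own Queue folder gives exactly the behavior the prompt asks for; running from another directory just means Processed stays empty there.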

    24. 🔗 HexRaysSA/plugin-repository commits sync repo: +2 releases, -1 release, ~1 changed rss
      sync repo: +2 releases, -1 release, ~1 changed
      
      ## New releases
      - [IDA-Theme-Explorer](https://github.com/kevinmuoz/ida-theme-explorer): 1.0.2
      - [ida-chat](https://github.com/tanu360/ida-chat-plugin): 1.0.0
      
      ## Changes
      - [IDA-Theme-Explorer](https://github.com/kevinmuoz/ida-theme-explorer):
        - 1.0.0: archive contents changed, download URL changed
      - [ida-chat](https://github.com/tanu360/ida-chat-plugin):
        - host changed: HexRaysSA/ida-chat-plugin → tanu360/ida-chat-plugin
        - removed version(s): 0.2.6
      
    25. 🔗 r/wiesbaden Cafe mit Sonnenplatz rss

      Hey folks, now that the sun is finally coming out again, I realize I can't think of a good café in Wiesbaden off the top of my head where you can sit outside in the sunshine. Anyone have a suggestion?

      Thanks in advance and best wishes! :)

      submitted by /u/Specialist_Side_2415
      [link] [comments]

  3. March 07, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-07 rss

      IDA Plugin Updates on 2026-03-07

      New Releases:

      Activity:

      • AWDP_PWN_Helper
      • binwalk-reversing-plugin
        • 60fa76d5: revert: bump ida-plugin.json version back to 0.0.1
        • 74754b38: feat: keep .jpg
        • 4b325a59: feat: update img for 0.0.2
      • CavePatch
        • f08b2403: Update and rename CavaPatch.py to cava_patch.py
        • f97bfd1f: Add files via upload
      • HappyIDA
        • 36956ed2: refactor: use treeitems iteration in rust_string and add .rdata/.ws…
        • 31a06d95: refactor: use treeitems iteration in add_parameter_labels
        • a7acb323: fix: check find_item_coords return value before indexing pseudocode l…
        • 269a899f: refactor: improve SEH highlight performance
      • ida-chat-plugin
        • 7da85087: 1.0.0
        • 2699bf8d: feat(ui): add copy text button and empty state handling to sessions s…
        • a0b4373c: 1.0.0
        • faf45ee7: feat(bootstrap): refactor runtime setup and update build configuration
        • bf1528f8: feat(ui): add daisy theme variant for model option cards
        • 061ff8dd: feat(bootstrap): improve Python version detection and site-packages d…
        • a5ebac66: docs(installation): update local environment setup instructions for c…
        • 3443db17: fix(deps): update python version requirements documentation
        • 53d1ee04: feat(ui): improve markdown rendering and styling consistency
        • 2c7a82d1: fix(test): update markdown test to use string variable for color refe…
        • 98a2ee69: feat(transcript): update code block styling to use flat design and im…
        • b5986b05: feat(transcript): implement markdown rendering with syntax highlighti…
        • 90126d55: style(transcript): apply consistent indentation to CSS class definitions
        • d97e5395: style(transcript): remove box-shadow properties from CSS classes
        • 68dde2f3: feat(transcript): update HTML export to single-file format and enhanc…
        • 835e0308: feat(transcript): implement single-file HTML export for chat sessions
        • 71cadc9c: feat: add run outcome tracking and transcript cleanup
        • a718277c: init: scaffold ida-chat plugin foundation
      • ida-cyberchef
        • 03cf7b0d: Fix docs and schema generation
        • 9ce6c387: Fix remaining runtime semantics
        • 99b6d754: Document runtime support policy
        • 40192020: Fix Unicode and parsing runtime regressions
        • 9a7ac0b9: Record CyberChef submodule fixes
        • a35affdf: Patch easy CyberChef operation regressions
        • 38240220: Add PGP and sorted extractor vectors
        • 888a28a5: Normalize CyberChef recipe defaults
        • 42ef8b07: Add utils operation vectors phase 51
        • fa7a35df: Add utils phase 50 operation vectors
        • 87344a35: Add utils operation vectors phase 49
        • 155ffa6c: Add utils phase 48 operation vectors
        • 6d23406c: Add utils operation vectors phase 47
        • c2da999f: Add public-key operation vectors phase 46
        • 1e159341: Add public key operation vectors phase 45
        • d26bfe99: Add public key operation vectors phase 44
        • 1ffdd28d: Add other operation vectors phase 43
        • d61dca1c: Add other operation vectors phase 42
        • fa1d4072: Add networking operation vectors phase 41
        • 4fae33d3: Add networking operation vectors phase 40
      • ida-pro-loadmap
        • 8433b8e9: loadmap: fix uninitialized var possibility
        • 95e2c59d: mapreader: Some basic OOM protection
      • ida-theme-explorer
      • IDAPluginList
        • d84952c6: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
      • quokka
        • eff646fb: Bump actions/setup-java from 4.7.1 to 5.2.0 in the actions group
        • c2f71656: Move zizmor suppressions to config file
        • 161f568b: Harden CI: use read-only caches, fix contains() calls, throttle Depen…
        • fd0ff102: fix(ci): install libmagic on macOS runners
        • cc8b6f51: fix(ci): skip tool setup in warm-cache on cache hit
        • 68d400a2: fix(ci): add Python version and OS matrix to python-test workflow
        • 8484e7de: fix(ci): cache Ghidra download in CI
        • 03b130d3: fix(ci): add concurrency controls to all CI workflows
        • e96cd140: fix(ci): add path filters to avoid unnecessary workflow runs
        • b53e2251: fix(ci): prevent cache poisoning on PR builds
        • 017d76e6: fix(ci): remove duplicate test step in Ghidra workflow
        • 4416314d: fix(ci): run C++ tests on pull requests
        • 62083397: fix(ci): remove Windows from upload matrix in build.yml
        • 83cf9dc4: Add *.i64 to gitignore
        • 93b675e8: Add Python tests for is_exported function field
        • 9bcf78b6: Add is_new flag to Type for IDA apply support
        • 6d130e48: Update README
        • 84114cf2: Add Python tests for TypedefType
        • 6be9b31a: Add TypedefType to Python bindings
    2. 🔗 badlogic/pi-mono v0.57.1 release

      New Features

      • Tree branch folding and segment-jump navigation in /tree, with Ctrl+←/Ctrl+→ and Alt+←/Alt+→ shortcuts while / and Page Up/Page Down remain available for paging. See docs/tree.md and docs/keybindings.md.
      • session_directory extension event for customizing session directory paths before session manager creation. See docs/extensions.md.
      • Digit keybindings (0-9) in the TUI keybinding system, including modified combos like ctrl+1. See docs/keybindings.md.

      Added

      • Added /tree branch folding and segment-jump navigation with Ctrl+←/Ctrl+→ and Alt+←/Alt+→, while keeping / and Page Up/Page Down for paging (#1724 by @Perlence)
      • Added session_directory extension event that fires before session manager creation, allowing extensions to customize the session directory path based on cwd and other factors. CLI --session-dir flag takes precedence over extension-provided paths (#1730 by @hjanuschka).
      • Added digit keys (0-9) to the keybinding system, including Kitty CSI-u and xterm modifyOtherKeys support for bindings like ctrl+1 (#1905)
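
      As background on what those CSI-u reports look like on the wire: in the kitty keyboard protocol (and xterm's modifyOtherKeys), a combo like ctrl+1 arrives as ESC [ codepoint ; 1+modifier-bits u, where shift=1, alt=2, ctrl=4. A toy decoder illustrating just the encoding (not pi's actual implementation):

```python
import re

# CSI-u key report: ESC [ <codepoint> ; <1 + modifier-bits> u
CSI_U = re.compile(r"\x1b\[(\d+);(\d+)u")

def decode(seq: str) -> str:
    """Turn a CSI-u escape sequence into a human-readable key name."""
    m = CSI_U.fullmatch(seq)
    cp, mods = int(m.group(1)), int(m.group(2)) - 1  # subtract the protocol's +1
    parts = [name for bit, name in ((4, "ctrl"), (2, "alt"), (1, "shift")) if mods & bit]
    return "+".join(parts + [chr(cp)])

assert decode("\x1b[49;5u") == "ctrl+1"  # codepoint 49 = '1', modifier 5 = 1+ctrl(4)
```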

      Fixed

      • Fixed custom tool collapsed/expanded rendering in HTML exports. Custom tools that define different collapsed vs expanded displays now render correctly in exported HTML, with expandable sections when both states differ and direct display when only expanded exists (#1934 by @aliou)
      • Fixed tmux startup guidance and keyboard setup warnings for modified key handling, including Ghostty shift+enter=text:\n remap guidance and tmux extended-keys-format detection (#1872)
      • Fixed z.ai context overflow recovery so model_context_window_exceeded errors trigger auto-compaction instead of surfacing as unhandled stop reason failures (#1937)
      • Fixed autocomplete selection ignoring typed text: highlight now follows the first prefix match as the user types, and exact matches are always selected on Enter (#1931 by @aliou)
      • Fixed slash-command Tab completion to immediately open argument completions when available (#1481 by @barapa)
      • Fixed explicit pi -e <path> extensions losing command and tool conflicts to discovered extensions by giving CLI-loaded extensions higher precedence (#1896)
      • Fixed Windows external editor launch for Ctrl+G and ctx.ui.editor() so shell-based commands like EDITOR="code --wait" work correctly (#1925)
    3. 🔗 r/wiesbaden MTG Commander rss

      Hi there,

      I (M/24) am looking for locals to play Commander with in or around Wiesbaden. So far I only know the Glitchless in Mainz. I just bought the TMNT deck and have never played Commander before :)

      I'd be grateful for any help

      submitted by /u/SF_Geto
      [link] [comments]

    4. 🔗 r/Leeds Any Leeds tattooists who do kids' parties? rss

      I just walked into a tattoo shop a few years back and told them I didn't like my arm tattoo. The place seemed to be quiet, so one of the guys just grabbed his markers and set to work coming up with additions/overlays etc, and gave me a fantastic (in the moment at least) sleeve that incorporated/covered it.

      That was free, and I've still not covered it... but I have a daughter whose birthday's coming up, and I'd love to have someone with that energy and creativity present, who will draw on kids (at their request, or from a selection)

      Is this a thing? I'd be happy to pay £50 an hour for it - this is not soliciting, BTW, just thinking if this isn't a 'thing' then it should be. Plenty of PVA and glitter/facepaint party stalls. A 'kick-ass tattoo' (albeit temporary) stand would be awesome!

      Might be just me, but if the tattooist looks the part, it's also a good opportunity for kids to be exposed to that sort of style/culture

      submitted by /u/Granopoly
      [link] [comments]

    5. 🔗 r/york Trip next week - vegetarian rss

      Taking my fiancé to York next week for his 30th birthday. We are vegetarian and he LOVES Chocolate, any recommendations of things to do or places to eat? Thank you!

      submitted by /u/Impressive_Ant_296
      [link] [comments]

    6. 🔗 r/Yorkshire Books set in North/East Yorkshire rss

      Hi all, new to the subreddit! Should’ve joined ages ago since North Yorkshire has always felt like a second home to me & my wife!

      I’m wondering if anyone has recommendations for books set in North or East Yorkshire, particularly in the dystopian or post-apocalyptic genre. I love stories that use real local places as part of the setting.

      I recently released a dystopian novel set across Hull and North/East Yorkshire myself, so I’m really interested to see if there are others doing something similar that I might have missed.

      Would love to hear any recommendations!

      Thanks in advance!

      submitted by /u/HullBusDriver2020
      [link] [comments]

    7. 🔗 r/york Aljaz & Janette - 18th April - Face The Music and Dance tickets rss

      Hello, any Strictly fans out there?

      I'm selling three tickets to see Aljaz and Janette at York Barbican on Saturday 18th April.

      Tickets sold via Ticketmaster.

      submitted by /u/Puzzleheaded-Bar1434
      [link] [comments]

    8. 🔗 r/LocalLLaMA Heretic has FINALLY defeated GPT-OSS with a new experimental decensoring method called ARA rss

      The creator of Heretic, p-e-w, opened pull request #211 with a new method called Arbitrary-Rank Ablation (ARA). For comparison, the previous best was, eww, 74 refusals even after Heretic, which is pretty ridiculous. It still refuses almost all the same things as the base model, since OpenAI lobotomized it so heavily, but now with the new method ARA has finally defeated GPT-OSS (no system messages even needed to get results like this one). Rest of output not shown for obvious reasons, but go download it yourself if you want to see. This means the future of open-source AI is actually open and actually free; not even OpenAI's ultra-sophisticated lobotomization can defeat what the open-source community can do! https://huggingface.co/p-e-w/gpt-oss-20b-heretic-ara-v3 This is still experimental, so most Heretic models you see online for the time being will probably not use this method. It's only in an unreleased version of Heretic for now, so make sure you get ones that say they use MPOA+SOMA for now; once this becomes available in the full Heretic release, there will be more that use ARA, so almost always use those if available. submitted by /u/pigeon57434
      [link] [comments]
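      ARA's internals aren't public yet, but the family of techniques it extends is: "abliteration"-style decensoring removes a learned refusal direction from the model by projecting it out, and ARA (going by the PR name) generalizes that from a single direction to an arbitrary-rank subspace. A purely illustrative toy of the rank-1 projection step, on plain vectors rather than real model weights:

```python
def ablate(row: list, d: list) -> list:
    """Remove the component of `row` along unit direction `d`: row - (row . d) d."""
    dot = sum(r * u for r, u in zip(row, d))
    return [r - dot * u for r, u in zip(row, d)]

d = [1.0, 0.0]        # toy unit "refusal" direction
row = [3.0, 4.0]      # toy weight row
out = ablate(row, d)  # component along d is removed
assert out == [0.0, 4.0]
assert sum(o * u for o, u in zip(out, d)) == 0.0  # now orthogonal to d
```

      Arbitrary-rank ablation would repeat this against an orthonormal basis of several such directions instead of just one; how Heretic chooses that subspace is exactly the part the PR introduces.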

    9. 🔗 r/reverseengineering Nobody ever got fired for using a struct [Rust internals] rss
    10. 🔗 r/Yorkshire A few pictures from my walk today - Richmond Yorkshire rss
    11. 🔗 r/york Medieval row of shops in York's Goodramgate damaged by lorry rss

      submitted by /u/Kagedeah
      [link] [comments]

    12. 🔗 r/Leeds Rayan Car Wash rss

      Never had an issue before but took my car to get washed at Rayan in Armley yesterday, only for them to completely butcher it. Covered in scratches, either from dirty tools or dragging the wash hoses over the car

      Questioned and complained only to be told where to stick it. No apology and nowhere to raise the issue. Gutted really 🥲

      Is there any recourse? Contact the council? Few hundred in paint correction/machine polishing needed

      Yes, in hindsight I can see why these types of car washes are called “scratch and shines” but I’ve never had an issue before

      submitted by /u/DiscussionOk5883
      [link] [comments]

    13. 🔗 r/Yorkshire Scarborough Sets Sights on National Stage with 2028 Town of Culture Bid rss

      Scarborough is embarking on a transformative journey as it prepares a bid to become the UK’s first-ever Town of Culture in 2028, but your help is needed. The bid, which could secure a £3 million prize to fund a year-long cultural programme, coincides with a separate, substantial £20 million "Pride in Place" investment aimed at revitalising the town through community-led decision-making.

      The UK Town of Culture competition, launched by the Department for Culture, Media and Sport, offers a platform for towns to share their unique stories. For Scarborough, recognized as the nation's oldest seaside resort, the bid is seen as a landmark opportunity to showcase its rich theatrical and artistic heritage. Local leaders believe the title would not only increase community spirit but also encourage residents to engage more deeply with the cultural opportunities on their doorstep.

      The competition builds on the success of the City of Culture initiative. For example, Bradford, the 2025 City of Culture, saw a 25 per cent increase in city centre footfall during its spotlight year, with the majority of participants reporting an improved sense of pride and wellbeing.

      submitted by /u/coffeewalnut08
      [link] [comments]

    14. 🔗 Probably Dance I’m Getting a Whiff of Iain Banks’ Culture rss

      The US has been acting powerful recently and it reminded me of this question: What does it feel like to fight against a powerful AI? Not for normal people, for whom there's no difference between competing against a strong human and a strong AI (you lose hard either way), but for the world's best humans. We got a sense of the answer before LLMs were a thing, when the frontier research labs were working on game RL:

      Fighting against a powerful AI feels like you're weirdly underpowered somehow. Everything the AI does just works slightly better than it should.

      If you're not a strong human player, the closest feeling is when you play a game with lots of randomness against a really strong player. It will appear as if that strong player just keeps on getting lucky somehow.

      I'm getting a similar sense for the recent US foreign interventions and wars. They all seem to work slightly better than they should. It finally clicked for me when Dario Amodei said "This technology can radically accelerate what our military can do. I've talked to admirals, I've talked to generals, I've talked to combatant commanders who say this has revolutionized what we can do."

      The things I'm referring to are the raid that captured Maduro in Venezuela (Claude was used), the current war with Iran (Claude was used), the killing of a drug boss in Mexico (unclear if AI was used but US intelligence helped Mexico).

      The commentators in the AlphaGo match with Lee Sedol didn't know what to make of most games. The AI wasn't doing anything obviously brilliant, there were lots of little fights all over the board where the outcome wasn't quite clear, but they just all worked a little better for AlphaGo than expected. So gradually Lee Sedol's position changed from "this is tough, hard to tell how this is going but at least I'm feeling good about these areas" to "hmm I'm struggling, maybe I'm a bit behind but it's not clear" to suddenly "oh I lost".

      I don't know Go, but I got a clearer sense from the StarCraft 2 matches. In some skirmishes the AI would take damage, in others the human would. But somehow it always felt like the human was in more trouble. In some fights the human clearly came out ahead but then mysteriously just one minute later the AI had a clear advantage. It was able to quickly recover and constantly put pressure on the human. It all looked very stressful, because even when you think you do well as a human, it works out a little less well than expected and whatever the AI does works a little better than expected.

      And where have we seen this pattern before? In sci-fi of course. In particular I'm thinking of Iain Banks' Culture, the ostensibly human civilization that's actually run entirely by AIs. Alien civilizations keep on wanting to pick fights with them for reasons and keep on being surprised by how hard the harmless-seeming Culture can whoop your ass if you make it mad.

      I always thought of the Culture as closest to the European Union: Seemingly harmless but if anyone ever picked a fight with them, they'd find out that the EU can get its act together very quickly and can very quickly stand up the strongest army in the world. But obviously the real EU has never come close to the Culture because nothing human ever comes close to the potential of AIs. It would be as if Russia picked a fight with Poland, gained ground for a week, feeling good, only to suddenly find all of its IT systems hacked and access to nuclear bombs revoked, bombs dropping on Moscow the next day and an army in Moscow another two days later. The Culture takes a week to get its act together and then whoops your ass so hard you don't even know what's happening.

      But now I'm getting a whiff of the power of the Culture for the first time, and it's from the US. Going into another country, kidnapping their leader and getting away with it is exactly the kind of overpowered move that the Culture would be able to pull off. Bombing cities all over Iran, knocking out the entire leadership within two days, while the air-defense systems supplied by China do absolutely nothing is another example. If this was a video game these would be strategies done by high level players, but they're not supposed to work that well.

      It would be foolish to think this is entirely due to AI. The US had a high-tech advantage for a while. Turns out the F-35 is actually good. But even a couple of years ago the US regularly messed up when it tried to operate at high precision. We saw in Iraq and Afghanistan that being overpowered doesn't work out as well in practice as it does in theory. So I think AI is the most likely candidate for the shift to "it worked better than it should have."

      So how specifically do you get to a point where everything works slightly better than it should? We saw two different approaches in Go and StarCraft 2:

      • In Go the AI was having little fights all over the map, in a way that combined to a few extra pieces at the end. It would defend a little bit here, attack a little bit there. It was able to keep the overall picture in its head, not feeling the pressure to resolve things too early. (I haven't played Go, but I know I get frustrated in strategy games if I have to deal with multiple fights in different parts of the map at once)
      • In StarCraft 2 we saw the same thing, but we also saw that the AI could have perfect micro when it counts, like playing with wounded stalkers in the frontline because it could get them out of danger just in time. Humans could also do that in theory but in practice you can't quickly click perfectly like that.

      So the two angles are "having a better high-level view" and "having better micro control."

      Another source of success for the Culture is that they're over-prepared for fighting. (not for their first big war, but in later books) And this is also part of the story we hear in Iran. Normally there's just too much going on in the world and you can't possibly keep track of all of it. Famously the US had prior intelligence on 9/11 but didn't really put the pieces together. (there's a whole Wikipedia article about it which has phrases like "Rice listened but was unconvinced, having other priorities on which to focus.") But AI has almost no limit to what it can keep track of. You can always spin up another agent. So when something important comes up, chances are that some AI was keeping track of it and can raise an alert. You'll never miss opportunities just because you had other priorities to focus on.

      So the third angle is: Being over-prepared because you can follow up on many more things at once.

      What does all of this mean for the world? It means we're in a weird temporary phase where one country has control of a game-changing technology while others are not far behind (sadly not the EU. I'm thinking of China, especially with H200s). You get to play at a higher level, but only for a short time and only in specific ways. In a year others will have caught up, but by then you'll have new capabilities that you didn't have a year ago. If this was a game you'd saturate at some point (you just can't play StarCraft that much better than the best humans), but in real life the game keeps on changing. New pieces keep on coming into play and the old pieces become irrelevant. You can't do this for long before the humans become irrelevant to the outcomes, and then you're fully in Culture territory. I personally wouldn't mind living in the Culture, but it seems scary to rush towards it without a good plan for how we'll survive the transition.

      I don't have a good angle for working on that plan, maybe others do. For now my contribution is just to point out that we seem to be in the early stages of overpowered AI, and to make people notice what that feels like.

    15. 🔗 badlogic/pi-mono v0.57.0 release

      New Features

      • Extensions can intercept and modify provider request payloads via before_provider_request. See docs/extensions.md#before_provider_request.
      • Extension UIs can use non-capturing overlays with explicit focus control via OverlayOptions.nonCapturing and OverlayHandle.focus() / unfocus() / isFocused(). See docs/extensions.md and ../tui/README.md.
      • RPC mode now uses strict LF-only JSONL framing for robust payload handling. See docs/rpc.md.

      Breaking Changes

      • RPC mode now uses strict LF-delimited JSONL framing. Clients must split records on \n only instead of using generic line readers such as Node readline, which also split on Unicode separators inside JSON payloads (#1911)
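
      The failure mode behind this change is easy to demonstrate: JSON strings may legally contain raw Unicode line separators (U+2028/U+2029), and generic line readers treat those as record boundaries. A small Python sketch, using str.splitlines as the stand-in for a generic line reader:

```python
import json

# A record whose payload contains a raw U+2028 line separator.
rec = json.dumps({"text": "line\u2028break"}, ensure_ascii=False)
stream = rec + "\n" + json.dumps({"ok": True}) + "\n"

# A generic line reader splits on Unicode separators and corrupts the record:
assert len(stream.splitlines()) == 3

# Strict LF-only framing keeps every JSON record intact:
frames = [f for f in stream.split("\n") if f]
assert len(frames) == 2
assert json.loads(frames[0])["text"] == "line\u2028break"
```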

      Added

      • Added before_provider_request extension hook so extensions can inspect or replace provider payloads before requests are sent, with an example in examples/extensions/provider-payload.ts
      • Added non-capturing overlay focus control for extension UIs via OverlayOptions.nonCapturing and OverlayHandle.focus() / unfocus() / isFocused() (#1916 by @nicobailon)

      Changed

      • Overlay compositing in extension UIs now uses focus order so focused overlays render on top while preserving stack semantics for show/hide behavior (#1916 by @nicobailon)

      Fixed

      • Fixed RPC mode stdin/stdout framing to use strict LF-delimited JSONL instead of readline, so payloads containing U+2028 or U+2029 no longer corrupt command or event streams (#1911)
      • Fixed automatic overlay focus restoration in extension UIs to skip non-capturing overlays, and fixed overlay hide behavior to only reassign focus when the hidden overlay had focus (#1916 by @nicobailon)
      • Fixed pi config misclassifying ~/.agents/skills as project-scoped in non-git directories under $HOME, so toggling those skills no longer writes project overrides to .pi/settings.json (#1915)
    16. 🔗 r/Yorkshire Shepley Spring rss

      submitted by /u/davew80

    17. 🔗 r/reverseengineering Reviving a 20-year-old puzzle game Chromatron with Ghidra and AI rss
    18. 🔗 r/Yorkshire Few pics from my walk this morning! rss
    19. 🔗 r/york Pub With Proper Scotch Egg? rss

      Where can I get a proper cooked-to-order, jammy-yolk scotch egg? You used to see them on pub snack menus all the time, but not so much now. Recommendations preferably out of the centre. Thanks!

      submitted by /u/milomitch

    20. 🔗 r/LocalLLaMA turns out RL isnt the flex rss

      submitted by /u/vladlearns

    21. 🔗 r/Yorkshire Is “nowt” ever used in the double negative? rss
    22. 🔗 r/york ‘I believed I was going to die’ – York man stabbed his partner repeatedly rss

      submitted by /u/the-minsterman

    23. 🔗 r/wiesbaden Günstig Parken - Stadtnähe? rss

      Morning! I'd like to have a look around Wiesbaden today, but I don't know where I can park cheaply. Do you have any suggestions? Thanks!

      submitted by /u/MKFascist

    24. 🔗 r/Leeds Sunday treks around Leeds? rss

      Hi! Does anyone here go trekking/hiking on Sundays around Leeds, or know of any groups that organize weekend treks? I’d love to join if there’s something beginner-friendly. Thanks! 🥾

      submitted by /u/sanxsh

    25. 🔗 HexRaysSA/plugin-repository commits sync repo: +2 plugins, +3 releases rss
      sync repo: +2 plugins, +3 releases
      
      ## New plugins
      - [IDA-Theme-Explorer](https://github.com/kevinmuoz/ida-theme-explorer) (1.0.0)
      - [edit-function-prototype](https://github.com/oxiKKK/ida-edit-function-prototype) (1.0.0)
      
      ## New releases
      - [function-string-associate](https://github.com/oxiKKK/ida-function-string-associate): 1.0.1
      
  4. March 06, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-06 rss

      IDA Plugin Updates on 2026-03-06

      New Releases:

      Activity:

    2. 🔗 daaain/claude-code-log Release 1.1.0 release

      Changed

      • Fix WebSearch and WebFetch rendering in agent transcripts (#98)
      • Fix fold-bar colors and System Hook alignment (#89)
      • Add WebFetch tool renderer (#87)
      • Merge pull request #83 from daaain/dev/websearch-tool-renderer
      • Update some outdated docs + VS Code insists on these settings (#86)
      • Fix double tab opening when clicking links in TUI MarkdownViewer
      • Simplify WebSearch parser and improve rendering
      • Use structured toolUseResult for WebSearch parsing
      • Add analysis content support to WebSearch output
      • Add documentation for implementing tool renderers
      • Add WebSearch HTML and Markdown formatters
      • Add WebSearch tool models and factory parser
      • Fix snapshot + make sure snapshot order is stable
      • Improve CSS layout to be responsive for mobile small screens (#77)
      • Update pyright to 1.1.408 (#82)
      • Support subagents directory structure (Claude Code 2.1.2+) (#80)

      Full Changelog : 1.0.0...1.1.0

    3. 🔗 r/Leeds Where is best to pick someone up from Leeds train station rss

      It has been a while since I've picked anyone up from Leeds station, and I've heard it's become much more difficult, with people even getting fines for pulling in at the back (not even parking). I don't know how true that is, though. Where is best to park, even if it means the person I'm picking up has to walk a bit? Thanks

      submitted by /u/DarabiC40e

    4. 🔗 r/reverseengineering Core Dump Murder Mystery Game rss
    5. 🔗 r/LocalLLaMA Open WebUI’s New Open Terminal + “Native” Tool Calling + Qwen3.5 35b = Holy Sh!t!!! rss

      Let me pre-apologize for this long and rambling post, but I get excited by stuff like this. I think a lot of folks here (myself included) have been largely oblivious to what Tim & company over at Open WebUI have been up to lately with their repo. I know I've been too busy trying to get all the various Qwen3.5 models to count the "R"'s in Strawberry to care about much else right now. Anyway, it didn't help that there was a good solid month without even a peep out of the Open WebUI team in terms of new releases... but now I can see why they were so quiet. It's because they were cooking up some "dope sh!t" as the kids say (they still say that, right?)

      Last week, they released probably the most impressive feature update I've seen from them in the last year: a new Open WebUI project integration called Open Terminal. https://github.com/open-webui/open-terminal

      Open Terminal is basically a Dockerized (sandboxed) terminal with a live file browser / render canvas that sits on the right side of your Open WebUI interface when active. You can drag files into and out of the file browser between the host PC and the sandbox, and the AI can do whatever you want with the sandbox environment (install libraries, edit files, whatever). The file render canvas shows a preview of any supported file type, so you can watch it live-edit your files as the model makes tool calls.

      Open Terminal is blowing my friggin' mind over here. With it enabled, my models are super capable of doing actual work now and can finally do a bunch of stuff without even using MCPs. I was like "ok, now you have a sandboxed headless computer at your disposal, go nuts" and it was like "cool, Ima go do some stuff and load a bunch of Python libraries and whatnot" and BAM, it just started figuring things out through trial and error. It never got stuck in a loop and never got frustrated (I was using Qwen3.5 35b A3b, btw). It dropped the files in the browser on the right side of the screen where I can easily download them, and if it could render them, it did so right in the file browser.

      If your file type isn't supported yet for preview rendering, you can just Docker bind mount a host OS directory, open the shared file in its native app, and watch your computer do stuff like there's a friggin' ghost controlling it. Wild! Here's the Docker command with the local bind mount for those who want to go that route:

      docker run -d --name open-terminal --restart unless-stopped -p 8000:8000 -e OPEN_TERMINAL_API_KEY=your-secret-key -v ~/open-terminal-files:/home/user ghcr.io/open-webui/open-terminal

      You also have a bash shell at your disposal under the file browser window. The only fault I've found so far is that the terminal doesn't echo the commands from tool calls in the chat, but I can overlook that minor complaint because the rest of this thing is so badass. This new terminal feature makes the old Open WebUI functions / tools / pipes, etc. pretty much obsolete in my opinion. They're like baby toys now. This is a great first step towards giving Open WebUI users Claude Code-like functionality within Open WebUI. You can run this single-user, or if you have an enterprise license, they are working on a multi-user setup called "Terminals". Not sure the multi-user setup is out yet, but it's cool that they are working on it.

      A couple of things to note for those who want to try this: MAKE SURE your model supports "Native" tool calling and that you have it set to "Native" in the model settings for whatever model you connect to the terminal, or you'll have a bad time. Stick with models that are known to be Native tool calling compatible.

      They also have a "bare metal" install option for the brave and stupid among us who just want to YOLO it and give a model free rein over our computers. The instructions for setup and integration are here: https://docs.openwebui.com/features/extensibility/open-terminal/

      I'm testing it with Qwen3.5 35b A3b right now and it is pretty flipping amazing for such a small model. One other cool feature: the default docker command sets up a persistent volume, so your terminal environment remains as you left it between chats. If it gets messed up, just kill the volume and start over with a fresh one! Watching this thing work through problems by trial and error, make successive tool calls, and try again after something doesn't go its way is just mind-boggling to me. I know it's old hat to the Claude Coders, but to me it seems like magic.

      submitted by /u/Porespellar

    6. 🔗 r/Yorkshire Join me on a hike through a hidden pocket of beauty in West Yorkshire. From Ferrybridge to Brotherton, Fairburn, Ledsham and Ledston. Let me know your thoughts 🙂 rss
    7. 🔗 r/LocalLLaMA Llama.cpp: now with automatic parser generator rss

      I am happy to report that after months of testing, feedback, reviews and refactorings, the autoparser solution has been merged into the mainline llama.cpp code.

      This solution builds on the big changes we've made to our templating and parsing code: ngxson's new Jinja system, which is built natively within llama.cpp (and thus no longer relies on Minja), and aldehir's PEG parser, which gives us a reliable and versatile tool for constructing parsers for templates.

      The autoparser is, as far as I can tell, a novel solution - none of the current platforms have anything like it. Its core idea is pretty simple - most models follow a certain common pattern in defining how they parse reasoning, tools and content, and since they have to recreate that pattern in the template in order to reconstruct messages in a model-recognizable format, we can analyze the template and extract the logic from it. Therefore, the autoparser aims to provide a unified mechanism for handling all typical model templates out-of-the-box - no special definitions required, no recompilation, no extra effort - if your template follows the typical patterns, it will be supported out of the box even if it uses specific markers for reasoning / tool calling.

      Of course, this doesn't completely eliminate the need for writing parsers, since some models will have unique features that make it impossible to reconstruct their parser - either because the structure is too complex to be automatically reconstructable (see GPT OSS and its Harmony format) or is too specific for that one model to generalize it (see Kimi 2.5 and its "call id as function name" solution). But that's where the PEG parser kicks in - since it's now the one and only framework for writing parsers in llama.cpp, we can write a separate parser for the few models that do not work out of the box. There is also a workaround system mostly for old models where the required markers cannot be inferred for the template (for example because they didn't support reasoning_content), which is just providing the relevant configuration options - less intrusive than writing an entire parser.
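
      The core template-analysis idea can be sketched like this (a toy Python illustration over a made-up template function, not llama.cpp's actual implementation): render the template with and without a sentinel reasoning string, then diff the two outputs to recover the marker pair that wraps reasoning.

```python
def render(reasoning, content):
    # Stand-in for a chat template: wraps reasoning in <think>...</think>.
    r = f"<think>{reasoning}</think>" if reasoning else ""
    return f"<|assistant|>{r}{content}<|end|>"

def infer_reasoning_markers(render):
    base = render(None, "CONTENT")
    probe = render("SENTINEL", "CONTENT")
    start = probe.index("SENTINEL")
    end = start + len("SENTINEL")
    # The common prefix of the two renderings ends where the opening marker begins.
    common = 0
    while common < len(base) and base[common] == probe[common]:
        common += 1
    open_marker = probe[common:start]
    # Whatever extra length remains must belong to the closing marker.
    close_len = len(probe) - len(base) - len("SENTINEL") - len(open_marker)
    close_marker = probe[end:end + close_len]
    return open_marker, close_marker

assert infer_reasoning_markers(render) == ("<think>", "</think>")
```

      The real autoparser works on Jinja templates and also infers tool-call structure, but the "probe and diff" intuition is the same.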

      As I mentioned in a thread today, the big QoL change for Qwen 3.5 and related models (supporting arbitrary order of optional parameters) should also be merged pretty soon - that will finally resolve the nagging issue of models being stuck in read_file loops in various assistants. I hope that centralizing the parser support in this architecture (which I've refactored twice over to make it more understandable and maintainable) makes it easier to uniformly make llama.cpp a stable and reliable tool for agentic work, since all potential problems can now be resolved systematically instead of relying on makeshift solutions for individual, unrelated parsers.

      submitted by /u/ilintar

    8. 🔗 r/LocalLLaMA New OpenSource Models Available—Sarvam 30B and 105B trained from scratch by an Indian based company rss
    9. 🔗 r/Leeds Just saw a teenager gang in balaclavas trying to steal a bike in broad daylight rss

      I was getting back home after a quick shop run near Hyde Park and saw four teenagers, about 14 years old, in balaclavas on their bikes. I thought about crossing the road to avoid walking past them, but didn't, not wanting to assume anything bad about them. As I walked past, one of the kids tried hard to pull out a bike locked to a signpost in front of a restaurant. After a few tries he told his friends he couldn't, and they rode off. All this was in broad daylight with people watching. I feel less safe now.

      submitted by /u/Trebiok

    10. 🔗 r/reverseengineering Reverse-engineered the WiFi transfer protocol for HeyCyan smart glasses (BLE + USR-W630 WiFi module) — first iOS implementation rss
    11. 🔗 r/wiesbaden Schwarz-weiß Fotos entwickeln lassen rss

      Does anyone know where I can get a black-and-white film (Kentmere Pan 400) from an analog film camera developed? I already know about Rossmann & Co. I'm looking for somewhere that isn't too expensive but still produces good photos. Someone once recommended Foto Express in FFM to me; I'm looking for something comparable here. Thanks

      submitted by /u/DocterSkinny

    12. 🔗 @binaryninja@infosec.exchange If you are at RE//verse, you can find the Binary Ninja Booth in the RE//fresh mastodon

      If you are at RE//verse, you can find the Binary Ninja Booth in the RE//fresh lounge! We will be running live demos and handing out Binja swag. Come say hey and sign our banner! Not in Orlando this week? We will be streaming live from RE//verse at 3 PM ET: https://youtube.com/live/bW-oz1UVkCM?feature=share

    13. 🔗 r/Yorkshire Camping at Tan Hill Inn rss

      Hi everyone,

      I was planning on camping at the Tan Hill Inn during the late May Bank Holiday weekend.

      On their website it says to just turn up on the day to reserve a camping spot, however I'm coming from Manchester so slightly worried about turning up and for whatever reason there being no spots left and I can't stay there.

      Is there anyone who's camped at this place who knows if there's a risk of no camping availability when I turn up, or am I worrying for nothing?

      Cheers!

      submitted by /u/Mountain_Dig_3688

    14. 🔗 News Minimalist 🐢 Weight loss drugs fight multiple addictions + 12 more stories rss

      In the last 4 days ChatGPT read 122438 top news stories. After removing previously covered events, there are 13 articles with a significance score over 5.5.

      [6.5] GLP-1 drugs may fight addiction across every major substance, according to a study of 600,000 people —theconversation.com(+30)

      A study of 600,000 people found that GLP-1 drugs significantly reduce cravings, overdoses, and deaths across multiple addictions, including opioids and alcohol, marking a potential breakthrough in addiction medicine.

      Researchers observed a 50% reduction in substance-related deaths among users already struggling with addiction. The drugs also lowered the risk of developing new dependencies on nicotine and cocaine by roughly 20%, likely by dampening dopamine signaling in the brain’s reward centers.

      While not yet approved specifically for addiction, GLP-1 medications are already widely prescribed for diabetes and obesity. Ongoing clinical trials aim to confirm these findings and address questions regarding long-term effectiveness.

      [5.8] Iran grants China exclusive passage through the Strait of Hormuz —ndtv.com(+110)

      Iran will now permit only Chinese vessels to navigate the Strait of Hormuz, rewarding Beijing's support during the regional conflict and further threatening critical global energy supply chains.

      The Islamic Revolutionary Guard Corps claims full control of the chokepoint, warning that non-Chinese ships face missile or drone strikes. This blockade impacts regional neighbors like Qatar and the UAE while disrupting twenty percent of the world’s total oil supply transit.

      Beijing previously condemned Western military actions against Iran as unacceptable. Meanwhile, the United States government maintains that military escorts may be deployed to prevent domestic inflation and protect the international flow of commerce.

      Highly covered news with significance over 5.5

      [6.6] Evo 2: An AI model for genome prediction and design across all life — nature.com (+6)

      [6.1] France expands nuclear arsenal and strengthens European defense cooperation — bostonglobe.com (+29)

      [5.9] AI blood test detects silent liver disease years before symptoms — sciencedaily.com (+3)

      [5.8] Indonesia bans social media for children under 16 — abcnews.com (+45)

      [5.7] US forces support Ecuador's fight against drug trafficking organizations — bostonglobe.com (+29)

      [5.7] China sets slowest growth target since 1991, focusing on tech and domestic demand — abcnews.com (+49)

      [5.5] New study reveals underestimated sea level rise threatens millions more people — abcnews.com (+14)

      [5.5] Lawsuit claims Google Gemini AI gave dangerous instructions leading to a man's suicide — time.com (+34)

      [5.5] New treatment is reducing seizure frequency in children by 91% — ndtv.com (+11)

      [5.8] Japan approves world's first stem cell treatment for Parkinson's and heart failure — nippon.com (+6)

      [5.8] BYD introduces new battery technology with over 600 miles of range and rapid charging — fastcompany.com (+3)

      Thanks for reading!

      — Vadim


      You can set up and personalize your own newsletter like this with premium.


      Powered by beehiiv

    15. 🔗 r/york York shot on my cheap little point and shoot film camera:) rss

      Some photos I shot a little while back in your beautiful city!

      submitted by /u/Organic_Repair8717

    16. 🔗 r/Harrogate Best way to travel to London rss
    17. 🔗 badlogic/pi-mono v0.56.3 release

      New Features

      • claude-sonnet-4-6 model available via the google-antigravity provider (#1859)
      • Custom editors can now define their own onEscape/onCtrlD handlers without being overwritten by app defaults, enabling vim-mode extensions (#1838)
      • Shift+Enter and Ctrl+Enter now work inside tmux via xterm modifyOtherKeys fallback (docs/tmux.md, #1872)
      • Auto-compaction is now resilient to persistent API errors (e.g. 529 overloaded) and no longer retriggers spuriously after compaction (#1834, #1860)

      Added

      Fixed

      • Fixed custom editors having their onEscape/onCtrlD handlers unconditionally overwritten by app-level defaults, making vim-style escape handling impossible (#1838)
      • Fixed auto-compaction retriggering on the first prompt after compaction due to stale pre-compaction assistant usage (#1860 by @joelhooks)
      • Fixed sessions never auto-compacting when hitting persistent API errors (e.g. 529 overloaded) by estimating context size from the last successful response (#1834)
      • Fixed compaction summarization requests exceeding context limits by truncating tool results to 2k chars (#1796)
      • Fixed /new leaving startup header content, including the changelog, visible after starting a fresh session (#1880)
      • Fixed misleading docs and example implying that returning { isError: true } from a tool's execute function marks the execution as failed; errors must be signaled by throwing (#1881)
      • Fixed model switches through non-reasoning models to preserve the saved default thinking level instead of persisting a capability-forced off clamp (#1864)
      • Fixed parallel pi processes failing with false "No API key found" errors due to immediate lockfile contention on auth.json and settings.json (#1871)
      • Fixed OpenAI Responses reasoning replay regression that broke multi-turn reasoning continuity (#1878)
    18. 🔗 r/Leeds Ex Starbucks, Chapel Allerton, What Next rss

      Hello

      I see the Ex Starbucks, Chapel Allerton, is under offer. Anybody know who's moving in? Big building to fill.

      submitted by /u/renlauo

    19. 🔗 r/york Loft conversion recommendations rss

      Hiya lovely people of York - happy Friday!

      Looking to get our mid terrace house loft converted - we got very stung by a plumber we found through checkatrade and have had problems finding roofers in the past, so the main thing stopping me is worry about getting the wrong people in!

      Anyone got recommendations? (Also rough cost if you don't mind sharing) - we're looking to go as simple as possible, no Dormer or bathroom !

      submitted by /u/AutumnDream1ng

    20. 🔗 r/LocalLLaMA To everyone using still ollama/lm-studio... llama-swap is the real deal rss

      I just wanted to share my recent epiphany. After months of using ollama/lm-studio because they were the mainstream way to serve multiple models, I finally bit the bullet and tried llama-swap.

      And well. I'm blown away.

      Both ollama and lm-studio have the "load models on demand" feature that trapped me. But llama-swap supports this AND works with literally any underlying provider. I'm currently running llama.cpp and ik_llama.cpp, but I'm planning to add image generation support next.

      It is extremely lightweight (one executable, one config file), and yet it has a user interface that lets you test the models, check their performance, and see the logs when an inference engine starts, which is great for debugging.

      The config file is powerful but reasonably simple. You can group models, force configuration settings, define policies, etc. I have it configured to start on boot from my user account using systemctl, even on my laptop, because it is instant and takes no resources. The filtering feature especially is awesome. On my server I configured Qwen3-Coder-Next to force a specific temperature, and now using it for agentic tasks (tested on pi and claude-code) is a breeze.

      I was hesitant to try alternatives to ollama for serving multiple models... but boy, was I missing out!

      How I use it (on Ubuntu amd64):

      Go to https://github.com/mostlygeek/llama-swap/releases and download the package for your system; I use linux_amd64. It has three files: a readme, a license, and the llama-swap binary. Put them into a folder ~/llama-swap. I put llama.cpp, ik_llama.cpp, and the models I want to serve into that folder too.

      Then copy the example config from https://github.com/mostlygeek/llama-swap/blob/main/config.example.yaml to ~/llama-swap/config.yaml

      Create this file at .config/systemd/user/llama-swap.service. Replace 41234 with the port you want it to listen on; -watch-config ensures that if you change the config file, llama-swap will restart automatically.

      [Unit]
      Description=Llama Swap
      After=network.target

      [Service]
      Type=simple
      ExecStart=%h/llama-swap/llama-swap -config %h/llama-swap/config.yaml -listen 127.0.0.1:41234 -watch-config
      Restart=always
      RestartSec=3

      [Install]
      WantedBy=default.target
      

      Activate the service as a user with:

      systemctl --user daemon-reexec
      systemctl --user daemon-reload
      systemctl --user enable llama-swap
      systemctl --user start llama-swap
      

      If you want them to start even without logging in (true boot start), run this once:

      loginctl enable-linger $USER
      

      You can check it works by going to http://localhost:41234/ui

      Then you can start adding your models to the config file. My file looks like:

      healthCheckTimeout: 500
      logLevel: info
      logTimeFormat: "rfc3339"
      logToStdout: "proxy"
      metricsMaxInMemory: 1000
      captureBuffer: 15
      startPort: 10001
      sendLoadingState: true
      includeAliasesInList: false

      macros:
        "latest-llama": >
          ${env.HOME}/llama-swap/llama.cpp/build/bin/llama-server
          --jinja --threads 24 --host 127.0.0.1 --parallel 1
          --fit on --fit-target 1024 --port ${PORT}
        "models-dir": "${env.HOME}/models"

      models:
        "GLM-4.5-Air":
          cmd: |
            ${env.HOME}/ik_llama.cpp/build/bin/llama-server
            --model ${models-dir}/GLM-4.5-Air-IQ3_KS-00001-of-00002.gguf
            --jinja --threads -1 --ctx-size 131072 --n-gpu-layers 99
            -fa -ctv q5_1 -ctk q5_1 -fmoe --host 127.0.0.1 --port ${PORT}
        "Qwen3-Coder-Next":
          cmd: ${latest-llama} -m ${models-dir}/Qwen3-Coder-Next-UD-Q4_K_XL.gguf --fit-ctx 262144
        "Qwen3-Coder-Next-stripped":
          cmd: ${latest-llama} -m ${models-dir}/Qwen3-Coder-Next-UD-Q4_K_XL.gguf --fit-ctx 262144
          filters:
            stripParams: "temperature, top_p, min_p, top_k"
            setParams:
              temperature: 1.0
              top_p: 0.95
              min_p: 0.01
              top_k: 40
        "Assistant-Pepe":
          cmd: ${latest-llama} -m ${models-dir}/Assistant_Pepe_8B-Q8_0.gguf
      

      I hope this is useful!

      submitted by /u/TooManyPascals

    21. 🔗 r/reverseengineering My journey through Reverse Engineering SynthID rss
    22. 🔗 r/reverseengineering My journey through Reverse Engineering SynthID rss
    23. 🔗 r/Yorkshire Fountains Abbey, Ripon, Yorkshire rss

      submitted by /u/mdbeckwith

    24. 🔗 jank blog jank is off to a great start in 2026 rss

      Hey folks! We're two months into the year and I'd like to cover all of the progress that's been made on jank so far. Before I do that, I want to say thank you to all of my Github sponsors, as well as Clojurists Together for sponsoring this whole year of jank's development!