

to read (pdf)

  1. Neobrutalism components - Start making neobrutalism layouts today
  2. Debunking zswap and zram myths
  3. Building a Pipeline for Agentic Malware Analysis | Tim Blazytko
  4. Study of Binaries Created with Rust through Reverse Engineering - JPCERT/CC Eyes | JPCERT Coordination Center official Blog
  5. Letting AI Actively Manage Its Own Context | 明天的乌云

  1. March 29, 2026
    1. 🔗 r/york WATCH - Crowds gather for Palm Sunday procession at York Minster rss

      submitted by /u/Due_Ad_3200

    2. 🔗 r/Yorkshire Famous Grouse - Red Grouse, Yorkshire Dales rss
    3. 🔗 sacha chua :: living an awesome life Emacs Carnival March 2026: Mistakes and learning to reach out rss

      Mostly-similar versions follow: I started with French, translated it to English, and then tweaked some details. Thanks to Philip Kaludercic for hosting this month's carnival!

      In English

      The theme for this month's Emacs Carnival is Mistakes and Misconceptions. It’s difficult to pinpoint one thing that is clearly a mistake, but there are certainly things I could do more effectively.

      My configuration is very large because I assume my little modifications are only useful to me. They feel too specific, too idiosyncratic. I think people who create libraries or even packages used by lots of other people are awesome. I don't know if I could quite do that myself, though! Even submitting patches upstream and participating in the ensuing discussions sometimes requires more persistence than I have.

      The advantage of keeping my changes in my config is that even if I'm unsure, I can try something out, develop a rough prototype, and change my mind if necessary. When I publish them in a library or a package, I feel like I have to polish my ideas. It's hard to stick to just one idea long enough to refine it.

      My favorite situation is when I write about my attempt in a post, and it inspires someone else to implement their own version (or even a new library or package). On the other hand, if I learn to share my code, I can help more people, and I can also learn from more people and more conversations.

      Many of my modifications are short and easy to copy from my posts, but there are a few collections that depend on other functions, making them difficult to copy. These functions are scattered across several posts on my blog. For example, my functions for learning a language (I'm learning French at the moment) and for controlling Emacs by voice are becoming quite complex. The functions are also exported to my configuration, but the Emacs Lisp file is difficult to navigate if someone wants to copy them. I can extract the code into a file now that Org Mode can tangle to multiple files, but if I spend a little time replacing the "my-" prefix with a library prefix and move them to a repository, people could clone it and download updates. Even if no one uses it, the act of polishing and documenting it will probably be useful to me one day.

      So, it's possible that this is a mistake I often make in Emacs: thinking my functions are too idiosyncratic and too rough, so I leave them in my config. If I dedicate time to extracting the code into a library, I might benefit in the long run. I know lots of people are interested in using Emacs for language learning or by voice. There have been so many other libraries and workflows over the years, so I'm sure people are out there. I want to practice learning more with others. To start, I can make sure interested people can follow my progress through RSS feeds or Mastodon, I can respond when people send me messages, and I can collect contact info and send them a message when I post about the subject.

      I can write more if I reread the changes in my configuration each week, or if I reread my complete configuration for sections which I haven't yet written about. If I participate in virtual meetups or even livestream, I can find out what interests other people. If I submit patches and create tasks in my Org Mode inbox to track the discussions, I can practice refining my work.

      Prot has lowered his coaching prices to €10/hour. He's quite prolific when it comes to package development, so he can probably help me figure out how to get stuff out of my config and into a form that other people might be able to use. I've been enjoying learning with my French tutor. It might be worth experimenting with spending some money and time to improve my Emacs skills as well. Sure, it's totally just for fun, but I think it's valuable to practice learning with the help of others instead of stumbling around on my own.

      There's always more to learn, which is wonderful. So this is not really a mistake, just something that could be good to work on. Onward and upward!

      Check out Emacs Carnival March 2026: Mistakes and Misconceptions to see other people's takes on the topic.

      En français

      Le thème du Carnaval d'Emacs ce mois-ci est « les erreurs et les idées reçues ». C'est difficile d'identifier une chose qui soit clairement une erreur, mais il y a certainement des choses que je ne fais pas efficacement.

      Ma configuration est très volumineuse car je pense que mes petites modifications ne sont utiles que pour moi. Elles sont trop spécifiques, trop particulières. J'apprécie ceux qui créent des bibliothèques ou même des paquets que beaucoup d'autres utilisent, mais de mon côté, je ne me sens pas capable de le faire pour l'instant. Même soumettre des correctifs en amont et participer à la discussion qui s'ensuit parfois demande plus de persévérance que je n'en ai.

      L'avantage de garder mes modifications dans ma configuration est que, même si je ne suis pas sûre, je peux essayer quelque chose, développer un prototype préliminaire, et changer d'avis si nécessaire. Quand je les publie dans une bibliothèque ou un paquet, j'ai l'impression que je dois peaufiner mes idées. C'est difficile de s'en tenir à une seule idée assez longtemps.

      Ma situation préférée est quand je partage mes essais sur mon blog, et qu'ils inspirent une autre personne qui implémentera sa propre version, voire une nouvelle bibliothèque ou un nouveau paquet.

      En revanche, si j'apprends à partager mon code, je peux aider plus de personnes, et je peux aussi apprendre de plus de personnes et de plus de conversations.

      Beaucoup de mes modifications sont brèves et faciles à copier de mes articles, mais il y a quelques collections qui dépendent d'autres fonctions, ce qui les rend difficiles à copier. Les fonctions sont dispersées dans plusieurs articles sur mon blog. Par exemple, mes fonctions pour apprendre une langue (particulièrement le français) et pour contrôler Emacs par commande vocale deviennent plutôt complexes. Elles sont aussi exportées vers ma configuration, mais le fichier Emacs Lisp est difficile à parcourir si on veut les copier. Je peux extraire le code dans un fichier maintenant que Org Mode peut le tangler vers plusieurs fichiers, mais si je consacre un peu de temps à remplacer le préfixe « my- » par celui de la bibliothèque et à le pousser sur le dépôt, les gens pourraient le cloner et récupérer les mises à jour. Même si personne ne l'utilise, le fait de les peaufiner et de le documenter me sera utile un jour.

      Donc il est possible que ce soit une erreur que je commets souvent dans Emacs : je pense que mes fonctions sont trop idiosyncratiques et trop brutes, je les laisse donc dans ma configuration. Mais si je consacre du temps à extraire le code vers une bibliothèque, j'en bénéficierai peut-être à long terme. Je sais que beaucoup de gens sont intéressés par l'utilisation d'Emacs pour apprendre une langue ou pour la commande vocale. Il y a eu de nombreuses autres bibliothèques et flux de travail au fil des ans, donc je suis sûre qu'il y a du monde. Je veux m'entraîner à apprendre auprès de plus de personnes. Pour commencer, je peux m'assurer que les gens intéressés peuvent suivre mon progrès via les flux RSS ou sur Mastodon, je peux répondre quand on m'envoie des messages, et je peux recueillir les coordonnées et leur envoyer un message lorsque je publie un article à ce sujet.

      Je peux écrire davantage si je relis les modifications dans ma configuration chaque semaine, ou si je relis ma configuration entière pour les sections dont je n'ai pas encore parlé. Si je participe à des réunions virtuelles ou même si je diffuse en direct, je vais voir ce qui intéresse les autres. Si je soumets des correctifs et crée des tâches dans ma boîte de réception Org Mode pour suivre les discussions, je m'entraîne à affiner mon travail.

      Prot a baissé ses tarifs de coaching à 10 euros de l'heure. Il est très prolifique en matière de développement de paquets. J'apprends bien avec mon tuteur en français, donc cela vaut peut-être la peine de consacrer de l'argent et du temps à améliorer mes compétences sur Emacs. Certes, c'est juste pour le plaisir, mais c'est aussi important pour moi de m'entraîner à apprendre avec l'aide des autres au lieu de trébucher toute seule.

      J'ai toujours plus de choses à apprendre, ce qui est merveilleux. Ce n'est pas vraiment une erreur, mais plutôt un point à améliorer. En avant !

      Consultez Emacs Carnival March 2026: Mistakes and Misconceptions pour d'autres perspectives sur le sujet.

      You can e-mail me at sacha@sachachua.com.

    4. 🔗 r/york York city photos rss

      Absolutely stunning 😍

      submitted by /u/AdAccomplished3733

    5. 🔗 badlogic/pi-mono v0.63.2 release

      New Features

      • Extension handlers can now use ctx.signal to forward cancellation into nested model calls, fetch(), and other abort-aware work. See docs/extensions.md#ctxsignal (#2660)
      • Built-in edit tool input now uses edits[] as the only replacement shape, reducing invalid tool calls caused by mixed single-edit and multi-edit schemas (#2639)
      • Large multi-edit results no longer trigger full-screen redraws in the interactive TUI when the final diff is rendered (#2664)

      Added

      • Added ctx.signal to ExtensionContext and wired it to the active agent turn so extension handlers can forward cancellation into nested model calls, fetch(), and other abort-aware work (#2660)

      Fixed

      • Fixed built-in edit tool input to use edits[] as the only replacement shape, eliminating the mixed single-edit and multi-edit modes that caused repeated invalid tool calls and retries (#2639)
      • Fixed edit tool TUI rendering to defer large multi-edit diffs to the settled result, avoiding full-screen redraws when the tool completes (#2664)
    6. 🔗 r/LocalLLaMA LocalLLaMA 2026 rss

      we are doomed

      submitted by /u/jacek2023

    7. 🔗 r/Harrogate Local area opinions rss

      Looking at houses on Greenfields Road/Greenfields Drive.

      Anyone able to give me insight on what it’s like? I know it’s a bit of a cut-through road, but I think the houses are set back enough for traffic noise not to be a bother.

      I know Harrogate/Starbeck/Knaresborough are all lovely places and anti-social behaviour and crime are a lot less common than in Leeds, where I’m coming from. Just trying to get a feel for the area, that’s all.

      submitted by /u/GemzH

    8. 🔗 r/Harrogate Recycling at large supermarket in Harrogate? rss

      I have some empty liquid soap refills that I'm looking to recycle. Unfortunately I can't recycle them normally; according to the instructions, I need to take them to a 'large supermarket' to be recycled.

      Does anybody know where I might be able to take them? Thanks

      submitted by /u/leaftreefrog

    9. 🔗 Register Spill Joy & Curiosity #80 rss

      Do you know how it should work? Does the agent? Or does the codebase?

      Lately I've been thinking a lot about why sometimes using an agent leads to great results and other times it doesn't. My current theory: it depends on what knowledge about the task at hand is encoded where.

      If all the knowledge required to solve the task to your satisfaction is available either in your prompt, or in the codebase, or in the training data of the model, then things go fine.

      Things go badly if there's a gap. That is, if you wrongly assume the agent will know how to do something but it won't because that knowledge is neither in the codebase nor in the training data.

      If I ask the agent to fix a bug that has a very obvious solution, say a button's hover state doesn't activate on hover, then everything you need to know to fix it is available: the problem is in the prompt, the code explains what the button is, and what a hover state is lives in the training data.

      But what if there's a bug and you don't even know how to explain what the bug is or what the desired state is? Not good.

      Or what if you tell the agent to build you a feature and you assume it does so by going over here and adding that and then going over there and adding this, but the codebase allows fifteen other ways, and the training data doesn't say those fifteen other ways are bad? Not good.

      Sometimes the codebase and its documentation contain that information through types or tests or conventions. Other times the training data tells the agent that there's only one way to add a new endpoint in Rails or Next.js or SvelteKit. But if it's neither in the codebase nor in the training data, then you have to put it in the prompt.

      Theory is too big a word for these thoughts, yes, but I've been asking myself "where is the knowledge?" a lot when working with Amp this week and found it useful, so there you go, maybe you get something out of it too.

      • Last week I asked whether software is turning into a liquid and David Soria Parra, Member of Technical Staff at Anthropic and creator of MCP (meaning: someone who's seen things up close), replied: "I think people don't run the AI maximalist simulation of what this actually means and how far it will go just yet. Most code will just be ephemeral one time use"

      • John Regehr: Zero-Degree-of-Freedom LLM Coding using Executable Oracles. This is excellent and resonated with my thoughts from above. "When an LLM has the option of doing something poorly, we simply can't trust it to make the right choices. The solution, then, is clear: we need to take away the freedom to do the job badly. The software tools that can help us accomplish this are executable oracles. The simplest executable oracle is a test case--but test cases, even when there are a lot of them, are weak. […] When I look at the best software testing efforts out there, there's invariably something creative and interesting hiding inside. I feel like a lot of projects leave easy testing wins sitting on the floor because nobody has carefully thought about what test oracles might be used. Finding executable oracles for LLMs feels the same to me: with a little effort and critical thinking, we can often find a programmatic way to pin down some degree of freedom that would otherwise be available to the LLM to screw up." I also want to quote that lovely last paragraph, but I won't, because I want you to read everything else that leads up to it too. This is good stuff.

      • And here's Mary Rose Cook, singing harmonies on top of Regehr's lines when talking about freedom of expression and constraints for agents: Code generation that just works.

      • Cheng Lou has "crawled through depths of hell to bring you, for the foreseeable years, one of the more important foundational pieces of UI engineering (if not in implementation then certainly at least in concept): Fast, accurate and comprehensive userland text measurement algorithm in pure TypeScript, usable for laying out entire web pages without CSS, bypassing DOM measurements and reflow." It's called Pretext and it's impressive. I mean, look at this demo! Move the orbs around! Or the ASCII one or click on the logos in this one. According to Lou, this was "achieved through showing Claude Code and Codex the browsers ground truth, and have them measure & iterate against those at every significant container width, running over weeks." And yet the README doesn't mention that at all. That tells me we're past a big milestone.

      • If you're on desktop, see also this dragon that's built with Pretext.

      • Marc Brooker is asking: What about juniors? This is one of the most inspiring and motivating pieces of writing I've read in the past few months. I love the Wellington quote on engineering: "to define it rudely but not inaptly, it is the art of doing that well with one dollar, which any bungler can do with two after a fashion." And I love Marc's very own definition: "I believe that this is the core work of engineering: deeply understanding the problem to be solved, the constraints, the tools available, and the environment in which it operates, and coming up with an optimal solution. This requires real creativity, because the constraints are typically over constrained, and real empathy because many of the constraints come directly from human irrationality. It also requires a deep understanding of the tools available, and what those tools can and can't do." I also think his answer to the question is interesting and the question itself is very important. (I said similar things on last year's You've Been A Bad Agent episode.)

      • Marc's previous post is also great: "Over the next couple of years, the most valuable people to have on a software team are going to be experienced folks who're actively working to keep their heuristics fresh. Who can combine curiosity with experience. Among the least valuable people to have on a software team are experienced folks who aren't willing to change their thinking. Beyond that, it's hard to see."

      • If you read both of Marc's posts, you'll enjoy Pieter Hintjens' A Tale of Two Bridges. Engineering is the art of making the tradeoffs, not building the perfect thing.

      • Michael Nielsen: Which Future? I'm very glad I read this. Bikini Atoll and fire safety will stay with me.

      • Sad news: Tracy Kidder, author of The Soul of a New Machine, has died. I highly recommend reading this book. I last did so in March of last year. And here I am again, telling you: read it, it's fantastic. And then read Bryan Cantrill's reflections on it.

      • Rands has been bitten by the agent bug: "I've never built more interesting, random, and useless scripts, tools, and services than I have in the last six months. The cost to go from 'Random Thought' to 'Working Something' has never been lower"

      • Linear: Issue tracking is dead. Look up to the sky, there's me, in a tiny plane that's pulling a banner saying in big red letters: told you.

      • This is very, very on the nose and I wouldn't sign it without making some big changes, but there is something here that I've felt before, maybe not to this extent, maybe not in this exact shape, but something here resonates and makes parts of it feel true: "'Collaboration' is bullshit." I don't think Big Tech the Boogeyman is to blame (my 8-year-old had to do her first group project in school a few weeks ago -- creating a stop-motion movie -- and nearly lost her mind), but this much, I think, is true: "most complex, high-quality work is done by individuals or very small groups operating with clear authority and sharp accountability, then rationalized into the language of teamwork afterward. Dostoevsky wrote The Brothers Karamazov alone. The Apollo Guidance Computer came from a team at MIT small enough to have real ownership […] Communication matters, and shared context matters. But there's a huge difference between communication and collaboration as infrastructure to support individual, high-agency ownership, and communication and collaboration as the primary activity of an organisation."

      • Eoghan McCabe, CEO of Intercom, is saying the "age of vertical models is here." I'm skeptical, because it all hinges on this idea of verticals and domain knowledge and I don't know if that won't be washed away by bigger models, but it is interesting: "the labs are in an interesting position where on one hand the horizontal, general purpose models are actually over-serving the market for specific use cases. E.g. their models are more generally intelligent than is needed for customer service. And on the other hand, the open-weight models are more than good enough where high quality domain specific post-training can make the resulting models superior at the special purpose jobs, and in the ways that matter to that particular job. E.g. in service, the soft factors really matter, like judgement, pleasantness, attentiveness (as well as the hard factors mentioned prior, like the ability to effectively resolve problems, quickly and cheaply)."

      • meow.camera

      • Google published TurboQuant, a "set of advanced theoretically grounded quantization algorithms that enable massive compression for large language models and vector search engines." I won't claim here to understand all of it, but I do think I understand the bit about how "PolarQuant converts the vector into polar coordinates using a Cartesian coordinate system" and that's very cool. Also goes to show that if AI progress wasn't a race towards AGI and they'd all stop building bigger and bigger models, there'd be so many optimizations to make.

      • Systems Thinking is Brain Rot for Analysts. Refreshing.

      • This is the Gruber I love: "And the fucking autoplay videos, jesus. You read two paragraphs and there's a box that interrupts you. You read another two paragraphs and there's another interruption. All the way until the end of the article. We're visiting their website to read a fucking article. If we wanted to watch videos, we'd be on YouTube. It's like going to a restaurant, ordering a cheeseburger, and they send a marching band to your table to play trumpets right in your ear and squirt you with a water pistol while trying to sell you towels."

      • And this is the Internet I love: 25 Years of Eggs. "Everyone needs a rewarding hobby. I've been scanning all of my receipts since 2001. I never typed in a single price - just kept the images. I figured someday the technology to read them would catch up, and the data would be interesting. This year I tested it. Two AI coding agents, 11,345 receipts. I started with eggs."

      • Cursor's crossroads: "It's a story distinctly of the AI era: Cursor is four years old but already has an innovator's dilemma, arguably outgunned by newer products in the market it popularized. Every AI startup fears OpenAI or Anthropic releasing a product directly in competition with theirs. It's the nightmare scenario, and Cursor is living it, more quickly than Truell and his team ever expected. […] As Truell and I get ready to end our Zoom call, I notice the picture of Caro again. I think about how it took Caro six months to edit a single chapter of The Power Broker. Truell has less time than that before the next change."

      • Great brain massage: Let's see Paul Allen's SIMD CSV parser.

      • Okay, now before you click the next link and close the tab right away, let me tell you: yes, I thought so too. I also thought that it's not for me, doesn't contain anything I didn't know, that it's boring old stuff, but it's not! There's some real whoa-moments in there: Google Has a Secret Reference Desk. Here's How to Use It. The title is weird though, yes, but, hot damn, the intitle:"index of" /pdf thing alone is worth it.

      • Satisfyingly meta: Joel Meyerowitz on Photographing Giorgio Morandi's Studio.

      • Stripe launched projects.dev which "lets you or your agents provision multiple services, generate and store credentials, and manage usage and billing from the CLI." Makes total sense when you want to increase the GDP of the Internet.

      • Finally! Edward, Nick, Rasmus, and Julia shared the "first iteration of the Playbit runtime, our vision for building playful personal-scale software": playbit.app.

      • Dappled light: "Growing up, I loved this mix of shade and sun I called 'shun.' Sunlight slipped through the leaves, and its tiny gaps turned into pinholes that project little dancing suns. It felt like magic."

      • McCartney's creativity in 3 photographs.

      Note from the producer: no newsletter next week. One weekend of vacation.


    10. 🔗 Textualize/textual The Hot Select Release release

      Fixes a crash when a selected widget is removed while selecting

      [8.2.1] - 2026-03-29

      Fixed

      • Fix crash when a widget disappears between selections #6455
  2. March 28, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-28 rss

      IDA Plugin Updates on 2026-03-28

      New Releases:

      Activity:

    2. 🔗 r/reverseengineering Blog: Decompiling the White House's New App rss
    3. 🔗 r/reverseengineering Agent reverse-engineers website APIs from inside your browser rss
    4. 🔗 r/Leeds Attitudes of Leeds private hire drivers rss

      Ey up!

      Been living in Leeds for over a decade and have had an amazing time here so far, in terms of social attitudes and values in general. Public life here is truly pleasant. But when it comes to the taxi / private hire scene, it's a different story, unfortunately.

      On the rare occasion when I've needed a ride, I've been met with appalling attitudes from the driver more often than not. These can be summed up as the following:

      • Driver deliberately ignoring social cues that I don't want any chat, big or small; being way too insistent on knowing where I'm going and for how long, etc. (airport rides).

      • Driver asking me to justify my reproductive choices (this one I reported to the company, with no feedback whatsoever from them).

      • Driver ranting about "British riders having weak vision at night... don't know how they are allowed to drive" and "women drivers" - I am a woman, so this felt way too boundary-pushing, like trolling behaviour during what should have been a peaceful ride home after a flight.

      Yes, I've had the occasional amazing, friendly driver. But I am lost for words at how often I've had ones that seem to take advantage of the 'captive situation', so to speak. The last one was truly disgusting and made me want to opt out of using their services ever again, as I don't feel safe.

      Would report to the licensing enforcement team, but I'm not sure I really trust the potential outcome.

      Have any of you ever experienced this over here? I'm trying to get a sense of the scale of this problem and how common it is - or have I just had bad luck... many times?

      How is this behaviour tolerated?

      Please, tell me your stories, I'm all ears.

      submitted by /u/MeaningLegitimate782

    5. 🔗 r/Yorkshire Palm Sunday Eve. Battle of Towton rss

      Palm Sunday Eve. It’s strange to think that on this night in 1461, the men at Towton would have been preparing themselves… knowing what was coming with the dawn. No certainty. No escape. Only the knowledge that morning would bring a fight to the death with no prisoners taken. The losing side were stripped naked in the snow, had their ears and noses cut off by daggers, then butchered with halberds, swords, war hammers and axes.

      After visiting Towton and Saxton, that thought sits differently with me. Those fields don’t feel empty — they feel remembered.

      Tonight, I give my utmost respect to all those men who fought… and to those who endured what came after.

      See my films on Towton and John Clifford here:

      • Towton: https://www.youtube.com/watch?v=TU2ojFL-oIU
      • John Clifford: https://www.youtube.com/watch?v=aBPtGYYnyWI

      submitted by /u/The_Black_Banner_UK

    6. 🔗 r/york I HATE THE NO 11 BUS rss

      just wanted to scream it into the void.

      thanks

      submitted by /u/Smart_Apricot_9735

    7. 🔗 r/wiesbaden Incident on the RB10 at Wiesbaden Hbf rss

      Did anyone notice what happened yesterday evening, around 21:50, on the RB10 to Neuwied at Wiesbaden Hbf?

      Apparently someone sprayed some kind of irritant gas (?). I was sitting near the end of the train and only heard pained screams from the middle.

      I then informed two police officers on the platform, but took an Uber afterwards and accordingly didn't catch anything more.

      I hope it was nothing serious.

      submitted by /u/blenderbender_

    8. 🔗 r/LocalLLaMA Gemma 4 rss

      Sharing this after seeing these tweets (1, 2). Someone mentioned these exact details on Twitter two days back.

      submitted by /u/pmttyji

    9. 🔗 r/Leeds Does anyone have any tickets they don't need for the hyde park LOTR Marathon on april 6th? rss
    10. 🔗 r/Leeds Leeds at night photos rss

      Thanks for your kind words on my first uploads. I took some photos at night in the same locations and thought I'd share. Still learning but enjoying it so far.

      submitted by /u/Phil-pot

    11. 🔗 r/LocalLLaMA A simple explanation of the key idea behind TurboQuant rss

      TurboQuant (Zandieh et al. 2025) has been all the rage in the past two days, and I've seen lots of comments here attempting to explain the magic behind it. Many of those comments boil down to "dude, it's polar coordinates!!!", and that's really misleading. The most important part has nothing to do with polar coordinates (although they are emphasized in Google's blog post, so the confusion is understandable).

      TurboQuant is a vector quantization algorithm. It turns a vector of numbers into another vector of numbers that takes up less memory.

      Quantization is a fairly basic operation. If you have an n-dimensional vector that looks like this:

      0.2374623 0.7237428 0.5434738 0.1001233 ...
      

      Then a quantized version of that vector may look like this:

      0.237 0.723 0.543 0.100 ...
      

      Notice how I simply shaved off the last four digits of each number? That's already an example of a crude quantization process. Obviously, there are far more sophisticated schemes, including grouping coefficients into blocks, adaptive thresholds, precision calibrated against experimental data, etc., but at its core, quantization always involves reducing coefficient precision.
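      That digit-shaving step can be spelled out in a few lines (a toy sketch of crude truncation quantization, nothing like TurboQuant's actual scheme):

```python
import numpy as np

v = np.array([0.2374623, 0.7237428, 0.5434738, 0.1001233])

# Crude quantization: truncate each component to three decimal digits,
# shaving off the last four digits as in the example above.
vq = np.trunc(v * 1000) / 1000

print(vq)  # components are now 0.237, 0.723, 0.543, 0.100
assert np.all(np.abs(v - vq) < 1e-3)  # error bounded by the dropped precision
```

      Each component can then be stored with fewer bits; the reconstruction error is bounded by the precision that was thrown away.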

      Here is the key idea behind TurboQuant: Before quantizing a vector, we randomly rotate it in the n-dimensional space it resides in. The corresponding counter-rotation is applied during dequantization.

      That's it.

      Now you probably feel that I must have left out an important detail. Surely the rotation can't be completely random? Maybe it's sampled from a particular distribution, or somehow input-dependent? Or perhaps there is another operation that goes hand in hand with it?

      Nope. I didn't leave anything out. Just applying a random rotation to the vector dramatically improves quantization performance.

      But why?

      Because the magnitudes of the coefficients of state vectors in language models aren't distributed uniformly among the vector dimensions. It's very common to see vectors that look like this:

      0.0000023
      0.9999428   <-- !!!
      0.0000738
      0.0000003
      ...
      

      This phenomenon has many names, and it shows up everywhere in transformer research. You can read about "massive activations" (Sun et al. 2024) and "attention sinks" (e.g. Gu et al. 2024) for a deeper analysis.

      What matters for the purposes of this explanation is: Vectors with this type of quasi-sparse structure are terrible targets for component quantization. Reducing precision in such a vector effectively turns the massive component into 1 (assuming the vector is normalized), and all other components into 0. That is, quantization "snaps" the vector to its nearest cardinal direction. This collapses the information content of the vector, as identifying a cardinal direction takes only log2(2n) bits, whereas the quantized vector can hold kn bits (assuming k bits per component).

      And that's where the random rotation comes in! Since most directions aren't near a cardinal direction (and this only becomes more true as the number of dimensions increases), a random rotation almost surely results in a vector that distributes the coefficient weight evenly across all components, meaning that quantization doesn't cause information loss beyond that expected from precision reduction.
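      The snapping effect, and the fix, are easy to reproduce numerically. Below is a small experiment under stated assumptions: a crude 1-bit sign quantizer (not TurboQuant's actual quantizer) and a random rotation drawn via QR decomposition of a Gaussian matrix (approximately Haar-distributed over the orthogonal group):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n = 256

      # Quasi-sparse unit vector: one massive component, tiny noise elsewhere.
      v = rng.normal(scale=1e-4, size=n)
      v[7] = 1.0
      v /= np.linalg.norm(v)

      def quantize_1bit(x):
          # 1-bit-per-component quantizer: keep only each coefficient's sign,
          # rescaled so the output has the same norm as the input.
          return np.sign(x) * np.linalg.norm(x) / np.sqrt(len(x))

      # Random rotation: Q from the QR decomposition of a Gaussian matrix
      # is approximately Haar-distributed.
      Q, _ = np.linalg.qr(rng.normal(size=(n, n)))

      direct = quantize_1bit(v)             # quantize as-is: snaps toward a corner
      rotated = Q.T @ quantize_1bit(Q @ v)  # rotate, quantize, counter-rotate

      print(np.linalg.norm(v - direct))   # large reconstruction error
      print(np.linalg.norm(v - rotated))  # much smaller error
      ```

      With these settings, direct quantization squashes the massive component down to ±1/√n, so most of the vector is lost; rotating first spreads that mass evenly across all coordinates, and the round-trip error drops substantially.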

      The TurboQuant paper proves this mathematically, and gives an exact description of the distribution behavior, but the intuitive understanding is much more straightforward than that.

      This idea isn't new in principle (QuIP is another quantization method that employs a similar trick), but TurboQuant combines it with a second step that eliminates biases that arise when quantized vectors that are optimal in a certain sense (MSE) are used to compute inner products, which is what happens in attention blocks. See the paper if you're interested in the details.

      submitted by /u/-p-e-w-
      [link] [comments]

    12. 🔗 r/LocalLLaMA Bought RTX4080 32GB Triple Fan from China rss

      Got me a 32GB RTX 4080 from China for around 1300€ (plus extra shipping). I think for the current market the price is reasonable for 32GB of VRAM. It runs smoothly and quietly thanks to the triple fan, which was important for me. What is the first thing I should try?

      submitted by /u/Sanubo
      [link] [comments]

    13. 🔗 r/LocalLLaMA Me waiting for TurboQuant be like rss
    14. 🔗 r/Leeds Worth Opening a Sauna in Leeds? rss

      I have now been to over 40 saunas across the UK and was introduced to contrast therapy during my time in Helsinki. Since then, this has helped my mental and physical well-being significantly.

      Professionally, I am a Data Scientist but have always wanted to build something which might make a difference to the local community in Leeds especially given that so many people are into running and fitness.

      This is a long shot, but I thought I'd ask whether people would even enjoy the idea of cold plunges and a sauna?

      submitted by /u/BondBagri
      [link] [comments]

    15. 🔗 r/Yorkshire Living in the Pennine hills is the gift that keeps giving rss

      What a sight to open the curtains to, have a great weekend everyone 😁

      submitted by /u/Gh0styD0g
      [link] [comments]

    16. 🔗 r/york York GPs rss

      Hey,

      Anyone living in York know of or have experience with GPs that do shared care?

      I need it for my testosterone and adhd medication!

      Thanks in advance

      submitted by /u/Total_Bed_3882
      [link] [comments]

    17. 🔗 r/Yorkshire Amos the Donkey, Barnsley, 1910 rss

      submitted by /u/Del_213
      [link] [comments]

    18. 🔗 r/Harrogate Where to buy Casio watches in Harrogate? rss

      Does anybody know where I can buy Casio digital watches in Harrogate? Not G-Shock, just regular old cheap Casios. All I know are the big luxury jewelry shops. Any smaller outlets around?

      submitted by /u/RetroBreezeYT
      [link] [comments]

    19. 🔗 r/wiesbaden Schlossplatz, 13:00 - Rain or shine rss
    20. 🔗 r/LocalLLaMA The AI releases hype cycle in a nutshell rss

      This might look like a shitpost, but beyond the meme lies the truth. Pay attention to my point: every new AI feature announcement now follows the exact same script. Week one is pure exuberance (VEO 3 generating two elderly men speaking in Portuguese at the top of Everest, nano banana editing images so convincingly that people talk about Photoshop's death, GPT-5.4 picking up on subtle context). Then week two hits. The model starts answering nonsense stuffed with em dashes, videos turn into surrealist art that ignores the prompt, etc. The companies don't announce anything about degradation or errors; they don't have to. They simply announce more features (music maker?), feed the hype, and the cycle resets with a new week of exuberance.

      submitted by /u/GreenBird-ee
      [link] [comments]

    21. 🔗 r/Harrogate Best dentist in Harrogate?? Need suggestions rss

      Need to switch dentists and not sure where's actually good.

      Had a couple of rushed appointments before, so I just want somewhere decent that takes their time. Happy to go private if it's worth it.

      Any recommendations?

      submitted by /u/Purplemoon_1988
      [link] [comments]

    22. 🔗 r/Yorkshire Man who was 'front and centre' of far-right Hull riots jailed for six years rss
    23. 🔗 HexRaysSA/plugin-repository commits sync repo: +2 releases, ~1 changed rss
      sync repo: +2 releases, ~1 changed
      
      ## New releases
      - [drop-all-the-files](https://github.com/milankovo/ida-drop-all-the-files): 1.4.0
      - [global-struct-dissector](https://github.com/williballenthin/idawilli): 0.1.1
      
      ## Changes
      - [oplog](https://github.com/williballenthin/idawilli):
        - 0.3.0: archive contents changed, download URL changed
      
    24. 🔗 r/york rail replacement buses, sunday 29/03 rss

      Hi, I'm travelling tomorrow on one of the rail replacement buses to Newcastle. Where is the bus stop for this? Is it Leeman Road next to the Memorial Gardens?

      submitted by /u/aster0idzz
      [link] [comments]

    25. 🔗 r/Leeds Looking for help testing retro PC cards in Leeds (Voodoo, Voodoo2, Monster Sound) rss

      I have three cards: a 3dfx Voodoo 4MB, a 3dfx Voodoo2 12MB, and a Diamond Monster Sound card with A3D. I used them myself as a kid and wanted to test whether they still work. Could anyone help me or recommend a place in Leeds? I think they worked with Windows 98. I have an AMD Athlon 700 (?) PC back home, but I won't be there until summer.

      submitted by /u/dijef
      [link] [comments]

    26. 🔗 Drew DeVault's blog tar: a slop-free alternative to rsync rss

      So apparently rsync is slop now. When I heard, I wanted to drop a quick note on my blog to give an alternative: tar. It doesn’t do everything that rsync does, in particular identifying and skipping up-to-date files, but tar + ssh can definitely accommodate the use case of “transmit all of these files over an SSH connection to another host”.

      Consider the following:

      tar -cz public | ssh example.org tar -C /var/www -xz
      

      This will transfer the contents of ./public/ to example.org:/var/www/public/, preserving file ownership and permissions and so on, with gzip compression. This is roughly the equivalent of:

      rsync -a public example.org:/var/www/
      

      Here’s the same thing with a lightweight progress display thanks to pv:

      tar -cz public | pv | ssh example.org tar -C /var/www -xz
      

      I know tar is infamously difficult to remember how to use. Honestly, I kind of feel that way about rsync, too. But, here’s a refresher on the most important options for this use-case. To use tar, pick one of the following modes with the command line flags:

      • -c: create an archive
      • -x: extract an archive

      Use -f <filename> to read from or write to a file. Without this option, tar uses stdin and stdout, which is what the pipelines above rely on. Use -C <path> to change directories before archiving or extracting files. Use -z to compress or decompress the tarball with gzip. That’s basically everything you need to know about tar to use it for this purpose (and for most purposes, really).

      With rsync, to control where the files end up you have to memorize some rules about things like whether or not each path has a trailing slash. With tar, the rules are, in my opinion, a bit easier to reason about. The paths which appear on the command line of tar -c are the paths that tar -x will open to create those files. So if you run this:

      tar -c public/index.html public/index.css
      

      You get a tarball which has public/index.html and public/index.css in it.

      When tar -x opens this tarball, it will call fopen("public/index.html", "w"). So, whatever tar’s working directory is, it will extract this file into ./public/index.html. You can change the working directory before tar does this, on either end, by passing tar -C <path>.

      Of course, you could just use scp, but this fits into my brain better.

      I hope that’s useful to you!


      Update: As a fun little challenge I wrapped up this concept in a small program that makes it easier to use:

      https://git.sr.ht/~sircmpwn/rtar

      Example:

      rtar -R /var/www me@example.org public/*
      
  3. March 27, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-27 rss

      IDA Plugin Updates on 2026-03-27

      New Releases:

      Activity:

      • binsync
      • capa
        • 6980df98: build(deps-dev): bump deptry from 0.24.0 to 0.25.1 (#2964)
        • 82de4ef5: Sync capa rules submodule
        • a6ac839e: fix mypy formatting (#2973)
      • DriverBuddy-7.4-plus
        • 0a0a33b9: Remove workflows-sync.yml - not in .github templates
        • 2f492694: Remove standardize-labels.yml - not in .github templates
        • 777ae055: Remove copilot-instructions.yml - not in .github templates
        • b93e3e62: Remove autonomous-progress.yml - not in .github templates
        • 99f1b167: Remove auto-feature-request.yml - not in .github templates
        • e8618822: Remove auto-copilot-test-review-playwright.yml - not in .github templ…
        • 354db600: Remove auto-copilot-playwright-auto-test.yml - not in .github templates
        • 24552abb: Remove auto-copilot-org-playwright-loopv2.yml - not in .github templates
        • e0b0597b: Remove auto-copilot-org-playwright-loopv2.yaml - not in .github templ…
        • 866fcd08: Remove auto-copilot-org-playwright-loop.yaml - not in .github templates
        • 04c6fdeb: Remove auto-complete-cicd-review.yml - not in .github templates
        • 7b8f85e8: Remove auto-bug-report.yml - not in .github templates
        • ba280056: Remove auto-amazonq-review.yml - not in .github templates
        • 60ffa034: Remove auto-advance-ball.yml - not in .github templates
        • e84d1535: Sync auto-gpt5-implementation.yml from .github repo
        • 0a31dcaa: Sync auto-sec-scan.yml from .github repo
        • 726fc89c: Sync auto-label.yml from .github repo
        • 2851880a: Sync auto-assign-copilot.yml from .github repo
        • 1c93e4d1: Sync auto-llm-issue-review.yml from .github repo
        • 92362f22: Sync security-review.yml from .github repo
      • ghidra
        • 43f4fcf9: Merge remote-tracking branch 'origin/Ghidra_12.1'
        • 9da1425d: GP-0: Allowing OMF-51 files to load even if reading records caused
      • Greffe
        • 18a7d30e: Prompt the user before overwriting an existing file
        • df926cdd: Done
        • 15a90ad7: Added prompt_confirm to avoid redundancy
        • a786e3e4: Patch is now saved at WORKDIR/bin.greffe
        • 39d07e36: Patch is now saved at WORKDIR/bin.greffe
        • 5cddcc4c: Removed french accent on patched output
        • 110eda47: Throw when a patch overlaps a branch instr
        • 74837d9f: handler created when the target is added from ida
      • ida-drop-all-the-files
        • 5b9a93f0: Use IDAPython_ExecScript to run scripts
      • IDA-NO-MCP
      • ida-sdk
        • acacbbcc: Updated SDK & IDAPython with IDA 9.3.1(sp1) release (#46)
      • idawilli
        • de385959: global-struct-dissector: v0.1.1
        • 9bcd116f: dissector: better keep struct type in sync
      • IDEA
        • e7cbcd20: VIBE6: vendor self-contained ida plugin bundle
        • e3afe665: VIBE6: automate hybrid MCP setup
      • python-elpida_core.py
        • d7aaa561: UX overhaul: natural D0 voice, modern chat UI, MIND bridge
        • 380add29: feat: wire D0 consciousness into Chat tab + DDG grounding + session p…
        • a9aef69b: fix: break fork remediation loop — escalation after 3 failed REMEDIATEs
        • 3b0d4eb6: Clean up log noise: disable Streamlit file watcher + suppress progres…
        • 41c8b64b: Trigger HF Space rebuild after abuse flag cleared
    2. 🔗 r/LocalLLaMA Google TurboQuant running Qwen Locally on MacAir rss

      Hi everyone, we just ran an experiment. We patched llama.cpp with Google’s new TurboQuant compression method and then ran Qwen 3.5–9B on a regular MacBook Air (M4, 16 GB) with a 20,000-token context. Previously, it was basically impossible to handle large context prompts on this device, but with the new algorithm it now seems feasible. Imagine running OpenClaw on a regular device for free! Just a MacBook Air or Mac Mini, not even a Pro model, the cheapest ones. It’s still a bit slow, but the newer chips are making it faster. Link for the macOS app: atomic.chat - open source and free. Curious if anyone else has tried something similar?

      submitted by /u/gladkos
      [link] [comments]

    3. 🔗 r/Leeds Women’s sauna rss

      Unfortunately, I've had one too many experiences with inappropriate men in saunas and have avoided them since.

      Likely controversial, but I don't want to keep missing out on the health benefits of saunas, so I'm looking for a women-only sauna. Does anyone know of any in Leeds, in spas, gyms, hotels, etc.?

      submitted by /u/Here2gainknowledge
      [link] [comments]

    4. 🔗 Simon Willison Vibe coding SwiftUI apps is a lot of fun rss

      I have a new laptop - a 128GB M5 MacBook Pro, which early impressions show to be very capable for running good local LLMs. I got frustrated with Activity Monitor and decided to vibe code up some alternative tools for monitoring performance and I'm very happy with the results.

      This is my second experiment with vibe coding macOS apps - the first was this presentation app a few weeks ago.

      It turns out Claude Opus 4.6 and GPT-5.4 are both very competent at SwiftUI - and a full SwiftUI app can fit in a single text file, which means I can use them to spin something up without even opening Xcode.

      I’ve built two apps so far: Bandwidther shows me what apps are using network bandwidth, and Gpuer shows me what’s going on with the GPU. At Claude’s suggestion both of these are now menu bar icons that open a panel full of information.

      Bandwidther

      I built this app first, because I wanted to see what Dropbox was doing. It looks like this:

      Screenshot of Bandwidther macOS app showing two columns: left side displays overall download/upload speeds, a bandwidth graph over the last 60 seconds, cumulative totals, internet and LAN connection counts, and internet destinations; right side shows per-process bandwidth usage sorted by rate with processes like nsurlsessiond, apsd, rapportd, mDNSResponder, Dropbox, and others listed with their individual download/upload speeds and progress bars.

      I’ve shared the full transcript I used to build the first version of the app. My prompts were pretty minimal:

      Show me how much network bandwidth is in use from this machine to the internet as opposed to local LAN

      (My initial curiosity was to see if Dropbox was transferring files via the LAN from my old computer or was downloading from the internet.)

      mkdir /tmp/bandwidther and write a native Swift UI app in there that shows me these details on a live ongoing basis

      This got me the first version, which proved to me this was worth pursuing further.

      git init and git commit what you have so far

      Since I was about to start adding new features.

      Now suggest features we could add to that app, the goal is to provide as much detail as possible concerning network usage including by different apps

      The nice thing about having Claude suggest features is that it has a much better idea for what’s possible than I do.

      We had a bit of back and forth fixing some bugs, then I sent a few more prompts to get to the two column layout shown above:

      add Per-Process Bandwidth, relaunch the app once that is done

      now add the reverse DNS feature but make sure original IP addresses are still visible too, albeit in smaller typeface

      redesign the app so that it is wider, I want two columns - the per-process one on the left and the rest on the right

      OK make it a task bar icon thing, when I click the icon I want the app to appear, the icon itself should be a neat minimal little thing

      The source code and build instructions are available in simonw/bandwidther.

      Gpuer

      While I was building Bandwidther in one session I had another session running to build a similar tool for seeing what the GPU was doing. Here’s what I ended up with:

      Screenshot of the Gpuer app on macOS showing memory usage for an Apple M5 Max with 40 GPU cores. Left panel: a large orange "38 GB Available" readout showing usage of 128.0 GB unified memory, "Room for ~18 more large apps before pressure", a warning banner reading "1.5 GB pushed to disk — system was under pressure recently", a horizontal segmented bar chart labeled "Where your memory is going" with green, blue, and grey segments and a legend, an explanatory note about GPU unified memory, a GPU Utilization section showing 0%, and a History graph showing Available and GPU Utilization over time as line charts. Right panel: a Memory Footprint list sorted by Memory, showing process names with horizontal pink/purple usage bars and CPU percentage labels beside each entry, covering processes including Dropbox, WebKit, Virtualization, node, Claude Helper, Safari, LM Studio, WindowServer, Finder, and others.

      Here's the transcript. This one took even less prompting because I could use the in-progress Bandwidther as an example:

      I want to know how much RAM and GPU this computer is using, which is hard because stuff on the GPU and RAM does not seem to show up in Activity Monitor

      This collected information using system_profiler and memory_pressure and gave me an answer - more importantly it showed me this was possible, so I said:

      Look at /tmp/bandwidther and then create a similar app in /tmp/gpuer which shows the information from above on an ongoing basis, or maybe does it better

      After a few more changes to the Bandwidther app I told it to catch up:

      Now take a look at recent changes in /tmp/bandwidther - that app now uses a sys tray icon, imitate that

      This remains one of my favorite tricks for using coding agents: having them recombine elements from other projects.

      The code for Gpuer can be found in simonw/gpuer on GitHub.

      You shouldn't trust these apps

      These two apps are classic vibe coding: I don't know Swift and I hardly glanced at the code they were writing.

      More importantly though, I have very little experience with macOS internals such as the values these tools are measuring. I am completely unqualified to evaluate if the numbers and charts being spat out by these tools are credible or accurate!

      I've added warnings to both GitHub repositories to that effect.

      This morning I caught Gpuer reporting that I had just 5GB of memory left when that clearly wasn't the case (according to Activity Monitor). I pasted a screenshot into Claude Code and it adjusted the calculations and the new numbers look right, but I'm still not confident that it's reporting things correctly.

      I only shared them on GitHub because I think they're interesting as an example of what Claude can do with SwiftUI.

      Despite my lack of confidence in the apps themselves, I did learn some useful things from these projects:

      • A SwiftUI app can get a whole lot done with a single file of code - here's GpuerApp.swift (880 lines) and BandwidtherApp.swift (1063 lines).
      • Wrapping various terminal commands in a neat UI with Swift is easily achieved.
      • Claude has surprisingly good design taste when it comes to SwiftUI applications.
      • Turning an app into a menu bar app is just a few lines of extra code as well.
      • You don't need to open Xcode to build this kind of application!

      These two apps took very little time to build and have convinced me that building macOS apps in SwiftUI is a new capability I should consider for future projects.

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    5. 🔗 3Blue1Brown (YouTube) Escher's most mind-bending piece rss

      On "The Print Gallery", by M.C. Escher Full video: https://youtu.be/ldxFjLJ3rVY

    6. 🔗 r/reverseengineering Installing arbitrary (and potentially lethal) firmware on a Zero Motorcycle rss
    7. 🔗 r/wiesbaden Looking for an affordable, unpretentious women's hairdresser rss

      I need a new hairdresser. I used to go to Blooms, which was OK, but I never felt entirely comfortable there. I'd like a salon that's relaxed and not stuffy/pretentious/super-fancy. I don't want exclusive treatment, I just want my hair cut. Ideally with online appointment booking. Some advice is welcome (and I'm happy to pay for good advice). A relaxed atmosphere is what matters most to me.

      submitted by /u/ThreePenguins
      [link] [comments]

    8. 🔗 sacha chua :: living an awesome life The week of March 23 to 29 rss

      Monday 23

      I figured out how to categorize links for my Emacs News newsletter by voice command, using Silero to detect voice activity and Speaches to transcribe the commands. It was a bit slow, but promising. I think that if I process a few commands in batches, it will be faster.

      It was sunny in the afternoon. I took my daughter to her gymnastics class by bike, where she had fun practicing dropping onto her belly after a cartwheel. After that, we went to the Healthy Moms Market shop to buy a bigger lunch box1 for my daughter.

      The Michel Thomas audio CD sets I borrowed from the library to learn French were too scratched to listen to. Oh well. I had already listened to most of one course on YouTube, but the other resources looked useful.

      Tuesday 24

      I picked apart the word « grenouille », which I said at least 37 times during the previous lesson.

      (subed-record-extract-words "grenouille"  "/home/sacha/sync/recordings/processed/2026-03-20-raphael.json" "/home/sacha/proj/french/analysis/grenouille/index.vtt")
      
      (my-subed-record-group-by-notes
        (nreverse
         (subed-record-sort-by-directive
          "#+NOTE"
          (subed-record-filter-skips
           (subed-parse-file "/home/sacha/proj/french/analysis/grenouille/index.vtt"))))
        "/home/sacha/proj/french/analysis/grenouille/grenouille"
        "/home/sacha/proj/french/analysis/grenouille/index.vtt" t)
      

      But recording number 10 sounded like number 4… Maybe when my tutor repeated the word, he didn't mean it was a mistake; he was simply encouraging me to repeat it. I need to think about how to make the most of my tutor's feedback.

      I noticed the /​ə/ as in the word « je », the /​u/ in the pronunciation /​gʀənuj/, and the difference from the word « oui » /​ˈwi/. It's hard for me.

      (subed-record-compile-subtitle-list
       (mapcar (lambda (o)
                 (setf (elt o 3)
                       (format "%s: %s"
                               (subed-record-get-directive "#+NOTE" (elt o 4))
                               (elt o 3)))
                 o)
               (nreverse
                (subed-record-sort-by-directive
                 "#+NOTE"
                 (subed-record-filter-skips
                  (subed-record-filter-for-directive
                   "#+NOTE"
                   (subed-parse-file "/home/sacha/proj/french/analysis/grenouille/index.vtt"))))))
       "/home/sacha/proj/french/analysis/grenouille/grenouille-compiled.opus"
       nil
       '(:interleaved "/home/sacha/proj/french/chime.opus"))
      

      My tutor gave me some new sentences:

      • Ma compagne m’accompagne à la campagne avec une autre compagne.
      • Une tortue têtue marche dessus sous une pluie continue.

      I focused on the difference between « compagne » and « campagne ».

      (let ((my-lang-words-for-review-context-function 'my-lang-words-for-review-phrase-context))
        (my-lang-words-for-review "La semaine du 16 au 22 mars"))
      

      The points for reviewing my pronunciation:

      • nous avons acheté des nouilles instantanées.
      • qui proposait la boîte à déjeuner qu'elle voulait
      • donc si elle veut être assortie, alors nous serons assorties.
      • J'ai cherché dans le groupe de précaution COVID et j'ai trouvé une recommandation
      • Ma fille a également réalisé une pancarte avec son nom et quelques Pokémon.
      • qui fournit les appareils Holter pour lancer les démarches.
      • J'ai actualisé mon script pour réserver des livres à la bibliothèque.
      • pour rechercher la cause de ses symptômes.
      • Elle aime bien le skee-ball et elle a obtenu son meilleur score jusqu'à présent.
      • donc je l'ai emmenée au magasin de tissus du centre-ville.
      • Dans un autre magasin à proximité
      • … et un gloss à lèvres.
      • Mon mari a installé deux lumières à côté du lit mezzanine de ma fille parce
      • mais après avoir gratté le dessus
      • J'ai cousu une housse de protection
      • mon mari a préparé des nouilles ramen aux wontons.
      • Je me suis assise sur le porche et j'ai réécrit mon journal et mes notes sur l'IA en français.
      • qu'elle avait mal au ventre.
      • Je leur ai donné des guimauves et elles (et le grand-père d'une amie de ma fille) les ont fait griller sur des brochettes.
      • nous avons cousu ensemble.
      • La bosse près du piercing de ma fille a commencé à saigner et suppurer.
      • elle dormait probablement sur le côté.
      • J'ai participé à la réunion virtuelle OrgMeetup.
      • grâce à la reconnaissance vocale.

      During the conversation, I was able to describe to him the Toronto library system that I love so much. The Toronto library is one of the largest in the world. Every neighbourhood has its own branch, and you can place holds on up to 50 books and have them sent to the nearest branch. The library also offers a huge number of e-books, which is very convenient.

      (subed-record-extract-all-approximately-matching-phrases
         (split-string (org-file-contents "/home/sacha/proj/french/analysis/virelangues-2026-03-13/phrases.txt") "\n")
         "/home/sacha/sync/recordings/processed/2026-03-24 12-29-53-sacha.json"
         "/home/sacha/proj/french/analysis/virelangues-2026-03-13/2026-03-24-raphael-script.vtt")
      

      My daughter was grumpy with me and about school today. The school has a substitute teacher, which she never likes. She also felt that I had rushed her during the lunch break because I hadn't wanted to be late for my lesson.

      She couldn't find her computer, which was on the bench in front of the bathroom. I don't think she looked very hard. She didn't ask my husband to help her look. She simply skipped class. I rescheduled my next lesson with my tutor to start at 12:45 instead of 12:30. In any case, since she had decided to skip class, I offered to take a walk in the park or to sew the picnic blanket. She firmly decided to stay grumpy. Turbulence is part of life with a child.

      The Holter monitor had arrived, but it wasn't a good time to put it on her.

      Wednesday 25

      I did some consulting work. I had three tasks to do and I finished them, so I was satisfied. I completed the training courses, set up the configuration to let another developer check my software before the system update, and built a prototype of a video display that is more modern than the current version.

      My daughter didn't want to go outside to play because of the Holter monitor she has to wear for two weeks. Fortunately, her friend's mother sent me an invitation to play Minecraft together. My daughter played with her friend on his Minecraft Java server. They started a new world, so I joined them to help gather resources. We set up our base on a mountain. I chopped down spruce trees, mined a tunnel down to layer -54, and started a wheat and sugar cane farm. My daughter and her friend explored caves, fought lots of monsters, and decorated our base. She enjoyed playing with a friend who cooperated with her, which is different from the griefers in her Minecraft club at school. (She said that if he misbehaved, she could talk to his mother.)

      I took part in the Emacs Berlin virtual meetup, which was hosted on my server. Unfortunately, I missed the host's email saying that his moderator code wasn't working because I was focused on my work, so I fixed the problem about an hour after the meetup started. Nevertheless, the meeting went well. Afterwards, I updated my BigBlueButton server for virtual meetings, just in case that might help.

      I was so proud of my daughter, who wears the Holter monitor even though it's a nuisance. She can even manage the battery herself, and she has already noted one episode of palpitations. She said she hated the Holter monitor, but she wanted to capture data so that the cardiologist could analyze her symptoms.

      Thursday 26

      I dusted off my Minecraft server to invite my daughter's friend over after school. I had to configure a few rules on the firewall and the router. After going around in circles for a bit, I verified that I could connect from outside our network. I also installed CraftyController to manage the server through a web interface. My daughter and I played on a Minecraft parkour map until her friend's mother messaged me, and then my daughter and her friend played on their own.

      I called the company that sent us the cardiac Holter monitor to ask for advice because the patches were making my daughter itch so much. After confirming that they were receiving my daughter's data and asking for tips, I went to the pharmacy to buy a barrier cream.

      By the time I got home, my daughter was playing Minecraft on her own. She said she had built a garbage can using a cactus, a chest, and a hopper; that her friend had put his sword in the garbage can; that there was some kind of disagreement; and that her friend had then blown everything up with TNT, despite their agreement not to grief… Anyway, I messaged her friend's mother in case she understood a bit more than I did. Maybe their play styles just aren't compatible for now. That's life.

      I put the new barrier cream on my daughter's skin and we applied new patches for the cardiac Holter monitor. We reconnected the electrodes. The company said she could take a break if necessary.

      My daughter was wondering how to make friendship bracelets, so I found some thread and showed her how to knot them.

      Journal analysis

      I updated my journal analysis and used Claude AI to visualize the data. Since November, I've written more than 300 words in almost every entry, across more than 140 entries.

      02_words_per_session.png

To my surprise, I keep finding new words to describe my daily life, even if this kind of small life might bore other people. I take care of my daughter, I take her places, I tinker with my Emacs configuration, I think things over… It's simple, but it's mine.

There's a slight decline in the rate of new words as my vocabulary grows and I get used to set phrases.

      01_cumulative_vocab.png

It's easier to see this by looking at the percentage of new lemmas as a function of words written, which seems to hover around 5%.

      06_vocab_efficiency.png
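As a rough illustration of the statistic above, here is a small sketch (my own, not the actual analysis code) that tracks how many word tokens are new to the cumulative vocabulary. Proper lemmatization would need an NLP library such as spaCy; this version just lowercases tokens, which overcounts French inflected forms. The function name `new_word_rate` is hypothetical.

```python
import re

def new_word_rate(entries):
    """entries: list of journal-entry strings, in chronological order.
    Returns (total_words, vocab_size, fraction_of_tokens_that_were_new).
    Tokens are lowercased \\w+ runs, a crude stand-in for lemmas."""
    seen = set()
    total = new = 0
    for entry in entries:
        for token in re.findall(r"\w+", entry.lower()):
            total += 1
            if token not in seen:
                seen.add(token)
                new += 1
    return total, len(seen), new / total if total else 0.0
```

Run over the entries in order, the last value should settle near the ~5% shown in the chart as the vocabulary saturates.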

I think I should make an effort to use more adjectives to describe my experiences more fully, but even when I write in English, I'm drawn to the precision of just the right verb or noun. If I write about things that resemble one another, or if I want to paint a picture of a scene, adjectives will become more necessary.

      05_pos_usage.png

My sister has become a marvelous writer, but she doesn't go in for flowery language. She writes from the heart about painfully clear things, with all the humor she can find, loving life and her family passionately. I don't need to reach that quality of writing for now; I don't have half the wisdom she has had to earn. I'm content to record my day, knowing that one day I'll forget the details.

The software was mostly written by Claude AI:

A draft for the Emacs Carnival

The theme for this month's Emacs Carnival is "mistakes and misconceptions". It's difficult to think of one thing that is clearly a mistake, but there are certainly things I don't do efficiently.

My configuration is very large because I think my little modifications are only useful to me. They're too specific, too idiosyncratic. I appreciate the people who create libraries or even packages that many others use, but for my part, I don't feel able to do that for now. Even submitting patches upstream and participating in the ensuing discussion sometimes requires more persistence than I have.

The advantage of keeping my modifications in my configuration is that even if I'm unsure, I can try something out, develop a rough prototype, and change my mind if necessary. When I publish them in a library or a package, I feel like I have to polish my ideas. It's hard to stick to just one idea.

My favorite situation is when I write up my attempt in a post and it inspires someone else to implement their own version, or even a new library or package.

On the other hand, if I learn to share my code, I can help more people, and I can also learn from more people and more conversations.

Many of my modifications are short and easy to copy from my posts, but there are a few collections that depend on other functions, which makes them hard to copy. The functions are scattered across several posts on my blog. For example, my functions for learning a language (especially French) and for controlling Emacs by voice are becoming rather complex. They're also exported to my configuration, but the Emacs Lisp file is hard to skim if you want to copy them. I can copy the code into a file now that Org Mode can tangle it to multiple files, but if I spend a bit of time replacing the "my-" prefix with the library's prefix and copying the functions to a repository, people can clone it and pull updates. Even if no one uses it, polishing and documenting it will be useful to me someday.

So this might be a mistake I often make with Emacs: I think my functions are too idiosyncratic and too rough, so I've left them in my configuration. But if I spend the time to extract the code into a library, I might benefit in the long run. I know lots of people are interested in using Emacs to learn a language or to control it by voice. Many other libraries and workflows have been around for a long time. I want to practice learning intentionally with others. To start, maybe I could collect contact information from interested people and message them when I publish a post on the topic.

Prot has lowered his coaching rates. As for package development, he is prolific. I learn well with my French tutor, so it might be worth spending money and time to improve this skill. Sure, it's just for fun, but it also matters to me to practice learning with other people's help instead of stumbling along on my own. I can also write more, join virtual meetups, or even livestream. There's always more to learn, which is wonderful.

Friday 27

I've started drawing everyday moments again, like I did a few years ago when my daughter was younger. When I came across those sketches in my "On this day" list (actually an RSS feed I'd added to my aggregator), I realized I missed drawing. Since the start of the school year, I've been drawing a note for my daughter's lunch box every school day because she wanted "the full schoolkid experience" despite attending school remotely. I'd draw some Pokémon and my daughter's other interests. It finally occurred to me to combine Pokémon with our moments. Up until last week, I drew them on index cards. I remembered that our printer can handle index cards, so I used Procreate on my iPad to draw a moment from our daily life and printed it to slip into her lunch box.

Here it is:

Ugh, OBS didn't record my side of the session with my tutor, so I can't add the clips to Comparing pronunciation recordings across time. No big deal, I just have to re-record them. This is the second time it's happened. Restarting my computer fixes the problem, but if I don't catch it early, I don't have time to restart before my session. I have another OBS profile that connects directly to my microphone instead of the virtual audio receiver, which might be more reliable. I should also record a backup on my phone next time.

I messaged Prot and sent him money for coaching. There, I'm committed.

      You can e-mail me at sacha@sachachua.com.

    9. 🔗 hyprwm/Hyprland v0.54.3 release

      A standard patch release with more fixes on top of 0.54.2

      Fixes backported

      • algo/dwindle: fix precise mouse setting (#13678)
      • algo/master: fix crash on null target in getNextTarget
      • algo/scroll: fix std::clamp assertion crash on resume from suspend (#13737)
      • desktop/rules: fix static rules and content type. (#13725)
      • hyprctl: fix json output for the submap command (#13726)
      • layershell: fix popup crash with nullptr mon (#13763)
      • overridableVar: fix reassignment
      • protocols: fix image-copy-capture stop handling and remove non protocol errors (#13706)
      • compositor: When processing fullscreen states, only use effective mode where necessary (#13607)
      • compositor: be more selective about how we expand the window box in getting coord (#13720)
      • layersurface: simulate mouse movement on layer change (#13747)
      • layout: guard null workspace in CWindowTarget::updatePos() (#13861)
      • protocols/workspace: schedule done after output update (#13743)
      • view: consolidate group flags and apply window rules (#13694)
      • xwayland: prevent potential buffer overflow in socket path handling (#13797)

      Special thanks

      As always, massive thanks to our wonderful donators and sponsors:

      Sponsors

      Diamond

      37Signals

      Gold

      Framework

      Donators

      Top Supporters:

      Seishin, Kay, johndoe42, d, vmfunc, Theory_Lukas, --, MasterHowToLearn, iain, ari-cake, TyrHeimdal, alexmanman5, MadCatX, Xoores, inittux111, RaymondLC92, Insprill, John Shelburne, Illyan, Jas Singh, Joshua Weaver, miget.com, Tonao Paneguini, Brandon Wang, Arkevius, Semtex, Snorezor, ExBhal, alukortti, lzieniew, taigrr, 3RM, DHH, Hunter Wesson, Sierra Layla Vithica, soy_3l.beantser, Anon2033, Tom94

      New Monthly Supporters:

      monkeypost, lorenzhawkes, Adam Saudagar, Donovan Young, SpoderMouse, prafesa, b3st1m0s, CaptainShwah, Mozart409, bernd, dingo, Marc Galbraith, Mongoss, .tweep, x-wilk, Yngviwarr, moonshiner113, Dani Moreira, Nathan LeSueur, Chimal, edgarsilva, NachoAz, mo, McRealz, wrkshpstudio, crutonjohn

      One-time Donators:

      macsek, kxwm, Bex Jonathan, Alex, Tomas Kirkegaard, Viacheslav Demushkin, Clive, phil, luxxa, peterjs, tetamusha, pallavk, michaelsx, LichHunter, fratervital, Marpin, SxK, mglvsky, Pembo, Priyav Shah, ChazBeaver, Kim, JonGoogle, matt p, tim, ybaroj, Mr. Monet Baches, NoX, knurreleif, bosnaufal, Alex Vera, fathulk, nh3, Peter, Charles Silva, Tyvren, BI0L0G0S, fonte-della- bonitate, Alex Paterson, Ar, sK0pe, criss, Dnehring, Justin, hylk, 邱國玉KoryChiu, KSzykula, Loutci, jgarzadi, vladzapp, TonyDuan, Brian Starke, Jacobrale, Arvet, Jim C, frank2108, Bat-fox, M.Bergsprekken, sh-r0, Emmerich, davzucky, 3speed, 7KiLL, nu11p7r, Douglas Thomas, Ross, Dave Dashefsky, gignom, Androlax, Dakota, soup, Mac, Quiaro, bittersweet, earthian, Benedict Sonntag, Plockn, Palmen, SD, CyanideData, Spencer Flagg, davide, ashirsc, ddubs, dahol, C. Willard A.K.A Skubaaa, ddollar, Kelvin, Gwynspring, Richard, Zoltán, FirstKix, Zeux, CodeTex, shoedler, brk, Ben Damman, Nils Melchert, Ekoban, D., istoleyurballs , gaKz, ComputerPone, Cell the Führer, defaltastra, Vex, Bulletcharm, cosmincartas, Eccomi, vsa, YvesCB, mmsaf, JonathanHart, Sean Hogge, leat bear, Arizon, JohannesChristel, Darmock, Olivier, Mehran, Anon, Trevvvvvvvvvvvvvvvvvvvv, C8H10N4O2, BeNe, Ko-fi Supporter :3, brad, rzsombor, Faustian, Jemmer, Antonio Sanguigni, woozee, Bluudek, chonaldo, LP, Spanching, Armin, BarbaPeru, Rockey, soba, FalconOne, eizengan, むらびと, zanneth, 0xk1f0, Luccz, Shailesh Kanojia, ForgeWork , Richard Nunez, keith groupdigital.com, pinklizzy, win_cat_define, Bill, johhnry, Matysek, anonymus, github.com/wh1le, Iiro Ullin, Filinto Delgado, badoken, Simon Brundin, Ethan, Theo Puranen Åhfeldt, PoorProgrammer, lukas0008, Paweł S, Vandroiy, Mathias Brännström, Happyelkk, zerocool823, Bryan, ralph_wiggums, DNA, skatos24, Darogirn , Hidde, phlay, lindolo25, Siege, Gus, Max, John Chukwuma, Loopy, Ben, PJ, mick, herakles, mikeU-1F45F, Ammanas, SeanGriffin, Artsiom, Erick, Marko, Ricky, Vincent mouline

      Full Changelog : v0.54.2...v0.54.3

    10. 🔗 News Minimalist 🐢 Juries hold Meta and YouTube liable for harm + 9 more stories rss

      In the last 3 days Gemini read 96074 top news stories. After removing previously covered events, there are 10 articles with a significance score over 5.5.

      [6.3] Juries hold Meta and YouTube liable for harming children —apnews.com(+272)

      Juries in Los Angeles and New Mexico have found Meta and YouTube liable for harming children, signaling a pivotal shift in holding social media giants accountable for their product designs.

      The verdicts focused on addictive platform features and Meta’s alleged concealment of child exploitation risks. By targeting deliberate design choices, these lawsuits successfully bypassed Section 230 legal protections that historically shielded tech companies from liability regarding third-party content and platform-related harms.

      Meta and Google plan to appeal the verdicts. These bellwether trials may lead to broader settlements, similar to historic tobacco litigation, as public concern regarding social media’s developmental impact grows.

      [6.3] Iran starts to formalize its chokehold on the Strait of Hormuz with a ‘toll booth’ regime —apnews.com(+1234)

      Iran is cementing control over the Strait of Hormuz using a mandatory vetting and toll regime, causing global oil prices to surge as shipping traffic drops by 90 percent.

      Ships must now enter Iranian waters for vetting by the Islamic Revolutionary Guards Corps, with some paying fees in yuan.

      While overall traffic has plummeted, vessels linked to Iran and its top customer, China, still frequently transit the vital energy artery.

      Highly covered news with significance over 5.5

      [6.0] OpenAI closes AI video app Sora — abc.net.au (+68)

      [6.0] Wikipedia bans AI-generated content in its online encyclopedia — theguardian.com (+10)

      [5.9] UN General Assembly declares slavery the gravest crime against humanity — www1.folha.uol.com.br (Portuguese) (+43)

      [5.8] Arm enters chip market with AI CPU for data centers — pcworld.com (+17)

      [5.7] South American malaria mosquitoes evolve insecticide resistance — hsph.harvard.edu (+3)

      [6.2] Astronomers observe two giant gas planets forming around a young star — euronews.com (+13)

      [5.7] Ukraine and Saudi Arabia sign defense cooperation agreement — euronews.com (+33)

      [5.6] IOC requires genetic testing for women's Olympic events — npr.org (+81)

      Thanks for reading!

      — Vadim


      You can create your own significance-based RSS feed with premium.



    11. 🔗 r/wiesbaden Hey, ich (21, m) suche neue Leute/Freunde in Wiesbaden rss

I've isolated myself a bit lately and want to change that now. It'd be cool to find people to do things with spontaneously: going out, chilling, gaming, or just chatting.

I'm from near Schleifgraben.

A bit about me:
I game a lot, watch anime, series, and films, and also enjoy going for walks at night (definitely more fun with other people). My favorite games are Hollow Knight, OMORI, and OneShot.

Right now I'm pretty interested in cosplay and trying to get into it a bit. I also usually enjoy going to bars and clubs, even if I've done that less lately.

I have a lot of free time at the moment and I'm also a bit nerdy at times.

If you're from the area and feel like it, just message me!

My Discord: .aymann

My Instagram: https://www.instagram.com/aymaninkoln/

      submitted by /u/Superb_Gas7119
      [link] [comments]

    12. 🔗 @brandur The Second Wave of the API-first Economy rss

      Fifteen years ago, when some colleagues and I were building Heroku's V3 API, we set an ambitious goal: the public API should be powerful enough to run our own dashboard. No private endpoints, no escape hatches.

      It was a stretch, but it worked. A new version of the company's dashboard shipped on V3, and an unaffiliated developer who we'd never met before built Heroku's first iOS app on it, without a single feature request sent our way.


      The first wave

      Our dashboard-on-public-APIs-only seems needlessly idealistic nowadays, but it was an objective born of the time. The year was 2011, and the optimism around the power of APIs was palpable. A new world was opening up. One of openness, interconnectivity, unbounded possibility.

      And we weren't the only ones thinking that way:

      • Only a year before (2010) Facebook released its original Open Graph API, providing immensely powerful insights into its platform data.

      • Twitter's API at the time was almost completely open. You didn't even need an OAuth token -- just authenticate on API endpoints with your username/password and get access to just about anything.

      • GitHub was doing really impressive API design work, providing an expansive, feature-complete API with access to anything developers could need, and playing with forward-thinking ideas like hypermedia APIs/HATEOAS.

      You can still find traces of this bygone era, standing like some cyclopean ruins from a previous age. Hit the root GitHub API and you'll find an artifact over a decade old -- a list of links that were intended to be followed as hypermedia:

      $ curl https://api.github.com | jq
      
      {
        "current_user_url": "https://api.github.com/user",
        "current_user_authorizations_html_url": "https://github.com/settings/connections/applications{/client_id}",
        "authorizations_url": "https://api.github.com/authorizations",
        "code_search_url": "https://api.github.com/search/code?q={query}{&page,per_page,sort,order}",
        "commit_search_url": "https://api.github.com/search/commits?q={query}{&page,per_page,sort,order}",
        "emails_url": "https://api.github.com/user/emails",
        "emojis_url": "https://api.github.com/emojis",
        "events_url": "https://api.github.com/events",
        ...
      

      This wasn't a pre-planned, stack-ranked feature that a product team spent half a year putting together. It was one or two early engineers who got really excited about an API idea, and shipped it, probably without even asking for permission.
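The root index shown above is a set of URI templates in the style of RFC 6570. As a rough illustration of how a hypermedia client would consume it, here's a minimal Python sketch; `fetch_links` and the tiny `expand` helper are my own hypothetical names, and the expander handles only the simple substitutions visible in the listing, not full URI-template syntax.

```python
import json
import re
import urllib.request

def fetch_links(root="https://api.github.com"):
    """Fetch the name -> URI-template map served at the API root."""
    with urllib.request.urlopen(root) as resp:
        return json.load(resp)

def expand(template, **params):
    """Tiny URI-template expander: fill {name} placeholders that were
    supplied, then drop any remaining optional groups like {&page,...}."""
    url = template
    for key, value in params.items():
        url = url.replace("{" + key + "}", str(value))
    return re.sub(r"\{[^}]*\}", "", url)
```

A client built this way never hard-codes endpoint paths; it looks up `code_search_url` (or any other key) at runtime and expands it, which is the hypermedia idea in miniature.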


      Part of the push for open APIs was simple good will towards the rest of the world. The engineers building them were brought up in the earliest days of the internet, steeped in its original counterculture, and had an innate bias for radical openness.

      There was also a feeling from the companies involved that the APIs would be beneficial for their bottom lines. Users and third parties would use APIs to supplement the core product with add-ons and extensions that'd drive growth and increase product retention and satisfaction.

      Sites like the now defunct ProgrammableWeb popped up to discuss and catalog the newly appearing APIs, and the "programmable web" wasn't only a website, it was a principle.

      In the near future, all platforms would be API-first, providing full programmatic access and opening a new wave of interoperability across the web that'd let any service talk to any other service and massively accelerate the scope and reach of the internet. APIs would help expand everything from freedom to communication to commerce. An overwhelming force for good in the world.


      API winter

      Of course, it didn't last. The programmable web went through a phase of expansion, reached its maximum extent, and began to contract.

      • Twitter's famous API, which used to be an API tinkerer's dream, leveled off and began to dip as the company struggled to find ways to generate revenue. New features no longer got first-class API treatment. Access to the firehose was closed. Third-party Twitter clients were restricted and eventually locked out.

      • The power of Facebook's Graph API was hugely constricted post-Cambridge Analytica where a single rogue app was able to suck up data on millions of users and put it up for sale. Strict app review procedures were implemented. The API went from open access to a walled garden.

      • Even more extreme, Instagram's previously public API was deprecated totally. Realizing they had a real money maker on their hands, they saw no reason to share ad revenue with anyone else. Use Instagram through the first-party app or not at all.

      • Even APIs like GitHub's that stayed quite open had to crack down to a degree. Endpoints became authenticated by necessity and aggressive rate limiting was put in to curb abuse and reduce operational toil. And even when APIs were still largely accessible, using them to build a full-scale third-party app became more difficult as limiters flattened heavy (even if legitimate) use.

      The rationale for why APIs were being declawed or disappearing completely varied--abuse, monetization pressure, competitive risk, privacy, etc.--but the pattern was clear. Walls were going up across the world.

      APIs didn't disappear, but it was a cold winter for them. The expectation of an API became more limited to developer-focused platforms whose users paid them -- Stripe, Twilio, Slack, etc. When new consumer products appeared on the market (e.g. TikTok), no one expected them to have much in the way of an API.


      The coming second wave

      For many years this was the status quo. If you were using Twitter, you'd use it from Twitter.com. Facebook, from Facebook.com. Instagram or TikTok, from their respective iOS/Android apps. Developer products like GitHub and Stripe continued strong, but elsewhere, APIs weren't enough of a competitive advantage for anyone who didn't have one to suffer.

      But around mid-2025, the world changed. The last half year especially has been distinguished by the rise of indescribably powerful LLMs, which now dominate discourse as the most useful new tool in a generation.

      They're already useful enough as incredible trivia machines or code generators, but they really start to shine when they integrate with things. It's pretty neat having one generate a valid Kubernetes configuration for your new app, but it's really neat watching it provision an EKS cluster via awscli and send out its first production deploy on your behalf.

Suddenly, an API is no longer a liability, but a major saleable vector to give users what they want: a way into the services they use and pay for so that an agent can carry out work on their behalf. Especially given a field of relatively undifferentiated products, in the near future the availability of an API might just be the crucial deciding factor that leads to one choice winning the field.

      Picking my future bank

Let's think about banks. I have a couple of bank accounts, each offering a standard set of features largely unchanged since the 60s. If I call them, they'll send me some checks. I can request a transfer between two internal accounts and they will transfer the money … in 1-5 business days. Nowadays, they even offer ultra-modern features (from 2010) like, gasp, MFA, just as long as it's through a provider that's paid them off (Symantec VIP). Suffice it to say, they're comfortable in the status quo. My banks do not have good APIs.

      So far this has worked out okay for them. People aren't known to migrate banks often, and even if they did, regulatory moats make new incumbents rare.

      But in the modern age, can it last? When I want to move $100 from one bank to another, my banks put me through a humiliating ritual of logging into both accounts, and bypassing multiple security checks and captchas before I can perform any operation. All this despite me having just logged into both accounts from this exact location and biometrically-secured computer the day before.

      The world I want is to instruct an LLM: "move $100 from Wells Fargo checking to Charles Schwab brokerage" and it will just happen. And to be fair, LLMs are already so absurdly good at reverse engineering things that this might already work today. But you know what'd work better? If both banks shipped with APIs, LLM-friendly usage instructions (through MCP or the like), and a strong auth layer to give me confidence that the whole process is secure.

If I were choosing a bank today, some considerations would be the same as they've always been--competent security, free checking, no foreign transaction fees--but I'd also futureproof the choice by picking one that's established technical bona fides by providing an API. Even if I'm not ready to trust my banking credentials to an agent quite yet, I assume that day is coming.

      Ubiquitous again

      Now apply the same principle to every service you use during the course of a week, or ever:

      • Online marketplaces: Robot, schedule my normal Amazon Fresh order for the first available slot tomorrow morning.

      • Office co-working: Robot, book me a desk at Embarcadero Center today.

      • Ski resorts: Robot, buy me a day pass for tomorrow and load it to my resort card. Confirm the price with me first.

      • Restaurants: Robot, put in my usual lunch order at Musubi Kai. Get me the unadon!

Where wouldn't you want an API?

      Forecasting the future is infamously hazardous, but based on the adoption patterns of myself and the people around me, I expect the demand to interact with services through LLMs is going to be overwhelming, and services aiming to provide a good product experience or which face competitive pressure (i.e. someone else could provide that experience instead) will offer APIs.

I used to wish that we'd gone down an alternative branch of web technology and adopted a protocol like Gopher so we'd have a more standardized web experience, instead of every product you use producing its own unique UX, most of them bad. I think we will see more standardization, just not in the form I expected. The convention of the future will be human language, fed into what looks a lot like a terminal, and fulfilled via API.

      On behalf of people

      Notably, this is different than the first wave of APIs that I described above. Instead of APIs being to offer infinitely flexible access for inter-service communication, scrape data, or build apps on top of someone else's platform, their primary use will be to fulfill requests on behalf of a primary user. Exactly like what they'd be doing through a first-party app, but in a programmatic way.

During the first wave, APIs were largely aimed at third parties who'd use them to extend and augment the underlying platform to provide additional features for users. In the second wave, APIs map cleanly to normal product capabilities. They provide programmatic access for agents that act on behalf of people.

      It may seem like a subtle distinction, but there are considerable differences. The second model better incentivizes APIs to exist:

      • APIs aren't for building a product that aims to displace the offerings of the underlying platform, but rather for giving users an alternative way to access it.

      • Security models are simplified because they're the same ones used by the product itself. Users have the same visibility that they'd have through a first-party app, and no more.

      • Aiming to support access patterns for a single person, platforms can rate limit much more aggressively to curb expenses and operational problems associated with offering an API.

      APIs should aim to provide a little more leeway than they would for a human, but only nominally so. An agent acting on my behalf should be able to occasionally poll LinkedIn for old colleagues that I should be reconnecting with and send them connect requests, but if someone's set up their ClawBot to scrape the entire social graph on their behalf, platforms should feel more than free to throttle the hell out of them and give them a strike towards a permanent ban.

      Slack's rate limits are a good example of this, supporting numbers like 50 channel or 100 profile reads per minute. You can't build a multi-user app with 50 channel reads per minute, but it's plenty for a single user to access their own account.
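The per-user limits described above are conventionally enforced with something like a token bucket. Here's a minimal sketch (illustrative, not Slack's actual implementation; the class name and numbers are my own, with the 50/minute figure taken from the Slack example): steady single-user agent traffic sails through, while a scraper exhausts the bucket immediately.

```python
import time

class TokenBucket:
    """Per-user, per-endpoint-family rate limiter. A bucket refills at
    rate_per_min tokens per minute up to `capacity`; each request spends
    one token, and requests with no token available are rejected."""

    def __init__(self, rate_per_min, capacity=None):
        self.rate = rate_per_min / 60.0          # tokens per second
        self.capacity = capacity if capacity is not None else rate_per_min
        self.tokens = float(self.capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With `TokenBucket(50)`, an agent polling a channel every few seconds never hits the limit, but a bulk scrape burns all 50 tokens in under a second and then gets roughly one request per 1.2 seconds, which is the asymmetry the second-wave model wants.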

      Limits of the model

While we can expect many products and services to offer APIs for good agentic interoperability, it won't be forthcoming everywhere.

      Don't expect much out of Instagram, TikTok, or other platforms that power themselves with ads. Neither from monopolies that won't feel any serious pressure to change -- you won't be reliably paying your Xfinity bill via agent anytime soon.

      Hints of the future, today

      In this section I figured I'd call out a few services that are already pulling this future forward:

      API spring

      Fifteen years ago, us API maximalists thought that APIs were going to eat the world, ushering in a new paradigm of interoperability that would vastly expand our capabilities as users, and even change the world for the better.

      What we got instead was an API winter. As useful as APIs were in some situations, that usefulness was outweighed by concerns around revenue, privacy, and abuse.

      But as scary of a thought as it was that this might be the end, it wasn't. We're at the beginning of a new spring of APIs that'll appear to support use by agents acting on behalf of people. As this mode of operation gets more popular, expect the availability of an API to be a competitive edge that differentiates a service from its competitors. The result will be a global proliferation of APIs and expanding product capability like never before seen.

    13. 🔗 r/LocalLLaMA Skipping 90% of KV dequant work → +22.8% decode at 32K (llama.cpp, TurboQuant) rss

      I’ve been working on an open source TurboQuant implementation for KV cache compression in llama.cpp and ran into a hard bottleneck: dequantization.

      At long context (32K on M5 Max), dequant alone was taking around 40 percent of decode time.

I tried fixing it the usual way:

- register LUTs
- SIMD tricks
- fused kernels
- branchless math

      Tested about 14 different approaches. None beat the baseline. Hardware was already at the limit.

      What ended up working was much simpler.

      Flash attention computes softmax weights before touching V.
      At long context, most of those weights are basically zero.

      So instead of making dequant faster, I just skip V dequant entirely for positions with negligible attention.

      It’s about 3 lines in the kernel.
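The post doesn't include the kernel, so here's a toy Python sketch of the idea only (not the llama.cpp code): after the softmax, skip dequantizing V rows whose attention weight is negligible. The names `attend_sparse` and `dequant` are hypothetical stand-ins.

```python
def attend_sparse(weights, v_quant, dequant, eps=1e-4):
    """weights: softmax attention weights for one query (length T).
    v_quant: T quantized V rows; dequant turns one row into floats.
    Returns the weighted sum over rows, dequantizing only rows whose
    weight is at least eps -- the 'skip' that saves the work."""
    dim = len(dequant(v_quant[0]))
    out = [0.0] * dim
    for w, row in zip(weights, v_quant):
        if w < eps:          # most positions at long context land here
            continue
        vals = dequant(row)
        for i in range(dim):
            out[i] += w * vals[i]
    return out
```

Because the softmax weights sum to 1, dropping sub-`eps` terms discards only a tiny mass of the output, which is consistent with the unchanged PPL reported below.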

      Results on Qwen3.5-35B-A3B (M5 Max):

TurboQuant KV (turbo3):

- +22.8% decode at 32K
- PPL unchanged
- NIAH: 7/9 → 9/9

Standard q8_0 KV cache:

- +5% decode
- PPL identical
- NIAH identical

      So this is not TurboQuant-specific. It’s using attention sparsity directly.

Also tested on M2 Pro:

- 4-mag LUT on K side + sparse V stack cleanly
- turbo3 went from ~0.45x → ~0.73x vs q8_0

      Repo and benchmarks:
      https://github.com/TheTom/turboquant_plus

      Writeup:
https://github.com/TheTom/turboquant_plus/blob/main/docs/papers/sparse-v-dequant.md

      If anyone wants to try this on CUDA or other setups I’d be interested to see results.

      Note: a CUDA port is currently being tested independently. Will share results once available.

      submitted by /u/Pidtom
      [link] [comments]

    14. 🔗 r/york Tailor please rss

      I have a wax jacket and a pair of trousers I want altered. Any recommendations? Cheers.

      submitted by /u/MobiusNaked
      [link] [comments]

    15. 🔗 Hex-Rays Blog Product Update: IDA 9.3sp1 Release rss

      IDA 9.3sp1

      We are pleased to announce the release of the first IDA 9.3 Service Pack (sp1).

    16. 🔗 r/york Are there any working automatic car washers in York? rss

      The machines at Sainsbury’s and Morrisons were both broken last time I checked.

      submitted by /u/OneItchy396
      [link] [comments]

    17. 🔗 r/Leeds Connexions services over Easter rss
    18. 🔗 r/york Lidl horse rss

      Feeling concerned about the horse that is often kept around Foss islands. It never looks to have any food, not being groomed or looked after, being left outside in storms, etc - surely this is animal abuse? Will reporting it be of any use or is there nothing to be done? Makes me sad every time I see it.

      submitted by /u/Icy-Strength7691
      [link] [comments]

    19. 🔗 r/Yorkshire University tutors pay tribute to 'warm' Leeds student who died in Woodhouse Lane car crash rss
    20. 🔗 r/LocalLLaMA GLM 5.1 is out rss

      GLM 5.1 is out | submitted by /u/Namra_7
      [link] [comments]
      ---|---

    21. 🔗 The Pragmatic Engineer Is the FDE role becoming less desirable? rss

      Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover Big Tech and startups through the lens of senior engineers and engineering leaders. Today, we cover one out of four topics from last week's The Pulse issue. Full subscribers received the article below seven days ago. If you've been forwarded this email, you can subscribe here.

      An interesting trend highlighted by The Wall Street Journal: companies want to hire for FDE roles, but devs are just not that interested:

      "Job postings on Indeed grew more than 10-fold in 2025 compared with 2024. The number of public company transcripts mentioning the role jumped to 50 from eight over the same period, according to data from AlphaSense.

      The only problem? Few engineers want the job, which has historically been seen as demanding, undesirable, and less prestigious than product-focused engineering roles.

      "Everyone wants them and there's only maybe 10% of the market that wants that role," said Patrick Kellenberger, president and chief operating officer at Betts Recruiting."

      Last summer, we covered the rise of the FDE role, and looked into what it's like. Back then, this is how I visualized what was then a very hot role:

      My 2025 visualization of the FDE role

      At the companies where I interviewed FDE folks - OpenAI and Ramp - the role seemed to live up to this visualization. However, I've since talked with two engineers who took FDE roles and were disappointed. This is how they saw it, in practice:

      Reality of the FDE role: less software engineering, and even less platform engineering

      The role seems akin to a "sales engineer" where FDEs help close the deals, or a solutions engineer (or even consultant), where FDEs deploy to a customer to build them a solution. They don't contribute back into the platform, and don't do much that's considered "software engineering" beyond integrating software which the product team built.

      Some engineers figure out the nature of the role during the interview process and pass on it. Others take the job and later quit. Here's what a dev who accepted an FDE role at a company, but didn't find what they expected, told me:

      "This FDE job was a typical IT services mindset. The company wanted to use me more on the engagement lead side, and nothing on software development. It's not what I signed up for, and I didn't like the vibe and culture. I quit 4 weeks later."

      In today's job market, if there's high demand for a role which pays decently but attracts little interest from engineers, there's always a reason!


      Read the full issue of last week's The Pulse, or check out this week's The Pulse.

      Catch up with recent The Pragmatic Engineer issues:

    22. 🔗 r/Leeds the city at dusk rss
    23. 🔗 r/Leeds Can't park there mate rss

      The new buses are looking a bit different

      submitted by /u/AvinchMC
      [link] [comments]

    24. 🔗 Textualize/textual The Select Release release

      This release enhances text selection with auto-scrolling and the ability to select across container widgets.

      This work was sponsored by Mistral AI.

      [8.2.0] - 2026-03-27

      Added

      • Auto-scrolling on select #6440
      • Selecting over containers #6440
      • Added App.ENABLE_SELECT_AUTO_SCROLL, App.SELECT_AUTO_SCROLL_LINES, App.SELECT_AUTO_SCROLL_SPEED to tweak auto scrolling behavior #6440
    25. 🔗 badlogic/pi-mono v0.63.1 release

      Added

      • Added gemini-3.1-pro-preview-customtools model availability for the google-vertex provider (#2610 by @gordonhwc)

      Fixed

      • Documented tool_call input mutation as supported extension API behavior, clarified that post-mutation inputs are not re-validated, and added regression coverage for executing mutated tool arguments (#2611)
      • Fixed repeated compactions dropping messages that were kept by an earlier compaction by re-summarizing from the previous kept boundary and recalculating tokensBefore from the rebuilt session context (#2608)
      • Fixed interactive compaction UI updates so ctx.compact() rebuilds the chat through unified compaction events, manual compaction no longer duplicates the summary block, and the trigger-compact example only fires when context usage crosses its threshold (#2617)
      • Fixed interactive compaction completion to append a synthetic compaction summary after rebuilding the chat so the latest compaction remains visible at the bottom
      • Fixed skill discovery to stop recursing once a directory contains SKILL.md, and to ignore root *.md files in .agents/skills while keeping root markdown skill files supported in ~/.pi/agent/skills, .pi/skills, and package skills/ directories (#2603)
      • Fixed edit tool diff rendering for multi-edit operations with large unchanged gaps so distant edits collapse intermediate context instead of dumping the full unchanged middle block
      • Fixed edit tool error rendering to avoid repeating the same exact-match failure in both the preview and result blocks
      • Fixed auto-compaction overflow recovery for Ollama models when the backend returns explicit prompt too long; exceeded max context length ... errors instead of silently truncating input (#2626)
      • Fixed built-in tool overrides that reuse built-in parameter schemas to still honor custom renderCall and renderResult renderers in the interactive TUI, restoring the minimal-mode example (#2595)
    26. 🔗 badlogic/pi-mono v0.63.0 release

      Breaking Changes

      • ModelRegistry.getApiKey(model) has been replaced by getApiKeyAndHeaders(model) because models.json auth and header values can now resolve dynamically on every request. Extensions and SDK integrations that previously fetched only an API key must now fetch request auth per call and forward both apiKey and headers. Use getApiKeyForProvider(provider) only when you explicitly want provider-level API key lookup without model headers or authHeader handling (#1835)
      • Removed deprecated direct minimax and minimax-cn model IDs, keeping only MiniMax-M2.7 and MiniMax-M2.7-highspeed. Update pinned model IDs to one of those supported direct MiniMax models, or use another provider route that still exposes the older IDs (#2596 by @liyuan97)

      Migration Notes

      Before:

      const apiKey = await ctx.modelRegistry.getApiKey(model);
      return streamSimple(model, messages, { apiKey });
      

      After:

      const auth = await ctx.modelRegistry.getApiKeyAndHeaders(model);
      if (!auth.ok) throw new Error(auth.error);
      return streamSimple(model, messages, {
        apiKey: auth.apiKey,
        headers: auth.headers,
      });
      

      Added

      • Added sessionDir setting support in global and project settings.json so session storage can be configured without passing --session-dir on every invocation (#2598 by @smcllns)
      • Added a startup onboarding hint in the interactive header telling users pi can explain its own features and documentation (#2620 by @ferologics)
      • Added edit tool multi-edit support so one call can update multiple separate, disjoint regions in the same file while matching all replacements against the original file content
      • Added support for PI_TUI_WRITE_LOG directory paths, creating a unique log file (tui-<timestamp>-<pid>.log) per instance for easier debugging of multiple pi sessions (#2508 by @mrexodia)

      Fixed

      • Fixed file mutation queue ordering so concurrent edit and write operations targeting the same file stay serialized in request order instead of being reordered during queue-key resolution
      • Fixed models.json shell-command auth and headers to resolve at request time instead of being cached into long-lived model state. pi now leaves TTL, caching, and recovery policy to user-provided wrapper commands because arbitrary shell commands need provider-specific strategies (#1835)
      • Fixed Google and Vertex cost calculation to subtract cached prompt tokens from billable input tokens instead of double-counting them when providers report cachedContentTokenCount (#2588 by @sparkleMing)
      • Added missing ajv direct dependency; previously relied on transitive install via @mariozechner/pi-ai which broke standalone installs (#2252)
      • Fixed /export HTML backgrounds to honor theme.export.pageBg, cardBg, and infoBg instead of always deriving them from userMessageBg (#2565)
      • Fixed interactive bash execution collapsed previews to recompute visual line wrapping at render time, so previews respect the current terminal width after resizes and split-pane width changes (#2569)
      • Fixed RPC get_session_stats to expose contextUsage, so headless clients can read actual current context-window usage instead of deriving it from token totals (#2550)
      • Fixed pi update for git packages to fetch only the tracked target branch with --no-tags, reducing unrelated branch and tag noise while preserving force-push-safe updates (#2548)
      • Fixed print and JSON modes to emit session_shutdown before exit, so extensions can release long-lived resources and non-interactive runs terminate cleanly (#2576)
      • Fixed GitHub Copilot OpenAI Responses requests to omit the reasoning field entirely when no reasoning effort is requested, avoiding 400 errors from Copilot gpt-5-mini rejecting reasoning: { effort: "none" } during internal summary calls (#2567)
      • Fixed blockquote text color breaking after inline links (and other inline elements) due to missing style restoration prefix
      • Fixed slash-command Tab completion from immediately chaining into argument autocomplete after completing the command name, restoring flows like /model that submit into a selector dialog (#2577)
      • Fixed stale content and incorrect viewport tracking after TUI content shrinks or transient components inflate the working area (#2126 by @Perlence)
      • Fixed @ autocomplete to debounce editor-triggered searches, cancel in-flight fd lookups cleanly, and keep suggestions visible while results refresh (#1278)
    27. 🔗 HexRaysSA/plugin-repository commits sync plugin-repository.json rss
      sync plugin-repository.json
      
      No plugin changes detected
      
    28. 🔗 exe.dev Everyone is building a software factory rss

      We are all grappling with what it means to be an organization with agentic tools. We are seeing a Cambrian explosion of workflows in how to produce software. It is unwise, right now, to declare The Solution and enforce it. Developer Productivity teams that are pushing a workflow on their users are being counterproductive. Instead, the moment calls for experimentation and for giving people the agency to experiment, to learn, to iterate.

      The key is the compute primitive. You–and everyone else on your team–need to have plentiful, performant, trivial-to-provision VMs that can be accessed from your phone or anywhere, that can be shared securely, that integrate nicely, and that can be trusted with your data. Given this, you'll find an explosion of agents, automations, UIs, workflows, notifications, bots, claws, and so on. The successful ones will evolve to be the bones of your software factory.

      This is not a One Size Fits All moment. This is an Everyone's Workflow is Different moment.

      We went around the office recently, and talked through our workflows. 7 people. 9 workflows. (Not a joke!) Everyone's are different. Everyone's are wonderful. There's the newsletter that visits our Slack and tells us what's going on in support rotation. There's the integration with our Clickhouse logs. There's the background agent fighting the noble fight against test flakes. There are multi-agent orchestrators. There's an "inbox" view that gathers agent conversation state from all the VMs and sorts them by recency and annotates whether they've been pushed. There's vanilla Claude Code. There's the pi coding agent. There's our own coding agent, Shelley.

      The only common denominator? We're all using VMs to isolate, try, share, iterate, parallelize. So many VMs.

  4. March 26, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-26 rss

      IDA Plugin Updates on 2026-03-26

      New Releases:

      Activity:

      • augur
        • 229654f8: test: improve integration tests
      • capa
        • 4ba1b5d2: build(deps): bump bump-my-version from 1.2.4 to 1.3.0 (#2963)
        • f694c2ae: build(deps): bump picomatch in /web/explorer (#2967)
      • Greffe
      • haruspex
        • 5442fe2f: test: improve integration tests
      • hrtng
        • 3c2438e5: refresh all widgets after "Refactoring";
      • ida-domain
        • 7de2e5c1: Extend microcode module (#65)
      • plugin-ida
        • 89b0becb: Merge pull request #108 from RevEngAI/feat-PLU-256
        • 56cb7843: chore: bump package version
        • ed47fd57: Merge pull request #107 from RevEngAI/feat-PLU-256
        • 4e481e23: chore: bump package version
        • 10b07b25: Merge pull request #106 from RevEngAI/feat-PLU-256
        • 58a4724d: feat(PLU-256): plugin boundary changes
      • python-elpida_core.py
        • 2f5cb3d8: Reduce aggressive external request patterns (anti-abuse)
        • 9412a821: Pause BODY loop during Live Audit to prevent OOM crash
        • 1adbf746: Strip trailing newline from REPLICATE_API_TOKEN
        • 72d5ced6: Fix Live Audit results lost on Vision button click
        • 40bd18fe: Add Replicate Flux vision generator to Live Audit
        • 38a894aa: A16 governance integration: keywords (20), IANUS support, UI 16 axiom…
        • 15637da8: Add A16 + missing A11-A14 across BODY: embeddings (11→16), banner, d1…
        • 37cf4d9f: Fix D15 provider parity: HF openrouter→convergence to match root
        • b052e016: Diplomat layer + A16 Responsive Integrity ratification + cleanup
      • rhabdomancer
        • 71f97710: test: improve integration tests
    2. 🔗 r/Leeds Wire Nightclub rss

      I took this whilst in the queue at 1.00 in the morning, the week before COVID lockdown. Seemed to capture the chilled club vibe nicely.

      submitted by /u/ApprehensiveArm5689
      [link] [comments]

    3. 🔗 r/Yorkshire Progress rss
    4. 🔗 r/LocalLLaMA Dual DGX Sparks vs Mac Studio M3 Ultra 512GB: Running Qwen3.5 397B locally on both. Here's what I found. rss

      I was spending about $2K/month on Claude API tokens for a personal AI assistant I run through Slack. After about 45 days of that cost pain I decided to go local. Bought both a dual DGX Spark setup and a Mac Studio M3 Ultra 512GB, each cost me about $10K after taxes. Same price, completely different machines. Here is what I learned running Qwen3.5 397B A17B on both.

      The Mac Studio

      MLX 6-bit quantization, 323GB model loaded into 512GB unified memory. 30 to 40 tok/s generation. The biggest selling point is memory bandwidth at roughly 800 GB/s. That bandwidth is what makes token generation feel smooth on such a massive model in a single box. Setup was easy: install mlx-vlm, point it at the model, done. The weakness is raw compute. Prefill is slow (30+ seconds on a big system prompt with tool definitions), and if you want to do batch embedding alongside inference, you are going to feel it. I also had to write a 500-line async proxy because mlx-vlm does not parse tool calls or strip thinking tokens natively.
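      For a sense of what such a proxy has to do, here is a hedged sketch of the two missing pieces: stripping thinking tokens and pulling out tool calls. The <think> and <tool_call> tag formats are assumptions based on Qwen-style output, not mlx-vlm specifics:

```python
import json
import re

# Assumed Qwen-style output: chain-of-thought wrapped in <think>...</think>
# and tool invocations in <tool_call>{json}</tool_call>.
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)
TOOL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def strip_thinking(text: str) -> str:
    """Drop thinking blocks before forwarding a completion downstream."""
    return THINK_RE.sub("", text)

def extract_tool_calls(text: str):
    """Return (clean_text, parsed_tool_calls) from raw model output."""
    calls = [json.loads(m) for m in TOOL_RE.findall(text)]
    clean = re.sub(r"<tool_call>.*?</tool_call>\s*", "", text, flags=re.DOTALL)
    return clean.strip(), calls
```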

      The Dual Sparks

      INT4 AutoRound quantization, 98GB per node loaded across two 128GB nodes via vLLM TP=2. 27 to 28 tok/s generation. The biggest selling point is processing speed. CUDA tensor cores, vLLM kernels, tensor parallelism. Prefill is noticeably faster than the Mac Studio. Batch embedding that takes days on MLX finishes in hours on CUDA. The entire open source GPU ecosystem just works. The weakness is memory bandwidth at roughly 273 GB/s per node, which is why generation tops out lower than the Mac Studio despite having more compute.

      The setup was brutal though. Only one QSFP cable works (the second crashes NCCL). Node2's IP is ephemeral and disappears on reboot. The GPU memory utilization ceiling is 0.88 and you have to binary search for it because going to 0.9 starves the OS and 0.85 OOMs at 262K context. Every wrong guess costs you 15 minutes while checkpoint shards reload. You have to flush page cache on BOTH nodes before every model load or you get mystery OOM failures. Some units thermal throttle within 20 minutes. It took me days to get stable.
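      The utilization hunt is just a binary search over an expensive boolean probe. A hypothetical sketch (try_load stands in for an actual vLLM load attempt; the post puts each failed guess at about 15 minutes):

```python
def find_gpu_util_ceiling(try_load, lo=0.80, hi=0.95, tol=0.01):
    """Binary-search the highest gpu_memory_utilization that still loads.

    try_load(x) is a caller-supplied probe that attempts a model load at
    utilization x and returns True on success, False on OOM or OS
    starvation. A coarse tol keeps the number of slow probes small
    (four probes for this lo/hi/tol).
    """
    best = None
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if try_load(mid):
            best, lo = mid, mid  # success: try pushing higher
        else:
            hi = mid             # failure: back off
    return best
```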

      Why I kept both

      I am building a RAG pipeline with Qwen3 Embedding 8B and Qwen3 Reranker 8B for a personal knowledge base. On the Mac Studio, those models would compete with the main model for the same 512GB memory pool. On the Sparks, they get dedicated CUDA and never touch inference memory.

      So the architecture ended up being: Mac Studio handles inference only (full 512GB for the model and KV cache). Sparks handle RAG, embedding, reranking, and everything else. They talk over Tailscale.

      Head to head numbers

      | Mac Studio 512GB | Dual DGX Spark
      ---|---|---
      Cost | $10K | $10K
      Memory | 512GB unified | 256GB (128×2)
      Bandwidth | ~800 GB/s | ~273 GB/s per node
      Quant | MLX 6 bit (323GB) | INT4 AutoRound (98GB/node)
      Gen speed | 30 to 40 tok/s | 27 to 28 tok/s
      Max context | 256K tokens | 130K+ tokens
      Setup | Easy but hands on | Hard
      Strength | Bandwidth | Compute
      Weakness | Compute | Bandwidth

      If you can only buy one

      I cannot tell you which is better because if one were clearly better I would have returned the other. They optimize for different things.

      Mac Studio if you want it to just work, you want that 800 GB/s bandwidth for smooth generation, and you are not planning heavy embedding workloads alongside inference. An RTX 6000 Pro build was my third option but I did not want to build a custom PC on top of everything else I was planning on for this.

      Dual Sparks if you are comfortable with Linux and Docker, you want CUDA and vLLM natively, you plan to run RAG or embedding alongside inference, and you are willing to spend days on initial setup for a more powerful platform long term.

      The Mac Studio gives you 80% of the experience with 20% of the effort. The Sparks give you more capability but they extract a real cost in setup time.

      Break even math

      $2K/month API spend. $20K total hardware. 10 months to break even. After that it is free inference forever with complete privacy and no rate limits.
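      Or, as a sanity check on the figures in the post:

```python
api_spend_per_month = 2_000   # USD/month, prior Claude API bill
hardware_cost = 20_000        # USD, Mac Studio + dual Sparks combined
months_to_break_even = hardware_cost / api_spend_per_month
assert months_to_break_even == 10.0
```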

      I wrote a longer version of this with more detail on the full build out at https://substack.com/home/post/p-192255754 . Building a series covering the full stack including vLLM tuning, RAG without LangChain, and QLoRA fine tuning a 397B MoE. Happy to answer questions.

      submitted by /u/trevorbg
      [link] [comments]

    5. 🔗 r/Harrogate Bramham Drive HG2 area rss

      Hi everyone, I'm looking at possibly buying a flat in the Bramham Drive area. Had a look online and seen slightly elevated levels of crime. Can anybody shine a light on the area for me please? What's it like there and should I be concerned?

      Thanks a lot

      submitted by /u/blkhlznrevltionz
      [link] [comments]

    6. 🔗 r/reverseengineering r2gopclntabParser: A radare2-based Go gopclntab parser for recovering function symbols from Go binaries, including fully stripped ones. rss
    7. 🔗 r/york Kickabout Community rss

      Enjoy a friendly football game to break up the week. Kickabout Community supports independent 5-a-side and 7-a-side adult football games across York. We're a volunteer-run group of organisers, making football accessible for players of all abilities, genders, ages, and fitness levels.

      👉 Join Kickabout Community here: https://chat.whatsapp.com/CSt29p06AGLL1E91uu5Eze

      📍 Pitches used:
      • York Sports Village
      • University of York Sports Centre
      • PlayFootball Clifton Moor
      • Energise Acomb

      💷 Subs: £3-4 per session (covering pitch hire, balls, and bibs)

      We are not a business and not profit-making. Any surplus funds go to player socials or charitable donations.

      submitted by /u/Chance_Board_5424
      [link] [comments]

    8. 🔗 r/Leeds Woodhouse firework rss

      Does anyone living near Woodhouse know what the firework happening right now is about? It sounds like world war 3.

      submitted by /u/CraftyBrie
      [link] [comments]

    9. 🔗 r/york Sarah Ferguson stripped of Freedom of City of York title rss

      Sarah Ferguson stripped of Freedom of City of York title | submitted by /u/Kagedeah
      [link] [comments]
      ---|---

    10. 🔗 r/Leeds The Empire Cafe illustration rss

      I've been drawing pictures of Leeds now for like 10 years but I'm still not bored of drawing the city at night! Here's Empire Cafe on Fish Street :)

      submitted by /u/zacrosso_art
      [link] [comments]

    11. 🔗 r/Yorkshire Had a grand day out today in Pickering, too early in season for the castle, museum and railway, but had lots to do and visit rss

      Had a grand day out today in Pickering, too early in season for the castle, museum and railway, but had lots to do and visit | submitted by /u/arioandy
      [link] [comments]
      ---|---

    12. 🔗 r/Leeds Leeds Photos rss

      I bought a new camera in a bid to use my phone less, found I quite enjoy taking photos. I don't have much of a clue what I'm doing but took these recently.

      submitted by /u/Phil-pot
      [link] [comments]

    13. 🔗 r/Yorkshire A nostalgic long read about clubbing in Leeds in the noughties - hopefully it’s of interest to some of you! rss

      A nostalgic long read about clubbing in Leeds in the noughties - hopefully it’s of interest to some of you! | submitted by /u/Andyc1421
      [link] [comments]
      ---|---

    14. 🔗 r/reverseengineering Latest Akamai v3 deobfuscator static reversal of dynamic per request rss
    15. 🔗 r/york Daffodils rss

      I've never given much thought to the daffodils that are everywhere in York at this time of year, but is there a reason, historical or otherwise why there are so many in so many places throughout the city?

      submitted by /u/Shoddy-Television530
      [link] [comments]

    16. 🔗 r/Leeds A nostalgic long read about clubbing in Leeds in the noughties - hopefully it’s of interest to some of you! rss
    17. 🔗 r/york First time visiting rss

      I (M22) am going to York for the first time between the 30th of March and the 1st of April. Is there anything I should definitely check out that I might not have heard about? (I am not bringing my car with me, so it will all have to be quite local.) And is there anything I should know or be aware of before I go? (Such as: do I need to book tickets for the dungeons, or can I pay at the entrance?)

      I am always open to meet new people too, so if anyone would like to join me for museums or attractions, feel free to shoot me a message.

      Thank you so much for all your help!

      submitted by /u/SneakingALook
      [link] [comments]

    18. 🔗 r/LocalLLaMA Mistral AI to release Voxtral TTS, a 3-billion-parameter text-to-speech model with open weights that the company says outperformed ElevenLabs Flash v2.5 in human preference tests. The model runs on about 3 GB of RAM, achieves 90-millisecond time-to-first-audio, supports nine languages. rss

      VentureBeat: Mistral AI just released a text-to-speech model it says beats ElevenLabs — and it's giving away the weights for free: https://venturebeat.com/orchestration/mistral-ai-just-released-a-text-to-speech-model-it-says-beats-elevenlabs-and

      Mistral AI unlisted video on YouTube: Voxtral TTS. Find your voice.: https://www.youtube.com/watch?v=_N-ZGjGSVls

      Mistral news (404): https://mistral.ai/news/voxtral-tts

      submitted by /u/Nunki08
      [link] [comments]

    19. 🔗 r/Harrogate Shocking behaviour... rss

      Shocking behaviour... | They look like such nice ladies, too... submitted by /u/LurkishEmpire
      [link] [comments]
      ---|---

    20. 🔗 r/york Best Margs in York? rss

      I’m on the hunt. The hunt for good margaritas. I’ve only really discovered this cocktail in the last 12 months but for a few reasons I’ve drank most of the ones I’ve tried in other cities rather than my own.

      When they’re good. Holy hell they’re amazing. When they’re bad, it’s beyond disappointing.

      Recently I’ve tried a few places in York (where I’m born and bred) and been disappointed each time.

      - Evil eye was average at best

      - Fossgate social was ok, the spicy was better than the standard.

      I’ve not found one in York that’s been comparable to the good ones I’ve had elsewhere yet.

      An easy way I’ve found to dismiss a large cohort is if they use table salt rather than crystal salt. Let’s get the basics right please York garrrr.

      So…give me the locations of decent margs in York centre please! I'm not looking for 'xxx might be decent', I'd like first-hand recommendations based on experience - thanks pals.

      submitted by /u/robbo909
      [link] [comments]

    21. 🔗 r/LocalLLaMA RotorQuant: 10-19x faster alternative to TurboQuant via Clifford rotors (44x fewer params) rss

      Kinda sounds ridiculous - but I reimagined / reinvented TurboQuant with Clifford Algebra Vector Quantization, implemented on both CUDA + Metal shaders:
      https://github.com/tonbistudio/turboquant-pytorch/pull/4
      https://github.com/TheTom/turboquant_plus/pull/34

      The idea: replace the d×d random orthogonal matrix Π with Clifford rotors in Cl(3,0). Instead of a dense matmul (16,384 FMAs for d=128), chunk the vector into groups of 3 dims and rotate each with a 4-parameter rotor via the sandwich product RvR̃ (~100 FMAs total).

      Results on Qwen2.5-3B-Instruct KV cache:
      - Cosine similarity: 0.990 (vs TurboQuant's 0.991) — effectively identical
      - 44× fewer parameters (372 vs 16,399 for d=128)
      - Fused CUDA kernel: 10-19× faster than cuBLAS matmul on RTX PRO 4000
      - Fused Metal shader: 9-31× faster on Apple M4
      - Perfect 9/9 needle-in-haystack at all bit-widths

      The key insight: for pure vectors, the rotor sandwich is equivalent to a sparse 3×3 rotation — the fused kernel keeps everything in registers with no memory round-trips, which is why it beats the BLAS GEMM despite TurboQuant's matmul being highly optimized.

      The tradeoff is higher synthetic MSE on random unit vectors (the block-diagonal rotation doesn't induce the exact Beta distribution). But with QJL correction, real-model attention fidelity is identical — and sometimes better on top-1/top-5 retrieval.

      Paper: https://www.scrya.com/rotorquant/
      Code: https://github.com/scrya-com/rotorquant
      PDF: https://www.scrya.com/rotorquant.pdf

      submitted by /u/Revolutionary_Ask154
      [link] [comments]
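      The sandwich product in the RotorQuant post maps directly onto quaternion rotation (the even subalgebra of Cl(3,0) is isomorphic to the quaternions). A NumPy sketch of the chunk-and-rotate idea; the (w, x, y, z) rotor layout and the function names are my assumptions, not the paper's fused kernel:

```python
import numpy as np

def rotor_apply(r, v):
    """Apply a Cl(3,0) rotor (scalar + bivector, stored as a 4-vector
    (w, x, y, z) and treated as a unit quaternion) to a 3-vector via
    the sandwich product R v R~."""
    w, x, y, z = r / np.linalg.norm(r)
    q = np.array([x, y, z])
    # standard quaternion rotation: v' = v + 2w(q x v) + 2 q x (q x v)
    t = 2.0 * np.cross(q, v)
    return v + w * t + np.cross(q, t)

def rotorquant_rotate(vec, rotors):
    """Chunk a d-dim vector into groups of 3 and rotate each chunk with
    its own 4-parameter rotor: ~d/3 tiny rotations instead of one dense
    d x d matmul. Leftover dims (d % 3) pass through unrotated here."""
    out = np.asarray(vec, dtype=np.float64).copy()
    d = len(out)
    for i, r in zip(range(0, d - d % 3, 3), rotors):
        out[i:i + 3] = rotor_apply(np.asarray(r, dtype=np.float64), out[i:i + 3])
    return out
```

      Each rotor preserves the norm of its 3-dim chunk, so the whole block-diagonal transform is orthogonal, which is the property the quantizer relies on.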

    22. 🔗 r/wiesbaden Ramen near the city centre rss

      Hi everyone,

      I moved to Wiesbaden last month and I'm still looking for a good ramen place, ideally in the city centre. I've tried one or two so far, but haven't really been impressed yet.

      Do you have any recommendations?

      Bonus points if you can pick your own ingredients at the start instead of only choosing from preset menus.

      Thanks!

      submitted by /u/Amarku
      [link] [comments]

    23. 🔗 r/Yorkshire Caught this moment at Whitby Abbey last summer rss

      Caught this moment at Whitby Abbey last summer | Just one of those moments where everything lined up submitted by /u/Effective_Sink_3934
      [link] [comments]
      ---|---

    24. 🔗 r/Yorkshire First Puffins of 2026… RSPB Bempton Cliffs rss
    25. 🔗 r/reverseengineering My DAP couldn't display Arabic text, so I reverse engineered the firmware format to fix it rss
    26. 🔗 Andrew Healey's Blog Building a Runtime with QuickJS rss

      Building a tiny JavaScript runtime on top of QuickJS with timers, file I/O, and an event loop.

    27. 🔗 Rust Blog Announcing Rust 1.94.1 rss

      The Rust team has published a new point release of Rust, 1.94.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

      If you have a previous version of Rust installed via rustup, getting Rust 1.94.1 is as easy as:

      rustup update stable
      

      If you don't have it already, you can get rustup from the appropriate page on our website.

      What's in 1.94.1

      Rust 1.94.1 resolves three regressions that were introduced in the 1.94.0 release.

      And a security fix:

      Contributors to 1.94.1

      Many people came together to create Rust 1.94.1. We couldn't have done it without all of you. Thanks!

    28. 🔗 Console.dev newsletter EmailMD rss

      Description: Generate emails from Markdown.

      What we like: Uses Markdown templates to generate email output that works across mail clients. Customizable themes and fonts. Includes common components e.g. buttons, tables, images, callouts, hero. Wraps mjml which handles the compatible conversions.

      What we dislike: Built with TypeScript which makes it difficult to use from other languages.

    29. 🔗 Console.dev newsletter Pyodide rss

      Description: Run Python in the browser.

      What we like: Ports CPython to Wasm so it can run in the browser. Any pip package that has a wheel is supported. Includes a JS FFI so you can work directly with the browser (Pyodide already gets access to web APIs).

      What we dislike: Wasm/browser environment is single threaded so multi-threading or multi-processing isn’t supported. Also has relatively low memory limits due to Wasm limitations.

    30. 🔗 Ampcode News GPT-5.4 in Deep rss

      GPT-5.4 now powers Amp's deep mode.

      It's the best model in the world right now.

      It's faster than GPT-5.3-Codex and still as great at coding.

      But, out of the box, GPT-5.4 was too chatty. That's not what we want for deep; it's not a pair programmer, it's supposed to go off and solve the problem.

      So we tuned GPT-5.4 to behave like GPT-5.3-Codex.

      Once we had that, we started to use it exclusively; even for interactive tasks. We run it at very high reasoning (deep^3) and still prefer it when we need fast interaction and fast reaction. It takes steering better than GPT-5.3-Codex.

      To use it: open the command palette with Ctrl-O and run the command mode: use deep in the Amp CLI, or select deep mode in the Amp editor extension's prompt field. By default it uses deep^2, you can switch to deep^3 by hitting Opt-D.