

to read (pdf)

  1. Neobrutalism components - Start making neobrutalism layouts today
  2. Debunking zswap and zram myths
  3. Building a Pipeline for Agentic Malware Analysis | Tim Blazytko
  4. Study of Binaries Created with Rust through Reverse Engineering - JPCERT/CC Eyes | JPCERT Coordination Center official Blog
  5. Letting AI Actively Manage Its Own Context | æ˜Žć€©çš„äčŒäș‘

  1. March 28, 2026
    1. 🔗 HexRaysSA/plugin-repository commits sync repo: +2 releases, ~1 changed rss
      
      ## New releases
      - [drop-all-the-files](https://github.com/milankovo/ida-drop-all-the-files): 1.4.0
      - [global-struct-dissector](https://github.com/williballenthin/idawilli): 0.1.1
      
      ## Changes
      - [oplog](https://github.com/williballenthin/idawilli):
        - 0.3.0: archive contents changed, download URL changed
      
    2. 🔗 r/york rail replacement buses, sunday 29/03 rss

      Hi, I’m travelling tomorrow on one of the rail replacement buses to Newcastle. Where is the bus stop for this? Is it Leeman Road, next to the Memorial Gardens?

      submitted by /u/aster0idzz

  2. March 27, 2026
    1. 🔗 r/LocalLLaMA Google TurboQuant running Qwen Locally on MacAir rss

      Hi everyone, we just ran an experiment. We patched llama.cpp with Google’s new TurboQuant compression method and then ran Qwen 3.5–9B on a regular MacBook Air (M4, 16 GB) with a 20,000-token context. Previously, it was basically impossible to handle large-context prompts on this device, but with the new algorithm it now seems feasible. Imagine running OpenClaw on a regular device for free! Just a MacBook Air or Mac Mini, not even a Pro model, the cheapest ones. It’s still a bit slow, but the newer chips are making it faster. Link for the macOS app: atomic.chat - open source and free. Curious if anyone else has tried something similar?

      submitted by /u/gladkos

    2. 🔗 anthropics/claude-code v2.1.86 release

      What's changed

      • Added X-Claude-Code-Session-Id header to API requests so proxies can aggregate requests by session without parsing the body
      • Added .jj and .sl to VCS directory exclusion lists so Grep and file autocomplete don't descend into Jujutsu or Sapling metadata
      • Fixed --resume failing with "tool_use ids were found without tool_result blocks" on sessions created before v2.1.85
      • Fixed Write/Edit/Read failing on files outside the project root (e.g., ~/.claude/CLAUDE.md) when conditional skills or rules are configured
      • Fixed unnecessary config disk writes on every skill invocation that could cause performance issues and config corruption on Windows
      • Fixed potential out-of-memory crash when using /feedback on very long sessions with large transcript files
      • Fixed --bare mode dropping MCP tools in interactive sessions and silently discarding messages enqueued mid-turn
      • Fixed the c shortcut copying only ~20 characters of the OAuth login URL instead of the full URL
      • Fixed masked input (e.g., OAuth code paste) leaking the start of the token when wrapping across multiple lines on narrow terminals
      • Fixed official marketplace plugin scripts failing with "Permission denied" on macOS/Linux since v2.1.83
      • Fixed statusline showing another session's model when running multiple Claude Code instances and using /model in one of them
      • Fixed scroll not following new messages after wheel scroll or click-to-select at the bottom of a long conversation
      • Fixed /plugin uninstall dialog: pressing n now correctly uninstalls the plugin while preserving its data directory
      • Fixed a regression where pressing Enter after clicking could leave the transcript blank until the response arrived
      • Fixed ultrathink hint lingering after deleting the keyword
      • Fixed memory growth in long sessions from markdown/highlight render caches retaining full content strings
      • Reduced startup event-loop stalls when many claude.ai MCP connectors are configured (macOS keychain cache extended from 5s to 30s)
      • Reduced token overhead when mentioning files with @ — raw string content no longer JSON-escaped
      • Improved prompt cache hit rate for Bedrock, Vertex, and Foundry users by removing dynamic content from tool descriptions
      • Memory filenames in the "Saved N memories" notice now highlight on hover and open on click
      • Skill descriptions in the /skills listing are now capped at 250 characters to reduce context usage
      • Changed /skills menu to sort alphabetically for easier scanning
      • Auto mode now shows "unavailable for your plan" when disabled by plan restrictions (was "temporarily unavailable")
      • [VSCode] Fixed extension incorrectly showing "Not responding" during long-running operations
      • [VSCode] Fixed extension defaulting Max plan users to Sonnet after the OAuth token refreshes (8 hours after login)
      • Read tool now uses compact line-number format and deduplicates unchanged re-reads, reducing token usage
    3. 🔗 Simon Willison Vibe coding SwiftUI apps is a lot of fun rss

      I have a new laptop - a 128GB M5 MacBook Pro, which early impressions show to be very capable for running good local LLMs. I got frustrated with Activity Monitor and decided to vibe code up some alternative tools for monitoring performance and I'm very happy with the results.

      This is my second experiment with vibe coding macOS apps - the first was this presentation app a few weeks ago.

      It turns out Claude Opus 4.6 and GPT-5.4 are both very competent at SwiftUI - and a full SwiftUI app can fit in a single text file, which means I can use them to spin something up without even opening Xcode.

      I’ve built two apps so far: Bandwidther, which shows me what apps are using network bandwidth, and Gpuer, which shows me what’s going on with the GPU. At Claude’s suggestion, both of these are now menu bar icons that open a panel full of information.

      Bandwidther

      I built this app first, because I wanted to see what Dropbox was doing. It looks like this:

      Screenshot of Bandwidther macOS app showing two columns: left side displays overall download/upload speeds, a bandwidth graph over the last 60 seconds, cumulative totals, internet and LAN connection counts, and internet destinations; right side shows per-process bandwidth usage sorted by rate with processes like nsurlsessiond, apsd, rapportd, mDNSResponder, Dropbox, and others listed with their individual download/upload speeds and progress bars.

      I’ve shared the full transcript I used to build the first version of the app. My prompts were pretty minimal:

      Show me how much network bandwidth is in use from this machine to the internet as opposed to local LAN

      (My initial curiosity was to see if Dropbox was transferring files via the LAN from my old computer or was downloading from the internet.)

      mkdir /tmp/bandwidther and write a native Swift UI app in there that shows me these details on a live ongoing basis

      This got me the first version, which proved to me this was worth pursuing further.

      git init and git commit what you have so far

      Since I was about to start adding new features.

      Now suggest features we could add to that app, the goal is to provide as much detail as possible concerning network usage including by different apps

      The nice thing about having Claude suggest features is that it has a much better idea for what’s possible than I do.

      We had a bit of back and forth fixing some bugs, then I sent a few more prompts to get to the two column layout shown above:

      add Per-Process Bandwidth, relaunch the app once that is done

      now add the reverse DNS feature but make sure original IP addresses are still visible too, albeit in smaller typeface

      redesign the app so that it is wider, I want two columns - the per-process one on the left and the rest on the right

      OK make it a task bar icon thing, when I click the icon I want the app to appear, the icon itself should be a neat minimal little thing

      The source code and build instructions are available in simonw/bandwidther.

      Gpuer

      While I was building Bandwidther in one session I had another session running to build a similar tool for seeing what the GPU was doing. Here’s what I ended up with:

      Screenshot of the Gpuer app on macOS showing memory usage for an Apple M5 Max with 40 GPU cores. Left panel: a large orange "38 GB Available" readout showing usage of 128.0 GB unified memory, "Room for ~18 more large apps before pressure", a warning banner reading "1.5 GB pushed to disk — system was under pressure recently", a horizontal segmented bar chart labeled "Where your memory is going" with green, blue, and grey segments and a legend, an explanatory note about GPU unified memory, a GPU Utilization section showing 0%, and a History graph showing Available and GPU Utilization over time as line charts. Right panel: a Memory Footprint list sorted by Memory, showing process names with horizontal pink/purple usage bars and CPU percentage labels beside each entry, covering processes including Dropbox, WebKit, Virtualization, node, Claude Helper, Safari, LM Studio, WindowServer, Finder, and others.

      Here's the transcript. This one took even less prompting because I could use the in-progress Bandwidther as an example:

      I want to know how much RAM and GPU this computer is using, which is hard because stuff on the GPU and RAM does not seem to show up in Activity Monitor

      This collected information using system_profiler and memory_pressure and gave me an answer - more importantly it showed me this was possible, so I said:

      Look at /tmp/bandwidther and then create a similar app in /tmp/gpuer which shows the information from above on an ongoing basis, or maybe does it better

      After a few more changes to the Bandwidther app I told it to catch up:

      Now take a look at recent changes in /tmp/bandwidther - that app now uses a sys tray icon, imitate that

      This remains one of my favorite tricks for using coding agents: having them recombine elements from other projects.

      The code for Gpuer can be found in simonw/gpuer on GitHub.

      You shouldn't trust these apps

      These two apps are classic vibe coding: I don't know Swift and I hardly glanced at the code they were writing.

      More importantly though, I have very little experience with macOS internals such as the values these tools are measuring. I am completely unqualified to evaluate if the numbers and charts being spat out by these tools are credible or accurate!

      I've added warnings to both GitHub repositories to that effect.

      This morning I caught Gpuer reporting that I had just 5GB of memory left when that clearly wasn't the case (according to Activity Monitor). I pasted a screenshot into Claude Code and it adjusted the calculations and the new numbers look right, but I'm still not confident that it's reporting things correctly.

      I only shared them on GitHub because I think they're interesting as an example of what Claude can do with SwiftUI.

      Despite my lack of confidence in the apps themselves, I did learn some useful things from these projects:

      • A SwiftUI app can get a whole lot done with a single file of code - here's GpuerApp.swift (880 lines) and BandwidtherApp.swift (1063 lines).
      • Wrapping various terminal commands in a neat UI with Swift is easily achieved.
      • Claude has surprisingly good design taste when it comes to SwiftUI applications.
      • Turning an app into a menu bar app is just a few lines of extra code as well.
      • You don't need to open Xcode to build this kind of application!

      These two apps took very little time to build and have convinced me that building macOS apps in SwiftUI is a new capability I should consider for future projects.


    4. 🔗 r/reverseengineering Reverse engineered PriPara arcade 3d model format to port exclusive content to newer game versions rss
    5. 🔗 r/reverseengineering Installing arbitrary (and potentially lethal) firmware on a Zero Motorcycle rss
    6. 🔗 r/wiesbaden Looking for an affordable, no-frills women’s hairdresser rss

      I need a new hairdresser. I used to go to Blooms, which I found okay, but I never felt entirely comfortable there. I’d like a salon that’s relaxed and not stuffy/prissy/super-fancy. I don’t want exclusive treatment; I just want my hair cut. Ideally with online appointment booking. Some advice is welcome (and I’m happy to pay for good advice). What matters most to me is a relaxed atmosphere.

      submitted by /u/ThreePenguins

    7. 🔗 sacha chua :: living an awesome life The week of March 23 to 29 rss

      Monday 23

      I figured out how to categorize links for my Emacs News newsletter by voice command, using Silero to detect voice activity and Speaches to transcribe the commands. It was a bit slow, but promising. I think that if I process a few commands in batches, it will be faster.
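      The batching idea can be sketched in Python (a hypothetical illustration, not the actual Emacs Lisp pipeline; `transcribe_batch` is a stand-in for whatever call the Speaches server actually exposes):

      ```python
      from typing import Callable, List

      def transcribe_in_batches(
          segments: List[bytes],
          transcribe_batch: Callable[[List[bytes]], List[str]],
          batch_size: int = 4,
      ) -> List[str]:
          """Send several voice-activity segments per round trip instead of
          one request per command, amortizing the per-request latency."""
          texts: List[str] = []
          for i in range(0, len(segments), batch_size):
              texts.extend(transcribe_batch(segments[i : i + batch_size]))
          return texts

      # Fake transcriber standing in for a real transcription call.
      fake = lambda batch: ["cmd"] * len(batch)
      print(transcribe_in_batches([b"a", b"bb", b"ccc", b"dddd", b"eeeee"], fake, batch_size=2))
      ```

      With a batch size of 2, the five segments above take three round trips instead of five.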

      It was a nice afternoon. I biked my daughter to her gymnastics class, where she had fun practicing dropping onto her belly after doing a cartwheel. After that, we went to the Healthy Moms Market shop to buy a bigger lunch box for her.

      The Michel Thomas audio CD sets I had borrowed from the library to learn French were too scratched to listen to. Oh well. I’ve already listened to most of one course on YouTube, but the other resources seemed useful.

      Tuesday 24

      I broke down the word « grenouille », which I said at least 37 times during my previous lesson.

      (subed-record-extract-words "grenouille"  "/home/sacha/sync/recordings/processed/2026-03-20-raphael.json" "/home/sacha/proj/french/analysis/grenouille/index.vtt")
      
      (my-subed-record-group-by-notes
        (nreverse
         (subed-record-sort-by-directive
          "#+NOTE"
          (subed-record-filter-skips
           (subed-parse-file "/home/sacha/proj/french/analysis/grenouille/index.vtt"))))
        "/home/sacha/proj/french/analysis/grenouille/grenouille"
        "/home/sacha/proj/french/analysis/grenouille/index.vtt" t)
      

      But recording number 10 sounded like number 4
 Maybe when my tutor repeated the word, he didn’t mean that it was a mistake; he was just encouraging me to repeat it. I need to think about how to make the most of my tutor’s feedback.

      I noticed the /ə/ as in the word « je », the /u/ in the pronunciation /gʀənuj/, and the difference from the word « oui » /ˈwi/. It’s hard for me.

      (subed-record-compile-subtitle-list
       (mapcar (lambda (o)
                 (setf (elt o 3)
                       (format "%s: %s"
                               (subed-record-get-directive "#+NOTE" (elt o 4))
                               (elt o 3)))
                 o)
               (nreverse
                (subed-record-sort-by-directive
                 "#+NOTE"
                 (subed-record-filter-skips
                  (subed-record-filter-for-directive
                   "#+NOTE"
                   (subed-parse-file "/home/sacha/proj/french/analysis/grenouille/index.vtt"))))))
       "/home/sacha/proj/french/analysis/grenouille/grenouille-compiled.opus"
       nil
       '(:interleaved "/home/sacha/proj/french/chime.opus"))
      

      My tutor gave me some new sentences:

      • Ma compagne m’accompagne Ă  la campagne avec une autre compagne.
      • Une tortue tĂȘtue marche dessus sous une pluie continue.

      I focused on the difference between « compagne » and « campagne ».

      (let ((my-lang-words-for-review-context-function 'my-lang-words-for-review-phrase-context))
        (my-lang-words-for-review "La semaine du 16 au 22 mars"))
      

      Points for reviewing my pronunciation:

      • nous avons achetĂ© des nouilles instantanĂ©es.
      • qui proposait la boĂźte Ă  dĂ©jeuner qu'elle voulait
      • donc si elle veut ĂȘtre assortie, alors nous serons assorties.
      • J'ai cherchĂ© dans le groupe de prĂ©caution COVID et j'ai trouvĂ© une recommandation
      • Ma fille a Ă©galement rĂ©alisĂ© une pancarte avec son nom et quelques PokĂ©mon.
      • qui fournit les appareils Holter pour lancer les dĂ©marches.
      • J'ai actualisĂ© mon script pour rĂ©server des livres Ă  la bibliothĂšque.
      • pour rechercher la cause de ses symptĂŽmes.
      • Elle aime bien le skee-ball et elle a obtenu son meilleur score jusqu'Ă  prĂ©sent.
      • donc je l'ai emmenĂ©e au magasin de tissus du centre-ville.
      • Dans un autre magasin Ă  proximitĂ©
      • … et un gloss Ă  lĂšvres.
      • Mon mari a installĂ© deux lumiĂšres Ă  cĂŽtĂ© du lit mezzanine de ma fille parce
      • mais aprĂšs avoir grattĂ© le dessus
      • J'ai cousu une housse de protection
      • mon mari a prĂ©parĂ© des nouilles ramen aux wontons.
      • Je me suis assise sur le porche et j'ai réécrit mon journal et mes notes sur l'IA en français.
      • qu'elle avait mal au ventre.
      • Je leur ai donnĂ© des guimauves et elles (et le grand-pĂšre d'une amie de ma fille) les ont fait griller sur des brochettes.
      • nous avons cousu ensemble.
      • La bosse prĂšs du piercing de ma fille a commencĂ© Ă  saigner et suppurer.
      • elle dormait probablement sur le cĂŽtĂ©.
      • J'ai participĂ© Ă  la rĂ©union virtuelle OrgMeetup.
      • grĂące Ă  la reconnaissance vocale.

      During the conversation, I was able to describe to him the Toronto library system, which I adore. Toronto’s is one of the largest library systems in the world. Each neighborhood has its own branch, and you can place holds on up to 50 books and have them sent to the branch nearest you. The library also offers a huge number of e-books, which is very convenient.

      (subed-record-extract-all-approximately-matching-phrases
         (split-string (org-file-contents "/home/sacha/proj/french/analysis/virelangues-2026-03-13/phrases.txt") "\n")
         "/home/sacha/sync/recordings/processed/2026-03-24 12-29-53-sacha.json"
         "/home/sacha/proj/french/analysis/virelangues-2026-03-13/2026-03-24-raphael-script.vtt")
      

      Ma fille Ă©tait grincheuse avec moi et vis-Ă -vis de l'Ă©cole aujourd'hui. L'Ă©cole a une remplaçante, ce qu'elle n'aime jamais. Elle a aussi eu l'impression que je l'avais pressĂ©e pendant la pause dĂ©jeuner parce que je n'avais pas voulu ĂȘtre en retard pour mon cours.

      Elle n'a pas trouvĂ© son ordinateur qui Ă©tait sur le banc devant la salle de bains. Je pense qu'elle n'a pas cherchĂ© bien loin. Elle n'a pas demandĂ© Ă  mon mari de l'aider Ă  chercher. Elle a simplement sĂ©chĂ© les cours. J'ai reprogrammĂ© mon prochain cours avec mon tuteur pour qu'il commence Ă  12h45 au lieu de 12h30. De toute façon, puisqu'elle a dĂ©cidĂ© de sĂ©cher les cours, j'ai proposĂ© de faire une promenade au parc ou de coudre la couverture de pique-nique. Elle a fermement dĂ©cidĂ© d'ĂȘtre grincheuse. Les turbulences font partie de la vie avec un enfant.

      L'appareil Holter était arrivé, mais ce n'était pas un bon moment pour le lui poser.

      Wednesday 25

      I did some consulting work. I had three tasks to do and finished them, so I was satisfied. I completed the training courses, set up the configuration so that another developer can verify my software before the system update, and made a prototype of a video display that is more modern than the current version.

      My daughter didn’t want to go out and play because of the Holter monitor she has to wear for two weeks. Fortunately, her friend’s mother sent me an invitation to play Minecraft together. My daughter played with her friend on his Minecraft Java server. They started a new world, so I joined them to help gather resources. We set up our base on a mountain. I chopped spruce trees, mined a tunnel down to layer -54, and started a wheat and sugar cane farm. My daughter and her friend explored caves, fought lots of monsters, and decorated our base. She enjoyed playing with a friend who cooperated with her, which is different from the griefers in her Minecraft club at school. (She said that if he misbehaved, she could talk to her friend’s mother.)

      I attended the Emacs Berlin virtual meetup, which was hosted on my server. Unfortunately, I missed the host’s email saying that his moderator code wasn’t working because I was focused on my work, so I fixed the problem about an hour after the meetup started. Nevertheless, the meeting went well. Afterwards, I updated my BigBlueButton server for virtual meetings, just in case that might help.

      I was so proud of my daughter, who is wearing the Holter monitor even though it’s a pain. She can even manage the battery herself, and she has already flagged an episode of palpitations. She said she hated the Holter monitor, but she wanted to capture data so that the cardiologist could analyze her symptoms.

      Thursday 26

      I dusted off my Minecraft server so I could invite my daughter’s friend over after class. I had to configure a few rules on the firewall and the router. After going around in circles for a bit, I verified that I could connect from outside our network. I also installed CraftyController to manage the server through a web interface. My daughter and I played a Minecraft parkour map until her friend’s mother messaged me, and then my daughter and her friend played on their own.

      I called the company that sent us the cardiac Holter to ask for advice, because the patches were making my daughter so itchy. After confirming that they were receiving my daughter’s data and getting some tips, I went to the pharmacy to buy a barrier cream.

      Once I got home, my daughter was playing Minecraft on her own. She said she had built a trash can using a cactus, a chest, and a hopper; that her friend had put his sword in the trash can; that there was some kind of disagreement; and that her friend had then blown everything up with TNT, despite their agreement not to grief
 Anyway, I messaged her friend’s mother in case she understood a bit more of it than I did. Maybe their play styles aren’t compatible for now. That’s life.

      I put the new barrier cream on my daughter’s skin and we applied new patches for the cardiac Holter. We reconnected the electrodes. The company said she could take a break from it if needed.

      My daughter was wondering how to make friendship bracelets, so I found some thread and showed her how to knot them.

      Analyzing my journal

      I updated my journal analysis and used Claude to visualize the data. Since November, I’ve written more than 300 words in almost every entry, across more than 140 entries.

      02_words_per_session.png

      To my surprise, I keep finding new words to describe my daily life, even if this sort of small life might bore other people. I take care of my daughter, I take her places, I tinker with my Emacs configuration, I reflect
 It’s simple, but it’s mine.

      There is a slight decline in the rate of new words as I build up my vocabulary and get used to set expressions.

      01_cumulative_vocab.png

      It’s easier to see this if you look at the percentage of new lemmas as a function of words written, which seems to be around 5%.
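      That percentage can be computed directly. Here is a minimal sketch in Python (assuming each entry has already been reduced to a list of lemmas by some French lemmatizer, which is not shown):

      ```python
      def new_lemma_rate(entries):
          """For each journal entry, count lemmas never seen in any earlier
          entry, then report new lemmas as a share of all words written."""
          seen = set()
          new, total = 0, 0
          for lemmas in entries:
              for lemma in lemmas:
                  total += 1
                  if lemma not in seen:
                      seen.add(lemma)
                      new += 1
          return new / total if total else 0.0

      entries = [["je", "mange", "une", "pomme"],
                 ["je", "mange", "du", "pain"],
                 ["je", "bois", "du", "café"]]
      print(new_lemma_rate(entries))  # 8 new lemmas out of 12 words ≈ 0.67
      ```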

      06_vocab_efficiency.png

      I think I should make an effort to use more adjectives to describe my experiences in more detail, but even when I write in English, I’m drawn to the precision of just the right verb or noun. If I write about things that are similar to one another, or if I want to paint a picture of a scene, adjectives will become more necessary.

      05_pos_usage.png

      My sister has become a wonderful writer, but she doesn’t go in for flowery language. She writes from the heart about painfully clear things, with all the humor she can find, passionately loving life and her family. I don’t need to reach that quality of writing for now; I don’t have half the wisdom she has had to earn. I’m happy to record my day, knowing that someday I’ll forget the details.

      The software was mostly written by Claude:

      A draft for the Emacs Carnival

      This month’s Emacs Carnival theme is “mistakes and misconceptions.” It’s hard to think of something that is clearly a mistake, but there are certainly things I don’t do efficiently.

      My configuration is very large because I think my little tweaks are only useful to me. They’re too specific, too idiosyncratic. I admire people who create libraries or even packages that many others use, but for my part, I don’t feel up to doing that for now. Even submitting patches upstream and taking part in the discussion that sometimes follows takes more persistence than I have.

      The advantage of keeping my tweaks in my configuration is that even when I’m not sure, I can try something, develop a rough prototype, and change my mind if necessary. When I publish them in a library or a package, I feel like I have to polish my ideas. It’s hard to commit to one idea.

      My favorite situation is when I write up my experiment in a post and it inspires someone else to implement their own version, or even a new library or package.

      On the other hand, if I learn to share my code, I can help more people, and I can also learn from more people and more conversations.

      Many of my tweaks are short and easy to copy from my posts, but there are a few collections that depend on other functions, which makes them hard to copy. The functions are scattered across several posts on my blog. For example, my functions for learning a language (particularly French) and for controlling Emacs by voice are getting fairly complex. They’re also exported to my configuration, but the Emacs Lisp file is hard to browse if you want to copy them. I could copy the code into a file now that Org Mode can tangle it to several files, but if I spend a bit of time replacing the « my- » prefix with a library prefix and copying them to the repository, people can clone it and download updates. Even if nobody uses it, polishing and documenting it will be useful to me someday.

      So maybe this is a mistake I often make with Emacs: I think my functions are too idiosyncratic and too preliminary, so I leave them in my configuration. But if I spend the time to extract the code into a library, I might benefit in the long run. I know many people are interested in using Emacs for language learning or controlling it by voice. There have long been plenty of other libraries and workflows. I want to practice learning intentionally with others. To start, maybe I could collect the contact information of interested people and message them when I publish a post on the topic.

      Prot has lowered his coaching rates. When it comes to package development, he’s prolific. I learn well with my French tutor, so it might be worth spending the money and time to improve this skill too. Sure, it’s just for fun, but it also matters to me to practice learning with other people’s help instead of stumbling along on my own. I can also write more, attend the virtual meetups, or even livestream. I always have more things to learn, which is wonderful.

      Friday 27

      I started drawing daily moments again, as I did a few years ago when my daughter was younger. When I came across those drawings in my “On this day” list (actually an RSS feed I added to my aggregator), I realized I missed drawing. Since the start of the school year, I’ve drawn a note for my daughter’s lunch box every school day, because she wanted “the complete schoolgirl experience” despite attending school remotely. I drew Pokémon and my daughter’s other interests. It finally occurred to me to combine Pokémon and our moments. Until last week, I drew them on index cards. I remembered that our printer can handle index cards, so I used Procreate on my iPad to draw a moment from our daily life and printed it to slip into her lunch box.

      Here it is:

      Ugh, my OBS didn’t record my side of the session with my tutor, so I can’t add the clips to Comparing pronunciation recordings across time. No big deal; I’ll just re-record them. This is the second time this has happened. Restarting my computer fixes the problems, but if I don’t catch the problem early, I don’t have time to restart before my session. I have another OBS profile that connects directly to my microphone instead of the virtual audio receiver, which may be more reliable. I should also record a backup on my phone next time.

      I sent a message and some money to Prot for coaching. There, I’m committed.

      You can e-mail me at sacha@sachachua.com.

    8. 🔗 News Minimalist 🐱 Juries hold Meta and YouTube liable for harm + 9 more stories rss

      In the last 3 days Gemini read 96074 top news stories. After removing previously covered events, there are 10 articles with a significance score over 5.5.

      [6.3] Juries hold Meta and YouTube liable for harming children — apnews.com (+272)

      Juries in Los Angeles and New Mexico have found Meta and YouTube liable for harming children, signaling a pivotal shift in holding social media giants accountable for their product designs.

      The verdicts focused on addictive platform features and Meta’s alleged concealment of child exploitation risks. By targeting deliberate design choices, these lawsuits successfully bypassed Section 230 legal protections that historically shielded tech companies from liability regarding third-party content and platform-related harms.

      Meta and Google plan to appeal the verdicts. These bellwether trials may lead to broader settlements, similar to historic tobacco litigation, as public concern regarding social media’s developmental impact grows.

      [6.3] Iran starts to formalize its chokehold on the Strait of Hormuz with a ‘toll booth’ regime — apnews.com (+1234)

      Iran is cementing control over the Strait of Hormuz using a mandatory vetting and toll regime, causing global oil prices to surge as shipping traffic drops by 90 percent.

      Ships must now enter Iranian waters for vetting by the Islamic Revolutionary Guards Corps, with some paying fees in yuan.

      While overall traffic has plummeted, vessels linked to Iran and its top customer, China, still frequently transit the vital energy artery.

      Highly covered news with significance over 5.5

      [6.0] OpenAI closes AI video app Sora — abc.net.au (+68)

      [6.0] Wikipedia bans AI-generated content in its online encyclopedia — theguardian.com (+10)

      [5.9] UN General Assembly declares slavery the gravest crime against humanity — www1.folha.uol.com.br (Portuguese) (+43)

      [5.8] Arm enters chip market with AI CPU for data centers — pcworld.com (+17)

      [5.7] South American malaria mosquitoes evolve insecticide resistance — hsph.harvard.edu (+3)

      [6.2] Astronomers observe two giant gas planets forming around a young star — euronews.com (+13)

      [5.7] Ukraine and Saudi Arabia sign defense cooperation agreement — euronews.com (+33)

      [5.6] IOC requires genetic testing for women's Olympic events — npr.org (+81)

      Thanks for reading!

      — Vadim


      You can create your own significance-based RSS feed with premium.


      Powered by beehiiv

    9. 🔗 r/wiesbaden Hey, I (21, m) am looking for new people/friends in Wiesbaden rss

      I've isolated myself a bit lately and want to change that now. It'd be cool to find people I can do things with spontaneously: going out, chilling, gaming, or just chatting.

      I live near Schleifgraben.

      A bit about me:
      I game a lot, watch anime, series, and films, and I enjoy night walks (definitely more fun with company). My favorite games are Hollow Knight, OMORI, and OneShot.

      I'm currently pretty interested in cosplay and trying to get into it. I also generally like going to bars and clubs, though I've been going less lately.

      I have a lot of free time right now, and I'm a bit nerdy at times.

      If you're from the area and up for it, just message me!

      My Discord: .aymann

      My Insta: https://www.instagram.com/aymaninkoln/

      submitted by /u/Superb_Gas7119
      [link] [comments]

    10. 🔗 r/wiesbaden Best schnitzel and sauerkraut in Wiesbaden or Mainz? rss

      Hello everyone,

      I'm visiting from Munich and would love to find the best schnitzel and sauerkraut in Wiesbaden or Mainz. Do you have recommendations for restaurants or inns that you can genuinely vouch for?

      Thanks in advance!

      submitted by /u/One-Athlete3953
      [link] [comments]

    11. 🔗 @brandur The Second Wave of the API-first Economy rss

      Fifteen years ago, when some colleagues and I were building Heroku's V3 API, we set an ambitious goal: the public API should be powerful enough to run our own dashboard. No private endpoints, no escape hatches.

      It was a stretch, but it worked. A new version of the company's dashboard shipped on V3, and an unaffiliated developer who we'd never met before built Heroku's first iOS app on it, without a single feature request sent our way.


      The first wave

      Our dashboard-on-public-APIs-only approach seems needlessly idealistic nowadays, but it was an objective born of its time. The year was 2011, and the optimism around the power of APIs was palpable. A new world was opening up. One of openness, interconnectivity, unbounded possibility.

      And we weren't the only ones thinking that way:

      • Only a year before (2010) Facebook released its original Open Graph API, providing immensely powerful insights into its platform data.

      • Twitter's API at the time was almost completely open. You didn't even need an OAuth token -- just authenticate on API endpoints with your username/password and get access to just about anything.

      • GitHub was doing really impressive API design work, providing an expansive, feature-complete API with access to anything developers could need, and playing with forward-thinking ideas like hypermedia APIs/HATEOAS.

      You can still find traces of this bygone era, standing like some cyclopean ruins from a previous age. Hit the root GitHub API and you'll find an artifact over a decade old -- a list of links that were intended to be followed as hypermedia:

      $ curl https://api.github.com | jq
      
      {
        "current_user_url": "https://api.github.com/user",
        "current_user_authorizations_html_url": "https://github.com/settings/connections/applications{/client_id}",
        "authorizations_url": "https://api.github.com/authorizations",
        "code_search_url": "https://api.github.com/search/code?q={query}{&page,per_page,sort,order}",
        "commit_search_url": "https://api.github.com/search/commits?q={query}{&page,per_page,sort,order}",
        "emails_url": "https://api.github.com/user/emails",
        "emojis_url": "https://api.github.com/emojis",
        "events_url": "https://api.github.com/events",
        ...
      

      This wasn't a pre-planned, stack-ranked feature that a product team spent half a year putting together. It was one or two early engineers who got really excited about an API idea, and shipped it, probably without even asking for permission.
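      Those templated `_url` values are RFC 6570 URI Templates. A toy expander (my simplification: it handles plain `{var}` and drops unsupplied operator expressions like `{&page,per_page}` or `{/client_id}`) shows how clients were meant to consume the root document:

```python
import re
from urllib.parse import quote

def expand(template, **values):
    """Toy RFC 6570 expansion: substitute plain {var}, drop unsupplied
    operator expressions such as {&page,per_page} or {/client_id}."""
    def sub(match):
        expr = match.group(1)
        if expr[0] in "&/?#":             # operator expression: drop when no value supplied
            return ""
        return quote(str(values.get(expr, "")))
    return re.sub(r"\{([^}]+)\}", sub, template)

# Follow the hypermedia root: pick a templated link, fill in the variable.
root = {"code_search_url":
        "https://api.github.com/search/code?q={query}{&page,per_page,sort,order}"}
url = expand(root["code_search_url"], query="hypermedia")
print(url)  # https://api.github.com/search/code?q=hypermedia
```

The point of the design was that clients would discover URLs this way at runtime instead of hardcoding paths.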


      Part of the push for open APIs was simple good will towards the rest of the world. The engineers building them were brought up in the earliest days of the internet, steeped in its original counterculture, and had an innate bias for radical openness.

      There was also a feeling from the companies involved that the APIs would be beneficial for their bottom lines. Users and third parties would use APIs to supplement the core product with add-ons and extensions that'd drive growth and increase product retention and satisfaction.

      Sites like the now defunct ProgrammableWeb popped up to discuss and catalog the newly appearing APIs, and the "programmable web" wasn't only a website, it was a principle.

      In the near future, all platforms would be API-first, providing full programmatic access and opening a new wave of interoperability across the web that'd let any service talk to any other service and massively accelerate the scope and reach of the internet. APIs would help expand everything from freedom to communication to commerce. An overwhelming force for good in the world.


      API winter

      Of course, it didn't last. The programmable web went through a phase of expansion, reached its maximum extent, and began to contract.

      • Twitter's famous API, which used to be an API tinkerer's dream, leveled off and began to dip as the company struggled to find ways to generate revenue. New features no longer got first-class API treatment. Access to the firehose was closed. Third-party Twitter clients were restricted and eventually locked out.

      • The power of Facebook's Graph API was hugely constricted post-Cambridge Analytica where a single rogue app was able to suck up data on millions of users and put it up for sale. Strict app review procedures were implemented. The API went from open access to a walled garden.

      • Even more extreme, Instagram's previously public API was deprecated totally. Realizing they had a real money maker on their hands, they saw no reason to share ad revenue with anyone else. Use Instagram through the first-party app or not at all.

      • Even APIs like GitHub's that stayed quite open had to crack down to a degree. Endpoints became authenticated by necessity and aggressive rate limiting was put in to curb abuse and reduce operational toil. And even when APIs were still largely accessible, using them to build a full-scale third-party app became more difficult as limiters flattened heavy (even if legitimate) use.

      The rationale for why APIs were being declawed or disappearing completely varied--abuse, monetization pressure, competitive risk, privacy, etc.--but the pattern was clear. Walls were going up across the world.

      APIs didn't disappear, but it was a cold winter for them. The expectation of an API became more limited to developer-focused platforms whose users paid them -- Stripe, Twilio, Slack, etc. When new consumer products appeared on the market (e.g. TikTok), no one expected them to have much in the way of an API.


      The coming second wave

      For many years this was the status quo. If you were using Twitter, you'd use it from Twitter.com. Facebook, from Facebook.com. Instagram or TikTok, from their respective iOS/Android apps. Developer products like GitHub and Stripe continued strong, but elsewhere, APIs weren't enough of a competitive advantage for anyone who didn't have one to suffer.

      But around mid-2025, the world changed. The last half year especially has been distinguished by the rise of indescribably powerful LLMs, which now dominate discourse as the most useful new tool in a generation.

      They're already useful enough as incredible trivia machines or code generators, but they really start to shine when they integrate with things. It's pretty neat having one generate a valid Kubernetes configuration for your new app, but it's really neat watching it provision an EKS cluster via awscli and send out its first production deploy on your behalf.

      Suddenly, an API is no longer a liability, but a major saleable vector to give users what they want: a way into the services they use and pay for so that an agent can carry out work on their behalf. Especially given a field of relatively undifferentiated products, in the near future the availability of an API might just be the crucial deciding factor that leads to one choice winning the field.

      Picking my future bank

      Let's think about banks. I have a couple bank accounts, each offering a standard set of features largely unchanged since the 60s. If I call them, they'll send me some checks. I can request a transfer between two internal accounts and they will transfer the money in 1-5 business days. Nowadays, they even offer ultra-modern features (from 2010) like, gasp, MFA, just as long as it's through a provider that's paid them off (Symantec VIP). Suffice it to say, they're comfortable in the status quo. My banks do not have good APIs.

      So far this has worked out okay for them. People aren't known to migrate banks often, and even if they did, regulatory moats make new entrants rare.

      But in the modern age, can it last? When I want to move $100 from one bank to another, my banks put me through a humiliating ritual of logging into both accounts, and bypassing multiple security checks and captchas before I can perform any operation. All this despite me having just logged into both accounts from this exact location and biometrically-secured computer the day before.

      The world I want is to instruct an LLM: "move $100 from Wells Fargo checking to Charles Schwab brokerage" and it will just happen. And to be fair, LLMs are already so absurdly good at reverse engineering things that this might already work today. But you know what'd work better? If both banks shipped with APIs, LLM-friendly usage instructions (through MCP or the like), and a strong auth layer to give me confidence that the whole process is secure.

      If I were choosing a bank today, some considerations would be the same as they've always been--competent security, free checking, no foreign transaction fees--but I'd also futureproof the choice by picking one that's established technical bona fides by providing an API. Even if I'm not ready to trust my banking credentials to an agent quite yet, I assume that day is coming.

      Ubiquitous again

      Now apply the same principle to every service you use during the course of a week, or ever:

      • Online marketplaces: Robot, schedule my normal Amazon Fresh order for the first available slot tomorrow morning.

      • Office co-working: Robot, book me a desk at Embarcadero Center today.

      • Ski resorts: Robot, buy me a day pass for tomorrow and load it to my resort card. Confirm the price with me first.

      • Restaurants: Robot, put in my usual lunch order at Musubi Kai. Get me the unadon!

      Where wouldn't you want an API?

      Forecasting the future is infamously hazardous, but based on the adoption patterns of myself and the people around me, I expect the demand to interact with services through LLMs is going to be overwhelming, and services aiming to provide a good product experience or which face competitive pressure (i.e. someone else could provide that experience instead) will offer APIs.

      I used to wish that we'd gone down an alternative branch of web technology and adopted a protocol like Gopher, so we'd have a more standardized web experience instead of every product you use producing its own unique UX, most of them bad. I think we will see more standardization, just not in the form I expected. The convention of the future will be human language, fed into what looks a lot like a terminal, and fulfilled via API.

      On behalf of people

      Notably, this is different from the first wave of APIs described above. Instead of APIs existing to offer infinitely flexible access for inter-service communication, scraping data, or building apps on top of someone else's platform, their primary use will be to fulfill requests on behalf of a primary user. Exactly like what they'd be doing through a first-party app, but in a programmatic way.

      During the first wave, APIs were largely aimed at third parties who'd use them to extend and augment the underlying platform to provide additional features for users. In the second wave, APIs map cleanly to normal product capabilities. They provide programmatic access for agents that act on behalf of people.

      It may seem like a subtle distinction, but there are considerable differences. The second model better incentivizes APIs to exist:

      • APIs aren't for building a product that aims to displace the offerings of the underlying platform, but rather for giving users an alternative way to access it.

      • Security models are simplified because they're the same ones used by the product itself. Users have the same visibility that they'd have through a first-party app, and no more.

      • Aiming to support access patterns for a single person, platforms can rate limit much more aggressively to curb expenses and operational problems associated with offering an API.

      APIs should aim to provide a little more leeway than they would for a human, but only nominally so. An agent acting on my behalf should be able to occasionally poll LinkedIn for old colleagues that I should be reconnecting with and send them connect requests, but if someone's set up their ClawBot to scrape the entire social graph on their behalf, platforms should feel more than free to throttle the hell out of them and give them a strike towards a permanent ban.

      Slack's rate limits are a good example of this, supporting numbers like 50 channel or 100 profile reads per minute. You can't build a multi-user app with 50 channel reads per minute, but it's plenty for a single user to access their own account.
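      Per-user limits like these are commonly implemented as a token bucket; here's a minimal sketch of the idea (my illustration, not Slack's actual limiter):

```python
import time

class TokenBucket:
    """Allow `rate` requests per `per` seconds, with bursts up to `rate`."""
    def __init__(self, rate, per):
        self.capacity = rate
        self.tokens = float(rate)
        self.refill = rate / per          # tokens regained per second
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 50 channel reads per minute, as in the Slack example above
bucket = TokenBucket(rate=50, per=60)
allowed = sum(bucket.allow() for _ in range(60))  # a 60-request burst: roughly 50 pass
```

A single user's agent never notices the cap; a scraper hammering the endpoint burns through the burst immediately and is throttled to the refill rate.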

      Limits of the model

      While we can expect many products and services to offer APIs for good agentic interoperability, it won't be forthcoming everywhere.

      Don't expect much out of Instagram, TikTok, or other platforms that power themselves with ads. Nor from monopolies that won't feel any serious pressure to change -- you won't be reliably paying your Xfinity bill via agent anytime soon.

      Hints of the future, today

      In this section I figured I'd call out a few services that are already pulling this future forward:

      API spring

      Fifteen years ago, we API maximalists thought that APIs were going to eat the world, ushering in a new paradigm of interoperability that would vastly expand our capabilities as users, and even change the world for the better.

      What we got instead was an API winter. As useful as APIs were in some situations, that usefulness was outweighed by concerns around revenue, privacy, and abuse.

      But as scary of a thought as it was that this might be the end, it wasn't. We're at the beginning of a new spring of APIs that'll appear to support use by agents acting on behalf of people. As this mode of operation gets more popular, expect the availability of an API to be a competitive edge that differentiates a service from its competitors. The result will be a global proliferation of APIs and expanding product capability like never before seen.

    12. 🔗 r/LocalLLaMA Skipping 90% of KV dequant work → +22.8% decode at 32K (llama.cpp, TurboQuant) rss

      I’ve been working on an open source TurboQuant implementation for KV cache compression in llama.cpp and ran into a hard bottleneck: dequantization.

      At long context (32K on M5 Max), dequant alone was taking around 40 percent of decode time.

      I tried fixing it the usual way:

      - register LUTs
      - SIMD tricks
      - fused kernels
      - branchless math

      What ended up working was much simpler.

      Flash attention computes softmax weights before touching V.
      At long context, most of those weights are basically zero.

      So instead of making dequant faster, I just skip V dequant entirely for positions with negligible attention.

      It’s about 3 lines in the kernel.

      Results on Qwen3.5-35B-A3B (M5 Max):

      TurboQuant KV (turbo3):

      - +22.8% decode at 32K
      - PPL unchanged
      - NIAH: 7/9 → 9/9

      Standard q8_0 KV cache:

      - +5% decode
      - PPL identical
      - NIAH identical

      So this is not TurboQuant-specific. It’s using attention sparsity directly.

      Also tested on M2 Pro:

      - 4-mag LUT on K side + sparse V stack cleanly
      - turbo3 went from ~0.45x → ~0.73x vs q8_0

      Repo and benchmarks:
      https://github.com/TheTom/turboquant_plus

      Writeup:
      https://github.com/TheTom/turboquant_plus/blob/main/docs/papers/sparse-v-dequant.md

      If anyone wants to try this on CUDA or other setups I’d be interested to see results.

      Note: a CUDA port is currently being tested independently. Will share results once available.

      submitted by /u/Pidtom
      [link] [comments]

    13. 🔗 r/york Tailor please rss

      I have a wax jacket and a pair of trousers I want altered. Any recommendations? Cheers.

      submitted by /u/MobiusNaked
      [link] [comments]

    14. 🔗 Hex-Rays Blog Product Update: IDA 9.3sp1 Release rss

      IDA 9.3sp1

      We are pleased to announce the release of the first IDA 9.3 Service Pack (sp1).

    15. 🔗 r/york Are there any working automatic car washers in York? rss

      The machines at Sainsbury’s and Morrisons were both broken last time I checked.

      submitted by /u/OneItchy396
      [link] [comments]

    16. 🔗 r/Leeds Connexions services over Easter rss
    17. 🔗 r/york Lidl horse rss

      Feeling concerned about the horse that is often kept around Foss islands. It never looks to have any food, not being groomed or looked after, being left outside in storms, etc - surely this is animal abuse? Will reporting it be of any use or is there nothing to be done? Makes me sad every time I see it.

      submitted by /u/Icy-Strength7691
      [link] [comments]

    18. 🔗 r/Yorkshire University tutors pay tribute to 'warm' Leeds student who died in Woodhouse Lane car crash rss
    19. 🔗 r/LocalLLaMA Glm 5.1 is out rss

      submitted by /u/Namra_7
      [link] [comments]

    20. 🔗 The Pragmatic Engineer Is the FDE role becoming less desirable? rss

      Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover Big Tech and startups through the lens of senior engineers and engineering leaders. Today, we cover one out of four topics from last week 's The Pulse issue. Full subscribers received the article below seven days ago. If you 've been forwarded this email, you can subscribe here .

      An interesting trend highlighted by The Wall Street Journal: companies want to hire for FDE roles, but devs are just not that interested:

      "Job postings on Indeed grew more than 10-fold in 2025 compared with 2024. The number of public company transcripts mentioning the role jumped to 50 from eight over the same period, according to data from AlphaSense.

      The only problem? Few engineers want the job, which has historically been seen as demanding, undesirable, and less prestigious than product-focused engineering roles.

      "Everyone wants them and there's only maybe 10% of the market that wants that role," said Patrick Kellenberger, president and chief operating officer at Betts Recruiting."

      Last summer, we covered the rise of the FDE role, and looked into what it's like. Back then, this is how I visualized what was then a very hot role:

      [Image: My 2025 visualization of the FDE role]

      At the companies where I interviewed FDE folks - OpenAI and Ramp - the role seemed to live up to this visualization. However, I've since talked with two engineers who took FDE roles and were disappointed. This is how they saw it, in practice:

      [Image: Reality of the FDE role: less software engineering, and even less platform engineering]

      The role seems akin to a "sales engineer" where FDEs help close the deals, or a solutions engineer (or even consultant), where FDEs deploy to a customer to build them a solution. They don't contribute back into the platform, and don't do much that's considered "software engineering" beyond integrating software which the product team built.

      Some engineers figure out the nature of the role during the interview process and pass on it. Meanwhile, others take the job and later quit. Here's what a dev who accepted an FDE role at a company, but didn't find what they expected, told me:

      "This FDE job was a typical IT services mindset. The company wanted to use me more on the engagement lead side, and nothing on software development. It's not what I signed up for, and I didn't like the vibe and culture. I quit 4 weeks later."

      In today's job market, if there's high demand for a role which pays decently but attracts little interest from engineers, there's always a reason!


      Read the full issue of last week's The Pulse, or check out this week's The Pulse.

      Catch up with recent The Pragmatic Engineer issues:

    21. 🔗 r/Leeds the city at dusk rss
    22. 🔗 r/Leeds Can't park there mate rss

      The new busses are looking a bit different

      submitted by /u/AvinchMC
      [link] [comments]

    23. 🔗 r/york Proud to call York home when I see such poignant and agile scribeship rss
    24. 🔗 backnotprop/plannotator v0.15.5 release

      Follow @plannotator on X for updates


      Missed recent releases? Release | Highlights
      ---|---
      v0.15.2 | Compound Planning skill, folder annotation, /plannotator-archive slash command, skill installation via platform installers
      v0.15.0 | Live AI chat in code review, plan archive browser, folder file viewer, resizable split pane, Pi full feature parity
      v0.14.5 | GitLab merge request review, login page image fix, Windows install path fix
      v0.14.4 | GitHub review submission, repo identifier in tab title, nested code fence parser fix, Pi paste URL wiring, file header gap fix
      v0.14.3 | PR context panel, diff search in code review, OpenCode permission normalization, landing page redesign
      v0.14.2 | OpenCode plan mode prompt replacement, Windows non-ASCII path fix, Pi link fix
      v0.14.1 | Single submit_plan with auto-detect, viewed-file draft persistence, Bear nested tag fix
      v0.14.0 | PR review via GitHub URL, /plannotator-last for annotating agent messages, OpenCode plan mode permissions fix, VS Code SSH proxy fix
      v0.13.1 | OpenCode plan mode rewrite, Obsidian save fix
      v0.13.0 | Built-in themes, annotatable plan diffs, file-scoped code review comments, Octarine integration, unified review core, Pi remote sessions
      v0.12.0 | Quick annotation labels, mobile compatibility, Graphviz rendering, markdown images with lightbox, linked doc navigation in annotate mode


      What's New in v0.15.5

      v0.15.5 is a community release. 8 PRs, 5 from external contributors, 4 of them first-timers.

      GitHub Viewed File Sync

      When reviewing a PR, Plannotator now syncs with GitHub's native "Viewed" checkmarks. On load, the file tree fetches each file's viewerViewedState via GraphQL and pre-populates the viewed checkboxes. Toggling a file's viewed state in Plannotator fires a background mutation to mark or unmark it on GitHub. Your progress carries over between Plannotator and GitHub's PR page.

      GitLab PRs are unaffected — GitLab's viewed state is localStorage-only with no API.
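      A sketch of the read side in Python (the `viewerViewedState` field and GraphQL endpoint are GitHub's public API, but the exact query shape and lack of pagination here are my simplifications):

```python
import json
import urllib.request

# PullRequestChangedFile exposes viewerViewedState: UNVIEWED / VIEWED / DISMISSED.
QUERY = """
query($owner: String!, $name: String!, $pr: Int!) {
  repository(owner: $owner, name: $name) {
    pullRequest(number: $pr) {
      files(first: 100) { nodes { path viewerViewedState } }
    }
  }
}
"""

def viewed_states(owner, name, pr, token):
    """Return {path: viewed-state} for each changed file in the PR."""
    body = json.dumps({
        "query": QUERY,
        "variables": {"owner": owner, "name": name, "pr": pr},
    }).encode()
    req = urllib.request.Request(
        "https://api.github.com/graphql",
        data=body,
        headers={"Authorization": f"bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    files = data["data"]["repository"]["pullRequest"]["files"]["nodes"]
    return {f["path"]: f["viewerViewedState"] for f in files}
```

The write direction goes through GitHub's `markFileAsViewed` / `unmarkFileAsViewed` mutations, fired in the background when a checkbox is toggled.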

      Custom Display Name

      Previously, annotations were attributed to an auto-generated tater identity (e.g., "Rustic Potato"). You can now set a custom display name in the settings panel. A "Use git name" button pulls from git config user.name for quick setup.

      This release also introduces ~/.plannotator/config.json as a persistent configuration file. Settings written here take precedence over cookies, giving a stable config layer that survives port changes and browser sessions.
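      The precedence rule can be sketched like so (the `displayName` key and the fallback name below are illustrative assumptions, not Plannotator's actual schema):

```python
import json
import os

def display_name(cookie_value=None, config_path="~/.plannotator/config.json"):
    """Resolve the display name: config file beats cookie beats generated name.
    The "displayName" key and the fallback string are assumptions."""
    path = os.path.expanduser(config_path)
    if os.path.exists(path):
        with open(path) as f:
            name = json.load(f).get("displayName")
        if name:
            return name            # persistent config wins
    if cookie_value:
        return cookie_value        # per-browser cookie next
    return "Rustic Potato"         # auto-generated tater identity as last resort
```

Because the file layer sits above the cookie layer, the chosen name survives port changes and fresh browser sessions.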

      Expand/Collapse All in File Tree

      The code review file tree sidebar now has expand all and collapse all buttons in the header. Useful when reviewing PRs with deeply nested directory structures.

      Search Performance in Code Review

      Typing in the diff search bar previously rebuilt every <mark> highlight on every keystroke. For large diffs this caused visible lag. Highlights are now debounced by 100ms, and stepping through matches (Enter/Shift+Enter) swaps two elements' styles in O(1) instead of rebuilding the entire set.
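      The debounce pattern itself is simple; a minimal Python sketch (for illustration only -- the real implementation lives in the browser):

```python
import threading
import time

class Debouncer:
    """Run fn only once `delay` seconds have passed with no further calls."""
    def __init__(self, fn, delay=0.1):
        self.fn, self.delay = fn, delay
        self._timer = None

    def __call__(self, *args):
        if self._timer is not None:
            self._timer.cancel()              # a newer event supersedes the pending one
        self._timer = threading.Timer(self.delay, self.fn, args)
        self._timer.start()

highlights = []
rebuild = Debouncer(highlights.append, delay=0.05)
for ch in "query":                            # five rapid "keystrokes"
    rebuild(ch)
time.sleep(0.2)                               # let the last timer fire
print(highlights)                             # ['y'] -- only the final keystroke rebuilds
```

Five rapid calls collapse into a single deferred rebuild, which is exactly why the large-diff lag disappears.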

      Additional Changes

      • WSL update command fix. The update banner now detects WSL and shows the Unix install command instead of the Windows one (#395 by @alexandresilvestri)
      • Project slug fix for dots and underscores. projectSlugFromCwd() now matches Claude Code's actual algorithm, replacing all non-alphanumeric characters (not just /) with -. This fixes annotate-last failures for working directories with dots or underscores in the path (#401 by @aletar89)
      • Pi tool-scope import fix. The published Pi package was missing tool-scope.ts, causing a load failure. Fixed the import extension and added the file to the package manifest (#392 by @jasonodonnell, closing #391 reported by @iefnaf)
      • Pi compound planning skill. The compound planning skill is now bundled in the published Pi package, so Pi users get it automatically on install
      • Diff type switcher docs. Documented all five diff type options in the code review docs (#398, closing #397 reported by @UberMouse)

      Install / Update

      macOS / Linux:

      curl -fsSL https://plannotator.ai/install.sh | bash
      

      Windows:

      irm https://plannotator.ai/install.ps1 | iex
      

      Claude Code Plugin: Run /plugin in Claude Code, find plannotator , and click "Update now".

      OpenCode: Clear cache and restart:

      rm -rf ~/.bun/install/cache/@plannotator
      

      Then in opencode.json:

      {
        "plugin": ["@plannotator/opencode@latest"]
      }
      

      Pi: Install or update the extension:

      pi install npm:@plannotator/pi-extension
      

      What's Changed

      New Contributors

      Community

      @rockneurotiko contributed the GitHub viewed file sync (#393), bridging Plannotator's review UI with GitHub's native progress tracking. @yonihorn added the expand/collapse all buttons to the file tree (#403). @alexandresilvestri fixed the update banner for WSL users (#395). @aletar89 fixed project slug derivation for paths with dots and underscores (#401). @jasonodonnell fixed the Pi tool-scope import (#392).

      On the issue side:

      Full Changelog : v0.15.2...v0.15.5

    25. 🔗 HexRaysSA/plugin-repository commits sync plugin-repository.json rss
      sync plugin-repository.json
      
      No plugin changes detected
      
    26. 🔗 exe.dev Everyone is building a software factory rss

      We are all grappling with what it means to be an organization with agentic tools. We are seeing a Cambrian explosion of workflows in how to produce software. It is unwise, right now, to declare The Solution and enforce it. Developer Productivity teams that are pushing a workflow on their users are being counterproductive. Instead, the moment calls for experimentation and for giving people the agency to experiment, to learn, to iterate.

      The key is the compute primitive. You–and everyone else on your team–need to have plentiful, performant, trivial-to-provision VMs that can be accessed from your phone or anywhere, that can be shared securely, that integrate nicely, and that can be trusted with your data. Given this, you'll find an explosion of agents, automations, UIs, workflows, notifications, bots, claws, and so on. The successful ones will evolve to be the bones of your software factory.

      This is not a One Size Fits All moment. This is an Everyone's Workflow is Different moment.

      We went around the office recently, and talked through our workflows. 7 people. 9 workflows. (Not a joke!) Everyone's are different. Everyone's are wonderful. There's the newsletter that visits our Slack and tells us what's going on in support rotation. There's the integration with our Clickhouse logs. There's the background agent fighting the noble fight against test flakes. There are multi-agent orchestrators. There's an "inbox" view that gathers agent conversation state from all the VMs and sorts them by recency and annotates whether they've been pushed. There's vanilla Claude Code. There's the pi coding agent. There's our own coding agent, Shelley.

      The only common denominator? We're all using VMs to isolate, try, share, iterate, parallelize. So many VMs.

  3. March 26, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-26 rss

      IDA Plugin Updates on 2026-03-26

      New Releases:

      Activity:

      • augur
        • 229654f8: test: improve integration tests
      • capa
        • 4ba1b5d2: build(deps): bump bump-my-version from 1.2.4 to 1.3.0 (#2963)
        • f694c2ae: build(deps): bump picomatch in /web/explorer (#2967)
      • Greffe
      • haruspex
        • 5442fe2f: test: improve integration tests
      • hrtng
        • 3c2438e5: refresh all widgets after "Refactoring";
      • ida-domain
        • 7de2e5c1: Extend microcode module (#65)
      • plugin-ida
        • 89b0becb: Merge pull request #108 from RevEngAI/feat-PLU-256
        • 56cb7843: chore: bump package version
        • ed47fd57: Merge pull request #107 from RevEngAI/feat-PLU-256
        • 4e481e23: chore: bump package version
        • 10b07b25: Merge pull request #106 from RevEngAI/feat-PLU-256
        • 58a4724d: feat(PLU-256): plugin boundary changes
      • python-elpida_core.py
        • 2f5cb3d8: Reduce aggressive external request patterns (anti-abuse)
        • 9412a821: Pause BODY loop during Live Audit to prevent OOM crash
        • 1adbf746: Strip trailing newline from REPLICATE_API_TOKEN
        • 72d5ced6: Fix Live Audit results lost on Vision button click
        • 40bd18fe: Add Replicate Flux vision generator to Live Audit
        ‱ 38a894aa: A16 governance integration: keywords (20), IANUS support, UI 16 axiom
        ‱ 15637da8: Add A16 + missing A11-A14 across BODY: embeddings (11→16), banner, d1
        ‱ 37cf4d9f: Fix D15 provider parity: HF openrouter→convergence to match root
        ‱ b052e016: Diplomat layer + A16 Responsive Integrity ratification + cleanup
      • rhabdomancer
        • 71f97710: test: improve integration tests
    2. 🔗 r/Leeds Wire Nightclub rss

      I took this whilst in the queue at 1.00 in the morning, the week before COVID lockdown. Seemed to capture the chilled club vibe nicely.

      submitted by /u/ApprehensiveArm5689
      [link] [comments]

    3. 🔗 anthropics/claude-code v2.1.85 release

      What's changed

      • Added CLAUDE_CODE_MCP_SERVER_NAME and CLAUDE_CODE_MCP_SERVER_URL environment variables to MCP headersHelper scripts, allowing one helper to serve multiple servers
      • Added conditional if field for hooks using permission rule syntax (e.g., Bash(git *)) to filter when they run, reducing process spawning overhead
      • Added timestamp markers in transcripts when scheduled tasks (/loop, CronCreate) fire
      • Added trailing space after [Image #N] placeholder when pasting images
      ‱ Deep link queries (claude-cli://open?q=) now support up to 5,000 characters, with a "scroll to review" warning for long pre-filled prompts
      • MCP OAuth now follows RFC 9728 Protected Resource Metadata discovery to find the authorization server
      • Plugins blocked by organization policy (managed-settings.json) can no longer be installed or enabled, and are hidden from marketplace views
      • PreToolUse hooks can now satisfy AskUserQuestion by returning updatedInput alongside permissionDecision: "allow", enabling headless integrations that collect answers via their own UI
      • tool_parameters in OpenTelemetry tool_result events are now gated behind OTEL_LOG_TOOL_DETAILS=1
      • Fixed /compact failing with "context exceeded" when the conversation has grown too large for the compact request itself to fit
      • Fixed /plugin enable and /plugin disable failing when a plugin's install location differs from where it's declared in settings
      • Fixed --worktree exiting with an error in non-git repositories before the WorktreeCreate hook could run
      • Fixed deniedMcpServers setting not blocking claude.ai MCP servers
      • Fixed switch_display in the computer-use tool returning "not available in this session" on multi-monitor setups
      • Fixed crash when OTEL_LOGS_EXPORTER, OTEL_METRICS_EXPORTER, or OTEL_TRACES_EXPORTER is set to none
      • Fixed diff syntax highlighting not working in non-native builds
      • Fixed MCP step-up authorization failing when a refresh token exists — servers requesting elevated scopes via 403 insufficient_scope now correctly trigger the re-authorization flow
      • Fixed memory leak in remote sessions when a streaming response is interrupted
      • Fixed persistent ECONNRESET errors during edge connection churn by using a fresh TCP connection on retry
      • Fixed prompts getting stuck in the queue after running certain slash commands, with up-arrow unable to retrieve them
      • Fixed Python Agent SDK: type:'sdk' MCP servers passed via --mcp-config are no longer dropped during startup
      • Fixed raw key sequences appearing in the prompt when running over SSH or in the VS Code integrated terminal
      • Fixed Remote Control session status staying stuck on "Requires Action" after a permission is resolved
      • Fixed shift+enter and meta+enter being intercepted by typeahead suggestions instead of inserting newlines
      • Fixed stale content bleeding through when scrolling up during streaming
      • Fixed terminal left in enhanced keyboard mode after exit in Ghostty, Kitty, WezTerm, and other terminals supporting the Kitty keyboard protocol — Ctrl+C and Ctrl+D now work correctly after quitting
      • Improved @-mention file autocomplete performance on large repositories
      • Improved PowerShell dangerous command detection
      • Improved scroll performance with large transcripts by replacing WASM yoga-layout with a pure TypeScript implementation
      • Reduced UI stutter when compaction triggers on large sessions
    4. 🔗 r/Yorkshire Progress rss
    5. 🔗 r/LocalLLaMA Dual DGX Sparks vs Mac Studio M3 Ultra 512GB: Running Qwen3.5 397B locally on both. Here's what I found. rss

      I was spending about $2K/month on Claude API tokens for a personal AI assistant I run through Slack. After about 45 days of that cost pain I decided to go local. Bought both a dual DGX Spark setup and a Mac Studio M3 Ultra 512GB, each cost me about $10K after taxes. Same price, completely different machines. Here is what I learned running Qwen3.5 397B A17B on both.

      The Mac Studio

      MLX 6 bit quantization, 323GB model loaded into 512GB unified memory. 30 to 40 tok/s generation. The biggest selling point is memory bandwidth at roughly 800 GB/s. That bandwidth is what makes token generation feel smooth on such a massive model in a single box. Setup was easy. Install mlx vlm, point it at the model, done. The weakness is raw compute. Prefill is slow (30+ seconds on a big system prompt with tool definitions) and if you want to do batch embedding alongside inference, you are going to feel it. I also had to write a 500 line async proxy because mlx vlm does not parse tool calls or strip thinking tokens natively.

      The Dual Sparks

      INT4 AutoRound quantization, 98GB per node loaded across two 128GB nodes via vLLM TP=2. 27 to 28 tok/s generation. The biggest selling point is processing speed. CUDA tensor cores, vLLM kernels, tensor parallelism. Prefill is noticeably faster than the Mac Studio. Batch embedding that takes days on MLX finishes in hours on CUDA. The entire open source GPU ecosystem just works. The weakness is memory bandwidth at roughly 273 GB/s per node, which is why generation tops out lower than the Mac Studio despite having more compute.

      The setup was brutal though. Only one QSFP cable works (the second crashes NCCL). Node2's IP is ephemeral and disappears on reboot. The GPU memory utilization ceiling is 0.88 and you have to binary search for it because going to 0.9 starves the OS and 0.85 OOMs at 262K context. Every wrong guess costs you 15 minutes while checkpoint shards reload. You have to flush page cache on BOTH nodes before every model load or you get mystery OOM failures. Some units thermal throttle within 20 minutes. It took me days to get stable.
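The ceiling hunt described above is essentially a one-dimensional search over `--gpu-memory-utilization`, where values that are too high starve the host OS and values that are too low OOM at full context. A minimal sketch of that tuning loop, assuming a hypothetical `probe()` standing in for an actual vLLM launch (each real probe costs ~15 minutes of shard reloads, so the sweep stops at the first value that works):

```python
def find_gpu_mem_util(probe, lo=0.85, hi=0.90, step=0.005):
    """Sweep candidate --gpu-memory-utilization values from high to low.

    probe(x) is a hypothetical check: launch vLLM at utilization x and
    return True only if the model loads, the OS stays responsive, and
    the full context fits. Start near the top and stop at the first
    success, since each failed probe is expensive.
    """
    x = hi
    while x >= lo:
        x = round(x, 3)  # keep candidates on clean 0.005 steps
        if probe(x):
            return x
        x -= step
    return None  # no value in [lo, hi] survived

# toy probe standing in for a real launch: only 0.875-0.885 "boots"
print(find_gpu_mem_util(lambda x: 0.875 <= x <= 0.885))  # 0.885
```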

      Why I kept both

      I am building a RAG pipeline with Qwen3 Embedding 8B and Qwen3 Reranker 8B for a personal knowledge base. On the Mac Studio, those models would compete with the main model for the same 512GB memory pool. On the Sparks, they get dedicated CUDA and never touch inference memory.

      So the architecture ended up being: Mac Studio handles inference only (full 512GB for the model and KV cache). Sparks handle RAG, embedding, reranking, and everything else. They talk over Tailscale.

      Head to head numbers

      | | Mac Studio 512GB | Dual DGX Spark |
      |---|---|---|
      | Cost | $10K | $10K |
      | Memory | 512GB unified | 256GB (128×2) |
      | Bandwidth | ~800 GB/s | ~273 GB/s per node |
      | Quant | MLX 6 bit (323GB) | INT4 AutoRound (98GB/node) |
      | Gen speed | 30 to 40 tok/s | 27 to 28 tok/s |
      | Max context | 256K tokens | 130K+ tokens |
      | Setup | Easy but hands on | Hard |
      | Strength | Bandwidth | Compute |
      | Weakness | Compute | Bandwidth |

      If you can only buy one

      I cannot tell you which is better because if one were clearly better I would have returned the other. They optimize for different things.

      Mac Studio if you want it to just work, you want that 800 GB/s bandwidth for smooth generation, and you are not planning heavy embedding workloads alongside inference. An RTX 6000 Pro build was my third option but I did not want to build a custom PC on top of everything else I was planning on for this.

      Dual Sparks if you are comfortable with Linux and Docker, you want CUDA and vLLM natively, you plan to run RAG or embedding alongside inference, and you are willing to spend days on initial setup for a more powerful platform long term.

      The Mac Studio gives you 80% of the experience with 20% of the effort. The Sparks give you more capability but they extract a real cost in setup time.

      Break even math

      $2K/month API spend. $20K total hardware. 10 months to break even. After that it is free inference forever with complete privacy and no rate limits.
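That payback arithmetic, as a trivial sketch (the figures come from the post; the function is just illustrative):

```python
def breakeven_months(monthly_api_spend: float, hardware_cost: float) -> float:
    """Months of avoided API spend needed to recoup the hardware outlay."""
    return hardware_cost / monthly_api_spend

# $2K/month in API tokens vs $20K of local hardware
print(breakeven_months(2_000, 20_000))  # 10.0
```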

      I wrote a longer version of this with more detail on the full build out at https://substack.com/home/post/p-192255754 . Building a series covering the full stack including vLLM tuning, RAG without LangChain, and QLoRA fine tuning a 397B MoE. Happy to answer questions.

      submitted by /u/trevorbg
      [link] [comments]

    6. 🔗 r/Harrogate Bramham Drive HG2 area rss

      Hi everyone, I'm looking at possibly buying a flat in the Bramham Drive area. Had a look online and seen slightly elevated levels of crime. Can anybody shine a light on the area for me please? What's it like there and should I be concerned?

      Thanks a lot

      submitted by /u/blkhlznrevltionz
      [link] [comments]

    7. 🔗 r/reverseengineering r2gopclntabParser: A radare2-based Go gopclntab parser for recovering function symbols from Go binaries, including fully stripped ones. rss
    8. 🔗 r/york Kickabout Community rss

      Enjoy a friendly football game to break up the week. Kickabout Community supports independent 5-a-side and 7-a-side adult football games across York. We’re a volunteer-run group of organisers, making football accessible for players of all ability, gender, age, and fitness levels.

      👉 Join Kickabout Community here: https://chat.whatsapp.com/CSt29p06AGLL1E91uu5Eze

      📍 Pitches used:
      ‱ York Sports Village
      ‱ University of York Sports Centre
      ‱ PlayFootball Clifton Moor
      ‱ Energise Acomb

      đŸ’· Subs: ÂŁ3-4 per session (covering pitch hire, balls, and bibs)

      We are not a business and not profit-making. Any surplus funds are for player socials or charitable donations.

      submitted by /u/Chance_Board_5424
      [link] [comments]

    9. 🔗 r/Leeds Woodhouse firework rss

      Does anyone living near Woodhouse know what the firework happening right now is about? It sounds like world war 3.

      submitted by /u/CraftyBrie
      [link] [comments]

    10. 🔗 r/york Sarah Ferguson stripped of Freedom of City of York title rss

      submitted by /u/Kagedeah
      [link] [comments]

    11. 🔗 r/Leeds The Empire Cafe illustration rss

      I've been drawing pictures of Leeds now for like 10 years but I'm still not bored of drawing the city at night! Here's Empire Cafe on Fish Street :)

      submitted by /u/zacrosso_art
      [link] [comments]

    12. 🔗 r/Yorkshire Had a grand day out today in Pickering, too early in season for the castle, museum and railway, but had lots to do and visit rss

      submitted by /u/arioandy
      [link] [comments]

    13. 🔗 r/Leeds Leeds Photos rss

      I bought a new camera in a bid to use my phone less, found I quite enjoy taking photos. I don't have much of a clue what I'm doing but took these recently.

      submitted by /u/Phil-pot
      [link] [comments]

    14. 🔗 r/reverseengineering Latest Akamai v3 deobfuscator static reversal of dynamic per request rss
    15. 🔗 r/york Daffodils rss

      I've never given much thought to the daffodils that are everywhere in York at this time of year, but is there a reason, historical or otherwise why there are so many in so many places throughout the city?

      submitted by /u/Shoddy-Television530
      [link] [comments]

    16. 🔗 r/Leeds A nostalgic long read about clubbing in Leeds in the noughties - hopefully it’s of interest to some of you! rss
    17. 🔗 r/york First time visiting rss

      I (M22) am going to York for the first time between the 30th of March and 1st of April. Is there anything I should definitely check out which I might not have heard about (I am not bringing my car with me so it will all have to be quite local)? And is there anything I should know/be aware of before I go? (Such as do I need to book tickets for the dungeons or can I pay at the entrance).

      I am always open to meet new people too, so if anyone would like to join me for museums or attractions, feel free to shoot me a message.

      Thank you so much for all your help!

      submitted by /u/SneakingALook
      [link] [comments]

    18. 🔗 r/LocalLLaMA Mistral AI to release Voxtral TTS, a 3-billion-parameter text-to-speech model with open weights that the company says outperformed ElevenLabs Flash v2.5 in human preference tests. The model runs on about 3 GB of RAM, achieves 90-millisecond time-to-first-audio, supports nine languages. rss

      VentureBeat: Mistral AI just released a text-to-speech model it says beats ElevenLabs — and it's giving away the weights for free: https://venturebeat.com/orchestration/mistral-ai-just-released-a-text-to-speech-model-it-says-beats-elevenlabs-and

      Mistral AI unlisted video on YouTube: Voxtral TTS. Find your voice.: https://www.youtube.com/watch?v=_N-ZGjGSVls

      Mistral new 404: https://mistral.ai/news/voxtral-tts

      submitted by /u/Nunki08
      [link] [comments]

    19. 🔗 r/Harrogate Shocking behaviour... rss

      They look like such nice ladies, too...

      submitted by /u/LurkishEmpire
      [link] [comments]

    20. 🔗 r/york Best Margs in York? rss

      I’m on the hunt. The hunt for good margaritas. I’ve only really discovered this cocktail in the last 12 months but for a few reasons I’ve drank most of the ones I’ve tried in other cities rather than my own.

      When they’re good. Holy hell they’re amazing. When they’re bad, it’s beyond disappointing.

      Recently I’ve tried a few places in York (where I’m born and bred) and been disappointed each time.

      - Evil eye was average at best

      - Fossgate social was ok, the spicy was better than the standard.

      I’ve not found one in York that’s been comparable to the good ones I’ve had elsewhere yet.

      An easy way I’ve found to dismiss a large cohort is if they use table salt rather than crystal salt. Let’s get the basics right please York garrrr.

      So
 gives me the locations of decent margs in York centre please! I’m not looking for ‘xxx might be decent’, I’d like first-hand recommendations based on experience - thanks pals.

      submitted by /u/robbo909
      [link] [comments]

    21. 🔗 r/LocalLLaMA RotorQuant: 10-19x faster alternative to TurboQuant via Clifford rotors (44x fewer params) rss

      Kinda sounds ridiculous - but I reimagined / reinvented TurboQuant with Clifford Algebra Vector Quantization, implemented on CUDA + Metal shaders - https://github.com/tonbistudio/turboquant-pytorch/pull/4 https://github.com/TheTom/turboquant_plus/pull/34

      The idea: Replace the d×d random orthogonal matrix Π with Clifford rotors in Cl(3,0). Instead of a dense matmul (16,384 FMAs for d=128), chunk the vector into groups of 3 dims and rotate each with a 4-parameter rotor via the sandwich product RvR̃ (~100 FMAs total).

      Results on Qwen2.5-3B-Instruct KV cache:

      - Cosine similarity: 0.990 (vs TurboQuant's 0.991) — effectively identical
      - 44× fewer parameters (372 vs 16,399 for d=128)
      - Fused CUDA kernel: 10-19× faster than cuBLAS matmul on RTX PRO 4000
      - Fused Metal shader: 9-31× faster on Apple M4
      - Perfect 9/9 needle-in-haystack at all bit-widths

      The key insight: for pure vectors, the rotor sandwich is equivalent to a sparse 3×3 rotation — the fused kernel keeps everything in registers with no memory round-trips, which is why it beats the BLAS GEMM despite TurboQuant's matmul being highly optimized.

      The tradeoff is higher synthetic MSE on random unit vectors (the block-diagonal rotation doesn't induce the exact Beta distribution). But with QJL correction, real-model attention fidelity is identical — and sometimes better on top-1/top-5 retrieval.

      Paper: https://www.scrya.com/rotorquant/
      Code: https://github.com/scrya-com/rotorquant
      PDF: https://www.scrya.com/rotorquant.pdf

      submitted by /u/Revolutionary_Ask154
      [link] [comments]
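The sandwich product RvR̃ in the post can be sanity-checked in plain Python: in Cl(3,0) a unit rotor acts on a 3-vector the same way a unit quaternion does, so each 4-parameter rotor is a compact encoding of a 3×3 rotation on its chunk. A toy sketch of that idea, not the project's fused CUDA/Metal kernels (function names are mine):

```python
import math

def rotor_apply(r, v):
    """Apply rotor r = (w, x, y, z) to 3-vector v via the sandwich R v R~.
    In Cl(3,0) a unit rotor rotates vectors exactly like a unit quaternion."""
    w, x, y, z = r
    vx, vy, vz = v
    # t = 2 * (r_vec × v)
    tx = 2 * (y * vz - z * vy)
    ty = 2 * (z * vx - x * vz)
    tz = 2 * (x * vy - y * vx)
    # v' = v + w*t + r_vec × t
    return (
        vx + w * tx + (y * tz - z * ty),
        vy + w * ty + (z * tx - x * tz),
        vz + w * tz + (x * ty - y * tx),
    )

def rotate_chunks(vec, rotors):
    """Rotate a flat vector in 3-dim chunks, one 4-parameter rotor each —
    the cheap stand-in for a dense d×d orthogonal matmul."""
    out = []
    for i, r in enumerate(rotors):
        out.extend(rotor_apply(r, vec[3 * i : 3 * i + 3]))
    return out

# 90° rotation about z: rotor = (cos 45°, 0, 0, sin 45°)
r = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(rotor_apply(r, (1.0, 0.0, 0.0)))  # ≈ (0, 1, 0)
```

Chunking a d-dimensional vector this way needs only d/3 rotors of 4 parameters each, which is where the claimed 44× parameter saving comes from.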

    22. 🔗 r/wiesbaden Ramen InnenstadtnĂ€he rss

      Hello everyone,

      I moved to Wiesbaden last month and I'm still looking for a good ramen place, ideally in the city centre. I've tried one or two so far but wasn't really impressed.

      Do you have any recommendations?

      Bonus points if you can pick your ingredients individually at the start instead of only choosing from preset menus.

      Thanks!

      submitted by /u/Amarku
      [link] [comments]

    23. 🔗 r/Yorkshire Caught this moment at Whitby Abbey last summer rss

      Just one of those moments where everything lined up

      submitted by /u/Effective_Sink_3934
      [link] [comments]

    24. 🔗 r/Yorkshire First Puffins of 2026 RSPB Bempton Cliffs rss
    25. 🔗 r/reverseengineering My DAP couldn't display Arabic text, so I reverse engineered the firmware format to fix it rss
    26. 🔗 anthropics/claude-code v2.1.84 release

      What's changed

      • Added PowerShell tool for Windows as an opt-in preview. Learn more at https://code.claude.com/docs/en/tools-reference#powershell-tool
      • Added ANTHROPIC_DEFAULT_{OPUS,SONNET,HAIKU}_MODEL_SUPPORTS env vars to override effort/thinking capability detection for pinned default models for 3p (Bedrock, Vertex, Foundry), and _MODEL_NAME/_DESCRIPTION to customize the /model picker label
      • Added CLAUDE_STREAM_IDLE_TIMEOUT_MS env var to configure the streaming idle watchdog threshold (default 90s)
      • Added TaskCreated hook that fires when a task is created via TaskCreate
      • Added WorktreeCreate hook support for type: "http" — return the created worktree path via hookSpecificOutput.worktreePath in the response JSON
      • Added allowedChannelPlugins managed setting for team/enterprise admins to define a channel plugin allowlist
      • Added x-client-request-id header to API requests for debugging timeouts
      • Added idle-return prompt that nudges users returning after 75+ minutes to /clear, reducing unnecessary token re-caching on stale sessions
      • Deep links (claude-cli://) now open in your preferred terminal instead of whichever terminal happens to be first in the detection list
      • Rules and skills paths: frontmatter now accepts a YAML list of globs
      • MCP tool descriptions and server instructions are now capped at 2KB to prevent OpenAPI-generated servers from bloating context
      • MCP servers configured both locally and via claude.ai connectors are now deduplicated — the local config wins
      • Background bash tasks that appear stuck on an interactive prompt now surface a notification after ~45 seconds
      • Token counts ≄1M now display as "1.5m" instead of "1512.6k"
      • Global system-prompt caching now works when ToolSearch is enabled, including for users with MCP tools configured
      • Fixed voice push-to-talk: holding the voice key no longer leaks characters into the text input, and transcripts now insert at the correct position
      • Fixed up/down arrow keys being unresponsive when a footer item is focused
      • Fixed Ctrl+U (kill-to-line-start) being a no-op at line boundaries in multiline input, so repeated Ctrl+U now clears across lines
      • Fixed null-unbinding a default chord binding (e.g. "ctrl+x ctrl+k": null) still entering chord-wait mode instead of freeing the prefix key
      • Fixed mouse events inserting literal "mouse" text into transcript search input
      • Fixed workflow subagents failing with API 400 when the outer session uses --json-schema and the subagent also specifies a schema
      • Fixed missing background color behind certain emoji in user message bubbles on some terminals
      • Fixed the "allow Claude to edit its own settings for this session" permission option not sticking for users with Edit(.claude) allow rules
      • Fixed a hang when generating attachment snippets for large edited files
      • Fixed MCP tool/resource cache leak on server reconnect
      • Fixed a startup performance issue where partial clone repositories (Scalar/GVFS) triggered mass blob downloads
      • Fixed native terminal cursor not tracking the text input caret, so IME composition (CJK input) now renders inline and screen readers can follow the input position
      • Fixed spurious "Not logged in" errors on macOS caused by transient keychain read failures
      • Fixed cold-start race where core tools could be deferred without their bypass active, causing Edit/Write to fail with InputValidationError on typed parameters
      • Improved detection for dangerous removals of Windows drive roots (C:\, C:\Windows, etc.)
      • Improved interactive startup by ~30ms by running setup() in parallel with slash command and agent loading
      • Improved startup for claude "prompt" with MCP servers — the REPL now renders immediately instead of blocking until all servers connect
      • Improved Remote Control to show a specific reason when blocked instead of a generic "not yet enabled" message
      • Improved p90 prompt cache rate
      • Reduced scroll-to-top resets in long sessions by making the message window immune to compaction and grouping changes
      • Reduced terminal flickering when animated tool progress scrolls above the viewport
      • Changed issue/PR references to only become clickable links when written as owner/repo#123 — bare #123 is no longer auto-linked
      • Slash commands unavailable for the current auth setup (/voice, /mobile, /chrome, /upgrade, etc.) are now hidden instead of shown
      • [VSCode] Added rate limit warning banner with usage percentage and reset time
      • Stats screenshot (Ctrl+S in /stats) now works in all builds and is 16× faster
    27. 🔗 Rust Blog Announcing Rust 1.94.1 rss

      The Rust team has published a new point release of Rust, 1.94.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

      If you have a previous version of Rust installed via rustup, getting Rust 1.94.1 is as easy as:

      rustup update stable
      

      If you don't have it already, you can get rustup from the appropriate page on our website.

      What's in 1.94.1

      Rust 1.94.1 resolves three regressions that were introduced in the 1.94.0 release.

      It also includes a security fix.

      Contributors to 1.94.1

      Many people came together to create Rust 1.94.1. We couldn't have done it without all of you. Thanks!

    28. 🔗 Andrew Healey's Blog Building a Runtime with QuickJS rss

      Building a tiny JavaScript runtime on top of QuickJS with timers, file I/O, and an event loop.

    29. 🔗 Console.dev newsletter EmailMD rss

      Description: Generate emails from Markdown.

      What we like: Uses Markdown templates to generate email output that works across mail clients. Customizable themes and fonts. Includes common components e.g. buttons, tables, images, callouts, hero. Wraps mjml which handles the compatible conversions.

      What we dislike: Built with TypeScript which makes it difficult to use from other languages.

    30. 🔗 Console.dev newsletter Pyodide rss

      Description: Run Python in the browser.

      What we like: Ports CPython to Wasm so it can run in the browser. Any pip package that has a wheel is supported. Includes a JS FFI so you can work directly with the browser (Pyodide already gets access to web APIs).

      What we dislike: Wasm/browser environment is single threaded so multi-threading or multi-processing isn’t supported. Also has relatively low memory limits due to Wasm limitations.

    31. 🔗 Ampcode News GPT-5.4 in Deep rss

      GPT-5.4 now powers Amp's deep mode.

      It's the best model in the world right now.

      It's faster than GPT-5.3-Codex and still as great at coding.

      But, out of the box, GPT-5.4 was too chatty. That's not what we want for deep; it's not a pair programmer, it's supposed to go off and solve the problem.

      So we tuned GPT-5.4 to behave like GPT-5.3-Codex.

      Once we had that, we started to use it exclusively; even for interactive tasks. We run it at very high reasoning (deep^3) and still prefer it when we need fast interaction and fast reaction. It takes steering better than GPT-5.3-Codex.

      To use it: open the command palette with Ctrl-O and run the command mode: use deep in the Amp CLI, or select deep mode in the Amp editor extension's prompt field. By default it uses deep^2; you can switch to deep^3 by hitting Opt-D.

  4. March 25, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-25 rss

      IDA Plugin Updates on 2026-03-25

      New Releases:

      Activity:

      • augur
        • 07aedb46: chore: add semver-checks to ci
        • fcef2013: fix: doc github workflow
        • b684dd2b: chore: improve security of github workflows
      • binsync
        • beb9df34: Add commit history "git reset" functionality to your user branch (#513)
        • 065c9965: add checks for invalid commits before timestamp (#514)
      • fa
        • b584a37f: Merge pull request #60 from doronz88/bugfix/default-signature-path
        • 962e9d01: fainterp: Handle missing signatures_root in global config
        • 06f3b689: fainterp: Prevent creating config.ini in site-packages
      • ghidra
        • cccc5103: Merge remote-tracking branch 'origin/Ghidra_12.1'
        • 04b63229: GP-0: Stronger ObjcMessageAnalzyer.canAnalyze() check (prevents it from
        • 2f6e25e1: GP-0: SwiftDemangler fix for ArgumentTuple vs Tuple
      • ghidra-chinese
        • 248434af: Merge pull request #94 from TC999/sync
        • c155a6fc: Merge branch 'chinese' into sync
      • haruspex
        • 62b1b360: chore: add semver-checks to ci
        • f0c84766: fix: doc github workflow
        • 3f836eee: doc: update changelog
        • 43f98173: chore: improve security of github workflows
      • ida-domain
      • ida-sdk
        • 72a4a4b9: docs: idapython: Added docstrings to patch final python files
      • PyClassInformer
        • 5c110efd: Avoided an IDA crash on IDA 9.3
      • python-elpida_core.py
        • 4e9f8f96: Add debug logging to Discord listener for webhook diagnosis
        • 2c2e08e5: Fix: allow external webhooks (Pipedream etc.) in Guest Chamber
        • 2b212cee: Allow external bot/app messages in Guest Chamber (Pipedream, etc.)
        • 5e54f412: Discord Guest Listener: read #guest-chamber messages into Parliament
        • b89afad7: Guest Chamber: real answers, not governance verdicts
        • ebe08120: Guest Chamber: route human questions through Parliament
        • 82c2a452: Fix OOM: optimize _load_memory() with deque(maxlen=50), bump Fargate 

      • qscripts
        • e754bfa2: ci: clone SDK with submodules (src/cmake is a submodule)
        • 732a3707: ci: update workflow for new IDA SDK cmake layout
        • 3ba5c49e: build: update to new IDA SDK 9.3 cmake layout and libidacpp
      • rhabdomancer
        • c1fc3e6f: doc: update changelog
        • e27c571c: chore: add semver-checks to ci
        • 8d4049c7: doc: update changelog
        • 483f9061: chore: improve security of github workflows
      • scriptwatch
    2. 🔗 r/Leeds Coffee Shops that aren't Nero to work in! rss

      Hello everyone, I am v much sick of all the Cafe Nero's (I can't be the only one who thinks the coffee is terrible) and would love to hear about any good independent coffee shops that I can work in for long periods of time!

      I have tried Black Sheep, and to be honest I just think I have a distaste for franchises. So please do let me know which is good to try!

      submitted by /u/aishrw
      [link] [comments]

    3. 🔗 r/Leeds Female Friends Group rss

      hey all!

      I'm looking for female friends based in Yorkshire to join a group I'm putting together!

      20's to 40's welcome. We can create two channels for the different age groups if people prefer!

      There's multiple sub sections including spaces dedicated to rants and support, meetup planning, casual chat, games, and piercings/tattoo spaces!

      comment below if interested!

      submitted by /u/winterberry9000
      [link] [comments]

    4. 🔗 r/york Tonight’s sun starting to set over York rss

      submitted by /u/York_shireman
      [link] [comments]

    5. 🔗 r/york Sunday roast rss

      Can anyone tell me how to book a Sunday roast at the Ackhorne?? If they even still do them.

      I’m coming down from Scotland in two weeks, York is my home away from home and for the first time we are staying in til the Monday so we can experience a York Sunday roast. The Ackhorne has been highly recommended but I cannot for the life of me find how to book!

      submitted by /u/Heavy-Assumption-202
      [link] [comments]

    6. 🔗 r/york Big White Flash in the Sky rss

      Did anyone in York see this? It happened at like ten past eight, the whole sky went white like what you'd get with sheet lightning, but there was no thunder and it doesn't match the weather.

      submitted by /u/Inner_Writing7083
      [link] [comments]

    7. 🔗 r/wiesbaden Red shopping cart rss

      Ok, which of you was that?

      submitted by /u/Fright-Train-Rider
      [link] [comments]

    8. 🔗 r/york How do you (residents of York) find your experiences with your GP’s? rss

      Just out of curiosity I wanted to know how people felt about their experiences with seeing their GP. Did you find them knowledgeable, helpful, or successful with treating your issue? Do you feel that when you become unwell your GP is a safe place for you to get the correct treatment/diagnosis?

      submitted by /u/Comfortable_End7154
      [link] [comments]

    9. 🔗 r/Leeds Anyone moved into Casa Abbey, Kirkstall? rss

      Me and my friend are supposed to move in soon (haven't signed tenancy agreement yet). We went to 2 viewings and everything looked good.

      But recently saw some really bad reviews on Google etc. and wanted to know how other people's experiences have been so far

      Just worried about moving and then getting a ton of problems. So far I've seen issues with plaster beetles and heating...

      If there's anyone who has moved into the apartments/ 2 bed flats specifically, please share how it's been going so far!

      I'm having doubts 😭

      submitted by /u/mooniebao
      [link] [comments]

    10. 🔗 Locklin on science Very small engines rss

      One of the interesting things to contemplate is the scale of the internal combustion engine. It’s a very human-scale device; pistons the size of fists, valves about as wide as knuckles. It’s the kind of thing a man with normal-sized machine tools can make. Most internal combustion engines in the world are on […]

    11. 🔗 r/Yorkshire Is this AI slop? It doesn't appear to be Robin Hood's Bay, or Whitby rss

      I found this photo here and I couldn't find any streets which looked like this on Google Maps. I recognised the cliffs near Ravenscar, but idk, I felt this was an adjusted image of New Road? Just wanted to ask the locals here.

      submitted by /u/askepticalbureaucrat
      [link] [comments]

    12. 🔗 r/york Best place to go for healthy breakfast? rss

      So I'm visiting in a few weeks, and am looking for places that do relatively healthy breakfasts in the city center. Has anyone got any suggestions of places that would fit the bill? Thanks

      submitted by /u/AcrylicandWater
      [link] [comments]

    13. 🔗 r/york Finding it incredibly hard to find a part time job as a student rss

      I am a second-year University of York student and I have been trying to find a job since November. I got a Christmas temp job at the Christmas market, got trained up, and then got told they had hired too many people and was let go. Since then I have applied to literally countless jobs on Indeed, and sent emails to pubs and cafes. I previously worked in a pub for 1.5 years before uni. I have been to 5 job interviews since around February, with 1 being Greggs, 1 being a cafe, and 3 in pubs/bars. 2 of the bars just ignored me post-interview and never got back to me. I feel like I've got terrible luck and I am genuinely struggling with money.

      submitted by /u/lingeringLlama07
      [link] [comments]

    14. 🔗 r/reverseengineering CounterPoint: Using Hardware Event Counters to Refute and Refine Microarchitectural Assumptions rss
    15. 🔗 r/Leeds What is this place in Harehills? rss

      New to Leeds. Keep walking past this “Coffee Express” in Harehills. No Google presence, no reviews, just a sign and a pitch-black doorway. Always curious and want to walk in, but it also looks rather uninviting and low-key dodgy.

      Has anyone actually been inside, or knows what this place is?

      submitted by /u/Active-Response9478
      [link] [comments]

    16. 🔗 r/LocalLLaMA Intel will sell a cheap GPU with 32GB VRAM next week rss

      It seems Intel will release a GPU with 32 GB of VRAM on March 31, which they would sell directly for $949.

      Bandwidth would be 608 GB/s (a little less than an NVIDIA 5070), and wattage would be 290W.

      Probably/hopefully very good for local AI and models like Qwen 3.5 27B at 4 bit quantization.

      I'm definitely rooting for Intel, as I have a big percentage of my investment in their stock.

      https://www.pcmag.com/news/intel-targets-ai-workstations-with-memory-stuffed-arc-pro-b70-and-b65-gpus

      submitted by /u/happybydefault
      [link] [comments]

    17. 🔗 Anton Zhiyanov Porting Go's io package to C rss

      Creating a subset of Go that translates to C was never my end goal. I liked writing C code with Go, but without the standard library it felt pretty limited. So, the next logical step was to port Go's stdlib to C.

      Of course, this isn't something I could do all at once. So I started with the standard library packages that had the fewest dependencies, and one of them was the io package. This post is about how that went.

      io package ‱ Slices ‱ Multiple returns ‱ Errors ‱ Interfaces ‱ Type assertion ‱ Specialized readers ‱ Copy ‱ Wrapping up

      The io package

      io is one of the core Go packages. It introduces the concepts of readers and writers, which are also common in other programming languages.

      In Go, a reader is anything that can read some raw data (bytes) from a source into a slice:

      type Reader interface {
          Read(p []byte) (n int, err error)
      }
      

      A writer is anything that can take some raw data from a slice and write it to a destination:

      type Writer interface {
          Write(p []byte) (n int, err error)
      }
      

      The io package defines many other interfaces, like Seeker and Closer, as well as combinations like ReadWriter and WriteCloser. It also provides several functions, the most well-known being Copy, which copies all data from a source (represented by a reader) to a destination (represented by a writer):

      func Copy(dst Writer, src Reader) (written int64, err error)
      

      C, of course, doesn't have interfaces. But before I get into that, I had to make several other design decisions.

      Slices

      In general, a slice is a linear container that holds N elements of type T. Typically, a slice is a view of some underlying data. In Go, a slice consists of a pointer to a block of allocated memory, a length (the number of elements in the slice), and a capacity (the total number of elements that can fit in the backing memory before the runtime needs to re-allocate):

      type slice struct {
          array unsafe.Pointer
          len   int
          cap   int
      }
      

      Interfaces in the io package work with fixed-length slices (readers and writers should never append to a slice), and they only use byte slices. So, the simplest way to represent this in C could be:

      typedef struct {
          uint8_t* ptr;
          size_t len;
      } Bytes;
      

      But since I needed a general-purpose slice type, I decided to do it the Go way instead:

      typedef struct {
          void* ptr;
          size_t len;
          size_t cap;
      } so_Slice;
      

      Plus a bound-checking helper to access slice elements:

      #define so_at(T, s, i) (*so_at_ptr(T, s, i))
      #define so_at_ptr(T, s, i) ({            \
          so_Slice _s_at = (s);                \
          size_t _i = (size_t)(i);             \
          if (_i >= _s_at.len)                 \
              so_panic("index out of bounds"); \
          (T*)_s_at.ptr + _i;                  \
      })
      

      Usage example:

      // go
      nums := make([]int, 3)
      nums[0] = 11
      nums[1] = 22
      nums[2] = 33
      n1 := nums[1]
      
      
      
      // c
      so_Slice nums = so_make_slice(int, 3, 3);
      so_at(int, nums, 0) = 11;
      so_at(int, nums, 1) = 22;
      so_at(int, nums, 2) = 33;
      so_int n1 = so_at(int, nums, 1);
      

      So far, so good.
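      For reference, here's a self-contained version of the slice type and access macro that compiles and runs as-is. The statement-expression macro is a GNU extension, so this assumes GCC or Clang; so_panic and the slice construction are replaced with minimal stand-ins for the demo:

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdio.h>
      #include <stdlib.h>

      // Slice: pointer + length + capacity, as described above.
      typedef struct {
          void* ptr;
          size_t len;
          size_t cap;
      } so_Slice;

      // Stand-in for so_panic: print and abort.
      #define so_panic(msg) (fprintf(stderr, "panic: %s\n", msg), abort())

      // Bounds-checked element access (GNU statement expression).
      #define so_at(T, s, i) (*so_at_ptr(T, s, i))
      #define so_at_ptr(T, s, i) ({            \
          so_Slice _s_at = (s);                \
          size_t _i = (size_t)(i);             \
          if (_i >= _s_at.len)                 \
              so_panic("index out of bounds"); \
          (T*)_s_at.ptr + _i;                  \
      })

      int main(void) {
          // Back the slice with a plain array instead of so_make_slice.
          int backing[3] = {0};
          so_Slice nums = {.ptr = backing, .len = 3, .cap = 3};
          so_at(int, nums, 0) = 11;
          so_at(int, nums, 1) = 22;
          so_at(int, nums, 2) = 33;
          assert(so_at(int, nums, 1) == 22);
          return 0;
      }
      ```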

      Multiple returns

      Let's look at the Read method again:

      Read(p []byte) (n int, err error)
      

      It returns two values: an int and an error. C functions can only return one value, so I needed to figure out how to handle this.

      The classic approach would be to pass output parameters by pointer, like read(p, &n, &err) or n = read(p, &err). But that doesn't compose well and looks nothing like Go. Instead, I went with a result struct:

      typedef union {
          bool as_bool;
          so_int as_int;
          int64_t as_i64;
          so_String as_string;
          so_Slice as_slice;
          void* as_ptr;
          // ... other types
      } so_Value;
      
      typedef struct {
          so_Value val;
          so_Error err;
      } so_Result;
      

      The so_Value union can store any primitive type, as well as strings, slices, and pointers. The so_Result type combines a value with an error. So, our Read method (let's assume it's just a regular function for now):

      func Read(p []byte) (n int, err error)
      

      Translates to:

      so_Result Read(so_Slice p);
      

      And the caller can access the result like this:

      so_Result res = Read(p);
      if (res.err) {
          so_panic(res.err->msg);
      }
      so_println("read", res.val.as_int, "bytes");
      

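      A self-contained, runnable sketch of the result pattern, with the value union trimmed down to the fields the demo needs. The toy Read function and its EOF behavior are mine, purely for illustration:

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      // Error type from the next section: pointer to an immutable message.
      struct so_Error_ { const char* msg; };
      typedef struct so_Error_* so_Error;

      // Trimmed-down value union + result struct.
      typedef union {
          int as_int;
          int64_t as_i64;
      } so_Value;

      typedef struct {
          so_Value val;
          so_Error err;
      } so_Result;

      static struct so_Error_ eof_err = {"EOF"};

      // Toy Read: copies up to 5 bytes of "hello" into p, then reports EOF.
      static so_Result Read(char* p, size_t len) {
          const char* src = "hello";
          size_t n = len < 5 ? len : 5;
          memcpy(p, src, n);
          return (so_Result){
              .val.as_int = (int)n,
              .err = n < len ? &eof_err : NULL,
          };
      }

      int main(void) {
          char buf[8];
          so_Result res = Read(buf, sizeof buf);
          assert(res.val.as_int == 5);                          // bytes read
          assert(res.err != NULL && strcmp(res.err->msg, "EOF") == 0);
          return 0;
      }
      ```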
      Errors

      For the error type itself, I went with a simple pointer to an immutable string:

      struct so_Error_ {
          const char* msg;
      };
      typedef struct so_Error_* so_Error;
      

      Plus a constructor macro:

      #define errors_New(s) (&(struct so_Error_){s})
      

      I wanted to avoid heap allocations as much as possible, so I decided not to support dynamic errors. Only sentinel errors are used, and they're defined at the file level like this:

      so_Error io_EOF = errors_New("EOF");
      so_Error io_ErrOffset = errors_New("io: invalid offset");
      

      Errors are compared by pointer identity (==), not by string content — just like sentinel errors in Go. A nil error is a NULL pointer. This keeps error handling cheap and straightforward.
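      Here's a minimal runnable sketch of the scheme, assuming the definitions above (the read_nothing helper is mine, standing in for a reader that has hit EOF):

      ```c
      #include <assert.h>
      #include <stddef.h>

      // Pointer to an immutable message; errors_New builds one from a
      // compound literal, so each sentinel gets a unique, stable address.
      struct so_Error_ { const char* msg; };
      typedef struct so_Error_* so_Error;
      #define errors_New(s) (&(struct so_Error_){s})

      so_Error io_EOF = errors_New("EOF");
      so_Error io_ErrOffset = errors_New("io: invalid offset");

      // A function that "fails" with a sentinel, like a reader hitting EOF.
      static so_Error read_nothing(void) { return io_EOF; }

      int main(void) {
          so_Error err = read_nothing();
          // Identity comparison, not string comparison: two sentinels with
          // the same text would still be distinct errors.
          assert(err == io_EOF);
          assert(err != io_ErrOffset);
          assert(err->msg[0] == 'E'); // message is still readable
          return 0;
      }
      ```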

      Interfaces

      This was the big one. In Go, an interface is a type that specifies a set of methods. Any concrete type that implements those methods satisfies the interface — no explicit declaration needed. In C, there's no such mechanism.

      For interfaces, I decided to use "fat" structs with function pointers. That way, Go's io.Reader:

      type Reader interface {
          Read(p []byte) (n int, err error)
      }
      

      Becomes an io_Reader struct in C:

      typedef struct {
          void* self;
          so_Result (*Read)(void* self, so_Slice p);
      } io_Reader;
      

      The self pointer holds the concrete value, and each method becomes a function pointer that takes self as its first argument. This is less efficient than using a static method table, especially if the interface has a lot of methods, but it's simpler. So I decided it was good enough for the first version.

      Now functions can work with interfaces without knowing the specific implementation:

      // ReadFull reads exactly len(buf) bytes from r into buf.
      so_Result io_ReadFull(io_Reader r, so_Slice buf) {
          so_int n = 0;
          so_Error err = NULL;
          for (; n < so_len(buf) && err == NULL;) {
              so_Slice curBuf = so_slice(so_byte, buf, n, buf.len);
              so_Result res = r.Read(r.self, curBuf);
              err = res.err;
              n += res.val.as_int;
          }
          // ...
      }
      
      // A custom reader.
      typedef struct {
          so_Slice b;
      } reader;
      
      static so_Result reader_Read(void* self, so_Slice p) {
          // ...
      }
      
      int main(void) {
          // We'll read from a string literal.
          so_String str = so_str("hello world");
          reader rdr = (reader){.b = so_string_bytes(str)};
      
          // Wrap the specific reader into an interface.
          io_Reader r = (io_Reader){
              .self = &rdr,
              .Read = reader_Read,
          };
      
          // Read the first 4 bytes from the string into a buffer.
          so_Slice buf = so_make_slice(so_byte, 4, 4);
          // ReadFull doesn't care about the specific reader implementation -
          // it could read from a file, the network, or anything else.
          so_Result res = io_ReadFull(r, buf);
      }
      

      Calling a method on the interface just goes through the function pointer:

      // r.Read(buf) becomes:
      r.Read(r.self, buf);
      

      Type assertion

      Go's interface is more than just a value wrapper with a method table. It also stores type information about the value it holds:

      type iface struct {
          tab  *itab
          data unsafe.Pointer  // specific value
      }
      
      type itab struct {
          Inter *InterfaceType // method table
          Type  *Type          // type information
          // ...
      }
      

      Since the runtime knows the exact type inside the interface, it can try to "upgrade" the interface (for example, a regular Reader) to another interface (like WriterTo) using a type assertion:

      // copyBuffer copies from src to dst using the provided buffer
      // until either EOF is reached on src or an error occurs.
      func copyBuffer(dst Writer, src Reader, buf []byte) (written int64, err error) {
          // If the reader has a WriteTo method, use it to do the copy.
          if wt, ok := src.(WriterTo); ok {  // try "upgrading" to WriterTo
              return wt.WriteTo(dst)
          }
          // src is not a WriterTo, proceed with the default copy implementation.
      

      The last thing I wanted to do was reinvent Go's dynamic type system in C, so dropping this feature was an easy decision.

      There's another kind of type assertion, though — when we unwrap the interface to get the value of a specific type:

      // Does r (a Reader) hold a pointer to a value of concrete type LimitedReader?
      // If true, lr will get the unwrapped pointer.
      lr, ok := r.(*LimitedReader)
      

      And this kind of assertion is quite possible in C. All we have to do is compare function pointers:

      // Are r.Read and LimitedReader_Read the same function?
      bool ok = (r.Read == LimitedReader_Read);
      if (ok) {
          io_LimitedReader* lr = r.self;
      }
      

      If two different types happened to share the same method implementation, this would break. In practice, each concrete type has its own methods, so the function pointer serves as a reliable type tag.
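      A self-contained sketch of the trick, with the interface and reader types trimmed down to standalone stand-ins rather than Solod's actual ones:

      ```c
      #include <assert.h>
      #include <stddef.h>

      // Simplified fat-struct interface: value pointer + one method.
      typedef struct {
          void* self;
          int (*Read)(void* self);
      } Reader;

      typedef struct { int n; } LimitedReader;
      static int LimitedReader_Read(void* self) {
          return ((LimitedReader*)self)->n;
      }

      typedef struct { int n; } OtherReader;
      static int OtherReader_Read(void* self) {
          return ((OtherReader*)self)->n * 2;
      }

      int main(void) {
          LimitedReader lr = {.n = 7};
          Reader r = {.self = &lr, .Read = LimitedReader_Read};

          // The C analogue of r.(*LimitedReader): the Read function
          // pointer acts as the type tag.
          if (r.Read == LimitedReader_Read) {
              LimitedReader* got = r.self;
              assert(got->n == 7);
          }
          // Asserting to a different concrete type fails the comparison.
          assert(r.Read != OtherReader_Read);
          return 0;
      }
      ```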

      Specialized readers

      After I decided on the interface approach, porting the actual io types was pretty easy. For example, LimitedReader wraps a reader and stops with EOF after reading N bytes:

      type LimitedReader struct {
          R Reader
          N int64
      }
      
      func (l *LimitedReader) Read(p []byte) (int, error) {
          if l.N <= 0 {
              return 0, EOF
          }
          if int64(len(p)) > l.N {
              p = p[0:l.N]
          }
          n, err := l.R.Read(p)
          l.N -= int64(n)
          return n, err
      }
      

      The logic is straightforward: if there are no bytes left, return EOF. Otherwise, if the buffer is bigger than the remaining size, shorten it. Then, call the underlying reader, and decrease the remaining size.

      Here's what the ported C code looks like:

      typedef struct {
          io_Reader R;
          int64_t N;
      } io_LimitedReader;
      
      so_Result io_LimitedReader_Read(void* self, so_Slice p) {
          io_LimitedReader* l = self;
          if (l->N <= 0) {
              return (so_Result){.val.as_int = 0, .err = io_EOF};
          }
          if ((int64_t)(so_len(p)) > l->N) {
              p = so_slice(so_byte, p, 0, l->N);
          }
          so_Result res = l->R.Read(l->R.self, p);
          so_int n = res.val.as_int;
          l->N -= (int64_t)(n);
          return (so_Result){.val.as_int = n, .err = res.err};
      }
      

      A bit more verbose, but nothing special. The multiple return values, the interface call with l.R.Read, and the slice handling are all implemented as described in previous sections.

      Copy

      Copy is where everything comes together. Here's the simplified Go version:

      // Copy copies from src to dst until either
      // EOF is reached on src or an error occurs.
      func Copy(dst Writer, src Reader) (written int64, err error) {
          // Allocate a temporary buffer for copying.
          size := 32 * 1024
          buf := make([]byte, size)
          // Copy from src to dst using the buffer.
          for {
              nr, er := src.Read(buf)
              if nr > 0 {
                  nw, ew := dst.Write(buf[0:nr])
                  written += int64(nw)
                  if ew != nil {
                      err = ew
                      break
                  }
              }
              if er != nil {
                  if er != EOF {
                      err = er
                  }
                  break
              }
          }
          return written, err
      }
      

      In Go, Copy allocates its buffer on the heap with make([]byte, size). I could take a similar approach in C — make Copy take an allocator and use it to create the buffer like this:

      so_Result io_Copy(mem_Allocator a, io_Writer dst, io_Reader src) {
          so_int size = 32 * 1024;
          so_Slice buf = mem_AllocSlice(so_byte, a, size, size);
          // ...
      }
      

      But since this is just a temporary buffer that only exists during the function call, I decided stack allocation was a better choice:

      so_Result io_Copy(io_Writer dst, io_Reader src) {
          so_int size = 8 * 1024;
          so_Slice buf = so_make_slice(so_byte, size, size);
          // ...
      }
      

      so_make_slice allocates memory on the stack using a bounds-checking macro that wraps C's alloca. It moves the stack pointer and gives you a chunk of memory that's automatically freed when the function returns.

      People often avoid alloca because it can cause a stack overflow, but a bounds-checking wrapper guards against that. Another common concern with alloca is that it's not block-scoped: the memory stays allocated until the function exits. However, since we only allocate once, this isn't a problem.
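      so_make_slice's actual implementation isn't shown here, but a bounds-checked alloca wrapper in the same spirit could look like this sketch (all names are mine, not Solod's):

      ```c
      #include <alloca.h>   // nonstandard, but available on GCC/Clang targets
      #include <assert.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      // Hypothetical cap on a single stack allocation. An oversized
      // request fails loudly instead of silently overflowing the stack.
      enum { STACK_BUF_MAX = 64 * 1024 };

      static size_t checked_size(size_t n) {
          if (n > STACK_BUF_MAX) {
              fprintf(stderr, "stack allocation too large: %zu\n", n);
              abort();
          }
          return n;
      }

      // The check runs before alloca moves the stack pointer.
      #define stack_alloc(n) alloca(checked_size(n))

      int main(void) {
          // An 8 KiB temporary buffer, freed automatically on return,
          // like the copy buffer in io_Copy.
          unsigned char *buf = stack_alloc(8 * 1024);
          memset(buf, 0xAB, 8 * 1024);
          assert(buf[0] == 0xAB && buf[8191] == 0xAB);
          return 0;
      }
      ```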

      Here's the simplified C version of Copy:

      so_Result io_Copy(io_Writer dst, io_Reader src) {
          so_int size = 8 * 1024; // smaller buffer, 8 KiB
          so_Slice buf = so_make_slice(so_byte, size, size);
          int64_t written = 0;
          so_Error err = NULL;
          for (;;) {
              so_Result resr = src.Read(src.self, buf);
              so_int nr = resr.val.as_int;
              if (nr > 0) {
                  so_Result resw = dst.Write(dst.self, so_slice(so_byte, buf, 0, nr));
                  so_int nw = resw.val.as_int;
                  written += (int64_t)(nw);
                  if (resw.err != NULL) {
                      err = resw.err;
                      break;
                  }
              }
              if (resr.err != NULL) {
                  if (resr.err != io_EOF) {
                      err = resr.err;
                  }
                  break;
              }
          }
          return (so_Result){.val.as_i64 = written, .err = err};
      }
      

      Here, you can see all the parts from this post working together: a function accepting interfaces, slices passed to interface methods, a result type wrapping multiple return values, error sentinels compared by identity, and a stack-allocated buffer used for the copy.

      Wrapping up

      Porting Go's io package to C meant solving a few problems: representing slices, handling multiple return values, modeling errors, and implementing interfaces using function pointers. None of this needed anything fancy — just structs, unions, functions, and some macros. The resulting C code is more verbose than Go, but it's structurally similar, easy enough to read, and this approach should work well for other Go packages too.

      The io package isn't very useful on its own — it mainly defines interfaces and doesn't provide concrete implementations. So, the next two packages to port were naturally bytes and strings — I'll talk about those in the next post.

      In the meantime, if you'd like to write Go that translates to C — with no runtime and manual memory management — I invite you to try Solod. The io package is included, of course.

    18. 🔗 r/reverseengineering Announcing ida-mcp 2.0: A Headless MCP Server for IDA Pro rss
    19. 🔗 r/york Does anyone want/need any pallets rss

      Posted on the FB free York page, and thought I'd also post here, in case people don't have FB. 3 pallets, free to collect, Tang Hall area.

      submitted by /u/anus-cannon
      [link] [comments]

    20. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      ## New releases
      - [BinSync](https://github.com/binsync/binsync): 5.13.0
      
    21. 🔗 r/Yorkshire Rolls-Royce invests £19.3m in aim to double Rotherham factory output rss

      submitted by /u/willfiresoon
      [link] [comments]

    22. 🔗 r/york Best pubs/bars for smokers? (nice beer gardens etc) rss

      Going for a meal and a few afternoon drinks in central York with my best friend soon. The only thing is he's a very heavy smoker: if I pick the wrong place I won't see him much, as I'm sat inside whilst he is always stood outside the front door having his next smoke!

      We're both in our 40s, so no clubs or teen hangouts please. Thank you.

      submitted by /u/map01302
      [link] [comments]

    23. 🔗 r/york Charging Port Cleaning rss

      Hi, my Steam Deck only charges at certain angles and I think the charging port needs cleaning. Can anyone recommend anywhere that can do this for me? If it were my phone I'd go anywhere, but I don't want to risk the Deck being broken.

      submitted by /u/victorianas
      [link] [comments]

    24. 🔗 r/wiesbaden No Tyrants Protest this Saturday, Schlossplatz from 1:00 to 3:00 PM rss

      I will also be making a speech.

      submitted by /u/ramona_rox
      [link] [comments]

    25. 🔗 r/Leeds Our Hero Nathan Newby đŸ«Ą rss

      I know it's already been spoken about, but this man is incredible. He saved many lives.

      BBC News - Patient hugged armed man to prevent bomb attack at Leeds hospital - BBC News https://www.bbc.co.uk/news/articles/c9q58xq9lxzo?app-referrer=push-notification

      submitted by /u/Johnbo_
      [link] [comments]

    26. 🔗 r/york Life in York? rss

      I'm considering working at the University of York. If I move to York, what's it like there compared to my hometown, Milton Keynes?

      submitted by /u/Beautiful_Shine_6787
      [link] [comments]

    27. 🔗 backnotprop/plannotator v0.15.2 release

      Follow @plannotator on X for updates


      Missed recent releases? Release | Highlights
      ---|---
      v0.15.0 | Live AI chat in code review, plan archive browser, folder file viewer, resizable split pane, Pi full feature parity
      v0.14.5 | GitLab merge request review, login page image fix, Windows install path fix
      v0.14.4 | GitHub review submission, repo identifier in tab title, nested code fence parser fix, Pi paste URL wiring, file header gap fix
      v0.14.3 | PR context panel, diff search in code review, OpenCode permission normalization, landing page redesign
      v0.14.2 | OpenCode plan mode prompt replacement, Windows non-ASCII path fix, Pi link fix
      v0.14.1 | Single submit_plan with auto-detect, viewed-file draft persistence, Bear nested tag fix
      v0.14.0 | PR review via GitHub URL, /plannotator-last for annotating agent messages, OpenCode plan mode permissions fix, VS Code SSH proxy fix
      v0.13.1 | OpenCode plan mode rewrite, Obsidian save fix
      v0.13.0 | Built-in themes, annotatable plan diffs, file-scoped code review comments, Octarine integration, unified review core, Pi remote sessions
      v0.12.0 | Quick annotation labels, mobile compatibility, Graphviz rendering, markdown images with lightbox, linked doc navigation in annotate mode
      v0.11.4 | Git add from code review, bidirectional scroll navigation, clipboard paste for annotation images, VS Code IPC port stability


      What's New in v0.15.2

      v0.15.2 introduces Compound Planning, adds folder annotation, the /plannotator-archive slash command, and fixes Pi's plan tool scoping. 5 PRs, 1 from an external contributor.

      Compound Planning: Learn From Your Own Planning Patterns

      Skill: /plannotator-compound

      Demo: https://x.com/plannotator/status/2036607307979886984

      Compound Planning is a new skill that surfaces your own insights: what kinds of plans get denied, what feedback you give most often, how your planning has evolved over time. The goal is to consistently refine and optimize the planning that works best for you, and eventually create an automated feedback loop between your review patterns and your agent's planning behavior.

      This is the first step toward a system where your agent gets better at planning for you specifically , based on your actual history of approvals, denials, and annotations.

      The platform install scripts now install Plannotator's skill (just 1 for now) automatically alongside the binary.

      Annotate Entire Folders

      plannotator annotate now accepts a directory path. Instead of opening a single file, it starts the annotate server with the sidebar Files tab pre-loaded, showing all markdown files in that directory. The viewer starts empty with a prompt to select a file. This lets you review and annotate an entire folder of docs, specs, or notes in one session.

      Works in both the Bun hook and the Pi extension.

      /plannotator-archive Slash Command

      The plan archive browser was previously only accessible via the CLI (plannotator archive) and the Pi extension. This release adds /plannotator-archive as a slash command for Claude Code and OpenCode, so all three runtimes can browse saved plan decisions the same way. The archive is read-only: it opens the browser, you browse your plans, and it closes when you're done.

      Additional Changes

      • Pi plan tool scoping. The Pi extension's plan submission tool was renamed to plannotator_submit_plan and is now hidden outside of planning mode. Previously, the tool was visible globally, which could confuse the agent. The fix also properly restores the pre-plan tool set when planning ends (#387 by @dmmulroy, closing #386)
      • Pi AI backbone bundling. The @plannotator/ai package was missing from published Pi packages because it's a private workspace dependency. AI files are now copied into generated/ai/ at build time, matching the existing pattern for shared utilities. Pi users installing from npm now get AI features in code review.

      Install / Update

      macOS / Linux:

      curl -fsSL https://plannotator.ai/install.sh | bash
      

      Windows:

      irm https://plannotator.ai/install.ps1 | iex
      

      Claude Code Plugin: Run /plugin in Claude Code, find plannotator , and click "Update now".

      OpenCode: Clear cache and restart:

      rm -rf ~/.bun/install/cache/@plannotator
      

      Then in opencode.json:

      {
        "plugin": ["@plannotator/opencode@latest"]
      }
      

      Pi: Install or update the extension:

      pi install npm:@plannotator/pi-extension
      

      What's Changed

      Community

      @dmmulroy authored the Pi plan tool scoping fix (#387), which he also reported in #386. This is his third contribution to the project.

      Full Changelog : v0.15.0...v0.15.2

    28. 🔗 anthropics/claude-code v2.1.83 release

      What's changed

      • Added managed-settings.d/ drop-in directory alongside managed-settings.json, letting separate teams deploy independent policy fragments that merge alphabetically
      • Added CwdChanged and FileChanged hook events for reactive environment management (e.g., direnv)
      • Added sandbox.failIfUnavailable setting to exit with an error when sandbox is enabled but cannot start, instead of running unsandboxed
      • Added disableDeepLinkRegistration setting to prevent claude-cli:// protocol handler registration
      • Added CLAUDE_CODE_SUBPROCESS_ENV_SCRUB=1 to strip Anthropic and cloud provider credentials from subprocess environments (Bash tool, hooks, MCP stdio servers)
      • Added transcript search — press / in transcript mode (Ctrl+O) to search, n/N to step through matches
      • Added Ctrl+X Ctrl+E as an alias for opening the external editor (readline-native binding; Ctrl+G still works)
      • Pasted images now insert an [Image #N] chip at the cursor so you can reference them positionally in your prompt
      • Agents can now declare initialPrompt in frontmatter to auto-submit a first turn
      • chat:killAgents and chat:fastMode are now rebindable via ~/.claude/keybindings.json
      • Fixed mouse tracking escape sequences leaking to shell prompt after exit
      • Fixed Claude Code hanging on exit on macOS
      • Fixed screen flashing blank after being idle for a few seconds
      • Fixed a hang when diffing very large files with few common lines — diffs now time out after 5 seconds and fall back gracefully
      • Fixed a 1–8 second UI freeze on startup when voice input was enabled, caused by eagerly loading the native audio module
      • Fixed a startup regression where Claude Code would wait ~3s for claude.ai MCP config fetch before proceeding
      • Fixed --mcp-config CLI flag bypassing allowedMcpServers/deniedMcpServers managed policy enforcement
      • Fixed claude.ai MCP connectors (Slack, Gmail, etc.) not being available in single-turn --print mode
      • Fixed caffeinate process not properly terminating when Claude Code exits, preventing Mac from sleeping
      • Fixed bash mode not activating when tab-accepting !-prefixed command suggestions
      • Fixed stale slash command selection showing wrong highlighted command after navigating suggestions
      • Fixed /config menu showing both the search cursor and list selection at the same time
      • Fixed background subagents becoming invisible after context compaction, which could cause duplicate agents to be spawned
      • Fixed background agent tasks staying stuck in "running" state when git or API calls hang during cleanup
      • Fixed --channels showing "Channels are not currently available" on first launch after upgrade
      • Fixed uninstalled plugin hooks continuing to fire until the next session
      • Fixed queued commands flickering during streaming responses
      • Fixed slash commands being sent to the model as text when submitted while a message is processing
      • Fixed scrollback jumping when collapsed read/search groups finish after scrolling offscreen
      • Fixed scrollback jumping to top when the model starts or stops thinking
      • Fixed SDK session history loss on resume caused by hook progress/attachment messages forking the parentUuid chain
      • Fixed copy-on-select not firing when you release the mouse outside the terminal window
      • Fixed ghost characters appearing in height-constrained lists when items overflow
      • Fixed Ctrl+B interfering with readline backward-char at an idle prompt — it now only fires when a foreground task can be backgrounded
      • Fixed tool result files never being cleaned up, ignoring the cleanupPeriodDays setting
      • Fixed space key being swallowed for up to 3 seconds after releasing voice hold-to-talk
      • Fixed ALSA library errors corrupting the terminal UI when using voice mode on Linux without audio hardware (Docker, headless, WSL1)
      • Fixed voice mode SoX detection on Termux/Android where spawning which is kernel-restricted
      • Fixed Remote Control sessions showing as Idle in the web session list while actively running
      • Fixed footer navigation selecting an invisible Remote Control pill in config-driven mode
      • Fixed memory leak in remote sessions where tool use IDs accumulate indefinitely
      • Improved Bedrock SDK cold-start latency by overlapping profile fetch with other boot work
      • Improved --resume memory usage and startup latency on large sessions
      • Improved plugin startup — commands, skills, and agents now load from disk cache without re-fetching
      • Improved Remote Control session titles: AI-generated titles now appear within seconds of the first message
      • Improved WebFetch to identify as Claude-User so site operators can recognize and allowlist Claude Code traffic via robots.txt
      • Reduced WebFetch peak memory usage for large pages
      • Reduced scrollback resets in long sessions from once per turn to once per ~50 messages
      • Faster claude -p startup with unauthenticated HTTP/SSE MCP servers (~600ms saved)
      • Bash ghost-text suggestions now include just-submitted commands immediately
      • Increased non-streaming fallback token cap (21k → 64k) and timeout (120s → 300s local) so fallback requests are less likely to be truncated
      • Interrupting a prompt before any response now automatically restores your input so you can edit and resubmit
      • /status now works while Claude is responding, instead of being queued until the turn finishes
      • Plugin MCP servers that duplicate an org-managed connector are now suppressed instead of running a second connection
      • Linux: respect XDG_DATA_HOME when registering the claude-cli:// protocol handler
      • Changed "stop all background agents" keybinding from Ctrl+F to Ctrl+X Ctrl+K to stop shadowing readline forward-char
      • Deprecated TaskOutput tool in favor of using Read on the background task's output file path
      • [VSCode] Spinner now turns red with "Not responding" when the backend hasn't responded for 60 seconds
      • [VSCode] Fixed session history not loading correctly when reopening a session via URL or after restart
    29. 🔗 r/LocalLLaMA Throwback to my proudest impulse buy ever, which has let me enjoy this hobby 10x more rss

      Throwback to my proudest impulse buy ever, which has let me enjoy this hobby 10x more | Can you believe I almost bought two of them?? (oh, and they gave me 10% cashback for Prime Day) submitted by /u/gigaflops_
      [link] [comments]

    30. 🔗 Cal Paterson "Disregard that!" attacks rss

      Why you shouldn't share your context window with others

    31. 🔗 Mario Zechner Thoughts on slowing the fuck down rss

      Thoughts on slowing the fuck down

    32. 🔗 Drew DeVault's blog A eulogy for Vim rss

      Vim is important to me. I’m using it to write the words you’re reading right now. In fact, almost every word I have ever committed to posterity, through this blog, in my code, all of the docs I’ve written, emails I’ve sent, and more, almost all of it has passed through Vim.

      My relationship with the software is intimate, almost as if it were an extra limb. I don’t think about what I’m doing when I use it. All of Vim’s modes and keybindings are deeply ingrained in my muscle memory. Using it just feels like my thoughts flowing from my head, into my fingers, into a Vim-shaped extension of my body, and out into the world. The unique and profound nature of my relationship with this software is not lost on me.

      A picture of my right hand, with the letters “hjkl” tattooed on the wrist

      I didn’t know Bram Moolenaar. We never met, nor exchanged correspondence. But, after I moved to the Netherlands, Bram’s home country, in a strange way I felt a little bit closer to him. He passed away a couple of years after I moved here, and his funeral was held not far from where I lived at the time. When that happened, I experienced an odd kind of mourning. He was still young, and he had affected my own life profoundly. He was a stranger, and I never got to thank him.

      The people he entrusted Vim to were not strangers, they knew Bram and worked with him often, and he trusted them. It’s not my place to judge their work as disrespectful to his memory, or out of line with what he would have wanted. Even knowing Bram only through Vim, I know he and I disagreed often. However, the most personal thing I know about Bram, and that many people remember about him, was his altruistic commitment to a single cause: providing education and healthcare to Ugandan children in need. So, at the very least, I know that he cared.

      I won’t speculate on how he would have felt about generative AI, but I can say that GenAI is something I care about. It causes a lot of problems for a lot of people. It drives rising energy prices in poor communities, disrupts wildlife and fresh water supplies, increases pollution, and stresses global supply chains. It reinforces the horrible, dangerous working conditions that miners in many African countries are enduring to supply rare metals like cobalt for the billions of new chips that this boom demands. And at a moment when the climate demands immediate action to reduce our footprint on this planet, the AI boom is driving data centers to consume a full 1.5% of the world’s total energy production in order to eliminate jobs and replace them with a robot that lies.

      Meanwhile, this whole circus is enabling the rising tide of fascism around the world, not only by supercharging propaganda but also by directly financially supporting fascist policies and policymakers. All this to enrich the few, centralize power, reduce competition, and underwrite an enormous bubble that, once it bursts, will ruin the lives of millions of the world’s poor and marginalized classes.

      I don’t think it’s cute that someone vibe coded “battleship” in VimScript. I think it’s more important that we stop collectively pretending that we don’t understand how awful all of this is. I don’t want to use software which has slop in it. I do what I can to avoid it, and sadly even Vim now comes under scrutiny in that effort as both Vim and NeoVim are relying on LLMs to develop the software.

      So this is how, a few years after Bram’s passing, I find myself in another unusual moment of mourning: mourning Vim itself. What an odd feeling.


      To keep my conscience clear, and continue to enjoy the relationship I have with this amazing piece of software, I have forked Vim. You can find my fork here: Vim Classic.

      The choice of which version to use as the basis for a fork was a bit difficult. The last version of Vim released during Bram’s lifetime was Vim 9.0. To me, that seems like a good starting point. But, in the end, I chose to base my fork on Vim 8.2.0148 instead. Patch 148 was the patch immediately prior to the introduction of Vim9 Script, Vim 9.0’s flagship feature.

      I’m sure Bram worked hard on Vim9 script, and I want to honor that. At the same time, it was still very new when he passed away, and the job of fully realizing its potential was handed down to the current maintainers. Its absence from Vim Classic is an honest assessment that I don’t have the time or energy to try to sort out all of the work on Vim9 which followed in Bram’s footsteps, and decide what stays and what goes. It seems like a useful line to draw in the sand: Vim Classic is compatible with legacy plugins, but not the newfangled stuff.
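      For readers who want to reproduce the fork point themselves, Vim's upstream repository tags every patch as vX.Y.NNNN, so the base revision described above is reachable as a tag. A minimal sketch, assuming that tag convention (the `vim-classic` directory and `classic` branch names are illustrative, not the actual fork's):

      ```shell
      # Sketch: check out the base revision Vim Classic forked from.
      # v8.2.0148 is the last patch before Vim9 script landed in 8.2.0149.
      git clone https://github.com/vim/vim.git vim-classic
      cd vim-classic
      git checkout -b classic v8.2.0148
      ```

      From a branch like this, CVE fixes and toolchain patches from later upstream releases can be brought over individually with `git cherry-pick`, which is roughly the backporting workflow the next paragraph describes.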

      Since forking from this base, I have backported a handful of patches, most of which address CVEs discovered after this release, and others which fix minor bugs. I also penned a handful of original patches which bring the codebase from this time up to snuff for building it on newer toolchains. My old vimrc needed very few changes to work on this version of Vim, and all of my plugins work with the exception of fzf.vim, which I would like to fix at some point (or maybe a sympathetic reader is willing to work on backporting the necessary changes).

      I plan to use this for a little while, look for sore points and rough edges, collect feedback from other users, and then tag a little release soon. Going forward, maintenance will be slow and quiet. I welcome your patches, particularly to help with maintaining the runtime scripts, stuff like making sure new language features end up in the syntax files. I’ll also gladly accept new bug fixes, and maybe even a few new features if a good case can be made for including them. Backporting small patches from Vim upstream will be considered, with extra scrutiny.

      In short, I invite you to use Vim Classic, if you feel the same way as me, and to maintain it with me, contributing the patches you need to support your own use cases.