🏡


to read (pdf)

  1. Advanced-Hexrays-Decompiler-reverse-engineering
  2. A Modern Recommender Model Architecture - Casey Primozic's Homepage
  3. AddyOsmani.com - 21 Lessons From 14 Years at Google
  4. On autonomous reverse-engineering | Borderline
  5. How to Win Friends And Influence People Unrevised Edition by Dale Carnegie

  1. January 07, 2026
    1. 🔗 r/reverseengineering Learning from the old Exynos Bug rss
    2. 🔗 r/wiesbaden 2 hours to kill rss

      Hi, I have an appointment at St. Josef Hospital tomorrow, so I'll be arriving 2 hours early. Are there any places nearby where you can sit down with a laptop, or do you have other suggestions for passing the time?

      submitted by /u/Living_Performer_801

  2. January 06, 2026
    1. 🔗 @cxiao@infosec.exchange Anita Anand on Bluesky: mastodon

      Anita Anand on Bluesky: https://bsky.app/profile/anitaoakvilleeast.bsky.social/post/3mbrkauihpk25

      I will be in Nuuk in the coming weeks to officially open Canada's consulate and mark a concrete step in strengthening our engagement in support of Denmark's sovereignty and territorial integrity, including Greenland.

      #canada

    2. 🔗 @cxiao@infosec.exchange Anita Anand on Bluesky: mastodon

      Anita Anand on Bluesky:
      https://bsky.app/profile/anitaoakvilleeast.bsky.social/post/3mbrkauihpk25

      I will be in Nuuk in the coming weeks to officially open Canada’s consulate and mark a concrete step in strengthening our engagement in support of Denmark’s sovereignty and territorial integrity, including Greenland.

      #canada

    3. 🔗 streamyfin/streamyfin v0.51.0 release

      Finally a new release 🥳 this one has some really nice improvements, like:

      • Approve Seerr (formerly Jellyseerr) requests directly in Streamyfin for admins
      • Updated Home Screen icon in the new iOS 26 style
      • Improved VLC integration with native playback (AirPods controls, automatic pause when other audio starts, native system controls with artwork)
      • Option to use KSPlayer on iOS - better hardware decoding support and PiP
      • Music playback (beta)
      • Option to disable player gestures at screen edges to prevent conflicts with swipe down notifications
      • Snapping scroll in all carousels for smoother and more precise navigation
      • Playback speed control
      • Dolby badge displayed in technical item details when available
      • Expanded playback options with dynamically loaded streams and full media selection (Gelato support)
      • Streamystats watchlists and promoted sections integration
      • Initial KefinTweaks integration
      • A lot of other fixes and small improvements

      The iOS and Android apps should be approved and available on the stores soon.

      What's Changed

      New Contributors

      Full Changelog : v0.47.1...v0.51.0

    4. 🔗 streamyfin/streamyfin 0.51.0 release

      No content.

    5. 🔗 @HexRaysSA@infosec.exchange 👀 IDA 9.3 is coming soon, so we'll be sharing some of the key updates in this mastodon

      👀 IDA 9.3 is coming soon, so we'll be sharing some of the key updates in this release throughout the next few weeks...
      ➥ First up: Practical Improvements to the Type System
      https://hex-rays.com/blog/ida-9.3-type-system-improvements

    6. 🔗 Hex-Rays Blog IDA 9.3: Practical Improvements to the Type System rss

      IDA 9.3: Practical Improvements to the Type System

      The soon-to-be-released IDA 9.3 tightens up the type system in a few practical ways: Objective-C headers can now be parsed directly, exported headers are easier to read, and type information can make a full round trip between headers and the database without losing structure.

    7. 🔗 r/wiesbaden Landlord has ignored severe water damage for almost 2 months – apartment now uninhabitable rss

      I'm writing this post partly out of desperation and partly in the hope that someone here – or even local media – can help me.

      Since November 11, there has been massive water damage in my apartment in Wiesbaden, caused by a leak in the apartment above. I reported the damage immediately via my landlord's app, the damage hotline, and numerous follow-up inquiries. Due to inaction and delays, water kept running through the walls for weeks, which has led to severe mold growth. The apartment is no longer habitable.

      The landlord is Industria Immobilien, a large real estate company. Even after the source of the leak in the upstairs apartment was finally fixed, to this day no drying equipment has been set up and no proper remediation has begun. Almost two months later, the walls are still damp and the mold keeps spreading.

      I live with a 6-month-old baby and also have serious health problems. Because of the apartment's condition, I was forced to leave my home and find accommodation elsewhere at my own expense while the damage kept getting worse. Despite countless phone calls, e-mails, written deadlines, and even involving the tenants' association, nothing has moved. Reaching a real contact person at the company is nearly impossible – instead there are automated replies and empty promises.

      What's especially alarming is that I'm apparently not an isolated case. After reviewing numerous reviews on Google, on social media, and in other public forums, many tenants report very similar experiences: delayed repairs, ignored damage, and a lack of accountability.

      I have since contacted consumer advice centers, brought in tenants' organizations, and am considering legal action. I'm sharing this publicly because something like this shouldn't happen in Germany in 2025 – and large landlords shouldn't be allowed to ignore uninhabitable conditions for months.

      If anyone has advice, similar experiences, or contacts (especially to journalists or consumer protection), I'd be very grateful.

      submitted by /u/Afraid_Garden_4342

    8. 🔗 Konloch/bytecode-viewer 2.13.2 - 2026 Release (In-Dev) release

      This release is considered an in-development version:

      • The CLI is currently in the middle of a rewrite, so it's not functional at this time (use v2.12).

      Notable Changes

      January 6th, 2026 Fixes:

      Note

      If you encounter any issues, try an older version: v2.12, v2.11.2, v2.13.1

      If you find any bugs, just open a GitHub issue or email me at Konloch@gmail.com

    9. 🔗 r/reverseengineering Reverse engineering my cloud-connected e-scooter and finding the master key to unlock all scooters rss
    10. 🔗 HexRaysSA/plugin-repository commits mirror: pick latest version, not first version rss
      mirror: pick latest version, not first version
      
      ref: https://github.com/p05wn/SuperHint/issues/2
      
    11. 🔗 r/wiesbaden I hope I won't get stoned here because it's in Mainz 🤣 rss
    12. 🔗 Jeremy Fielding (YouTube) 11 Years of Making in 11 minutes: Jeremy Fielding rss

      Order custom parts from SendCutSend 👉 http://sendcutsend.com/jeremyfielding
      The playlist of all videos mentioned 👉 https://www.youtube.com/playlist?list=PL4njCTv7IRbyGx6jx1xM8YF8UL45T945d
      If you want to join my community of makers and tinkerers, consider getting a YouTube membership 👉 https://www.youtube.com/@JeremyFieldingSr/join

      If you want to chip in a few bucks to support these projects and teaching videos, please visit my Patreon page or Buy Me a Coffee. 👉 https://www.patreon.com/jeremyfieldingsr 👉 https://www.buymeacoffee.com/jeremyfielding

      Social media, websites, and other channels

      Instagram 👉 https://www.instagram.com/jeremy_fielding/?hl=en
      Twitter 👉 https://twitter.com/jeremy_fielding
      TikTok 👉 https://www.tiktok.com/@jeremy_fielding0
      LinkedIn 👉 https://www.linkedin.com/in/jeremy-fielding-749b55250/
      My websites 👉 https://www.jeremyfielding.com 👉 https://www.fatherhoodengineered.com
      My other channel, Fatherhood Engineered 👉 https://www.youtube.com/channel/UC_jX1r7deAcCJ_fTtM9x8ZA

      Notes:

      Technical corrections

      Nothing yet

    13. 🔗 r/wiesbaden Thalia Kino is closing on Wednesday rss
    14. 🔗 r/LocalLLaMA Supertonic2: Lightning Fast, On-Device, Multilingual TTS rss

      Hello! I want to share that Supertonic now supports 5 languages: 한국어 · Español · Français · Português · English

      It's an open-weight TTS model designed for extreme speed, minimal footprint, and flexible deployment. You can also use it commercially! Here are the key features:

      • Lightning fast — RTF 0.006 on M4 Pro
      • Lightweight — 66M parameters
      • On-device TTS — complete privacy, zero network latency
      • Flexible deployment — runs in browsers and on PCs, mobiles, and edge devices
      • 10 preset voices — pick the voice that fits your use cases
      • Open-weight model — commercial use allowed (OpenRAIL-M)

      I hope Supertonic is useful for your projects.

      [Demo] https://huggingface.co/spaces/Supertone/supertonic-2
      [Model] https://huggingface.co/Supertone/supertonic-2
      [Code] https://github.com/supertone-inc/supertonic

      submitted by /u/ANLGBOY

    15. 🔗 HexRaysSA/plugin-repository commits add danielplohmann/mcrit-plugin rss
      add danielplohmann/mcrit-plugin
      
    16. 🔗 r/LocalLLaMA Performance improvements in llama.cpp over time rss

      submitted by /u/jacek2023

    17. 🔗 r/LocalLLaMA Liquid AI released LFM2.5, a family of tiny on-device foundation models rss

      Hugging Face: https://huggingface.co/collections/LiquidAI/lfm25

      It's built to power reliable on-device agentic applications: higher quality, lower latency, and broader modality support in the ~1B parameter class.

      • Builds on the LFM2 device-optimized hybrid architecture
      • Pretraining scaled from 10T → 28T tokens
      • Expanded reinforcement learning post-training
      • Higher ceilings for instruction following

      5 open-weight model instances from a single architecture:

      • General-purpose instruct model
      • Japanese-optimized chat model
      • Vision-language model
      • Native audio-language model (speech in/out)
      • Base checkpoints for deep customization

      submitted by /u/Difficult-Cap-7527

  3. January 05, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-05 rss

      IDA Plugin Updates on 2026-01-05

      Activity:

    2. 🔗 r/LocalLLaMA Rubin uplifts from CES conference going on now rss

      Pretty exciting! submitted by /u/mr_zerolith

    3. 🔗 r/reverseengineering A Telegram Protocol Wireshark Dissector (MTProto) rss
    4. 🔗 r/wiesbaden Where have the parrots gone? rss

      I was at the Kasino again today, where the mobile ice rink is currently set up. For as long as I can remember, hundreds of parrots have always gathered in the trees around it. Today everything was deserted. In two of the trees there was a flock of crows instead.

      I miss the parrots and would like to know where they are now.

      submitted by /u/MissMcFearless

    5. 🔗 r/wiesbaden Passport lost rss

      Hi, no idea whether anyone here can help me, but I'm trying anyway. Today, on 05.01, I lost my Greek passport somewhere between Geschwister-Stock-Platz, Wilhelmstraße, and Blumenstraße while changing from bus 8 to bus 17. If anyone has found anything or knows anything, please let me know.

      Thank you very much.

      Edit: The craziest thing happened today. I went to the 1st police precinct. The moment I was about to hand over my ID card for identification, an elderly Italian gentleman walked into the station (without permission, since he hadn't rung the bell) wanting to hand in a Greek passport – my Greek passport. What a coincidence that we were at the same police station at the same moment, me to ask about it and him to hand it in.

      submitted by /u/24__andi

    6. 🔗 r/LocalLLaMA For the first time in 5 years, Nvidia will not announce any new GPUs at CES — company quashes RTX 50 Super rumors as AI expected to take center stage rss

      Welp, in case anyone had any hopes: no RTX 50 Super cards, very limited supply of the 5070 Ti, 5080, and 5090, and now rumors that Nvidia will bring back the 3060 to prop up demand. Meanwhile, DDR5 prices continue to climb, with 128GB kits now costing $1,460. Storage prices have also gone through the roof. I'm very lucky to have more than enough hardware for all my LLM and homelab needs, but at the same time, I don't see any path forward if I want to upgrade in the next 3 years, and I hope my gear continues to run without any major issues.

      submitted by /u/FullstackSensei

    7. 🔗 sacha chua :: living an awesome life 2026-01-05 Emacs news rss

      Looking for something to write about? Christian Tietze is hosting the January Emacs Carnival on the theme "This Year, I'll…". Check out last month's carnival on The People of Emacs for other entries.

      Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    8. 🔗 sacha chua :: living an awesome life The week of December 29 rss

      Monday, December 29

      My daughter and I took our shovels and her big stick and went to the corner of the street that I had partly cleared of snow yesterday. My daughter smashed the ice with her big stick, and we shoveled.

      I modified my function for highlighting new words so that it also shows the pronunciation from the Lexique database. I created other functions that use the gtts-cli library as a text-to-speech tool. I combined these functions into keyboard shortcuts that let me jump to the next new word and listen to it, or repeat the current word. I used it before my session with my tutor to review quickly, because I hadn't had time to listen to all the sentences. It seems very useful so far. Implementing it in Emacs was easier than implementing it in Google Chrome because I can use the databases. I can't wait to try it on my journal this week.

      Because my entries are very long, I only covered Thursday's entry in my appointment with my tutor. That's okay. Correcting part of it is better than rushing to finish everything. I can practice speaking with the text-to-speech output and use the appointments to identify the words I often mispronounce.

      I wrote my Emacs newsletter. I also pulled the passage about the people of Emacs out of my journal to make a separate post for the Emacs Carnival. I updated GoToSocial because the old version wouldn't let me publish. The update took a while to install, so I left it overnight.

      At bedtime, I had trouble sleeping because my brain kept going over a few conversations. I felt a little ache over COVID and its travel restrictions, but I don't want to accept the risk of long COVID in exchange for hugs and shared moments. My brain also jumped to the other conversation, about communication difficulties. These aren't problems I can solve on my own, though. I tried the breathing exercise my therapist suggested, which helped after a while.

      Tuesday, December 30

      My daughter was in a grumpy mood today, so I focused on my own tasks. First, I created a function for mail-merging to volunteers by tag. Then I used that function to send my thank-you letters to the speakers and volunteers. Finally, I started writing my reflection on organizing the conference.

      My daughter worked on her homework. My journal helped her with her writing assignments about her weekends.

      My husband's new stand mixer arrived. It was fascinating to watch. My husband used it to make bread dough. He thinks the new mixer is better for dough than the KitchenAid because of its mechanism. I look forward to tasting the bread tomorrow.

      When he takes up a new hobby, my husband does a lot of research on techniques and tools. As for me, I often read up on techniques, but I don't buy much equipment right away, because I know my new interests tend not to last very long. My hobbies are often intangible, like programming, music, or French.

      My daughter's new clothes arrived too. She liked almost everything, except a few hair clips. After all, it's her money, her choice.

      Wednesday, December 31

      Today was wonderful. We started by going to the park with our friends. We ran into them as they were coming out of their building, so we all went together. Before going to the skating rink, we played on the frozen puddles. The kids slid around. They also broke the ice with big sticks. It was very cold out, but with all the fun, it was bearable.

      We skated. There weren't many people. I was happy to see that my daughter and her friend can already skate so fast. Afterwards, we drank the hot chocolate they had made, and my daughter shared her rice crackers. They really liked them. They were surprised that you can buy them at No Frills.

      Finally, we had to head home. On the way back, we noticed that the dog was limping. We figured the salt had hurt her. I offered our stroller to carry the dog. It was very useful.

      I continued writing my post about organizing the conference. I have to wait for the invoices before I can total up the costs, but I think they're reasonable.

      I picked keyboard shortcuts for looking up words and conjugations in the dictionary in Emacs.

      For dinner, my husband made salmon, I sautéed chard and collard greens, and my daughter made mashed potatoes. She also made a menu. Our dinner was very fancy.

      Thursday, New Year's Day

      Happy New Year!

      My daughter slept in, so I had time to review my Anki cards. Thanks to a thread on Reddit, I discovered the series Extr@ French. I watched the first three episodes on YouTube. To my great surprise, I could understand them. I also watched the first episode of Parlez-Moi, which is slower and easier. If I can find the time to watch them (maybe when I wake up), I think I'll have fun while learning French.

      I finished the Michel Thomas beginner course on YouTube. I like this course because it focuses on grammar and builds up more and more complex sentences little by little. For example, the instructor asked: Now, how would you say "It is not very logical, but it is practical that way."? (Ce n'est pas très logique, mais c'est pratique comme ça.) The library has other Michel Thomas courses, so I've requested them.

      For breakfast, I made small thick pancakes. We have to make them often, otherwise the sourdough starter will take over our refrigerator. I think we've now gone from three large jars to one large jar and one medium one. The recipe for the small thick pancakes is very handy because it uses only the starter, a little salt, a little baking soda, and a little sugar. We don't need wheat, milk, or eggs.

      Today it was very cold: minus thirteen degrees. Despite the cold, I took a walk in the park. The sun was shining, the air was still, and I was well bundled up, so it was pleasant.

      My daughter scored a wheat-stalk pattern into the loaf, and my husband baked it. This time, the oven spring was better. The result was beautiful (and delicious, naturally). I sliced half of it and froze it, because my husband started another batch of bread dough to bake tomorrow.

      I continued writing my post about organizing the conference. I added some statistics and lots of ideas for improvement. There were fewer participants than last year, but it was worth it. Maybe people are busier. If they want to watch the videos or read the discussions, those are available any time.

      I think the post only needs a small revision before publishing. It's my personal reflection, so it can be less formal. Then I'll rework it to publish a report on the EmacsConf wiki.

      For dinner, my husband made teriyaki chicken. He deglazed the pan with cabbage and wine. I made edamame.

      The holidays will be over soon. When my daughter goes back to virtual school, I'll have more time to focus, but also more stress. I wonder why… Maybe I worry about the interaction between my daughter and school. My daughter has always said she's so bored. I can see that it's hard for her. We could consider homeschooling, but I'm afraid she wouldn't want to follow my instructions or a tutor's. Really, she does things her own way, but it's usually reasonable. If she finds a clear goal, it's easier to say "okay". For now, we keep going. School has given her opportunities to do things independently and with classmates, which would be harder if I taught her at home. One day she'll be mature enough to find her own path, or she'll find ways to adapt to others. Maybe I'm worrying about nothing.

      My daughter said she wanted to go to bed on her own after reading a little more. She read her book, then turned off her lamp and went to bed. For real! It's possible she's maturing little by little…

      I hope this year brings health and happiness to us all.

      Friday, January 2

      I started my day by reviewing my Anki cards and watching a few episodes of extr@. I'm now halfway through the seventh episode. The humor relies on stereotypes, but maybe I just can't understand more sophisticated humor yet, and the exaggerations make it easier to follow. I wonder what other fun shows for learning French I can find on YouTube. I suspect there are many.

      As usual, I cooked small thick pancakes for breakfast.

      My daughter said her scars were hurting. I sent a message to the nurse. My daughter read for a long time in her room.

      I finally published my notes about the conference on my blog. Then I summarized them for the website and sent a message to the mailing list.

      My daughter and I finally built the model biplane my father gave her. Unfortunately, my daughter broke the biplane's tail, but fortunately, the tail fin was attached in a way that held everything together. It's a lesson in persisting despite mistakes.

      My daughter wanted a toy like an espresso machine. We looked into all the options, but we couldn't settle on a product. She wanted a simple wooden toy. We found a plan for building one, so maybe we can adapt it. It's a chance to learn woodworking. We have a little experience with that and a few tools in the workshop.

      At bedtime, my daughter and I talked about education. She doesn't want to go to school because she finds her teacher and her homework boring. I understand. I had a hard time at school too. I laid out a few considerations. For now, it's a free opportunity to develop skills for completing tasks and learning with others. If her report card shows it's too hard, we'll have to adapt. If she wants me to teach her, she needs to become more receptive to instruction and feedback. If she wants to learn with other tutors or on her own, she needs to develop her own drive. If she wants a relaxed pace at school, getting grades around a B so she can devote time to other hobbies, that's possible too, so we can see what she needs.

      Saturday, January 3

      My husband took our daughter to the Eaton Centre to go shopping, while I stayed home. It's a good opportunity for tasks that require focus. I fixed a few bugs in my subtitle-editing software and published a new version. I also configured my Emacs to dictate notes onto the task I'm currently clocked into. I want to record a video explaining it, but I need to solve an audio problem first.

      When my husband came home with our daughter, he said they had gone window-shopping and then spent a long time at the bookstore reading new books the library doesn't have. Our daughter was in a bit of pain, poor dear.

      At bedtime, my daughter again said she didn't want to go to school. At first, I said she needed to practice doing boring things. She got grumpy and asked me to leave. I left her alone while I thought it over. I realized I had handled the conversation badly. My role as a parent isn't to lecture her, but to support her. I apologized to my daughter and started a light conversation about things like the humor in her favorite books. Fortunately, I've read her books enough to talk about them. After a little while, she cheered up. She said she missed playing video games together. She was also interested in alternatives to public school. For now, I think it's better to continue our experiment with public school, but we have time for other experiments.

      How do I prepare for the uncertainties if we try one of the alternatives? We know many families who have chosen various alternatives, so we don't have to figure everything out on our own. I know it's a long journey of exploration, and I can't know everything right away.

      Sunday, January 4

      I watched the eleventh and twelfth episodes of extr@. The eleventh episode is about vacations, and the twelfth is about the soccer fanatic. I've almost finished the series. I don't understand all the words, so rewatching it is a good idea.

      For breakfast, I cooked sausages and ate them with eggs and rice.

      The weather was nice, so we biked downtown to eat some buns. We also went to the museum, where we made stamps and used them on paper. We also looked at the model ships and the exhibitions by Yayoi Kusama (for the second time) and Ranbir Sidhu. Both exhibitions use lots of mirrors to create the illusion of infinity. I prefer Ranbir Sidhu's effect, because the sparkle looks like stars, whereas Yayoi Kusama's effect repeats our own images.

      At home, my daughter and I played marbles. I was a little stressed because she doesn't like school, but I declared that it's not mine to manage; I'm just there for the hugs.

      For dinner, I ate canned clam chowder that I had bought a long time ago.

      We did the shopping ahead of my daughter's medical exam on Tuesday. She needs an ultrasound, so she has to fast Tuesday morning. We bought Jello, apple juice, and pulp-free coconut water. I need to call the hospital tomorrow to confirm what's acceptable.

      Reflection

      • I started looking up words in Emacs instead of switching to another browser. It's pretty useful. It would be great to also add example searches, maybe from a subtitle corpus?
      • I like my function for highlighting new words. It's interesting to see where I've sprinkled in new words as well as spans of text. I'm highlighting masculine words in blue, feminine words in pink, and neutral words in green.

        2026-01-05_15-10-09.png

        This is a little similar to @jiewawa@masto.ai colouring pinyin based on tone in Org Mode.

      • I've been using il faut and a besoin de a bit more so that it's not all dois this and dois that.

      You can e-mail me at sacha@sachachua.com.

    9. 🔗 r/reverseengineering Inside Windows Keyboard Driver: i8042prt Reverse Engineering | WinDbg rss
    10. 🔗 @binaryninja@infosec.exchange 10 days left to apply for our 2026 internship program in Melbourne FL! Want to mastodon

      10 days left to apply for our 2026 internship program in Melbourne FL! Want to join the Binary Ninja Team this summer? Get your application in while the window is still open: https://binary.ninja/students/internship-2026.html

    11. 🔗 r/LocalLLaMA llama.cpp performance breakthrough for multi-GPU setups rss

      While we were enjoying our well-deserved end-of-year break, the ik_llama.cpp project (a performance-optimized fork of llama.cpp) achieved a breakthrough in local LLM inference for multi-GPU configurations, delivering a massive performance leap — not just a marginal gain, but a 3x to 4x speed improvement.

      While it was already possible to use multiple GPUs to run local models, previous methods either only served to pool available VRAM or offered limited performance scaling. However, the ik_llama.cpp team has introduced a new execution mode (split mode graph) that enables the simultaneous and maximum utilization of multiple GPUs.

      Why is it so important? With GPU and memory prices at an all-time high, this is a game-changer. We no longer need overpriced high-end enterprise cards; instead, we can harness the collective power of multiple low-cost GPUs in our homelabs, server rooms, or the cloud. If you are interested, details are here.

      submitted by /u/Holiday-Injury-9397

    12. 🔗 r/wiesbaden Fantasy. Freedom. Adventure: Try tabletop role-playing. rss
    13. 🔗 Anton Zhiyanov Go 1.26 interactive tour rss

      Go 1.26 is coming out in February, so it's a good time to explore what's new. The official release notes are pretty dry, so I prepared an interactive version with lots of examples showing what has changed and what the new behavior is.

      Read on and see!

      new(expr) · Type-safe error checking · Green Tea GC · Faster cgo and syscalls · Faster memory allocation · Vectorized operations · Secret mode · Reader-less cryptography · Goroutine leak profile · Goroutine metrics · Reflective iterators · Peek into a buffer · Process handle · Signal as cause · Compare IP subnets · Context-aware dialing · Fake example.com · Optimized fmt.Errorf · Optimized io.ReadAll · Multiple log handlers · Test artifacts · Modernized go fix · Final thoughts

      This article is based on the official release notes from The Go Authors and the Go source code, licensed under the BSD-3-Clause license. This is not an exhaustive list; see the official release notes for that.

      I provide links to the documentation (𝗗), proposals (𝗣), commits (𝗖𝗟), and authors (𝗔) for the features described. Check them out for motivation, usage, and implementation details. I also have dedicated guides (𝗚) for some of the features.

      Error handling is often skipped to keep things simple. Don't do this in production ツ

      # new(expr)

      Previously, you could only use the new built-in with types:

        p := new(int)
        *p = 42
        fmt.Println(*p)
        // 42

      Now you can also use it with expressions:

        // Pointer to an int variable with the value 42.
        p := new(42)
        fmt.Println(*p)
        // 42

      If the argument expr is an expression of type T, then new(expr) allocates a variable of type T, initializes it to the value of expr, and returns its address, a value of type *T.

      This feature is especially helpful if you use pointer fields in a struct to represent optional values that you marshal to JSON or Protobuf:

        type Cat struct {
            Name string `json:"name"`
            Fed  *bool  `json:"is_fed"` // you can never be sure
        }

        cat := Cat{Name: "Mittens", Fed: new(true)}
        data, _ := json.Marshal(cat)
        fmt.Println(string(data))
        // {"name":"Mittens","is_fed":true}

      You can use new with composite values:

        s := new([]int{11, 12, 13})
        fmt.Println(*s)
        // [11 12 13]

        type Person struct{ name string }
        p := new(Person{name: "alice"})
        fmt.Println(*p)
        // {alice}

      And function calls:

        f := func() string { return "go" }
        p := new(f())
        fmt.Println(*p)
        // go

      Passing nil is still not allowed:

        p := new(nil) // compilation error

      𝗗 spec • 𝗣 45624 • 𝗖𝗟 704935, 704737, 704955, 705157 • 𝗔 Alan Donovan

      # Type-safe error checking

      The new errors.AsType function is a generic version of errors.As:

        // go 1.13+
        func As(err error, target any) bool

        // go 1.26+
        func AsType[E error](err error) (E, bool)

      It's type-safe and easier to use:

        // using errors.As
        var target *AppError
        if errors.As(err, &target) {
            fmt.Println("application error:", target)
        }
        // application error: database is down

        // using errors.AsType
        if target, ok := errors.AsType[*AppError](err); ok {
            fmt.Println("application error:", target)
        }
        // application error: database is down

      AsType is especially handy when checking for multiple types of errors. It makes the code shorter and keeps error variables scoped to their if blocks:

        if connErr, ok := errors.AsType[*net.OpError](err); ok {
            fmt.Println("Network operation failed:", connErr.Op)
        } else if dnsErr, ok := errors.AsType[*net.DNSError](err); ok {
            fmt.Println("DNS resolution failed:", dnsErr.Name)
        } else {
            fmt.Println("Unknown error")
        }
        // DNS resolution failed: does.not.exist

      Another issue with As is that it uses reflection and can cause runtime panics if used incorrectly (like if you pass a non-pointer or a type that doesn't implement error):

        // using errors.As
        var target AppError
        if errors.As(err, &target) {
            fmt.Println("application error:", target)
        }
        // panic: errors: *target must be interface or implement error

      AsType doesn't cause a runtime panic; it gives a clear compile-time error instead:

        // using errors.AsType
        if target, ok := errors.AsType[AppError](err); ok {
            fmt.Println("application error:", target)
        }
        // ./main.go:24:32: AppError does not satisfy error (method Error has pointer receiver)

      AsType doesn't use reflect, executes faster, and allocates less than As:

        goos: darwin
        goarch: arm64
        cpu: Apple M1
        BenchmarkAs-8       12606744    95.62 ns/op    40 B/op    2 allocs/op
        BenchmarkAsType-8   37961869    30.26 ns/op    24 B/op    1 allocs/op

      (source)

      Since AsType can handle everything that As does, it's a recommended drop-in replacement for new code.

      𝗗 errors.AsType • 𝗣 51945 • 𝗖𝗟 707235 • 𝗔 Julien Cretel
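
      The snippets above assume an AppError type that the excerpt never defines. Here is a minimal self-contained sketch; the AppError type and the wrapped error value are my own illustration, not from the original post:

        package main

        import (
            "errors"
            "fmt"
        )

        // AppError is an illustrative application error type; any type
        // whose pointer implements the error interface works the same way.
        type AppError struct {
            Msg string
        }

        func (e *AppError) Error() string { return e.Msg }

        func main() {
            // Wrap an *AppError so errors.AsType has to walk the error chain.
            err := fmt.Errorf("request failed: %w", &AppError{Msg: "database is down"})

            // The type parameter picks the target type; ok reports whether
            // an error of that type was found in err's tree.
            if target, ok := errors.AsType[*AppError](err); ok {
                fmt.Println("application error:", target)
            }
            // application error: database is down
        }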

      # Green Tea garbage collector

      The new garbage collector (first introduced as experimental in 1.25) is designed to make memory management more efficient on modern computers with many CPU cores.

      Motivation

      Go's traditional garbage collector algorithm operates on a graph, treating objects as nodes and pointers as edges, without considering their physical location in memory. The scanner jumps between distant memory locations, causing frequent cache misses. As a result, the CPU spends too much time waiting for data to arrive from memory. More than 35% of the time spent scanning memory is wasted just stalling while waiting for memory accesses. As computers get more CPU cores, this problem gets even worse.

      Implementation

      Green Tea shifts the focus from being processor-centered to being memory-aware. Instead of scanning individual objects, it scans memory in contiguous 8 KiB blocks called spans. The algorithm focuses on small objects (up to 512 bytes) because they are the most common and hardest to scan efficiently.

      Each span is divided into equal slots based on its assigned size class, and it only contains objects of that size class. For example, if a span is assigned to the 32-byte size class, the whole block is split into 32-byte slots, and objects are placed directly into these slots, each starting at the beginning of its slot. Because of this fixed layout, the garbage collector can easily find an object's metadata using simple address arithmetic, without checking the size of each object it finds.

      When the algorithm finds an object that needs to be scanned, it marks the object's location in its span but doesn't scan it immediately. Instead, it waits until there are several objects in the same span that need scanning. Then, when the garbage collector processes that span, it scans multiple objects at once. This is much faster than going over the same area of memory multiple times.

      To make better use of CPU cores, GC workers share the workload by stealing tasks from each other. Each worker has its own local queue of spans to scan, and if a worker is idle, it can grab tasks from the queues of other busy workers. This decentralized approach removes the need for a central global list, prevents delays, and reduces contention between CPU cores.

      Green Tea uses vectorized CPU instructions (only on amd64 architectures) to process memory spans in bulk when there are enough objects.

      Benchmarks

      Benchmark results vary, but the Go team expects a 10–40% reduction in garbage collection overhead in real-world programs that rely heavily on the garbage collector. Plus, with the vectorized implementation, an extra 10% reduction in GC overhead when running on CPUs like Intel Ice Lake or AMD Zen 4 and newer.

      Unfortunately, I couldn't find any public benchmark results from the Go team for the latest version of Green Tea, and I wasn't able to create a good synthetic benchmark myself. So, no details this time :(

      The new garbage collector is enabled by default. To use the old garbage collector, set GOEXPERIMENT=nogreenteagc at build time (this option is expected to be removed in Go 1.27).

      𝗣 73581 • 𝗔 Michael Knyszek
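
      In lieu of numbers, here is a minimal sketch of the kind of GC-bound microbenchmark one could try; the tree shape, fan-out, and depth are arbitrary choices for illustration, not the Go team's methodology. Build and run it twice, once with the default collector and once with GOEXPERIMENT=nogreenteagc, then compare the results with benchstat:

        package gcbench

        import "testing"

        // node is a small, pointer-rich object; small objects up to
        // 512 bytes are exactly what Green Tea's span scanning targets.
        type node struct {
            children []*node
            payload  [32]byte
        }

        // buildTree allocates roughly 4^depth small objects so that the
        // GC mark phase dominates the run time of the benchmark.
        func buildTree(depth int) *node {
            n := &node{}
            if depth == 0 {
                return n
            }
            n.children = make([]*node, 4)
            for i := range n.children {
                n.children[i] = buildTree(depth - 1)
            }
            return n
        }

        // sink keeps the allocations observable so the compiler
        // cannot optimize them away.
        var sink *node

        func BenchmarkGCHeavy(b *testing.B) {
            b.ReportAllocs()
            for b.Loop() {
                sink = buildTree(8) // ~65k nodes kept live per iteration
            }
        }

      Whether the mark-heavy workload actually benefits will depend on your hardware and core count.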

      # Faster cgo and syscalls

      In the Go runtime, a processor (often referred to as a P) is a resource required to run the code. For a thread (a machine, or M) to execute a goroutine (G), it must first acquire a processor. Processors move through different states. They can be _Prunning (executing code), _Pidle (waiting for work), or _Pgcstop (paused because of garbage collection).

      Previously, processors had a state called _Psyscall, used when a goroutine is making a system or cgo call. Now, this state has been removed. Instead of using a separate processor state, the system now checks the status of the goroutine assigned to the processor to see if it's involved in a system call. This reduces internal runtime overhead and simplifies code paths for cgo and syscalls.

      The Go release notes say -30% in cgo runtime overhead, and the commit mentions an 18% sec/op improvement:

        goos: linux
        goarch: amd64
        pkg: internal/runtime/cgobench
        cpu: AMD EPYC 7B13
                             │ before.out  │ after.out            │
                             │ sec/op      │ sec/op     vs base   │
        CgoCall-64             43.69n ± 1%   35.83n ± 1%  -17.99% (p=0.002 n=6)
        CgoCallParallel-64     5.306n ± 1%   5.338n ± 1%  ~       (p=0.132 n=6)

      I decided to run the CgoCall benchmarks locally as well:

        goos: darwin
        goarch: arm64
        cpu: Apple M1
                                │ go1_25.txt  │ go1_26.txt           │
                                │ sec/op      │ sec/op     vs base   │
        CgoCall-8                 28.55n ± 4%   19.02n ± 2%  -33.40% (p=0.000 n=10)
        CgoCallWithCallback-8     72.76n ± 5%   57.38n ± 2%  -21.14% (p=0.000 n=10)
        geomean                   45.58n        33.03n       -27.53%

      Either way, both a 20% and a 30% improvement are pretty impressive. And here are the results from a local syscall benchmark:

        goos: darwin
        goarch: arm64
        cpu: Apple M1
                    │ go1_25.txt  │ go1_26.txt          │
                    │ sec/op      │ sec/op    vs base   │
        Syscall-8     195.6n ± 4%   178.1n ± 1%  -8.95% (p=0.000 n=10)

      source:

        func BenchmarkSyscall(b *testing.B) {
            for b.Loop() {
                _, _, _ = syscall.Syscall(syscall.SYS_GETPID, 0, 0, 0)
            }
        }

      That's pretty good too.

      𝗖𝗟 646198 • 𝗔 Michael Knyszek

      # Faster memory allocation

      The Go runtime now has specialized versions of its memory allocation function for small objects (from 1 to 512 bytes). It uses jump tables to quickly choose the right function for each size, instead of relying on a single general-purpose implementation.

      The Go release notes say "the compiler will now generate calls to size-specialized memory allocation routines". But based on the code, that's not completely accurate: the compiler still emits calls to the general-purpose mallocgc function. Then, at runtime, mallocgc dispatches those calls to the new specialized allocation functions.

      This change reduces the cost of small object memory allocations by up to 30%. The Go team expects the overall improvement to be ~1% in real allocation-heavy programs. I couldn't find any existing benchmarks, so I came up with my own. And indeed, running it on Go 1.25 compared to 1.26 shows a significant improvement:

        goos: darwin
        goarch: arm64
        cpu: Apple M1
                   │ go1_25.txt   │ go1_26.txt            │
                   │ sec/op       │ sec/op      vs base   │
        Alloc1-8     8.190n ±  6%   6.594n ± 28%  -19.48% (p=0.011 n=10)
        Alloc8-8     8.648n ± 16%   7.522n ±  4%  -13.02% (p=0.000 n=10)
        Alloc64-8    15.70n ± 15%   12.57n ±  4%  -19.88% (p=0.000 n=10)
        Alloc128-8   56.80n ±  4%   17.56n ±  4%  -69.08% (p=0.000 n=10)
        Alloc512-8   81.50n ± 10%   55.24n ±  5%  -32.23% (p=0.000 n=10)
        geomean      21.99n         14.33n        -34.83%

      source:

        var sink *byte

        func benchmarkAlloc(b *testing.B, size int) {
            b.ReportAllocs()
            for b.Loop() {
                obj := make([]byte, size)
                sink = &obj[0]
            }
        }

        func BenchmarkAlloc1(b *testing.B)   { benchmarkAlloc(b, 1) }
        func BenchmarkAlloc8(b *testing.B)   { benchmarkAlloc(b, 8) }
        func BenchmarkAlloc64(b *testing.B)  { benchmarkAlloc(b, 64) }
        func BenchmarkAlloc128(b *testing.B) { benchmarkAlloc(b, 128) }
        func BenchmarkAlloc512(b *testing.B) { benchmarkAlloc(b, 512) }

      The new implementation is enabled by default. You can disable it by setting GOEXPERIMENT=nosizespecializedmalloc at build time (this option is expected to be removed in Go 1.27).
      𝗖𝗟 665835 • 𝗔 Michael Matloob

      # Vectorized operations (experimental)

      The new simd/archsimd package provides access to architecture-specific vectorized operations (SIMD — single instruction, multiple data). This is a low-level package that exposes hardware-specific functionality. It currently only supports amd64 platforms.

      Because different CPU architectures have very different SIMD operations, it's hard to create a single portable API that works for all of them. So the Go team decided to start with a low-level, architecture-specific API first, giving "power users" immediate access to SIMD features on the most common server platform — amd64.

      The package defines vector types as structs, like Int8x16 (a 128-bit SIMD vector with sixteen 8-bit integers) and Float64x8 (a 512-bit SIMD vector with eight 64-bit floats). These match the hardware's vector registers. The package supports vectors that are 128, 256, or 512 bits wide. Most operations are defined as methods on vector types. They usually map directly to hardware instructions with zero overhead.

      To give you a taste, here's a custom function that uses SIMD instructions to add 32-bit float vectors:

        func Add(a, b []float32) []float32 {
            if len(a) != len(b) {
                panic("slices of different length")
            }
            // If AVX-512 isn't supported, fall back to scalar addition,
            // since the Float32x16.Add method needs the AVX-512 instruction set.
            if !archsimd.X86.AVX512() {
                return fallbackAdd(a, b)
            }
            res := make([]float32, len(a))
            n := len(a)
            i := 0
            // 1. SIMD loop: Process 16 elements at a time.
            for i <= n-16 {
                // Load 16 elements from a and b vectors.
                va := archsimd.LoadFloat32x16Slice(a[i : i+16])
                vb := archsimd.LoadFloat32x16Slice(b[i : i+16])
                // Add all 16 elements in a single instruction
                // and store the results in the result vector.
                vSum := va.Add(vb) // translates to VADDPS asm instruction
                vSum.StoreSlice(res[i : i+16])
                i += 16
            }
            // 2. Scalar tail: Process any remaining elements (0-15).
            for ; i < n; i++ {
                res[i] = a[i] + b[i]
            }
            return res
        }

      Let's try it on two vectors:

        func main() {
            a := []float32{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17}
            b := []float32{17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1}
            res := Add(a, b)
            fmt.Println(res)
        }
        // [18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18]

      Common operations in the archsimd package include:

      • Load a vector from an array/slice, or Store a vector to an array/slice.
      • Arithmetic: Add, Sub, Mul, Div, DotProduct.
      • Bitwise: And, Or, Not, Xor, Shift.
      • Comparison: Equal, Greater, Less, Min, Max.
      • Conversion: As, SaturateTo, TruncateTo.
      • Masking: Compress, Masked, Merge.
      • Rearrangement: Permute.

      The package uses only AVX instructions, not SSE.

      Here's a simple benchmark for adding two vectors (both the "plain" and SIMD versions use pre-allocated slices):

        goos: linux
        goarch: amd64
        cpu: AMD EPYC 9575F 64-Core Processor
        BenchmarkAddPlain/1k-2     1517698     889.9 ns/op    13808.74 MB/s
        BenchmarkAddPlain/65k-2      23448     52613 ns/op    14947.46 MB/s
        BenchmarkAddPlain/1m-2        2047   1005628 ns/op    11932.84 MB/s
        BenchmarkAddSIMD/1k-2     36594340     33.58 ns/op   365949.74 MB/s
        BenchmarkAddSIMD/65k-2      410742      3199 ns/op   245838.52 MB/s
        BenchmarkAddSIMD/1m-2        12955     94228 ns/op   127351.33 MB/s

      The package is experimental and can be enabled by setting GOEXPERIMENT=simd at build time.

      𝗗 simd/archsimd • 𝗣 73787 • 𝗖𝗟 701915, 712880, 729900, 732020 • 𝗔 Junyang Shao, Sean Liao, Tom Thorogood

      # Secret mode (experimental)

      Cryptographic protocols like WireGuard or TLS have a property called "forward secrecy".
      This means that even if an attacker gains access to long-term secrets (like a private key in TLS), they shouldn't be able to decrypt past communication sessions. To make this work, ephemeral keys (temporary keys used to negotiate the session) need to be erased from memory immediately after the handshake. If there's no reliable way to clear this memory, these keys could stay there indefinitely. An attacker who finds them later could re-derive the session key and decrypt past traffic, breaking forward secrecy.

      In Go, the runtime manages memory, and it doesn't guarantee when or how memory is cleared. Sensitive data might remain in heap allocations or stack frames, potentially exposed in core dumps or through memory attacks. Developers often have to use unreliable "hacks" with reflection to try to zero out internal buffers in cryptographic libraries. Even so, some data might still stay in memory where the developer can't reach or control it.

      The Go team's solution to this problem is the new runtime/secret package. It lets you run a function in secret mode. After the function finishes, it immediately erases (zeroes out) the registers and stack it used. Heap allocations made by the function are erased as soon as the garbage collector decides they are no longer reachable.

        secret.Do(func() {
            // Generate an ephemeral key and
            // use it to negotiate the session.
        })

      This helps make sure sensitive information doesn't stay in memory longer than needed, lowering the risk of attackers getting to it.

      Here's an example that shows how secret.Do might be used in a more or less realistic setting. Let's say you want to generate a session key while keeping the ephemeral private key and shared secret safe:

        // DeriveSessionKey does an ephemeral key exchange to create a session key.
        func DeriveSessionKey(peerPublicKey *ecdh.PublicKey) (*ecdh.PublicKey, []byte, error) {
            var pubKey *ecdh.PublicKey
            var sessionKey []byte
            var err error

            // Use secret.Do to contain the sensitive data during the handshake.
            // The ephemeral private key and the raw shared secret will be
            // wiped out when this function finishes.
            secret.Do(func() {
                // 1. Generate an ephemeral private key.
                // This is highly sensitive; if leaked later, forward secrecy is broken.
                privKey, e := ecdh.P256().GenerateKey(rand.Reader)
                if e != nil {
                    err = e
                    return
                }

                // 2. Compute the shared secret (ECDH).
                // This raw secret is also highly sensitive.
                sharedSecret, e := privKey.ECDH(peerPublicKey)
                if e != nil {
                    err = e
                    return
                }

                // 3. Derive the final session key (e.g., using HKDF).
                // We copy the result out; the inputs (privKey, sharedSecret)
                // will be destroyed by secret.Do when they become unreachable.
                sessionKey = performHKDF(sharedSecret)
                pubKey = privKey.PublicKey()
            })

            // The session key is returned for use, but the "recipe" to recreate it
            // is destroyed. Additionally, because the session key was allocated
            // inside the secret block, the runtime will automatically zero it out
            // when the application is finished using it.
            return pubKey, sessionKey, err
        }

      Here, the ephemeral private key and the raw shared secret are effectively "toxic waste" — they are necessary to create the final session key, but dangerous to keep around. If these values stay in the heap and an attacker later gets access to the application's memory (for example, via a core dump or a vulnerability like Heartbleed), they could use these intermediates to re-derive the session key and decrypt past conversations.

      By wrapping the calculation in secret.Do, we make sure that as soon as the session key is created, the "ingredients" used to make it are permanently destroyed. This means that even if the server is compromised in the future, this specific past session can't be exposed, which ensures forward secrecy.

        func main() {
            // Generate a dummy peer public key.
            priv, _ := ecdh.P256().GenerateKey(nil)
            peerPubKey := priv.PublicKey()

            // Derive the session key.
            pubKey, sessionKey, err := DeriveSessionKey(peerPubKey)
            fmt.Printf("public key = %x...\n", pubKey.Bytes()[:16])
            fmt.Printf("error = %v\n", err)
            var _ = sessionKey
        }
        // public key = 04288d5ade66bab4320a86d80993f628...
        // error = <nil>

      The current secret.Do implementation only supports Linux (amd64 and arm64). On unsupported platforms, Do invokes the function directly. Also, trying to start a goroutine within the function causes a panic (this will be fixed in Go 1.27).

      The runtime/secret package is mainly for developers who work on cryptographic libraries. Most apps should use higher-level libraries that use secret.Do behind the scenes.

      The package is experimental and can be enabled by setting GOEXPERIMENT=runtimesecret at build time.

      𝗗 runtime/secret • 𝗣 21865 • 𝗖𝗟 704615 • 𝗔 Daniel Morsing

      # Reader-less cryptography

      Current cryptographic APIs, like ecdsa.GenerateKey or rand.Prime, often accept an io.Reader as the source of random data:

        // Generate a new ECDSA private key for the specified curve.
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        fmt.Println(key.D)
        // 31253152889057471714062019675387570049552680140182252615946165331094890182019

        // Generate a 64-bit integer that is prime with high probability.
        prim, _ := rand.Prime(rand.Reader, 64)
        fmt.Println(prim)
        // 17433987073571224703

      These APIs don't commit to a specific way of using random bytes from the reader. Any change to underlying cryptographic algorithms can change the sequence or amount of bytes read. Because of this, if the application code (mistakenly) relies on a specific implementation in Go version X, it might fail or behave differently in version X+1.

      The Go team chose a pretty bold solution to this problem. Now, most crypto APIs will just ignore the random io.Reader parameter and always use the system random source (crypto/internal/sysrand.Read).

        // The reader parameter is no longer used, so you can just pass nil.

        // Generate a new ECDSA private key for the specified curve.
        key, _ := ecdsa.GenerateKey(elliptic.P256(), nil)
        fmt.Println(key.D)
        // 16265662996876675161677719946085651215874831846675169870638460773593241527197

        // Generate a 64-bit integer that is prime with high probability.
        prim, _ := rand.Prime(nil, 64)
        fmt.Println(prim)
        // 14874320216361938581

      The change applies to the following crypto subpackages:

        // crypto/dsa
        func GenerateKey(priv *PrivateKey, rand io.Reader) error

        // crypto/ecdh
        type Curve interface {
            // ...
            GenerateKey(rand io.Reader) (*PrivateKey, error)
        }

        // crypto/ecdsa
        func GenerateKey(c elliptic.Curve, rand io.Reader) (*PrivateKey, error)
        func SignASN1(rand io.Reader, priv *PrivateKey, hash []byte) ([]byte, error)
        func Sign(rand io.Reader, priv *PrivateKey, hash []byte) (r, s *big.Int, err error)
        func (priv *PrivateKey) Sign(rand io.Reader, digest []byte, opts crypto.SignerOpts) ([]byte, error)

        // crypto/rand
        func Prime(rand io.Reader, bits int) (*big.Int, error)

        // crypto/rsa
        func GenerateKey(random io.Reader, bits int) (*PrivateKey, error)
        func GenerateMultiPrimeKey(random io.Reader, nprimes int, bits int) (*PrivateKey, error)
        func EncryptPKCS1v15(random io.Reader, pub *PublicKey, msg []byte) ([]byte, error)

      ed25519.GenerateKey(rand) still uses the random reader if provided. But if rand is nil, it uses an internal secure source of random bytes instead of crypto/rand.Reader (which could be overridden).

      To support deterministic testing, there's a new testing/cryptotest package with a single SetGlobalRandom function. It sets a global, deterministic cryptographic randomness source for the duration of the given test:

        func Test(t *testing.T) {
            cryptotest.SetGlobalRandom(t, 42)

            // All test runs will generate the same numbers.
            p1, _ := rand.Prime(nil, 32)
            p2, _ := rand.Prime(nil, 32)
            p3, _ := rand.Prime(nil, 32)

            got := [3]int64{p1.Int64(), p2.Int64(), p3.Int64()}
            want := [3]int64{3713413729, 3540452603, 4293217813}
            if got != want {
                t.Errorf("got %v, want %v", got, want)
            }
        }
        // PASS

      SetGlobalRandom affects crypto/rand and all implicit sources of cryptographic randomness in the crypto/* packages:

        func Test(t *testing.T) {
            cryptotest.SetGlobalRandom(t, 42)

            t.Run("rand.Read", func(t *testing.T) {
                var got [4]byte
                rand.Read(got[:])
                want := [4]byte{34, 48, 31, 184}
                if got != want {
                    t.Errorf("got %v, want %v", got, want)
                }
            })

            t.Run("rand.Int", func(t *testing.T) {
                got, _ := rand.Int(rand.Reader, big.NewInt(10000))
                const want = 6185
                if got.Int64() != want {
                    t.Errorf("got %v, want %v", got.Int64(), want)
                }
            })
        }
        // PASS

      To temporarily restore the old reader-respecting behavior, set GODEBUG=cryptocustomrand=1 (this option will be removed in a future release).

      𝗗 testing/cryptotest • 𝗣 70942 • 𝗖𝗟 724480 • 𝗔 Filippo Valsorda, qiulaidongfeng

      # Goroutine leak profile (experimental)

      A leak occurs when one or more goroutines are indefinitely blocked on synchronization primitives like channels, while other goroutines continue running and the program as a whole keeps functioning. Here's a simple example:

        func leak() <-chan int {
            out := make(chan int)
            go func() {
                out <- 42 // leaks if nobody reads from out
            }()
            return out
        }

      If we call leak and don't read from the output channel, the inner leak goroutine will stay blocked trying to send to the channel for the rest of the program:

        func main() {
            leak()
            // ...
        }
        // ok

      Unlike deadlocks, leaks do not cause panics, so they are much harder to spot. Also, unlike data races, Go's tooling did not address them for a long time. Things started to change in Go 1.24 with the introduction of the synctest package. Not many people talk about it, but synctest is a great tool for catching leaks during testing.

      Go 1.26 adds a new experimental goroutineleak profile designed to report leaked goroutines in production. Here's how we can use it in the example above:

        func main() {
            prof := pprof.Lookup("goroutineleak")
            leak()
            time.Sleep(50 * time.Millisecond)
            prof.WriteTo(os.Stdout, 2)
            // ...
        }

        goroutine 7 [chan send (leaked)]:
        main.leak.func1()
            /tmp/sandbox/main.go:16 +0x1e
        created by main.leak in goroutine 1
            /tmp/sandbox/main.go:15 +0x67

      As you can see, we have a nice goroutine stack trace that shows exactly where the leak happens.

      The goroutineleak profile finds leaks by using the garbage collector's marking phase to check which blocked goroutines are still connected to active code. It starts with runnable goroutines, marks all sync objects they can reach, and keeps adding any blocked goroutines waiting on those objects. When it can't add any more, any blocked goroutines left are waiting on resources that can't be reached — so they're considered leaked.

      Tell me more: here's the gist of it:

        [ Start: GC mark phase ]
                 │ 1. Collect live goroutines
                 v
        ┌───────────────────────┐
        │ Initial roots         │ <──────────────────┐
        │ (runnable goroutines) │                    │
        └───────────────────────┘                    │
                 │ 2. Mark reachable memory          │
                 v                                   │
        ┌───────────────────────┐                    │
        │ Reachable objects     │                    │
        │ (channels, mutexes)   │                    │
        └───────────────────────┘                    │
                 │ 3a. Check blocked goroutines      │
                 v                                   │
        ┌───────────────────────┐   (Yes)            │
        │ Is blocked G waiting  │ ───────────────────┘
        │ on a reachable obj?   │   3b. Add G to roots
        └───────────────────────┘
                 │ (No - repeat until no new Gs found)
                 v
        ┌───────────────────────┐
        │ Remaining blocked     │
        │ goroutines            │
        └───────────────────────┘
                 │ Report the leaks
                 v
        [ LEAKED! ]
        (Blocked on unreachable synchronization objects)

      1. Collect live goroutines. Start with currently active (runnable or running) goroutines as roots. Ignore blocked goroutines for now.
      2. Mark reachable memory. Trace pointers from roots to find which synchronization objects (like channels or wait groups) are currently reachable by these roots.
      3. Resurrect blocked goroutines. Check all currently blocked goroutines. If a blocked goroutine is waiting for a synchronization resource that was just marked as reachable, add that goroutine to the roots.
      4. Iterate. Repeat steps 2 and 3 until there are no more new goroutines blocked on reachable objects.
      5. Report the leaks. Any goroutines left in the blocked state are waiting for resources that no active part of the program can access. They're considered leaked.

      For even more details, see the paper by Saioc et al. If you want to see how goroutineleak (and synctest) can catch typical leaks that often happen in production, check out my article on goroutine leaks.

      The goroutineleak profile is experimental and can be enabled by setting GOEXPERIMENT=goroutineleakprofile at build time. Enabling the experiment also makes the profile available as a net/http/pprof endpoint, /debug/pprof/goroutineleak.

      According to the authors, the implementation is already production-ready. It's only marked as experimental so they can get feedback on the API, especially about making it a new profile.

      𝗗 runtime/pprof • 𝗚 Detecting leaks • 𝗣 74609, 75280 • 𝗖𝗟 688335 • 𝗔 Vlad Saioc
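
      For testing-time leak detection, here is a minimal sketch of how the synctest package mentioned above could surface the same leak, assuming Go 1.25's testing/synctest and its documented behavior of failing a test whose bubbled goroutines remain durably blocked:

        package leak

        import (
            "testing"
            "testing/synctest"
        )

        // leak starts a goroutine that blocks forever sending
        // on a channel nobody reads from.
        func leak() <-chan int {
            out := make(chan int)
            go func() {
                out <- 42 // durably blocked: no receiver
            }()
            return out
        }

        // TestLeak fails under synctest: when the bubble function returns,
        // the goroutine started by leak is still durably blocked, which
        // synctest reports instead of letting the goroutine leak silently.
        func TestLeak(t *testing.T) {
            synctest.Test(t, func(t *testing.T) {
                leak()
            })
        }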

      𝗗 runtime/pprof • 𝗚 Detecting leaks • 𝗣 74609, 75280 • 𝗖𝗟 688335 • 𝗔 Vlad Saioc

      # Goroutine metrics

      New metrics in the runtime/metrics package give better insight into goroutine scheduling:

      • Total number of goroutines since the program started.
      • Number of goroutines in each state.
      • Number of active threads.

      Here's the full list:

      /sched/goroutines-created:goroutines
          Count of goroutines created since program start.
      /sched/goroutines/not-in-go:goroutines
          Approximate count of goroutines running or blocked in a system call or cgo call.
      /sched/goroutines/runnable:goroutines
          Approximate count of goroutines ready to execute, but not executing.
      /sched/goroutines/running:goroutines
          Approximate count of goroutines executing. Always less than or equal to /sched/gomaxprocs:threads.
      /sched/goroutines/waiting:goroutines
          Approximate count of goroutines waiting on a resource (I/O or sync primitives).
      /sched/threads/total:threads
          The current count of live threads that are owned by the Go runtime.

      Per-state goroutine metrics can be linked to common production issues. For example, an increasing waiting count can show a lock contention problem. A high not-in-go count means goroutines are stuck in syscalls or cgo. A growing runnable backlog suggests the CPUs can't keep up with demand.

      You can read the new metric values using the regular metrics.Read function:

      func main() {
          go work() // omitted for brevity
          time.Sleep(100 * time.Millisecond)
      
          fmt.Println("Goroutine metrics:")
          printMetric("/sched/goroutines-created:goroutines", "Created")
          printMetric("/sched/goroutines:goroutines", "Live")
          printMetric("/sched/goroutines/not-in-go:goroutines", "Syscall/CGO")
          printMetric("/sched/goroutines/runnable:goroutines", "Runnable")
          printMetric("/sched/goroutines/running:goroutines", "Running")
          printMetric("/sched/goroutines/waiting:goroutines", "Waiting")
      
          fmt.Println("Thread metrics:")
          printMetric("/sched/gomaxprocs:threads", "Max")
          printMetric("/sched/threads/total:threads", "Live")
      }
      
      func printMetric(name string, descr string) {
          sample := []metrics.Sample{{Name: name}}
          metrics.Read(sample)
          // Assuming a uint64 value; don't do this in production.
          // Instead, check sample[0].Value.Kind and handle accordingly.
          fmt.Printf("  %s: %v\n", descr, sample[0].Value.Uint64())
      }
      

      Goroutine metrics:
        Created: 57
        Live: 21
        Syscall/CGO: 0
        Runnable: 0
        Running: 1
        Waiting: 20
      Thread metrics:
        Max: 2
        Live: 4

      The per-state numbers (not-in-go + runnable + running + waiting) are not guaranteed to add up to the live goroutine count (/sched/goroutines:goroutines, available since Go 1.16). All new metrics use uint64 counters.
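
      The kind check that the example's comment alludes to could look like this (a sketch, not from the article):

      func metricValue(s metrics.Sample) string {
          switch s.Value.Kind() {
          case metrics.KindUint64:
              return fmt.Sprint(s.Value.Uint64())
          case metrics.KindFloat64:
              return fmt.Sprint(s.Value.Float64())
          case metrics.KindFloat64Histogram:
              return "histogram" // needs dedicated handling
          default:
              return "unsupported"
          }
      }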

      𝗗 runtime/metrics • 𝗣 15490 • 𝗖𝗟 690397, 690398, 690399 • 𝗔 Michael Knyszek

      # Reflective iterators

      The new Type.Fields and Type.Methods methods in the reflect package return iterators for a type's fields and methods:

      // List the fields of a struct type.
      typ := reflect.TypeFor[http.Client]()
      for f := range typ.Fields() {
          fmt.Println(f.Name, f.Type)
      }
      

      Transport http.RoundTripper
      CheckRedirect func(*http.Request, []*http.Request) error
      Jar http.CookieJar
      Timeout time.Duration

      // List the methods of a struct type.
      typ := reflect.TypeFor[*http.Client]()
      for m := range typ.Methods() {
          fmt.Println(m.Name, m.Type)
      }
      

      CloseIdleConnections func(*http.Client)
      Do func(*http.Client, *http.Request) (*http.Response, error)
      Get func(*http.Client, string) (*http.Response, error)
      Head func(*http.Client, string) (*http.Response, error)
      Post func(*http.Client, string, string, io.Reader) (*http.Response, error)
      PostForm func(*http.Client, string, url.Values) (*http.Response, error)

      The new methods Type.Ins and Type.Outs return iterators for the input and output parameters of a function type:

      typ := reflect.TypeFor[filepath.WalkFunc]()
      
      fmt.Println("Input params:")
      for par := range typ.Ins() {
          fmt.Println("-", par.Name())
      }
      
      fmt.Println("Output params:")
      for par := range typ.Outs() {
          fmt.Println("-", par.Name())
      }
      

      Input params:
      - string
      - FileInfo
      - error
      Output params:
      - error

      The new methods Value.Fields and Value.Methods return iterators for a value's fields and methods. Each iteration yields both the type information (StructField or Method) and the value:

      client := &http.Client{}
      val := reflect.ValueOf(client)
      
      fmt.Println("Fields:")
      for f, v := range val.Elem().Fields() {
          fmt.Printf("- name=%s kind=%s\n", f.Name, v.Kind())
      }
      
      fmt.Println("Methods:")
      for m, v := range val.Methods() {
          fmt.Printf("- name=%s kind=%s\n", m.Name, v.Kind())
      }
      

      Fields:
      - name=Transport kind=interface
      - name=CheckRedirect kind=func
      - name=Jar kind=interface
      - name=Timeout kind=int64
      Methods:
      - name=CloseIdleConnections kind=func
      - name=Do kind=func
      - name=Get kind=func
      - name=Head kind=func
      - name=Post kind=func
      - name=PostForm kind=func

      Previously, you could get all this information by using a for-range loop with the NumX methods (which is what the iterators do internally):

      // go 1.25
      typ := reflect.TypeFor[http.Client]()
      for i := range typ.NumField() {
          field := typ.Field(i)
          fmt.Println(field.Name, field.Type)
      }
      

      Transport http.RoundTripper
      CheckRedirect func(*http.Request, []*http.Request) error
      Jar http.CookieJar
      Timeout time.Duration

      Using an iterator is more concise. I hope it justifies the increased API surface.

      𝗗 reflect • 𝗣 66631 • 𝗖𝗟 707356 • 𝗔 Quentin Quaadgras

      # Peek into a buffer

      The new Buffer.Peek method in the bytes package returns the next N bytes from the buffer without advancing it:

      buf := bytes.NewBufferString("I love bytes")
      
      sample, err := buf.Peek(1)
      fmt.Printf("peek=%s err=%v\n", sample, err)
      
      buf.Next(2)
      
      sample, err = buf.Peek(4)
      fmt.Printf("peek=%s err=%v\n", sample, err)
      
      
      
      peek=I err=<nil>
      peek=love err=<nil>
      

      If Peek returns fewer than N bytes, it also returns io.EOF:

      buf := bytes.NewBufferString("hello")
      sample, err := buf.Peek(10)
      fmt.Printf("peek=%s err=%v\n", sample, err)
      
      
      
      peek=hello err=EOF
      

      The slice returned by Peek points to the buffer's content and stays valid until the buffer is changed. So, if you change the slice right away, it will affect future reads:

      buf := bytes.NewBufferString("car")
      sample, err := buf.Peek(3)
      fmt.Printf("peek=%s err=%v\n", sample, err)
      
      sample[2] = 't' // changes the underlying buffer
      
      data, err := buf.ReadBytes(0)
      fmt.Printf("data=%s err=%v\n", data, err)
      
      
      
      peek=car err=<nil>
      data=cat err=EOF
      

      The slice returned by Peek is only valid until the next call to a read or write method.
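
      So if you need the peeked bytes to survive later buffer operations, copy them first. A minimal sketch:

      buf := bytes.NewBufferString("hello world")
      
      sample, _ := buf.Peek(5)
      keep := slices.Clone(sample) // detach from the buffer's backing array
      
      buf.Next(6) // invalidates sample, but keep is unaffected
      fmt.Printf("keep=%s rest=%s\n", keep, buf.String())
      

      keep=hello rest=world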

      𝗗 Buffer.Peek • 𝗣 73794 • 𝗖𝗟 674415 • 𝗔 Ilia Choly

      # Process handle

      After you start a process in Go, you can access its ID:

      attr := &os.ProcAttr{Files: []*os.File{os.Stdin, os.Stdout, os.Stderr}}
      proc, _ := os.StartProcess("/bin/echo", []string{"echo", "hello"}, attr)
      defer proc.Wait()
      
      fmt.Println("pid =", proc.Pid)
      
      
      
      pid = 41
      hello
      

      Internally, the os.Process type uses a process handle instead of the PID (which is just an integer) if the operating system supports it. Specifically, on Linux it uses pidfd, which is a file descriptor that refers to a process. Using the handle instead of the PID ensures that Process methods always work with the same OS process, and not a different process that just happens to have the same ID.

      Previously, you couldn't access the process handle. Now you can, thanks to the new Process.WithHandle method:

      func (p *Process) WithHandle(f func(handle uintptr)) error
      

      WithHandle calls a specified function and passes a process handle as an argument:

      attr := &os.ProcAttr{Files: []*os.File{os.Stdin, os.Stdout, os.Stderr}}
      proc, _ := os.StartProcess("/bin/echo", []string{"echo", "hello"}, attr)
      defer proc.Wait()
      
      fmt.Println("pid =", proc.Pid)
      proc.WithHandle(func(handle uintptr) {
          fmt.Println("handle =", handle)
      })
      
      
      
      pid = 49
      handle = 6
      hello
      

      The handle is guaranteed to refer to the process until the callback function returns, even if the process has already terminated. That's why it's implemented as a callback instead of a Process.Handle field or method.

      WithHandle is only supported on Linux 5.4+ and Windows. On other operating systems, it doesn't execute the callback and returns an os.ErrNoHandle error.
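
      On Linux, the handle is a pidfd, so inside the callback you can feed it to pidfd-based syscalls. A sketch using golang.org/x/sys/unix (my example, not from the article):

      proc.WithHandle(func(handle uintptr) {
          // Signals exactly this process, immune to PID reuse.
          err := unix.PidfdSendSignal(int(handle), unix.SIGTERM, nil, 0)
          fmt.Println("signal err =", err)
      })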

      𝗗 Process.WithHandle • 𝗣 70352 • 𝗖𝗟 699615 • 𝗔 Kir Kolyshkin

      # Signal as cause

      signal.NotifyContext returns a context that gets canceled when any of the specified signals is received. Previously, the canceled context only showed the standard "context canceled" cause:

      // go 1.25
      
      // The context will be canceled on SIGINT signal.
      ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
      defer stop()
      
      // Send SIGINT to self.
      p, _ := os.FindProcess(os.Getpid())
      _ = p.Signal(syscall.SIGINT)
      
      // Wait for SIGINT.
      <-ctx.Done()
      fmt.Println("err =", ctx.Err())
      fmt.Println("cause =", context.Cause(ctx))
      
      
      
      err = context canceled
      cause = context canceled
      

      Now the context's cause shows exactly which signal was received:

      // go 1.26
      
      // The context will be canceled on SIGINT signal.
      ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
      defer stop()
      
      // Send SIGINT to self.
      p, _ := os.FindProcess(os.Getpid())
      _ = p.Signal(syscall.SIGINT)
      
      // Wait for SIGINT.
      <-ctx.Done()
      fmt.Println("err =", ctx.Err())
      fmt.Println("cause =", context.Cause(ctx))
      
      
      
      err = context canceled
      cause = interrupt signal received
      

      The returned type, signal.signalError, is based on string, so it doesn't provide the actual os.Signal value — just its string representation.
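
      Since the concrete type is unexported, matching on the cause comes down to inspecting the message. A sketch (the message text is not a stable API):

      cause := context.Cause(ctx)
      if cause != nil && strings.Contains(cause.Error(), "interrupt") {
          fmt.Println("stopped by SIGINT")
      }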

      𝗗 signal.NotifyContext • 𝗖𝗟 721700 • 𝗔 Filippo Valsorda

      # Compare IP subnets

      An IP address prefix represents an IP subnet. These prefixes are usually written in CIDR notation:

      10.0.0.0/16
      127.0.0.0/8
      169.254.0.0/16
      203.0.113.0/24
      

      In Go, an IP prefix is represented by the netip.Prefix type.

      The new Prefix.Compare method lets you compare two IP prefixes, making it easy to sort them without having to write your own comparison code:

      prefixes := []netip.Prefix{
          netip.MustParsePrefix("10.1.0.0/16"),
          netip.MustParsePrefix("203.0.113.0/24"),
          netip.MustParsePrefix("10.0.0.0/16"),
          netip.MustParsePrefix("169.254.0.0/16"),
          netip.MustParsePrefix("203.0.113.0/8"),
      }
      
      slices.SortFunc(prefixes, netip.Prefix.Compare)
      
      for _, p := range prefixes {
          fmt.Println(p.String())
      }
      
      
      
      10.0.0.0/16
      10.1.0.0/16
      169.254.0.0/16
      203.0.113.0/8
      203.0.113.0/24
      

      Compare orders two prefixes as follows:

      • First by validity (invalid before valid).
      • Then by address family (IPv4 before IPv6).
        10.0.0.0/8 < ::/8

      • Then by masked IP address (network IP).
        10.0.0.0/16 < 10.1.0.0/16

      • Then by prefix length.
        10.0.0.0/8 < 10.0.0.0/16

      • Then by unmasked address (original IP).
        10.0.0.0/8 < 10.0.0.1/8

      This follows the same order as Python's netaddr.IPNetwork and the standard IANA (Internet Assigned Numbers Authority) convention.

      𝗗 Prefix.Compare • 𝗣 61642 • 𝗖𝗟 700355 • 𝗔 database64128

      # Context-aware dialing

      The net package has top-level functions for connecting to an address using different networks (protocols) — DialTCP, DialUDP, DialIP, and DialUnix. They were made before context.Context was introduced, so they don't support cancellation:

      raddr, _ := net.ResolveTCPAddr("tcp", "127.0.0.1:12345")
      conn, err := net.DialTCP("tcp", nil, raddr)
      fmt.Printf("connected, err=%v\n", err)
      defer conn.Close()
      
      
      
      connected, err=<nil>
      

      There's also a net.Dialer type with a general-purpose DialContext method. It supports cancellation and can be used to connect to any of the known networks:

      var d net.Dialer
      ctx := context.Background()
      conn, err := d.DialContext(ctx, "tcp", "127.0.0.1:12345")
      fmt.Printf("connected, err=%v\n", err)
      defer conn.Close()
      
      
      
      connected, err=<nil>
      

      However, DialContext is a bit less efficient than network-specific functions like net.DialTCP because of the extra overhead from address resolution and network type dispatching.

      So, network-specific functions in the net package are more efficient, but they don't support cancellation. The Dialer type supports cancellation, but it's less efficient. The Go team decided to resolve this contradiction.

      The new context-aware Dialer methods (DialTCP, DialUDP, DialIP, and DialUnix) combine the efficiency of the existing network-specific net functions with the cancellation capabilities of Dialer.DialContext:

      var d net.Dialer
      ctx := context.Background()
      raddr := netip.MustParseAddrPort("127.0.0.1:12345")
      conn, err := d.DialTCP(ctx, "tcp", netip.AddrPort{}, raddr)
      fmt.Printf("connected, err=%v\n", err)
      defer conn.Close()
      
      
      
      connected, err=<nil>
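
      The gain over net.DialTCP is the cancellation; for example, a dial that gives up after a timeout. A minimal sketch, assuming the address (a TEST-NET one here) never answers:

      var d net.Dialer
      ctx, cancel := context.WithTimeout(context.Background(), time.Second)
      defer cancel()
      
      raddr := netip.MustParseAddrPort("203.0.113.1:12345")
      conn, err := d.DialTCP(ctx, "tcp", netip.AddrPort{}, raddr)
      if err != nil {
          fmt.Println("dial failed:", err) // expect a context deadline error
          return
      }
      defer conn.Close()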
      

      I wouldn't say that having three different ways to dial is very convenient, but that's the price of backward compatibility.

      𝗗 net.Dialer • 𝗣 49097 • 𝗖𝗟 490975 • 𝗔 Michael Fraenkel

      # Fake example.com

      The default httptest.Server certificate already lists example.com in its DNSNames (the hostnames a certificate is authorized to secure), so the test server can, in principle, serve requests for example.com. Previously, though, the client returned by Server.Client still dialed the real example.com, whose certificate isn't signed by the test CA, so verification failed:

      // go 1.25
      func Test(t *testing.T) {
          handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
              w.Write([]byte("hello"))
          })
          srv := httptest.NewTLSServer(handler)
          defer srv.Close()
      
          _, err := srv.Client().Get("https://example.com")
          if err != nil {
              t.Fatal(err)
          }
      }
      
      
      
      --- FAIL: Test (0.29s)
          main_test.go:19: Get "https://example.com":
          tls: failed to verify certificate:
          x509: certificate signed by unknown authority
      

      To fix this issue, the HTTP client returned by httptest.Server.Client now redirects requests for example.com and its subdomains to the test server:

      // go 1.26
      func Test(t *testing.T) {
          handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
              w.Write([]byte("hello"))
          })
          srv := httptest.NewTLSServer(handler)
          defer srv.Close()
      
          resp, err := srv.Client().Get("https://example.com")
          if err != nil {
              t.Fatal(err)
          }
      
          body, _ := io.ReadAll(resp.Body)
          resp.Body.Close()
      
          if string(body) != "hello" {
              t.Errorf("Unexpected response body: %s", body)
          }
      }
      
      
      
      PASS
      

      𝗗 Server.Client • 𝗖𝗟 666855 • 𝗔 Sean Liao

      # Optimized fmt.Errorf

      People often point out that using fmt.Errorf("x") for plain strings causes more memory allocations than errors.New("x"). Because of this, some suggest switching code from fmt.Errorf to errors.New when formatting isn't needed.

      The Go team disagrees. Here's a quote from Russ Cox:

      Using fmt.Errorf("foo") is completely fine, especially in a program where all the errors are constructed with fmt.Errorf. Having to mentally switch between two functions based on the argument is unnecessary noise.

      With the new Go release, this debate should finally be settled. For unformatted strings, fmt.Errorf now allocates less and generally matches the allocations for errors.New.

      Specifically, fmt.Errorf goes from 2 allocations to 0 allocations for a non-escaping error, and from 2 allocations to 1 allocation for an escaping error:

      _ = fmt.Errorf("foo")    // non-escaping error
      sink = fmt.Errorf("foo") // escaping error
      

      This matches the allocations for errors.New in both cases.

      The difference in CPU cost is also much smaller now. Previously, it was ~64ns vs. ~21ns for fmt.Errorf vs. errors.New for escaping errors, now it's ~25ns vs. ~21ns.
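
      If you want to verify the numbers on your machine, a minimal benchmark pair along these lines works (my sketch; the package-level sink forces the escaping case):

      var sink error
      
      func BenchmarkErrorfNoArgs(b *testing.B) {
          for b.Loop() {
              sink = fmt.Errorf("foo")
          }
      }
      
      func BenchmarkErrorsNew(b *testing.B) {
          for b.Loop() {
              sink = errors.New("foo")
          }
      }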

      Tell me more

      Here are the "before and after" benchmarks for the fmt.Errorf change. The non-escaping case is called local, and the escaping case is called sink. If there's just a plain error string, it's no-args. If the error includes formatting, it's int-arg.

      Seconds per operation:

      goos: linux
      goarch: amd64
      pkg: fmt
      cpu: AMD EPYC 7B13
                               │    old.txt    │        new.txt        │
                               │      sec/op   │   sec/op     vs base  │
      Errorf/no-args/local-16     63.76n ± 1%     4.874n ± 0%  -92.36% (n=120)
      Errorf/no-args/sink-16      64.25n ± 1%     25.81n ± 0%  -59.83% (n=120)
      Errorf/int-arg/local-16     90.86n ± 1%     90.97n ± 1%        ~ (p=0.713 n=120)
      Errorf/int-arg/sink-16      91.81n ± 1%     91.10n ± 1%   -0.76% (p=0.036 n=120)
      

      Bytes per operation:

                               │    old.txt    │        new.txt       │
                               │       B/op    │    B/op     vs base  │
      Errorf/no-args/local-16      19.00 ± 0%      0.00 ± 0%  -100.00% (n=120)
      Errorf/no-args/sink-16       19.00 ± 0%     16.00 ± 0%   -15.79% (n=120)
      Errorf/int-arg/local-16      24.00 ± 0%     24.00 ± 0%         ~ (p=1.000 n=120)
      Errorf/int-arg/sink-16       24.00 ± 0%     24.00 ± 0%         ~ (p=1.000 n=120)
      

      Allocations per operation:

                               │    old.txt    │        new.txt       │
                               │    allocs/op  │  allocs/op   vs base │
      Errorf/no-args/local-16      2.000 ± 0%     0.000 ± 0%  -100.00% (n=120)
      Errorf/no-args/sink-16       2.000 ± 0%     1.000 ± 0%   -50.00% (n=120)
      Errorf/int-arg/local-16      2.000 ± 0%     2.000 ± 0%         ~ (p=1.000 n=120)
      Errorf/int-arg/sink-16       2.000 ± 0%     2.000 ± 0%         ~ (p=1.000 n=120)
      

      source

      If you're interested in the details, I highly recommend reading the CL — it's perfectly written.

      𝗗 fmt.Errorf • 𝗖𝗟 708836 • 𝗔 thepudds

      # Optimized io.ReadAll

      Previously, io.ReadAll allocated a lot of intermediate memory as it grew its result slice to the size of the input data. Now, it uses intermediate slices of exponentially growing size, and then copies them into a final perfectly-sized slice at the end.

      The new implementation is about twice as fast and uses roughly half the memory for a 65KiB input; it's even more efficient with larger inputs. Here are the geomean results comparing the old and new versions for different input sizes:

                            │     old     │      new       vs base    │
                sec/op           132.2µ        66.32µ     -49.83%
                  B/op          645.4Ki       324.6Ki     -49.70%
        final-capacity           178.3k        151.3k     -15.10%
          excess-ratio            1.216         1.033     -15.10%
      

      See the full benchmark results in the commit. Unfortunately, the author didn't provide the benchmark source code.

      Ensuring the final slice is minimally sized is also quite helpful. The slice might persist for a long time, and the unused capacity in a backing array (as in the old version) would just waste memory.
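
      Here's a simplified sketch of the idea (my illustration, not the actual standard library code): read into exponentially growing chunks, then copy everything into one exactly-sized result:

      func readAll(r io.Reader) ([]byte, error) {
          var chunks [][]byte
          total, size := 0, 512
          for {
              chunk := make([]byte, size)
              n, err := io.ReadFull(r, chunk)
              chunks = append(chunks, chunk[:n])
              total += n
              if err == io.EOF || err == io.ErrUnexpectedEOF {
                  break // no more input
              }
              if err != nil {
                  return nil, err
              }
              size *= 2 // exponential growth keeps the chunk count logarithmic
          }
          out := make([]byte, 0, total) // final, perfectly-sized slice
          for _, c := range chunks {
              out = append(out, c...)
          }
          return out, nil
      }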

      As with the fmt.Errorf optimization, I recommend reading the CL — it's very good. Both changes come from thepudds, whose change descriptions are every reviewer's dream come true.

      𝗗 io.ReadAll • 𝗖𝗟 722500 • 𝗔 thepudds

      # Multiple log handlers

      The log/slog package, introduced in version 1.21, offers a reliable, production-ready logging solution. Since its release, many projects have switched from third-party logging packages to use it. However, it was missing one key feature: the ability to send log records to multiple handlers, such as stdout or a log file.

      The new MultiHandler type solves this problem. It implements the standard Handler interface and calls all the handlers you set up.

      For example, we can create a log handler that writes to stdout:

      stdoutHandler := slog.NewTextHandler(os.Stdout, nil)
      

      And another handler that writes to a file:

      const flags = os.O_CREATE | os.O_WRONLY | os.O_APPEND
      file, _ := os.OpenFile("/tmp/app.log", flags, 0644)
      defer file.Close()
      fileHandler := slog.NewJSONHandler(file, nil)
      

      Finally, combine them using a MultiHandler:

      // MultiHandler that writes to both stdout and app.log.
      multiHandler := slog.NewMultiHandler(stdoutHandler, fileHandler)
      logger := slog.New(multiHandler)
      
      // Log a sample message.
      logger.Info("login",
          slog.String("name", "whoami"),
          slog.Int("id", 42),
      )
      
      
      
      time=2025-12-31T11:46:14.521Z level=INFO msg=login name=whoami id=42
      {"time":"2025-12-31T11:46:14.521126342Z","level":"INFO","msg":"login","name":"whoami","id":42}
      

      I'm also printing the file contents here to show the results.

      When the MultiHandler receives a log record, it sends it to each enabled handler one by one. If any handler returns an error, MultiHandler doesn't stop; instead, it combines all the errors using errors.Join:

      hInfo := slog.NewTextHandler(
          os.Stdout, &slog.HandlerOptions{Level: slog.LevelInfo},
      )
      hErrorsOnly := slog.NewTextHandler(
          os.Stdout, &slog.HandlerOptions{Level: slog.LevelError},
      )
      hBroken := &BrokenHandler{
          Handler: hInfo,
          err:     fmt.Errorf("broken handler"),
      }
      
      handler := slog.NewMultiHandler(hBroken, hInfo, hErrorsOnly)
      rec := slog.NewRecord(time.Now(), slog.LevelInfo, "hello", 0)
      
      // Calls hInfo and hBroken, skips hErrorsOnly.
      // Returns an error from hBroken.
      err := handler.Handle(context.Background(), rec)
      fmt.Println(err)
      
      
      
      time=2025-12-31T13:32:52.110Z level=INFO msg=hello
      broken handler
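
      The BrokenHandler type in the listing above is not part of slog; a minimal stand-in that matches the output shown could look like this (my sketch):

      // BrokenHandler embeds a real handler to inherit Enabled,
      // WithAttrs, and WithGroup, but fails every Handle call
      // without writing anything.
      type BrokenHandler struct {
          slog.Handler
          err error
      }
      
      func (h *BrokenHandler) Handle(ctx context.Context, r slog.Record) error {
          return h.err
      }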
      

      The Enabled method reports whether any of the configured handlers is enabled:

      hInfo := slog.NewTextHandler(
          os.Stdout, &slog.HandlerOptions{Level: slog.LevelInfo},
      )
      hErrors := slog.NewTextHandler(
          os.Stdout, &slog.HandlerOptions{Level: slog.LevelError},
      )
      handler := slog.NewMultiHandler(hInfo, hErrors)
      
      // hInfo is enabled.
      enabled := handler.Enabled(context.Background(), slog.LevelInfo)
      fmt.Println(enabled)
      
      
      
      true
      

      The other methods, WithAttrs and WithGroup, call the corresponding methods on each of the handlers.

      𝗗 slog.MultiHandler • 𝗣 65954 • 𝗖𝗟 692237 • 𝗔 Jes Cok

      # Test artifacts

      Test artifacts are files created by tests or benchmarks, such as execution logs, memory dumps, or analysis reports. They are important for debugging failures in remote environments (like CI), where developers can't step through the code manually.

      Previously, the Go test framework and tools didn't support test artifacts. Now they do.

      The new methods T.ArtifactDir, B.ArtifactDir, and F.ArtifactDir return a directory where you can write test output files:

      func TestFunc(t *testing.T) {
          dir := t.ArtifactDir()
          logFile := filepath.Join(dir, "app.log")
          content := []byte("Loading user_id=123...\nERROR: Connection failed\n")
          os.WriteFile(logFile, content, 0644)
          t.Log("Saved app.log")
      }
      

      If you use go test with -artifacts, this directory will be inside the output directory (specified by -outputdir, or the current directory by default):

      go1.26rc1 test -v -artifacts -outputdir=/tmp/output
      
      
      
      === RUN   TestFunc
      === ARTIFACTS TestFunc /tmp/output/_artifacts/2933211134
          artifacts_test.go:14: Saved app.log
      --- PASS: TestFunc (0.00s)
      

      As you can see, the first time ArtifactDir is called, it writes the directory location to the test log, which is quite handy.

      If you don't use -artifacts, artifacts are stored in a temporary directory which is deleted after the test completes.

      Each test or subtest within each package has its own unique artifact directory. Subtest outputs are not stored inside the parent test's output directory — all artifact directories for a given package are created at the same level:

      func TestFunc(t *testing.T) {
          t.ArtifactDir()
          t.Run("subtest 1", func(t *testing.T) {
              t.ArtifactDir()
          })
          t.Run("subtest 2", func(t *testing.T) {
              t.ArtifactDir()
          })
      }
      
      
      
      === RUN   TestFunc
      === ARTIFACTS TestFunc /tmp/output/_artifacts/2878232317
      === RUN   TestFunc/subtest_1
      === ARTIFACTS TestFunc/subtest_1 /tmp/output/_artifacts/1651881503
      === RUN   TestFunc/subtest_2
      === ARTIFACTS TestFunc/subtest_2 /tmp/output/_artifacts/3341607601
      

      The artifact directory path normally looks like this:

      <output dir>/_artifacts/<test package>/<test name>/<random>
      

      But if this path can't be safely converted into a local file path (which, for some reason, always happens on my machine), the path will simply be:

      <output dir>/_artifacts/<random>
      

      (which is what happens in the examples above)

      Repeated calls to ArtifactDir in the same test or subtest return the same directory.

      𝗗 T.ArtifactDir • 𝗣 71287 • 𝗖𝗟 696399 • 𝗔 Damien Neil

      # Modernized go fix

      Over the years, the go fix command became a sad, neglected bag of rewrites for very ancient Go features. But now, it's making a comeback.

      The new go fix is re-implemented using the Go analysis framework — the same one go vet uses.

      While go fix and go vet now use the same infrastructure, they have different purposes and use different sets of analyzers:

      • Vet is for reporting problems. Its analyzers describe actual issues, but they don't always suggest fixes, and the fixes aren't always safe to apply.
      • Fix is (mostly) for modernizing the code to use newer language and library features. Its analyzers produce fixes that are always safe to apply, but they don't necessarily indicate problems with the code.

        usage: go fix [build flags] [-fixtool prog] [fix flags] [packages]

        Fix runs the Go fix tool (cmd/fix) on the named packages and applies suggested fixes.

        It supports these flags:

        -diff instead of applying each fix, print the patch as a unified diff

        The -fixtool=prog flag selects a different analysis tool with alternative or additional fixers.

      By default, go fix runs a full set of analyzers (currently, there are more than 20). To choose specific analyzers, use the -NAME flag for each one, or use -NAME=false to run all analyzers except the ones you turned off.

      For example, here we only enable the forvar analyzer:

      go fix -forvar .
      

      And here, we enable all analyzers except omitzero:

      go fix -omitzero=false .
      

      Currently, there's no way to suppress specific analyzers for certain files or sections of code.

      To give you a taste of go fix analyzers, here's one of them in action. It replaces loops with slices.Contains or slices.ContainsFunc:

      // before go fix
      func find(s []int, x int) bool {
          for _, v := range s {
              if x == v {
                  return true
              }
          }
          return false
      }
      
      
      
      // after go fix
      func find(s []int, x int) bool {
          return slices.Contains(s, x)
      }
      

      If you're interested, check out the dedicated blog post for the full list of analyzers with examples.

      𝗗 cmd/fix • 𝗚 go fix • 𝗣 71859 • 𝗔 Alan Donovan

      # Final thoughts

      Go 1.26 is incredibly big — it's the largest release I've ever seen, and for good reason:

      • It brings a lot of useful updates, like the improved new builtin, type-safe error checking, and goroutine leak detector.
      • There are also many performance upgrades, including the new garbage collector, faster cgo and memory allocation, and optimized fmt.Errorf and io.ReadAll.
      • On top of that, it adds quality-of-life features like multiple log handlers, test artifacts, and the updated go fix tool.
      • Finally, there are two specialized experimental packages: one with SIMD support and another with protected mode for forward secrecy.

      All in all, a great release!

      You might be wondering about the json/v2 package that was introduced as experimental in 1.25. It's still experimental and available with the GOEXPERIMENT=jsonv2 flag.

      P.S. To catch up on other Go releases, check out the Go features by version list or explore the interactive tours for Go 1.25 and 1.24.

      P.P.S. Want to learn more about Go? Check out my interactive book on concurrency

    14. 🔗 r/wiesbaden Potenzielle alternative Jobangebote? rss

      Hello hello, dear Wiesbadeners,

      I'm planning to move to the city and eventually open a piercing studio.

      First, though, I'm looking for a job just to get settled. So I wanted to ask if you have any cool ideas or know of any place that's currently hiring. My interests lean toward unconventional retail.

      Love goes out <3

      submitted by /u/Niko_The_Impaler
      [link] [comments]

    15. 🔗 r/reverseengineering Reverse engineering Chase H.Q. for the ZX Spectrum rss
    16. 🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator
      [link] [comments]

    17. 🔗 r/LocalLLaMA I built a visual AI workflow tool that runs entirely in your browser - Ollama, LM Studio, llama.cpp and Most cloud API's all work out of the box. Agents/Websearch/TTS/Etc. rss

      You might remember me from LlamaCards, a previous program I've built, or maybe you've seen some of my agentic computer-use posts with Moondream/MiniCPM navigation creating Reddit posts. I've had my head down, and I've finally got something I want to show you all: EmergentFlow, a visual node-based editor for creating AI workflows and agents.

      The whole execution engine runs in your browser. It's a great sandbox for developing AI workflows. You just open it and go. No Docker, no Python venv, no dependencies. Connect your Ollama (or other local) instance, paste your API keys for whatever providers you use, and start building. Everything runs client-side: your keys stay in your browser, your prompts go directly to the providers. Supported:

      • Ollama (just works - point it at localhost:11434, auto-fetches models)
      • LM Studio + llama.cpp (works once CORS is configured)
      • OpenAI, Anthropic, Groq, Gemini, DeepSeek, xAI

      For edge cases where you hit CORS issues, there's an optional desktop runner that acts as a local proxy. It's open source: github.com/l33tkr3w/EmergentFlow-runner. But honestly, most stuff works straight from the browser.

      The deal: it's free. Like, actually free, not "free trial" free. You get a full sandbox with unlimited use of your own API keys. The only thing that costs credits is using my server-paid models (Gemini), because Google charges me for those. The free tier gets 25 daily credits for server models (Gemini through my API key). Running Ollama/LM Studio/llama.cpp or BYOK? Unlimited. Forever. No catch.

      I do have a Pro tier ($19/mo) for power users who want more server credits, team collaboration, and the node/flow gallery, because I'm a solo dev with a kid trying to make this sustainable. But honestly, most people here running local models won't need it.

      Try it: emergentflow.io/try. No signup, no credit card, just start dragging nodes. If you run into issues (there will be some), please submit a bug report. Happy to answer questions about how stuff works under the hood. Support a fellow LocalLlama enthusiast! Updoot?

      submitted by /u/l33t-Mt
      [link] [comments]

    18. 🔗 r/reverseengineering A Glimpse Into DexProtector rss
    19. 🔗 @cxiao@infosec.exchange This article mentions the Venezuelan Canadian Society of British Columbia, mastodon

      This article mentions the Venezuelan Canadian Society of British Columbia, which is a non profit organization in BC that distributes funds towards organizations in Venezuela, in places with Venezuelan diaspora, and in BC. One of their goals is to help address the Venezuelan humanitarian and refugee crisis that has been ongoing for the last 10+ years, directly due to the Maduro and Chavez regimes.

      The refugee crisis is the largest in the world but has largely been ignored for the past decade. Perhaps we can redirect some of the attention now on Venezuela towards helping Venezuelans, who are now in a time of hope and uncertainty. Let's stand with them and share some of the optimism they have about ousting an authoritarian leader who has caused so much pain, family separation, hunger, loss, and disease. Let's make the moment not about us or about the USA, but about them.

      Consider making a donation to them, or to your local Venezuelan diaspora society.

      https://vcsbc.ca/en/we-help-others/

      #canada #BritishColumbia https://flipboard.com/@blackpressmedia/victoria-563147ccz/-/a-KOMUotCFSCSD9Za8BcD3fw%3Aa%3A3177150469-%2F0

    20. 🔗 r/reverseengineering The Story of a Perfect Exploit Chain: Six Bugs That Looked Harmless Until They Became Pre-Auth RCE in a Security Appliance - Mehmet Ince @mdisec rss
    21. 🔗 tonsky.me It’s hard to justify Tahoe icons rss

      I was reading Macintosh Human Interface Guidelines from 1992 and found this nice illustration:

      accompanied by explanation:

      Fast forward to 2025. Apple releases macOS Tahoe. Main attraction? Adding unpleasant, distracting, illegible, messy, cluttered, confusing, frustrating icons (their words, not mine!) to every menu item:

      Sequoia → Tahoe

      It’s bad. But why exactly is it bad? Let’s delve into it!

      Disclaimer: screenshots are a mix from macOS 26.1 and 26.2, taken from stock Apple apps only that come pre-installed with the system. No system settings were modified.

      Icons should differentiate

      The main function of an icon is to help you find what you are looking for faster.

      Perhaps counter-intuitively, adding an icon to everything is exactly the wrong thing to do. To stand out, things need to be different. But if everything has an icon, nothing stands out.

      The same applies to color: black-and-white icons look clean, but they don’t help you find things faster!

      Microsoft used to know this:

      Look how much faster you can find Save or Share in the right variant:

      It also looks cleaner. Less cluttered.

      A colored version would be even better (clearer separation of text from icon, faster to find):

      I know you won’t like how it looks. I don’t like it either. These icons are hard to work with. You’ll have to actually design for color to look nice. But the principle stands: it is way easier to use.

      Consistency between apps

      If you want icons to work, they need to be consistent. I need to be able to learn what to look for.

      For example, I see a “Cut” command and next to it. Okay, I think. Next time I’m looking for “Cut,” I might save some time and start looking for instead.

      How is Tahoe doing on that front? I present to you: Fifty Shades of “New”:

      I even collected them all together, so the absurdity of the situation is more obvious.

      Granted, some of them are different operations, so they have different icons. I guess creating a smart folder is different from creating a journal entry. But this?

      Or this:

      Or this:

      There is no excuse.

      Same deal with open:

      Save:

      Yes. One of them is a checkmark. And they can’t even agree on the direction of an arrow!

      Close:

      Find (which is sometimes called Search, and sometimes Filter):

      Delete (from Cut-Copy-Paste-Delete fame):

      Minimize window.

      These are not some obscure, unique operations. These are OS basics, these are foundational. Every app has them, and they are always in the same place. They shouldn’t look different!

      Consistency inside the same app

      Icons are also used in toolbars. Conceptually, operations in a toolbar are identical to operations called through the menu, and thus should use the same icons. That’s the simplest case to implement: inside the same app, often on the same screen. How hard can it be to stay consistent?

      Preview:

      Photos: same and mismatch, but reversed ¯\_(ツ)_/¯

      Maps and others often use different symbols for zoom:

      Icon reuse

      Another cardinal sin is to use the same icon for different actions. Imagine: I have learned that means “New”:

      Then I open an app and see. “Cool”, I think, “I already know what it means”:

      Gotcha!

      You’d think: okay, means quick look:

      Sometimes, sure. Some other times, means “Show completed”:

      Sometimes is “Import”:

      Sometimes is “Updates”:

      Same as with consistency, icon reuse doesn’t only happen between apps. Sometimes you see in a toolbar:

      Then go to the menu in the same app and see means something else:

      Sometimes identical icons meet in the same menu.

      Sometimes next to each other.

      Sometimes they put an entire barrage of identical icons in a row:

      This doesn’t help anyone. No user will find a menu item faster or will understand the function better if all icons are the same.

      The worst case of icon reuse so far has been the Photos app:

      It feels like the person tasked with choosing a unique icon for every menu item just ran out of ideas.

      Understandable.

      Too much nuance

      When looking at icons, we usually allow for slight differences in execution. That lets us, for example, understand that these technically different road signs mean the same thing:

      The same applies to icons: if you draw an arrow going out of a box in one place, and another arrow-and-box at a slightly different angle, or with a different stroke width, or with one of them filled, we will understand them as meaning the same thing.

      Like, is supposed to mean something else from ? Come on!

      Or two letters A that only slightly differ in the font size:

      A pencil is “Rename” but a slightly thicker pencil is “Highlight”?

      Arrows that use different diagonals?

      Three dots occupying ⅔ of space vs three dots occupying everything. Seriously?

      Slightly darker dots?

      The sheet of paper that changes meaning depending on if its corner is folded or if there are lines inside?

      But the final boss are arrows. They are all different:

      Supposedly, a user must become an expert at noticing how squished the circle is, whether it runs top-to-right or bottom-to-right, and how far the arrow’s end reaches.

      Do I care? Honestly, no. I could’ve given it a shot, maybe, if Apple applied these consistently. But Apple considers and to mean the same thing in one place, and expects me to notice minute details like this in another?

      Sorry, I can’t trust you. Not after everything I’ve seen.

      Level of detail

      Icons are supposed to be easily recognizable from a distance. Every icon designer knows: small details are no-go. You can have them sometimes, maybe, for aesthetic purposes, but you can’t rely on them.

      And icons in Tahoe menus are tiny. Most of them fit in a 12×12 pixel square (actual resolution is 24×24 because of Retina), and because many of them are not square, one dimension is usually even less than 12.

      It’s not a lot of space to work with! Even Windows 95 had 16×16 icons. If we take the typical DPI of that era at 72 dots per inch, we get a physical icon size of 0.22 inches (5.6 mm). On a modern MacBook Pro with 254 DPI, Tahoe’s 24×24 icons are 0.09 inches (2.4 mm). Sure, 24 is bigger than 16, but in reality, these icons’ physical area is several times smaller!

      Simulated physical size comparison between 16×16 at 72 DPI (left) and 24×24 at 254 DPI (right)

      So when I see this:

      I struggle. I can tell they are different. But I definitely struggle to tell what’s being drawn.

      Even zoomed in 20×, it’s still a mess:

      Or here. These are three different icons:

      Am I supposed to tell plus sign from sparkle here?

      Some of these lines are half the pixel thicker than the other lines, and that’s supposed to be the main point:

      Is this supposed to be an arrow?

      A paintbrush?

      Look, a tiny camera.

      It even got an even tinier viewfinder, which you can almost see if you zoom in 20×:

      Or here. There is a box, inside that box is a circle, and inside it is a tiny letter i with a total height of 2 pixels:

      Don’t see it?

      I don’t. But it’s there...

      And this is a window! It even has traffic lights! How adorable:

      Remember: these are retina pixels, ¼ of a real pixel. Steve Jobs himself claimed they were invisible.

      It turns out there’s a magic number right around 300 pixels per inch, that when you hold something around 10 to 12 inches away from your eyes, is the limit of the human retina to differentiate the pixels.

      And yet, Tahoe icons rely on you being able to see them.

      Pixel grid

      When you have so little space to work with, every pixel matters. You can make a good icon, but you have to choose your pixels very carefully.

      For Tahoe icons, Apple decided to use vector fonts instead of good old-fashioned bitmaps. It saves Apple resources—draw once, use everywhere. Any size, any display resolution, any font width.

      But there’re downsides: fonts are hard to position vertically, their size doesn’t map directly to pixels, stroke width doesn’t map 1-to-1 to pixel grid, etc. So, they work everywhere, but they also look blurry and mediocre everywhere:

      Tahoe icon (left) and its pixel-aligned version (right).

      They certainly start to work better once you give them more pixels.

      iPad OS 26 vs macOS 26

      or make graphics simpler. But the combination of small details and tiny icon size is deadly. So, until Apple releases MacBooks with 380+ DPI, unfortunately, we still have to care about the pixel grid.

      Confusing metaphors

      Icons might serve another function: to help users understand the meaning of the command.

      For example, once you know the context (move window), these icons explain what’s going on faster than words:

      But for this to work, the user must understand what’s drawn on the icon. It must be a familiar object with a clear translation to computer action (like Trash can → Delete), a widely used symbol, or an easy-to-understand diagram. HIG:

      A rookie mistake would be to misrepresent the object. For example, this is what selection looks like:

      But its icon looks like this:

      Honestly, I’ve been writing this essay for a week, and I still have zero ideas why it looks like that. There’s an object that looks like this, but it’s a text block in Freeform/Preview:

      It’s called character.textbox in SF Symbols:

      Why did it become a metaphor for “Select all”? My best guess is it’s a mistake.

      Another place uses text selection from iOS as a metaphor. On a Mac!

      Some concepts have obvious or well-established metaphors. In that case, it’s a mistake not to use them. For example, bookmarks: . Apple, for some reason, went with a book:

      Sometimes you already have an interface element and can use it for an icon. However, try not to confuse your users. Dots in a rectangle look like password input, not permissions:

      Icon here says “Check” but the action is “Uncheck”.

      Terrible mistake: icon doesn’t help, it actively confuses the user.

      It’s also tempting to construct a two-level icon: an object and some sort of indicator. Like, a checkbox and a cross, meaning “Delete checkbox”:

      Or a user and a checkmark, like “Check the user”:

      Unfortunately, constructs like this rarely work. Users don’t build sentences from building blocks you provide; they have no desire to solve these puzzles.

      Finding metaphors is hard. Nouns are easier than verbs, and menu items are mostly verbs. What does “open” look like? Like an arrow pointing to the top right? Why?

      I’m not saying there’s an obvious metaphor for “Open” Apple missed. There isn’t. But that’s the point: if you can’t find a good metaphor, using no icon is better than using a bad, confusing, or nonsensical icon.

      There’s a game I like to play to test the quality of the metaphor. Remove the labels and try to guess the meaning. Give it a try:

      It’s delusional to think that there’s a good icon for every action if you think hard enough. There isn’t. It’s a lost battle from the start. No amount of money or “management decisions” is going to change that. The problems are 100% self-inflicted.

      All this being said, I gotta give Apple credit where credit is due. When they are good at choosing metaphors, they are good:

      Symmetrical actions

      A special case of a confusing metaphor is using different metaphors for actions that are direct opposites of one another. Like Undo/Redo, Open/Close, Left/Right.

      It’s good when their icons use the same metaphor:

      Because it saves you time and cognitive resources. Learn one, get another one for free.

      Because of that, it’s a mistake not to use common metaphors for related actions:

      Or here:

      Another mistake is to create symmetry where there is none. “Back” and “See all”?

      Some menus in Tahoe make both mistakes. E.g. lack of symmetry between Show/Hide and false symmetry between completed/subtasks:

      Import not mirrored by Export but by Share:

      Text in icons

      HIG again:

      The authors of the HIG argue against including text as part of an icon. So something like this:

      or this:

      would not fly in 1992.

      I agree, but Tahoe has more serious problems: icons consisting only of text. Like this:

      It’s unclear where “metaphorical, abstract icon text that is not supposed to be read literally” ends and actual text starts. They use the same font, the same color, so how am I supposed to differentiate? Icons just get in the way: A...Complete? AaFont? What does it mean?

      I can maybe understand and . Dots are supposed to represent something. I can imagine thinking that led to . But ? No decorations. No effects. Just plain Abc. Really?

      Text transformations

      One might think that using icons to illustrate text transformations is a better idea.

      Like, you look at this:

      or this:

      or this:

      and just from the icon alone understand what will happen with the text. Icon illustrates the action.

      Also, BIU are well-established in word processing, so all upside?

      Not exactly. The problem is the same—a text icon looks like text, not an icon. Plus, these icons are excessive. What’s the point of taking the first letter and repeating it? The word “Bold” already starts with the letter “B” and reads just as easily, so why double it? Look at it again:

      It’s also repeated once more as a shortcut...

      There is a better way to design this menu:

      And it was known to Apple for at least 33 years.

      System elements in icons

      The operating system, of course, uses some visual elements for its own purposes. Like window controls, resize handles, cursors, shortcuts, etc. It would be a mistake to use those in icons.

      Unfortunately, Apple fell into this trap, too. They reused arrows.

      Key shortcuts:

      HIG has an entire section on ellipsis specifically and how dangerous it is to use it anywhere else in the menu.

      And this exact problem is in Tahoe, too.

      Icons break scanning

      Without icons, you can just scan the menu from top to bottom, reading only the first letters. Because they all align:

      macOS Sequoia

      In Tahoe, though, some menu items have icons, some don’t, and they are aligned differently:

      Some items can have both checkmarks and icons, or have only one of them, or have neither, so we get situations like this:

      Ugh.

      Special mention

      This menu deserves its own category:

      Same icon for different actions. Missing the obvious metaphor. Somehow making the first one slightly smaller than the second and third. Congratulations! It got it all.

      Is HIG still relevant?

      I’ve been mentioning HIG a lot, and you might be wondering: is an interface manual from 1992 still relevant today? Haven’t computers changed so much that entirely new principles, designs, and idioms apply?

      Yes and no. Of course, advice on how to adapt your icons to black-and-white displays is obsolete. But the principles—as long as they are good principles—still apply, because they are based on how humans work, not how computers work.

      Humans don’t get a new release every year. Our memory doesn’t double. Our eyesight doesn’t become sharper. Attention works the same way it always has. Visual recognition, motor skills—all of this is exactly as it was in 1992.

      So yeah, until we get a direct chip-to-brain interface, HIG will stay relevant.

      Conclusion

      In my opinion, Apple took on an impossible task: to add an icon to every menu item. There are just not enough good metaphors to do something like that.

      But even if there were, the premise itself is questionable: if everything has an icon, it doesn’t mean users will find what they are looking for faster.

      And even if the premise was solid, I still wish I could say: they did the best they could, given the goal. But that’s not true either: they did a poor job consistently applying the metaphors and designing the icons themselves.

      I hope this article will be helpful in avoiding the common icon-design mistakes that Apple managed to collect in a single OS release. I love computers, I love interfaces, I love visual communication. It makes me sad to see perfectly good knowledge, already accessible 30 years ago, being completely ignored or thrown away today.

      On the upside: it’s not that hard anymore to design better than Apple! Let’s drink to that. Happy New Year!

      From SF Symbols: a smiley face calling somebody on the phone

      Notes

      During review of this post I was made familiar with Jim Nielsen’s article, which hits a lot of the same points as I do. I take that as a sign there’s some common truth behind our reasoning.

      Also note: Safari → File menu got worse since 26.0. Used to have only 4 icons, now it’s 18!

      Thanks Kevin, Ryan, and Nicki for reading drafts of this post.

    22. 🔗 Rust Blog Project goals update — December 2025 rss

      The Rust project is currently working towards a slate of 41 project goals, with 13 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

      Flagship goals

      "Beyond the &"

      Continue Experimentation with Pin Ergonomics (rust-lang/rust-project- goals#389)

      Progress |
      ---|---
      Point of contact | Frank King
      Champions | compiler (Oliver Scherer), lang (TC)
      Task owners | Frank King

      1 detailed update available.

      Comment by @frank-king posted on 2025-12-18:

      Design a language feature to solve Field Projections (rust-lang/rust-project- goals#390)

      Progress |
      ---|---
      Point of contact | Benno Lossin
      Champions | lang (Tyler Mandry)
      Task owners | Benno Lossin

      5 detailed updates available.

      Comment by @BennoLossin posted on 2025-12-07:

      Since we have chosen virtual places as the new approach, we reviewed what open questions are most pressing for the design. Our discussion resulted in the following five questions:

      1. Should we have 1-level projections xor multi-level projections?
      2. What is the semantic meaning of the borrow checker rules (BorrowKind)?
      3. How should we add "canonical projections" for types such that we have nice and short syntax (like x~y or x.@y)?
      4. What to do about non-indirected containers (Cell, MaybeUninit, Mutex, etc)?
      5. How does one inspect/query Projection types?

      We will focus on these questions in December as well as implementing FRTs.

      Comment by @BennoLossin posted on 2025-12-12:

      Canonical Projections

      We have discussed canonical projections and come up with the following solution:

      pub trait CanonicalReborrow: HasPlace {
          type Output<'a, P: Projection<Source = Self::Target>>: HasPlace<Target = P::Target>
          where
              Self: PlaceBorrow<'a, P, Self::Output<'a, P>>;
      }
      

      Implementing this trait permits using the syntax @$place_expr where the place's origin is of the type Self (for example @x.y where x: Self and y is an identifier or tuple index, or @x.y.z etc). It is desugared to be:

      @<<Self as CanonicalReborrow>::Output<'_, projection_from_place_expr!($place_expr)>> $place_expr
      

      (The names of the trait, associated type and syntax are not final, better suggestions welcome.)

      Reasoning

      • We need the Output associated type to support the @x.y syntax for Arc and ArcRef.
      • We put the FRT and lifetime parameter on Output in order to force implementers to always provide a canonical reborrow, so if @x.a works, then @x.b also works (when b also is a field of the struct contained by x).
        • This (sadly or luckily) also has the effect that making @x.a and @x.b return different wrapper types is more difficult to implement and requires a fair bit of trait dancing. We should think about discouraging this in the documentation.

      Comment by @BennoLossin posted on 2025-12-16:

      Non-Indirected Containers

      Types like MaybeUninit<T>, Cell<T>, ManuallyDrop<T>, RefCell<T> etc. currently do not fit into our virtual places model, since they don't have an indirection. They contain the place directly inline (and some are even repr(transparent)). For this reason, we currently don't have projections available for &mut MaybeUninit<T>.

      Enter our new trait PlaceWrapper which these types implement in order to make projections available for them. We call these types place wrappers. Here is the definition of the trait:

      pub unsafe trait PlaceWrapper<P: Projection<Source = Self::Target>>: HasPlace {
          type WrappedProjection: Projection<Source = Self>;
      
          fn wrap_projection(p: P) -> Self::WrappedProjection;
      }
      

      This trait should only be implemented when Self doesn't contain the place as an indirection (so for example Box must not implement the trait). When this trait is implemented, Self has "virtual fields" available (actually all kinds of place projections). The names of these virtual fields/projections are the same as those of the contained place, but their output type is controlled by this trait.

      As an example, here is the implementation for MaybeUninit:

      impl<T, P: Projection<Source = T>> PlaceWrapper<P> for MaybeUninit<T> {
          type WrappedProjection = TransparentProjection<P, MaybeUninit<T>, MaybeUninit<P::Target>>;
      
          fn wrap_projection(p: P) -> Self::WrappedProjection {
              TransparentProjection(p, PhantomData, PhantomData)
          }
      }
      

      Where TransparentProjection will be available in the standard library defined as:

      pub struct TransparentProjection<P, Src, Tgt>(P, PhantomData<Src>, PhantomData<Tgt>);
      
      impl<P: Projection, Src, Tgt> Projection for TransparentProjection<P, Src, Tgt> {
          type Source = Src;
          type Target = Tgt;
      
          fn offset(&self) -> usize {
              self.0.offset()
          }
      }
      

      When there is ambiguity because the wrapper and the wrapped type both have the same field, the wrapper's field takes precedence (this matches how Deref works today). It is still possible to refer to the wrapped field by first dereferencing the container, so x.field refers to the wrapper's field and (*x).field refers to the field of the wrapped type.
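
      For intuition, field shadowing already behaves this way with Deref on stable Rust. A minimal, runnable illustration (Wrapper and Inner are hypothetical names, not part of the proposal):

      use std::ops::Deref;
      
      struct Inner { field: u32 }
      struct Wrapper { field: u32, inner: Inner }
      
      impl Deref for Wrapper {
          type Target = Inner;
          fn deref(&self) -> &Inner { &self.inner }
      }
      
      fn main() {
          let x = Wrapper { field: 1, inner: Inner { field: 2 } };
          assert_eq!(x.field, 1);    // the wrapper's own field takes precedence
          assert_eq!((*x).field, 2); // explicit deref reaches the wrapped field
      }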

      Comment by @BennoLossin posted on 2025-12-20:

      Field-by-Field Projections vs One-Shot Projections

      We have used several different names for these two ways of implementing projections. The first is also called 1-level projections and the second multi-level projections.

      The field-by-field approach uses field representing types (FRTs), which represent a single field of a struct with no indirection. When writing something like @x.y.z, we perform the place operation twice, first using the FRT field_of!(X, y) and then again with field_of!(T, z) where T is the resulting type of the first projection.

      The second approach, called one-shot projections, instead generalizes FRTs to projections: compositions of FRTs that can be empty and dynamic. Using these, we desugar @x.y.z to a single place operation.

      Field-by-field projections have the advantage that they simplify the implementation for users of the feature, the compiler implementation, and the mental model that people have to keep in mind when interacting with field projections. However, they also have pretty big downsides, which are either fundamental to their design or would require significant complication of the feature:

      • They are less expressive than one-shot projections. For example, when moving out a subsubfield of x: &own Struct by doing let a = @x.field.a, we have to move out all of field, which prevents us from later writing let b = @x.field.b. One-shot projections allow the borrow checker to track individual subsubfields (see the sketch after this list).
      • Field-by-field projections also make it difficult to define type-changing projections in an inference-friendly way. Projecting through multiple fields could change types several times along the way, so we would have to require canonical projections in certain places. However, this requires intermediate types whose safety invariants are very complex to define.
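
      To make the first point concrete: today's borrow checker already tracks disjoint subfields through plain references, and one-shot projections aim to preserve that precision for user-defined pointer types. A runnable sketch with plain &mut (Outer and Inner are hypothetical names):

      struct Inner { a: u32, b: u32 }
      struct Outer { field: Inner }
      
      fn main() {
          let mut x = Outer { field: Inner { a: 1, b: 2 } };
          let a = &mut x.field.a;
          let b = &mut x.field.b; // fine: disjoint subsubfields
          *a += 1;
          *b += 1;
      }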

      We additionally note that the single-function-call desugaring is a simplification that lends itself much better to explaining what the @ syntax does.

      All of this points in the direction of proceeding with one-shot projections, and we will most likely do that. However, we must note that the field-by-field approach might yield simpler trait definitions that make implementing the various place operations more manageable. There are several open problems in how to design the field-by-field API in the place variation (the previous proposal did have this mapped out clearly, but it does not translate very well to places), and solving them would require significant effort. So at this point we cannot really give a fair comparison. Our initial scouting of the solutions revealed that they all have some sort of limitation (as explained above for intermediate projection types, for example), which makes field-by-field projections less desirable. So for the moment we are set on one-shot projections, but when the time comes to write the RFC we need to revisit the idea of field-by-field projections.

      Comment by @BennoLossin posted on 2025-12-25:

      Wiki Project

      We started a wiki project at https://rust-lang.github.io/beyond-refs to map out the solution space. We intend to grow it into the single source of truth for the current state of the field projection proposal as well as unfinished and obsolete ideas and connections between them. Additionally, we will aim to add the same kind of information for the in-place initialization effort, since it has overlap with field projections and, more importantly, has a similarly large solution space.

      In the beginning you might find many stub pages in the wiki, which we will work on making more complete. We will also mark pages that contain old or abandoned ideas as such, and mark the current proposal.

      This issue will continue to receive regular detailed updates, which are designed for those keeping reasonably up-to-date with the feature. For anyone out of the loop, the wiki will be a much better starting point once it contains more content.

      Reborrow traits (rust-lang/rust-project-goals#399)

      Progress |
      ---|---
      Point of contact | Aapo Alasuutari
      Champions | compiler (Oliver Scherer), lang (Tyler Mandry)
      Task owners | Aapo Alasuutari

      1 detailed update available.

      Comment by @aapoalas posted on 2025-12-17:

      Purpose

      A refresher on what we want to achieve here: the most basic form of reborrowing we want to enable is this:

      // Note: not Clone or Copy
      #[derive(Reborrow)]
      struct MyMutMarker<'a>(...);
      
      // ...
      
      let marker: MyMutMarker = MyMutMarker::new();
      some_call(marker);
      some_call(marker);
      

      i.e. make it possible for an owned value to be passed into a call twice, with Rust injecting a reborrow at each call site that produces a new bitwise copy of the original value for the call, and marks the original value as disabled for reads and writes for the duration of the borrow.
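
      For comparison, &mut T already gets exactly this treatment from the compiler; the derive aims to extend it to user-defined types. A minimal, runnable illustration of today's built-in behavior (not the proposed machinery):

      fn takes(_: &mut u32) {}
      
      fn main() {
          let mut x = 0;
          let r = &mut x;
          takes(r); // implicitly reborrowed: passes `&mut *r`, not `r` itself
          takes(r); // fine: `r` was only reborrowed, not moved
          *r += 1;  // and `r` is usable again afterwards
      }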

      A notable complication appears when implementing such reborrowing in userland using explicit calls when dealing with returned values:

      return some_call(marker.reborrow());
      

      If the borrowed lifetime escapes through the return value, then this will not compile as the borrowed lifetime is based on a value local to this function. Alongside convenience, this is the major reason for the Reborrow traits work.

      CoerceShared is a secondary trait that enables equivalent reborrowing that only disables the original value for writes, i.e. matching the &mut T to &T coercion.
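
      Again for comparison, here is the built-in &mut T to &T version of this coercion (a runnable sketch of today's behavior, not the trait machinery):

      fn read(_: &u32) {}
      
      fn main() {
          let mut x = 0;
          let r = &mut x;
          read(r);  // `&mut u32` coerces to `&u32`: writes through `r` are
                    // disabled for the duration of the call
          *r += 1;  // but `r` is fully usable again afterwards
      }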

      Update

      We have the Reborrow trait working, albeit currently with a bug in which the marker must be bound as let mut. We are working towards a working CoerceShared trait in the following form:

      trait CoerceShared<Target: Copy> {}
      

      Originally the trait had an associated type Target, but this turned out to be unnecessary, as there is no reason to disallow multiple coercion targets. The original reason for using an associated type to disallow multiple coercion targets was that the trait also had an unsafe method, at which point unscrupulous users could have used the trait as a generic coercion trait. Because the trait method was found to be unnecessary, that fear is also unnecessary.
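
      A minimal sketch of why the generic parameter permits multiple coercion targets where an associated type would not (all the types here are hypothetical stand-ins):

      trait CoerceShared<Target: Copy> {}
      
      #[derive(Clone, Copy)]
      struct RefPair<'a>(&'a u32, &'a u32);
      
      #[derive(Clone, Copy)]
      struct FirstOnly<'a>(&'a u32);
      
      struct MutPair<'a>(&'a mut u32, &'a mut u32);
      
      // Two impls, two coercion targets; with an associated
      // `type Target`, only one of these could exist.
      impl<'a> CoerceShared<RefPair<'a>> for MutPair<'a> {}
      impl<'a> CoerceShared<FirstOnly<'a>> for MutPair<'a> {}
      
      fn main() {}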

      This means that the trait has better chances of working with multiple coercing lifetimes (think of a collection of &muts all coercing to &s, or only some of them). However, we are currently avoiding any support for multiple lifetimes, as we want to avoid dealing with rmeta before we have the basic functionality working.

      "Flexible, fast(er) compilation"

      build-std (rust-lang/rust-project-goals#274)

      Progress |
      ---|---
      Point of contact | David Wood
      Champions | cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras)
      Task owners | Adam Gemmell, David Wood

      1 detailed update available.

      Comment by @davidtwco posted on 2025-12-15:

      rust-lang/rfcs#3873 is waiting on one checkbox before entering the final comment period. We had our sync meeting on the 11th and decided that we would enter FCP on rust-lang/rfcs#3874 and rust-lang/rfcs#3875 after rust-lang/rfcs#3873 is accepted. We've responded to almost all of the feedback on the next two RFCs and expect the FCP to act as a forcing function so that the relevant teams take a look; they can always register concerns if there are things we need to address, and if we need to make any major changes then we'll restart the FCP.

      Production-ready cranelift backend (rust-lang/rust-project-goals#397)

      Progress | Will not complete
      ---|---
      Point of contact | Folkert de Vries
      Champions | compiler (bjorn3)
      Task owners | bjorn3, Folkert de Vries, Trifecta Tech Foundation

      1 detailed update available.

      Comment by @folkertdev posted on 2025-12-01:

      We did not receive the funding we needed to work on this goal, so no progress has been made.

      Overall, I think the improvements we felt comfortable promising are on the low side: the amount of time spent in codegen for realistic changes to real code bases was smaller than expected, meaning that the improvements cranelift can deliver for the end-user experience are smaller.

      We still believe larger gains can be made with more effort, but did not feel confident in promising hard numbers.

      So for now, let's close this.

      Promoting Parallel Front End (rust-lang/rust-project-goals#121)

      Progress |
      ---|---
      Point of contact | Sparrow Li
      Task owners | Sparrow Li
      No detailed updates available.

      Relink don't Rebuild (rust-lang/rust-project-goals#400)

      Progress | Will not complete
      ---|---
      Point of contact | Jane Lusby
      Champions | cargo (Weihang Lo), compiler (Oliver Scherer)
      Task owners | @dropbear32, @osiewicz
      No detailed updates available.

      "Higher-level Rust"

      Ergonomic ref-counting: RFC decision and preview (rust-lang/rust-project-goals#107)

      Progress |
      ---|---
      Point of contact | Niko Matsakis
      Champions | compiler (Santiago Pastorino), lang (Niko Matsakis)
      Task owners | Niko Matsakis, Santiago Pastorino
      No detailed updates available.

      Stabilize cargo-script (rust-lang/rust-project-goals#119)

      Progress |
      ---|---
      Point of contact | Ed Page
      Champions | cargo (Ed Page), lang (Josh Triplett), lang-docs (Josh Triplett)
      Task owners | Ed Page

      1 detailed update available.

      Comment by @epage posted on 2025-12-15:

      Key developments

      • A fence length limit was added in response to T-lang feedback (https://github.com/rust-lang/rust/pull/149358); see the frontmatter sketch after this list
      • Whether to disallow or lint for CR inside of a frontmatter is under discussion (https://github.com/rust-lang/rust/pull/149823)
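
      For context, the frontmatter under discussion is the dash-fenced metadata block at the top of a cargo script (nightly's -Zscript feature); the fence length limit caps how many dashes the fence may use. A minimal sketch, assuming the RFC 3503 syntax:

      #!/usr/bin/env cargo
      ---
      [dependencies]
      ---
      
      fn main() {
          println!("hello from a cargo script");
      }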

      Blockers

      • https://github.com/rust-lang/rust/pull/146377
      • rustdoc deciding on and implementing how they want frontmatter handled in doctests

      "Unblocking dormant traits"

      Evolving trait hierarchies (rust-lang/rust-project-goals#393)

      Progress |
      ---|---
      Point of contact | Taylor Cramer
      Champions | lang (Taylor Cramer), types (Oliver Scherer)
      Task owners | Taylor Cramer, Taylor Cramer & others

      1 detailed update available.

      Comment by @cramertj posted on 2025-12-17:

      Current status:

      • The RFC for auto impl supertraits has been updated to address SemVer compatibility issues.
      • There is a parsing PR kicking off an experimental implementation. The tracking issue for this experimental implementation is here.

      In-place initialization (rust-lang/rust-project-goals#395)

      Progress |
      ---|---
      Point of contact | Alice Ryhl
      Champions | lang (Taylor Cramer)
      Task owners | Benno Lossin, Alice Ryhl, Michael Goulet, Taylor Cramer, Josh Triplett, Gary Guo, Yoshua Wuyts
      No detailed updates available.

      Next-generation trait solver (rust-lang/rust-project-goals#113)

      Progress |
      ---|---
      Point of contact | lcnr
      Champions | types (lcnr)
      Task owners | Boxy, Michael Goulet, lcnr

      1 detailed update available.

      Comment by @lcnr posted on 2025-12-15:

      We've continued to fix a bunch of smaller issues over the last month. Tim (Theemathas Chirananthavat) helped uncover a new potential issue due to non-fatal overflow which we'll have to consider before stabilizing the new solver: https://github.com/rust-lang/trait-system-refactor-initiative/issues/258.

      I fixed two issues myself in https://github.com/rust-lang/rust/pull/148823 and https://github.com/rust-lang/rust/pull/148865.

      tiif with help by Boxy fixed query cycles when evaluating constants in where-clauses: https://github.com/rust-lang/rust/pull/148698.

      @adwinwhite fixed a subtle issue involving coroutine witnesses in https://github.com/rust-lang/rust/pull/149167 after having diagnosed the underlying issue there last month. They've also fixed a smaller diagnostics issue in https://github.com/rust-lang/rust/pull/149299. Finally, they've also fixed an edge case of impl well-formedness checking in https://github.com/rust-lang/rust/pull/149345.

      Shoyu Vanilla fixed a broken interaction of aliases and fudging in https://github.com/rust-lang/rust/pull/149320. Looking into fudging and HIR typeck Expectation handling also uncovered a bunch of broken edge cases, and I've opened https://github.com/rust-lang/rust/issues/149379 to track these separately.

      I have recently spent some time thinking about the remaining necessary work and posted a write-up on my personal blog: https://lcnr.de/blog/2025/12/01/next-solver-update.html. I am currently trying to get a clearer perspective on our cycle handling while slowly working towards an RFC for the changes there. This is challenging as we don't have a good theoretical foundation here yet.

      Stabilizable Polonius support on nightly (rust-lang/rust-project-goals#118)

      Progress |
      ---|---
      Point of contact | Rémy Rakic
      Champions | types (Jack Huey)
      Task owners | Amanda Stjerna, Rémy Rakic, Niko Matsakis

      2 detailed updates available.

      Comment by @lqd posted on 2025-12-30:

      This month's key developments were:

      • borrowck support in a-mir-formality has been progressing steadily — it has its own dedicated updates in https://github.com/rust-lang/rust-project-goals/issues/122; see there for more details
      • we were also able to find a suitable project for the master's student working on a-mir-formality (they accepted and should start around February), which will also help expand our testing coverage for the polonius alpha
      • tiif has kept making progress on fixing opaque type soundness issue https://github.com/rust-lang/trait-system-refactor-initiative/issues/159. It is the one remaining blocker for passing all tests. By itself it will not immediately fix the two remaining (soundness) issues with opaque type region liveness, but we'll be able to use the same supporting code to ensure the regions are indeed live where they need to be.
      • I quickly cleaned up some inefficiencies in constraint conversion; it hasn't landed yet, but it may not need to because of the next item
      • but most of the time this month was spent on this final item: we have the first interesting results from the rewriting effort. After a handful of false starts, I have a branch almost ready that switches the constraint graph to be lazy and computed during traversal. It removes the need to index the numerous lists of constraints, or to convert liveness data to a different shape. It thus greatly reduces the current alpha overhead (some rare cases look faster than NLL, but I don't yet know why — maybe due to being able to better exploit the sparseness and low connectivity of the constraint graph, and a small number of loans). The overhead wasn't entirely removed, of course: the worst offending benchmark has a +5% wall-time regression, and instruction counts look worse (+13%). This was also only benchmarking the algorithm itself, without the improvements to the rest of borrowck mentioned in previous updates. I should be able to open a PR in the next couple of days, once I figure out how to best convert the polonius mermaid graph dump to the new lazy localized constraint generation.
      • and finally, happy holidays everyone!

      Comment by @lqd posted on 2025-12-31:

      • I should be able to open a PR in the next couple days

      done in https://github.com/rust-lang/rust/pull/150551

      Goals looking for help

      Other goal updates

      Add a team charter for rustdoc team (rust-lang/rust-project-goals#387)

      Progress | Completed
      ---|---
      Point of contact | Guillaume Gomez
      Champions | rustdoc (Guillaume Gomez)
      No detailed updates available.

      Borrow checking in a-mir-formality (rust-lang/rust-project-goals#122)

      Progress |
      ---|---
      Point of contact | Niko Matsakis
      Champions | types (Niko Matsakis)
      Task owners | Niko Matsakis, tiif

      4 detailed updates available.

      Comment by @nikomatsakis posted on 2025-12-03:

      PR https://github.com/rust-lang/a-mir-formality/pull/206 contains a "first draft" for the NLL rules. It checks for loan violations (e.g., mutating borrowed data) as well as some notion of outlives requirements. It does not check for move errors and there aren't a lot of tests yet.

      Comment by @nikomatsakis posted on 2025-12-03:

      The PR also includes two big improvements to the a-mir-formality framework:

      • support for (for_all) rules that can handle "iteration"
      • tracking proof trees, making it much easier to tell why something is accepted that should not be

      Comment by @nikomatsakis posted on 2025-12-10:

      Update: opened https://github.com/rust-lang/a-mir-formality/pull/207 which contains support for &mut, wrote some new tests (including one FIXME), and added a test for NLL Problem Case #3 (which behaved as expected).

      One interesting thing (cc Ralf Jung) is that we have diverged from MiniRust in a few minor ways:

      • We do not support embedding value expressions in place expressions.
      • Where MiniRust has an AddrOf operator that uses the PtrType to decide what kind of operation it is, we have added a Ref MIR operation. This is in part because we need information that is not present in MiniRust, specifically a lifetime.
      • We have also opted to extend goto with the ability to take multiple successors, so that goto b1, b2 can be seen as "goto either b1 or b2 non-deterministically" (the actual opsem would probably be to always go to b1, making this a way to add "fake edges", but the analysis should not assume that).

      Comment by @nikomatsakis posted on 2025-12-17:

      Update: opened https://github.com/rust-lang/a-mir-formality/pull/210 with today's work. We are discussing how to move the checker to support polonius-alpha. To that end, we introduced feature gates (so that a-mir-formality can model nightly features) and did some refactoring of the type checker aiming at allowing outlives to become flow-sensitive.

      C++/Rust Interop Problem Space Mapping (rust-lang/rust-project-goals#388)

      Progress |
      ---|---
      Point of contact | Jon Bauman
      Champions | compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay)
      Task owners | Jon Bauman
      No detailed updates available.

      Comprehensive niche checks for Rust (rust-lang/rust-project-goals#262)

      Progress |
      ---|---
      Point of contact | Bastian Kersting
      Champions | compiler (Ben Kimock), opsem (Ben Kimock)
      Task owners | Bastian Kersting, Jakob Koschel
      No detailed updates available.

      Const Generics (rust-lang/rust-project-goals#100)

      Progress |
      ---|---
      Point of contact | Boxy
      Champions | lang (Niko Matsakis)
      Task owners | Boxy, Noah Lev

      3 detailed updates available.

      Comment by @BoxyUwU posted on 2025-12-30:

      Since the last update both of my PRs I mentioned have landed, allowing for constructing ADTs in const arguments while making use of generic parameters. This makes MGCA effectively a "full" prototype where it can now fully demonstrate the core concept of the feature. There's still a lot of work left to do but now we're at the point of finishing out the feature :)

      Once again huge thanks to camelid for sticking with me throughout this. Also thanks to errs, oli and lcnr for reviewing some of the work and chatting with me about possible impl decisions.

      Some examples of what is possible with MGCA as of the end of this goal cycle:

      #![feature(const_default, const_trait_impl, min_generic_const_args)]
      
      trait Trait {
          #[type_const]
          const ASSOC: usize;
      }
      
      fn mk_array<T: const Default + Trait>() -> [T; T::ASSOC] {
          [const { T::default() }; _]
      }
      
      
      
      #![feature(adt_const_params, min_generic_const_args)]
      
      fn foo<const N: Option<u32>>() {}
      
      trait Trait {
          #[type_const]
          const ASSOC: usize;
      }
      
      fn bar<T: Trait, const N: u32>() {
          // the initializer of `_0` is `N`, which is a legal const argument
          // so this is ok.
          foo::<{ Some::<u32> { 0: N } }>();
      
          // this is allowed as mgca supports uses of assoc consts in the
          // type system. ie `<T as Trait>::ASSOC` is a legal const argument
          foo::<{ Some::<u32> { 0: <T as Trait>::ASSOC } }>();
      
          // this on the other hand is not allowed as `N + 1` is not a legal
          // const argument
          foo::<{ Some::<u32> { 0: N + 1 } }>(); // ERROR
      }
      

      As for adt_const_params, we now have a Zulip stream specifically for discussion and drafting of the upcoming RFC: #project-const-generics/adt_const_params-rfc. I've gotten part of the way through actually writing the RFC itself, though it's gone slower than I had originally hoped, as I've also been spending more time thinking through the implications of allowing private data in const generics.

      I've debugged the remaining two ICEs keeping adt_const_params from being fully ready for stabilization and written some brief instructions on how to resolve them. One ICE has been incidentally fixed (or rather masked) by some work that Kivooeo has been doing on MGCA. The other has been picked up by someone whose GitHub handle I'm not sure of, so that will also be getting fixed soon.

      Comment by @BoxyUwU posted on 2025-12-30:

      Ah I forgot to mention, even though MGCA has a tonne of work left to do I expect it should be somewhat approachable for people to help out with. So if people are interested in getting involved now is a good time :)

      Comment by @BoxyUwU posted on 2025-12-30:

      Ah, another thing I forgot to mention: David Wood spent some time looking into the name mangling scheme for adt_const_params to make sure it would be fine to stabilize, and it seems it is, so that's another step closer to adt_const_params being stabilizable.

      Continue resolving cargo-semver-checks blockers for merging into cargo (rust-lang/rust-project-goals#104)

      Progress |
      ---|---
      Point of contact | Predrag Gruevski
      Champions | cargo (Ed Page), rustdoc (Alona Enraght-Moony)
      Task owners | Predrag Gruevski
      No detailed updates available.

      Develop the capabilities to keep the FLS up to date (rust-lang/rust-project-goals#391)

      Progress |
      ---|---
      Point of contact | Pete LeVasseur
      Champions | bootstrap (Jakub Beránek), lang (Niko Matsakis), spec (Pete LeVasseur)
      Task owners | Pete LeVasseur, Contributors from Ferrous Systems and others TBD, t-spec and contributors from Ferrous Systems

      1 detailed update available.

      Comment by @PLeVasseur posted on 2025-12-16:

      Meeting notes here: FLS team meeting 2025-12-12

      Key developments : We're close to completing the FLS release for 1.91.0/1.91.1. We've started to operate as a team, merging a PR with the changelog entries, then opening issues for each change required: ✅ #624 (https://github.com/rust-lang/fls/issues/624), ✅ #625 (https://github.com/rust-lang/fls/issues/625), ✅ #626 (https://github.com/rust-lang/fls/issues/626), ⚠️ #623 (https://github.com/rust-lang/fls/issues/623). #623 is still pending, as it requires a bit of alignment with the Reference on definitions and the creation of a new example.

      Blockers : None currently.

      Help wanted : We'd love more folks from the safety-critical community to contribute by picking up issues, or by opening an issue if you notice something is missing.

      Emit Retags in Codegen (rust-lang/rust-project-goals#392)

      Progress |
      ---|---
      Point of contact | Ian McCormack
      Champions | compiler (Ralf Jung), opsem (Ralf Jung)
      Task owners | Ian McCormack

      1 detailed update available.

      Comment by @icmccorm posted on 2025-12-16:

      Here's our December status update!

      • We have revised our prototype of the pre-RFC based on Ralf Jung's feedback. Now, instead of having two different retag functions for operands and places, we emit a single __rust_retag intrinsic in every situation. We also track interior mutability precisely. At this point, the implementation is mostly stable and seems to be ready for an MCP.

      • There's been some discussion here and in the pre-RFC about whether or not Rust will still have explicit MIR retag statements. We plan on revising our implementation so that we no longer rely on MIR retags to determine where to insert our lower-level retag calls. This should be a relatively straightforward change to the current prototype. If anything, it should make these changes easier to merge upstream, since they will no longer affect Miri.

      • BorrowSanitizer continues to gain new features, and we've started testing it on our first real crate (lru) (which has uncovered a few new bugs in our implementation). The two core Tree Borrows features that we have left to support are error reporting and garbage collection. Once these are finished, we will be able to expand our testing to more real-world libraries and confirm that we are passing each of Miri's test cases (and likely find more bugs lurking in our implementation). Our instrumentation pass ignores global and thread-local state for now, and it does not support atomic memory accesses outside of atomic load and store instructions. These operations should be relatively straightforward to add once we've finished higher-priority items.

      • Performance is slow; we do not know exactly how slow yet, since we've been focusing on feature support over benchmarking and optimization. Based on what we're seeing from profiling, this is at least partially due to the lack of garbage collection. We will have a better sense of our performance once we can compare against Miri on more real-world test cases.

      As for what's next, we plan on posting an MCP soon, now that it's clear that we will be able to do without MIR retags. You can expect a more detailed status update on BorrowSanitizer by the end of January. This will discuss our implementation and plans for 2026. We will post that here and on our project website.

      Expand the Rust Reference to specify more aspects of the Rust language (rust-lang/rust-project-goals#394)

      Progress |
      ---|---
      Point of contact | Josh Triplett
      Champions | lang-docs (Josh Triplett), spec (Josh Triplett)
      Task owners | Amanieu d'Antras, Guillaume Gomez, Jack Huey, Josh Triplett, lcnr, Mara Bos, Vadim Petrochenkov, Jane Lusby

      1 detailed update available.

      Comment by @joshtriplett posted on 2025-12-17:

      In addition to further ongoing work on reference material (some of which is on track to be merged), we've had some extensive discussions about reference processes, maintenance, and stability markers. Niko Matsakis is putting together a summary and proposal for next steps.

      Finish the libtest json output experiment (rust-lang/rust-project-goals#255)

      Progress |
      ---|---
      Point of contact | Ed Page
      Champions | cargo (Ed Page)
      Task owners | Ed Page
      No detailed updates available.

      Finish the std::offload module (rust-lang/rust-project-goals#109)

      Progress |
      ---|---
      Point of contact | Manuel Drehwald
      Champions | compiler (Manuel Drehwald), lang (TC)
      Task owners | Manuel Drehwald, LLVM offload/GPU contributors

      2 detailed updates available.

      Comment by @ZuseZ4 posted on 2025-12-02:

      It's only been two weeks, but we have a good number of updates, so I wanted to share them already.

      autodiff

      1. On the autodiff side, we landed support for rlib and better docs. This means that our autodiff frontend is "almost" complete, since there are almost no cases left where you can't apply autodiff. There are a few features like custom derivatives or support for dyn arguments that I'd like to add, but they are currently waiting for better docs on the Enzyme side. There is also a long-term goal of replacing the fat-lto requirement with the less invasive embed-bc requirement, but this proved to be tricky in the past and only affects compile times.
      2. @sgasho picked up my old PR to dlopen enzyme, and found the culprit of it failing after my last rebase. A proper fix might take a bit longer, but it might be worth waiting for. As a reminder, using dlopen in the future allows us to ship autodiff on nightly without increasing the size of rustc and therefore without making our infra team sad.

      All in all, we have landed most of the hard work here, so that's a very comfortable position to be in before enabling it on nightly.

      offload

      1. We have landed the intrinsic implementation of Marcelo Domínguez, so now you can offload functions with almost arbitrary arguments. In my first prototype, I had limited it to pointers to 256 f64 values. The updated usage example continues to live here in our docs. As you can see, we still require #[cfg(target_os=X)] annotations. Under the hood, the LLVM-IR we generate is also still a bit convoluted. In his next PRs, he'll clean up the generated IR and introduce an offload macro for users to call instead of the internal offload intrinsic.
      2. I spent more time on enabling offload in our CI, in order to enable std::offload in nightly. After multiple iterations and support from LLVM offload devs, we found a cmake config that does not run into bugs, should not increase Rust CI time too much, and works with both in-tree llvm/clang builds and external clangs (the current case in our Rust CI).
      3. I spent more time on simplifying the usage instructions in the dev guide. We started with two cargo calls, one rustc call, two clang calls, and two clang-helper binary calls. I was able to remove the rustc call and one of the clang-offload-packager calls by directly calling the underlying LLVM APIs. I also have an unmerged PR which removes the two clang calls. Once I've cleaned it up and landed it, we will be down to only two cargo calls and one call to the clang-linker-wrapper binary. Once I've automated this last wrapper (and enabled offload in CI), nightly users should be able to experiment with std::offload.

      Comment by @ZuseZ4 posted on 2025-12-26:

      Time for the next round of updates. Again, most of the updates were on the GPU side, but with some notable autodiff improvements too.

      autodiff:

      1. @sgasho finished his work on using dlopen to load Enzyme and the PR landed. This allowed Jakub Beránek and me to start working on distributing Enzyme via a standalone component.

      2. As a first step, I added a nicer error if we fail to find or dlopen our Enzyme backend. I also removed most of our autodiff fallbacks; we now unconditionally enable our macro frontend on nightly: https://github.com/rust-lang/rust/pull/150133. You may notice that cargo expand now works on autodiff code. This also enabled the first bug reports about ICEs (internal compiler errors) in our macro parser logic.

      3. Kobzol opened a PR to build Enzyme in CI. In theory, I should have been able to download that artifact, put it into my sysroot, and use the latest nightly to automatically load it. If that had worked, we could have just merged his PR, and everyone could have started using AD on nightly. Of course, things are never that easy. Even though Enzyme, LLVM, and rustc were all built in CI, the LLVM version shipped with rustc does not seem compatible with the LLVM version Enzyme was built against. We assume some slight cmake mismatch during our CI builds, which we will have to debug.

      offload:

      1. On the GPU side, Marcelo Domínguez finished his cleanup PR, and along the way also fixed using multiple kernels within a single codebase. When developing the offload MVP, I had taken a lot of inspiration from the LLVM-IR generated by clang, and it looks like I had gotten one of the (way too many) LLVM attributes wrong. That caused some metadata to be fused when multiple kernels were present, confusing our offload backend. We started to find more bugs while working on benchmarks; more about the fixes for those in the next update.

      2. I finished cleaning up my offload build PR, and Oliver Scherer reviewed and approved it. Once the dev-guide gets synced, you should see much simpler usage instructions. Now it's just up to me to automate the last part; then you can compile offload code purely with cargo or rustc. I also improved how we build offload, which allows us to build it both in CI and locally. CI has some very specific requirements so as not to increase build times, since our x86-64-dist runner is already quite slow.

      3. Our first benchmarks linked directly against NVIDIA and AMD intrinsics at the llvm-ir level. However, we have had an nvptx Rust module for a while, and more recently also an amdgpu module, which nicely wrap those intrinsics. I just synced the stdarch repository into rustc a few minutes ago, so from now on we can replace both with the corresponding Rust functions. In the near future we should get a higher-level GPU module which abstracts away naming differences between vendors.

      4. Most of my past rustc contributions were related to LLVM projects or plugins (Offload and Enzyme), and I increasingly found myself asking other people for updates or backports of our LLVM submodule, since upstream LLVM has fixes which were not yet merged into it. Our LLVM working group is quite small and I didn't want to burden them too much with my requests, so I recently asked to join the group, which was approved. In the future I intend to help a little with the maintenance here.

      Getting Rust for Linux into stable Rust: compiler features (rust-lang/rust-project-goals#407)

      Progress |
      ---|---
      Point of contact | Tomas Sedovic
      Champions | compiler (Wesley Wiser)
      Task owners | (depending on the flag)

      1 detailed update available.

      Comment by @tomassedovic posted on 2025-12-05:

      Update from the 2025-12-03 meeting:

      -Zharden-sls

      Wesley reviewed it again and provided a qualification; more changes were requested.

      Getting Rust for Linux into stable Rust: language features (rust-lang/rust-project-goals#116)

      Progress |
      ---|---
      Point of contact | Tomas Sedovic
      Champions | lang (Josh Triplett), lang-docs (TC)
      Task owners | Ding Xiang Fei

      2 detailed updates available.

      Comment by @tomassedovic posted on 2025-12-05:

      Update from the 2025-12-03 meeting.

      Deref / Receiver

      Ding keeps working on the Reference draft. The idea still hasn't spread widely and people are not yet convinced it is a good way to go. We hope the method-probing section in the Reference PR will clear things up.

      We're keeping the supertrait auto-impl experiment as an alternative.

      RFC #3851: Supertrait Auto-impl

      Ding addressed Predrag's requests on SemVer compatibility. He's also opened an implementation PR: https://github.com/rust-lang/rust/pull/149335. Here's the tracking issue: https://github.com/rust-lang/rust/issues/149556.

      derive(CoercePointee)

      Ding opened a PR to require additional checks for DispatchFromDyn: https://github.com/rust-lang/rust/pull/149068

      In-place initialization

      Ding will prepare material for a discussion at the LPC (Linux Plumbers Conference). We're looking to hear feedback on the end-user syntax for it.

      The feature is getting quite large; Ding will check with Tyler on whether this might need a series of RFCs.

      The various proposals on the table continue to be discussed and there are signs (albeit slow) of convergence. The placing-function and guaranteed-return proposals are superseded by the outpointer one; the more ergonomic ideas can be built on top. The guaranteed-value-placement proposal would be valuable in the compiler regardless, and we're waiting for Olivier to refine it.

      The feeling is that we've now clarified the constraints that the proposals must operate under.

      Field projections

      Nadri's Custom places proposal is looking good at least for the user-facing bits, but the whole thing is growing into a large undertaking. Benno's been focused on academic work that's getting wrapped up soon. The two will sync afterwards.

      Comment by @tomassedovic posted on 2025-12-18:

      Quick bit of great news: Rust in the Linux kernel is no longer treated as an experiment, it's here to stay 🎉

      https://lwn.net/SubscriberLink/1050174/63aa7da43214c3ce/

      Implement Open API Namespace Support (rust-lang/rust-project-goals#256)

      Progress |
      ---|---
      Point of contact | Help Wanted
      Champions | cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols)
      Task owners | b-naber, Ed Page

      3 detailed updates available.

      Comment by @sladyn98 posted on 2025-12-03:

      Ed Page, hey, I would like to contribute to this. I reached out on Zulip; bumping the post in case it went under the radar.

      CC Niko Matsakis

      Comment by @epage posted on 2025-12-03:

      The work is more on the compiler side atm, so Eric Holk and b-naber could speak more to where they could use help.

      Comment by @eholk posted on 2025-12-06:

      Hi @sladyn98 - feel free to ping me on Zulip about this.

      MIR move elimination (rust-lang/rust-project-goals#396)

      Progress |
      ---|---
      Point of contact | Amanieu d'Antras
      Champions | lang (Amanieu d'Antras)
      Task owners | Amanieu d'Antras

      1 detailed update available.

      Comment by @Amanieu posted on 2025-12-17:

      The RFC draft was reviewed in detail and Ralf Jung pointed out that the proposed semantics introduce issues because they rely on "no-behavior" (NB) with regards to choosing an address for a local. This can lead to surprising "time-traveling" behavior where the set of possible addresses that a local may have (and whether 2 locals can have the same address) depends on information from the future. For example:

      // This program has DB
      let x = String::new();
      let xaddr = &raw const x;
      let y = x; // Move out of x and de-initialize it.
      let yaddr = &raw const y;
      x = String::new(); // assuming this does not change the address of x
      // x and y are both live here. Therefore, they can't have the same address.
      assume(xaddr != yaddr);
      drop(x);
      drop(y);
      
      
      
      // This program has UB
      let x = String::new();
      let xaddr = &raw const x;
      let y = x; // Move out of x and de-initialize it.
      let yaddr = &raw const y;
      // So far, there has been no constraint that would force the addresses to be different.
      // Therefore we can demonically choose them to be the same. Therefore, this is UB.
      assume(xaddr != yaddr);
      // If the addresses are the same, this next line triggers NB. But actually this next
      // line is unreachable in that case because we already got UB above...
      x = String::new();
      // x and y are both live here.
      drop(x);
      drop(y);
      

      With that said, there is still a possibility of achieving the optimization, but the scope will need to be scaled down a bit. Specifically, we would need to:

      • no longer perform a "partial free"/"partial allocation" when initializing or moving out of a single field of a struct. The lifetime of a local starts when any part of it is initialized and ends when it is fully moved out.
      • allow a local's address to change when it is re-initialized after having been fully moved out, which eliminates the need for NB.

      This reduces the optimization opportunities since we can't merge arbitrary sub-field moves, but it still allows for eliminating moves when constructing a struct from multiple values.
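
      As a concrete example of the remaining opportunity: when a struct is built from several locals, those locals could be materialized directly in the destination instead of being copied into place. A hypothetical illustration of the optimization target (not new semantics):

      struct Pair {
          a: [u8; 1024],
          b: [u8; 1024],
      }
      
      fn make(a: [u8; 1024], b: [u8; 1024]) -> Pair {
          // Under the scaled-down rules, `a` and `b` could be placed
          // directly into the return slot, eliminating two large moves.
          Pair { a, b }
      }
      
      fn main() {
          let p = make([0; 1024], [1; 1024]);
          assert_eq!(p.a[0] + p.b[0], 1);
      }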

      The next step is for me to rework the RFC draft to reflect this.

      Prototype a new set of Cargo "plumbing" commands (rust-lang/rust-project-goals#264)

      Progress |
      ---|---
      Point of contact | Help Wanted
      Task owners | Help wanted, Ed Page
      No detailed updates available.

      Prototype Cargo build analysis (rust-lang/rust-project-goals#398)

      Progress |
      ---|---
      Point of contact | Weihang Lo
      Champions | cargo (Weihang Lo)
      Task owners | Help wanted, Weihang Lo

      2 detailed updates available.

      Comment by @weihanglo posted on 2025-12-13:

      Key developments : The HTML replay logic has merged. Once it reaches nightly, cargo report timings can open a timing report you previously logged.

      • https://github.com/rust-lang/cargo/pull/16377
      • https://github.com/rust-lang/cargo/pull/16378
      • https://github.com/rust-lang/cargo/pull/16382

      Blockers : None, except my own availability

      Help wanted : Same as https://github.com/rust-lang/rust-project-goals/issues/398#issuecomment-3571897575

      Comment by @weihanglo posted on 2025-12-26:

      Key developments :

      Headline: if you are using nightly and want timing data to always be available, you should enable build analysis locally:

      [unstable]
      build-analysis = true
      
      [build.analysis]
      enabled = true
      
      • More log events are emitted: https://github.com/rust-lang/cargo/pull/16390
        • dependency resolution time
        • unit-graph construction
        • unit-registration (which contain unit metadata)
      • Timing replay from cargo report timings now has near feature parity with cargo build --timings, except for CPU usage: https://github.com/rust-lang/cargo/pull/16414
      • Renamed the rebuild event to unit-fingerprint; it is now also emitted for fresh units: https://github.com/rust-lang/cargo/pull/16408
      • Proposed a new cargo report sessions command so that people can retrieve previous session IDs, not just the latest one: https://github.com/rust-lang/cargo/pull/16428
      • Proposed removing --timings=json, since timing info in log files should be a great replacement: https://github.com/rust-lang/cargo/pull/16420
      • Documentation efforts for man pages for nested cargo report commands: https://github.com/rust-lang/cargo/pull/16430 and https://github.com/rust-lang/cargo/pull/16432

      Besides implementation work, we also discussed:

      • The interaction of --message-format and the structured logging system, as well as log event schemas and formats: https://rust-lang.zulipchat.com/#narrow/channel/246057-t-cargo/topic/build.20analysis.20log.20format/with/558294271
      • A better name for RunId. We may lean towards SessionId, which is a common name in the logging/tracing ecosystem.
      • Giving nested Cargo calls a sticky session ID, or at least a way to show they were invoked from the same top-level Cargo call.

      Blockers : None, except my own availability

      Help wanted : Same as https://github.com/rust-lang/rust-project-goals/issues/398#issuecomment-3571897575

      reflection and comptime (rust-lang/rust-project-goals#406)

      Progress |
      ---|---
      Point of contact | Oliver Scherer
      Champions | compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett)
      Task owners | oli-obk

      1 detailed update available.

      Comment by @oli-obk posted on 2025-12-15:

      Updates

      • https://github.com/rust-lang/rust/pull/148820 adds a way to mark functions and intrinsics as only callable during CTFE
      • https://github.com/rust-lang/rust/pull/144363 has been unblocked and just needs some minor cosmetic work

      Blockers

      • https://github.com/rust-lang/rust/pull/146923 (reflection MVP) has not been reviewed yet

      Rework Cargo Build Dir Layout (rust-lang/rust-project-goals#401)

      Progress |
      ---|---
      Point of contact | Ross Sullivan
      Champions | cargo (Weihang Lo)
      Task owners | Ross Sullivan

      1 detailed update available.

      Comment by @ranger-ross posted on 2025-12-23:

      Status update December 23, 2025

      The majority of December was spent iterating on https://github.com/rust-lang/cargo/pull/16155. As mentioned in the previous update, the original locking design was not correct and we have been working through other solutions.

      As locking is tricky to get right and there are many scenarios Cargo needs to support, we are trying to descope the initial implementation to an MVP, even if that means we lose some of the concurrency. Once we have an MVP on nightly, we can start gathering feedback on the scenarios that need improvement and iterate.

      I'm hopeful that we get an unstable -Zfine-grain-locking on nightly in January for folks to try out in their workflows.


      Also we are considering adding an opt-in for the new build-dir layout using an env var (CARGO_BUILD_DIR_LAYOUT_V2=true) to allow tool authors to begin migrating to the new layout. https://github.com/rust-lang/cargo/pull/16336

      Before stabilizing this, we are doing a crater run to test the impact of the changes and proactively reaching out to projects to minimize breakage as much as possible. https://github.com/rust-lang/rust/pull/149852

      Run more tests for GCC backend in the Rust's CI (rust-lang/rust-project-goals#402)

      Progress | Completed
      ---|---
      Point of contact | Guillaume Gomez
      Champions | compiler (Wesley Wiser), infra (Marco Ieni)
      Task owners | Guillaume Gomez
      No detailed updates available.

      Rust Stabilization of MemorySanitizer and ThreadSanitizer Support (rust-lang/rust-project-goals#403)

      Progress |
      ---|---
      Point of contact | Jakob Koschel
      Task owners | [Bastian Kersting](https://github.com/1c3t3a), [Jakob Koschel](https://github.com/jakos-sec)

      1 detailed update available.

      Comment by @jakos-sec posted on 2025-12-15:

      Based on the gathered feedback, I opened a new MCP for the proposed new Tier 2 targets with sanitizers enabled. (https://github.com/rust-lang/compiler-team/issues/951)

      Rust Vision Document (rust-lang/rust-project-goals#269)

      Progress |
      ---|---
      Point of contact | Niko Matsakis
      Task owners | vision team
      No detailed updates available.

      rustc-perf improvements (rust-lang/rust-project-goals#275)

      Progress |
      ---|---
      Point of contact | James
      Champions | compiler (David Wood), infra (Jakub Beránek)
      Task owners | James, Jakub Beránek, David Wood

      1 detailed update available.

      Comment by @Kobzol posted on 2025-12-15:

      We have enabled the second x64 machine, so we now have benchmarks running in parallel 🎉 There are some smaller things to improve, but next year we can move onto running benchmarks on Arm collectors.

      Stabilize public/private dependencies (rust-lang/rust-project-goals#272)

      Progress |
      ---|---
      Point of contact | Help Wanted
      Champions | cargo (Ed Page)
      Task owners | Help wanted, Ed Page
      No detailed updates available.

      Stabilize rustdoc doc_cfg feature (rust-lang/rust-project-goals#404)

      Progress |
      ---|---
      Point of contact | Guillaume Gomez
      Champions | rustdoc (Guillaume Gomez)
      Task owners | Guillaume Gomez

      1 detailed update available.

      Comment by @GuillaumeGomez posted on 2025-12-17:

      Opened the stabilization PR, but there are blockers I hadn't heard of, so stabilization will be postponed until they're resolved.

      SVE and SME on AArch64 (rust-lang/rust-project-goals#270)

      Progress |
      ---|---
      Point of contact | David Wood
      Champions | compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras)
      Task owners | David Wood

      3 detailed updates available.

      Comment by @davidtwco posted on 2025-12-15:

      I haven't made any progress on Deref::Target yet, but I have been focusing on landing rust-lang/rust#143924, which has gone through two rounds of review and will hopefully be approved soon.

      Comment by @nikomatsakis posted on 2025-12-18:

      Update: David and I chatted on Zulip. Key points:

      David has made "progress on the non-Sized Hierarchy part of the goal, the infrastructure for defining scalable vector types has been merged (with them being Sized in the interim) and that'll make it easier to iterate on those and find issues that need solving".

      On the Sized hierarchy part of the goal, no progress. We discussed options for migrating. There seem to be three big options:

      (A) The conservative-but-obvious route, where T: Deref in the old edition is expanded to T: Deref<Target: SizeOfVal> (but in the new edition it means T: Deref<Target: Pointee>, i.e., no additional bounds). The main downside is that new-edition code using T: Deref can't call old-edition code using T: Deref, as the old-edition code has stronger bounds. Therefore new-edition code must either use stronger bounds than it needs or wait until that old-edition code has been updated.

      (B) You do something smart with Edition.Old code where you figure out whether the bound can be loose or strict by bottom-up computation. So T: Deref in the old edition could mean either T: Deref<Target: Pointee> or T: Deref<Target: SizeOfVal>, depending on what the function actually does.

      (C) You make Edition.Old code always mean T: Deref<Target: Pointee> and you still allow calls to size_of_val but have them cause post-monomorphization errors if used inappropriately. In Edition.New you use stricter checking.

      Options (B) and (C) have the downside that changes to the function body (adding a call to size_of_val, specifically) in the old edition can stop callers from compiling. In the case of Option (B), that breakage is at type-check time, because it can change the where-clauses. In Option (C), the breakage is post-monomorphization.

      Option (A) has the disadvantage that it takes longer for the new bounds to roll out.

      Given this, (A) seems the preferred path. We discussed options for how to encourage that roll-out. We discussed the idea of a lint that would warn Edition.Old code that its bounds are stronger than needed and suggest rewriting to T: Deref<Target: Pointee> to explicitly disable the stronger Edition.Old default. This lint could be implemented in one of two ways:

      • at type-check time, by tracking what parts of the environment are used by the trait solver. This may be feasible in the new trait solver, someone from @rust-lang/types would have to say.
      • at post-mono time, by tracking which functions actually call size_of_val and propagating that information back to callers. You could then compare against the generic bounds declared on the caller.

      The former is more useful (knowing what parts of the environment are necessary could be useful for more things, e.g., better caching); the latter may be easier or more precise.

      Comment by @nikomatsakis posted on 2025-12-19:

      Update to the previous post.

      Tyler Mandry pointed me at this thread, where lcnr posted a nice blog post detailing more about (C).

      Key insights:

      • Because the use of size_of_val would still cause post-mono errors when invoked on types that are not SizeOfVal, you know that adding SizeOfVal into the function's where-clause bounds is not a breaking change, even though adding a where clause is a breaking change more generally.
      • But, to David Wood's point, it does mean that there is a change to Rust's semver rules: adding size_of_val would become a breaking change, where it is not today.

      This may well be the best option though, particularly as it allows us to make changes to the defaults across-the-board. A change to Rust's semver rules is not a breaking change in the usual sense. It is a notable shift.

      Type System Documentation (rust-lang/rust-project-goals#405)

      Progress |
      ---|---
      Point of contact | Boxy
      Champions | types (Boxy)
      Task owners | Boxy, lcnr

      1 detailed update available.

      Comment by @BoxyUwU posted on 2025-12-30:

      This month I've written some documentation for how const generics are implemented in the compiler. This mostly covers the implementation of the stable functionality, as the unstable features are quite in flux right now. These docs can be found here: https://rustc-dev-guide.rust-lang.org/const-generics.html

      Unsafe Fields (rust-lang/rust-project-goals#273)

      Progress |
      ---|---
      Point of contact | Jack Wrenn
      Champions | compiler (Jack Wrenn), lang (Scott McMurray)
      Task owners | Jacob Pratt, Jack Wrenn, Luca Versari
      No detailed updates available.

    23. 🔗 Szymon Kaliski Q4 2025 rss

      Independent Consulting, and Interfacing with LLMs

  4. January 04, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-04 rss

      IDA Plugin Updates on 2026-01-04

      Activity:

    2. 🔗 r/reverseengineering I made a steganalysis tool, it caught one of the most discreet algos rss
    3. 🔗 r/LocalLLaMA GLM-Image model from Z.ai is coming rss
    4. 🔗 Anton Zhiyanov Fear is not advocacy rss

      AI advocates seem to be the only kind of technology advocates who feel this imminent urge to constantly criticize developers for not being excited enough about their tech.

      It would be crazy if I presented new Go features like this:

      If you still don't use the synctest package, all your systems will eventually succumb to concurrency bugs.

      or

      If you don't use iterators, you have absolutely nothing interesting to build.

      The job of an advocate is to spark interest, not to reproach people or instill FOMO. And yet that's exactly what AI advocates do.

      What a weird way to advocate.

      It's okay not to be early

      This whole "devote your life to AI right now, or you'll be out of a job soon" narrative is false.

      You don't have to be a world-class algorithm expert to write good software. You don't have to be a Linux expert to use containers. And you don't have to spend all your time now trying to become an expert in chasing ever-changing AI tech.

      As with any new technology, developers adopting AI typically fall into four groups: early adopters, early majority, late majority, and laggards. Right now, AI advocates are trying to shame everyone into becoming early adopters. But it's perfectly okay to wait if you're sceptical. Being part of the late majority is a safe and reasonable choice. If anything, you'll have fewer bugs to deal with.

      As the industry adopts AI practices, you'll naturally absorb just the right amount of them.

      You are going to be fine.

    5. 🔗 sacha chua :: living an awesome life Using whisper.el to convert speech to text and save it to the currently clocked task in Org Mode or elsewhere rss

      Update: Added a note about the difference from the MELPA package; fixed :vc.

      I want to get my thoughts into the computer quickly, and talking might be a good way to do some of that. OpenAI Whisper is reasonably good at recognizing my speech now and whisper.el gives me a convenient way to call whisper.cpp from Emacs with a single keybinding. (Note: This is not the same whisper package as the one on MELPA.) Here is how I have it set up for reasonable performance on my Lenovo P52 with just the CPU, no GPU.

      I've bound <f9> to the command whisper-run. I press <f9> to start recording, talk, and then press <f9> to stop recording. By default, it inserts the text into the buffer at the current point. I've set whisper-return-cursor-to-start to nil so that I can keep going.

      (use-package whisper
        :vc (:url "https://github.com/natrys/whisper.el")
        :load-path "~/vendor/whisper.el"
        :config
        (setq whisper-quantize "q4_0")
        (setq whisper-install-directory "~/vendor")
        ;; Get it running with whisper-server-mode set to nil first before you switch to 'local.
        ;; If you change models,
        ;; (whisper-install-whispercpp (whisper--check-install-and-run nil "whisper-start"))
        (setq whisper-server-mode 'local)
        (setq whisper-model "base")
        (setq whisper-return-cursor-to-start nil)
        (setq whisper--ffmpeg-input-device "alsa_input.usb-Blue_Microphones_Yeti_Stereo_Microphone_REV8-00.analog-stereo")
        ;(setq whisper--ffmpeg-input-device "VirtualMicSink.monitor")
        (setq whisper--ffmpeg-input-device "audiorelay-virtual-mic-sink:monitor_FL")
        (setq whisper-language "en")
        (setq whisper-before-transcription-hook nil)
        (setq whisper-use-threads (1- (num-processors)))
        (setq whisper-transcription-buffer-name-function 'whisper--simple-transcription-buffer-name)
        (add-hook 'whisper-after-transcription-hook 'my-subed-fix-common-errors-from-start)
        :bind
        (("<f9>" . whisper-run)
         ("C-<f9>" . my-whisper-org-capture-to-clock)
         ("S-<f9>" . my-whisper-replay)
         ("M-<f9>" . my-whisper-toggle-language)))
      

      The technology isn't quite there yet to do real-time audio transcription so that I can see what it understands while I'm saying things, but that might be distracting anyway. If I do it in short segments, it might still be okay. I can replay the most recently recorded snippet in case it's missed something and I've forgotten what I just said.

      (defun my-whisper-replay ()
        "Replay the last temporary recording."
        (interactive)
        ;; Needs the mpv package for `mpv-play'.
        (require 'mpv)
        (mpv-play whisper--temp-file))
      

      Il peut aussi comprendre le français. (It can also understand French.)

      (defun my-whisper-toggle-language ()
        "Set the language explicitly, since sometimes auto doesn't figure out the right one."
        (interactive)
        (setq whisper-language (if (string= whisper-language "en") "fr" "en"))
        ;; If using a server, we need to restart for the language
        (when (process-live-p whisper--server-process) (kill-process whisper--server-process))
        (message "%s" whisper-language))
      

      I could use this with org-capture, but that's a lot of keystrokes. My shortcut for org-capture is C-c r. I need to press at least one key to set the template, <f9> to start recording, <f9> to stop recording, and C-c C-c to save it. I want to be able to capture notes to my currently clocked in task without having an Org capture buffer interrupt my display.

      To clock in, I can use C-c C-x i or my ! speed command. Bonus: the modeline displays the current task to keep me on track, and I can use org-clock-goto (which I've bound to C-c j) to jump to it.

      Then, when I'm looking at something else and I want to record a note, I can press <f9> to start the recording, and then C-<f9> to save it to my currently clocked task along with a link to whatever I'm looking at.

      (defvar my-whisper-org-target nil
        "*Where to save the target.
      
      Nil means jump to the current clocked-in entry and insert it along with
      a link, or prompt for a capture template if nothing is clocked in.
      
      If this is set to a string, it should specify a key from
      `org-capture-templates'. The text will be in %i, and you can use %a for the link.
      For example, you could have a template entry like this:
      \(\"c\" \"Contents to current clocked task\" plain (clock) \"%i%?\n%a\" :empty-lines 1)
      
      If this is set to a function, the function will be called from the
      original marker with the text as the argument. Note that the window
      configuration and message will not be preserved after this function is
      run, so if you want to change the window configuration or display a
      message, add a timer.")
      
      (defun my-whisper-org-capture-to-clock ()
        "Record audio and save the transcription based on `my-whisper-org-target'."
        (interactive)
        (require 'whisper)
        (add-hook 'whisper-after-transcription-hook #'my-whisper-org-save 50)
        (whisper-run))
      
      (defun my-whisper-org-save ()
        "Save the transcription."
        (let ((text (string-trim (buffer-string))))
          (remove-hook 'whisper-after-transcription-hook #'my-whisper-org-save)
          (erase-buffer)      ; stops further processing
          (save-window-excursion
            (with-current-buffer (marker-buffer whisper--marker)
              (goto-char whisper--marker)
              (cond
               ((functionp my-whisper-org-target)
                (funcall my-whisper-org-target text))
               (my-whisper-org-target
                (setq org-capture-initial text)
                (org-capture nil my-whisper-org-target)
                (org-capture-finalize)
                ;; Delay the display of the message because whisper--cleanup-transcription clears it
                (run-at-time 0.5 nil (lambda (text) (message "Captured: %s" text)) text))
               ((org-clocking-p)
                (let ((link (org-store-link nil)))
                  (org-clock-goto)
                  (org-end-of-subtree)
                  (unless (bolp)
                    (insert "\n"))
                  (insert "\n" text "\n" link "\n"))
                (run-at-time 0.5 nil (lambda (text) (message "Added clock note: %s" text)) text))
               (t
                (kill-new text)
                (setq org-capture-initial text)
                (call-interactively 'org-capture)
                ;; Delay the window configuration
                (let ((config (current-window-configuration)))
                  (run-at-time 0.5 nil
                               (lambda (text config)
                                 (set-window-configuration config)
                                 (message "Copied: %s" text))
                               text config))))))))
      

      Here's an idea for a my-whisper-org-target function that saves the recognized text with a timestamp.

      (defvar my-whisper-notes "~/sync/stream/narration.org")
      (defun my-whisper-save-to-file (text)
        "Append TEXT to `my-whisper-notes' with a timestamp and a link."
        (let ((link (org-store-link nil)))
          (with-current-buffer (find-file-noselect my-whisper-notes)
            (goto-char (point-max))
            (insert "\n\n" (format-time-string "%H:%M ") text "\n" link "\n")
            (save-buffer)
            (run-at-time 0.5 nil (lambda (text) (message "Saved to file: %s" text)) text))))
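
      To route dictated notes to that file instead of the clocked task, point the variable at the function (a usage sketch inferred from the my-whisper-org-target docstring above, not from the original post):

        ;; Send C-<f9> captures to the narration file:
        (setq my-whisper-org-target #'my-whisper-save-to-file)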
      

      I think I've just figured out my Pipewire setup so that I can record audio in OBS while also being able to do speech to text, without the audio stuttering. qpwgraph was super helpful for visualizing the Pipewire connections and fixing them. Actually making a demonstration video will probably need to wait for another day, though!

      This is part of my Emacs configuration.

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    6. 🔗 Kevin Lynagh Tools I loved in 2025 rss

      Hi friends!

      While answering 40 questions to ask yourself every year, I realized I'd adopted a bunch of new tools over 2025. All of them have improved my life, so I want to share them with you so they might improve yours too =D

      A common theme is that all of my favorite tools promote a sort of coziness: They help you tailor the spaces where you spend your time to your exact needs and preferences.

      Removing little annoyances and adding personal touches -- whether to your text editor or your kitchen -- not only improves your day-to-day mood, but cultivates a larger sense of security, ownership, and agency, all of which I find essential to doing great work. (And making great snacks.)

      Physical

      Workshop space

      Last summer we moved to a ground-level apartment, giving me (for the first time in my life) my very own little workshop space. We're just renting the apartment, so building a larger shed isn't an option, but it turns out that one can get quite a lot done with floor area barely larger than a sheet of plywood. I designed a Paulk-style workbench top, cut it out on my friend's CNC, and mounted it all on screwed-together 2x4 legs.

      So far I've mostly worked with cheap plywood from Hornbach, using the following tools:

      • Makita track saw for both rough and precision cuts. There's a handy depth stop that makes it easy to do a shallow scoring cut before making a full cut, which reduces tearing out the thin veneer.
      • Fein wet/dry vac to collect dust/chips. It includes an electrical outlet you can plug tools into so the vacuum automatically turns on when the tool is drawing power, which is great.
      • My DIY powered air respirator built around a 3M Versaflo helmet has been working great -- it's so much more comfortable than fiddling with separate respirators and eye + ear protection. Since it takes 15 seconds to take on/off, I'm pretty much always wearing it when I'm doing anything in the shop.
      • Record Power Sabre 250 desktop bandsaw for quick cross cuts. The accuracy isn't great, but I haven't tried replacing the blade or tuning it much yet.
      • Bosch 12V palm edge router, extremely fun to use for rounding over and chamfering edges.
      • Bosch 12V drill/driver, lightweight and compact, with a handy right-angle attachment that I've actually used.
      • Bosch PBD 40, a drill press with digital speed control and a zeroable digital depth gauge with 0.1mm precision. At $250, the 0.5mm play in the chuck is forgivable.

      The workbench has MFT-style 20mm "dog holes" on a 96mm grid, which allows for all sorts of useful accessories. For example, I purchased Hooked on Wood's bench dog rail hinges, which make it easy to flip the track up/down to slide wood underneath to make cross cuts. The $10 dog hole clamp on the right holds a stop block, which allows me to make repeated cuts.

      Since the dog holes were cut with a CNC and the rail is straight, square cuts can be made by:

      • making a known straight cut with the track saw on its track (whether on the workbench or outside on sawhorses)
      • pushing this reference edge up against the two bench dogs on the top
      • cutting along the track

      See Bent's Woodworking for more detail on this process. While I wish I had space for a full-size sliding table saw and/or CNC, this workbench and track saw seem like a decent backyard-shed solution.

      LED lighting

      Last winter I decided to fight the gloomy Dutch weather by buying a bunch of high-CRI LEDs to flood my desk with artificial daylight; see the build log for more details. This lighting has worked out swimmingly -- it helps me wake up in the morning and makes the whole space feel nice even on the grayest rainy day. After sundown, my computer switches to "dark mode" colors and I switch the room to cozier warm-white LEDs. I had about 8 meters of LED strip leftover, which I used with LED diffuser profiles to illuminate my workshop.

      Euroboxes

      When we moved into our new (completely unfurnished) apartment over the summer, I was adamant I'd build all of the furniture we needed. However, sightly storage solutions have taken longer than anticipated, so to eliminate the immediate clutter I purchased a bunch of 600x400x75mm euroboxes. At $4 each (used), they're an absolute bargain.
      Since they're plastic, they slide great on waxed wood rails and make perfect lil' utility drawers. The constraint of needing to use fixed-size drawers makes it easier for me to design the rest of a furniture piece around them. For example, just take a brad nailer to the limitless amounts of discarded particle board available on the streets of Amsterdam, and boom, a one-day-build 3d-printer cart in the meter closet. Or, throw some drawers in the hidden side of these living room plywood coffee table / bench / scooters. Now we have a tidy place to hide the TV remote and wireless keyboard/mouse, stash coffee table books, store extra blankets, etc.

      Ikea Skadis coffee/smoothie station

      Our kitchen only has 1.6 m² (17 ft²) of counter space, so we mounted two Ikea Skadis pegboards to the side of our fridge to make a coffee / smoothie station. The clear Ikea containers are great for nuts and coffee since you can grab them with one hand and drop 'em in the blender or grinder. I designed and 3d-printed custom mounts for my Clever Dripper, coffee grinder (1Zpresso k-series), little water spritzer, and bottles of Friedhats beans.

      Since we can't screw into the fridge, the panels are hanging from some 3d-printed hooks command-stripped to the top of the fridge cabinet. Clear nano-tape keeps the Skadis boards from swinging. The cheap Ninja cup blender is quite loud, so we leave a pair of Peltor X4 ear defenders hanging next to it.

      Ikea Maximera drawers

      As soon as we made the smoothie station, we decided to replace the cabinet shelves underneath it with drawers. (The only drawers that came with the kitchen were installed underneath the range, creating constant conflict between the cook and anyone needing silverware.)

      Decent soft-close undermount drawer slides from Blum/Hettich cost like $30 each, and for the same price Ikea sells slides with a drawer box attached. Since we're renting and can't make permanent changes to the kitchen, I built a carcass for the drawers within the existing cabinet. The particle board sides carry the weight of the drawers to the base of the existing cabinet, and they're fixed to the walls with nano-tape rather than screws. Nicki 3d-printed cute pulls and we threw those onto temporary fronts made of leftover MDF. As you can see from the hardware poking through, these 8mm fronts are a bit too thin, so I plan to replace them with thicker MDF, probably sculpted with a goofy pattern in the style of Arno Hoogland.

      Having drawers is awesome: We bought a bunch of extra cups for the Ninja blender and keep them pre-filled with creatine and protein powder. The bottom drawer holds the dozen varieties of peanut butter remaining from the Gwern-inspired tasting I held in November. (The UK's Pip & Nut peanut butters were the crowd favorites, by the way.)

      Digital

      Emacs and LLMs

      I've used Emacs for something like 15 years, but after the first year or so I deliberately avoided trying to customize it, as that felt like too much of a distraction from the more important yaks I was shaving through my 20's and early 30's. However, in early 2025 I decided to dive in, for two reasons:

      • I ran across Prot's video demonstrating a cohesive set of well-designed search and completion packages, which suggested to me that there were interesting ideas being developed in the "modern" Emacs community
      • I discovered gptel, which makes it extremely easy to query large language models within Emacs -- just highlight text, invoke gptel, and the response is inserted right there.
      What's special about Emacs compared to other text editors is that it's extremely easy to customize. Rather than a "text editor", Emacs is better thought of as an operating system and live programming environment which just so happens to have a lot of functions related to editing text.

      My day-to-day coding, writing, and thinking environment within Emacs has improved tremendously in 2025: every time I've had any sort of customization or workflow-improvement idea, I've been able to try it out in just a minute or two by having an LLM generate the necessary Emacs Lisp code. My mentality changed from "yeah, I'm sure it's possible, but I don't have the time or interest to figure out how to do it with this archaic and quirky programming language" to "let me spend two minutes trying". Turns out there are a lot of little improvements that can be done by an LLM in a few minutes. Here are some examples:

      Literally for this article, I asked the LLM to write a Ruby method for my static site generator to render a table of contents (which you can see above!).

      Lots of places don't support markdown natively, so I had an LLM write me an Emacs function to render selected markdown text to HTML on the pasteboard, which lets me write in Emacs and then just ⌘V in Gmail to get a nicely formatted message.

      When I write markdown and want to include an image, it's annoying to have to copy/paste in the path to the image, so I had an LLM write me an autocomplete against all of the image files in the same directory as the markdown file. I've been using this for pretty much every article/newsletter I write now, since they usually have images.

      I keep a daily activity log as a simple EDN file with entries like:

        {:start #inst "2025-12-28T12:00+01:00" :tags #{"lunch" "nicki"} :duration 2}
        {:start #inst "2025-12-28T09:00+01:00" :tags #{"paneltron" "computering"} :duration 6 :description "rough out add-part workflow."}
        {:start #inst "2025-12-28T08:20+01:00" :tags #{"wakeup"}}

      (Everything's rounded to the half-hour.) I started this when I was billing by the hour (a decade ago), and have kept it up because it's handy to have a lightweight, low-friction record of what I've been up to. I used to do occasional analysis manually via a REPL, but couldn't sleep one night, so I spent 30 minutes having an LLM throw together a visual summary which I can invoke for whatever week is under my cursor. It looks like:

        Week: 2025-12-22 to 2025-12-28
        Locations: Amsterdam - Friday 2025-12-19T13:30+01:00
        computering 16.5h ##################
        box-carts 14.5h ###############
        woodworking 13.0h ##############
        llms 8.0h ########
        dinner 7.0h #######

      and includes my most recent :location (an attribute I started tagging entries with to help keep track of visa limitations while traveling). I'm extremely chuffed about having quick access to weekly summaries and suspect that tying that to my existing habit of recording entries will be a good intra-week feedback loop about whether I'm spending time in alignment with my priorities.

      Whenever I write an article or long message, before sending it I run it by an LLM with the following prompt:

        my friend needs feedback on this article -- are there any typos, confusing sentences, or other things that could be improved? Be blunt and I'll convey the feedback to my friend in a friendly way.

      This one doesn't even involve any code; it's just a habit that's easy because it's easy to call an LLM from within Emacs. LLMs will note repeated words, incorrect homonyms, and awkward sentences that simple spell-checkers miss.
      Here's an example from this very article:

      1. Double parenthesis: `))` -- remove one set
      2. "Ikea Maximara drawers" in the heading, but the product is actually "Maximera"
      3. `http://localhost:9292/newsletter/2025_06_03_prototyping_a_language/` -- you've left a localhost URL in the FlowStorm section

      Emacs has a pretty smooth gradient from "adjust a keybinding", to "quick helper function", to a full-on workflow. Here's an example from the latter end of that spectrum. I asked Claude Code to make some minor CSS modifications for a project, then got nerd-sniped trying to understand why it used a million tokens to explore my 4000-word codebase and edit a dozen lines:

        Usage by model:
        claude-haiku: 8.6k input, 5.4k output, 434.2k cache read, 33.4k cache write ($0.1207)
        claude-sonnet: 1.0k input, 262 output, 0 cache read, 0 cache write ($0.0069)
        claude-opus-4-5: 214 input, 8.3k output, 842.5k cache read, 47.8k cache write ($0.93)

      After a bit of digging, it seemed likely this was a combination of factors:

      • Claude Code's system and tool prompts
      • repeatedly invoking tools to grep around the directory and read files 200 lines at a time
      • absolute nonsense -- Mario Zechner has some great analysis on this (fun fact: "Claude Code uses Haiku to generate those little whimsical "please wait" messages. For every. token. you. input.")

      For comparison, I invoked Opus 4.5 manually with all of my source code and asked what I needed to change, and it nailed the answer using only 5000 tokens (4500 input, 500 output). So I leaned into this and wrote my own lightweight, single-shot workflow in Emacs. I write something like:

        @rewrite /path/to/file1 /path/to/file2
        Please do X, Y, Z, thanks!

      and when I send it, some elisp code:

      • adds the specified files to the context window
      • sets the system prompt to be "please reply only with a lil' string replacement patch format that looks like …"
      • sends the LLM response to a Babashka script which applies this patch, sandboxed to just the specified files.

      I've used it a handful of times so far and it works exactly the way I'd imagined -- it's much faster than waiting for an "agent" to make a dozen tool calls, and lets me take advantage of LLMs for tedious work while remaining in the loop. (Admittedly, this one took a few hours rather than a few minutes, but it was well worth it in terms of getting some hands-on experience building a structured LLM-based workflow.)
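
      As a rough illustration of that single-shot shape (a reconstruction, not the post's actual code: it assumes the gptel package, and the patch format, @rewrite parsing, and apply-patch.bb script name are hypothetical placeholders):

      (require 'gptel)
      (require 'subr-x)  ; string-join

      ;; Hypothetical system prompt; the real patch format is elided in the post.
      (defvar my-rewrite-system-prompt
        "Reply ONLY with string-replacement patches in the agreed format.")

      (defun my-rewrite-region (beg end)
        "Send a single-shot @rewrite request built from the region BEG..END.
      The first line is \"@rewrite FILE...\"; the remaining lines are the
      instruction for the model."
        (interactive "r")
        (let* ((lines (split-string (buffer-substring-no-properties beg end) "\n"))
               (files (cdr (split-string (car lines))))  ; drop the @rewrite token
               (instruction (string-join (cdr lines) "\n"))
               ;; Inline every file's contents into the context window.
               (context (mapconcat (lambda (f)
                                     (format "FILE: %s\n%s" f
                                             (with-temp-buffer
                                               (insert-file-contents f)
                                               (buffer-string))))
                                   files "\n\n")))
          (gptel-request (concat context "\n\n" instruction)
            :system my-rewrite-system-prompt
            :callback (lambda (response _info)
                        (when (stringp response)
                          ;; Hand the patch to an external script (placeholder
                          ;; name) that applies it, sandboxed to just FILES.
                          (call-process-region response nil "bb" nil
                                               "*rewrite-patch*" nil
                                               "apply-patch.bb"
                                               (string-join files ",")))))))

      The appeal over an agent loop is the shape of the exchange: one request with the files inlined, one response, and a write path sandboxed by the patch-applying script.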

      A little Photos.app exporter

      When you copy images out of Photos.app, it only puts a 1024-pixel-wide "preview image" on the pasteboard. This loses too much detail.

      The built-in "export" workflow is tedious to use and doesn't actually compress well, so you have to do another pass through Squoosh or ImageOptim anyway. Of course, you'll want to resize before you do that, maybe in Preview?

      I noticed how annoyed I was trying to add photos to my articles, so I vibe-coded a little Photos.app exporter that behaves exactly how I want.

      UV (Python dependencies)

      I haven't done much Python in my career, but it's a popular language that I'd occasionally need to tap into to use some specific library (e.g., OpenCV, NumPy, JAX) or run the code behind some interesting research paper.

      The package management story has always been (from my perspective as a casual outsider) an absolute mess. Yes, with lots of good reasons related to packaging native code across different operating systems and architectures, etc. etc. etc.

      Whatever, if pip install failed to build my eggs or wheels or whatever, I'd usually just give up.

      As for reproducibility and pinning to exact versions…¯\_(ツ)_/¯.

      "Maybe if I do nothing the problem will fix itself" isn't a great problem solving strategy, but it definitely worked out for me for understanding Python dependency managers: I came across UV in late 2024 and it…just works? It's also fast!

      I can now finally create a Python project, specify some dependencies, and automatically get a lockfile ensuring it'll work just fine on my other computer, 6 months from now!

      This problem has been a thorn in the side of not just software developers, but also scientific researchers for decades. The folks who've finally resolved it deserve, like, the Nobel Peace prize.

      Mise-en-place (all dependencies)

      This past year I've also been loving mise-en-place. From their homepage:

      mise is a tool that manages installations of programming language runtimes and other tools for local development. For example, it can be used to manage multiple versions of Node.js, Python, Ruby, Go, etc. on the same machine.

      Once activated, mise can automatically switch between different versions of tools based on the directory you're in. This means that if you have a project that requires Node.js 18 and another that requires Node.js 22, mise will automatically switch between them as you move between the two projects.

      I've used language-specific versions of this idea, and it's been refreshing to throw all of those out in favor of a single uniform, fast solution.

      I have a user-wide "default" config, containing stuff like:

      • languages (a JVM, clojure, ruby)
      • language servers (clojure-lsp, rust analyzer, etc.)
      • git-absorb, "git commit --fixup, but automatic"
      • Difftastic, AST-aware diffing
      • Numbat, an awesome unit-aware scientific calculator

      and in specific projects I specify those same tools plus any extras, so that collaborators get a "single install" that guarantees we're all on the same versions of everything.

      Sure, something like Nix -- hermetically sealed, content-addressed, etc., etc. -- is better in theory, but the UX and conceptual model are a mess.

      Mise feels like a strong, relatively simple local optimum: You specify what you want through a simple hierarchy of configs, mise generates lockfiles, downloads the requested dependencies, and puts them on the $PATH based on where you are.

      I've been using it for a year and haven't had to learn anything beyond what I did in the first 15 minutes. It works.

      Atuin (shell history)

      Atuin records all of your shell commands, (optionally) syncs them across multiple computers, and makes it easy to recall commands with a fuzzy finder. It's all self-hostable and is distributed as a single binary.

      I find the shell history extremely useful, both for remembering infrequently used commands and for simply avoiding typing out tediously long ones with many flags. Having a proper fuzzy search that shows dozens of results (rather than just one at a time) makes it straightforward to use.

      Before this I wouldn't have thought twice about my shell command history, and now it's something I deliberately back up because it's so useful.

      YouTube Premium

      At some point in 2025 the ad-blocker I was using stopped working on YouTube and I started seeing either pauses or (uggghhh) commercials (in Dutch, which Google must think I understand, despite 15 years of English GMail, not to mention my Accept-Language header).

      Given that YouTube is a Wonder of the World and $20/month leaves me with a consumer surplus in the neighborhood of $10⁴-$10⁵, I decided to subscribe to YouTube Premium.

      Honestly I just wanted the ads to stop, but it's even better -- I can conveniently download videos to my phone, which means I can watch interesting videos while riding on the stationary exercise bike in the park near my house.

      It also comes with YouTube Music, which immediately played me a late 00's indie rock playlist that brought me back to college.

      100% worth it.

      FlowStorm (Clojure debugger)

      I don't normally use debuggers, especially since in Clojure it's usually straightforward to pretty-print the entire application state.

      However, this usual approach failed me when I was building an interpreter for some CAD programming language experiments -- each AST node holds a reference to its environment (a huge map), and simply printing a tree yields megabytes of text.

      FlowStorm takes advantage of the fact that idiomatic Clojure uses immutable data structures -- the debugger simply holds a reference to every value generated during a computation, so that you can analyze them later.

      There are facilities to search all of the recorded values. So if you see a string "foo" on your rendered webpage or whatever, you can easily answer questions like "where is the first call stack where the string 'foo' shows up?".

      All of the recorded data is available programmatically too. I used this infrastructure to make a live visualization of a 2D graphics program where as you move your cursor around the program source code, you see the closest graphical entity rendered automatically.

      ("A debounced callback triggered by cursor movement which executes Clojure code and highlights text according to the return value" is another example of an Emacs customization I never would have attempted prior to LLMs.)

      Whispertron (transcription)

      I vibe-coded my own lil' voice transcription app back in October 2024, but I'm including it in this list because using it has become second-nature to me in 2025.

      Before I had reliable transcription at my fingertips, I never felt hindered by my typing speed (~125 words per minute). However, now that I have it, I find that I'm expressing much more when I'm dictating compared to typing.

      It reminds me of the difference between responding to an email on an iPhone versus using a computer with a large monitor and full keyboard. I find myself providing much more context and otherwise elaborating my thoughts in more detail. It's just easier to speak out loud than type out the same information.

      This yields much better results when prompting LLMs: typing, I'll say "do X"; speaking, I'll say "do X, maybe try A, B, C, remember about Y and Z constraints".

      It also yields better relationships: When emailing and texting friends, I'll dictate (and then clean up / format) much more detailed, longer responses than what I'd type.

      Misc. stuff