

  1. Rust Binary Analysis 101 - Part 2 - Z3R0cool Blogs
  2. mprocs: start all your project's commands at once
  3. Jon's Arm Reference
  4. Optimize for momentum
  5. Nviso vshell report

  1. December 23, 2025
    1. 🔗 r/reverseengineering Finding Jingle Town: Debugging an N64 Game without Symbols rss
    2. 🔗 r/wiesbaden Gruppenaktivitäten für Geburtstage? :) rss

      Hi, I'm currently trying to think of what else you can do together in Wiesbaden besides going to the Superflyhalle, painting ceramics, or bowling. Do you happen to have any ideas for other places worth going with a few people for my birthday?

      Thanks! 💚

      submitted by /u/FunkINFJ
      [link] [comments]

    3. 🔗 r/reverseengineering Nintendo 64 Decomp Update: Harvest Moon 64 is now 100% decompiled! rss
    4. 🔗 langchain-ai/deepagents deepagents==0.3.1 release

      Changes since deepagents==0.3.0

      release(deepagents): 0.3.1 (#608)
      docs: fix documentation issues (#513)
      fix(deepagents): strip trailing whitespace from subagent messages to prevent Anthropic API errors (#586)
      fix(deepagents): Pass through runtime config to subagents (#602)
      chore(deepagents): test write todos from sub-agents (#605)
      fix(deepagents): exclude structured response from state update (#603)
      feat: add ability to paste images in input (#555)

    5. 🔗 r/reverseengineering Fabrice Bellard Releases MicroQuickJS rss
    6. 🔗 r/reverseengineering Fake PuTTY Installer Malware Analysis with IDA Pro rss
    7. 🔗 News Minimalist 🐢 FDA approves first weight loss pill + 8 more stories rss

      In the last 5 days ChatGPT read 146,994 top news stories. After removing previously covered events, there are 9 articles with a significance score over 5.5.

      [5.5] FDA approves first GLP-1 pill for obesity — statnews.com (+77)

      The FDA has approved the first oral GLP-1 pill for obesity, a version of Novo Nordisk’s Wegovy, potentially expanding access to effective weight loss treatments starting in January.

      The 25-milligram daily medication demonstrated 14% weight loss in trials, mirroring the efficacy of the injectable version. It also reduces cardiovascular risks and will initially cost $150 per month for the lowest dosage through direct-to-consumer channels.

      This peptide-based pill requires strict morning fasting for absorption. Meanwhile, competitor Eli Lilly is developing a small-molecule pill, orforglipron, which may offer easier manufacturing and fewer dietary restrictions once approved.

      [6.4] European governments agree to introduce a digital euro — nrc.nl (Dutch) (+5)

      European governments have agreed to create a digital euro, establishing a central bank-backed public currency to safeguard the continent’s financial sovereignty and payment resilience.

      This public currency would offer a secure alternative to commercial bank accounts and US-based payment providers. Pending European Parliament approval, the digital euro could launch by 2029 via apps or cards, featuring offline capabilities to ensure transaction continuity during cyberattacks.

      The proposal guarantees privacy and bans programmable spending to mirror the utility of physical cash. While merchants must eventually accept the currency, commercial banks remain critical of the implementation costs and competition.

      [6.1] TikTok agrees to sell US operations to American investors — theguardian.com (+93)

      TikTok has signed a binding deal to sell its United States operations to a group of American investors including Oracle and Silver Lake, preventing a ban and ensuring continued service.

      The agreement, set to close January 22, grants Oracle, Silver Lake, and MGX a combined 45 percent stake. Oracle will license TikTok’s recommendation algorithm to address long-standing national security concerns.

      Highly covered news with significance over 5.5

      [5.8] EU leaders approve €90 billion loan for Ukraine despite dissent from Hungary, Slovakia, and Czech Republic — irishtimes.com (+65)

      [5.6] FCC bans new Chinese-made drones over national security concerns — apnews.com (+21)

      [5.6] EU court rules for refugee in landmark case against Frontex — independent.co.uk (+2)

      [5.6] Austria's top court rules Meta's ad model illegal, orders overhaul of user data practices in EU — channelnewsasia.com (+4)

      [5.6] OpenAI launches an app store inside ChatGPT — tomsguide.com (+7)

      [5.5] Trump appoints special envoy to Greenland to pursue acquisition — nrc.nl (Dutch) (+148)

      Thanks for reading!

      — Vadim



    8. 🔗 r/wiesbaden Rental options rss
    9. 🔗 syncthing/syncthing v2.0.13-rc.1 release

      Major changes in 2.0

      • Database backend switched from LevelDB to SQLite. There is a migration on
        first launch which can be lengthy for larger setups. The new database is
        easier to understand and maintain and, hopefully, less buggy.

      • The logging format has changed to use structured log entries (a message
        plus several key-value pairs). Additionally, we can now control the log
        level per package, and a new log level WARNING has been inserted between
        INFO and ERROR (which was previously known as WARNING...). The INFO level
        has become more verbose, indicating the sync actions taken by Syncthing. A
        new command line flag --log-level sets the default log level for all
        packages, and the STTRACE environment variable and GUI have been updated
        to set log levels per package. The --verbose and --logflags command
        line options have been removed and will be ignored if given.

      • Deleted items are no longer kept forever in the database; instead they are
        forgotten after fifteen months. If your use case requires deletes to take
        effect after more than a fifteen-month delay, set the
        --db-delete-retention-interval command line option or corresponding
        environment variable to zero, or a longer time interval of your choosing.

      • Modernised command line options parsing. Old single-dash long options are
        no longer supported, e.g. -home must be given as --home. Some options
        have been renamed, others have become subcommands. All serve options are
        now also accepted as environment variables. See syncthing --help and
        syncthing serve --help for details.

      • Rolling hash detection of shifted data is no longer supported as this
        effectively never helped. Instead, scanning and syncing is faster and more
        efficient without it.

      • A "default folder" is no longer created on first startup.

      • Multiple connections are now used by default between v2 devices. The new
        default value is to use three connections: one for index metadata and two
        for data exchange.

      • The following platforms unfortunately no longer get prebuilt binaries for
        download at syncthing.net and on GitHub, due to complexities related to
        cross compilation with SQLite:

        • dragonfly/amd64
        • solaris/amd64
        • linux/ppc64
        • netbsd/*
        • openbsd/386 and openbsd/arm
        • windows/arm

      • The handling of conflict resolution involving deleted files has changed. A
        delete can now be the winning outcome of conflict resolution, resulting in
        the deleted file being moved to a conflict copy.

      This release is also available as:

      • APT repository: https://apt.syncthing.net/

      • Docker image: docker.io/syncthing/syncthing:2.0.13-rc.1 or ghcr.io/syncthing/syncthing:2.0.13-rc.1
        ({docker,ghcr}.io/syncthing/syncthing:2 to follow just the major version)

      What's Changed

      Fixes

      Other

      Full Changelog: v2.0.12...v2.0.13-rc.1

    10. 🔗 obra/superpowers v4.0.1 release

      Release v4.0.1

    11. 🔗 Simon Willison Cooking with Claude rss

      I've been having an absurd amount of fun recently using LLMs for cooking. I started out using them for basic recipes, but as I've grown more confident in their culinary abilities I've leaned into them for more advanced tasks. Today I tried something new: having Claude vibe-code up a custom application to help with the timing for a complicated meal preparation. It worked really well!

      A custom timing app for two recipes at once

      We have family staying at the moment, which means cooking for four. We subscribe to a meal delivery service called Green Chef, mainly because it takes the thinking out of cooking three times a week: grab a bag from the fridge, follow the instructions, eat.

      Each bag serves two portions, so cooking for four means preparing two bags at once.

      I have done this a few times now and it is always a mad flurry of pans and ingredients and timers and desperately trying to figure out what should happen when and how to get both recipes finished at the same time. It's fun but it's also chaotic and error-prone.

      This time I decided to try something different, and potentially even more chaotic and error-prone: I outsourced the planning entirely to Claude.

      I took this single photo of the two recipe cards side-by-side and fed it to Claude Opus 4.5 (in the Claude iPhone app) with this prompt:

      Extract both of these recipes in as much detail as possible

      Two recipe cards placed next to each other on a kitchen counter. Each card has detailed instructions plus photographs of steps.

      This is a moderately challenging vision task in that there is quite a lot of small text in the photo. I wasn't confident Opus could handle it.

      I hadn't read the recipe cards myself. The responsible thing to do here would be a thorough review or at least a spot-check - I chose to keep things chaotic and didn't do any more than quickly eyeball the result.

      I asked what pots I'd need:

      Give me a full list of pots I would need if I was cooking both of them at once

      Then I prompted it to build a custom application to help me with the cooking process itself:

      I am going to cook them both at the same time. Build me a no react, mobile, friendly, interactive, artifact that spells out the process with exact timing on when everything needs to happen have a start setting at the top, which starts a timer and persists when I hit start in localStorage in case the page reloads. The next steps should show prominently with countdowns to when they open. The full combined timeline should be shown slow with calculated times tor when each thing should happen

      I copied the result out onto my own hosting (you can try it here) because I wasn't sure if localStorage would work inside the Claude app and I really didn't want it to forget my times!

      Then I clicked "start cooking"!

      The recipe app shows a full timeline with 00:00 Preheat Oven and onwards, plus a big Start Cooking button. In the animation clicking the button starts a timer clicking up, adds a Do this now panel showing the Start all prep work step, shows Coming Up Next with timers counting down to the next steps and updates the full timeline to show local clock times where it previously showed durations from 00:00 upwards.

      Here's the full Claude transcript.

      There was just one notable catch: our dog, Cleo, knows exactly when her dinner time is, at 6pm sharp. I forgot to mention this to Claude, which had scheduled several key steps that collided with Cleo's mealtime. I got woofed at. I deserved it.

      To my great surprise, it worked. I followed the recipe guide to the minute and served up both meals exactly 44 minutes after I started cooking.

      A small bowl (a beautiful blue sea textured bowl, made by Natalie Downe) contains a chickpea stew. A larger black bowl has couscous, green beans and blackened cauliflower.

      The best way to learn the capabilities of LLMs is to throw tasks at them that may be beyond their abilities and see what happens. In this case I fully expected that something would get forgotten or a detail would be hallucinated and I'd end up scrambling to fix things half way through the process. I was surprised and impressed that it worked so well.

      Some credit for the app idea should go to my fellow hackers at /dev/fort 2 in 2009, when we rented Knockbrex Castle in Dumfries, Scotland for a week and attempted to build a cooking timer application for complex meals.

      Generating recipes from scratch

      Most of my other cooking experiments with LLMs have been a whole lot simpler than this: I ask for a recipe, ask for some variations and then cook one of them and see what happens.

      This works remarkably well considering LLMs have no taste buds.

      I've started to think of this as asking LLMs for the average recipe for a dish, based on all of the recipes they have hoovered up during their training. It turns out the mean version of every guacamole recipe on the internet is a decent guacamole!

      Here's an example of a recipe I tried recently that worked out really well. I was helping Natalie run her ceramic stall at the farmers market and the stall next to us sold excellent dried beans. I've never used dried beans before, so I took a photo of their selection and asked Claude what I could do with them:

      Several bags of tasty-looking beans of different varieties and colors. More bags of beans.

      Identify these beans

      It took a guess at the beans, then I said:

      Get me excited about cooking with these! If I bought two varietiew what could I make

      "Get me excited" switches Claude into a sort of hype-man mode, which is kind of entertaining:

      Oh, you're about to enter the wonderful world of bean cooking! Let me get you pumped about some killer two-bean combos: [...]

      Mixed bean salad with lemon, olive oil, fresh herbs, cherry tomatoes - light but satisfying [...]

      I replied:

      OK Bean salad has me interested - these are dried beans. Give me some salad options I can make that would last a long time in the fridge

      ... and after some back and forth we arrived at the recipe in this transcript, which I cooked the following day (asking plenty of follow-up questions) and thoroughly enjoyed.

      I've done this a bunch of times with a bunch of different recipes across both Claude and ChatGPT and honestly I've not had a notable miss yet. Being able to say "make it vegan" or "I don't have coriander, what can I use instead?" or just "make it tastier" is a really fun way to explore cooking.

      It's also fun to repeat "make it tastier" multiple times to see how absurd you can get.

      I really want someone to turn this into a benchmark!

      Cooking with LLMs is a lot of fun. There's an opportunity here for a really neat benchmark: take a bunch of leading models, prompt them for recipes, follow those recipes and taste-test the results!

      The logistics of running this are definitely too much for me to handle myself. I have enough trouble cooking two meals at once; for a solid benchmark you'd ideally have several models serving meals up at the same time to a panel of tasters.

      If someone else wants to try this please let me know how it goes!

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    12. 🔗 sacha chua :: living an awesome life 2025-12-22 Emacs news rss

      Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    13. 🔗 matklad Static Allocation For Compilers rss

      Static Allocation For Compilers

      Dec 23, 2025

      TigerBeetle famously uses “static allocation”. Infamously, the use of the term is idiosyncratic: what is meant is not static arrays, as found in embedded development, but rather a weaker “no allocation after startup” form. The amount of memory a TigerBeetle process uses is not hard-coded into the ELF binary. It depends on the runtime command line arguments. However, all allocation happens at startup, and there’s no deallocation. The long-lived event loop goes round and round happily without alloc.

      I’ve wondered for years if a similar technique is applicable to compilers. It seemed impossible, but today I’ve managed to extract something actionable from this idea?

      Static Allocation

      Static allocation depends on the physics of the underlying problem. And distributed databases have surprisingly simple physics, at least in the case of TigerBeetle.

      The only inputs and outputs of the system are messages. Each message is finite in size (1MiB). The actual data of the system is stored on disk and can be arbitrarily large. But the diff applied by a single message is finite. And, if your input is finite, and your output is finite, it’s actually quite hard to need to allocate extra memory!

      This is worth emphasizing — it might seem like doing static allocation is tough and requires constant vigilance and manual accounting for resources. In practice, I learned that it is surprisingly compositional. As long as inputs and outputs of a system are finite, non-allocating processing is easy. And you can put two such systems together without much trouble. routing.zig is a good example of such an isolated subsystem.

      The only issue here is that there isn’t a physical limit on how many messages can arrive at the same time. Obviously, you can’t process arbitrarily many messages simultaneously. But in the context of a distributed system over an unreliable network, a safe move is to drop a message on the floor if the required processing resources are not available.
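
      To make the shape of this concrete, here is a minimal Rust sketch of the "finite in, finite out, drop when full" pattern (an illustration only, not TigerBeetle code; the sizes and names are made up). Every buffer is allocated once at startup, and when the pool runs dry the caller drops the message instead of allocating more:

        const MESSAGE_SIZE_MAX: usize = 1024 * 1024; // 1 MiB, as above
        const MESSAGES_MAX: usize = 64;              // illustrative limit

        /// A pool of fixed-size message buffers, allocated once at startup.
        struct MessagePool {
            buffers: Vec<Box<[u8]>>, // each buffer is exactly MESSAGE_SIZE_MAX bytes
            free: Vec<usize>,        // indexes of currently unused buffers
        }

        impl MessagePool {
            fn new() -> Self {
                MessagePool {
                    buffers: (0..MESSAGES_MAX)
                        .map(|_| vec![0u8; MESSAGE_SIZE_MAX].into_boxed_slice())
                        .collect(),
                    // `free` never holds more than MESSAGES_MAX entries, so it
                    // never reallocates after this point.
                    free: (0..MESSAGES_MAX).collect(),
                }
            }

            /// Returns None instead of allocating when no buffer is free;
            /// the caller drops the message on the floor.
            fn acquire(&mut self) -> Option<usize> {
                self.free.pop()
            }

            fn release(&mut self, buffer: usize) {
                self.free.push(buffer);
            }
        }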

      Counter-intuitively, not allocating is simpler than allocating, provided that you can pull it off!

      For Compilers

      Alas, it seems impossible to pull it off for compilers. You could say something like “hey, the largest program will have at most one million functions”, but that will lead to both wasted memory and poor user experience. You could also use a single yolo arena of a fixed size, like I did in Hard Mode Rust, but that isn’t at all similar to “static allocation”. With arenas, the size is fixed explicitly, but you can OOM. With static allocation it is the opposite — no OOM, but you don’t know how much memory you’ll need until startup finishes!

      The “problem size” for a compiler isn’t fixed — both the input (source code) and the output (executable) can be arbitrarily large. But that is also the case for TigerBeetle — the size of the database is not fixed, it’s just that TigerBeetle gets to cheat and store it on disk, rather than in RAM. And TigerBeetle doesn’t do “static allocation” on disk, it can fail with ENOSPACE at runtime, and it includes a dynamic block allocator to avoid that as long as possible by re-using no longer relevant sectors.

      So what we could say is that a compiler consumes arbitrarily large input, and produces arbitrarily large output, but those “do not count” for the purpose of static memory allocation. At the start, we set aside an “output arena” for storing finished, immutable results of the compiler’s work. We then say that this output is accumulated after processing a sequence of chunks, where chunk size is strictly finite. While limiting the total size of the code-base is unreasonable, limiting a single file to, say, 4 MiB (runtime-overridable) is fine. Compiling then essentially becomes a “stream processing” problem, where both inputs and outputs are arbitrarily large, but the filter program itself must execute in O(1) memory.
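
      A rough sketch of that stream-processing framing (hypothetical code, not from the post): one fixed-size input buffer is reused for every file, so the transient working set stays O(1) while only the output arena grows with the code base.

        use std::fs::File;
        use std::io::Read;

        const CHUNK_SIZE_MAX: usize = 4 * 1024 * 1024; // the 4 MiB per-file limit

        /// Compile a list of source files through one reusable input buffer.
        /// Only `output` (the "output arena") grows with the size of the code base.
        fn compile_all(paths: &[&str], output: &mut Vec<u8>) -> std::io::Result<()> {
            let mut chunk = vec![0u8; CHUNK_SIZE_MAX]; // allocated once, reused per file
            for path in paths {
                let mut file = File::open(path)?;
                let len = file.read(&mut chunk)?; // sketch only: a real compiler would read to EOF
                compile_chunk(&chunk[..len], output);
            }
            Ok(())
        }

        /// Placeholder for parsing/lowering: it may use only the fixed chunk
        /// buffer plus appends into the output arena.
        fn compile_chunk(source: &[u8], output: &mut Vec<u8>) {
            output.extend_from_slice(&source[..source.len().min(16)]);
        }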

      With this setup, it is natural to use indexes rather than pointers for “output data”, which then makes it easy to persist it to disk between changes. And it’s also natural to think about “chunks of changes” not only spatially (compiler sees a new file), but also temporally (compiler sees a new version of an old file).
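
      One way to picture "indexes rather than pointers" (again a sketch, not matklad's code): nodes in the output refer to each other by a plain integer index into a flat table, so the whole table can be written to disk and read back without any pointer fixups.

        /// A stable handle into the output arena; it stays meaningful after the
        /// table is serialized to disk and loaded again.
        #[derive(Clone, Copy)]
        struct NodeIndex(u32);

        enum Node {
            IntLiteral(i64),
            Add(NodeIndex, NodeIndex),
        }

        #[derive(Default)]
        struct OutputArena {
            nodes: Vec<Node>,
        }

        impl OutputArena {
            fn push(&mut self, node: Node) -> NodeIndex {
                let index = NodeIndex(self.nodes.len() as u32);
                self.nodes.push(node);
                index
            }

            fn get(&self, index: NodeIndex) -> &Node {
                &self.nodes[index.0 as usize]
            }
        }

        fn main() {
            let mut arena = OutputArena::default();
            let one = arena.push(Node::IntLiteral(1));
            let two = arena.push(Node::IntLiteral(2));
            let sum = arena.push(Node::Add(one, two));
            assert!(matches!(arena.get(sum), Node::Add(_, _)));
        }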

      Are there any practical benefits here? I don’t know! But it seems worth playing around with! I feel that a strict separation between O(N) compiler output and O(1) intermediate processing artifacts can clarify a compiler’s architecture, and I won’t be too surprised if O(1) processing in compilers leads to simpler code the same way it does for databases.

  2. December 22, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-22 rss

      IDA Plugin Updates on 2025-12-22

      New Releases:

      Activity:

    2. 🔗 r/LocalLLaMA DGX Spark: an unpopular opinion rss

      I know there has been a lot of criticism about the DGX Spark here, so I want to share some of my personal experience and opinion: I’m a doctoral student doing data science in a small research group that doesn’t have access to massive computing resources. We only have a handful of V100s and T4s in our local cluster, and limited access to A100s and L40s on the university cluster (two at a time). Spark lets us prototype and train foundation models, and (at last) compete with groups that have access to high-performance GPUs like the H100s or H200s. I want to be clear: Spark is NOT faster than an H100 (or even a 5090). But its all-in-one design and its massive amount of memory (all sitting on your desk) enable us — a small group with limited funding — to do more research.

      submitted by /u/emdblc
      [link] [comments]

    3. 🔗 sacha chua :: living an awesome life La semaine du 15 décembre au 21 décembre rss

      Monday, December 15

      I took my daughter to her gymnastics class. She worked on her cartwheels. She also wants to add an aerial gymnastics class. On the one hand, I had said that if we managed her homework well, it would be easier to say yes. On the other hand, it's good exercise for her health. I think individual training is better for my daughter because she wants to go at her own pace.

      For supper, we made sushi with edamame and miso soup.

      The toaster oven stopped working. Fortunately, it's our second toaster oven of the same model, and we have the old one in the garden shed for spare parts. Instead of doing her homework, my daughter helped my husband in the workshop and learned some basics of electronics. Then she helped my husband bake bread. I worried a little about her homework, but I think spending time together was just as good.

      They discovered a ladybug in the old toaster oven. They rescued it and put it in a small jar. I gave it a piece of grape and a bit of paper towel that I had moistened. I don't know whether it can survive until spring, but it's here, so we're trying.

      My husband asked about the Latin notes we took in 2011. After a quick search, I found them. They were in an old TiddlyWiki format, so I converted them to Org Mode to export them as an e-book. I haven't studied Latin in a long time, so I'm forgetting everything.

      I thought about help: how to help someone, how to receive help. My friend who was going through a personal crisis wanted help in the form of money, but I think the help he wanted won't actually be useful to him. My daughter didn't want help with her homework. Maybe she thinks her own efforts are enough, and maybe that's enough for her. Instead of worrying, I should practice receiving help myself. That's one of the reasons I'm learning French with my tutor, learning to talk about my feelings with my therapist, and appreciating the way my family helps me grow. I can improve my processes so that people can help me. For example, for processing the presentation and live-discussion videos, I need to simplify and document the process. If people are busy, that's fine, I'll do it slowly. If people want to help, they can help.

      Tuesday, December 16

      Today I got back to a normal routine. I worked on Deck the Halls on the piano, followed a short exercise video, and finally took a long walk in the park. I don't want to walk on the ice because it's slippery, so I walked on the sidewalk around the park.

      Someone brought up the moderation of the #emacs channel on IRC. He seemed frustrated. There isn't much I can do, but I suggested a few things he could do.

      I took my daughter to her last art class. She was proud that her work was displayed in the window. She gathered up the other pieces in her portfolio to carry them home. She enjoyed the class with her friend, but she sometimes found it too noisy, so she doesn't want to continue for now. We'll keep a fairly open schedule without many classes so that we can go skating or play with her friends whenever she feels like it.

      In my therapy session, we discussed feelings. I intellectualize difficult situations instead of feeling them, so my homework for the Christmas holidays includes noticing when I use that defense mechanism. I'm also going to keep a feelings journal.

      I set up a spell checker thanks to the course "Emacs expliqué à mes enfants" by @vincek.

      Wednesday, December 17

      I wrote a little function to look up words in a few online dictionaries. Little by little, I'm improving my writing environment.

      This afternoon I have an appointment to get my cargo bike serviced. I rode my bike to the cycle shop. The mechanic gave me a quote for the service and some advice about special tires for ice.

      Then I took the subway, which was having problems. Instead of waiting for the shuttle at Keele station, I walked the short distance home.

      I should probably process the conference videos. A bit of work can get them ready for publication. I'll combine the videos with the normalized audio, review everything, and publish to YouTube and to our site. A few videos had some conversion problems, so I need to review the last few minutes carefully to catch errors.

      * * *

      After school, I took my daughter to the skating rink at the park to play with her friend. They had a lot of fun playing tag with her friend's dad, who was too fast for them. I was happy just watching them. We drank hot chocolate while the Zamboni resurfaced the ice.

      We ate leftovers. After supper, I worked on the conference videos. Two videos had encoding errors, so I used the original videos and changed our process. My next step is to convert the videos to WebM so I can upload them to our server. I also need to review the subtitles, but that can be done gradually.

      Thursday, December 18

      An important milestone: I'm getting more comfortable writing in French on my phone. That means I can add to my journal anytime, anywhere. I still look up words in the dictionary, which isn't that convenient on mobile because of the small screen, but it's tolerable. At least it can replace endlessly scrolling Reddit for the umpteenth time. One day I'll be able to dictate to my phone, which would be more useful on winter walks, when typing will be difficult.

      I took another long walk in the park. The doctor said walks are good for my health, so I try to take them often. One day I'd like to wander for several hours, but for now, a thirty-minute or one-hour walk is enough.

      My husband's sourdough experiments continue. He bought a few proofing baskets. My daughter helped him with this batch during recess. She likes scoring different patterns into the bread. It's perfect: spending time together, appreciating food, and practicing art. It takes patience, but that's life, and she can learn the value of things that take time. That's probably more important than high grades at school. (Or at least that's what I tell myself when I worry.)

      When I get home, I'll have thirty minutes before her lunch break. I can do one short task, like sending messages or checking videos. My morning self-care routine takes up most of the morning. I wonder how other people organize themselves.

      * * *

      I decided to cook lunch instead of doing small tasks. I made grilled cheese sandwiches. We all enjoyed them.

      After lunch, I worked on the conference videos. I added chapters to a few videos and fixed some subtitles.

      * * *

      After school, my daughter wanted to go to Sephora to buy scented body mist. She had looked some up online. My husband wanted to buy toilet paper at No Frills, so we took the subway to Dufferin Mall. She's learning to choose for herself; that's why she has her own savings. She picked "darling", which smells like flowers. I loved seeing my daughter grow in confidence and self-determination. She took a long time to choose, but I was patient because I could write my journal on my phone.

      Then we ate a supper of pasta with tomato pesto.

      Then we played shopkeeper like in her drama class. We tossed around ideas for the roles and improvised in the situation she chose. She said I was funny.

      I worked on more videos, and I fixed a bug in the chapter display software.

      Friday, December 19

      I got up a bit late because my phone didn't charge properly. Fortunately, there was still a little time before school, so I was able to wake my daughter in time for a breakfast on the go.

      While she attended virtual school, I did my morning routine. Then I worked on the subtitles. Now that things are more relaxed, I can enjoy preparing the resources. It's the last day before her winter break, so I need to do the tasks that require concentration.

      My daughter gave her presentation on Chinese New Year. She was so proud. She said her classmates got hungry because of her presentation on traditional food.

      By coincidence, my husband made sticky rice with chicken for lunch. We all enjoyed it.

      The ladybug was more active. We gave it a piece of grape and a piece of apple. My daughter moistened the bit of paper towel.

      This afternoon, I kept working on the videos. They were almost all done; only a few were left.

      For my walk, I did the grocery shopping. Then I played cards with my daughter. I kept winning despite my subtle efforts not to. My daughter got a bit grumpy. Next time, I'll suggest cooperative games like Space Escape, or playing Pictionary or charades together. That way I can't really win every single round and end up with someone mad at me.

      * * *

      She felt better and came back to eat chicken wings. She was cold too, so she wanted cuddles.

      Saturday, December 20

      I did a live stream on Twitch while I worked on the subtitles that one of the speakers had corrected. I wrote a short function to copy text into the current chapter. Surprisingly, three viewers showed up, and they made a few comments on my process. Before doing more video chapters, I think I need to copy the IRC and YouTube discussions onto the wiki pages so I can send them to the speakers. Then I can get back to making chapters.

      I thought a bit more about help. Subtitling seems like an easy opportunity to help. I documented the process and built a few tools. But it's often easier if I just keep going myself because I don't have to wait. Well, it works for people who volunteer to do the subtitles for a few videos: I set those videos aside and work on the others first. Do I want to invite volunteers to help with the remaining videos? Maybe. I need to improve the backstage page so it's easier to pick from the remaining tasks, and I need to document the process to help beginners. It's tempting to work alone, but it's good to create opportunities for other people to help. Besides, the documentation will help me once I've forgotten everything by next year.

      In the afternoon, I went to the pharmacy for a flu shot. Even though this year's vaccine isn't a great match for the flu strains going around, it's still somewhat protective. My daughter walked halfway with me, then went back home and went with my husband to the piercer. She wanted to wear earrings. She's old enough to decide for herself. I helped her with the cleaning with the saline solution.

      I prepared the newsletter for the Bike Brigade. Since nobody volunteered, I went back to my more automated process. I hate processes that take several clicks and offer several chances to make mistakes. When a volunteer commits, I'll bring back the manual process.

      We also played a little café simulation in Minecraft with her aunt. My daughter handled serving, my sister handled the salads, and I handled alternating between crêpes and cakes. We kept up with the pace just fine. After my evening routine, we also played Space Escape. We won together!

      Sunday, December 21

      After yesterday's vaccination, my neck is a little sore, so I'm taking it easy today. I'm going to do the laundry and maybe copy over some conference discussions. But first of all, maybe I'll study a bit of French.

      My journal-analysis software says I've written fifty-two entries so far. That makes a total of 10,766 words (1,381 lemmas). I started learning French partly to help my daughter, but I find I enjoy the stimulation of writing in another language. I certainly write more entries about my life. Analyzing my vocabulary encourages me to try new words and longer entries. In 2012, at a Quantified Self conference, I met someone who puts their journal into their spaced repetition system to help remember it. After each session with my tutor, I put my sentences into Anki to study vocabulary. Along the way, I get to relive those moments. I can't speak fluently yet. Maybe I need to practice speaking and find my own method for practicing listening comprehension. Repeating along with the audio seems useful.

      The AI tool I tried has come out of its beta phase and now requires a subscription at 29 dollars a month. Right now I'm wondering whether I want to use it, or use other tools like ChatGPT or Gemini, or build my own tool. I think for the moment I'm mostly focusing on writing. Because of COVID and how time-consuming educating my child is, I'm not interested in the usual topics like ordering at a restaurant, travel, or even introductions and small talk. I want to write and listen to information about Emacs and other technical topics, so I can start reading « Emacs expliqué à mes enfants ». I can also use text-to-speech to turn my journal into audio that I can practice with. I added a function that waits after each sentence for a multiple of its original duration so that I can repeat it more easily. Although maybe just remembering to listen to the pronunciation when I look up words in the online dictionary would be enough when I'm on my phone, which happens more often.

      I couldn't concentrate on my work, so I took a nap in the afternoon. After two hours, my daughter woke me up because she was proud of having helped my husband can the beets he had bought two weeks ago. They used the pressure cooker. Since one jar didn't seal properly, he put it in the fridge. They also made a pineapple and beet cake, which my daughter likes.

      After supper, I got some energy back. I played the little café simulation in Minecraft with my daughter and my sister, like yesterday. This time our game went smoothly. My sister made lots of salads in batches. She would say, "Ten Greek salads are ready," and my daughter served them to the customers. I made plain crêpes and cakes nonstop and combined them with other ingredients for each order, so I often said, "Chocolate banana cake on the counter." We easily got through two more levels. I think there's one level left.

      You can e-mail me at sacha@sachachua.com.

    4. 🔗 r/reverseengineering OGhidra: Automating dataflow analysis and vulnerability discovery in Ghidra via local Ollama models rss
    5. 🔗 r/LocalLLaMA GLM 4.7 released! rss

      GLM-4.7 is here! GLM-4.7 surpasses GLM-4.6 with substantial improvements in coding, complex reasoning, and tool usage, setting new open-source SOTA standards. It also boosts performance in chat, creative writing, and role-play scenarios.

      Weights: http://huggingface.co/zai-org/GLM-4.7
      Tech Blog: http://z.ai/blog/glm-4.7

      submitted by /u/ResearchCrafty1804
      [link] [comments]

    6. 🔗 r/LocalLLaMA GLM 4.7 is out on HF! rss

      submitted by /u/KvAk_AKPlaysYT
      [link] [comments]

    7. 🔗 r/reverseengineering ImHex Hex Editor v1.38.1 - Better Pattern Editor, many new Data Sources, Save Editor Mode and more rss
    8. 🔗 r/LocalLLaMA I made Soprano-80M: Stream ultra-realistic TTS in <15ms, up to 2000x realtime, and <1 GB VRAM, released under Apache 2.0! rss

      Hi! I’m Eugene, and I’ve been working on Soprano: a new state-of-the-art TTS model I designed for voice chatbots. Voice applications require very low latency and natural speech generation to sound convincing, and I created Soprano to deliver on both of these goals.

      Soprano is the world’s fastest TTS by an enormous margin. It is optimized to stream audio playback with < 15 ms latency, 10x faster than any other realtime TTS model like Chatterbox Turbo, VibeVoice-Realtime, GLM TTS, or CosyVoice3. It also natively supports batched inference, benefiting greatly from long-form speech generation. I was able to generate a 10-hour audiobook in under 20 seconds, achieving ~2000x realtime! This is multiple orders of magnitude faster than any other TTS model, making ultra-fast, ultra-natural TTS a reality for the first time.

      I owe these gains to the following design choices:

      1. Higher sample rate: most TTS models use a sample rate of 24 kHz, which can cause s and z sounds to be muffled. In contrast, Soprano natively generates 32 kHz audio, which sounds much sharper and clearer. In fact, 32 kHz speech sounds indistinguishable from 44.1/48 kHz speech, so I found it to be the best choice.
      2. Vocoder-based audio decoder: Most TTS designs use diffusion models to convert LLM outputs into audio waveforms. However, this comes at the cost of slow generation. To fix this, I trained a vocoder-based decoder instead, which uses a Vocos model to perform this conversion. My decoder runs several orders of magnitude faster than diffusion-based decoders (~6000x realtime!), enabling extremely fast audio generation.
      3. Seamless Streaming: Streaming usually requires generating multiple audio chunks and applying crossfade. However, this causes streamed output to sound worse than nonstreamed output. I solve this by using a Vocos-based decoder. Because Vocos has a finite receptive field, I can exploit its input locality to completely skip crossfading, producing streaming output that is identical to unstreamed output. Furthermore, I modified the Vocos architecture to reduce the receptive field, allowing Soprano to start streaming audio after generating just five audio tokens with the LLM.
      4. State-of-the-art Neural Audio Codec: Speech is represented using a novel neural codec that compresses audio to ~15 tokens/sec at just 0.2 kbps. This helps improve generation speed, as only 15 tokens need to be generated to synthesize 1 second of audio, compared to 25, 50, or other commonly used token rates. To my knowledge, this is the highest compression (lowest bitrate) achieved by any audio codec.
      5. Infinite generation length: Soprano automatically generates each sentence independently, and then stitches the results together. Theoretically, this means that sentences can no longer influence each other, but in practice I found that this doesn’t really happen anyway. Splitting by sentences allows for batching on long inputs, dramatically improving inference speed.
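
      A conceptual sketch of point 5 in Rust (not Soprano's actual code; synthesize_sentence is a hypothetical stand-in for the real model): split the text into sentences, synthesize each one independently so the pieces can be batched, then stitch the sample buffers back together.

        /// Stand-in for the real model: turns one sentence into 32 kHz samples.
        fn synthesize_sentence(sentence: &str) -> Vec<f32> {
            vec![0.0; sentence.len() * 100] // placeholder audio
        }

        /// Split long-form text into sentences, synthesize each independently
        /// (which is what makes batching possible), then concatenate the audio.
        fn synthesize_long_form(text: &str) -> Vec<f32> {
            text.split_inclusive(&['.', '!', '?'][..])
                .map(str::trim)
                .filter(|s| !s.is_empty())
                .flat_map(synthesize_sentence)
                .collect()
        }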

      I’m a second-year undergrad who’s just started working on TTS models, so I wanted to start small. Soprano was only pretrained on 1000 hours of audio (~100x less than other TTS models), so its stability and quality will improve tremendously as I train it on more data. Also, I optimized Soprano purely for speed, which is why it lacks bells and whistles like voice cloning, style control, and multilingual support. Now that I have experience creating TTS models, I have a lot of ideas for how to make Soprano even better in the future, so stay tuned for those!

      Github: https://github.com/ekwek1/soprano
      Huggingface Demo: https://huggingface.co/spaces/ekwek/Soprano-TTS
      Model Weights: https://huggingface.co/ekwek/Soprano-80M

      - Eugene

      submitted by /u/eugenekwek
      [link] [comments]

    9. 🔗 r/reverseengineering GitHub - Fatmike-GH/MCPDebugger: A lightweight MCP debugger designed for learning and experimentation. Supports Windows executables (x86 and x64). rss
    10. 🔗 r/LocalLLaMA NVIDIA made a beginner's guide to fine-tuning LLMs with Unsloth! rss

      Blog Link: https://blogs.nvidia.com/blog/rtx-ai-garage-fine-tuning-unsloth-dgx-spark/

      You'll learn about:

      - Training methods: LoRA, FFT, RL
      - When to fine-tune and why + use-cases
      - Amount of data and VRAM needed
      - How to train locally on DGX Spark, RTX GPUs & more

      submitted by /u/Difficult-Cap-7527
      [link] [comments]

    11. 🔗 langchain-ai/deepagents deepagents-cli==0.0.12 release

      Changes since deepagents-cli==0.0.11

      minor version bump, model setting, agent skill spec support, skill creator example (#600)
      Comply with Anthropic Agent Skills spec (#592)
      feat(cli): add --model flag with auto-detection (#584)
      feat: add skill-creator skill with init and validation scripts (#579)
      docs(cli): add LangSmith environment variables documentation (#583)

    12. 🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator
      [link] [comments]

    13. 🔗 r/LocalLLaMA major open-source releases this year rss

      submitted by /u/sahilypatel
      [link] [comments]

  3. December 21, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-21 rss

      IDA Plugin Updates on 2025-12-21

      New Releases:

      Activity:

      • chernobog
        • a52f5827: fix: Fix unsafe ctree modification and re-enable constant folding han…
        • 82016e82: fix: Prevent crashes during plugin unload and static destruction
      • IDA-VTableExplorer
        • 3081ff81: fix: add actions back to browse functions and annotate all vtables
        • 1612b4e2: feat: replace JPEG images with PNG for better quality in README
        • 3fbe47f1: Refactor VTable handling and enhance RTTI parsing
        • cf02df00: feat: Update build-all target to include clean step for improved buil…
        • 0f930310: feat: Add clean target to Makefile for removing build artifacts
      • IDAPluginList
      • twdll
    2. 🔗 r/LocalLLaMA 1 year later and people are still speedrunning NanoGPT. Last time this was posted the WR was 8.2 min. It's now 127.7 sec. rss

      Previous post for context. Also note the original NanoGPT run from Andrej Karpathy was 45 min. I think this is a great way to understand progress in overall algorithmic speed improvements, as I'm sure the big labs are using similar speedup tricks.

      submitted by /u/jd_3d
      [link] [comments]

    3. 🔗 r/LocalLLaMA llama.cpp appreciation post rss

      submitted by /u/hackiv
      [link] [comments]

    4. 🔗 sacha chua :: living an awesome life La semaine du 7 décembre au 14 décembre rss

      Monday, December 8

      I focused on my French journal before my session with my tutor. I wrote enough to make good use of the time, despite last week being so busy. We also did some conversation practice. I used Google Chrome's Live Captions to understand her when she spoke too quickly.

      I took my daughter to her gymnastics class. It was apparently parents' week, so I got to watch my daughter in the gym. I took a few videos to show her.

      I did a lot of laundry, because I couldn't do any during the conference.

      Tuesday, December 9

      This morning, I kept catching up. For the month or two before the conference I didn't do much consulting work, so a few tasks had piled up. I wasn't stressed; I just had to manage my time. I enjoyed helping them.

      One of my friends called me to talk about a personal crisis. I suppose that's what a midlife crisis is. It's very hard, but you have to keep going.

      This afternoon, I reflected on my values for the homework from the stress-management session with my therapist. I think I can boil them down to this list: responsibility, adaptability, relationships, and curiosity. It's useful for making choices.

      Today it's cold and gray, with snow and a strong wind. The forecast calls for more snow. I let my daughter choose between going to art class and staying home. She chose to stay, so we spent a quiet evening. We played card games. My daughter likes strategy games. So do I. She's starting to learn to anticipate things when she plays Exploding Kittens and Tacos vs Burritos. She has a lot of fun because the cards are funny.

      We practiced a bit of French with the AI. She's learning weather vocabulary at school, so she tried out a few sentences. Then I wrote my journal while she watched KPop Demon Hunters for the umpteenth time.

      Tomorrow I'm going to record a video on preparing the Bike Brigade newsletter so I can hand it over to the other volunteer. I'm also going to record a congratulations video in French. If there's time, I also want to process the conference videos.

      Wednesday, December 10

      My husband got up very early to get ready for his medical exam. He had to fast for it, so he was very hungry and bored, and he started two sourdough recipes. I helped my daughter with her morning routine. While my daughter attended virtual school and my husband was out, I had to manage both recipes while also cooking a chicken rice porridge for my husband's lunch.

      So I felt a bit flustered, but I was also glad that my husband was counting on me for these tasks. He doesn't often ask for help. It was a pleasure to help him, even if the situation was funny.

      I finished all three recipes myself: two kinds of bread and the porridge. It was the second time we had tried making sourdough, and this time it worked! I think I let the bread rest longer, which indeed worked better… And my daughter likes our sourdough! Finally, our first victory! My daughter had judged my previous attempts not as good as the bread she usually buys at the farmers' market.

      I also recorded a short video wishing someone a happy birthday in French. It was good speaking practice.

      For exercise, I shoveled a lot of snow. It was raining too, so the snow was heavy. I couldn't rest because I had too many tasks.

      Despite the snow and rain, my daughter also sent a letter to Santa. We're late for Canada Post's Letters to Santa program, but it's still worth a try. She wants a treasure hunt for herself, and socks for me. The treasure hunt is a tradition in my family. I'm going to write some clues and hide them all around the house. Maybe this year I can write a few of the clues in French.

      My husband is retrying the bread recipe now. Little by little, we're improving. The intermediate results are delicious, so the practice is pleasant.

      Thursday, December 11

      I was tired. My daughter's eye still hurt a little even after a night's sleep, so I worried a bit. At least she was able to attend virtual school. I suppose it was a lower-energy day.

      I walked my daughter to the Stockyards because she felt like a long walk. For a little treat, I bought a box of pastries at Marry Me Mochi, and she saved them for after supper. My husband and daughter cooked grilled cheese sandwiches with mashed potato, a new idea my husband found online. It was delicious.

      After a quick supper, I had an information session about the Bike Brigade newsletter. I wrote some documentation. During the session, I explained the process.

      Friday, December 12

      My daughter's eye had been sore and swollen for two days, so I focused my efforts on getting help. She didn't want to take part in class. This morning, I called a few places to try to get an appointment, alternating with comforting cuddles. After a long wait and a few messages, I got an appointment at the Sick Kids hospital.

      I was tired, so I took a thirty-minute nap at noon.

      This afternoon, I took my daughter on the subway to the ophthalmologist at the hospital. We waited for two hours, which was very boring for my daughter but necessary. I let her watch lots of videos and play a few games.

      The ophthalmologist said my daughter has a stye, so she recommended warm compresses and erythromycin. She also noticed some eyelashes rubbing against the eye, so she recommended eye drops. I dropped my daughter off at home and went to the pharmacy to buy the erythromycin.

      After all that, which took the whole day, I was very tired.

      Saturday, December 13

      The heated eye mask seems to be helping my daughter's eye. She wore it for ten minutes last night and again this morning. Her eye is less swollen now, but it still hurts a little.

      She finds it hard to concentrate on her homework. Math is fun, but the language homework is boring. She has been putting off her tasks for several days, and now they form a big pile. I suggested doing a little at a time and doing the different kinds of homework so that her teacher can assess the different subjects. I worked on my own French homework in her room so she wouldn't feel alone. Sometimes she needs a hug before getting back to work. I'm not allowed to remind her about her homework, just to hug her. Well, we'll see. On the one hand, I want my daughter to succeed. On the other hand, she's the one who has to figure out what works well for her, and right now is the ideal time to experiment because the stakes are low. Today she wants to catch up on all her overdue reading homework instead of doing a bit of everything. That's her call.

      After her homework, she wants to go to KidSpark to play pretend store. I think I can probably take her by bike despite the snow and ice. The subway isn't running this weekend, so we'll have to make do. I don't have special tires for ice, so I'll have to ride carefully.

      * * *

      We all went to KidSpark despite the subway closure from Ossington to Spadina. I didn't have the energy to bike, so we had to take the subway. The shuttle bus was slow and crowded, but we finally got there.

      We only played for an hour, but our daughter had a lot of fun, so I was glad we came. We played pretend store and also played with the new building toys. There were lots of kids, so it was noisy, and our daughter used the ear protectors from the sensory backpack.

      We bought some buns and shrimp dumplings on the way back, before a long wait for the shuttles. The shuttles were very crowded, and our daughter got cold walking home. But we persevered.

      When we got home, we all drank tea. My husband and our daughter cooked two batches of thick little pancakes, and I did the dishes.

      Dimanche, le quatorze dĂŠcembre

      J'ĂŠtais fatiguĂŠe, donc j'ai fait la grasse matinĂŠe. Ma fille s'est levĂŠe avant moi. Elle a fait tomber le sac de cĂŠrĂŠales par accident et elle est devenue un peu grincheuse. Elle est devenue plus grincheuse quand nous avons mentionnĂŠ ses devoirs. Elle a une prĂŠsentation la semaine prochaine, donc elle doit se prĂŠparer. Alors, je ne peux pas la forcer. Je me le dis : c'est son expĂŠrience, ce n'est pas moi.

      Du coup, comme elle est grincheuse, peut-être que j'ai le temps pour mes tâches. Je dois produire ma dÊclaration fiscale de l'entreprise, qui a besoin de concentration. Je peux Êcrire mon journal avant le rendez-vous avec ma tutrice lundi, et j'ai les devoirs pour la session sur la gestion du stress mardi. Je veux aussi travailler sur le reste du travail de la confÊrence. Beaucoup de choses à faire.

      My stress-management homework involves describing how I feel and rating it as a percentage. That rating is surprisingly hard. I'm lost. So I suppose that's what I need to learn.

      ポポポポポ

      My daughter came back out of her room in a fairly reasonable mood. She ate a bit of food and got some hugs. I don't think she worked on her homework. Her eye hurts and now both eyes are itchy, her new molar hurts, she was tired of her homework… There isn't much I can do, just comforting hugs and helping with her evening routine.

      Reflection

      I'm gradually expanding my vocabulary. I can now write enough that reading my vocabulary entries out loud to my tutor (and chatting a little about stuff along the way) takes up the hour. It's still good pronunciation practice while I work on picking up more words and internalizing the pronunciation rules, though, so it's probably a good idea to continue that instead of shifting that to AI.

      New root words

      absence, accumuler, adaptabilité, amélioration, anniversaire, annulation, anticiper, apparemment, appeler, apprécier, attente, attentivement, automatisation, bonder, bouillie, bruyant, cacher, car, certain, chauffer, choix, cil, commencer, comprendre, compresse, concentration, connecter, conseiller, construction, contempler, contenter, contrôler, coulisse, court, crise, crêpe, curiosité, câliner, céréale, description, deuxième, différence, différent, documentation, droit, décider, déclaration, découvrir, délicieux, démanger, dérouler, effet, effort, enfin, enfler, enjeu, ennuyant, ennuyer, entreprise, envie, essai, examen, expliquer, expérience, expérimenter, faible, falloir, façon, fenêtre, fermeture, fermier, feuilleté, fiscal, forcer, former, fournée, frotter, félicitation, glace, goutte, gras, griller, gris, gros, gymnase, général, hôpital, idéal, inattendu, indice, inspirant, intermédiaire, jeûner, jouet, joyeux, juger, lecture, lent, lessive, lettre, longtemps, lors, lourd, mal, masque, mathématique, mois, molaire, montrer, médical, métro, mêler, navette, nourriture, noël, obtenir, oeil, ophtalmologue, organisation, orgelet, outil, partager, partout, perdre, personnel, persévérer, phrase, plan, pneu, porter, poste, pourcentage, processus, produire, précédent, précéder, purée, quarantaine, raisonnable, rapidement, rattraper, recommencer, recommender, reconnecter, relation, remarquer, reposer, responsabilité, retard, réduire, répondre, résultat, réussir, sauter, sauvegarder, scène, sembler, sensoriel, sentiment, serviable, sieste, similaire, situation, soir, soirée, sommeil, sorte, souhaiter, spécial, spécialisé, stratégie, stresser, succès, suffisamment, supplémentaire, supposer, surtout, séance, taille, thé, toutefois, tradition, transcription, transformation, transférer, vaisselle, valoir, victoire, volet, ça, énergie, énième, épais, érythromycine, étonnamment, étude, évaluation, évaluer, œil

      You can e-mail me at sacha@sachachua.com.

    5. 🔗 r/reverseengineering From UART to Root: Breaking Into the Xiaomi C200 via U-Boot rss
    6. 🔗 Register Spill Joy & Curiosity #67 rss

      Last issue of the year, let's do this!

      This week, Ryan and I got to interview DHH. It's very rare that I get nervous before an online conversation, but this was one of those times. I mean, that's the guy who made Rails, man! I wouldn't be here without Rails. Rails is what I did for the first seven years of my career. Rails is the reason why I have a career. I read every book he and Jason have ever written, of course, and 37signals has had as deep an impression as a company can have on probably anybody who's worked in a startup between 2008 and 2015.

      …and then we had a great conversation. It's been a few days, and different parts of it keep popping back into my head. David said quite a few things that I now feel I have to share. Some things about marketing that resonate with what we've been talking about internally; some things I want the world to hear; some things that were funny; other things that were very fascinating (he said he still writes 95% of his code by hand); and the rant on cookie banners that I want politicians to hear.

      But here's something that I want to leave you with, in this last edition of the year, this year that brought and announced more change to this profession than any other year I've lived through as a working software developer. Here's something that David said that sums up why I'm excited and so curious about where all of this is going, something that I hope makes you feel something positive too:

      "Where does the excitement come from? First and foremost, I love computers and I love to see computers do new things. It's actually remarkable to me how many people who work in tech don't particularly like computers. Yes, even programmers who have to interact with them every day and make these computers dance, not all of them like computers. I love computers. I love computers just for the-- sheer machine of it. I'm not just trying to be instrumental about it. I'm not just trying to use computers to accomplish something. There's a whole class of people who view the computer just as a tool to get somewhere. No, no, no. For me, it's much deeper. I just love the computer itself and I love to see the computer do new things. And this is the most exciting new thing that computers have been doing, probably in my lifetime. Or at least it's on level with the network-connected computer. Yes."

      The computer can now do new things.

      • My teammate Tim wrote about how he ported his TUI framework from Zig to TypeScript and how, in the process of porting it, he noticed that he's getting in the way of the agent, slowing it down and costing more tokens. So he took his hands off the wheel and what we ended up with is this: A Codebase by an Agent for an Agent. I've shared this story quite a few times in person. I'm really happy it's out now, so we have proof: this is a world-class terminal expert and programmer, letting an agent write 90% of the code, and ending up with something that is really, really good. (Also, side note: I contributed the images and, man, it's so fun to put stuff like this out into the world.)

      • This was fantastic: Jeff Dean and Sanjay Ghemawat with Performance Hints. When I opened it I thought I'd skim it, but then I read the whole thing, looked at a lot of the examples, asked ChatGPT some questions along with screenshots. The writing is clear and precise and simple, the section with the napkin math is impressive, the emoji map optimization is what made me open ChatGPT, and then at the end there, in the CLs that demonstrate multiple techniques section, there's this header 3.3X performance in index serving speed! and when you click on it you'll read that they "found a number of performance issues when planning a switch from on-disk to in-memory index serving in 2001. This change fixed many of these problems and took us from 150 to over 500 in-memory queries per second (for a 2 GB in-memory index on dual processor Pentium III machine)" and then you realize what an impressive cathedral of software engineering Google's infrastructure is. Click here for a good time, I'm telling you.

      • The TUI renaissance isn't over: Will McGugan just released Toad, a "unified experience for AI in the terminal." Taking inspiration from Jupyter notebooks is very smart and I love those little UI interactions he built. Good stuff.

      • The title is "Prompt caching: 10x cheaper LLM tokens, but how?" so you might think that this is about prompt caching, but, haha, that's silly. Listen, this is about everything. It's one of the best all-in-one explainers of how transformers work that I've come across. It's by Sam Rose, who's very good at visual explanations, and here he does a full explanation of how text goes into an LLM and text comes out the other end, including visuals, pseudo-code, in-depth explanations. It's very, very good. If you don't know how a transformer works, do yourself a favor and read this. If you do know how it works, look at this and smile at the visualizations.

      • Imagine you're holding two rocks. One has written on it: "terminals can display images now, thanks to the kitty's terminal graphics protocol". The other: "when you think about it, a GUI framework does nothing but create images and display them, right?" Now the question is: what happens if you smash those two rocks together? This: "DVTUI" (note the quotes!), which takes a GUI framework (DVUI), gets it to save PNGs instead of rendering them to the screen, and then uses a TUI framework (libvaxis) to render those images in the terminal. To quote: "All that happens every single frame. And yet it works."

      • As you know, I'm a sucker for lists like this one: Tom Whitwell's 52 things I learned in 2025. Wonderful.

      • … and it brought me to this: write to escape your default setting. "Writing forces you to tidy that mental clutter. To articulate things with a level of context and coherence the mind alone can't achieve." Yes. Now, in times of LLMs, it's probably more apparent than ever before that writing (real writing; writing you do) is thinking.

      • How I wrote JustHTML using coding agents: "After writing the parser, I still don't know HTML5 properly. The agent wrote it for me. I guided it when it came to API design and corrected bad decisions at the high level, but it did ALL of the gruntwork and wrote all of the code." I bet there's a lot of people who read this and think "ha! so he doesn't know HTML5 still!" And yet I wonder: was that the goal? It's a very good post. A very calm, practical post, but that raises a fundamental question: JustHTML is now "3,000 lines of Python with 8,500+ tests passing" and "passes 100% of the html5lib test suite, has zero dependencies, and includes a CSS selector query API" -- how many more dependencies could we turn into that now?

      • Martin Kleppmann: "I find it exciting to think that we could just specify in a high-level, declarative way the properties that we want some piece of code to have, and then to vibe code the implementation along with a proof that it satisfies the specification. That would totally change the nature of software development: we wouldn't even need to bother looking at the AI-generated code any more, just like we don't bother looking at the machine code generated by a compiler."

      • "The perfection of snow in the paintings of Danish artist Peder Mørk Mønsted."

      • Stripe Press: Tacit. "The mechanism for developing tacit knowledge is straightforward but slow: repeated practice that gradually moves skills from conscious effort to automatic execution. The mechanism for transmitting it is even slower: apprenticeship, where a learner works alongside someone experienced, observing and imitating until their own judgment develops. This is why tacit knowledge often concentrates in lineages, unbroken chains of practitioners passing expertise to the next generation. […] AI has elevated the distinction between what is tacit and what is not. Language models can summarize and automate, but when they attempt to create something that carries the signature of human craft, the result is often flat." In the words of Tamara Winter: Tacit is a series of mini-documentaries that are " vignettes of craftspeople who provide a pretty compelling answer to the question, 'after AI, does mastery still matter?'"

      • I need to try this: Geoffrey Litt's JIT Guide Workflow.

      • This fantastic post by Jakob Schwichtenberg shifted something in my head: "Our very definition of intelligence encodes the bias toward speed. The modern definition of intelligence is extremely narrow. It simply describes the speed at which you can solve well-defined problems. Consider this: if you get access to an IQ test weeks in advance, you could slowly work through all the problems and memorize the solutions. The test would then score you as a genius. This reveals what IQ tests actually measure. It's not whether you can solve problems, but how fast you solve them." And then: "In fact, it's not hard to imagine how raw processing speed can be counterproductive. People who excel at quickly solving well-defined problems tend to gravitate toward... well-defined problems. They choose what to work on based on what they're good at, not necessarily what's worth doing."

      • … but then there's James Somers saying "Speed matters: Why working quickly is more important than it seems." And Nat Friedman is saying: "It's important to do things fast. You learn more per unit time because you make contact with reality more frequently. Going fast makes you focus on what's important; there's no time for bullshit." And Patrick Collison is collecting fast projects. Then here I am, wondering, and possibly assuring myself: yeah, we're not all doing the same things, are we?

      • antirez' Reflections on AI at the end of 2025. "The fundamental challenge in AI for the next 20 years is avoiding extinction."

      • Yes, this is in The New Yorker: "I trust in TextEdit. It doesn't redesign its interface without warning, the way Spotify does; it doesn't hawk new features, and it doesn't demand I update the app every other week, as Google Chrome does. I've tried out other software for keeping track of my random thoughts and ideas in progress--the personal note-storage app Evernote; the task-management board Trello; the collaborative digital workspace Notion, which can store and share company information. Each encourages you to adapt to a certain philosophy of organization, with its own formats and filing systems. But nothing has served me better than the brute simplicity of TextEdit, which doesn't try to help you at all with the process of thinking." Great title too: TextEdit and the Relief of Simple Software.

      • Also The New Yorker, on performative reading, and reading, and books, and social media: "Reading a book is antithetical to scrolling; online platforms cannot replicate the slow, patient, and complex experience of reading a weighty novel. [...] The only way that an internet mind can understand a person reading a certain kind of book in public is through the prism of how it would appear on a feed: as a grotesquely performative posture, a false and self-flattering manipulation, or a desperate attempt to attract a romantic partner."

      • LLMs and physical laws? Maybe: "The dynamics of LLM generation are quite unique. Compared to traditional rule-based programs, LLM-based generation exhibits diverse and adaptive outputs. […] To model the dynamic behavior of LLMs, we embed the generative process of LLM within a given agent framework, viewing it as a Markov transition process in its state space. […] Based on this model, we propose a method to measure this underlying potential function based on a least action principle. By experimentally measuring the transition probabilities between states, we statistically discover […] To our knowledge, this is the first discovery of a macroscopic physical law in LLM generative dynamics that does not depend on specific model details."

      • "'Climbing Everest solo without bottled oxygen in 1980 was the hardest thing I've done. I was alone up there, completely alone. I fell down a crevasse at night and almost gave up. Only because I had this fantasy - because for two years I had been pregnant with this fantasy of soloing Everest - was I able to continue.' This is how Messner talks about how his will was governed."

      • I regularly remind myself and sometimes even others of Jason Fried's Give it five minutes. It's one of the most influential things I've read in the past ten years. I constantly think of it and I'm convinced it's improved my mental well-being and my connections to other people like few other things. Yes, I know how this sounds, but, I guess, an idea and a specific phrase that sticks with you can go a long way as far as life-changing is concerned. Now, all of that is just context, because what I want to actually share is this Jason Fried piece here: Idea protectionism. I re-found and re-read it after sharing the other Jason Fried piece and wanting to share the Jony Ive quote in this one and, yup, stumbled across it by chance. Lucky.

      • Reuters reports on China's Manhattan Project. This is it, baby! This has it all: corporate espionage, ASML, lithography, "one veteran Chinese engineer from ASML recruited to the project was surprised to find that his generous signing bonus came with an identification card issued under a false name", EUV systems that "are roughly the size of a school bus, and weigh 180 tons", Germany's Carl Zeiss AG, "networks of intermediary companies are sometimes used to mask the ultimate buyer", "employees assigned to semiconductor teams often sleep on-site and are barred from returning home during the work week, with phone access restricted for teams handling more sensitive tasks", and, of course, the tension at the heart of it all: "Starting in 2018, the United States began pressuring the Netherlands to block ASML from selling EUV systems to China. The restrictions expanded in 2022, when the Biden administration imposed sweeping export controls designed to cut off China's access to advanced semiconductor technology. No EUV system has ever been sold to a customer in China, ASML told Reuters."

      • I didn't know this is a thing, this was funny: the Beckham rumour that refuses to die.

      • At work, we ended up talking about Christmas traditions and while I was explaining that where I live the magical entity that makes presents appear is called "christkind" (christ child), I was also trying to find proof on Wikipedia so I'd seem less weird and found this map. Note the filename: Christmas-gift-bringers-Europe.jpg. Great name. But now see where the green and the brown mix, in the middle of Germany? That's where I live. So not only does one legend say it's Baby Jesus bringing presents, it's also that in the next town over it's the Christmas Man. And that dude looks an awful lot like his American cousin Santa Claus, who has a lot more media appearances and higher popularity in the younger-than-10 demographic. Try to keep your story straight when you talk to a 4-year-old who keeps asking you whether she'll get a computer for Christmas. How grand it must be to live in Iceland, where, according to that map, the Christmas Lads live.

      • "This song is called Red 40. It's about Hot Cheetos."

      If you also feel a bit, let's say, joy & curiosity about computers doing new things, you should subscribe:

    7. 🔗 Andrew Healey's Blog A Fair, Cancelable Semaphore in Go rss

      Building a fair, cancelable semaphore in Go and the subtle concurrency issues involved.
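
      To give a flavor of the problem, here is a minimal sketch of such a semaphore (my own illustration, not the code from the post): permits are handed out in FIFO order so early waiters can't be starved, and a blocked Acquire can give up when its context is cancelled, taking care to hand back a permit that is granted just as the cancellation fires.

      package main

      import (
          "container/list"
          "context"
          "fmt"
          "sync"
          "time"
      )

      // Semaphore wakes waiters in strict FIFO order and lets a waiter
      // abandon the queue when its context is done.
      type Semaphore struct {
          mu      sync.Mutex
          free    int        // permits currently available
          waiters *list.List // FIFO queue of chan struct{}, one per blocked Acquire
      }

      func NewSemaphore(n int) *Semaphore {
          return &Semaphore{free: n, waiters: list.New()}
      }

      // Acquire blocks until a permit is available or ctx is done.
      func (s *Semaphore) Acquire(ctx context.Context) error {
          s.mu.Lock()
          if s.free > 0 && s.waiters.Len() == 0 {
              s.free--
              s.mu.Unlock()
              return nil
          }
          ready := make(chan struct{})
          elem := s.waiters.PushBack(ready)
          s.mu.Unlock()

          select {
          case <-ready:
              return nil
          case <-ctx.Done():
              s.mu.Lock()
              select {
              case <-ready:
                  // Release already granted us a permit; give it back so the
                  // next waiter in line gets it instead of it being lost.
                  s.mu.Unlock()
                  s.Release()
              default:
                  s.waiters.Remove(elem)
                  s.mu.Unlock()
              }
              return ctx.Err()
          }
      }

      // Release returns a permit, waking the oldest waiter if there is one.
      func (s *Semaphore) Release() {
          s.mu.Lock()
          defer s.mu.Unlock()
          if front := s.waiters.Front(); front != nil {
              s.waiters.Remove(front)
              close(front.Value.(chan struct{})) // hand the permit directly to the waiter
              return
          }
          s.free++
      }

      func main() {
          sem := NewSemaphore(1)
          if err := sem.Acquire(context.Background()); err != nil {
              panic(err)
          }
          // A second Acquire has to wait; with a short timeout it gives up cleanly.
          ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
          defer cancel()
          fmt.Println(sem.Acquire(ctx)) // context deadline exceeded
          sem.Release()
      }

      The grant-versus-cancel race handled in Acquire is the kind of subtle issue the post's summary alludes to; dropping that hand-back step would silently lose a permit.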

  4. December 20, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-20 rss

      IDA Plugin Updates on 2025-12-20

      New Releases:

      Activity:

    2. 🔗 Jeremy Fielding (YouTube) Machining Parts for Wall-E. Episode 03 rss

      Order custom parts or PCB's from PCBWay👉 https://pcbway.com/g/4fU4Ha If you want to join my community of makers and Tinkers consider getting a YouTube membership 👉 https://www.youtube.com/@JeremyFieldingSr/join

      If you want to chip in a few bucks to support these projects and teaching videos, please visit my Patreon page or Buy Me a Coffee. 👉 https://www.patreon.com/jeremyfieldingsr 👉 https://www.buymeacoffee.com/jeremyfielding

      Social media, websites, and other channel

      Instagram https://www.instagram.com/jeremy_fielding/?hl=en Twitter 👉https://twitter.com/jeremy_fielding TikTok 👉https://www.tiktok.com/@jeremy_fielding0 LinkedIn 👉https://www.linkedin.com/in/jeremy-fielding-749b55250/ My websites 👉 https://www.jeremyfielding.com 👉https://www.fatherhoodengineered.com My other channel Fatherhood engineered channel 👉 https://www.youtube.com/channel/UC_jX1r7deAcCJ_fTtM9x8ZA

      Notes: Check out the Formlabs 4L Printer 👉https://bit.ly/4590tau

      WALL-E Playlist here 👉 https://www.youtube.com/playlist?list=PL4njCTv7IRbwHiU2GX5WXI8d0NzBzbMsV

      Technical corrections

      Nothing yet

    3. 🔗 jj-vcs/jj v0.35.0 release

      About

      jj is a Git-compatible version control system that is both simple and powerful. See
      the installation instructions to get started.

      Release highlights

      • Workspaces can now have their own separate configuration. For instance, you
        can use jj config set --workspace to update a configuration option only in
        the current workspace.

      • After creating a local bookmark, it is now possible to use jj bookmark track
        to associate the bookmark with a specific remote before pushing it. When
        pushing a tracked bookmark, it is not necessary to use --allow-new.

      • The new jj git colocation enable and jj git colocation disable commands
        allow converting between colocated and non-colocated workspaces.

      Breaking changes

      • The remote_bookmarks(remote=pattern) revset now includes Git-tracking
        bookmarks if the specified pattern matches git. The default is
        remote=~exact:"git" as before.

      • The deprecated flag --summary of jj abandon has been removed.

      • The deprecated command jj backout has been removed, use jj revert instead.

      • The following deprecated config options have been removed:

        • signing.sign-all
        • core.watchman.register_snapshot_trigger
        • diff.format

      Deprecations

      • jj bisect run --command <cmd> is deprecated in favor of
        jj bisect run -- <cmd>.

      • jj metaedit --update-committer-timestamp was renamed to
        jj metaedit --force-rewrite since the old name (and help text)
        incorrectly suggested that the committer name and email would not
        be updated.

      New features

      • Workspaces may have an additional layered configuration, located at
        .jj/workspace-config.toml. jj config subcommands which took layer options
        like --repo now also support --workspace.

      • jj bookmark track can now associate new local bookmarks with a remote.
        Tracked bookmarks can be pushed without --allow-new.
        #7072

      • The new jj git colocation command provides sub-commands to show the
        colocation state (status), to convert a non-colocated workspace into
        a colocated workspace (enable), and vice-versa (disable).

      • New jj tag set/delete commands to create/update/delete tags locally.
        Created/updated tags are currently always exported to Git as lightweight
        tags. If you would prefer them to be exported as annotated tags, please give
        us feedback on #7908.

      • Templates now support a .split(separator, [limit]) method on strings to
        split a string into a list of substrings.

      • -G is now available as a short form of --no-graph in jj log, jj evolog,
        jj op log, jj op show and jj op diff.

      • jj metaedit now accepts -m/--message option to non-interactively update
        the change description.

      • The CryptographicSignature.key() template method now also works for SSH
        signatures and returns the corresponding public key fingerprint.

      • Added template-aliases.empty_commit_marker. Users can override this value in
        their config to change the "(empty)" label on empty commits.

      • Add support for --when.workspaces config scopes.

      • Add support for --when.hostnames config scopes. This allows configuration to
        be conditionally applied based on the hostname set in operation.hostname.

      • jj bisect run accepts the command and arguments to pass to the command
        directly as positional arguments, such as
        jj bisect run --range=..main -- cargo check --all-targets.

      • Divergent changes are no longer marked red in immutable revisions. Since the
        revision is immutable, the user shouldn't take any action, so the red color
        was unnecessarily alarming.

      • New commit template keywords local/remote_tags to show only local/remote
        tags. These keywords may be useful in non-colocated Git repositories where
        local and exported @git tags can point to different revisions.

      • jj git clone now supports the --branch option to specify the branch(es)
        to fetch during clone. If present, the first matching branch is used as the
        working-copy parent.

      • Revsets now support logical operators in string patterns.

      Fixed bugs

      • Running jj metaedit --author-timestamp twice with the same value no longer
        edits the change twice in some cases.

      • jj squash: fixed improper revision rebase when both --insert-after and
        --insert-before were used.

      • jj undo can now revert "fetch"/"import" operations that involve tag updates.
        #6325

      • Fixed parsing of files(expr) revset expression including parentheses.
        #7747

      • Fixed jj describe --stdin to append a final newline character.

      Contributors

      Thanks to the people who made this release happen!

    4. 🔗 jj-vcs/jj v0.36.0 release

      About

      jj is a Git-compatible version control system that is both simple and powerful. See
      the installation instructions to get started.

      Release highlights

      301 redirects are being issued towards the new domain, so any existing links
      should not be broken.

      • Fixed race condition that could cause divergent operations when running
        concurrent jj commands in colocated repositories. It is now safe to
        continuously run e.g. jj log without --ignore-working-copy in one
        terminal while you're running other commands in another terminal.
        #6830

      • jj now ignores $PAGER set in the environment and uses less -FRX on most
        platforms (:builtin on Windows). See the docs for
        more information, and #3502 for
        motivation.

      Breaking changes

      • In filesets or path patterns, glob matching
        is enabled by default. You can use cwd:"path" to match literal paths.

      • In the following commands, string pattern
        arguments
        are now parsed the same way they
        are in revsets and can be combined with logical operators: jj bookmark delete/forget/list/move, jj tag delete/list, jj git clone/fetch/push

      • In the following commands, unmatched bookmark/tag names are no longer an
        error; a warning is printed instead: jj bookmark delete/forget/move/track/untrack, jj tag delete, jj git clone/push

      • The default string pattern syntax in revsets will be changed to glob: in a
        future release. You can opt in to the new default by setting
        ui.revsets-use-glob-by-default=true.

      • Upgraded scm-record from v0.8.0 to v0.9.0. See release notes at
        https://github.com/arxanas/scm-record/releases/tag/v0.9.0.

      • The minimum supported Rust version (MSRV) is now 1.89.

      • On macOS, the deprecated config directory ~/Library/Application Support/jj
        is not read anymore. Use $XDG_CONFIG_HOME/jj instead (defaults to
        ~/.config/jj).

      • Sub-repos are no longer tracked. Any directory containing .jj or .git
        is ignored. Note that Git submodules are unaffected by this.

      Deprecations

      • The --destination/-d arguments for jj rebase, jj split, jj revert,
        etc. were renamed to --onto/-o. The reasoning is that --onto,
        --insert-before, and --insert-after are all destination arguments, so
        calling one of them --destination was confusing and unclear. The old names
        will be removed at some point in the future, but we realize that they are
        deep in muscle memory, so you can expect an unusually long deprecation period.

      • jj describe --edit is deprecated in favor of --editor.

      • The config options git.auto-local-bookmark and git.push-new-bookmarks are
        deprecated in favor of remotes.<name>.auto-track-bookmarks. For example:

        [remotes.origin]
        auto-track-bookmarks = "glob:*"

      For more details, refer to
      the docs.

      • The flag --allow-new on jj git push is deprecated. In order to push new
        bookmarks, please track them with jj bookmark track. Alternatively, consider
        setting up an auto-tracking configuration to avoid the chore of tracking
        bookmarks manually. For example:
        [remotes.origin]
        auto-track-bookmarks = "glob:*"

      For more details, refer to
      the docs.

      New features

      • jj commit, jj describe, jj squash, and jj split now accept
        --editor, which ensures an editor will be opened with the commit
        description even if one was provided via --message/-m.

      • All jj commands show a warning when the provided fileset expression
        doesn't match any files.

      • Added files() template function to DiffStats. This supports per-file stats
        like lines_added() and lines_removed()

      • Added join() template function. This is different from separate() in that
        it adds a separator between all arguments, even if empty.

      • The RepoPath template type now has an absolute() -> String method that returns
        the absolute path as a string.

      • Added format_path(path) template alias that controls how file paths are printed
        with jj file list.

      • New built-in revset aliases visible() and hidden().

      • Unquoted * is now allowed in revsets. bookmarks(glob:foo*) no longer
        needs quoting.

      • jj prev/next --no-edit now generates an error if the working-copy has some
        children.

      • A new config option remotes.<name>.auto-track-bookmarks can be set to a
        string pattern. New bookmarks matching it will be automatically tracked for
        the specified remote. See
        the docs.

      • jj log now supports a --count flag to print the number of commits instead
        of displaying them.

      Fixed bugs

      • jj fix now prints a warning if a tool failed to run on a file.
        #7971

      • Shell completion now works with non‑normalized paths, fixing the previous
        panic and allowing prefixes containing . or .. to be completed correctly.
        #6861

      • Shell completion now always uses forward slashes to complete paths, even on
        Windows. This renders completion results viable when using jj in Git Bash.
        #7024

      • Unexpected keyword arguments now cause a parse failure in the coalesce()
        and concat() templating functions.

      • The Nushell completion script documentation now includes the -f option, to
        keep it up to date.
        #8007

      • Ensured that with Git submodules, remnants of your submodules do not show up
        in the working copy after running jj new.
        #4349

      Contributors

      Thanks to the people who made this release happen!

    5. 🔗 r/wiesbaden Wo arbeiten von Bar / Café nach 19 Uhr? rss

      Hi everyone,

      I work on my laptop and like to sit in cafés to do it. But often I'm not really done by 9 p.m., and most cafés are closed by then at the latest.

      Do you know of any cafés or bars that would work? Music isn't a problem for me, as long as you're allowed to sit there with a laptop.

      submitted by /u/CalmSorry
      [link] [comments]

    6. 🔗 r/LocalLLaMA Xiaomi’s MiMo-V2-Flash (309B model) jumping straight to the big leagues rss

      Xiaomi’s MiMo-V2-Flash (309B model) jumping straight to the big leagues

      submitted by /u/98Saman
      [link] [comments]

    7. 🔗 Anton Zhiyanov Go feature: Modernized go fix rss

      Part of the Accepted! series: Go proposals and features explained in simple terms.

      The modernized go fix command uses a fresh set of analyzers and the same infrastructure as go vet.

      Ver. 1.26 • Tools • Medium impact

      Summary

      The go fix command is re-implemented using the Go analysis framework — the same one go vet uses. While go fix and go vet now use the same infrastructure, they have different purposes and use different sets of analyzers:

      • Vet is for reporting problems. Its analyzers describe actual issues, but they don't always suggest fixes, and the fixes aren't always safe to apply.
      • Fix is (mostly) for modernizing the code to use newer language and library features. Its analyzers produce fixes that are always safe to apply, but they don't necessarily indicate problems with the code.

      See the full set of fix's analyzers in the Analyzers section.

      Motivation

      The main goal is to bring modernization tools from the Go language server (gopls) to the command line. If go fix includes the modernize suite, developers can easily and safely update their entire codebase after a new Go release with just one command.

      Re-implementing go fix also makes the Go toolchain simpler. The unified go fix and go vet use the same backend framework and extension mechanism. This makes the tools more consistent, easier to maintain, and more flexible for developers who want to use custom analysis tools.

      Description

      Implement the new go fix command:

      usage: go fix [build flags] [-fixtool prog] [fix flags] [packages]

      Fix runs the Go fix tool (cmd/fix) on the named packages and applies suggested fixes. It supports these flags:

      -diff    instead of applying each fix, print the patch as a unified diff

      The -fixtool=prog flag selects a different analysis tool with alternative or additional fixers.

      By default, go fix runs a full set of analyzers (see the list below). To choose specific analyzers, use the -NAME flag for each one, or use -NAME=false to run all analyzers except the ones you turned off. For example, here we only enable the forvar analyzer:

      go fix -forvar .

      And here, we enable all analyzers except omitzero:

      go fix -omitzero=false .

      Currently, there's no way to suppress specific analyzers for certain files or sections of code.

      The -fixtool=prog flag selects a different analysis tool instead of the default one. For example, you can build and run the "stringintconv" analyzer, which fixes string(int) conversions, by using these commands:

      go install golang.org/x/tools/go/analysis/passes/stringintconv/cmd/stringintconv@latest
      go fix -fixtool=$(which stringintconv)

      Alternative fix tools should be built atop unitchecker, which handles the interaction with go fix.
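
      For a sense of what that involves, here is a rough sketch of a custom fix tool (my own illustration, not part of the proposal). It bundles the stringintconv analyzer mentioned above behind unitchecker.Main, the entry point that external vet-style tools already use; whether go fix needs anything beyond this wiring is an assumption on my part:

      // main.go of a hypothetical stand-alone fix tool (illustrative sketch).
      package main

      import (
          "golang.org/x/tools/go/analysis/passes/stringintconv"
          "golang.org/x/tools/go/analysis/unitchecker"
      )

      func main() {
          // unitchecker.Main implements the driver protocol that the go command
          // uses to talk to external analysis tools; pass it any analyzers you want.
          unitchecker.Main(stringintconv.Analyzer)
      }

      Compiled with go build, the resulting binary could then be handed to go fix via -fixtool, much like the installed stringintconv command in the example above.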

      Analyzers

      Here's the list of fixes currently available in go fix, along with examples.

      any • bloop • fmtappendf • forvar • hostport • inline • mapsloop • minmax • newexpr • omitzero • plusbuild • rangeint • reflecttypefor • slicescontains • slicessort • stditerators • stringsbuilder • stringscut • stringscutprefix • stringsseq • testingcontext • waitgroup

      any

      Replace interface{} with any:

      // before
      func main() {
          var val interface{}
          val = 42
          fmt.Println(val)
      }

      // after
      func main() {
          var val any
          val = 42
          fmt.Println(val)
      }

      bloop

      Replace for-range over b.N with b.Loop and remove unnecessary manual timer control:

      // before
      func Benchmark(b *testing.B) {
          s := make([]int, 1000)
          for i := range s {
              s[i] = i
          }
          b.ResetTimer()
          for range b.N {
              Calc(s)
          }
      }

      // after
      func Benchmark(b *testing.B) {
          s := make([]int, 1000)
          for i := range s {
              s[i] = i
          }
          for b.Loop() {
              Calc(s)
          }
      }

      fmtappendf

      Replace []byte(fmt.Sprintf) with fmt.Appendf to avoid intermediate string allocation:

      // before
      func format(id int, name string) []byte {
          return []byte(fmt.Sprintf("ID: %d, Name: %s", id, name))
      }

      // after
      func format(id int, name string) []byte {
          return fmt.Appendf(nil, "ID: %d, Name: %s", id, name)
      }

      forvar

      Remove unnecessary shadowing of loop variables:

      // before
      func main() {
          for x := range 4 {
              x := x
              go func() {
                  fmt.Println(x)
              }()
          }
      }

      // after
      func main() {
          for x := range 4 {
              go func() {
                  fmt.Println(x)
              }()
          }
      }

      hostport

      Replace network addresses created with fmt.Sprintf by using net.JoinHostPort instead, because host-port pairs made with %s:%d or %s:%s format strings don't work with IPv6:

      // before
      func main() {
          host := "::1"
          port := 8080
          addr := fmt.Sprintf("%s:%d", host, port)
          net.Dial("tcp", addr)
      }

      // after
      func main() {
          host := "::1"
          port := 8080
          addr := net.JoinHostPort(host, fmt.Sprintf("%d", port))
          net.Dial("tcp", addr)
      }

      inline

      Inline function calls according to the go:fix inline comment directives:

      // before
      //go:fix inline
      func Square(x float64) float64 {
          return math.Pow(float64(x), 2)
      }

      func main() {
          fmt.Println(Square(5))
      }

      // after
      //go:fix inline
      func Square(x float64) float64 {
          return math.Pow(float64(x), 2)
      }

      func main() {
          fmt.Println(math.Pow(float64(5), 2))
      }

      mapsloop

      Replace explicit loops over maps with calls to the maps package (Copy, Insert, Clone, or Collect depending on the context):

      // before
      func copyMap(src map[string]int) map[string]int {
          dest := make(map[string]int, len(src))
          for k, v := range src {
              dest[k] = v
          }
          return dest
      }

      // after
      func copyMap(src map[string]int) map[string]int {
          dest := make(map[string]int, len(src))
          maps.Copy(dest, src)
          return dest
      }

      minmax

      Replace if/else statements with calls to min or max:

      // before
      func calc(a, b int) int {
          var m int
          if a > b {
              m = a
          } else {
              m = b
          }
          return m * (b - a)
      }

      // after
      func calc(a, b int) int {
          var m int
          m = max(a, b)
          return m * (b - a)
      }

      newexpr

      Replace custom "pointer to" functions with new(expr):

      // before
      type Pet struct {
          Name  string
          Happy *bool
      }

      func ptrOf[T any](v T) *T { return &v }

      func main() {
          p := Pet{Name: "Fluffy", Happy: ptrOf(true)}
          fmt.Println(p)
      }

      // after
      type Pet struct {
          Name  string
          Happy *bool
      }

      //go:fix inline
      func ptrOf[T any](v T) *T { return new(v) }

      func main() {
          p := Pet{Name: "Fluffy", Happy: new(true)}
          fmt.Println(p)
      }

      omitzero

      Remove omitempty from struct-type fields because this tag doesn't have any effect on them:

      // before
      type Person struct {
          Name string `json:"name"`
          Pet  Pet    `json:"pet,omitempty"`
      }
      
      type Pet struct {
          Name string
      }
      
      
      
      // after
      type Person struct {
          Name string `json:"name"`
          Pet  Pet    `json:"pet"`
      }
      
      type Pet struct {
          Name string
      }
      

      plusbuild

      Remove obsolete //+build comments:

      // before
      //go:build linux && amd64
      // +build linux,amd64

      package main

      func main() {
          var _ = 42
      }

      // after
      //go:build linux && amd64

      package main

      func main() {
          var _ = 42
      }

      rangeint

      Replace 3-clause for loops with for-range over integers:

      // before
      func main() {
          for i := 0; i < 5; i++ {
              fmt.Print(i)
          }
      }

      // after
      func main() {
          for i := range 5 {
              fmt.Print(i)
          }
      }

      reflecttypefor

      Replace reflect.TypeOf(x) with reflect.TypeFor when the type is known at compile time:

      // before
      func main() {
          n := uint64(0)
          typ := reflect.TypeOf(n)
          fmt.Println("size =", typ.Bits())
      }

      // after
      func main() {
          typ := reflect.TypeFor[uint64]()
          fmt.Println("size =", typ.Bits())
      }

      slicescontains

      Replace loops with slices.Contains or slices.ContainsFunc:

      // before
      func find(s []int, x int) bool {
          for _, v := range s {
              if x == v {
                  return true
              }
          }
          return false
      }
      
      
      
      // after
      func find(s []int, x int) bool {
          return slices.Contains(s, x)
      }
      

      slicessort

      Replace sort.Slice with slices.Sort for basic types:

      // before
      func main() {
          s := []int{22, 11, 33, 55, 44}
          sort.Slice(s, func(i, j int) bool { return s[i] < s[j] })
          fmt.Println(s)
      }

      // after
      func main() {
          s := []int{22, 11, 33, 55, 44}
          slices.Sort(s)
          fmt.Println(s)
      }

      stditerators

      Use iterators instead of Len/At-style APIs for certain types in the standard library:

      // before
      func main() {
          typ := reflect.TypeFor[Person]()
          for i := range typ.NumField() {
              field := typ.Field(i)
              fmt.Println(field.Name, field.Type.String())
          }
      }

      // after
      func main() {
          typ := reflect.TypeFor[Person]()
          for field := range typ.Fields() {
              fmt.Println(field.Name, field.Type.String())
          }
      }

      stringsbuilder

      Replace repeated += with strings.Builder:

      // before
      func abbr(s []string) string {
          res := ""
          for _, str := range s {
              if len(str) > 0 {
                  res += string(str[0])
              }
          }
          return res
      }
      
      
      
      // after
      func abbr(s []string) string {
          var res strings.Builder
          for _, str := range s {
              if len(str) > 0 {
                  res.WriteString(string(str[0]))
              }
          }
          return res.String()
      }
      

      stringscut

      Replace some uses of strings.Index and string slicing with strings.Cut or strings.Contains:

      // before
      func nospace(s string) string {
          idx := strings.Index(s, " ")
          if idx == -1 {
              return s
          }
          return strings.ReplaceAll(s, " ", "")
      }
      
      
      
      // after
      func nospace(s string) string {
          found := strings.Contains(s, " ")
          if !found {
              return s
          }
          return strings.ReplaceAll(s, " ", "")
      }
      

      stringscutprefix

      Replace strings.HasPrefix/TrimPrefix with strings.CutPrefix and strings.HasSuffix/TrimSuffix with string.CutSuffix:

      // before
      func unindent(s string) string {
          if strings.HasPrefix(s, "> ") {
              return strings.TrimPrefix(s, "> ")
          }
          return s
      }
      
      
      
      // after
      func unindent(s string) string {
          if after, ok := strings.CutPrefix(s, "> "); ok {
              return after
          }
          return s
      }
      

      stringsseq

      Replace ranging over strings.Split/Fields with strings.SplitSeq/FieldsSeq:

      // before
      func main() {
          s := "go is awesome"
          for _, word := range strings.Fields(s) {
              fmt.Println(len(word))
          }
      }
      
      
      
      // after
      func main() {
          s := "go is awesome"
          for word := range strings.FieldsSeq(s) {
              fmt.Println(len(word))
          }
      }
      

      testingcontext

      Replace context.WithCancel with t.Context in tests:

      // before
      func Test(t *testing.T) {
          ctx, cancel := context.WithCancel(context.Background())
          defer cancel()
          if ctx.Err() != nil {
              t.Fatal("context should be active")
          }
      }
      
      
      
      // after
      func Test(t *testing.T) {
          ctx := t.Context()
          if ctx.Err() != nil {
              t.Fatal("context should be active")
          }
      }
      

      waitgroup

      Replace wg.Add+wg.Done with wg.Go:

      // before
      func main() {
          var wg sync.WaitGroup
      
          wg.Add(1)
          go func() {
              defer wg.Done()
              fmt.Println("go!")
          }()
      
          wg.Wait()
      }
      
      
      
      // after
      func main() {
          var wg sync.WaitGroup
      
          wg.Go(func() {
              fmt.Println("go!")
          })
      
          wg.Wait()
      }
      

      Links & Credits

      𝗣 71859 👥 Alan Donovan, Jonathan Amsterdam

      *[Medium impact]: Likely impact for an average Go developer

    8. 🔗 mr-karan/doggo v1.1.3 release

      Changelog

      New Features

      Bug fixes

      • 15d6c34: fix: restore table text wrapping and respect -4/-6 flags for query types (@mr-karan)

      Others

    9. 🔗 r/wiesbaden künstler*innenviertel? rss

      I can't find anything about it online, but has the Künstlerviertel stop on bus line 18 recently started being announced as "Künstler*innenviertel"? Or am I imagining it and it's always been that way?

      submitted by /u/imdrixn
      [link] [comments]

    10. 🔗 r/LocalLLaMA Of course it works, in case you are wondering... and it's quite faster. rss

      Of course it works, in case you are wondering... and it's quite faster.

      submitted by /u/JLeonsarmiento
      [link] [comments]

    11. 🔗 r/LocalLLaMA Open source LLM tooling is getting eaten by big tech rss

      I was using TGI for inference six months ago. Migrated to vLLM last month. Thought it was just me chasing better performance, then I read the LLM Landscape 2.0 report. Turns out 35% of projects from just three months ago already got replaced. This isn't just my stack. The whole ecosystem is churning.

      The deeper I read, the crazier it gets. Manus blew up in March, OpenManus and OWL launched within weeks as open source alternatives, both are basically dead now. TensorFlow has been declining since 2019 and still hasn't hit bottom. The median project age in this space is 30 months.

      Then I looked at what's gaining momentum. NVIDIA drops Dynamo, optimized for NVIDIA hardware. Google releases Gemini CLI with Google Cloud baked in. OpenAI ships Codex CLI that funnels you into their API. That's when it clicked.

      Two years ago this space was chaotic but independent. Now the open source layer is becoming the customer acquisition layer. We're not choosing tools anymore. We're being sorted into ecosystems.

      submitted by /u/Inevitable_Wear_9107
      [link] [comments]

    12. 🔗 r/LocalLLaMA Key Highlights of NVIDIA’s New Open-Source Vision-to-Action Model: NitroGen rss

      Key Highlights of NVIDIA’s New Open-Source Vision-to-Action Model: NitroGen

      • NitroGen is a unified vision-to-action model designed to play video games directly from raw frames. It takes video game footage as input and outputs gamepad actions.
      • NitroGen is trained purely through large-scale imitation learning on videos of human gameplay.
      • NitroGen works best on games designed for gamepad controls (e.g., action, platformer, and racing games) and is less effective on games that rely heavily on mouse and keyboard (e.g., RTS, MOBA).

      How does this model work?

      • RGB frames are processed through a pre-trained vision transformer (SigLip2).
      • A diffusion matching transformer (DiT) then generates actions, conditioned on SigLip output.

      Model - https://huggingface.co/nvidia/NitroGen

      submitted by /u/Dear-Success-1441
      [link] [comments]

    13. 🔗 r/LocalLLaMA Japan's Rakuten is going to release a 700B open weight model in Spring 2026 rss

      https://news.yahoo.co.jp/articles/0fc312ec3386f87d65e797ab073db56c230757e1

      Hope it works well in real life. Then it can not only be an alternative to the Chinese models, but also prompt the US companies to release big models.

      submitted by /u/Ok_Warning2146
      [link] [comments]

    14. 🔗 Filip Filmar note to self: do not remove `.bazelversion` rss

      note to self: do not remove .bazelversion from projects The other day, I was pondering whether to keep setting particular bazel version in projects. I even removed some, to see what would become of it. Since I use the bazelisk installation method, I get automatic bazel updates when a new version is released. It turns out that this remains a bad idea. An auto-update to bazel 8.5.0 caused some of my CI workflows to fail, likely because bazel 8.