🏡


  1. Rust Binary Analysis 101 - Part 2 - Z3R0cool Blogs
  2. mprocs: start all your project's commands at once
  3. Jon's Arm Reference
  4. Optimize for momentum
  5. Nviso vshell report

  1. December 24, 2025
    1. 🔗 cloudflare/capnweb v0.4.0 release

      Minor Changes

      • #121 32e362f Thanks @kentonv! - Improved compatibility with Cloudflare Workers' built-in RPC, particularly when proxying from one to the other.
    2. 🔗 gulbanana/gg GG 0.36.3 release

      Fixed

      • CLI build: added dock icon on MacOS.
      • CLI build: the advertised --foreground now actually exists and works.
      • GG now respects the snapshot.auto-track setting.
    3. 🔗 r/wiesbaden Store/Jeweler Recommendations for buying 24 Carat Gold Jewelry rss

      I am in Wiesbaden for a short term work assignment. I want to buy my daughter some gold jewelry.

      I need a recommendation for a jewelry store that sells 24 carat gold bracelets, necklaces or earrings.

      Does anyone have a recommendation of a store or jeweler in the Mainz/Wiesbaden area?

      Thank you.

      submitted by /u/J-V1972

    4. 🔗 r/reverseengineering WIBU CodeMeter claims AES-256 military-grade encryption but entropy analysis reveals simple XOR rss
    5. 🔗 obra/superpowers v4.0.2 release

      Release v4.0.2

    6. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
      sync repo: +1 plugin, +1 release

      ## New plugins
      - [AutoRE](https://github.com/a1ext/auto_re) (2.2.0)

  2. December 23, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-23 rss

      IDA Plugin Updates on 2025-12-23

      New Releases:

      Activity:

    2. 🔗 r/reverseengineering Finding Jingle Town: Debugging an N64 Game without Symbols rss
    3. 🔗 r/wiesbaden Group activities for birthdays? :) rss

      Hi, I'm trying to think of what else a group can do together in Wiesbaden besides the Superflyhalle, painting ceramics, or bowling. Do you have any ideas for other places worth going with a few people for my birthday?

      Thanks! 💚

      submitted by /u/FunkINFJ

    4. 🔗 r/reverseengineering Nintendo 64 Decomp Update: Harvest Moon 64 is now 100% decompiled! rss
    5. 🔗 r/reverseengineering Fabrice Bellard Releases MicroQuickJS rss
    6. 🔗 r/reverseengineering Fake PuTTY Installer Malware Analysis with IDA Pro rss
    7. 🔗 News Minimalist 🐱 FDA approves first weight loss pill + 8 more stories rss

      In the last 5 days ChatGPT read 146994 top news stories. After removing previously covered events, there are 9 articles with a significance score over 5.5.

      [5.5] FDA approves first GLP-1 pill for obesity — statnews.com (+77)

      The FDA has approved the first oral GLP-1 pill for obesity, a version of Novo Nordisk’s Wegovy, potentially expanding access to effective weight loss treatments starting in January.

      The 25-milligram daily medication demonstrated 14% weight loss in trials, mirroring the efficacy of the injectable version. It also reduces cardiovascular risks and will initially cost $150 per month for the lowest dosage through direct-to-consumer channels.

      This peptide-based pill requires strict morning fasting for absorption. Meanwhile, competitor Eli Lilly is developing a small-molecule pill, orforglipron, which may offer easier manufacturing and fewer dietary restrictions once approved.

      [6.4] European governments agree to introduce a digital euro — nrc.nl (Dutch) (+5)

      European governments have agreed to create a digital euro, establishing a central bank-backed public currency to safeguard the continent’s financial sovereignty and payment resilience.

      This public currency would offer a secure alternative to commercial bank accounts and US-based payment providers. Pending European Parliament approval, the digital euro could launch by 2029 via apps or cards, featuring offline capabilities to ensure transaction continuity during cyberattacks.

      The proposal guarantees privacy and bans programmable spending to mirror the utility of physical cash. While merchants must eventually accept the currency, commercial banks remain critical of the implementation costs and competition.

      [6.1] TikTok agrees to sell US operations to American investors — theguardian.com (+93)

      TikTok has signed a binding deal to sell its United States operations to a group of American investors including Oracle and Silver Lake, preventing a ban and ensuring continued service.

      The agreement, set to close January 22, grants Oracle, Silver Lake, and MGX a combined 45 percent stake. Oracle will license TikTok’s recommendation algorithm to address long-standing national security concerns.

      Highly covered news with significance over 5.5

      [5.8] EU leaders approve €90 billion loan for Ukraine despite dissent from Hungary, Slovakia, and Czech Republic — irishtimes.com (+65)

      [5.6] FCC bans new Chinese-made drones over national security concerns — apnews.com (+21)

      [5.6] EU court rules for refugee in landmark case against Frontex — independent.co.uk (+2)

      [5.6] Austria's top court rules Meta's ad model illegal, orders overhaul of user data practices in EU — channelnewsasia.com (+4)

      [5.6] OpenAI launches an app store inside ChatGPT — tomsguide.com (+7)

      [5.5] Trump appoints special envoy to Greenland to pursue acquisition — nrc.nl (Dutch) (+148)

      Thanks for reading!

      — Vadim



    8. 🔗 r/LocalLLaMA Qwen released Qwen-Image-Edit-2511 — a major upgrade over 2509 rss

      Qwen released Qwen-Image-Edit-2511 — a major upgrade over 2509. Hugging Face: https://huggingface.co/Qwen/Qwen-Image-Edit-2511

      What's new in 2511:

      • 👥 Stronger multi-person consistency for group photos and complex scenes
      • 🧩 Built-in popular community LoRAs — no extra tuning required
      • 💡 Enhanced industrial & product design generation
      • 🔒 Reduced image drift with dramatically improved character & identity consistency
      • 📐 Improved geometric reasoning, including construction lines and structural edits

      From identity-preserving portrait edits to high-fidelity multi-person fusion and practical engineering & design workflows, 2511 pushes image editing to the next level.

      submitted by /u/Difficult-Cap-7527

    9. 🔗 r/LocalLLaMA AMA With Z.AI, The Lab Behind GLM-4.7 rss

      Hi r/LocalLLaMA

      Today we are hosting Z.AI, the research lab behind GLM-4.7. We’re excited to have them open up and answer your questions directly.

      Our participants today:

      The AMA will run from 8 AM – 11 AM PST, with the Z.AI team continuing to follow up on questions over the next 48 hours.

      submitted by /u/zixuanlimit

    10. 🔗 r/wiesbaden Rental options rss
    11. 🔗 obra/superpowers v4.0.1 release

      Release v4.0.1

    12. 🔗 Simon Willison Cooking with Claude rss

      I've been having an absurd amount of fun recently using LLMs for cooking. I started out using them for basic recipes, but as I've grown more confident in their culinary abilities I've leaned into them for more advanced tasks. Today I tried something new: having Claude vibe-code up a custom application to help with the timing for a complicated meal preparation. It worked really well!

      A custom timing app for two recipes at once

      We have family staying at the moment, which means cooking for four. We subscribe to a meal delivery service called Green Chef, mainly because it takes the thinking out of cooking three times a week: grab a bag from the fridge, follow the instructions, eat.

      Each bag serves two portions, so cooking for four means preparing two bags at once.

      I have done this a few times now and it is always a mad flurry of pans and ingredients and timers and desperately trying to figure out what should happen when and how to get both recipes finished at the same time. It's fun but it's also chaotic and error-prone.

      This time I decided to try something different, and potentially even more chaotic and error-prone: I outsourced the planning entirely to Claude.

      I took this single photo of the two recipe cards side-by-side and fed it to Claude Opus 4.5 (in the Claude iPhone app) with this prompt:

      Extract both of these recipes in as much detail as possible

      Two recipe cards placed next to each other on a kitchen counter. Each card has detailed instructions plus photographs of steps.

      This is a moderately challenging vision task in that there is quite a lot of small text in the photo. I wasn't confident Opus could handle it.

      I hadn't read the recipe cards myself. The responsible thing to do here would be a thorough review or at least a spot-check - I chose to keep things chaotic and didn't do any more than quickly eyeball the result.

      I asked what pots I'd need:

      Give me a full list of pots I would need if I was cooking both of them at once

      Then I prompted it to build a custom application to help me with the cooking process itself:

      I am going to cook them both at the same time. Build me a no react, mobile, friendly, interactive, artifact that spells out the process with exact timing on when everything needs to happen have a start setting at the top, which starts a timer and persists when I hit start in localStorage in case the page reloads. The next steps should show prominently with countdowns to when they open. The full combined timeline should be shown slow with calculated times tor when each thing should happen

      I copied the result out onto my own hosting (you can try it here) because I wasn't sure if localStorage would work inside the Claude app and I really didn't want it to forget my times!

      Then I clicked "start cooking"!

      The recipe app shows a full timeline with 00:00 Preheat Oven and onwards, plus a big Start Cooking button. In the animation, clicking the button starts a timer ticking up, adds a Do this now panel showing the Start all prep work step, shows Coming Up Next with timers counting down to the next steps, and updates the full timeline to show local clock times where it previously showed durations from 00:00 upwards.

      Here's the full Claude transcript.

      There was just one notable catch: our dog, Cleo, knows exactly when her dinner time is, at 6pm sharp. I forgot to mention this to Claude, which had scheduled several key steps colliding with Cleo's meal. I got woofed at. I deserved it.

      To my great surprise, it worked. I followed the recipe guide to the minute and served up both meals exactly 44 minutes after I started cooking.

      A small bowl (a beautiful blue sea textured bowl, made by Natalie Downe) contains a chickpea stew. A larger black bowl has couscous, green beans and blackened cauliflower.

      The best way to learn the capabilities of LLMs is to throw tasks at them that may be beyond their abilities and see what happens. In this case I fully expected that something would get forgotten or a detail would be hallucinated and I'd end up scrambling to fix things half way through the process. I was surprised and impressed that it worked so well.

      Some credit for the app idea should go to my fellow hackers at /dev/fort 2 in 2009, when we rented Knockbrex Castle in Dumfries, Scotland for a week and attempted to build a cooking timer application for complex meals.

      Generating recipes from scratch

      Most of my other cooking experiments with LLMs have been a whole lot simpler than this: I ask for a recipe, ask for some variations and then cook one of them and see what happens.

      This works remarkably well considering LLMs have no taste buds.

      I've started to think of this as asking LLMs for the average recipe for a dish, based on all of the recipes they have hoovered up during their training. It turns out the mean version of every guacamole recipe on the internet is a decent guacamole!

      Here's an example of a recipe I tried recently that worked out really well. I was helping Natalie run her ceramic stall at the farmers market and the stall next to us sold excellent dried beans. I've never used dried beans before, so I took a photo of their selection and asked Claude what I could do with them:

      Several bags of tasty looking beans of different varieties and colors More bags of beans.

      Identify these beans

      It took a guess at the beans, then I said:

      Get me excited about cooking with these! If I bought two varietiew what could I make

      "Get me excited" switches Claude into a sort of hype-man mode, which is kind of entertaining:

      Oh, you're about to enter the wonderful world of bean cooking! Let me get you pumped about some killer two-bean combos: [...]

      Mixed bean salad with lemon, olive oil, fresh herbs, cherry tomatoes - light but satisfying [...]

      I replied:

      OK Bean salad has me interested - these are dried beans. Give me some salad options I can make that would last a long time in the fridge

      ... and after some back and forth we arrived at the recipe in this transcript, which I cooked the following day (asking plenty of follow-up questions) and thoroughly enjoyed.

      I've done this a bunch of times with a bunch of different recipes across both Claude and ChatGPT and honestly I've not had a notable miss yet. Being able to say "make it vegan" or "I don't have coriander, what can I use instead?" or just "make it tastier" is a really fun way to explore cooking.

      It's also fun to repeat "make it tastier" multiple times to see how absurd you can get.

      I really want someone to turn this into a benchmark!

      Cooking with LLMs is a lot of fun. There's an opportunity here for a really neat benchmark: take a bunch of leading models, prompt them for recipes, follow those recipes and taste-test the results!

      The logistics of running this are definitely too much for me to handle myself. I have enough trouble cooking two meals at once; for a solid benchmark you'd ideally have several models serving meals up at the same time to a panel of tasters.

      If someone else wants to try this please let me know how it goes!

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    13. 🔗 sacha chua :: living an awesome life 2025-12-22 Emacs news rss

      Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    14. 🔗 matklad Newtype Index Pattern In Zig rss

      Newtype Index Pattern In Zig

      Dec 23, 2025

      In efficiency-minded code, it is idiomatic to use indexes rather than pointers. Indexes have several advantages:

      First, they save memory. Typically a 32-bit index is enough, a saving of four bytes per pointer on 64-bit architectures. I haven’t seen this measured, but my gut feeling is that this is much more impactful than it might initially seem. On modern architectures, saving memory saves time (and energy) as well, because the computing bottleneck is often the bit pipe between the memory and the CPU, not the computation per se. Dense data structures use CPU cache more efficiently, removing prohibitive latency of memory accesses. Bandwidth savings are even better: smaller item size obviously improves bandwidth utilization, but having more items in cache obviates the need to use the bandwidth in the first place. Best case, the working set fits into the CPU cache!

      Note well that memory savings are evenly spread out. Using indexes makes every data structure slightly more compact, which improves performance across the board, regardless of hotspot distribution. It’s hard to notice a potential for such saving in a profiler, and even harder to test out. For these two reasons, I would default to indexes for code where speed matters, even when I don’t have the code written yet to profile it!

      There’s also a more subtle way in which indexes save memory. Using indexes means storing multiple items in an array, but such dense storage contains extra information in the relative positions of the items. If you need to store a list of items, you can often avoid materializing the list of indexes by storing a range “pointing” into the shared storage. Occasionally, you can even do the UTF-8 trick and use just a single bit to mark the end of a list.
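      The range-into-shared-storage trick can be sketched in Rust. This is a minimal illustration, not code from the post; all names here (`Forest`, `ChildRange`, and so on) are invented for the example.

```rust
// Sketch: instead of each node owning its own Vec of child indexes,
// the children of all nodes live contiguously in one shared array,
// and each node stores only a (start, len) range into it.
#[derive(Clone, Copy, PartialEq, Debug)]
struct NodeIndex(u32);

struct Forest {
    // Children of all nodes, grouped so each node's children are contiguous.
    child_storage: Vec<NodeIndex>,
}

#[derive(Clone, Copy)]
struct ChildRange {
    start: u32,
    len: u32,
}

impl Forest {
    // Resolving a range is a cheap slice into the shared storage:
    // no per-node list allocation ever happens.
    fn children(&self, range: ChildRange) -> &[NodeIndex] {
        let start = range.start as usize;
        &self.child_storage[start..start + range.len as usize]
    }
}

fn main() {
    let forest = Forest {
        child_storage: vec![NodeIndex(1), NodeIndex(2), NodeIndex(3)],
    };
    let range = ChildRange { start: 1, len: 2 };
    assert_eq!(forest.children(range), &[NodeIndex(2), NodeIndex(3)][..]);
}
```

      The per-node cost drops from a heap-allocated list to eight bytes of range, and iteration touches one dense array.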

      The second benefit of indexes is more natural modeling of cyclic and recursive data structures. Creating a cycle fundamentally requires mutability somewhere (“tying the knot” in Haskell relies on mutability of lazy thunks). This means that you need to make some pointers nullable, and that usually gets awkward even without a borrow checker at your back. Even without cycles, just with recursion, pointers are problematic, due to a combination of two effects:

      • pointers encourage recursive functions, and
      • recursive data structures lead to arbitrary long (but finite) chains of pointers.

      The combination works fine at small scale, but then it fails with stack overflow in production every single time, requiring awkward work-arounds. For example, rustc serializes error traces from nested macro expansions as a deeply nested tree of JSON objects, which requires using stacker hack when parsing the output (which you’ll learn about only after crashes in the hands of macro connoisseur users).

      Finally, indexes greatly help serialization: they make it trivial to communicate data structures both through space (sending a network message) and time (saving to disk and reading later). Indexes are naturally relocatable; it doesn’t matter where in memory they are. But this is just half of the serialization benefit. The other half is that, because everything is in a few arrays, you can do bulk serialization. You don’t need to write the items one by one; you can directly memcpy arrays around (but be careful not to leak data via padding, and be sure to checksum the result).
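      Bulk serialization of an index array can be sketched in Rust as follows. Hand-rolling the little-endian encoding sidesteps the padding and endianness concerns mentioned above; the function names are illustrative, not from any particular library.

```rust
// Sketch: because indexes are relocatable, a whole index array can be
// dumped to bytes and read back without walking any pointer graph.
fn serialize(indexes: &[u32]) -> Vec<u8> {
    let mut out = Vec::with_capacity(indexes.len() * 4);
    for &ix in indexes {
        // Fixed little-endian layout: no struct padding, no endianness surprises.
        out.extend_from_slice(&ix.to_le_bytes());
    }
    out
}

fn deserialize(bytes: &[u8]) -> Vec<u32> {
    bytes
        .chunks_exact(4)
        .map(|c| u32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}

fn main() {
    // A "parent pointer" array round-trips as one flat buffer.
    let parents = vec![0u32, 0, 1, 1, 3];
    let bytes = serialize(&parents);
    assert_eq!(deserialize(&bytes), parents);
}
```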

      The big problem with “naive” u32 indexes is of course using the right index with the wrong array, or vice versa. The standard solution here is to introduce a newtype wrapper around the raw index. @andrewrk recently popularized a nice “happy accident of language design” pattern for this in Zig. The core idea is to define an index via a non-exhaustive enum:

      const ItemIndex = enum(u32) {
          _
      };
      

      In Zig, enum designates a strongly-typed collection of integer constants, not a Rust-style ADT (there’s union(enum) for that). By default, the backing integer type is chosen by the compiler, but you can manually override it with enum(u32) syntax:

      const Color = enum(u16) { red, green, blue };
      

      Finally, Zig allows making enums non-exhaustive with _. In a non-exhaustive enum, any numeric value is valid, and some have symbolic labels:

      const FontWeight = enum(u16) {
          normal = 400,
          bold = 700,
      
          pub fn value(weight: FontWeight) u16 {
              return @intFromEnum(weight);
          }
      };
      
      test FontWeight {
          assert(FontWeight.value(.normal) == 400);
      
          const bold: FontWeight = @enumFromInt(700);
          assert(bold == .bold);
      }
      

      @intFromEnum and @enumFromInt builtins switch abstraction level between a raw integer and an enum value. So,

      const ItemIndex = enum(u32) {
          _
      };
      

      is a way to spell “u32, but a distinct type”. Note that there’s no strong encapsulation boundary here, anyone can @enumFromInt. Zig just doesn’t provide language-enforced encapsulation mechanisms.

      Putting everything together, this is how I would model an n-ary tree with parent pointers in Zig:

      pub const Tree = struct {
         nodes: []const Node.Data,
      
         pub const Node = enum(u32) {
             root = 0,
             invalid = std.math.maxInt(u32),
             _,
      
             pub const Data = struct {
                 parent: Node, // .invalid means no parent.
                 children: struct {
                     index: u32,
                     count: u32,
                 },
      
                 comptime {
                     assert(@sizeOf(Data) == 12);
                 }
             };
         };
      
         fn get(
             tree: *const Tree,
             node: Node,
         ) Node.Data {
             return tree.nodes[@intFromEnum(node)];
         }
      
         pub fn parent(
             tree: *const Tree,
             node: Node,
         ) ?Node {
             const result = tree.get(node).parent;
             return if (result == .invalid) null else result;
         }
      
         pub fn children(
             tree: *const Tree,
             node: Node,
         ) []const Node {
             const range = tree.get(node).children;
             return tree.nodes[range.index..][0..range.count];
         }
      };
      

      Some points of note:

      • As usual with indexes, you start with defining the collective noun first, a Tree rather than a Node.
      • In my experience, you usually don’t want an Index suffix in your index types, so Node is just enum(u32), not the underlying data.
      • Nested types are good! Node.Data feels just right.
      • For readability, the order is fields, then nested types, then functions.
      • In Node, we have a couple of symbolic constants. .root is for the root node that is stored first; .invalid is for whenever we want to apply offensive programming and make bad indexes blow up. Here, we use .invalid for a “null” parent. An alternative would be to use ?Node, but that would waste space; another option would be making the root its own parent.
      • If you care about performance, it’s a good idea to comptime assert the sizes of structures, not to prevent changes, but as a comment that explains to the reader just how large the struct is.
      • I don’t know if I like index/count or start/end more for representing ranges, but I use the former just because the names align in length.
      • Both tree.method(node) and node.method(tree) are reasonable shapes for the API. I don’t know which one I prefer more. I default to the former because it works even if there are several node arguments.

      P.S. Apparently I also wrote a Rust version of this post a while back? https://matklad.github.io/2018/06/04/newtype-index-pattern.html
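      For comparison with the linked Rust post, here is a minimal Rust sketch of the same newtype-index idea. This is an illustration under assumed names (`Node`, `Tree`), not matklad's code: a tuple struct around u32 plays the role of Zig's non-exhaustive enum.

```rust
// A newtype around u32: node indexes can't be confused with plain
// integers or with indexes into unrelated arrays.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Node(u32);

struct Tree {
    // parents[i] is the parent of node i; None for the root.
    // Rust's Option<Node> plays the role of the .invalid sentinel.
    parents: Vec<Option<Node>>,
}

impl Tree {
    fn root(&self) -> Node {
        Node(0)
    }

    fn parent(&self, node: Node) -> Option<Node> {
        // Going from Node back to a raw index is explicit, like @intFromEnum.
        self.parents[node.0 as usize]
    }
}

fn main() {
    let tree = Tree {
        parents: vec![None, Some(Node(0)), Some(Node(0))],
    };
    assert_eq!(tree.parent(tree.root()), None);
    assert_eq!(tree.parent(Node(2)), Some(Node(0)));
}
```

      Unlike Zig, Rust can make the `u32` field private to a module, so only `Tree` can mint `Node` values; the Zig version deliberately leaves that boundary open.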

    15. 🔗 matklad Static Allocation For Compilers rss

      Static Allocation For Compilers

      Dec 23, 2025

      TigerBeetle famously uses “static allocation”. Infamously, the use of the term is idiosyncratic: what is meant is not static arrays, as found in embedded development, but rather a weaker “no allocation after startup” form. The amount of memory a TigerBeetle process uses is not hard-coded into the ELF binary. It depends on the runtime command line arguments. However, all allocation happens at startup, and there’s no deallocation. The long-lived event loop goes round and round happily without alloc.

      I’ve wondered for years if a similar technique is applicable to compilers. It seemed impossible, but today I’ve managed to extract something actionable from this idea?

      Static Allocation

      Static allocation depends on the physics of the underlying problem. And distributed databases have surprisingly simple physics, at least in the case of TigerBeetle.

      The only inputs and outputs of the system are messages. Each message is finite in size (1MiB). The actual data of the system is stored on disk and can be arbitrarily large. But the diff applied by a single message is finite. And, if your input is finite, and your output is finite, it’s actually quite hard to need to allocate extra memory!

      This is worth emphasizing — it might seem like doing static allocation is tough and requires constant vigilance and manual accounting for resources. In practice, I learned that it is surprisingly compositional. As long as inputs and outputs of a system are finite, non-allocating processing is easy. And you can put two such systems together without much trouble. routing.zig is a good example of such an isolated subsystem.

      The only issue here is that there isn’t a physical limit on how many messages can arrive at the same time. Obviously, you can’t process arbitrarily many messages simultaneously. But in the context of a distributed system over an unreliable network, a safe move is to drop a message on the floor if the required processing resources are not available.

      Counter-intuitively, not allocating is simpler than allocating, provided that you can pull it off!
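      "Drop a message on the floor" falls out naturally as an API shape once capacity is fixed at startup. A hedged Rust sketch of the idea, with all names (`Inbox`, `try_push`) hypothetical:

```rust
// Sketch: with static allocation, "can't allocate" becomes an explicit,
// recoverable condition rather than a hidden reallocation. A fixed-capacity
// inbox refuses messages instead of growing; the network is assumed to retry.
struct Inbox {
    messages: Vec<Vec<u8>>, // backing storage reserved once, at startup
    capacity: usize,
}

impl Inbox {
    fn with_capacity(capacity: usize) -> Inbox {
        Inbox {
            messages: Vec::with_capacity(capacity),
            capacity,
        }
    }

    /// Returns false (dropping the message) when full; never reallocates.
    fn try_push(&mut self, message: Vec<u8>) -> bool {
        if self.messages.len() == self.capacity {
            return false;
        }
        self.messages.push(message);
        true
    }
}

fn main() {
    let mut inbox = Inbox::with_capacity(2);
    assert!(inbox.try_push(vec![1]));
    assert!(inbox.try_push(vec![2]));
    assert!(!inbox.try_push(vec![3])); // dropped on the floor
}
```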

      For Compilers

      Alas, it seems impossible to pull it off for compilers. You could say something like “hey, the largest program will have at most one million functions”, but that will lead to both wasted memory and poor user experience. You could also use a single yolo arena of a fixed size, like I did in Hard Mode Rust, but that isn’t at all similar to “static allocation”. With arenas, the size is fixed explicitly, but you can OOM. With static allocation it is the opposite — no OOM, but you don’t know how much memory you’ll need until startup finishes!

      The “problem size” for a compiler isn’t fixed — both the input (source code) and the output (executable) can be arbitrarily large. But that is also the case for TigerBeetle — the size of the database is not fixed, it’s just that TigerBeetle gets to cheat and store it on disk, rather than in RAM. And TigerBeetle doesn’t do “static allocation” on disk, it can fail with ENOSPACE at runtime, and it includes a dynamic block allocator to avoid that as long as possible by re-using no longer relevant sectors.

      So what we could say is that a compiler consumes arbitrarily large input, and produces arbitrarily large output, but those “do not count” for the purpose of static memory allocation. At the start, we set aside an “output arena” for storing finished, immutable results of compiler’s work. We then say that this output is accumulated after processing a sequence of chunks, where chunk size is strictly finite. While limiting the total size of the code-base is unreasonable, limiting a single file to, say, 4 MiB (runtime-overridable) is fine. Compiling then essentially becomes a “stream processing” problem, where both inputs and outputs are arbitrarily large, but the filter program itself must execute in O(1) memory.
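      The chunked, O(1)-scratch shape described above might look like this in Rust. The "compiler" here is a toy that just counts newlines per chunk; the function names are invented, and the 4 MiB per-chunk limit is taken from the text (where it is runtime-overridable, hard-coded here for brevity).

```rust
// Sketch: compilation as stream processing. Input arrives in chunks of
// bounded size, scratch memory is reused across chunks (O(1) intermediate
// state), and only the finished output accumulates (O(N) output arena).
const MAX_CHUNK: usize = 4 * 1024 * 1024;

fn process_chunk(chunk: &str, scratch: &mut Vec<usize>, output: &mut Vec<usize>) {
    assert!(chunk.len() <= MAX_CHUNK);
    scratch.clear(); // reuse the same allocation for every chunk
    for (i, b) in chunk.bytes().enumerate() {
        if b == b'\n' {
            scratch.push(i); // bounded by chunk size, not by total input
        }
    }
    output.push(scratch.len()); // immutable, accumulated result
}

fn main() {
    let mut scratch = Vec::new();
    let mut output = Vec::new();
    process_chunk("a\nb\n", &mut scratch, &mut output);
    process_chunk("c\n", &mut scratch, &mut output);
    assert_eq!(output, vec![2, 1]);
}
```

      The point of the shape is the separation: `scratch` never outlives a chunk and never grows past the chunk bound, while `output` is append-only and trivially persistable.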

      With this setup, it is natural to use indexes rather than pointers for “output data”, which then makes it easy to persist it to disk between changes. And it’s also natural to think about “chunks of changes” not only spatially (compiler sees a new file), but also temporally (compiler sees a new version of an old file).

      Are there any practical benefits here? I don’t know! But it seems worth playing around with! I feel that a strict separation between O(N) compiler output and O(1) intermediate processing artifacts can clarify a compiler’s architecture, and I won’t be too surprised if O(1) processing in compilers would lead to simpler code the same way it does for databases?

  3. December 22, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-22 rss

      IDA Plugin Updates on 2025-12-22

      New Releases:

      Activity:

    2. 🔗 r/LocalLLaMA DGX Spark: an unpopular opinion rss

      I know there has been a lot of criticism about the DGX Spark here, so I want to share some of my personal experience and opinion: I’m a doctoral student doing data science in a small research group that doesn’t have access to massive computing resources. We only have a handful of V100s and T4s in our local cluster, and limited access to A100s and L40s on the university cluster (two at a time). Spark lets us prototype and train foundation models, and (at last) compete with groups that have access to high-performance GPUs like the H100s or H200s.

      I want to be clear: Spark is NOT faster than an H100 (or even a 5090). But its all-in-one design and its massive amount of memory (all sitting on your desk) enable us, a small group with limited funding, to do more research.

      submitted by /u/emdblc

    3. 🔗 sacha chua :: living an awesome life La semaine du 15 dĂ©cembre au 21 dĂ©cembre rss

      Lundi, le quinze décembre

      J'ai emmené ma fille à son cours de gymnastique. Elle a travaillé ses roues. Elle a aussi envie d'ajouter un cours de gymnastique aérienne. D'une part, j'avais dit que si nous gérions bien ses devoirs, ce serait plus facile de dire oui. D'autre part, c'est un bon exercice pour la santé. Je pense que l'entraßnement individuel est meilleur pour ma fille parce qu'elle veut procéder à son propre rythme.

      Pour le souper, nous avons préparé des sushis avec des edamames et de la soupe au miso.

      Le mini-four a arrĂȘtĂ© de fonctionner. Heureusement, c'est notre deuxiĂšme mini-four du mĂȘme modĂšle, et nous avons le vieux mini-four dans l'abri de jardin pour les piĂšces dĂ©tachĂ©es. Au lieu de faire ses devoirs, ma fille a aidĂ© mon mari dans l'atelier et a appris des bases d'Ă©lectronique. Ensuite, ma fille a aidĂ© mon mari Ă  faire du pain. Je me suis un peu inquiĂ©tĂ©e pour ses devoirs, mais je pense que passer du temps ensemble Ă©tait tout aussi bien.

      Ils ont découvert une coccinelle dans le vieux mini-four. Ils l'ont sauvée et l'ont placée dans un petit bocal. Je lui ai donné un morceau de raisin et un bout d'essuie-tout que j'ai humecté. Je ne sais pas si elle pourra survivre jusqu'au printemps, mais elle est là, donc nous essayons.

      Mon mari s'est renseigné sur nos notes de latin que nous avons prises en 2011. AprÚs une brÚve recherche, je les ai trouvées. Elles étaient dans un vieux format TiddlyWiki, donc je les ai transformées en format Org Mode pour les exporter en livre électronique. Je n'étudie plus le latin depuis longtemps, donc j'oublie tout.

      J'ai rĂ©flĂ©chi Ă  l'aide : comment aider quelqu'un, comment recevoir de l'aide. Mon ami qui traversait une crise personnelle voulait de l'aide sous forme d'argent, mais je pense que l'aide qu'il a voulue ne lui sera pas utile. Ma fille n'a pas voulu d'aide avec ses devoirs. Peut-ĂȘtre que ma fille pense que ses efforts suffisent, et peut-ĂȘtre que cela lui suffit. Au lieu de m'inquiĂ©ter, je dois m'entraĂźner Ă  recevoir de l'aide moi-mĂȘme. C'est une des raisons pour lesquelles j'apprends le français avec ma tutrice, j'apprends Ă  parler de mes sentiments avec ma thĂ©rapeute, et j'apprĂ©cie la façon dont ma famille m'aide Ă  mĂ»rir. Je peux amĂ©liorer les processus pour que les gens puissent m'aider. Par exemple, pour le traitement des vidĂ©os de la prĂ©sentation ou de la discussion en direct, je dois simplifier et documenter le processus. Si les gens sont occupĂ©s, ce n'est pas grave, je le fais lentement. Si les gens veulent aider, ils peuvent aider.

      Tuesday, December sixteenth

      Today I got back into a normal routine. I worked on Deck the Halls at the piano, followed a short exercise video, and finally took a long walk in the park. I don't want to walk on ice because it's slippery, so I walked on the sidewalk around the park.

      Someone was discussing the moderation of the #emacs channel on IRC. He seemed frustrated. There isn't much I can do, but I suggested a few things he could try.

      I took my daughter to her last art class. She was proud that her artwork was displayed in the window. She gathered up her other pieces in her portfolio to carry them home. She enjoyed the class with her friend, but she sometimes found it too noisy, so she doesn't want to continue for now. We'll keep a fairly open schedule without many classes, so that we can go skating or play with her friends whenever she feels like it.

      In my therapy session, we talked about feelings. I intellectualize difficult situations instead of feeling them, so my homework over the Christmas holidays includes noticing when I use that defense mechanism. I'll also keep a feelings journal.

      I set up a spell checker thanks to @vincek's course « Emacs expliqué à mes enfants ».

      Wednesday, December seventeenth

      I wrote a small function to look up words in a few online dictionaries. Little by little, I'm improving my writing environment.
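      Her lookup helper presumably lives in Emacs Lisp; as a rough illustration of the idea, here is a sketch in Python (the dictionary sites are hypothetical examples, not necessarily the ones she uses):

      ```python
      from urllib.parse import quote

      def dictionary_urls(word, sites=None):
          """Build lookup URLs for one word across several online dictionaries.
          The caller can then open each URL in a browser."""
          sites = sites or [
              "https://en.wiktionary.org/wiki/{}",
              "https://www.wordreference.com/fren/{}",
          ]
          # percent-encode so accented words like "fenĂȘtre" survive in a URL
          return [s.format(quote(word)) for s in sites]
      ```

      The Emacs version would bind something like this to a key so a word at point can be looked up without leaving the buffer.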

      This afternoon I had an appointment to get my cargo bike serviced. I rode over to the bike shop. The mechanic gave me a quote for the service and some advice about specialized tires for ice.

      Then I took the subway, which was having problems. Instead of waiting for the shuttle at Keele station, I walked the short distance home.

      I should probably process the conference videos. A little work can get them ready for publication. I'll combine the videos with the normalized audio, review everything, and publish to YouTube and our site. A few videos had conversion problems, so I need to review their last few minutes carefully to catch errors.
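      The muxing step she describes (pairing each talk's video with its separately normalized audio) might look roughly like this with ffmpeg; a minimal sketch with hypothetical file names, not her actual pipeline:

      ```python
      def mux_command(video_in, audio_in, out_path):
          """Build an ffmpeg command that pairs the original video stream
          with a normalized audio track, copying the video so it is not
          re-encoded needlessly."""
          return [
              "ffmpeg", "-y",
              "-i", video_in,       # original recording
              "-i", audio_in,       # loudness-normalized audio
              "-map", "0:v:0",      # video from input 0
              "-map", "1:a:0",      # audio from input 1
              "-c:v", "copy",       # keep the video stream as-is
              "-c:a", "aac",        # encode the replacement audio
              out_path,
          ]

      # run it with: subprocess.run(mux_command(...), check=True)
      ```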

      ・・・・・

      After school, I took my daughter to the skating rink at the park to play with her friend. They had a lot of fun playing tag with her friend's father, who was too fast for them. I was happy to watch them. We drank hot chocolate while the Zamboni resurfaced the ice.

      We ate leftovers. After supper, I worked on the conference videos. Two videos had encoding errors, so I used the original videos and changed our process. My next step is to convert the videos to WebM so I can upload them to our server. I also need to review the captions, but that can happen gradually.

      Thursday, December eighteenth

      An important milestone: I'm getting more comfortable writing in French on my phone. That means I can add to my journal anytime, anywhere. I still look up words in the dictionary, which isn't so convenient on a phone because of the small screen, but it's tolerable. At the very least, it can replace scrolling Reddit for the umpteenth time. Someday I'll be able to dictate to my phone, which would be more useful on winter walks, when typing will be difficult.

      I took another long walk in the park. The doctor said walks are good for my health, so I try to take them often. Someday I'd like to wander for several hours, but for now a walk of thirty minutes to an hour is enough.

      My husband's sourdough experiments continue. He bought a few bannetons. My daughter helped him with this batch during recess. She likes scoring different patterns into the bread. It's perfect: spending time together, enjoying food, and practicing art. It takes patience, but that's life, and she can learn the value of things that take time. It's probably more important than high grades at school. (Or at least that's what I tell myself when I worry.)

      When I get home, I'll have thirty minutes before her lunch break. I can do one short task, like sending messages or checking videos. My morning self-care routine takes up most of the morning. I wonder how other people organize themselves.

      ・・・・・

      I decided to cook lunch instead of doing small tasks. I made grilled cheese sandwiches. We enjoyed them.

      After lunch, I worked on the conference videos. I added chapters to a few videos and fixed a few captions.

      ・・・・・

      AprĂšs l'Ă©cole, ma fille a voulu aller chez Sephora pour acheter de la brume parfumĂ©e. Elle en a cherchĂ© en ligne. Mon mari a voulu acheter du papier toilette Ă  No Frills, donc nous avons pris le mĂ©tro jusqu'au Dufferin Mall. Elle a appris Ă  choisir par elle-mĂȘme. C'est pour ça qu'elle a ses propres Ă©conomies. Elle a choisi « darling » qui sent les fleurs. J'ai aimĂ© voir ma fille gagner en confiance et en autodĂ©termination. Elle a mis longtemps Ă  choisir, mais j'ai Ă©tĂ© patiente parce que j'ai pu Ă©crire mon journal sur mobile.

      Ensuite, nous avons mangé un souper de pùtes au pesto à la tomate.

      Puis nous avons joué à la marchande comme dans sa classe de théùtre. Nous avons lancé des idées pour les rÎles, donc nous avons improvisé dans la situation qu'elle a choisie. Elle a dit que j'étais drÎle.

      J'ai travaillé sur d'autres vidéos, et j'ai corrigé une erreur dans le logiciel d'affichage des chapitres.

      Friday, December nineteenth

      I got up a little late because my phone hadn't charged properly. Fortunately, there was still a bit of time before school, so I was able to wake my daughter in time for a quick breakfast.

      While she attended virtual school, I did my morning routine. Then I worked on captions. Now that things have calmed down, I can enjoy preparing the resources. It's the last day before her winter break, so I should do the tasks that require concentration.

      My daughter gave her presentation on Chinese New Year. She was so proud. She said her classmates got hungry because of her presentation on traditional food.

      Coincidentally, my husband made sticky rice with chicken for lunch. We enjoyed it.

      The ladybug was more active. We gave it a piece of grape and a piece of apple. My daughter moistened the bit of paper towel.

      This afternoon, I kept working on the videos. They were almost all done; only a few remained.

      As my walk for the day, I did the grocery shopping. Then I played cards with my daughter. I kept winning despite my subtle efforts not to. My daughter got a little grumpy. Next time, I'll suggest cooperative games like Space Escape, or playing Pictionary or charades together. That way, nobody really wins every round, and nobody ends up mad at me.

      ・・・・・

      She felt better and came back to eat chicken wings. She was cold too, so she wanted hugs.

      Saturday, December twentieth

      I streamed on Twitch while I worked on the captions that a speaker had corrected. I wrote a short function to copy text into the current chapter. Surprisingly, three viewers showed up, and they made a few comments about my process. Before making more video chapters, I think I should copy the IRC and YouTube discussions onto the wiki pages so I can send them to the speakers. Then I can get back to making chapters.

      I thought a bit more about help. Captioning seems like an easy opportunity for people to help. I've documented the process and built a few tools. But it's often easier to just keep going myself, because then I don't have to wait. Still, it's possible for people to volunteer to caption a few videos. I set those aside and work on the other videos first. Do I want to invite volunteers to help with the remaining videos? Maybe. I should improve the backstage page so it's easier to choose among the remaining tasks, and I should document the process to help beginners. It's tempting to work alone, but it's good to create opportunities for other people to help. Besides, the documentation will help me once I've forgotten everything by next year.

      In the afternoon, I went to the pharmacy for a flu shot. Even though this year's vaccine isn't a great match for the most common flu strains, it's still somewhat protective. My daughter walked halfway with me, then went back home and headed out with my husband to the piercer. She wanted to wear earrings. She's old enough to decide for herself. I helped her with the saline cleaning.

      I prepared the newsletter for the Bike Brigade. Since nobody volunteered, I went back to my more automated process. I hate processes that require lots of clicks and offer lots of chances to make mistakes. When a volunteer steps up, I'll bring back the manual process.

      We also played a little café simulation in Minecraft with her aunt. My daughter handled serving, my sister handled the salads, and I alternated between crepes and cakes. We kept up with the time limits. After my evening routine, we also played Space Escape. We won together!

      Sunday, December twenty-first

      After yesterday's flu shot, my neck is a little sore, so I'm taking it easy today. I'll do the laundry and maybe copy some of the conference discussions. But first of all, maybe I'll study a little French.

      My journal-analysis software says I've written fifty-two entries so far. That makes a total of 10,766 words (1,381 lemmas). I started learning French partly so I could help my daughter, but I find I enjoy the stimulation of writing in another language. I certainly write more entries about my life. Analyzing my vocabulary encourages me to try new words and longer entries. At a Quantified Self conference in 2012, I met someone who puts their journal into their spaced repetition system to help remember it. After each appointment with my tutor, I put my sentences into Anki to study the vocabulary. Along the way, I relive those moments. I can't speak fluently yet. Maybe I need to practice speaking and find my own method for practicing listening comprehension. Repeating along with the audio seems useful.
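      A rough version of that kind of journal tally can be sketched in a few lines (a toy proxy only: it counts distinct lowercase word forms, whereas real lemma counts like hers would need a French lemmatizer):

      ```python
      import re
      from collections import Counter

      def journal_stats(entries):
          """Count entries, total words, and distinct word forms.
          Accented letters are kept so French words are split correctly."""
          words = []
          for text in entries:
              # [^\W\d_] matches letters only, including accented ones
              words.extend(w.lower() for w in re.findall(r"[^\W\d_]+", text))
          return {
              "entries": len(entries),
              "words": len(words),
              "distinct_forms": len(Counter(words)),
          }
      ```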

      L'outil d'IA que j'ai essayĂ© est sorti de sa phase bĂȘta et a maintenant besoin d'un abonnement de 29 dollars chaque mois. En ce moment, je me demande si je veux l'utiliser, ou si je veux utiliser d'autres outils comme ChatGPT ou Gemini, ou si je veux crĂ©er mon propre outil. Je pense que pour le moment, je me concentre principalement sur l'Ă©criture. À cause de COVID et du cĂŽtĂ© chronophage de l'Ă©ducation de mon enfant, je ne suis pas intĂ©ressĂ©e par des sujets frĂ©quents comme commander au restaurant, les voyages, ou mĂȘme la prĂ©sentation et le bavardage. Je veux Ă©crire et Ă©couter des informations sur Emacs et d'autres sujets techniques, donc je peux commencer Ă  lire « Emacs expliquĂ© Ă  mes enfants ». Je peux aussi utiliser la synthĂšse vocale pour transformer mon journal en audio, que je peux utiliser pour m'entraĂźner. J'ai ajoutĂ© une fonction pour attendre aprĂšs chaque phrase pendant un multiple du temps initial pour pouvoir rĂ©pĂ©ter plus facilement. MĂȘme si peut-ĂȘtre penser Ă  Ă©couter la prononciation quand je cherche des mots dans le dictionnaire en ligne serait suffisant quand j'utilise mon portable, ce qui arrive plus souvent.

      I couldn't focus on my work, so I took a nap in the afternoon. After two hours, my daughter woke me up because she was proud of having helped my husband can the beets he bought two weeks ago. They used the pressure cooker. Since one jar didn't seal properly, he put it in the refrigerator. They also made a pineapple and beet cake, which my daughter likes.

      After supper, I got some energy back. I played the little café simulation in Minecraft with my daughter and my sister, like yesterday. This time, our game went smoothly. My sister made lots of salads in batches. She would say, "Ten Greek salads are ready," and my daughter served them to the customers. I made plain crepes and cakes nonstop and combined them with other ingredients for each order, so I kept saying, "Chocolate banana cake on the counter." We easily cleared two more levels. I think there's one level left.

      You can e-mail me at sacha@sachachua.com.

    4. 🔗 r/reverseengineering OGhidra: Automating dataflow analysis and vulnerability discovery in Ghidra via local Ollama models rss
    5. 🔗 r/LocalLLaMA GLM 4.7 released! rss

      GLM-4.7 is here! GLM-4.7 surpasses GLM-4.6 with substantial improvements in coding, complex reasoning, and tool usage, setting new open-source SOTA standards. It also boosts performance in chat, creative writing, and role-play scenarios.

      Weights: http://huggingface.co/zai-org/GLM-4.7
      Tech Blog: http://z.ai/blog/glm-4.7

      submitted by /u/ResearchCrafty1804
      [link] [comments]

    6. 🔗 r/LocalLLaMA GLM 4.7 is out on HF! rss

      submitted by /u/KvAk_AKPlaysYT
      [link] [comments]

    7. 🔗 r/reverseengineering ImHex Hex Editor v1.38.1 - Better Pattern Editor, many new Data Sources, Save Editor Mode and more rss
    8. 🔗 r/LocalLLaMA I made Soprano-80M: Stream ultra-realistic TTS in <15ms, up to 2000x realtime, and <1 GB VRAM, released under Apache 2.0! rss

      Hi! I’m Eugene, and I’ve been working on Soprano: a new state-of-the-art TTS model I designed for voice chatbots. Voice applications require very low latency and natural speech generation to sound convincing, and I created Soprano to deliver on both of these goals.

      Soprano is the world’s fastest TTS by an enormous margin. It is optimized to stream audio playback with < 15 ms latency, 10x faster than any other realtime TTS model like Chatterbox Turbo, VibeVoice-Realtime, GLM TTS, or CosyVoice3. It also natively supports batched inference, benefiting greatly from long-form speech generation. I was able to generate a 10-hour audiobook in under 20 seconds, achieving ~2000x realtime! This is multiple orders of magnitude faster than any other TTS model, making ultra-fast, ultra-natural TTS a reality for the first time. I owe these gains to the following design choices:

      1. Higher sample rate: most TTS models use a sample rate of 24 kHz, which can cause s and z sounds to be muffled. In contrast, Soprano natively generates 32 kHz audio, which sounds much sharper and clearer. In fact, 32 kHz speech sounds indistinguishable from 44.1/48 kHz speech, so I found it to be the best choice.
      2. Vocoder-based audio decoder: Most TTS designs use diffusion models to convert LLM outputs into audio waveforms. However, this comes at the cost of slow generation. To fix this, I trained a vocoder-based decoder instead, which uses a Vocos model to perform this conversion. My decoder runs several orders of magnitude faster than diffusion-based decoders (~6000x realtime!), enabling extremely fast audio generation.
      3. Seamless Streaming: Streaming usually requires generating multiple audio chunks and applying crossfade. However, this causes streamed output to sound worse than nonstreamed output. I solve this by using a Vocos-based decoder. Because Vocos has a finite receptive field, I can exploit its input locality to completely skip crossfading, producing streaming output that is identical to unstreamed output. Furthermore, I modified the Vocos architecture to reduce the receptive field, allowing Soprano to start streaming audio after generating just five audio tokens with the LLM.
      4. State-of-the-art Neural Audio Codec: Speech is represented using a novel neural codec that compresses audio to ~15 tokens/sec at just 0.2 kbps. This helps improve generation speed, as only 15 tokens need to be generated to synthesize 1 second of audio, compared to 25, 50, or other commonly used token rates. To my knowledge, this is the highest compression (lowest bitrate) achieved by any audio codec.
      5. Infinite generation length: Soprano automatically generates each sentence independently, and then stitches the results together. Theoretically, this means that sentences can no longer influence each other, but in practice I found that this doesn’t really happen anyway. Splitting by sentences allows for batching on long inputs, dramatically improving inference speed.

      I’m a second-year undergrad who’s just started working on TTS models, so I wanted to start small. Soprano was only pretrained on 1000 hours of audio (~100x less than other TTS models), so its stability and quality will improve tremendously as I train it on more data. Also, I optimized Soprano purely for speed, which is why it lacks bells and whistles like voice cloning, style control, and multilingual support. Now that I have experience creating TTS models, I have a lot of ideas for how to make Soprano even better in the future, so stay tuned for those!

      Github: https://github.com/ekwek1/soprano
      Huggingface Demo: https://huggingface.co/spaces/ekwek/Soprano-TTS
      Model Weights: https://huggingface.co/ekwek/Soprano-80M

      - Eugene

      submitted by /u/eugenekwek
      [link] [comments]

    9. 🔗 r/reverseengineering GitHub - Fatmike-GH/MCPDebugger: A lightweight MCP debugger designed for learning and experimentation. Supports Windows executables (x86 and x64). rss
    10. 🔗 r/LocalLLaMA NVIDIA made a beginner's guide to fine-tuning LLMs with Unsloth! rss

      Blog Link: https://blogs.nvidia.com/blog/rtx-ai-garage-fine-tuning-unsloth-dgx-spark/

      You'll learn about:

      ‱ Training methods: LoRA, FFT, RL
      ‱ When to fine-tune and why + use-cases
      ‱ Amount of data and VRAM needed
      ‱ How to train locally on DGX Spark, RTX GPUs & more

      submitted by /u/Difficult-Cap-7527
      [link] [comments]

    11. 🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator
      [link] [comments]

    12. 🔗 r/LocalLLaMA major open-source releases this year rss

      submitted by /u/sahilypatel
      [link] [comments]

  4. December 21, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-21 rss

      IDA Plugin Updates on 2025-12-21

      New Releases:

      Activity:

      • chernobog
        • a52f5827: fix: Fix unsafe ctree modification and re-enable constant folding han

        • 82016e82: fix: Prevent crashes during plugin unload and static destruction
      • IDA-VTableExplorer
        • 3081ff81: fix: add actions back to browse functions and annotate all vtables
        • 1612b4e2: feat: replace JPEG images with PNG for better quality in README
        • 3fbe47f1: Refactor VTable handling and enhance RTTI parsing
        • cf02df00: feat: Update build-all target to include clean step for improved buil

        • 0f930310: feat: Add clean target to Makefile for removing build artifacts
      • IDAPluginList
      • twdll
    2. 🔗 r/LocalLLaMA 1 year later and people are still speedrunning NanoGPT. Last time this was posted the WR was 8.2 min. Its now 127.7 sec. rss

      Previous post for context. Also note that the original NanoGPT run from Andrej Karpathy was 45 min. I think this is a great way to understand progress in overall algorithmic speed improvements, as I'm sure the big labs are using similar speedup tricks.

      submitted by /u/jd_3d
      [link] [comments]

    3. 🔗 r/LocalLLaMA llama.cpp appreciation post rss

      submitted by /u/hackiv
      [link] [comments]

    4. 🔗 sacha chua :: living an awesome life La semaine du 7 dĂ©cembre au 14 dĂ©cembre rss

      Monday, December eighth

      I focused on my French journal before the appointment with my tutor. I wrote enough to make good use of the time, despite a busy previous week. We also practiced conversation. I used Google Chrome's Live Captions to understand her when she spoke too quickly.

      I took my daughter to her gymnastics class. It was apparently parents' week, so I got to watch her in the gym. I took a few videos to show her.

      I did a lot of laundry, because I hadn't been able to do any during the conference.

      Tuesday, December ninth

      This morning, I continued catching up. For a month or two before the conference I didn't do much consulting work, so a few tasks had accumulated. I wasn't stressed, I just had to manage my time. I enjoyed helping my clients.

      One of my friends called me to talk about a personal crisis. I suppose this is what a midlife crisis is. It's very hard, but you have to keep going.

      This afternoon, I reflected on my values for my homework from the stress-management session with my therapist. I think I can simplify them down to this list: responsibility, adaptability, relationships, and curiosity. It's useful for making choices.

      Today it's cold and gray, with snow and a strong wind. The forecast called for more snow. I let my daughter choose between going to art class and staying home. She chose to stay, so we spent a quiet evening. We played card games. My daughter likes strategy games. So do I. She's starting to learn to plan ahead when she plays Exploding Kittens and Tacos versus Burritos. She has a lot of fun because the cards are funny.

      We practiced a little French with the AI. She's learning weather vocabulary at school, so she tried out a few sentences. Then I wrote my journal while she watched KPop Demon Hunters for the umpteenth time.

      Tomorrow, I'll record a video on preparing the Bike Brigade newsletter so I can hand it off to the other volunteer. I'll also record a congratulations video in French. If there's time, I also want to process the conference videos.

      Wednesday, December tenth

      My husband got up very early to prepare for his medical exam. He had to fast for it, so he was very hungry and bored, and he started two sourdough bread recipes. I helped my daughter with her morning routine. While she attended virtual school and my husband was out, I had to manage both recipes while also cooking a chicken rice porridge for my husband's lunch.

      So I felt a little frazzled, but I was also glad, because my husband was counting on me to handle these tasks. He doesn't often ask for help. It was a pleasure to help him, even if the situation was funny.

      I finished all three recipes myself: two kinds of bread and the porridge. It was the second time we had tried making sourdough, and this time it worked! I think I let the bread rest longer, which worked better. And my daughter likes our sourdough! Finally, our first victory! She had judged my previous attempts not as good as the bread she usually buys at the farmers' market.

      I also recorded a short video wishing someone a happy birthday in French. It was good speaking practice.

      For exercise, I shoveled a lot of snow. It was raining too, so the snow was heavy. I couldn't rest because I had too many tasks.

      Despite the snow and rain, my daughter also sent off a letter to Santa. We're late for Canada Post's Letters to Santa program, but it's still worth a try. She wants a treasure hunt for herself, and socks for me. The treasure hunt is a tradition in my family. I'll write some clues and hide them around the house. Maybe this year I can write a few of the clues in French.

      My husband is retrying the bread recipe now. Little by little, we're improving. The intermediate results are delicious, so practicing is pleasant.

      Thursday, December eleventh

      I was tired. My daughter's eye still hurt a little even after a night's sleep, so I worried a bit. At least she was able to attend virtual school. I suppose it was a lower-energy day.

      I walked my daughter to the Stockyards because she felt like a long walk. For a little treat, I bought a box of pastries at Marry Me Mochi, and she saved them for after supper. My husband and daughter cooked mashed-potato grilled cheese sandwiches, a new idea my husband found online. They were delicious.

      After a quick supper, I held an information session about the Bike Brigade newsletter. I had written documentation. During the session, I explained the process.

      Friday, December twelfth

      My daughter's eye had been sore and swollen for two days, so I focused my efforts on getting help. She didn't want to attend class. This morning, I called a few places to try to get an appointment, alternating with comforting hugs. After a long wait and a few messages, I got an appointment at the Sick Kids hospital.

      I was tired, so I took a thirty-minute nap at noon.

      This afternoon, I took my daughter on the subway to the ophthalmologist at the hospital. We waited for two hours, which was very boring for my daughter but necessary. I let her watch lots of videos and play a few games.

      The ophthalmologist said my daughter has a stye, so she recommended warm compresses and erythromycin. She also noticed that some eyelashes are rubbing against the eye, so she recommended eye drops. I dropped my daughter off at home and went to the pharmacy to buy the erythromycin.

      After all of that, which took the whole day, I was very tired.

      Saturday, December thirteenth

      The heated eye mask seems to be helping my daughter's eye. She wore it for ten minutes last night and again this morning. Her eye is less swollen now, but it still hurts a little.

      She finds it hard to concentrate on her homework. Math is fun, but the language assignments are boring. She has been putting her tasks off for several days, and now they've grown into a big pile. I suggested working through them little by little and doing some of each kind of homework so her teacher can assess the different subjects. I worked on my French homework in her room so she wouldn't feel alone. Sometimes she needs a hug before getting back to work. I'm not allowed to remind her about her homework, just to hug her. Well, we'll see. On one hand, I want my daughter to succeed. On the other hand, she's the one who has to discover what works well, and right now is the ideal time to experiment because the stakes are low. Today she wants to catch up on all her overdue reading homework instead of doing a little of everything. It's her decision.

      After her homework, she wants to go to KidSpark to play in the pretend store. I can probably take her by bike despite the snow and ice. The subway isn't running this weekend, so we'll have to make do. I don't have special tires for ice, so I'll have to ride carefully.

      ・・・・・

      We all went to KidSpark despite the subway closure from Ossington to Spadina. I didn't have the energy to bike, so we had to take the subway. The shuttle was slow and crowded, but we finally arrived.

      We only played for an hour, but our daughter had a lot of fun, so I was glad we came. We played in the pretend store, and we also played with the new construction toys. There were lots of children, so it was noisy, and our daughter used the ear protectors from the sensory backpack.

      We bought some buns and shrimp dumplings on the way back, before waiting a long time for the shuttles. The shuttles were very crowded, and our daughter got cold walking home. But we persevered.

      When we got home, we all drank tea. My husband and our daughter cooked two batches of small thick pancakes, and I did the dishes.

      Sunday, December fourteenth

      I was tired, so I slept in. My daughter got up before me. She knocked over the bag of cereal by accident and got a little grumpy. She got grumpier when we mentioned her homework. She has a presentation next week, so she needs to prepare. Still, I can't force her. I keep telling myself: it's her experience, not mine.

      So, since she's grumpy, maybe I have time for my own tasks. I need to produce the business tax return, which requires concentration. I can write my journal before Monday's appointment with my tutor, and I have the homework for Tuesday's stress-management session. I also want to work on the rest of the conference work. Lots to do.

      My stress-management homework includes describing my feeling and rating it as a percentage. That rating is surprisingly difficult. I'm lost. So I suppose that's what I need to learn.

      ・・・・・

      My daughter came back from her room in a fairly reasonable mood. She ate a bit of food and got some hugs. I don't think she worked on her homework. Her eye hurts and now both eyes are itchy, her new molar hurts, she was tired from her homework
 There's not much I can do, just comforting hugs and helping with her evening routine.

      Reflection

      I'm gradually expanding my vocabulary. I can now write enough that reading my vocabulary entries out loud to my tutor (and chatting a little about stuff along the way) takes up the hour. It's still good pronunciation practice while I work on picking up more words and internalizing the pronunciation rules, so it's probably a good idea to continue that instead of shifting it to AI.

      New root words

      absence, accumuler, adaptabilitĂ©, amĂ©lioration, anniversaire, annulation, anticiper, apparemment, appeler, apprĂ©cier, attente, attentivement, automatisation, bonder, bouillie, bruyant, cacher, car, certain, chauffer, choix, cil, commencer, comprendre, compresse, concentration, connecter, conseiller, construction, contempler, contenter, contrĂŽler, coulisse, court, crise, crĂȘpe, curiositĂ©, cĂąliner, cĂ©rĂ©ale, description, deuxiĂšme, diffĂ©rence, diffĂ©rent, documentation, droit, dĂ©cider, dĂ©claration, dĂ©couvrir, dĂ©licieux, dĂ©manger, dĂ©rouler, effet, effort, enfin, enfler, enjeu, ennuyant, ennuyer, entreprise, envie, essai, examen, expliquer, expĂ©rience, expĂ©rimenter, faible, falloir, façon, fenĂȘtre, fermeture, fermier, feuilletĂ©, fiscal, forcer, former, fournĂ©e, frotter, fĂ©licitation, glace, goutte, gras, griller, gris, gros, gymnase, gĂ©nĂ©ral, hĂŽpital, idĂ©al, inattendu, indice, inspirant, intermĂ©diaire, jeĂ»ner, jouet, joyeux, juger, lecture, lent, lessive, lettre, longtemps, lors, lourd, mal, masque, mathĂ©matique, mois, molaire, montrer, mĂ©dical, mĂ©tro, mĂȘler, navette, nourriture, noĂ«l, obtenir, oeil, ophtalmologue, organisation, orgelet, outil, partager, partout, perdre, personnel, persĂ©vĂ©rer, phrase, plan, pneu, porter, poste, pourcentage, processus, produire, prĂ©cĂ©dent, prĂ©cĂ©der, purĂ©e, quarantaine, raisonnable, rapidement, rattraper, recommencer, recommander, reconnecter, relation, remarquer, reposer, responsabilitĂ©, retard, rĂ©duire, rĂ©pondre, rĂ©sultat, rĂ©ussir, sauter, sauvegarder, scĂšne, sembler, sensoriel, sentiment, serviable, sieste, similaire, situation, soir, soirĂ©e, sommeil, sorte, souhaiter, spĂ©cial, spĂ©cialisĂ©, stratĂ©gie, stresser, succĂšs, suffisamment, supplĂ©mentaire, supposer, surtout, sĂ©ance, taille, thĂ©, toutefois, tradition, transcription, transformation, transfĂ©rer, vaisselle, valoir, victoire, volet, ça, Ă©nergie, Ă©niĂšme, Ă©pais, Ă©rythromycine, Ă©tonnamment, Ă©tude, Ă©valuation, Ă©valuer, Ɠil

      You can e-mail me at sacha@sachachua.com.

    5. 🔗 r/reverseengineering From UART to Root: Breaking Into the Xiaomi C200 via U-Boot rss
    6. 🔗 Register Spill Joy & Curiosity #67 rss

      Last issue of the year, let's do this!

      This week, Ryan and I got to interview DHH. It's very rare that I get nervous before an online conversation, but this was one of those times. I mean, that's the guy who made Rails, man! I wouldn't be here without Rails. Rails is what I did for the first seven years of my career. Rails is the reason why I have a career. I read every book he and Jason have ever written, of course, and 37signals has made as deep an impression as a company can make on probably anybody who's worked in a startup between 2008 and 2015.

      
 and then we had a great conversation. It's been a few days, and different parts of it keep popping back into my head. David said quite a few things that I now feel I have to share. Some things about marketing that resonate with what we've been talking about internally; some things I want the world to hear; some things that were funny; other things that were very fascinating (he said he still writes 95% of his code by hand); and the rant on cookie banners that I want politicians to hear.

      But here's something that I want to leave you with, in this last edition of the year, this year that brought and announced more change to this profession than any other year I've lived through as a working software developer. Here's something that David said that sums up why I'm excited and so curious about where all of this is going, something that I hope makes you feel something positive too:

      "Where does the excitement come from? First and foremost, I love computers and I love to see computers do new things. It's actually remarkable to me how many people who work in tech don't particularly like computers. Yes, even programmers who have to interact with them every day and make these computers dance, not all of them like computers. I love computers. I love computers just for the-- sheer machine of it. I'm not just trying to be instrumental about it. I'm not just trying to use computers to accomplish something. There's a whole class of people who view the computer just as a tool to get somewhere. No, no, no. For me, it's much deeper. I just love the computer itself and I love to see the computer do new things. And this is the most exciting new thing that computers have been doing, probably in my lifetime. Or at least it's on level with the network-connected computer. Yes."

      The computer can now do new things.

      • My teammate Tim wrote about how he ported his TUI framework from Zig to TypeScript and how, in the process of porting it, he noticed that he was getting in the way of the agent, slowing it down and costing more tokens. So he took his hands off the wheel and what we ended up with is this: A Codebase by an Agent for an Agent. I've shared this story quite a few times in person. I'm really happy it's out now, so we have proof: this is a world-class terminal expert and programmer letting an agent write 90% of the code, and ending up with something that is really, really good. (Also, side note: I contributed the images and, man, it's so fun to put stuff like this out into the world.)

      • This was fantastic: Jeff Dean and Sanjay Ghemawat with Performance Hints. When I opened it I thought I'd skim it, but then I read the whole thing, looked at a lot of the examples, asked ChatGPT some questions along with screenshots. The writing is clear and precise and simple, the section with the napkin math is impressive, the emoji map optimization is what made me open ChatGPT, and then at the end there, in the CLs that demonstrate multiple techniques section, there's this header 3.3X performance in index serving speed! and when you click on it you'll read that they "found a number of performance issues when planning a switch from on-disk to in-memory index serving in 2001. This change fixed many of these problems and took us from 150 to over 500 in-memory queries per second (for a 2 GB in-memory index on dual processor Pentium III machine)" and then you realize what an impressive cathedral of software engineering Google's infrastructure is. Click here for a good time, I'm telling you.

      • The TUI renaissance isn't over: Will McGugan just released Toad, a "unified experience for AI in the terminal." Taking inspiration from Jupyter notebooks is very smart and I love those little UI interactions he built. Good stuff.

      • The title is "Prompt caching: 10x cheaper LLM tokens, but how?" so you might think that this is about prompt caching, but, haha, that's silly. Listen, this is about everything. It's one of the best all-in-one explainers of how transformers work that I've come across. It's by Sam Rose, who's very good at visual explanations, and here he does a full explanation of how text goes into an LLM and text comes out the other end, including visuals, pseudo-code, in-depth explanations. It's very, very good. If you don't know how a transformer works, do yourself a favor and read this. If you do know how it works, look at this and smile at the visualizations.

      • Imagine you're holding two rocks. One has written on it: "terminals can display images now, thanks to kitty's terminal graphics protocol". The other: "when you think about it, a GUI framework does nothing but create images and display them, right?" Now the question is: what happens if you smash those two rocks together? This: "DVTUI" (note the quotes!), which takes a GUI framework (DVUI), gets it to save PNGs instead of rendering them to the screen, and then uses a TUI framework (libvaxis) to render those images in the terminal. To quote: "All that happens every single frame. And yet it works."

      • As you know, I'm a sucker for lists like this one: Tom Whitwell's 52 things I learned in 2025. Wonderful.

      • 
 and it brought me to this: write to escape your default setting. "Writing forces you to tidy that mental clutter. To articulate things with a level of context and coherence the mind alone can't achieve." Yes. Now, in times of LLMs, it's probably more apparent than ever before that writing (real writing; writing you do) is thinking.

      • How I wrote JustHTML using coding agents: "After writing the parser, I still don't know HTML5 properly. The agent wrote it for me. I guided it when it came to API design and corrected bad decisions at the high level, but it did ALL of the gruntwork and wrote all of the code." I bet there are a lot of people who read this and think "ha! so he doesn't know HTML5 still!" And yet I wonder: was that the goal? It's a very good post. A very calm, practical post, but one that raises a fundamental question: JustHTML is now "3,000 lines of Python with 8,500+ tests passing" and "passes 100% of the html5lib test suite, has zero dependencies, and includes a CSS selector query API" -- how many more dependencies could we turn into that now?

      • Martin Kleppmann: "I find it exciting to think that we could just specify in a high-level, declarative way the properties that we want some piece of code to have, and then to vibe code the implementation along with a proof that it satisfies the specification. That would totally change the nature of software development: we wouldn't even need to bother looking at the AI-generated code any more, just like we don't bother looking at the machine code generated by a compiler."

      • "The perfection of snow in the paintings of Danish artist Peder MĂžrk MĂžnsted."

      • Stripe Press: Tacit. "The mechanism for developing tacit knowledge is straightforward but slow: repeated practice that gradually moves skills from conscious effort to automatic execution. The mechanism for transmitting it is even slower: apprenticeship, where a learner works alongside someone experienced, observing and imitating until their own judgment develops. This is why tacit knowledge often concentrates in lineages, unbroken chains of practitioners passing expertise to the next generation. [
] AI has elevated the distinction between what is tacit and what is not. Language models can summarize and automate, but when they attempt to create something that carries the signature of human craft, the result is often flat." In the words of Tamara Winter: Tacit is a series of mini-documentaries that are "vignettes of craftspeople who provide a pretty compelling answer to the question, 'after AI, does mastery still matter?'"

      • I need to try this: Geoffrey Litt's JIT Guide Workflow.

      • This fantastic post by Jakob Schwichtenberg shifted something in my head: "Our very definition of intelligence encodes the bias toward speed. The modern definition of intelligence is extremely narrow. It simply describes the speed at which you can solve well-defined problems. Consider this: if you get access to an IQ test weeks in advance, you could slowly work through all the problems and memorize the solutions. The test would then score you as a genius. This reveals what IQ tests actually measure. It's not whether you can solve problems, but how fast you solve them." And then: "In fact, it's not hard to imagine how raw processing speed can be counterproductive. People who excel at quickly solving well-defined problems tend to gravitate toward... well-defined problems. They choose what to work on based on what they're good at, not necessarily what's worth doing."

      • 
 but then there's James Somers saying "Speed matters: Why working quickly is more important than it seems." And Nat Friedman is saying: "It's important to do things fast. You learn more per unit time because you make contact with reality more frequently. Going fast makes you focus on what's important; there's no time for bullshit." And Patrick Collison is collecting fast projects. Then here I am, wondering, and possibly assuring myself: yeah, we're not all doing the same things, are we?

      • antirez' Reflections on AI at the end of 2025. "The fundamental challenge in AI for the next 20 years is avoiding extinction."

      • Yes, this is in The New Yorker: "I trust in TextEdit. It doesn't redesign its interface without warning, the way Spotify does; it doesn't hawk new features, and it doesn't demand I update the app every other week, as Google Chrome does. I've tried out other software for keeping track of my random thoughts and ideas in progress--the personal note-storage app Evernote; the task-management board Trello; the collaborative digital workspace Notion, which can store and share company information. Each encourages you to adapt to a certain philosophy of organization, with its own formats and filing systems. But nothing has served me better than the brute simplicity of TextEdit, which doesn't try to help you at all with the process of thinking." Great title too: TextEdit and the Relief of Simple Software.

      • Also The New Yorker, on performative reading, and reading, and books, and social media: "Reading a book is antithetical to scrolling; online platforms cannot replicate the slow, patient, and complex experience of reading a weighty novel. [...] The only way that an internet mind can understand a person reading a certain kind of book in public is through the prism of how it would appear on a feed: as a grotesquely performative posture, a false and self-flattering manipulation, or a desperate attempt to attract a romantic partner."

      • LLMs and physical laws? Maybe: "The dynamics of LLM generation are quite unique. Compared to traditional rule-based programs, LLM-based generation exhibits diverse and adaptive outputs. [
] To model the dynamic behavior of LLMs, we embed the generative process of LLM within a given agent framework, viewing it as a Markov transition process in its state space. [
] Based on this model, we propose a method to measure this underlying potential function based on a least action principle. By experimentally measuring the transition probabilities between states, we statistically discover [
] To our knowledge, this is the first discovery of a macroscopic physical law in LLM generative dynamics that does not depend on specific model details."

      • "'Climbing Everest solo without bottled oxygen in 1980 was the hardest thing I've done. I was alone up there, completely alone. I fell down a crevasse at night and almost gave up. Only because I had this fantasy - because for two years I had been pregnant with this fantasy of soloing Everest - was I able to continue.' This is how Messner talks about how his will was governed."

      • I regularly remind myself and sometimes even others of Jason Fried's Give it five minutes. It's one of the most influential things I've read in the past ten years. I constantly think of it and I'm convinced it's improved my mental well-being and my connections to other people like few other things. Yes, I know how this sounds, but, I guess, an idea and a specific phrase that sticks with you can go a long way as far as life-changing is concerned. Now, all of that is just context, because what I want to actually share is this Jason Fried piece here: Idea protectionism. I re-found and re-read it after sharing the other Jason Fried piece and wanting to share the Jony Ive quote in this one and, yup, stumbled across it by chance. Lucky.

      • Reuters reports on China's Manhattan Project. This is it, baby! This has it all: corporate espionage, ASML, lithography, "one veteran Chinese engineer from ASML recruited to the project was surprised to find that his generous signing bonus came with an identification card issued under a false name", EUV systems that "are roughly the size of a school bus, and weigh 180 tons", Germany's Carl Zeiss AG, "networks of intermediary companies are sometimes used to mask the ultimate buyer", "employees assigned to semiconductor teams often sleep on-site and are barred from returning home during the work week, with phone access restricted for teams handling more sensitive tasks", and, of course, the tension at the heart of it all: "Starting in 2018, the United States began pressuring the Netherlands to block ASML from selling EUV systems to China. The restrictions expanded in 2022, when the Biden administration imposed sweeping export controls designed to cut off China's access to advanced semiconductor technology. No EUV system has ever been sold to a customer in China, ASML told Reuters."

      • I didn't know this is a thing, this was funny: the Beckham rumour that refuses to die.

      • At work, we ended up talking about Christmas traditions and while I was explaining that where I live the magical entity that makes presents appear is called "christkind" (christ child), I was also trying to find proof on Wikipedia so I'd seem less weird and found this map. Note the filename: Christmas-gift-bringers-Europe.jpg. Great name. But now see where the green and the brown mix, in the middle of Germany? That's where I live. So not only does one legend say it's Baby Jesus bringing presents, it's also that in the next town over it's the Christmas Man. And that dude looks an awful lot like his American cousin Santa Claus, who has a lot more media appearances and higher popularity in the younger-than-10 demographic. Try to keep your story straight when you talk to a 4-year-old who keeps asking you whether she'll get a computer for Christmas. How grand it must be to live in Iceland, where, according to that map, the Christmas Lads live.

      • "This song is called Red 40. It's about Hot Cheetos."

      If you also feel a bit, let's say, joy & curiosity about computers doing new things, you should subscribe:

    7. 🔗 Andrew Healey's Blog A Fair, Cancelable Semaphore in Go rss

      Building a fair, cancelable semaphore in Go and the subtle concurrency issues involved.