🏡


  1. Transparent Leadership Beats Servant Leadership
  2. Writing a good CLAUDE.md | HumanLayer Blog
  3. My Current global CLAUDE.md
  4. About KeePassXC's Code Quality Control – KeePassXC
  5. How to build a remarkable command palette

  1. December 09, 2025
    1. 🔗 K4ryuu/IDA-VTableExplorer Release v1.0.0 - 2025.12.02 release

      [1.1.0] - 2025-12-02

      Added

      Function Browser

      • New Del key action: Browse all functions in a vtable
      • Secondary chooser window showing function index, address, name, and status
      • Jump to any function with Enter key
      • Pure virtual functions highlighted in red

      Pure Virtual Detection

      • Automatic detection of __cxa_pure_virtual, _purecall, and purevirt symbols
      • Abstract classes marked with [abstract] suffix and distinct color
      • Function count shows pure virtual breakdown: 26 (3 pv)

      Annotate All

      • New Ins key action: Annotate all vtables at once
      • Progress indicator with cancel support
      • Summary dialog showing total vtables and functions processed

      UI Improvements

      • New "Functions" column showing function count per vtable
      • Color coding: abstract classes in light blue, pure virtuals in red
      • Dockable tab instead of modal window
      • Singleton chooser - reopening brings back the same tab with cached data
      • Refresh action to rescan vtables

      Optimized

      • Cached vtable data for instant reopening
      • Binary search for vtable boundary detection
      • Unified scanner template eliminates duplicate code
    2. 🔗 r/reverseengineering Declarative Binary Parsing for Security Research with Kaitai Struct rss
    3. 🔗 Anton Zhiyanov Go proposal: Secret mode rss

      Part of the Accepted! series, explaining the upcoming Go changes in simple terms.

      Automatically erase used memory to prevent secret leaks.

      Ver. 1.26 • Stdlib • Low impact

      Summary

      The new runtime/secret package lets you run a function in secret mode. After the function finishes, it immediately erases (zeroes out) the registers and stack it used. Heap allocations made by the function are erased as soon as the garbage collector decides they are no longer reachable.

      secret.Do(func() {
          // Generate a session key and
          // use it to encrypt the data.
      })
      

      This helps make sure sensitive information doesn't stay in memory longer than needed, lowering the risk of attackers getting to it.

      The package is experimental and is mainly for developers of cryptographic libraries, not for application developers.

      Motivation

      Cryptographic protocols like WireGuard or TLS have a property called "forward secrecy". This means that even if an attacker gains access to long-term secrets (like a private key in TLS), they shouldn't be able to decrypt past communication sessions. To make this work, session keys (used to encrypt and decrypt data during a specific communication session) need to be erased from memory after they're used. If there's no reliable way to clear this memory, the keys could stay there indefinitely, which would break forward secrecy.

      In Go, the runtime manages memory, and it doesn't guarantee when or how memory is cleared. Sensitive data might remain in heap allocations or stack frames, potentially exposed in core dumps or through memory attacks. Developers often have to use unreliable "hacks" with reflection to try to zero out internal buffers in cryptographic libraries. Even so, some data might still stay in memory where the developer can't reach or control it.

      The solution is to provide a runtime mechanism that automatically erases all temporary storage used during sensitive operations. This will make it easier for library developers to write secure code without using workarounds.

      Description

      Add the runtime/secret package with Do and Enabled functions:

      // Do invokes f.
      //
      // Do ensures that any temporary storage used by f is erased in a
      // timely manner. (In this context, "f" is shorthand for the
      // entire call tree initiated by f.)
      //   - Any registers used by f are erased before Do returns.
      //   - Any stack used by f is erased before Do returns.
      //   - Any heap allocation done by f is erased as soon as the garbage
      //     collector realizes that it is no longer reachable.
      //   - Do works even if f panics or calls runtime.Goexit. As part of
      //     that, any panic raised by f will appear as if it originates from
      //     Do itself.
      func Do(f func())
      
      
      
      // Enabled reports whether Do appears anywhere on the call stack.
      func Enabled() bool
      

      The current implementation has several limitations:

      • Only supported on linux/amd64 and linux/arm64. On unsupported platforms, Do invokes f directly.
      • Protection does not cover any global variables that f writes to.
      • Trying to start a goroutine within f causes a panic.
      • If f calls runtime.Goexit, erasure is delayed until all deferred functions are executed.
      • Heap allocations are only erased if ➊ the program drops all references to them, and ➋ then the garbage collector notices that those references are gone. The program controls the first part, but the second part depends on when the runtime decides to act.
      • If f panics, the panicked value might reference memory allocated inside f. That memory won't be erased until (at least) the panicked value is no longer reachable.
      • Pointer addresses might leak into data buffers that the runtime uses for garbage collection. Do not put confidential information into pointers.

      The last point might not be immediately obvious, so here's an example. If an offset in an array is itself secret (say you have a data array and the secret key always starts at data[100]), don't create a pointer to that location (no p := &data[100]). Otherwise, the garbage collector might store this pointer, since it needs to know about all active pointers to do its job. If someone launches an attack to access the GC's memory, your secret offset could be exposed.

      The package is mainly for developers who work on cryptographic libraries. Most apps should use higher-level libraries that use secret.Do behind the scenes.

      As of Go 1.26, the runtime/secret package is experimental and can be enabled by setting GOEXPERIMENT=runtimesecret at build time.

      Example

      Use secret.Do to generate a session key and encrypt a message using AES-GCM:

      // Encrypt generates an ephemeral key and encrypts the message.
      // It wraps the entire sensitive operation in secret.Do to ensure
      // the key and internal AES state are erased from memory.
      func Encrypt(message []byte) ([]byte, error) {
          var ciphertext []byte
          var encErr error
      
          secret.Do(func() {
              // 1. Generate an ephemeral 32-byte key.
              // This allocation is protected by secret.Do.
              key := make([]byte, 32)
              if _, err := io.ReadFull(rand.Reader, key); err != nil {
                  encErr = err
                  return
              }
      
              // 2. Create the cipher (expands key into round keys).
              // This structure is also protected.
              block, err := aes.NewCipher(key)
              if err != nil {
                  encErr = err
                  return
              }
      
              gcm, err := cipher.NewGCM(block)
              if err != nil {
                  encErr = err
                  return
              }
      
              nonce := make([]byte, gcm.NonceSize())
              if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
                  encErr = err
                  return
              }
      
              // 3. Seal the data.
              // Only the ciphertext leaves this closure.
              ciphertext = gcm.Seal(nonce, nonce, message, nil)
          })
      
          return ciphertext, encErr
      }
      

      Note that secret.Do protects not just the raw key, but also the cipher.Block structure (which contains the expanded key schedule) created inside the function.

      This is a simplified example, of course — it only shows how memory erasure works, not a full cryptographic exchange. In real situations, the key needs to be shared securely with the receiver (for example, through key exchange) so decryption can work.

      Links

      𝗣 21865 • 𝗖𝗟 704615 • 👥 Daniel Morsing, Dave Anderson, Filippo Valsorda, Jason A. Donenfeld, Keith Randall, Russ Cox

      *[Low impact]: Likely impact for an average Go developer

    4. 🔗 HexRaysSA/plugin-repository commits sync repo: +2 plugins, +2 releases rss
      sync repo: +2 plugins, +2 releases
      
      ## New plugins
      - [vt-ida-plugin](https://github.com/VirusTotal/vt-ida-plugin) (1.0.6)
      - [yarka](https://github.com/AzzOnFire/yarka) (0.7.2)
      
    5. 🔗 coder/ghostty-web v0.4.0 release

      What's Changed

      • Enable linefeed mode so newline moves cursor to column 0 by @remorses in #70
      • fix: add contenteditable attribute to prevent extension conflicts by @MichaelYuhe in #78
      • iOS support by @gregsadetsky in #76
      • Migrate to use RenderState by @kylecarbs in #75
      • feat: support dynamic font resizing by @sreya in #80
      • fix: support application cursor mode (DECCKM) for arrow keys by @sreya in #81
      • feat: add DSR response handling for nushell compatibility by @sreya in #82
      • fix: support unicode grapheme cluster rendering for complex scripts by @sreya in #85
      • fix: correct selection overflow during auto-scroll by @sreya in #86
      • fix: integrate selection highlighting into cell rendering by @sreya in #87
      • Added support for IME input for languages such as Chinese and Japanese. by @Leask in #90
      • Demo: Unify HTTP/WebSocket server for reverse proxy compatibility by @HageMaster3108 in #74
      • Run bun install in prebuild to install dependencies by @computerality in #91
      • Enable alpha transparency in canvas context by @robertdrakedennis in #93
      • chore: simplify publishing flow for new tags by @sreya in #96

      New Contributors

      Full Changelog : v0.3.0...v0.4.0

    6. 🔗 r/LocalLLaMA Check on lil bro rss
  2. December 08, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-08 rss

      IDA Plugin Updates on 2025-12-08

      New Releases:

      Activity:

      • DriverBuddy-7.4-plus
        • af14c83d: Sync auto-copilot-org-playwright-loop.yaml from .github repo
        • dc2c01a1: Sync auto-copilot-functionality-docs-review.yml from .github repo
        • a992097c: Sync auto-copilot-code-cleanliness-review.yml from .github repo
        • d2fffd91: Sync auto-complete-cicd-review.yml from .github repo
        • 44a63ecb: Sync auto-close-issues.yml from .github repo
        • da7a6e5f: Sync auto-assign-pr.yml from .github repo
        • b2174a31: Sync auto-assign-copilot.yml from .github repo
        • 538dcc3f: Sync auto-amazonq-review.yml from .github repo
        • f886f1e2: Sync auto-feature-request.yml from .github repo
        • 0d2f7a69: Sync auto-bug-report.yml from .github repo
      • ghidra
        • 8ed13a08: Merge remote-tracking branch 'origin/patch'
        • b32c0a69: GP-0: Upping patch to 12.0.1
        • 055bb3cc: Merge remote-tracking branch 'origin/Ghidra_12.0'
        • d0ca611d: Merge remote-tracking branch 'origin/GP-1-dragonmacher-test-framework

      • ida-domain
        • 080aa8f6: 0.3.6-dev.2
        • 04fb836e: Bugfix: Issue#30 - LocalVariableAccessType Incorrect for Instruction 

      • IDA-MCP
        • 2d24f18d: Refactor create_mcp_server to directly register tool functions while 

        • bdb7fc25: Add JSON wrapper for tool functions in create_mcp_server to return JS

        • d8e5589a: Update default port from 9000 to 10000 to avoid Windows Hyper-V reser

        • fc6d5dad: Update default port from 8765 to 9000 to avoid Windows Hyper-V reserv

        • 884f01f5: update readme
      • ida_domain_mcp
        • 6272eef1: Merge branch 'master' of github.com:xxyyue/ida_domain_mcp
        • d39739bf: fix path
      • IDAPluginList
      • mcrit
      • plugin-ida
        • 9da89e7a: Merge pull request #89 from RevEngAI/bug-PLU-218-stop-uploading-plt-s

        • ab27f506: bug(PLU-218): now check if function is in .plt section prior to uploa

      • quokka
        • 342a65ca: Bump to v0.6.2
        • 043ec19f: Adapt documentation with the new building mode
      • SuperHint
    2. 🔗 sacha chua :: living an awesome life 2025-12-08 Emacs news rss

      The Emacs Carnival theme for December is "The People of Emacs", hosted by George Jones. I'm looking forward to reading your thoughts!

      Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    3. 🔗 r/LocalLLaMA Thoughts? rss

      Thoughts? | Interesting take submitted by /u/Salt_Armadillo8884
      [link] [comments]

    4. 🔗 sacha chua :: living an awesome life La semaine du 1 dĂ©cembre au 6 dĂ©cembre rss

      Monday, December 1

      This morning, my daughter woke up very late and missed morning school. She was still in a grumpy mood. But she was also hungry, so after a few tears, she came back for a big comforting hug and a lasagna lunch. I didn't say anything until she was ready to talk. I kept the conversation light.

      She wanted to play a video game after lunch. We negotiated a bit over afternoon school and alternating between homework and the video game after recess.

      Tuesday, December 2

      This was another busy day.

      Early in the morning, I also signed my daughter up for skating lessons. The class isn't expensive, so I had to get up early to be able to register her.

      We got back to a fairly normal routine. I helped my daughter with her morning routine, then while she attended virtual school, I worked on the song Golden on the piano and did some exercises. Instead of the walk, I shoveled snow.

      Then I worked on captions for the conference videos. I also recorded a talk. I recorded it ahead of time so I wouldn't have to worry about it.

      I improved the configuration of the IRC interface. I need to manage the admin system because the conference is this weekend.

      After school, I took my daughter to her art class. I had a Zoom call about stress management, specifically about anxiety. My homework is to reflect on my values. I think clarifying my goals is more useful right now.

      Tomorrow, I'll fix more captions and I need to do some last-minute preparations before the conference. I'll probably need to rewatch all the videos to catch errors.

      Wednesday

      Today I kept fixing captions for the conference taking place this weekend. I adjusted the timestamps to sync them with the audio. Most of the work was automated, and I used subed.el to do the rest by hand. I also adjusted the conference schedule because some videos were longer than expected. A few speakers canceled because they are too busy, so we have some slack. We're always grateful because everyone is a volunteer.

      My daughter woke up a bit late, so she didn't have breakfast before class. She likes to stay up late.

      Friday

      Today I fixed captions for the upcoming conference. Now most of the program has captions that the speakers have verified. I'm relieved.

      A few speakers will present live, so I'll do their captions a few days later.

      I worried a few weeks ago because the other organizers were busy, but now I think the conference will go well. Last year it went well too. I like running the conference so that it's fairly relaxed and friendly. The conference is entirely virtual.

      Still, my daughter woke up late again, just in time for class. She ate quickly during recess, and I brushed her teeth during the lunch break.

      After school, I took my daughter to another rink to play simplified hockey together. The conference is next weekend, so I'm still busy, but it's important to spend time with her.

      My husband used our new air fryer to cook chicken wings. We ate the wings with plum sauce. He also tried to make bread, but it didn't work out. Sourdough is hard, especially in winter; our kitchen is cold then. We ate it anyway, but next time we'll try a smaller recipe.

      Saturday

      It's the first day of the conference, so I'll write a longer entry tomorrow.

      Sunday, December 7

      The conference is over! Despite unexpected absences among the other organizers, I managed it. I'm both tired and relieved. I'm so glad I built all the software needed to run the conference. There's still a lot of work to do, but first I need to catch up on everything else. So, before I forget, I need to write my notes, and I'll also start the automatic transcriptions.

      Then I'll catch up on my French studies, because I have a Zoom call with my tutor Claire tomorrow. I'll be able to write my conference notes in French. Of course, I'll learn lots of specialized words along the way. Every time I organize this conference, I come up with many ideas for improvement. Even as everyone gets busier and busier, the conference is always inspiring, which makes it worthwhile.

      So, let's start with the overall experience. The organizing process up to the day of the conference was fairly normal, but with more cancellations than usual because speakers were busy or sick. That wasn't a problem for me; I just adjusted the schedule several times.

      However, when other organizers said they couldn't make it, that was a bit harder, but we managed by simplifying.

      On the first day of the conference, it was mostly me and one other organizer, with a little extra help from a third. Fortunately, I had prepared all sorts of automation tools, so despite the deviation from the plan, the program ran smoothly. We just had to manage our time so we could check the audio, video, and screens of the speakers backstage who were going to answer questions live, while also asking questions on stage across the two conference tracks at certain times. I talked fast and jumped between lots of windows on two screens. It got me a bit mixed up, but we pulled it off. Most speakers could read and answer questions on their own, and participants helped with notes and questions.

      The second morning was similar, but the other organizers were all unavailable in the afternoon, so I handled everything myself. I ran a single track on the second day instead of two, which simplified things. Everyone was helpful. I finished without too much stress.

      After the conference, I backed up the recordings and scaled down our servers. I was very tired, but it was worth it. I started the automatic transcriptions and the video processing. Then I reconnected with my family.

      I've thought of many small improvements I can make after checking the recordings and catching up. If I run it alone next time, there are little things that would help.

      I saw the conversations during the conference and afterwards, and I think I want to organize it again. The conference is a good way to share what you know and to connect with others who enjoy similar things.

      You can e-mail me at sacha@sachachua.com.

    5. 🔗 r/reverseengineering Ghidra 12.0 has been released! rss
    6. 🔗 r/LocalLLaMA I'm calling these people out right now. rss

      For being heroes of the community.

      • Unsloth | Blazing fast fine-tuning + premium GGUF quants
      • mradermacher | Quantizes literally EVERYTHING, absolute machine
      • bartowski | High-quality quants, great documentation
      • TheBloke | The OG - before he stepped back, he was THE source
      • LoneStriker | Solid AWQ/GPTQ quants
      • Nexesenex | iMatrix quants, gap hunter and filler

      Everyone here owes so much to you folks. Take a bow.

      submitted by /u/WeMetOnTheMountain
      [link] [comments]

    7. 🔗 r/LocalLLaMA After 1 year of slowly adding GPUs, my Local LLM Build is Complete - 8x3090 (192GB VRAM) 64-core EPYC Milan 250GB RAM rss

      After 1 year of slowly adding GPUs, my Local LLM Build is Complete - 8x3090 (192GB VRAM) 64-core EPYC Milan 250GB RAM | Yes, it's ugly and frankly embarrassing to look at. I just finished this build last night by adding 2 additional GPUs to go from 6 to 8, where I will stop and call this build complete. I've built many PCs over the years, but this was a whole other level, and at this point I'm just happy it works. It runs off daisy-chained 1500W and 1000W PSUs (5 cards on the 1500W and 3 on the 1000W), and the system is fed by a 20A dedicated branch circuit.

      Cramming the GPUs in a case without having to use long GPU riser cables was the hardest part. If I were to do this again, I'd just use long PCIe 1x cables that give me the freedom to neatly stack the cards and save myself the headache, since this is just an inference system... the only time PCIe bandwidth matters is when loading models. But I went down the path of using certified PCIe 4.0 cables in the 200-250mm range, and as you can see, it ain't pretty. One card has to sit outside the rack because there was simply no space for it among the chonky GPUs and PCIe riser spaghetti.

      Good news is that the system has been running stable for its entire existence as I kept adding parts and learning as I go. GPU temps never exceed around 70°C under load since the GPUs are pretty well spread out in an open case. All in, I spent about $8k, as almost every part in the system is used (only the motherboard was bought new: a Supermicro H12SSL-i, which was $400 at the time).
      The most I paid for a GPU was $700, the lowest was $500, which was just this week. FB Marketplace is great in my area - I had tons of options and I highly recommend local sellers over ebay.
      All I've done so far is load GLM-4.5 Air Q6_K GGUF using llama.cpp, specifically these settings:

          llama-server -m /home/hisma/llama.cpp/models/GLM-4.5-Air.i1-Q6_K/GLM-4.5-Air.i1-Q6_K.gguf -c 131072 -ngl 99 -b 4096 -ub 2048 -fa --temp 0.6 --top-p 1.0 --host 0.0.0.0 --port 8888

      From the screenshot, you can see it pulled off a respectable ~49 t/s.
      My next steps -

      • power limit all cards to ~250W (maybe lower depending on how my system responds - confident I shouldn't need to go any lower than 200W which would only be a ~20% perf hit)
      • test some AWQ models using VLLM with tensor parallelism (specifically MiniMax-M2-AWQ-4bit).
        • My whole reason for going to 8 GPUs is bc TP requires either 2, 4 or 8 cards. So 8 cards was always my goal to get the most out of this system
      • Once I find a solid set of models, start doing some agentic coding with roocode & let this thing rip

      With PC hardware prices going insane lately, I feel lucky to have this thing, even with the janky-ass build. It was a good learning experience, and I'd certainly do some things differently with the lessons I learned. But I foresee future enshittification of cloud models as the big corpos pivot to pleasing shareholders over burning cash, and in the 1 year I've had this system, local models have continued to improve and trade blows with frontier models while using less memory. I'm sure the trend will continue. submitted by /u/Hisma
      [link] [comments]

    8. 🔗 r/LocalLLaMA GLM-4.6V (108B) has been released rss

      GLM-4.6V (108B) has been released | The GLM-4.6V series includes two versions: GLM-4.6V (106B), a foundation model designed for cloud and high-performance cluster scenarios, and GLM-4.6V-Flash (9B), a lightweight model optimized for local deployment and low-latency applications. GLM-4.6V scales its context window to 128k tokens in training, and achieves SoTA performance in visual understanding among models of similar parameter scales. Crucially, we integrate native Function Calling capabilities for the first time. This effectively bridges the gap between "visual perception" and "executable action", providing a unified technical foundation for multimodal agents in real-world business scenarios. Beyond that, it achieves SoTA performance across major multimodal benchmarks at comparable model scales. GLM-4.6V introduces several key features:

      • Native Multimodal Function Calling: Enables native vision-driven tool use. Images, screenshots, and document pages can be passed directly as tool inputs without text conversion, while visual outputs (charts, search images, rendered pages) are interpreted and integrated into the reasoning chain. This closes the loop from perception to understanding to execution.
      • Interleaved Image-Text Content Generation: Supports high-quality mixed media creation from complex multimodal inputs. GLM-4.6V takes a multimodal context—spanning documents, user inputs, and tool-retrieved images—and synthesizes coherent, interleaved image-text content tailored to the task. During generation it can actively call search and retrieval tools to gather and curate additional text and visuals, producing rich, visually grounded content.
      • Multimodal Document Understanding: GLM-4.6V can process up to 128K tokens of multi-document or long-document input, directly interpreting richly formatted pages as images. It understands text, layout, charts, tables, and figures jointly, enabling accurate comprehension of complex, image-heavy documents without requiring prior conversion to plain text.
      • Frontend Replication & Visual Editing: Reconstructs pixel-accurate HTML/CSS from UI screenshots and supports natural-language-driven edits. It detects layout, components, and styles visually, generates clean code, and applies iterative visual modifications through simple user instructions.

      https://huggingface.co/zai-org/GLM-4.6V Please note that llama.cpp support for GLM-4.5V is still a draft: https://github.com/ggml-org/llama.cpp/pull/16600 submitted by /u/jacek2023
      [link] [comments]

    9. 🔗 r/LocalLLaMA zai-org/GLM-4.6V-Flash (9B) is here rss

      Looks incredible for your own machine.

      GLM-4.6V-Flash (9B), a lightweight model optimized for local deployment and low-latency applications. GLM-4.6V scales its context window to 128k tokens in training, and achieves SoTA performance in visual understanding among models of similar parameter scales. Crucially, we integrate native Function Calling capabilities for the first time. This effectively bridges the gap between "visual perception" and "executable action" providing a unified technical foundation for multimodal agents in real-world business scenarios.

      https://huggingface.co/zai-org/GLM-4.6V-Flash

      submitted by /u/Cute-Sprinkles4911
      [link] [comments]

    10. 🔗 r/LocalLLaMA RAM prices explained rss

      OpenAI bought up 40% of global DRAM production as raw wafers they're not even using - just stockpiling to deny competitors access. The result? Memory prices are skyrocketing, a month before Christmas.

      Source: Moore's Law Is Dead
      Link: Sam Altman's Dirty DRAM Deal

      submitted by /u/Lopsided_Sentence_18
      [link] [comments]

    11. 🔗 r/LocalLLaMA Vector db comparison rss

      Vector db comparison | I was looking for the best vector db for our RAG product, and went down a rabbit hole to compare all of them. Key findings:

      • For RAG systems under ~10M vectors, standard HNSW is fine. Above that, you'll need to choose a different index.
      • Large dataset + cost-sensitive: Turbopuffer. Object storage makes it cheap at scale.
      • pgvector is good for small scale and local experiments. Specialized vector dbs perform better at scale.
      • Chroma - lightweight, good for running in notebooks or small servers.

      Here's the full breakdown: https://agentset.ai/blog/best-vector-db-for-rag submitted by /u/Kaneki_Sana
      [link] [comments]

    12. 🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator
      [link] [comments]

    13. 🔗 cased/kit Release 3.0.0 release

      What's Changed

      • Add Go dependency analyzer and optimize performance by @tnm in #157

      Full Changelog : v2.2.1...v3.0.0

    14. 🔗 r/LocalLLaMA Is this THAT bad today? rss

      Is this THAT bad today? | I already bought it. We all know the market... This is a special order, so it's not in stock on Provantage, but they estimate it should be in stock soon. With Micron leaving us, I don't see prices getting any lower for the next 6-12 months minimum. What do you all think? For today's market I don't think I'm gonna see anything better. Only thing to worry about is if these sticks never get restocked... which I know will happen soon. But I doubt they're already all completely gone. Link for anyone interested: https://www.provantage.com/crucial-technology-ct2k64g64c52cu5~7CIAL836.htm submitted by /u/Normal-Industry-8055
      [link] [comments]

    15. 🔗 Rust Blog Making it easier to sponsor Rust contributors rss

      TLDR: You can now find a list of Rust contributors that you can sponsor on this page.

      As with many other open-source projects, Rust depends on a large number of contributors, many of whom make Rust better on a volunteer basis or are funded for only a fraction of their open-source contributions.

      Supporting these contributors is vital for the long-term health of the Rust language and its toolchain, so that it can keep its current level of quality, but also evolve going forward. Of course, this is nothing new, and there are currently several ongoing efforts to provide stable and sustainable funding for Rust maintainers, such as the Rust Foundation Maintainer Fund or the RustNL Maintainers Fund. We are very happy about that!

      That being said, there are multiple ways of supporting the development of Rust. One of them is sponsoring individual Rust contributors directly, through services like GitHub Sponsors. This makes it possible even for individuals or small companies to financially support their favourite contributors. Every bit of funding helps!

      Previously, if you wanted to sponsor someone who works on Rust, you had to go on a detective hunt to figure out who the people contributing to the Rust toolchain are, whether they are receiving sponsorships, and through which service. That was a lot of work, and it could pose a barrier to sponsorships. So we simplified it!

      Now we have a dedicated Funding page on the Rust website, which helpfully shows members of the Rust Project that are currently accepting funds through sponsoring1. You can click on the name of a contributor to find out what teams they are a part of and what kind of work they do in the Rust Project.

      Note that the list of contributors accepting funding on this page is non-exhaustive. We made it opt-in, so that contributors can decide on their own whether they want to be listed there or not.

      If you ever wanted to support the development of Rust "in the small", it is now simpler than ever.

      1. The order of people on the funding page is shuffled on every page load to reduce unnecessary ordering bias. ↩
    16. 🔗 Ampcode News Find Threads rss

      Amp can now search your threads.

      A few weeks ago we shipped the ability to read other threads. That was the first step: reference a thread by ID or URL, let Amp pull out the relevant context.

      But what if you don't have the thread ID at hand? What if the only thing you know about a thread is that it changed or created a specific file? Or some keywords?

      That's what the new find_thread tool is for. It lets Amp search your threads in two ways:

      Keyword search: find threads that mention specific terms. "Find threads where we discussed the database migration." The agent in Amp will then use the same search functionality as on the thread feed and return matches.

      File search: find threads that touched a specific file. Think of it like git blame, but for Amp. "Which thread last modified this file?" Amp looks at file changes across threads and tells you which conversations touched it.

      Here's how we've been using it to find threads:

      • "Which Amp thread created core/src/tools/tool-service.ts?"
      • "Search my threads to find the one in which we added the explosion animation. I want to continue working on that."
      • "Show me all threads that modified src/terminal/pty.rs"
      • "Find and read the thread that created scripts/deploy.sh."
      • "Find the thread in which we added @server/src/backfill-service.ts, read it, and extract the SQL snippet we used to test the migration."

      The find_thread tool is the sibling to read_thread. First you find, then you read. Together they turn your Amp threads into reusable context.

      Find thread in action

  3. December 07, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-07 rss

      IDA Plugin Updates on 2025-12-07

      Activity:

    2. 🔗 Register Spill Joy & Curiosity #65 rss

      When we started working on Amp in February, ten months ago, I couldn't have predicted this. Nor could I have in May, when we released Amp to the world. Nor in the weeks following that. But in the last few months, it began to cross our minds.

      Then, suddenly, it changed from a Maybe to an Inevitable, from something that might be a good idea to something we had to do to give it the chance it deserves, the chance it demands.

      On Tuesday, we announced that Amp is becoming its own company: Amp, Inc.

      I'm part of the founding team, one of twenty cofounders of this newly formed research lab that has one big goal: to let software builders harness the full power of artificial intelligence.

      A year ago, when I left Zed, I couldn't have imagined that rejoining Sourcegraph would turn into a new product and a new company.

      But, of course, I also couldn't have imagined how much AI would still change the practice of software development - three years after ChatGPT. Today I'm more convinced than ever that what we're going through is only in its opening movement.

      I'm also more excited than ever to figure out where this will lead. And Amp, Inc. is the vehicle we're building with which to explore that frontier.

      ‱ Bun is joining Anthropic. I was honestly surprised that people were surprised by this and I don't mean this in a humble-braggy "what, you haven't figured out the twist of the movie in the first 5min, like I have?" way. I think the kicker is in this line: "But there's a bigger question behind that: what does software engineering even look like in two to three years?" What does software engineering look like in the future? Well, what I do know is that even right now it already looks completely different than it did in 2022 when Bun was first released to the world. Now, ask yourself: in 2026, with agents being on the trajectory they are on, would you start to work on a framework, in the classic sense, to improve developer productivity? Or are there bigger levers? I've been thinking about these questions for the last year and when I saw the news, it wasn't surprise I felt.

      ‱ Maybe related: Brendan Gregg is leaving Intel and "accepted a new opportunity." He writes: "It's still early days for AI flame graphs. […] I think as GPU code becomes more complex, with more layers, the need for AI flame graphs will keep increasing."

      • Daniel Lemire: "The tidy, linear model of scientific progress--professors thinking deep thoughts in ivory towers, then handing blueprints to engineers--is indefensible. Fast ships and fast trains are not just consequences of scientific discovery; they are also wellsprings of it. Real progress is messy, iterative, and deeply intertwined with the tools we build. Large language models are the latest, most dramatic example of that truth." Hard to pick a quote, the whole thing is great and thought-provoking. As is this yes-and reply to it: "we often first see something that works, then we understand it"

      • Jimmy Miller with the "easiest way to build a type checker". I've done exactly this with Monkey before, it's a lot of fun.

      ‱ A Senior Staff Engineer at Google on "Why I Ignore The Spotlight as a Staff Engineer": "The tech industry loves to tell you to move fast. But there is another path. It is a path where leverage comes from depth, patience, and the quiet satisfaction of building the foundation that others stand on. You don't have to chase the spotlight to have a meaningful, high-impact career at a big company. Sometimes, the most ambitious thing you can do is stay put, dig in, and build something that lasts. To sit with a problem space for years until you understand it well enough to build a Bigtrace." I read the whole thing with one eyebrow raised and kept thinking of the distinction between cost centers and profit centers.

      • A black hole in 125 bytes of JavaScript.

      • "Are you here because you are looking, like I was, for advice? Unfortunately--this is, in a way, the problem--whatever I could say about parenting is trapped in the soundproof box of cliche. Everything you have heard about having kids, good or bad, is true. Children are blessings and bring blessings. They are exhausting and raise the stakes of all your limitations and flaws to vertiginous heights. Love for children is Love, the romantic kind, the song kind, the ordinary kind. Parenting resembles religious practice in the way it links the broad sweep of the sacred to the smallest of everyday tasks." As so often: I don't know how I ended up reading this, but I'm glad I did.

      • Lot of talk about CVE-2025-55182 this week, the "critical-severity vulnerability in React Server Components". I'm usually not that interested in vulnerabilities (modulo how I'm affected), but I really enjoyed reading through this proof-of-concept by Moritz Sanft. Excellent technical writing and explanations. Also: jesus.

      • "But during that stretch, a friend and colleague kept repeating one line to me: 'All it takes is for one to work out.' He'd say it every time I spiraled. And as much as it made me smile, a big part of me didn't fully believe it. Still, it became a little maxim between us. And eventually, he was right - that one did work out. And it changed my life."

      • When I first heard that Haribo, the gummy bear company, now sell power banks and that they're apparently really good, I couldn't believe it. Haribo? Power banks? Non-gummy, actual, usable, very good power banks? What? Turns out there's issues with it: "The Haribo power bank weighs roughly 286g and has a capacity of 20,000 mAh. That ratio drew attention because it implied efficient cell packaging. Our scans show that the structure inside the pouch has bigger problems than sheer capacity." And that's the way the gummy melts, I suppose.

      ‱ Tim Ferriss on the value of aggression: "The above video clip is from Dan Gable - Competitor Supreme, which my mom bought for me when I was 15. It changed my life. I watched it almost every day in high school, and it kept me fighting through all the various losses in life. Didn't finish the SAT in time? Watch Dan Gable. Have a guidance counselor laugh while telling me I don't stand a chance of getting into Princeton? More Dan Gable. Lost my first important judo match in 7 seconds? Watch the Iowa Hawkeyes again and again and again. Then, return to the same tournament six months later and win. In life, there are dog fights. You must learn to enjoy them. Few people look forward to banging heads (literally or metaphorically), and therein lies the golden opportunity."

      • Dark patterns killed my wife's Windows 11 installation. Man, I got sweaty hands and flashbacks reading this.

      • Very entertaining: The "Mad Men" in 4K on HBO Max Debacle. I love Mad Men, but this was also a great read because it's Classic Internet in some sense: someone, somewhere, sits down with enough motivation to pull all those screengrabs together and then posts them on the Internet.

      • Matt Godbolt (the Matt Godbolt) is currently writing Advent of Compiler Optimisations 2025 and from what I've read (Why xor eax, eax? and Addressing the adding situation) it's very good.

      • Another guide on how to prompt Nano Banana Pro. Nothing groundbreaking, but the prompts are good and, man, Nano Banana Pro is just really good and I like reading more about it.

      • This PDF here, by OpenAI, is about how to build "an AI-native engineering team" and, in my opinion, the most interesting and most telling thing in the whole file is that each section has these two headers: "How coding agents help" and "What engineers do instead". What engineers do instead, indeed.

      • Murat Demirbas telling us to optimize for momentum: "So the trick is to design your workflow around staying in motion. Don't wait for the perfect idea or the right mood. Act first. Clarity comes after. If a task feels intimidating, cut it down until it becomes trivial. Open the file. Draft one paragraph. Try freewriting. Run one experiment. Or sketch one figure. You want the smallest possible task that gets the wheel turning. Completion, even on tiny tasks, builds momentum and creates the energy to do the next thing. The goal is to get traction and stop getting blocked on an empty page." Yes, yes, yes, yes. Momentum is everything. I'd take momentum and mistakes and breakage over slow perfectionism every day.

      • The first time I saw the sunshine map at the top here something in me broke: wait, you're telling me, all of the US gets more sun than I do? Including Boston? Boston, the-lake-sometimes-freezes Boston? And freaking Seattle, the place that's always portrayed as if Drizzle had been a better name for it? And New York? That New York that's also movie New York where the steam comes out of the manhole covers? That place gets more sun than I do? You're telling me when they shoot a movie there and it's blue skies, they didn't get lucky and picked the one day in the month it's clear skies, but that they often have blue skies? It's been at least four years since I first saw that map and I bring it up a lot (a lot , sorry). This week, while in Estonia, I brought it up again, of course, and while trying to find that map, I found that page that I linked to here, the one that starts with "I've always thought these sunshine maps were a little suspicious." That sentence made me flinch -- uh, oh, did I talk about a hoax all these years? Turns out I didn't. The map is real. It's all real. And if you scroll down to the solar power irradiance maps you'll find that even freaking goddamn Ottawa gets as much sun as northern Italy.

      • Deep, deep dive into how prompt caching works.

      • This was good and a great reminder: ruthless prioritization while the dog pees on the floor. "In fact we can't help but prioritize, even if mindlessly. Since we can only do one thing at a time, whatever we're doing now is definitionally our 'highest priority.' Reading this sentence is currently your highest priority." I once worked in a company that tried to set up a weekly company meeting. One team was always absent. "We just don't have the time to make it to this meeting," they said. I ranted about that team to my team lead and he said, "well, everyone has the same amount of time. They chose to prioritize it differently. If the CEO would say that everybody who doesn't show up gets fired, they would have the time to make the meeting." And the pupil was enlightened, as they say.

      ‱ "many large firms are making massive capital commitments behind ai services this year with the hope of lowering cost, increasing scale, and (unsaid) shaving off some of the exuberant hiring of the zirp years. my suggestion is that these firms will see almost zero roi from this spend. why? because you can instrument capabilities, not competence. […] remember: most firms got the internet in the 90s; most did not use it to its full economic potential until 2020. the gap between 1999 and 2020 was not a lack of fiber optic cable. it took thirty years for the 'competence' of remote work and digital commerce to catch up to the 'capability' of executives can buy all the compute they want. they can instrument the capability until their dashboards overflow. but until they do the messy, un-instrumentable work of rewiring entire organizations to trust and utilize these tools, the long march will drag on."

      ‱ Bouke van der Bijl built a Rust program that can do rootless pings. One of the first programs I attempted to write as a teenager was a ping clone in Python, so when I saw that post fly past, I hopped on. The intro, though, made me pause: "The ping command line tool works without root however, how is that possible? It turns out you can create a UDP socket with a protocol flag, which allows you to send the ping rootless. I couldn't find any simple examples of this online and LLMs are surprisingly bad at this (probably because of the lack of examples)." LLMs are surprisingly bad at this? At writing a ping? Really? Let's see. So I asked Amp to build a ping without requiring root. And it did it, with one prompt from my side. Then I thought: maybe the challenge is that it must use UDP, like Bouke wrote there? So I asked Amp to do it again, this time using UDP. It did it again and concluded: "It works! The trick is using socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP) instead of SOCK_RAW. This creates an unprivileged ICMP socket that the kernel handles specially - no root required." And that's exactly what a HackerNews commenter also writes, criticizing the "trick is to use UDP" line: "This is wrong, despite the Rust library in question's naming convention. You're not creating a UDP socket. You're creating an IP (AF_INET), datagram socket (SOCK_DGRAM), using protocol ICMP (IPPROTO_ICMP). The issue is that the rust library apparently conflates datagram and UDP, when they're not the same thing." LLMs are surprisingly something, but not bad at this.

      • This is beautiful: Bootstrapping Computing.

      • Stripe City is impressive. This behind-the-scenes series of posts is too: "@bits_by_brandon not only built this interaction, he drove the train and recorded every sound."
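      The rootless-ping item above hinges on one concrete socket detail — socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP) — which is easy to show in a few lines. Here's a hedged sketch in Python (sending requires Linux with your group inside net.ipv4.ping_group_range; the packet construction works anywhere):

```python
# Hedged sketch of the trick from the post: on Linux, an ICMP *datagram*
# socket -- socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP), not UDP and not
# SOCK_RAW -- lets an unprivileged process send echo requests, provided
# the user's group falls within net.ipv4.ping_group_range.
import socket
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """Build an ICMP echo request (type 8, code 0) with a valid checksum."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

def ping_once(host: str) -> None:
    """Send one echo request without root (Linux, ping_group_range permitting)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                       socket.IPPROTO_ICMP) as s:
        s.sendto(echo_request(ident=0, seq=1), (host, 0))
```

      Note there's no UDP anywhere: SOCK_DGRAM is the socket type, IPPROTO_ICMP the protocol — exactly the distinction the HackerNews commenter draws.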

      If you also think this is just the beginning of changes coming to software development, you should subscribe:

    3. 🔗 r/LocalLLaMA My little decentralized Locallama setup, 216gb VRAM rss

      submitted by /u/Goldkoron
      [link] [comments]

  4. December 06, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-06 rss

      IDA Plugin Updates on 2025-12-06

      Activity:

      • GTA2_RE
        ‱ e8a0d981: "added a little" (translated from Russian)
      • quokka
        • 86e8c1e7: Merge pull request #69 from quarkslab/dependabot/github_actions/actio

    2. 🔗 r/wiesbaden Friends rss

      Any Americans that aren't military that live here? Or are military, I don't really care either way. I just know it's hard to make non-military friends if you're on active duty. It's been 6 months since I moved here with my wife, and I haven't met anyone.. my German is A1 level, so complete shit basically. Every time I try to communicate in English with Germans here, they say they don't speak English, so no luck there. I'm open to making German friends as well! I just can't speak much German yet. I'm still learning. I'm bored, and extremely fucking lonely 🙁 đŸ„ș 😞

      submitted by /u/d00m_Prophet
      [link] [comments]

    3. 🔗 r/reverseengineering Patching Pulse Oximeter Firmware rss
    4. 🔗 r/reverseengineering GhidrAssist Ghidra LLM plugins reached v1.0 rss
    5. 🔗 r/reverseengineering elfpeek - small C tool to inspect ELF64 headers/sections/symbols rss
    6. 🔗 navidrome/navidrome v0.59.0 release

      This release brings significant improvements and new features:

      ‱ Scanner Improvements: Selective folder scanning and enhancements to the file system watcher for better performance and reliability.
      ‱ Scrobble History: Native scrobble/listen history tracking, allowing Navidrome to keep a record of your listening habits. This will be used in future visualizations and features (Navidrome Wrapped maybe?).
      ‱ User Administration: New CLI commands for user management, making it easier to handle user accounts from the terminal.
      ‱ New Themes: Two new themes have been added: SquiddiesGlass and AMusic (Apple Music inspired).
      ‱ General: Numerous bug fixes, translation updates, and configuration options for advanced use cases.

      Added

      • UI Features:

        • Add AMusic (Apple Music inspired) theme. (#4723 by @metalheim)
        • Add SquiddiesGlass Theme. (#4632 by @rendergraf)
        • Add loading state to artist action buttons for improved user experience. (f6b2ab572 by @deluan)
        • Add SizeField to display total size in LibraryList. (73ec89e1a by @deluan)
        • Update totalSize formatting to display two decimal places. (c3e8c6711 by @deluan)
      ‱ Backend Features:

        • Track scrobble/listens history. Note that for music added before this version, the count of scrobbles per song will not necessarily equal the song playcount. (#4770 by @deluan)

        • Add user administration to CLI. (#4754 by @kgarner7)
        • Make Unicode handling in external API calls configurable, with DevPreserveUnicodeInExternalCalls (default false). (#4277 by @deluan)
        • Rename "reverse proxy authentication" to "external authentication". (#4418 by @crazygolem)
        • Add configurable transcoding cancellation, with EnableTranscodingCancellation (default false). (#4411 by @deluan)
        • Add Rated At field. (#4660 by @zacaj)
        ‱ Add DevOptimizeDB flag to control whether to apply SQLite optimizations (default true). (ca83ebbb5 by @deluan)
      ‱ Scanner Features:

        • Implement selective folder scanning and file system watcher improvements. (#4674 by @deluan)

        • Improve error messages for cleanup operations in annotations, bookmarks, and tags. (36fa86932 by @deluan)
      ‱ Plugins:

        • Add artist bio, top tracks, related artists and language support (Deezer). (#4720 by @deluan)

      Changed

      • UI:

        • Update Bulgarian, Esperanto, Finnish, Galician, Dutch, Norwegian, Turkish translations. (#4760 and #4773 by @deluan)
        • Update Danish, German, Greek, Spanish, French, Japanese, Polish, Russian, Swedish, Thai, Ukrainian translations. (#4687 by @deluan)
        • Update Basque translation. (#4670 by @xabirequejo)
        • New Hungarian strings and updates. (#4703 by @ChekeredList71)
      ‱ Server:

        • Make NowPlaying dispatch asynchronous with worker pool. (#4757 by @deluan)

        ‱ Enable quoted ; as values in ini files. (c21aee736 by @deluan)
        • Fix Navidrome build issues in VS Code dev container. (#4750 by @floatlesss)

      Fixed

      • UI:

        • Improve playlist bulk action button contrast on dark themes. (86f929499 by @deluan)
        • Increase contrast of button text in the Dark theme. (f939ad84f by @deluan)
        • Sync body background color with theme. (9f0d3f3cf by @deluan)
        • Allow scrolling in shareplayer queue by adding delay. (#4748 by @floatlesss)
        • Fix translation display for library list terms. (#4712 by @dongeunm)
        • Fix library selection state for single-library users. (#4686 by @deluan)
        • Adjust margins for bulk actions buttons in Spotify-ish and Ligera. (9b3bdc8a8 by @deluan)
      ‱ Scanner:

        • Handle cross-library relative paths in playlists. (#4659 by @deluan)

        • Defer artwork PreCache calls until after transaction commits. (67c4e2495 by @deluan)
        • Specify exact table to use for missing mediafile filter. (#4689 by @kgarner7)
        • Refactor legacyReleaseDate logic and add tests for date mapping. (d57a8e6d8 by @deluan)
      ‱ Server:

        • Lastfm.ScrobbleFirstArtistOnly also only scrobbles the first artist of the album. (#4762 by @maya-doshi)

        • Log warning when no config file is found. (142a3136d by @deluan)
        • Retry insights collection when no admin user available. (#4746 by @deluan)
        • Improve error message for encrypted TLS private keys. (#4742 by @deluan)
        • Apply library filter to smart playlist track generation. (#4739 by @deluan)
        • Prioritize artist base image filenames over numeric suffixes. (bca76069c by @deluan)
        • Prefer cover.jpg over cover.1.jpg. (#4684 by @deluan)
        • Ignore artist placeholder image in LastFM. (353aff2c8 by @deluan)
      ‱ Plugins:

        • Avoid Chi RouteContext pollution by using http.NewRequest. (#4713 by @deluan)

      New Contributors

      Full Changelog: v0.58.5...v0.59.0

      Helping out

      This release is only possible thanks to the support of some awesome people!

      Want to be one of them?
      You can sponsor, pay me a Ko-fi, or contribute with code.

      Where to go next?

    7. 🔗 r/reverseengineering free, open-source file scanner rss
    8. 🔗 r/wiesbaden Whatsapp Gruppe rss

      hello everyone,

      does our city have a WhatsApp group that shares weekly updates on everything culinary, cultural, and all the other odds and ends around town?

      I know this from Regensburg :)

      submitted by /u/Unfair_Hornet7475
      [link] [comments]

    9. 🔗 r/reverseengineering Made yet another ApkTool GUI (at least I think it's pretty) rss
    10. 🔗 r/reverseengineering PalmOS on FisherPrice Pixter Toy rss
    11. 🔗 r/LocalLLaMA The Best Open-Source 8B-Parameter LLM Built in the USA rss

      Rnj-1 is a family of 8B-parameter open-weight, dense models trained from scratch by Essential AI, optimized for code and STEM, with capabilities on par with SOTA open-weight models. These models

      ‱ perform well across a range of programming languages.
      ‱ boast strong agentic capabilities (e.g., inside agentic frameworks like mini-SWE-agent).
      ‱ excel at tool-calling.

      Both raw and instruct variants are available on the Hugging Face platform.

      Model Architecture Overview

      Rnj-1's architecture is similar to Gemma 3, except that it uses only global attention, and YaRN for long-context extension.

      Training Dynamics

      Rnj-1 was pre-trained on 8.4T tokens with an 8K context length, after which the model’s context window was extended to 32K through an additional 380B-token mid-training stage. A final 150B-token SFT stage completed the training to produce rnj-1-instruct.

      submitted by /u/Dear-Success-1441
      [link] [comments]

    12. 🔗 @cxiao@infosec.exchange you can read about tara's case here: mastodon

      you can read about tara's case here: https://www.freetara.info/home

    13. 🔗 @cxiao@infosec.exchange RE: mastodon

      RE: https://mastodon.online/@hkfp/115670539559004434

      this is so fucking scary to me because:

      1) this girl was living in france
      2) she took pains to protect her identity - she wrote for the pro-tibet blog anonymously and had an altered voice on her podcast appearance
      3) this girl is han chinese
      4) she suspected absolutely nothing and thought it was fine to just visit home for a bit before starting a new degree
      5) like everyone else in this situation she just disappeared and no one knows where she is

      no one is more sinophobic than the ccp

    14. 🔗 matklad Mechanical Habits rss

      Mechanical Habits

      Dec 6, 2025

      My schtick as a software engineer is establishing automated processes — mechanically enforced patterns of behavior. I have collected a Santa Claus bag of specific tricks I’ve learned from different people, and want to share them in turn.

      Caution: engineering processes can be tricky to apply in a useful way. A process is a logical cut — there’s some goal we actually want, and automation can be a shortcut to achieve it, but automation per se doesn’t explain what the original goal is. Keep the goal and adjust the processes on the go. Sanity checks: A) automation should reduce toil. If robots create work for humans, down with the robots! B) good automation usually is surprisingly simple, simplistic even. Long live the duct tape!

      Weekly Releases

      By far the most impactful trick — make a release of your software every Friday. The first-order motivation is to reduce the stress and effort required for releases. If releases are small, writing changelogs is easy, assessing the riskiness of a release requires nothing more than mentally recalling a week’s worth of work, and there’s no need to aim to land features in a particular release. Delaying a feature by a week is nothing; delaying it by a year is a reason to pull an all-nighter.

      As an example, this Friday I was filling out my US visa application, so I was feeling somewhat tired in the evening. I was also the release manager. So I just messaged “sorry, I am feeling too tired to make a release, we are skipping this one” without thinking much about it. It’s cheap to skip a release, so there’s no temptation to push yourself to get it done anyway (and then quickly follow up with a point release, the usual consequence).

      But the real gem is the second-order effect — weekly releases force you to fix all your other processes to keep the codebase healthy all the time. And it is much easier to keep the flywheel going at roughly the same speed than to periodically struggle to get it spinning again. Temporal locality is king: “I don’t have time to fix X right now, I’ll do it before the release” is the killer. By the time of the release you’ll need 2× the time just to load X back into your head! It’s much faster overall to immediately make every line of code releasable. Strike while the iron is hot!

      Epistemic Aside

      I’ve done releases every Friday in IntelliJ Rust, rust-analyzer, and TigerBeetle, to great success. It’s worth reflecting on how I got there. The idea has two parents:

      Both seemed worthwhile for me to try, and I figured that a nice synthesis would be to release every Monday, not every six weeks (I later moved cutting the release to Friday, so that it could bake in beta/fuzzers over the weekend). I had just finished university at that point, and had almost zero work experience! The ideas made sense to me not because of my past experience, or because they were promulgated by some big names, but because they hold up if you just think about them from first principles. If anything, it’s the other way around — I fell in love with Rust and Pieter’s writing because of the quality of the ideas. And I only needed common sense to assess the ideas, no decade in the industry required.

      This applies to the present blog post — engage with the ideas, remix them, and improve them. Don’t treat this article as a mere cookbook; it is not one.

      Not Rocket Science Rule

      I feel like I link https://graydon2.dreamwidth.org/1597.html from every second post of mine, so I’ll keep it short this time.

      ‱ Only advance the tip of the master branch to a commit hash for which you already know the test results. That is, make a detached merge commit, test that, then move the tip.
      • Don’t do it yourself, let the robot do it.

      The direct benefit is making the process of getting code in asynchronous. When you submit a PR, you don’t need to wait until CI is complete and then make a judgement call about whether the results are fresh enough or you need to rebase onto the new tip of master. You just tell the robot “merge when the merge commit is green”. The standard setup uses robots to create work for humans; a merge queue inverts this.

      But the true benefit is second-order! You can’t really ask the robot nicely to let your very important PR in, despite a completely unrelated flaky failure elsewhere. You are forced to keep your CI setup tidy.

      There’s also a third-order benefit. NRSR encourages a holistic view of your CI as a set of invariants that actually hold for your software — a type system of sorts. And that thinking makes you realize that every automatable check can be a test. Again, good epistemology helps: it’s not the idea of bors that is most valuable, it’s the reasoning behind it: “automatically maintain a repository of code that always passes all the tests”, “monotonically increasing test coverage”. Go re-read Graydon’s post!

      Tidy Script

      This is another idea borrowed from Rust. Use a tidy file to collect various project-specific lint checks as tests. The biggest value of such a tidy.zig is its mere existence: it’s much easier to add a new check than to create “checking infrastructure”. Some checks we do at TigerBeetle:

      ‱ No large binary blobs in git history. Don’t repeat my rust-analyzer mistake here: look for actual git objects, not just files in the working tree. Someone once sneaked 1 MiB of reverted protobuf nonsense past me and my file-based check.
      • Line & function length.
      • No problematic (for our use case) std APIs are used.
      • No // FIXME comments. This is used positively — I add // FIXME comments to code I want to change before the merge (this one is also from Rust!).
      ‱ No dead code (Zig-specific, as the compiler is not well-positioned to tackle that, due to its lazy compilation model).

      Pro tip for writing tidy checks — shell out to git ls-files -z to figure out which files need tidying.

      DevHub

      I don’t remember the origin here, but https://deno.com/benchmarks certainly is an influence.

      The habit is to maintain, for every large project, a directory with static files which is directly deployed from the master branch as a project’s internal web page. E.g., for TigerBeetle:

      Again, the motivation is mere existence and removal of friction. This is an office whiteboard which you can just write on, for whatever purpose! Things we use ours for:

      • Release rotation.
      • Benchmark & fuzzing results. This is a bit of social engineering: you check DevHub out of anxiety, to make sure it’s not your turn to make a release this week, but you get to spot performance regressions!
      • Issues in need of triaging.

      I gave a talk about using DevHub for visualizing fuzzing results for HYTRADBOI (video).

      Another tip: JSON file in a git repository is a fine database to power such an internal website. JSONMutexDB for the win.
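The pattern is just read-modify-write on one JSON file, serialized by whatever lock the CI pipeline already holds. A minimal sketch (the file name and schema here are invented for illustration):

```python
import json
import os
import tempfile

def devhub_update(path: str, key: str, value) -> None:
    # Read the current state; a missing file is just an empty database.
    try:
        with open(path) as f:
            db = json.load(f)
    except FileNotFoundError:
        db = {}
    db[key] = value
    # Write atomically: dump into a temp file in the same directory,
    # then rename over the original, so readers never see half a JSON.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(db, f, indent=2)
    os.replace(tmp, path)
```

Committing the updated file back to the repository then doubles as history: git log is your audit trail for free.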

      Micro Benchmarks

      The last one for today, and the one that prompted this article! I am designing a new mechanical habit for TigerBeetle and I want to capture the process while it is still fresh in my mind.

      It starts with something rotten. Micro benchmarks are hard. You write one while you are working on the code, but then it bitrots, and by the time the next person has a brilliant optimization idea, they cannot compile the benchmark anymore, and they have no idea which part of the three pages of output is important.

      A useful trick for solving bitrot is to chain a new habit onto an existing one. Avoid multiplying entry points (O(1) Build File). The appropriate entry point here is the tests. So each micro benchmark is going to be just a test:

      test "benchmark: binary search" {
          // ...
      }
      

      Bitrot problem solved. Now we have two new problems. First, you generally want to run the benchmark long enough to push the times into the human range (~2 seconds), so that any improvements are immediately, viscerally perceived. But 2 seconds is too slow for a test, and tests are usually run in Debug mode. The second problem is that you want to see the timing results printed when you run the benchmark, but you don’t want to see that output when you run the tests!

      So, we really want two modes here. In the first mode, we really are running a benchmark: it is compiled with optimizations, we aim for a runtime in the low seconds, and we want to print the timings afterwards. In the second mode, we are running our test suite, and we want to run the benchmark just for some token amount of time. The DWIM (do what I mean) principle helps here. We run the entire test suite as ./zig/zig build test, and a single benchmark as ./zig/zig build test -- "benchmark: search". So we use the shape of the CLI invocation to select benchmarking mode.
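The DWIM rule fits in one line: a test filter that names a benchmark selects benchmark mode, no filter means plain test mode. Sketched in Python (TigerBeetle implements this inside its Zig test runner; this function name is made up):

```python
def bench_mode(test_filters: list[str]) -> bool:
    # No filter: the whole suite runs, so stay in fast test mode.
    # A filter naming a "benchmark:" test: the user wants real timings.
    return any(f.startswith("benchmark:") for f in test_filters)
```

Note that no new CLI flag is introduced; the existing filter argument carries the intent.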

      This mode then determines whether we should pick large or small parameters. Playing around with the code, it feels like the following is a nice shape of code to get parameter values:

      var bench = Bench.init();
      
      const element_count =
          bench.parameter("element_count", 1_000, 10_000_000);
      
      const search_count =
          bench.parameter("search_count", 5_000, 500_000);
      

      The small value is for test mode, the big value is for benchmark mode, and the name is useful for printing the actual parameter value:

      bench.report("{s}={}", .{ name, value });
      

      This report function is what decides whether to swallow (test mode) or show (benchmark mode) the output. Printing the values makes copy-pasted benchmark results understandable without extra context. And, now that we have put the names in, we get to override parameter values via environment variables for free!
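The whole Bench.parameter contract fits in a few lines: pick small or large by mode, unless an environment variable overrides both. A Python sketch of those semantics (the real thing is the Zig Bench above; this mirror is purely illustrative):

```python
import os

def parameter(name: str, small: int, large: int, *, bench: bool) -> int:
    # An explicit environment variable override wins over both defaults.
    override = os.environ.get(name)
    if override is not None:
        return int(override)
    # Otherwise: token-sized value in test mode, real workload in benchmark mode.
    return large if bench else small
```

Running, say, `element_count=50000 zig build test -- "benchmark: search"` then scales the workload without touching the source.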

      And this is more or less it? We now have a standard pattern for growing the set of micro benchmarks, which feels like it should hold up as time passes?

      https://github.com/tigerbeetle/tigerbeetle/pull/3405

      Check back in a couple of years to see if this mechanical habit sticks!

    15. 🔗 Jamie Brandon 0056: consulting, zest progress, existentialize, modular borrowing, do we understand sql, zjit updates, books rss
      (empty)