

  1. Transparent Leadership Beats Servant Leadership
  2. Writing a good CLAUDE.md | HumanLayer Blog
  3. My Current global CLAUDE.md
  4. About KeePassXC's Code Quality Control – KeePassXC
  5. How to build a remarkable command palette

  1. December 05, 2025
    1. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      ## New releases
      - [augur](https://github.com/0xdea/augur): 0.7.4
      
    2. 🔗 cased/kit v2.2.1 release

      Bug Fixes

      • GPT-5 Model Compatibility (#154)
        • Fixed OpenAI API 400 error when using GPT-5 models for PR reviews
        • GPT-5 models require max_completion_tokens instead of the deprecated max_tokens parameter
        • Updated all OpenAI API calls in PR review module (reviewer, summarizer, commit generator, local reviewer, matrix tester)
        • Added comprehensive test coverage for GPT-5 parameter handling
    3. 🔗 19h/ida-lifter v1.1.0 release

      Full Changelog: v1.0.0...v1.1.0

    4. 🔗 News Minimalist 🐒 Court orders OpenAI to share user chat logs + 9 more stories rss

      In the last 2 days ChatGPT read 61308 top news stories. After removing previously covered events, there are 10 articles with a significance score over 5.5.

      [5.7] Court orders OpenAI to share ChatGPT chats in copyright lawsuit — techradar.com (+10)

      A U.S. court has ordered OpenAI to release 20 million user chat logs for The New York Times' copyright lawsuit, setting a major precedent for AI data privacy.

      The logs are sought by the Times to check for copyright breaches and will be de-identified before being handed over. The ruling gives OpenAI seven days to comply with the request.

      OpenAI opposes the order, stating it puts user privacy at risk despite the court's safeguards. This is the first time the company has been forced to hand over such data.

      [6.3] EU eases rules for genetically modified products — orf.at (German) (+19)

      The European Union has agreed to relax rules for foods made with New Genomic Techniques, removing mandatory labeling requirements for many products and exempting them from strict genetic engineering regulations.

      The agreement, reached Thursday, creates two categories. Plants with limited edits, like via CRISPR, will face fewer rules, while those with more complex modifications, such as genes from other species, will remain strictly regulated.

      Proponents expect more climate-resistant crops and increased competitiveness, while opponents' demands for consumer choice through labeling were unsuccessful. Organic farming is to remain GMO-free.

      Highly covered news with significance over 5.5

      [6.0] Trump announces peace deal between Rwanda and DRC, securing mineral access — thetimes.com [$] (+63)

      [6.3] British inquiry finds Putin approved 2018 Skripal poisoning, UK imposes sanctions — jp.reuters.com (Japanese) (+46)

      [6.1] India leases Russian submarine to bolster naval power in the Indian Ocean — businesstoday.in (+6)

      [5.9] EU adds Russia to high-risk financial crime list — dw.com (Russian) (+3)

      [5.7] Kenya and US sign $2.5 billion health aid agreement — bbc.com (+5)

      [5.5] Shingles vaccine linked to lower dementia death rates — medpagetoday.com (+10)

      [5.6] European cereals contain high levels of toxic "forever chemicals" — theguardian.com (+13)

      [6.0] MIT researchers develop noninvasive glucose monitor — medicalxpress.com (+4)

      Thanks for reading!

      — Vadim



    5. 🔗 HexRaysSA/plugin-repository commits sync repo: +2 releases rss
      sync repo: +2 releases
      
      ## New releases
      - [haruspex](https://github.com/0xdea/haruspex): 0.7.4
      - [rhabdomancer](https://github.com/0xdea/rhabdomancer): 0.7.5
      
    6. 🔗 r/wiesbaden Regional train rss

      There should be a regional train that goes directly from Wiesbaden hbf to Frankfurt hbf with no stops

      submitted by /u/Electrical-You-6513
      [link] [comments]

    7. 🔗 @cxiao@infosec.exchange Really interesting story about a broken proxy, but also, the description of mastodon

      Really interesting story about a broken proxy, but also, the description of how AI usage erodes insight and understanding hit the nail on the head:

      "Before AI, developers were once (sometimes) forced to pause, investigate, and understand. Now, it's becoming easier and more natural to simply assume they grasp far more than they actually do. The result is seemingly an ever-growing gap between what they believe they understand, versus what they genuinely understand. This gap will only grow larger, as AI's suggestions diverge from operators' true knowledge."
      https://infosec.place/objects/8e460f08-cdae-4e0d-81f1-c030100d896b

    8. 🔗 r/LocalLLaMA Basketball AI with RF-DETR, SAM2, and SmolVLM2 rss

      Basketball AI with RF-DETR, SAM2, and SmolVLM2 | resources: youtube, code, blog
      - player and number detection with RF-DETR
      - player tracking with SAM2
      - team clustering with SigLIP, UMAP and K-Means
      - number recognition with SmolVLM2
      - perspective conversion with homography
      - player trajectory correction
      - shot detection and classification

      submitted by /u/RandomForests92
      [link] [comments]

    9. 🔗 r/wiesbaden Free or low-cost rabies vaccines (Tollwut Impfungen) for cats in Wiesbaden? rss

      Repost from an international sub, hence in English. My question is whether anyone knows where in Wiesbaden you can get rabies shots (Tollwut-Impfungen) for free or very cheap for the pets of people in financial hardship.

      TL;DR: urgently need to get my 4 cats Rabies vaccinations so their immigration restrictions don't expire but I am broke broke poor. Looking for vets or places in Wiesbaden, Germany to get a Tollwut - Impfung for very cheap.


      TW: Trauma dumping

      Hi, before anyone says "if you're too poor to afford the vet, you don't deserve to have pets," I wasn't always in this situation. Before I got cancer I had a pretty high salary. I've had my cats since they were 4 weeks old and I am important to them. Likewise, they are like children to me and I am only still alive because I don't want to traumatize them by suddenly disappearing. My cats are the most important people in my life.

      That being said, why am I in this situation? Well, I lived the past 15 years in the US but got cancer in 2022 and became unable to work but Social Security said I wasn't disabled, so I was without an income and became homeless in 2025.

      Additionally, when I got the news that you know who would become president in Nov 2024, I prepared having to leave the country when he takes away healthcare because I have cancer and need that shit. I didn't want to return to Germany because of severe childhood trauma and decided to move to Japan where I had felt the most home ever in my life when I studied abroad. Problem is, immigrating to Japan, you need a job and getting cats there is ridiculously complicated.

      The process for taking cats to Japan is get an international microchip > Rabies vaccine > wait 30 days > get another rabies vaccine > get a ridiculously expensive rabies titer test > wait 6 months > get a health certificate from certified vet > get government permission to take the cats to Japan.

      I went through this process and spent several thousand dollars to get my cats ready to go to Japan but the job I was interviewing for backed out at the last minute. I couldn't stay in the US, so I reluctantly went back to Germany where I'm absolutely miserable because of all the bad shit that was done to me in my childhood and I'd prefer to have an ocean between me and my abuser.

      I am currently fighting Social Security in court because I found out that they lied in both my initial application and appeal, making up medical findings that don't exist and I am still hoping that if I get backpay for the year I didn't get social security, I could go to Japan via language school or something.

      The rabies titer test is supposed to be good until end of Dec 2026 but I have a problem. Their rabies vaccinations were only for a year. Now, if their rabies vaccines expire, that means the rabies titer test expires, all the money I spent on getting my cats ready is not only wasted but the 7 month wait period would reset which would make immigrating a lot more difficult.

      I'm now in Wiesbaden Germany. It's only the beginning of the month and I have less than 150€ left from my Bürgergeld for the rest of the month (to buy food etc.) due to unforeseen shit. I still owe the vet 90€ because last month I brought in Spooky to get her hyperthyroidism meds because I had to ration them when I was homeless and moving countries. But the vet insisted that we need to do a blood test before he prescribed the meds. I told him I can't afford a blood test and he let me pay half in November half in December but I still don't have Spooky's meds which are probably gonna be another 80€ or so. Then I remembered their rabies vaccines are expiring this month too. I asked how much are the rabies vaccines and they told me it's 80.50€ per cat. So 322€ for 4 cats. With what I still owe and the meds I still need for Spooky, that's almost 500€. I cannot afford that. That's almost the entire amount of Bürgergeld I get for a month. I barely have enough money to eat and my phone is falling apart.

      Does anyone know if there's a place that gives free or low cost rabies vaccinations in Wiesbaden, Germany (no, I don't have a car to take them further :(

      submitted by /u/not_ya_wify
      [link] [comments]

    10. 🔗 Anton Zhiyanov Gist of Go: Concurrency internals rss

      This is a chapter from my book on Go concurrency, which teaches the topic from the ground up through interactive examples.

      Here's where we started this book:

      Functions that run with go are called goroutines. The Go runtime juggles these goroutines and distributes them among operating system threads running on CPU cores. Compared to OS threads, goroutines are lightweight, so you can create hundreds or thousands of them.

      That's generally correct, but it's a little too brief. In this chapter, we'll take a closer look at how goroutines work. We'll still use a simplified model, but it should help you understand how everything fits together.

      Concurrency • Goroutine scheduler • GOMAXPROCS • Concurrency primitives • Scheduler metrics • Profiling • Tracing • Keep it up

      Concurrency

      At the hardware level, CPU cores are responsible for running parallel tasks. If a processor has 4 cores, it can run 4 instructions at the same time — one on each core.

        instr A     instr B     instr C     instr D
      ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
      │ Core 1  │ │ Core 2  │ │ Core 3  │ │ Core 4  │ CPU
      └─────────┘ └─────────┘ └─────────┘ └─────────┘
      

      At the operating system level, a thread is the basic unit of execution. There are usually many more threads than CPU cores, so the operating system's scheduler decides which threads to run and which ones to pause. The scheduler keeps switching between threads to make sure each one gets a turn to run on a CPU, instead of waiting in line forever. This is how the operating system handles concurrency.

      ┌──────────┐              ┌──────────┐
      │ Thread E │              │ Thread F │              OS
      └──────────┘              └──────────┘
      ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
      │ Thread A │ │ Thread B │ │ Thread C │ │ Thread D │
      └──────────┘ └──────────┘ └──────────┘ └──────────┘
           │           │           │           │
      ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
      │ Core 1   │ │ Core 2   │ │ Core 3   │ │ Core 4   │ CPU
      └──────────┘ └──────────┘ └──────────┘ └──────────┘
      

      At the Go runtime level, a goroutine is the basic unit of execution. The runtime scheduler runs a fixed number of OS threads, often one per CPU core. There can be many more goroutines than threads, so the scheduler decides which goroutines to run on the available threads and which ones to pause. The scheduler keeps switching between goroutines to make sure each one gets a turn to run on a thread, instead of waiting in line forever. This is how Go handles concurrency.

      ┌─────┐┌─────┐┌─────┐┌─────┐┌─────┐┌─────┐
      │ G15 ││ G16 ││ G17 ││ G18 ││ G19 ││ G20 │
      └─────┘└─────┘└─────┘└─────┘└─────┘└─────┘
      ┌─────┐      ┌─────┐      ┌─────┐      ┌─────┐
      │ G11 │      │ G12 │      │ G13 │      │ G14 │      Go runtime
      └─────┘      └─────┘      └─────┘      └─────┘
        │            │            │            │
      ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
      │ Thread A │ │ Thread B │ │ Thread C │ │ Thread D │ OS
      └──────────┘ └──────────┘ └──────────┘ └──────────┘
      

      The Go runtime scheduler doesn't decide which threads run on the CPU — that's the operating system scheduler's job. The Go runtime makes sure all goroutines run on the threads it manages, but the OS controls how and when those threads actually get CPU time.

      Goroutine scheduler

      The scheduler's job is to run M goroutines on N operating system threads, where M can be much larger than N. Here's a simple way to do it:

      1. Put all goroutines in a queue.
      2. Take N goroutines from the queue and run them.
      3. If a running goroutine gets blocked (for example, waiting to read from a channel or waiting on a mutex), put it back in the queue and run the next goroutine from the queue.

      Take goroutines G11-G14 and run them:

      ┌─────┐┌─────┐┌─────┐┌─────┐┌─────┐┌─────┐
      │ G15 ││ G16 ││ G17 ││ G18 ││ G19 ││ G20 │          queue
      └─────┘└─────┘└─────┘└─────┘└─────┘└─────┘
      ┌─────┐      ┌─────┐      ┌─────┐      ┌─────┐
      │ G11 │      │ G12 │      │ G13 │      │ G14 │      running
      └─────┘      └─────┘      └─────┘      └─────┘
        │            │            │            │
      ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
      │ Thread A │ │ Thread B │ │ Thread C │ │ Thread D │
      └──────────┘ └──────────┘ └──────────┘ └──────────┘
      

      Goroutine G12 got blocked while reading from the channel. Put it back in the queue and replace it with G15:

      ┌─────┐┌─────┐┌─────┐┌─────┐┌─────┐┌─────┐
      │ G16 ││ G17 ││ G18 ││ G19 ││ G20 ││ G12 │          queue
      └─────┘└─────┘└─────┘└─────┘└─────┘└─────┘
      ┌─────┐      ┌─────┐      ┌─────┐      ┌─────┐
      │ G11 │      │ G15 │      │ G13 │      │ G14 │      running
      └─────┘      └─────┘      └─────┘      └─────┘
        │            │            │            │
      ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
      │ Thread A │ │ Thread B │ │ Thread C │ │ Thread D │
      └──────────┘ └──────────┘ └──────────┘ └──────────┘
      

      But there are a few things to keep in mind.

      Starvation

      Let's say goroutines G11–G14 are running smoothly without getting blocked by mutexes or channels. Does that mean goroutines G15–G20 won't run at all and will just have to wait (starve) until one of G11–G14 finally finishes? That would be unfortunate.

      That's why the scheduler checks each running goroutine roughly every 10 ms to decide if it's time to pause it and put it back in the queue. This approach is called preemptive scheduling: the scheduler can interrupt running goroutines when needed so others have a chance to run too.

      System calls

      The scheduler can manage a goroutine while it's running Go code. But what happens if a goroutine makes a system call, like reading from disk? In that case, the scheduler can't take the goroutine off the thread, and there's no way to know how long the system call will take. For example, if goroutines G11–G14 in our example spend a long time in system calls, all worker threads will be blocked, and the program will basically "freeze".

      To solve this problem, the scheduler starts new threads if the existing ones get blocked in a system call. For example, here's what happens if G11 and G12 make system calls:

      ┌─────┐┌─────┐┌─────┐┌─────┐
      │ G17 ││ G18 ││ G19 ││ G20 │                        queue
      └─────┘└─────┘└─────┘└─────┘

      ┌─────┐      ┌─────┐      ┌─────┐      ┌─────┐
      │ G15 │      │ G16 │      │ G13 │      │ G14 │      running
      └─────┘      └─────┘      └─────┘      └─────┘
        │            │            │            │
      ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
      │ Thread E │ │ Thread F │ │ Thread C │ │ Thread D │
      └──────────┘ └──────────┘ └──────────┘ └──────────┘

      ┌─────┐      ┌─────┐
      │ G11 │      │ G12 │                                syscalls
      └─────┘      └─────┘
        │            │
      ┌──────────┐ ┌──────────┐
      │ Thread A │ │ Thread B │
      └──────────┘ └──────────┘
      

      Here, the scheduler started two new threads, E and F, and assigned goroutines G15 and G16 from the queue to these threads.

      When G11 and G12 finish their system calls, the scheduler will stop or terminate the extra threads (E and F) and keep running the goroutines on four threads: A-B-C-D.

      This is a simplified model of how the goroutine scheduler works in Go. If you want to learn more, I recommend watching the talk by Dmitry Vyukov, one of the scheduler's developers: Go scheduler: Implementing language with lightweight concurrency (video, slides)

      GOMAXPROCS

      We said that the scheduler uses N threads to run goroutines. In the Go runtime, the value of N is set by a parameter called GOMAXPROCS.

      The GOMAXPROCS runtime setting controls the maximum number of operating system threads the Go scheduler can use to execute goroutines concurrently (not counting the goroutines running syscalls). It defaults to the value of runtime.NumCPU, which is the number of logical CPUs on the machine.

      Strictly speaking, runtime.NumCPU is either the total number of logical CPUs or the number allowed by the CPU affinity mask, whichever is lower. This can be adjusted by the CPU quota, as explained below.

      For example, on my 8-core laptop, the default value of GOMAXPROCS is also 8:

      maxProcs := runtime.GOMAXPROCS(0) // returns the current value
      fmt.Println("NumCPU:", runtime.NumCPU())
      fmt.Println("GOMAXPROCS:", maxProcs)
      
      
      
      NumCPU: 8
      GOMAXPROCS: 8
      

      You can change GOMAXPROCS by setting the GOMAXPROCS environment variable or by calling runtime.GOMAXPROCS():

      // Get the default value.
      fmt.Println("GOMAXPROCS default:", runtime.GOMAXPROCS(0))
      
      // Change the value.
      runtime.GOMAXPROCS(1)
      fmt.Println("GOMAXPROCS custom:", runtime.GOMAXPROCS(0))
      
      
      
      GOMAXPROCS default: 8
      GOMAXPROCS custom: 1
      

      You can also undo the manual changes and go back to the default value set by the runtime. To do this, use the runtime.SetDefaultGOMAXPROCS function (Go 1.25+):

      GOMAXPROCS=2 go run nproc.go
      
      
      
      // Using the environment variable.
      fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
      
      // Using the manual setting.
      runtime.GOMAXPROCS(4)
      fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
      
      // Back to the default value.
      runtime.SetDefaultGOMAXPROCS()
      fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
      
      
      
      GOMAXPROCS: 2
      GOMAXPROCS: 4
      GOMAXPROCS: 8
      

      CPU quota

      Go programs often run in containers, like those managed by Docker or Kubernetes. These systems let you limit the CPU resources for a container using a Linux feature called cgroups.

      A cgroup (control group) in Linux lets you group processes together and control how much CPU, memory, and network I/O they can use by setting limits and priorities.

      For example, here's how you can limit a Docker container to use only four CPUs:

      docker run --cpus=4 golang:1.24-alpine go run /app/nproc.go
      
      
      
      // /app/nproc.go
      maxProcs := runtime.GOMAXPROCS(0) // returns the current value
      fmt.Println("NumCPU:", runtime.NumCPU())
      fmt.Println("GOMAXPROCS:", maxProcs)
      

      Before version 1.25, the Go runtime didn't consider the CPU quota when setting the GOMAXPROCS value. No matter how you limited CPU resources, GOMAXPROCS was always set to the number of logical CPUs on the host machine:

      docker run --cpus=4 golang:1.24-alpine go run /app/nproc.go
      
      
      
      NumCPU: 8
      GOMAXPROCS: 8
      

      Starting with version 1.25, the Go runtime respects the CPU quota:

      docker run --cpus=4 golang:1.25-alpine go run /app/nproc.go
      
      
      
      NumCPU: 8
      GOMAXPROCS: 4
      

      So, the default GOMAXPROCS value is set to either the number of logical CPUs or the CPU limit enforced by cgroup settings for the process, whichever is lower.

      Note on CPU limits

      Cgroups actually offer not just one, but two ways to limit CPU resources:

      • CPU quota β€” the maximum CPU time the cgroup may use within some period window.
      • CPU shares β€” relative CPU priorities given to the kernel scheduler.

      Docker's --cpus and --cpu-period/--cpu-quota set the quota, while --cpu-shares sets the shares.

      Kubernetes' CPU limit sets the quota, while CPU request sets the shares.

      Go's runtime GOMAXPROCS only takes the CPU quota into account, not the shares.

      Fractional CPU limits are rounded up:

      docker run --cpus=2.3 golang:1.25-alpine go run /app/nproc.go
      
      
      
      NumCPU: 8
      GOMAXPROCS: 3
      

      On a machine with multiple CPUs, the minimum default value for GOMAXPROCS is 2, even if the CPU limit is set lower:

      docker run --cpus=1 golang:1.25-alpine go run /app/nproc.go
      
      
      
      NumCPU: 8
      GOMAXPROCS: 2
      

      The Go runtime automatically updates GOMAXPROCS if the CPU limit changes. It happens up to once per second (less frequently if the application is idle).

      Concurrency primitives

      Let's take a quick look at the three main concurrency tools for Go: goroutines, channels, and select.

      Goroutine

      A goroutine is implemented as a pointer to a runtime.g structure. Here's what it looks like:

      // runtime/runtime2.go
      type g struct {
          atomicstatus atomic.Uint32 // goroutine status
          stack        stack         // goroutine stack
          m            *m            // thread that runs the goroutine
          // ...
      }
      

      The g structure has many fields, but most of its memory is taken up by the stack, which holds the goroutine's local variables. By default, each stack gets 2 KB of memory, and it grows if needed.

      Because goroutines use very little memory, they're much more efficient than operating system threads, which usually need about 1 MB each. Their small size lets you run tens (or even hundreds) of thousands of goroutines on a single machine.

      Channel

      A channel is implemented as a pointer to a runtime.hchan structure. Here's what it looks like:

      // runtime/chan.go
      type hchan struct {
          // channel buffer
          qcount   uint           // number of items in the buffer
          dataqsiz uint           // buffer array size
          buf      unsafe.Pointer // pointer to the buffer array
      
          // closed channel flag
          closed uint32
      
          // queues of goroutines waiting to receive and send
          recvq waitq // waiting to receive from the channel
          sendq waitq // waiting to send to the channel
      
          // protects the channel state
          lock mutex
      
          // ...
      }
      

      The buffer array (buf) has a fixed size (dataqsiz, which you can get with the cap() builtin). It's created when you make a buffered channel. The number of items in the channel (qcount, which you can get with the len() builtin) increases when you send to the channel and decreases when you receive from it.

      The close() builtin sets the closed field to 1.

      Sending an item to an unbuffered channel, or to a buffered channel that's already full, puts the goroutine into the sendq queue. Receiving from an empty channel puts the goroutine into the recvq queue.

      Select

      The select logic is implemented in the runtime.selectgo function. It's a huge function that takes a list of select cases and (very simply put) works as follows:

      • Go through the cases and check if the matching channels are ready to send or receive.
      • If several cases are ready, choose one at random (to prevent starvation, where some cases are always chosen and others are never chosen).
      • Once a case is selected, perform the send or receive operation on the matching channel.
      • If there is a default case and no other cases are ready, pick the default.
      • If no cases are ready, block the goroutine and add it to the channel queue for each case.

      ✎ Exercise: Runtime simulator

      Theory alone isn't enough; practice is crucial for turning abstract knowledge into skills. The full version of the book contains a lot of exercises — that's why I recommend getting it.

      If you are okay with just theory for now, let's continue.

      Scheduler metrics

      Metrics show how the Go runtime is performing, like how much heap memory it uses or how long garbage collection pauses take. Each metric has a unique name (for example, /sched/gomaxprocs:threads) and a value, which can be a number or a histogram.

      We use the runtime/metrics package to work with metrics.

      List all available metrics with descriptions:

      func main() {
          descs := metrics.All()
          for _, d := range descs {
              fmt.Printf("Name: %s\n", d.Name)
              fmt.Printf("Description: %s\n", d.Description)
              fmt.Printf("Kind: %s\n", kindToString(d.Kind))
              fmt.Println()
          }
      }
      
      func kindToString(k metrics.ValueKind) string {
          switch k {
          case metrics.KindUint64:
              return "KindUint64"
          case metrics.KindFloat64:
              return "KindFloat64"
          case metrics.KindFloat64Histogram:
              return "KindFloat64Histogram"
          case metrics.KindBad:
              return "KindBad"
          default:
              return "Unknown"
          }
      }
      
      
      
      Name: /cgo/go-to-c-calls:calls
      Description: Count of calls made from Go to C by the current process.
      Kind: KindUint64
      
      Name: /cpu/classes/gc/mark/assist:cpu-seconds
      Description: Estimated total CPU time goroutines spent performing GC
      tasks to assist the GC and prevent it from falling behind the application.
      This metric is an overestimate, and not directly comparable to system
      CPU time measurements. Compare only with other /cpu/classes metrics.
      Kind: KindFloat64
      ...
      

      Get the value of a specific metric:

      samples := []metrics.Sample{
          {Name: "/sched/gomaxprocs:threads"},
          {Name: "/sched/goroutines:goroutines"},
      }
      metrics.Read(samples)
      
      for _, s := range samples {
          // Assumes the value is a uint64. Check the metric description
          // or use s.Value.Kind() if you're not sure.
          fmt.Printf("%s: %v\n", s.Name, s.Value.Uint64())
      }
      
      
      
      /sched/gomaxprocs:threads: 8
      /sched/goroutines:goroutines: 1
      

      Here are some goroutine-related metrics:

      /sched/goroutines-created:goroutines

      • Count of goroutines created since program start (Go 1.26+).

      /sched/goroutines:goroutines

      • Count of live goroutines (created but not finished yet).
      • An increase in this metric may indicate a goroutine leak.

      /sched/goroutines/not-in-go:goroutines

      • Approximate count of goroutines running or blocked in a system call or cgo call (Go 1.26+).
      • An increase in this metric may indicate problems with such calls.

      /sched/goroutines/runnable:goroutines

      • Approximate count of goroutines ready to execute, but not executing (Go 1.26+).
      • An increase in this metric may mean the system is overloaded and the CPU can't keep up with the growing number of goroutines.

      /sched/goroutines/running:goroutines

      • Approximate count of goroutines executing (Go 1.26+).
      • Always less than or equal to /sched/gomaxprocs:threads.

      /sched/goroutines/waiting:goroutines

      • Approximate count of goroutines waiting on a resource — I/O or sync primitives (Go 1.26+).
      • An increase in this metric may indicate issues with mutex locks, other synchronization blocks, or I/O.

      /sched/threads/total:threads

      • The current count of live threads owned by the runtime (Go 1.26+).

      /sched/gomaxprocs:threads

      • The current runtime.GOMAXPROCS setting — the maximum number of operating system threads the scheduler can use to execute goroutines concurrently.

      In real projects, runtime metrics are usually exported automatically with client libraries for Prometheus, OpenTelemetry, or other observability tools. Here's an example for Prometheus:

      package main
      
      import (
          "net/http"
          "github.com/prometheus/client_golang/prometheus/promhttp"
      )
      
      func main() {
          // Export runtime/metrics in Prometheus format at the /metrics endpoint.
          http.Handle("/metrics", promhttp.Handler())
          http.ListenAndServe("localhost:2112", nil)
      }
      

      The exported metrics are then collected by Prometheus, visualized, and used to set up alerts.

      Profiling

      Profiling helps you understand exactly what the program is doing, what resources it uses, and where in the code this happens. Profiling is often not recommended in production because it's a "heavy" process that can slow things down. But that's not the case with Go.

      Go's profiler is designed for production use. It uses sampling, so it doesn't track every single operation. Instead, it takes quick snapshots of the runtime every 10 ms and puts them together to give you a full picture.

      Go supports the following profiles:

      • CPU. Shows how much CPU time each function uses. Use it to find performance bottlenecks if your program is running slowly because of CPU-heavy tasks.
      • Heap. Shows the heap memory currently used by each function. Use it to detect memory leaks or excessive memory usage.
      • Allocs. Shows which functions have used heap memory since the profiler started (not just currently). Use it to optimize garbage collection or reduce allocations that impact performance.
      • Goroutine. Shows the stack traces of all current goroutines. Use it to get an overview of what the program is doing.
      • Block. Shows where goroutines block waiting on synchronization primitives like channels, mutexes and wait groups. Use it to identify synchronization bottlenecks and issues in data exchange between goroutines. Disabled by default.
      • Mutex. Shows lock contentions on mutexes and internal runtime locks. Use it to find "problematic" mutexes that goroutines are frequently waiting for. Disabled by default.

      The easiest way to add a profiler to your app is by using the net/http/pprof package. When you import it, it automatically registers HTTP handlers for collecting profiles:

      package main
      
      import (
          "net/http"
          _ "net/http/pprof"
          "runtime"
      )
      
      func main() {
          // Enable block and mutex profiles.
          runtime.SetBlockProfileRate(1)
          runtime.SetMutexProfileFraction(1)
          // Start an HTTP server on localhost.
          // Profiler HTTP handlers are automatically
          // registered when you import "net/http/pprof".
          http.ListenAndServe("localhost:6060", nil)
      }
      

      Or you can register profiler handlers manually:

      // Requires "log", "net/http", "net/http/pprof" (imported
      // normally as pprof, without the blank identifier),
      // "runtime", and "sync".
      var wg sync.WaitGroup
      
      wg.Go(func() {
          // Application server running on port 8080.
          mux := http.NewServeMux()
          mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
              w.Write([]byte("Hello, World!"))
          })
          log.Println("Starting hello server on :8080")
          log.Fatal(http.ListenAndServe(":8080", mux))
      })
      
      wg.Go(func() {
          // Profiling server running on localhost on port 6060.
          runtime.SetBlockProfileRate(1)
          runtime.SetMutexProfileFraction(1)
      
          mux := http.NewServeMux()
          mux.HandleFunc("/debug/pprof/", pprof.Index)
          mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
          mux.HandleFunc("/debug/pprof/trace", pprof.Trace)
          log.Println("Starting pprof server on :6060")
          log.Fatal(http.ListenAndServe("localhost:6060", mux))
      })
      
      wg.Wait()
      

      After that, you can start profiling with a specific profile by running the go tool pprof command with the matching URL, or just open that URL in your browser:

      go tool pprof -proto \
        "http://localhost:6060/debug/pprof/profile?seconds=N" > cpu.pprof
      
      go tool pprof -proto \
        http://localhost:6060/debug/pprof/heap > heap.pprof
      
      go tool pprof -proto \
        http://localhost:6060/debug/pprof/allocs > allocs.pprof
      
      go tool pprof -proto \
        http://localhost:6060/debug/pprof/goroutine > goroutine.pprof
      
      go tool pprof -proto \
        http://localhost:6060/debug/pprof/block > block.pprof
      
      go tool pprof -proto \
        http://localhost:6060/debug/pprof/mutex > mutex.pprof
      

      For the CPU profile, you can choose how long the profiler runs (the default is 30 seconds). Other profiles are taken instantly.

      After running the profiler, you'll get a binary file that you can open in the browser using the same go tool pprof utility. For example:

      go tool pprof -http=localhost:8080 cpu.pprof
      

      The pprof web interface lets you view the same profile in different ways. My personal favorites are the flame graph , which clearly shows the call hierarchy and resource usage, and the source view, which shows the exact lines of code.

      Flame graph view: shows the call hierarchy and resource usage.

      Source view: shows the exact lines of code.

      You can also profile manually. To collect a CPU profile, use StartCPUProfile and StopCPUProfile:

      package main
      
      import (
          "os"
          "runtime/pprof"
      )
      
      func main() {
          // Start profiling and stop it when main exits.
          // Ignore errors for simplicity.
          file, _ := os.Create("cpu.prof")
          defer file.Close()
          pprof.StartCPUProfile(file)
          defer pprof.StopCPUProfile()
      
          // The rest of the program code.
          // ...
      }
      

      To collect other profiles, use Lookup:

      package main
      
      import (
          "os"
          "runtime"
          "runtime/pprof"
      )
      
      // profile collects a profile with the given name.
      func profile(name string) {
          // Ignore errors for simplicity.
          file, _ := os.Create(name + ".prof")
          defer file.Close()
          p := pprof.Lookup(name)
          if p != nil {
              p.WriteTo(file, 0)
          }
      }
      
      func main() {
          runtime.SetBlockProfileRate(1)
          runtime.SetMutexProfileFraction(1)
      
          // ...
          profile("heap")
          profile("allocs")
          // ...
      }
      

      Profiling is a broad topic, and we've only touched the surface. To learn more, start with these articles:

      Tracing

      Tracing records certain types of events while the program is running, mainly those related to concurrency and memory:

      • goroutine creation and state changes;
      • system calls;
      • garbage collection;
      • heap size changes;
      • and more.

      If you enabled the profiling server as described earlier, you can collect a trace using this URL:

      http://localhost:6060/debug/pprof/trace?seconds=N
      

      Trace files can be quite large, so it's better to use a small N value.

      After tracing is complete, you'll get a binary file that you can open in the browser using the go tool trace utility:

      go tool trace -http=localhost:6060 trace.out
      

      In the trace web interface, you'll see each goroutine's "lifecycle" on its own line. You can zoom in and out of the trace with the W and S keys, and you can click on any event to see more details:

      Trace web interface

      You can also collect a trace manually:

      package main
      
      import (
          "os"
          "runtime/trace"
      )
      
      func main() {
          // Start tracing and stop it when main exits.
          // Ignore errors for simplicity.
          file, _ := os.Create("trace.out")
          defer file.Close()
          trace.Start(file)
          defer trace.Stop()
      
          // The rest of the program code.
          // ...
      }
      

      Flight recorder

      Flight recording is a tracing technique that collects execution data, such as function calls and memory allocations, within a sliding window that's limited by size or duration. It helps to record traces of interesting program behavior, even if you don't know in advance when it will happen.

      The trace.FlightRecorder type (Go 1.25+) implements a flight recorder in Go. It tracks a moving window over the execution trace produced by the runtime, always containing the most recent trace data.

      Here's an example of how you might use it.

      First, configure the sliding window:

      // Configure the flight recorder to keep
      // at least 5 seconds of trace data,
      // with a maximum buffer size of 3MB.
      // Both of these are hints, not strict limits.
      cfg := trace.FlightRecorderConfig{
          MinAge:   5 * time.Second,
          MaxBytes: 3 << 20, // 3MB
      }
      

      Then create the recorder and start it:

      // Create and start the flight recorder.
      rec := trace.NewFlightRecorder(cfg)
      rec.Start()
      defer rec.Stop()
      

      Continue with the application code as usual:

      // Simulate some workload.
      done := make(chan struct{})
      go func() {
          defer close(done)
          const n = 1 << 20
          var s []int
          for range n {
              s = append(s, rand.IntN(n))
          }
          fmt.Printf("done filling slice of %d elements\n", len(s))
      }()
      <-done
      

      Finally, save the trace snapshot to a file when an important event occurs:

      // Save the trace snapshot to a file.
      file, _ := os.Create("/tmp/trace.out")
      defer file.Close()
      n, _ := rec.WriteTo(file)
      fmt.Printf("wrote %dB to trace file\n", n)
      
      
      
      done filling slice of 1048576 elements
      wrote 8441B to trace file
      

      Use go tool trace to view the trace in the browser:

      go tool trace -http=localhost:6060 /tmp/trace.out
      

      ✎ Exercise: Comparing blocks

      Theory alone isn't enough; practice is what turns abstract knowledge into skills. The full version of the book contains many exercises, which is why I recommend getting it.

      If you are okay with just theory for now, let's continue.

      Keep it up

      Now you can see how challenging the Go scheduler's job is. Fortunately, most of the time you don't need to worry about how it works behind the scenes β€” sticking to goroutines, channels, select, and other synchronization primitives is usually enough.

      This is the final chapter of my "Gist of Go: Concurrency" book. I invite you to read it β€” the book is an easy-to-understand, interactive guide to concurrency programming in Go.

      Pre-order for $10 or read online

    11. πŸ”— @malcat@infosec.exchange [#Malcat](https://infosec.exchange/tags/Malcat) tip: mastodon

      #Malcat tip:

      #Kesakode can be useful even when facing unknown/packed samples. Check "Show UNK" and focus on unique code and strings.

      Here a simple downloader:

    12. πŸ”— The Pragmatic Engineer Downdetector and the real cost of no upstream dependencies rss

      The below is one out of five topics from The Pulse #154. Full subscribers received this article two weeks ago. To get articles like this in your inbox every week, subscribe here.

      Many subscribers expense The Pragmatic Engineer Newsletter to their learning and development budget. If you have such a budget, here's an email you could send to your manager.


      One amusing detail of the November 2025 Cloudflare outage is that the realtime outage and monitoring service, Downdetector, went down, revealing a key dependency on Cloudflare. At first, this looks odd; after all, Downdetector is about monitoring uptime, so why would it take on a key dependency like Cloudflare if it means this can happen?

      Downdetector was built multi-region and multi-cloud, which I confirmed by talking with Senior Director of Engineering, Dhruv Arora, at Ookla, the company behind Downdetector. Multi-cloud resilience makes little sense for most products, but Downdetector was built to detect cloud provider outages, as well. And for this, they needed to be multi-cloud!

      Still, Downdetector uses Cloudflare for DNS, Content Delivery (CDN), and Bot Protection. So, why would it take on this one key dependency, as opposed to hosting everything on its own servers?

      A CDN has advantages that are hard to ignore, such as:

      • Drastically lower bandwidth costs, as assets cached on the CDN don't need to be served from origin servers
      • Faster load times because assets on a CDN are served from Edge nodes nearer users
      • Protection from sudden traffic spikes, as would be common for Downdetector, especially during outages! Without a CDN, those spikes could overload their services
      • DDoS protection from bad actors taking the site offline with a distributed denial of service attack
      • Reduced infrastructure requirements, as Downdetector can run on fewer servers

      Downdetector's usage patterns reflect that it's a service very heavily used by consumers whom the business doesn't really monetize (Downdetector is free to use). So, Downdetector could get rid of Cloudflare, but costs would surge, the site would become slower to load, and revenue wouldn't change.

      In the end, Downdetector's dependence on Cloudflare looks like a pragmatic choice driven by the business model: removing the upstream dependency on Cloudflare would get very expensive.

      Dhruv confirmed this and shared more about the design choices at Downdetector:

      "Building redundancy at the DNS & CDN layers would require enormous overhead. This is especially true as Cloudflare's Bot Protection is world- class, and building similar functionality would be a lot of effort. There are hyperscalers [cloud providers] that have this kind of redundancy built in. We will look into what we can do, but with a team size in the double digits, building up a core piece of infra like this is a pretty tall order: not just for us, but for any mid-sized team.

      We've learned that there are more things that we can improve, for the future. For example, during the outage, the Cloudflare control plane was down, but their API wasn't. So, us having more Infrastructure as Code could have helped bring back Downdetector sooner.

      On our end, we also noticed that the outage wasn't global, so we were able to shift traffic around and reduce the impact.

      One more interesting detail: Cloudflare's Bot Protection went haywire during the outage, and started to block legitimate traffic. So, our team had to turn that off temporarily".

      Thanks very much to Dhruv and the Downdetector team for sharing details.

    13. πŸ”— r/reverseengineering LIVE! Analyzing TTD traces in Binary Ninja with esReverse rss
    14. πŸ”— r/reverseengineering Yesterday I came across this method in medium rss
    15. πŸ”— @binaryninja@infosec.exchange We are teaming up with eShard on December 5th at 10am EST for a live demo that mastodon

      We are teaming up with eShard on December 5th at 10am EST for a live demo that walks through the full PetyA workflow! We'll look at how the malware drops and maps its DLL into memory, how TTD recording captures the full execution, and how the Binary Ninja debugger ties into their trace tooling. Join us here: https://youtube.com/live/nzar2L4GUJ8

    16. πŸ”— Rust Blog crates.io: Malicious crates finch-rust and sha-rust rss

      Summary

      On December 5th, the crates.io team was notified by Kush Pandya from the Socket Threat Research Team of two malicious crates that tried to cause confusion with the existing finch crate while adding a dependency on a malicious crate performing data exfiltration.

      These crates were:

      • finch-rust - 1 version published November 25, 2025, downloaded 28 times, used sha-rust as a dependency
      • sha-rust - 8 versions published between November 20 and November 25, 2025, downloaded 153 times

      Actions taken

      The user in question, face-lessssss, was immediately disabled, and the crates in question were deleted from crates.io shortly after. We have retained the malicious crate files for further analysis.

      The deletions were performed at 15:52 UTC on December 5th.

      We reported the associated repositories to GitHub and the account has been removed there as well.

      Analysis

      Socket has published their analysis in a blog post.

      These crates had no dependent downstream crates on crates.io, and there is no evidence of either of these crates being downloaded outside of automated mirroring and scanning services.

      Thanks

      Our thanks to Kush Pandya from the Socket Threat Research Team for reporting the crates. We also want to thank Carol Nichols from the crates.io team and Adam Harvey from the Rust Foundation for aiding in the response.

  2. December 04, 2025
    1. πŸ”— @cxiao@infosec.exchange JD....bro.......😭😭😭😭😭 mastodon

      JD....bro.......😭😭😭😭😭

      (new signalgate screenshots from
      https://media.defense.gov/2025/Dec/04/2003834916/-1/-1/1/DODIG_2026_021.PDF)

    2. πŸ”— Kagi release notes Dec 4th, 2025 - New Kagi Search companions and quality-of-life improvements rss

      Kagi Search

      Introducing Kagi Search companions: you can now choose your preferred companion on Kagi Search! More companions coming soon.

      Other improvements and bug fixes:

      • Context menus for inline news and videos are stuck inside the frame #9127 @pma_snek
      • "We haven't found anything" when asking a follow-up question in Quick Answer #8986 @jstolarek
      • Feedback on the new Quick Answer experience #8729 @Jesal
      • Custom Default Model for Quick Answer Follow-ups #5533 @MaCl0wSt
      • Quick Answer 'Show More' Memory #8902 @Dustin
      • Out of place image in wikipedia widget #9114 @Numerlor
      • Quick kagibara eye animations #9103 @kagifuser
      • Doggo's snoot isn't contrasting well on the pricing page #8477 @Thibaultmol
      • Quick Answer - Continue in Assistant not clickable #9010 @Dustin

      Kagi Assistant

      We've made the following model upgrades:

      • Research Assistant now uses Nano Banana Pro for image generation and editing
      • Claude 4.5 Opus and Deepseek v3.2 have been updated to their latest versions

      Bug fixes:

      • Weird recording of voice in assistant #8672 @StefanHaglund
      • GPT OSS 120B stray think tag #8951 @claudinec
      • Citation popups cropped within tables #9025 @hinq
      • Include chat title in shared chat link preview #9045 @bert

      Kagi Translate

      • [Extension] Discord, Whatsapp, Telegram, Reddit integrations
      • [Extension] Redirect from translate.kagi.com/url #8503 @Thibaultmol
      • [Extension] Statistics page in settings
      • [Extension] Apply suggestions (translate/proofreading) directly from overlay #8695 @orschiro

      Slop Detective

      • Slop Detective page doesn't scroll, which can prevent progress #9079 @jducoeur
      • Slop Detective image zoom/magnifying glass should be shifted on phone/mobile/touch screens #9080 @dru4522
      • Slop Detective calls all images "photographs" #9072 @ccombsdc

      Post of the week

      Here is this week's featured social media mention. We truly appreciate your support in spreading the word, so be sure to follow us and tag us in your comments!

      Community creations

      James Downs built a Kagi News app for Pebble watches. Check out this growing list of Kagi community creations for various devices and apps! Have one to share? Let us know.

      Small Web badges

      Small Web initiative members can display badges on their websites to identify themselves as part of a community committed to authentic content created by humans. Grab them here! And keep exploring what the Small Web has to offer.

      Collection of five Small Web initiative badges in pixel art style with
orange and black color scheme, some which contain Kagi's dog mascot named
Doggo

      End-of-Year Community Event

      Illustration of Kagi's mascot Doggo flying towards a toy yellow ball
surrounded by clouds, with the text: Kagi End of Year Community Event and the
date and time: December 19 at 9am
PST

      As we wrap up an exciting year for Kagi, we'd love to have you join us for our end-of-year community event on December 19 at 09:00 PST (convert to your local time).

      We'll share a comprehensive "Year in Review" covering Kagi's major updates, product launches, and what's ahead, followed by an interactive Q&A session where we'll address your questions directly.

      How to participate:

    3. πŸ”— livestorejs/livestore "v0.4.0-dev.20" release

      "Release 0.4.0-dev.20 including Chrome Extension"

    4. πŸ”— r/wiesbaden Drunk women rob driver of his taxi -- First they argued among themselves, then they attacked the taxi driver: four women injured a 66-year-old driver in Wiesbaden. They demanded money; in the end, the taxi was missing. rss
    5. πŸ”— r/reverseengineering CVE Proof-of-Concept Finder: A Direct Lens Into Exploit Code rss
    6. πŸ”— r/LocalLLaMA legends rss

      legends | submitted by /u/Nunki08
      [link] [comments]
      ---|---

    7. πŸ”— r/reverseengineering How do I Inspect virtual memory page tables physical memory on windows rss
    8. πŸ”— r/wiesbaden For anyone who sometimes wonders about bus delays… rss
    9. πŸ”— r/LocalLLaMA New model, microsoft/VibeVoice-Realtime-0.5B rss

      New model, microsoft/VibeVoice-Realtime-0.5B | VibeVoice: A Frontier Open-Source Text-to-Speech Model VibeVoice-Realtime is a lightweight real‑time text-to-speech model supporting streaming text input. It can be used to build realtime TTS services, narrate live data streams, and let different LLMs start speaking from their very first tokens (plug in your preferred model) long before a full answer is generated. It produces initial audible speech in ~300 ms (hardware dependent). Key features: Parameter size: 0.5B (deployment-friendly) Realtime TTS (~300 ms first audible latency) Streaming text input Robust long-form speech generation submitted by /u/edward-dev
      [link] [comments]
      ---|---

    10. πŸ”— r/wiesbaden Heating costs for a 100 sqm Altbau apartment? rss

      Hi there! I know this is hard to answer in general terms, but I'd still like a few reference values. Does anyone here live in an Altbau (old-building) apartment in Wiesbaden of around 100 sqm, and would you tell me how much you pay per month for heating? Preferably with gas. I've heard €100 from one colleague and €300-400 from someone else, so I'm rather unsure. In my case it's a gas-fired per-apartment heating system (Gas-Etagenheizung). Thanks in advance!

      submitted by /u/kvrioss
      [link] [comments]

    11. πŸ”— r/reverseengineering Beyond Decompilers: Runtime Analysis of Evasive Android Code rss
    12. πŸ”— jj-vcs/jj v0.36.0 release

      About

      jj is a Git-compatible version control system that is both simple and powerful. See
      the installation instructions to get started.

      Release highlights

      301 redirects are being issued towards the new domain, so any existing links
      should not be broken.

      • Fixed race condition that could cause divergent operations when running
        concurrent jj commands in colocated repositories. It is now safe to
        continuously run e.g. jj log without --ignore-working-copy in one
        terminal while you're running other commands in another terminal.
        #6830

      • jj now ignores $PAGER set in the environment and uses less -FRX on most
        platforms (:builtin on Windows). See the docs for
        more information, and #3502 for
        motivation.

      Breaking changes

      • In filesets or path patterns, glob matching
        is enabled by default. You can use cwd:"path" to match literal paths.

      • In the following commands, string pattern
        arguments
        are now parsed the same way they
        are in revsets and can be combined with logical operators: jj bookmark delete/forget/list/move, jj tag delete/list, jj git clone/fetch/push

      • In the following commands, unmatched bookmark/tag names are no longer an
        error. A warning will be printed instead: jj bookmark delete/forget/move/track/untrack, jj tag delete, jj git clone/push

      • The default string pattern syntax in revsets will be changed to glob: in a
        future release. You can opt in to the new default by setting
        ui.revsets-use-glob-by-default=true.

      • Upgraded scm-record from v0.8.0 to v0.9.0. See release notes at
        https://github.com/arxanas/scm-record/releases/tag/v0.9.0.

      • The minimum supported Rust version (MSRV) is now 1.89.

      • On macOS, the deprecated config directory ~/Library/Application Support/jj
        is not read anymore. Use $XDG_CONFIG_HOME/jj instead (defaults to
        ~/.config/jj).

      • Sub-repos are no longer tracked. Any directory containing .jj or .git
        is ignored. Note that Git submodules are unaffected by this.

      Deprecations

      • The --destination/-d arguments for jj rebase, jj split, jj revert,
        etc. were renamed to --onto/-o. The reasoning is that --onto,
        --insert-before, and --insert-after are all destination arguments, so
        calling one of them --destination was confusing and unclear. The old names
        will be removed at some point in the future, but we realize that they are
        deep in muscle memory, so you can expect an unusually long deprecation period.

      • jj describe --edit is deprecated in favor of --editor.

      • The config options git.auto-local-bookmark and git.push-new-bookmarks are
        deprecated in favor of remotes.<name>.auto-track-bookmarks. For example:

        [remotes.origin]
        

        auto-track-bookmarks = "glob:*"

      For more details, refer to
      the docs.

      • The flag --allow-new on jj git push is deprecated. In order to push new
        bookmarks, please track them with jj bookmark track. Alternatively, consider
        setting up an auto-tracking configuration to avoid the chore of tracking
        bookmarks manually. For example:
        [remotes.origin]
        

        auto-track-bookmarks = "glob:*"

      For more details, refer to
      the docs.

      New features

      • jj commit, jj describe, jj squash, and jj split now accept
        --editor, which ensures an editor will be opened with the commit
        description even if one was provided via --message/-m.

      • All jj commands show a warning when the provided fileset expression
        doesn't match any files.

      • Added files() template function to DiffStats. This supports per-file stats
        like lines_added() and lines_removed().

      • Added join() template function. This is different from separate() in that
        it adds a separator between all arguments, even if empty.

      • RepoPath template type now has a absolute() -> String method that returns
        the absolute path as a string.

      • Added format_path(path) template that controls how file paths are printed
        with jj file list.

      • New built-in revset aliases visible() and hidden().

      • Unquoted * is now allowed in revsets. bookmarks(glob:foo*) no longer
        needs quoting.

      • jj prev/next --no-edit now generates an error if the working-copy has some
        children.

      • A new config option remotes.<name>.auto-track-bookmarks can be set to a
        string pattern. New bookmarks matching it will be automatically tracked for
        the specified remote. See
        the docs.

      • jj log now supports a --count flag to print the number of commits instead
        of displaying them.

      Fixed bugs

      • jj fix now prints a warning if a tool failed to run on a file.
        #7971

      • Shell completion now works with non‑normalized paths, fixing the previous
        panic and allowing prefixes containing . or .. to be completed correctly.
        #6861

      • Shell completion now always uses forward slashes to complete paths, even on
        Windows. This renders completion results viable when using jj in Git Bash.
        #7024

      • Unexpected keyword arguments now return a parse failure for the coalesce()
        and concat() templating functions.

      • The Nushell completion script documentation now includes the -f option,
        keeping it up to date.
        #8007

      • Ensured that with Git submodules, remnants of your submodules do not show up
        in the working copy after running jj new.
        #4349

      Contributors

      Thanks to the people who made this release happen!

    13. πŸ”— @cxiao@infosec.exchange [https://qz.com/1908836/china-blocks-wikimedia-from-un-agency-wipo-over- mastodon
    14. πŸ”— Console.dev newsletter Lima rss

      Description: Linux VMs + Containers.

      What we like: Quickly launch Linux VMs from the terminal. Designed for running containers inside the VM, it includes tools for filesystem mounts, port forwarding, GPU acceleration, Intel/Arm emulation. Easy config of CPUs, memory, etc via the CLI. Can run in CI. Useful for sandboxing AI agents.

      What we dislike: Supports most, but not all Linux distros (has some minimum requirements). Windows usage is experimental.

    15. πŸ”— Console.dev newsletter gitmal rss

      Description: Static page generator for Git repos.

      What we like: Generates a static repo browser for any Git repo. The site includes commits, branches, and a file explorer with source code highlighted. The UI is themeable.

      What we dislike: No dark mode auto-switching. Can take a long time to generate for big repos.

  3. December 03, 2025
    1. πŸ”— IDA Plugin Updates IDA Plugin Updates on 2025-12-03 rss

      IDA Plugin Updates on 2025-12-03

      New Releases:

      Activity:

      • capa
        • c0ae1352: Sync capa-testfiles submodule
      • ghidra
        • cffea7e4: Merge remote-tracking branch 'origin/Ghidra_12.0'
        • 5d6de554: Merge branch 'GP-0_ryanmkurtz_PR-8415_widberg_callother'
        • a2975887: Merge remote-tracking branch 'origin/GP-6177-dragonmacher-xref-table-…
        • 264e318b: Merge remote-tracking branch 'origin/patch' into Ghidra_12.0
        • eb8694d6: GP-6177 - Updated xref table delete action to not be enabled if only …
        • 58e5c8e4: Merge remote-tracking branch 'origin/Ghidra_12.0'
        • b4ba6e6f: Merge remote-tracking branch
        • b3e26b6d: Merge branch
        • 5ab8d335: GP-0: PyGhidra type hint fixes
        • fc0f971c: Fix Python type annotations in PyGhidra module when using `contextman…
        • f2c1e5fb: Merge remote-tracking branch 'origin/GP-1-dragonmacher-flow-arrow-col…
        • 8deaf30a: Merge remote-tracking branch 'origin/GP-6150_ghidra1_BlockCrossAddres…
        • a8a07b14: Merge remote-tracking branch 'origin/Ghidra_12.0'
        • a93de758: GP-6165: Changed JPype dependency to be fixed at version 1.5.2 to avoid
        • 5e6c1607: GP-0: Renaming pyghidra.monitor() to pyghidra.task_monitor() to avoid
        • c5beedac: GP-5831 Added a few speed improvements to the RecoverClassesFromRTTIS…
      • ghidra-chinese
        • cffea7e4: Merge remote-tracking branch 'origin/Ghidra_12.0'
        • 5d6de554: Merge branch 'GP-0_ryanmkurtz_PR-8415_widberg_callother'
        • a2975887: Merge remote-tracking branch 'origin/GP-6177-dragonmacher-xref-table-…
        • 264e318b: Merge remote-tracking branch 'origin/patch' into Ghidra_12.0
        • eb8694d6: GP-6177 - Updated xref table delete action to not be enabled if only …
        • 58e5c8e4: Merge remote-tracking branch 'origin/Ghidra_12.0'
        • b4ba6e6f: Merge remote-tracking branch
        • b3e26b6d: Merge branch
        • 5ab8d335: GP-0: PyGhidra type hint fixes
        • 8fa36ce9: Merge pull request #74 from TC999/sync
        • 7bd104b1: Merge branch 'chinese' into sync
        • fc0f971c: Fix Python type annotations in PyGhidra module when using `contextman…
        • f2c1e5fb: Merge remote-tracking branch 'origin/GP-1-dragonmacher-flow-arrow-col…
        • 8deaf30a: Merge remote-tracking branch 'origin/GP-6150_ghidra1_BlockCrossAddres…
        • a8a07b14: Merge remote-tracking branch 'origin/Ghidra_12.0'
        • a93de758: GP-6165: Changed JPype dependency to be fixed at version 1.5.2 to avoid
        • 5e6c1607: GP-0: Renaming pyghidra.monitor() to pyghidra.task_monitor() to avoid
        • c5beedac: GP-5831 Added a few speed improvements to the RecoverClassesFromRTTIS…
      • GTA2_RE
        • 5bcf3010: started writing a bit more
      • idaplugins
      • quokka
        • 8c9e601e: Merge pull request #68 from quarkslab/dependabot/github_actions/actio…
      • ret-sync
        • b7e2979d: Merge pull request #135 from bootleg/fix-ida/PySide6_compat
      • symbolicator
      • vt-ida-plugin
        • a979431f: Merge pull request #36 from kevinmuoz/master
        • b6d445d2: Update changelog for version 1.05
      • wp81IdaDriverAnalyzer
    2. πŸ”— r/reverseengineering Interview with RollerCoaster Tycoon’s Creator, Chris Sawyer rss
    3. πŸ”— hyprwm/Hyprland v0.52.2 release

      Another patch release with a few fixes backported on top of 0.52.1.

      Fixes Backported

      • presentation: only send sync output on presented (#12255)
      • renderer: fix noscreenshare layerrule popups (#12260)
      • renderer/ime: fix fcitx5 popup artifacts (#12263)
      • screencopy: fix possible crash in renderMon
      • internal: put Linux-only header behind ifdef (#12300)
      • internal: fix crash at startup on freebsd (#12298)
      • cmake,meson: fix inclusion of gpg info in git commit info (#12302)
      • cursor: ensure cursor reset on changed window states (#12301)
      • plugin/hook: disallow multiple hooks per function (#12320)
      • protocols/workspace: fix crash in initial group sending
      • renderer: stop looping over null texture surfaces (#12446)
      • protocols/workspace: avoid crash on inert outputs
      • buffers: revert state merging (#12461)
      • protocols/lock: fix missing output enter on surface (#12448)
      • dmabuf: sys/ioctl is required for ioctl (#12483)

      Special thanks

      Special thanks as always to:

      Our sponsors

      Diamond

      37Signals

      Gold

      Framework

      Donators

      Top Supporters:

      --, mukaro, Semtex, Tom94, soy_3l.beantser, SaltyIcetea, Freya Elizabeth Goins, lzieniew, Kay, ExBhal, MasterHowToLearn, 3RM, Tonao Paneguini, Sierra Layla Vithica, Anon2033, Brandon Wang, DHH, alexmanman5, Theory_Lukas, Blake-sama, Seishin, Hunter Wesson, Illyan, TyrHeimdal, elafarge, Arkevius, d, RaymondLC92, MadCatX, johndoe42, alukortti, Jas Singh, taigrr, Xoores, ari-cake, EncryptedEnigma

      New Monthly Supporters:

      KongrooParadox, Jason Zimdars, grateful anon, Rafael Martins, Lu, Jan, Yves, Luiz Aquino, navik, EvgenyRachlenko, GENARO LOYA DOUR, trustable0370, Jorge Y. C. Rodriguez, Bobby Rivera, steven_s, Pavel DuΕ‘ek, Toshitaka Agata, mandrav

      One-time Donators:

      ryorichie, shikaji, tskulbru, szczot3k, Vincent F, myname0101, MirasM, Daniel Doherty, giri, rasa, potato, Jams Mendez, collin, koss054, LouisW, Mattisba, visooo, Razorflak, ProPatte, sgt, Bouni, EarthsonLu, W, Faab, Kenan Sharifli, ArchXceed, benvonh, J.P. Wing, 0xVoodoo, ayhan, Miray Gohan, quiron, August Lilleaas, ~hommel, Ethan Webb, fraccy, Kevin, Carlos Solórzano Cerdas, kastr, jmota, pch, darksun, JoseConseco, Maxime Gagne, joegas, Guido V, RedShed, Shane, philweber, romulus, nuelle, Nick M, Mustapha Mond, bfester, Alvin Lin, 4everN00b, riad33m, astraccato, spirossi, drxm1, anon, conig, Jonas Thern, Keli, Martin, gianu, Kevin K, @TealRaya, Benji, Borissimo, Ebbo, John, zoth, pampampampampamponponponponponponpampampampa, Himayat, Alican, curu, stelman, Q, frigidplatypus, Dan Page, Buzzard, mknpcz, bbutkovic, neonvoid, Pim Polderman, Marsimplodation, cloudscripting, StevenWalter, i_am_terence, mester, Jacob Delarosa, hl, alex, zusemat, LRVR, MichelDucartier, Jon Fredeen, Chris, maxx, Selim, Victor Rosenthal, Luis Gonzalez, say10, mcmoodoo, Grmume, Nilpointer, Lad, Pathief, Larguma, benniheiss, cannikin, NoeL, hyprcroc, Sven Krause, Matej Drobnič, vjg73_Gandhi2, SotoEstevez, jeroenvlek, SymphonySimper, simplectic, tricked, Kacper, nehalandrew, Jan Ihnen, Blub, Jonwin, tucker87, outi, chrisxmtls, pseudo, NotAriaN, ckoblue, xff, hellofriendo, Arto Olli, Jett Thedell, Momo On Code, MrFry, stjernstrom, nastymatt, iDie, IgorJ, andresfdz7, Joshua, Koko, joenu, HakierGrzonzo, codestothestars, Jrballesteros05, hanjoe, Quantumplation, mentalAdventurer, Sebastian Grant, Reptak, kiocone, dfsdfs, cdevroe, nemalex, Somebody, Nates, Luan Pinheiro, drm, Misha Andreev, Cedric

      And all hyprperks members!

      Full Changelog : v0.52.1...v0.52.2

    4. πŸ”— r/reverseengineering Analogue 3D vs MiSTer FPGA; two separate reverse engineered FPGA cores rss
    5. πŸ”— r/LocalLLaMA 8 local LLMs on a single Strix Halo debating whether a hot dog is a sandwich rss

      8 local LLMs on a single Strix Halo debating whether a hot dog is a sandwich | submitted by /u/jfowers_amd
      [link] [comments]

    6. πŸ”— r/LocalLLaMA Micron Announces Exit from Crucial Consumer Business rss

      Technically speaking, we're screwed.

      submitted by /u/FullstackSensei
      [link] [comments]

    7. πŸ”— NationalSecurityAgency/ghidra Ghidra 11.4.3 release
    8. πŸ”— News Minimalist 🐒 WHO releases first obesity drug guideline + 8 more stories rss

      In the last 5 days ChatGPT read 153185 top news stories. After removing previously covered events, there are 9 articles with a significance score over 5.5.

      [6.2] WHO releases first global guideline on GLP-1 medicines for obesity treatment β€”who.int(+59)

      The World Health Organization has issued its first global guideline, conditionally recommending the use of GLP-1 medications for adults living with obesity as a chronic disease.

      The guideline follows the drugs' September 2025 addition to the Essential Medicines List for diabetes. The obesity recommendation is conditional due to limited long-term data, high costs, and concerns about equitable access for patients worldwide.

      WHO emphasizes medication is not a standalone solution and projects that fewer than 10% of eligible people will have access to the therapies by 2030, urging action on affordability and manufacturing.

      [6.5] New injectable HIV prevention drug launches in South Africa, Eswatini, and Zambia β€”afpbb.com(Japanese) (+35)

      South Africa, Eswatini, and Zambia on Monday began Africa’s first public rollout of a twice-yearly injectable HIV preventative, Lenacapavir, which is over 99.9% effective.

      The program, initially a study tracking 2,000 people, is funded by Unitaid. South Africa plans a nationwide expansion next year, while the U.S. is supporting the rollout in Zambia and Eswatini.

      The injection offers an alternative to daily PrEP pills in a region with over half the world's HIV cases. A generic version is expected after 2027 for about $40 annually.

      [6.1] DeepSeek releases powerful, open-source AI models that rival top competitors β€”techradar.com(+13)

      Chinese startup DeepSeek has released powerful open-source AI models that rival top US competitors, a move intensifying the global AI race and challenging established industry leaders.

      The new models reportedly match or outperform competitors like GPT-5 in complex reasoning and coding. Their unique architecture significantly reduces operational costs, making elite AI performance more accessible and cheaper to deploy.

      Released under an open MIT license, the models fuel innovation but also raise geopolitical and data privacy concerns in Western countries due to the company's Chinese origins.

      Highly covered news with significance over 5.5

      [5.9] Russia and US fail to reach compromise on Ukraine peace deal β€” rte.ie (+553)

      [5.5] Amazon unveils new AI chips and strengthens Nvidia partnership to expand cloud capacity [$] β€” cnbc.com (+18)

      [5.6] Synopsys and Nvidia partner to accelerate industrial design [$] β€” cnbc.com (+13)

      [5.6] AI demand fuels global memory chip shortage and price hikes β€” japantimes.co.jp (+9)

      [5.7] Dutch government pays 163 million euros to stop gas extraction under Wadden Sea β€” nos.nl (Dutch) (+2)

      [5.9] Researchers map 23,000 technologies, revealing their age and trajectory β€” techxplore.com (+4)

      Thanks for reading!

      β€” Vadim


      You can create your own significance-based RSS feed with premium.


      Powered by beehiiv

    9. πŸ”— @HexRaysSA@infosec.exchange ⚑ NEW CUSTOMER CYBER WEEK PROMO ⚑ mastodon

      ⚑ NEW CUSTOMER CYBER WEEK PROMO ⚑
      We're offering 50% off any IDA Pro product for new customers!

      To take advantage of this limited-time offer, use promo code CYBER50 at checkout. Or email sales@hex-rays.com.

      Cannot be combined with any other discount.
      50% off offer valid for new individual customers only, not corporations.
      New corporate customers are eligible for 40% off.
      Not applicable to upgrades or renewals.
      All new customers are required to pass the KYC process to receive the discount and license(s).
      Offer ends December 8, 2025 @ 11:59 pm CET. https://hex-rays.com/pricing

    10. πŸ”— r/LocalLLaMA DeepSeek V3.2 Technical Report rss

      DeepSeek V3.2 Technical Report | Here is a brief summary of key breakthroughs of DeepSeek V3.2:

      1. DeepSeek Sparse Attention (DSA): a new efficient attention mechanism that dramatically reduces computational complexity while preserving performance in long-context scenarios. It uses a lightning indexer with fine-grained top-k token selection to achieve sparse but effective attention.
      2. Scalable and Stable Reinforcement Learning Framework: implements a heavily scaled post-training RL pipeline, with compute exceeding 10% of pretraining cost.
      3. Large-Scale Agentic Task Synthesis Pipeline: a novel pipeline that programmatically generates large numbers of tool-use environments (1,800+ environments, 85,000+ complex prompts). This boosts generalization, tool-use ability, and instruction-following in interactive settings.
      4. Unified Reasoning + Agentic RL Training: merges reasoning, tool-use, and human-alignment RL into a single stage rather than multi-stage pipelines. This avoids catastrophic forgetting and improves cross-domain performance simultaneously.

      DeepSeek-V3.2-Speciale: a high-compute variant trained with relaxed length penalties and enhanced mathematical-reasoning rewards. This model even surpasses GPT-5 and exhibits reasoning proficiency on par with Gemini-3.0-Pro, achieving gold-medal performance in both the 2025 International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI). Arxiv paper

      submitted by /u/Dear-Success-1441
      [link] [comments]
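      The top-k selection idea in the DSA summary above can be sketched in a few lines. This is a toy, dense-vector illustration only (a real implementation works on batched tensors with a learned indexer; `sparse_attention` and all shapes here are invented for the example):

```rust
// Toy top-k sparse attention: a cheap "indexer" scores every key,
// and only the k best-scoring keys enter the softmaxed attention.
// Invented names and shapes, purely for illustration.

fn dot(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

// Attention output computed over only the k best-scoring keys.
fn sparse_attention(q: &[f64], keys: &[Vec<f64>], values: &[Vec<f64>], k: usize) -> Vec<f64> {
    // 1. Cheap indexer pass: score every key (here, just a dot product).
    let mut scored: Vec<(usize, f64)> =
        keys.iter().enumerate().map(|(i, key)| (i, dot(q, key))).collect();
    // 2. Keep only the top-k indices.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.truncate(k);
    // 3. Softmax over the selected scores only.
    let max = scored.iter().map(|s| s.1).fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = scored.iter().map(|s| (s.1 - max).exp()).collect();
    let z: f64 = exps.iter().sum();
    // 4. Weighted sum of the selected values.
    let dim = values[0].len();
    let mut out = vec![0.0; dim];
    for ((i, _), e) in scored.iter().zip(&exps) {
        for d in 0..dim {
            out[d] += values[*i][d] * *e / z;
        }
    }
    out
}

fn main() {
    let q = vec![1.0, 0.0];
    let keys = vec![vec![1.0, 0.0], vec![0.0, 1.0], vec![-1.0, 0.0]];
    let values = vec![vec![10.0], vec![20.0], vec![30.0]];
    // With k = 1, only the best-matching key (the first) contributes.
    let out = sparse_attention(&q, &keys, &values, 1);
    assert!((out[0] - 10.0).abs() < 1e-9);
}
```

      With k equal to the sequence length this degenerates to ordinary softmax attention; the claimed savings come from scoring keys cheaply and running full attention only over the selected k.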

    11. πŸ”— r/LocalLLaMA Chinese startup founded by Google engineer claims to have developed its own TPU, reportedly 1.5 times faster than Nvidia A100. rss
    12. πŸ”— seanmonstar hyper-util Composable Pools rss

      I’m so excited to announce hyper’s new composable pool layers!1

      As part of making reqwest more modular, we’ve designed a new connection pool, and made the pieces available in hyper_util::client::pool. But this is more than just a β€œhey, we have a Pool, it moved over there.” We’ve literally pulled apart the pool, in a way I haven’t found elsewhere.

      Building a purpose‑specific pool is now straightforward. Add the features you want, even custom ones, and skip the bloat, no forks required.

      Read on to see what exactly we solved, how, and what comes next. If you just want to use them, here’s the docs. Everyone else, let’s dive in.

      We started with the users

      We started with the users, looking back over past issues filed, common questions in chat, and private conversations explaining what they needed to do. Boiled down, that got us to these requirements:

      • A full-featured pool, like the one in legacy, must be possible.
      • Microservices shouldn’t have to handle multiple protocols or hostnames.
      • Some clients need custom keys for the pool.
      • Others need to limit new connections made at a time.
      • Or cap the total number of connections.
      • Customize connection expiration based on idle time, max lifetime, or even poisoning.
      • And importantly, allow custom logic not already thought of.

      From past experience combining middleware, I had a strong feeling the pool requirements could be broken up into tower layers. But what would that even look like? Would it be horrible to use?

      To answer that, we took the requirements and considered the developer experience of using layers. It had to feel nice. Not just to write, but also to come back to and read.

      I then sketched out several of these layers to make sure they could actually work. Once most of it was working, the proposal was ready.
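      To make the β€œpool requirements as tower layers” idea concrete, here is a miniature, synchronous sketch of the layer pattern. It deliberately avoids tower's real traits (which are async and poll-based); `Service`, `Layer`, `Connect`, and `CountLayer` are all invented for illustration:

```rust
// A miniature re-creation of the Layer/Service split (invented names,
// not tower's or hyper-util's real API) showing how pool features
// can be stacked as independent wrappers around an inner connector.

trait Service {
    fn call(&mut self, req: &str) -> String;
}

trait Layer<S> {
    type Out: Service;
    fn layer(&self, inner: S) -> Self::Out;
}

// Innermost "connector": pretends to open a connection per request.
struct Connect;

impl Service for Connect {
    fn call(&mut self, req: &str) -> String {
        format!("conn({req})")
    }
}

// One composable feature: count calls flowing through the stack.
struct CountLayer;

struct Count<S> {
    inner: S,
    calls: usize,
}

impl<S: Service> Service for Count<S> {
    fn call(&mut self, req: &str) -> String {
        self.calls += 1;
        self.inner.call(req)
    }
}

impl<S: Service> Layer<S> for CountLayer {
    type Out = Count<S>;
    fn layer(&self, inner: S) -> Count<S> {
        Count { inner, calls: 0 }
    }
}

fn main() {
    // Composing is just wrapping; each feature is opt-in.
    let mut svc = CountLayer.layer(Connect);
    assert_eq!(svc.call("example.com"), "conn(example.com)");
    assert_eq!(svc.calls, 1);
}
```

      Each pool feature (caching, limits, expiry metadata) becomes one such wrapper, and a stack is just nested wrapping, which is what makes opting in or out of individual pool behaviors cheap.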

      The initial 4 working pools

      No plan survives contact with the enemy. We originally proposed five pool types, but are launching with just the following four: singleton, cache, negotiate, map.

      The singleton pool wraps a connector2 that should only produce a single active connection. It bundles all concurrent calls so only one connection is made. All calls to the singleton will return a clone of the inner service once established. This fits the HTTP/2 case well.

      The cache pool maintains a list of cached services produced by a connector. Calling the cache returns either an existing service, or makes a new one. When dropped, the cached service is returned to the cache if possible. Importantly for performance, the cache supports connection racing, just like the legacy pool.

      The negotiate pool allows for a service that can decide between two service types based on an intermediate return value. Unlike typical routing, it makes decisions based on the response (the connection) rather than the request. The main use case is supporting ALPN upgrades to HTTP/2, with a fallback to HTTP/1. And its design allows combining two different pooling strategies.

      The map pool isn’t a typical service like the other pools, but rather is a stand-alone type that maps requests to keys and connectors. As a kind of router, it cannot determine which inner service to check for backpressure until the request is made. The map implementation allows customization of extracting a key, and how to construct a connector for that key.
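      As a rough intuition for the singleton case, β€œbundle all concurrent calls so only one connection is made, then hand out clones” resembles a once-initialized shared value. A minimal standard-library sketch (this is not hyper-util's API; `Singleton` is invented here, and a real pool also deals with async handshakes and reconnects):

```rust
use std::sync::{Arc, OnceLock};

// Toy "singleton pool": no matter how many times callers ask, only one
// underlying connection is ever created; every caller gets a clone
// (here, an Arc) of it. Purely illustrative.
struct Singleton {
    conn: OnceLock<Arc<String>>,
}

impl Singleton {
    fn new() -> Self {
        Singleton { conn: OnceLock::new() }
    }

    // `make` plays the role of the connector; it runs at most once.
    fn get(&self, make: impl FnOnce() -> String) -> Arc<String> {
        self.conn.get_or_init(|| Arc::new(make())).clone()
    }
}

fn main() {
    let pool = Singleton::new();
    let a = pool.get(|| "h2-connection".to_string());
    // Second caller: the connector closure is never invoked again.
    let b = pool.get(|| unreachable!("no second connection"));
    assert!(Arc::ptr_eq(&a, &b));
}
```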

      Ineffably unstable

      I knew this work would land in hyper-util first, because it’s not stable yet. Being so freshly designed, changes are expected after some more real-world usage. Still, I wanted to shield early adopters from breaking changes. At the same time, valuing performance and flexibility, I wanted to push as much as reasonably possible into the type system.

      When initially tinkering during the summer, I had one of those thoughts. The kind that clangs like a giant lock snapping open: what about type-state builders and unnameable types? I took a side quest, and tackled the warp v0.4 upgrade, to test out this API design. That post explains it a bit more.

      The various threads were all coming together.

      With each pool concept a tower service, once composed, a user shouldn’t care what it is beyond being some impl Service. I tested this out in reqwest, and yeah, I don’t need to name the types. While I did need a type, I was able to store a dyn Service, and inference handled the rest.

      Real world usage: in reqwest

      Once those main pieces seemed ready, I needed a real example to test drive them. Tool-makers that don’t use their tools make bad tools, after all.

      I started by replacing the legacy pool inside reqwest. Part of the larger diff in reqwest is handling all of reqwest’s different pool configuration options.

      But, putting the default case together is pretty self-explanatory:

      // Note: some noise has been trimmed
      let http1 = (
          pool::cache(exec),
          util::http1_request_target(),
          util::http1_set_host(),
          util::meta(MyMetaIdleAt::new),
          conn::http1(),
      );
      
      let http2 = (
          pool::singleton(),
          conn::http2(),
      );
      
      let pool_layers = tower::layer::layer_fn(move |svc| {
          pool::negotiate::builder()
              .fallback(http1.clone())
              .upgrade(http2.clone())
              .inspect(|conn| conn.is_negotiated_h2())
              .connect(svc)
              .build()
      });
      
      let pool_map = pool::map::builder::<http::Uri>()
          .keys(|dst| scheme_and_auth(dst))
          .values(move |_dst| {
              pool_layers.layer(connector.clone())
          })
          .build();
      

      And it works! Making the full-featured pool was one of the requirements: check. But, the next part was even more important.

      As I mentioned before, I punted one of the proposed types: expire. Expiration is a necessary concept to a pool. But try as I might to fit the various generic shapes, it just wasn’t happening. Thankfully, this work had a hard deadline. And deadlines keep you user-driven: let them have something now, it can always be better later.

      To prove the general design allowed expiration, I implemented a specific version of it directly in reqwest.

      tokio::spawn(async move {
          loop {
              tokio::time::sleep(idle_dur).await;
              let now = Instant::now();
              let Some(pool) = pool_map.upgrade() else { return };
      
              pool.lock().unwrap().retain(|_key, svc| {
                  svc.fallback_mut().retain(|svc| {
                      if svc.inner().inner().inner().is_closed() {
                          return false;
                      }
      
                      if let Some(idle_at) = svc.meta().idle_at {
                          // keep only connections still within their idle window
                          return now <= idle_at + idle_dur;
                      }
                      true
                  });
                  svc.upgrade_mut().retain(|svc| {
                      !svc.is_closed()
                  });
                  !svc.fallback_mut().is_empty() || !svc.upgrade_mut().is_empty()
              });
          }
      });
      

      The ease of adding it helped convince me that this was definitely the right design. I was able to slot in a meta layer tracking idle time, and then use that to retain services. I placed that layer right next to some of the other HTTP/1-specific layers. Easy!

      Being modular opens up customization

      With the ability to build a stack for your pool, consider an example of how we can start to solve other requirements listed earlier.

      let svc = ServiceBuilder::new()
          // cached connections are unaware of the limit
          .layer(pool::cache())
          // in-flight handshakes are limited
          .concurrency_limit(5)    
          .layer(conn::http1())
          .service(connect::tcp());
      

      It also allows adding in layers we don’t currently have, such as per-host connection semaphores, or a few layers up over all hosts. Adding new functionality isn’t blocked on us, and no one has to β€œpay” for features they don’t need.
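      For example, a per-host connection cap, one of the layers mentioned above that doesn't ship yet, could be built around a counter per host. A synchronous toy sketch (`HostLimiter` is invented here; a real layer would integrate with tower-style backpressure or async waiting rather than returning a bool):

```rust
use std::collections::HashMap;

// Toy per-host connection limiter: one counter per host, refusing new
// connections once a host hits its cap. Purely illustrative.
struct HostLimiter {
    max_per_host: usize,
    counts: HashMap<String, usize>,
}

impl HostLimiter {
    fn new(max_per_host: usize) -> Self {
        HostLimiter { max_per_host, counts: HashMap::new() }
    }

    // Try to reserve a connection slot for `host`.
    fn try_acquire(&mut self, host: &str) -> bool {
        let n = self.counts.entry(host.to_string()).or_insert(0);
        if *n < self.max_per_host {
            *n += 1;
            true
        } else {
            false
        }
    }

    // Release a slot when a connection closes.
    fn release(&mut self, host: &str) {
        if let Some(n) = self.counts.get_mut(host) {
            *n = n.saturating_sub(1);
        }
    }
}

fn main() {
    let mut lim = HostLimiter::new(2);
    assert!(lim.try_acquire("example.com"));
    assert!(lim.try_acquire("example.com"));
    assert!(!lim.try_acquire("example.com")); // cap reached
    lim.release("example.com");
    assert!(lim.try_acquire("example.com")); // slot freed
}
```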

      I can’t wait to see what else is done with the design!

      Pools ready

      The hyper_util::client::pool module is now available in v0.1.19. Go check the docs, and try to build cool things. Please file issues if parts are missing; we’ll keep iterating.

      I’ve been working on this feature set for a long time. It’s something I started thinking about years ago, and after months of work this year, it feels awesome to finally be able to release it.

      Thanks to my sponsors, retainers, and grants for making this all possible!

      1. I mean, who isn’t excited to announce anything? /sΒ 

      2. All β€œconnectors” are actually MakeServices, which are just a Service that produces a Service. It doesn’t have to create a connection, but it reads better when talking about pools.

    13. πŸ”— HexRaysSA/plugin-repository commits sync repo: +3 plugins, +3 releases rss
      sync repo: +3 plugins, +3 releases
      
      ## New plugins
      - [EmuIt](https://github.com/AzzOnFire/emuit) (0.8.1)
      - [gepetto](https://github.com/JusticeRage/Gepetto) (1.5.0)
      - [icp](https://github.com/rand-tech/idaplugins) (1.3)
      
    14. πŸ”— @cxiao@infosec.exchange mentally im here mastodon

      mentally im here
      https://youtu.be/8Z9RTdj93o8

    15. πŸ”— r/LocalLLaMA Who’s got them Q_001_X_S_REAP Mistral Large 3 GGUFs? rss

      Who’s got them Q_001_X_S_REAP Mistral Large 3 GGUFs? | I’m looking at you, Unsloth 😁 submitted by /u/Porespellar
      [link] [comments]

    16. πŸ”— Rust Blog Lessons learned from the Rust Vision Doc process rss

      Starting earlier this year, a group of us set out on a crazy quest: to author a "Rust vision doc". As we described it in the original project goal proposal:

      The Rust Vision Doc will summarize the state of Rust adoption -- where is Rust adding value? what works well? what doesn't? -- based on conversations with individual Rust users from different communities, major Rust projects, and companies large and small that are adopting Rust.

      Over the course of this year, the Vision Doc group has gathered up a lot of data. We began with a broad-based survey that got about 4200 responses. After that, we conducted over 70 interviews, each one about 45 minutes, with as broad a set of Rust users as we could find1.

      This is the first of a series of blog posts covering what we learned throughout that process and what recommendations we have to offer as a result. This first post is going to go broad. We'll discuss the process we used and where we think it could be improved going forward. We'll talk about some of the big themes we heard -- some that were surprising and others that were, well, not surprising at all. Finally, we'll close with some recommendations for how the project might do more work like this in the future.

      The questions we were trying to answer

      One of the first things we did in starting out with the vision doc was to meet with a User Research expert, Holly Ellis, who gave us a quick tutorial on how User Research works2. Working with her, we laid out a set of research questions that we wanted to answer. Our first cut was very broad, covering three themes:

      • Rust the technology:
        • "How does Rust fit into the overall language landscape? What is Rust's mission?"
        • "What brings people to Rust and why do they choose to use it for a particular problem...?"
        • "What would help Rust to succeed in these domains...?" (e.g., network systems, embedded)
        • "How can we scale Rust to industry-wide adoption? And how can we ensure that, as we do so, we continue to have a happy, joyful open-source community?"
      • Rust the global project:
        • "How can we improve the experience of using Rust for people across the globe?"
        • "How can we improve the experience of contributing to and maintaining Rust for people across the globe?"
      • Rust the open-source project:
        • "How can we tap into the knowledge, experience, and enthusiasm of a growing Rust userbase to improve Rust?"
        • "How can we ensure that individual or volunteer Rust maintainers are well-supported?"
        • "What is the right model for Foundation-project interaction?"

      Step 1: Broad-based survey

      Before embarking on individual interviews, we wanted to get a broad snapshot of Rust usage. We also wanted to find a base of people that we could talk to. We created a survey that asked a few short "demographic" questions -- e.g., where does the respondent live, what domains do they work on, how would they rate their experience -- and some open-ended questions about their journey to Rust, what kind of projects they feel are a good fit for Rust, what they found challenging when learning, etc. It also asked for (optional) contact information.

      We got a LOT of responses -- over 4200! Analyzing this much data is not easy, and we were very grateful to Kapiche, who offered us free use of their tool to work through the data. ❀

      The survey is useful in two ways. First, it's an interesting data-set in its own right, although you have to be aware of selection bias. Second, the survey also gave us something that we can use to cross-validate some of what we heard in 1:1 interviews and to look for themes we might otherwise have missed. And of course it gave us additional names of people we can talk to (though most respondents didn't leave contact information).

      Step 2: Interviewing individuals

      The next step after the survey was to get out there and talk to people. We sourced people from a lot of places: the survey and personal contacts, of course, but we also sat down with people at conferences and went to meetups. We even went to a Python meetup in an effort to find people who were a bit outside the usual "Rust circle".

      When interviewing people, the basic insight of User Experience research is that you don't necessarily ask people the exact questions you want to answer. That is likely to get them speculating and giving you the answer that they think they "ought" to say. Instead, you come at it sideways. You ask them factual, non-leading questions. In other words, you certainly don't say, "Do you agree the borrow checker is really hard?" And you probably don't even say, "What is the biggest pain point you had with Rust?" Instead, you might say, "What was the last time you felt confused by an error message?" And then go from there, "Is this a typical example? If not, what's another case where you felt confused?"

      To be honest, these sorts of "extremely non-leading questions" are kind of difficult to do. But they can uncover some surprising results.

      We got answers -- but not all the answers we wanted

      4200 survey responses and 70 interviews later, we got a lot of information -- but we still don't feel like we have the answers to some of the biggest questions. Given the kinds of questions we asked, we got a pretty good view on the kinds of things people love about Rust and what it offers relative to other languages. We got a sense for the broad areas that people find challenging. We also learned a few things about how the Rust project interacts with others and how things vary across the globe.

      What we really don't have is enough data to say "if you do X, Y, and Z, that will really unblock Rust adoption in this domain". We just didn't get into enough technical detail, for example, to give guidance on which features ought to be prioritized, or to help answer specific design questions that the lang or libs team may consider.

      One big lesson: there are only 24 hours in a day

      One of the things we learned was that you need to stay focused. There were so many questions we wanted to ask, but only so much time in which to do so. Ultimately, we wound up narrowing our scope in several ways:

      • we focused primarily on the individual developer experience, and only had minimal discussion with companies as a whole;
      • we dove fairly deep into one area (the Safety Critical domain) but didn't go as deep into the details of other domains;
      • we focused primarily on Rust adoption, and in particular did not even attempt to answer the questions about "Rust the open-source project".

      Another big lesson: haters gonna... stay quiet?

      One thing we found surprisingly difficult was finding people to interview who didn't like Rust. 49% of survey respondents, for example, rated their Rust comfort as 4 or 5 out of 5, and only 18.5% said 1 or 2. And of those, only a handful gave contact information.

      It turns out that people who think Rust isn't worth using mostly don't read the Rust blog or want to talk about that with a bunch of Rust fanatics.3 This is a shame, of course, as likely those folks have a lot to teach us about the boundaries of where Rust adds value. We are currently doing some targeted outreach in an attempt to grow our scope here, so stay tuned, we may get more data.

      One fun fact: enums are Rust's underappreciated superpower

      We will do a deeper dive into the things people say that they like about Rust later (hint: performance and reliability both make the cut). One interesting thing we found was the number of people that talked specifically about Rust enums, which allow you to package up the state of your program along with the data it has available in that state. Enums are a concept that Rust adapted from functional languages like OCaml and Haskell and brought into the systems programming setting.

      "The usage of Enum is a new concept for me. And I like this concept. It's not a class and it's not just a boolean, limited to false or true. It has different states." -- New Rust developer

      "Tagged unions. I don't think I've seriously used another production language which has that. Whenever I go back to a different language I really miss that as a way of accurately modeling the domain." -- Embedded developer

      Where do we go from here? Create a user research team

      When we set out to write the vision doc, we imagined that it would take the form of an RFC. We imagined that RFC identifying key focus areas for Rust and making other kinds of recommendations. Now that we've been through it, we don't think we have the data we need to write that kind of RFC (and we're also not sure if that's the right kind of RFC to write). But we did learn a lot and we are convinced of the importance of this kind of work.

      Therefore, our plan is to do the following. First, we're going to write-up a series of blog posts diving into what we learned about our research questions along with other kinds of questions that we encountered as we went.

      Second, we plan to author an RFC proposing a dedicated user research team for the Rust org. The role of this team would be to gather data of all forms (interviews, surveys, etc) and make it available to the Rust project. And whenever they can, they would help to connect Rust customers directly with people extending and improving Rust.

      The vision doc process was in many ways our first foray into this kind of research, and it taught us a few things:

      • First, we have to go broad and deep. For this first round, we focused on high-level questions about people's experiences with Rust, and we didn't get deep into technical blockers. This gives us a good overview but limits the depth of recommendations we can make.
      • Second, to answer specific questions we need to do specific research. One of our hypotheses was that we could use UX interviews to help decide thorny questions that come up in RFCs -- e.g., the notorious debate between await x and x.await from yesteryear. What we learned is "sort of". The broad interviews did give us information about what kinds of things are important to people (e.g., convenience vs reliability, and so forth), and we'll cover some of that in upcoming write-ups. But to shed light on specific questions (e.g., "will x.await be confused for a field access") will really require more specific research. This may be interviews but it could also be other kinds of tests. These are all things, though, that a user research team could help with.
      • Third, we should find ways to "open the data" and publish results incrementally. We conducted all of our interviews with a strong guarantee of privacy and we expect to delete the information we've gathered once this project wraps up. Our goal was to ensure people could talk in an unfiltered way. This should always be an option we offer people -- but that level of privacy has a cost, which is that we are not able to share the raw data, even widely across the Rust teams, and (worse) people have to wait for us to do analysis before they can learn anything. This won't work for a long-running team. At the same time, even for seemingly innocuous conversations, posting full transcripts of conversations openly on the internet may not be the best option, so we need to find a sensible compromise.

      • "As wide a variety of Rust users as we could find " -- the last part is important. One of the weaknesses of this work is that we wanted to hear from more Rust skeptics than we did. ↩

      • Thanks Holly! We are ever in your debt. ↩

      • Shocking, I know. But, actually, it is a little -- most programmers love telling you how much they hate everything you do, in my experience? ↩

    17. πŸ”— Rust Blog crates.io: Malicious crates evm-units and uniswap-utils rss

      Summary

      On December 2nd, the crates.io team was notified by Olivia Brown from the Socket Threat Research Team of two malicious crates which were downloading a payload that was likely attempting to steal cryptocurrency.

      These crates were:

      • evm-units - 13 versions published in April 2025, downloaded 7257 times
      • uniswap-utils - 14 versions published in April 2025, downloaded 7441 times, used evm-units as a dependency

      Actions taken

      The user in question, ablerust, was immediately disabled, and the crates in question were deleted from crates.io shortly after. We have retained the malicious crate files for further analysis.

      The deletions were performed at 22:01 UTC on December 2nd.

      Analysis

      Socket has published their analysis in a blog post.

      These crates had no dependent downstream crates on crates.io.

      Thanks

      Our thanks to Olivia Brown from the Socket Threat Research Team for reporting the crates. We also want to thank Carol Nichols from the crates.io team and Walter Pearce and Adam Harvey from the Rust Foundation for aiding in the response.

    18. πŸ”— Mitchell Hashimoto Ghostty Is Now Non-Profit rss
      (empty)
  4. December 02, 2025
    1. πŸ”— IDA Plugin Updates IDA Plugin Updates on 2025-12-02 rss

      IDA Plugin Updates on 2025-12-02

      New Releases:

      Activity:

      • diffrays
        • 6a63def7: Add auto issue assignment workflow
        • db288f0f: Add auto issue assignment workflow
        • 70715b4a: Deleted auto issue assignment workflow
        • ac5850c2: Add auto issue assignment workflow
        • 4584c6df: Add auto issue assignment workflow
        • f0cec8ab: Add auto issue assignment workflow
      • ghidra
        • a0acfb8f: Merge remote-tracking branch 'origin/Ghidra_12.0'
        • f901a1bb: GP-0: Upping gradle wrapper version to 9.2.1
        • 99987885: GP-0: Fixing javadoc
        • 3d0da548: Merge remote-tracking branch 'origin/GP-6176_ryanmkurtz_objc-refactor'
        • 17ac51c4: GP-6176: Refactored Objective-C type metadata analyzers
        • aabeb6d6: Merge remote-tracking branch 'origin/GP-0-dragonmacher-test-fixes-12-…
        • 44ee4636: Merge remote-tracking branch 'origin/GP-1-dragonmacher-flow-arrow-npe'
        • 95b96e31: Merge remote-tracking branch 'origin/GP-1-dragonmacher-help-location-…
        • d8f3960f: Fix for flow arrow NPE
      • IDA-VTableExplorer
        • e770cb35: fix: Simplify IDA SDK prerequisites in README
        • 503f36a6: Refactor code structure for improved readability and maintainability
        • 219134c8: feat: Add screenshots and images to README for better visualization o…
      • quokka
        • 812f87cc: Merge pull request #65 from quarkslab/dependabot/github_actions/actio…
        • 3486fa93: Bump the actions group across 1 directory with 8 updates
    2. πŸ”— r/LocalLLaMA I'm surprised how simple Qwen3 VL's architecture is. rss

      The new 3D position ID logic got a lot more intuitive compared to Qwen2.5 VL. It basically indexes image patches on the width and height dimensions in addition to the regular token-sequence/temporal dimension (while giving a text token the same number across all 3 dimensions). In addition to this, they added DeepStack, which essentially is just some residual connections between vision encoder blocks and downstream LLM blocks. Here's the full repo if you want to read more: https://github.com/Emericen/tiny-qwen

      submitted by /u/No-Compote-6794
      [link] [comments]
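      The patch-indexing idea described in the post can be sketched in a few lines. This is a toy reading of the scheme, not code from the linked repo: the `posID` type and both functions are made-up names, and the real model's offsets and interleaving details differ.

```go
package main

import "fmt"

// posID is one 3D rotary position: temporal/sequence, height, width.
type posID struct{ T, H, W int }

// textPositions: a text token uses the same increasing index on all three
// axes, so for pure text the scheme degenerates to ordinary 1D positions.
func textPositions(start, n int) []posID {
	ids := make([]posID, 0, n)
	for i := 0; i < n; i++ {
		p := start + i
		ids = append(ids, posID{p, p, p})
	}
	return ids
}

// imagePositions: every patch of an h x w grid shares one temporal index,
// while the H and W axes follow the patch's grid coordinates.
func imagePositions(start, h, w int) []posID {
	ids := make([]posID, 0, h*w)
	for y := 0; y < h; y++ {
		for x := 0; x < w; x++ {
			ids = append(ids, posID{start, start + y, start + x})
		}
	}
	return ids
}

func main() {
	// two text tokens followed by a 2x3 grid of image patches
	ids := append(textPositions(0, 2), imagePositions(2, 2, 3)...)
	fmt.Println(ids)
}
```

      The point of the sketch is the asymmetry: text stays diagonal across all three axes, while patches fan out over (H, W) under a single temporal index.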

    3. πŸ”— sharkdp/bat v0.26.1 release

      v0.26.1

      Features

      Bugfixes

      • Fix hang when using --list-themes with an explicit pager, see #3457 (@abhinavcool42)
      • Fix negative values of N not being parsed in line ranges without = flag value separator, see #3442 (@lmmx)
      • Fix broken Docker syntax preventing use of custom assets, see #3476 (@keith-hall)
      • Fix decorations being applied unexpectedly when piping. Now only line numbers explicitly required on the command line should be applied in auto decorations mode for cat compatibility. See #3496 (@keith-hall)
      • Fix diagnostics attempting to find the version of an executable named builtin when builtin pager is used. See #3498 (@keith-hall)
      • --help now correctly reads the config file for theme information etc. See #3507 (@keith-hall)

      Other

      • Improve README documentation on pager options passed to less, see #3443 (@injust)
      • Make PowerShell completions compatible with PowerShell v5.1, see #3495 (@keith-hall)
      • Use more robust approach to escaping in Bash completions, see #3448 (@akinomyoga)

      Syntaxes

      • Update quadlet syntax mapping to include *.{build,pod} files #3484 (@cyqsimon)
      • Fix inconsistencies in Ada syntax, see #3481 (@AldanTanneo)
      • Add syntax mapping for podman's artifact quadlet files, see #3497 (@xduugu)
      • Highlight Korn Shell scripts (i.e. with a shebang of ...ksh) using Bash syntax, see #3509 (@keith-hall)
    4. πŸ”— 19h/ida-lifter v1.0.0 release
    5. πŸ”— r/wiesbaden Live @ The Fox and Hound Frankfurt West End rss
    6. πŸ”— r/wiesbaden Kinderschuhe/Kleidung rss

      Hi, I hope it's okay to ask this here. We've been decluttering and ended up with several boxes of children's clothing and shoes that are still in very good condition. I'd like to give them somewhere they will actually be used, especially the shoes. I don't want any money for them. Does anyone know where I could turn with this?

      submitted by /u/Snargels
      [link] [comments]

    7. πŸ”— r/wiesbaden English cinemas rss

      Hey!! What movie Theaters in Wiesbaden play movies in English? I'm planning to watch the new FNAF2 movie in English and only know of citydome in Darmstadt.

      submitted by /u/Old-Bus-6698
      [link] [comments]

    8. πŸ”— r/LocalLLaMA Mistral just released Mistral 3 -- a full open-weight model family from 3B all the way up to 675B parameters. rss

      All models are Apache 2.0 and fully usable for research + commercial work.

      Quick breakdown:

      • Ministral 3 (3B / 8B / 14B) – compact, multimodal, and available in base, instruct, and reasoning variants. Surprisingly strong for their size.

      • Mistral Large 3 (675B MoE) – their new flagship. Strong multilingual performance, high efficiency, and one of the most capable open-weight instruct models released so far.

      Why it matters: You now get a full spectrum of open models that cover everything from on-device reasoning to large enterprise-scale intelligence. The release pushes the ecosystem further toward distributed, open AI instead of closed black-box APIs.

      Full announcement: https://mistral.ai/news/mistral-3

      submitted by /u/InternationalToe2678
      [link] [comments]

    9. πŸ”— r/LocalLLaMA Ministral-3 has been released rss

      • https://huggingface.co/mistralai/Ministral-3-14B-Reasoning-2512
      • https://huggingface.co/mistralai/Ministral-3-14B-Instruct-2512
      • https://huggingface.co/mistralai/Ministral-3-14B-Base-2512

      The largest model in the Ministral 3 family, Ministral 3 14B offers frontier capabilities and performance comparable to its larger Mistral Small 3.2 24B counterpart. A powerful and efficient language model with vision capabilities.

      • https://huggingface.co/mistralai/Ministral-3-8B-Reasoning-2512
      • https://huggingface.co/mistralai/Ministral-3-8B-Instruct-2512
      • https://huggingface.co/mistralai/Ministral-3-8B-Base-2512

      A balanced model in the Ministral 3 family, Ministral 3 8B is a powerful, efficient tiny language model with vision capabilities.

      • https://huggingface.co/mistralai/Ministral-3-3B-Reasoning-2512
      • https://huggingface.co/mistralai/Ministral-3-3B-Instruct-2512
      • https://huggingface.co/mistralai/Ministral-3-3B-Base-2512

      The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny language model with vision capabilities.

      GGUF quantizations:

      • https://huggingface.co/unsloth/Ministral-3-14B-Reasoning-2512-GGUF
      • https://huggingface.co/unsloth/Ministral-3-14B-Instruct-2512-GGUF
      • https://huggingface.co/unsloth/Ministral-3-8B-Reasoning-2512-GGUF
      • https://huggingface.co/unsloth/Ministral-3-8B-Instruct-2512-GGUF
      • https://huggingface.co/unsloth/Ministral-3-3B-Reasoning-2512-GGUF
      • https://huggingface.co/unsloth/Ministral-3-3B-Instruct-2512-GGUF

      submitted by /u/jacek2023
      [link] [comments]

    10. πŸ”— r/LocalLLaMA Mistral 3 Blog post rss

      submitted by /u/rerri
      [link] [comments]

    11. πŸ”— r/reverseengineering Ghidra Copilot - Conversational Reverse Engineering Assistant rss
    12. πŸ”— r/LocalLLaMA Only the real ones remember (he is still the contributor with the most likes for his models) rss

      Hugging Face space by TCTF: Top Contributors To Follow - November 2025: https://huggingface.co/spaces/TCTF/TCTF
      Team mradermacher and Bartowski on the podium, legends.
      From Yağız Γ‡alΔ±k on 𝕏: https://x.com/Weyaxi/status/1995814979543371869
      submitted by /u/Nunki08
      [link] [comments]

    13. πŸ”— Anton Zhiyanov Go proposal: Type-safe error checking rss

      Part of the Accepted! series, explaining the upcoming Go changes in simple terms.

      Introducing errors.AsType -- a modern, type-safe alternative to errors.As.

      Ver. 1.26 • Stdlib • High impact

      Summary

      The new errors.AsType function is a generic version of errors.As:

      // go 1.13+
      func As(err error, target any) bool

      // go 1.26+
      func AsType[E error](err error) (E, bool)

      It's type-safe, faster, and easier to use:

      // using errors.As
      var appErr AppError
      if errors.As(err, &appErr) {
          fmt.Println("Got an AppError:", appErr)
      }

      // using errors.AsType
      if appErr, ok := errors.AsType[AppError](err); ok {
          fmt.Println("Got an AppError:", appErr)
      }

      errors.As is not deprecated (yet), but errors.AsType is recommended for new code.

      Motivation

      The errors.As function requires you to declare a variable of the target error type and pass a pointer to it:

      var appErr AppError
      if errors.As(err, &appErr) {
          fmt.Println("Got an AppError:", appErr)
      }
      

      It makes the code quite verbose, especially when checking for multiple types of errors:

      var connErr *net.OpError
      var dnsErr *net.DNSError
      
      if errors.As(err, &connErr) {
          fmt.Println("Network operation failed:", connErr.Op)
      } else if errors.As(err, &dnsErr) {
          fmt.Println("DNS resolution failed:", dnsErr.Name)
      } else {
          fmt.Println("Unknown error")
      }
      

      With a generic errors.AsType, you can specify the error type right in the function call. This makes the code shorter and keeps error variables scoped to their if blocks:

      if connErr, ok := errors.AsType[*net.OpError](err); ok {
          fmt.Println("Network operation failed:", connErr.Op)
      } else if dnsErr, ok := errors.AsType[*net.DNSError](err); ok {
          fmt.Println("DNS resolution failed:", dnsErr.Name)
      } else {
          fmt.Println("Unknown error")
      }
      

      Another issue with As is that it uses reflection and can cause runtime panics if used incorrectly (like if you pass a non-pointer or a type that doesn't implement error). While static analysis tools usually catch these issues, using the generic AsType has several benefits:

      • No reflection1.
      • No runtime panics.
      • Fewer allocations.
      • Compile-time type safety.
      • Faster.

      Finally, AsType can handle everything that As does, so it's a drop-in improvement for new code.

      Description

      Add the AsType function to the errors package:

      // AsType finds the first error in err's tree that matches the type E,
      // and if one is found, returns that error value and true. Otherwise, it
      // returns the zero value of E and false.
      //
      // The tree consists of err itself, followed by the errors obtained by
      // repeatedly calling its Unwrap() error or Unwrap() []error method.
      // When err wraps multiple errors, AsType examines err followed by a
      // depth-first traversal of its children.
      //
      // An error err matches the type E if the type assertion err.(E) holds,
      // or if the error has a method As(any) bool such that err.As(target)
      // returns true when target is a non-nil *E. In the latter case, the As
      // method is responsible for setting target.
      func AsType[E error](err error) (E, bool)

      Recommend using AsType instead of As:

      // As finds the first error in err's tree that matches target, and if one
      // is found, sets target to that error value and returns true. Otherwise,
      // it returns false.
      // ...
      // For most uses, prefer [AsType]. As is equivalent to [AsType] but sets its
      // target argument rather than returning the matching error and doesn't require
      // its target argument to implement error.
      // ...
      func As(err error, target any) bool

      Example

      Open a file and check if the error is related to the file path:

      // go 1.25
      var pathError *fs.PathError
      if _, err := os.Open("non-existing"); err != nil {
          if errors.As(err, &pathError) {
              fmt.Println("Failed at path:", pathError.Path)
          } else {
              fmt.Println(err)
          }
      }
      
      
      
      Failed at path: non-existing
      
      
      
      // go 1.26
      if _, err := os.Open("non-existing"); err != nil {
          if pathError, ok := errors.AsType[*fs.PathError](err); ok {
              fmt.Println("Failed at path:", pathError.Path)
          } else {
              fmt.Println(err)
          }
      }
      
      
      
      Failed at path: non-existing
      

      Further reading

      𝗣 51945 • 𝗖𝗟 707235


      1. Unlike errors.As, errors.AsType doesn't use the reflect package, but it still relies on type assertions and interface checks. These operations access runtime type metadata, so AsType isn't completely "reflection-free" in the strict sense. ↩︎

      *[High impact]: Likely impact for an average Go developer

    14. πŸ”— r/LocalLLaMA Would you rent B300 (Blackwell Ultra) GPUs in Mongolia at ~$5/hr? (market sanity check) rss

      I work for a small-ish team that somehow ended up with a pile of B300 (Blackwell Ultra) allocations and a half-empty data center in Ulaanbaatar (yes, the capital of Mongolia, yes, the coldest one).

      Important bit so this doesn't sound totally random:
      ~40% of our initial build-out is already committed (local gov/enterprise workloads + two research labs). My actual job right now is to figure out what to do with the rest of the capacity -- I've started cold-reaching a few teams in KR/JP/SG/etc., and Reddit is my "talk to actual humans" channel.

      Boss looked at the latency numbers, yelled "EUREKA," and then voluntold me to do "market research on Reddit" because apparently that's a legitimate business strategy in 2025.

      So here's the deal (numbers are real, measured yesterday):

      • B300 bare-metal: ≈ $5 / GPU-hour on-demand (reserved is way lower)
      • Ping from the DC right now:
        • Beijing ~35 ms
        • Seoul ~85 ms
        • Tokyo ~95 ms
        • Singapore ~110 ms
      • Experience: full root, no hypervisor, 3.2 Tb/s InfiniBand, PyTorch + SLURM pre-installed so you don't hate us immediately
      • Jurisdiction: hosted in Mongolia → neutral territory, no magical backdoors or surprise subpoenas from the usual suspects

      Questions I was literally told to ask (lightly edited from my boss's Slack message):

      1. Would any team in South Korea / Japan / Singapore / Taiwan / HK / Vietnam / Indonesia actually use this instead of CoreWeave, Lambda, or the usual suspects for training/fine-tuning/inference?
      2. Does the whole "cold steppe bare-metal neutrality" thing sound like a real benefit or just weird marketing?
      3. How many GPUs do you normally burn through and for how long? (Boss keeps saying "everyone wants 256-GPU clusters for three years" and I'm... unconvinced.)

      Landing page my designer made at 3 a.m.: https://b300.fibo.cloud (still WIP, don't judge the fonts).

      Thanks in advance, and sorry if this breaks any rules -- I read the sidebar twice 🙂

      submitted by /u/CloudPattern1313
      [link] [comments]

    15. πŸ”— r/reverseengineering Optimizing libdwarf .eh_frame enumeration rss
    16. πŸ”— sacha chua :: living an awesome life 2025-12-01 Emacs news rss

      Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to AndrΓ©s RamΓ­rez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!


    17. πŸ”— Ampcode News Amp, Inc. rss

      Amp is becoming a separate company. We're spinning out of Sourcegraph to become an independent research lab.

      Our goal: let software builders harness the full power of artificial intelligence.

      We believe the way we develop software will change. All of it will change, fundamentally and drastically. Nobody knows exactly how. We intend to find out.

      We believe that shipping is the best way to do that. We don't want to write papers about the future; we want to put it in your hands.

      Amp Inc. gives us more freedom to do that, to focus ruthlessly on the frontier, to explore the absurd and find the possible.

      Amp's traction spun us out of Sourcegraph. Amp is profitable. Now, as our own company, we can follow where it leads.

      Come with us. Let's see what's possible.

      Signed,

      Alex Kemper Β· Beyang Liu Β· Brady Jeong Β· Brett Jones Β· Camden Cheek Β· Connor O'Brien Β· Dario Hamidi Β· Harry Charlesworth Β· Hitesh Sagtani Β· Isuru Fonseka Β· Jesse Edelstein Β· Karl Clement Β· Lewis Metcalf Β· Nicolay Gerold Β· Quinn Slack Β· Ryan Carson Β· Thorsten Ball Β· Tim Culverhouse Β· Tim Lucas Β· Will Dollman

      Co-founders of Amp

      Read Quinn and Dan's announcement on the Sourcegraph blog.