🏡


to read (pdf)

  1. A Protocol for Package Management | Andrew Nesbitt
  2. No management needed: anti-patterns in early-stage engineering teams | Antoine Boulanger
  3. Reconstructing Program Semantics from Go Binaries
  4. Long time ago, I was looking for game with some hidden rules, browsing random wi... | Hacker News
  5. Keychron’s Nape Pro turns your mechanical keyboard into a laptop‑style trackball rig: Hands-on at CES 2026 - Yanko Design

  1. January 23, 2026
    1. 🔗 r/reverseengineering Organized Traffer Gang on the Rise Targeting Web3 Employees and Crypto Holders rss
    2. 🔗 remorses/critique critique@0.1.43 release
      • Show full submodule diffs instead of just commit hashes:
        • Added --submodule=diff flag to git commands
        • Strip submodule header lines (Submodule name hash1..hash2:) before parsing
        • Works with TUI, --web, and review commands
    3. 🔗 r/reverseengineering How to detect arguments in a decompiler (rev.ng hour 2023-10-13) rss
    4. 🔗 r/LocalLLaMA OpenAI CFO hinting at "Outcome-Based Pricing" (aka royalties on your work)? Makes the case for local even stronger. rss

      UPDATE: My bad on this one, guys. I got caught by the clickbait.

      Thanks to u/evilbarron2 for digging up the original Business Insider source.

      CFO was actually talking about "Outcome-Based Pricing" for huge enterprise deals (e.g., if AI helps a Pharma company cure a disease, OpenAI wants a cut of that specific win).

      There is basically zero evidence this applies to us regular users, indie devs, or the API. I'm keeping the post up because the concept is still interesting to debate, but definitely take the headline with a huge grain of salt.


      Original Post:

      Saw some screenshots floating around about OpenAI planning to "take a cut" of customer discoveries (like pharma drugs, etc).

      I tried to dig up the primary source to see if it’s just clickbait. The closest official thing is a recent blog post from their CFO Sarah Friar talking about "outcome-based pricing" and "sharing in the value created" for high-value industries.

      ~~Even if the "royalty" headlines are sensationalized by tech media, the direction is pretty clear. They are signaling a shift from "paying for electricity" (tokens) to "taxing the factory output" (value).~~

      It kind of reminds me of the whole Grid vs. Solar debate. Relying on the Grid (Cloud APIs) is cheap and powerful, but you don't control the terms. If they decide your specific use case is "high value" and want a percentage, you're locked in.

      Building a local stack is like installing solar/batteries. Expensive upfront, pain in the ass to maintain, but at least nobody knocks on your door asking for 5% of your project revenue just because you used their weights to run the math.

      Link to article: https://www.gizmochina.com/2026/01/21/openai-wants-a-cut-of-your-profits-inside-its-new-royalty-based-plan-and-other-business-models/

      Link to the actual source: https://www.businessinsider.com/openai-cfo-sarah-friar-future-revenue-sources-2026-1

      submitted by /u/distalx
      [link] [comments]

    5. 🔗 Servo Blog December in Servo: multiple windows, proxy support, better caching, and more! rss

      Servo 0.0.4 and our December nightly builds now support multiple windows (@mrobinson, @mukilan, #40927, #41235, #41144)! This builds on features that landed in Servo’s embedding API last month. We’ve also landed support for several web platform features, both old and new:

      Note: due to a known issue, servoshell on macOS may not be able to directly open new windows, depending on your system settings.

      Servo 0.0.4 showing new support for multiple windows

      For better compatibility with older web content, we now support vendor-prefixed CSS properties like ‘-moz-transform’ (@mrobinson, #41350), as well as window.clientInformation (@Taym95, #41111).

      We’ve continued shipping the SubtleCrypto API, with full support for ChaCha20-Poly1305, RSA-OAEP, RSA-PSS, and RSASSA-PKCS1-v1_5 (see below), plus importKey() for ML-KEM (@kkoyung, #41585) and several other improvements (@kkoyung, @PaulTreitel, @danilopedraza, #41180, #41395, #41428, #41442, #41472, #41544, #41563, #41587, #41039, #41292):

      Algorithm | Credits
      ---|---
      ChaCha20-Poly1305 | @kkoyung, #40978, #41003, #41030
      RSA-OAEP | @kkoyung, @TimvdLippe, @jdm, #41225, #41217, #41240, #41316
      RSA-PSS | @kkoyung, @jdm, #41157, #41225, #41240, #41287
      RSASSA-PKCS1-v1_5 | @kkoyung, @jdm, #41172, #41225, #41240, #41267

      When using servoshell on Windows, you can now see --help and log output, as long as servoshell was started in a console (@jschwe, #40961).

      Servo diagnostics options are now accessible in servoshell via the SERVO_DIAGNOSTICS environment variable (@atbrakhi, #41013), in addition to the usual -Z / --debug= arguments.

      Servo’s devtools now partially support the Network > Security tab (@jiang1997, #40567), allowing you to inspect some of the TLS details of your requests. We’ve also made it compatible with Firefox 145 (@eerii, #41087), and use fewer IPC resources (@mrobinson, #41161).

      this website in Servo’s devtools, showing that the main request used TLS 1.3, the TLS13_AES_256_GCM_SHA384 cipher suite, and X25519MLKEM768 key exchange, with HSTS enabled and HPKP disabled

      We’ve fixed rendering bugs related to ‘float’, ‘order’, ‘max-width’, ‘max-height’, ‘:link’ selectors, <audio> layout, and getClientRects(), affecting intrinsic sizing (@Loirooriol, #41513), anonymous blocks (@Loirooriol, #41510), incremental layout (@Loirooriol, #40994), flex item sizing (@Loirooriol, #41291), selector matching (@andreubotella, #41478), replaced element layout (@Loirooriol, #41262), and empty fragments (@Loirooriol, #41477).

      Servo now fires ‘toggle’ events on <dialog> (@lukewarlow, #40412). We’ve also improved the conformance of ‘wheel’ events (@mrobinson, #41182), ‘hashchange’ events (@Taym95, #41325), ‘dblclick’ events on <input> (@Taym95, #41319), ‘resize’ events on <video> (@tharkum, #40940), ‘seeked’ events on <video> and <audio> (@tharkum, #40981), and the ‘transform’ property in getComputedStyle() (@mrobinson, #41187).

      Embedding API

      Servo now has basic support for HTTP proxies (@Narfinger, #40941). You can set the proxy URL in the http_proxy (@Narfinger, #41209) or HTTP_PROXY (@treeshateorcs, @yezhizhen, #41268) environment variables, or via --pref network_http_proxy_uri.

      We now use the system root certificates by default (@Narfinger, @mrobinson, #40935, #41179), on most platforms. If you don’t want to trust the system root certificates, you can instead continue to use Mozilla’s root certificates with --pref network_use_webpki_roots. As always, you can also add your own root certificates via Opts::certificate_path (--certificate-path=).

      We have a new SiteDataManager API for managing localStorage, sessionStorage, and cookies (@janvarga, #41236, #41255, #41378, #41523, #41528), and a new NetworkManager API for managing the cache (@janvarga, @mrobinson, #41255, #41474, #41386). To clear the cache, call NetworkManager::clear_cache, and to list cache entries, call NetworkManager::cache_entries.

      Simple dialogs – that is, alert(), confirm(), and prompt() – are now exposed to embedders via a new SimpleDialog type in EmbedderControl (@mrobinson, @mukilan, #40982). This new interface is harder to misuse, and no longer requires boilerplate for embedders that wish to ignore simple dialogs.

      Web console messages, including messages from the Console API, are now accessible via ServoDelegate::show_console_message and WebViewDelegate::show_console_message (@atbrakhi, #41351).

      Servo, the main handle for controlling Servo, is now cloneable for sharing within the same thread (@mukilan, @mrobinson, #41010). To shut down Servo, simply drop the last Servo handle or let it go out of scope. Servo::start_shutting_down and Servo::deinit have been removed (@mukilan, @mrobinson, #41012).

      Several interfaces have also been renamed:

      • Servo::clear_cookies is now SiteDataManager::clear_cookies (@janvarga, #41236, #41255)
      • DebugOpts::disable_share_style_cache is now Preferences::layout_style_sharing_cache_enabled (@atbrakhi, #40959)
      • The rest of DebugOpts has been moved to DiagnosticsLogging, and the options have been renamed (@atbrakhi, #40960)

      Perf and stability

      We can now evict entries from our HTTP cache (@Narfinger, @gterzian, @Taym95, #40613), rather than having it grow forever (or get cleared by an embedder). about:memory now tracks SVG-related memory usage (@d-kraus, #41481), and we’ve fixed memory leaks in <video> and <audio> (@tharkum, #41131).

      Servo now does less work when matching selectors (@webbeef, #41368), when focus changes (@mrobinson, @Loirooriol, #40984), and when reflowing boxes whose size did not change (@Loirooriol, @mrobinson, #41160).

      To allow for smaller binaries, gamepad support is now optional at build time (@WaterWhisperer, #41451).

      We’ve fixed some undefined behaviour around garbage collection (@sagudev, @jdm, @gmorenz, #41546, mozjs#688, mozjs#689, mozjs#692). To better avoid other garbage-collection-related bugs (@sagudev, mozjs#647, mozjs#638), we’ve continued our work on defining (and migrating to) safer interfaces between Servo and the SpiderMonkey GC (@sagudev, #41519, #41536, #41537, #41520, #41564).

      We’ve fixed a crash that occurs when <link rel=“shortcut icon”> has an empty ‘href’ attribute, which affected chiptune.com (@webbeef, #41056), and we’ve also fixed crashes in:

      Donations

      Thanks again for your generous support! We are now receiving 7110 USD/month (+10.5% over November) in recurring donations. This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and funding maintainer work that helps more people contribute to Servo.

      Servo is also on thanks.dev, and already 30 GitHub users (+2 over November) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.

      We now have sponsorship tiers that allow you or your organisation to donate to the Servo project with public acknowledgement of your support. A big thanks from Servo to our newest Bronze Sponsors: Anthropy, Niclas Overby, and RxDB! If you’re interested in this kind of sponsorship, please contact us at join@servo.org.

      Use of donations is decided transparently via the Technical Steering Committee’s public funding request process, and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.

      Conference talks and blogs

      We’ve recently published one talk and one blog post:

      We also have two upcoming talks at FOSDEM 2026 in Brussels later this month:

      Servo developers Martin Robinson (@mrobinson) and Delan Azabani (@delan) will also be attending FOSDEM 2026, so it would be a great time to come along and chat about Servo!

  2. January 22, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-22 rss

      IDA Plugin Updates on 2026-01-22

      New Releases:

      Activity:

    2. 🔗 r/LocalLLaMA Am I the only one who feels that, with all the AI boom, everyone is basically doing the same thing? rss

      Lately I go on Reddit and I keep seeing the same idea repeated over and over again. Another chat app, another assistant, another “AI tool” that, in reality, already exists — or worse, already exists in a better and more polished form.

      Many of these are applications that could be solved perfectly with an extension, a plugin, or a simple feature inside an app we already use. I’m not saying AI is bad — quite the opposite, it’s incredible. But there are people pouring all their money into Anthropic subscriptions or increasing their electricity bill just to build a less polished version of things like OpenWebUI, Open Code, Cline, etc.

      submitted by /u/Empty_Enthusiasm_167
      [link] [comments]

    3. 🔗 @cxiao@infosec.exchange RE: mastodon
    4. 🔗 r/reverseengineering Symphony of the Night Decomp Updates! It's Getting Closer rss
    5. 🔗 News Minimalist 🐱 Renewables overtake fossil fuels in EU + 9 more stories rss

      In the last 3 days ChatGPT read 95356 top news stories. After removing previously covered events, there are 10 articles with a significance score over 5.5.

      [5.6] Wind and solar power lead EU energy mix for the first time in 2025 — lavenir.net (French) (+15)

      For the first time, wind and solar power surpassed fossil fuels in the European Union’s electricity production mix in 2025, marking a major milestone in the region's energy transition.

      According to think tank Ember, these renewables generated 841 terawatt-hours, accounting for 30.1% of EU electricity. This exceeded fossil fuels, which fell to 29% as solar production hit record highs and coal reached a historic low of 9.2%.

      [5.7] Hundreds of artists, including Scarlett Johansson and Cate Blanchett, launch anti-AI campaign against unauthorized use of copyrighted work — variety.com (+3)

      Scarlett Johansson, Cate Blanchett, and Joseph Gordon-Levitt joined 700 creators in launching a campaign condemning tech companies for training artificial intelligence on copyrighted work without authorization, labeling it theft.

      The group asserts that unauthorized scraping endangers millions of jobs and economic growth. They urge developers to adopt transparent licensing agreements, proving that technological advancement can coexist with the protection of creators' intellectual property rights and legal authorship.

      This initiative follows several high-profile legal disputes, notably involving Johansson, who has previously taken action against the unauthorized use of her name, voice, and likeness in AI-generated advertisements and content.

      Highly covered news with significance over 5.5

      [6.3] Personalized mRNA vaccine shows lasting benefits for high-risk melanoma patients — euronews.com (+6)

      [6.2] South Korea enacts comprehensive laws to regulate artificial intelligence — japantimes.co.jp (+5)

      [5.9] Snap settles social media addiction lawsuit, avoiding trial — nytimes.com (+5)

      [5.8] Moldova begins withdrawal from the Commonwealth of Independent States — pravda.com.ua (Ukrainian) (+7)

      [5.7] Israel demolishes UNRWA headquarters in East Jerusalem — dw.com (Russian) (+22)

      [5.6] Trump unveils Board of Peace initiative with global leaders — irishtimes.com (+38)

      [5.5] Argentina welcomes large shipment of Chinese electric vehicles as it eases import restrictions — apnews.com (+10)

      [5.6] Canada prepares for potential US invasion following Trump's provocations — fr.de (German) (+23)

      Thanks for reading!

      — Vadim



    6. 🔗 r/reverseengineering apk.sh makes reverse engineering Android apps easier, automating repetitive tasks like pulling, decoding, rebuilding and patching an APK. It supports direct bytecode manipulation with no decompilation, this avoids decompilation/recompilation errors. rss
    7. 🔗 langchain-ai/deepagents deepagents-cli==0.0.13 release

      Changes since deepagents-cli==0.0.12

      release(sdk): bump version (#879)
      fix(sdk): make sure that tool truncation applies to execute (#547)
      fix(cli): hitl spinner errors (#861)
      fix(cli): Improve HITL approval UX (#859)
      release: patch release 0.3.7 (#869)
      fix(cli): resume should load previous thread messages (#862)
      chore(cli): clean checks in cli spacing (#854)
      fix(cli): avoid jumping (#853)
      chore(cli): turn on more linting (#852)
      Add current model display to status bar (#844)
      Disable double message submission while agent is working
      update remember prompt to work with memory and skills (#842)
      feat(cli): focus input when clicking anywhere in the terminal (#826)
      feat: summarization offloading (#742)
      Bump version to 0.3.7a1 (#817)
      chore(deps): bump the uv group across 5 directories with 1 update (#811)
      fix(infra): exclude build/ from typechecking (#808)
      fix(sdk): BaseSandbox.ls_info() to return absolute paths (#797)
      chore: bump deepagents-cli to 0.0.13a2 (#795)
      docs: add testing readme (#788)
      fix(cli): include tcss and py.typed in package data (#781)
      feat(cli): format file tree with markdown (#782)
      fix(cli): add explicit package inclusion for setuptools (#780)
      add prompt seeding with -m flag (#755)
      docs: update model configuration details in README (#772)
      fix: import rules (#763)
      release(deepagents-cli): 0.0.13a1 (#756)
      cli-token-tracking-fixes (#706)
      release: deepagents 0.3.6 (#752)
      chore: automatically sort imports (#740)
      Add LangSmith tracing status to welcome banner (#741)
      feat(cli): inject local context into system prompt via LocalContextMiddleware
      fix: don't allow Rich markup from user content (#704)
      fix(cli): remove duplicate version from welcome.py (#737)
      feat(cli): add --version / /version commands (#698)
      minor release(deepagents): bump version to 0.3.5 (#695)
      Port SDK Memory to CLI (#691)
      fix thread id (#692)
      chore(ci): add uv lock checks (#681)
      update version bounds (#687)
      CLI Refactor to Textual (#686)
      Fix invalid YAML in skill-creator SKILL.md frontmatter (#675)
      feat(deepagents): add skills to sdk (#591)
      docs: replace gemini 1.5 (#653)
      feat(cli): show version in splash screen (#610)
      chore(cli): expose version (#609)
      fix(cli): handle read_file offset exceeding file length by returning all lines (issue #559) (#568)
      chore(cli): remove line (#601)

    8. 🔗 langchain-ai/deepagents deepagents==0.3.8 release

      Changes since deepagents==0.3.7

      release(sdk): bump version (#879)
      fix(sdk): make sure that tool truncation applies to execute (#547)
      test(sdk): use unique thread_id for summarization test configuration (#871)

    9. 🔗 r/wiesbaden Could e-scooters be banned in the city center? rss

      I'm fed up... sorry in advance for the rant. For months now I've been reluctant to go into the city center, because in February I had an accident with a young hothead on an e-scooter. He sped through the shopping street at full speed and caught me. Fractured my forearm. It healed quickly, but it hurt like hell. There was nothing I could do, because he simply got back on and rode off.

      Today it almost happened again. Walking down Marktstraße on the way to the Bürgerbüro. A guy between 16 and 20 raced past me like there's no tomorrow.

      I'm really getting fed up with having to fear for my safety in a pedestrian zone, just because it's apparently the new trend to barrel through it at 20-30 km/h.

      submitted by /u/Winston_Duarte
      [link] [comments]

    10. 🔗 r/reverseengineering What Nobody Tells You About Becoming a Vulnerability Researcher rss
    11. 🔗 @HexRaysSA@infosec.exchange 🧭 Jump Anywhere in IDA 9.3 makes everyday navigation faster and more mastodon

      🧭 Jump Anywhere in IDA 9.3 makes everyday navigation faster and more responsive — especially on large databases.

      Here’s how it works and what’s improved.
      https://hex-rays.com/blog/ida-9.3-jump-anywhere

    12. 🔗 Hex-Rays Blog Jump Anywhere: Unified Navigation Gets an Upgrade in IDA 9.3 rss

      Jump Anywhere: Unified Navigation Gets an Upgrade in IDA 9.3

      An IDA database stores many different kinds of information: functions, named global variables, types, and more. Jump Anywhere, introduced in IDA 9.2, is a unified “quick navigation” dialog that lets you search across those database items from a single place. It also supports resolving simple expressions that the user could have entered into the “Jump to address” dialog.

    13. 🔗 r/wiesbaden Winter date ideas in Wiesbaden (public-transport-friendly) rss

      Hello everyone,

      I'm currently planning a date with my boyfriend. Since I'm away on Valentine's Day, I'd like to make up for the day with him and plan something nice. I'd like to go to the Kaiser-Friedrich-Therme with him and to the theater in the evening. Do you have good ideas for how to pass the time nicely in between? Unfortunately I don't know the area, but I'd be very happy about any tips!

      Thanks in advance :)

      submitted by /u/ichbineindummeraffe
      [link] [comments]

    14. 🔗 remorses/critique critique@0.1.42 release
      • New --image flag for all diff commands:
        • Generates WebP images of terminal output (saved to /tmp)
        • Splits long diffs into multiple images (70 lines per image)
        • Uses takumi for high-performance image rendering
        • @takumi-rs/core and @takumi-rs/helpers added as optional dependencies
        • Library export: import { renderTerminalToImages } from "critique/src/image.ts"
      • Web output: Use default theme to enable dark/light mode switching based on system preference
      • review command:
        • Improved AI prompt: order hunks by code flow, think upfront before writing, split heavy logic across sections
      • Dependencies:
        • Update opentui to 367a9408
    15. 🔗 r/LocalLLaMA Qwen have open-sourced the full family of Qwen3-TTS: VoiceDesign, CustomVoice, and Base, 5 models (0.6B & 1.8B), Support for 10 languages rss
    16. 🔗 r/LocalLLaMA Qwen3 TTS just dropped đŸ—ŁïžđŸ”ˆ rss
    17. 🔗 r/LocalLLaMA Qwen dev on Twitter!! rss
    18. 🔗 r/wiesbaden Football pub rss

      Hello, does anyone know a proper football pub in Wiesbaden or the surrounding area, ideally one with lots of football decor on the walls? I'd appreciate any tips :)

      submitted by /u/Turbulent_Life_5826
      [link] [comments]

    19. 🔗 Anton Zhiyanov Interfaces and traits in C rss

      Everyone likes interfaces in Go and traits in Rust. Polymorphism without class-based hierarchies or inheritance seems to be the sweet spot. What if we try to implement this in C?

      Interfaces in Go • Traits in Rust • Toy example • Interface definition • Interface data • Method table • Method table in implementor • Type assertions • Final thoughts

      Interfaces in Go

      An interface in Go is a convenient way to define a contract for some useful behavior. Take, for example, the honored io.Reader:

      // Reader is the interface that wraps the basic Read method.
      type Reader interface {
          // Read reads up to len(p) bytes into p. It returns the number of bytes
          // read (0 <= n <= len(p)) and any error encountered.
          Read(p []byte) (n int, err error)
      }
      

      Anything that can read data into a byte slice provided by the caller is a Reader. Quite handy, because the code doesn't need to care where the data comes from — whether it's memory, the file system, or the network. All that matters is that it can read the data into a slice:

      // work processes the data read from r.
      func work(r io.Reader) int {
          buf := make([]byte, 8)
          n, err := r.Read(buf)
          if err != nil && err != io.EOF {
              panic(err)
          }
          // ...
          return n
      }
      

      We can provide any kind of reader:

      func main() {
          var total int
          b := bytes.NewBufferString("hello world")
      
          // bytes.Buffer implements io.Reader, so we can use it with work.
          total += work(b)
          total += work(b)
      
          fmt.Println("total =", total)
      }
      
      
      
      total = 11
      

      Go's interfaces are structural, which is similar to duck typing. A type doesn't need to explicitly state that it implements io.Reader; it just needs to have a Read method:

      // Zeros is an infinite stream of zero bytes.
      type Zeros struct{}
      
      func (z Zeros) Read(p []byte) (n int, err error) {
          clear(p)
          return len(p), nil
      }
      

      The Go compiler and runtime take care of the rest:

      func main() {
          var total int
          var z Zeros
      
          // Zeros implements io.Reader, so we can use it with work.
          total += work(z)
          total += work(z)
      
          fmt.Println("total =", total)
      }
      
      
      
      total = 16
      

      Traits in Rust

      A trait in Rust is also a way to define a contract for certain behavior. Here's the std::io::Read trait:

      // The Read trait allows for reading bytes from a source.
      pub trait Read {
          // Readers are defined by one required method, read(). Each call to read()
          // will attempt to pull bytes from this source into a provided buffer.
          fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize>;
      
          // ...
      }
      

      Unlike in Go, a type must explicitly state that it implements a trait:

      // An infinite stream of zero bytes.
      struct Zeros;
      
      impl io::Read for Zeros {
          fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
              buf.fill(0);
              Ok(buf.len())
          }
      }
      

      The Rust compiler takes care of the rest:

      // Processes the data read from r.
      fn work(r: &mut dyn io::Read) -> usize {
          let mut buf = [0; 8];
          match r.read(&mut buf) {
              Ok(n) => n,
              Err(e) => panic!("Error: {}", e),
          }
      }
      
      fn main() {
          let mut total = 0;
          let mut z = Zeros;
      
          // Zeros implements Read, so we can use it with work.
          total += work(&mut z);
          total += work(&mut z);
      
          println!("total = {}", total);
      }
      
      
      
      total = 16
      

      Either way, whether it's Go or Rust, the caller only cares about the contract (defined as an interface or trait), not the specific implementation.

      Toy example

      Let's make an even simpler version of Reader — one without any error handling (Go):

      // Reader is an interface that wraps the basic Read method.
      // Read reads up to len(p) bytes into p.
      type Reader interface {
          Read(p []byte) int
      }
      

      Usage example:

      // Zeros is an infinite stream of zero bytes.
      type Zeros struct {
          total int // total number of bytes read
      }
      
      // Read reads len(p) bytes into p.
      func (z *Zeros) Read(p []byte) int {
          clear(p)
          z.total += len(p)
          return len(p)
      }
      
      // work processes the data read from r.
      func work(r Reader) int {
          buf := make([]byte, 8)
          return r.Read(buf)
      }
      
      func main() {
          z := new(Zeros)
          work(z)
          work(z)
          fmt.Println("total =", z.total)
      }
      
      
      
      total = 16
      

      Let's see how we can do this in C!

      Interface definition

      The main building blocks in C are structs and functions, so let's use them. Our Reader will be a struct with a single field called Read. This field will be a pointer to a function with the right signature:

      // An interface that wraps the basic Read method.
      // Read reads up to len(p) bytes into p.
      typedef struct {
          size_t (*Read)(void* self, uint8_t* p, size_t len);
      } Reader;
      

      To make Zeros fully dynamic, let's turn it into a struct with a Read function pointer (I know, I know — just bear with me):

      // An infinite stream of zero bytes.
      typedef struct {
          size_t (*Read)(void* self, uint8_t* p, size_t len);
          size_t total;
      } Zeros;
      

      Here's the Zeros_Read "method" implementation:

      // Reads up to len(p) bytes into p.
      size_t Zeros_Read(void* self, uint8_t* p, size_t len) {
          Zeros* z = (Zeros*)self;
          for (size_t i = 0; i < len; i++) {
              p[i] = 0;
          }
          z->total += len;
          return len;
      }
      

      The work is pretty obvious:

      // Does some work reading from r.
      size_t work(Reader* r) {
          uint8_t buf[8];
          return r->Read(r, buf, sizeof(buf));
      }
      

      And, finally, the main function:

      int main(void) {
          Zeros z = {.Read = Zeros_Read, .total = 0};
      
          Reader* r = (Reader*)&z;
          work(r);
          work(r);
      
          printf("total = %zu\n", z.total);
      }
      
      
      
      total = 16
      

      See how easy it is to turn a Zeros into a Reader: all we need is (Reader*)&z. Pretty cool, right?

      Not really. Actually, this implementation is seriously flawed in almost every way (except for the Reader definition).

      Memory overhead. Each Zeros instance has its own function pointers (8 bytes per function on a 64-bit system) as "methods", which isn't practical even if there are only a few of them. Regular objects should store data, not functions.
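
      To make the overhead concrete, here's a minimal standalone sketch (my addition, not from the article) comparing the per-instance size of a "fat" struct carrying two method pointers with a data-only struct; on a typical 64-bit system the fat version costs 16 extra bytes per instance:

      #include <stdint.h>
      #include <stdio.h>

      // Fat variant: every instance carries its own method pointers.
      typedef struct {
          size_t (*Read)(void* self, uint8_t* p, size_t len);
          void (*Close)(void* self);
          size_t total;
      } FatZeros;

      // Lean variant: data only.
      typedef struct {
          size_t total;
      } LeanZeros;

      int main(void) {
          printf("fat  = %zu bytes\n", sizeof(FatZeros));   // typically 24
          printf("lean = %zu bytes\n", sizeof(LeanZeros));  // typically 8
      }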

      Layout dependency. Converting from Zeros* to Reader* like (Reader*)&z only works if both structures have the same Read field as their first member. If we try to implement another interface:

      // Reader interface.
      typedef struct {
          size_t (*Read)(void* self, uint8_t* p, size_t len);
      } Reader;
      
      // Closer interface.
      typedef struct {
          void (*Close)(void* self);
      } Closer;
      
      // Zeros implements both Reader and Closer.
      typedef struct {
          size_t (*Read)(void* self, uint8_t* p, size_t len);
          void (*Close)(void* self);
          size_t total;
      } Zeros;
      

      Everything will fall apart:

      int main(void) {
          Zeros z = {
              .Read = Zeros_Read,
              .Close = Zeros_Close,
              .total = 0,
          };
          Closer* c = (Closer*)&z;  // (X)
          c->Close(c);
      }
      
      
      
      Segmentation fault: 11
      

      Closer and Zeros have different layouts, so the type conversion in (X) is invalid and causes undefined behavior.

      Lack of type safety. Using a void* as the receiver in Zeros_Read means the caller can pass any type, and the compiler won't even show a warning:

      int main(void) {
          int x = 42;
          uint8_t buf[8];
          Zeros_Read(&x, buf, sizeof(buf));  // bad decision
      }
      
      size_t Zeros_Read(void* self, uint8_t* p, size_t len) {
          Zeros* z = (Zeros*)self;
          // ...
          z->total += len;                   // consequences
          return len;
      }
      
      
      
      Abort trap: 6
      

      C isn't a particularly type-safe language, but this is just too much. Let's try something else.

      Interface data

      A better way is to store a reference to the actual object in the interface:

      // An interface that wraps the basic Read method.
      // Read reads up to len(p) bytes into p.
      typedef struct {
          size_t (*Read)(void* self, uint8_t* p, size_t len);
          void* self;
      } Reader;
      

      We could have the Read method in the interface take a Reader instead of a void*, but that would make the implementation more complicated without any real benefits. So, I'll keep it as void*.

      Then Zeros will only have its own fields:

      // An infinite stream of zero bytes.
      typedef struct {
          size_t total;
      } Zeros;
      

      We can make the Zeros_Read method type-safe:

      // Reads len(p) bytes into p.
      size_t Zeros_Read(Zeros* z, uint8_t* p, size_t len) {
          for (size_t i = 0; i < len; i++) {
              p[i] = 0;
          }
          z->total += len;
          return len;
      }
      

      To make this work, we add a Zeros_Reader method that returns the instance wrapped in a Reader interface:

      // Returns a Reader implementation for Zeros.
      Reader Zeros_Reader(Zeros* z) {
          return (Reader){
              .Read = (size_t (*)(void*, uint8_t*, size_t))Zeros_Read,
              .self = z,
          };
      }
      

      The work and main functions remain quite simple:

      // Does some work reading from r.
      size_t work(Reader r) {
          uint8_t buf[8];
          return r.Read(r.self, buf, sizeof(buf));
      }
      
      int main(void) {
          Zeros z = {0};
      
          Reader r = Zeros_Reader(&z);
          work(r);
          work(r);
      
          printf("total = %zu\n", z.total);
      }
      
      
      
      total = 16
      

      This approach is much better than the previous one:

      • The Zeros struct is lean and doesn't have any interface-related fields.
      • The Zeros_Read method takes a Zeros* instead of a void*.
      • The cast from Zeros to Reader is handled inside the Zeros_Reader method.
      • We can implement multiple interfaces if needed.

      Since our Zeros type now knows about the Reader interface (through the Zeros_Reader method), our implementation is more like a basic version of a Rust trait than a true Go interface. For simplicity, I'll keep using the term "interface".

      There is one downside, though: each Reader instance has its own function pointer for every interface method. Since Reader only has one method, this isn't an issue. But if an interface has a dozen methods and the program uses a lot of these interface instances, it can become a problem.

      Let's fix this.

      Method table

      Let's extract interface methods into a separate structure — the method table. The interface references its methods through the mtab field:

      // An interface that wraps the basic Read method.
      // Read reads up to len(p) bytes into p.
      typedef struct {
          size_t (*Read)(void* self, uint8_t* p, size_t len);
      } ReaderTable;
      
      typedef struct {
          const ReaderTable* mtab;
          void* self;
      } Reader;
      

      Zeros and Zeros_Read don't change at all:

      // An infinite stream of zero bytes.
      typedef struct {
          size_t total;
      } Zeros;
      
      // Reads len(p) bytes into p.
      size_t Zeros_Read(Zeros* z, uint8_t* p, size_t len) {
          for (size_t i = 0; i < len; i++) {
              p[i] = 0;
          }
          z->total += len;
          return len;
      }
      

      The Zeros_Reader method initializes the static method table and assigns it to the interface instance:

      // Returns a Reader implementation for Zeros.
      Reader Zeros_Reader(Zeros* z) {
          // The method table is only initialized once.
          static const ReaderTable impl = {
              .Read = (size_t (*)(void*, uint8_t*, size_t))Zeros_Read,
          };
          return (Reader){.mtab = &impl, .self = z};
      }
      

      The only difference in work is that it calls the Read method on the interface indirectly using the method table (r.mtab->Read instead of r.Read):

      // Does some work reading from r.
      size_t work(Reader r) {
          uint8_t buf[8];
          return r.mtab->Read(r.self, buf, sizeof(buf));
      }
      

      main stays the same:

      int main(void) {
          Zeros z = {0};
      
          Reader r = Zeros_Reader(&z);
          work(r);
          work(r);
      
          printf("total = %zu\n", z.total);
      }
      
      
      
      total = 16
      

      Now the Reader instance always has a single pointer field for its methods. So even for large interfaces, it only uses 16 bytes (mtab + self fields). This approach also keeps all the benefits from the previous version:

      • Lightweight Zeros structure.
      • Easy conversion from Zeros to Reader.
      • Supports multiple interfaces (see the sketch below).
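
      As an illustration of that last point, here's a hedged sketch (my addition, reusing the Closer idea from earlier) of a second method table that the same Zeros could hand out alongside its Reader:

      // Closer method table and interface value, mirroring Reader.
      // Assumes the Zeros struct defined above.
      typedef struct {
          void (*Close)(void* self);
      } CloserTable;

      typedef struct {
          const CloserTable* mtab;
          void* self;
      } Closer;

      // A no-op Close "method" for Zeros.
      void Zeros_Close(Zeros* z) {
          (void)z;  // nothing to release
      }

      // Returns a Closer implementation for Zeros.
      Closer Zeros_Closer(Zeros* z) {
          // The method table is only initialized once.
          static const CloserTable impl = {
              .Close = (void (*)(void*))Zeros_Close,
          };
          return (Closer){.mtab = &impl, .self = z};
      }

      A caller would then write Closer c = Zeros_Closer(&z); and invoke c.mtab->Close(c.self); the Zeros struct itself needs no changes.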

      We can even add a separate Reader_Read helper so the client doesn't have to worry about the r.mtab->Read implementation detail:

      // Reads len(p) bytes into p.
      size_t Reader_Read(Reader r, uint8_t* p, size_t len) {
          return r.mtab->Read(r.self, p, len);
      }
      
      // Does some work reading from r.
      size_t work(Reader r) {
          uint8_t buf[8];
          return Reader_Read(r, buf, sizeof(buf));
      }
      

      Nice!

      Alternative: Method table in implementor

      There's another approach I've seen out there. I don't like it, but it's still worth mentioning for completeness.

      Instead of embedding the Reader method table in the interface, we can place it in the implementation (Zeros):

      // An interface that wraps the basic Read method.
      // Read reads up to len(p) bytes into p.
      typedef struct {
          size_t (*Read)(void* self, uint8_t* p, size_t len);
      } ReaderTable;
      
      typedef ReaderTable* Reader;
      
      // An infinite stream of zero bytes.
      typedef struct {
          Reader mtab;
          size_t total;
      } Zeros;
      

      We initialize the method table in the Zeros constructor:

      // Returns a new Zeros instance.
      Zeros NewZeros(void) {
          static const ReaderTable impl = {
              .Read = (size_t (*)(void*, uint8_t*, size_t))Zeros_Read,
          };
          return (Zeros){
              .mtab = (Reader)&impl,
              .total = 0,
          };
      }
      

      work now takes a Reader pointer:

      // Does some work reading from r.
      size_t work(Reader* r) {
          uint8_t buf[8];
          return (*r)->Read(r, buf, sizeof(buf));
      }
      

      And main converts Zeros* to Reader* with a simple type cast:

      int main(void) {
          Zeros z = NewZeros();
      
          Reader* r = (Reader*)&z;
          work(r);
          work(r);
      
          printf("total = %zu\n", z.total);
      }
      
      
      
      total = 16
      

      This keeps Zeros pretty lightweight, only adding one extra mtab field. But the (Reader*)&z cast only works because Reader mtab is the first field in Zeros. If we try to implement a second interface, things will break — just like in the very first solution.
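
      To make that failure mode concrete, here's a small standalone sketch (my addition, mirroring this section's definitions) in which the cast to a second interface silently lands on the first table:

      #include <stddef.h>
      #include <stdint.h>

      typedef struct {
          size_t (*Read)(void* self, uint8_t* p, size_t len);
      } ReaderTable;
      typedef ReaderTable* Reader;

      typedef struct {
          void (*Close)(void* self);
      } CloserTable;
      typedef CloserTable* Closer;

      // Implementor embedding one table pointer per interface.
      typedef struct {
          Reader rtab;   // first field: (Reader*)&z works
          Closer ctab;   // second field: (Closer*)&z never reaches this
          size_t total;
      } Zeros;

      void Zeros_Close(void* self) { (void)self; }

      static const CloserTable closer_impl = {.Close = Zeros_Close};

      int main(void) {
          Zeros z = {.rtab = NULL, .ctab = (Closer)&closer_impl, .total = 0};

          // WRONG: this points at z.rtab (a ReaderTable*), not z.ctab,
          // so the call below reads a function pointer from the wrong
          // table -- undefined behavior, and here a crash on the NULL rtab.
          Closer* c = (Closer*)&z;
          (*c)->Close(c);
      }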

      I think the "method table in the interface" approach is much better.

      Bonus: Type assertions

      Go has an io.Copy function that copies data from a source (a reader) to a destination (a writer):

      func Copy(dst Writer, src Reader) (written int64, err error)
      

      There's an interesting comment in its documentation:

      If src implements WriterTo, the copy is implemented by calling src.WriteTo(dst). Otherwise, if dst implements ReaderFrom, the copy is implemented by calling dst.ReadFrom(src).

      Here's what the function looks like:

      func Copy(dst Writer, src Reader) (written int64, err error) {
          // If the reader has a WriteTo method, use it to do the copy.
          // Avoids an allocation and a copy.
          if wt, ok := src.(WriterTo); ok {
              return wt.WriteTo(dst)
          }
          // Similarly, if the writer has a ReadFrom method, use it to do the copy.
          if rf, ok := dst.(ReaderFrom); ok {
              return rf.ReadFrom(src)
          }
          // The default implementation using regular Reader and Writer.
          // ...
      }
      

      src.(WriterTo) is a type assertion that checks if the src reader is not just a Reader, but also implements the WriterTo interface. The Go runtime handles these kinds of dynamic type checks.

      Can we do something like this in C? I'd prefer not to make it fully dynamic, since trying to recreate parts of the Go runtime in C probably isn't a good idea.

      What we can do is add an optional AsWriterTo method to the Reader interface:

      // An interface that wraps the basic Read method.
      // Read reads up to len(p) bytes into p.
      typedef struct {
          // required
          size_t (*Read)(void* self, uint8_t* p, size_t len);
          // optional
          WriterTo (*AsWriterTo)(void* self);
      } ReaderTable;
      
      typedef struct {
          const ReaderTable* mtab;
          void* self;
      } Reader;
      

      Then we can easily check if a given Reader is also a WriterTo:

      void work(Reader r) {
          // Check if r implements WriterTo.
          if (r.mtab->AsWriterTo) {
              WriterTo wt = r.mtab->AsWriterTo(r.self);
              // Use r as WriterTo...
              return;
          }
          // Use r as a regular Reader...
          return;
      }
      

      Still, this feels a bit like a hack. I'd rather avoid using type assertions unless it's really necessary.

      Final thoughts

      Interfaces (traits, really) in C are possible, but they're not as simple or elegant as in Go or Rust. The method table approach we discussed is a good starting point. It's memory-efficient, as type-safe as possible given C's limitations, and supports polymorphic behavior.

      Here's the full source code if you are interested:

      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>
      
      // An interface that wraps the basic Read method.
      // Read reads up to len(p) bytes into p.
      typedef struct {
          size_t (*Read)(void* self, uint8_t* p, size_t len);
      } ReaderTable;
      
      typedef struct {
          const ReaderTable* mtab;
          void* self;
      } Reader;
      
      // Reads len(p) bytes into p.
      size_t Reader_Read(Reader r, uint8_t* p, size_t len) {
          return r.mtab->Read(r.self, p, len);
      }
      
      // An infinite stream of zero bytes.
      typedef struct {
          size_t total;
      } Zeros;
      
      // Reads len(p) bytes into p.
      size_t Zeros_Read(Zeros* z, uint8_t* p, size_t len) {
          for (size_t i = 0; i < len; i++) {
              p[i] = 0;
          }
          z->total += len;
          return len;
      }
      
      // Returns a Reader implementation for Zeros.
      Reader Zeros_Reader(Zeros* z) {
          // The method table is only initialized once.
          static const ReaderTable impl = {
              .Read = (size_t (*)(void*, uint8_t*, size_t))Zeros_Read,
          };
          return (Reader){.mtab = &impl, .self = z};
      }
      
      // Does some work reading from r.
      size_t work(Reader r) {
          uint8_t buf[8];
          return Reader_Read(r, buf, sizeof(buf));
      }
      
      int main(void) {
          Zeros z = {0};
      
          Reader r = Zeros_Reader(&z);
          work(r);
          work(r);
      
          printf("total = %zu\n", z.total);
      }
      
      
      
      total = 16
      

      Cheers!

    20. 🔗 r/wiesbaden Time travel on the platform: a steam locomotive in Wiesbaden rss
    21. 🔗 r/wiesbaden Bicycle repair in the city center? rss

      Hello everyone! I'm looking for a good place to get my e-cargo bike (Tern GSD) repaired in Wiesbaden's city center. Lucky Bike on Mainzer Straße used to be my go-to, but they've since moved to Biebrich. I also like Ambrosius in Biebrich, but neither is easy for me to reach. Does anyone know of a workshop in the city center that's familiar with these bikes? I'd be grateful for any tip!

      submitted by /u/eggsplorer
      [link] [comments]

    22. 🔗 @cxiao@infosec.exchange RE: mastodon

      RE: https://infosec.exchange/@watchTowr/115935944816059052

      another incredible post about the power of simply reverse engineering patches

      no spoilers but a preview:

    23. 🔗 r/LocalLLaMA Fei Fei Li dropped a non-JEPA world model, and the spatial intelligence is insane rss

      Fei-Fei Li, the "godmother of modern AI" and a pioneer in computer vision, founded World Labs a few years ago with a small team and $230 million in funding. Last month, they launched https://marble.worldlabs.ai/, a generative world model that’s not JEPA, but instead built on Neural Radiance Fields (NeRF) and Gaussian splatting. It’s insanely fast for what it does, generating explorable 3D worlds in minutes. For example: this scene.

      Crucially, it’s not video. The frames aren’t rendered on-the-fly as you move. Instead, it’s a fully stateful 3D environment represented as a dense cloud of Gaussian splats—each with position, scale, rotation, color, and opacity. This means the world is persistent, editable, and supports non-destructive iteration. You can expand regions, modify materials, and even merge multiple worlds together. You can share your world, others can build on it, and you can build on theirs. It natively supports VR (Vision Pro, Quest 3), and you can export splats or meshes for use in Unreal, Unity, or Blender via USDZ or GLB.

      It’s early, there are (very literally) rough edges, but it’s crazy to think about this in 5 years. For free, you get a few generations to experiment; $20/month unlocks a lot. I just did one month so I could actually play, and definitely didn’t max out credits.

      Fei-Fei Li is an OG AI visionary, but zero hype. She’s been quiet, especially about this. So Marble hasn’t gotten the attention it deserves. At first glance, visually, you might think, “meh”... but there’s no triangle-based geometry here, no real-time rendering pipeline, no frame-by-frame generation. Just a solid, exportable, editable, stateful pile of splats. The breakthrough isn’t the image though, it’s the spatial intelligence. Y’all should play around, it’s wild.

      I know this is a violation of Rule #2 but honestly there just aren’t that many subs with people smart enough to appreciate this; no hard feelings if it needs to be removed though.

      submitted by /u/coloradical5280
      [link] [comments]

    24. 🔗 badlogic/pi-mono v0.49.3 release

      Added

      • markdown.codeBlockIndent setting to customize code block indentation in rendered output (#855 by @terrorobe)
      • Added inline-bash.ts example extension for expanding !{command} patterns in prompts (#881 by @scutifer)
      • Added antigravity-image-gen.ts example extension for AI image generation via Google Antigravity (#893 by @benvargas)
      • Added PI_SHARE_VIEWER_URL environment variable for custom share viewer URLs (#889 by @andresaraujo)
      • Added Alt+Delete as hotkey for delete word forwards (#878 by @Perlence)

      Changed

      • Tree selector: changed label filter shortcut from l to Shift+L so users can search for entries containing "l" (#861 by @mitsuhiko)
      • Fuzzy matching now scores consecutive matches higher for better search relevance (#860 by @mitsuhiko)

      Fixed

      • Fixed error messages showing hardcoded ~/.pi/agent/ paths instead of respecting PI_CODING_AGENT_DIR (#887 by @aliou)
      • Fixed write tool not displaying errors in the UI when execution fails (#856)
      • Fixed HTML export using default theme instead of user's active theme (#870 by @scutifer)
      • Show session name in the footer and terminal / tab title (#876 by @scutifer)
      • Fixed 256color fallback in Terminal.app to prevent color rendering issues (#869 by @Perlence)
      • Fixed viewport tracking and cursor positioning for overlays and content shrink scenarios
      • Fixed autocomplete to allow searches with / characters (e.g., folder1/folder2) (#882 by @richardgill)
      • Fixed autolinked emails displaying redundant (mailto:...) suffix (#888 by @terrorobe)
      • Fixed @ file autocomplete adding space after directories, breaking continued autocomplete into subdirectories
    25. 🔗 @cxiao@infosec.exchange **Content warning:** cwing my further deranged mark carney speech thoughts for mastodon

      Content warning: cwing my further deranged mark carney speech thoughts for the sanctity of ur timeline, also NDP leadership race


      Speaking of the other federal parties having absolutely no foreign policy vision to counter the Liberals: This is what the NDP leadership candidates have said so far about foreign policy as of January 15th. Frankly this is embarrassing. Like if none of the leadership candidates can even produce any coherent thoughts on US relations what are we even doing here.

      Also not in this image: Immigration. But if you check out the summary page with all policy proposals (linked) it's still "No policy released". There should be several slam dunks here about immigrant worker rights, about protection from exploitation, about the concerning rise in anti immigrant sentiment, about the key role immigrants are going to play in nation building, about the building of Canada's own immigration security and border apparatus which is extremely concerning, etc. Having absolutely no thoughts on this is completely unacceptable.

      All of the nice domestic promises in the other policy images, the climate promises, the affordability promises, depend on us having a functional economy, on immigrants to power it, on resources and services from other countries to build it, and on not being blockaded or invaded! Some of the most critical threats to workers in several sectors across the country right now are directly due to tariffs, and to companies' responses to tariffs!

      I hope more comes out before the next leadership debate but right now, if we are going to criticize the Carney speech from the left, it's kind of concerning that the contenders to be leader of the main left party seem to be unable to formulate any international relations thoughts

      https://progresscanada.substack.com/p/tracking-the-policy-commitments-in

      #canada #canpoli #ndp #ndpleadership #ndp2026

    26. 🔗 Console.dev newsletter qmd rss

      Description: CLI search for local Markdown.

      What we like: On-device text search. Indexes anything Markdown. Supports keyword and natural language search. Embeds a local LLM to help with ranking results. Includes an MCP server so you can integrate with your AI of choice. Various output formats (text, JSON, CSV, Markdown).

      What we dislike: Not multi-platform - only works on macOS.

    27. 🔗 Console.dev newsletter RepoBar rss

      Description: Access GitHub from your status bar.

      What we like: Exposes key GitHub primitives in your status bar - issues, PRs, releases, actions, checks, latest activity. Dig into each repo to browse open issues, PRs, etc. Shows local Git state so you can easily switch worktrees. Will automatically discover local repos. Bundles a basic CLI to get the data in your terminal.

      What we dislike: Source is available on GitHub, but there’s no license (yet?).

    28. 🔗 Rust Blog Announcing Rust 1.93.0 rss

      The Rust team is happy to announce a new version of Rust, 1.93.0. Rust is a programming language empowering everyone to build reliable and efficient software.

      If you have a previous version of Rust installed via rustup, you can get 1.93.0 with:

      $ rustup update stable
      

      If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.93.0.

      If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

      What's in 1.93.0 stable

      Update bundled musl to 1.2.5

      The various *-linux-musl targets now all ship with musl 1.2.5. This primarily affects static musl builds for x86_64, aarch64, and powerpc64le which bundled musl 1.2.3. This update comes with several fixes and improvements, and a breaking change that affects the Rust ecosystem.

      For the Rust ecosystem, the primary motivation for this update is to receive major improvements to musl's DNS resolver which shipped in 1.2.4 and received bug fixes in 1.2.5. When using musl targets for static linking, this should make portable Linux binaries that do networking more reliable, particularly in the face of large DNS records and recursive nameservers.

      However, 1.2.4 also comes with a breaking change: the removal of several legacy compatibility symbols that the Rust libc crate was using. A fix for this was shipped in libc 0.2.146 in June 2023 (2.5 years ago), and we believe has sufficiently widely propagated that we're ready to make the change in Rust targets.

      See our previous announcement for more details.

      Allow the global allocator to use thread-local storage

      Rust 1.93 adjusts the internals of the standard library to permit global allocators written in Rust to use std's thread_local! and std::thread::current without re-entrancy concerns by using the system allocator instead.

      See docs for details.

      cfg attributes on asm! lines

      Previously, if individual parts of a section of inline assembly needed to be cfg'd, the full asm! block would need to be repeated with and without that section. In 1.93, cfg can now be applied to individual statements within the asm! block.

      asm!( // or global_asm! or naked_asm!
          "nop",
          #[cfg(target_feature = "sse2")]
          "nop",
          // ...
          #[cfg(target_feature = "sse2")]
          a = const 123, // only used on sse2
      );
      

      Stabilized APIs

      Other changes

      Check out everything that changed in Rust, Cargo, and Clippy.

      Contributors to 1.93.0

      Many people came together to create Rust 1.93.0. We couldn't have done it without all of you. Thanks!

  3. January 21, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-21 rss

      IDA Plugin Updates on 2026-01-21

      New Releases:

      Activity:

    2. 🔗 langchain-ai/deepagents deepagents==0.3.7 release

      Changes since deepagents==0.3.6

      fix(sdk): don't dedent summarization prompt (#870)
      feat(deepagents): truncate old write / edit calls in message history (#806)
      release: patch release 0.3.7 (#869)
      chore(deepagents): add end to end tests to confirm file reducer working properly in state backend (#754)
      chore: improve filesystem and subagents tool descriptions (#807)
      docs(SDK): clarify usage of file system backend (#850)
      nit: standardize naming (#849)
      docs(sdk): FilesystemBackend refs fixes (#791)
      feat: show allowed tools for each skill in skill list (#837)
      fix(sdk): FilesystemMiddleware forward 'name' attribute in large_tool_result from the original tool msg (#825)
      docs(sdk): docstring formatting nits (#824)
      feat: summarization offloading (#742)
      Bump version to 0.3.7a1 (#817)
      Add Async ops to Store Backend (#816)
      chore(deepagents): add tests for grep in end to end tests (#805)
      chore(deepagents): bump langchain in lock file (#800)
      chore(deps): bump the uv group across 5 directories with 1 update (#811)
      fix(deepagents): respect continuation markers when reading files (#809)
      fix(infra): exclude build/ from typechecking (#808)
      feat: support SystemMessage for parity w/ create_agent (#803)
      chore(deepagnets): end to end tests for agent writing editing files (#804)
      fix(sdk): BaseSandbox.ls_info() to return absolute paths (#797)
      fix(deepagents): truncate lines on read (#784)
      chore(deps): bump the uv group across 3 directories with 3 updates (#796)
      fix: refinements for test_summarization (#786)
      docs: fix old URLs (#787)
      docs: add testing readme (#788)
      fix: added error catching for file operations without permissions (#734)
      docs(deepagents): update subagent spec (#785)
      chore(deepagents): add mini eval for summarization (#751)
      docs(sdk): improve FileSystemBackend ref docs (#783)

    3. 🔗 remorses/critique critique@0.1.41 release
      • review command:
        • Filter --resume reviews by current working directory (only shows reviews from cwd or subdirectories)
        • Use ACP unstable_listSessions for OpenCode instead of parsing JSON files directly
        • Falls back to file-based parsing for Claude Code when ACP method unavailable
        • Add instruction to always close code blocks before new text (fixes unclosed diagram blocks)
    4. 🔗 r/LocalLLaMA 8x AMD MI50 32GB at 26 t/s (tg) with MiniMax-M2.1 and 15 t/s (tg) with GLM 4.7 (vllm-gfx906) rss


      • MiniMax-M2.1 AWQ 4bit @ 26.8 tok/s (output) // 3000 tok/s (input of 30k tok) on vllm-gfx906 with MAX context length (196608)
      • GLM 4.7 AWQ 4bit @ 15.6 tok/s (output) // 3000 tok/s (input of 30k tok) on vllm-gfx906 with context length 95000

      GPU cost: $880 for 256GB of VRAM (early 2025 prices)
      Power draw: 280W (idle) / 1200W (inference)
      Goal: reach one of the most cost-effective setups in the world for fast, intelligent local inference.
      Credits: BIG thanks to the global open source community!

      All setup details here: https://github.com/ai-infos/guidances-setup-8-mi50-glm47-minimax-m21/tree/main

      Feel free to ask any questions and/or share any comments.

      PS: a few weeks ago, I posted this setup of 16 MI50s with Deepseek V3.2: https://www.reddit.com/r/LocalLLaMA/comments/1q6n5vl/16x_amd_mi50_32gb_at_10_ts_tg_2k_ts_pp_with/ After a few more tests and development on it, I could have reached 14 tok/s, but it was still not stable after ~18k tokens of context input (generating garbage output), so almost useless for me. The models above (MiniMax M2.1 and GLM 4.7), in contrast, are pretty stable at long context, so they're usable for coding-agent use cases etc.

      submitted by /u/ai-infos
      [link] [comments]

    5. 🔗 remorses/critique critique@0.1.40 release
      • review command:
        • Increased session/review picker limits from 10/20 to 25 for both ACP sessions and --resume
    6. 🔗 remorses/critique critique@0.1.39 release

      0.1.39

      • review command:
        • Enhanced splitting rules in system prompt: never show hunks larger than 10 lines
        • Added files must be split into parts with descriptions for each function/method
        • More aggressive chunk splitting for reduced cognitive load
        • Track review status: in_progress (interrupted) or completed
        • Interrupted reviews saved on Ctrl+C/exit and can be restarted via --resume
        • Use ACP session ID as review ID
        • Show status indicator in review picker (yellow for in progress)
        • JSON file only written on exit/completion to prevent concurrent access issues

      0.1.38

      • review command:
        • Add --resume flag to view previously saved reviews
        • Reviews are automatically saved to ~/.critique/reviews/ on completion
        • Select from recent reviews with interactive picker (ordered by creation time)
        • Resume supports --web flag to generate shareable URL
        • AI now generates a title field in YAML for better review summaries
        • Keeps last 50 reviews, auto-cleans older ones

      0.1.37

      • review command:
        • Add --model <id> option to specify which model to use for review
        • Model format depends on agent:
        • OpenCode: provider/model-id (e.g., anthropic/claude-sonnet-4-20250514)
        • Claude Code: model-id (e.g., claude-sonnet-4-20250514)
        • Shows available models with helpful error message if invalid model specified
    7. 🔗 remorses/critique critique@0.1.36 release
      • review command:
      • Use Unicode filled arrows (▶, ◀, ▌) in diagram examples for proper parsing
        • Use secondary theme color for diagram text (purple in github theme)
    8. 🔗 r/reverseengineering capa in the browser - fully local static analysis to detect binary capabilities and behaviors rss
    9. 🔗 @malcat@infosec.exchange A quick update on Malcat's MacOS development (apple silicon): mastodon

      A quick update on Malcat's MacOS development (apple silicon):

      A couple of visual glitches, but the analysis & UI are now functional \o/

    10. 🔗 r/LocalLLaMA Fix for GLM 4.7 Flash has been merged into llama.cpp rss

      The world is saved! FA for CUDA is in progress: https://github.com/ggml-org/llama.cpp/pull/18953

      submitted by /u/jacek2023
      [link] [comments]

    11. 🔗 r/wiesbaden Wiesbaden Taco Bell Party Details rss

      Location: P+R Parkplatz Berliner Straße near Kasse 4 (next to BRITA-Arena (edit))

      Date/Time: Friday 23 January 2026 at 1730

      Cost: Free! First come, first served. I will accept donations or beer, but they're not required.

      RSVP: Please comment below with the number of people attending, followed by any requests.

      Request for items: You can request items but no guarantees. Cut off time is 1700 on 22 January.

      What am I providing: I will buy an assortment of tacos (hard, soft & Dorito (if in stock)), burritos, quesadillas, etc., along with hot sauce.

      Drinks: BYOB (bring your own beverage). There is no Baja Blast anyway, so you aren't missing out on anything you can't already get.

      submitted by /u/OldBayExorcism
      [link] [comments]

    12. 🔗 @cxiao@infosec.exchange **Content warning:** cwing my further deranged mark carney speech thoughts for mastodon

      Content warning: cwing my further deranged mark carney speech thoughts for the sanctity of ur timeline


      Some good Takes™ from other people on this:

      1) This position of middle-power tightrope-walking, with different alliances among capricious partners for different purposes, is what the Global South has had to deal with for years (including all the countries now dealing with a China that has colonial interests...)
      2) The "when we only negotiate bilaterally with a hegemon, we negotiate from weakness, we compete with each other to be the most accommodating" feels in some ways like...a very subtle shade throw to countries which did not really care when Canada's sovereignty was being threatened last year and who mostly rolled over with trade deals

      #canada #canpoli #carney #davos

    13. 🔗 r/LocalLLaMA vLLM v0.14.0 released rss

      submitted by /u/jinnyjuice
      [link] [comments]

    14. 🔗 @cxiao@infosec.exchange And honestly, the more I think about it, the more I realize that the other mastodon

      And honestly, the more I think about it, the more I realize that the other parties really have no coherent foreign policy, and the more I think that is a huge problem. The CPC's foreign policy is ?????, swinging wildly between loving Trump (for the base) and criticizing the Liberals for not standing up to Trump (but never saying how they themselves would walk that tightrope). The NDP can be forgiven for not having one now. But none of the leadership candidates seem to have any big ideas about how to steer through this dangerous world we're in now. We can't do all of the nice domestic things either of the parties promise, if our sovereignty isn't even guaranteed...

      #canada #canpoli

    15. 🔗 @cxiao@infosec.exchange RE: [https://flipboard.com/@cbcnews/politics-2qr4m137z/-/a-zVLYs- mastodon

      RE: https://flipboard.com/@cbcnews/politics-2qr4m137z/-/a-zVLYs-OUTPKU827dJAL8Nw%3Aa%3A107108217-%2F0

      Yes. This was a really important speech and a really important signal. Whether we can meet this vision is a different question. But I think regardless of that, I think we will all see it, in 3 years, as a turning point.

      I think many of us in Canada who follow the news maybe do not think it is as consequential because some of the latter half of the speech is talking points we have heard before, and because we have been dealing with the US breathing down our necks more directly than others. Carney has been saying to us for a year already that things have fundamentally changed (and we are tired of hearing it from him, and frustrated with how slow the change has been).

      But the clarity of the speech, the direct positioning of Canada as a middle power leader, and the frank assessment of what the global order is like now - it is a huge contrast to what everyone else who is dealing with the US, especially the Europeans, has been saying. Especially with Trump posting last night the image of Greenland, Canada, and Venezuela all covered by the US flag, no one seems to have been conveying an attitude that matched the seriousness of the moment.

      The blunt assessment and lofty vision in the speech set this government up for a very challenging task with sky-high expectations. But honestly, neither the NDP nor the Conservatives have a foreign policy vision that is nearly as cohesive as this, or that comes close to matching the danger of the moment we're in now. That's really bad, and those parties must do better, because there are huge risks with this new vision and we need the opposition to seriously grapple with those risks. It is easy to say "we will simply stand up more to the US", but it is honestly hard to believe the opposition parties when they say this, because there is no coherent vision for how they propose to navigate differently.

      I hate living through an era of Canadian history that feels like it came from my school textbooks. But now this is what we are all dealing with.

      #canada #carney #davos

    16. 🔗 Kagi Waiting for dawn in search: Search index, Google rulings and impact on Kagi rss

      This blog post is a follow-up to Dawn of a new era in Search (https://blog.kagi.com/dawn-new-era-search), published last year.

    17. 🔗 Mitchell Hashimoto Don't Trip[wire] Yourself: Testing Error Recovery in Zig rss
      (empty)
    18. 🔗 Rust Blog crates.io: development update rss

      Time flies! Six months have passed since our last crates.io development update, so it's time for another one. Here's a summary of the most notable changes and improvements made to crates.io over the past six months.

      Security Tab

      Crate pages now have a new "Security" tab that displays security advisories from the RustSec database. This allows you to quickly see if a crate has known vulnerabilities before adding it as a dependency.

      Security Tab Screenshot

      The tab shows known vulnerabilities for the crate along with the affected version ranges.

      This feature is still a work in progress, and we plan to add more functionality in the future. We would like to thank the OpenSSF (Open Source Security Foundation) for funding this work and Dirkjan Ochtman for implementing it.

      Trusted Publishing Enhancements

      In our July 2025 update, we announced Trusted Publishing support for GitHub Actions. Since then, we have made several enhancements to this feature.

      GitLab CI/CD Support

      Trusted Publishing now supports GitLab CI/CD in addition to GitHub Actions. This allows GitLab users to publish crates without managing API tokens, using the same OIDC-based authentication flow.

      Note that this currently only works with GitLab.com. Self-hosted GitLab instances are not supported yet. The crates.io implementation has been refactored to support multiple CI providers, so adding support for other platforms like Codeberg/Forgejo in the future should be straightforward. Contributions are welcome!

      Trusted Publishing Only Mode

      Crate owners can now enforce Trusted Publishing for their crates. When enabled in the crate settings, traditional API token-based publishing is disabled, and only Trusted Publishing can be used to publish new versions. This reduces the risk of unauthorized publishes from leaked API tokens.

      Blocked Triggers

      The pull_request_target and workflow_run GitHub Actions triggers are now blocked from Trusted Publishing. These triggers have been responsible for multiple security incidents in the GitHub Actions ecosystem and are not worth the risk.

      Source Lines of Code

      Crate pages now display source lines of code (SLOC) metrics, giving you insight into the size of a crate before adding it as a dependency. This metric is calculated in a background job after publishing using the tokei crate. It is also shown on OpenGraph images:

      OpenGraph image showing SLOC metric

      Thanks to XAMPPRocky for maintaining the tokei crate!

      Publication Time in Index

      A new pubtime field has been added to crate index entries, recording when each version was published. This enables several use cases:

      • Cargo can implement cooldown periods for new versions in the future
      • Cargo can replay dependency resolution as if it were a past date, though yanked versions remain yanked
      • Services like Renovate can determine release dates without additional API requests

      Thanks to Rene Leonhardt for the suggestion and Ed Page for driving this forward on the Cargo side.
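      As a rough illustration of that last point (my sketch, not anything from the crates.io team): a service could read pubtime straight from the sparse index. The publicationTimes helper below is hypothetical, the path scheme is the standard sparse-index layout, and the exact format of the pubtime value is an assumption.

      // Hypothetical sketch: map each published version of a crate to its
      // `pubtime` by reading the sparse index directly, with no API requests.
      async function publicationTimes(crate: string): Promise<Map<string, string>> {
        const name = crate.toLowerCase();
        // Standard sparse-index path scheme, e.g. "serde" -> se/rd/serde.
        const prefix = name.length >= 4
          ? `${name.slice(0, 2)}/${name.slice(2, 4)}`
          : name.length === 3
          ? `3/${name[0]}`
          : `${name.length}`;
        const response = await fetch(`https://index.crates.io/${prefix}/${name}`);
        const times = new Map<string, string>();
        for (const line of (await response.text()).trim().split("\n")) {
          const entry = JSON.parse(line); // one JSON object per version
          times.set(entry.vers, entry.pubtime); // assumed to be a timestamp string
        }
        return times;
      }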

      Svelte Frontend Migration

      At the end of 2025, the crates.io team evaluated several options for modernizing our frontend and decided to experiment with porting the website to Svelte. The goal is to create a one-to-one port of the existing functionality before adding new features.

      This migration is still considered experimental and is a work in progress. Using a more mainstream framework should make it easier for new contributors to work on the frontend. The new Svelte frontend uses TypeScript and generates type-safe API client code from our OpenAPI description, so types flow from the Rust backend to the TypeScript frontend automatically.

      Thanks to eth3lbert for the helpful reviews and guidance on Svelte best practices. We'll share more details in a future update.

      Miscellaneous

      These were some of the more visible changes to crates.io over the past six months, but a lot has happened "under the hood" as well.

      • Cargo user agent filtering: We noticed that download graphs were showing a constant background level of downloads, even for unpopular crates, due to bots, scrapers, and mirrors. Download counts are now filtered to only include requests from Cargo, providing more accurate statistics.

      • HTML emails: Emails from crates.io now support HTML formatting.

      • Encrypted GitHub tokens: OAuth access tokens from GitHub are now encrypted at rest in the database. While we have no evidence of any abuse, we decided to improve our security posture. The tokens were never included in the daily database dump, and the old unencrypted column has been removed.

      • Source link: Crate pages now display a "Browse source" link in the sidebar that points to the corresponding docs.rs page. Thanks to Carol Nichols for implementing this feature.

      • Fastly CDN: The sparse index at index.crates.io is now served primarily via Fastly to conserve our AWS credits for other use cases. In the past month, static.crates.io served approximately 1.6 PB across 11 billion requests, while index.crates.io served approximately 740 TB across 19 billion requests. A big thank you to Fastly for providing free CDN services through their Fast Forward program!

      • OpenGraph image improvements: We fixed emoji and CJK character rendering in OpenGraph images, which was broken due to missing fonts on our server.

      • Background worker performance: Database indexes were optimized to improve background job processing performance.

      • CloudFront invalidation improvements: Invalidation requests are now batched to avoid hitting AWS rate limits when publishing large workspaces.

      Feedback

      We hope you enjoyed this update on the development of crates.io. If you have any feedback or questions, please let us know on Zulip or GitHub. We are always happy to hear from you and are looking forward to your feedback!

  4. January 20, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-20 rss

      IDA Plugin Updates on 2026-01-20

      New Releases:

      Activity:

    2. 🔗 r/LocalLLaMA Current GLM-4.7-Flash implementation confirmed to be broken in llama.cpp rss

      Recent discussion in https://github.com/ggml-org/llama.cpp/pull/18936 seems to confirm my suspicions that the current llama.cpp implementation of GLM-4.7-Flash is broken.

      There are significant differences in logprobs compared to vLLM. That could explain the looping issues, overthinking, and general poor experiences people have been reporting recently.

      Edit:
      There is a potential fix already in this PR thanks to Piotr:
      https://github.com/ggml-org/llama.cpp/pull/18980

      submitted by /u/Sweet_Albatross9772
      [link] [comments]

    3. 🔗 r/reverseengineering This open-source Windows XP alternative finally gets a much-awaited speed boost rss
    4. 🔗 @HexRaysSA@infosec.exchange We're heading to D.C. for mastodon

      We're heading to D.C. for @DistrictCon this weekend and would love to connect.

      Our Head of Marketing is available to discuss content collaborations and other partnerships, and our Product Evangelist is always eager for product feedback, user insights, and more.

      Book a few minutes with us during the conference: https://meetings-eu1.hubspot.com/justine-benjamin/districtcon-2026

    5. 🔗 r/LocalLLaMA You have 64gb ram and 16gb VRAM; internet is permanently shut off: what 3 models are the ones you use? rss

      No more internet: you have 3 models you can run

      What local models are you using?

      submitted by /u/Adventurous-Gold6413
      [link] [comments]

    6. 🔗 r/reverseengineering Google Meet Reactions: Reverse Engineering the WebRTC Channel for Emoji rss
    7. 🔗 sacha chua :: living an awesome life Emacs and whisper.el :Trying out different speech-to-text backends and models rss

      I was curious about parakeet because I heard that it was faster than Whisper on the HuggingFace leaderboard. When I installed it and got it running on my laptop (CPU only, no GPU), it seemed like my results were a little faster than whisper.cpp with the large model, but much slower than whisper.cpp with the base model. The base model is decent for quick dictation, so I got curious about other backends and other models.

      In order to try natrys/whisper.el with other backends, I needed to work around how whisper.el validates the model names and sends requests to the servers. Here's the quick and dirty code for doing so, in case you want to try it out for yourself.

      (defvar my-whisper-url-format "http://%s:%d/transcribe")
      (defun whisper--transcribe-via-local-server ()
        "Transcribe audio using the local whisper server."
        (message "[-] Transcribing via local server")
        (whisper--setup-mode-line :show 'transcribing)
        (whisper--ensure-server)
        (setq whisper--transcribing-process
              (whisper--process-curl-request
               (format my-whisper-url-format whisper-server-host whisper-server-port)
               (list "Content-Type: multipart/form-data")
               (list (concat "file=@" whisper--temp-file)
                     "temperature=0.0"
                     "temperature_inc=0.2"
                     "response_format=json"
                     (concat "model=" whisper-model)
                     (concat "language=" whisper-language)))))
      (defun whisper--check-model-consistency () t)
      

      Then I have this function for trying things out.

      (defun my-test-whisper-api (url &optional args)
        (with-temp-buffer
          (apply #'call-process "curl" nil t nil "-s"
                 url
                 (append (mapcan
                          (lambda (h) (list "-H" h))
                          (list "Content-Type: multipart/form-data"))
                         (mapcan
                          (lambda (h) (list "-F" h))
                          (list (concat "file=@" whisper--temp-file)
                                "temperature=0.0"
                                "temperature_inc=0.2"
                                "response_format=verbose_json"
                                (concat "language=" whisper-language)))
                         args))
          (message "%s %s" (buffer-string) url)))
      

      Here's the audio file. It is around 10 seconds long. I run the benchmark 3 times and report the average time.


      Code for running the benchmarks
      (mapcar
       (lambda (group)
         ;; Bind the repetition count; per the prose, each benchmark runs 3 times.
         (let ((whisper--temp-file "/home/sacha/recordings/whisper/2026-01-19-14-17-53.wav")
               (times 3))
           ;; warm up the model
           (eval (cadr group))
           (list
            (format "%.3f"
                    (/ (car
                        (benchmark-call (lambda () (eval (cadr group))) times))
                       times))
            (car group))))
       '(
         ("parakeet"
          (my-test-whisper-api
           (format "http://%s:%d/v1/audio/transcriptions" whisper-server-host 5092)))
         ("whisper.cpp base-q4_0"
          (my-test-whisper-api
           (format "http://%s:%d/inference" whisper-server-host 8642)))
         ("speaches whisper-base"
          (my-test-whisper-api
           (format "http://%s:%d/v1/audio/transcriptions" whisper-server-host 8001)
           (list "-F" "model=Systran/faster-whisper-base")))
         ("speaches whisper-base.en"
          (my-test-whisper-api
           (format "http://%s:%d/v1/audio/transcriptions" whisper-server-host 8001)
           (list "-F" "model=Systran/faster-whisper-base.en")))
         ("speaches whisper-small"
          (my-test-whisper-api
           (format "http://%s:%d/v1/audio/transcriptions" whisper-server-host 8001)
           (list "-F" "model=Systran/faster-whisper-small")))
         ("speaches whisper-small.en"
          (my-test-whisper-api
           (format "http://%s:%d/v1/audio/transcriptions" whisper-server-host 8001)
           (list "-F" "model=Systran/faster-whisper-small.en")))
         ("speaches lorneluo/whisper-small-ct2-int8"
          (my-test-whisper-api
           (format "http://%s:%d/v1/audio/transcriptions" whisper-server-host 8001)
           (list "-F" "model=lorneluo/whisper-small-ct2-int8")))
         ;; needed export TORCH_FORCE_NO_WEIGHTS_ONLY_LOAD=1
         ("whisperx-server Systran/faster-whisper-small"
          (my-test-whisper-api
           (format "http://%s:%d/transcribe" whisper-server-host 8002)))))
      
      avg seconds  backend
      3.694        parakeet
      2.484        whisper.cpp base-q4_0
      1.547        speaches whisper-base
      1.425        speaches whisper-base.en
      4.076        speaches whisper-small
      3.735        speaches whisper-small.en
      2.870        speaches lorneluo/whisper-small-ct2-int8
      4.537        whisperx-server Systran/faster-whisper-small

      I tried it with:

      Looks like speaches + faster-whisper-base is the winner for now. I like how speaches lets me switch models on the fly, so maybe I can use base.en generally and switch to base when I want to try dictating in French. Here's how I've configured whisper.el to use that server.

      (setq whisper-server-port 8001 whisper-model "Systran/faster-whisper-base.en"
            my-whisper-url-format "http://%s:%d/v1/audio/transcriptions")
      

      At some point, I'll override whisper--ensure-server so that starting it up is smoother.

      Benchmark notes: I have a Lenovo P52 laptop (released 2018) with an Intel Core i7-8850H (6 cores, 12 threads; 2.6 GHz base / 4.3 GHz turbo) with 64GB RAM and an SSD. I haven't figured out how to get the GPU working under Ubuntu yet.

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    8. 🔗 r/wiesbaden Any fun events this weekend rss

      Hi, I'm new to Germany. I'm a 19-year-old male. I'm interested in anything from a big fun soccer game to a small party, just let me know.

      submitted by /u/GuavaCool4628
      [link] [comments]

    9. 🔗 r/LocalLLaMA Liquid AI released the best thinking Language Model Under 1GB rss

      Liquid AI released LFM2.5-1.2B-Thinking, a reasoning model that runs entirely on-device. What needed a data centre two years ago now runs on any phone with 900 MB of memory.

      -> Trained specifically for concise reasoning
      -> Generates internal thinking traces before producing answers
      -> Enables systematic problem-solving at edge-scale latency
      -> Shines on tool use, math, and instruction following
      -> Matches or exceeds Qwen3-1.7B (thinking mode) across most performance benchmarks, despite having 40% fewer parameters. At inference time, the gap widens further, outperforming both pure transformer models and hybrid architectures in speed and memory efficiency.

      LFM2.5-1.2B-Thinking is available today, with broad day-one support across the on-device ecosystem.
      Hugging Face: https://huggingface.co/LiquidAI/LFM2.5-1.2B-Thinking
      LEAP: https://leap.liquid.ai/models?model=lfm2.5-1.2b-thinking
      Liquid AI Playground: https://playground.liquid.ai/login?callbackUrl=%2F

      submitted by /u/PauLabartaBajo
      [link] [comments]

    10. 🔗 r/LocalLLaMA 768Gb Fully Enclosed 10x GPU Mobile AI Build rss

      I haven't seen a system with this format before, but with how successful the result was, I figured I might as well share it.

      Specs:

      • Threadripper Pro 3995WX w/ ASUS WS WRX80E-SAGE WiFi II
      • 512GB DDR4
      • 256GB GDDR6X/GDDR7 (8x 3090 + 2x 5090)
      • EVGA 1600W + ASRock 1300W PSUs
      • Case: Thermaltake Core W200
      • OS: Ubuntu
      • Est. expense: ~$17k

      The objective was to make a system for running extra-large MoE models (Deepseek and Kimi K2 specifically) that is also capable of lengthy video generation and rapid high-detail image gen (the system will be supporting a graphic designer).

      The challenges/constraints: The system should be easily movable, and it should be enclosed. The result technically satisfies the requirements, with only one minor caveat. Capital expense was also an implied constraint. We wanted to get the most potent system possible with the best technology currently available, without going down the path of needlessly spending tens of thousands of dollars for diminishing returns on performance/quality/creativity potential. Going all 5090's or 6000 PRO's would have been unfeasible budget-wise and in the end likely unnecessary; two 6000's alone could have eaten the cost of the entire amount spent on the project, and if not for the two 5090's the final expense would have been much closer to ~$10k (still an extremely capable system, but this graphic artist would really benefit from the image/video gen time savings that only a 5090 can provide).

      The biggest hurdle was the enclosure problem. I've seen mining frames zip-tied to a rack on wheels as a solution for mobility, but not only is this aesthetically unappealing, build construction and sturdiness quickly get called into question. This system would be living under the same roof with multiple cats, so an enclosure was almost beyond a nice-to-have: the hardware needs a physical barrier between the expensive components and curious paws. Mining frames were quickly ruled out altogether after a failed experiment.

      Enter the W200, a platform that I'm frankly surprised I haven't heard suggested before in forum discussions about planning multi-GPU builds, and which is the main motivation for this post. The W200 is intended to be a dual-system enclosure, but when the motherboard is installed upside-down in its secondary compartment, this makes a perfect orientation to connect risers to mounted GPU's in the "main" compartment. If you don't mind working in dense compartments to get everything situated (the sheer density of the system is among its only drawbacks), this approach reduces the jank from mining frame + wheeled rack solutions significantly. A few zip ties were still required to secure GPU's in certain places, but I don't feel remotely as anxious about moving the system to a different room or letting cats inspect my work as I would if it were any other configuration.

      Now the caveat. Because of the specific GPU choices made (3x of the 3090's are AIO hybrids), this required putting one of the W200's fan mounting rails on the main compartment side in order to mount their radiators (pic shown with the glass panel open, but it can be closed all the way). This means the system technically should not run without this panel at least slightly open so it doesn't impede exhaust, but if these AIO 3090's were blower/air cooled, I see no reason why this couldn't run fully closed all the time as long as fresh air intake is adequate. The final case pic shows the compartment where the actual motherboard is installed (it is, however, very dense with risers and connectors, so unfortunately it is hard to actually see much of anything) where I removed one of the 5090's.

      Airflow is very good overall (I believe 12x 140mm fans were installed throughout), GPU temps remain in good operating range under load, and it is surprisingly quiet when inferencing. Honestly, given how many fans and high-power GPU's are in this thing, I am impressed by the acoustics; I don't have a sound meter to measure dB's, but to me it doesn't seem much louder than my gaming rig. I typically power limit the 3090's to 200-250W and the 5090's to 500W depending on the workload.

      Benchmarks:

      • Deepseek V3.1 Terminus Q2XXS (100% GPU offload): 2338 tokens generated, 1.38s to first token, 24.92 tps
      • GLM 4.6 Q4KXL (100% GPU offload): 4096 tokens generated, 0.76s to first token, 26.61 tps
      • Kimi K2 TQ1 (87% GPU offload): 1664 tokens generated, 2.59s to first token, 19.61 tps
      • Hermes 4 405b Q3KXL (100% GPU offload): tokens generated not recorded (was so underwhelmed by the response quality I forgot, lol), 1.13s to first token, 3.52 tps
      • Qwen 235b Q6KXL (100% GPU offload): 3081 tokens generated, 0.42s to first token, 31.54 tps

      I've thought about doing a cost breakdown here, but with price volatility and the fact that so many components have gone up since I got them, I feel like there wouldn't be much of a point and it may only mislead someone. Current RAM prices alone would change the estimated cost of doing the same build today by several thousand dollars. Still, I thought I'd share my approach on the off chance it inspires or is interesting to someone.

      submitted by /u/SweetHomeAbalama0
      [link] [comments]

    11. 🔗 r/wiesbaden Freizeitfußball / casual football rss

      Spielt hier jemand Freizeitfußball? Mein Verlobter (33) ist gerade nach Wiesbaden gezogen und hat gesagt, dass er gerne ein- oder zweimal pro Woche spielen wĂŒrde.

      —-

      Is anyone here playing casual football? My fiancĂ© (33M) has just moved to Wiesbaden and said he’d like to play once or twice a week.

      submitted by /u/SillyRate1329
      [link] [comments]

    12. 🔗 r/reverseengineering frida-ipa-extract rss
    13. 🔗 r/LocalLLaMA It's been one year since the release of Deepseek-R1 rss

      submitted by /u/Recoil42
      [link] [comments]

    14. 🔗 r/reverseengineering I have made an app to collect, decompile apk with apktool and jadx to have a reference, recompile it, sign it, zipalign it and install it. rss
    15. 🔗 @cxiao@infosec.exchange "To Every American Who's Sorry" mastodon

      "To Every American Who's Sorry"

      https://www.reddit.com/r/greenland/comments/1qhhijq/to_every_american_whos_sorry/

      We see similar behaviour from Americans in Canadian online spaces, and offline as well (for example, https://www.ctvnews.ca/vancouver/article/anonymous-american-apologizes-to-canadians-on-vancouver-billboards/).

      I agree with this post: It is annoying, tiring to deal with, and not useful. It serves only the purpose of making Americans feel better by dumping their guilt externally. Americans should redirect their energy elsewhere.

      #greenland #canada

    16. 🔗 r/LocalLLaMA Bartowski comes through again. GLM 4.7 flash GGUF rss
    17. 🔗 @cxiao@infosec.exchange RE: mastodon

      RE: https://flipboard.com/@cbcnews/calgary-s2m5l3ffz/-/a-4N3WgbkgTGaoZbWQVTDYZQ%3Aa%3A107108217-%2F0

      "cyber threats are a risk"

      looks inside

      "The report states the City of Calgary's rate of clicking on malicious links between May and August 2024 was up to 15 times higher than other regional or similar-sized organizations."

      😭 how the f is this a valid way of measuring cyber risk in the year of our lord 2026

    18. 🔗 r/LocalLLaMA Unsloth GLM 4.7-Flash GGUF rss
    19. 🔗 r/reverseengineering On the Coming Industrialisation of Exploit Generation with LLMs rss
    20. 🔗 r/reverseengineering Conditions in the Intel 8087 floating-point chip's microcode rss
    21. 🔗 matklad Vibecoding #2 rss

      Vibecoding #2

      Jan 20, 2026

      I feel like I got substantial value out of Claude today, and want to document it. I am at the tail end of AI adoption, so I don’t expect to say anything particularly useful or novel. However, I am constantly complaining about the lack of boring AI posts, so it’s only proper if I write one.

      Problem Statement

      At TigerBeetle, we are big on deterministic simulation testing. We even use it to track performance, to some degree. Still, it is crucial to verify performance numbers on a real cluster in its natural high-altitude habitat.

      To do that, you need to procure six machines in a cloud, get your custom version of the tigerbeetle binary onto them, connect the cluster’s replicas together, and hit them with load. It feels like, a quarter of a century into the third millennium, “run stuff on six machines” should be a problem just a notch harder than opening a terminal and typing ls, but I personally don’t know how to solve it without wasting a day. So, I spent a day vibecoding my own square wheel.

      The general shape of the problem is that I want to spin up a fleet of ephemeral machines with given specs on demand and run ad-hoc commands on them in a SIMD fashion. I don’t want to manually type slightly different commands into a six-way terminal split, but I do want to be able to ssh into a specific box and poke around.

      Solution

      My idea for the solution comes from these three sources:

      The big idea of rsyscall is that you can program a distributed system in direct style. When programming locally, you do things by issuing syscalls:

      const fd = open("/etc/passwd");
      

      This API works for doing things on remote machines, if you specify which machine you want to run the syscall on:

      const fd_local = open(.host, "/etc/passwd");
      const fd_cloud = open(.{.addr = "1.2.3.4"}, "/etc/passwd");
      

      Direct manipulation is the most natural API, and it pays to extend it over the network boundary.


      Peter’s post is an application of a similar idea to the narrow, mundane task of developing on a Mac and testing on Linux. Peter suggests two scripts:

      remote-sync synchronizes the local and remote projects. If you run remote-sync inside the ~/p/tb folder, then ~/p/tb materializes on the remote machine. rsync does the heavy lifting, and the wrapper script implements DWIM behaviors.

      It is typically followed by remote-run some --command, which runs the command on the remote machine in the matching directory, forwarding output back to you.

      So, when I want to test local changes to tigerbeetle on my Linux box, I have roughly the following shell session:

      $ cd ~/p/tb/work
      $ code . # hack here
      $ remote-sync
      $ remote-run ./zig/zig build test
      

      The killer feature is that shell completion works. I first type the command I want to run, taking advantage of the fact that local and remote commands are the same, paths and all, then hit ^A and prepend remote-run (in reality, I have an rr alias that combines sync&run).

      The big thing here is not the commands per se, but the shift in the mental model. In a traditional ssh & vim setup, you have to juggle two machines with separate state, the local one and the remote one. With remote-sync, the state is the same across the machines; you only choose whether you want to run commands here or there.

      With just two machines, the difference feels academic. But if you want to run your tests across six machines, the ssh approach fails — you don’t want to re-vim your changes to source files six times, you really do want to separate the place where the code is edited from the place(s) where the code is run. This is a general pattern — if you are not sure about a particular aspect of your design, try increasing the cardinality of the core abstraction from 1 to 2.


      The third component, the dax library, is pretty mundane — just a JavaScript library for shell scripting. The notable aspects are:

      • JavaScript’s template literals, which allow implementing command interpolation in a safe-by-construction way. When processing $`ls ${paths}`, a string is never materialized; it’s arrays all the way to the exec syscall (more on the topic). See the sketch after this list.

      • JavaScript’s async/await, which makes managing concurrent processes (local or remote) natural:

        await Promise.all([
          $`sleep 5`,
          $`remote-run sleep 5`,
        ]);

      • Additionally, deno specifically valiantly strives to impose process-level structured concurrency, ensuring that no processes spawned by the script outlive the script itself, unless explicitly marked detached — a sore spot of UNIX.
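      A minimal self-contained sketch of these two properties, assuming deno and dax's default export (the import specifier is my best guess at dax's published name):

      import $ from "jsr:@david/dax";

      // Safe-by-construction interpolation: the value becomes a single argv
      // entry of the child process; no shell ever re-parses it.
      const name = "file with spaces.txt";
      await $`touch ${name}`;
      await $`ls ${name}`;

      // Direct-style concurrency over processes: await both at once.
      await Promise.all([
        $`sleep 1`,
        $`echo one second elapsed`,
      ]);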


      Combining the three ideas, I now have a deno script, called box, that provides a multiplexed interface for running ad-hoc code on ad-hoc clusters.

      A session looks like this:

      # Switch to project with local modifications
      $ cd ~/p/tb/work
      $ git status --short
       M src/lsm/forest.zig
      
      # Spin up 3 machines, print their IPs
      $ box create 3
      108.129.172.206,52.214.229.222,3.251.67.25
      
      $ box list
      0 108.129.172.206
      1 52.214.229.222
      2 3.251.67.25
      
      # Move my code to remote machines
      $ box sync 0,1,2
      
      # Run pwd&ls on machine 0; now the code is there:
      $ box run 0 pwd
      /home/alpine/p/tb/work
      
      $ box run 0 ls
      CHANGELOG.md  LICENSE       README.md     build.zig
      docs/         src/          zig/
      
      # Setup dev env and run build on all three machines.
      $ box run 0,1,2 ./zig/download.sh
      Downloading Zig 0.14.1 release build...
      Extracting zig-x86_64-linux-0.14.1.tar.xz...
      Downloading completed (/home/alpine/p/tb/work/zig/zig)!
      Enjoy!
      
      # NB: using local commit hash here (no git _there_).
      $ box run 0,1,2 \
          ./zig/zig build -Drelease -Dgit-commit=$(git rev-parse HEAD)
      
      # ?? is replaced by machine id
      $ box run 0,1,2 \
          ./zig-out/bin/tigerbeetle format \
          --cluster=0 --replica=?? --replica-count=3 \
          0_??.tigerbeetle
      2026-01-20 19:30:15.947Z info(io): opening "0_0.tigerbeetle"...
      
      # Cleanup machines (they also shutdown themselves after 8 hours)
      $ box destroy 0,1,2
      

      I like this! I haven’t used it in anger yet, but this is something I have wanted for a long time, and now I have it.

      Structure

      The problem with implementing the above is that I have zero practical experience with the modern cloud. I only created my AWS account today, and just looking at the console interface ignited the urge to re-read The Castle. Not my cup of pu-erh. But I had a hypothesis that AI should be good at wrangling baroque cloud APIs, and it mostly held.

      I started with a couple of paragraphs of rough, super high-level description of what I want to get. Not a specification at all, just a general gesture towards unknown unknowns. Then I asked ChatGPT to expand those two paragraphs into a more or less complete spec to hand down to an agent for implementation.

      This phase surfaced a bunch of unknowns for me. For example, I wasn’t thinking at all about how to identify machines; ChatGPT suggested using random hex numbers, and I realized that I do need the 0,1,2 naming scheme to concisely specify batches of machines. While thinking about this, I realized that a sequential numbering scheme also has the advantage that I can’t have two concurrent clusters running, which is a desirable property for my use case. If I forget to shut down a machine, I’d rather get an error when trying to re-create a machine with the same name than silently avoid the clash. Similarly, it turns out the questions of permissions and network access rules are something to think about, as well as what region and what image I need.

      With the spec document in hand, I turned to Claude Code for the actual implementation work. The first step was to further refine the spec, asking Claude if anything was unclear. There were a couple of interesting clarifications there.

      First, the original ChatGPT spec didn’t get what I meant by my “current directory mapping” idea: that I want to materialize a local ~/p/tb/work as a remote ~/p/tb/work, even if the ~s are different. ChatGPT generated an incorrect description and an incorrect example. I manually corrected the example, but wasn’t able to write a concise and correct description. Claude fixed that, working from the example. I feel like I need to internalize this more — for the current crop of AI, examples seem to be far more valuable than rules.

      Second, the spec included my desire to auto-shutdown machines once I no longer use them, just to make sure I don’t forget to turn the lights off when leaving the room. Claude grilled me on what precisely I want there, and I asked it to DWIM the thing.

      The spec ended up being 6KiB of English prose. The final implementation was 14KiB of TypeScript. I wasn’t keeping the spec and the implementation perfectly in sync, but I think they ended up pretty close in the end. Which means that prose specifications are somewhat more compact than code, but not much more compact.

      My next step was to try to just one-shot this. Ok, this is embarrassing, and I usually avoid swearing in this blog, but I just typoed that as “one-shit”, and, well, that is one flavorful description I won’t be able to improve upon. The result was just not good (more on why later), so I almost immediately decided to throw it away and start a more incremental approach.

      In my previous vibe-post, I noticed that LLMs are good at closing the loop. A variation here is that LLMs are good at producing results, and not necessarily good code. I am pretty sure that, if I had let the agent iterate on the initial script and actually run it against AWS, I would have gotten something working. I didn’t want to go that way for three reasons:

      • Spawning VMs takes time, and that significantly reduces the throughput of agentic iteration.
      • No way I’d let the agent run with a real AWS account, given that AWS doesn’t have a fool-proof way to cap costs.
      • I am fairly confident that this script will be a part of my workflow for at least several years, so I care more about long-term code maintenance than the immediate result.

      And, as I said, the code didn’t feel good, for these specific reasons:

      • It wasn’t the code that I would have written, it lacked my character, which made it hard for me to understand it at a glance.
      • The code lacked any character whatsoever. It could have worked, it wasn’t “naively bad”, like the first code you write when you are learning programming, but there wasn’t anything good there.
      • I never know what the code should be up-front. I don’t design solutions, I discover them in the process of refactoring. Some of my best work was spending a quiet weekend rewriting large subsystems implemented before me, because, with an implementation at hand, it was possible for me to see the actual, beautiful core of what needs to be done. With a slop-dump, I just don’t get to even see what could be wrong.
      • In particular, while you are working the code (as in “wrought iron”), you often go back to the requirements and change them. Remember the ambiguity of my request to “shut down the idle cluster”? Claude tried to DWIM and created some horrific mess of bash scripts, timestamp files, PAM policy, and systemd units. But the right answer there was “let’s maybe not have that feature?” (in contrast, simply shutting the machine down after 8 hours is a one-liner).

      The incremental approach worked much better; Claude is good at filling in the blanks. The very first thing I did for box-v2 was manually typing in:

      type CLI =
        | CLICreate
        | CLIDestroy
        | CLIList
        | CLISync
      
      type BoxList = string[];
      type CLICreate = { tag: "create"; count: number };
      type CLIDestroy = { tag: "destroy"; boxes: BoxList };
      type CLIList = { tag: "list" };
      type CLISync = { tag: "sync"; boxes: BoxList; };
      
      function fatal(message: string): never {
        console.error(message);
        Deno.exit(1);
      }
      
      function CLIParse(args: string[]): CLI {
      
      }
      

      Then I asked Claude to complete the CLIParse function, and I was happy with the result. Note the “Show, Don’t Tell” at work here:

      I am not asking Claude to avoid throwing an exception and to fail fast instead. I just give it the fatal function, and it code-completes the rest.

      I can’t say that the code inside CLIParse is top-notch. I’d probably have written something more spartan. But the important part is that, at this level, I don’t care. The abstraction for parsing CLI arguments feels right to me, and the details I can always fix later. This is how this overall vibe-coding session transpired — I was providing structure, Claude was painting by the numbers.
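      For concreteness, the filled-in body was roughly of this shape (my reconstruction under the types above, not the actual generated code):

      function CLIParse(args: string[]): CLI {
        const [command, ...rest] = args;
        switch (command) {
          case "create": {
            const count = Number(rest[0]);
            if (!Number.isInteger(count) || count < 1) fatal("usage: box create <count>");
            return { tag: "create", count };
          }
          case "destroy":
            if (rest.length !== 1) fatal("usage: box destroy <boxes>");
            return { tag: "destroy", boxes: rest[0].split(",") };
          case "list":
            return { tag: "list" };
          case "sync":
            if (rest.length !== 1) fatal("usage: box sync <boxes>");
            return { tag: "sync", boxes: rest[0].split(",") };
          default:
            fatal(`unknown command: ${command ?? "<none>"}`);
        }
      }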

      In particular, with that CLI parsing structure in place, Claude had little problem adding new subcommands and new arguments in a satisfactory way. The only snag was that, when I asked it to add an optional path to sync, it went with string | null, while I strongly prefer string | undefined. Obviously, it’s better to pick your null in JavaScript and stick with it. The fact that undefined is unavoidable predetermines the winner. Given that the argument was added as an incremental small change, course-correcting was trivial.

      The null vs undefined issue perhaps illustrates my complaint about the code lacking character. | null is the default non-choice. | undefined is an insight, which I personally learned from VS Code LSP implementation.

      The hand-written skeleton/vibe-coded guts worked not only for the CLI. I wrote

      async function main() {
        const cli = CLIParse(Deno.args);
      
        if (cli.tag === "create") return await mainCreate(cli.count);
        if (cli.tag === "destroy") return await mainDestroy(cli.boxes);
        ...
      }
      
      async function mainDestroy(boxes: string[]) {
        for (const box of boxes) {
          await instanceDestroy(box);
        }
      }
      
      async function instanceDestroy(id: string) {
      
      }
      

      and then asked Claude to write the body of a particular function according to the SPEC.md.

      Unlike with the CLI, Claude wasn’t able to follow this pattern itself. With one example it’s not obvious, but the overall structure is that instanceXXX is the AWS-level operation on a single box, and mainXXX is the CLI-level control flow that deals with looping and parallelism. When I asked Claude to implement box run, without myself doing the main / instance split, Claude failed to notice it and needed a course correction.
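      To make the split concrete, here is a hypothetical sketch of the two layers for box run (the alpine user and the ?? substitution come from the session above; instanceResolve stands in for the per-box AWS lookup, and $ is dax's command runner):

      // mainXXX: CLI-level control flow, i.e. looping and parallelism.
      async function mainRun(boxes: string[], command: string[]) {
        await Promise.all(boxes.map((box) => instanceRun(box, command)));
      }

      // instanceXXX: the operation against a single box.
      async function instanceRun(box: string, command: string[]) {
        const ip = await instanceResolve(box); // hypothetical: symbolic name -> IP
        const argv = command.map((arg) => arg.replaceAll("??", box)); // ?? -> machine id
        await $`ssh alpine@${ip} ${argv}`;
      }

      declare function instanceResolve(box: string): Promise<string>;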

      Implementation

      However, Claude was massively successful with the actual logic. It would have taken me hours to acquire the specific, non-reusable knowledge to write:

      // Create spot instance
      const instanceMarketOptions = JSON.stringify({
        MarketType: "spot",
        SpotOptions: { InstanceInterruptionBehavior: "terminate" },
      });
      const tagSpecifications = JSON.stringify([
        { ResourceType: "instance", Tags: [{ Key: moniker, Value: id }] },
      ]);
      
      const result = await $`aws ec2 run-instances \
        --image-id ${image} \
        --instance-type ${instanceType} \
        --key-name ${moniker} \
        --security-groups ${moniker} \
        --instance-market-options ${instanceMarketOptions} \
        --user-data ${userDataBase64} \
        --tag-specifications ${tagSpecifications} \
        --output json`.json();
      
      const instanceId = result.Instances[0].InstanceId;
      
      // Wait for instance to be running
      await $`aws ec2 wait instance-status-ok --instance-ids ${instanceId}`;
      

      I want to be careful — I can’t vouch for correctness and especially completeness of the above snippet. However, given that the nature of the problem is such that I can just run the code and see the result, I am fine with it. If I were writing this myself, trial-and-error would totally be my approach as well.

      Then there’s synthesis — with several instance commands implemented, I noticed that many started by querying AWS to resolve a symbolic machine name, like “1”, to the AWS name/IP. At that point I realized that resolving symbolic names is a fundamental part of the problem, and that it should only happen once, which resulted in the following refactored shape of the code:

      async function main() {
        const cli = CLIParse(Deno.args);
        const instances = await instanceMap();
      
        if (cli.tag === "create") return await mainCreate(instances, cli.count);
        if (cli.tag === "destroy") return await mainDestroy(instances, cli.boxes);
        ...
      }
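      An instanceMap along these lines might look as follows (hypothetical sketch: the moniker tag matches the run-instances snippet above; the rest is my assumption about the shape, not the actual code):

      // Query AWS once, mapping symbolic names ("0", "1", ...) to public IPs
      // via the tag written at creation time.
      async function instanceMap(): Promise<Map<string, string>> {
        const result = await $`aws ec2 describe-instances \
          --filters ${"Name=tag-key,Values=" + moniker} \
                    ${"Name=instance-state-name,Values=running"} \
          --output json`.json();
        const map = new Map<string, string>();
        for (const reservation of result.Reservations) {
          for (const instance of reservation.Instances) {
            const tag = instance.Tags.find((t: { Key: string }) => t.Key === moniker);
            if (tag) map.set(tag.Value, instance.PublicIpAddress);
          }
        }
        return map;
      }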
      

      Claude was ok with extracting the logic, but messed up the overall code layout, so the final code motions were on me. “Context” arguments go first, not last; a common prefix is more valuable than a common suffix because of visual alignment.

      The original “one-shotted” implementation also didn’t do the up-front querying. This is an example of the shape of a problem I only discover when working closely with the code.


      Of course, the script didn’t work perfectly the first time, and we needed quite a few iterations on the real machines, both to fix coding bugs and to close gaps in the spec. That was an interesting experience of speed-running rookie mistakes. Claude made naive bugs, but was also good at fixing them.

      For example, when I first tried to box ssh after box create, I got an error. Pasting it into Claude immediately showed the problem. Originally, the code was doing aws ec2 wait instance-running and not aws ec2 wait instance-status-ok.

      The former checks that the instance is logically created; the latter waits until the OS is booted. It makes sense that these two exist, and the difference is clear (and it’s also clear that OS booted != SSH daemon started). Claude’s value here is in providing specific names for the concepts I already know to exist.

      Another fun one was about the disk. I noticed that, while the instance had an SSD, it wasn’t actually used. I asked Claude to mount it as home, but that didn’t work. Claude immediately asked me to run $ box run 0 cat /var/some/unintuitive/long/path.log, and that log immediately showed the problem. This is remarkable! 50% of my typical Linux debugging day is wasted not knowing that a useful log exists, and the other 50% is spent searching for the log I know should exist somewhere.

      After the fix, I lost the ability to SSH. Pasting the error immediately gave the answer — by mounting over /home, we were overwriting ssh keys configured prior.

      There were a couple more iterations like that. Rookie mistakes were made, but they were debugged and fixed much faster than my personal knowledge allows (and again, I feel that is trivia knowledge rather than deep reusable knowledge, so I am happy to delegate it!).

      It worked satisfactorily in the end, and, what’s more, I am happy to maintain the code, at least to the extent that I personally need it. Kinda hard to measure the productivity boost here, but, given just the sheer number of CLI flags required to make this work, I am pretty confident that time was saved, even factoring in the writing of the present article!

      Coda

      I’ve recently read The Art of Doing Science and Engineering by Hamming (of distance and code), and one story stuck with me:

      A psychologist friend at Bell Telephone Laboratories once built a machine with about 12 switches and a red and a green light. You set the switches, pushed a button, and either you got a red or a green light. After the first person tried it 20 times they wrote a theory of how to make the green light come on. The theory was given to the next victim and they had their 20 tries and wrote their theory, and so on endlessly. The stated purpose of the test was to study how theories evolved.

      But my friend, being the kind of person he was, had connected the lights to a random source! One day he observed to me that no person in all the tests (and they were all high-class Bell Telephone Laboratories scientists) ever said there was no message. I promptly observed to him that not one of them was either a statistician or an information theorist, the two classes of people who are intimately familiar with randomness. A check revealed I was right!