

to read (pdf)

  1. No management needed: anti-patterns in early-stage engineering teams | Antoine Boulanger
  2. Reconstructing Program Semantics from Go Binaries
  3. Long time ago, I was looking for game with some hidden rules, browsing random wi... | Hacker News
  4. Keychron’s Nape Pro turns your mechanical keyboard into a laptop‑style trackball rig: Hands-on at CES 2026 - Yanko Design
  5. The Code-Only Agent • Rijnard van Tonder

  1. January 19, 2026
    1. 🔗 HexRaysSA/ida-domain v0.4.0 release

      v0.4.0 (2026-01-19)

      This release is published under the MIT License.


      Detailed Changes: v0.3.6-dev.3...v0.4.0

    2. 🔗 roboflow/supervision 0.28.0rc0: Debugging workflow permissions and add PR dry-run (#2083) release
      • Debugging workflow permissions and add PR dry-run

      • ls all

      • path: dist/

    3. 🔗 r/wiesbaden POL-PDLD: Public appeal for information on Soliman H., absconded from the Klingenmünster forensic psychiatric facility rss
    4. 🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator

    5. 🔗 jellyfin/jellyfin 10.11.6 release

      🚀 Jellyfin Server 10.11.6

      We are pleased to announce the latest stable release of Jellyfin, version 10.11.6! This minor release brings several bugfixes to improve your Jellyfin experience. As always, please ensure you take a full backup before upgrading!

      You can find more details about this release, and discuss it, on our forums.


    6. 🔗 @cxiao@infosec.exchange oh it's [#KpopMonday](https://infosec.exchange/tags/KpopMonday) and the theme mastodon

      oh it's #KpopMonday and the theme is #SharpObjects, this is my perfect excuse to post DUMB LITTY

      https://youtu.be/W01_e6hw288

      jiwoo sword changed my life

      #KARD #kpop

  2. January 18, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-18 rss


    2. 🔗 badlogic/pi-mono v0.49.1 release

      Added

      • Added strictResponsesPairing compat option for custom OpenAI Responses models on Azure (#768 by @nicobako)
      • Session selector (/resume) now supports path display toggle (Ctrl+P) and session deletion (Ctrl+D) with inline confirmation (#816 by @w-winter)
      • Added undo support in interactive mode with Ctrl+- hotkey. (#831 by @Perlence)

      Changed

      • Share URLs now use hash fragments (#) instead of query strings (?) to prevent session IDs from being sent to buildwithpi.ai (#829 by @terrorobe); see the short illustration after this list
      • API keys in models.json can now be retrieved via shell command using ! prefix (e.g., "apiKey": "!security find-generic-password -ws 'anthropic'" for macOS Keychain) (#762 by @cv)
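
      The fragment approach works because browsers never transmit the part of a URL after # in an HTTP request; only the query string reaches the server. A minimal Python illustration (both URLs here are hypothetical):

      from urllib.parse import urlsplit

      # Hypothetical share URLs, for illustration only.
      query_url = "https://buildwithpi.ai/share?session=abc123"
      hash_url = "https://buildwithpi.ai/share#session=abc123"

      # The query string is part of what the browser sends to the server;
      # the fragment never leaves the client.
      print(urlsplit(query_url).query)     # "session=abc123" -> sent to the server
      print(urlsplit(hash_url).query)      # "" -> nothing sensitive in the request
      print(urlsplit(hash_url).fragment)   # "session=abc123" -> stays in the browser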

      Fixed

      • Fixed IME candidate window appearing in wrong position when filtering menus with Input Method Editor (e.g., Chinese IME). Components with search inputs now properly propagate focus state for cursor positioning. (#827)
      • Fixed extension shortcut conflicts to respect user keybindings when built-in actions are remapped. (#826 by @richardgill)
      • Fixed photon WASM loading in standalone compiled binaries.
      • Fixed tool call ID normalization for cross-provider handoffs (e.g., Codex to Antigravity Claude) (#821)
    3. 🔗 r/wiesbaden Parking in the Westend? rss

      Hello, we just moved to Wiesbaden and are shocked by the parking situation. We don't have a resident parking permit yet, and there is no parking at our apartment. Are there any options for longer-term parking nearby? We don't need the car every day.

      We'd be grateful for any tip. :)

      We lived in Hamburg before and thought it was bad there - we thought wrong :D

      submitted by /u/spicyspicegirl1

    4. 🔗 r/wiesbaden New to Wiesbaden rss

      I moved from America. I've been here for about a week now, and I'm ready to start going out; I need new friends who are willing to show me all the best bars and food spots.

      I only know a little German right now, and I'm trying to learn more.

      submitted by /u/GuavaCool4628

    5. 🔗 @cxiao@infosec.exchange Now they say you can go back to China without a visa, but there will definitely be some other restrictions mastodon

      Now they say you can go back to China without a visa, but there will definitely be some other restrictions

      #canada

    6. 🔗 @cxiao@infosec.exchange kind of interesting that the message about visa free travel to china is tied mastodon

      kind of interesting that the message about visa free travel to china is tied to us (chinese-canadians) being able to visit relatives

      https://bsky.app/profile/mark-carney.bsky.social/post/3mcneqhofes23

      #canada

    7. 🔗 r/LocalLLaMA 4x AMD R9700 (128GB VRAM) + Threadripper 9955WX Build rss

      Disclaimer: I am from Germany and my English is not perfect, so I used an LLM to help me structure and write this post.

      Context & Motivation: I built this system for my small company. The main reason for the all-new hardware is that I received a 50% subsidy/refund from my local municipality for digitalization investments. To qualify for this funding, I had to buy new hardware and build a proper "server-grade" system. My goal was to run large models (120B+) locally for data privacy. With the subsidy in mind, I had a budget of around 10,000€ (pre-refund). I initially considered NVIDIA, but I wanted to maximize VRAM, so I went with 4x AMD RDNA4 cards (ASRock R9700) to get 128GB VRAM total and used the rest of the budget for a solid Threadripper platform.

      Hardware Specs: Total cost ~9,800€ (I get ~50% back, so effectively ~4,900€ for me).

      • CPU: AMD Ryzen Threadripper PRO 9955WX (16 Cores)
      • Mainboard: ASRock WRX90 WS EVO
      • RAM: 128GB DDR5 5600MHz
      • GPU: 4x ASRock Radeon AI PRO R9700 32GB (Total 128GB VRAM)
        • Configuration: All cards running at full PCIe 5.0 x16 bandwidth.
      • Storage: 2x 2TB PCIe 4.0 SSD
      • PSU: Seasonic 2200W
      • Cooling: Alphacool Eisbaer Pro Aurora 360 CPU AIO
      • Case: PHANTEKS Enthoo Pro 2 Server
      • Fans: 11x Arctic P12 Pro

      Benchmark Results: I tested various models ranging from 8B to 230B parameters.

      llama.cpp (focus: single-user latency). Settings: Flash Attention ON, Batch 2048

      Model | NGL | Prompt t/s | Gen t/s | Size
      ---|---|---|---|---
      GLM-4.7-REAP-218B-A32B-Q3_K_M | 999 | 504.15 | 17.48 | 97.6GB
      GLM-4.7-REAP-218B-A32B-Q4_K_M | 65 | 428.80 | 9.48 | 123.0GB
      gpt-oss-120b-GGUF | 999 | 2977.83 | 97.47 | 58.4GB
      Meta-Llama-3.1-70B-Instruct-Q4_K_M | 999 | 399.03 | 12.66 | 39.6GB
      Meta-Llama-3.1-8B-Instruct-Q4_K_M | 999 | 3169.16 | 81.01 | 4.6GB
      MiniMax-M2.1-Q4_K_M | 55 | 668.99 | 34.85 | 128.83 GB
      Qwen2.5-32B-Instruct-Q4_K_M | 999 | 848.68 | 25.14 | 18.5GB
      Qwen3-235B-A22B-Instruct-2507-Q3_K_M | 999 | 686.45 | 24.45 | 104.7GB

      Side note: I found that with PCIe 5.0, standard Pipeline Parallelism (Layer Split) is significantly faster (~97 t/s) than Tensor Parallelism/Row Split (~67 t/s) for a single user on this setup.
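
      To reproduce that layer-split vs. row-split comparison, something like the following should work, assuming a llama.cpp build with llama-bench on the PATH (its -sm/--split-mode flag selects the split strategy; the model path below is a placeholder):

      import subprocess

      MODEL = "gpt-oss-120b.Q4_K_M.gguf"  # placeholder path, substitute your own

      # Run the same benchmark once per split mode and compare the t/s columns.
      for mode in ("layer", "row"):
          print(f"--- split-mode={mode} ---")
          subprocess.run(
              ["llama-bench", "-m", MODEL, "-ngl", "999",
               "-fa", "1", "-b", "2048", "-sm", mode],
              check=True,
          )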

      vLLM (focus: throughput). Model: GPT-OSS-120B (bfloat16), TP=4, tested with 20 requests.

      • Total generation throughput: ~314 tokens/s
      • Prompt processing: ~5,339 tokens/s
      • Single-user throughput: ~50 tokens/s

      I used ROCm 7.1.1 for llama.cpp; I also tested Vulkan, but it was worse.

      If I could do it again, I would have used the budget to buy a single NVIDIA RTX Pro 6000 Blackwell (96GB). Maybe I still will: if local AI goes well for my use case, I'll swap the R9700s for a Pro 6000 in the future.

      Edit: nicer view for the results.

      submitted by /u/NunzeCs

    8. 🔗 r/wiesbaden Looking for a dentist for anxious patients rss
    9. 🔗 r/wiesbaden Barbershops in Wiesbaden rss

      I've been in Germany for quite a while now, but I still don't know where to get my hair cut here. All I've found are Arab barbershops with terrible quality. Does anyone know a good barbershop in Wiesbaden/Frankfurt where you can get a reasonably cheap haircut?

      submitted by /u/demiurgewasright

    10. 🔗 Register Spill Joy & Curiosity #70 rss

      Here's a question I think about every day now: what will happen to code?

      A year ago I started to build a little Rust program. The plan: I paste an email in, the program sends the email to an LLM inside a specific prompt, I get back a reply. The use case: sometimes I get emails from people that ask for a discount on my books. I nearly always say yes, go to LemonSqueezy, create a personalized coupon code (think: HEYANNIE), go back to the email and reply with where and how they can use that coupon code.

      So I started to build this. Single file, one API call, a few-shot prompt; works if I hardcode an email in. But then I had to figure out what kind of interface I wanted so I could paste the email in and get the response out, and I got lazy and gave up on it. I didn't want to build a chat interface and image upload and whatnot. Once I had Amp, though, I came back to the project: maybe Amp could build all of this for me? But while doing that, I realized that, wait a second, why can't Amp itself do what the program's supposed to do? If I paste an email into Amp, it sure can figure out how to talk to LemonSqueezy, no problem. It can also write the two sentences for the reply email. But because I'm lazy I didn't even do that, I just marked that as a possibility in the back of my head. Only four weeks ago did I go into that codebase again.

      This time I told Amp: analyze what this codebase is supposed to do, here's the documentation for the LemonSqueezy API, figure out how to create coupons and what type of response emails to write, then put everything you figured out into a SKILL.md file. After a minute it spit out that Markdown file and I asked it: can you create a coupon code for Annie who sent me this email? Yes, Amp said, I sure can. And off it went with curl and created a coupon code and gave me two sentences with instructions to send back to Annie.

      No code, only Markdown.

      Yes, not every codebase can be turned into instructions for an agent, and yes, it's inefficient and costs money (even more money than the Rust harness around a few-shot prompt would cost). But, directionally, there are quite a few things that can be deconstructed into simply an agent with the right instructions and tools, are there not?

      And then you throw things like exe.dev and sprites.dev into the mix, where you could run an agent and store some tools and potentially have the agent write some helper code too, and you start to wonder what'll happen to codebases and code.

      Code will always be around and codebases for Serious Programs too, but you have to wonder: how much and which ones?

      • Our interview with DHH is now live! Admittedly, we made the mistake of not releasing it right after recording. In the time since, it seems as if David has (at least slightly) shifted his view on Opus 4.5 and writing code by hand. That being said: many other interesting things came up in the conversation. I especially found his thoughts on marketing & social media and the changing of that landscape to be very interesting.

      • If you're curious about which future we see at Amp: we're removing Amp Tab. The post also has a video in which Quinn and I go into a little bit more detail about why we're doing this and how the ratio of hand-written vs. generated code has flipped.

      • A deep dive into ASCII rendering. Fantastic. A+. Impressive work and care and great writing too. This is the Gem of the Week (wish I could play a theme song for you right now.)

      • Mark my words: this blog post is a gunshot in the quiet night and it'll ring out for a very long time. Why We Built Our Own Background Agent, by Zach Bruggeman, Jason Quense, and Rahul Sengottuvelu at ramp. Nearly a year ago I stood on stage and said to a group of engineers that yes, you can ignore the hype, you can ignore AI, you can dig into your text editor and put your fingers in your ears. But not if you're working in developer tooling, because your field will change like few others. And now here we are and look at what an internal team built.

      • antirez is encouraging his readers to not "fall into the anti-AI hype": "Anyway, back to programming. I have a single suggestion for you, my friend. Whatever you believe about what the Right Thing should be, you can't control it by refusing what is happening right now. Skipping AI is not going to help you or your career. Think about it. Test these new tools, with care, with weeks of work, not in a five minutes test where you can just reinforce your own beliefs. Find a way to multiply yourself, and if it does not work for you, try again every few months." I wrote a similar thing over a year ago and now I can add antirez to the list. The biggest surprise to me is how long it took agents to replace copy-pasting ChatGPT responses.

      • Another entry on the list: Linus Torvalds. He used Antigravity to help him fix his audio visualization tool: "It mostly went smoothly, although I had to figure out what the problem with using the builtin rectangle select was. After telling antigravity to just do a custom RectangleSelector, things went much better. Is this much better than I could do by hand? Sure is."

      • The team at ramp also put Claude Code into Rollercoaster Tycoon.

      • Now this, this is what it's about, this is it, this is why computers have a power-on button and why we get up in the morning and why we have fingers to move the mouse and click and why we have eyes to see: so that someone, somewhere, can create something like this and then shoot it through thousands of miles of undersea cables into our eyeballs, and the only thing they get in return is the chance to have made the tiny muscles in our cheeks pull up the edges of our mouth: gradient.horse.

      • Wonderful personal blog post: Paul Stamatiou's 2025 in review. It's personal, it's long, it's about work (Paul "was Co-Founder and Head of Design at Limitless (nee Rewind AI), which was acquired by Meta. Before that I spent 9 years at Twitter"), it's about computers, about his car, about his home, about books, about design.

      • Paul's post led me to this one, by Jenny Wen, on design & design process: "But along the way, we lost something. The actual work we were producing. We spent so much time trying to decode our users in so many ways -- a persona! Then a journey! Then a user flow! Then a lo-fi wireframe! Then a concept test! We focused on it so much, that we deemed the pixels unserious and unimportant. We stopped doing the real thing that would be the most empathetic, useful, and that would actually serve business outcomes best: building stuff that worked well and that people would love." If you've never seen it happen, it's hard to believe how much effort can go into building something without someone saying, "Wait a second, this is dogshit. I would never use this." But I've seen it, I've done it: it's tempting and very easy to tell yourself that you're doing everything right when you're following the process and doing the capital-I Important things in the order they shall be done, while forgetting that the most important thing is to build something you and others love.

      • This post is already worth reading just because of that one gif in here, you'll know which one: the struggle of resizing windows on macOS Tahoe.

      • "Apple picks Google's Gemini to run AI-powered Siri coming this year" As someone who thinks that Gemini 3 was the inflection point, not Opus 4.5: hell yes, bring it.

      • And OpenAI is collaborating with Cerebras. If they can get the speeds out of GPT-5.2 that you'd expect when you hear the name Cerebras then the game will change.

      • Brian Lovin says to give our agents a laboratory. Yes! Let the codebases and the agents melt! This is the year in which they do. Think of your codebase as an application: can the agent use it? If not, what is it missing? These models will get better, it's time to prepare for the day when you no longer need to babysit them.

      • Cursor has been "experimenting with running coding agents autonomously for weeks. Our goal is to understand how far we can push the frontier of agentic coding for projects that typically take human teams months to complete." They didn't aim low: "To test this system, we pointed it at an ambitious goal: building a web browser from scratch. The agents ran for close to a week, writing over 1 million lines of code across 1,000 files." They had agents write a whole browser? On Twitter Michael Truell added: "It's 3M+ lines of code across thousands of files. The rendering engine is from-scratch in Rust with HTML parsing, CSS cascade, layout, text shaping, paint, and a custom JS VM. It kind of works! It still has issues and is of course very far from Webkit/Chromium parity, but we were astonished that simple websites render quickly and largely correctly." Holy fucking shit, right? Simon Willison's 3-year prediction can already be checked off? Eh, says embedding-shapes, not so fast. And that's also what the HackerNews comments say. Still, as someone who wants to be an optimist and who works with these agents on a day-to-day basis: maybe a quiet "holy shit" is apt?

      • The Death of Software Development: "While software development as we know it is dead, software engineering is alive and well. The role has transformed. Engineers are no longer writing software -- they're designing higher-order systems. They've moved from crafting code to designing systems that write code. […] This new reality requires rethinking everything. Forty years of best practices are now outdated. The patterns we relied on, the team structures we built, the processes we followed -- all of it needs to be reconsidered."

      • Uber blog on Forecasting Models to Improve Driver Availability at Airports. What a read! So many thoughts: how complex the system is (it needs to be, right?), how hard some of these problems are and how easy they would be for a human, how much work went into this, when did they decide it'll pay off to invest this much effort into optimizing that part of the system, … Real world software, baby.

      • Take this and send it to everyone you know who's switching from a large company to a startup: no management needed: anti-patterns in early-stage engineering teams. It's spot-on and it's fascinating how hard it is to rethink this stuff from first principles when you've only experienced one side of it.

      • "If all you think about is the tools that are available to you, then today is always a better time to start a company than yesterday, and today will always be worse than tomorrow. The cost of doing something with a computer goes one direction: Down. But what if those costs are falling quickly? What if doing things gets 10 percent cheaper every month? Imagine what you could build if you just wait a year."

      • Inside Denmark's struggle to break up with Silicon Valley. I had no clue that's a thing, fascinating. Reminded me a lot of LiMux, even though it's not comparable. But it kinda is.

      • Bill Kennedy: "I need everyone to start focusing on their engineering skills." Great list.

      • Content aside (because I haven't even read it): this thing has 82m views right now. Is that a lot? Is that not a lot? Are articles a thing? Yes, no? To tie this back to the DHH interview: there's no playbook right now.

      • Somehow I've come across this clip of Tom Brady talking about becoming a master of the game and I've now watched it four times and I think what he says in that one minute and fifty-six seconds is exactly what I wanted to say with this and this.

      If you think code will change, you should subscribe:

    11. 🔗 r/reverseengineering Shredder-RS: A polymorphic mutation engine for x86_64 written in Rust rss
    12. 🔗 r/LocalLLaMA Qwen 4 might be a long way off !? Lead Dev says they are "slowing down" to focus on quality. rss
    13. 🔗 Armin Ronacher Agent Psychosis: Are We Going Insane? rss

      You can use Polecats without the Refinery and even without the Witness or Deacon. Just tell the Mayor to shut down the rig and sling work to the polecats with the message that they are to merge to main directly. Or the polecats can submit MRs and then the Mayor can merge them manually. It's really up to you. The Refineries are useful if you have done a LOT of up-front specification work, and you have huge piles of Beads to churn through with long convoys.

      Gas Town Emergency User Manual, Steve Yegge

      Many of us got hit by the agent coding addiction. It feels good, we barely sleep, we build amazing things. Every once in a while that interaction involves other humans, and all of a sudden we get a reality check that maybe we overdid it. The most obvious example of this is the massive degradation in the quality of issue reports and pull requests. To a maintainer, many PRs now look like an insult to one's time, but when one pushes back, the other person does not see what they did wrong. They thought they helped and contributed, and they get agitated when you close it down.

      But it's way worse than that. I see people develop parasocial relationships with their AIs, get heavily addicted to it, and create communities where people reinforce highly unhealthy behavior. How did we get here and what does it do to us?

      I will preface this post by saying that I don't want to call anyone out in particular, and that I sometimes notice the tendencies I describe as negative in myself as well. I too have thrown some vibeslop up to other people's repositories.

      Our Little Dæmons

      In His Dark Materials, every human has a dæmon, a companion that is an externally visible manifestation of their soul. It lives alongside as an animal, but it talks, thinks and acts independently. I'm starting to relate our relationship with agents that have memory to those little creatures. We become dependent on them, and separation from them is painful and takes away from our new-found identity. We're relying on these little companions to validate us and to collaborate with. But it's not a genuine collaboration like between humans, it's one that is completely driven by us, and the AI is just there for the ride. We can trick it to reinforce our ideas and impulses. And we act through this AI. Some people who have not programmed before, now wield tremendous powers, but all those powers are gone when their subscription hits a rate limit and their little dæmon goes to sleep.

      Then, when we throw up a PR or issue to someone else, that contribution is the result of this pseudo-collaboration with the machine. When an AI pull request comes in, on my own or on another repository, I cannot tell at a glance how someone created it, but after a while I can usually tell when it was prompted in a way that is fundamentally different from how I do it. Still, it takes me minutes to figure this out. I have seen some coding sessions from others, and they are often done with clarity, but using slang that someone has come up with, and most of all: by completely forcing the AI down a path without any real critical thinking. Particularly when you're not familiar with how the systems are supposed to work, giving in to what the machine says and then thinking you understand what is going on creates some really bizarre outcomes at times.

      But people create these weird relationships with their AI agent and once you see how some prompt their machines, you realize that it dramatically alters what comes out of it. To get good results you need to provide context, you need to make the tradeoffs, you need to use your knowledge. It's not just a question of using the context badly, it's also the way in which people interact with the machine. Sometimes it's unclear instructions, sometimes it's weird role-playing and slang, sometimes it's just swearing and forcing the machine, sometimes it's a weird ritualistic behavior. Some people just really ram the agent straight towards the most narrow of all paths towards a badly defined goal with little concern about the health of the codebase.

      Addicted to Prompts

      These dæmon relationships change not just how we work, but what we produce. You can completely give in and let the little dæmon run circles around you. You can reinforce it to run towards ill-defined (or even self-defined) goals without any supervision.

      It's one thing when newcomers fall into this dopamine loop and produce something. When Peter first got me hooked on Claude, I did not sleep. I spent two months excessively prompting the thing and wasting tokens. I ended up building and building, creating a ton of tools I did not end up using much. "You can just do things" was what was on my mind all the time, but it took quite a bit longer to realize that just because you can, you might not want to. It became so easy to build something, and in comparison it became much harder to actually use it or polish it. Quite a few of the tools I built I felt really great about, only to realize that I did not actually use them or that they did not end up working as I thought they would.

      The thing is that the dopamine hit from working with these agents is so very real. I've been there! You feel productive, you feel like everything is amazing, and if you hang out just with people who are into that stuff too, without any checks, you go deeper and deeper into the belief that this all makes perfect sense. You can build entire projects without any real reality check. But it's decoupled from any external validation. For as long as nobody looks under the hood, you're good. But when an outsider first pokes at it, it looks pretty crazy. And damn, some things look amazing. I too was blown away (and at the same time fully expected it) when Cursor's AI-written web browser landed. It's super impressive that agents were able to bootstrap a browser in a week! But holy crap! I hope nobody ever uses that thing or would try to build an actual browser out of it; at least with this generation of agents, it's still pure slop with little oversight. It's an impressive research and tech demo, not an approach to building software people should use. At least not yet.

      There is also another side to this slop loop addiction: token consumption.

      Consider how many tokens these loops actually consume. A well-prepared session with good tooling and context can be remarkably token-efficient. For instance, the entire port of MiniJinja to Go took only 2.2 million tokens. But the hands-off approaches—spinning up agents and letting them run wild—burn through tokens at staggering rates. Patterns like Ralph are particularly wasteful: you restart the loop from scratch each time, which means you lose the ability to use cached tokens or reuse context.
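
      To make that concrete, here is a rough back-of-the-envelope comparison in Python; all numbers below are illustrative assumptions, not measurements:

      CONTEXT = 50_000      # tokens re-read at the start of each iteration (assumed)
      WORK = 5_000          # fresh tokens produced per iteration (assumed)
      ITERATIONS = 100
      CACHE_DISCOUNT = 0.1  # cached input is often billed at ~10% of fresh input

      # A persistent session re-reads its context from the provider's cache:
      cached_cost = ITERATIONS * (CONTEXT * CACHE_DISCOUNT + WORK)

      # A restart-from-scratch loop pays full price for the context every time:
      cold_cost = ITERATIONS * (CONTEXT + WORK)

      print(f"cached session: {cached_cost:,.0f} token-equivalents")
      print(f"cold restarts:  {cold_cost:,.0f} token-equivalents")
      print(f"overhead:       {cold_cost / cached_cost:.1f}x")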

      We should also remember that current token pricing is almost certainly subsidized. These patterns may not be economically viable for long. And those discounted coding plans we're all on? They might not last either.

      Slop Loop Cults

      And then there are things like Beads and Gas Town, Steve Yegge's agentic coding tools, which are the complete celebration of slop loops. Beads, which is basically some sort of issue tracker for agents, is 240,000 lines of code that … manages markdown files in GitHub repositories. And the code quality is abysmal.

      In some circles there appears to be a competition to run as many of these agents in parallel as possible, with almost no quality control. Agents are then used to create documentation artifacts, to regain some confidence about what is actually going on. Except those documents themselves read like slop.

      Looking at Gas Town (and Beads) from the outside, it looks like a Mad Max cult. What are polecats, refineries, mayors, beads, and convoys doing in an agentic coding system? If the maintainer is in the loop, and the whole community is in on this mad ride, then everyone and their dæmons just throw more slop up. As an external observer the whole project looks like an insane psychosis or a complete mad art project. Except, it's real? Or is it not? Apparently a reason for slowdown in Gas Town is contention on figuring out the version of Beads, which takes 7 subprocess spawns. Or using the doctor command times out completely. Beads keeps growing and growing in complexity, and people who are using it are realizing that it's almost impossible to uninstall. And they might not even work well together, even though one apparently depends on the other.

      I don't want to pick on Gas Town or these projects; they are just the most visible examples of this in-group behavior right now. But you can see similar things in some of the AI builder circles on Discord and X, where people hype each other up with their creations, without much critical thinking or sanity-checking of what happens under the hood.

      Asymmetry and the Maintainer's Burden

      It takes a minute of prompting and a few minutes of waiting for code to come out. But honestly reviewing a pull request takes many times longer than that. The asymmetry is completely brutal. Throwing up bad code is rude because you completely disregard the time of the maintainer. But everybody else is also creating AI-generated code, and maybe theirs passes the bar of being good. So how can you possibly tell as a maintainer, when it all looks the same? And as the person writing the issue or the PR, you felt good about it. Yet what you get back is frustration and rejection.

      I'm not sure how we will go ahead here, but it's pretty clear that in projects that don't submit themselves to the slop loop, it's going to be a nightmare to deal with all the AI-generated noise.

      Even for projects that are fully AI-generated but set some standard for contributions, some folks now prefer getting the prompts over getting the actual code, because then it's clearer what the person actually intended. There is more trust in running the agent oneself than in having other people do it.

      Is Agent Psychosis Real?

      Which really makes me wonder: am I missing something here? Is this where we are going? Am I just not ready for this new world? Are we all collectively going insane?

      Particularly if you want to opt out of this craziness right now, it's getting quite hard. Some projects no longer accept human contributions until they have vetted the people completely. Others are starting to require that you submit prompts alongside your code, or just the prompts alone.

      I am a maintainer who uses AI myself, and I know others who do. We're not luddites and we're definitely not anti-AI. But we're also frustrated when we encounter AI slop on issue and pull request trackers. Every day brings more PRs that took someone a minute to generate and take an hour to review.

      There is a dire need to say no now. But when one does, the contributor is genuinely confused: "Why are you being so negative? I was trying to help." They were trying to help. Their dæmon told them it was good.

      Maybe the answer is that we need better tools — better ways to signal quality, better ways to share context, better ways to make the AI's involvement visible and reviewable. Maybe the culture will self-correct as people hit walls. Maybe this is just the awkward transition phase before we figure out new norms.

      Or maybe some of us are genuinely losing the plot, and we won't know which camp we're in until we look back. All I know is that when I watch someone at 3am, running their tenth parallel agent session, telling me they've never been more productive — in that moment I don't see productivity. I see someone who might need to step away from the machine for a bit. And I wonder how often that someone is me.

      Two things are both true to me right now: AI agents are amazing and a huge productivity boost. They are also massive slop machines if you turn off your brain and let go completely.

  3. January 17, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-17 rss


    2. 🔗 r/LocalLLaMA 128GB VRAM quad R9700 server rss

      This is a sequel to my previous thread from 2024. I originally planned to pick up another pair of MI100s and an Infinity Fabric Bridge, and I picked up a lot of hardware upgrades over the course of 2025 in preparation for this. Notably: faster, double-capacity memory (last February, well before the current price jump), another motherboard, a higher-capacity PSU, etc. But then I saw benchmarks for the R9700, particularly in the llama.cpp ROCm thread, and saw much better prompt processing performance for a small token generation loss. The MI100 also went up in price to about $1000, so factoring in the cost of a bridge, it'd come to about the same price. So I sold the MI100s, picked up 4 R9700s, and called it a day.

      Here are the specs and BOM. Note that the CPU and SSD were taken from the previous build, and the internal fans came bundled with the PSU as part of a deal:

      Component | Description | Number | Unit Price
      ---|---|---|---
      CPU | AMD Ryzen 7 5700X | 1 | $160.00
      RAM | Corsair Vengeance LPX 64GB (2 x 32GB) DDR4 3600MHz C18 | 2 | $105.00
      GPU | PowerColor AMD Radeon AI PRO R9700 32GB | 4 | $1,300.00
      Motherboard | MSI MEG X570 GODLIKE Motherboard | 1 | $490.00
      Storage | Inland Performance 1TB NVMe SSD | 1 | $100.00
      PSU | Super Flower Leadex Titanium 1600W 80+ Titanium | 1 | $440.00
      Internal Fans | Super Flower MEGACOOL 120mm fan, Triple-Pack | 1 | $0.00
      Case Fans | Noctua NF-A14 iPPC-3000 PWM | 6 | $30.00
      CPU Heatsink | AMD Wraith Prism aRGB CPU Cooler | 1 | $20.00
      Fan Hub | Noctua NA-FH1 | 1 | $45.00
      Case | Phanteks Enthoo Pro 2 Server Edition | 1 | $190.00
      Total | | | $7,035.00

      128GB VRAM, 128GB RAM for offloading, all for less than the price of a RTX 6000 Blackwell.

      Some benchmarks:

      model | size | params | backend | ngl | n_batch | n_ubatch | fa | test | t/s
      ---|---|---|---|---|---|---|---|---|---
      llama 7B Q4_0 | 3.56 GiB | 6.74 B | ROCm | 99 | 1024 | 1024 | 1 | pp8192 | 6524.91 ± 11.30
      llama 7B Q4_0 | 3.56 GiB | 6.74 B | ROCm | 99 | 1024 | 1024 | 1 | tg128 | 90.89 ± 0.41
      qwen3moe 30B.A3B Q8_0 | 33.51 GiB | 30.53 B | ROCm | 99 | 1024 | 1024 | 1 | pp8192 | 2113.82 ± 2.88
      qwen3moe 30B.A3B Q8_0 | 33.51 GiB | 30.53 B | ROCm | 99 | 1024 | 1024 | 1 | tg128 | 72.51 ± 0.27
      qwen3vl 32B Q8_0 | 36.76 GiB | 32.76 B | ROCm | 99 | 1024 | 1024 | 1 | pp8192 | 1725.46 ± 5.93
      qwen3vl 32B Q8_0 | 36.76 GiB | 32.76 B | ROCm | 99 | 1024 | 1024 | 1 | tg128 | 14.75 ± 0.01
      llama 70B IQ4_XS - 4.25 bpw | 35.29 GiB | 70.55 B | ROCm | 99 | 1024 | 1024 | 1 | pp8192 | 1110.02 ± 3.49
      llama 70B IQ4_XS - 4.25 bpw | 35.29 GiB | 70.55 B | ROCm | 99 | 1024 | 1024 | 1 | tg128 | 14.53 ± 0.03
      qwen3next 80B.A3B IQ4_XS - 4.25 bpw | 39.71 GiB | 79.67 B | ROCm | 99 | 1024 | 1024 | 1 | pp8192 | 821.10 ± 0.27
      qwen3next 80B.A3B IQ4_XS - 4.25 bpw | 39.71 GiB | 79.67 B | ROCm | 99 | 1024 | 1024 | 1 | tg128 | 38.88 ± 0.02
      glm4moe ?B IQ4_XS - 4.25 bpw | 54.33 GiB | 106.85 B | ROCm | 99 | 1024 | 1024 | 1 | pp8192 | 1928.45 ± 3.74
      glm4moe ?B IQ4_XS - 4.25 bpw | 54.33 GiB | 106.85 B | ROCm | 99 | 1024 | 1024 | 1 | tg128 | 48.09 ± 0.16
      minimax-m2 230B.A10B IQ4_XS - 4.25 bpw | 113.52 GiB | 228.69 B | ROCm | 99 | 1024 | 1024 | 1 | pp8192 | 2082.04 ± 4.49
      minimax-m2 230B.A10B IQ4_XS - 4.25 bpw | 113.52 GiB | 228.69 B | ROCm | 99 | 1024 | 1024 | 1 | tg128 | 48.78 ± 0.06
      minimax-m2 230B.A10B Q8_0 | 226.43 GiB | 228.69 B | ROCm | 30 | 1024 | 1024 | 1 | pp8192 | 42.62 ± 7.96
      minimax-m2 230B.A10B Q8_0 | 226.43 GiB | 228.69 B | ROCm | 30 | 1024 | 1024 | 1 | tg128 | 6.58 ± 0.01

      A few final observations:

      • glm4 moe and minimax-m2 are actually GLM-4.6V and MiniMax-M2.1, respectively.
      • There's an open issue for Qwen3-Next at the moment; recent optimizations caused some pretty hefty prompt processing regressions. The numbers here are pre #18683, in case the exact issue gets resolved.
      • A word on the Q8 quant of MiniMax-M2.1: --fit on isn't supported in llama-bench, so I can't give an apples-to-apples comparison to simply reducing the number of GPU layers, and it's also extremely unreliable for me in llama-server, giving me HIP error 906 on the first generation. Out of a dozen or so attempts, I've gotten it to work once, with a TG around 8.5 t/s, but take that with a grain of salt. Otherwise, maybe the quality jump is worth letting it run overnight? You be the judge. It also takes 2 hours to load, but that could be because I'm loading it off external storage.
      • The internal fan mount on the case only has screws on one side; in the intended configuration, the holes for power cables are on the opposite side of where the GPU power sockets are, meaning the power cables will block airflow from the fans. How they didn't see this, I have no idea. Thankfully, it stays in place from a friction fit if you flip it 180 like I did. Really, I probably could have gone without it, it was mostly a consideration for when I was still going with MI100s, but the fans were free anyway.
      • I really, really wanted to go AM5 for this, but there just isn't a board out there with 4 full sized PCIe slots spaced for 2 slot GPUs. At best you can fit 3 and then cover up one of them. But if you need a bazillion m.2 slots you're golden /s. You might then ask why I didn't go for Threadripper/Epyc, and that's because I was worried about power consumption and heat. I didn't want to mess with risers and open rigs, so I found the one AM4 board that could do this, even if it comes at the cost of RAM speeds/channels and slower PCIe speeds.
      • The MI100s and R9700s didn't play nice for the brief period of time I had 2 of both. I didn't bother troubleshooting, just shrugged and sold them off, so it may have been a simple fix but FYI.
      • Going with a 1 TB SSD in my original build was a mistake; even 2 TB would have made a world of difference. Between LLMs, image generation, TTS, etc., I'm having trouble actually taking advantage of the extra VRAM with less-quantized models due to storage constraints, which is why my benchmarks still have a lot of 4-bit quants despite being able to easily do 8-bit ones.
      • I don't know how to control the little LCD display on the board. I'm not sure there is a way on Linux. A shame.

      submitted by /u/Ulterior-Motive_

    3. 🔗 r/reverseengineering How can I get this kodak m820 digital picture frame running doom? (Directed here by a guy in r/itrunsdoom who identified the firmware as linux based and recommended I post it here) rss
    4. 🔗 r/wiesbaden Just moved from America to Wiesbaden. rss

      Hello everyone! I moved from America to Wiesbaden a few weeks ago for work and am looking for tips on things to do and ways to meet new people.

      I like playing basketball and volleyball, going to concerts, and meeting friends at bars. I occasionally go to clubs too, but prefer a more relaxed atmosphere. I'm also a big film fan and am thinking about maybe getting back into filmmaking.

      I simply want to make new friends in a new city and ideally learn the language as well. If anyone has tips, send them my way!

      submitted by /u/Grandpas_leftnut

    5. 🔗 r/LocalLLaMA The Search for Uncensored AI (That Isn’t Adult-Oriented) rss

      I've been trying to find an AI that's genuinely unfiltered and technically advanced: something uncensored that can reason freely without guardrails killing every interesting response.

      Instead, almost everything I run into is marketed as “uncensored,” but it turns out to be optimized for low-effort adult use rather than actual intelligence or depth.

      It feels like the space between heavily restricted corporate AI and shallow adult-focused models is strangely empty, and I’m curious why that gap still exists...

      Is there any uncensored or lightly filtered AI that focuses on reasoning, creativity, technology, or serious problem-solving instead? I'm open to self-hosted models, open-source projects, or lesser-known platforms. Suggestions appreciated.

      submitted by /u/Fun-Situation-4358

    6. 🔗 idursun/jjui v0.9.10 release

      Release Notes

      🆕 Features

      Lua Scripting Enhancements

      • Lua Context Module (#465): Added context module to Lua scripting API, exposing methods for accessing selected item metadata and checked items:
        • context.change_id() - Get the change ID of selected revision or file
        • context.commit_id() - Get the commit ID of selected revision, file, or commit
        • context.file() - Get the file path of selected file
        • context.operation_id() - Get the operation ID of selected operation
        • context.checked_files() - Get array of checked file paths
        • context.checked_change_ids() - Get array of change IDs from checked items
        • context.checked_commit_ids() - Get array of commit IDs from checked items
      • Shell Execution in Lua (#471): Exposed exec_shell() function to Lua scripts, enabling interactive commands like opening files in external editors directly from jjui. This enables custom commands such as:
        [custom_commands.open_file]
        key = ["O"]
        lua = '''
        local file = context.file()
        if not file then
          flash("No file selected")
          return
        end
        exec_shell("vim " .. file)
        '''

      ✨ Improvements

      • Abandon Workflow: Removed confirmation dialog; users can now use Space to add/remove revisions from the abandon list

      🐛 Bug Fixes

      • Preview Pane Scrolling (#472): Fixed broken Ctrl-U/Ctrl-D scrolling in the preview pane that was introduced by earlier input routing changes. Preview commands are now properly grouped and always handled.
      • Parser: Fixed handling of divergent change ID format
      • Bookmarks: Fixed tracking of new bookmarks (currently tracks all remotes)

      🔧 Compatibility

      • Jujutsu 0.36.0 Support (#407): Updated commands to work with breaking changes in Jujutsu 0.36.0:
        • Changed --destination flag to --onto
        • Changed --edit flag to --editor
        • Removed deprecated --allow-new flag from git push commands
        • Updated keybinding from d to o for --onto flag in related modes

      📝 Documentation

      • README Updates (#470):
        • Added missing op log revert item to help menu
        • Fixed redo documentation
        • Updated custom command examples

      What's Changed

      • jj-update: fix commands to work with breaking changes in jj-0.36.0 by @baggiiiie in #407
      • refactor(abandon): remove confirmation dialog by @idursun in #462
      • fix(bookmarks): track new bookmarks by @idursun in #463
      • feat(lua): add context module by @idursun in #465
      • lua: expose exec_shell to lua script by @baggiiiie in #471
      • README improvements and help menu missing item by @baggiiiie in #470
      • ui,preview: fix preview pane ctrl-u/d scrolling by @baggiiiie in #472

      Full Changelog: v0.9.9...v0.9.10

    7. 🔗 badlogic/pi-mono v0.49.0 release

      Added

      • pi.setLabel(entryId, label) in ExtensionAPI for setting per-entry labels from extensions (#806)
      • Export keyHint, appKeyHint, editorKey, appKey, rawKeyHint for extensions to format keybinding hints consistently (#802 by @dannote)
      • Exported VERSION from the package index and updated the custom-header example. (#798 by @tallshort)
      • Added showHardwareCursor setting to control cursor visibility while still positioning it for IME support. (#800 by @ghoulr)
      • Added Emacs-style kill ring editing with yank and yank-pop keybindings, plus legacy Alt+letter handling and Alt+D delete word forward support in the interactive editor. (#810 by @Perlence)
      • Added ctx.compact() and ctx.getContextUsage() to extension contexts for programmatic compaction and context usage checks.
      • Added documentation for delete word forward and kill ring keybindings in interactive mode. (#810 by @Perlence)

      Changed

      • Updated the default system prompt wording to clarify the pi harness and documentation scope.
      • Simplified Codex system prompt handling to use the default system prompt directly for Codex instructions.

      Fixed

      • Fixed photon module failing to load in ESM context with "require is not defined" error (#795 by @dannote)
      • Fixed compaction UI not showing when extensions trigger compaction.
      • Fixed orphaned tool results after errored assistant messages causing Codex API errors. When an assistant message has stopReason: "error", its tool calls are now excluded from pending tool tracking, preventing synthetic tool results from being generated for calls that will be dropped by provider-specific converters. (#812)
      • Fixed Bedrock Claude max_tokens handling to always exceed thinking budget tokens, preventing compaction failures. (#797 by @pjtf93)
      • Fixed Claude Code tool name normalization to match the Claude Code tool list case-insensitively and remove invalid mappings.

      Removed

      • Removed pi-internal:// path resolution from the read tool.
    8. 🔗 r/wiesbaden Coors beer rss

      Hi everyone, I'm looking for a shop in Wiesbaden that sells Coors beer. Does anyone have a tip? Thanks :)

      submitted by /u/youideez3ro

    9. 🔗 r/LocalLLaMA Best "End of world" model that will run on 24gb VRAM rss

      Hey peeps, I'm feeling in a bit of an "omg the world is ending" mood and have been amusing myself by downloading and hoarding a bunch of data - think Wikipedia, Wiktionary, Wikiversity, Khan Academy, etc.

      What's your take on the smartest / best model(s) to download and store? They need to fit and run on my 24GB VRAM / 64GB RAM PC.

      submitted by /u/gggghhhhiiiijklmnop

    10. 🔗 sacha chua :: living an awesome life Emacs: Updating a Mailchimp campaign using a template, sending test e-mails, and scheduling it rss

      I'm helping other volunteers get on board with doing the Bike Brigade newsletter. Since not everyone has access to (or the patience for) MailChimp, we've been using Google Docs to draft the newsletter and share it with other people behind the scenes. I've previously written about getting a Google Docs draft ready for Mailchimp via Emacs and Org Mode, which built on my code for transforming HTML clipboard contents to smooth out Mailchimp annoyances: dates, images, comments, colours. Now I've figured out how to update, test, and schedule the MailChimp campaign directly from Emacs so that I don't even have to go into the MailChimp web interface at all. I added those functions to sachac/mailchimp-el.

      I used to manually download a ZIP of the Google Docs newsletter draft. I didn't feel like figuring out authentication and Google APIs from Emacs, so I did that in a NodeJS script instead. convert-newsletter.js can either create or download the latest newsletter doc from our Google Shared Drive. (google-api might be helpful if I want to do this in Emacs, not sure.) If I call convert-newsletter.js with the download argument, it unpacks the zip into ~/proj/bike-brigade/temp_newsletter, where my Emacs Lisp function for processing the latest newsletter draft with images can turn it into the HTML to insert into the HTML template I've previously created. I've been thinking about whether I want to move my HTML transformation code to NodeJS as well so that I could run the whole thing from the command-line and possibly have other people run this in the future, or if I should just leave it in Emacs for my convenience.

      Updating the campaign through the Mailchimp API means that I don't have to log in, replicate the campaign, click on the code block, and paste in the code. Very nice, no clicks needed. I also use TRAMP to write the HTML to a file on my server (my-bike-brigade-output-file is of the form /ssh:hostname:/path/to/file) so that other volunteers can get a web preview without waiting for the test email.

      (defun my-brigade-next-campaign (&optional date)
        (setq date (or date (org-read-date nil nil "+Sun")))
        (seq-find
         (lambda (o)
           (string-match (concat "^" date)
                         (alist-get 'title (alist-get 'settings o))))
         (alist-get 'campaigns (mailchimp-campaigns 5))))
      
      (defvar my-bike-brigade-output-file nil)
      
      (defun my-brigade-download-newsletter-from-google-docs ()
        "Download the newsletter from Google Docs and puts it in ~/proj/bike-brigade/temp_newsletter/."
        (interactive)
        (let ((default-directory "~/proj/bike-brigade"))
          (with-current-buffer (get-buffer-create "*Newsletter*")
            (erase-buffer)
            (display-buffer (current-buffer))
            (call-process "node" nil t t "convert-newsletter.js" "download"))))
      
      (defun my-brigade-create-or-update-campaign ()
        (interactive)
        (let* ((date (org-read-date nil nil "+Sun"))
               (template-name "Bike Brigade weekly update")
               (list-name "Bike Brigade")
               (template-id
                (alist-get
                 'id
                 (seq-find
                  (lambda (o)
                    (string= template-name (alist-get 'name o)))
                  (alist-get 'templates (mailchimp--request-json "templates")))))
               (list-id (seq-find
                         (lambda (o)
                           (string= list-name
                                    (alist-get 'name o)))
                         (alist-get 'lists (mailchimp--request-json "lists"))))
               (campaign (my-brigade-next-campaign date))
               (body `((type . "regular")
                       (recipients (list_id . ,(alist-get 'id list-id)))
                       (settings
                        (title . ,date)
                        (subject_line . "Bike Brigade: Weekly update")
                        (from_name . "Bike Brigade")
                        (reply_to . "info@bikebrigade.ca")
                        (tracking
                         (opens . t)
                         (html_clicks . t))))))
          (unless campaign
            (setq campaign (mailchimp--request-json
                            "/campaigns"
                            :method "POST"
                            :body
                            body)))
          ;; Download the HTML
          (my-brigade-download-newsletter-from-google-docs)
          ;; Upload to Mailchimp
          (mailchimp-campaign-update-from-template
           (alist-get 'id campaign)
           template-id
           (list
            (cons "main_content_area"
                  (my-brigade-process-latest-newsletter-draft-with-images
                   date))))
          (when my-bike-brigade-output-file
            (with-temp-file my-bike-brigade-output-file
              (insert (alist-get 'html (mailchimp--request-json (format "/campaigns/%s/content" (alist-get 'id campaign)))))))
          (message "%s" "Done!")))
      

      Now to send the test e-mails…

      (defvar my-brigade-test-emails nil "Set to a list of e-mail addresses.")
      (defun my-brigade-send-test-to-me ()
        (interactive)
        (mailchimp-campaign-send-test-email (my-brigade-next-campaign) user-mail-address))
      
      (defun my-brigade-send-test ()
        (interactive)
        (if my-brigade-test-emails
            (mailchimp-campaign-send-test-email (my-brigade-next-campaign) my-brigade-test-emails)
          (error "Set `my-brigade-test-emails'.")))
      

      And schedule it:

      (defun my-brigade-schedule ()
        (interactive)
        (let ((sched (format-time-string "%FT%T%z" (org-read-date t t "+Sun 11:00") t))
              (campaign (my-brigade-next-campaign)))
          (mailchimp-campaign-schedule campaign sched)
          (message "Scheduled %s" (alist-get 'title (alist-get 'settings campaign)))))
      

      Progress, bit by bit! Here's a screenshot showing the Google Docs draft on one side and my web preview on the other:

      Figure 1: Google Docs and Mailchimp campaign preview

      It'll be even cooler if I can get some of this working via systemd persistent tasks so that they happen automatically, or have some kind of way for the other newsletter volunteers to trigger a rebuild. Anyway, here's https://github.com/sachac/mailchimp-el in case the code is useful for anyone else.

      This is part of my Emacs configuration.

      You can e-mail me at sacha@sachachua.com.

    11. 🔗 HexRaysSA/plugin-repository commits sync repo: +4 plugins, +4 releases rss
      sync repo: +4 plugins, +4 releases
      
      ## New plugins
      - [export_to_git](https://github.com/milankovo/ida_export_scripts) (1.2.0)
      - [hexinlay](https://github.com/milankovo/hexinlay) (1.2.0)
      - [instrlen](https://github.com/milankovo/instrlen) (1.0.2)
      - [navcolor](https://github.com/milankovo/navcolor) (1.0.2)
      
    12. 🔗 HexRaysSA/plugin-repository commits sync repo: +2 plugins, +2 releases rss
      sync repo: +2 plugins, +2 releases
      
      ## New plugins
      - [drop-all-the-files](https://github.com/milankovo/ida-drop-all-the-files) (1.3.0)
      - [ida-enums-helper](https://github.com/milankovo/ida_enums_helper) (1.0.1)
      
    13. 🔗 r/reverseengineering Introducing rzweb: A Web-Based Binary Analyzer Using Rizin and WebAssembly – Open-Source and Browser-Only rss
    14. 🔗 r/LocalLLaMA DeepSeek Engram : A static memory unit for LLMs rss

      DeepSeek AI released a new paper titled "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models", introducing Engram. The key idea: instead of recomputing static knowledge (like entities, facts, or patterns) every time through expensive transformer layers, Engram adds a native memory lookup.

      Think of it as separating remembering from reasoning. Traditional MoE focuses on conditional computation, Engram introduces conditional memory. Together, they let LLMs reason deeper, handle long contexts better, and offload early-layer compute from GPUs.

      Key highlights:

      • Knowledge is looked up in O(1) instead of recomputed.
      • Uses explicit parametric memory vs implicit weights only.
      • Improves reasoning, math, and code performance.
      • Enables massive memory scaling without GPU limits.
      • Frees attention for global reasoning rather than static knowledge.
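
      As a toy illustration of the "conditional memory" idea (not the actual Engram design; all names and sizes below are invented), the core move is replacing recomputation with a hashed O(1) table read:

      import numpy as np

      # Toy sizes, chosen only for illustration.
      DIM, SLOTS = 512, 1 << 14   # SLOTS is a power of two so masking works below

      rng = np.random.default_rng(0)
      memory = rng.standard_normal((SLOTS, DIM)).astype(np.float32)  # learned table

      def ngram_slot(tokens: list[int]) -> int:
          """Hash the trailing 3-gram of the context into a memory slot: O(1)."""
          h = 0
          for t in tokens[-3:]:
              h = (h * 1000003 + t) & (SLOTS - 1)
          return h

      def lookup(tokens: list[int]) -> np.ndarray:
          # One table read per position; the cost is independent of model depth,
          # which is the point of moving "remembering" out of the layers.
          return memory[ngram_slot(tokens)]

      knowledge = lookup([17, 4242, 9001])  # a DIM-sized vector fetched, not computed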

      Paper: https://github.com/deepseek-ai/Engram/blob/main/Engram_paper.pdf

      Video explanation: https://youtu.be/btDV86sButg?si=fvSpHgfQpagkwiub

      submitted by /u/Technical-Love-8479

  4. January 16, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-16 rss


      Activity:

      • capa
        • 3de84eff: Merge pull request #2813 from doomedraven/patch-1
        • 7e16ed74: Add '2.5-CAPE' to tested versions
      • DeepExtractIDA
        • d4423134: Adding headless batch extractor framework, data format reference, and…
      • ghidra-chinese
        • 008cf800: Merge pull request #86 from TC999/sync
      • HappyIDA
        • 532be918: release: v1.0.1
        • ce634b90: ci: setup release automation
        • ded2dc0f: fix: remove hash prefix from seh_bgcolor setting spec
        • 9daee7a3: docs: update hcli installation method and config guide
        • cf948a97: feat: add config options to enable/disable hooks
        • aa15ff7d: feat: support config seh highlight through hcli
      • hrtng
        • a6aa668e: bugfix unflat: -1 is used as mark of shifting exit block;
      • ida-claude-plugins
        • 3f23a17f: Remove ida-domain submodule reference
        • ee1b759a: Remove obsolete tests, results, and docs; update ida-domain skill
      • IDA-MCP
      • ida-structor
        • 1ca775fb: refactor: Modularize codebase and separate implementation from headers
      • IDAPluginList
        • 51b224c5: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
      • rhabdomancer
        • b5181e70: fix: update the list of insecure functions
        • 180d06a2: feat: add wordexp to the list of bad functions
        • e1d13fd4: feat: add utmp* to the list of bad functions
        • 009d5179: feat: add umask to the list of bad functions
        • 81270e8e: feat: add truncate to the list of bad functions
        • b9383abc: feat: add tmpnam_r to the list of bad functions
        • a311718c: feat: add missing str* family functions to the list of bad functions
        • 4747b59b: feat: add p2open to the list of bad functions
        • e9b5bacf: feat: add missing mk* family functions to the list of bad functions
        • d6069467: feat: add getlogin to the list of bad functions
        • 37c8d65c: feat: add ftw/nftw to the list of bad functions
        • db4f1891: fix: remove fdopen from the list of bad functions
        • aceba4db: feat: add fdopen and fmemopen to the list of bad functions
        • 1df42394: feat: add fattach to the list of bad functions
        • 082e655d: feat: add execvP to the list of bad functions
        • 7ba1f45c: feat: add rand48 family functions to the list of bad functions
        • c5e72ba3: feat: add copylist, dbm_open, dbminit to the list of bad functions
      • smd_ida_tools2
        • 2f9dd969: Added paintform_moc.cpp to gitignore.
        • 4ae8c818: Fixed gensida build on windows.
        • bb893325: Fixed gensida build on windows.
    2. 🔗 badlogic/pi-mono v0.48.0 release

      Added

      • Added quietStartup setting to silence startup output (version header, loaded context info, model scope line). Changelog notifications are still shown. (#777 by @ribelo)
      • Added editorPaddingX setting for horizontal padding in input editor (0-3, default: 0)
      • Added shellCommandPrefix setting to prepend commands to every bash execution, enabling alias expansion in non-interactive shells (e.g., "shellCommandPrefix": "shopt -s expand_aliases") (#790 by @richardgill). A combined settings sketch follows these release notes.
      • Added bash-style argument slicing for prompt templates (#770 by @airtonix)
      • Extension commands can provide argument auto-completions via getArgumentCompletions in pi.registerCommand() (#775 by @ribelo)
      • Bash tool now displays the timeout value in the UI when a timeout is set (#780 by @dannote)
      • Export getShellConfig for extensions to detect user's shell environment (#766 by @dannote)
      • Added thinkingText and selectedBg to theme schema (#763 by @scutifer)
      • navigateTree() now supports replaceInstructions option to replace the default summarization prompt entirely, and label option to attach a label to the branch summary entry (#787 by @mitsuhiko)

      Fixed

      • Fixed crash during auto-compaction when summarization fails (e.g., quota exceeded). Now displays error message instead of crashing (#792)
      • Fixed --session <UUID> to search globally across projects if not found locally, with option to fork sessions from other projects (#785 by @ribelo)
      • Fixed standalone binary WASM loading on Linux (#784)
      • Fixed string numbers in tool arguments not being coerced to numbers during validation (#786 by @dannote)
      • Fixed --no-extensions flag not preventing extension discovery (#776)
      • Fixed extension messages rendering twice on startup when pi.sendMessage({ display: true }) is called during session_start (#765 by @dannote)
      • Fixed PI_CODING_AGENT_DIR env var not expanding tilde (~) to home directory (#778 by @aliou)
      • Fixed session picker hint text overflow (#764)
      • Fixed Kitty keyboard protocol shifted symbol keys (e.g., @, ?) not working in editor (#779 by @iamd3vil)
      • Fixed Bedrock tool call IDs causing API errors from invalid characters (#781 by @pjtf93)

      Changed

      • Hardware cursor is now disabled by default for better terminal compatibility. Set PI_HARDWARE_CURSOR=1 to enable (replaces PI_NO_HARDWARE_CURSOR=1 which disabled it).
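
      Purely as a sketch, here is one way the three settings above might be written out together. The ~/.pi/settings.json location and flat schema are assumptions on our part, not from the pi docs:

      ```python
      # Hypothetical: writes the v0.48.0 settings above to a pi settings file.
      # The ~/.pi/settings.json path and schema are assumptions, not pi docs.
      import json
      import pathlib

      settings = {
          "quietStartup": True,                              # silence startup output
          "editorPaddingX": 1,                               # horizontal editor padding (0-3)
          "shellCommandPrefix": "shopt -s expand_aliases",   # enable aliases in bash tool
      }
      path = pathlib.Path.home() / ".pi" / "settings.json"   # assumed location
      path.parent.mkdir(parents=True, exist_ok=True)
      path.write_text(json.dumps(settings, indent=2))
      ```
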
    3. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      ## New releases
      - [HappyIDA](https://github.com/HappyIDA/HappyIDA): 1.0.1
      
    4. 🔗 HexRaysSA/plugin-repository commits sync repo: -1 plugin, -1 release rss
      sync repo: -1 plugin, -1 release
      
      ## Removed plugins
      - security-poc-plugin
      
    5. 🔗 @HexRaysSA@infosec.exchange 🏃🏽 IDA 9.3 will see faster, more responsive tabular views, contributing to a mastodon

      🏃🏽 IDA 9.3 will see faster, more responsive tabular views, contributing to a noticeably smoother experience when working with large files.

      Check it out: https://hex-rays.com/blog/ida-9.3-more-responsive-tabular-views

    6. 🔗 @malcat@infosec.exchange [#malcat](https://infosec.exchange/tags/malcat) 0.9.12 is out! mastodon

      #malcat 0.9.12 is out!

      Enjoy .pyc and .net stack analysis, py 3.14 support, nuitka / inno 6.7 / .net singlefile bundle parsers, and many other improvements:

      https://malcat.fr/blog/0912-is-out-python-314-pyc-and-net-stack-analysis/

    7. 🔗 @cxiao@infosec.exchange also, my kingdom to the NDP leadership candidate that can articulate their mastodon

      also, my kingdom to the NDP leadership candidate that can articulate their opposition to this move by mentioning chinese labour rights, and how the rights of chinese workers are tied to the rights of our workers, in this grand old international economy we all have

      (im not optimistic though)

    8. 🔗 organicmaps/organicmaps 2026.01.16-8-android release

      • NEW: Higher-contrast dark theme colors
      • NEW: Google Assistant for navigation and search
      • OSM map data as of January 11
      • “Auto” navigation theme setting follows the system dark/light mode
      • Thinner subway lines
      • Search results show capacity for motorcycle parking, bicycle rental, bicycle charging, and car charging
      • Show floor level in search results
      • Albanian translations and TTS voice guidance
      • Updated FAQ and app translations
      • Fixed crashes
      …more at omaps.org/news

      See the detailed announcement on our website once the app update is published in all stores.
      You can get automatic app updates from GitHub using Obtainium.

      sha256sum:

      38bba983100c48d244032a133f95812ea3acb3009f56febe2de727e1033ea3a3  OrganicMaps-26011608-web-release.apk
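
      If you grab the APK manually, a minimal way to check it against the published digest (filename and hash are from the release notes above):

      ```python
      # Verify the downloaded APK against the published sha256 digest above.
      import hashlib

      expected = "38bba983100c48d244032a133f95812ea3acb3009f56febe2de727e1033ea3a3"
      data = open("OrganicMaps-26011608-web-release.apk", "rb").read()
      print("OK" if hashlib.sha256(data).hexdigest() == expected else "MISMATCH")
      ```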
      
    9. 🔗 @cxiao@infosec.exchange seriously though not super happy with this, one reason why chinese cars are mastodon

      seriously though not super happy with this, one reason why chinese cars are cheap is because they just ignore labour rights

      but this whole thing is such a big sign of how the world has changed

    10. 🔗 @cxiao@infosec.exchange RE: mastodon
    11. 🔗 3Blue1Brown (YouTube) The ladybug clock puzzle rss

      This is the first in a set of monthly puzzles, curated by Peter Winkler. This one was originally suggested by Richard Stanley.

      You can sign up to hear his description of the answer at http://momath.org/mindbenders

    12. 🔗 r/LocalLLaMA GPT-5.2 xhigh, GLM-4.7, Kimi K2 Thinking, DeepSeek v3.2 on Fresh SWE-rebench (December 2025) rss

      Hi all, I’m Anton from Nebius. We’ve updated the SWE-rebench leaderboard with our December runs on 48 fresh GitHub PR tasks (PRs created in the previous month only). The setup is standard SWE-bench: models read real PR issues, edit code, run tests, and must make the full suite pass. A few observations from this release:

      • Claude Opus 4.5 leads this snapshot at 63.3% resolved rate.
      • GPT-5.2 (extra high effort) follows closely at 61.5%.
      • Gemini 3 Flash Preview slightly outperforms Gemini 3 Pro Preview (60.0% vs 58.9%), despite being smaller and cheaper.
      • GLM-4.7 is currently the strongest open-source model on the leaderboard, ranking alongside closed models like GPT-5.1-codex.
      • GPT-OSS-120B shows a large jump in performance when run in high-effort reasoning mode, highlighting the impact of inference-time scaling.

      Looking forward to your thoughts and feedback.

      submitted by /u/CuriousPlatypus1881
      [link] [comments]

    13. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +2 releases rss
      sync repo: +1 plugin, +2 releases
      
      ## New plugins
      - [DeepExtract](https://github.com/marcosd4h/DeepExtractIDA) (0.0.6, 0.0.5)
      
    14. 🔗 r/LocalLLaMA I fucking love this community rss

      Thank you, guys: thanks to everyone who took the time to write a comment or a post explaining and teaching people how things work, to the people behind llama.cpp and vllm, and to all the contributors who keep the open-source community thriving.

      I'm able to run huge models relatively fast on my weak-ass PC from 10 years ago, the fastest being nemotron-3-nano-30B-a3b-iq4_nl running at 13.5-14 t/s with 65k context. My actual GPU has only 4 GB of VRAM; that's fucking ridiculous, and it blows my mind every time that I'm able to run these models.

      What's been key for me is having a good amount of system memory, and as long as the model is a MoE architecture they run pretty decently.
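
      For anyone curious how that kind of split looks in practice, a minimal sketch with llama-cpp-python; the model path and layer count below are placeholders, not the poster's actual setup:

      ```python
      # Sketch: run a large MoE GGUF on a 4 GB GPU by keeping most layers in
      # system RAM. Path and layer count are placeholders, not the poster's setup.
      from llama_cpp import Llama

      llm = Llama(
          model_path="nemotron-3-nano-30B-a3b-iq4_nl.gguf",  # placeholder filename
          n_gpu_layers=8,    # offload only what fits in 4 GB of VRAM
          n_ctx=65536,       # 65k context, as in the post
      )
      out = llm("Q: Why do MoE models run well from system RAM? A:", max_tokens=64)
      print(out["choices"][0]["text"])
      ```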

      submitted by /u/alhinai_03
      [link] [comments]

    15. 🔗 Locklin on science Conditional probability: an educational defect in Physics didactics rss

      Conditional probability is something physicists have a hard time with. There are a number of reasons I know this is true. Primarily I know it is true from my own experience: I had a high-middling to excellent didactics experience in physics, and was basically never exposed to the idea. When I got out into the […]
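
      For reference, the definition at issue, with a classic base-rate example of the kind such posts gesture at (the numbers here are ours, purely illustrative):

      $$P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}.$$

      With a 1% disease prevalence, 90% test sensitivity, and a 9% false-positive rate, $P(\text{disease} \mid \text{positive}) = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.09 \times 0.99} \approx 0.09$: a positive result still leaves the disease unlikely.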

    16. 🔗 HexRaysSA/plugin-repository commits Merge pull request #18 from marcosd4h/feature/adding-deepextract-plugin rss
      Merge pull request #18 from marcosd4h/feature/adding-deepextract-plugin
      
    17. 🔗 Hex-Rays Blog Faster, More Responsive Tabular Views in IDA 9.3 rss

      Faster, More Responsive Tabular Views in IDA 9.3

      We have started a long-term effort to improve IDA’s performance across the board.

    18. 🔗 r/reverseengineering Drone Hacking Part 1: Dumping Firmware and Bruteforcing ECC rss
    19. 🔗 badlogic/pi-mono v0.47.0 release

      Breaking Changes

      • Extensions using Editor directly must now pass TUI as the first constructor argument: new Editor(tui, theme). The tui parameter is available in extension factory functions. (#732)

      Added

      • OpenAI Codex official support : Full compatibility with OpenAI's Codex CLI models (gpt-5.1, gpt-5.2, gpt-5.1-codex-mini, gpt-5.2-codex). Features include static system prompt for OpenAI allowlisting, prompt caching via session ID, and reasoning signature retention across turns. Set OPENAI_API_KEY and use --provider openai-codex or select a Codex model. (#737)
      • pi-internal:// URL scheme in read tool for accessing internal documentation. The model can read files from the coding-agent package (README, docs, examples) to learn about extending pi.
      • New input event in extension system for intercepting, transforming, or handling user input before the agent processes it. Supports three result types: continue (pass through), transform (modify text/images), handled (respond without LLM). Handlers chain transforms and short-circuit on handled; see the sketch after this list. (#761 by @nicobailon)
      • Extension example: input-transform.ts demonstrating input interception patterns (quick mode, instant commands, source routing) (#761 by @nicobailon)
      • Custom tool HTML export: extensions with renderCall/renderResult now render in /share and /export output with ANSI-to-HTML color conversion (#702 by @aliou)
      • Direct filter shortcuts in Tree mode: Ctrl+D (default), Ctrl+T (no-tools), Ctrl+U (user-only), Ctrl+L (labeled-only), Ctrl+A (all) (#747 by @kaofelix)
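
      To make the continue / transform / handled contract above concrete, a generic illustration; the names and types are ours, not pi's actual extension API:

      ```python
      # Generic illustration of the input-event contract described above
      # (continue / transform / handled); names are ours, not pi's actual API.
      from dataclasses import dataclass
      from typing import Callable, Optional

      @dataclass
      class Result:
          kind: str                 # "continue" | "transform" | "handled"
          text: Optional[str] = None

      def run_handlers(text: str, handlers: list[Callable[[str], Result]]) -> Optional[str]:
          """Chain transforms; short-circuit on 'handled' (None means: no LLM call)."""
          for handle in handlers:
              r = handle(text)
              if r.kind == "handled":
                  return None           # handler responded without the LLM
              if r.kind == "transform":
                  text = r.text         # pass modified text to the next handler
              # "continue": pass through unchanged
          return text                   # final text goes to the agent

      # Example: an instant command short-circuits; a prefix handler rewrites input.
      handlers = [
          lambda t: Result("handled") if t == "/ping" else Result("continue"),
          lambda t: Result("transform", "Be brief: " + t),
      ]
      print(run_handlers("hello", handlers))   # -> "Be brief: hello"
      print(run_handlers("/ping", handlers))   # -> None (handled)
      ```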

      Changed

      • Skill commands (/skill:name) are now expanded in AgentSession instead of interactive mode. This enables skill commands in RPC and print modes, and allows the input event to intercept /skill:name before expansion.

      Fixed

      • Editor no longer corrupts terminal display when loading large prompts via setEditorText. Content now scrolls vertically with indicators showing lines above/below the viewport. (#732)
      • Piped stdin now works correctly: echo foo | pi is equivalent to pi -p foo. When stdin is piped, print mode is automatically enabled since interactive mode requires a TTY (#708)
      • Session tree now preserves branch connectors and indentation when filters hide intermediate entries so descendants attach to the nearest visible ancestor and sibling branches align. Fixed in both TUI and HTML export (#739 by @w-winter)
      • Added upstream connect, connection refused, and reset before headers patterns to auto-retry error detection (#733)
      • Multi-line YAML frontmatter in skills and prompt templates now parses correctly. Centralized frontmatter parsing using the yaml library. (#728 by @richardgill)
      • ctx.shutdown() now waits for pending UI renders to complete before exiting, ensuring notifications and final output are visible (#756)
      • OpenAI Codex provider now retries on transient errors (429, 5xx, connection failures) with exponential backoff (#733)
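
      The retry behavior in that last item is a standard exponential-backoff pattern; a minimal generic sketch (the retried statuses come from the changelog, everything else is our assumption):

      ```python
      # Minimal exponential-backoff retry in the spirit of the fix above;
      # the retried statuses (429, 5xx) come from the changelog, the rest is ours.
      import random
      import time

      TRANSIENT = {429, 500, 502, 503, 504}

      def with_backoff(call, max_attempts: int = 5, base: float = 0.5):
          for attempt in range(max_attempts):
              try:
                  status, body = call()
              except ConnectionError:
                  status, body = None, None      # connection failures also retry
              if status is not None and status not in TRANSIENT:
                  return status, body
              # Sleep base * 2^attempt, with jitter, before retrying.
              time.sleep(base * (2 ** attempt) * (1 + random.random()))
          raise RuntimeError("gave up after repeated transient errors")
      ```
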
    20. 🔗 r/LocalLLaMA Dang, M2 drives are the new DDR5 apparently. rss

      submitted by /u/Porespellar
      [link] [comments]

    21. 🔗 @cxiao@infosec.exchange really good points here on transnational repression and labour rights too: mastodon

      really good points here on transnational repression and labour rights too:

      What is the relationship between dissent and protest in China and the security and prosperity of ordinary Americans?

      A lot of the things that prompt dissent in China—from widespread labor rights violations to repression of ethnic minority groups—reflect consequences of the CCP systematically restricting rights like free expression and free association. We can already see the influence of this system expanding beyond China’s borders. For example, the CCP manipulates media in other countries and is the world’s worst perpetrator of transnational repression, when governments reach across borders to intimidate or attack exiles they perceive as a threat. Chinese companies import poor labor practices into the foreign countries where they work. This puts pressure on American companies to compete by lowering their labor standards. Thus CCP abuses can undermine people’s rights everywhere, including in the United States.

    22. 🔗 @cxiao@infosec.exchange RE: mastodon

      RE: https://mstdn.social/@davidonformosa/115902246202411668

      So many good bits in this interview:

      The CDM team races every day to document protest activity on China’s social media sites before it is deleted. Depending on the topic and size of the event—and whether it goes viral—some posts may disappear in minutes.

      https://chinadissent.net

    23. 🔗 r/LocalLLaMA My story of underestimating /r/LocalLLaMA's thirst for VRAM rss

      submitted by /u/EmPips
      [link] [comments]

    24. 🔗 r/LocalLLaMA Latest upgrade…A100 40 GB rss

      Originally this was my gaming rig, but I went ITX and basically bought a new computer. So I had the case, fans, AIO, 64 GB DDR5, motherboard, PSU, and a 3080 (upgraded to a 5070 Ti, RIP). I was going to sell these parts, but I started running models on my 5070 Ti and eventually wanted to run larger models. I found a 3090 on eBay for $680 and a 7950X for $350, put that together with the spare parts, and it's been a great AI rig for me.

      I really didn't plan on upgrading this for a while, especially with the current price surges. Welp, I saw an A100 get listed for $1000 on eBay. The catch? Listed for parts, and the description just said "card reports CUDA error". I figured it was worth the risk (for me); I could probably have resold it for the price I paid. Well, I swapped out the 3080, and on first boot it was recognized instantly by nvidia-smi. I was able to run and train models immediately. Nice.

      submitted by /u/inserterikhere
      [link] [comments]