to read (pdf)
- sorted string tables (SST) from first principles
- What I learned building an opinionated and minimal coding agent
- SteerMouse
- Advanced-Hexrays-Decompiler-reverse-engineering
- A Modern Recommender Model Architecture - Casey Primozic's Homepage
- January 10, 2026
-
🔗 Jessitron AI-coding trick: ask it how it knows rss
Here's the conversation I want:
Me: What do we have to work on?
Claude: [checks its issue tracker] Oh look at these nice issues that are ready for work…
But the issue tracker (Beads) is new, and is Claude going to check it? Or is it going to hare off looking at git log and searching the codebase for TODO?
So I ask it.
Me: if I say "what were we working on?" what actions will you take?
Claude: I'll check the issue tracker! and then I'll look at git log and then the issue tracker!
This is good news! I'm curious how it knows that. I didn't update CLAUDE.md.
Me: great! How do you know to check beads?
Claude: … deep explanation of the startup hook that Beads installed
I enjoy that it can explain its own workings. When I don't know how to get it to do something, "ask it" usually works. It can go meta and explain itself. So fun!
-
🔗 badlogic/pi-mono v0.42.4 release
Fixed
- Bash output expanded hint now says "(ctrl+o to collapse)" (#610 by @tallshort)
- Fixed UTF-8 text corruption in remote bash execution (SSH, containers) by using streaming TextDecoder (#608)
-
🔗 badlogic/pi-mono v0.42.3 release
Changed
- OpenAI Codex: updated to use bundled system prompt from upstream
-
🔗 r/reverseengineering Galago executes Android ARM64 native libraries as raw code. rss
submitted by /u/zboralski
[link] [comments] -
🔗 @cxiao@infosec.exchange RE: mastodon
RE: https://mastodon.online/@charlesmok/115868370578688572
check out https://smc.peering.tw from this article! it's a very nice visualization of submarine cables around taiwan, and active incidents affecting them
-
🔗 badlogic/pi-mono v0.42.2 release
Added
- `/model <search>` now pre-filters the model selector or auto-selects on exact match. Use `provider/model` syntax to disambiguate (e.g., `/model openai/gpt-4`). (#587 by @zedrdave)
- `FooterDataProvider` for custom footers: `ctx.ui.setFooter()` now receives a third `footerData` parameter providing `getGitBranch()`, `getExtensionStatuses()`, and `onBranchChange()` for reactive updates (#600 by @nicobailon)
- `Alt+Up` hotkey to restore queued steering/follow-up messages back into the editor without aborting the current run (#604 by @tmustier)
Fixed
-
🔗 Will McGugan Good AI, Bad AI - the experiment rss
If you are in tech, or possibly even if you aren’t, your social feeds are likely awash with AI. Most developers seem to be either all-in or passionately opposed to AI (with a leaning towards the all-in camp). Personally I think the needle is hovering somewhere between bad and good.
Good AI
AI for writing code is a skill multiplier.
We haven’t reached the point where a normie can say “Photoshop, but easier to use”. Will we ever? But for now it seems those who are already skilled in what they are asking the AI to do are getting the best results.
I’ve seen accomplished developers on X using AI to realize their projects in a fraction of the time. These are developers who absolutely could write every line that the LLM produces. They choose not to, because time is their most precious commodity.
Why is this good AI? It means that skills acquired in the age before AI 1 are still valuable. We have a little time before mindless automatons force senior developers into new careers as museum exhibits with the other fossils, tapping on their mechanical keyboards in front of gawping school kids.
Bad AI
The skill multiplier effect may not be enough to boost inexperienced (or mediocre) developers to a level they would like. But AI use does seem to apply a greater boost to the Dunning-Kruger effect.
If you maintain an Open Source project you may be familiar with AI generated Pull Requests. Easily identifiable by long bullet lists in the description, these PRs are often from developers who copied an issue from a project into their prompt, prefixed with the words “please fix”.
These drive-by AI PRs generate work for the FOSS developer. They can look superficially correct, but it takes time to figure out if the changes really do satisfy the requirements. The maintainer can’t use the usual signals to cut through the noise when reviewing AI generated PRs. Copious amounts of (passing) tests and thorough documentation are no longer a signal that the PR won’t miss the point, either subtly or spectacularly.
This is bad AI (more accurately a bad outcome), because it typically takes more time for the maintainer to review such PRs than the creator took to type in the prompt. And those that contribute such PRs rarely respond to requests for changes.
In the past you could get around this with a blanket ban on AI generated code. Now, I think developers would be foolish to do that. Good code is good code, whether generated by a fleshy mammalian brain or a mechanical process. And it is undeniable that AI code can be good code.
The Experiment
This makes me wonder if the job of maintainer could be replaced with AI.
I want to propose an experiment…
Let’s create a repository with some initial AI generated code: “Photoshop, but easier to use” is as good a starting point as any. An AI agent will review issues, respond via comments, and may tag the issue with “todo” or close it if it doesn’t reach a bar for relevance and quality.
PRs are accepted for “todo” issues and will be reviewed, discussed, and ultimately merged or closed by the AI. These PRs may be human or AI generated—the AI doesn’t care (as if it could).
Note that PRs could modify any of the prompts used by the AI, and those edits will be reviewed by the AI in the same way as any other file.
Would the end result be quality software or a heinous abomination, succeeding only in creating a honeypot for prompt-injection attacks?
I have no intention of making this happen. But if somebody does, tell me how it goes.
- Feels like a long time, but there has only been a single Fast and Furious movie made since the advent of the AI age. ↩
-
- January 09, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-09 rss
IDA Plugin Updates on 2026-01-09
New Releases:
Activity:
- capa
- 7f3e35ee: loader: gracefully handle ELF files with unsupported architectures (#…
- ida-hcli
- ida-structor
- idawilli
- ec5df57b: Merge pull request #63 from williballenthin/claude/analyze-malware-sa…
- 62a63d97: Add documentation resource links and API exploration guidance
- 7d66d507: Document Hex-Rays decompiler license requirement in idalib skill
- 09818f89: Merge pull request #62 from williballenthin/claude/remove-api-key-log…
- 2b25942f: Simplify install-ida.sh by removing file logging
- 24942f7d: Remove credential clearing logic from install-ida.sh
- f9ad6220: Merge pull request #61 from williballenthin/claude/test-idapro-import…
- 436ab3a6: Add ida-domain support and improve skill documentation
- 5dc5124f: Restructure as proper Claude Code skill
- 50fc7c6d: Remove session start hook
- bdf0a3f4: Move IDA Pro installation from session hook to skill
- e1b8d367: Merge pull request #60 from williballenthin/claude/verify-ida-setup-4…
- 7b8d42b4: Remove py-activate-idalib steps
- f04f0df4: Remove existing IDA installation discovery
- fa0add54: Add debug logging to session start hook
- eab18712: Merge pull request #59 from williballenthin/claude/update-ida-hcli-in…
- c26c26a4: Update IDA session hook to use uv pip install ida-hcli
- 9881c787: Merge pull request #58 from williballenthin/claude/add-ida-session-ho…
- f9b3470b: Add session start hook for IDA Pro development in Claude Code web
- msc-thesis-LLMs-to-rank-decompilers
- 5dc8698f: Remove obsolete output files and update extraction script for better …
- suture
- d0b27285: added: support for stack structures added: StackRuleSet added: tests …
- Unicorn-Trace
-
🔗 Simon Willison Fly's new Sprites.dev addresses both developer sandboxes and API sandboxes at the same time rss
New from Fly.io today: Sprites.dev. Here's their blog post and YouTube demo. It's an interesting new product that's quite difficult to explain - Fly call it "Stateful sandbox environments with checkpoint & restore" but I see it as hitting two of my current favorite problems: a safe development environment for running coding agents and an API for running untrusted code in a secure sandbox.
Disclosure: Fly sponsor some of my work. They did not ask me to write about Sprites and I didn't get preview access prior to the launch. My enthusiasm here is genuine.
- Developer sandboxes
- Storage and checkpoints
- Really clever use of Claude Skills
- A sandbox API
- Scale-to-zero billing
- Two of my favorite problems at once
Developer sandboxes
I predicted earlier this week that "we’re due a Challenger disaster with respect to coding agent security" due to the terrifying way most of us are using coding agents like Claude Code and Codex CLI. Running them in
`--dangerously-skip-permissions` mode (aka YOLO mode, where the agent acts without constantly seeking approval first) unlocks so much more power, but also means that a mistake or a malicious prompt injection can cause all sorts of damage to your system and data.
The safe way to run YOLO mode is in a robust sandbox, where the worst thing that can happen is the sandbox gets messed up and you have to throw it away and get another one.
That's the first problem Sprites solves:
```
curl https://sprites.dev/install.sh | bash
sprite login
sprite create my-dev-environment
sprite console -s my-dev-environment
```

That's all it takes to get SSH connected to a fresh environment, running in an ~8GB RAM, 8 CPU server. And... Claude Code and Codex and Gemini CLI and Python 3.13 and Node.js 22.20 and a bunch of other tools are already installed.
The first time you run `claude` it neatly signs you in to your existing account with Anthropic. The Sprites VM is persistent so future runs of `sprite console -s` will get you back to where you were before.

... and it automatically sets up port forwarding, so you can run a localhost server on your Sprite and access it from `localhost:8080` on your machine.

There's also a command you can run to assign a public URL to your Sprite, so anyone else can access it if they know the secret URL.
Storage and checkpoints
In the blog post Kurt Mackey argues that ephemeral, disposable sandboxes are not the best fit for coding agents:
The state of the art in agent isolation is a read-only sandbox. At Fly.io, we’ve been selling that story for years, and we’re calling it: ephemeral sandboxes are obsolete. Stop killing your sandboxes every time you use them. [...]
If you force an agent to, it’ll work around containerization and do work. But you’re not helping the agent in any way by doing that. They don’t want containers. They don’t want “sandboxes”. They want computers.
[...] with an actual computer, Claude doesn’t have to rebuild my entire development environment every time I pick up a PR.
Each Sprite gets a proper filesystem which persists in between sessions, even while the Sprite itself shuts down after inactivity. It sounds like they're doing some clever filesystem tricks here, I'm looking forward to learning more about those in the future.
There are some clues on the homepage:
You read and write to fast, directly attached NVMe storage. Your data then gets written to durable, external object storage. [...]
You don't pay for allocated filesystem space, just the blocks you write. And it's all TRIM friendly, so your bill goes down when you delete things.
The really clever feature is checkpoints. You (or your coding agent) can trigger a checkpoint which takes around 300ms. This captures the entire disk state and can then be rolled back to later.
For more on how that works, run this in a Sprite:
```
cat /.sprite/docs/agent-context.md
```

Here's the relevant section:

```
## Checkpoints
- Point-in-time checkpoints and restores available
- Copy-on-write implementation for storage efficiency
- Last 5 checkpoints mounted at `/.sprite/checkpoints`
- Checkpoints capture only the writable overlay, not the base image
```

Or run this to see the `--help` for the command used to manage them:

```
sprite-env checkpoints --help
```

Which looks like this:

```
sprite-env checkpoints - Manage environment checkpoints

USAGE:
  sprite-env checkpoints <subcommand> [options]

SUBCOMMANDS:
  list [--history <ver>]  List all checkpoints (optionally filter by history version)
  get <id>                Get checkpoint details (e.g., v0, v1, v2)
  create                  Create a new checkpoint (auto-versioned)
  restore <id>            Restore from a checkpoint (e.g., v1)

NOTE: Checkpoints are versioned as v0, v1, v2, etc. Restore returns immediately
and triggers an async restore that restarts the environment. The last 5
checkpoints are mounted at /.sprite/checkpoints for direct file access.

EXAMPLES:
  sprite-env checkpoints list
  sprite-env checkpoints list --history v1.2.3
  sprite-env checkpoints get v2
  sprite-env checkpoints create
  sprite-env checkpoints restore v1
```

Really clever use of Claude Skills
I'm a big fan of Skills, the mechanism whereby Claude Code (and increasingly other agents too) can be given additional capabilities by describing them in Markdown files in a specific directory structure.
In a smart piece of design, Sprites uses pre-installed skills to teach Claude how Sprites itself works. This means you can ask Claude on the machine how to do things like open up ports and it will talk you through the process.
There's all sorts of interesting stuff in the `/.sprite` folder on that machine - digging in there is a great way to learn more about how Sprites works.

A sandbox API
Also from my predictions post earlier this week: "We’re finally going to solve sandboxing". I am obsessed with this problem: I want to be able to run untrusted code safely, both on my personal devices and in the context of web services I'm building for other people to use.
I have so many things I want to build that depend on being able to take untrusted code - from users or from LLMs or from LLMs-driven-by-users - and run that code in a sandbox where I can be confident that the blast radius if something goes wrong is tightly contained.
Sprites offers a clean JSON API for doing exactly that, plus client libraries in Go and TypeScript and coming-soon Python and Elixir.
From their quick start:
```
# Create a new sprite
curl -X PUT https://api.sprites.dev/v1/sprites/my-sprite \
  -H "Authorization: Bearer $SPRITES_TOKEN"

# Execute a command
curl -X POST https://api.sprites.dev/v1/sprites/my-sprite/exec \
  -H "Authorization: Bearer $SPRITES_TOKEN" \
  -d '{"command": "echo hello"}'
```

You can also checkpoint and rollback via the API, so you can get your environment exactly how you like it, checkpoint it, run a bunch of untrusted code, then roll back to the clean checkpoint when you're done.
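That checkpoint-run-rollback loop is easy to script. Here's a minimal Python sketch that just builds the request descriptors for such a workflow; only the `exec` endpoint appears in the quick start, so the `/checkpoints` paths and payloads below are my assumptions, not documented API:

```python
import json

API = "https://api.sprites.dev/v1/sprites"

def build_request(method, path, token, body=None):
    """Build a request descriptor for the Sprites API (no network I/O here)."""
    req = {
        "method": method,
        "url": f"{API}/{path}",
        "headers": {"Authorization": f"Bearer {token}"},
    }
    if body is not None:
        req["headers"]["Content-Type"] = "application/json"
        req["body"] = json.dumps(body)
    return req

def untrusted_run_plan(sprite, token, command):
    """Checkpoint, run untrusted code, then restore the clean checkpoint.

    NOTE: the checkpoint/restore paths are guesses based on the CLI's
    v0/v1/v2 versioning; check the real API reference before relying on them.
    """
    return [
        build_request("POST", f"{sprite}/checkpoints", token),                # snapshot clean state
        build_request("POST", f"{sprite}/exec", token, {"command": command}), # run untrusted code
        build_request("POST", f"{sprite}/checkpoints/v0/restore", token),     # roll back afterwards
    ]

plan = untrusted_run_plan("my-sprite", "tok", "echo hello")
```

Feeding each descriptor to any HTTP client (curl, `requests`, fetch) reproduces the quick-start calls above.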
Managing network access is an important part of maintaining a good sandbox. The Sprites API lets you configure network access policies using a DNS-based allow/deny list like this:
```
curl -X POST \
  "https://api.sprites.dev/v1/sprites/{name}/policy/network" \
  -H "Authorization: Bearer $SPRITES_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "rules": [
      { "action": "allow", "domain": "github.com" },
      { "action": "allow", "domain": "*.npmjs.org" }
    ]
  }'
```
Scale-to-zero billing
Sprites have scale-to-zero baked into the architecture. They go to sleep after 30 seconds of inactivity, wake up quickly when needed and bill you for just the CPU hours, RAM hours and GB-hours of storage you use while the Sprite is awake.
Fly estimate a 4 hour intensive coding session as costing around 46 cents, and a low traffic web app with 30 hours of wake time per month at ~$4.
(I calculate that a web app that consumes all 8 CPUs and all 8GBs of RAM 24/7 for a month would cost ((7 cents * 8 * 24 * 30) + (4.375 cents * 8 * 24 * 30)) / 100 = $655.2 per month, so don't necessarily use these as your primary web hosting solution for an app that soaks up all available CPU and RAM!)
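The worst-case figure checks out; a quick sanity check of the arithmetic (the per-unit cent rates are the ones used in the calculation above, not taken from Fly's pricing page):

```python
# Hypothetical worst case: all 8 CPUs and all 8 GB of RAM busy 24/7 for a
# 30-day month. Rates as in the post: 7 cents per CPU-hour, 4.375 cents per GB-hour.
hours = 24 * 30
cpu_cents = 7 * 8 * hours        # 8 CPUs for 720 hours
ram_cents = 4.375 * 8 * hours    # 8 GB for 720 hours
total_dollars = (cpu_cents + ram_cents) / 100
print(total_dollars)  # 655.2
```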
Two of my favorite problems at once
I was hopeful that Fly would enter the developer-friendly sandbox API market, especially given other entrants from companies like Cloudflare and Modal and E2B.
I did not expect that they'd tackle the developer sandbox problem at the same time, and with the same product!
My one concern here is that it makes the product itself a little harder to explain.
I'm already spinning up some prototypes of sandbox-adjacent things I've always wanted to build, and early signs are very promising. I'll write more about these as they turn into useful projects.
Update: Here's some additional colour from Thomas Ptacek on Hacker News:
This has been in the works for quite awhile here. We put a long bet on "slow create fast start/stop" --- which is a really interesting and useful shape for execution environments --- but it didn't make sense to sandboxers, so "fast create" has been the White Whale at Fly.io for over a year.
-
🔗 r/reverseengineering Hacking Denuvo rss
submitted by /u/RazerOG
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
sync repo: +1 plugin, +1 release
## New plugins
- [Suture](https://github.com/libtero/suture) (1.0.0) -
🔗 HexRaysSA/plugin-repository commits Merge pull request #16 from 19h/v1 rss
Merge pull request #16 from 19h/v1 chore: Register libtero and 19h IDA tools in known repositories -
🔗 r/LocalLLaMA I clustered 3 DGX Sparks that NVIDIA said couldn't be clustered yet...took 1500 lines of C to make it work rss
NVIDIA officially supports clustering two DGX Sparks together. I wanted three. The problem: each Spark has two 100Gbps ConnectX-7 ports. In a 3-node triangle mesh, each link ends up on a different subnet. NCCL's built-in networking assumes all peers are reachable from a single NIC. It just... doesn't work. So I wrote a custom NCCL network plugin from scratch.
What it does:
- Subnet-aware NIC selection (picks the right NIC for each peer)
- Raw RDMA verbs implementation (QP state machines, memory registration, completion queues)
- Custom TCP handshake protocol to avoid deadlocks
- ~1500 lines of C
The result: Distributed inference across all 3 nodes at 8+ GB/s over RDMA. The NVIDIA support tier I'm currently on:
```
├── Supported configs ✓
├── "Should work" configs
├── "You're on your own" configs
├── "Please don't call us" configs
├── "How did you even..." configs
└── You are here → "Writing custom NCCL plugins to cluster standalone workstations over a hand-wired RDMA mesh"
```

GitHub link: https://github.com/autoscriptlabs/nccl-mesh-plugin

Happy to answer questions about the implementation. This was a mass of low-level debugging (segfaults, RDMA state machine issues, GID table problems) but it works.

submitted by /u/Ok-Pomegranate1314
[link] [comments]
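The plugin itself is C, but the core trick of subnet-aware NIC selection is easy to illustrate. A minimal Python sketch (my own illustration with hypothetical addresses, not code from the linked repo): for each peer, pick the local NIC whose subnet contains the peer's address, which is exactly the decision NCCL's built-in net skips when it assumes one NIC reaches everyone.

```python
import ipaddress

# Hypothetical triangle-mesh wiring: each node's two ConnectX-7 ports
# sit on different point-to-point subnets.
LOCAL_NICS = {
    "mlx5_0": ipaddress.ip_interface("10.0.12.1/24"),  # link to node 2
    "mlx5_1": ipaddress.ip_interface("10.0.13.1/24"),  # link to node 3
}

def pick_nic(peer_addr: str) -> str:
    """Return the local NIC that shares a subnet with the peer."""
    peer = ipaddress.ip_address(peer_addr)
    for name, iface in LOCAL_NICS.items():
        if peer in iface.network:
            return name
    raise LookupError(f"no local NIC shares a subnet with {peer_addr}")

print(pick_nic("10.0.12.2"))  # mlx5_0
print(pick_nic("10.0.13.3"))  # mlx5_1
```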
-
🔗 badlogic/pi-mono v0.42.1 release
-
🔗 r/LocalLLaMA RTX Blackwell Pro 6000 wholesale pricing has dropped by $150-200 rss
Obviously the RTX Blackwell Pro 6000 cards are of great interest to the people here. I see them come up a lot. And we all ooh and ahh over the people that have 8 of them lined up in a nice row.
It also seems to me like the market is suffering from lack of transparency on these.
My employer buys these cards wholesale, and I can see current pricing and stock in our distributors' systems. (And I may have slipped in an order for one for myself...) It's eye-opening.
I'm probably not supposed to disclose the exact price we buy these at. But I wanted people to know that unlike everything else with RAM in it, the wholesale price of these has dropped by about ~$150-200 from December to January.
I will also say that the wholesale price for the 6000 Pro is only about $600 higher than the wholesale price for the new 72GiB 5000 Pro. So, for the love of god, please don't buy that!
(And no, this is not marketing or an ad; I cannot sell anyone these cards at any price. I would be fired immediately. I just want people to have the best available information when they're looking to buy something this expensive.)
submitted by /u/TastesLikeOwlbear
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits chore: Register libtero and 19h IDA tools in known repositories rss
chore: Register libtero and 19h IDA tools in known repositories
known-repositories.txt (modified):
- Added three repositories from user libtero: suture, graphviewer, and idaguides
- Added four repositories from user 19h: ida-lifter, ida-codedump, ida-semray, idalib-dump, chernobog
Impact:
- Expands the tracking list to include additional IDA Pro related utilities, specifically focusing on lifting, dumping, deobfuscation, and graph visualization tools. -
🔗 r/LocalLLaMA The reason why RAM has become so expensive rss
submitted by /u/InvadersMustLive
[link] [comments]
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
sync repo: +1 plugin, +1 release
## New plugins
- [BinSync](https://github.com/binsync/binsync) (5.10.1) -
🔗 r/LocalLLaMA DeepSeek V4 Coming rss
According to two people with direct knowledge, DeepSeek is expected to roll out a next‑generation flagship AI model in the coming weeks that focuses on strong code‑generation capabilities.
The two sources said the model, codenamed V4, is an iteration of the V3 model DeepSeek released in December 2024. Preliminary internal benchmark tests conducted by DeepSeek employees indicate the model outperforms existing mainstream models in code generation, including Anthropic’s Claude and the OpenAI GPT family.
The sources said the V4 model achieves a technical breakthrough in handling and parsing very long code prompts, a significant practical advantage for engineers working on complex software projects. They also said the model’s ability to understand data patterns across the full training pipeline has been improved and that no degradation in performance has been observed.
One of the insiders said users may find that V4’s outputs are more logically rigorous and clear, a trait that indicates the model has stronger reasoning ability and will be much more reliable when performing complex tasks.
submitted by /u/External_Mood4719
[link] [comments] -
🔗 @malcat@infosec.exchange [#kesakode](https://infosec.exchange/tags/kesakode) DB update to 1.0.48: mastodon
#kesakode DB update to 1.0.48:
● new sigs: Crazyhunter, Echogather, IranBot, MaskGramStealer, PulsarRat and Themeforestrat
● 9 existing entries updated
● FP-fixed signatures: 82
● 1146 new clean programs whitelisted
● +527K unique functions
● +700K unique strings -
🔗 r/LocalLLaMA (The Information): DeepSeek To Release Next Flagship AI Model With Strong Coding Ability rss
(paywall): https://www.theinformation.com/articles/deepseek-release-next-flagship-ai-model-strong-coding-ability submitted by /u/Nunki08
[link] [comments]
-
🔗 pranshuparmar/witr v0.2.2 release
What's Changed
- Fix: sudo command usage in install.sh (introduced in 7ee5907) by @jinks908 in #132
- docs: Mention windows for conda-forge installation by @pavelzw in #133
- Main PR by @pranshuparmar in #134
- fix: escape ansi control characters to avoid security issues by @ggmolly in #117
- Feat/extended darwin by @chojs23 in #135
- Main PR by @pranshuparmar in #136
New Contributors
Full Changelog: v0.2.1...v0.2.2
-
🔗 r/LocalLLaMA Big tech companies, now "DRAM beggars," are staying in Pangyo and Pyeongtaek, demanding "give us some supplies." rss
Not a Korean speaker. Came across this in another sub. The TLDR is that everyone is scrambling to buy as much as they can as soon as they can, because "demanding a 50-60% increase in server DRAM supply prices from the previous quarter during their first-quarter negotiations with customers". Per the article, DDR4 prices went up from $1.40 last January to $9.30 in December (my interpretation is $/GB). If they're increasing by another 50%, that's almost $14/GB!!! So, 1TB of DDR4-3200 will cost north of $14k by Q2 if this is true 🤯 In case anyone thought things weren't already bad, it's going to get much much worse this year.

Here's the full Google translate of the article:

DRAM, a type of memory semiconductor, was the key driver behind Samsung Electronics' first-quarter operating profit surpassing 20 trillion won. DRAM products, including high-bandwidth memory (HBM), are a core component of the computing infrastructure supporting the artificial intelligence (AI) era. The semiconductor industry predicts that the DRAM shortage, which began in earnest in the second half of last year, will continue until the end of this year, with prices also expected to continue rising.

Samsung Electronics and SK Hynix, major suppliers of DRAM, are reportedly demanding a 50-60% increase in server DRAM supply prices from the previous quarter during their first-quarter negotiations with customers. A semiconductor industry insider reported, "Even with significantly higher prices, the prevailing sentiment is 'let's buy as much as we can before it gets more expensive.'" Recently, semiconductor purchasing managers from Silicon Valley tech companies, nicknamed "DRAM Beggars," have been reportedly competing fiercely to secure remaining DRAM inventory at hotels in the Pangyo and Pyeongtaek areas. The semiconductor industry analyzes that "the demand that was initially focused on HBM in the early days of the AI craze is now spreading to server DRAM, creating an unprecedented semiconductor boom."

DRAM is a semiconductor that manages a computer's "short-term memory." It stores and quickly transmits necessary data when the central processing unit (CPU), the brain, performs tasks. HBM is specialized for seamlessly delivering the massive data required for AI by increasing the data transmission path (bandwidth) dozens of times compared to conventional DRAM. However, HBM is extremely expensive and has limitations in increasing capacity. This explains why big tech companies are scrambling to secure server DRAM products to store more data.

The average contract price of DRAM soared from $1.40 (based on 8GB DDR4) in January last year to $9.30 in December. This marks the first time in seven years and four months that DRAM prices have surpassed the $9 threshold. Kim Dong-won, head of the research center at KB Securities, said, "Due to this price increase, the operating profit margin (the ratio of operating profit to sales) of some general-purpose memories (widely used standard memories) is expected to reach 70%, and DDR5 may even surpass the margin of HBM3E. This year, semiconductor companies' performance is expected to be determined by general-purpose memories."

submitted by /u/FullstackSensei
[link] [comments]
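The poster's back-of-the-envelope projection is straightforward to check, treating the quoted contract price as $/GB, which is the poster's own interpretation (the article quotes it "based on 8GB DDR4"):

```python
# December contract price, read as $/GB (the poster's interpretation)
dec_price = 9.30
projected = dec_price * 1.5   # the rumored +50% in Q1 negotiations
tb_cost = projected * 1024    # 1 TiB of DDR4 at that rate: about $14k
print(round(projected, 2), round(tb_cost))
```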
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +2 releases, ~1 changed rss
sync repo: +1 plugin, +2 releases, ~1 changed
## New plugins
- [ida-security-scanner](https://github.com/SymbioticSec/ida-security-scanner) (0.1.2, 0.0.1)
## Changes
- [unicorn-tracer-arm64](https://github.com/chenxvb/Unicorn-Trace):
  - 0.1: archive contents changed, download URL changed -
🔗 HexRaysSA/plugin-repository commits Merge pull request #15 from Anthony-Bondu/patch-1 rss
Merge pull request #15 from Anthony-Bondu/patch-1 Add SymbioticSec/ida-security-scanner to known repositories -
🔗 HexRaysSA/plugin-repository commits Add SymbioticSec/ida-security-scanner to known repositories rss
Add SymbioticSec/ida-security-scanner to known repositories Manually added a new repository entry for 'SymbioticSec/ida-security-scanner'. -
🔗 badlogic/pi-mono v0.42.0 release
Added
- Added OpenCode Zen provider support. Set `OPENCODE_API_KEY` env var and use `opencode/<model-id>` (e.g., `opencode/claude-opus-4-5`).
-
🔗 badlogic/pi-mono v0.41.0 release
Added
- Anthropic OAuth support is back! Use `/login` to authenticate with your Claude Pro/Max subscription.
-
🔗 @cxiao@infosec.exchange RE: mastodon
RE: https://infosec.exchange/@watchTowr/115860948823554212
spoiler alert it's ../ AGAIN 😭😭😭😭
-
🔗 badlogic/pi-mono v0.40.1 release
Removed
- Anthropic OAuth support (`/login`). Use API keys instead.
-
🔗 r/LocalLLaMA OK I get it, now I love llama.cpp rss
I just made the switch from Ollama to llama.cpp. Ollama is fantastic for the beginner because it lets you super easily run LLMs and switch between them all. Once you realize what you truly want to run, llama.cpp is really the way to go.
My hardware ain't great, I have a single 3060 12GB GPU and three P102-100 GPUs for a total of 42GB. My system RAM is 96GB along with an Intel i7-9800x. It blows my mind what a difference some tuning can make. You really need to understand each of the commands for llama.cpp to get the most out of it, especially with uneven VRAM like mine. I used ChatGPT and Perplexity, but surprisingly only Google AI Studio could optimize my settings while teaching me along the way.
Crazy how these two commands both fill up the ram but one is twice as fast as the other. Chatgpt helped me with the first one, Google AI with the other ;). Now I'm happy running local lol.
11t/s:
```
sudo pkill -f llama-server; sudo nvidia-smi --gpu-reset -i 0,1,2,3 || true; sleep 5; \
sudo CUDA_VISIBLE_DEVICES=0,1,2,3 ./llama-server \
  --model /home/llm/llama.cpp/models/gpt-oss-120b/Q4_K_M/gpt-oss-120b-Q4_K_M-00001-of-00002.gguf \
  --n-gpu-layers 21 --main-gpu 0 --flash-attn off --cache-type-k q8_0 --cache-type-v f16 \
  --ctx-size 30000 --port 8080 --host 0.0.0.0 --mmap --numa distribute \
  --batch-size 384 --ubatch-size 256 --jinja --threads $(nproc) --parallel 2 \
  --tensor-split 12,10,10,10 --mlock
```

21t/s:

```
sudo pkill -f llama-server; sudo nvidia-smi --gpu-reset -i 0,1,2,3 || true; sleep 5; \
sudo GGML_CUDA_ENABLE_UNIFIED_MEMORY=0 CUDA_VISIBLE_DEVICES=0,1,2,3 ./llama-server \
  --model /home/llm/llama.cpp/models/gpt-oss-120b/Q4_K_M/gpt-oss-120b-Q4_K_M-00001-of-00002.gguf \
  --n-gpu-layers 99 --main-gpu 0 --split-mode layer --tensor-split 5,5,6,20 \
  -ot "blk\.(2[1-9]|[3-9][0-9])\.ffn_.*_exps\.weight=CPU" \
  --ctx-size 30000 --port 8080 --host 0.0.0.0 \
  --batch-size 512 --ubatch-size 256 --threads 8 --parallel 1 --mlock
```

Nothing here is worth copying and pasting as it is unique to my config but the moral of the story is, if you tune llama.cpp this thing will FLY!
submitted by /u/vulcan4d
[link] [comments] -
🔗 Ampcode News Agents Panel rss

The Amp editor extension now has a new panel to view and manage all active agent threads.
You can use the keyboard to navigate between threads:
- `j`/`k` or arrow keys to move between threads
- `Space` to expand a thread panel to show the last message or tool result
- `Enter` to open a thread
- `e` to archive or unarchive a thread
- `Esc` to toggle focus between the thread list and the input, which starts new threads

We recommend archiving old threads so the displayed threads represent your working set. You can use Archive Old Threads from the Amp command palette (Cmd-K from the Amp panel) to archive threads older than 72 hours.

As coding agents improve and require less direct human oversight, more time will be spent by humans in managing and orchestrating work across multiple agent threads. We'll have more to share soon.
To get started, click the button on the left end of the navbar or use Cmd-Opt-I (macOS) or Ctrl-Alt-I (Windows/Linux).
-
- January 08, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-08 rss
IDA Plugin Updates on 2026-01-08
New Releases:
Activity:
- DriverBuddy-7.4-plus
- 3d8ad303: Sync auto-tag-based-review.yml from .github repo
- e774e4fc: Sync auto-llm-issue-review.yml from .github repo
- 96ded052: Sync auto-advance-ball.yml from .github repo
- f85aa8ad: Sync auto-copilot-functionality-docs-review.yml from .github repo
- 9ba310a3: Sync auto-label-comment-prs.yml from .github repo
- 074368c9: Sync auto-close-issues.yml from .github repo
- 4284808c: Sync auto-assign-pr.yml from .github repo
- 59a93cef: Sync auto-assign-copilot.yml from .github repo
- 7693bcee: Sync trigger-all-repos.yml from .github repo
- 49baebb4: Sync auto-sec-scan.yml from .github repo
- f19df2f1: Sync workflows-sync-template-backup.yml from .github repo
- c4998750: Sync auto-label.yml from .github repo
- 59cc4361: Sync auto-llm-pr-review.yml from .github repo
- 93829e8b: Sync auto-copilot-code-cleanliness-review.yml from .github repo
- a61ff536: Sync auto-tag-based-review.yml from .github repo
- e52f2d12: Sync auto-llm-issue-review.yml from .github repo
- b6ae4d1e: Sync auto-advance-ball.yml from .github repo
- 59a6fca8: Sync auto-gpt5-implementation.yml from .github repo
- c64529f6: Sync auto-copilot-org-playwright-loop.yaml from .github repo
- 9334def1: Sync auto-copilot-functionality-docs-review.yml from .github repo
- ghidra
- ida-hcli
- ida-structor
- fa17365f: feat: Implement Z3-based constraint solver for structure synthesis
- f8c836bf: Amend readme…
- 0321cba9: feat: Migrate configuration from IDB netnodes to global INI file
- 82de087a: refactor: Replace UIIntegration singleton with stateless namespace fu…
- ba48dbd3: fix: Listen for ui_database_closed to ensure safe cleanup
- a30cb60c: feat: Implement Structor plugin for automated structure synthesis
- ida2llvm
- 85b440d7: PySide6 quick change
- IDAPluginList
- dbc907dd: Update
- suture
- DriverBuddy-7.4-plus
-
🔗 badlogic/pi-mono v0.40.0 release
Added
- Documentation on component invalidation and theme changes in `docs/tui.md`
Fixed
- Components now properly rebuild their content on theme change (tool executions, assistant messages, bash executions, custom messages, branch/compaction summaries)
-
🔗 badlogic/pi-mono v0.39.1 release
Fixed
- `setTheme()` now triggers a full rerender so previously rendered components update with the new theme colors
- `mac-system-theme.ts` example now polls every 2 seconds and uses `osascript` for real-time macOS appearance detection
-
🔗 badlogic/pi-mono v0.39.0 release
Breaking Changes
- `before_agent_start` event now receives `systemPrompt` in the event object and returns `systemPrompt` (full replacement) instead of `systemPromptAppend`. Extensions that were appending must now use the `event.systemPrompt + extra` pattern. (#575)
- `discoverSkills()` now returns `{ skills: Skill[], warnings: SkillWarning[] }` instead of `Skill[]`. This allows callers to handle skill loading warnings. (#577 by @cv)
Added
- `ctx.ui.getAllThemes()`, `ctx.ui.getTheme(name)`, and `ctx.ui.setTheme(name | Theme)` methods for extensions to list, load, and switch themes at runtime (#576)
- `--no-tools` flag to disable all built-in tools, allowing extension-only tool setups (#557 by @cv)
- Pluggable operations for built-in tools enabling remote execution via SSH or other transports (#564). Interfaces: `ReadOperations`, `WriteOperations`, `EditOperations`, `BashOperations`, `LsOperations`, `GrepOperations`, `FindOperations`
- `user_bash` event for intercepting user `!`/`!!` commands, allowing extensions to redirect to remote systems (#528)
- `setActiveTools()` in ExtensionAPI for dynamic tool management
- Built-in renderers used automatically for tool overrides without custom `renderCall`/`renderResult`
- `ssh.ts` example: remote tool execution via `--ssh user@host:/path`
- `interactive-shell.ts` example: run interactive commands (vim, git rebase, htop) with full terminal access via `!i` prefix or auto-detection
- Wayland clipboard support for the `/copy` command using wl-copy with xclip/xsel fallback (#570 by @OgulcanCelik)
- Experimental: `ctx.ui.custom()` now accepts an `{ overlay: true }` option for floating modal components that composite over existing content without clearing the screen (#558 by @nicobailon)
- `AgentSession.skills` and `AgentSession.skillWarnings` properties to access loaded skills without rediscovery (#577 by @cv)
Fixed
- String `systemPrompt` in `createAgentSession()` now works as a full replacement instead of having context files and skills appended, matching documented behavior (#543)
- Update notification for bun binary installs now shows release download URL instead of npm command (#567 by @ferologics)
- ESC key now works during "Working..." state after auto-retry (#568 by @tmustier)
- Abort messages now show correct retry attempt count (e.g., "Aborted after 2 retry attempts") (#568 by @tmustier)
- Fixed Antigravity provider returning 429 errors despite available quota (#571 by @ben-vargas)
- Fixed malformed thinking text in Gemini/Antigravity responses where thinking content appeared as regular text or vice versa. Cross-model conversations now properly convert thinking blocks to plain text. (#561)
- `--no-skills` flag now correctly prevents skills from loading in interactive mode (#577 by @cv)
-
🔗 r/LocalLLaMA The NO FAKES Act has a "Fingerprinting" Trap that kills Open Source. We need to lobby for a Safe Harbor. rss
Hey everyone, I’ve been reading the text of the "NO FAKES Act" currently in Congress, and it’s worse than I thought. The Tldr: It creates a "digital replica right" for voices/likenesses. That sounds fine for stopping deepfake porn, but the liability language is a trap. It targets anyone who "makes available" a tool that is primarily used for replicas.
The Problem: If you release a TTS model or a voice-conversion RVC model on HuggingFace, and someone else uses it to fake a celebrity, you (the dev) can be liable for statutory damages ($5k-$25k per violation). There is no Section 230 protection here. This effectively makes hosting open weights for audio models a legal s*icide mission unless you are OpenAI or Google.
What I did: I contacted my reps by email to flag this as an "innovation killer." If you run a repo or care about open weights, you might want to do the same. We need them to add a "Safe Harbor" for tool devs.
S.1367 - 119th Congress (2025-2026): NO FAKES Act of 2025 | Congress.gov | Library of Congress https://share.google/u6dpy7ZQDvZWUrlfc
UPDATE: ACTION ITEMS (How to actually stop this). If you don't want to go to jail for hosting a repo, you need to make noise now.
1. The "Lazy" Email (Takes 30 seconds): Go to Democracy.io or your Senator’s contact page. Subject: Opposition to NO FAKES Act (H.R. 2794 / S. 1367) - Open Source Liability. Message: "I am a constituent and software engineer. I oppose the NO FAKES Act unless it includes a specific Safe Harbor for Open Source Code Repositories. The current 'Digital Fingerprinting' requirement (Section 3) is technically impossible for raw model weights to comply with. This bill effectively bans open-source AI hosting in the US and hands a monopoly to Big Tech. Please amend it to protect tool developers."
2. The "Nuclear" Option (Call them): Call the Capitol Switchboard: (202) 224-3121. Ask for Senators Wyden (D) or Massie (R) if you want to thank them for being tech-literate, or call your own Senator to complain. Script: "The NO FAKES Act kills open-source innovation. We need a Safe Harbor for developers who write code, separate from the bad actors who use it."
submitted by /u/PostEasy7183
[link] [comments] -
🔗 r/LocalLLaMA Z.ai (the AI lab behind GLM) has officially IPO'd on the Hong Kong Stock Exchange rss
submitted by /u/Old-School8916
[link] [comments] -
🔗 Simon Willison LLM predictions for 2026, shared with Oxide and Friends rss
I joined a recording of the Oxide and Friends podcast on Tuesday to talk about 1, 3 and 6 year predictions for the tech industry. This is my second appearance on their annual predictions episode, you can see my predictions from January 2025 here. Here's the page for this year's episode, with options to listen in all of your favorite podcast apps or directly on YouTube.
Bryan Cantrill started the episode by declaring that he's never been so unsure about what's coming in the next year. I share that uncertainty - the significant advances in coding agents just in the last two months have left me certain that things will change significantly, but unclear as to what those changes will be.
Here are the predictions I shared in the episode.
- 1 year: It will become undeniable that LLMs write good code
- 1 year: We're finally going to solve sandboxing
- 1 year: A "Challenger disaster" for coding agent security
- 1 year: Kākāpō parrots will have an outstanding breeding season
- 3 years: the coding agents Jevons paradox for software engineering will resolve, one way or the other
- 3 years: Someone will build a new browser using mainly AI-assisted coding and it won't even be a surprise
- 6 years: Typing code by hand will go the way of punch cards
1 year: It will become undeniable that LLMs write good code
I think that there are still people out there who are convinced that LLMs cannot write good code. Those people are in for a very nasty shock in 2026. I do not think it will be possible to get to the end of even the next three months while still holding on to the idea that the code they write is all junk and that any decent human programmer will write better code than they will.
In 2023, saying that LLMs write garbage code was entirely correct. For most of 2024 that stayed true. In 2025 that changed, but you could be forgiven for continuing to hold out. In 2026 the quality of LLM-generated code will become impossible to deny.
I base this on my own experience - I've spent more time exploring AI-assisted programming than most.
The key change in 2025 (see my overview for the year) was the introduction of "reasoning models" trained specifically against code using Reinforcement Learning. The major labs spent a full year competing with each other on who could get the best code capabilities from their models, and that problem turns out to be perfectly attuned to RL since code challenges come with built-in verifiable success conditions.
Since Claude Opus 4.5 and GPT-5.2 came out in November and December respectively the amount of code I've written by hand has dropped to a single digit percentage of my overall output. The same is true for many other expert programmers I know.
At this point if you continue to argue that LLMs write useless code you're damaging your own credibility.
1 year: We're finally going to solve sandboxing
I think this year is the year we're going to solve sandboxing. I want to run code other people have written on my computing devices without it destroying my computing devices if it's malicious or has bugs. [...] It's crazy that it's 2026 and I still `pip install` random code and then execute it in a way that it can steal all of my data and delete all my files. [...] I don't want to run a piece of code on any of my devices that somebody else wrote outside of a sandbox ever again.
This isn't just about LLMs, but it becomes even more important now there are so many more people writing code often without knowing what they're doing. Sandboxing is also a key part of the battle against prompt injection.
We have a lot of promising technologies in play already for this - containers and WebAssembly being the two I'm most optimistic about. There's real commercial value involved in solving this problem. The pieces are there, what's needed is UX work to reduce the friction in using them productively and securely.
1 year: A "Challenger disaster" for coding agent security
I think we're due a Challenger disaster with respect to coding agent security[...] I think so many people, myself included, are running these coding agents practically as root, right? We're letting them do all of this stuff. And every time I do it, my computer doesn't get wiped. I'm like, "oh, it's fine".
I used this as an opportunity to promote my favourite recent essay about AI security, the Normalization of Deviance in AI by Johann Rehberger.
The Normalization of Deviance describes the phenomenon where people and organizations get used to operating in an unsafe manner because nothing bad has happened to them yet, which can result in enormous problems (like the 1986 Challenger disaster) when their luck runs out.
Every six months I predict that a headline-grabbing prompt injection attack is coming soon, and every six months it doesn't happen. This is my most recent version of that prediction!
1 year: Kākāpō parrots will have an outstanding breeding season
(I dropped this one to lighten the mood after a discussion of the deep sense of existential dread that many programmers are feeling right now!)
I think that Kākāpō parrots in New Zealand are going to have an outstanding breeding season. The reason I think this is that the Rimu trees are in fruit right now. There's only 250 of them, and they only breed if the Rimu trees have a good fruiting. The Rimu trees have been terrible since 2019, but this year the Rimu trees were all blooming. There are researchers saying that all 87 females of breeding age might lay an egg. And for a species with only 250 remaining parrots that's great news.
(I just checked Wikipedia and I was right with the parrot numbers but wrong about the last good breeding season, apparently 2022 was a good year too.)
In a year with precious little in the form of good news I am utterly delighted to share this story. Here's more:
- Kākāpō breeding season 2026 introduction from the Department of Conservation, June 2025.
- Bumper breeding season for kākāpō on the cards - 3rd December 2025, University of Auckland.
I don't often use AI-generated images on this blog, but the Kākāpō image the Oxide team created for this episode is just perfect:

3 years: the coding agents Jevons paradox for software engineering will resolve, one way or the other
We will find out if the Jevons paradox saves our careers or not. This is a big question that anyone who's a software engineer has right now: we are driving the cost of actually producing working code down to a fraction of what it used to cost. Does that mean that our careers are completely devalued and we all have to learn to live on a tenth of our incomes, or does it mean that the demand for software, for custom software goes up by a factor of 10 and now our skills are even more valuable because you can hire me and I can build you 10 times the software I used to be able to? I think by three years we will know for sure which way that one went.
The quote says it all. There are two ways this coding agents thing could go: it could turn out software engineering skills are devalued, or it could turn out we're more valuable and effective than ever before.
I'm crossing my fingers for the latter! So far it feels to me like it's working out that way.
3 years: Someone will build a new browser using mainly AI-assisted coding and it won't even be a surprise
I think somebody will have built a full web browser mostly using AI assistance, and it won't even be surprising. Rolling a new web browser is one of the most complicated software projects I can imagine[...] the cheat code is the conformance suites. If there are existing tests that it'll get so much easier.
A common complaint today from AI coding skeptics is that LLMs are fine for toy projects but can't be used for anything large and serious.
I think within 3 years that will be comprehensively proven incorrect, to the point that it won't even be controversial anymore.
I picked a web browser here because so much of the work building a browser involves writing code that has to conform to an enormous and daunting selection of both formal tests and informal websites-in-the-wild.
Coding agents are really good at tasks where you can define a concrete goal and then set them to work iterating in that direction.
A web browser is the most ambitious project I can think of that leans into those capabilities.
6 years: Typing code by hand will go the way of punch cards
I think the job of being paid money to type code into a computer will go the same way as punching punch cards [...] in six years time, I do not think anyone will be paid to just to do the thing where you type the code. I think software engineering will still be an enormous career. I just think the software engineers won't be spending multiple hours of their day in a text editor typing out syntax.
The more time I spend on AI-assisted programming the less afraid I am for my job, because it turns out building software - especially at the rate it's now possible to build - still requires enormous skill, experience and depth of understanding.
The skills are changing though! Being able to read a detailed specification and transform it into lines of code is the thing that's being automated away. What's left is everything else, and the more time I spend working with coding agents the larger that "everything else" becomes.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 @HexRaysSA@infosec.exchange 🔎 Here's another sneak peek! mastodon
🔎 Here's another sneak peek!
IDA 9.3 will expand its decompiler lineup w/ RH850, improve Golang support, update the Microcode Viewer, add the "forbid assignment propagation" feature, and more.
Get the details here: https://hex-rays.com/blog/ida-9.3-expands-decompiler-lineup
-
🔗 apple/embedding-atlas v0.15.0 release
New Features
- Load data from a URL with the "Load Data" button in the home page.
- The previous `/upload` page is renamed to `/app`.
What's Changed
- Upgrade GitHub Actions to latest versions by @salmanmkc in #122
- ci: upgrade GitHub Actions for Node 24 compatibility by @salmanmkc in #121
- ci: revert to recommended version tag by @domoritz in #123
- feat: allow loading data from URL by @donghaoren in #124
- fix: race condition in search and perf issues by @donghaoren in #125
- fix: typo in line chart description by @kwonoh in #129
- fix: visual glitch with downsampled points when points are ordered spatially by @donghaoren in #130
- fix: (security) preact has JSON VNode Injection issue by @donghaoren in #132
- chore: bump version to 0.15.0 by @donghaoren in #133
New Contributors
- @salmanmkc made their first contribution in #122
- @kwonoh made their first contribution in #129
Full Changelog :
v0.14.0...v0.15.0 -
🔗 r/LocalLLaMA Jensen Huang saying "AI" 121 times during the NVIDIA CES keynote - cut with one prompt rss
Someone had to count it. Turns out Jensen said "AI" exactly 121 times in the CES 2025 keynote. I used https://github.com/OpenAgentPlatform/Dive (open-source MCP client) + two MCPs I made:
- https://github.com/kevinwatt/yt-dlp-mcp - YouTube download
- https://github.com/kevinwatt/ffmpeg-mcp-lite - video editing

One prompt:
Task: Create a compilation video of every exact moment Jensen Huang says "AI".
Video source: https://www.youtube.com/watch?v=0NBILspM4c4
Instructions:
- Download video in 720p + subtitles in JSON3 format (word-level timestamps)
- Parse JSON3 to find every "AI" instance with precise start/end times
- Use ffmpeg to cut clips (~50-100ms padding for natural sound)
- Concatenate all clips chronologically
Output: Jensen_CES_AI.mp4

Dive chained the two MCPs together - download → parse timestamps → cut 121 clips → merge. All local, no cloud. If you want to see how it runs: https://www.youtube.com/watch?v=u_7OtyYAX74 The result is... hypnotic.
submitted by /u/Prior-Arm-6705
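The timestamp-extraction step in the prompt can be sketched as a small parser. This is a minimal sketch assuming YouTube's json3 subtitle layout (events with `tStartMs`/`dDurationMs` and per-word `segs` carrying `utf8` text and `tOffsetMs`); the field names are my reading of that format, and the function name is hypothetical, not taken from the post:

```python
import json

def find_word_clips(json3_text: str, word: str, pad_ms: int = 75):
    """Return (start_ms, end_ms) spans for every occurrence of `word`."""
    data = json.loads(json3_text)
    clips = []
    for event in data.get("events", []):
        start = event.get("tStartMs")
        if start is None:
            continue
        segs = event.get("segs") or []
        for i, seg in enumerate(segs):
            if seg.get("utf8", "").strip().lower() != word.lower():
                continue
            word_start = start + seg.get("tOffsetMs", 0)
            # End of the word = start of the next seg, else end of the event.
            if i + 1 < len(segs):
                word_end = start + segs[i + 1].get("tOffsetMs", 0)
            else:
                word_end = start + event.get("dDurationMs", 0)
            clips.append((max(0, word_start - pad_ms), word_end + pad_ms))
    return clips
```

Each `(start, end)` pair would then become one ffmpeg cut (e.g. `-ss`/`-to` with stream copy), with the clips joined via the concat demuxer, as the prompt describes.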
[link] [comments]
-
🔗 pranshuparmar/witr v0.2.1 release
What's Changed
- added condition for sudo checking in installation script by @5p4r70n in #129
- Fix nil map panic in launchd_darwin.go when running without root by @Archangelwu in #130
- Main PR by @pranshuparmar in #131
New Contributors
- @5p4r70n made their first contribution in #129
- @Archangelwu made their first contribution in #130
Full Changelog :
v0.2.0...v0.2.1 -
🔗 gulbanana/gg GG 0.37.0 release
This release is based on Jujutsu 0.37.
Added
- Repository -> Init... and Repository -> Clone... menu items, for creating repositories.
- Progress bar for slow git operations (fetch, push, clone).
- Relative timestamps update on each snapshot (which happen after modifications or when the window/tab is focused).
- GG now respects the `snapshot.auto-update-stale` setting. Additionally, when first opening a repo, it will always update the working copy if it's stale.
Fixed
- In GUI mode, the Repository -> Open... menu item always opened a new window even if you didn't have a workspace loaded in the current window.
-
🔗 r/LocalLLaMA Dialogue Tree Search - MCTS-style tree search to find optimal dialogue paths (so you don't have to trial-and-error it yourself) rss
Hey all! I'm sharing an updated version of my MCTS-for-conversations project. Instead of generating single responses, it explores entire conversation trees to find dialogue strategies and prunes bad paths. I built it to help get better research directions for projects, but it can be used for anything.
https://preview.redd.it/shr3e0liv1cg1.png?width=2560&format=png&auto=webp&s=eec800c6dcd9f1a4fd033d003fe80e102cba8079
Github: https://github.com/MVPandey/DTS
Motivation: I like MCTS :3 and I originally wanted to make this a dataset-creation agent, but this is what it evolved into on its own.
Basically: DTS runs parallel beam search over conversation branches. (Note: this isn't MCTS, it's parallel beam search. UCB1 is too wild with LLMs for me.) You give it a goal and opening message, and it:
- Generates N diverse strategies
- Forks each into user intent variants - skeptical, cooperative, confused, resistant (if enabled, or defaults to engaged + probing)
- Rolls out full multi-turn conversations down each branch
- Has 3 independent LLM judges score each trajectory, takes the median
- Prunes branches below threshold, backpropagates scores
- Repeats for however many rounds you configure
https://preview.redd.it/zkii0idvv1cg1.png?width=762&format=png&auto=webp&s=905f9787a8b7c7bfafcc599e95a3b73005c331b4
Three judges with median voting helps a lot with the LLM-as-judge variance problem from CAE. Still not grounded in anything real, but outlier scores get filtered. Research context helps, but the scoring is still stochastic. I tried a rubric-based approach but it was trash. Main additions over CAE:
- user intent forking (strategies get stress-tested against different personas)
- deep research integration via GPT-Researcher for domain context
- proper visualization with conversation playback
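The median-voting idea described above can be sketched in a few lines; the function name and threshold are illustrative stand-ins, with the LLM judge calls stubbed out as plain numbers:

```python
from statistics import median

def score_trajectory(judge_scores: list[float], threshold: float = 0.5):
    """Score one conversation branch: take the median of independent
    judge scores so a single outlier judge can't sink or inflate it,
    then decide whether the branch survives pruning."""
    m = median(judge_scores)
    return m, m >= threshold

# One judge hallucinates a perfect score; the median filters it out.
print(score_trajectory([0.2, 0.3, 1.0]))  # (0.3, False) -> branch pruned
print(score_trajectory([0.7, 0.6, 0.9]))  # (0.7, True)  -> branch kept
```

With three judges the median simply discards the single most extreme opinion, which is why it dampens LLM-as-judge variance without any calibration.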
Only supports OpenAI-compatible endpoints atm - works with whatever models you have access to there. It's token-hungry though; a full run can hit 300+ LLM calls depending on config. If running locally, disable parallel calls. It's open source (Apache 2.0) and I'm happy to take contributions if anyone wants to help out. Just a project.
BTW: Backend was done mostly by me as the planner/sys designer, etc. + Claude Code for implementation/refactoring. Frontend was purely vibe coded. Sorry if the code is trash.
submitted by /u/ManavTheWorld
[link] [comments]
-
🔗 jj-vcs/jj v0.37.0 release
About
jj is a Git-compatible version control system that is both simple and powerful. See
the installation instructions to get started.Release highlights
- A new syntax for referring to hidden and divergent change IDs is available: `xyz/n` where `n` is a number. For instance, `xyz/0` refers to the latest version of `xyz`, while `xyz/1` refers to the previous version of `xyz`. This allows you to perform actions like `jj restore --from xyz/1 --to xyz` to restore `xyz` to its previous contents, if you made a mistake. For divergent changes, the numeric suffix will always be shown in the log, allowing you to disambiguate them in a similar manner.

Breaking changes
-
String patterns in revsets, command arguments, and configuration are now parsed as globs by default. Use the `substring:` or `exact:` prefix as needed. -
`remotes.<name>.auto-track-bookmarks` is now parsed the same way string patterns are in revsets and can be combined with logical operators. -
`jj bookmark track/untrack` now accepts a `--remote` argument. If omitted, all remote bookmarks matching the bookmark names will be tracked/untracked. The old `<bookmark>@<remote>` syntax is deprecated in favor of `<bookmark> --remote=<remote>`. -
On Windows, symlinks that point to a path with `/` won't be supported. This path is invalid on Windows. -
The template alias `format_short_change_id_with_hidden_and_divergent_info(commit)` has been replaced by `format_short_change_id_with_change_offset(commit)`. -
The following deprecated config options have been removed: `git.push-bookmark-prefix`, `ui.default-description`, `ui.diff.format`, `ui.diff.tool`
- The deprecated `commit_id.normal_hex()` template method has been removed.
-
Template expansion that did not produce a terminating newline will not be fixed up to provide one by `jj log`, `jj evolog`, or `jj op log`. -
The `diff` conflict marker style can now use `\\\\\\\` markers to indicate the continuation of a conflict label from the previous line.
Deprecations
- The `git_head()` and `git_refs()` functions will be removed from revsets and templates. `git_head()` should point to the `first_parent(@)` revision in colocated repositories. `git_refs()` can be approximated as `remote_bookmarks(remote=glob:*) | tags()`.
New features
-
Updated the executable bit representation in the local working copy to allow ignoring executable bit changes on Unix. By default we try to detect the filesystem's behavior, but this can be overridden manually by setting `working-copy.exec-bit-change = "respect" | "ignore"`. -
`jj workspace add` now also works for empty destination directories. -
The `jj git remote` family of commands now supports different fetch and push URLs. -
The `[colors]` table now supports a `dim = true` attribute. -
In color-words diffs, context line numbers are now rendered with decreased
intensity. -
Hidden and divergent commits can now be unambiguously selected using their change ID combined with a numeric suffix. For instance, if there are two commits with change ID `xyz`, then one can be referred to as `xyz/0` and the other can be referred to as `xyz/1`. These suffixes are shown in the log when necessary to make a change ID unambiguous. -
`jj util gc` now prunes unreachable files in `.jj/repo/store/extra` to save disk space. -
Early version of a `jj file search` command for searching for a pattern in files (like `git grep`). -
Conflict labels now contain information about where the sides of a conflict came from (e.g. `nlqwxzwn 7dd24e73 "first line of description"`). -
`--insert-before` now accepts a revset that resolves to an empty set when used with `--insert-after`. The behavior is similar to `--onto`. -
`jj tag list` now supports a `--sort` option. -
The `TreeDiffEntry` type now has a `display_diff_path()` method that formats renames/copies appropriately. -
`TreeDiffEntry` now has a `status_char()` method that returns single-character status codes (M/A/D/C/R). -
The `CommitEvolutionEntry` type now has a `predecessors()` method which returns the predecessor commits (previous versions) of the entry's commit. -
The `CommitEvolutionEntry` type now has an `inter_diff()` method which returns a `TreeDiff` between the entry's commit and its predecessor version. Optionally accepts a fileset literal to limit the diff. -
`jj file annotate` now reports an error for non-files instead of succeeding and displaying no content. -
`jj workspace forget` now warns about unknown workspaces instead of failing.
Fixed bugs
-
Broken symlink on Windows. #6934.
-
Fixed failure on exporting moved/deleted annotated tags to Git. Moved tags are
exported as lightweight tags. -
`jj gerrit upload` now correctly handles mixed explicit and implicit Change-Ids in chains of commits (#8219) -
`jj git push` now updates partially-pushed remote bookmarks accordingly. #6787 -
Fixed problem of loading large Git packfiles.
GitoxideLabs/gitoxide#2265 -
The builtin pager won't get stuck when stdin is redirected.
-
`jj workspace add` now prevents creating an empty workspace name. -
Fixed checkout of symlinks pointing to themselves or `.git`/`.jj` on Unix. The problem would still remain on Windows if symlinks are enabled. #8348 -
Fixed a bug where jj would fail to read git delta objects from pack files.
GitoxideLabs/gitoxide#2344
Contributors
Thanks to the people who made this release happen!
- Anton Älgmyr (@algmyr)
- Austin Seipp (@thoughtpolice)
- Bryce Berger (@bryceberger)
- Carlos Knippschild (@chuim)
- Cole Helbling (@cole-h)
- David Higgs (@higgsd)
- Eekle (@Eekle)
- Gaëtan Lehmann (@glehmann)
- Ian Wrzesinski (@isuffix)
- Ilya Grigoriev (@ilyagr)
- Julian Howes (@jlnhws)
- Kaiyi Li (@06393993)
- Lukas Krejci (@metlos)
- Martin von Zweigbergk (@martinvonz)
- Matt Stark (@matts1)
- Ori Avtalion (@salty-horse)
- Scott Taylor (@scott2000)
- Shaoxuan (Max) Yuan (@ffyuanda)
- Stephen Jennings (@jennings)
- Steve Fink (@hotsphink)
- Steve Klabnik (@steveklabnik)
- Theo Buehler (@botovq)
- Thomas Castiglione (@gulbanana)
- Vincent Ging Ho Yim (@cenviity)
- xtqqczze (@xtqqczze)
- Yuantao Wang (@0WD0)
- Yuya Nishihara (@yuja)
-
🔗 Hex-Rays Blog IDA 9.3 Expands and Improves Its Decompiler Lineup rss
We know you’re always looking for broader platform coverage from the Hex-Rays decompiler, which is why we’re adding another one to the lineup: the RH850 decompiler. And of course, we haven’t stopped improving what’s already there. In this upcoming release, we’ve enhanced the analysis of Golang programs, fine-tuned value range optimization, made the new microcode viewer easier to use, and more.

-
🔗 @cxiao@infosec.exchange RE: mastodon
RE: https://mas.to/@Bislick/115856677525425915
solidarity with venezuelans and those on the bottom resisting authoritarianism, for 26 years. may venezuelans everywhere, those who have stayed and those who have left, have a better country in their lifetimes
-
🔗 Console.dev newsletter Taws rss
Description: Terminal UI for AWS.
What we like: Uses existing auth options (AWS SSO, credentials, config, env-vars) with multiple profile and region support. Supports lots of resource types (compute, databases, networking, logs). Vim-style navigation and commands. Provides detailed (JSON/YAML) views of resources. Filtering and pagination.
What we dislike: Doesn’t support all resources, so may have some limitations depending on your AWS service usage.
-
🔗 Console.dev newsletter uv rss
Description: Python package & project manager.
What we like: Replaces your Python toolchain - makes it easy to manage virtual environments, dependencies, Python versions, workspaces. Supports package version management and publishing workflows. Built-in build backend. Cached dependency deduplication. Very fast.
What we dislike: Not quite at a stable release version yet, but is effectively stable.
-
🔗 Julia Evans A data model for Git (and other docs updates) rss
Hello! This past fall, I decided to take some time to work on Git's documentation. I've been thinking about working on open source docs for a long time - usually if I think the documentation for something could be improved, I'll write a blog post or a zine or something. But this time I wondered: could I instead make a few improvements to the official documentation?
So Marie and I made a few changes to the Git documentation!
a data model for Git
After a while working on the documentation, we noticed that Git uses the terms "object", "reference", or "index" in its documentation a lot, but that it didn't have a great explanation of what those terms mean or how they relate to other core concepts like "commit" and "branch". So we wrote a new "data model" document!
You can read the data model here for now. I assume at some point (after the next release?) it'll also be on the Git website.
I'm excited about this because understanding how Git organizes its commit and branch data has really helped me reason about how Git works over the years, and I think it's important to have a short (1600 words!) version of the data model that's accurate.
The "accurate" part turned out to not be that easy: I knew the basics of how Git's data model worked, but during the review process I learned some new details and had to make quite a few changes (for example how merge conflicts are stored in the staging area).
updates to `git push`, `git pull`, and more
I also worked on updating the introduction to some of Git's core man pages. I quickly realized that "just try to improve it according to my best judgement" was not going to work: why should the maintainers believe me that my version is better?
I've seen a problem a lot when discussing open source documentation changes where 2 expert users of the software argue about whether an explanation is clear or not ("I think X would be a good way to explain it! Well, I think Y would be better!")
I don't think this is very productive (expert users of a piece of software are notoriously bad at being able to tell if an explanation will be clear to non- experts), so I needed to find a way to identify problems with the man pages that was a little more evidence-based.
getting test readers to identify problems
I asked for test readers on Mastodon to read the current version of documentation and tell me what they find confusing or what questions they have. About 80 test readers left comments, and I learned so much!
People left a huge amount of great feedback, for example:
- terminology they didn't understand (what's a pathspec? what does "reference" mean? does "upstream" have a specific meaning in Git?)
- specific confusing sentences
- suggestions of things to add ("I do X all the time, I think it should be included here")
- inconsistencies ("here it implies X is the default, but elsewhere it implies Y is the default")
Most of the test readers had been using Git for at least 5-10 years, which I think worked well - if a group of test readers who have been using Git regularly for 5+ years find a sentence or term impossible to understand, it makes it easy to argue that the documentation should be updated to make it clearer.
I thought this "get users of the software to comment on the existing documentation and then fix the problems they find" pattern worked really well and I'm excited about potentially trying it again in the future.
the man page changes
We ended up updating these 4 man pages:
- `git add` (before, after)
- `git checkout` (before, after)
- `git push` (before, after)
- `git pull` (before, after)
The `git push` and `git pull` changes were the most interesting to me: in addition to updating the intro to those pages, we also ended up writing:
- a section describing what the term "upstream branch" means (which previously wasn't really explained)
- a cleaned-up description of what a "push refspec" is
Making those changes really gave me an appreciation for how much work it is to maintain open source documentation: it's not easy to write things that are both clear and true, and sometimes we had to make compromises. For example, the sentence "`git push` may fail if you haven’t set an upstream for the current branch, depending on what `push.default` is set to." is a little vague, but the exact details of what "depending" means are really complicated and untangling that is a big project.
on the process for contributing to Git
It took me a while to understand Git's development process. I'm not going to try to describe it here (that could be a whole other post!), but a few quick notes:
- Git has a Discord server with a "my first contribution" channel for help with getting started contributing. I found people to be very welcoming on the Discord.
- I used GitGitGadget to make all of my contributions. This meant that I could make a GitHub pull request (a workflow I'm comfortable with) and GitGitGadget would convert my PRs into the system the Git developers use (emails with patches attached). GitGitGadget worked great and I was very grateful to not have to learn how to send patches by email with Git.
- Otherwise I used my normal email client (Fastmail's web interface) to reply to emails, wrapping my text to 80 character lines since that's the mailing list norm.
I also found the mailing list archives on lore.kernel.org hard to navigate, so I hacked together my own git list viewer to make it easier to read the long mailing list threads.
Many people helped me navigate the contribution process and review the changes: thanks to Emily Shaffer, Johannes Schindelin (the author of GitGitGadget), Patrick Steinhardt, Ben Knoble, Junio Hamano, and more.
(I'm experimenting with comments on Mastodon, you can see the comments here)
-
🔗 Ampcode News The Frontier Is Now Free rss
Every Amp user can now receive daily free credits to use the full Amp experience, including our frontier `smart` agent powered by Opus 4.5. No payment required, powered by ads—turn them off if you don't want free credits.
If you're a new user, just sign up for Amp, and download the CLI or editor extension.
If you're an existing user, go to user settings to enable ad-supported free credits.
What You Get
The free credit grant replenishes hourly, giving you a total of $10 worth of credits per day or roughly $300 of credits per month. When you've used up free credits, you can either wait for the hourly reset or purchase paid credits.
Amp's `smart` mode is currently driven by Opus 4.5 (with GPT-5 and Gemini-3-powered subagents like the oracle + librarian). You can also use `rush` mode, which provides faster inference at a lower cost per token, currently driven by Haiku 4.5.
Really?
Really! Like everything we do, ad-supported inference is an experiment. We can't promise we can do this forever, but we've already rolled it out to a sizable beta group thanks to our ad partners and it has been well-received. You can apply to become an ad partner.
Ads are text-only and never influence Amp's responses. If you don't like ads, you can opt out and just pay the cost of inference.
We invite new users to check out our manual, which contains good tips for using Amp efficiently.
-
🔗 Ampcode News Efficient MCP Tool Loading rss
MCP servers often provide a lot of tools, many of which aren't used. That costs a lot of tokens, because these tool definitions have to be inserted into the context window whether they're used by the agent or not.
As an example: the chrome-devtools MCP currently provides 26 tools that together take up 17k tokens; that's 10% of Opus 4.5's context window and 26 tools isn't even a lot for many MCP servers.
To help with that, Amp now allows you to combine MCP server configurations with Agent Skills, allowing the agent to load an MCP server's tool definitions only when the skill is invoked.
How It Works
Create an `mcp.json` file in the skill definition, next to the `SKILL.md` file, containing the MCP servers and tools you want the agent to load along with the skill:

```json
{
  "chrome-devtools": {
    "command": "npx",
    "args": ["-y", "chrome-devtools-mcp@latest"],
    "includeTools": [
      "navigate_page",
      "take_screenshot",
      "new_page",
      "list_pages"
    ]
  }
}
```

The entries in `includeTools` can be tool names or glob patterns.
At the start of a thread, all the agent will see in the context window is the skill description. When (and if) it then invokes the skill, Amp will append the tool descriptions matching the `includeTools` list to the context window, making them available just in time.
With this specific configuration, instead of loading all 26 tools that `chrome-devtools` provides, we instead load only four tools, taking up 1.5k tokens instead of 17k.
Take a look at our ui-preview skill, that makes use of the `chrome-devtools` MCP, for a full example.
If you want to learn more about skills in Amp, take a look at the Agent Skills section in the manual.
To find out more about the implementation of this feature and how we arrived at it, read this blog post by Nicolay.
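The "tool names or glob patterns" matching that `includeTools` describes can be sketched in a few lines. This is not Amp's actual implementation, just an illustration assuming shell-style glob semantics via Python's `fnmatch`:

```python
from fnmatch import fnmatch

def filter_tools(all_tools: list[str], include_patterns: list[str]) -> list[str]:
    """Keep only tools whose names match any includeTools entry,
    where an entry is either an exact tool name or a glob pattern."""
    return [t for t in all_tools if any(fnmatch(t, p) for p in include_patterns)]

tools = ["navigate_page", "take_screenshot", "new_page", "list_pages", "evaluate_script"]
# "take_*" matches take_screenshot; evaluate_script matches nothing and is dropped.
print(filter_tools(tools, ["navigate_page", "take_*", "new_page", "list_pages"]))
```

Only the matching tool definitions would then be appended to the context window, which is where the 17k → 1.5k token saving comes from.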
-
- January 07, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-07 rss
IDA Plugin Updates on 2026-01-07
New Releases:
Activity:
- BorderBinaryRecognizer
- dylib_dobby_hook
- def4171a: Update Builder.yml
- ghidra
- ghidra-chinese
- 8ba6d042: 测试:修复无法搜索
- ida-codex-mcp
- bc6a5168: new feature
- ida-security-scanner
- IDAPluginList
- 65789c1d: Update
- py-jobs
- quokka
- 3e10c376: Merge pull request #76 from quarkslab/dependabot/github_actions/actio…
-
🔗 r/LocalLLaMA Sopro: A 169M parameter real-time TTS model with zero-shot voice cloning rss
As a fun side project, I trained a small text-to-speech model that I call Sopro. Some features:
- 169M parameters
- Streaming support
- Zero-shot voice cloning
- 0.25 RTF on CPU, meaning it generates 30 seconds of audio in 7.5 seconds
- Requires 3-12 seconds of reference audio for voice cloning
- Apache 2.0 license
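The RTF figure translates directly to wall-clock time, since real-time factor is just generation time divided by audio duration:

```python
def generation_time_s(audio_s: float, rtf: float) -> float:
    """Real-time factor (RTF) = generation time / audio duration,
    so generation time = audio duration * RTF. RTF < 1 means
    faster than real time."""
    return audio_s * rtf

# At 0.25 RTF on CPU, 30 seconds of audio takes 30 * 0.25 = 7.5 s to generate.
print(generation_time_s(30, 0.25))  # 7.5
```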
Yes, I know, another English-only TTS model. This is mainly due to data availability and a limited compute budget. The model was trained on a single L40S GPU.
It’s not SOTA in most cases, can be a bit unstable, and sometimes fails to capture voice likeness. Nonetheless, I hope you like it!
GitHub repo: https://github.com/samuel-vitorino/sopro
submitted by /u/SammyDaBeast
[link] [comments] -
🔗 Bits About Money One Regulation E, Two Very Different Regimes rss

Programming note: Happy New Year! Bits about Money is made possible--and freely accessible to all--by the generous support of professionals who find it useful. If you're one of them, thank you--and consider purchasing a membership.
The U.S. is often maligned as being customer-hostile compared to other comparable nations, particularly those in Europe. One striking counterexample is that the government, by regulation, outsources to the financial industry an effective, virtually comprehensive, and extremely costly consumer protection apparatus covering a huge swath of the economy. It does this by strictly regulating the usage of what were once called "electronic" payment methods, which you now just call "payment" methods, in Regulation E.
Reg E is not uniformly loved in the financial industry. In particular, there has been a concerted effort by banks to renegotiate the terms of it with respect to Zelle in particular. This is principally because Zelle has been anomalously expensive, as Reg E embeds a strong, intentionally bank-funded anti-fraud regime, but Zelle does not monetize sufficiently to pay for it.
And thus a history lesson, a primer, and an explanation of a live public policy controversy.
These newfangled computers might steal our money
If you were to ask your friendly neighborhood reference librarian for Electronic Fund Transfers (Regulation E), 44 Fed. Reg. 18469 (Mar. 28, 1979), you might get back a document yellowed with age. Congress, in its infinite wisdom, intended the Electronic Funds Transfer Act to rein in what it saw as the downsides of automation of the finance industry, which was in full swing by this time.
Many electronic transactions might not issue paper receipts , and this would complicate he-said bank-said dispute resolution. So those were mandated. Customers might not realize transactions were happening when they didn't have to physically pull out a checkbook for each one. Therefore, institutions were required to issue periodic statements, via a trustworthy scaled distribution system, paper delivered by the United States Postal Service. And electronic access devices--the magnetic-stripe cards, and keyfobs [0], and whatever the geeks dreamed up next --might be stolen from customers. And therefore the banks were mandated to be able to take reports of mislaid access devices, and there was a strict liability transfer, where any unauthorized use of a device was explicitly and intentionally laid at the foot of the financial institution.
Some of the concerns that were top of mind for lawmakers sound even more outlandish to us, today. Financial institutions can't issue credit cards without receiving an "oral or written request" for the credit card. That sounds like "Why would you even need to clarify that, let alone legislate against it?!" unless you have the recent memory of Bank of America having the Post Office blanket a city with unsolicited credit cards then just waiting to see what happened. [1]
The staff who implemented Reg E and the industry advocates commenting on it devoted quite a bit of effort to timelines , informed by their impression of the cadence of life in a middle class American household and the capabilities of the Operations departments at financial institutions across the U.S.'s wide spectrum of size and sophistication. Two business days felt like a reasonable timeline after the theft of a card to let the financial institution know. They picked sixty business days from the postmark for discovering an unauthorized transaction in your periodic statements. That felt like a fair compromise between wanting to eventually give financial institutions some level of finality while still giving customers a reasonable buffer to account for holidays, vacation schedules, the time it takes a piece of mail to travel from New York City to Hawaii, and the reality that consumers, unlike banks, do not have teams paid to open and act upon mail.
And, very importantly for the future, Congress decided that unsophisticated Americans might be conned into using these newfangled electronic devices in ways that might cost them money, and this was unacceptable. Fraudulent use of an electronic fund transfer mechanism was considered an error as grave as the financial institution simply making up transactions. It had the same remedy: the financial institution corrects their bug at their cost.
" Unauthorized electronic fund transfer" means an electronic fund transfer from a consumer's account initiated by a person other than the consumer without actual authority to initiate the transfer and from which the consumer receives no benefit.
Reg E provided for two caps on consumer liability for unauthorized electronic fund transfer: $50 in the case of timely notice to the financial institution, as sort of a deductible (Congress didn't want to encourage moral hazard), and $500 for those customers who didn't organize themselves sufficiently. Above those thresholds, it was the bank's problem.
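As a simplified sketch of those caps (the actual regulation has more timing tiers and conditions than the two-bucket version described here):

```python
def consumer_liability_cap(loss: float, timely_notice: bool) -> float:
    """Simplified model of Reg E's consumer liability caps for an
    unauthorized electronic fund transfer: $50 with timely notice
    to the financial institution, $500 otherwise. The bank bears
    everything above the cap."""
    cap = 50.0 if timely_notice else 500.0
    return min(loss, cap)

# A $4,000 unauthorized transfer, reported promptly: consumer owes at most $50.
print(consumer_liability_cap(4000, timely_notice=True))   # 50.0
print(consumer_liability_cap(4000, timely_notice=False))  # 500.0
```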
Reg E also establishes some procedural rights: an obligation for institutions to investigate claims of unauthorized funds transfers (among other errors-- Congress was quite aware that banks frequently made math and recordkeeping mistakes), to provisionally credit customers during those investigations, strict timelines for the financial institutions, and the presumptive burden of proof.
In this privately-administered court system, the bank is the prosecutor, the defendant, and the judge simultaneously, and the default judgment is "guilty." It can exonerate itself only by, at its own expense and peril, producing a written record of the evidence examined. This procedural hurdle is designed to simplify review by the United States' actual legal system, regulators, and consumer advocates.
The institution's report of the results of its investigation shall include a written explanation of the institution's findings and shall note the consumer's right to request the documents that the institution relied on in making its determination. Upon request, the institution shall promptly provide copies of the documents.
Having done informal consumer advocacy for people with banking and debt issues for a few years, I cannot overstate the degree to which this prong of Reg E is a gift to consumer advocates. Many consumers are not impressively detail-oriented, and Reg E allows an advocate to conscript a financial institution's Operations department to backfill the customer's files about a transaction they do not have contemporaneous records of. In the case that the Operations department itself isn't organized, great, at least from my perspective. Reg E says the bank just ate the loss. And indeed, several times over the years, the prototypical grandmother in Kansas received a letter from a bank vice president of consumer lending explaining that the bank was in receipt of her Reg E complaint, had credited her checking account, and considered the matter closed. It felt like a magic spell to me at the time.
The contractual liability waterfall in card payments
Banks do not like losing money, citation hopefully unnecessary, and part of the business of banking is arranging for liability transfers. Insurance is many peoples' paradigmatic way to understand liability transfers, but banks make minimal use of insurance in core banking services. (A bank which is robbed almost always self-insures, and the loss--averaging four figures and trending down--is so tiny that it isn't worth specifically budgeting for.)
The liability transfer which most matters to Reg E is a contractual one, from issuing banks to card processors and from card processors to card-accepting businesses. These parties' obligations to banks and cardholders are substantially broader than the banks' obligations under Reg E, but the banks use a fraction of those contracts to defray a large portion of their Reg E liability.
For example, under the various brands' card rules, an issuer must have the capability for a customer to say that a transaction which happened over plastic (or the electronic equivalent) simply didn't meet their expectations. The issuer's customer service representative will briefly collect facts from the customer, and then initiate an automatic process to request information from a representative of the card-accepting business. On receipt of that information, or non-receipt of it, a separate customer service representative makes a decision on the case. This mechanism is called a "chargeback" in the industry, and some banks are notorious for favoring the high-income quite-desirable customers who hold their plastic over the e.g. restaurant that the bank has no relationship with. "My eggs were undercooked" is a sufficient reason to ask for a chargeback and will result in the bank restoring your money a large percentage of the time.
In the case where the complaint is "My card was stolen and used without my knowledge", essentially the same waterfall activates, perhaps with the internal note made that this dispute is Reg E sensitive. But mechanically it will be quite similar: bank tells processor "Customer asserts fraud", processor tells business, business replies with a fax, bank staff reviews fax and adjudicates.
There are on the order of 5 million criminal cases in the formal U.S. legal system every year. There are more than 100 million complaints to banks, some of them alleging a simple disagreement (undercooked eggs) and very many alleging crime (fraud). It costs banks billions of dollars to adjudicate them.
The typical physical form of an adjudication is not a weeks-long trial with multiple highly-educated representatives debating in front of a more-senior finder of fact. It is a CSR clicking a button on their web app's interface after 3 minutes of consideration, and the entire evidentiary record often fits in a tweet.
"Customer ordered from online store. Customer asserts they didn't receive the item in six weeks. No response from store. Customer wins. Next.", "Customer ordered from online store. Customer asserts they didn't receive item. Store provided evidence of shipping via UPS. Customer does not have a history of fraudulent chargebacks. Customer wins. Next.", "Customer's bookkeeper asserts ignorance of software as a service provider charge. Business provided written statement from customer's CEO stating chargeback filed in error by new bookkeeper. Customer wins. Next." (I'm still annoyed by that last one, years later, but one has to understand why it is rational for the bank and, in a software company's clearer-minded moments, rational for them to accept the risk of this given how lucrative software is.)
The funds flow in a chargeback mirrors the contractual liability waterfall: the issuing bank gets money back from a financial intermediary, who gets it back from a card processor (like Stripe, which I once worked for, and which doesn't specifically endorse things I write in my own spaces), who will attempt to get it back from the card accepting business.
That word "attempt" is important. What if the business doesn't have sufficient money to pay the aggrieved customer, or they can't be located anymore when the system comes to collect? Reg E has a list of exceptions and those aren't on it. The card processor then eats the loss.
The same frequently happens to cover the provisional credit mandated while the bank does its investigation, and the opposite happens in the case where the issuing bank decides that the card accepting business is in the right, and should be restored the money they charged a customer.
This high-frequency privately-funded alternative legal system has quietly ground out hundreds of millions of cases for the last half century. It is a foundation upon which commerce rests. It even exerts influence internationally, since the card brand rules essentially embed a variant of the Reg E rights for cardholders globally, and since nowhere in Reg E is there a carveout for transactions that a customer might make electronically with their U.S. financial institution while not physically located in the United States. If you are mugged and forced to withdraw money at an ATM in Caracas, Uncle Sam says your bank knows that some tiny percentage of cardholders will be mugged every year, and mandates they pay.
Enter Zelle
Zelle, operated by Early Warning Systems (owned by a consortium of large banks), is a substantially real-time electronic transfer method between U.S. bank accounts. Bank web and mobile apps have for decades supported peer to peer and customer to business transfers, via push ACH (and, less frequently, by wire), but ACH will, in standard practice, take a few days to be credited to the recipient and a few hours until it will become known to them as pending.
Zelle is substantially a blocking play, against Venmo, Cash App, and similar. Those apps captivated a large number of mostly-young users with the P2P payments, for use cases like e.g. splitting dinner, spotting a buddy $20, or collecting donations for a Christmas gift for the teacher from all the parents in a class. After attracting the users with those features, they kept them with product offerings which, in the limit, resemble bank accounts and which actually had bank accounts under the hood for at least some users.
And so the banks, fearing that real-time payment rails would not arrive in time (FedNow has been FedLater for a decade and RTP has relatively poor coverage), stood up Zelle, on the theory that this feature could be swiftly built into all the bank apps. Zelle launched in 2017.
Zelle processes enormous volumes. It crowed recently that it did $600 billion in volume in the first half of 2025. Zelle is much larger than the upstarts like Venmo (about $250 billion in annual volume) and Cash App (about $300 billion in customer inflows annually). This is not nearly in the same league as card payments (~$10 trillion annually) or ACH transfers (almost $100 trillion annually), but it is quite considerable.
All of it is essentially free to the transacting customers, unlike credit cards, which are extremely well- monetized. And there is the rub.
Zelle is an enormous fraud target
"Hiya, this is Susan calling from your bank. Your account has been targeted by fraudsters. I need you to initiate a Zelle payment to yourself to move it to a safe account while we conduct our investigation. Just open your mobile banking app, type the password, select Zelle from the menu, and send it to your own phone number. Thank you for your cooperation."
Susan is lying. Her confederates have convinced at least one financial institution in the U.S. that the customer's phone number is tied to a bank account which fraudsters control. That financial institution registered it with Zelle, so that when the victim sends money, the controlled account receives it substantially instantaneously. They will then attempt to immediately exfiltrate that money, sending it to another financial institution or a gift card or a crypto exchange, to make it difficult for investigators to find it faster than they can spend it. This process often repeats; professionals call this "layering."
So, some days later, when the victim calls the bank and asks what happened to the money the bank was trying to secure from fraud, what does the bank tell them?
Zelle is quick to point out that only 0.02% of transactions over it have fraud reported, and they assert this compares favorably to competing payments methods. Splendid, then do the banks want to absorb on the order of $240 million a year in losses from fraudulent use of a technology they built into their own apps which is indisputably by any intellectually serious person an electronic funds access device?
Frequently in the last few years, the bank has said "Well, as Gen Z would say, that sounds like a bit of a skill issue." And Reg E? "We never heard of it. Caveat emptor."
To be slightly more sympathetic to the banks, they're engaged in fine-grained decisioning on Zelle frauds, which have many mechanisms and flavor texts. They are more likely to reimburse as required in the case of account takeovers, where the criminal divines a customer's password, pops an email address, or steals access to a phone number, and then uses it to empty a bank account. They are far less likely to reimburse where the criminal convinces the customer to operate their access device (mobile phone) in a way against their interests. Skill issue.
Why do banks aggressively look for reasons to deny claims? Elementary: there is no waterfall for Zelle. If there is a reimbursement for the user, it has to come from the bank's balance sheet. (Zelle as originally shipped was incapable of reversing a transaction to claw back funds. That mechanism was something of an antipriority at design time, since funds subject to a clawback might be treated by receiving banks as non-settled, and the user experience banks wanted to deliver was "instantly spendable, like on Venmo." Instantaneous funds availability exists in fundamental tension with security guarantees even if the finality gets relaxed, as Zelle's was in 2023 under regulatory pressure.)
Banks like to pretend that the dominant fraud pattern is e.g. a "social media scam", where an ad on Facebook or a Tiktok video leads someone to purchase sneakers with a Zelle payment from an unscrupulous individual, who doesn't actually send the sneakers. This pattern matches more towards "well, that's a disagreement about how your eggs were done, not a disagreement about how we operate payment rails." Use a card and we'll refund the eggs (via getting the restaurant to pay for them); don't and we won't.
So, in sum and in scaled practice at call centers, the bank wants to quickly get customers to admit their fingers were on their phone when defrauded. If so, no reimbursement.
This rationale is new and is against our standard practice, for decades. If you are defrauded via a skimming device attached to an ATM, the bank is absolutely liable, and will almost always come to the correct conclusion immediately. It would be absurdly cynical to say that you intended to transact with the skimming device and demonstrated your assent by physically dipping your card past it.
Bank recalcitrance caused the Consumer Financial Protection Bureau to sue a few large banks in late 2024. The CFPB alleged they had a pattern and practice of not paying out claims for fraud conducted over Zelle rails. The banks will tell you the same, using slightly different wording. Chase, for example, now buries in the fine print "Neither Chase nor Zelle® offers reimbursement for authorized payments you make using Zelle®, except for a limited reimbursement program that applies for certain imposter scams where you sent money with Zelle®. This reimbursement program is not required by law and may be modified or discontinued at any time."
The defensible gloss of banks' position on "purchase protection" is that the purchase protection that customers pay for in credit cards which makes them whole for eggs not cooked to their liking is not available for Zelle payments. Fine.
The indefensible extension is that banks aren't liable for defrauded customers. That is a potential policy regime, chosen by the polity of many democratic nations. The United States is not one of those nations. Our citizens, through their elected representatives, made the considered choice that financial institutions would need to provide extraordinary levels of safety in electronic payments. In reliance upon that regime, the people of the United States transacted many trillions of dollars over payment rails, which was and is very lucrative for all considered.
The CFPB's lawsuit was dropped in early 2025, as CFPB's enforcement priorities were abruptly curtailed. (Readers interested in why might see Debanking and Debunking and Ctrl-F "wants some examples made.") To the extent it still exists after being gutted, it is fighting for its life.
But knifing the CFPB doesn't repeal Reg E. In theory, any bank regulator (and many other actors besides) can hold them to account for obligations under it. One of the benefits of Reg E is that the single national standard is easiest to reason about, but in the absence of it, one can easily imagine a patchwork of state-by-state consumer protection actions and/or coalitioning between state attorneys general. I will be unmoved if banks complain that this is all so complicated and they welcome regulation but it has to be a single national standard.
Banks may attempt to extend the Zelle precedent
Having for the moment renegotiated their Reg E obligations by asserting they don't exist, and mostly getting away with it, some banks might attempt to feel their oats a bit and assert that customers bear fraud risks more generally.
For example, in my hometown of Chicago, there has been a recent spate of tap-to-pay donation fraud. The fraudster gets a processing account, in their own name or that of a confederate/dupe, to collect donations for a local charitable cause. (This is not in itself improper; the financial industry understands that the parent in charge of a church bake sale will not necessarily be able to show paperwork to that effect before the cookies go stale.) Bad actors purporting to be informal charities accost Chicagoans on the street and ask for a donation via tap-to-pay, but the actual charged donation was absurdly larger than what the donor expected to donate; $4,000 versus $10, for example. The bad actor then exits the scene quickly.
(A donor who discovers the fraud in the moment is then confronted with the unfortunate reality that they are outnumbered by young men who want to rob them. This ends about as well as you'd expect. Chicago has an arrest rate far under 1% for this. A cynic might say that if you don't kill the victim, it's legal. I'm not quite that cynical.)
But Reg E doesn't care about the safety of city streets, in Chicago or anywhere else. It assumes that payment instruments will continue to be used in an imperfect world. This case has a very clear designed outcome: customer calls bank, bank credits customer $4,000 because the customer was defrauded and therefore the "charity" lacked actual authority for the charge, bank pulls $4,000 from credit card processor, credit card processor attempts to pull $4,000 from the "charity", card processor fails in doing so, card processor chalks it up to tuition to improve its fraud models in the future.
Except at least some banks, per the Chicago Tribune's reporting, have adopted specious rationales to deny these claims. Some victims surrender physical control of their device, and banks argue that that means they authorized the transaction. Some banks asserted the manufactured-out-of-their-hindquarters rationale that Reg E only triggers when there is a physical receipt. (This inverts the Act's responsibility graph, where banks were required to provide physical hardcopy receipts to avoid an accountability sink swallowing customer funds.)
Banks will often come to their senses after being contacted by the Chicago Tribune or someone with social power and gravitas who knows how to cite Reg E. But it is designed to work even for less sophisticated customers who don't know the legislative history of the state machine. They just have to know "Call your bank if you have a problem."
That should work and we are diminished if it doesn't.
Reg E encompasses almost every technology which exists and many which don't yet
With a limited number of carveouts (e.g. wire transfers), Reg E is intentionally drafted to be future-proof against changes in how Americans transact. This is why, when banks argue that some new payments rail is exempt because it is "different," the correct legal response is usually some variation of: doesn't matter--that's Reg E.
Our friends in crypto generally believe that Reg E is one star in the constellation of regulations that they're not subject to. They created Schrödinger's financial infrastructure, which is the future of finance in the boardroom and just some geeks playing with an open source project once grandma gets defrauded. There is an unresolved tension between saying "Traditional institutions like Visa are adopting stablecoins" and the see-no-evil, reimburse-no-losses attitude issuers and others in the industry take towards fraud which goes over their rails.
Reg E doesn't have an exception in its text for electronic funds transfers which happen over slow databases.
A hypothetical future CFPB, given the long-standing premise that fraud is not an acceptable outcome of consumer payment systems, would swiftly come to the conclusion that if it walks like a checking account, quacks like a checking account, and is marketed as an alternative to checking accounts, then it is almost certainly within Reg E scope.
Casting one's eyes across the fintech landscape, many players seem to have checking account envy. In the era of the "financial superapp" where everyone wants to bolt on high-frequency use cases like payments to e.g. AUM gathering machines like brokerage accounts, that is worth a quick chat with Legal before you start getting the letters from Kansan grandmas.
[0] The first "credit cards" were not the plastic-with-a-magstripe form factor which came to dominate but rather "charge plates." They were physical tokens which pointed at a record at e.g. a department store's internal accounts, usually by means of an embossed account number, to be read by the Mk 0 human eyeball and, later, physically copied to a paper record via ink. Many were metal and designed to be kept around a key ring. As Matt Levine and many others have mentioned, the crypto community has speedrun hundreds of years of financial history, and keeping your account identifier on etched metal enjoyed a short renaissance recently. Unlike the department stores' bookkeepers, crypto enthusiasts lost many millions of dollars of customer funds by misplacing their metal (see page 20 particularly).
[1] Market research in the 1950s was hard. Short version of the Fresno drop: they lost money due to abuse by a small segment of users, but successfully proved that the middle class would happily use plastic to transact if they were offered it and it was generally accepted by businesses as opposed to being tied to a single store. They then scaled the 60,000 card pilot to millions within a year. Visa is the corporate descendant of that program; Mastercard that of what competitors did in response.
-
🔗 The Pragmatic Engineer The grief when AI writes most of the code rss
I'm coming to terms with the high probability that, going forward, AI will write most of the code I ship to prod. It already does it faster, and with similar results to if I'd typed it out. For languages/frameworks I'm less familiar with, it does a better job than me.
It feels like something valuable is being taken away, and suddenly. It took a lot of effort to get good at coding and to learn how to write code that works, to read and understand complex code, and to debug and fix when code doesn't work as it should. I still remember how daunting my first "real" programming class was at university (learning C), how lost I felt on my first job with a complex codebase, and how it took years of practice, learning from other devs, books, and blogs, to get better at the craft. Once you're pretty good, you have something that's valuable and easy to validate by writing code that works!
Some of my best memories of building software are about coding. Being "locked in" and balancing several ideas while typing them out, being in the zone, then compiling the code, running it, and seeing that YES, it worked as expected!
It's been a love-hate relationship, to be fair, based on the amount of focus needed to write complex code. Then there's all the conflicts that time estimates caused: time passes differently when you're locked in and working on a hard problem.
Now, all that looks like it will be history.
I wonder if I'll still get the same sense of satisfaction from the fact that writing complicated code is hard? Yes, AI is convenient, but there's also a loss.
Or perhaps with AI agents, being "in the zone" will shift to thinking about higher-level problems, while instructing more complex code to be written?
This was a section from my analysis piece When AI writes almost all code, what happens to software engineering?. Read the full one here.
-
🔗 r/LocalLLaMA 16x AMD MI50 32GB at 10 t/s (tg) & 2k t/s (pp) with Deepseek v3.2 (vllm-gfx906) rss
Deepseek 3.2 AWQ 4-bit @ 10 tok/s (output) // 2000 tok/s (input of 23k tok) on vllm-gfx906-deepseek with 69,000 context length.
Power draw: 550W (idle) / 2400W (peak inference).
Goal: run Deepseek V3.2 AWQ 4-bit at decent speed (token generation & prompt processing) on the most cost-effective hardware, like 16x MI50.
Coming next: open-sourcing a future test setup of 32 AMD MI50 32GB for Kimi K2 Thinking.
Credits: BIG thanks to the global open source community! All setup details here: https://github.com/ai-infos/guidances-setup-16-mi50-deepseek-v32
Feel free to ask any questions and/or share any comments.
ps: it might be a good alternative to CPU hardware as RAM prices increase, and the prompt processing speed will be much better with 16 TB/s bandwidth + tensor parallelism!
ps2: I'm just a random guy with an average software dev background using LLMs to make it run. Goal is to be ready for LOCAL AGI without spending $300k+...
submitted by /u/ai-infos
[link] [comments]
-
🔗 News Minimalist 🐢 US cuts childhood vaccine list + 8 more stories rss
In the last 5 days ChatGPT read 149,582 top news stories. After removing previously covered events, there are 9 articles with a significance score over 5.5.

[6.0] US reduces routine childhood vaccine recommendations —theguardian.com(+140)
The Trump administration has slashed routine childhood vaccine recommendations from 17 to 11, effective immediately, a move experts warn will reduce immunization access and increase infectious disease transmission.
Vaccines for influenza, rotavirus, and RSV are no longer universally recommended, shifting to high-risk or shared clinical decision-making status. This change, overseen by Robert F. Kennedy Jr., aims to make several immunizations optional rather than standard routine for all children.
The shift aligns the US schedule with Denmark's as the nation faces its largest measles outbreak in decades. Concurrently, domestic cases of tetanus and fatal pertussis infections have reached multi-year highs.
[6.2] US seizes Venezuelan President Maduro, asserting control over nation's oil wealth —theconversation.com(+1938)
US special forces have seized Venezuelan President Nicolás Maduro, toppling his government. President Donald Trump announced the United States will now manage Venezuela and its massive oil reserves.
The military operation follows decades of tension over Venezuela's oil wealth, the world’s largest reserves. Trump intends for US companies to upgrade infrastructure and generate revenue, ending a thirty-year adversarial relationship that began under former leader Hugo Chávez.
Highly covered news with significance over 5.5
[6.4] Hyundai and Boston Dynamics showcase humanoid robot Atlas at CES — bostonglobe.com (+45)
[5.7] China restricts over a thousand dual-use exports to Japan, including rare earths — udn.com (Chinese) (+26)
[5.7] xAI secures $20 billion in funding from Nvidia, Cisco, and Fidelity — cnbc.com (+13)
[5.7] X platform enables creation of nonconsensual AI-generated sexual images — theconversation.com (+77)
[5.6] Guinea's junta leader confirmed president-elect after first vote since 2021 coup — financialpost.com (+3)
[5.5] Trump orders divestment of chip deal over China security concerns — apnews.com (+22)
[5.9] Nvidia launches Alpamayo AI platform, introducing deep-reasoning for autonomous vehicles — forbes.com (+619)
Thanks for reading!
— Vadim
You can create your own significance-based RSS feed with premium.
-
🔗 libtero/suture Suture v1.0.0 release
No content.
-
🔗 r/reverseengineering Coleco Zodiac (1979) Daily Preview date codes extracted from original manual rss
submitted by /u/Few-Leading-9611
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
sync repo: +1 plugin, +1 release
New plugins
- [decode_instruction](https://github.com/milankovo/decode_instruction) (1.0.0) -
🔗 r/reverseengineering Kernel driver blocking Cheat Engine (ObRegisterCallbacks) rss
submitted by /u/No_Acanthaceae1468
[link] [comments] -
🔗 pranshuparmar/witr v0.2.0 release
What's Changed
- Add FreeBSD support by @gaod in #95
- Add Windows Support by @abdullah134 in #96
- feat: enhance goreleaser and adding missing os by @GunniBusch in #103
- ci: update pr check with windows and freeBSD by @GunniBusch in #104
- feat: add bug report and feature request issue templates by @sunydepalpur in #112
- Fix: macOS Process labels from truncating at 16 chars by @chojs23 in #125
- Make verbose child process list readable by @chojs23 in #107
- Main PR by @pranshuparmar in #126
- Fix service detection and port resolution issues on FreeBSD by @gaod in #105
- Remove unnecessary branch logic judgments in resource_darwin.go by @ArNine in #127
- docs: Enhance README with new badges, detailed sections, security policy by @sunydepalpur in #110
- Main PR by @pranshuparmar in #128
New Contributors
- @gaod made their first contribution in #95
- @abdullah134 made their first contribution in #96
Full Changelog :
v0.1.8...v0.2.0 -
🔗 Locklin on science Optimizing for old battles rss
About 3/4 of our management expertocracy is optimizing for old battles. It’s a pattern which is pervasive in Western Civilization, which is one of the reasons everything is so weird right now. Gather together a group of bureaucrats to solve a real problem, it’s still there 50 years later doing …. things. Things which are […]
-
🔗 r/reverseengineering PAIMON BLESS V17.7 - Quantitative Trading System rss
submitted by /u/pmd02931
[link] [comments] -
🔗 r/reverseengineering Hermes Studio demo - React Native decompiler and disassembler rss
submitted by /u/nilla615615
[link] [comments] -
🔗 r/wiesbaden Looking for a good men’s hairdresser/barber in Wiesbaden (medium-length hair) rss
Hi everyone, I’m looking for recommendations for a good men’s hairdresser or barber in Wiesbaden.
I’m currently growing my hair out, but it’s at that awkward stage where it’s a bit too long and messy. I don’t want a short cut, just someone who knows how to shape and tidy medium-length hair properly while keeping it growing.
English-speaking would be a big plus. Thanks in advance!
submitted by /u/SaladWestern8139
[link] [comments] -
🔗 streamyfin/streamyfin v0.51.0 release
Finally a new release 🥳 This one has some really nice improvements, like:
- Approve Seerr (formerly Jellyseerr) requests directly in Streamyfin for admins
- Updated Home Screen icon in the new iOS 26 style
- Improved VLC integration with native playback (AirPods controls, automatic pause when other audio starts, native system controls with artwork)
- Option to use KSPlayer on iOS - better hardware decoding support and PiP
- Music playback (beta)
- Option to disable player gestures at screen edges to prevent conflicts with swipe down notifications
- Snapping scroll in all carousels for smoother and more precise navigation
- Playback speed
- Dolby badge displayed in technical item details when available
- Expanded playback options with dynamically loaded streams and full media selection (Gelato support)
- Streamystats watchlists and promoted sections integration
- Initial KefinTweaks integration
- A lot of other fixes and small improvements
What's Changed
- fix: linting by @fredrikburmester in #1184
- chore(deps): Update dependency react-native-device-info to v15 by @renovate[bot] in #1182
- chore(deps): Update actions/dependency-review-action action to v4.8.2 by @renovate[bot] in #1175
- chore(deps): Update github/codeql-action action to v4.31.3 by @renovate[bot] in #1180
- feat: Liquid Glass Icon by @SUPERHAMSTERI in #1070
- fix: auto-filling would cause state not to be updated by @fredrikburmester in #1200
- fix: update okhttp v5 and fix android download crash issues by @fredrikburmester in #1203
- fix: clean toast message jellyseerr movie request by @fredrikburmester in #1201
- chore(deps): upgrade dev dependencies and test utilities by @Gauvino in #1195
- feat: vlc apple integration - pause on other media play + controls by @fredrikburmester in #1211
- fix: disable gestures from top and bottom of screen because of interference with notification shade pull down by @fredrikburmester in #1206
- feat: move source and track selection to seperate sheet by @lostb1t in #1176
- chore(deps): Pin dependencies by @renovate[bot] in #1209
- fix: show tech details when avaiable by @lostb1t in #1213
- feat: approve jellyserr requests by @fredrikburmester in #1214
- refactor: Move media sources preload higher up the tree by @lostb1t in #1216
- feat: prefer downloaded file by @fredrikburmester in #1217
- refactor: pass down items with sources to children by @lostb1t in #1218
- ci: fix CodeQL checkout by @Gauvino in #1170
- chore: Add version 0.47.1 to issue report template by @Simon-Eklundh in #1251
- chore(deps): Update actions/setup-node action to v6.1.0 by @renovate[bot] in #1262
- fix(player): Fix skip credits seeking past video end causing pause by @retrozenith in #1277
- feat: KSPlayer as an option for iOS + other improvements by @fredrikburmester in #1266
- fix(readme): Add Obtainium button by @kernelb00t in #1293
- feat: add button to toggle video orientation in player by @KindCoder-no in #743
- fix: jellyseer categories by @lancechant in #1233
- feat: add Dolby Vision badge by @edeuss in #1177
New Contributors
- @retrozenith made their first contribution in #1277
- @kernelb00t made their first contribution in #1293
- @edeuss made their first contribution in #1177
Full Changelog :
v0.47.1...v0.51.0
Feedback
Your feedback matters. It helps us spot issues faster and keep improving the app in ways that benefit everyone. If you have ideas or run into problems, please open an issue on GitHub or join our Discord
-
🔗 @cxiao@infosec.exchange RE: mastodon
RE: https://mastodon.social/@thejapantimes/115852557729468030
the Canada Modern graphic design style stays winning 😎
-
🔗 r/wiesbaden Anyone interested in starting a book club? rss
I’m looking to read more books this year but figured a book club might encourage me to stay committed! Does anyone know of any existing clubs in the area? If not, I’d love to start one :)
submitted by /u/kentoclatinator
[link] [comments] -
🔗 r/LocalLLaMA DeepSeek-R1’s paper was updated 2 days ago, expanding from 22 pages to 86 pages and adding a substantial amount of detail. rss
arXiv:2501.12948 [cs.CL]: https://arxiv.org/abs/2501.12948
submitted by /u/Nunki08
[link] [comments]
-
🔗 r/LocalLLaMA Don't put off hardware purchases: GPUs, SSDs, and RAM are going to skyrocket in price soon rss
In case you thought it was going to get better:
GPU prices are going up. AMD and NVIDIA are planning to increase prices every month starting soon.
NAND flash contract price went up 20% in November, with further increases in December. This means SSDs will be a lot more expensive soon.
DRAM prices are going to skyrocket, with no increase in production capacity and datacenters and OEMs competing for everything.
Even Consoles are going to be delayed due to the shortages.
According to TrendForce, conventional DRAM contract prices in 1Q26 are forecast to rise 55–60% quarter over quarter, while server DRAM prices are projected to surge by more than 60% QoQ. Meanwhile, NAND Flash prices are expected to increase 33–38% QoQ
Industry sources cited by Kbench believe the latest price hikes will broadly affect NVIDIA’s RTX 50 series and AMD’s Radeon RX 9000 lineup. The outlet adds that NVIDIA’s flagship GeForce RTX 5090 could see its price climb to as high as $5,000 later in 2026.
NVIDIA is also reportedly weighing a 30% to 40% reduction in output for parts of its midrange lineup, including the RTX 5070 and RTX 5060 Ti, according to Kbench.
submitted by /u/Eisenstein
[link] [comments] -
🔗 r/reverseengineering Can anyone crack this website and get the premium tool rss
submitted by /u/AdvisorObvious2693
[link] [comments] -
🔗 r/reverseengineering Crackmes.one RE CTF rss
submitted by /u/xusheng1
[link] [comments] -
🔗 r/reverseengineering Learning from the old Exynos Bug rss
submitted by /u/TwizzyIndy
[link] [comments] -
🔗 r/wiesbaden Two hours to kill rss
Hi, I have an appointment at the St Josef Krankenhaus tomorrow, so I'm arriving 2 hours early. Are there any places nearby where I can sit down with my laptop, or do you have other suggestions for passing the time?
submitted by /u/Living_Performer_801
[link] [comments] -
🔗 r/LocalLLaMA NousResearch/NousCoder-14B · Hugging Face rss
from NousResearch: "We introduce NousCoder-14B, a competitive programming model post-trained on Qwen3-14B via reinforcement learning. On LiveCodeBench v6 (08/01/2024 - 05/01/2025), we achieve a Pass@1 accuracy of 67.87%, up 7.08% from the baseline Pass@1 accuracy of 60.79% of Qwen3-14B. We trained on 24k verifiable coding problems using 48 B200s over the course of four days."
submitted by /u/jacek2023
[link] [comments]
-
🔗 Ampcode News User Invokable Skills rss
Since we added support for Agent Skills, we became heavy users of them. There are now fifteen skills in the Amp repository.
But one frustration we had was that skills were only invoked when the agent deemed it necessary. Sometimes, though, we knew exactly which skill the agent should use.
So we made skills user-invokable: you, as the user, can now invoke a skill, which will force the agent to use it when you send your next message.
Open the command palette (Cmd/Alt-Shift-A in the Amp editor extensions or Ctrl-O in the Amp CLI) and run `skill: invoke`.
-