to read (pdf)
- What I learned building an opinionated and minimal coding agent
- SteerMouse
- Advanced-Hexrays-Decompiler-reverse-engineering
- A Modern Recommender Model Architecture - Casey Primozic's Homepage
- AddyOsmani.com - 21 Lessons From 14 Years at Google
- January 09, 2026
-
🔗 anthropics/claude-code v2.1.2 release
What's changed
- Added source path metadata to images dragged onto the terminal, helping Claude understand where images originated
- Added clickable hyperlinks for file paths in tool output in terminals that support OSC 8 (like iTerm)
- Added support for Windows Package Manager (winget) installations with automatic detection and update instructions
- Added Shift+Tab keyboard shortcut in plan mode to quickly select "auto-accept edits" option
- Added `FORCE_AUTOUPDATE_PLUGINS` environment variable to allow plugin autoupdate even when the main auto-updater is disabled
- Added `agent_type` to SessionStart hook input, populated if `--agent` is specified
- Fixed a command injection vulnerability in bash command processing where malformed input could execute arbitrary commands
- Fixed a memory leak where tree-sitter parse trees were not being freed, causing WASM memory to grow unbounded over long sessions
- Fixed binary files (images, PDFs, etc.) being accidentally included in memory when using `@include` directives in CLAUDE.md files
- Fixed updates incorrectly claiming another installation is in progress
- Fixed crash when socket files exist in watched directories (defense-in-depth for EOPNOTSUPP errors)
- Fixed remote session URL and teleport being broken when using `/tasks` command
- Fixed MCP tool names being exposed in analytics events by sanitizing user-specific server configurations
- Improved Option-as-Meta hint on macOS to show terminal-specific instructions for native CSIu terminals like iTerm2, Kitty, and WezTerm
- Improved error message when pasting images over SSH to suggest using `scp` instead of the unhelpful clipboard shortcut hint
- Improved permission explainer to not flag routine dev workflows (git fetch/rebase, npm install, tests, PRs) as medium risk
- Changed large bash command outputs to be saved to disk instead of truncated, allowing Claude to read the full content
- Changed large tool outputs to be persisted to disk instead of truncated, providing full output access via file references
- Changed `/plugins` installed tab to unify plugins and MCPs with scope-based grouping
- Deprecated Windows managed settings path `C:\ProgramData\ClaudeCode\managed-settings.json` - administrators should migrate to `C:\Program Files\ClaudeCode\managed-settings.json`
- [SDK] Changed minimum zod peer dependency to ^4.0.0
- [VSCode] Fixed usage display not updating after manual compact
-
- January 08, 2026
-
🔗 idursun/jjui v0.9.9 release
Release Notes
Another release with small improvements and bug fixes. Thanks to all contributors!
🎉 New Features
Custom Commands & Lua API Enhancements
- Custom Commands with Sequence Keys (#420)
- Added `key_sequence` property allowing custom commands to be invoked with multiple key presses in sequence
- Added `desc` property for command descriptions
- Introduced sequence overlay UI showing available key sequences when first key is pressed
- Example: `key_sequence = ["w", "b", "l"]`
- New `choose()` function for interactive selection prompts in Lua scripts
- New `input()` function to prompt users for text input with customizable title and prompt
- New `split_lines()` function for text processing
- Lua API: Await on Operation Results (#422)
- `start_inline_describe()` now returns a boolean indicating if the operation was applied or cancelled
- Enables conditional command execution based on user actions
- Fixes #310
- Lua API: Interactive Commands (commit 8b257263)
- Added `jj_interactive` Lua function for interactive jj command execution
Navigation & UI Improvements
- Ace Jump for Operations (#445)
- Pressing 'f' in set_parents/duplicate/rebase/squash modes now triggers ace jump
- After jump completes, returns to the original operation mode instead of normal mode
- Closes #394
- Preview Width Variable (#452)
- Added `$preview_width` placeholder variable for preview commands
- Exposes actual view width (in columns) to enable tools like delta to use `--side-by-side` correctly
- Width updates dynamically when preview pane is resized
- Similar to fzf's `$FZF_PREVIEW_COLUMNS`
- Configurable Flash Message Display Time (#456)
- New config key: `ui.flash_message_display_seconds` (default: 4)
- Special value `0` means messages display until manually dismissed
- Fixes #455
- Page Up/Down Key Configuration (#437)
- ScrollUp/Down keys now registered in config instead of hardcoded
- Keys exposed to configuration for customization
- Fixes #360
SSH & Authentication
- SSH Askpass Support (#423)
- New `[ssh] hijack_askpass` setting to prompt for SSH passphrases/PINs within jjui
- Works on Linux and macOS
- Properly handles prompt overriding and cancellation
- Fixes #100
🐛 Bug Fixes
- Exec Command History (#458)
- Fixed issue where selected command history wasn't applied in exec mode
- Input value now properly updated when selecting from fuzzy/regex suggestions
- Selected commands correctly saved to history
- Menu Pagination Display (#446)
- Fixed incorrect `%d/%d` pagination display
- Height now calculated before pagination render
- Added tab/shift+tab to short help menu
- Fixes #444
- Flash Message Width (#432)
- Added maxWidth (50% of screen) to flash message rendering
- Messages now properly line-wrap instead of extending beyond window width
- Operation Log Refresh (#431)
- Operation log now returns Refresh and SelectionChanged messages upon closing
- Fixes #430
- Custom Commands List Sorting (commit 3fa9783a)
- Fixed custom commands list to use a stable sort
- Fixes #424
- JJ Error Pass-through (#421)
- jjui now properly passes through stderr from jj commands
- Error messages are more informative and show actual jj errors
- Navigation Message Display (commit 94a4a874)
- Navigation messages now only shown for paged scrolls
What's Changed
- main: pass through jj error by @baggiiiie in #421
- feat(lua): add ability to await on operation results (cancelled/applied) by @idursun in #422
- oplog: return Refresh and SelectionChanged upon oplog closing by @baggiiiie in #431
- feat(nix): add comprehensive nix flake by @doprz in #426
- Add option to hijack SSH Askpass to prompt for passphrase/pin by @oliverpool in #423
- feat(lua): add choose method and ui by @idursun in #427
- flash: add maxWidth to flash msg rendering by @baggiiiie in #432
- keys: add pageup/down to keys config by @baggiiiie in #437
- chore(github): Add Adda0 as an automatic reviewer for Nix-related changes by @Adda0 in #440
- feat: add .editorconfig by @doprz in #434
- Add input and log to custom_commands API by @ArnaudBger in #442
- oplog,list: refactor scrolling with Scrollable/StreamableList interface by @baggiiiie in #429
- menu: calculate height before pagination render by @baggiiiie in #446
- operations: enable ace jump for set_parents/duplicate/rebase/squash by @baggiiiie in #445
- feat: make flash message display time configurable by @living180 in #456
- feat: add $width variable for preview commands by @pablospe in #452
- status: fix exec command history not applied by @baggiiiie in #458
New Contributors
- @doprz made their first contribution in #426
- @oliverpool made their first contribution in #423
- @living180 made their first contribution in #456
- @pablospe made their first contribution in #452
Full Changelog: v0.9.8...v0.9.9
-
-
🔗 badlogic/pi-mono v0.40.0 release
Added
- Documentation on component invalidation and theme changes in `docs/tui.md`
Fixed
- Components now properly rebuild their content on theme change (tool executions, assistant messages, bash executions, custom messages, branch/compaction summaries)
-
🔗 badlogic/pi-mono v0.39.1 release
Fixed
- `setTheme()` now triggers a full rerender so previously rendered components update with the new theme colors
- `mac-system-theme.ts` example now polls every 2 seconds and uses `osascript` for real-time macOS appearance detection
-
🔗 microsoft/markitdown Version 0.1.5b1 release
-
🔗 badlogic/pi-mono v0.39.0 release
Breaking Changes
- `before_agent_start` event now receives `systemPrompt` in the event object and returns `systemPrompt` (full replacement) instead of `systemPromptAppend`. Extensions that were appending must now use the `event.systemPrompt + extra` pattern. (#575)
- `discoverSkills()` now returns `{ skills: Skill[], warnings: SkillWarning[] }` instead of `Skill[]`. This allows callers to handle skill loading warnings. (#577 by @cv)
Added
- `ctx.ui.getAllThemes()`, `ctx.ui.getTheme(name)`, and `ctx.ui.setTheme(name | Theme)` methods for extensions to list, load, and switch themes at runtime (#576)
- `--no-tools` flag to disable all built-in tools, allowing extension-only tool setups (#557 by @cv)
- Pluggable operations for built-in tools enabling remote execution via SSH or other transports (#564). Interfaces: `ReadOperations`, `WriteOperations`, `EditOperations`, `BashOperations`, `LsOperations`, `GrepOperations`, `FindOperations`
- `user_bash` event for intercepting user `!`/`!!` commands, allowing extensions to redirect to remote systems (#528)
- `setActiveTools()` in ExtensionAPI for dynamic tool management
- Built-in renderers used automatically for tool overrides without custom `renderCall`/`renderResult`
- `ssh.ts` example: remote tool execution via `--ssh user@host:/path`
- `interactive-shell.ts` example: run interactive commands (vim, git rebase, htop) with full terminal access via `!i` prefix or auto-detection
- Wayland clipboard support for `/copy` command using wl-copy with xclip/xsel fallback (#570 by @OgulcanCelik)
- Experimental: `ctx.ui.custom()` now accepts `{ overlay: true }` option for floating modal components that composite over existing content without clearing the screen (#558 by @nicobailon)
- `AgentSession.skills` and `AgentSession.skillWarnings` properties to access loaded skills without rediscovery (#577 by @cv)
Fixed
- String `systemPrompt` in `createAgentSession()` now works as a full replacement instead of having context files and skills appended, matching documented behavior (#543)
- Update notification for bun binary installs now shows release download URL instead of npm command (#567 by @ferologics)
- ESC key now works during "Working..." state after auto-retry (#568 by @tmustier)
- Abort messages now show correct retry attempt count (e.g., "Aborted after 2 retry attempts") (#568 by @tmustier)
- Fixed Antigravity provider returning 429 errors despite available quota (#571 by @ben-vargas)
- Fixed malformed thinking text in Gemini/Antigravity responses where thinking content appeared as regular text or vice versa. Cross-model conversations now properly convert thinking blocks to plain text. (#561)
- `--no-skills` flag now correctly prevents skills from loading in interactive mode (#577 by @cv)
-
🔗 SamuelTulach/unxorer v4 release
Changes:
- Changed GUI dialog spacing
- Added maximum loop count to prevent infinite loops in certain scenarios
- Improved performance (mostly by stack being mapped from preallocated memory that can be directly accessed, instead of using uc_mem_read)
-
🔗 Simon Willison LLM predictions for 2026, shared with Oxide and Friends rss
I joined a recording of the Oxide and Friends podcast on Tuesday to talk about 1, 3 and 6 year predictions for the tech industry. This is my second appearance on their annual predictions episode, you can see my predictions from January 2025 here. Here's the page for this year's episode, with options to listen in all of your favorite podcast apps or directly on YouTube.
Bryan Cantrill started the episode by declaring that he's never been so unsure about what's coming in the next year. I share that uncertainty - the significant advances in coding agents just in the last two months have left me certain that things will change significantly, but unclear as to what those changes will be.
Here are the predictions I shared in the episode.
- 1 year: It will become undeniable that LLMs write good code
- 1 year: We're finally going to solve sandboxing
- 1 year: A "Challenger disaster" for coding agent security
- 1 year: Kākāpō parrots will have an outstanding breeding season
- 3 years: the coding agents Jevons paradox for software engineering will resolve, one way or the other
- 3 years: Someone will build a new browser using mainly AI-assisted coding and it won't even be a surprise
- 6 years: Typing code by hand will go the way of punch cards
1 year: It will become undeniable that LLMs write good code
I think that there are still people out there who are convinced that LLMs cannot write good code. Those people are in for a very nasty shock in 2026. I do not think it will be possible to get to the end of even the next three months while still holding on to the idea that the code they write is all junk and that any decent human programmer is likely to write better code than they will.
In 2023, saying that LLMs write garbage code was entirely correct. For most of 2024 that stayed true. In 2025 that changed, but you could be forgiven for continuing to hold out. In 2026 the quality of LLM-generated code will become impossible to deny.
I base this on my own experience - I've spent more time exploring AI-assisted programming than most.
The key change in 2025 (see my overview for the year) was the introduction of "reasoning models" trained specifically against code using Reinforcement Learning. The major labs spent a full year competing with each other on who could get the best code capabilities from their models, and that problem turns out to be perfectly attuned to RL since code challenges come with built-in verifiable success conditions.
Since Claude Opus 4.5 and GPT-5.2 came out in November and December respectively the amount of code I've written by hand has dropped to a single digit percentage of my overall output. The same is true for many other expert programmers I know.
At this point if you continue to argue that LLMs write useless code you're damaging your own credibility.
1 year: We're finally going to solve sandboxing
I think this year is the year we're going to solve sandboxing. I want to run code other people have written on my computing devices without it destroying my computing devices if it's malicious or has bugs. [...] It's crazy that it's 2026 and I still `pip install` random code and then execute it in a way that it can steal all of my data and delete all my files. [...] I don't want to run a piece of code on any of my devices that somebody else wrote outside of a sandbox ever again.
This isn't just about LLMs, but it becomes even more important now there are so many more people writing code, often without knowing what they're doing. Sandboxing is also a key part of the battle against prompt injection.
We have a lot of promising technologies in play already for this - containers and WebAssembly being the two I'm most optimistic about. There's real commercial value involved in solving this problem. The pieces are there, what's needed is UX work to reduce the friction in using them productively and securely.
1 year: A "Challenger disaster" for coding agent security
I think we're due a Challenger disaster with respect to coding agent security[...] I think so many people, myself included, are running these coding agents practically as root, right? We're letting them do all of this stuff. And every time I do it, my computer doesn't get wiped. I'm like, "oh, it's fine".
I used this as an opportunity to promote my favourite recent essay about AI security, the Normalization of Deviance in AI by Johann Rehberger.
The Normalization of Deviance describes the phenomenon where people and organizations get used to operating in an unsafe manner because nothing bad has happened to them yet, which can result in enormous problems (like the 1986 Challenger disaster) when their luck runs out.
Every six months I predict that a headline-grabbing prompt injection attack is coming soon, and every six months it doesn't happen. This is my most recent version of that prediction!
1 year: Kākāpō parrots will have an outstanding breeding season
(I dropped this one to lighten the mood after a discussion of the deep sense of existential dread that many programmers are feeling right now!)
I think that Kākāpō parrots in New Zealand are going to have an outstanding breeding season. The reason I think this is that the Rimu trees are in fruit right now. There's only 250 of them, and they only breed if the Rimu trees have a good fruiting. The Rimu trees have been terrible since 2019, but this year the Rimu trees were all blooming. There are researchers saying that all 87 females of breeding age might lay an egg. And for a species with only 250 remaining parrots that's great news.
(I just checked Wikipedia and I was right with the parrot numbers but wrong about the last good breeding season, apparently 2022 was a good year too.)
In a year with precious little in the form of good news I am utterly delighted to share this story. Here's more:
- Kākāpō breeding season 2026 introduction from the Department of Conservation from June 2025 .
- Bumper breeding season for kākāpō on the cards - 3rd December 2025, University of Auckland.
I don't often use AI-generated images on this blog, but the Kākāpō image the Oxide team created for this episode is just perfect:

3 years: the coding agents Jevons paradox for software engineering will resolve, one way or the other
We will find out if the Jevons paradox saves our careers or not. This is a big question that anyone who's a software engineer has right now: we are driving the cost of actually producing working code down to a fraction of what it used to cost. Does that mean that our careers are completely devalued and we all have to learn to live on a tenth of our incomes, or does it mean that the demand for software, for custom software goes up by a factor of 10 and now our skills are even more valuable because you can hire me and I can build you 10 times the software I used to be able to? I think by three years we will know for sure which way that one went.
The quote says it all. There are two ways this coding agents thing could go: it could turn out software engineering skills are devalued, or it could turn out we're more valuable and effective than ever before.
I'm crossing my fingers for the latter! So far it feels to me like it's working out that way.
3 years: Someone will build a new browser using mainly AI-assisted coding and it won't even be a surprise
I think somebody will have built a full web browser mostly using AI assistance, and it won't even be surprising. Rolling a new web browser is one of the most complicated software projects I can imagine [...] the cheat code is the conformance suites. If there are existing tests it'll get so much easier.
A common complaint today from AI coding skeptics is that LLMs are fine for toy projects but can't be used for anything large and serious.
I think within 3 years that will be comprehensively proven incorrect, to the point that it won't even be controversial anymore.
I picked a web browser here because so much of the work building a browser involves writing code that has to conform to an enormous and daunting selection of both formal tests and informal websites-in-the-wild.
Coding agents are really good at tasks where you can define a concrete goal and then set them to work iterating in that direction.
A web browser is the most ambitious project I can think of that leans into those capabilities.
6 years: Typing code by hand will go the way of punch cards
I think the job of being paid money to type code into a computer will go the same way as punching punch cards [...] in six years' time, I do not think anyone will be paid just to do the thing where you type the code. I think software engineering will still be an enormous career. I just think the software engineers won't be spending multiple hours of their day in a text editor typing out syntax.
The more time I spend on AI-assisted programming the less afraid I am for my job, because it turns out building software - especially at the rate it's now possible to build - still requires enormous skill, experience and depth of understanding.
The skills are changing though! Being able to read a detailed specification and transform it into lines of code is the thing that's being automated away. What's left is everything else, and the more time I spend working with coding agents the larger that "everything else" becomes.
-
🔗 @HexRaysSA@infosec.exchange 🔎 Here's another sneak peek! mastodon
🔎 Here's another sneak peek!
IDA 9.3 will expand its decompiler lineup w/ RH850, improve Golang support, update the Microcode Viewer, add the "forbid assignment propagation" feature, and more. Get the details here: https://hex-rays.com/blog/ida-9.3-expands-decompiler-lineup
-
🔗 langchain-ai/deepagents deepagents==0.3.3 release
-
🔗 langchain-ai/deepagents deepagents==0.3.2 release
Changes since deepagents==0.3.1
release(deepagents): release 0.3.2 (#680)
chore(deepagents): make memory strict if AGENTS.md not found (#673)
fix(deepagents): better async support in skills, and propagate runnable config (#672)
chore: remove older integration tests for memory (#670)
feature(deepagents): add memory (#646)
feat(deepagents): add skills to sdk (#591)
chore(deepagent): add docs to composite backend (#666)
chore(deepagents): add more test coverage to composite backend (#660)
fix(deepagents): composite backend grep implementation (#659)
Fix CVE-2025-68664 (#636)
make work with model string (#626)
-
🔗 r/LocalLLaMA Jensen Huang saying "AI" 121 times during the NVIDIA CES keynote - cut with one prompt rss
Someone had to count it. Turns out Jensen said "AI" exactly 121 times in the CES 2025 keynote. I used https://github.com/OpenAgentPlatform/Dive (open-source MCP client) + two MCPs I made:
- https://github.com/kevinwatt/yt-dlp-mcp - YouTube download
- https://github.com/kevinwatt/ffmpeg-mcp-lite - video editing
One prompt:
Task: Create a compilation video of every exact moment Jensen Huang says "AI".
Video source: https://www.youtube.com/watch?v=0NBILspM4c4
Instructions:
- Download video in 720p + subtitles in JSON3 format (word-level timestamps)
- Parse JSON3 to find every "AI" instance with precise start/end times
- Use ffmpeg to cut clips (~50-100ms padding for natural sound)
- Concatenate all clips chronologically
Output: Jensen_CES_AI.mp4
Dive chained the two MCPs together - download → parse timestamps → cut 121 clips → merge. All local, no cloud. If you want to see how it runs: https://www.youtube.com/watch?v=u_7OtyYAX74
The result is... hypnotic.
submitted by /u/Prior-Arm-6705
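For a concrete sense of what that prompt has the agent do, here is a rough standalone sketch of the same pipeline in plain Python (not the poster's tooling; `keynote.mp4` and `keynote.en.json3` are hypothetical local files, and the JSON3 field names reflect YouTube's word-level subtitle format as commonly seen, so verify against your own download):

```python
# Sketch only: parse a YouTube JSON3 subtitle file for word-level "AI" timestamps,
# cut each moment with ffmpeg, and concatenate the clips. Field names (events,
# tStartMs, segs, utf8, tOffsetMs) are assumptions to check against your file.
import json
import subprocess

PAD = 0.075  # ~75 ms of padding around each word for natural sound


def ai_timestamps(json3_path: str) -> list[float]:
    """Return the start time (in seconds) of every subtitle word equal to 'AI'."""
    with open(json3_path, encoding="utf-8") as f:
        data = json.load(f)
    times = []
    for event in data.get("events", []):
        base = event.get("tStartMs", 0)
        for seg in event.get("segs", []) or []:
            if seg.get("utf8", "").strip() == "AI":
                times.append((base + seg.get("tOffsetMs", 0)) / 1000.0)
    return times


def cut_and_concat(video: str, times: list[float], out: str = "Jensen_CES_AI.mp4") -> None:
    """Cut one short clip per timestamp, then join them with the concat demuxer."""
    clips = []
    for i, t in enumerate(times):
        clip = f"clip_{i:03d}.mp4"
        subprocess.run([
            "ffmpeg", "-y",
            "-ss", f"{max(t - PAD, 0):.3f}",       # seek just before the word
            "-t", f"{2 * PAD + 0.35:.3f}",          # keep roughly one spoken word
            "-i", video, clip,
        ], check=True)
        clips.append(clip)
    with open("clips.txt", "w") as f:
        f.writelines(f"file '{c}'\n" for c in clips)
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", "clips.txt", "-c", "copy", out], check=True)


if __name__ == "__main__":
    stamps = ai_timestamps("keynote.en.json3")
    print(f"found {len(stamps)} occurrences")
    cut_and_concat("keynote.mp4", stamps)
```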
-
🔗 badlogic/pi-mono v0.38.0 release
Breaking Changes
- `ctx.ui.custom()` factory signature changed from `(tui, theme, done)` to `(tui, theme, keybindings, done)` for keybinding access in custom components
- `LoadedExtension` type renamed to `Extension`
- `LoadExtensionsResult.setUIContext()` removed, replaced with `runtime: ExtensionRuntime`
- `ExtensionRunner` constructor now requires `runtime: ExtensionRuntime` as second parameter
- `ExtensionRunner.initialize()` signature changed from options object to positional params `(actions, contextActions, commandContextActions?, uiContext?)`
- `ExtensionRunner.getHasUI()` renamed to `hasUI()`
- OpenAI Codex model aliases removed (`gpt-5`, `gpt-5-mini`, `gpt-5-nano`, `codex-mini-latest`). Use canonical IDs: `gpt-5.1`, `gpt-5.1-codex-mini`, `gpt-5.2`, `gpt-5.2-codex`. (#536 by @ghoulr)
Added
- `--no-extensions` flag to disable extension discovery while still allowing explicit `-e` paths (#524 by @cv)
- SDK: `InteractiveMode`, `runPrintMode()`, `runRpcMode()` exported for building custom run modes. See `docs/sdk.md`.
- `PI_SKIP_VERSION_CHECK` environment variable to disable new version notifications at startup (#549 by @aos)
- `thinkingBudgets` setting to customize token budgets per thinking level for token-based providers (#529 by @melihmucuk)
- Extension UI dialogs (`ctx.ui.select()`, `ctx.ui.confirm()`, `ctx.ui.input()`) now support a `timeout` option with live countdown display (#522 by @nicobailon)
- Extensions can now provide custom editor components via `ctx.ui.setEditorComponent()`. See `examples/extensions/modal-editor.ts` and `docs/tui.md` Pattern 7.
- Extension factories can now be async, enabling dynamic imports and lazy-loaded dependencies (#513 by @austinm911)
- `ctx.shutdown()` is now available in extension contexts for requesting a graceful shutdown. In interactive mode, shutdown is deferred until the agent becomes idle (after processing all queued steering and follow-up messages). In RPC mode, shutdown is deferred until after completing the current command response. In print mode, shutdown is a no-op as the process exits automatically when prompts complete. (#542 by @kaofelix)
Fixed
- Default thinking level from settings now applies correctly when `enabledModels` is configured (#540 by @ferologics)
- External edits to `settings.json` while pi is running are now preserved when pi saves settings (#527 by @ferologics)
- Overflow-based compaction now skips if error came from a different model or was already handled by a previous compaction (#535 by @mitsuhiko)
- OpenAI Codex context window reduced from 400k to 272k tokens to match Codex CLI defaults and prevent 400 errors (#536 by @ghoulr)
- Context overflow detection now recognizes `context_length_exceeded` errors.
- Clipboard image support now works on Alpine Linux and other musl-based distros (#533)
-
🔗 r/LocalLLaMA Dialogue Tree Search - MCTS-style tree search to find optimal dialogue paths (so you don't have to trial-and-error it yourself) rss
Hey all! I'm sharing an updated version of my MCTS-for-conversations project. Instead of generating single responses, it explores entire conversation trees to find dialogue strategies and prunes bad paths. I built it to help get better research directions for projects, but it can be used for anything.
https://preview.redd.it/shr3e0liv1cg1.png?width=2560&format=png&auto=webp&s=eec800c6dcd9f1a4fd033d003fe80e102cba8079
Github: https://github.com/MVPandey/DTS
Motivation: I like MCTS :3 and I originally wanted to make this a dataset-creation agent, but this is what it evolved into on its own.
Basically: DTS runs parallel beam search over conversation branches. (Note: this isn't MCTS. It's parallel beam search. UCB1 is too wild with LLMs for me.) You give it a goal and opening message, and it (see the sketch after this list):
- Generates N diverse strategies
- Forks each into user intent variants - skeptical, cooperative, confused, resistant (if enabled, or defaults to engaged + probing)
- Rolls out full multi-turn conversations down each branch
- Has 3 independent LLM judges score each trajectory, takes the median
- Prunes branches below threshold, backpropagates scores
- Repeats for however many rounds you configure
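Below is a minimal, self-contained sketch of that loop (beam search over strategies, persona forking, three-judge median scoring, pruning), written from the description above rather than from the project's code; `call_llm()` and `judge()` are placeholder stubs you would wire to an OpenAI-compatible endpoint, and score backpropagation is left out for brevity.

```python
# Sketch of a DTS-style loop: NOT the project's code; stubs stand in for real LLM calls.
import random
import statistics


def call_llm(prompt: str) -> str:
    """Stub for a chat-completion call to an OpenAI-compatible endpoint."""
    return f"(model output for: {prompt[:40]}...)"


def judge(transcript: list[str], goal: str) -> float:
    """Stub judge: in the real system an LLM scores the trajectory against the goal."""
    return random.uniform(0.0, 10.0)


def rollout(strategy: str, persona: str, goal: str, opening: str, turns: int = 3) -> list[str]:
    """Roll out a short multi-turn conversation for one strategy/persona branch."""
    transcript = [f"user({persona}): {opening}"]
    for _ in range(turns):
        transcript.append("assistant: " + call_llm(f"goal={goal}; strategy={strategy}; {transcript[-1]}"))
        transcript.append(f"user({persona}): " + call_llm(f"reply as a {persona} user to: {transcript[-1]}"))
    return transcript


def search(goal: str, opening: str, n_strategies: int = 4, rounds: int = 2,
           personas=("skeptical", "cooperative", "confused", "resistant"),
           threshold: float = 5.0, beam_width: int = 4):
    # 1. Generate N diverse opening strategies.
    beam = [call_llm(f"Propose strategy #{i} for goal: {goal}") for i in range(n_strategies)]
    best = []
    for _ in range(rounds):
        candidates = []
        for strategy in beam:
            for persona in personas:                                    # 2. fork into user-intent variants
                transcript = rollout(strategy, persona, goal, opening)  # 3. roll out the branch
                votes = [judge(transcript, goal) for _ in range(3)]     # 4. three independent judges
                candidates.append((statistics.median(votes), strategy, persona, transcript))
        # 5. prune branches below the threshold, keep the top beam_width trajectories
        survivors = [c for c in candidates if c[0] >= threshold] or candidates
        survivors.sort(key=lambda c: c[0], reverse=True)
        best = survivors[:beam_width]
        # 6. repeat with the surviving strategies (deduplicated)
        beam = list(dict.fromkeys(strategy for _, strategy, _, _ in best))
    return best


if __name__ == "__main__":
    for score, strategy, persona, _ in search("get feedback on a research direction", "Hi, can we talk?"):
        print(f"{score:4.1f}  {persona:11s}  {strategy[:60]}")
```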
https://preview.redd.it/zkii0idvv1cg1.png?width=762&format=png&auto=webp&s=905f9787a8b7c7bfafcc599e95a3b73005c331b4
Three judges with median voting helps a lot with the LLM-as-judge variance problem from CAE. Still not grounded in anything real, but outlier scores get filtered. Research context helps but the scoring is still stochastic. I tried a rubric-based approach but it was trash. Main additions over CAE:
- user intent forking (strategies get stress-tested against different personas)
- deep research integration via GPT-Researcher for domain context
- proper visualization with conversation playback
Only supports OpenAI-compatible endpoints atm - works with whatever models you have access to there. It's token-hungry though, a full run can hit 300+ LLM calls depending on config. If running locally, disable parallel calls. It's open source (Apache 2.0) and I'm happy to take contributions if anyone wants to help out. Just a project. -- BTW: Backend was done mostly by me as the planner/sys designer, etc + Claude Code for implementation/refactoring. Frontend was purely vibe coded. Sorry if the code is trash.
submitted by /u/ManavTheWorld
-
🔗 jj-vcs/jj v0.37.0 release
About
jj is a Git-compatible version control system that is both simple and powerful. See the installation instructions to get started.
Release highlights
- A new syntax for referring to hidden and divergent change IDs is available: `xyz/n` where `n` is a number. For instance, `xyz/0` refers to the latest version of `xyz`, while `xyz/1` refers to the previous version of `xyz`. This allows you to perform actions like `jj restore --from xyz/1 --to xyz` to restore `xyz` to its previous contents, if you made a mistake. For divergent changes, the numeric suffix will always be shown in the log, allowing you to disambiguate them in a similar manner.
Breaking changes
- String patterns in revsets, command arguments, and configuration are now parsed as globs by default. Use the `substring:` or `exact:` prefix as needed (see the example after this list).
- `remotes.<name>.auto-track-bookmarks` is now parsed the same way they are in revsets and can be combined with logical operators.
- `jj bookmark track`/`untrack` now accepts a `--remote` argument. If omitted, all remote bookmarks matching the bookmark names will be tracked/untracked. The old `<bookmark>@<remote>` syntax is deprecated in favor of `<bookmark> --remote=<remote>`.
- On Windows, symlinks that point to a path with `/` won't be supported. This path is invalid on Windows.
- The template alias `format_short_change_id_with_hidden_and_divergent_info(commit)` has been replaced by `format_short_change_id_with_change_offset(commit)`.
- The following deprecated config options have been removed: `git.push-bookmark-prefix`, `ui.default-description`, `ui.diff.format`, `ui.diff.tool`
- The deprecated `commit_id.normal_hex()` template method has been removed.
- Template expansion that did not produce a terminating newline will not be fixed up to provide one by `jj log`, `jj evolog`, or `jj op log`.
- The `diff` conflict marker style can now use `\\\\\\\` markers to indicate the continuation of a conflict label from the previous line.
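A quick illustration of the new default (an illustrative example, not taken from the release notes; it uses the `bookmarks()` revset function with the standard pattern prefixes): `jj log -r 'bookmarks("release-*")'` now treats `release-*` as a glob matching bookmarks that start with "release-", while `jj log -r 'bookmarks(exact:"main")'` matches only a bookmark named exactly `main`, and `bookmarks(substring:"rel")` matches any bookmark whose name contains `rel`.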
Deprecations
- The `git_head()` and `git_refs()` functions will be removed from revsets and templates. `git_head()` should point to the `first_parent(@)` revision in colocated repositories. `git_refs()` can be approximated as `remote_bookmarks(remote=glob:*) | tags()`.
New features
- Updated the executable bit representation in the local working copy to allow ignoring executable bit changes on Unix. By default we try to detect the filesystem's behavior, but this can be overridden manually by setting `working-copy.exec-bit-change = "respect" | "ignore"`.
- `jj workspace add` now also works for empty destination directories.
- `jj git remote` family of commands now supports different fetch and push URLs.
- `[colors]` table now supports `dim = true` attribute.
- In color-words diffs, context line numbers are now rendered with decreased intensity.
- Hidden and divergent commits can now be unambiguously selected using their change ID combined with a numeric suffix. For instance, if there are two commits with change ID `xyz`, then one can be referred to as `xyz/0` and the other can be referred to as `xyz/1`. These suffixes are shown in the log when necessary to make a change ID unambiguous.
- `jj util gc` now prunes unreachable files in `.jj/repo/store/extra` to save disk space.
- Early version of a `jj file search` command for searching for a pattern in files (like `git grep`).
- Conflict labels now contain information about where the sides of a conflict came from (e.g. `nlqwxzwn 7dd24e73 "first line of description"`).
- `--insert-before` now accepts a revset that resolves to an empty set when used with `--insert-after`. The behavior is similar to `--onto`.
- `jj tag list` now supports `--sort` option.
- `TreeDiffEntry` type now has a `display_diff_path()` method that formats renames/copies appropriately.
- `TreeDiffEntry` now has a `status_char()` method that returns single-character status codes (M/A/D/C/R).
- `CommitEvolutionEntry` type now has a `predecessors()` method which returns the predecessor commits (previous versions) of the entry's commit.
- `CommitEvolutionEntry` type now has an `inter_diff()` method which returns a `TreeDiff` between the entry's commit and its predecessor version. Optionally accepts a fileset literal to limit the diff.
- `jj file annotate` now reports an error for non-files instead of succeeding and displaying no content.
- `jj workspace forget` now warns about unknown workspaces instead of failing.
Fixed bugs
- Broken symlink on Windows. #6934.
- Fixed failure on exporting moved/deleted annotated tags to Git. Moved tags are exported as lightweight tags.
- `jj gerrit upload` now correctly handles mixed explicit and implicit Change-Ids in chains of commits (#8219)
- `jj git push` now updates partially-pushed remote bookmarks accordingly. #6787
- Fixed problem of loading large Git packfiles. GitoxideLabs/gitoxide#2265
- The builtin pager won't get stuck when stdin is redirected.
- `jj workspace add` now prevents creating an empty workspace name.
- Fixed checkout of symlinks pointing to themselves or `.git`/`.jj` on Unix. The problem would still remain on Windows if symlinks are enabled. #8348
- Fixed a bug where jj would fail to read git delta objects from pack files. GitoxideLabs/gitoxide#2344
Contributors
Thanks to the people who made this release happen!
- Anton Älgmyr (@algmyr)
- Austin Seipp (@thoughtpolice)
- Bryce Berger (@bryceberger)
- Carlos Knippschild (@chuim)
- Cole Helbling (@cole-h)
- David Higgs (@higgsd)
- Eekle (@Eekle)
- Gaëtan Lehmann (@glehmann)
- Ian Wrzesinski (@isuffix)
- Ilya Grigoriev (@ilyagr)
- Julian Howes (@jlnhws)
- Kaiyi Li (@06393993)
- Lukas Krejci (@metlos)
- Martin von Zweigbergk (@martinvonz)
- Matt Stark (@matts1)
- Ori Avtalion (@salty-horse)
- Scott Taylor (@scott2000)
- Shaoxuan (Max) Yuan (@ffyuanda)
- Stephen Jennings (@jennings)
- Steve Fink (@hotsphink)
- Steve Klabnik (@steveklabnik)
- Theo Buehler (@botovq)
- Thomas Castiglione (@gulbanana)
- Vincent Ging Ho Yim (@cenviity)
- xtqqczze (@xtqqczze)
- Yuantao Wang (@0WD0)
- Yuya Nishihara (@yuja)
-
🔗 Hex-Rays Blog IDA 9.3 Expands and Improves Its Decompiler Lineup rss
We know you’re always looking for broader platform coverage from the Hex-Rays decompiler, which is why we’re adding another one to the lineup: the RH850 decompiler. And of course, we haven’t stopped improving what’s already there. In this upcoming release, we’ve enhanced the analysis of Golang programs, fine-tuned value range optimization, made the new microcode viewer easier to use, and more.

-
🔗 @cxiao@infosec.exchange RE: mastodon
RE: https://mas.to/@Bislick/115856677525425915
solidarity with venezuelans and those on the bottom resisting authoritarianism, for 26 years. may venezuelans everywhere, those who have stayed and those who have left, have a better country in their lifetimes
-
🔗 19h/ida-structor v0.0.3 release
Full Changelog: v0.0.2...v0.0.3
-
🔗 19h/ida-structor v0.0.2 release
Full Changelog: v0.0.1...v0.0.2
-
🔗 19h/ida-structor v0.0.1 release
Full Changelog: https://github.com/19h/ida-structor/commits/v0.0.1
-
🔗 Console.dev newsletter Taws rss
Description: Terminal UI for AWS.
What we like: Uses existing auth options (AWS SSO, credentials, config, env-vars) with multiple profile and region support. Supports lots of resource types (compute, databases, networking, logs). Vim-style navigation and commands. Provides detailed (JSON/YAML) views of resources. Filtering and pagination.
What we dislike: Doesn’t support all resources, so may have some limitations depending on your AWS service usage.
-
🔗 Console.dev newsletter uv rss
Description: Python package & project manager.
What we like: Replaces your Python toolchain - makes it easy to manage virtual environments, dependencies, Python versions, workspaces. Supports package version management and publishing workflows. Built-in build backend. Cached dependency deduplication. Very fast.
What we dislike: Not quite at a stable release version yet, but is effectively stable.
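For a sense of what that covers, a few typical uv invocations: `uv python install 3.12` to fetch an interpreter, `uv venv` to create a virtual environment, `uv add requests` to add a project dependency, `uv pip install -r requirements.txt` for drop-in pip-style installs, and `uv run script.py` to run a script inside the managed environment.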
-
🔗 Julia Evans A data model for Git (and other docs updates) rss
Hello! This past fall, I decided to take some time to work on Git's documentation. I've been thinking about working on open source docs for a long time - usually if I think the documentation for something could be improved, I'll write a blog post or a zine or something. But this time I wondered: could I instead make a few improvements to the official documentation?
So Marie and I made a few changes to the Git documentation!
a data model for Git
After a while working on the documentation, we noticed that Git uses the terms "object", "reference", or "index" in its documentation a lot, but that it didn't have a great explanation of what those terms mean or how they relate to other core concepts like "commit" and "branch". So we wrote a new "data model" document!
You can read the data model here for now. I assume at some point (after the next release?) it'll also be on the Git website.
I'm excited about this because understanding how Git organizes its commit and branch data has really helped me reason about how Git works over the years, and I think it's important to have a short (1600 words!) version of the data model that's accurate.
The "accurate" part turned out to not be that easy: I knew the basics of how Git's data model worked, but during the review process I learned some new details and had to make quite a few changes (for example how merge conflicts are stored in the staging area).
updates to `git push`, `git pull`, and more
I also worked on updating the introduction to some of Git's core man pages. I quickly realized that "just try to improve it according to my best judgement" was not going to work: why should the maintainers believe me that my version is better?
I've seen a problem a lot when discussing open source documentation changes where 2 expert users of the software argue about whether an explanation is clear or not ("I think X would be a good way to explain it! Well, I think Y would be better!")
I don't think this is very productive (expert users of a piece of software are notoriously bad at being able to tell if an explanation will be clear to non-experts), so I needed to find a way to identify problems with the man pages that was a little more evidence-based.
getting test readers to identify problems
I asked for test readers on Mastodon to read the current version of documentation and tell me what they find confusing or what questions they have. About 80 test readers left comments, and I learned so much!
People left a huge amount of great feedback, for example:
- terminology they didn't understand (what's a pathspec? what does "reference" mean? does "upstream" have a specific meaning in Git?)
- specific confusing sentences
- suggestions of things to add ("I do X all the time, I think it should be included here")
- inconsistencies ("here it implies X is the default, but elsewhere it implies Y is the default")
Most of the test readers had been using Git for at least 5-10 years, which I think worked well - if a group of test readers who have been using Git regularly for 5+ years find a sentence or term impossible to understand, it makes it easy to argue that the documentation should be updated to make it clearer.
I thought this "get users of the software to comment on the existing documentation and then fix the problems they find" pattern worked really well and I'm excited about potentially trying it again in the future.
the man page changes
We ended up updating these 4 man pages:
- `git add` (before, after)
- `git checkout` (before, after)
- `git push` (before, after)
- `git pull` (before, after)
The `git push` and `git pull` changes were the most interesting to me: in addition to updating the intro to those pages, we also ended up writing:
- a section describing what the term "upstream branch" means (which previously wasn't really explained)
- a cleaned-up description of what a "push refspec" is
Making those changes really gave me an appreciation for how much work it is to maintain open source documentation: it's not easy to write things that are both clear and true, and sometimes we had to make compromises, for example the sentence "`git push` may fail if you haven't set an upstream for the current branch, depending on what `push.default` is set to." is a little vague, but the exact details of what "depending" means are really complicated and untangling that is a big project.
on the process for contributing to Git
It took me a while to understand Git's development process. I'm not going to try to describe it here (that could be a whole other post!), but a few quick notes:
- Git has a Discord server with a "my first contribution" channel for help with getting started contributing. I found people to be very welcoming on the Discord.
- I used GitGitGadget to make all of my contributions. This meant that I could make a GitHub pull request (a workflow I'm comfortable with) and GitGitGadget would convert my PRs into the system the Git developers use (emails with patches attached). GitGitGadget worked great and I was very grateful to not have to learn how to send patches by email with Git.
- Otherwise I used my normal email client (Fastmail's web interface) to reply to emails, wrapping my text to 80 character lines since that's the mailing list norm.
I also found the mailing list archives on lore.kernel.org hard to navigate, so I hacked together my own git list viewer to make it easier to read the long mailing list threads.
Many people helped me navigate the contribution process and review the changes: thanks to Emily Shaffer, Johannes Schindelin (the author of GitGitGadget), Patrick Steinhardt, Ben Knoble, Junio Hamano, and more.
(I'm experimenting with comments on Mastodon, you can see the comments here)
-
🔗 Ampcode News Efficient MCP Tool Loading rss
MCP servers often provide a lot of tools, many of which aren't used. That costs a lot of tokens, because these tool definitions have to be inserted into the context window whether they're used by the agent or not.
As an example: the chrome-devtools MCP currently provides 26 tools that together take up 17k tokens; that's 10% of Opus 4.5's context window and 26 tools isn't even a lot for many MCP servers.
To help with that, Amp now allows you to combine MCP server configurations with Agent Skills, allowing the agent to load an MCP server's tool definitions only when the skill is invoked.
How It Works
Create an `mcp.json` file in the skill definition, next to the `SKILL.md` file, containing the MCP servers and tools you want the agent to load along with the skill:
    {
      "chrome-devtools": {
        "command": "npx",
        "args": ["-y", "chrome-devtools-mcp@latest"],
        "includeTools": [
          // Tool names or glob patterns
          "navigate_page",
          "take_screenshot",
          "new_page",
          "list_pages"
        ]
      }
    }
At the start of a thread, all the agent will see in the context window is the skill description. When (and if) it then invokes the skill, Amp will append the tool descriptions matching the `includeTools` list to the context window, making them available just in time.
With this specific configuration, instead of loading all 26 tools that `chrome-devtools` provides, we instead load only four tools, taking up 1.5k tokens instead of 17k.
Take a look at our ui-preview skill, which makes use of the `chrome-devtools` MCP, for a full example.
If you want to learn more about skills in Amp, take a look at the Agent Skills section in the manual.
To find out more about the implementation of this feature and how we arrived at it, read this blog post by Nicolay.
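A hypothetical on-disk layout for such a skill, under the assumption that each skill lives in its own directory (the exact skills location depends on your Amp setup; `ui-preview` is just the example name used above):
    ui-preview/
        SKILL.md    <- skill description the agent always sees at thread start
        mcp.json    <- MCP servers and includeTools loaded only when the skill is invoked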
-
- January 07, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-07 rss
IDA Plugin Updates on 2026-01-07
New Releases:
Activity:
- BorderBinaryRecognizer
- dylib_dobby_hook
- def4171a: Update Builder.yml
- ghidra
- ghidra-chinese
- 8ba6d042: Test: fix inability to search
- ida-codex-mcp
- bc6a5168: new feature
- ida-security-scanner
- IDAPluginList
- 65789c1d: Update
- py-jobs
- quokka
- 3e10c376: Merge pull request #76 from quarkslab/dependabot/github_actions/actio…
-
🔗 r/LocalLLaMA Sopro: A 169M parameter real-time TTS model with zero-shot voice cloning rss
As a fun side project, I trained a small text-to-speech model that I call Sopro. Some features:
- 169M parameters
- Streaming support
- Zero-shot voice cloning
- 0.25 RTF on CPU, meaning it generates 30 seconds of audio in 7.5 seconds
- Requires 3-12 seconds of reference audio for voice cloning
- Apache 2.0 license
Yes, I know, another English-only TTS model. This is mainly due to data availability and a limited compute budget. The model was trained on a single L40S GPU.
It’s not SOTA in most cases, can be a bit unstable, and sometimes fails to capture voice likeness. Nonetheless, I hope you like it!
GitHub repo: https://github.com/samuel-vitorino/sopro
submitted by /u/SammyDaBeast
-
🔗 anthropics/claude-code v2.1.1 release
No content.
-
🔗 Bits About Money One Regulation E, Two Very Different Regimes rss

Programming note: Happy New Year! Bits about Money is made possible--and freely accessible to all--by the generous support of professionals who find it useful. If you're one of them, thank you--and consider purchasing a membership.
The U.S. is often maligned as being customer-hostile compared to other comparable nations, particularly those in Europe. One striking counterexample is that the government, by regulation, outsources to the financial industry an effective, virtually comprehensive, and extremely costly consumer protection apparatus covering a huge swath of the economy. It does this by strictly regulating the usage of what were once called "electronic" payment methods, which you now just call "payment" methods, in Regulation E.
Reg E is not uniformly loved in the financial industry. In particular, there has been a concerted effort by banks to renegotiate the terms of it with respect to Zelle in particular. This is principally because Zelle has been anomalously expensive, as Reg E embeds a strong, intentionally bank-funded anti-fraud regime, but Zelle does not monetize sufficiently to pay for it.
And thus a history lesson, a primer, and an explanation of a live public policy controversy.
These newfangled computers might steal our money
If you were to ask your friendly neighborhood reference librarian for Electronic Fund Transfers (Regulation E), 44 Fed. Reg. 18469 (Mar. 28, 1979), you might get back a document yellowed with age. Congress, in its infinite wisdom, intended the Electronic Funds Transfer Act to rein in what it saw as the downsides of automation of the finance industry, which was in full swing by this time.
Many electronic transactions might not issue paper receipts, and this would complicate he-said bank-said dispute resolution. So those were mandated. Customers might not realize transactions were happening when they didn't have to physically pull out a checkbook for each one. Therefore, institutions were required to issue periodic statements, via a trustworthy scaled distribution system, paper delivered by the United States Postal Service. And electronic access devices--the magnetic-stripe cards, and keyfobs [0], and whatever the geeks dreamed up next--might be stolen from customers. And therefore the banks were mandated to be able to take reports of mislaid access devices, and there was a strict liability transfer, where any unauthorized use of a device was explicitly and intentionally laid at the foot of the financial institution.
Some of the concerns that were top of mind for lawmakers sound even more outlandish to us, today. Financial institutions can't issue credit cards without receiving an "oral or written request" for the credit card. That sounds like "Why would you even need to clarify that, let alone legislate against it?!" unless you have the recent memory of Bank of America having the Post Office blanket a city with unsolicited credit cards then just waiting to see what happened. [1]
The staff who implemented Reg E and the industry advocates commenting on it devoted quite a bit of effort to timelines, informed by their impression of the cadence of life in a middle class American household and the capabilities of the Operations departments at financial institutions across the U.S.'s wide spectrum of size and sophistication. Two business days felt like a reasonable timeline after the theft of a card to let the financial institution know. They picked sixty business days from the postmark for discovering an unauthorized transaction in your periodic statements. That felt like a fair compromise between wanting to eventually give financial institutions some level of finality while still giving customers a reasonable buffer to account for holidays, vacation schedules, the time it takes a piece of mail to travel from New York City to Hawaii, and the reality that consumers, unlike banks, do not have teams paid to open and act upon mail.
And, very importantly for the future, Congress decided that unsophisticated Americans might be conned into using these newfangled electronic devices in ways that might cost them money, and this was unacceptable. Fraudulent use of an electronic fund transfer mechanism was considered an error as grave as the financial institution simply making up transactions. It had the same remedy: the financial institution corrects their bug at their cost.
" Unauthorized electronic fund transfer" means an electronic fund transfer from a consumer's account initiated by a person other than the consumer without actual authority to initiate the transfer and from which the consumer receives no benefit.
Reg E provided for two caps on consumer liability for unauthorized electronic fund transfer: $50 in the case of timely notice to the financial institution, as sort of a deductible (Congress didn't want to encourage moral hazard), and $500 for those customers who didn't organize themselves sufficiently. Above those thresholds, it was the bank's problem.
Reg E also establishes some procedural rights: an obligation for institutions to investigate claims of unauthorized funds transfers (among other errors--Congress was quite aware that banks frequently made math and recordkeeping mistakes), to provisionally credit customers during those investigations, strict timelines for the financial institutions, and the presumptive burden of proof.
In this privately-administered court system, the bank is the prosecutor, the defendant, and the judge simultaneously, and the default judgment is "guilty." It can exonerate itself only by, at its own expense and peril, producing a written record of the evidence examined. This procedural hurdle is designed to simplify review by the United States' actual legal system, regulators, and consumer advocates.
The institution's report of the results of its investigation shall include a written explanation of the institution's findings and shall note the consumer's right to request the documents that the institution relied on in making its determination. Upon request, the institution shall promptly provide copies of the documents.
Having done informal consumer advocacy for people with banking and debt issues for a few years, I cannot overstate the degree to which this prong of Reg E is a gift to consumer advocates. Many consumers are not impressively detail-oriented, and Reg E allows an advocate to conscript a financial institution's Operations department to backfill the customer's files about a transaction they do not have contemporaneous records of. In the case that the Operations department itself isn't organized, great, at least from my perspective. Reg E says the bank just ate the loss. And indeed, several times over the years, the prototypical grandmother in Kansas received a letter from a bank vice president of consumer lending explaining that the bank was in receipt of her Reg E complaint, had credited her checking account, and considered the matter closed. It felt like a magic spell to me at the time.
The contractual liability waterfall in card payments
Banks do not like losing money, citation hopefully unnecessary, and part of the business of banking is arranging for liability transfers. Insurance is many peoples' paradigmatic way to understand liability transfers, but banks make minimal use of insurance in core banking services. (A bank which is robbed almost always self-insures, and the loss--averaging four figures and trending down--is so tiny that it isn't worth specifically budgeting for.)
The liability transfer which most matters to Reg E is a contractual one, from issuing banks to card processors and from card processors to card-accepting businesses. These parties' obligations to banks and cardholders are substantially broader than the banks' obligations under Reg E, but the banks use a fraction of those contracts to defray a large portion of their Reg E liability.
For example, under the various brands' card rules, an issuer must have the capability for a customer to say that a transaction which happened over plastic (or the electronic equivalent) simply didn't meet their expectations. The issuer's customer service representative will briefly collect facts from the customer, and then initiate an automatic process to request information from a representative of the card-accepting business. On receipt of that information, or non-receipt of it, a separate customer service representative makes a decision on the case. This mechanism is called a "chargeback" in the industry, and some banks are notorious for favoring the high-income quite-desirable customers who hold their plastic over the e.g. restaurant that the bank has no relationship with. "My eggs were undercooked" is a sufficient reason to ask for a chargeback and will result in the bank restoring your money a large percentage of the time.
In the case where the complaint is "My card was stolen and used without my knowledge", essentially the same waterfall activates, perhaps with the internal note made that this dispute is Reg E sensitive. But mechanically it will be quite similar: bank tells processor "Customer asserts fraud", processor tells business, business replies with a fax, bank staff reviews fax and adjudicates.
There are on the order of 5 million criminal cases in the formal U.S. legal system every year. There are more than 100 million complaints to banks, some of them alleging a simple disagreement (undercooked eggs) and very many alleging crime (fraud). It costs banks billions of dollars to adjudicate them.
The typical physical form of an adjudication is not a weeks-long trial with multiple highly-educated representatives debating in front of a more-senior finder of fact. It is a CSR clicking a button on their web app's interface after 3 minutes of consideration, and the entire evidentiary record often fits in a tweet.
"Customer ordered from online store. Customer asserts they didn't receive the item in six weeks. No response from store. Customer wins. Next.", "Customer ordered from online store. Customer asserts they didn't receive item. Store provided evidence of shipping via UPS. Customer does not have a history of fraudulent chargebacks. Customer wins. Next.", "Customer's bookkeeper asserts ignorance of software as a service provider charge. Business provided written statement from customer's CEO stating chargeback filed in error by new bookkeeper. Customer wins. Next." (I'm still annoyed by that last one, years later, but one has to understand why it is rational for the bank and, in a software company's clearer-minded moments, rational for them to accept the risk of this given how lucrative software is.)
The funds flow in a chargeback mirrors the contractual liability waterfall: the issuing bank gets money back from a financial intermediary, who gets it back from a card processor (like Stripe, which I once worked for, and which doesn't specifically endorse things I write in my own spaces), who will attempt to get it back from the card accepting business.
That word "attempt" is important. What if the business doesn't have sufficient money to pay the aggrieved customer, or they can't be located anymore when the system comes to collect? Reg E has a list of exceptions and those aren't on it. The card processor then eats the loss.
The same frequently happens to cover the provisional credit mandated while the bank does its investigation, and the opposite happens in the case where the issuing bank decides that the card accepting business is in the right, and should be restored the money they charged a customer.
This high-frequency privately-funded alternative legal system has quietly ground out hundreds of millions of cases for the last half century. It is a foundation upon which commerce rests. It even exerts influence internationally, since the card brand rules essentially embed a variant of the Reg E rights for cardholders globally, and since nowhere in Reg E is there a carveout for transactions that a customer might make electronically with their U.S. financial institution while not physically located in the United States. If you are mugged and forced to withdraw money at an ATM in Caracas, Uncle Sam says your bank knows that some tiny percentage of cardholders will be mugged every year, and mandates they pay.
Enter Zelle
Zelle, operated by Early Warning Services (owned by a consortium of large banks), is a substantially real-time electronic transfer method between U.S. bank accounts. Bank web and mobile apps have for decades supported peer-to-peer and customer-to-business transfers, via push ACH (and, less frequently, by wire), but ACH will, in standard practice, take a few days to be credited to the recipient and a few hours before it even shows up to them as pending.
Zelle is substantially a blocking play against Venmo, Cash App, and similar. Those apps captivated a large number of mostly-young users with P2P payments, for use cases like e.g. splitting dinner, spotting a buddy $20, or collecting donations for a Christmas gift for the teacher from all the parents in a class. After attracting the users with those features, they kept them with product offerings which, in the limit, resemble bank accounts and which actually had bank accounts under the hood for at least some users.
And so the banks, fearing that real-time payment rails would not arrive in time (FedNow has been FedLater for a decade and RTP has relatively poor coverage), stood up Zelle, on the theory that this feature could be swiftly built into all the bank apps. Zelle launched in 2017.
Zelle processes enormous volumes. It crowed recently that it did $600 billion in volume in the first half of 2025. Zelle is much larger than the upstarts like Venmo (about $250 billion in annual volume) and Cash App (about $300 billion in customer inflows annually). This is not nearly in the same league as card payments (~$10 trillion annually) or ACH transfers (almost $100 trillion annually), but it is quite considerable.
All of it is essentially free to the transacting customers, unlike credit cards, which are extremely well-monetized. And there is the rub.
Zelle is an enormous fraud target
"Hiya, this is Susan calling from your bank. Your account has been targeted by fraudsters. I need you to initiate a Zelle payment to yourself to move it to a safe account while we conduct our investigation. Just open your mobile banking app, type the password, select Zelle from the menu, and send it to your own phone number. Thank you for your cooperation."
Susan is lying. Her confederates have convinced at least one financial institution in the U.S. that the customer's phone number is tied to a bank account which fraudsters control. That financial institution registered it with Zelle, so that when the victim sends money, the controlled account receives it substantially instantaneously. They will then attempt to immediately exfiltrate that money, sending it to another financial institution or a gift card or a crypto exchange, to make it difficult for investigators to find it faster than they can spend it. This process often repeats; professionals call this "layering."
So, some days later, when the victim calls the bank and asks what happened to the money the bank was trying to secure from fraud, what does the bank tell them?
Zelle is quick to point out that only 0.02% of transactions over it have fraud reported, and they assert this compares favorably to competing payment methods. Splendid; then do the banks want to absorb on the order of $240 million a year in losses from fraudulent use of a technology they built into their own apps, one which no intellectually serious person would dispute is an electronic funds access device?
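(A back-of-envelope check on that figure, using Zelle's own numbers and treating the 0.02% reported-fraud rate as a share of dollar volume, which is how the estimate above reads:)

```python
# Back-of-envelope: annualize Zelle's reported half-year volume and apply
# the 0.02% reported-fraud rate. Rough estimate only.
half_year_volume = 600e9            # dollars, first half of 2025
annual_volume = 2 * half_year_volume
reported_fraud_rate = 0.0002        # 0.02%

print(f"${annual_volume * reported_fraud_rate:,.0f}")  # $240,000,000
```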
Frequently in the last few years, the bank has said "Well, as Gen Z would say, that sounds like a bit of a skill issue." And Reg E? "We never heard of it. Caveat emptor."
To be slightly more sympathetic to the banks, they're engaged in fine-grained decisioning on Zelle frauds, which have many mechanisms and flavor texts. They are more likely to reimburse as required in the case of account takeovers, where the criminal divines a customer's password, pops an email address, or steals access to a phone number, and then uses it to empty a bank account. They are far less likely to reimburse where the criminal convinces the customer to operate their access device (mobile phone) in a way against their interests. Skill issue.
Why do banks aggressively look for reasons to deny claims? Elementary: there is no waterfall for Zelle. If there is a reimbursement for the user, it has to come from the bank's balance sheet. (Zelle as originally shipped was incapable of reversing a transaction to claw back funds. That mechanism was something of an antipriority at design time, since funds subject to a clawback might be treated by receiving banks as non-settled, and the user experience banks wanted to deliver was "instantly spendable, like on Venmo." Instantaneous funds availability exists in fundamental tension with security guarantees even if the finality gets relaxed, as Zelle's was in 2023 under regulatory pressure.)
Banks like to pretend that the dominant fraud pattern is e.g. a "social media scam", where an ad on Facebook or a Tiktok video leads someone to purchase sneakers with a Zelle payment from an unscrupulous individual, who doesn't actually send the sneakers. This pattern matches more towards "well, that's a disagreement about how your eggs were done, not a disagreement about how we operate payment rails." Use a card and we'll refund the eggs (via getting the restaurant to pay for them); don't and we won't.
So, in sum and in scaled practice at call centers, the bank wants to quickly get customers to admit their fingers were on their phone when defrauded. If so, no reimbursement.
This rationale is new and runs against decades of standard practice. If you are defrauded via a skimming device attached to an ATM, the bank is absolutely liable, and will almost always come to the correct conclusion immediately. It would be absurdly cynical to say that you intended to transact with the skimming device and demonstrated your assent by physically dipping your card past it.
Bank recalcitrance caused the Consumer Financial Protection Bureau to sue a few large banks in late 2024. The CFPB alleged they had a pattern and practice of not paying out claims for fraud conducted over Zelle rails. The banks will tell you the same, using slightly different wording. Chase, for example, now buries in the fine print "Neither Chase nor Zelle® offers reimbursement for authorized payments you make using Zelle®, except for a limited reimbursement program that applies for certain imposter scams where you sent money with Zelle®. This reimbursement program is not required by law and may be modified or discontinued at any time."
The defensible gloss of banks' position on "purchase protection" is that the purchase protection customers pay for in credit cards, which makes them whole for eggs not cooked to their liking, is not available for Zelle payments. Fine.
The indefensible extension is that banks aren't liable for defrauded customers. That is a potential policy regime, chosen by the polity of many democratic nations. The United States is not one of those nations. Our citizens, through their elected representatives, made the considered choice that financial institutions would need to provide extraordinary levels of safety in electronic payments. In reliance upon that regime, the people of the United States transacted many trillions of dollars over payment rails, which was and is very lucrative for all concerned.
The CFPB's lawsuit was dropped in early 2025, as CFPB's enforcement priorities were abruptly curtailed. (Readers interested in why might see Debanking and Debunking and Ctrl-F "wants some examples made.") To the extent it still exists after being gutted, it is fighting for its life.
But knifing the CFPB doesn't repeal Reg E. In theory, any bank regulator (and many other actors besides) can hold them to account for obligations under it. One of the benefits of Reg E is that the single national standard is easiest to reason about, but in the absence of it, one can easily imagine a patchwork of state-by-state consumer protection actions and/or coalitioning between state attorneys general. I will be unmoved if banks complain that this is all so complicated and they welcome regulation but it has to be a single national standard.
Banks may attempt to extend the Zelle precedent
Having for the moment renegotiated their Reg E obligations by asserting they don't exist, and mostly gotten away with it, some banks might feel their oats a bit and assert that customers bear fraud risks more generally.
For example, in my hometown of Chicago, there has been a recent spate of tap-to-pay donation fraud. The fraudster gets a processing account, in their own name or that of a confederate/dupe, to collect donations for a local charitable cause. (This is not in itself improper; the financial industry understands that the parent in charge of a church bake sale will not necessarily be able to show paperwork to that effect before the cookies go stale.) Bad actors purporting to be informal charities accost Chicagoans on the street and ask for a donation via tap-to-pay, but the actual charge is absurdly larger than what the donor expected to donate; $4,000 versus $10, for example. The bad actor then exits the scene quickly.
(A donor who discovers the fraud in the moment is then confronted with the unfortunate reality that they are outnumbered by young men who want to rob them. This ends about as well as you'd expect. Chicago has an arrest rate far under 1% for this. A cynic might say that if you don't kill the victim, it's legal. I'm not quite that cynical.)
But Reg E doesn't care about the safety of city streets, in Chicago or anywhere else. It assumes that payment instruments will continue to be used in an imperfect world. This case has a very clear designed outcome: customer calls bank, bank credits customer $4,000 because the customer was defrauded and therefore the "charity" lacked actual authority for the charge, bank pulls $4,000 from credit card processor, credit card processor attempts to pull $4,000 from the "charity", card processor fails in doing so, card processor chalks it up to tuition to improve its fraud models in the future.
Except at least some banks, per the Chicago Tribune's reporting, have adopted specious rationales to deny these claims. Some victims surrender physical control of their device, and banks argue that that means they authorized the transaction. Some banks asserted the manufactured-out-of-their-hindquarters rationale that Reg E only triggers when there is a physical receipt. (This inverts the Act's responsibility graph, where banks were required to provide physical hardcopy receipts to avoid an accountability sink swallowing customer funds.)
Banks will often come to their senses after being contacted by the Chicago Tribune or someone with social power and gravitas who knows how to cite Reg E. But it is designed to work even for less sophisticated customers who don't know the legislative history of the state machine. They just have to know "Call your bank if you have a problem."
That should work and we are diminished if it doesn't.
Reg E encompasses almost every technology which exists and many which don't yet
With a limited number of carveouts (e.g. wire transfers), Reg E is intentionally drafted to be future-proof against changes in how Americans transact. This is why, when banks argue that some new payments rail is exempt because it is "different," the correct legal response is usually some variation of: doesn't matter--that's Reg E.
Our friends in crypto generally believe that Reg E is one star in the constellation of regulations that they're not subject to. They created Schrodinger's financial infrastructure, which is the future of finance in the boardroom and just some geeks playing with an open source project once grandma gets defrauded. There is an unresolved tension between saying "Traditional institutions like Visa are adopting stablecoins" and the see-no-evil, reimburse-no-losses attitude issuers and others in the industry take towards fraud which goes over their rails.
Reg E doesn't have an exception in its text for electronic funds transfers which happen over slow databases.
A hypothetical future CFPB, given the long-standing premise that fraud is not an acceptable outcome of consumer payment systems, would swiftly come to the conclusion that if it walks like a checking account, quacks like a checking account, and is marketed as an alternative to checking accounts, then it is almost certainly within Reg E scope.
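That "walks like, quacks like" test maps almost literally onto duck typing, which is a convenient way to internalize it: what matters is what the product does, not what the marketing deck calls it. A toy sketch, with all product names and fields invented:

```python
# Toy illustration of the "duck test" for Reg E scope. The fields and the
# example product are invented; this is not legal analysis, just the shape
# of the argument: behavior, not branding, drives the classification.
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    holds_consumer_funds: bool
    allows_electronic_transfers: bool
    marketed_as: str  # deliberately ignored by the test below

def quacks_like_a_checking_account(p: Product) -> bool:
    return p.holds_consumer_funds and p.allows_electronic_transfers

wallet = Product("TotallyNotABank Cash", True, True, "a lifestyle superapp")
print(quacks_like_a_checking_account(wallet))  # True
```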
Casting one's eyes across the fintech landscape, many players seem to have checking account envy. In the era of the "financial superapp" where everyone wants to bolt on high-frequency use cases like payments to e.g. AUM gathering machines like brokerage accounts, that is worth a quick chat with Legal before you start getting the letters from Kansan grandmas.
[0] The first "credit cards" were not the plastic-with-a-magstripe form factor which came to dominate but rather "charge plates." They were physical tokens which pointed at a record at e.g. a department store's internal accounts, usually by means of an embossed account number, to be read by the Mk 0 human eyeball and, later, physically copied to a paper record via ink. Many were metal and designed to be kept around a key ring. As Matt Levine and many others have mentioned, the crypto community has speedrun hundreds of years of financial history, and keeping your account identifier on etched metal enjoyed a short renaissance recently. Unlike the department stores' bookkeepers, crypto enthusiasts lost many millions of dollars of customer funds by misplacing their metal (see page 20 particularly).
[1] Market research in the 1950s was hard. Short version of the Fresno drop: they lost money due to abuse by a small segment of users, but successfully proved that the middle class would happily use plastic to transact if they were offered it and it was generally accepted by businesses as opposed to being tied to a single store. They then scaled the 60,000 card pilot to millions within a year. Visa is the corporate descendant of that program; Mastercard that of what competitors did in response.
-
🔗 anthropics/claude-code v2.0.76 release
What's changed
- Fixed issue with macOS code-sign warning when using Claude in Chrome integration
-
🔗 anthropics/claude-code v2.1.0 release
chore: Update CHANGELOG.md
-
🔗 The Pragmatic Engineer The grief when AI writes most of the code rss
I'm coming to terms with the high probability that AI will write most of my code which I ship to prod, going forward. It already does it faster, and with similar results to if I'd typed it out. For languages/frameworks I'm less familiar with, it does a better job than me.
It feels like something valuable is being taken away, and suddenly. It took a lot of effort to get good at coding and to learn how to write code that works, to read and understand complex code, and to debug and fix when code doesn't work as it should. I still remember how daunting my first "real" programming class was at university (learning C), how lost I felt on my first job with a complex codebase, and how it took years of practice, learning from other devs, books, and blogs, to get better at the craft. Once you're pretty good, you have something that's valuable and easy to validate by writing code that works!
Some of my best memories of building software are about coding. Being "locked in" and balancing several ideas while typing them out, of being in the zone, then compiling the code, running it and seeing that "YES ", it worked as expected!
It's been a love-hate relationship, to be fair, based on the amount of focus needed to write complex code. Then there's all the conflicts that time estimates caused: time passes differently when you're locked in and working on a hard problem.
Now, all that looks like it will be history.
I wonder if I'll still get the same sense of satisfaction from the fact that writing complicated code is hard? Yes, AI is convenient, but there's also a loss.
Or perhaps with AI agents, being "in the zone" will shift to thinking about higher-level problems, while instructing more complex code to be written?
This was a section from my analysis piece When AI writes almost all code, what happens to software engineering?. Read the full one here.
-
🔗 r/LocalLLaMA 16x AMD MI50 32GB at 10 t/s (tg) & 2k t/s (pp) with Deepseek v3.2 (vllm-gfx906) rss
Deepseek 3.2 AWQ 4-bit @ 10 tok/s (output) // 2000 tok/s (input of 23k tok) on vllm-gfx906-deepseek with 69000 context length.
Power draw: 550W (idle) / 2400W (peak inference).
Goal: run Deepseek V3.2 AWQ 4-bit on the most cost-effective hardware, like 16x MI50, at decent speed (token generation & prompt processing).
Coming next: open source a future test setup of 32 AMD MI50 32GB for Kimi K2 Thinking.
Credits: BIG thanks to the global open source community! All setup details here: https://github.com/ai-infos/guidances-setup-16-mi50-deepseek-v32
Feel free to ask any questions and/or share any comments.
PS: it might be a good alternative to CPU hardware as RAM prices increase, and prompt processing will be much faster with 16 TB/s aggregate bandwidth + tensor parallelism!
PS2: I'm just a random guy with an average software dev background using LLMs to make it run. Goal is to be ready for LOCAL AGI without spending $300k+...
submitted by /u/ai-infos
[link] [comments]
-
🔗 News Minimalist 🐢 US cuts childhood vaccine list + 8 more stories rss
In the last 5 days ChatGPT read 149582 top news stories. After removing previously covered events, there are 9 articles with a significance score over 5.5.

[6.0] US reduces routine childhood vaccine recommendations —theguardian.com(+140)
The Trump administration has slashed routine childhood vaccine recommendations from 17 to 11, effective immediately, a move experts warn will reduce immunization access and increase infectious disease transmission.
Vaccines for influenza, rotavirus, and RSV are no longer universally recommended, shifting to high-risk or shared clinical decision-making status. This change, overseen by Robert F. Kennedy Jr., aims to make several immunizations optional rather than standard routine for all children.
The shift aligns the US schedule with Denmark's as the nation faces its largest measles outbreak in decades. Concurrently, domestic cases of tetanus and fatal pertussis infections have reached multi-year highs.
[6.2] US seizes Venezuelan President Maduro, asserting control over nation's oil wealth —theconversation.com(+1938)
US special forces have seized Venezuelan President Nicolás Maduro, toppling his government. President Donald Trump announced the United States will now manage Venezuela and its massive oil reserves.
The military operation follows decades of tension over Venezuela's oil wealth, the world’s largest reserves. Trump intends for US companies to upgrade infrastructure and generate revenue, ending a thirty-year adversarial relationship that began under former leader Hugo Chávez.
Highly covered news with significance over 5.5
[6.4] Hyundai and Boston Dynamics showcase humanoid robot Atlas at CES — bostonglobe.com (+45)
[5.7] China restricts over a thousand dual-use exports to Japan, including rare earths — udn.com (Chinese) (+26)
[5.7] xAI secures $20 billion in funding from Nvidia, Cisco, and Fidelity — cnbc.com (+13)
[5.7] X platform enables creation of nonconsensual AI-generated sexual images — theconversation.com (+77)
[5.6] Guinea's junta leader confirmed president-elect after first vote since 2021 coup — financialpost.com (+3)
[5.5] Trump orders divestment of chip deal over China security concerns — apnews.com (+22)
[5.9] Nvidia launches Alpamayo AI platform, introducing deep-reasoning for autonomous vehicles — forbes.com (+619)
Thanks for reading!
— Vadim
You can create your own significance-based RSS feed with premium.
-
🔗 libtero/suture Suture v1.0.0 release
No content.
-
🔗 r/reverseengineering Coleco Zodiac (1979) Daily Preview date codes extracted from original manual rss
submitted by /u/Few-Leading-9611
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
sync repo: +1 plugin, +1 release. New plugins: [decode_instruction](https://github.com/milankovo/decode_instruction) (1.0.0) -
🔗 r/reverseengineering Kernel driver blocking Cheat Engine (ObRegisterCallbacks) rss
submitted by /u/No_Acanthaceae1468
[link] [comments] -
🔗 Locklin on science Optimizing for old battles rss
About 3/4 of our management expertocracy is optimizing for old battles. It’s a pattern which is pervasive in Western Civilization, which is one of the reasons everything is so weird right now. Gather together a group of bureaucrats to solve a real problem, it’s still there 50 years later doing …. things. Things which are […]
-
🔗 r/reverseengineering PAIMON BLESS V17.7 - Quantitative Trading System rss
submitted by /u/pmd02931
[link] [comments] -
🔗 r/reverseengineering Hermes Studio demo - React Native decompiler and disassembler rss
submitted by /u/nilla615615
[link] [comments] -
🔗 r/wiesbaden Looking for a good men’s hairdresser/barber in Wiesbaden (medium-length hair) rss
Hi everyone, I’m looking for recommendations for a good men’s hairdresser or barber in Wiesbaden.
I'm currently growing my hair out, but it's at that awkward stage where it's a bit too long and messy. I don't want a short cut, just someone who knows how to shape and tidy medium-length hair properly while keeping it growing.
English-speaking would be a big plus. Thanks in advance!
submitted by /u/SaladWestern8139
[link] [comments] -
🔗 streamyfin/streamyfin v0.51.0 release
Finally a new release 🥳 this one has some really nice improvements, like:
- Approve Seerr (formerly Jellyseerr) requests directly in Streamyfin for admins
- Updated Home Screen icon in the new iOS 26 style
- Improved VLC integration with native playback (AirPods controls, automatic pause when other audio starts, native system controls with artwork)
- Option to use KSPlayer on iOS - better hardware decoding support and PiP
- Music playback (beta)
- Option to disable player gestures at screen edges to prevent conflicts with swipe down notifications
- Snapping scroll in all carousels for smoother and more precise navigation
- Playback speed
- Dolby badge displayed in technical item details when available
- Expanded playback options with dynamically loaded streams and full media selection (Gelato support)
- Streamystats watchlists and promoted sections integration
- Initial KefinTweaks integration
- A lot of other fixes and small improvements
What's Changed
- fix: linting by @fredrikburmester in #1184
- chore(deps): Update dependency react-native-device-info to v15 by @renovate[bot] in #1182
- chore(deps): Update actions/dependency-review-action action to v4.8.2 by @renovate[bot] in #1175
- chore(deps): Update github/codeql-action action to v4.31.3 by @renovate[bot] in #1180
- feat: Liquid Glass Icon by @SUPERHAMSTERI in #1070
- fix: auto-filling would cause state not to be updated by @fredrikburmester in #1200
- fix: update okhttp v5 and fix android download crash issues by @fredrikburmester in #1203
- fix: clean toast message jellyseerr movie request by @fredrikburmester in #1201
- chore(deps): upgrade dev dependencies and test utilities by @Gauvino in #1195
- feat: vlc apple integration - pause on other media play + controls by @fredrikburmester in #1211
- fix: disable gestures from top and bottom of screen because of interference with notification shade pull down by @fredrikburmester in #1206
- feat: move source and track selection to seperate sheet by @lostb1t in #1176
- chore(deps): Pin dependencies by @renovate[bot] in #1209
- fix: show tech details when avaiable by @lostb1t in #1213
- feat: approve jellyserr requests by @fredrikburmester in #1214
- refactor: Move media sources preload higher up the tree by @lostb1t in #1216
- feat: prefer downloaded file by @fredrikburmester in #1217
- refactor: pass down items with sources to children by @lostb1t in #1218
- ci: fix CodeQL checkout by @Gauvino in #1170
- chore: Add version 0.47.1 to issue report template by @Simon-Eklundh in #1251
- chore(deps): Update actions/setup-node action to v6.1.0 by @renovate[bot] in #1262
- fix(player): Fix skip credits seeking past video end causing pause by @retrozenith in #1277
- feat: KSPlayer as an option for iOS + other improvements by @fredrikburmester in #1266
- fix(readme): Add Obtainium button by @kernelb00t in #1293
- feat: add button to toggle video orientation in player by @KindCoder-no in #743
- fix: jellyseer categories by @lancechant in #1233
- feat: add Dolby Vision badge by @edeuss in #1177
New Contributors
- @retrozenith made their first contribution in #1277
- @kernelb00t made their first contribution in #1293
- @edeuss made their first contribution in #1177
Full Changelog: v0.47.1...v0.51.0
Feedback
Your feedback matters. It helps us spot issues faster and keep improving the app in ways that benefit everyone. If you have ideas or run into problems, please open an issue on GitHub or join our Discord
-
🔗 batrachianai/toad A Historic Release release
[0.5.23] - 2026-01-06
Fixed
- A few style issues: tree background, status line padding
[0.5.22] - 2026-01-06
Fixed
- Fixes for settings combinations not taking effect
Changed
- Restored prompt history
- The /about slash command has been renamed to /toad:about, to create a namespace for future Toad commands
-
🔗 @cxiao@infosec.exchange RE: mastodon
RE: https://mastodon.social/@thejapantimes/115852557729468030
the Canada Modern graphic design style stays winning 😎
-
🔗 r/wiesbaden Anyone interested in starting a book club? rss
I’m looking to read more books this year but figured a book club might encourage me to stay committed! Does anyone know of any existing clubs in the area? If not, I’d love to start one :)
submitted by /u/kentoclatinator
[link] [comments] -
🔗 r/LocalLLaMA DeepSeek-R1’s paper was updated 2 days ago, expanding from 22 pages to 86 pages and adding a substantial amount of detail. rss
arXiv:2501.12948 [cs.CL]: https://arxiv.org/abs/2501.12948
submitted by /u/Nunki08
[link] [comments]
-
🔗 r/LocalLLaMA Don't put off hardware purchases: GPUs, SSDs, and RAM are going to skyrocket in price soon rss
In case you thought it was going to get better:
GPU prices are going up. AMD and NVIDIA are planning to increase prices every month starting soon.
NAND flash contract price went up 20% in November, with further increases in December. This means SSDs will be a lot more expensive soon.
DRAM prices are going to skyrocket, with no increase in production capacity and datacenters and OEMs competing for everything.
Even Consoles are going to be delayed due to the shortages.
According to TrendForce, conventional DRAM contract prices in 1Q26 are forecast to rise 55–60% quarter over quarter, while server DRAM prices are projected to surge by more than 60% QoQ. Meanwhile, NAND Flash prices are expected to increase 33–38% QoQ
Industry sources cited by Kbench believe the latest price hikes will broadly affect NVIDIA’s RTX 50 series and AMD’s Radeon RX 9000 lineup. The outlet adds that NVIDIA’s flagship GeForce RTX 5090 could see its price climb to as high as $5,000 later in 2026.
NVIDIA is also reportedly weighing a 30% to 40% reduction in output for parts of its midrange lineup, including the RTX 5070 and RTX 5060 Ti, according to Kbench.
submitted by /u/Eisenstein
[link] [comments] -
🔗 r/reverseengineering Can anyone crack this website and get the premium tool rss
submitted by /u/AdvisorObvious2693
[link] [comments] -
🔗 r/reverseengineering Crackmes.one RE CTF rss
submitted by /u/xusheng1
[link] [comments] -
🔗 r/reverseengineering Learning from the old Exynos Bug rss
submitted by /u/TwizzyIndy
[link] [comments] -
🔗 r/wiesbaden 2 hours to pass the time rss
Hi, I have an appointment at St Josef Krankenhaus tomorrow, so I'm arriving 2 hours early. Are there any places nearby where you can sit down with your laptop, or do you have other suggestions for how to pass the time?
submitted by /u/Living_Performer_801
[link] [comments] -
🔗 r/LocalLLaMA NousResearch/NousCoder-14B · Hugging Face rss
from NousResearch: "We introduce NousCoder-14B, a competitive programming model post-trained on Qwen3-14B via reinforcement learning. On LiveCodeBench v6 (08/01/2024 - 05/01/2025), we achieve a Pass@1 accuracy of 67.87%, up 7.08% from the baseline Pass@1 accuracy of 60.79% of Qwen3-14B. We trained on 24k verifiable coding problems using 48 B200s over the course of four days."
submitted by /u/jacek2023
[link] [comments]
-
🔗 badlogic/pi-mono v0.37.8 release
No content.
-
🔗 badlogic/pi-mono v0.37.7 release
No content.
-
🔗 Ampcode News User Invokable Skills rss
Since we added support for Agent Skills, we became heavy users of them. There are now fifteen skills in the Amp repository.
But one frustration we had was that skills were only invoked when the agent deemed that necessary. Sometimes, though, we knew exactly which skill the agent should use.
So we made skills user-invokable: you, as the user, can now invoke a skill, which will force the agent to use it when you send your next message.
Open the command palette (Cmd/Alt-Shift-A in the Amp editor extensions or Ctrl-O in the Amp CLI) and run
skill: invoke.
-
- January 06, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-06 rss
IDA Plugin Updates on 2026-01-06
New Releases:
Activity:
- capa
- chernobog
- 6000fe55: docs: Expand README with Z3 integration details and specific unflatte…
- 6848b8b6: feat: Enhance string decryption with XOR key derivation and robust CF…
- c3d8c78f: feat: Add support for CFSTR reference patterns in string decryption
- 1829f5e6: fix: Prevent plugin unload crash by removing explicit RuleRegistry cl…
- 1289cbbf: fix: Integrate char-stack and crypto results into string replacement
- d1104a45: feat: Propagate decrypted CFStrings to IDB and decompilation view
- 11b8027a: fix: Expand string decryption pattern matching for helpers and pointers
- eca82414: feat: Support pointer arithmetic assignments and fuzzy function matching
- 1ac86caa: fix: Integrate ctree_string_decrypt_handler into hexrays callback
- 463e0025: chore: Ignore temporary test files and binaries
- 39b86117: test: Add string obfuscation test program
- 768c1405: feat: Add ctree-level string decryption handler with AES support
- a35e406d: fix: Correct m_icall to m_call conversion with proper mcallinfo handling
- dylib_dobby_hook
- 532641a1: Update Builder.yml
- ida-claude-code-plugins
- ida-fast-mcp
- ida-hcli
- ida-spotlight
- IDAPluginList
- 1d31566c: Update
- malpedia-flossed
- 06f20b55: new data set release
- recover
- 415e5921: Update ida-plugin.json and add LICENSE.txt
-
🔗 badlogic/pi-mono v0.37.6 release
Added
- Extension UI dialogs (ctx.ui.select(), ctx.ui.confirm(), ctx.ui.input()) now accept an optional AbortSignal to programmatically dismiss dialogs. Useful for implementing timeouts. See examples/extensions/timed-confirm.ts. (#474)
- HTML export now shows bridge prompts in model change messages for Codex sessions (#510 by @mitsuhiko)
-
🔗 badlogic/pi-mono v0.37.5 release
Added
- ExtensionAPI: setModel(), getThinkingLevel(), setThinkingLevel() methods for extensions to change model and thinking level at runtime (#509)
- Exported truncation utilities for custom tools: truncateHead, truncateTail, truncateLine, formatSize, DEFAULT_MAX_BYTES, DEFAULT_MAX_LINES, TruncationOptions, TruncationResult
- New example truncated-tool.ts demonstrating proper output truncation with custom rendering for extensions
- New example preset.ts demonstrating preset configurations with model/thinking/tools switching (#347)
- Documentation for output truncation best practices in docs/extensions.md
- Exported all UI components for extensions: ArminComponent, AssistantMessageComponent, BashExecutionComponent, BorderedLoader, BranchSummaryMessageComponent, CompactionSummaryMessageComponent, CustomEditor, CustomMessageComponent, DynamicBorder, ExtensionEditorComponent, ExtensionInputComponent, ExtensionSelectorComponent, FooterComponent, LoginDialogComponent, ModelSelectorComponent, OAuthSelectorComponent, SessionSelectorComponent, SettingsSelectorComponent, ShowImagesSelectorComponent, ThemeSelectorComponent, ThinkingSelectorComponent, ToolExecutionComponent, TreeSelectorComponent, UserMessageComponent, UserMessageSelectorComponent, plus utilities renderDiff, truncateToVisualLines
- docs/tui.md: Common Patterns section with copy-paste code for SelectList, BorderedLoader, SettingsList, setStatus, setWidget, setFooter
- docs/tui.md: Key Rules section documenting critical patterns for extension UI development
- docs/extensions.md: Exhaustive example links for all ExtensionAPI methods and events
- System prompt now references docs/tui.md for TUI component development
🔗 @cxiao@infosec.exchange Anita Anand sur Bluesky: mastodon
Anita Anand on Bluesky: https://bsky.app/profile/anitaoakvilleeast.bsky.social/post/3mbrkauihpk25
I will be in Nuuk in the coming weeks to officially open Canada's consulate and to mark a concrete step in strengthening our engagement in support of Denmark's sovereignty and territorial integrity, including Greenland.
-
🔗 @cxiao@infosec.exchange Anita Anand on Bluesky: mastodon
Anita Anand on Bluesky:
https://bsky.app/profile/anitaoakvilleeast.bsky.social/post/3mbrkauihpk25
I will be in Nuuk in the coming weeks to officially open Canada’s consulate and mark a concrete step in strengthening our engagement in support of Denmark’s sovereignty and territorial integrity, including Greenland.
-
🔗 streamyfin/streamyfin 0.51.0 release
No content.
-
🔗 badlogic/pi-mono v0.37.4 release
Release v0.37.4
-
🔗 @HexRaysSA@infosec.exchange 👀 IDA 9.3 is coming soon, so we'll be sharing some of the key updates in this mastodon
👀 IDA 9.3 is coming soon, so we'll be sharing some of the key updates in this release throughout the next few weeks...
➥ First up: Practical Improvements to the Type System
https://hex-rays.com/blog/ida-9.3-type-system-improvements -
🔗 Hex-Rays Blog IDA 9.3: Practical Improvements to the Type System rss
-
🔗 r/wiesbaden Landlord has ignored severe water damage for almost 2 months – apartment now uninhabitable rss
I'm writing this post partly out of desperation and partly in the hope that someone here, or perhaps local media, can help me.
Since November 11 there has been massive water damage in my apartment in Wiesbaden, caused by a leak in the apartment above. I reported the damage immediately via my landlord's app, the damage hotline, and numerous follow-up inquiries. Because of inaction and delays, water kept running through the walls for weeks, which has led to severe mold. The apartment is now no longer habitable.
The landlord is Industria Immobilien, a large real estate company. Even after the cause of the leak in the upstairs apartment was finally fixed, to this day no drying equipment has been set up and no professional remediation has begun. Almost two months later the walls are still damp and the mold continues to spread.
I live with a 6-month-old baby and also have serious health problems. Because of the condition of the apartment I was forced to leave my home and find accommodation elsewhere at my own expense while the damage kept getting worse. Despite countless phone calls, emails, written deadlines, and even involving the tenants' association, nothing has moved. Reaching a real contact person at the company is nearly impossible; instead there are automated replies and empty promises.
What is particularly alarming is that I am apparently not an isolated case. After going through numerous reviews on Google, on social media, and in other public forums, many tenants report very similar experiences: delayed repairs, ignored damage, and a lack of accountability.
I have since contacted consumer advice centers, brought in tenants' organizations, and am considering legal action. I'm sharing this publicly because something like this should not happen in Germany in 2025, and large landlords should not be allowed to ignore uninhabitable conditions for months.
If anyone has advice, similar experiences, or contacts (especially journalists or consumer protection), I would be very grateful.
submitted by /u/Afraid_Garden_4342
[link] [comments] -
🔗 r/LocalLLaMA A 30B Qwen Model Walks Into a Raspberry Pi… and Runs in Real Time rss
Hey r/LocalLLaMA, We’re back with another ShapeLearn GGUF release (Blog, Models), this time for a model that should not feel this usable on small hardware… and yet here we are: Qwen3-30B-A3B-Instruct-2507 (device-optimized quant variants, llama.cpp-first). We’re optimizing for TPS on a specific device without output quality falling off a cliff. Instead of treating “smaller” as the goal, we treat memory as a budget: fit first, then optimize TPS vs quality. Why? Because llama.cpp has a quirk: “fewer bits” does not automatically mean “more speed.” Different quant formats trigger different kernels + decode overheads, and on GPUs you can absolutely end up with smaller and slower.
TL;DR
- Yes, a 30B runs on a Raspberry Pi 5 (16GB). We achieve 8.03 TPS at 2.70 BPW, while retaining 94.18% of BF16 quality.
- Across devices, the pattern repeats: ShapeLearn tends to find better TPS/quality tradeoffs versus alternatives (we compare against Unsloth and MagicQuant as requested in our previous post).
What’s new/interesting in this one
1) CPU behavior is… sane (mostly). On CPUs, once you’re past “it fits,” smaller tends to be faster in a fairly monotonic way. The tradeoff curve behaves like you’d expect.
2) GPU behavior is… quirky (kernel edition). On GPUs, performance depends as much on kernel choice as on memory footprint. So you often get sweet spots (especially around ~4b) where the kernels are “golden path,” and pushing lower-bit can get weird.
Request to the community 🙏
We’d love feedback and extra testing from folks here, especially if you can run:
- different llama.cpp builds / CUDA backends,
- weird batch sizes / context lengths,
- real workloads (coding assistants, long-form, tool-ish prompts),
- or non-NVIDIA setups (we’re aware this is where it gets spicy).
Also: we heard you on the previous Reddit post and are actively working to improve our evaluation and reporting. Evaluation is currently our bottleneck, not quantization, so if you have strong opinions on what benchmarks best match real usage, we’re all ears. submitted by /u/ali_byteshape
[link] [comments]
-
🔗 r/reverseengineering Reverse engineering my cloud-connected e-scooter and finding the master key to unlock all scooters rss
submitted by /u/crower
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits mirror: pick latest version, not first version rss
mirror: pick latest version, not first version ref: https://github.com/p05wn/SuperHint/issues/2 -
🔗 r/wiesbaden I hope I won't get stoned here because it's in Mainz 🤣 rss
submitted by /u/TGM_E-sport_Mainz
[link] [comments] -
🔗 Jeremy Fielding (YouTube) 11 Years of Making in 11 minutes: Jeremy Fielding rss
Order custom parts Send Cut Send 👉 http://sendcutsend.com/jeremyfielding The playlist of all videos mentioned 👉 https://www.youtube.com/playlist?list=PL4njCTv7IRbyGx6jx1xM8YF8UL45T945d If you want to join my community of makers and Tinkers consider getting a YouTube membership 👉 https://www.youtube.com/@JeremyFieldingSr/join
If you want to chip in a few bucks to support these projects and teaching videos, please visit my Patreon page or Buy Me a Coffee. 👉 https://www.patreon.com/jeremyfieldingsr 👉 https://www.buymeacoffee.com/jeremyfielding
Social media, websites, and other channel
Instagram https://www.instagram.com/jeremy_fielding/?hl=en Twitter 👉https://twitter.com/jeremy_fielding TikTok 👉https://www.tiktok.com/@jeremy_fielding0 LinkedIn 👉https://www.linkedin.com/in/jeremy-fielding-749b55250/ My websites 👉 https://www.jeremyfielding.com 👉https://www.fatherhoodengineered.com My other channel Fatherhood engineered channel 👉 https://www.youtube.com/channel/UC_jX1r7deAcCJ_fTtM9x8ZA
Notes:
Technical corrections
Nothing yet
-
🔗 r/wiesbaden Thalia Kino is closing on Wednesday rss
submitted by /u/Whoosherx
[link] [comments] -
🔗 badlogic/pi-mono v0.37.3 release
Release v0.37.3
-
🔗 r/LocalLLaMA Supertonic2: Lightning Fast, On-Device, Multilingual TTS rss
Hello! I want to share that Supertonic now supports 5 languages:
한국어 · Español · Français · Português · English
It’s an open-weight TTS model designed for extreme speed, minimal footprint, and flexible deployment. You can also use it for commercial use! Here are key features:
(1) Lightning fast — RTF 0.006 on M4 Pro
(2) Lightweight — 66M parameters
(3) On-device TTS — Complete privacy, zero network latency
(4) Flexible deployment — Runs on browsers, PCs, mobiles, and edge devices
(5) 10 preset voices — Pick the voice that fits your use cases
(6) Open-weight model — Commercial use allowed (OpenRAIL-M)
I hope Supertonic is useful for your projects.
[Demo] https://huggingface.co/spaces/Supertone/supertonic-2
[Model] https://huggingface.co/Supertone/supertonic-2
[Code] https://github.com/supertone-inc/supertonic
submitted by /u/ANLGBOY
[link] [comments]
-
🔗 HexRaysSA/plugin-repository commits add danielplohmann/mcrit-plugin rss
add danielplohmann/mcrit-plugin -
🔗 r/LocalLLaMA Performance improvements in llama.cpp over time rss
submitted by /u/jacek2023
[link] [comments]
-
🔗 r/LocalLLaMA Liquid Ai released LFM2.5, family of tiny on-device foundation models. rss
Hugging Face: https://huggingface.co/collections/LiquidAI/lfm25
It’s built to power reliable on-device agentic applications: higher quality, lower latency, and broader modality support in the ~1B parameter class.
LFM2.5 builds on the LFM2 device-optimized hybrid architecture
Pretraining scaled from 10T → 28T tokens
Expanded reinforcement learning post-training
Higher ceilings for instruction following
5 open-weight model instances from a single architecture:
General-purpose instruct model
Japanese-optimized chat model
Vision-language model
Native audio-language model (speech in/out)
Base checkpoints for deep customization
submitted by /u/Difficult-Cap-7527
[link] [comments]
-
