- About KeePassXC’s Code Quality Control – KeePassXC
- How to build a remarkable command palette
- Leaderboard - compar:IA, the AI chatbot arena
- Who needs Graphviz when you can build it yourself? | SpiderMonkey JavaScript/WebAssembly Engine
- Automerge
- November 29, 2025
-
🔗 r/reverseengineering Stop Hardcoding Passwords! (How C++ Appears in Assembly) rss
submitted by /u/tucna
[link] [comments] -
🔗 AzzOnFire/emuit v0.8.1 release
- Fix Unicode string decoding algorithm, improve metrics
- Update ida-plugin.json
Full Changelog: v0.8.0...v0.8.1 -
🔗 r/wiesbaden Vias RB10 rss
Vias urgently needs to be kicked off the RB10 contract. When is this going to happen?
submitted by /u/Electrical-You-6513
[link] [comments] -
🔗 r/LocalLLaMA Yet another reason to stick with local models rss
Tibor Blaho, a trusted reverse engineer, found ad system strings inside the latest ChatGPT Android beta (v1.2025.329).
submitted by /u/nekofneko
[link] [comments] -
🔗 r/wiesbaden New hobby, anyone? Play tabletop role-playing games! (Club) rss
submitted by /u/PenPaperPiper
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +2 plugins, +2 releases rss
## New plugins
- [ReCopilot](https://github.com/XingTuLab/recopilot) (0.2)
- [hrtng](https://github.com/KasperskyLab/hrtng) (3.7.74) -
🔗 HexRaysSA/plugin-repository commits add known-repository XingTuLab/recopilot rss
-
- November 28, 2025
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2025-11-28 rss
Activity:
- augur
- cdcbf08b: chore: use idiomatic `anyhow` macros for early returns
- dotfiles
- 365cbf19: update
- f8eb4892: update
- 4ccaaefa: update
- da98742c: update
- 9e6b7db8: update
- 2a56951a: update
- 118f20a1: update
- d5bbe438: update
- 4f2f2ddc: update
- cfd79c3e: update
- 9c0432f2: update
- 62c00328: update
- f313c189: update
- e664cf24: update
- c93bb1c1: update
- 791c025e: update
- 143fc6d8: update
- 0fb0d354: update
- haruspex
- 4a371803: chore: use idiomatic `anyhow` macros for early returns
- ida-pro-mcp
- rhabdomancer
- 6fea619d: chore: remove unnecessary config default feature flags
- wp81IdaDriverAnalyzer
- 66764aad: Update wdf.py
-
🔗 r/reverseengineering Static Binary Analysis for ICS and IoT. Does It Change the Game? rss
submitted by /u/Salt-Consequence3647
[link] [comments] -
🔗 News Minimalist 🐢 Trump plans to suspend immigration from "Third World Countries" + 10 more stories rss
In the last 2 days ChatGPT read 61679 top news stories. After removing previously covered events, there are 11 articles with a significance score over 5.5.

[6.0] Trump says he will suspend immigration from all "Third World Countries" — cbsnews.com (+736)
Following a deadly shooting in Washington D.C., President Trump announced he will permanently suspend immigration from all "Third World Countries" to allow the U.S. system to recover.
The declaration came after a National Guard member was killed by an Afghan national. Trump also stated he would terminate the status of millions of migrants and reexamine green cards from 19 countries.
The detained suspect, who was admitted in 2021, had reportedly worked with the U.S. government in Afghanistan. A DHS official noted his asylum was granted during Trump's current presidency.
[5.5] Measles outbreaks surge in previously eliminated regions, impacting Eastern Mediterranean and high-income countries — elpais.com (Spanish) (+16)
The WHO reports one-quarter of 2024's major measles outbreaks occurred in countries previously free of the disease, as global vaccination rates fail to recover to pre-pandemic levels.
In 2024, 59 countries experienced large outbreaks, with global cases rising 8% since 2019. The Eastern Mediterranean region saw an 86% increase, and cases in Europe grew by 47%, while Africa reported a significant decline.
First-dose vaccination coverage has dropped to 84%. This month, the Americas lost its measles-free status due to sustained transmission in Canada, highlighting the challenge of stalled immunization progress and misinformation.
Highly covered news with significance over 5.5
[6.4] Brazil approves world's first single-dose dengue vaccine — ctvnews.ca (+15)
[6.1] Monthly injection helps severe asthma patients reduce or stop steroid use — medicalxpress.com (+7)
[6.1] EU countries agree on new rules to combat online child abuse — euronews.com (+9)
[6.0] European Parliament proposes social media ban for under-16s — theguardian.com (+11)
[5.9] Guinea-Bissau's army seizes power and removes president — zeit.de (German) (+117)
[5.8] Europe boosts space budget to 22.1 billion euros for independence — ctvnews.ca (+5)
[5.7] Taiwan puts $40 billion toward buying U.S. weapons and building a defense dome — latimes.com [$] (+17)
[5.5] Meta bans third-party LLM chatbots in WhatsApp — gsmarena.com (+8)
[5.5] IMF approves $8 billion aid package for Ukraine's economic reforms — lalibre.be (French) (+20)
Thanks for reading!
— Vadim
You can customize this newsletter with premium.
-
🔗 r/LocalLLaMA Ask me to run models rss
Hi guys, I am currently in the process of upgrading my 4×3090 setup to 2×5090 + 1×RTX Pro 6000. As a result, I have all three kinds of cards in the rig temporarily, and I thought it would be a good idea to take some requests for models to run on my machine. Here is my current setup:
- 1× RTX Pro 6000 Blackwell, power limited to 525 W
- 2× RTX 5090, power limited to 500 W
- 2× RTX 3090, power limited to 280 W
- WRX80E (PCIe 4.0 x16) with 3975WX
- 512 GB DDR4 RAM
If you have any model that you want me to run with a specific setup (certain cards, parallelism methods, etc.), let me know in the comments. I'll run them this weekend and reply with the tok/s!
submitted by /u/monoidconcat
[link] [comments]
-
🔗 r/LocalLLaMA unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF · Hugging Face rss
submitted by /u/WhaleFactory
[link] [comments]
-
🔗 r/LocalLLaMA Model: Qwen3 Next by pwilkin · Pull Request #16095 · ggml-org/llama.cpp rss
and it's done
submitted by /u/jacek2023
[link] [comments]
-
🔗 r/LocalLLaMA Apparently Asus is working with Nvidia on a 784GB "Coherent" Memory desktop PC with 20 PFLOPS AI Performance rss
Somehow the announcement went under the radar, but back in May, alongside the Ascent GX10, Asus announced the ExpertCenter Pro ET900N G3, with GB300 Blackwell. They don't really say what "Coherent" memory is, but my guess is it's another term for unified memory, like Apple and AMD use.
The announcement and the specs are very dry on details, but given the GB300, we might get very decent memory bandwidth, without it looking like a hideous Frankenstein monster.
This might be r/LocalLLaMA's wet dream. If they manage to price it well, and fix the memory bandwidth (which plagued the Spark), they have my money.
EDIT: As many pointed out in the comments, it's based on the Nvidia DGX Station, announced back in March, which is rumored to be $80k. ServeTheHome had a nice article about it back in March.
The official specs:
- 496GB LPDDR5X CPU memory at 396GB/s (Micron SOCAMM, so it seems it will be modular, not soldered!)
- 288GB HBM3e GPU memory at 8TB/s
submitted by /u/waiting_for_zban
[link] [comments] -
-
🔗 matklad Size Matters rss
Size Matters
Nov 28, 2025
TigerStyle is pretty strict about some arbitrary limits:
…we enforce a *hard limit of 70 lines per function* …
… hard limit all line lengths, without exception, to at most 100 columns …
At the same time, we have a few quite large files, to the point of having to explicitly exclude them from our “no large binary blobs in the git history” policy: tidy.zig#L746.
Just how large should you make your functions/classes/files? I have two answers here.
Minimize The Cut
The first principle is that the size is irrelevant. Instead, you want to keep related things together, and independent things apart. You don’t want to minimize just the size of individual components, or the number of dependencies between components. If you do, you end up with a degenerate solution where there’s just a single component, or every line of code is its own file.
Instead, you want to optimize the ratio of module size to its interface. You need to divide the volume by the surface area. It’s not about the size, it’s about the shape!
You should move a data structure to a separate file when it is self-contained. It doesn't matter whether it is ten or ten thousand lines long. We have replica.zig, but also timestamp_range.zig.
There’s a good visual metaphor when this rule is applied to functions. A function has inputs: the number of arguments. It also has outputs (usually there’s just one, but it can be a bundle of unrelated things). The number of inputs and outputs together is the size of the interface, and the length of the body measures the implementation. You want functions with bodies that are large relative to their interfaces. You want an inverted hourglass shape. The converse is more helpful: hourglass functions/modules are a smell.
This is a useful principle for picking dependencies as well. Dependencies are useful, they do the work! But often enough, if you take a dependency apart, you might notice that it doesn’t do anything meaningful by itself , and just repackages the actual logic (implemented in a transitive dependency) with a different interface. You want to cut through the glue, and get straight to the algorithmic core.
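As a toy illustration of the shape heuristic (hypothetical functions, not from TigerBeetle's codebase): the first function below has a small interface relative to its body, while the second is an hourglass that merely repackages a call to the real logic.

```python
# Good shape: one input, one output, and a body that does real work.
# The interface is small relative to the implementation.
def parse_version(text: str) -> tuple:
    """Parse a version string like "1.2.3" into a tuple of three ints."""
    parts = text.strip().split(".")
    if len(parts) != 3:
        raise ValueError(f"expected MAJOR.MINOR.PATCH, got {text!r}")
    numbers = []
    for part in parts:
        if not part.isdigit():
            raise ValueError(f"non-numeric component {part!r} in {text!r}")
        numbers.append(int(part))
    return tuple(numbers)

# Hourglass smell: three inputs, and a one-line body that only repackages
# the real logic. The interface is as large as the implementation, so the
# layer adds surface area without adding volume.
def parse_version_or(text, fallback, strict):
    return parse_version(text) if strict else fallback
```

Cutting through such glue layers, as suggested above, means calling `parse_version` directly and letting the caller decide about fallbacks.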
Honor Physical Limits
Against this logic stand physical limits. Your display is only so many pixels wide, and you do want to fit the code in. Hence the 100-column limit, as that allows you to comfortably fit two copies of code side by side on a modern 16:9 display. Two is important — you must be able to compare two versions of code, and you need to see caller and callee to make the invariants meet.
Your vertical space is limited just as much as the horizontal space. There’s a sharp discontinuity between a function that fits on a screen and an ever so slightly larger one, where you can’t even immediately see the end of it. Hence the Schelling point for the upper bound on function length: a function had better fit on a screen. Which is about 60-70 lines.
But there’s no inherent limit on the file size or the number of files. So those can grow. Just make sure not to limit yourself to linear search. You need to be able to quickly open any file in a project by typing just a few letters of its name. Fuzzy search is not optional. Similarly, learn to navigate large files efficiently. Can you quickly get a list of all functions? Can you jump to a function by fuzzy name?
Art Is Born Of Constraints
Physical constraints are limiting, but they can be a helpful guide to better design. The size of the “cut” doesn’t directly depend on the number of lines in a module, but there often is a correlation. Are you sure that that 10k line file isn’t three different subsystems, fighting each other? As I mentioned in today’s other article, good interface design is not natural. The resulting interface shape is obvious, once you see it. The hard part is to realize that there is (or there could be) an interface in the first place. And, if you can’t quite fit your code into your field of view, maybe it’s time to step away from the screen and think?
P.S.: Matters are plural, not a verb.
-
- November 27, 2025
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2025-11-27 rss
New Releases:
Activity:
- climacros
- dotfiles
- ghidra-chinese
- d46398ea: Merge pull request #73 from TC999/sync
- rhabdomancer
- a486a847: chore: update dependencies
-
🔗 maxgoedjen/secretive 3.0.4 release
Bug fixes and updated translations for 3.0.
Fixes
- Fix bug where sometimes agent would hang and not respond. (#765)
- Fix bug where ruby net/ssh would not work properly (#747)
New in 3.0:
A huge update! UI Refresh for macOS 26 Tahoe and Liquid Glass, Post-Quantum Key Support, Enhanced Security, and lots more!
Features
- Secretive's UI has been updated for macOS 26 Tahoe and Liquid Glass, and has just gotten a facelift overall (#606, #609, #612, #657, #697, #701, #703, #704, #707, #714)
- Most key signing operations are now performed using CryptoKit (#628)
- On macOS Tahoe, MLDSA-65 and MLDSA-87 keys are now supported (#631)
- Secretive is now built with "Enhanced Security" configuration enabled. (#618)
- SocketController has been rewritten for concurrency support and overall performance (#634)
- Data payloads from agent are now parsed in an isolated XPC service for improved security (#675, #681)
- Update checks now happen in isolated XPC service for improved security (#675, #681).
- New "Integrations" window for easier configuration (#657)
- Keys can now have an optional "key attribution," which is commonly an email address (#628)
- Add "Reveal in Finder" button to public key section (#669)
- Key name is now returned in SSH agent identities response (#647)
- Add help notice for post-reboot when keys sometimes fail to load (#671)
- Secretive is now built with Swift 6 and concurrency checks enabled (#578, #617)
- Secretive is now built with "String Memory Checking" enabled (#683)
- More complete SSH agent protocol parsing (#673)
- New localizations: Catalan (#558), Korean (#537), Japanese (#546), Polish (#585), Russian (#553)
- Localized strings are now type-safe (#607)
- GitHub attestation is enabled (#614, #616, #666, #667)
Special Thanks
- A big special thank you to @ultrasecreth, @orazioedoardo, @nliechti, @StephenKing, @multipleofzero, @gmessir, @dmulloy2, @HuaDeity, and @KizzyCode for their help reporting, gathering logs, and testing a fix for the agent hang.
Minimum macOS Version
14.0.0
Build
https://github.com/maxgoedjen/secretive/actions/runs/19746755792
Attestation
https://github.com/maxgoedjen/secretive/attestations/14106317
Full Changelog
-
🔗 r/reverseengineering I built SentinelNav, a binary file visualization tool to help me understand file structures (and it became way more powerful than I expected) rss
submitted by /u/FiddleSmol
[link] [comments] -
🔗 r/wiesbaden Commuting: Taunusstein - Rüsselsheim rss
Good day,
does anyone here have experience with a daily commute from Taunusstein to Rüsselsheim via the Bundesstraße 54? Should I expect a lot of congestion in morning traffic?
submitted by /u/CarefulRabbit684
[link] [comments] -
🔗 r/wiesbaden Gym in Wiesbaden? rss
What's a good gym in Wiesbaden that has:
- Squat racks
- Bench presses
- A deadlift platform
and allows me to get a day pass?
I will travel to Wiesbaden for work and want to keep training.
submitted by /u/earl-the-grey
[link] [comments] -
🔗 r/LocalLLaMA Yes it is possible to uncensor gpt-oss-20b - ArliAI/gpt-oss-20b-Derestricted rss
Original discussion on the initial Arli AI created GLM-4.5-Air-Derestricted model, which was ablated using u/grimjim's new ablation method, is here: The most objectively correct way to abliterate so far - ArliAI/GLM-4.5-Air-Derestricted (Note: Derestricted is a name given to models created by Arli AI using this method; the method itself is officially called Norm-Preserving Biprojected Abliteration by u/grimjim.)
Hey everyone, Owen here from Arli AI again. In my previous post, I got a lot of requests to attempt this derestricting on OpenAI's gpt-oss models, as they are intelligent models that were infamous for being very... restricted. I thought it would be a big challenge and interesting to attempt, so that was the next model I decided to derestrict. The 120b version is more unwieldy to transfer around and load in/out of VRAM/RAM while experimenting, so I started with the 20b version first, but I will get to the 120b next, which should be super interesting.
As for the 20b model here, it seems to have worked! The model can now respond to questions that OpenAI never would have approved of answering (lol!). It also seems to have cut down its wasteful looping around deciding whether it can or cannot answer a question based on a non-existent policy in its reasoning, although this isn't completely removed yet. I suspect a more customized harmful/harmless dataset to specifically target this behavior might be useful, so that will be what I need to work on. Otherwise I think this is just an outright improved model over the original, as it is much more useful now than in its original behavior, where it would flag a lot of false positives and be absolutely useless in certain situations just because of "safety".
In order to modify the weights of the model, I also had to start from a BF16 converted version, as the model, as you all might know, was released in MXFP4 format; attempting the ablation on the BF16 converted model seems to work well. I think this proves that this new method of essentially "direction-based" abliteration is really flexible and works super well for probably any model.
As for quants, I'm not one to worry about making GGUFs myself because I'm sure the GGUF makers will get to it pretty fast and do a better job than I can. Also, there are no FP8 or INT8 quants for now, because the model is pretty small and those who run FP8 or INT8 quants usually have a substantial GPU setup anyway.
Try it out and have fun! This time it's really for r/LocalLLaMA, because we don't even run this model on our Arli AI API service.
submitted by /u/Arli_AI
[link] [comments]
-
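Mechanically, the "direction-based" abliteration mentioned in the post above can be sketched roughly as follows. This is an illustration of the general idea under my own assumptions, not u/grimjim's exact Norm-Preserving Biprojected Abliteration: given a refusal direction already extracted from activations, project it out of each weight row, then rescale each row back to its original norm.

```python
import numpy as np

def ablate_direction(W, d):
    """Remove the component along direction d from each row of weight
    matrix W, then rescale each row to preserve its original norm."""
    d = d / np.linalg.norm(d)                     # unit refusal direction
    original_norms = np.linalg.norm(W, axis=1, keepdims=True)
    W_ablated = W - np.outer(W @ d, d)            # project d out of each row
    new_norms = np.linalg.norm(W_ablated, axis=1, keepdims=True)
    # Norm-preserving step: naive projection shrinks rows; rescaling keeps
    # each row's magnitude so overall activation scales stay intact.
    return W_ablated * (original_norms / np.maximum(new_norms, 1e-12))
```

After the edit, every row of the returned matrix is orthogonal to `d` but keeps its original length; the rescale is what distinguishes norm-preserving variants from naive abliteration.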
🔗 r/wiesbaden Concert tickets for tonight to give away - Electric Callboy in Frankfurt rss
Hello,
due to illness, I have 2 tickets for Electric Callboy tonight in Frankfurt to give away on short notice. Pick-up at my place in Wiesbaden's Westend district.
submitted by /u/Rote_Gazelle
[link] [comments] -
🔗 r/LocalLLaMA deepseek-ai/DeepSeek-Math-V2 · Hugging Face rss
submitted by /u/Dark_Fire_12
[link] [comments]
-
🔗 r/reverseengineering Released Zero the Hero (0tH) – a Rust-based Mach-O analysis tool for macOS rss
submitted by /u/gabriele70
[link] [comments] -
🔗 r/LocalLLaMA Anthropic just showed how to make AI agents work on long projects without falling apart rss
Most AI agents forget everything between sessions, which means they completely lose track of long tasks. Anthropic’s new article shows a surprisingly practical fix. Instead of giving an agent one giant goal like “build a web app,” they wrap it in a simple harness that forces structure, memory, and accountability.
First, an initializer agent sets up the project. It creates a full feature list, marks everything as failing, initializes git, and writes a progress log. Then each later session uses a coding agent that reads the log and git history, picks exactly one unfinished feature, implements it, tests it, commits the changes, and updates the log. No guessing, no drift, no forgetting.
The result is an AI that can stop, restart, and keep improving a project across many independent runs. It behaves more like a disciplined engineer than a clever autocomplete. It also shows that the real unlock for long-running agents may not be smarter models, but better scaffolding.
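The loop described above can be sketched in a few lines. This is a toy illustration of the pattern, not Anthropic's code; the git steps are omitted and `implement` stands in for the coding agent's work:

```python
import json
import pathlib

def initialize(features, log_path: pathlib.Path):
    """Initializer session: create the progress log with every feature
    marked as failing (the real harness also runs `git init` here)."""
    log_path.write_text(json.dumps({f: "failing" for f in features}, indent=2))

def run_session(log_path: pathlib.Path, implement):
    """One coding session: read the log, pick exactly one unfinished
    feature, do the work, then record the result before exiting."""
    progress = json.loads(log_path.read_text())
    feature = next((f for f, s in progress.items() if s == "failing"), None)
    if feature is None:
        return None            # nothing left: the project is complete
    implement(feature)         # the agent implements and tests the feature
    progress[feature] = "passing"  # only recorded after tests succeed
    log_path.write_text(json.dumps(progress, indent=2))
    return feature
```

Each call to `run_session` is an independent run: all state lives in the progress log (and, in the real harness, the git history), so the agent can stop and restart without losing track.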
Read the article here:
https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents
submitted by /u/purealgo
[link] [comments] -
🔗 r/LocalLLaMA Where did the Epstein emails dataset go rss
Removed from Hugging Face (link)
Removed from GitHub (link)
Reddit account deleted (last post)
submitted by /u/egomarker
[link] [comments] -
🔗 Console.dev newsletter Requestly rss
Description: API client + interceptor.
What we like: Issue requests to APIs with support for environments, variables, and API collections. Intercept requests and replace, modify, redirect. Create API mocks. Local first or set up collaboration over Git or cloud drives. Import from common alternatives e.g. Postman.
What we dislike: Desktop app built in Electron.
-
🔗 Console.dev newsletter Prisma 7 rss
Description: TypeScript ORM.
What we like: Faster performance with a smaller ESM-first bundle. Drop in update from previous version. Generated client no longer buried in node_modules. Now supports mapped enums. New Prisma Studio UI visualizes database inspection.
What we dislike: Using an ORM is probably a mistake for all but the smallest side project.
-
🔗 Ampcode News Opus 4.5: Better, Faster, Often Cheaper rss
Claude Opus 4.5 is the new main model in Amp's smart mode, two days after we shipped it for you to try out. Only a week ago, we changed Amp's main model to Gemini 3 — a historic change, we said. It was the first time since Amp's creation that we switched away from Claude. Now we're switching again and you may ask: why? Why follow a historic change with another one, in a historically short amount of time?
We love Gemini 3, but, once rolled out, its impressive highs came with lows. What we internally experienced as rough edges turned into some very frustrating behaviors for our users. Frustrating and costly.
Then, not even a week later, Opus 4.5 comes out. Opus 4.5, on the other hand, seems as capable as Gemini 3. Its highs might not be as brilliant as Gemini 3's, but it also seems to do away with the lows. It seems more polished. It's faster, even.
We're also pleasantly surprised by Opus's cost-efficiency. Yes, Opus tokens are more expensive, but it needs fewer tokens to do the job, makes fewer token-wasting mistakes, and needs less human intervention (which results in a higher cache hit rate, which means lower costs and latency).
| | Sonnet 4.5 | Gemini 3 Pro | Opus 4.5 |
|---|---|---|---|
| Internal Evals | 37.1% | 53.7% | 57.3% |
| Avg. Thread Cost | $2.75 | $2.04 | $2.05 |
| 0-200k Tokens Only[^1] | $1.48 | $1.19 | $2.05 |
| Off-the-Rails Cost[^2] | 8.4% | 17.8% | 2.4% |
| Speed (p50, preliminary) | 2.4 min | 4.3 min | 3.5 min |

In words:
- If you use long threads (200k+ tokens): Opus will be a lot cheaper. It’s currently limited to 200k tokens of context, which forces you to use small threads — our strong recommendation anyway, for both quality and cost. If you need to temporarily keep using Sonnet's long context, use the `"amp.model.sonnet": true` setting or the `--use-sonnet` CLI flag.
- If Sonnet or Gemini frequently struggles for you or has hit a capability ceiling: Opus will be far more capable and accurate, and often cheaper too (by avoiding wasted tokens).
- If you loved Gemini 3 Pro: Opus will be ~40% more expensive but faster and more tolerant of ambiguous prompts. (This describes most of the Amp team, and we still find Opus worth it.)
- If you were perfectly satisfied with Sonnet 4.5: Opus will be ~35% more expensive for the same task. The real win comes from getting outside your comfort zone and giving it harder tasks where Sonnet would struggle.
Staying on the frontier means sometimes shipping despite issues — and sometimes shipping something better a week later.
-