- About KeePassXC’s Code Quality Control – KeePassXC
- How to build a remarkable command palette
- Leaderboard - compar:IA, the AI chatbot arena
- Who needs Graphviz when you can build it yourself? | SpiderMonkey JavaScript/WebAssembly Engine
- Automerge
- November 23, 2025
-
🔗 r/wiesbaden Monthly pen-and-paper role-playing meetup in Wiesbaden-Schierstein rss
Once a month we host a role-playing meetup at the Phantasos Arena in Wiesbaden-Schierstein. Beginners and veterans are warmly welcome, both as players and as game masters; we are always happy to see new faces in the group and new systems. The next meetup takes place on 8 December, and it will be Christmas-themed.
To help us plan, please register via Discord: https://discord.gg/c82e3fYKyr
Location: Schossbergstraße 11, rear office building, Wiesbaden. Date: once a month, next on 8 December at 6:30 p.m. Systems: rotating; in December, for example, Candela Obscura and a d24 adventure at the North Pole.
Notes: The venue operator asks for a €5 usage fee per person for rent, electricity, etc.; anyone who cannot afford it is still welcome. The venue is not wheelchair accessible.
submitted by /u/Bitter-Secretary6006
[link] [comments] -
🔗 idursun/jjui v0.9.6 release
Another community-driven release, mostly focusing on everyday improvements.
What's Changed
- fix(ace_jump): close operation after applying jump by @baggiiiie in #351
- feat: auto-detect preview placement, on by default by @lbonn in #348
- feat: allow remote selection in git menu by @baggiiiie in #349
- fix(fuzzy_files): quote path for editor command by @baggiiiie in #354
- fix for preview copied status file and filename with {} by @baggiiiie in #357
- Support interactive file-level restores by @remo5000 in #365
- fuzzy_files: remove quoting for files in revset by @baggiiiie in #370
- ui/git: add --tracked to git fetch by @baggiiiie in #368
- operation: add ForceApply to squash/rebase footer by @baggiiiie in #371
- abandon: add force apply to footer help menu by @baggiiiie in #373
- docs(README): add winget and scoop as installation method in Windows by @abgox in #362
- fix(set_parents): keep parent order by @idursun in #375
- fix(light theme): Removed the default border style of `bright white` as it was overriding the `text` style, which should be the terminal's default foreground and background.
New Contributors
- @lbonn made their first contribution in #348
- @remo5000 made their first contribution in #365
- @abgox made their first contribution in #362
Full Changelog:
v0.9.5...v0.9.6 -
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
sync repo: +1 plugin, +1 release. New plugins: [gepetto](https://github.com/JusticeRage/Gepetto) (1.5.0) -
🔗 r/reverseengineering Polymorphic Encoding, DNS Tunneling & Rootkit Persistence: An Analysis of Multi-Vector Evasion Techniques rss
submitted by /u/pmd02931
[link] [comments] -
🔗 r/wiesbaden Starting a black/death metal band? rss
Anyone up for it? Ideally in the vein of Bolt Thrower, Obituary, etc... Musically blunt, little virtuosity, scowling and all that. Mentally also classic British, i.e. left-green hardcore-humanist with a collectivist approach. Gender doesn't matter, the main thing is a rehearsal room. I'm old, male, and embittered, but I play guitar reasonably well. Cheerio & ta-da.
submitted by /u/Last-Wolverine-1774
[link] [comments] -
🔗 r/wiesbaden Civil engineer Julius Berger rss
submitted by /u/NoBad9507
[link] [comments] -
🔗 r/wiesbaden Steam trains around Frankfurt in the coming weeks, incl. one from Wiesbaden rss
submitted by /u/SenatorAslak
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: -1 plugin, -1 release rss
sync repo: -1 plugin, -1 release. Removed plugins: ida-terminal-plugin -
🔗 Register Spill Joy & Curiosity #63 rss
This was the second time in four months that I happened to be in San Francisco when a new model was released by a major provider. "Gemini 3 just dropped" was overheard in the coffee shop.
Very busy but fantastic week.
-
We switched Amp to Gemini 3 Pro. I know that for some people Gemini 3 feels off, but I honestly think it's the best model I've used as an agent so far. It's fantastic.
-
I'm still holding my breath and I'm pessimistic, but hallelujah! if this actually happens in a way that's noticeable: Europe is scaling back its landmark privacy and AI laws.
-
What I found the most surprising about Every's Vibe Check on Gemini 3 is that they're saying it's "not yet a writing champ". Maybe I can't judge it on that, since I very rarely use LLMs to generate more than a single sentence of prose, but I thought Gemini 3's descriptions of bugs, its summaries of what it did, its investigations -- I thought those were well-written and surprisingly well-formatted too. I actually said out loud: "if I would see a PR description written like that, I'd try to hire the person who wrote it."
-
Paul Dix, CTO of InfluxDB: "I believe the next year will show that the role of the traditional software engineer is dead. If you got into this career because you love writing lines of code, I have some bad news for you: it's over. The machines will be writing most of the code from here on out. Although there is some artisanal stuff that will remain in the realm of hand written code, it will be deeply in the minority of what gets produced."
-
"Older programs were all about what you need: you can do this, that, whatever you want, just let me know. You were in control, you were giving orders, and programs obeyed. But recently (a decade, more or less), this relationship has subtly changed. Newer programs (which are called apps now, yes, I know) started to want things from you."
-
Are You Playing to Play, or Playing to Win? Read it two days ago and already think that these definitions of scrub and maestro will stick with me. "One time I played a scrub who was pretty good at many aspects of Street Fighter, but he cried cheap as I beat him with 'no skill moves' while he performed many difficult dragon punches. He cried cheap when I threw him 5 times in a row asking, 'is that all you know how to do? throw?' I told him, 'Play to win, not to do 'difficult moves.'' He would never reach the next level of play without shedding those extra rules in his head". Can't help but think of programming and typing code by hand.
-
"#! was a great hack to make scripts look and feel like real executable binaries.", from: #! magic, details about the shebang/hash-bang mechanism on various Unix flavours.
-
"A friend of mine tells Claude to always address him as 'Mr Tinkleberry', he says he can tell when Claude is not paying attention to the instructions on CLAUDE.md when Claude stops calling him 'Mr Tinkleberry' consistently"
-
This is from all the way back in April and you actually notice that when reading, I'd say, which is interesting in itself, but the whole piece is great and contains a lot of gems: Will the Humanities Survive Artificial Intelligence?
-
"I've started a company in this space about 2 years ago. We are doing fine. What we've learned so far is that a lot of these techniques are simply optimisations to tackle some deficiency in LLMs that is a problem "today". These are not going to be problems tomorrow because the technology will shift. As it happened many time in the span of the last 2 years. So yah, cool, caching all of that... but give it a couple of months and a better technique will come out - or more capable models. [...] What I've learned from this is that often times it is better to do absolutely nothing."
-
Joan Didion, On Self-Respect: "In brief, people with self-respect exhibit a certain toughness, a kind of moral nerve; they display what was once called character, a quality which, although approved in the abstract, sometimes loses ground to other, more instantly negotiable virtues. The measure of its slipping prestige is that one tends to think of it only in connection with homely children and with United States senators who have been defeated, preferably in the primary, for re-election. Nonetheless, character--the willingness to accept responsibility for one's own life--is the source from which self-respect springs."
If your travel also maps to model releases, you should subscribe:
-
-
🔗 r/reverseengineering Luau VM Bytecode Injection via Manual Mapping rss
submitted by /u/pmd02931
[link] [comments] -
🔗 mviereck/x11docker v7.7.1 release
7.7.1 -
2025-11-23
Fixed
- `--xc --xorg`: Add `/dev/tty0` `/dev/fb0`. Needed on Debian trixie to run from console.
- `--xc --weston*`: Use `--ipc=host` for X container. Needed for Debian trixie X container to run nested in X.
- `--xc --gpu`: Fix NVIDIA driver installation in X container.
- `--xc=docker --backend=podman`: Fix capabilities / DAC_OVERRIDE.
- `--xc --gpu`: Fix check for already installed NVIDIA driver.
- `--password`: Show host user name for container user password.
(535) (537)
-
-
- November 22, 2025
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2025-11-22 rss
IDA Plugin Updates on 2025-11-22
New Releases:
Activity:
- CTFStuff
- 0942e2d3: yurr
- diffrays
- dotfiles
- efbb952c: update
- d8cc118a: update
- 3c3d0daf: update
- c4c756d2: update
- e11e727a: update
- 68ec0ecd: update
- b5cc3b01: update
- a803e815: update
- 746e5f5f: update
- d4d1c724: update
- b891854e: update
- 6c3bfcbb: update
- 39fa1c1e: update
- 27ba4024: update
- da25f1a5: update
- 8ed50f83: update
- a7b62b3a: update
- ec5c4bf8: update
- 1179f3a9: update
- d600abb0: update
- dylib_dobby_hook
- 0a534921: fixed: cmm helper
- sig-importer
- 719537ca: feat(import/types): add abs addressing support
-
🔗 Simon Willison Olmo 3 is a fully open LLM rss
Olmo is the LLM series from Ai2 - the Allen Institute for AI. Unlike most open weight models, these are notable for including the full training data, training process, and checkpoints along with those releases.
The new Olmo 3 claims to be "the best fully open 32B-scale thinking model" and has a strong focus on interpretability:
At its center is Olmo 3-Think (32B), the best fully open 32B-scale thinking model that for the first time lets you inspect intermediate reasoning traces and trace those behaviors back to the data and training decisions that produced them.
They've released four 7B models - Olmo 3-Base, Olmo 3-Instruct, Olmo 3-Think and Olmo 3-RL Zero, plus 32B variants of the 3-Think and 3-Base models.
Having full access to the training data is really useful. Here's how they describe that:
Olmo 3 is pretrained on Dolma 3, a new ~9.3-trillion-token corpus drawn from web pages, science PDFs processed with olmOCR, codebases, math problems and solutions, and encyclopedic text. From this pool, we construct Dolma 3 Mix, a 5.9-trillion-token (~6T) pretraining mix with a higher proportion of coding and mathematical data than earlier Dolma releases, plus much stronger decontamination via extensive deduplication, quality filtering, and careful control over data mixing. We follow established web standards in collecting training data and don't collect from sites that explicitly disallow it, including paywalled content.
They also highlight that they are training on fewer tokens than their competition:
[...] it's the strongest fully open thinking model we're aware of, narrowing the gap to the best open-weight models of similar scale – such as Qwen 3 32B – while training on roughly 6x fewer tokens.
If you're continuing to hold out hope for a model trained entirely on licensed data this one sadly won't fit the bill - a lot of that data still comes from a crawl of the web.
I tried out the 32B Think model and the 7B Instruct model using LM Studio. The 7B model is a 4.16GB download, the 32B one is 18.14GB.
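(If you want to poke at the models the same way, LM Studio exposes a local OpenAI-compatible server on its default port. A minimal sketch; the model identifier below is a placeholder for whatever name LM Studio shows for your download:)

```python
from openai import OpenAI

# LM Studio serves an OpenAI-compatible API on localhost:1234 by default.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="olmo-3-7b-instruct",  # placeholder: use the identifier LM Studio shows
    messages=[{"role": "user",
               "content": "Generate an SVG of a pelican riding a bicycle"}],
)
print(response.choices[0].message.content)
```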
The 32B model is absolutely an over-thinker! I asked it to "Generate an SVG of a pelican riding a bicycle" and it thought for 14 minutes 43 seconds, outputting 8,437 tokens in total, most of which went into this epic thinking trace.
I don't usually quote the full SVG in these write-ups, but in this case it's short enough that I think it's worth sharing. The SVG comments give a great impression of what it was trying to do - it has a Bicycle, Bike frame, Pelican, Left and Right wings and even "Feet on pedals".
```xml
<svg width="200" height="200" viewBox="0 0 100 100">
  <!-- Bicycle -->
  <circle cx="30" cy="60" r="15" stroke="black" fill="none"/>
  <circle cx="70" cy="60" r="15" stroke="black" fill="none"/>
  <!-- Bike frame -->
  <rect x="35" y="25" width="30" height="10" fill="saddlebrown"/>
  <line x1="35" y1="40" x2="30" y2="60" stroke="black" stroke-width="3"/>
  <line x1="65" y1="40" x2="70" y2="60" stroke="black" stroke-width="3"/>
  <!-- Pelican -->
  <ellipse cx="55" cy="65" rx="20" ry="15" fill="white"/>
  <polygon points="52 50,57 35,62 50" fill="black"/>
  <!-- Head/beak -->
  <circle cx="55" cy="45" r="2" fill="white"/>
  <circle cx="60" cy="45" r="2" fill="white"/>
  <polygon points="45 60,50 70,55 60" fill="lightgrey"/> <!-- Left wing -->
  <polygon points="65 60,70 70,55 60" fill="lightgrey"/> <!-- Right wing -->
  <!-- Feet on pedals -->
  <polygon points="25 75,30 85,35 75" fill="black"/>
  <polygon points="75 75,70 85,65 75" fill="black"/>
</svg>
```
Rendered it looks like this:
[Image: the rendered pelican SVG]
I tested OLMo 2 32B 4bit back in March and got something that, while pleasingly abstract, didn't come close to resembling a pelican or a bicycle:
[Image: OLMo 2 32B's abstract attempt from March]
To be fair 32B models generally don't do great with this. Here's Qwen 3 32B's attempt (I ran that just now using OpenRouter):
[Image: Qwen 3 32B's attempt]
OlmoTrace
I was particularly keen on trying out the ability to "inspect intermediate reasoning traces". Here's how that's described later in the announcement:
A core goal of Olmo 3 is not just to open the model flow, but to make it actionable for people who want to understand and improve model behavior. Olmo 3 integrates with OlmoTrace, our tool for tracing model outputs back to training data in real time.
For example, in the Ai2 Playground, you can ask Olmo 3-Think (32B) to answer a general-knowledge question, then use OlmoTrace to inspect where and how the model may have learned to generate parts of its response. This closes the gap between training data and model behavior: you can see not only what the model is doing, but why---and adjust data or training decisions accordingly.
You can access OlmoTrace via playground.allenai.org, by first running a prompt and then clicking the "Show OlmoTrace" button below the output.
I tried that on "Generate a conference bio for Simon Willison" (an ego-prompt I use to see how much the models have picked up about me from their training data) and got back a result that looked like this:
[Image: Ai2 Playground response with the OlmoTrace panel]
It thinks I co-founded co:here and work at Anthropic, both of which are incorrect - but that's not uncommon with LLMs, I frequently see them suggest that I'm the CTO of GitHub and other such inaccuracies.
I found the OlmoTrace panel on the right disappointing. None of the training documents it highlighted looked relevant - it appears to be looking for phrase matches (powered by Ai2's infini-gram) but the documents it found had nothing to do with me at all.
Can open training data address concerns of backdoors?
Ai2 claim that Olmo 3 is "the best fully open 32B-scale thinking model", which I think holds up provided you define "fully open" as including open training data. There's not a great deal of competition in that space though - Ai2 compare themselves to Stanford's Marin and Swiss AI's Apertus, neither of which I'd heard about before.
A big disadvantage of other open weight models is that it's impossible to audit their training data. Anthropic published a paper last month showing that a small number of samples can poison LLMs of any size - it can take just "250 poisoned documents" to add a backdoor to a large model that triggers undesired behavior based on a short carefully crafted prompt.
This makes fully open training data an even bigger deal.
Ai2 researcher Nathan Lambert included this note about the importance of transparent training data in his detailed post about the release:
In particular, we're excited about the future of RL Zero research on Olmo 3 precisely because everything is open. Researchers can study the interaction between the reasoning traces we include at midtraining and the downstream model behavior (qualitative and quantitative).
This helps answer questions that have plagued RLVR results on Qwen models, hinting at forms of data contamination particularly on math and reasoning benchmarks (see Shao, Rulin, et al. "Spurious rewards: Rethinking training signals in rlvr." arXiv preprint arXiv:2506.10947 (2025). or Wu, Mingqi, et al. "Reasoning or memorization? unreliable results of reinforcement learning due to data contamination." arXiv preprint arXiv:2507.10532 (2025).)
I hope we see more competition in this space, including further models in the Olmo series. The improvements from Olmo 1 (in February 2024) and Olmo 2 (in March 2025) have been significant. I'm hoping that trend continues!
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 r/wiesbaden St. Mauritius Church in Wiesbaden, Germany. (This is great) rss
submitted by /u/rebelsofliberty
[link] [comments] -
🔗 r/reverseengineering Reverse Engineering AI Behavior with Structured Data Formats rss
submitted by /u/pmd02931
[link] [comments] -
🔗 Locklin on science Wacky fun physics ideas rss
My reading lately has ventured into weird physics papers. Mainstream physics (unlike machine learning and classical statistics, where real progress has been made) is booooring these days. There's no point in reading another "shittonium on silicon 111" paper, nor am I interested in stupid big budget projects where people always get the expected answer, nor […]
-
🔗 r/reverseengineering Autonomous exploitation + trace analysis workflows with CAI (open-source) rss
submitted by /u/Obvious-Language4462
[link] [comments] -
🔗 r/reverseengineering Quantum Silicon Core Loader v5.5 Released - Universal Micro-VM for Hardware rss
submitted by /u/ComputerGlobal1249
[link] [comments] -
🔗 Kagi release notes Nov 22nd, 2025 - Kagi Hub Belgrade rss
Kagi Hub Belgrade: Making the human web real
We just opened the Kagi Hub in Belgrade, Serbia!
If you're a Kagi member, you can book up to 5 FREE reservations per month and treat the Hub as your base whenever you're in Belgrade. It is the same space our team uses, so you will be working directly alongside the people shaping Kagi's future. More details, including how to reserve your spot, are in this blog post: https://blog.kagi.com/kagi-hub
Having an actual physical space makes our mission to "humanize the web" feel so much more real. It is a place for Kagi members and our fully remote team to work, trade ideas, and build the tools we all wish existed.
We are looking forward to welcoming you to Kagi's first ever Hub!
-
🔗 r/reverseengineering Reverse Engineering Casio's .CR5 File Format rss
submitted by /u/AthuVaidya
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
sync repo: +1 plugin, +1 release. New plugins: [ida-terminal-plugin](https://github.com/HexRaysSA/ida-terminal-plugin) (0.0.6) -
🔗 r/LocalLLaMA GLM planning a 30-billion-parameter model release for 2025 rss
submitted by /u/aichiusagi
[link] [comments] -
🔗 matklad TigerBeetle Blog rss
TigerBeetle Blog
Nov 22, 2025
Continuing the tradition, I've also been blogging somewhat regularly on TigerBeetle's blog, so you might want to check those articles out or even subscribe (my favorite RSS reader is RSSSSR):
Today’s post is a video version of Notes on Paxos!
-
🔗 Filip Filmar rules_shar: bazel rules for creating self-extracting archives (shars) rss
Details at: https://github.com/filmil/rules_shar
This repository provides Bazel rules for creating self-extracting archives ("shar"s) out of shell binaries. See section "what is this" below for details.
Usage
To use the rules, add the following to your MODULE.bazel file:

```
bazel_dep(name = "rules_shar", version = "0.0.0")  # Select your version, of course.
```

Then, in your BUILD.bazel file, you can use the shar_binary rule to create a self-extracting archive, as a drop-in replacement for the rule sh_binary.
-
- November 21, 2025
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2025-11-21 rss
IDA Plugin Updates on 2025-11-21
New Releases:
Activity:
- diffrays
- e555ecc5: Heuristics Implemented
- dotfiles
- ida-domain
- 48a2f242: Fixed hooks related crash when using python 3.11 (#16)
- ida-hcli
- ffb293cd: github: handle even more rate limiting cases
- 8e0ea6eb: 0.14.0
- c4505849: commands: better handling of paths
- afa55ce8: lints
- d14d6848: plugin: install: find plugins case-insensitively
- bcc60f7a: ida: plugin: document move vs rename
- 1a790a95: Fixing crash on Linux
- 581019ca: Fixing python detection on Linux
- 6dcb3fc8: lints
- 01bafdb2: 0.13.3-dev.4
- 0d358484: remove summarize logs script
- 24a9254a: plugin: github: refactor logging context
- 404312ef: 0.13.3-dev.3
- c11fa228: lints
- 9de57503: ida: python: warn when invoking idat on 9.2/Linux with space in path
- idafridascript
- 7ca09f4a: fix(flutter_ssl_bypass): :bug: fix: specify UTF-8 encoding when readi…
- idaplugins
- playlist
- c5370740: flacs
- twdll
- Unicorn-Trace
-
🔗 r/LocalLLaMA Inspired by a recent post: a list of the cheapest to most expensive 32GB GPUs on Amazon right now, Nov 21 2025 rss
Inspired by a recent post where someone was putting together a system based on two 16GB GPUs for $800, I wondered how one might otherwise conveniently acquire 32GB of reasonably performant VRAM as cheaply as possible.
Bezos to the rescue!
Hewlett Packard Enterprise NVIDIA Tesla M10 Quad GPU Module
- Cost: $279
- VRAM: GDDR5 (332 GB/s)
- PCIe: 3.0
- Link: https://www.amazon.com/Hewlett-Packard-Enterprise-NVIDIA-870046-001/dp/B075VQ5LF8
AMD Radeon Instinct MI60 32GB HBM2 300W
- Cost: $499
- VRAM: HBM2 (1.02 TB/s)
- PCIe: 4.0
- Link: https://www.amazon.com/Instinct-Compute-Graphics-Accellerator-Renewed/dp/B0DMTTF15B
Tesla V100 32GB SXM2 GPU W/Pcie Adapter & 6+2 Pin
- Cost: $879.00
- VRAM: HBM2 (898 GB/s)
- PCIe: 3.0
- Link: https://www.amazon.com/Tesla-V100-32GB-Adapter-Computing/dp/B0FXWJ8HKD
NVIDIA Tesla V100 Volta GPU Accelerator 32GB
- Cost: $969
- VRAM: HBM2 (898 GB/s)
- PCIe: 3.0
- Link: https://www.amazon.com/NVIDIA-Tesla-Volta-Accelerator-Graphics/dp/B07JVNHFFX
NVIDIA Tesla V100 (Volta) 32GB
- Cost: $1144
- VRAM: HBM2 (898 GB/s)
- PCIe: 3.0
- Link: https://www.amazon.com/NVIDIA-Tesla-900-2G503-0310-000-NVLINK-GPU/dp/B07WDDNGXK
GIGABYTE AORUS GeForce RTX 5090 Master 32G
- Cost: $2599
- VRAM: GDDR7 (1792 GB/s)
- PCIe: 5.0
- Link: https://www.amazon.com/GIGABYTE-Graphics-WINDFORCE-GV-N5090AORUS-M-32GD/dp/B0DT7GHQMD
PNY NVIDIA GeForce RTX™ 5090 OC Triple Fan
- Cost: $2749
- VRAM: GDDR7 (1792 GB/s)
- PCIe: 5.0
- Link: https://www.amazon.com/PNY-GeForce-Overclocked-Graphics-3-5-Slot/dp/B0DTJF8YT4/
For comparison an RTX 3090 has 24GB of 936.2 GB/s GDDR6X~~, so for $879 it's hard to grumble about 32GB of 898 GB/s HBM2 in those V100s!~~ and the AMD card has gotta be tempting for someone at that price!
Edit: the V100 doesn't support CUDA compute capability 8.x and later, so check compatibility before making impulse buys!
Edit 2: found an MI60!
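(To put the listed prices in perspective, here's a quick back-of-the-envelope script; the numbers are copied from the listings above, nothing here is authoritative:)

```python
# (card, price in USD, VRAM bandwidth in GB/s), as quoted in the listings above.
cards = [
    ("Tesla M10",        279,  332),
    ("Instinct MI60",    499, 1020),
    ("Tesla V100 SXM2",  879,  898),
    ("RTX 5090",        2599, 1792),
]
for name, usd, bw in cards:
    # every card listed has 32 GB, so $/GB is simply price / 32
    print(f"{name:16s} ${usd / 32:6.2f} per GB   {bw / usd:5.3f} GB/s per $")
```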
submitted by /u/JockY
[link] [comments] -
🔗 sacha chua :: living an awesome life Tracking my oopses rss
My stress level seems to be higher this year compared to last year. There are a number of ways I can tell when I need to slow down. I feel more tired, less energetic. Enthusiasm is difficult to muster. I need to break things down into smaller tasks. I make lots of little mistakes: I misplace things, I forget things, I don't think ahead.
I track my mistakes in my home-made web-based journal under the Oops category. It feels a little better, come to think of it, when I can recover at least some data from a mistake. It's part of life, just a signal that tells me my brain is getting overloaded.
I thought it would be interesting to analyse the frequency of my oopses. For example, here are some recent oopses:
- I forgot to check the EmacsConf upload folder before e-mailing the speakers a nudge about videos. There were a few more videos in there, whew! It's all good, people are patient and wonderful. Impact: people's patience. (2025-11-21-01)
- I ate the last of the brown rice because I forgot to check if there was more. Good thing W- had more in the freezer. (2025-11-18-06)
- I fumbled the Apple Pencil as I took my iPad off the piano. It disengaged from the magnetic lock, slid down the piano cover, and fell on the floor, breaking the tip. I didn't want to rush to catch it because I was tired and I was also holding the iPad. Fortunately, I have a number of backup tips. Impact: maybe $3 to replace that tip eventually. Cause: tired. (2025-11-18-03)
- I washed the clothes on hot because I'm trying to avoid insect bites, but I accidentally shrank A+'s favourite purple shirt. Maybe next time I can just do a small load of my stuff. Impact: $10. I made up by buying new clothes for her, which I've been meaning to do anyway. #household (2025-11-15-16)
- I crossed the street against the lights because I wasn't looking at the pedestrian light. (2025-11-11-06)
Here's a yearly analysis:
Code for retrieving and grouping my journal entries:

```elisp
(append '(("Year" "Count"))
        (sort (mapcar (lambda (group)
                        (list (car group) (length (cdr group))))
                      (seq-group-by (lambda (o) (substring (my-journal-date o) 0 4))
                                    (seq-filter (lambda (o) (string= (my-journal-category o) "Oops"))
                                                (my-journal-get-entries))))
              :key 'car))
```

Data table:

| Year | Count |
|------|-------|
| 2017 | 12 |
| 2018 | 75 |
| 2019 | 36 |
| 2020 | 38 |
| 2021 | 33 |
| 2022 | 25 |
| 2023 | 28 |
| 2024 | 37 |
| 2025 | 73 |

Code for graphing oopses by year:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np

df = pd.DataFrame(data[1:], columns=data[0])
df['Count'] = df['Count'].astype(int)
df['Year'] = df['Year'].astype(int)
df = df.sort_values('Year')
plt.figure(figsize=(12, 6))
ax = sns.barplot(x='Year', y='Count', data=df)
plt.title('Oops by Year (2017-2025)', fontsize=16, fontweight='bold')
plt.xlabel('Year')
plt.ylabel('Number of oops entries')
plt.xticks(rotation=45)
plt.grid(axis='y')
for i, v in enumerate(df['Count']):
    ax.text(i, v + 1, str(v), ha='center', fontsize=9)
plt.tight_layout()
plt.savefig('year_count_plot.svg')
return 'year_count_plot.svg'  # org-babel source block: `return` hands the filename back to Org
```

2017 was not a full year of data, so that's probably why the number is so low. 2018 was when my dad died. I flew to the Philippines several times with my 2-year-old daughter so that we could spend time with my mom. It's a wonder that I managed to keep things mostly sorted out. Things were mostly manageable in between despite the ongoing pandemic. This year, though, a confluence of factors added a lot more strain on my brain. I wonder if part of it is because I'm concerned about health issues for people I care about. I also worry about the kiddo, school, change, and possibly adolescence. Who knows, maybe I'm starting to go through perimenopause, which apparently tends to include brain fog and mood swings. Fun. These are big things that I won't be able to resolve on my own, but I can get better at not overextending myself.
Might be fun to do a month-by-month analysis.
Code for retrieving and grouping my journal entries:

```elisp
(append '(("Year" "Month" "Count"))
        (sort (mapcar (lambda (group)
                        (list (substring (car group) 0 4)
                              (string-to-number (substring (car group) 5))
                              (length (cdr group))))
                      (seq-group-by (lambda (o) (substring (my-journal-date o) 0 7))
                                    (seq-filter (lambda (o) (string= (my-journal-category o) "Oops"))
                                                (my-journal-get-entries))))
              :key (lambda (row) (format "%s-%02d" (car row) (cadr row)))))
```

Code for making the heatmap:

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.DataFrame(data[1:], columns=data[0])
df = pd.pivot_table(df, columns=['Month'], index=['Year'], values='Count',
                    aggfunc='sum', fill_value=0).iloc[::-1].sort_index(ascending=True)
plt.figure(figsize=(12, 8))
sns.heatmap(
    df,
    annot=True,
    fmt="d",  # Format as integer
    cmap="YlGnBu",  # Color map
    linewidths=.5,
    cbar_kws={'label': 'Count of "Oops" Entries'}
)
# Set the title and axis labels
plt.title('Heatmap of "Oops" Entries by Month and Year', fontsize=16)
plt.xlabel('Month', fontsize=12)
plt.ylabel('Year', fontsize=12)
plt.savefig('number-of-oops-by-month.png')
return df  # org-babel source block: `return` hands the table back to Org
```

Data table:

| Year | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|------|---|---|---|----|----|---|---|---|----|----|----|----|
| 2017 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 | 2 | 0 | 3 |
| 2018 | 4 | 7 | 4 | 10 | 10 | 8 | 8 | 1 | 7 | 7 | 5 | 4 |
| 2019 | 4 | 4 | 2 | 4 | 4 | 1 | 3 | 5 | 1 | 1 | 2 | 5 |
| 2020 | 3 | 1 | 2 | 2 | 1 | 5 | 7 | 7 | 4 | 2 | 2 | 2 |
| 2021 | 2 | 2 | 1 | 1 | 2 | 4 | 6 | 4 | 2 | 2 | 1 | 6 |
| 2022 | 2 | 3 | 2 | 1 | 3 | 2 | 0 | 3 | 2 | 2 | 4 | 1 |
| 2023 | 3 | 1 | 1 | 1 | 5 | 2 | 4 | 5 | 0 | 0 | 3 | 3 |
| 2024 | 0 | 1 | 1 | 4 | 4 | 8 | 8 | 1 | 3 | 3 | 2 | 2 |
| 2025 | 7 | 1 | 7 | 2 | 6 | 4 | 9 | 8 | 10 | 4 | 15 | 0 |
Oooh, what's up with this month, yeah…
Most of my mistakes are small and easy to fix, just wasting time or patience. Others are a bit more annoying (dropping electronic devices, etc.). Some are potentially life-limiting (gotta make sure I look at the right lights before crossing the street). I tend to have a hard time with:
- transitions, like coming home after a bike ride: I might accidentally hang up my keys on the wrong hook if I'm not paying attention. I've started saying the steps out loud.
- tidying: If I have an attentional lapse, I put things in not-quite-the-right-place, so I probably can't listen to podcasts or think about complex thoughts.
- travel: If I'm biking or walking, I have to pay extra attention.
- task switching: interruptions, stacks, etc. I say things out loud and write them down if needed.
When I notice myself making more oopses than usual, I try to sleep more, take things more slowly, offload more of my thoughts to drawings and notes, ask for help, and do fewer things.
I'm working on single-tasking more instead of getting distracted by interesting thoughts. Maybe I used to be able to think about stuff, but now it's probably better to just let my mind be slightly bored if necessary instead of letting it get overloaded. I have to adapt to my capacity at the moment. I can either trust that those thoughts will come back if they're important, or take a moment to write them down on my phone. I can also give myself more focus time during the day to revisit those thoughts so that I don't get tempted to squeeze them in while, say, I'm putting away the dishes.
Maybe routines, songs, physical cues like checklists, or pointing and calling (physically pointing to something and saying it out loud for confirmation) can help me with some of my frequently-made mistakes.
Little by little. I might not be able to get rid of all the little mistakes, but if I can smoothen out some of the frequent ones, have an oops fund for the monetary costs of moderate mistakes, and keep myself from making the life-limiting ones, that might be enough.
You can comment on Mastodon or e-mail me at sacha@sachachua.com.
-
🔗 r/reverseengineering YOU ARE NOT READY FOR THIS: NEURAL NETWORKS ARE KILLING REVERSE ENGINEERING rss
submitted by /u/SapDragons
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: -1 plugin, -2 releases rss
sync repo: -1 plugin, -2 releases. Removed plugins: fwhunt-ida -
🔗 r/reverseengineering Made a process injection detector rss
submitted by /u/Capital-Let-5619
[link] [comments] -
🔗 News Minimalist 🐢 New daily pill helps lose 10% weight + 10 more stories rss
In the last 2 days ChatGPT read 63606 top news stories. After removing previously covered events, there are 11 articles with a significance score over 5.5.

[6.1] Daily pill helps people lose 10% of weight in 18 months, study finds —medicalxpress.com(+10)
A study published Thursday found a new daily pill, orforglipron, helps people lose about 10% of their body weight, offering a cheaper, needle-free alternative to current injectable drugs.
The Lancet study involved over 1,500 adults with both obesity and type 2 diabetes. After 72 weeks, those on the highest dose lost 10% of their weight, compared to 2% for a placebo group. Eli Lilly developed the pill.
The pill has side effects similar to existing drugs and is less potent than the injectable Mounjaro. If approved, orforglipron could be available in 2026 at a significantly lower cost than current treatments.
[6.4] Trump's Ukraine war end plan favors Russia —apnews.com(+521)
A U.S.-Russia drafted peace plan proposes ending the war by requiring Ukraine to cede territory and forgo NATO membership in a deal largely favorable to Moscow.
The 28-point proposal, obtained Thursday by the Associated Press, also offers Ukraine security guarantees, limits its military, and establishes a large international fund for reconstruction while reintegrating Russia into the global economy.
Specific terms include recognizing Crimea, Luhansk, and Donetsk as Russian. The plan also calls for a ceasefire, amnesty for combatants, and using frozen Russian assets to help rebuild Ukraine.
Highly covered news with significance over 5.5
[5.5] Saudi Arabia commits $1 trillion to U.S. investments — annahar.com (Arabic) (+14)
[5.6] Trump plans massive expansion of offshore oil drilling — economictimes.indiatimes.com (+18)
[6.2] WHO reports increasing drug-resistant gonorrhea — cidrap.umn.edu (+6)
[5.8] Scientists inch closer to pinprick blood test to detect diseases 10 years before symptoms appear — independent.co.uk (+7)
[5.7] US tech giants issue bonds for AI investments, raising investor concerns — ctvnews.ca (+7)
[5.5] US classifies countries with DEI policies as infringing human rights — bbc.com (+2)
[5.6] Speech-restoring brain chip gets FDA approval for human trial — newatlas.com (+2)
[5.6] Skin gel delivers insulin painlessly through the skin — zmescience.com (+2)
[6.0] Fukushima owner edges towards restarting first reactor since meltdown — bbc.com (+14)
Thanks for reading!
— Vadim
You can create your own personalized newsletter like this with premium.
-
🔗 r/wiesbaden Good food spot rss
Any good ideas for a tasty food restaurant?
submitted by /u/Nonthor
[link] [comments] -
🔗 r/LocalLLaMA I made a free playground for comparing 10+ OCR models side-by-side rss
It's called OCR Arena, you can try it here: https://ocrarena.ai
There's so many new OCR models coming out all the time, but testing them is really painful. I wanted to give the community an easy way to compare leading foundation VLMs and open source OCR models side-by-side. You can upload any doc, run a variety of models, and view diffs easily.
So far I've added Gemini 3, dots, DeepSeek-OCR, olmOCR 2, Qwen3-VL-8B, and a few others.
Would love any feedback you have! And if there's any other models you'd like included, let me know.
(No surprise, Gemini 3 is top of the leaderboard right now)
submitted by /u/Emc2fma
[link] [comments] -
🔗 r/wiesbaden License plate BD23 rss
Gude! I collect license plates of all kinds in an app. I found the common ones like WI, SWA, RÜD, etc. long ago, but I'm still missing BD23, the special prefix of the Bundeskriminalamt (Federal Criminal Police Office). Does anyone know whether these drive around here in Wiesbaden, and if so, at which site I'd have a good chance of spotting one? Thanks!
submitted by /u/CTRLPLUST
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: -4 plugins, -4 releases rss
sync repo: -4 plugins, -4 releases. Removed plugins: gepetto, ida-terminal-plugin, parascope, unicorn-tracer-arm64 -
🔗 @binaryninja@infosec.exchange Container support is finally here in Binary Ninja 5.2! You can browse nested mastodon
Container support is finally here in Binary Ninja 5.2! You can browse nested formats like ZIP, IMG4, or CaRT right in memory with no manual extraction. When a container resolves to a single target, Binja can open it straight away. If there are multiple payloads, the new Container Browser lets you pick what to load. This all works with the Transform API, so you can extend it and add support for whatever formats you need. https://binary.ninja/2025/11/13/binary-ninja-5.2-io.html#container-support
-
🔗 The Pragmatic Engineer A startup in Mongolia translated my book rss
I published The Software Engineer's Guidebook two years ago. The book sold 40,000 copies and generated $611,911 in royalties in its first two years. We need more good books in tech, so I hope that sharing these numbers inspires other techies to write them. I shared more details on how I self-published the book, on the financials, and learnings from publishing in this post.
An unexpected highlight of publishing the book was ending up in Mongolia in June of this year, at a small-but-mighty startup called Nasha Tech. This was because the startup translated my book into Mongolian. Here's the completed book:
[Image: The Software Engineer's Guidebook, in Mongolian. You can buy this translation here.]
Here's what happened:
A little over a year ago, a small startup from Mongolia reached out, asking if they could translate the book. I was skeptical it would happen because the unit economics appeared pretty unfavorable. Mongolia's population is 3.5 million; much smaller than other countries where professional publishers had offered to do a translation (Taiwan: 23M, South Korea: 51M, Germany: 84M, Japan: 122M, China: 1.43B people).
But I agreed to the initiative, and expected to hear nothing back. To my surprise, nine months later the translation was ready, and the startup printed 500 copies on the first run. They invited me to a book signing in the capital city of Ulaanbaatar, and soon I was on my way to meet the team, and to understand why a small tech company translated my book!
Japanese startup vibes in Mongolia
The startup behind the translation is called Nasha Tech; a mix of a startup and a digital agency. Founded in 2018, its main business has been agency work, mainly for companies in Japan. They are a group of 30 people, mostly software engineers.
[Image: Nasha Tech's offices in Ulaanbaatar, Mongolia]
Their offices resembled a mansion more than a typical workplace, and everyone takes their shoes off when arriving at work and switches to "office slippers". I encountered the same vibe later at Cursor's headquarters in San Francisco, in the US.
Nasha Tech found a niche of working for Japanese companies thanks to one of its cofounders studying in Japan, and building up connections while there. Interestingly, another cofounder later moved to Silicon Valley, and advises the company from afar.
The business builds the "Uber Eats of Mongolia". Outside of working as an agency, Nasha Tech builds its own products. The most notable is called TokTok, the "UberEats of Mongolia", which is the leading food delivery app in the capital city. The only difference between TokTok and other food delivery apps is scale: the local market is smaller than in some other cities. At a few thousand orders per day, it might not be worthwhile for an international player like Uber or Deliveroo to enter the market.
[Image: The TokTok app: a customer base of 800K, 500 restaurants, and 400 delivery riders]
The tech stack Nasha Tech typically uses:
- Frontend: React / Next, Vue / Nuxt, TypeScript, Electron, Tailwind, Element UI
- Backend and API: NodeJS (Express, Hono, Deno, NestJS), Python (FastAPI, Flask), Ruby on Rails, PHP (Laravel), GraphQL, Socket, Recoil
- Mobile: Flutter, React Native, Fastlane
- Infra: AWS, GCP, Docker, Kubernetes, Terraform
- AI & ML: GCP Vertex, AWS Bedrock, Elasticsearch, LangChain, Langfuse
AI tools are very much in use, and today the team works with Cursor, GitHub Copilot, Claude Code, OpenAI Codex, and Junie by JetBrains.
I detected very few differences between Nasha Tech and other "typical" startups I've visited, in terms of the vibe and tech stack. Devs working on TokTok were very passionate about how to improve the app and reduce the tech debt accumulated by prioritizing the launch. A difference for me was the language and target market: the main language in the office is, obviously, Mongolian, and the products they build like TokTok also target the Mongolian market, or the Japanese one when working with clients.
One thing I learned was that awareness about the latest tools has no borders: back in June, a dev at Nasha Tech was already telling me that Claude Code was their daily driver, even though the tool had been released for barely a month at that point!
Why translate the book into Mongolian?
Nasha Tech was the only non-book publisher to express interest in translating the book. But why did they do it?
I was told the idea came from software engineer Suuribaatar Sainjargal, who bought and enjoyed the English-language version. He suggested translating the book so that everyone at the company could read it, not only those fluent in English.
Nasha Tech actually had some in-house experience of translation. A year earlier, in 2024, the company translated Matt Mochary's The Great CEO Within as a way to uplevel their leadership team, and to help the broader Mongolian tech ecosystem.
Also, the company's General Manager, Batutsengel Davaa, happened to have been involved in translating more than 10 books in a previous role. He took the lead in organizing this work, and here's how the timelines played out:
- Professional translator: 3 months
- Technical editor revising the draft translation: 1 month
- Technical editing #2 by a Support Engineer in Japan: 2 months
- Technical revision: 15 engineers at Nasha Tech revised the book, with a "divide and conquer" approach: 2 months
- Final edit and print: 1 month
This was a real team effort. Somehow, this startup managed to produce a high-quality translation in around the same time as it took professional book publishers in my part of the world to do the same!
A secondary goal that Nasha Tech had was to advance the tech ecosystem in Mongolia. There's understandably high demand for books in the mother tongue; I observed a number of book stands selling these books, and book fairs are also popular. The translation of my book has been selling well; you can buy it for 70,000 MNT (~$19).
Book signing and the Mongolian startup scene
The book launch event was at Mongolia's startup hub, called IT Park, which offers space for startups to operate in. I met a few working in the AI and fintech spaces - and even one startup producing comics.
[Image: Book launch event, and meeting startups inside Mongolia's IT Park]
I had the impression that the government and private sector are investing heavily in startups, and want to help more companies to become breakout success stories:
- IT Park report: the country's tech sector is growing ~20% year-on-year. The combined valuation of all startups in Mongolia is at $130M today. It's worth remembering that location is important for startups: being in hubs like the US, UK, and India confers advantages that can be reflected in valuations.
- Mongolian Startup Ecosystem Report 2023: the average pre-seed valuation of a startup in Mongolia is $170K, seed valuation at $330K, and Series A valuation at $870K. The numbers reflect market size; for savvy investors, this could also be an opportunity to invest early. I met a Staff Software Engineer at the book signing event who is working in Silicon Valley at Google, and invests and advises in startups in Mongolia.
- Mongolian startup ecosystem Map: better-known startups in the country.
Two promising startups from Mongolia: Chimege (an AI+voice startup) AND Global (fintech). Thanks very much to the Nasha Tech team for translating the book - keep up the great work!
-
🔗 r/LocalLLaMA HunyuanVideo-1.5: A leading lightweight video generation model rss
-
🔗 HexRaysSA/plugin-repository commits readme: tweaks rss
readme: tweaks -
🔗 HexRaysSA/plugin-repository commits readme: add details and support hints rss
readme: add details and support hints closes #7 -
🔗 r/reverseengineering Advanced Evasion Techniques in Low Level rss
submitted by /u/pmd02931
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits ci: better render logs summary rss
ci: better render logs summary -
🔗 HexRaysSA/plugin-repository commits scripts: HCLI API changes rss
scripts: HCLI API changes -
🔗 HexRaysSA/plugin-repository commits ci: collect log summary to help triage plugin repo status rss
ci: collect log summary to help triage plugin repo status ref #5 -
🔗 HexRaysSA/plugin-repository commits add script to summarize GitHub indexer logs rss
add script to summarize GitHub indexer logs summarize logs: don't rely on gh and colorize with rich -
🔗 HexRaysSA/plugin-repository commits ci: run deploy only after repo sync rss
ci: run deploy only after repo sync -
🔗 apple/embedding-atlas v0.13.0 release
New Features
- Add a dashboard layout mode that allows you to freely arrange the components.
- New chart types: eCDF, line chart, and bubble chart.
- [breaking] A new DSL syntax for charts (similar to Vega-Lite); you can now create custom charts with this syntax.
- Theme support with `chartTheme` and `stylesheet` props to the `EmbeddingAtlas` component.
- Support rendering chat messages (data format: `Array<{role: "<role>", content: "<content>"}>`); see the sketch after this list.
- Add a widget to show a particular column's value of the selected data point (click table or embedding plot to select a point).
- Add a widget to display custom Markdown content.
- Support multiple text embedding providers via LiteLLM (thanks @peter-gy).
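A minimal sketch (my own, not from the release notes) of a pandas column in that `Array<{role, content}>` shape, assuming each row holds a list of role/content dicts:

```python
import pandas as pd

# Hypothetical example data: one conversation per row, stored as a list of
# {"role": ..., "content": ...} dicts, matching the documented data format.
df = pd.DataFrame({
    "id": [1, 2],
    "conversation": [
        [{"role": "user", "content": "What is Embedding Atlas?"},
         {"role": "assistant", "content": "A tool for exploring embeddings."}],
        [{"role": "user", "content": "Can it render charts?"},
         {"role": "assistant", "content": "Yes, including line and eCDF charts."}],
    ],
})
df.to_parquet("messages.parquet")  # a file you could then load into the tool
```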
Detailed Changes
- feat: support multiple text embedding providers by @peter-gy in #76
- feat: add dashboard layout by @donghaoren in #92
- feat: add markdown widget by @donghaoren in #93
- feat: messages format renderer by @donghaoren in #94
- chore: update packages by @donghaoren in #95
- feat: redesign chart spec schema, add line chart, bubble chart, and eCDF chart by @donghaoren in #97
- fix: for large x y values the embedding view disappears by @donghaoren in #98
- chore: exclude sensitive and irrelevant args from text projection hash by @peter-gy in #100
- fix: better WebGPU feature check by @donghaoren in #101
- chore: bump version to 0.13.0 by @donghaoren in #102
New Contributors
Full Changelog:
v0.12.0...v0.13.0 -
🔗 Servo Blog Servo Sponsorship Tiers rss
The Servo project is happy to announce the following new sponsorship tiers to encourage more donations to the project:
- Platinum: 10,000 USD/month
- Gold: 5,000 USD/month
- Silver: 1,000 USD/month
- Bronze: 100 USD/month
Organizations and individual sponsors donating in these tiers will be acknowledged on the servo.org homepage with their logo or name. Please note that such donations should come with no obligations to the project, i.e. they should be "no strings attached" donations. All the information about these new tiers is available at the Sponsorship page on this website.
Please contact us at join@servo.org if you are interested in sponsoring the project through one of these tiers.
Use of donations is decided transparently via the Technical Steering Committee’s public funding request process , and active proposals are tracked in servo/project#187.
Last, but not least, we're excited to welcome our first bronze sponsor, LambdaTest, who has recently started donating to the Servo project. Thank you very much!
-
🔗 Baby Steps Move Expressions rss
This post explores another proposal in the space of ergonomic ref-counting that I am calling move expressions. To my mind, these are an alternative to explicit capture clauses, one that addresses many (but not all) of the goals from that design with improved ergonomics and readability.
TL;DR
The idea itself is simple: within a closure (or future), we add the option to write `move($expr)`. This is a value expression ("rvalue") that desugars into a temporary value that is moved into the closure. So `|| something(&move($expr))` is roughly equivalent to something like:

```rust
{
    let tmp = $expr;
    || something(&{tmp})
}
```

How it would look in practice

Let's go back to one of our running examples, the "Cloudflare example", which originated in this excellent blog post by the Dioxus folks. As a reminder, this is how the code looks today - note the `let _some_value = ...` lines for dealing with captures:

```rust
// task: listen for dns connections
let _some_a = self.some_a.clone();
let _some_b = self.some_b.clone();
let _some_c = self.some_c.clone();
tokio::task::spawn(async move {
    do_something_else_with(_some_a, _some_b, _some_c)
});
```

Under this proposal it would look something like this:

```rust
tokio::task::spawn(async {
    do_something_else_with(
        move(self.some_a.clone()),
        move(self.some_b.clone()),
        move(self.some_c.clone()),
    )
});
```

There are times when you would want multiple clones. For example, if you want to move something into a `FnMut` closure that will then give away a copy on each call, it might look like:

```rust
data_source_iter
    .inspect(|item| {
        inspect_item(item, move(tx.clone()).clone())
        //                 ----------       -------
        //                 |                |
        //                 move a clone     |
        //                 into the closure |
        //                                  |
        //                 clone the clone
        //                 on each iteration
    })
    .collect();

// some code that uses `tx` later...
```

Credit for this idea

This idea is not mine. It's been floated a number of times. The first time I remember hearing it was at the RustConf Unconf, but I feel like it's come up before that. Most recently it was proposed by Zachary Harrold on Zulip, who has also created a prototype called soupa. Zachary's proposal, like earlier proposals I've heard, used the `super` keyword. Later on @simulacrum proposed using `move`, which to me is a major improvement, and that's the version I ran with here.

This proposal makes closures more "continuous"

The reason that I love the `move` variant of this proposal is that it makes closures more "continuous" and exposes their underlying model a bit more clearly. With this design, I would start by explaining closures with move expressions and just teach `move` closures at the end, as a convenient default:

A Rust closure captures the places you use in the "minimal way that it can" - so `|| vec.len()` will capture a shared reference to the `vec`, `|| vec.push(22)` will capture a mutable reference, and `|| drop(vec)` will take ownership of the vector.

You can use `move` expressions to control exactly what is captured: so `|| move(vec).push(22)` will move the `vec` into the closure. A common pattern when you want to be fully explicit is to list all captures at the top of the closure, like so:

```rust
|| {
    let vec = move(input.vec);       // take full ownership of vec
    let data = move(&cx.data);       // take a reference to data
    let output_tx = move(output_tx); // take ownership of the output channel
    process(&vec, &mut output_tx, data)
}
```

As a shorthand, you can write `move ||` at the top of the closure, which will change the default so that closures take ownership of every captured variable. You can still mix-and-match with `move` expressions to get more control. So the previous closure might be written more concisely like so:

```rust
move || {
    process(&input.vec, &mut output_tx, move(&cx.data))
    //      ----------  --------------  --------------
    //      |           |               |
    //      |           |               closure still
    //      |           |               captures a ref
    //      |           |               `&cx.data`
    //      |           |
    //      because of the `move` keyword on the closure,
    //      these two are captured "by move"
}
```

This proposal makes `move` "fit in" for me

It's a bit ironic that I like this, because it's doubling down on part of Rust's design that I was recently complaining about. In my earlier post on Explicit Capture Clauses I wrote that:

To be honest, I don't like the choice of `move` because it's so operational. I think if I could go back, I would try to refashion our closures around two concepts:

- Attached closures (what we now call `||`) would always be tied to the enclosing stack frame. They'd always have a lifetime even if they don't capture anything.
- Detached closures (what we now call `move ||`) would capture by-value, like `move` today.

I think this would help to build up the intuition of "use `detach ||` if you are going to return the closure from the current stack frame and use `||` otherwise".

`move` expressions are, I think, moving in the opposite direction. Rather than talking about attached and detached, they bring us to a more unified notion of closures, one where you don't have "ref closures" and "move closures" - you just have closures that sometimes capture moves, and a "move" closure is just a shorthand for using `move` expressions everywhere. This is in fact how closures work in the compiler under the hood, and I think it's quite elegant.

Why not suffix?

One question is whether a `move` expression should be a prefix or a postfix operator. So e.g. `|| something(&$expr.move)` instead of `&move($expr)`. My feeling is that it's not a good fit for a postfix operator because it doesn't just take the final value of the expression and do something with it, it actually impacts when the entire expression is evaluated. Consider this example:

```rust
|| process(foo(bar()).move)
```

When does `bar()` get called? If you think about it, it has to be closure creation time, but it's not very "obvious".

We reached a similar conclusion when we were considering `.unsafe` operators. I think there is a rule of thumb that things which delineate a "scope" of code ought to be prefix - though I suspect `unsafe(expr)` might actually be nice, and not just `unsafe { expr }`.

Edit: I added this section after-the-fact in response to questions.

Conclusion

I'm going to wrap up this post here. To be honest, what this design really has going for it, above anything else, is its simplicity and the way it generalizes Rust's existing design. I love that. To me, it joins the set of "yep, we should clearly do that" pieces in this puzzle:

- Add a `Share` trait (I've gone back to preferring the name `share` 😁)
- Add `move` expressions

These both seem like solid steps forward. I am not yet persuaded that they get us all the way to the goal that I articulated in an earlier post:

"low-level enough for a Kernel, usable enough for a GUI"

but they are moving in the right direction.
-
- November 20, 2025
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2025-11-20 rss
IDA Plugin Updates on 2025-11-20
New Releases:
- capa v9.3.1
- efiXplorer efiXplorer v6.1.2
- ida-domain v0.3.6-dev.1
- ida-hcli v0.13.3-dev.2
- IDA-VTableExplorer Release 2025.11.20
- Unicorn-Trace Unicorn-Trace v0.1
Activity:
- augur
- 12f8d8ab: chore: update dependencies
- capa
- 7844ebb1: v9.3.1 (#2769)
- dotfiles
- 146081e3: update
- 0ce96e8e: update
- e16f691f: update
- 88c8c9bf: update
- 27fc6b88: update
- b414f984: update
- c888a83b: update
- a9ff0da5: update
- ef2cbbbb: update
- 7f5bf9e7: update
- 2b29d8fe: update
- 56bfc75a: update
- cda54081: update
- 3de0b984: update
- c2579bf9: update
- b0e121e2: update
- b19411dd: update
- 492d642d: update
- 50db0e9d: update
- efiXplorer
- 69a6e1a0: bump version
- haruspex
- 2b4037e2: chore: update dependencies
- ida-domain
- ida-hcli
- adcef84f: ida: python: add more informative error messages
- 31e02ec0: lint: show more validation for invalid ida-plugin.json
- 3b19d801: docker: fix IDA installer tag name
- 8cf2dc5a: 0.13.3-dev.2
- 07e7af32: tests: codify default path on 9.2/Linux
- 7286a336: install: workaround for IDA 9.2 on Linux with space in path
- ida-sigmaker
- 8d4a175d: Closes #21 - Enhance acknowledgements with more contributors
- IDA-VTableExplorer
- 3538ff8d: fix: Update build output messages and remove Windows DLL references
- cb8b0ee0: fix: Enhance buffer overrun protection and exception handling in vtab…
- 10b4c277: chore: Remove MSVC build workflow
- f8e4ed80: Fix MSVC workflow paths and setup
- 00a3bbcd: fix: Update workflow name for clarity and remove unnecessary triggers
- cdc73489: Add MSVC GitHub Actions build for test
- rhabdomancer
- 71946e84: chore: update dependencies
- twdll
- 0605e904: feat(memhook): hook more memory related functions
-
🔗 r/wiesbaden Jazz clubs/bars with live music? rss
Hey!! I recently moved to Wiesbaden, and I've been wanting to go to jazz clubs/bars. I was reading about them in the Stadtlexikon:
https://www.wiesbaden.de/stadtlexikon/stadtlexikon-a-z/jazz-in-wiesbaden
But every time I look up the listed places, they don't look like what I'm looking for, with Walhalla am Exil having specific events and some of them not having live music.
I also just haven't been to jazz bars before, so I don't fully know what I'm looking for, but I'd appreciate recommendations for places etc. Thank you!
submitted by /u/Old-Bus-6698
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release. New releases: [capa](https://github.com/mandiant/capa): 9.3.1 -
🔗 The Pragmatic Engineer The Pulse: Cloudflare takes down half the internet – but shares a great postmortem rss
Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover Big Tech and startups through the lens of senior engineers and engineering leaders. Today, we cover one out of five topics from this week's The Pulse issue. Full subscribers received the below article seven days ago. To get articles like this in your inbox, every week, subscribe here.
Many subscribers expense this newsletter to their learning and development budget. If you have such a budget, here's an email you could send to your manager.
Before we start: I'm excited to share something new: The Pragmatic Summit.
Four years ago, The Pragmatic Engineer started as a small newsletter: me writing about topics relevant for engineers and engineering leaders at Big Tech and startups. Fast forward to today, and the newsletter crossed one million readers, and the publication expanded with a podcast as well.
One thing that was always missing: meeting in person. Engineers, leaders, founders--people who want to meet others in this community, and learn from each other. Until now that is:
The Pragmatic Summit. See more details and apply to attend.
In partnership with Statsig, I'm hosting the first-ever Pragmatic Summit. Seats are limited, and tickets are priced at $499, covering the venue, meals, and production - we're not aiming to make any profit from this event.
I hope to see many of you there!
Cloudflare takes down half the internet - but shares a great postmortem
On Tuesday came another reminder about how much of the internet depends on Cloudflare's content delivery network (CDN), when thousands of sites went fully or partially offline in an outage that lasted 6 hours. Some of the higher-profile victims included:
- ChatGPT and Claude
- Canva, Dropbox, Spotify
- Uber, Coinbase, Zoom
- X and Reddit
Separately, you may or may not recall that during a different recent outage caused by AWS, Elon Musk noted on his website, X, that AWS is a hard dependency for Signal, meaning an AWS outage could take down the secure messaging service at any moment. In response, a dev pointed out that it is the same for X with Cloudflare - and so it proved earlier this week, when X was broken by the Cloudflare outage.
Predicting the future. Source: Mehul Mohan on X
That AWS outage was in the company's us-east-1 region and took down a good part of the internet last month. AWS released incident details three days later - unusually speedy for the e-commerce giant - although that postmortem was high-level, and we never learned exactly what caused AWS's DNS Enactor service to slow down, triggering an unexpected race condition that kicked off the outage.
What happened this time with Cloudflare?
Within hours of mitigating the outage, Cloudflare's CEO Matthew Prince shared an unusually detailed report of exactly what went wrong. The root cause was the propagation of a bad configuration file to Cloudflare's Bot Management module. The file crashed Bot Management, which took Cloudflare's proxy functionality offline.
Here's a high-level overview of how Cloudflare's proxy layer works. It's the layer that protects customers' "origin" resources, minimizing network traffic to them by blocking malicious requests and caching static resources in Cloudflare's CDN:
How Cloudflare's proxy works. More details on Cloudflare's engineering blog.
Here's how the incident unfolded:
A database permissions change in ClickHouse kicked things off. Before the permissions changed, all queries fetching feature metadata (used by the Bot Management module) ran only against distributed tables in ClickHouse, in a database called "default" that contains 60 features.
Before the permissions change: about 60 features were returned and fed to the Bot Management module.
Until now, these queries had been running under a shared system account. Cloudflare's engineering team wanted to improve system security and reliability, and so moved from this shared system account to individual user accounts. User accounts already had access to another database called "r0", so the team made the database permission granting access to r0 implicit instead of explicit.
As a side effect of this, the same query collecting the features to be passed to Bot Management started to fetch from the r0 database, and return many more features than expected:
After the permissions change: the query did not change, but it returned twice as many results.
The Bot Management module does not allow loading more than 200 features. This limit was well above the production usage of 60 and was put in place for performance reasons: the module pre-allocates memory for up to 200 features and will not operate with more than that number.
A system panic hit machines served with the incorrect feature file. Cloudflare was nice enough to share the exact code that caused this panic - a call to unwrap():
Source: Cloudflare
What likely happened (a minimal sketch follows this list):
- The append_with_names() function likely checked for a limit of 200 features
- If it saw more than 200 features, it likely returned an error
- … and when writing the code, it was not expected that append_with_names() would return an error…
- … and so .unwrap() panicked and crashed the system!
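To make the failure mode concrete, here is a minimal, self-contained Rust sketch - hypothetical names and values standing in for Cloudflare's actual code:

```rust
// Hypothetical stand-in for Cloudflare's feature-loading path.
const MAX_FEATURES: usize = 200; // pre-allocated capacity, per the postmortem

fn append_with_names(features: &[&str]) -> Result<(), String> {
    if features.len() > MAX_FEATURES {
        // The "can't happen" branch: production only ever saw ~60 features,
        // until the bad file roughly doubled the count.
        return Err(format!("too many features: {}", features.len()));
    }
    Ok(())
}

fn main() {
    let features = vec!["some_feature"; 260]; // the oversized config file
    // unwrap() assumes Ok; on Err it panics, taking the whole process down.
    append_with_names(&features).unwrap();
}
```

Running this panics with a "called `Result::unwrap()` on an `Err` value" message - essentially what took each proxy process down.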
Edge nodes started to crash, one by one, seemingly randomly. The feature file was being generated every 5 minutes, and gradually rolled out to Edge nodes. So, initially, it was only a few nodes that crashed, and then over time, more became non-responsive. At one point, both good and bad configuration files were being distributed, making failed nodes that received the good configuration file start working - for a while!
Why so long to find the root cause?
It took Cloudflare engineers unusually long - 2.5 hours! - to figure all this out and determine that an incorrect configuration file propagating to Edge servers was to blame for their proxy going down. It turns out an unrelated failure made the Cloudflare team suspect they were under a coordinated botnet attack: just as a few Edge nodes started to go offline, the company's status page did, too:
Cloudflare's status page went offline when the outage started. Source: Cloudflare
The team tried to gather details about the attack, but there was no attack, so they wasted time looking in the wrong place. In reality, the status page going down was a coincidence, unrelated to the outage. But it's easy to see why their first reaction was to check whether a distributed denial of service (DDoS) attack was underway.
As mentioned, it eventually took 2.5 hours to pinpoint the incorrect configuration files as the source of the outage, and another hour to stop the propagation of new files and create a correct one, which was deployed 3.5 hours after the start of the incident. Cleanup took another 2.5 hours; at 17:06 UTC, the outage was resolved, roughly 6 hours after it started.
Cloudflare shared a detailed review of the incident and learnings, which can be read here.
How did the postmortem come so fast?
One thing that keeps surprising about Cloudflare is how it gets a very detailed postmortem up less than 24 hours after an incident is resolved. Co-founder and CEO Matthew Prince explained how this was possible:
- Matthew was part of the outage call.
- After the outage was resolved, he wrote a first version of the incident review at home. Matthew was in Lisbon, at Cloudflare's European HQ, so this was early evening.
- The team circulated a Google Doc with this initial writeup, and questions that needed to be reviewed
- In a few hours, all questions were answered
- Matthew: "None of us were happy [about the incident] -- we were embarrassed by what had happened -- but we declared it [the postmortem] true and accurate.
- Sent the draft over to the SF team, who did one more sweep, the posted it
Talk about moving with the speed of a startup, despite being a publicly traded company!
Learnings
There is much to learn from this incident, such as:
Be explicit about logging errors when you raise them! Cloudflare could probably have identified the root cause much faster if the line of code that returned the error had also logged it, and if alerts had been set up for error spikes on its nodes. That could surely have shaved an hour or two off the time to mitigate.
Of course, logging errors before returning them is extra work, but combined with monitoring or log analysis, it can help find the source of errors much faster.
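As a sketch of that practice (assuming the `log` crate as the logging facade; names are illustrative, not Cloudflare's code), the error is logged with context at the point it is raised, so monitoring can catch it even if a caller later just unwraps:

```rust
use log::error; // the `log` facade; any logger implementation can back it

const MAX_FEATURES: usize = 200;

fn load_features(count: usize) -> Result<(), String> {
    if count > MAX_FEATURES {
        // Log before returning: this line is what lets alerting spot
        // a sudden spike in rejections across the fleet.
        error!("feature file rejected: {count} entries exceeds limit of {MAX_FEATURES}");
        return Err(format!("too many features: {count}"));
    }
    Ok(())
}
```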
Global database changes are always risky. You never know what part of the system you might hit. The incident started with a seemingly innocuous database permissions change that impacted a wide range of queries. Unfortunately, there is no good way to test the impact of such changes (if you know one, please leave a comment below!)
Cloudflare was making the right kind of change by removing global system accounts; it's a good direction to go in for security and reliability. It was extremely hard to predict that the change would end up taking down part of their system - and the web.
Two things going wrong at the same time can really throw an engineering team. If Cloudflare's status page had not gone offline, the engineering team would surely have pinpointed the problem much faster than they did. But in the heat of the moment, it's easy to assume that two small outages are connected, until there's evidence that they're not. Cloudflare is a service that's continuously under attack, so the engineering team can't be blamed for assuming this might be more of the same.
CDNs are the backbone of the internet, and this outage doesn't change that. The outage hit lots of large businesses, resulting in lost revenue for many. But could affected companies have prepared better for Cloudflare going down?
The problem is that this is hard: using a CDN means taking on a hard dependency in order to reduce traffic on your own servers (the origin servers), while serving internet users faster and more cheaply:
A CDN is a common way to reduce traffic to servers and serve webpages and APIs faster to users.
When using a CDN, you propagate addresses that point to that CDN's IP or domain. When the CDN goes down, you could redirect traffic to your own origin servers (and deal with the traffic spike), or switch to a backup CDN, if you prepared for this eventuality; a toy sketch of the first option follows the list below.

Both these are expensive to pull off:
- Redirecting to the origin servers likely means needing to suddenly scale up backend infrastructure
- Having a backup CDN means a contract and payment for a CDN partner that will mostly sit idle. As and when it is needed, you must switch over and warm up its cache: it's a lot of effort and money to do this!
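As promised above, here is a toy Rust sketch of the first option - falling back to origin when the CDN errors. The URLs are hypothetical, it assumes the `reqwest` crate with its "blocking" feature enabled, and real failover is usually done at the DNS or load-balancer layer rather than per-request:

```rust
// Try the CDN first; on any failure, fall back to the origin directly.
// cdn.example.com and origin.example.com are placeholders.
fn fetch_with_fallback(path: &str) -> Result<String, reqwest::Error> {
    let cdn_url = format!("https://cdn.example.com{path}");
    match reqwest::blocking::get(&cdn_url).and_then(|r| r.error_for_status()) {
        Ok(resp) => resp.text(),
        Err(_) => {
            // CDN unavailable: hit the origin and absorb the extra load there.
            let origin_url = format!("https://origin.example.com{path}");
            reqwest::blocking::get(&origin_url)?.text()
        }
    }
}
```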
A case study in the trickiness of dealing with a CDN going offline is the story of Downdetector, including inside details on why Downdetector went down during Cloudflare's latest outage, and what they learned from it.
This was one out of the five topics covered in this week's The Pulse. The full edition additionally covers:
- Downdetector & the real cost of no upstream dependencies. During the Cloudflare outage, Downdetector was also unavailable. I got details from the team about why they have a hard dependency on Cloudflare, and why that won't change anytime soon.
- Antigravity: Google's new AI IDE - that its devs cannot use. Google wants to become a serious player in AI coding tools, but Antigravity contains remnants of Windsurf. Interestingly, devs at Google aren't allowed to use Antigravity for work.
- Industry pulse. Gemini 3 launch, Anthropic valued at $350B, Jeff Bezos funds an AI company, and unusually slow headcount growth at startups persists.
- Five AI fakers caught in 1 month by crypto startup. Candidates who fake their backgrounds and change their looks in remote interviews continue to plague companies hiring full-remote - especially crypto startups.
-
🔗 @HexRaysSA@infosec.exchange 🔌 Allow us to introduce the new IDA Plugin Manager. mastodon
🔌 Allow us to introduce the new IDA Plugin Manager.
Now, with a few simple commands, you can access a modern, self-service plugin ecosystem. Discover and get discovered more easily.
https://hex-rays.com/blog/introducing-the-ida-plugin-manager
-
🔗 Kagi release notes Nov 20th, 2025 - Introducing Quick and Research assistants rss
Kagi Assistant
Introducing Quick and Research assistants

Today, we are officially introducing Kagi Research assistants (previously known as "KI"). Read our full announcement here.
Their main strength is research: identifying what to search for, executing multiple simultaneous searches (in different languages, if needed), and synthesizing the findings into high-quality answers.
Simply choose whether to prioritise speed or depth:
- Quick optimises for speed , providing direct and concise answers.
- Research focuses on depth and diversity , conducting exhaustive analysis for thorough results. Research is available to Ultimate subscribers only.

To achieve this, they employ different base models for specific tasks. We continuously benchmark top-performing models and select the best one for each job, so you don't have to.
And on top of web search, we’ve added new behavioural layers and a wider toolset, including Python execution and image generation for higher-quality answers. These capabilities go beyond what was already possible in Kagi Assistant using a base model with web search. See our documentation for the full details.
Finally, a huge thank you to everyone in our Discord for beta testing this with us and providing tons of feedback along the way! 🙏
Note:
- With this change, we set the Quick assistant as the default mode in Kagi Assistant. You can always adjust this in your Assistant Settings.
- Additionally, we plan to migrate the `q` bang, currently used for Quick Answer, to trigger an Assistant thread targeting the Quick assistant.
LLMs are bullshitters. But that doesn't mean they're not useful
Yesterday, we published an opinion essay exploring the useful yet disruptive nature of LLMs. Give it a read and let us know what you think https://blog.kagi.com/llms
Colour code your Assistant tags
Now you can assign icons and colours to your tags. Spot important threads instantly.

Other improvements and bug fixes
- Retired a handful of models. As part of a regular process, we occasionally review and retire models that are not used by Kagi customers and have been superseded by better, newer models. Saying bon voyage to: gpt-oss-20b, gpt-4-1-nano, gpt-4-1-mini, gpt-4-1, o4-mini, o3, grok-code-fast, mistral-large, deepseek-r1, and hermes-4-405b. In the future we will announce these changes with more advance notice.
- Kagi Assistant - work on relationship between Custom Assistant and Model in the UI #8327 @RobOK
- Show more info in dialog when using Kagi Assistants #8335 @Thibaultmol
- Case-agnostic alphabetical sorting for assistant tags #8967 @lolroger
Kagi Search
SlopStop Update
Last week we kicked off our SlopStop initiative. Since then, the community has submitted over 3,000 reports! Our team is reviewing this data to refine our evaluation pipeline, with improvements expected to go live next week.
Please continue reporting AI slop in your search results.

- Paywalled news sites are now signaled on /news.
- The new AI slop report breaks the layout when translated #8923 @tux0r
- Programming lens doesn't work #8310 @Khyta
- Timer not removed when search is changed #8780 @acut3
- No reference list for quick answer with privacy pass #8917 @Jesal
- Reverse image search for recent image works on Google but not Kagi #8380 @leftium
- You can report the same website as AI generated multiple times #8911 @pma_snek
- Family invite page for some accounts has garbage html #8943 @Temm
- Quick Answer 'Show More' doesn't save state #8902 @Dustin
- Quick Answer autocomplete suggestion opens non-search tabs, should go to search #8941 @Thibaultmol
Kagi Translate
Post of the week
Here is this week's featured social media mention:

Haven't tried the Kagi Translate extension yet? Check it out!
-
🔗 Hex-Rays Blog Introducing the IDA Plugin Manager rss
-
🔗 r/reverseengineering SDRPlay RSP1 Clone - MSI2500/MSI100 Dongle Deep Dive and Teardown rss
submitted by /u/SpiffyCabbage
[link] [comments] -
🔗 r/reverseengineering Reverse engineering an update mechanism in the Lumi A31C security camera, and gaining backdoor access. rss
submitted by /u/Ancient-Winter5861
[link] [comments] -
🔗 Simon Willison Nano Banana Pro aka gemini-3-pro-image-preview is the best available image generation model rss
Hot on the heels of Tuesday's Gemini 3 Pro release, today it's Nano Banana Pro, also known as Gemini 3 Pro Image. I've had a few days of preview access and this is an astonishingly capable image generation model.
As is often the case, the most useful low-level details can be found in the API documentation:
Designed to tackle the most challenging workflows through advanced reasoning, it excels at complex, multi-turn creation and modification tasks.
- High-resolution output: Built-in generation capabilities for 1K, 2K, and 4K visuals.
- Advanced text rendering: Capable of generating legible, stylized text for infographics, menus, diagrams, and marketing assets.
- Grounding with Google Search: The model can use Google Search as a tool to verify facts and generate imagery based on real-time data (e.g., current weather maps, stock charts, recent events).
- Thinking mode: The model utilizes a "thinking" process to reason through complex prompts. It generates interim "thought images" (visible in the backend but not charged) to refine the composition before producing the final high-quality output.
- Up to 14 reference images: You can now mix up to 14 reference images to produce the final image.
[...] These 14 images can include the following:
- Up to 6 images of objects with high-fidelity to include in the final image
- Up to 5 images of humans to maintain character consistency
There is also a short (6 page) model card PDF which lists the following as "new capabilities" compared to the previous Nano Banana: Multi character editing, Chart editing, Text editing, Factuality - Edu, Multi-input 1-3, Infographics, Doodle editing, Visual design.
Trying out some detailed instruction image prompts
Max Woolf published the definitive guide to prompting Nano Banana just a few days ago. I decided to try his example prompts against the new model, requesting results in 4K.
Here's what I got for his first test prompt, using Google's AI Studio:
Create an image of a three-dimensional pancake in the shape of a skull, garnished on top with blueberries and maple syrup.
The result came out as a 24.1MB, 5632 × 3072 pixel PNG file. I don't want to serve that on my own blog so here's a Google Drive link for the original.
Then I ran his follow-up prompt:
Make ALL of the following edits to the image:
- Put a strawberry in the left eye socket.
- Put a blackberry in the right eye socket.
- Put a mint garnish on top of the pancake.
- Change the plate to a plate-shaped chocolate-chip cookie.
- Add happy people to the background.
I'll note that it did put the plate-sized cookie on a regular plate. Here's the 24.9MB PNG.
The new model isn't cheap. Here's the API pricing: it's 24 cents for a 4K image and 13.4 cents for a 1K or 2K image. Image inputs are 0.11 cents (just over 1/10th of a cent) each - an earlier version of their pricing page incorrectly said 6.7 cents each but that's now been fixed.
Unlike most of Google's other models it also isn't available for free via AI Studio: you have to configure an API key with billing in order to use the model there.
Creating an infographic
So this thing is great at following instructions. How about rendering text?
I tried this prompt, this time using the Gemini consumer app in "thinking" mode (which now uses Nano Banana Pro for image generation). Here's a share link - my prompt was:
Infographic explaining how the Datasette open source project works
This is a great opportunity to test its ability to run searches (aka "Grounding with Google Search"). Here's what it created based on that 9 word prompt:
![Described by Gemini 3 Pro: A technical architecture diagram titled "How Datasette Works: From Raw Data to Explorable API," illustrating a workflow from left to right. 1. "RAW DATA SOURCES" include "CSV", "JSON", "Excel (XLSX)", and "Log Files". 2. These flow into "DATA PREPARATION & CONVERSION" using tools "csvs-to-sqlite" and "sqlite-utils" to create a "SQLite DATABASE". 3. This feeds into the central "DATASETTE APPLICATION CORE," a stack comprising "Data Ingestion (Read-Only)", "Query Engine (SQL)", "API Layer (JSON)", and "Web UI Rendering". 4. A "PLUGIN ECOSYSTEM" connects below the core, listing "Vega Charts", "Cluster Maps", "Full-Text Search", and "Custom Renderers". 5. To the right, the core branches into "OUTPUT & INTERFACES": an "Interactive Web Interface (Explore, Filter, Facet)", a "JSON API (Programmatic Access)" showing a snippet {"data": [...]}, and a "SQL EDITOR Custom SQL Queries" showing SELECT * FROM.... 6. The API output connects to "PUBLISHING & DEPLOYMENT" via a terminal command datasette publish cloudrun my.db leading to deployment targets "Heroku", "Google Cloud Run", "Fly.io", and "Vercel".](https://static.simonwillison.net/static/2025/nano-banana-datasette.jpg)
That's a pretty extraordinary result for such a simple prompt. The text is all spelled correctly and rendered without glitches. The content is solid too - it even included logos for the most popular publish platforms, and a tiny thumbnail of the Datasette UI which is close-enough for an infographic.
"Data Ingestion (Read-Only)" is a bit off. Everything else looks great to me.
Here's the thinking trace for that image generation process:
Initiating Datasette Infographic
I'm starting a detailed infographic about the Datasette project. I plan to use Google Search to gather specific information about its core processes, which I will then visualize in a multi-panel diagram.
Developing the Datasette workflow
I am visualizing the Datasette workflow as described in the search results. I'm focusing on the conversion of data from different formats into a SQLite database, which is then presented as a web application complete with a user interface and JSON output.
Visualizing Datasette Workflow
I've sketched out the initial structure for the infographic, focusing on a flowchart style. The key components include data input from diverse sources, conversion to SQLite, the Datasette core application, its main features (web UI, JSON API, SQL querying), ecosystem plugins, and finally, deployment options. I aim for clear icons and labels to represent each step. Now I will focus on refining the flow and clarifying the relationships between the parts.
Defining Core Datasette Stages
I've sketched out the initial infographic structure. I have identified the key components, focusing on visual representation of the data flow and the key benefits and features of the system. I have made updates to the flowchart with clear icons and labels. Now I'm working to expand the data input section and refine the visual representation of each step.
Constructing Datasette Infographic
I am translating the current Datasette workflow outline into visual elements. The input section will show icons for different data formats. I've sketched the data conversion stage with clear illustrations of the necessary tools. I will add a depiction of the Datasette core application, showcasing its interfaces and functions. The plugin ecosystem and deployment options will follow. Now, I am refining the visual representation.
Analyzing Datasette's Architecture
I'm currently working on representing the Datasette open-source project's architecture visually. I've broken it down into key components: raw data processing, SQLite conversion, core application functionalities, plugin integration, and deployment methods. I'm focusing on creating a clear, linear representation to show the workflow.
Mapping the Datasette Flow
I've crafted a comprehensive graphic to illustrate Datasette's operation. It spans the data ingestion, SQLite conversion, main application features, plugin interactions, and deployment stages, aiming for clarity and a logical flow.
Detecting generated images with SynthID
This note from the announcement post caught my eye:
We believe it's critical to know when an image is AI-generated. This is why all media generated by Google's tools are embedded with our imperceptible SynthID digital watermark.
Today, we are putting a powerful verification tool directly in consumers' hands: you can now upload an image into the Gemini app and simply ask if it was generated by Google AI, thanks to SynthID technology. We are starting with images, but will expand to audio and video soon.
Last night I used Nano Banana Pro to generate a fake photograph of raccoons stealing our food delivery, then scrubbed out the little diamond icon using the Apple Photos "cleanup" tool. I uploaded that to the Gemini app and asked "Was this image created with AI?":

It replied:
Yes, it appears that all or part of this image was created with Google AI. SynthID detected a watermark in 25-50% of the image.
Presumably that 25-50% figure is because the rest of the photo was taken by me - it was just the raccoons that were added by Nano Banana Pro.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 @binaryninja@infosec.exchange Binary Ninja 5.2 brings initial bitfield support. You can now represent mastodon
Binary Ninja 5.2 brings initial bitfield support. You can now represent structure members with specific bit positions and widths, and Linear View will show them cleanly. Our debug info plugins like DWARF, PDB, and the built-in SVD import all express bitfields now too. More analysis work is coming in a future update to help resolve common access patterns in Medium and High Level IL. https://binary.ninja/2025/11/13/binary-ninja-5.2-io.html#initial-bitfield-support
-
🔗 r/LocalLLaMA Your local LLM agents can be just as good as closed-source models - I open-sourced Stanford's ACE framework that makes agents learn from mistakes rss
I implemented Stanford's Agentic Context Engineering paper. The framework makes agents learn from their own execution feedback through in-context learning instead of fine-tuning.
How it works:
Agent runs task → reflects on what worked/failed → curates strategies into playbook → uses playbook on next run
Improvement:
Paper shows +17.1pp accuracy improvement vs base LLM (≈+40% relative improvement) on agent benchmarks (DeepSeek-V3.1 non-thinking mode), helping close the gap with closed-source models. All through in-context learning (no fine-tuning needed).
My Open-Source Implementation:
- Drop into existing agents in ~10 lines of code
- Works with local or API models
- Real-world test on browser automation agent:
- 30% → 100% success rate
- 82% fewer steps
- 65% decrease in token cost
Get started:
- GitHub: https://github.com/kayba-ai/agentic-context-engine
- Local Model Starter Templates (Ollama, LM Studio, LiteLLM): https://github.com/kayba-ai/agentic-context-engine/tree/main/examples
Would love to hear if anyone tries this with their local setups! Especially curious how it performs with different models.
I'm currently actively improving this based on feedback - ⭐ the repo so you can stay updated!
submitted by /u/cheetguy
[link] [comments] -
🔗 r/wiesbaden Every Bundestag election since the Second World War rss
submitted by /u/Antique-Hedgehog5005
[link] [comments] -
🔗 r/wiesbaden Help! I need to do the residence anmeldung but is it correct? rss
I have found an apartment in Wiesbaden and looked for an appointment at https://dtms.wiesbaden.de/DTMSTerminWeb/ - the next available one is the 12th of January, but I would start working on the 15th of January. Am I selecting the right appointment? Waiting two months just for the residence registration seems too long, I think.
submitted by /u/MCOMICN
[link] [comments] -
🔗 r/LocalLLaMA Ai2 just announced Olmo 3, a leading fully open LM suite built for reasoning, chat, & tool use rss
Try Olmo 3 in the Ai2 Playground → https://playground.allenai.org/ Download: https://huggingface.co/collections/allenai/olmo-3-68e80f043cc0d3c867e7efc6 Blog: https://allenai.org/blog/olmo3 Technical report: https://allenai.org/papers/olmo3
submitted by /u/Nunki08
[link] [comments]
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
sync repo: +1 plugin, +1 release ## New plugins - [unicorn-tracer-arm64](https://github.com/chenxvb/Unicorn-Trace) (0.1) -
🔗 r/reverseengineering Reverse Engineering Game Cartridge Authentication on the PlayStation Vita. rss
submitted by /u/VitaCmd56
[link] [comments] -
🔗 @cxiao@infosec.exchange (for srs tho there is a category of mandarin learner that's...disturbingly mastodon
(for srs tho there is a category of mandarin learner that's...disturbingly susceptible to simping for the PRC 😬)
-
🔗 @cxiao@infosec.exchange too many ppl learning mandarin because of "american decline", not enough ppl mastodon
too many ppl learning mandarin because of "american decline", not enough ppl learning mandarin for access to a new world of memes
-
🔗 r/LocalLLaMA Spark Cluster! rss
Doing dev and expanded my Spark desk setup to eight! Anyone have anything fun they want to see run on this HW? I'm not using the Sparks for max performance; I'm using them for NCCL/NVIDIA dev to deploy to B300 clusters. Really great platform to do small dev on before deploying to large HW.
submitted by /u/SashaUsesReddit
[link] [comments]
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release ## New releases - [efiXplorer](https://github.com/binarly-io/efixplorer): 6.1.2 -
🔗 r/reverseengineering KsDumper 11 v1.3.5H Release rss
submitted by /u/BngModding
[link] [comments] -
🔗 r/reverseengineering Friman - Frida version manager tool rss
submitted by /u/deleee
[link] [comments] -
🔗 Console.dev newsletter Google Antigravity rss
Description: Google’s new AI IDE.
What we like: Agent-first UI: agent manager to instruct and manage AI coding agents. Exposes task list, implementation plan and walkthrough file artifacts used by the agent - builds docs as it goes. You can comment on plans to provide feedback to the agent. Automates in-browser code testing through a Chrome extension - result artifacts are added to the walkthrough doc.
What we dislike: Seems quite easy to hit AI model rate limits with extended usage. How many IDEs does Google have now?
-
🔗 Console.dev newsletter Homebrew 5 rss
Description: Package manager.
What we like: Now officially supports Linux ARM as well as macOS. Concurrent downloads enabled by default speeds things up. Makes it easy to install packages which live in their own directory and are symlinked to the relevant bin directories. Gives you one command to update everything (and pin specific packages).
What we dislike: Will drop support for unsigned packages in a year so this may break a lot of packages even if it does improve security long term.
-
🔗 Rust Blog Switching to Rust's own mangling scheme on nightly rss
TL;DR: rustc will use its own "v0" mangling scheme by default on nightly versions instead of the previous default, which re-used C++'s mangling scheme, starting in nightly-2025-11-21.
Context
When Rust is compiled into object files and binaries, each item (functions, statics, etc) must have a globally unique "symbol" identifying it.
In C, the symbol name of a function is just the name the function was defined with, such as strcmp. This is straightforward and easy to understand, but requires that each item have a globally unique name that doesn't overlap with any symbols from libraries it is linked against. If two items had the same symbol, then when the linker tried to resolve a symbol to an address in memory (of a function, say), it wouldn't know which symbol is the correct one.
Languages like Rust and C++ define "symbol mangling schemes", leveraging information from the type system to give each item a unique symbol name. Without this, it would be possible to produce clashing symbols in a variety of ways - for example, every instantiation of a generic or templated function (or an overload in C++), all of which have the same name in the surface language, would end up with clashing symbols; so would the same name in different modules, such as a::foo and b::foo (a tiny example follows the list below).
Rust originally used a symbol mangling scheme based on the Itanium ABI's name mangling scheme used by C++ (sometimes). Over the years, it was extended in an inconsistent and ad-hoc way to support Rust features that the mangling scheme wasn't originally designed for. Rust's current legacy mangling scheme has a number of drawbacks:
- Information about generic parameter instantiations is lost during mangling
- It is internally inconsistent - some paths use an Itanium ABI-style encoding but some don't
- Symbol names can contain . characters which aren't supported on all platforms
- Symbol names include an opaque hash which depends on compiler internals and can't be easily replicated by other compilers or tools
- There is no straightforward way to differentiate between Rust and C++ symbols
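As promised above, a tiny example of the clash problem that mangling solves. Both functions are named foo, and they can coexist in one binary only because their mangled symbols differ:

```rust
mod a {
    pub fn foo() -> u32 { 1 }
}
mod b {
    pub fn foo() -> u32 { 2 }
}

fn main() {
    // The linker never sees a bare `foo` symbol: it sees two distinct
    // mangled names encoding the paths a::foo and b::foo.
    println!("{} {}", a::foo(), b::foo());
}
```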
If you've ever tried to use Rust with a debugger or a profiler and found it hard to work with because you couldn't work out which functions were which, it's probably because information was being lost in the mangling scheme.
Rust's compiler team started working on our own mangling scheme back in 2018 with RFC 2603 (see the "v0 Symbol Format" chapter in the rustc book for our current documentation on the format). Our "v0" mangling scheme has multiple advantageous properties:
- An unambiguous encoding for everything that can end up in a binary's symbol table
- Information about generic parameters is encoded in a reversible way
- Mangled symbols are decodable such that it should be possible to identify concrete instances of generic functions
- It doesn't rely on compiler internals
- Symbols are restricted to only A-Z, a-z, 0-9 and _, helping ensure compatibility with tools on varied platforms
However, rustc is not the only tool that interacts with Rust symbol names: the aforementioned debuggers, profilers and other tools all need to be updated to understand Rust's v0 symbol mangling scheme so that Rust's users can continue to work with Rust binaries using all the tools they're used to without having to look at mangled symbols. Furthermore, all of those tools need to have new releases cut and then those releases need to be picked up by distros. This takes time!
Fortunately, the compiler team now believes that support for our v0 mangling scheme is sufficiently widespread that it can start to be used by default by rustc.
Benefits
Reading Rust backtraces, or using Rust with debuggers, profilers and other tools that operate on compiled Rust code, will be able to output much more useful and readable names. This will especially help with async code, closures and generic functions.
It's easy to see the new mangling scheme in action; consider the following example:

```rust
fn foo<T>() { panic!() }

fn main() {
    foo::<Vec<(String, &[u8; 123])>>();
}
```

With the legacy mangling scheme, all of the useful information about the generic instantiation of foo is lost in the symbol f::foo...

```
thread 'main' panicked at f.rs:2:5:
explicit panic
stack backtrace:
   0: std::panicking::begin_panic
             at /rustc/d6c...582/library/std/src/panicking.rs:769:5
   1: f::foo
   2: f::main
   3: core::ops::function::FnOnce::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```

...but with the v0 mangling scheme, the useful details of the generic instantiation are preserved with f::foo::<alloc::vec::Vec<(alloc::string::String, &[u8; 123])>>:

```
thread 'main' panicked at f.rs:2:5:
explicit panic
stack backtrace:
   0: std::panicking::begin_panic
             at /rustc/d6c...582/library/std/src/panicking.rs:769:5
   1: f::foo::<alloc::vec::Vec<(alloc::string::String, &[u8; 123])>>
   2: f::main
   3: <fn() as core::ops::function::FnOnce<()>>::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```

Possible drawbacks
Symbols using the v0 mangling scheme can be larger than symbols with the legacy mangling scheme, which can result in a slight increase in linking times and binary sizes if symbols aren't stripped (which they aren't by default). Fortunately this impact should be minor, especially with modern linkers like lld, which Rust will now default to on some targets.
Some old versions of tools/distros or niche tools that the compiler team are unaware of may not have had support for the v0 mangling scheme added. When using these tools, the only consequence is that users may encounter mangled symbols. rustfilt can be used to demangle Rust symbols if a tool does not.
In any case, using the new mangling scheme can be disabled if any problem occurs: use the -Csymbol-mangling-version=legacy -Zunstable-options flags to revert to the legacy mangling scheme.
Explicitly enabling the legacy mangling scheme requires nightly; it is not intended to be stabilised, so that support can eventually be removed.
Adding v0 support in your tools
If you maintain a tool that interacts with Rust symbols and does not support the v0 mangling scheme, there are Rust and C implementations of a v0 symbol demangler available in the rust-lang/rustc-demangle repository that can be integrated into your project.
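For example, a minimal use of the Rust implementation (assuming a dependency on the rustc-demangle crate from crates.io; the symbol below is the illustrative one from the rustc book):

```rust
use rustc_demangle::demangle;

fn main() {
    // A v0-mangled symbol, recognizable by its _R prefix.
    let sym = "_RNvCskwGfYPst2Cb_3foo16example_function";
    println!("{}", demangle(sym));   // full demangled form
    println!("{:#}", demangle(sym)); // alternate form omits hashes/disambiguators
}
```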
Summary
rustc will use our "v0" mangling scheme on nightly for all targets starting in tomorrow's rustup nightly (
nightly-2025-11-21).Let us know if you encounter problems, by opening an issue on GitHub.
If that happens, you can use the legacy mangling scheme with the -Csymbol-mangling-version=legacy -Zunstable-options flags, either by adding them to the usual RUSTFLAGS environment variable or to a project's .cargo/config.toml configuration file, like so:

```toml
[build]
rustflags = ["-Csymbol-mangling-version=legacy", "-Zunstable-options"]
```

If you like the sound of the new symbol mangling version and would like to start using it on stable or beta channels of Rust, then you can similarly use the -Csymbol-mangling-version=v0 flag today via RUSTFLAGS or .cargo/config.toml:

```toml
[build]
rustflags = ["-Csymbol-mangling-version=v0"]
```
-
