- Rust Binary Analysis 101 - Part 2 - Z3R0cool Blogs
- mprocs: start all your project's commands at once
- Jon's Arm Reference
- Optimize for momentum
- Nviso vshell report
- December 25, 2025
-
Hex-Rays Blog IDA 9.3 Beta is Live! rss
If you're part of our Beta Program, the new build is now available in the Download Center of your customer portal. Your testing and feedback are essential in validating new features, surfacing regressions, and ensuring this release is ready for production use.
Not enrolled yet? Joining the program is quick and easy, just click Subscribe from your customer portal dashboard - see below for details.

-
r/reverseengineering [Research] Algebraic Reverse Engineering of SHA-256: Genesis Block Raw Preimage Reconstruction rss
submitted by /u/No_Arachnid_5563
[link] [comments] -
r/reverseengineering Reverse engineering a mobile game APK: basically I have an old War Robots APK, which is, btw, a server-sided game; it downloads the files, and because it's trying to reconnect to the old servers, which are no longer present, it gives me "retry again" or a connection issue. Anyway, how about doing a private server? rss
submitted by /u/SignificantGear1214
[link] [comments]
-
- December 24, 2025
-
r/LocalLLaMA Exclusive: Nvidia buying AI chip startup Groq's assets for about $20 billion in largest deal on record rss
submitted by /u/fallingdowndizzyvr
[link] [comments]
-
r/LocalLLaMA We asked OSS-120B and GLM 4.6 to play 1,408 Civilization V games from the Stone Age into the future. Here's what we found. rss
GLM-4.6 Playing Civilization V + Vox Populi (Replay)
We had GPT-OSS-120B and GLM-4.6 play 1,408 full Civilization V games (with Vox Populi/Community Patch activated). In a nutshell: LLMs set strategies for Civilization V's algorithmic AI to execute. Here is what we found:
An overview of our system and results
TLDR: It is now possible to get open-source LLMs to play end-to-end Civilization V games. They are not beating the algorithm-based AI on a very simple prompt, but they do play quite differently.
The boring result: With a simple prompt and little memory, both LLMs did slightly better in the best score they could achieve within each game (+1-2%), but slightly worse in win rates (-1~3%). Despite the large number of games run (2,207 in total, with 919 baseline games), neither metric is significant.
The surprising part: Pure-LLM or pure-RL approaches [1], [2] couldn't get an AI to play and survive full Civilization games. With our hybrid approach, LLMs can survive as long as the game goes (~97.5% LLMs vs. ~97.3% the in-game AI). The model can be as small as OSS-20B in our internal test. Moreover, the two models developed completely different playstyles.
- OSS-120B went full warmonger: +31.5% more Domination victories, -23% fewer Cultural victories compared to baseline
- GLM-4.6 played more balanced, leaning into both Domination and Cultural strategies
- Both models preferred the Order ideology (communist-like, ~24% more likely) over Freedom (democratic-like)
Cost/latency (OSS-120B):
- ~53,000 input / 1,500 output tokens per turn
- ~$0.86/game (OpenRouter pricing as of 12/2025)
- Input tokens scale linearly as the game state grows.
- Output stays flat: models don't automatically "think harder" in the late game.
Watch more:
Try it yourself:
- The Vox Deorum system is 100% open-sourced and currently in beta testing
- GitHub Repo: https://github.com/CIVITAS-John/vox-deorum
- GitHub Release: https://github.com/CIVITAS-John/vox-deorum/releases
- Works with any OpenAI-compatible local provider
We exposed the game as an MCP server, so your agents can play the game with you. Your thoughts are greatly appreciated:
- What's a good way to express the game state more efficiently? Consider a late-game turn where you have 20+ cities and 100+ units. Easily 50k+ tokens. Could multimodal help?
- How can we get LLMs to play better? I have considered RAG, but there is really little data to "retrieve" here. Possibly self-play + self-reflection + long-term memory?
- How are we going to design strategy games if LLMs are to play with you? I have put an LLM spokesperson for civilizations as an example, but there is surely more to do?
Join us:
- I am hiring a PhD student for Fall '26, and we are expanding our game-related work rapidly. Shoot me a DM if you are interested!
- I am happy to collaborate with anyone interested in furthering this line of work.
submitted by /u/vox-deorum
[link] [comments]
-
livestorejs/livestore v0.4.0-dev.22 release
Release 0.4.0-dev.22, including Chrome Extension -
r/reverseengineering Zelda: Twilight Princess Has Been Decompiled rss
submitted by /u/r_retrohacking_mod2
[link] [comments] -
HexRaysSA/plugin-repository commits sync repo: +1 plugin, +2 releases rss
sync repo: +1 plugin, +2 releases ## New plugins - [fwhunt-ida](https://github.com/binarly-io/fwhunt-ida) (1.0.2, 1.0.1) -
r/reverseengineering Trafexia - Mobile Traffic Interceptor rss
submitted by /u/danieldev23
[link] [comments] -
r/wiesbaden Loud explosion rss
In the night from December 23 to 24, did anyone else hear a massive bang in the city center around 1 a.m.? It didn't sound like a firecracker or the like, but like a full-blown deflagration or explosion.
Merry Christmas to you all.
submitted by /u/Buschhannes
[link] [comments] -
batrachianai/toad The XMas Eve Release release
[0.5.6] - 2025-12-24
Fixed
- Fixed agent selector not focusing on run.
- Added project directory as second argument to toad acp rather than a switch.
-
gulbanana/gg GG 0.36.3 release
Fixed
- CLI build: added dock icon on macOS.
- CLI build: the advertised --foreground now actually exists and works.
- GG now respects the snapshot.auto-track setting.
-
r/wiesbaden Store/Jeweler Recommendations for buying 24 Carat Gold Jewelry rss
I am in Wiesbaden for a short term work assignment. I want to buy my daughter some gold jewelry.
I need a recommendation for a jewelry store that sells 24 carat gold bracelets, necklaces or earrings.
Does anyone have a recommendation of a store or jeweler in the Mainz/Wiesbaden area?
Thank you.
submitted by /u/J-V1972
[link] [comments] -
r/reverseengineering WIBU CodeMeter claims AES-256 military-grade encryption but entropy analysis reveals simple XOR rss
submitted by /u/Signal-Setting-7117
[link] [comments] -
r/LocalLLaMA The current state of sparse-MoE's for agentic coding work (Opinion) rss
submitted by /u/ForsookComparison
[link] [comments]
-
r/LocalLLaMA New 1B parameter open-source coding model getting 76% on HumanEval [shameless but proud self-plug] rss
Hey folks, merry festive season to you all. Hope you are staying safe!
Wanted to share a new open-source coding model release that might be interesting to y'all here. My team proudly published it this morning (we are a small startup out of Australia). It's called Maincoder-1B: a 1B-parameter code generation model that gets 76% on HumanEval, which is unusually high for a model this small (so far it's ranking best-in-class for open models in that size range).
Our focus isn't on scaling up, but on making small models actually good. We know that with a lot of real-world use cases, such as interactive tools, local/offline coding, batch refactors, and search-based program synthesis, you care more about latency, cost, and fast rollouts than having a massive model.
Some key points to note:
- Designed for low-latency and low-cost inference
- Can run locally or on constrained hardware
- Useful for systems that need many cheap generations (search, verification, RL-style loops)
- Supports fine-tuning to personal preferences
- Released under Apache 2.0
It does have the expected limitations: ~2k context window, and it's best at small, self-contained tasks, not large codebases or safety-critical code without human review.
Weights and benchmarks and all that are here:
https://huggingface.co/Maincode/Maincoder-1B
The full release note is here: https://maincode.com/maincoder/
Keen to hear your thoughts, particularly where small-but-strong coding models fit best today. Thanks in advance for your support :) We are excited to have got this over the line!
submitted by /u/More_Article9837
[link] [comments] -
HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
sync repo: +1 plugin, +1 release ## New plugins - [AutoRE](https://github.com/a1ext/auto_re) (2.2.0)
-
- December 23, 2025
-
IDA Plugin Updates IDA Plugin Updates on 2025-12-23 rss
IDA Plugin Updates on 2025-12-23
New Releases:
Activity:
- auto_re
- 4c2f062f: feat: IDA Plugin manager support
- chernobog
- f69ba53b: Update readme
- community-malware-research
- 9088eb4a: Adding files for fake putty IDA video
- IDA-FastAnalysis
- IDA-VTableExplorer
- 75bcad1d: feat: update vtable annotations to include parent class inheritance a…
- ida_domain_mcp
- 55eaa285: Change to OPENAI_DEFAULT_MODEL
- IDAPluginList
- c37f20b9: Update
- OpenLumina
- auto_re
-
r/reverseengineering Finding Jingle Town: Debugging an N64 Game without Symbols rss
submitted by /u/Mediocre_Ad_1923
[link] [comments] -
r/wiesbaden Group activities for birthdays? :) rss
Hi, I'm currently wondering what else you can do together in Wiesbaden besides the Superfly hall, painting ceramics, or bowling. Do you have any ideas for other places worth going with a few people for my birthday?
Thanks!
submitted by /u/FunkINFJ
[link] [comments] -
r/reverseengineering Nintendo 64 Decomp Update: Harvest Moon 64 is now 100% decompiled! rss
submitted by /u/harvestwhisperer
[link] [comments] -
r/reverseengineering Fabrice Bellard Releases MicroQuickJS rss
submitted by /u/Ok-Tune-1346
[link] [comments] -
r/reverseengineering Fake PuTTY Installer Malware Analysis with IDA Pro rss
submitted by /u/jershmagersh
[link] [comments] -
News Minimalist FDA approves first weight loss pill + 8 more stories rss
In the last 5 days ChatGPT read 146994 top news stories. After removing previously covered events, there are 9 articles with a significance score over 5.5.

[5.5] FDA approves first GLP-1 pill for obesity - statnews.com (+77)
The FDA has approved the first oral GLP-1 pill for obesity, a version of Novo Nordisk's Wegovy, potentially expanding access to effective weight loss treatments starting in January.
The 25-milligram daily medication demonstrated 14% weight loss in trials, mirroring the efficacy of the injectable version. It also reduces cardiovascular risks and will initially cost $150 per month for the lowest dosage through direct-to-consumer channels.
This peptide-based pill requires strict morning fasting for absorption. Meanwhile, competitor Eli Lilly is developing a small-molecule pill, orforglipron, which may offer easier manufacturing and fewer dietary restrictions once approved.
[6.4] European governments agree to introduce a digital euro - nrc.nl (Dutch) (+5)
European governments have agreed to create a digital euro, establishing a central bank-backed public currency to safeguard the continent's financial sovereignty and payment resilience.
This public currency would offer a secure alternative to commercial bank accounts and US-based payment providers. Pending European Parliament approval, the digital euro could launch by 2029 via apps or cards, featuring offline capabilities to ensure transaction continuity during cyberattacks.
The proposal guarantees privacy and bans programmable spending to mirror the utility of physical cash. While merchants must eventually accept the currency, commercial banks remain critical of the implementation costs and competition.
[6.1] TikTok agrees to sell US operations to American investors - theguardian.com (+93)
TikTok has signed a binding deal to sell its United States operations to a group of American investors including Oracle and Silver Lake, preventing a ban and ensuring continued service.
The agreement, set to close January 22, grants Oracle, Silver Lake, and MGX a combined 45 percent stake. Oracle will license TikTok's recommendation algorithm to address long-standing national security concerns.
Highly covered news with significance over 5.5
[5.8] EU leaders approve €90 billion loan for Ukraine despite dissent from Hungary, Slovakia, and Czech Republic - irishtimes.com (+65)
[5.6] FCC bans new Chinese-made drones over national security concerns - apnews.com (+21)
[5.6] EU court rules for refugee in landmark case against Frontex - independent.co.uk (+2)
[5.6] Austria's top court rules Meta's ad model illegal, orders overhaul of user data practices in EU - channelnewsasia.com (+4)
[5.6] OpenAI launches an app store inside ChatGPT - tomsguide.com (+7)
[5.5] Trump appoints special envoy to Greenland to pursue acquisition - nrc.nl (Dutch) (+148)
Thanks for reading!
- Vadim
You can personalize this newsletter with premium.
-
r/LocalLLaMA Qwen released Qwen-Image-Edit-2511, a major upgrade over 2509 rss
Hugging Face: https://huggingface.co/Qwen/Qwen-Image-Edit-2511
What's new in 2511:
- Stronger multi-person consistency for group photos and complex scenes
- Built-in popular community LoRAs, no extra tuning required
- Enhanced industrial & product design generation
- Reduced image drift with dramatically improved character & identity consistency
- Improved geometric reasoning, including construction lines and structural edits
From identity-preserving portrait edits to high-fidelity multi-person fusion and practical engineering & design workflows, 2511 pushes image editing to the next level.
submitted by /u/Difficult-Cap-7527
[link] [comments]
-
r/LocalLLaMA AMA With Z.AI, The Lab Behind GLM-4.7 rss
Hi r/LocalLLaMA
Today we are hosting Z.AI, the research lab behind GLM-4.7. We're excited to have them open up and answer your questions directly.
Our participants today:
- Yuxuan Zhang, u/YuxuanZhangzR
- Qinkai Zheng, u/QinkaiZheng
- Aohan Zeng, u/Sengxian
- Zhenyu Hou, u/ZhenyuHou
- Xin Lv, u/davidlvxin
The AMA will run from 8 AM to 11 AM PST, with the Z.AI team continuing to follow up on questions over the next 48 hours.
submitted by /u/zixuanlimit
[link] [comments] -
r/wiesbaden Rental options rss
submitted by /u/AwareLocation5665
[link] [comments] -
Simon Willison Cooking with Claude rss
I've been having an absurd amount of fun recently using LLMs for cooking. I started out using them for basic recipes, but as I've grown more confident in their culinary abilities I've leaned into them for more advanced tasks. Today I tried something new: having Claude vibe-code up a custom application to help with the timing for a complicated meal preparation. It worked really well!
A custom timing app for two recipes at once
We have family staying at the moment, which means cooking for four. We subscribe to a meal delivery service called Green Chef, mainly because it takes the thinking out of cooking three times a week: grab a bag from the fridge, follow the instructions, eat.
Each bag serves two portions, so cooking for four means preparing two bags at once.
I have done this a few times now and it is always a mad flurry of pans and ingredients and timers and desperately trying to figure out what should happen when and how to get both recipes finished at the same time. It's fun but it's also chaotic and error-prone.
This time I decided to try something different, and potentially even more chaotic and error-prone: I outsourced the planning entirely to Claude.
I took this single photo of the two recipe cards side-by-side and fed it to Claude Opus 4.5 (in the Claude iPhone app) with this prompt:
Extract both of these recipes in as much detail as possible

This is a moderately challenging vision task in that there is quite a lot of small text in the photo. I wasn't confident Opus could handle it.
I hadn't read the recipe cards myself. The responsible thing to do here would be a thorough review or at least a spot-check - I chose to keep things chaotic and didn't do any more than quickly eyeball the result.
I asked what pots I'd need:
Give me a full list of pots I would need if I was cooking both of them at once
Then I prompted it to build a custom application to help me with the cooking process itself:
I am going to cook them both at the same time. Build me a no react, mobile, friendly, interactive, artifact that spells out the process with exact timing on when everything needs to happen have a start setting at the top, which starts a timer and persists when I hit start in localStorage in case the page reloads. The next steps should show prominently with countdowns to when they open. The full combined timeline should be shown slow with calculated times tor when each thing should happen
I copied the result out onto my own hosting (you can try it here) because I wasn't sure if localStorage would work inside the Claude app and I really didn't want it to forget my times!
Then I clicked "start cooking"!

Here's the full Claude transcript.
There was just one notable catch: our dog, Cleo, knows exactly when her dinner time is, at 6pm sharp. I forgot to mention this to Claude, which had scheduled several key steps colliding with Cleo's meal. I got woofed at. I deserved it.
To my great surprise, it worked. I followed the recipe guide to the minute and served up both meals exactly 44 minutes after I started cooking.

The best way to learn the capabilities of LLMs is to throw tasks at them that may be beyond their abilities and see what happens. In this case I fully expected that something would get forgotten or a detail would be hallucinated and I'd end up scrambling to fix things half way through the process. I was surprised and impressed that it worked so well.
Some credit for the app idea should go to my fellow hackers at /dev/fort 2 in 2009, when we rented Knockbrex Castle in Dumfries, Scotland for a week and attempted to build a cooking timer application for complex meals.
Generating recipes from scratch
Most of my other cooking experiments with LLMs have been a whole lot simpler than this: I ask for a recipe, ask for some variations and then cook one of them and see what happens.
This works remarkably well considering LLMs have no taste buds.
I've started to think of this as asking LLMs for the average recipe for a dish, based on all of the recipes they have hoovered up during their training. It turns out the mean version of every guacamole recipe on the internet is a decent guacamole!
Here's an example of a recipe I tried recently that worked out really well. I was helping Natalie run her ceramic stall at the farmers market and the stall next to us sold excellent dried beans. I've never used dried beans before, so I took a photo of their selection and asked Claude what I could do with them:

Identify these beans
It took a guess at the beans, then I said:
Get me excited about cooking with these! If I bought two varietiew what could I make
"Get me excited" switches Claude into a sort of hype-man mode, which is kind of entertaining:
Oh, you're about to enter the wonderful world of bean cooking! Let me get you pumped about some killer two-bean combos: [...]
Mixed bean salad with lemon, olive oil, fresh herbs, cherry tomatoes - light but satisfying [...]
I replied:
OK Bean salad has me interested - these are dried beans. Give me some salad options I can make that would last a long time in the fridge
... and after some back and forth we arrived on the recipe in this transcript, which I cooked the following day (asking plenty of follow-up questions) and thoroughly enjoyed.
I've done this a bunch of times with a bunch of different recipes across both Claude and ChatGPT and honestly I've not had a notable miss yet. Being able to say "make it vegan" or "I don't have coriander, what can I use instead?" or just "make it tastier" is a really fun way to explore cooking.
It's also fun to repeat "make it tastier" multiple times to see how absurd you can get.
I really want someone to turn this into a benchmark!
Cooking with LLMs is a lot of fun. There's an opportunity here for a really neat benchmark: take a bunch of leading models, prompt them for recipes, follow those recipes and taste-test the results!
The logistics of running this are definitely too much for me to handle myself. I have enough trouble cooking two meals at once, for a solid benchmark you'd ideally have several models serving meals up at the same time to a panel of tasters.
If someone else wants to try this please let me know how it goes!
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
sacha chua :: living an awesome life 2025-12-22 Emacs news rss
- Upcoming events (iCal file, Org):
- Emacs APAC: Emacs APAC meetup (virtual) https://emacs-apac.gitlab.io/announcements/ Sat Dec 27 0030 America/Vancouver - 0230 America/Chicago - 0330 America/Toronto - 0830 Etc/GMT - 0930 Europe/Berlin - 1400 Asia/Kolkata - 1630 Asia/Singapore
- Emacs Berlin (hybrid, in English) https://emacs-berlin.org/ Wed Dec 31 0930 America/Vancouver - 1130 America/Chicago - 1230 America/Toronto - 1730 Etc/GMT - 1830 Europe/Berlin - 2300 Asia/Kolkata – Thu Jan 1 0130 Asia/Singapore
- M-x Research: TBA https://m-x-research.github.io/ Fri Jan 2 0800 America/Vancouver - 1000 America/Chicago - 1100 America/Toronto - 1600 Etc/GMT - 1700 Europe/Berlin - 2130 Asia/Kolkata – Sat Jan 3 0000 Asia/Singapore
- Emacs configuration:
- Emacs Lisp:
- GitHub - Kinneyzhang/tp: Text properties library for Emacs Lisp.
- GitHub - Kinneyzhang/elog: A Powerful Logging System for Emacs Lisp (Reddit)
- Marcin Borkowski: Summing effort estimates, my way
- Listful Andrew: Awhile â Time difference from now in different formats (Emacs package)
- Listful Andrew: When is Christmas on a Saturday?
- Treat Emacs as an Elisp Runtime using Eask | Jen-Chieh's Website
- EmacsConf 2025: Some problems of modernizing Emacs - Eduardo Ochs (he/him) (25:23)
- Kana: Juicemacs: Exploring Speculative JIT Compilation for ELisp (HN, lobste.rs)
- Appearance:
- Navigation:
- Writing:
- Composing Text in Emacs: Unicode, Emojis, and the Power of C-x 8 (Reddit)
- Inline image display in markdown text
- Day 22: managing a bibliography · Emacs expliqué à mes enfants
- EmacsConf 2025 Q&A: Emacs as a fully-fledged reference manager - Vidianos Giannitsis (he/him) (22:37)
- Emacs Lisp functions to preview quarto documents asynchronously on buffer save, and to kill existing quarto preview processes · GitHub (@vurtuali@fosstodon.org)
- Org Mode:
- Org Mode requests: [RFC] Allow empty headlines without trailing space
- Day 15: capturing an idea · Emacs expliqué à mes enfants
- EmacsConf 2025: Bookclub tapas - Maddie Sullivan (she/her) (31:26)
- Get Focused with org-pomodoro - YouTube (@curtismchale@mastodon.social)
- Process PDFs with Emacs and Org Mode (01:48)
- LuciusChen/discourse-graphs: An Emacs org-mode implementation of the Discourse Graphs protocol for knowledge synthesis. (Reddit)
- [RELEASE] org-transclusion-blocks v0.4 - var expansion + PROPERTY inheritance (Reddit)
- Org Mode tip for using R to merge Org Mode tables using a src block - @bthalpin.bsky.social
- (Update) org-supertag 5.6: Decoupling UI from Data, Smarter Sync, and Plugin Power
- Tips on Emacs Lisp Development for Contributing to Org-mode (@tiang@mastodon.social)
- Import, export, and integration:
- How to export your org-mode and org-agenda to Apple Reminders · GitHub (HN)
- EmacsConf 2025 Q&A: org-gmail: A deep integration of Gmail into your Org Mode (08:22)
- EmacsConf 2025: LaTeX export in org-mode: the overhaul - Pedro A. Aranda Gutiérrez (he, him) (32:35)
- EmacsConf 2025: Gardening in Emacs: A Windows user's tale of tending, tweaking, and triumph (17:37)
- 4honor/org-drawio: Open, create, export, and display draw.io in org mode (Reddit)
- I built a visual Timeline for Org-Roam (Bi-directional sync + HTML/JS UI) (Reddit)
- Charles Choi: Export Org to Markdown with the Clipboard (Irreal)
- Denote:
- Completion:
- Coding:
- Tip about using C-c C-v to view a file in a web browser when you're in html-mode
- Tip about using web-mode-indentless-attributes (@jasalt@fosstodon.org)
- Greg Newman: Trying Ty for my LSP in Emacs
- Mike Olson: ty: A Fast Python Type Checker and LSP for Emacs
- EmacsConf 2025 Q&A: Interactive Python programming in Emacs - David Vujic (he/him) (18:45)
- James Dyer: Setting Up Emacs for C# Development on Windows
- EmacsConf 2025 Q&A: Common Lisp images communicating like-a-human through shared Emacs slime and eev (18:25)
- Developing Android APP With Emacs (@tonyptdm@mastodon.social)
- Tip about customizing vc-handled-backends if you only use one or two
- fzf, magit, and ast-grep demo (12:41)
- Math:
- Web:
- Mail, news, and chat:
- Multimedia:
- Fun:
- AI:
- Community:
- Other:
- emacs-jp/dmacro: Repeated detection and execution of key operation (Reddit)
- Emacs: use font-lock to add unit conversion to temperatures · GitHub (@redblobgames.com on Bluesky)
- ssh/load-key function for loading a key into your SSH agent for a certain period of time
- I ditched my terminal for emacs - change the keybinding that opens the terminal to open emacs instead
- EmacsConf 2025 Q&A: An introduction to the Emacs Reader - Divyá (19:03)
- [14] Emacs Reader: Triaging after Hiatus - 12/22/2025, 2:51:25 PM - Dyne.org TV
- Hel - Helix Emulation Layer (Reddit)
- Made a macOS-only alternative to emacs-everywhere using Hammerspoon (Reddit)
- Getting Emacs And MacOS To Play Nice | Brain Baking (HN)
- Gene Goykhman: Building Emacs 30 on macOS
- Emacs development:
- Add functions to set frame size and position in one compound step
- Add binary format specifications '%b' and '%B'
- Remove binary-as-unsigned (bug#79990)
- System GUI taskbar and progress reporter hooks (bug#79859)
- Add query-replace-read-transpose-from-to
- hideshow: Support nested comment block in 'hs-hide-level-recursive'
- hi-lock: Use active region for default values in more places
- Make VC-Dir's 'd' able to delete unregistered files
- New M-RET, M-p, M-n commands in Log View mode
- New bookmark-after-load-file-hook (bug#80003)
- ; lisp/saveplace.el, etc/NEWS: Refinements to bug#75837.
- New optional recentf autosave timer (bug#80002)
- New packages:
- dired-du-duc: Speed up dired-du with duc (MELPA)
- eager-state: Eagerly persist data onto disk (MELPA)
- fastbuild-bff-mode: Major mode for FASTBuild BFF files (MELPA)
- gptel-forge-prs: Generate PR descriptions for forge using gptel (MELPA)
- iwd-manager: Manage IWD via the D-Bus interface (MELPA)
- lisp-docstring-toggle: Toggle Lisp docstring visibility (MELPA)
- markdown-mermaid: Preview Mermaid code blocks in Markdown (MELPA)
- ob-duckdb: Org Babel integration for DuckDB CLI (MELPA)
- orgit-file: Support for links to files in Git repositories (MELPA)
- royal-hemlock-theme: Soothing royal-blue light-theme (MELPA)
- whisper: Speech-to-text using Whisper.cpp (MELPA)
Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!
You can comment on Mastodon or e-mail me at sacha@sachachua.com.
-
matklad Newtype Index Pattern In Zig rss
Newtype Index Pattern In Zig
Dec 23, 2025
In efficiency-minded code, it is idiomatic to use indexes rather than pointers. Indexes have several advantages:
First, they save memory. Typically a 32-bit index is enough, a saving of four bytes per pointer on 64-bit architectures. I haven't seen this measured, but my gut feeling is that this is much more impactful than it might initially seem. On modern architectures, saving memory saves time (and energy) as well, because the computing bottleneck is often the bit pipe between the memory and the CPU, not the computation per se. Dense data structures use CPU cache more efficiently, removing the prohibitive latency of memory accesses. Bandwidth savings are even better: a smaller item size obviously improves bandwidth utilization, but having more items in cache obviates the need to use the bandwidth in the first place. Best case, the working set fits into the CPU cache!
Note well that memory savings are evenly spread out. Using indexes makes every data structure slightly more compact, which improves performance across the board, regardless of hotspot distribution. It's hard to notice the potential for such a saving in a profiler, and even harder to test out. For these two reasons, I would default to indexes for code where speed matters, even when I don't have the code written yet to profile it!
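As a back-of-the-envelope illustration (my own sketch, not from the post): in Rust, even a niche-optimized nullable pointer still occupies a full machine word on 64-bit targets, while a 32-bit index halves it, and the saving compounds per link:

```rust
use std::mem::size_of;

fn main() {
    // A nullable heap pointer still occupies a full 8-byte word on
    // 64-bit targets (the niche optimization means Option adds no
    // extra space, but the word itself remains).
    assert_eq!(size_of::<Option<Box<u32>>>(), 8);

    // A 32-bit index is half that; a node with two links shrinks
    // from 16 bytes to 8.
    assert_eq!(size_of::<u32>(), 4);
    assert_eq!(size_of::<[Option<Box<u32>>; 2]>(), 16);
    assert_eq!(size_of::<[u32; 2]>(), 8);
}
```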
There's also a more subtle way in which indexes save memory. Using indexes means storing multiple items in an array, but such dense storage contains extra information in the relative positions of the items. If you need to store a list of items, you can often avoid materializing the list of indexes by storing a range "pointing" into the shared storage. Occasionally, you can even do the UTF-8 trick and use just a single bit to mark the end of a list.
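A minimal sketch of that range trick (my own illustration, with hypothetical names): instead of each node owning its own list of child indexes, all child lists live in one shared array and each node stores only a (start, len) range into it:

```rust
/// All child lists, for all nodes, packed into one shared array.
struct Children {
    storage: Vec<u32>,       // child indexes, grouped by parent
    ranges: Vec<(u32, u32)>, // per node: (start, len) into `storage`
}

impl Children {
    /// Children of `node` are a slice of the shared storage:
    /// no per-node allocation, no materialized index list.
    fn of(&self, node: u32) -> &[u32] {
        let (start, len) = self.ranges[node as usize];
        &self.storage[start as usize..][..len as usize]
    }
}

fn main() {
    // Node 0 has children 1 and 2; node 1 has child 3; 2 and 3 are leaves.
    let c = Children {
        storage: vec![1, 2, 3],
        ranges: vec![(0, 2), (2, 1), (3, 0), (3, 0)],
    };
    assert_eq!(c.of(0), &[1, 2][..]);
    assert_eq!(c.of(1), &[3][..]);
    assert!(c.of(3).is_empty());
}
```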
The second benefit of indexes is more natural modeling of cyclic and recursive data structures. Creating a cycle fundamentally requires mutability somewhere ("tying the knot" in Haskell relies on mutability of lazy thunks). This means that you need to make some pointers nullable, and that usually gets awkward even without a borrow checker behind your back. Even without cycles, just with recursion, pointers are problematic, due to a combination of two effects:
- pointers encourage recursive functions, and
- recursive data structures lead to arbitrarily long (but finite) chains of pointers.
The combination works fine at small scale, but then it fails with stack overflow in production every single time, requiring awkward work-arounds. For example, rustc serializes error traces from nested macro expansions as a deeply nested tree of JSON objects, which requires using the stacker hack when parsing the output (which you'll learn about only after crashes in the hands of macro connoisseur users).
Finally, indexes greatly help serialization: they make it trivial to communicate data structures both through space (sending a network message) and time (saving to disk and reading later). Indexes are naturally relocatable; it doesn't matter where in memory they are. But this is just half of the serialization benefit. The other half is that, because everything is in a few arrays, you can do bulk serialization. You don't need to write the items one by one; you can directly memcpy arrays around (but be careful to not leak data via padding, and be sure to checksum the result).
The big problem with "naive" u32 indexes is of course using the right index with the wrong array, or vice versa. The standard solution here is to introduce a newtype wrapper around the raw index. @andrewrk recently popularized a nice "happy accident of language design" pattern for this in Zig. The core idea is to define an index via a non-exhaustive enum:

```zig
const ItemIndex = enum(u32) { _ };
```
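For comparison, and purely as my own sketch (not from the post), the classic Rust spelling of the same idea is a tuple-struct newtype; the type system then refuses to use one array's index with another array:

```rust
/// Plain u32 under the hood, but a distinct type per array.
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
struct NodeIndex(u32);

#[derive(Copy, Clone, PartialEq, Eq, Debug)]
struct EdgeIndex(u32);

/// Only a NodeIndex can address the nodes array.
fn node_weight(nodes: &[i32], index: NodeIndex) -> i32 {
    nodes[index.0 as usize]
}

fn main() {
    let nodes = [10, 20, 30];
    let n = NodeIndex(2);
    let e = EdgeIndex(2);
    assert_eq!(node_weight(&nodes, n), 30);
    // node_weight(&nodes, e); // compile error: expected NodeIndex, found EdgeIndex
    let _ = e; // silence the unused-variable warning
}
```

Unlike the Zig version below, Rust can also enforce encapsulation by making the field private to a module.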
In Zig, enum designates a strongly-typed collection of integer constants, not a Rust-style ADT (there's union(enum) for that). By default a backing integer type is chosen by the compiler, but you can manually override it with the enum(u32) syntax:

```zig
const Color = enum(u16) { red, green, blue };
```

Finally, Zig allows making enums non-exhaustive with _. In a non-exhaustive enum, any numeric value is valid, and some have symbolic labels:

```zig
const FontWeight = enum(u16) {
    normal = 400,
    bold = 700,
    _,

    pub fn value(weight: FontWeight) u16 {
        return @intFromEnum(weight);
    }
};

test FontWeight {
    assert(FontWeight.value(.normal) == 400);
    const bold: FontWeight = @enumFromInt(700);
    assert(bold == .bold);
}
```

The @intFromEnum and @enumFromInt builtins switch abstraction level between a raw integer and an enum value. So, const ItemIndex = enum(u32) { _ }; is a way to spell "u32, but a distinct type". Note that there's no strong encapsulation boundary here: anyone can @enumFromInt. Zig just doesn't provide language-enforced encapsulation mechanisms.
Putting everything together, this is how I would model an n-ary tree with parent pointers in Zig:
```zig
pub const Tree = struct {
    nodes: []const Node.Data,

    pub const Node = enum(u32) {
        root = 0,
        invalid = std.math.maxInt(u32),
        _,

        pub const Data = struct {
            parent: Node, // .invalid means no parent.
            children: struct {
                index: u32,
                count: u32,
            },

            comptime {
                assert(@sizeOf(Data) == 12);
            }
        };
    };

    fn get(
        tree: *const Tree,
        node: Node,
    ) Node.Data {
        return tree.nodes[@intFromEnum(node)];
    }

    pub fn parent(
        tree: *const Tree,
        node: Node,
    ) ?Node {
        const result = tree.get(node).parent;
        return if (result == .invalid) null else result;
    }

    pub fn children(
        tree: *const Tree,
        node: Node,
    ) []const Node {
        const range = tree.get(node).children;
        return tree.nodes[range.index..][0..range.count];
    }
};
```

Some points of note:
- As usual with indexes, you start by defining the collective noun first: a `Tree` rather than a `Node`.
- In my experience, you usually don't want an `Index` suffix in your index types, so `Node` is just the `enum(u32)`, not the underlying data.
- Nested types are good! `Node.Data` feels just right.
- For readability, the order is fields, then nested types, then functions.
- In `Node`, we have a couple of symbolic constants. `.root` is for the root node that is stored first; `.invalid` is for whenever we want to apply offensive programming and make bad indexes blow up. Here, we use `.invalid` for a "null" parent. An alternative would be to use `?Node`, but that would waste space, or to make the root its own parent.
- If you care about performance, it's a good idea to `comptime assert` the sizes of structures, not to prevent changes, but as a comment that tells the reader just how large the struct is.
- I don't know whether I like `index/count` or `start/end` more for representing ranges; I use the former just because the names align in length.
- Both `tree.method(node)` and `node.method(tree)` are reasonable shapes for the API. I don't know which one I prefer. I default to the former because it works even if there are several node arguments.
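For comparison, here is a minimal Rust sketch of the same newtype-index idea (my own illustration with invented names, not code from the post): a tuple struct around `u32` plays the role of Zig's non-exhaustive enum.

```rust
// Hedged sketch: a newtype index in Rust. Names are illustrative.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Node(u32); // "u32, but a distinct type"

impl Node {
    const ROOT: Node = Node(0);
    const INVALID: Node = Node(u32::MAX); // sentinel for "no parent"
}

struct NodeData {
    parent: Node,
}

struct Tree {
    nodes: Vec<NodeData>,
}

impl Tree {
    fn get(&self, node: Node) -> &NodeData {
        // Indexing with a Node into the wrong array is now a type error,
        // not a silent wrong lookup.
        &self.nodes[node.0 as usize]
    }

    fn parent(&self, node: Node) -> Option<Node> {
        let result = self.get(node).parent;
        if result == Node::INVALID { None } else { Some(result) }
    }
}
```

Unlike Zig's `@enumFromInt`, constructing `Node(n)` here can be restricted by module privacy, so Rust can optionally enforce the encapsulation boundary that Zig leaves open.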
P.S. Apparently I also wrote a Rust version of this post a while back? https://matklad.github.io/2018/06/04/newtype-index-pattern.html
-
đ matklad Static Allocation For Compilers rss
Static Allocation For Compilers
Dec 23, 2025
TigerBeetle famously uses "static allocation". Infamously, the use of the term is idiosyncratic: what is meant is not `static` arrays, as found in embedded development, but rather a weaker "no allocation after startup" form. The amount of memory a TigerBeetle process uses is not hard-coded into the ELF binary; it depends on the runtime command-line arguments. However, all allocation happens at startup, and there's no deallocation. The long-lived event loop goes round and round happily without `alloc`.

I've wondered for years whether a similar technique is applicable to compilers. It seemed impossible, but today I've managed to extract something actionable from this idea.
Static Allocation
Static allocation depends on the physics of the underlying problem. And distributed databases have surprisingly simple physics, at least in the case of TigerBeetle.
The only inputs and outputs of the system are messages. Each message is finite in size (1 MiB). The actual data of the system is stored on disk and can be arbitrarily large. But the diff applied by a single message is finite. And, if your input is finite and your output is finite, it's actually quite hard to need to allocate extra memory!
This is worth emphasizing: it might seem like doing static allocation is tough and requires constant vigilance and manual accounting for resources. In practice, I learned that it is surprisingly compositional. As long as the inputs and outputs of a system are finite, non-allocating processing is easy. And you can put two such systems together without much trouble. routing.zig is a good example of such an isolated subsystem.
The only issue here is that there isn't a physical limit on how many messages can arrive at the same time. Obviously, you can't process arbitrarily many messages simultaneously. But in the context of a distributed system over an unreliable network, a safe move is to drop a message on the floor if the required processing resources are not available.
Counter-intuitively, not allocating is simpler than allocating, provided that you can pull it off!
For Compilers
Alas, it seems impossible to pull it off for compilers. You could say something like "hey, the largest program will have at most one million functions", but that would lead to both wasted memory and a poor user experience. You could also use a single yolo arena of a fixed size, like I did in Hard Mode Rust, but that isn't at all similar to "static allocation". With arenas, the size is fixed explicitly, but you can OOM. With static allocation it is the opposite: no OOM, but you don't know how much memory you'll need until startup finishes!
The "problem size" for a compiler isn't fixed: both the input (source code) and the output (executable) can be arbitrarily large. But that is also the case for TigerBeetle; the size of the database is not fixed, it's just that TigerBeetle gets to cheat and store it on disk rather than in RAM. And TigerBeetle doesn't do "static allocation" on disk: it can fail with `ENOSPACE` at runtime, and it includes a dynamic block allocator to avoid that for as long as possible by reusing no-longer-relevant sectors.

So what we could say is that a compiler consumes arbitrarily large input and produces arbitrarily large output, but those "do not count" for the purpose of static memory allocation. At the start, we set aside an "output arena" for storing the finished, immutable results of the compiler's work. We then say that this output is accumulated by processing a sequence of chunks, where chunk size is strictly finite. While limiting the total size of the code base is unreasonable, limiting a single file to, say, 4 MiB (runtime-overridable) is fine. Compiling then essentially becomes a "stream processing" problem, where both inputs and outputs are arbitrarily large, but the filter program itself must execute in O(1) memory.
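To make the shape concrete, here is a hedged Rust sketch of this "O(1) scratch, O(N) output" discipline; the names, the per-chunk budget, and the uppercase "lowering" stand-in are all my own invention, not anything from an actual compiler:

```rust
// Hedged sketch: a fixed scratch buffer is reused for every chunk, and
// only the append-only output arena grows with the size of the input.

const CHUNK_MAX: usize = 4 * 1024 * 1024; // per-file limit (runtime-overridable in spirit)

struct Compiler {
    /// O(N) output arena: finished, immutable results only grow.
    output: Vec<u8>,
    /// O(1) scratch: allocated once at startup, cleared between chunks,
    /// never reallocated.
    scratch: Vec<u8>,
}

impl Compiler {
    fn new() -> Compiler {
        Compiler {
            output: Vec::new(),
            scratch: Vec::with_capacity(CHUNK_MAX),
        }
    }

    /// Process one finite chunk (e.g. one source file). Rejects chunks
    /// over the fixed budget instead of allocating more scratch space.
    fn compile_chunk(&mut self, source: &[u8]) -> Result<(), &'static str> {
        if source.len() > CHUNK_MAX {
            return Err("chunk exceeds fixed budget");
        }
        self.scratch.clear(); // reuse capacity, no allocation
        // Stand-in for real per-chunk work done inside the scratch buffer:
        self.scratch.extend(source.iter().map(|b| b.to_ascii_uppercase()));
        // Commit the finished result to the append-only arena.
        self.output.extend_from_slice(&self.scratch);
        Ok(())
    }
}
```

The key invariant is that `scratch` never grows past its startup capacity, so per-chunk processing cannot OOM, while the unbounded growth is confined to `output`.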
With this setup, it is natural to use indexes rather than pointers for the "output data", which then makes it easy to persist it to disk between changes. And it's also natural to think about "chunks of changes" not only spatially (the compiler sees a new file), but also temporally (the compiler sees a new version of an old file).
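As an illustration of why index-based output data persists so easily, here is a hedged Rust sketch of writing a flat array of plain-old-data records out as one bulk byte copy; the `Record` layout and the toy byte-sum checksum are my assumptions, echoing the padding and checksum caveats from the indexes post above:

```rust
// Hedged sketch: an index-based output arena is a flat array of
// plain-old-data records, so it serializes as one bulk copy.

/// #[repr(C)] with two u32 fields has no padding bytes, so the raw-byte
/// view does not leak uninitialized memory.
#[repr(C)]
#[derive(Clone, Copy, PartialEq, Debug)]
struct Record {
    parent: u32,
    first_child: u32,
}

/// Serialize the whole array in one go, then append a checksum.
fn to_bytes(records: &[Record]) -> Vec<u8> {
    let bytes = unsafe {
        std::slice::from_raw_parts(records.as_ptr() as *const u8, std::mem::size_of_val(records))
    };
    let mut out = bytes.to_vec();
    // Toy checksum so corruption is detected on load; use a real hash in practice.
    let sum: u32 = bytes.iter().map(|&b| b as u32).sum();
    out.extend_from_slice(&sum.to_le_bytes());
    out
}

/// Validate the checksum, then rebuild the records. read_unaligned is
/// needed because a byte buffer carries no alignment guarantee.
fn from_bytes(data: &[u8]) -> Option<Vec<Record>> {
    let (body, tail) = data.split_at(data.len().checked_sub(4)?);
    let sum: u32 = body.iter().map(|&b| b as u32).sum();
    if tail != &sum.to_le_bytes()[..] || body.len() % std::mem::size_of::<Record>() != 0 {
        return None;
    }
    Some(
        body.chunks_exact(std::mem::size_of::<Record>())
            .map(|chunk| unsafe { std::ptr::read_unaligned(chunk.as_ptr() as *const Record) })
            .collect(),
    )
}
```

Because records contain indexes rather than pointers, the loaded copy is valid wherever it lands in memory; nothing needs fixing up after the read.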
Are there any practical benefits here? I don't know! But it seems worth playing around with! I feel that a strict separation between O(N) compiler output and O(1) intermediate processing artifacts can clarify a compiler's architecture, and I won't be too surprised if O(1) processing in compilers leads to simpler code, the same way it does for databases.
-
- December 22, 2025
-
đ IDA Plugin Updates IDA Plugin Updates on 2025-12-22 rss
IDA Plugin Updates on 2025-12-22
New Releases:
Activity:
- auto_re
- e1afa59f: feat: armle api wrappers support
- chernobog
- GTA2_RE
- IDA-FastAnalysis
- idaguides
- 4df31984: simplified Liner
- reshare-ida
- cd077094: typofix
-
đ r/LocalLLaMA DGX Spark: an unpopular opinion rss
I know there has been a lot of criticism of the DGX Spark here, so I want to share some of my personal experience and opinion: I'm a doctoral student doing data science in a small research group that doesn't have access to massive computing resources. We only have a handful of V100s and T4s in our local cluster, and limited access to A100s and L40s on the university cluster (two at a time). Spark lets us prototype and train foundation models, and (at last) compete with groups that have access to high-performance GPUs like the H100 or H200. I want to be clear: Spark is NOT faster than an H100 (or even a 5090). But its all-in-one design and its massive amount of memory (all sitting on your desk) enable us, a small group with limited funding, to do more research. submitted by /u/emdblc
[link] [comments]
-
đ sacha chua :: living an awesome life The week of December 15 to December 21 rss
Monday, December 15
I took my daughter to her gymnastics class. She worked on her cartwheels. She also wants to add an aerial gymnastics class. On the one hand, I had said that if we managed her homework well, it would be easier to say yes. On the other hand, it's good exercise for her health. I think individual training is better for my daughter because she wants to go at her own pace.
For dinner, we made sushi with edamame and miso soup.
The toaster oven stopped working. Fortunately, it's our second toaster oven of the same model, and we have the old one in the garden shed for spare parts. Instead of doing her homework, my daughter helped my husband in the workshop and learned some electronics basics. Then she helped him bake bread. I worried a little about her homework, but I think spending time together was just as good.
They found a ladybug in the old toaster oven. They rescued it and put it in a small jar. I gave it a piece of grape and a bit of paper towel that I dampened. I don't know whether it can survive until spring, but it's here, so we're trying.
My husband asked about the Latin notes we took in 2011. After a short search, I found them. They were in an old TiddlyWiki format, so I converted them to Org Mode to export them as an e-book. I haven't studied Latin in a long time, so I've forgotten everything.
I thought about help: how to help someone, and how to receive help. My friend who was going through a personal crisis wanted help in the form of money, but I think the help he wanted won't actually help him. My daughter didn't want help with her homework. Maybe she thinks her efforts are enough, and maybe that's enough for her. Instead of worrying, I should practice receiving help myself. That's one of the reasons I'm learning French with my tutor, learning to talk about my feelings with my therapist, and appreciating the way my family helps me grow. I can improve my processes so that people can help me. For example, for processing the talk and live-discussion videos, I need to simplify and document the process. If people are busy, that's fine, I'll do it slowly. If people want to help, they can.
Tuesday, December 16
Today I got back to a normal routine. I worked on Deck the Halls on the piano, followed a short exercise video, and finally took a long walk in the park. I don't want to walk on black ice because it's slippery, so I walked on the sidewalk around the park.
Someone discussed the moderation of the #emacs channel on IRC. He seemed frustrated. There isn't much I can do, but I suggested a few things he could try.
I took my daughter to her last art class. She was proud that her work was displayed in the window. She gathered her other pieces into her portfolio to carry them home. She enjoyed the class with her friend, but sometimes found it too noisy, so she doesn't want to continue for now. We'll keep a fairly open schedule without many classes so that we can go skating or play with her friends whenever she feels like it.
In my therapy session, we discussed feelings. I intellectualize difficult situations instead of feeling them, so my homework for the Christmas holidays includes noticing when I use that defense mechanism. I'm also going to keep a feelings journal.
I set up a spell checker thanks to the course « Emacs expliqué à mes enfants » by @vincek.
Wednesday, December 17
I wrote a small function to look up words in a few online dictionaries. Little by little, I'm improving my writing environment.
This afternoon, I had an appointment to get my cargo bike serviced. I rode to the bike shop. The mechanic gave me a quote for the service and some advice about specialized tires for ice.
Then I took the subway, which had a problem. Instead of waiting for the shuttle at Keele station, I walked the short distance home.
I should probably process the conference videos. A little work can get them ready for publication. I'll combine the videos with the normalized audio, review everything, and publish to YouTube and our site. A few videos had conversion problems, so I need to review the last minutes carefully to catch errors.
・・・・・ After school, I took my daughter to the skating rink at the park to play with her friend. They had a lot of fun playing tag with her friend's father, who was too fast for them. I was happy watching them. We drank hot chocolate while the Zamboni resurfaced the ice.
We ate leftovers. After dinner, I worked on the conference videos. Two videos had encoding errors, so I used the original videos and modified our process. My next step is to convert the videos to WebM to upload them to our server. I also need to review the captions, but that can be done gradually.
Thursday, December 18
An important milestone: I'm getting more comfortable writing in French on my phone. That means I can add to my journal anytime, anywhere. I still look up words in the dictionary, which isn't so convenient on mobile because of the small screen, but it's tolerable. At least it can replace doomscrolling Reddit for the umpteenth time. One day I'll be able to dictate to my phone, which would be more useful during winter walks, when typing will be difficult.
I took another long walk in the park. The doctor said walks are good for my health, so I try to take them often. One day I'd like to wander for several hours, but for now, a walk of thirty minutes or an hour is enough.
My husband's sourdough experiments continue. He bought a few bannetons. My daughter helped him with this batch during recess. She likes scoring varied patterns into the bread. It's perfect: spending time together, enjoying food, and practicing art. It takes patience, but that's life, and she can learn the value of things that take time. It's probably more important than high grades at school. (Or at least that's what I tell myself when I worry.)
When I get home, I'll have thirty minutes before her lunch break. I can do a short task, like sending messages or checking videos. My morning self-care routine takes up most of the morning. I wonder how other people organize themselves.
・・・・・ I decided to cook lunch instead of doing small tasks. I made grilled cheese sandwiches. We all enjoyed them.
After lunch, I worked on the conference videos. I added chapters to a few videos and fixed some captions.
・・・・・ After school, my daughter wanted to go to Sephora to buy fragrance mist. She had looked some up online. My husband wanted to buy toilet paper at No Frills, so we took the subway to Dufferin Mall. She is learning to choose for herself. That's why she has her own savings. She chose "darling", which smells like flowers. I loved seeing my daughter gain confidence and self-determination. She took a long time to choose, but I was patient because I could write my journal on my phone.
Then we had a dinner of pasta with tomato pesto.
Afterwards, we played shopkeeper like in her drama class. We tossed around ideas for the roles, then improvised within the scenario she chose. She said I was funny.
I worked on more videos and fixed a bug in the chapter-display software.
Friday, December 19
I got up a bit late because my phone didn't charge properly. Fortunately, there was still a little time before school, so I was able to wake my daughter in time for a quick breakfast.
While she attended virtual school, I did my morning routine. Then I worked on the captions. Now that things are relaxed, I can enjoy preparing the resources. It's the last day before her winter break, so I need to do the tasks that require concentration.
My daughter gave her presentation on Chinese New Year. She was so proud. She said her classmates were hungry because of her presentation on traditional food.
Coincidentally, my husband made sticky rice with chicken for lunch. We all enjoyed it.
The ladybug was more active. We gave it a piece of grape and a piece of apple. My daughter dampened the bit of paper towel.
This afternoon, I continued working on the videos. They were almost all done; only a few remained.
In lieu of a walk, I did the grocery shopping. Then I played cards with my daughter. I kept winning despite my subtle efforts. My daughter got a little grumpy. Next time, I'll suggest cooperative games like Space Escape, or playing Pictionary or charades together. That way, nobody can really win every time, or someone will end up mad at me.
・・・・・ She felt better and came back to eat chicken wings. She was cold too, so she wanted cuddles.
Saturday, December 20
I livestreamed on Twitch while working on the captions that a speaker had corrected. I wrote a short function to copy text into the current chapter. Surprisingly, three viewers showed up, and they made a few comments about my process. Before making more video chapters, I think I need to copy the IRC and YouTube discussions to the wiki pages so I can send them to the speakers. Then I can get back to making chapters.
I thought some more about help. Captioning seems like an easy opportunity for people to help. I documented the process and built a few tools. But it's often easier if I just keep going myself, because then I don't have to wait. Well, it's possible for people to volunteer to caption a few videos. I set those aside and work on the other videos first. Do I want to invite volunteers to help with the remaining videos? Maybe. I need to improve the backstage page to make it easier to pick from the remaining tasks, and I need to document the process to help beginners. It's tempting to work alone, but it's good to create opportunities for other people to help. Besides, the documentation will help me once I've forgotten everything by next year.
In the afternoon, I went to the pharmacy for a flu shot. Although this year's vaccine isn't a great match for the most common flu variants, it's still somewhat protective. My daughter walked with me halfway, then went back home and went with my husband to the piercer. She wanted to wear earrings. She's old enough to choose for herself. I helped her with the saline cleaning.
I prepared the newsletter for the Bike Brigade. Since nobody volunteered, I went back to my more automated process. I hate processes that require several clicks and offer several opportunities for mistakes. When a volunteer steps up, I'll restore the manual process.
We also played a little café simulation in Minecraft with her aunt. My daughter handled service, my sister handled the salads, and I alternated between crêpes and cakes. We managed well within the time limits. After my evening routine, we also played Space Escape. We won together!
Sunday, December 21
After yesterday's vaccination, my neck is a bit sore, so I'm taking it easy today. I'll do the laundry and maybe copy some conference discussions. But first, maybe I'll study a little French.
My journal-analysis software says I've written fifty-two entries so far. That makes a total of 10,766 words (1,381 lemmas). I started learning French partly to help my daughter, but I find I enjoy the stimulation of writing in another language. I certainly write more entries about my life now. Analyzing my vocabulary encourages me to try new words and longer entries. In 2012, at a Quantified Self conference, I met someone who puts his journal into his spaced-repetition system to help him remember it. After each session with my tutor, I put my sentences into Anki to study vocabulary. Along the way, I revisit those moments. I can't speak fluently yet. Maybe I need to practice speaking and find my own method for practicing listening comprehension. Repeating along with the audio seems useful.
The AI tool I tried has come out of beta and now requires a subscription of 29 dollars a month. Right now, I'm wondering whether I want to use it, use other tools like ChatGPT or Gemini, or build my own tool. I think for the moment I'll focus mainly on writing. Because of COVID and the time-consuming side of educating my child, I'm not interested in the usual topics like ordering at a restaurant, travel, or even introductions and small talk. I want to write and listen to information about Emacs and other technical topics, so I can start reading « Emacs expliqué à mes enfants ». I can also use text-to-speech to turn my journal into audio for practice. I added a function that waits after each sentence for a multiple of its original duration so I can repeat it more easily. Although maybe just remembering to listen to the pronunciation when I look up words in the online dictionary would be enough when I'm on my phone, which happens more often.
I couldn't concentrate on my work, so I took a nap in the afternoon. After two hours, my daughter woke me because she was proud of having helped my husband can the beets he had bought two weeks ago. They used the pressure canner. Since one jar didn't seal properly, he put it in the fridge. They also made a pineapple-and-beet cake, which my daughter likes.
After dinner, I got some energy back. I played the little café simulation in Minecraft with my daughter and my sister, like yesterday. This time, our game went smoothly. My sister made lots of salads in batches. She said, "Ten Greek salads are ready," and my daughter served them to the customers. I made plain crêpes and cakes nonstop and combined them with other ingredients for each order, so I often said, "Chocolate-banana cake on the counter." We easily cleared two more levels. I think there's one left.
You can e-mail me at sacha@sachachua.com.
-
đ r/reverseengineering OGhidra: Automating dataflow analysis and vulnerability discovery in Ghidra via local Ollama models rss
submitted by /u/Nightlark192
[link] [comments] -
đ r/LocalLLaMA GLM 4.7 released! rss
GLM-4.7 is here! GLM-4.7 surpasses GLM-4.6 with substantial improvements in coding, complex reasoning, and tool usage, setting new open-source SOTA standards. It also boosts performance in chat, creative writing, and role-play scenarios. Weights: http://huggingface.co/zai-org/GLM-4.7 Tech Blog: http://z.ai/blog/glm-4.7 submitted by /u/ResearchCrafty1804
[link] [comments]
-
đ r/LocalLLaMA GLM 4.7 is out on HF! rss
submitted by /u/KvAk_AKPlaysYT
[link] [comments]
-
đ r/reverseengineering ImHex Hex Editor v1.38.1 - Better Pattern Editor, many new Data Sources, Save Editor Mode and more rss
submitted by /u/WerWolv
[link] [comments] -
đ r/LocalLLaMA I made Soprano-80M: Stream ultra-realistic TTS in <15ms, up to 2000x realtime, and <1 GB VRAM, released under Apache 2.0! rss
Hi! I'm Eugene, and I've been working on Soprano: a new state-of-the-art TTS model I designed for voice chatbots. Voice applications require very low latency and natural speech generation to sound convincing, and I created Soprano to deliver on both of these goals. Soprano is the world's fastest TTS by an enormous margin. It is optimized to stream audio playback with < 15 ms latency, 10x faster than any other realtime TTS model like Chatterbox Turbo, VibeVoice-Realtime, GLM TTS, or CosyVoice3. It also natively supports batched inference, benefiting greatly from long-form speech generation. I was able to generate a 10-hour audiobook in under 20 seconds, achieving ~2000x realtime! This is multiple orders of magnitude faster than any other TTS model, making ultra-fast, ultra-natural TTS a reality for the first time. I owe these gains to the following design choices:
- Higher sample rate: most TTS models use a sample rate of 24 kHz, which can cause s and z sounds to be muffled. In contrast, Soprano natively generates 32 kHz audio, which sounds much sharper and clearer. In fact, 32 kHz speech sounds indistinguishable from 44.1/48 kHz speech, so I found it to be the best choice.
- Vocoder-based audio decoder: Most TTS designs use diffusion models to convert LLM outputs into audio waveforms. However, this comes at the cost of slow generation. To fix this, I trained a vocoder-based decoder instead, which uses a Vocos model to perform this conversion. My decoder runs several orders of magnitude faster than diffusion-based decoders (~6000x realtime!), enabling extremely fast audio generation.
- Seamless Streaming: Streaming usually requires generating multiple audio chunks and applying crossfade. However, this causes streamed output to sound worse than non-streamed output. I solve this by using a Vocos-based decoder. Because Vocos has a finite receptive field, I can exploit its input locality to completely skip crossfading, producing streaming output that is identical to unstreamed output. Furthermore, I modified the Vocos architecture to reduce the receptive field, allowing Soprano to start streaming audio after generating just five audio tokens with the LLM.
- State-of-the-art Neural Audio Codec: Speech is represented using a novel neural codec that compresses audio to ~15 tokens/sec at just 0.2 kbps. This helps improve generation speed, as only 15 tokens need to be generated to synthesize 1 second of audio, compared to the 25, 50, or other token rates in common use. To my knowledge, this is the lowest bitrate achieved by any audio codec.
- Infinite generation length: Soprano automatically generates each sentence independently, and then stitches the results together. Theoretically, this means that sentences can no longer influence each other, but in practice I found that this doesn't really happen anyway. Splitting by sentences allows for batching on long inputs, dramatically improving inference speed.
I'm a second-year undergrad who's just started working on TTS models, so I wanted to start small. Soprano was only pretrained on 1,000 hours of audio (~100x less than other TTS models), so its stability and quality will improve tremendously as I train it on more data. Also, I optimized Soprano purely for speed, which is why it lacks bells and whistles like voice cloning, style control, and multilingual support. Now that I have experience creating TTS models, I have a lot of ideas for how to make Soprano even better in the future, so stay tuned for those! Github: https://github.com/ekwek1/soprano Huggingface Demo: https://huggingface.co/spaces/ekwek/Soprano-TTS Model Weights: https://huggingface.co/ekwek/Soprano-80M - Eugene submitted by /u/eugenekwek
[link] [comments]
-
đ batrachianai/toad v0.5.5 release
[0.5.5] - 2025-12-22
Fixed
- Fixed column setting not taking effect
-
đ r/reverseengineering GitHub - Fatmike-GH/MCPDebugger: A lightweight MCP debugger designed for learning and experimentation. Supports Windows executables (x86 and x64). rss
submitted by /u/Fatmike-Reddit
[link] [comments] -
đ r/LocalLLaMA NVIDIA made a beginner's guide to fine-tuning LLMs with Unsloth! rss
Blog Link: https://blogs.nvidia.com/blog/rtx-ai-garage-fine-tuning-unsloth-dgx-spark/ You'll learn about: - Training methods: LoRA, FFT, RL - When to fine-tune and why + use-cases - Amount of data and VRAM needed - How to train locally on DGX Spark, RTX GPUs & more submitted by /u/Difficult-Cap-7527
[link] [comments]
-
đ r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.
submitted by /u/AutoModerator
[link] [comments] -
đ r/LocalLLaMA major open-source releases this year rss
submitted by /u/sahilypatel
[link] [comments]
-