- Rust Binary Analysis 101 - Part 2 - Z3R0cool Blogs
- mprocs: start all your project's commands at once
- Jon's Arm Reference
- Optimize for momentum
- Nviso vshell report
- December 23, 2025
-
r/reverseengineering Finding Jingle Town: Debugging an N64 Game without Symbols rss
submitted by /u/Mediocre_Ad_1923
[link] [comments] -
r/wiesbaden Group activities for birthdays? :) rss
Hi, I'm currently trying to think of what else we could do together in Wiesbaden besides the Superfly hall, painting ceramics, or bowling.. do you have any ideas for places that would be worth going to with a few people for my birthday?
Thanks!
submitted by /u/FunkINFJ
[link] [comments] -
r/reverseengineering Nintendo 64 Decomp Update: Harvest Moon 64 is now 100% decompiled! rss
submitted by /u/harvestwhisperer
[link] [comments] -
langchain-ai/deepagents deepagents==0.3.1 release
Changes since deepagents==0.3.0
release(deepagents): 0.3.1 (#608)
docs: fix documentation issues (#513)
fix(deepagents): strip trailing whitespace from subagent messages to prevent Anthropic API errors (#586)
fix(deepagents): Pass through runtime config to subagents (#602)
chore(deepagents): test write todos from sub-agents (#605)
fix(deepagents): exclude structured response from state update (#603)
feat: add ability to paste images in input (#555) -
r/reverseengineering Fabrice Bellard Releases MicroQuickJS rss
submitted by /u/Ok-Tune-1346
[link] [comments] -
r/reverseengineering Fake PuTTY Installer Malware Analysis with IDA Pro rss
submitted by /u/jershmagersh
[link] [comments] -
News Minimalist FDA approves first weight loss pill + 8 more stories rss
In the last 5 days ChatGPT read 146994 top news stories. After removing previously covered events, there are 9 articles with a significance score over 5.5.

[5.5] FDA approves first GLP-1 pill for obesity — statnews.com (+77)
The FDA has approved the first oral GLP-1 pill for obesity, a version of Novo Nordisk's Wegovy, potentially expanding access to effective weight loss treatments starting in January.
The 25-milligram daily medication demonstrated 14% weight loss in trials, mirroring the efficacy of the injectable version. It also reduces cardiovascular risks and will initially cost $150 per month for the lowest dosage through direct-to-consumer channels.
This peptide-based pill requires strict morning fasting for absorption. Meanwhile, competitor Eli Lilly is developing a small-molecule pill, orforglipron, which may offer easier manufacturing and fewer dietary restrictions once approved.
[6.4] European governments agree to introduce a digital euro — nrc.nl (Dutch) (+5)
European governments have agreed to create a digital euro, establishing a central bank-backed public currency to safeguard the continent's financial sovereignty and payment resilience.
This public currency would offer a secure alternative to commercial bank accounts and US-based payment providers. Pending European Parliament approval, the digital euro could launch by 2029 via apps or cards, featuring offline capabilities to ensure transaction continuity during cyberattacks.
The proposal guarantees privacy and bans programmable spending to mirror the utility of physical cash. While merchants must eventually accept the currency, commercial banks remain critical of the implementation costs and competition.
[6.1] TikTok agrees to sell US operations to American investors — theguardian.com (+93)
TikTok has signed a binding deal to sell its United States operations to a group of American investors including Oracle and Silver Lake, preventing a ban and ensuring continued service.
The agreement, set to close January 22, grants Oracle, Silver Lake, and MGX a combined 45 percent stake. Oracle will license TikTok's recommendation algorithm to address long-standing national security concerns.
Highly covered news with significance over 5.5
[5.8] EU leaders approve €90 billion loan for Ukraine despite dissent from Hungary, Slovakia, and Czech Republic — irishtimes.com (+65)
[5.6] FCC bans new Chinese-made drones over national security concerns — apnews.com (+21)
[5.6] EU court rules for refugee in landmark case against Frontex — independent.co.uk (+2)
[5.6] Austria's top court rules Meta's ad model illegal, orders overhaul of user data practices in EU — channelnewsasia.com (+4)
[5.6] OpenAI launches an app store inside ChatGPT — tomsguide.com (+7)
[5.5] Trump appoints special envoy to Greenland to pursue acquisition — nrc.nl (Dutch) (+148)
Thanks for reading!
— Vadim
You can personalize this newsletter with premium.
-
r/wiesbaden Rental options rss
submitted by /u/AwareLocation5665
[link] [comments] -
syncthing/syncthing v2.0.13-rc.1 release
Major changes in 2.0
-
Database backend switched from LevelDB to SQLite. There is a migration on
first launch which can be lengthy for larger setups. The new database is
easier to understand and maintain and, hopefully, less buggy. -
The logging format has changed to use structured log entries (a message
plus several key-value pairs). Additionally, we can now control the log
level per package, and a new log level WARNING has been inserted between
INFO and ERROR (which was previously known as WARNING...). The INFO level
has become more verbose, indicating the sync actions taken by Syncthing. A
new command line flag --log-level sets the default log level for all
packages, and the STTRACE environment variable and GUI have been updated
to set log levels per package. The --verbose and --logflags command
line options have been removed and will be ignored if given. -
Deleted items are no longer kept forever in the database; instead they are
forgotten after fifteen months. If your use case requires deletes to take
effect after more than a fifteen-month delay, set the
--db-delete-retention-interval command line option or corresponding
environment variable to zero, or a longer time interval of your choosing. -
Modernised command line options parsing. Old single-dash long options are
no longer supported, e.g. -home must be given as --home. Some options
have been renamed, others have become subcommands. All serve options are
now also accepted as environment variables. See syncthing --help and
syncthing serve --help for details. -
Rolling hash detection of shifted data is no longer supported as this
effectively never helped. Instead, scanning and syncing is faster and more
efficient without it. -
A "default folder" is no longer created on first startup.
-
Multiple connections are now used by default between v2 devices. The new
default value is to use three connections: one for index metadata and two
for data exchange. -
The following platforms unfortunately no longer get prebuilt binaries for
download at syncthing.net and on GitHub, due to complexities related to
cross compilation with SQLite:
- dragonfly/amd64
- solaris/amd64
- linux/ppc64
- netbsd/*
- openbsd/386 and openbsd/arm
- windows/arm
- The handling of conflict resolution involving deleted files has changed. A
delete can now be the winning outcome of conflict resolution, resulting in
the deleted file being moved to a conflict copy.
This release is also available as:
-
APT repository: https://apt.syncthing.net/
-
Docker image:
docker.io/syncthing/syncthing:2.0.13-rc.1 or ghcr.io/syncthing/syncthing:2.0.13-rc.1
({docker,ghcr}.io/syncthing/syncthing:2 to follow just the major version)
What's Changed
Fixes
- fix(beacon): don't join multicast groups on non-multicast interfaces (fixes #10497) by @marbens-arch in #10498
Other
- chore(model): refactor context handling for folder type by @calmh in #10472
- build: fix docker build by ensuring qemu by @calmh in #10492
- chore(beacon): more verbose debug logging by @marbens-arch in #10496
- build: fix hash failure by limiting globbing by @calmh in #10505
- chore: tweak pull retry logic by @calmh in #10491
Full Changelog :
v2.0.12...v2.0.13-rc.1 -
-
obra/superpowers v4.0.1 release
Release v4.0.1
-
Simon Willison Cooking with Claude rss
I've been having an absurd amount of fun recently using LLMs for cooking. I started out using them for basic recipes, but as I've grown more confident in their culinary abilities I've leaned into them for more advanced tasks. Today I tried something new: having Claude vibe-code up a custom application to help with the timing for a complicated meal preparation. It worked really well!
A custom timing app for two recipes at once
We have family staying at the moment, which means cooking for four. We subscribe to a meal delivery service called Green Chef, mainly because it takes the thinking out of cooking three times a week: grab a bag from the fridge, follow the instructions, eat.
Each bag serves two portions, so cooking for four means preparing two bags at once.
I have done this a few times now and it is always a mad flurry of pans and ingredients and timers and desperately trying to figure out what should happen when and how to get both recipes finished at the same time. It's fun but it's also chaotic and error-prone.
This time I decided to try something different, and potentially even more chaotic and error-prone: I outsourced the planning entirely to Claude.
I took this single photo of the two recipe cards side-by-side and fed it to Claude Opus 4.5 (in the Claude iPhone app) with this prompt:
Extract both of these recipes in as much detail as possible

This is a moderately challenging vision task in that there is quite a lot of small text in the photo. I wasn't confident Opus could handle it.
I hadn't read the recipe cards myself. The responsible thing to do here would be a thorough review or at least a spot-check - I chose to keep things chaotic and didn't do any more than quickly eyeball the result.
I asked what pots I'd need:
Give me a full list of pots I would need if I was cooking both of them at once
Then I prompted it to build a custom application to help me with the cooking process itself:
I am going to cook them both at the same time. Build me a no react, mobile, friendly, interactive, artifact that spells out the process with exact timing on when everything needs to happen have a start setting at the top, which starts a timer and persists when I hit start in localStorage in case the page reloads. The next steps should show prominently with countdowns to when they open. The full combined timeline should be shown slow with calculated times tor when each thing should happen
I copied the result out onto my own hosting (you can try it here) because I wasn't sure if localStorage would work inside the Claude app and I really didn't want it to forget my times!
Then I clicked "start cooking"!

Here's the full Claude transcript.
There was just one notable catch: our dog, Cleo, knows exactly when her dinner time is, at 6pm sharp. I forgot to mention this to Claude, which had scheduled several key steps colliding with Cleo's meal. I got woofed at. I deserved it.
To my great surprise, it worked. I followed the recipe guide to the minute and served up both meals exactly 44 minutes after I started cooking.

The best way to learn the capabilities of LLMs is to throw tasks at them that may be beyond their abilities and see what happens. In this case I fully expected that something would get forgotten or a detail would be hallucinated and I'd end up scrambling to fix things half way through the process. I was surprised and impressed that it worked so well.
Some credit for the app idea should go to my fellow hackers at /dev/fort 2 in 2009, when we rented Knockbrex Castle in Dumfries, Scotland for a week and attempted to build a cooking timer application for complex meals.
Generating recipes from scratch
Most of my other cooking experiments with LLMs have been a whole lot simpler than this: I ask for a recipe, ask for some variations and then cook one of them and see what happens.
This works remarkably well considering LLMs have no taste buds.
I've started to think of this as asking LLMs for the average recipe for a dish, based on all of the recipes they have hoovered up during their training. It turns out the mean version of every guacamole recipe on the internet is a decent guacamole!
Here's an example of a recipe I tried recently that worked out really well. I was helping Natalie run her ceramic stall at the farmers market and the stall next to us sold excellent dried beans. I've never used dried beans before, so I took a photo of their selection and asked Claude what I could do with them:

Identify these beans
It took a guess at the beans, then I said:
Get me excited about cooking with these! If I bought two varietiew what could I make
"Get me excited" switches Claude into a sort of hype-man mode, which is kind of entertaining:
Oh, you're about to enter the wonderful world of bean cooking! Let me get you pumped about some killer two-bean combos: [...]
Mixed bean salad with lemon, olive oil, fresh herbs, cherry tomatoes - light but satisfying [...]
I replied:
OK Bean salad has me interested - these are dried beans. Give me some salad options I can make that would last a long time in the fridge
... and after some back and forth we arrived on the recipe in this transcript, which I cooked the following day (asking plenty of follow-up questions) and thoroughly enjoyed.
I've done this a bunch of times with a bunch of different recipes across both Claude and ChatGPT and honestly I've not had a notable miss yet. Being able to say "make it vegan" or "I don't have coriander, what can I use instead?" or just "make it tastier" is a really fun way to explore cooking.
It's also fun to repeat "make it tastier" multiple times to see how absurd you can get.
I really want someone to turn this into a benchmark!
Cooking with LLMs is a lot of fun. There's an opportunity here for a really neat benchmark: take a bunch of leading models, prompt them for recipes, follow those recipes and taste-test the results!
The logistics of running this are definitely too much for me to handle myself. I have enough trouble cooking two meals at once, for a solid benchmark you'd ideally have several models serving meals up at the same time to a panel of tasters.
If someone else wants to try this please let me know how it goes!
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
sacha chua :: living an awesome life 2025-12-22 Emacs news rss
- Upcoming events (iCal file, Org):
- Emacs APAC: Emacs APAC meetup (virtual) https://emacs-apac.gitlab.io/announcements/ Sat Dec 27 0030 America/Vancouver - 0230 America/Chicago - 0330 America/Toronto - 0830 Etc/GMT - 0930 Europe/Berlin - 1400 Asia/Kolkata - 1630 Asia/Singapore
- Emacs Berlin (hybrid, in English) https://emacs-berlin.org/ Wed Dec 31 0930 America/Vancouver - 1130 America/Chicago - 1230 America/Toronto - 1730 Etc/GMT - 1830 Europe/Berlin - 2300 Asia/Kolkata – Thu Jan 1 0130 Asia/Singapore
- M-x Research: TBA https://m-x-research.github.io/ Fri Jan 2 0800 America/Vancouver - 1000 America/Chicago - 1100 America/Toronto - 1600 Etc/GMT - 1700 Europe/Berlin - 2130 Asia/Kolkata – Sat Jan 3 0000 Asia/Singapore
- Emacs configuration:
- Emacs Lisp:
- GitHub - Kinneyzhang/tp: Text properties library for Emacs Lisp.
- GitHub - Kinneyzhang/elog: A Powerful Logging System for Emacs Lisp (Reddit)
- Marcin Borkowski: Summing effort estimates, my way
- Listful Andrew: Awhile — Time difference from now in different formats (Emacs package)
- Listful Andrew: When is Christmas on a Saturday?
- Treat Emacs as an Elisp Runtime using Eask | Jen-Chieh's Website
- EmacsConf 2025: Some problems of modernizing Emacs - Eduardo Ochs (he/him) (25:23)
- Kana: Juicemacs: Exploring Speculative JIT Compilation for ELisp (HN, lobste.rs)
- Appearance:
- Navigation:
- Writing:
- Composing Text in Emacs: Unicode, Emojis, and the Power of C-x 8 (Reddit)
- Inline image display in markdown text
- Day 22: Managing a bibliography · Emacs expliqué à mes enfants
- EmacsConf 2025 Q&A: Emacs as a fully-fledged reference manager - Vidianos Giannitsis (he/him) (22:37)
- Emacs Lisp functions to preview quarto documents asynchronously on buffer save, and to kill existing quarto preview processes · GitHub (@vurtuali@fosstodon.org)
- Org Mode:
- Org Mode requests: [RFC] Allow empty headlines without trailing space
- Day 15: Capturing an idea · Emacs expliqué à mes enfants
- EmacsConf 2025: Bookclub tapas - Maddie Sullivan (she/her) (31:26)
- Get Focused with org-pomodoro - YouTube (@curtismchale@mastodon.social)
- Process PDFs with Emacs and Org Mode (01:48)
- LuciusChen/discourse-graphs: An Emacs org-mode implementation of the Discourse Graphs protocol for knowledge synthesis. (Reddit)
- [RELEASE] org-transclusion-blocks v0.4 - var expansion + PROPERTY inheritance (Reddit)
- Org Mode tip for using R to merge Org Mode tables using a src block - @bthalpin.bsky.social
- (Update) org-supertag 5.6: Decoupling UI from Data, Smarter Sync, and Plugin Power
- Tips on Emacs Lisp Development for Contributing to Org-mode (@tiang@mastodon.social)
- Import, export, and integration:
- How to export your org-mode and org-agenda to Apple Reminders · GitHub (HN)
- EmacsConf 2025 Q&A: org-gmail: A deep integration of Gmail into your Org Mode (08:22)
- EmacsConf 2025: LaTeX export in org-mode: the overhaul - Pedro A. Aranda Gutiérrez (he, him) (32:35)
- EmacsConf 2025: Gardening in Emacs: A Windows user's tale of tending, tweaking, and triumph (17:37)
- 4honor/org-drawio: Open, create, export, and display draw.io in org mode (Reddit)
- I built a visual Timeline for Org-Roam (Bi-directional sync + HTML/JS UI) (Reddit)
- Charles Choi: Export Org to Markdown with the Clipboard (Irreal)
- Denote:
- Completion:
- Coding:
- Tip about using C-c C-v to view a file in a web browser when you're in html-mode
- Tip about using web-mode-indentless-attributes (@jasalt@fosstodon.org)
- Greg Newman: Trying Ty for my LSP in Emacs
- Mike Olson: ty: A Fast Python Type Checker and LSP for Emacs
- EmacsConf 2025 Q&A: Interactive Python programming in Emacs - David Vujic (he/him) (18:45)
- James Dyer: Setting Up Emacs for C# Development on Windows
- EmacsConf 2025 Q&A: Common Lisp images communicating like-a-human through shared Emacs slime and eev (18:25)
- Developing Android APP With Emacs (@tonyptdm@mastodon.social)
- Tip about customizing vc-handled-backends if you only use one or two
- fzf, magit, and ast-grep demo (12:41)
- Math:
- Web:
- Mail, news, and chat:
- Multimedia:
- Fun:
- AI:
- Community:
- Other:
- emacs-jp/dmacro: Repeated detection and execution of key operation (Reddit)
- Emacs: use font-lock to add unit conversion to temperatures · GitHub (@redblobgames.com on Bluesky)
- ssh/load-key function for loading a key into your SSH agent for a certain period of time
- I ditched my terminal for emacs - change the keybinding that opens the terminal to open emacs instead
- EmacsConf 2025 Q&A: An introduction to the Emacs Reader - Divyá (19:03)
- [14] Emacs Reader: Triaging after Hiatus - 12/22/2025, 2:51:25 PM - Dyne.org TV
- Hel — Helix Emulation Layer (Reddit)
- Made a macOS-only alternative to emacs-everywhere using Hammerspoon (Reddit)
- Getting Emacs And MacOS To Play Nice | Brain Baking (HN)
- Gene Goykhman: Building Emacs 30 on macOS
- Emacs development:
- Add functions to set frame size and position in one compound step
- Add binary format specifications '%b' and '%B'
- Remove binary-as-unsigned (bug#79990)
- System GUI taskbar and progress reporter hooks (bug#79859)
- Add query-replace-read-transpose-from-to
- hideshow: Support nested comment block in 'hs-hide-level-recursive'
- hi-lock: Use active region for default values in more places
- Make VC-Dir's 'd' able to delete unregistered files
- New M-RET, M-p, M-n commands in Log View mode
- New bookmark-after-load-file-hook (bug#80003)
- ; lisp/saveplace.el, etc/NEWS: Refinements to bug#75837.
- New optional recentf autosave timer (bug#80002)
- New packages:
- dired-du-duc: Speed up dired-du with duc (MELPA)
- eager-state: Eagerly persist data onto disk (MELPA)
- fastbuild-bff-mode: Major mode for FASTBuild BFF files (MELPA)
- gptel-forge-prs: Generate PR descriptions for forge using gptel (MELPA)
- iwd-manager: Manage IWD via the D-Bus interface (MELPA)
- lisp-docstring-toggle: Toggle Lisp docstring visibility (MELPA)
- markdown-mermaid: Preview Mermaid code blocks in Markdown (MELPA)
- ob-duckdb: Org Babel integration for DuckDB CLI (MELPA)
- orgit-file: Support for links to files in Git repositories (MELPA)
- royal-hemlock-theme: Soothing royal-blue light-theme (MELPA)
- whisper: Speech-to-text using Whisper.cpp (MELPA)
Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!
You can comment on Mastodon or e-mail me at sacha@sachachua.com.
-
matklad Static Allocation For Compilers rss
Static Allocation For Compilers
Dec 23, 2025
TigerBeetle famously uses "static allocation". Infamously, the use of the term is idiosyncratic: what is meant is not static arrays, as found in embedded development, but rather a weaker "no allocation after startup" form. The amount of memory a TigerBeetle process uses is not hard-coded into the ELF binary. It depends on the runtime command line arguments. However, all allocation happens at startup, and there's no deallocation. The long-lived event loop goes round and round happily without alloc. I've wondered for years if a similar technique is applicable to compilers. It seemed impossible, but today I've managed to extract something actionable from this idea?
Static Allocation
Static allocation depends on the physics of the underlying problem. And distributed databases have surprisingly simple physics, at least in the case of TigerBeetle.
The only inputs and outputs of the system are messages. Each message is finite in size (1MiB). The actual data of the system is stored on disk and can be arbitrarily large. But the diff applied by a single message is finite. And, if your input is finite, and your output is finite, it's actually quite hard to need to allocate extra memory!
This is worth emphasizing — it might seem like doing static allocation is tough and requires constant vigilance and manual accounting for resources. In practice, I learned that it is surprisingly compositional. As long as inputs and outputs of a system are finite, non-allocating processing is easy. And you can put two such systems together without much trouble. routing.zig is a good example of such an isolated subsystem.
The only issue here is that there isn't a physical limit on how many messages can arrive at the same time. Obviously, you can't process arbitrarily many messages simultaneously. But in the context of a distributed system over an unreliable network, a safe move is to drop a message on the floor if the required processing resources are not available.
Counter-intuitively, not allocating is simpler than allocating, provided that you can pull it off!
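To make this concrete, here is a rough Rust sketch of the pattern as described above (illustrative only; TigerBeetle itself is written in Zig, and this is not its code): capacity comes from a runtime argument, every buffer is allocated once at startup, and the steady-state loop only reuses those buffers, dropping a message when none is free.
```rust
// Sketch of "no allocation after startup": sizes come from runtime
// arguments, all allocation happens in `init`, the event loop reuses buffers.

const MESSAGE_SIZE_MAX: usize = 1 << 20; // each message is finite (1 MiB)

struct Message {
    len: usize,
    buf: Box<[u8; MESSAGE_SIZE_MAX]>,
}

struct Server {
    free: Vec<Message>,     // pool filled once at startup
    inflight: Vec<Message>, // messages currently being processed
}

impl Server {
    /// All allocation happens here, sized by a command line argument.
    fn init(messages_max: usize) -> Server {
        let mut free = Vec::with_capacity(messages_max);
        for _ in 0..messages_max {
            free.push(Message { len: 0, buf: Box::new([0; MESSAGE_SIZE_MAX]) });
        }
        Server { free, inflight: Vec::with_capacity(messages_max) }
    }

    /// Steady state: no allocation. If no buffer is free, the message is
    /// dropped on the floor; the network is unreliable anyway.
    fn on_message(&mut self, bytes: &[u8]) {
        assert!(bytes.len() <= MESSAGE_SIZE_MAX);
        let Some(mut message) = self.free.pop() else { return };
        message.buf[..bytes.len()].copy_from_slice(bytes);
        message.len = bytes.len();
        self.inflight.push(message); // capacity was reserved in `init`
    }
}
```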
For Compilers
Alas, it seems impossible to pull it off for compilers. You could say something like "hey, the largest program will have at most one million functions", but that will lead to both wasted memory and poor user experience. You could also use a single yolo arena of a fixed size, like I did in Hard Mode Rust, but that isn't at all similar to "static allocation". With arenas, the size is fixed explicitly, but you can OOM. With static allocation it is the opposite — no OOM, but you don't know how much memory you'll need until startup finishes!
The "problem size" for a compiler isn't fixed — both the input (source code) and the output (executable) can be arbitrarily large. But that is also the case for TigerBeetle — the size of the database is not fixed, it's just that TigerBeetle gets to cheat and store it on disk, rather than in RAM. And TigerBeetle doesn't do "static allocation" on disk, it can fail with ENOSPACE at runtime, and it includes a dynamic block allocator to avoid that as long as possible by re-using no longer relevant sectors.
So what we could say is that a compiler consumes arbitrarily large input, and produces arbitrarily large output, but those "do not count" for the purpose of static memory allocation. At the start, we set aside an "output arena" for storing finished, immutable results of the compiler's work. We then say that this output is accumulated after processing a sequence of chunks, where chunk size is strictly finite. While limiting the total size of the code-base is unreasonable, limiting a single file to, say, 4 MiB (runtime-overridable) is fine. Compiling then essentially becomes a "stream processing" problem, where both inputs and outputs are arbitrarily large, but the filter program itself must execute in O(1) memory.
With this setup, it is natural to use indexes rather than pointers for "output data", which then makes it easy to persist it to disk between changes. And it's also natural to think about "chunks of changes" not only spatially (compiler sees a new file), but also temporally (compiler sees a new version of an old file).
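Schematically, that separation might look like the following hypothetical Rust sketch (made-up types, not any existing compiler): finished results go into an append-only output store addressed by plain integer indexes, while each chunk is processed through a scratch buffer that is cleared and reused.
```rust
// Hypothetical sketch: O(N) output arena addressed by indexes,
// O(1) scratch state that never outlives the current chunk.

const CHUNK_SIZE_MAX: usize = 4 << 20; // e.g. one source file, 4 MiB

#[derive(Clone, Copy)]
struct FuncIndex(u32); // an index into `Output::funcs`, not a pointer

struct Func {
    name: String,
}

#[derive(Default)]
struct Output {
    // Finished, immutable results; indexes stay valid across chunks and are
    // straightforward to persist to disk between changes.
    funcs: Vec<Func>,
}

struct Scratch {
    // Sized for CHUNK_SIZE_MAX once, then cleared and reused per chunk.
    tokens: Vec<u32>,
}

fn compile_chunk(output: &mut Output, scratch: &mut Scratch, source: &str) -> FuncIndex {
    assert!(source.len() <= CHUNK_SIZE_MAX);
    scratch.tokens.clear(); // intermediate state never outlives the chunk

    // ... lex and parse `source` using `scratch`, then append results ...
    let index = FuncIndex(output.funcs.len() as u32);
    output.funcs.push(Func { name: String::from("example") });
    index
}
```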
Is there any practical benefit here? I don't know! But it seems worth playing around with! I feel that a strict separation between O(N) compiler output and O(1) intermediate processing artifacts can clarify a compiler's architecture, and I won't be too surprised if O(1) processing in compilers leads to simpler code the same way it does for databases?
-
- December 22, 2025
-
IDA Plugin Updates IDA Plugin Updates on 2025-12-22 rss
IDA Plugin Updates on 2025-12-22
New Releases:
Activity:
- auto_re
- e1afa59f: feat: armle api wrappers support
- chernobog
- GTA2_RE
- IDA-FastAnalysis
- idaguides
- 4df31984: simplified Liner
- reshare-ida
- cd077094: typofix
-
r/LocalLLaMA DGX Spark: an unpopular opinion rss
I know there has been a lot of criticism about the DGX Spark here, so I want to share some of my personal experience and opinion: I'm a doctoral student doing data science in a small research group that doesn't have access to massive computing resources. We only have a handful of V100s and T4s in our local cluster, and limited access to A100s and L40s on the university cluster (two at a time). Spark lets us prototype and train foundation models, and (at last) compete with groups that have access to high performance GPUs like the H100s or H200s. I want to be clear: Spark is NOT faster than an H100 (or even a 5090). But its all-in-one design and its massive amount of memory (all sitting on your desk) enable us — a small group with limited funding, to do more research. submitted by /u/emdblc
[link] [comments]
-
sacha chua :: living an awesome life The week of December 15 to December 21 rss
Monday, December 15
I took my daughter to her gymnastics class. She worked on her cartwheels. She also wants to add an aerial gymnastics class. On one hand, I had said that if we managed her homework well, it would be easier to say yes. On the other hand, it's good exercise for her health. I think individual training is better for my daughter because she wants to go at her own pace.
For dinner, we made sushi with edamame and miso soup.
The toaster oven stopped working. Fortunately, it's our second toaster oven of the same model, and we have the old one in the garden shed for spare parts. Instead of doing her homework, my daughter helped my husband in the workshop and learned some electronics basics. Then my daughter helped my husband make bread. I worried a little about her homework, but I think spending time together was just as good.
They discovered a ladybug in the old toaster oven. They rescued it and put it in a small jar. I gave it a piece of grape and a bit of paper towel that I dampened. I don't know whether it can survive until spring, but it's here, so we're trying.
My husband asked about the Latin notes we took in 2011. After a brief search, I found them. They were in an old TiddlyWiki format, so I converted them to Org Mode format to export them as an e-book. I haven't studied Latin in a long time, so I've forgotten everything.
I thought about help: how to help someone, how to receive help. My friend who was going through a personal crisis wanted help in the form of money, but I think the help he wanted won't actually be useful to him. My daughter didn't want help with her homework. Maybe my daughter thinks her efforts are enough, and maybe that's enough for her. Instead of worrying, I need to practice receiving help myself. That's one of the reasons I'm learning French with my tutor, learning to talk about my feelings with my therapist, and appreciating the way my family helps me grow. I can improve processes so that people can help me. For example, for processing the presentation and live-discussion videos, I need to simplify and document the process. If people are busy, that's fine, I'll do it slowly. If people want to help, they can help.
Tuesday, December 16
Today I got back to a normal routine. I worked on Deck the Halls on the piano, followed a short exercise video, and finally took a long walk at the park. I don't want to walk on the ice because it's slippery, so I walked on the sidewalk around the park.
Someone brought up the moderation of the #emacs channel on IRC. He seemed frustrated. I can't do much, but I suggested a few things he could do.
I took my daughter to her last art class. She was proud that her work was displayed in the window. She collected the other pieces in her portfolio to carry them home. She enjoyed the class with her friend, but she sometimes found it too noisy, so she doesn't want to continue for now. We're going to keep a fairly open schedule without many classes so that we can go skating or play with her friends when she feels like it.
In the therapy session, we discussed feelings. I intellectualize difficult situations instead of feeling them, so my homework for the Christmas holidays includes noticing when I use this defense mechanism. I'll also keep a feelings journal.
I set up a spell checker thanks to the "Emacs expliqué à mes enfants" course by @vincek.
Wednesday, December 17
I wrote a little function to look up words in a few online dictionaries. Little by little, I'm improving my writing environment.
This afternoon I had an appointment to get my cargo bike serviced. I rode to the bike shop. The mechanic gave me the quote for the service and some advice about specialized tires for icy conditions.
Then I took the subway, which had a problem. Instead of waiting for the shuttle at Keele station, I walked the short distance home.
I probably need to process the conference videos. A little work can get them ready for publication. I'll combine the videos with the normalized audio, review everything, and publish to YouTube and our site. A few videos had some conversion problems, so I need to review the last few minutes carefully to spot errors.
· · · · · After school, I took my daughter to the rink at the park to play with her friend. They had a lot of fun playing tag with her friend's father, who was too fast for them. I was happy watching them. We drank hot chocolate while the ice resurfacer prepared the ice.
We ate leftovers. After dinner, I worked on the conference videos. Two videos had encoding errors, so I used the original videos and changed our process. My next step is to convert the videos to WebM format to upload them to our server. I also need to review the captions, but that can be done gradually.
Thursday, December 18
An important milestone: I'm getting more comfortable writing in French on my phone. That means I can add to my journal anytime and anywhere. I still look up words in the dictionary, which isn't so convenient on mobile because of the small screen, but it's tolerable. At least it can replace endlessly scrolling Reddit for the umpteenth time. One day I'll be able to dictate to my phone, which would be more useful during winter walks, when typing will be difficult.
I took another long walk at the park. The doctor said walks are good for my health, so I try to take them often. One day I'd like to wander for several hours, but for now, a thirty-minute or one-hour walk is enough.
My husband's sourdough experiments continue. He bought a few proofing baskets. My daughter helped him with this batch during recess. She likes scoring different patterns on the bread. It's perfect: spending time together, enjoying food, and practicing art. It takes patience, but that's life, and she can learn the value of things that take time. It's probably more important than high grades at school. (Or at least that's what I tell myself when I worry.)
When I get home, I'll have thirty minutes before her lunch break. I can do a short task, like sending messages or checking videos. My morning self-care routine takes most of the morning. I wonder how other people organize themselves.
· · · · · I decided to cook lunch instead of doing small tasks. I made grilled cheese sandwiches. We enjoyed them.
After lunch, I worked on the conference videos. I added chapters to a few videos and fixed a few captions.
· · · · · After school, my daughter wanted to go to Sephora to buy some fragrance mist. She had looked some up online. My husband wanted to buy toilet paper at No Frills, so we took the subway to Dufferin Mall. She's learning to choose for herself. That's why she has her own savings. She chose "darling", which smells like flowers. I liked seeing my daughter gain confidence and self-determination. She took a long time to choose, but I was patient because I could write my journal on my phone.
Then we ate a dinner of pasta with tomato pesto.
Then we played shopkeeper like in her theatre class. We threw out ideas for the roles, and we improvised in the situation she chose. She said I was funny.
I worked on more videos, and I fixed a bug in the chapter-display software.
Friday, December 19
I got up a little late because my phone didn't charge properly. Fortunately, there was a bit of time left before school, so I could wake my daughter up in time for a quick breakfast.
While she was in virtual school, I did my morning routine. Then I worked on the captions. Now that things are relaxed, I can enjoy preparing the resources. It's the last day before her winter break, so I need to do the tasks that require concentration.
My daughter gave her presentation on Chinese New Year. She was so proud. She said her classmates were hungry because of her presentation on traditional food.
Coincidentally, my husband made sticky rice with chicken for lunch. We enjoyed it.
The ladybug was more active. We gave it a piece of grape and a piece of apple. My daughter dampened the bit of paper towel.
This afternoon, I kept working on the videos. They were almost all done; only a few were left.
For my walk, I did the grocery shopping. Then I played cards with my daughter. I kept winning despite my subtle efforts. My daughter got a bit grumpy. Next time, I'll suggest cooperative games like Space Escape, or we can play Pictionary or charades together. That way, nobody can really win every time, otherwise someone will end up upset with me.
· · · · · She felt better and came back to eat chicken wings. She was cold too, so she wanted cuddles.
Saturday, December 20
I did a livestream on Twitch while I worked on the captions that a speaker had corrected. I wrote a short function to copy text into the current chapter. Surprisingly, three viewers showed up, and they made a few comments on my process. Before doing more video chapters, I think I need to copy the IRC and YouTube discussions onto the wiki pages so I can send them to the speakers. Then I can get back to doing the chapters.
I thought a bit more about help. Captioning seems like an easy opportunity to help. I documented the process and created a few tools. But it's often easier if I just keep going myself because I don't have to wait. Well, it works for people who volunteer to do the captions for a few videos. I set those aside and work on the other videos first. Do I want to invite volunteers to help with the remaining videos? Maybe. I need to improve the backstage page to make it easier to pick from the remaining tasks, and I need to document the process to help beginners. It's tempting to work alone, but it's good to create opportunities for other people to help. Besides, the documentation will help me when I've forgotten everything by next year.
In the afternoon, I went to the pharmacy for a flu shot. Although this year's vaccine isn't a great match for the most common flu strains, it's still somewhat protective. My daughter walked with me halfway, then went back home and went with my husband to the piercer. She wanted to wear earrings. She's old enough to choose for herself. I helped her with the cleaning, using the saline solution.
I prepared the newsletter for the Bike Brigade. Since nobody volunteered, I went back to my more automated process. I hate any process that requires several clicks and offers several chances to make mistakes. When a volunteer commits, I'll restore the manual process.
We also played a little café simulation in Minecraft with her aunt. My daughter handled the serving, my sister handled the salads, and I alternated between crêpes and cakes. We managed to keep up with the timing. After my evening routine, we also played Space Escape. We won together!
Sunday, December 21
After yesterday's vaccination, my neck is a bit sore, so I'm taking it easy today. I'll do the laundry and maybe copy some conference discussions. But first of all, maybe I'll study a little French.
My journal-analysis software says I've written fifty-two entries so far. That makes a total of 10,766 words (1,381 lemmas). I started learning French partly to help my daughter, but I find that I enjoy the stimulation of writing in another language. It's certain that I write more entries about my life. Analyzing my vocabulary encourages me to try new words and longer entries. In 2012, at a Quantified Self conference, I met someone who puts their journal into their spaced repetition system to help remember it. After each session with my tutor, I put my sentences into Anki to study vocabulary. Along the way, I revisit those moments. I can't speak fluently yet. Maybe I need to practice speaking and find my own method for practicing listening comprehension. Repeating along with the audio seems useful.
The AI tool I tried has come out of its beta phase and now requires a 29-dollar-a-month subscription. Right now, I'm wondering whether I want to use it, or whether I want to use other tools like ChatGPT or Gemini, or whether I want to build my own tool. I think for the moment I'm mainly focusing on writing. Because of COVID and the time-consuming side of educating my child, I'm not interested in the usual topics like ordering at a restaurant, travel, or even introductions and small talk. I want to write and listen to information about Emacs and other technical topics, so I can start reading "Emacs expliqué à mes enfants". I can also use text-to-speech to turn my journal into audio that I can use for practice. I added a function that waits after each sentence for a multiple of the original duration so I can repeat more easily. Although maybe remembering to listen to the pronunciation when I look up words in the online dictionary would be enough when I'm on my phone, which happens more often.
I couldn't concentrate on my work, so I took a nap in the afternoon. After two hours, my daughter woke me up because she was proud of having helped my husband can the beets he had bought two weeks ago. They used the pressure cooker. Since one jar didn't seal properly, he put it in the refrigerator. They also made a pineapple and beet cake, which my daughter likes.
After dinner, I recovered a bit of energy. I played the little café simulation in Minecraft with my daughter and my sister, like yesterday. This time, our game went well. My sister made lots of salads in batches. She said, "Ten Greek salads are ready," and my daughter served them to the customers. I made crêpes and plain cakes nonstop, and I combined them with other ingredients for each order, so I often said, "Chocolate banana cake on the counter." We easily got through two more levels. I think there's one level left.
You can e-mail me at sacha@sachachua.com.
-
r/reverseengineering OGhidra: Automating dataflow analysis and vulnerability discovery in Ghidra via local Ollama models rss
submitted by /u/Nightlark192
[link] [comments] -
r/LocalLLaMA GLM 4.7 released! rss
GLM-4.7 is here! GLM-4.7 surpasses GLM-4.6 with substantial improvements in coding, complex reasoning, and tool usage, setting new open-source SOTA standards. It also boosts performance in chat, creative writing, and role-play scenarios. Weights: http://huggingface.co/zai-org/GLM-4.7 Tech Blog: http://z.ai/blog/glm-4.7 submitted by /u/ResearchCrafty1804
[link] [comments]
-
r/LocalLLaMA GLM 4.7 is out on HF! rss
submitted by /u/KvAk_AKPlaysYT
[link] [comments]
-
r/reverseengineering ImHex Hex Editor v1.38.1 - Better Pattern Editor, many new Data Sources, Save Editor Mode and more rss
submitted by /u/WerWolv
[link] [comments] -
r/LocalLLaMA I made Soprano-80M: Stream ultra-realistic TTS in <15ms, up to 2000x realtime, and <1 GB VRAM, released under Apache 2.0! rss
Hi! I'm Eugene, and I've been working on Soprano: a new state-of-the-art TTS model I designed for voice chatbots. Voice applications require very low latency and natural speech generation to sound convincing, and I created Soprano to deliver on both of these goals. Soprano is the world's fastest TTS by an enormous margin. It is optimized to stream audio playback with < 15 ms latency, 10x faster than any other realtime TTS model like Chatterbox Turbo, VibeVoice-Realtime, GLM TTS, or CosyVoice3. It also natively supports batched inference, benefiting greatly from long-form speech generation. I was able to generate a 10-hour audiobook in under 20 seconds, achieving ~2000x realtime! This is multiple orders of magnitude faster than any other TTS model, making ultra-fast, ultra-natural TTS a reality for the first time. I owe these gains to the following design choices:
- Higher sample rate: most TTS models use a sample rate of 24 kHz, which can cause s and z sounds to be muffled. In contrast, Soprano natively generates 32 kHz audio, which sounds much sharper and clearer. In fact, 32 kHz speech sounds indistinguishable from 44.1/48 kHz speech, so I found it to be the best choice.
- Vocoder-based audio decoder: Most TTS designs use diffusion models to convert LLM outputs into audio waveforms. However, this comes at the cost of slow generation. To fix this, I trained a vocoder-based decoder instead, which uses a Vocos model to perform this conversion. My decoder runs several orders of magnitude faster than diffusion-based decoders (~6000x realtime!), enabling extremely fast audio generation.
- Seamless Streaming: Streaming usually requires generating multiple audio chunks and applying crossfade. However, this causes streamed output to sound worse than nonstreamed output. I solve this by using a Vocos-based decoder. Because Vocos has a finite receptive field, I can exploit its input locality to completely skip crossfading, producing streaming output that is identical to unstreamed output. Furthermore, I modified the Vocos architecture to reduce the receptive field, allowing Soprano to start streaming audio after generating just five audio tokens with the LLM.
- State-of-the-art Neural Audio Codec: Speech is represented using a novel neural codec that compresses audio to ~15 tokens/sec at just 0.2 kbps. This helps improve generation speed, as only 15 tokens need to be generated to synthesize 1 second of audio, compared to 25, 50, or other commonly used token rates. To my knowledge, this is the highest compression achieved by any audio codec.
- Infinite generation length: Soprano automatically generates each sentence independently, and then stitches the results together. Theoretically, this means that sentences can no longer influence each other, but in practice I found that this doesn't really happen anyway. Splitting by sentences allows for batching on long inputs, dramatically improving inference speed.
I'm a second-year undergrad who's just started working on TTS models, so I wanted to start small. Soprano was only pretrained on 1000 hours of audio (~100x less than other TTS models), so its stability and quality will improve tremendously as I train it on more data. Also, I optimized Soprano purely for speed, which is why it lacks bells and whistles like voice cloning, style control, and multilingual support. Now that I have experience creating TTS models, I have a lot of ideas for how to make Soprano even better in the future, so stay tuned for those! Github: https://github.com/ekwek1/soprano Huggingface Demo: https://huggingface.co/spaces/ekwek/Soprano-TTS Model Weights: https://huggingface.co/ekwek/Soprano-80M - Eugene submitted by /u/eugenekwek
[link] [comments]
-
r/reverseengineering GitHub - Fatmike-GH/MCPDebugger: A lightweight MCP debugger designed for learning and experimentation. Supports Windows executables (x86 and x64). rss
submitted by /u/Fatmike-Reddit
[link] [comments] -
r/LocalLLaMA NVIDIA made a beginner's guide to fine-tuning LLMs with Unsloth! rss
Blog Link: https://blogs.nvidia.com/blog/rtx-ai-garage-fine-tuning-unsloth-dgx-spark/ You'll learn about:
- Training methods: LoRA, FFT, RL
- When to fine-tune and why + use-cases
- Amount of data and VRAM needed
- How to train locally on DGX Spark, RTX GPUs & more
submitted by /u/Difficult-Cap-7527
[link] [comments]
-
langchain-ai/deepagents deepagents-cli==0.0.12 release
Changes since deepagents-cli==0.0.11
minor version bump, model setting, agent skill spec support, skill creator example (#600)
Comply with Anthropic Agent Skills spec (#592)
feat(cli): add --model flag with auto-detection (#584)
feat: add skill-creator skill with init and validation scripts (#579)
docs(cli): add LangSmith environment variables documentation (#583) -
r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.
submitted by /u/AutoModerator
[link] [comments] -
r/LocalLLaMA major open-source releases this year rss
submitted by /u/sahilypatel
[link] [comments]
-
- December 21, 2025
-
IDA Plugin Updates IDA Plugin Updates on 2025-12-21 rss
IDA Plugin Updates on 2025-12-21
New Releases:
Activity:
- chernobog
- IDA-VTableExplorer
- 3081ff81: fix: add actions back to browse functions and annotate all vtables
- 1612b4e2: feat: replace JPEG images with PNG for better quality in README
- 3fbe47f1: Refactor VTable handling and enhance RTTI parsing
- cf02df00: feat: Update build-all target to include clean step for improved buil…
- 0f930310: feat: Add clean target to Makefile for removing build artifacts
- IDAPluginList
- bd40bf13: Update
- twdll
- b5a35786: chore: update README
-
r/LocalLLaMA 1 year later and people are still speedrunning NanoGPT. Last time this was posted the WR was 8.2 min. It's now 127.7 sec. rss
Previous post for context. Also note original NanoGPT run from Andrej Karpathy was 45 min. I think this is a great way to understand progress in overall algorithmic speed improvements as I'm sure the big labs are using similar speedup tricks. submitted by /u/jd_3d
[link] [comments]
-
r/LocalLLaMA llama.cpp appreciation post rss
submitted by /u/hackiv
[link] [comments]
-
đ sacha chua :: living an awesome life La semaine du 7 dĂŠcembre au 14 dĂŠcembre rss
Lundi, le huit dĂŠcembre
Je me suis concentrÊe sur mon journal en français avant le rendez-vous avec ma tutrice. J'ai Êcrit suffisamment pour bien utiliser le temps, malgrÊ la semaine dernière chargÊe. Nous avons aussi fait de la conversation. J'ai utilisÊ le Live Captions de Google Chrome pour comprendre quand elle parlait trop rapidement.
J'ai emmenĂŠ ma fille Ă son cours de gymnastique. C'ĂŠtait apparemment la semaine des parents, donc j'ai pu regarder ma fille dans le gymnase. J'ai pris quelques vidĂŠos pour la montrer.
J'ai fait beaucoup de lessive, parce que je n'ai pas pu en faire pendant la confĂŠrence.
Mardi, le neuf dĂŠcembre
Ce matin, j'ai continuÊ à rattraper mon retard. Pendant un ou deux mois prÊcÊdant la confÊrence, je n'ai pas fait beaucoup de travail de conseil, donc j'ai accumulÊ quelques tâches. Je n'Êtais pas stressÊ, j'ai juste eu à gÊrer mon temps. J'ai pris plaisir à les aider.
L'un de mes amis m'a appelÊ pour discuter d'une crise personnelle. Je suppose que c'est ça la crise de la quarantaine. C'est très difficile, mais on doit persÊvÊrer.
Cet après-midi, j'ai contemplÊ mes valeurs pour mes devoirs de la session de gestion du stress avec ma thÊrapeute. Je pense que je peux les simplifier en cette liste : responsabilitÊ, adaptabilitÊ, relations, et curiositÊ. C'est utile pour faire des choix.
Aujourd'hui, il fait froid et gris avec de la neige et un vent fort. La mĂŠtĂŠo a annoncĂŠ plus de neige. J'ai laissĂŠ ma fille choisir d'aller au cours d'art ou de rester Ă la maison. Elle a choisi de rester, donc nous avons passĂŠ une soirĂŠe tranquille. Nous avons jouĂŠ aux jeux de cartes. Ma fille aime bien les jeux de stratĂŠgie. Moi aussi. Elle commence Ă apprendre Ă anticiper les choses quand elle joue Ă Exploding Kittens et Ă Tacos versus Burritos. Elle s'amuse beaucoup parce que les cartes sont amusantes.
Nous avons pratiquÊ un peu le français avec l'IA. Elle apprend le vocabulaire sur la mÊtÊo à l'Êcole, donc elle a essayÊ quelques phrases. Ensuite, j'ai Êcrit mon journal pendant qu'elle regardait KPop Demon Hunters pour l'Ênième fois.
Demain, je vais enregistrer une vidÊo sur la prÊparation du bulletin d'information pour Bike Brigade pour le transfÊrer à l'autre volontaire. Je vais aussi enregistrer une vidÊo de fÊlicitations en français. S'il y a du temps, je veux aussi traiter les vidÊos de la confÊrence.
Mercredi, le dix dĂŠcembre
Mon mari s'est levÊ très tôt pour prÊparer son examen mÊdical. Il a dÝ jeÝner pour son examen, donc il avait très faim et il s'ennuyait, alors il a commencÊ deux recettes de pain au levain. J'ai aidÊ ma fille avec sa routine matinale. Pendant que ma fille participait à l'Êcole virtuelle et mon mari Êtait sorti, j'ai dÝ gÊrer les deux recettes tout en cuisinant une bouillie de riz au poulet pour le dÊjeuner de mon mari.
Alors, je me suis sentie un peu perturbÊe, mais j'Êtais aussi contente car mon mari comptait sur moi pour faire ces tâches. Il n'a pas souvent demandÊ de l'aide. C'Êtait un plaisir de l'aider, même si la situation Êtait amusante.
J'ai fini les trois recettes moi-mĂŞme : deux sortes de pain et la bouillie. C'ĂŠtait la deuxième fois que nous essayions de faire du pain au levain, et cette fois ça a marchĂŠ ! Je pense que j'ai laissĂŠ le pain reposer plus longtemps, ce qui a mieux fonctionnĂŠ, en effet… Et ma fille aime notre pain au levain ! Enfin, notre première victoire ! Ma fille a jugĂŠ que mes essais prĂŠcĂŠdents n'ĂŠtaient pas aussi bons que le pain qu'elle achète d'habitude au marchĂŠ fermier.
J'ai aussi enregistrÊ une courte vidÊo pour souhaiter un joyeux anniversaire en français. C'Êtait un bon exercice pour l'expression orale.
Pour l'exercice, j'ai dÊblayÊ beaucoup de neige. Il pleuvait aussi, donc la neige Êtait lourde. Je n'ai pas pu me reposer parce que j'avais trop de tâches.
MalgrÊ la neige et la pluie, ma fille a aussi envoyÊ une lettre au Père NoÍl. Nous sommes en retard pour le programme Lettres au Père NoÍl des Postes Canada, mais quand même ça vaut le coup d'essayer. Elle veut un jeu de piste pour elle, et des chaussettes pour moi. Le jeu de piste est une tradition dans ma famille. Je vais Êcrire quelques indices et les cacher partout dans la maison. Peut-être que cette annÊe, je peux Êcrire quelques indices en français.
Mon mari rĂŠessaye la recette du pain maintenant. Petit Ă petit, on s'amĂŠliore. Les rĂŠsultats intermĂŠdiaires sont dĂŠlicieux, donc la pratique est agrĂŠable.
Jeudi, le onze dĂŠcembre
J'ĂŠtais fatiguĂŠe. L'Ĺil de ma fille faisait un peu mal mĂŞme après sa nuit de sommeil, donc je me suis un peu inquiĂŠtĂŠe. Elle a pu participer Ă l'ĂŠcole virtuelle, au moins. Je suppose que c'ĂŠtait une journĂŠe avec moins d'ĂŠnergie.
J'ai emmenÊ ma fille aux Stockyards à pied parce qu'elle avait envie d'une longue promenade. Pour une petite friandise, j'ai achetÊ une boÎte de feuilletÊs chez Marry Me Mochi, et elle les a gardÊs pour après le souper. Mon mari et ma fille ont cuisinÊ des sandwiches au fromage grillÊ à la purÊe de pomme de terre, une nouvelle idÊe que mon mari a trouvÊe en ligne. C'Êtait dÊlicieux.
Après un souper rapide, j'ai eu une sÊance d'information sur le bulletin d'information de Bike Brigade. J'ai Êcrit de la documentation. Pendant la sÊance, j'ai expliquÊ le processus.
Friday, December 12
My daughter's eye had been sore and swollen for two days, so I focused my efforts on getting help. She didn't want to take part in class. This morning I called a few places to try to get an appointment, in between comforting cuddles. After a long wait and a few messages, I got an appointment at the Sick Kids hospital.
I was tired, so I took a thirty-minute nap at noon.
This afternoon, I took my daughter by subway to the ophthalmologist at the hospital. We waited for two hours, which was very boring for my daughter, but it was necessary. I let her watch lots of videos and play a few games.
The ophthalmologist said my daughter has a stye, so she recommended warm compresses and erythromycin. She also noticed that some eyelashes are rubbing against the eye, so she recommended eye drops. I dropped my daughter off at home and went to the pharmacy to buy the erythromycin.
After all that, which took the whole day, I was very tired.
Saturday, December 13
The heated eye mask seems to be helping my daughter's eye. She wore it for ten minutes last night and again this morning. Her eye is less swollen now, but it still hurts a little.
She finds it hard to concentrate on her homework. Math is fun, but the language assignments are boring. She has put her tasks off for several days, and now they've piled up. I suggested doing a little at a time and working on the different kinds of homework so her teacher can assess the different subjects. I worked on my own French homework in her room so she wouldn't feel alone. Sometimes she needs a hug before getting back to work. I'm not allowed to remind her about her homework, just to hug her. Well, we'll see. On the one hand, I want my daughter to succeed. On the other hand, she's the one who has to figure out what works for her, and right now is the perfect time to experiment because the stakes are low. Today she wants to catch up on all her overdue reading homework instead of doing a bit of everything. It's her decision.
After her homework, she wants to go to KidSpark to play pretend store. I think I can probably take her by bike despite the snow and ice. The subway isn't running this weekend, so we'll have to make do. I don't have studded tires for the ice, so I'll have to ride carefully.
・・・・・We all went to KidSpark despite the subway closure from Ossington to Spadina. I didn't have the energy to bike, so we had to take the subway. The shuttle bus was slow and crowded, but we finally got there.
We only played for an hour, but our daughter had a lot of fun, so I was glad we came. We played pretend store and also played with the new building toys. There were a lot of kids, so it was noisy, and our daughter used the ear protectors from the sensory backpack.
We bought a few buns and some shrimp dumplings on the way back, before waiting a long time for the shuttles. The shuttles were very crowded, and our daughter got cold walking home. But we persevered.
When we got home, we all had tea. My husband and our daughter made two batches of thick little pancakes, and I did the dishes.
Sunday, December 14
I was tired, so I slept in. My daughter got up before me. She knocked over the bag of cereal by accident and got a little grumpy. She got grumpier when we mentioned her homework. She has a presentation next week, so she needs to prepare. Still, I can't force her. I keep telling myself: this is her experience, not mine.
Since she's grumpy, maybe I have time for my own tasks. I need to prepare the business tax return, which takes concentration. I can write my journal before Monday's appointment with my tutor, and I have the homework for Tuesday's stress-management session. I also want to work on the rest of the conference work. Lots to do.
My stress-management homework includes describing how I'm feeling and rating it as a percentage. That rating is surprisingly hard. I'm lost. So I suppose that's what I need to learn.
・・・・・My daughter came back from her room in a fairly reasonable mood. She ate some food and got some hugs. I don't think she worked on her homework. Her eye hurts and now both eyes are itchy, her new molar hurts, she was tired of her homework... There's not much I can do, just give comforting hugs and help with her evening routine.
Reflection
I'm gradually expanding my vocabulary. I can now write enough that reading my vocabulary entries out loud to my tutor (and chatting a little about stuff along the way) takes up the hour. It's still good pronunciation practice while I work on picking up more words and internalizing the pronunciation rules, though, so it's probably a good idea to continue that instead of shifting that to AI.
New root wordsabsence, accumuler, adaptabilitĂŠ, amĂŠlioration, anniversaire, annulation, anticiper, apparemment, appeler, apprĂŠcier, attente, attentivement, automatisation, bonder, bouillie, bruyant, cacher, car, certain, chauffer, choix, cil, commencer, comprendre, compresse, concentration, connecter, conseiller, construction, contempler, contenter, contrĂ´ler, coulisse, court, crise, crĂŞpe, curiositĂŠ, câliner, cĂŠrĂŠale, description, deuxième, diffĂŠrence, diffĂŠrent, documentation, droit, dĂŠcider, dĂŠclaration, dĂŠcouvrir, dĂŠlicieux, dĂŠmanger, dĂŠrouler, effet, effort, enfin, enfler, enjeu, ennuyant, ennuyer, entreprise, envie, essai, examen, expliquer, expĂŠrience, expĂŠrimenter, faible, falloir, façon, fenĂŞtre, fermeture, fermier, feuilletĂŠ, fiscal, forcer, former, fournĂŠe, frotter, fĂŠlicitation, glace, goutte, gras, griller, gris, gros, gymnase, gĂŠnĂŠral, hĂ´pital, idĂŠal, inattendu, indice, inspirant, intermĂŠdiaire, jeĂťner, jouet, joyeux, juger, lecture, lent, lessive, lettre, longtemps, lors, lourd, mal, masque, mathĂŠmatique, mois, molaire, montrer, mĂŠdical, mĂŠtro, mĂŞler, navette, nourriture, noĂŤl, obtenir, oeil, ophtalmologue, organisation, orgelet, outil, partager, partout, perdre, personnel, persĂŠvĂŠrer, phrase, plan, pneu, porter, poste, pourcentage, processus, produire, prĂŠcĂŠdent, prĂŠcĂŠder, purĂŠe, quarantaine, raisonnable, rapidement, rattraper, recommencer, recommender, reconnecter, relation, remarquer, reposer, responsabilitĂŠ, retard, rĂŠduire, rĂŠpondre, rĂŠsultat, rĂŠussir, sauter, sauvegarder, scène, sembler, sensoriel, sentiment, serviable, sieste, similaire, situation, soir, soirĂŠe, sommeil, sorte, souhaiter, spĂŠcial, spĂŠcialisĂŠ, stratĂŠgie, stresser, succès, suffisamment, supplĂŠmentaire, supposer, surtout, sĂŠance, taille, thĂŠ, toutefois, tradition, transcription, transformation, transfĂŠrer, vaisselle, valoir, victoire, volet, ça, ĂŠnergie, ĂŠnième, ĂŠpais, ĂŠrythromycine, ĂŠtonnamment, ĂŠtude, ĂŠvaluation, ĂŠvaluer, Ĺil
You can e-mail me at sacha@sachachua.com.
-
đ r/reverseengineering From UART to Root: Breaking Into the Xiaomi C200 via U-Boot rss
submitted by /u/igor_sk
[link] [comments] -
đ Register Spill Joy & Curiosity #67 rss
Last issue of the year, let's do this!
This week, Ryan and I got to interview DHH. It's very rare that I get nervous before an online conversation, but this was one of those times. I mean, that's the guy who made Rails, man! I wouldn't be here without Rails. Rails is what I did for the first seven years of my career. Rails is the reason why I have a career. I read every book he and Jason have ever written, of course, and 37signals has had as deep an impression as a company can have on probably anybody who's worked in a startup between 2008 and 2015.
âŚand then we had a great conversation. It's been a few days, and different parts of it keep popping back into my head. David said quite a few things that I now feel I have to share. Some things about marketing that resonate with what we've been talking about internally; some things I want the world to hear; some things that were funny; other things that were very fascinating (he said he still writes 95% of his code by hand); and the rant on cookie banners that I want politicians to hear.
But here's something that I want to leave you with, in this last edition of the year, this year that brought and announced more change to this profession than any other year I've lived through as a working software developer. Here's something that David said that sums up why I'm excited and so curious about where all of this is going, something that I hope makes you feel something positive too:
"Where does the excitement come from? First and foremost, I love computers and I love to see computers do new things. It's actually remarkable to me how many people who work in tech don't particularly like computers. Yes, even programmers who have to interact with them every day and make these computers dance, not all of them like computers. I love computers. I love computers just for the-- sheer machine of it. I'm not just trying to be instrumental about it. I'm not just trying to use computers to accomplish something. There's a whole class of people who view the computer just as a tool to get somewhere. No, no, no. For me, it's much deeper. I just love the computer itself and I love to see the computer do new things. And this is the most exciting new thing that computers have been doing, probably in my lifetime. Or at least it's on level with the network-connected computer. Yes."
The computer can now do new things.
-
My teammate Tim wrote about how he ported his TUI framework from Zig to TypeScript and how, in the process of porting it, he noticed that he's getting in the way of the agent, slowing it down and costing more tokens. So he took his hands off the wheel and what we ended up with is this: A Codebase by an Agent for an Agent. I've shared this story quite a few times in person. I'm really happy it's out now, so we have proof: this is a world-class terminal expert and programmer, letting an agent write 90% of the code, and ending up with something that is really, really good. (Also, side note: I contributed the images and, man, it's so fun to put stuff like this out into the world.)
-
This was fantastic: Jeff Dean and Sanjay Ghemawat with Performance Hints. When I opened it I thought I'd skim it, but then I read the whole thing, looked at a lot of the examples, asked ChatGPT some questions along with screenshots. The writing is clear and precise and simple, the section with the napkin math is impressive, the emoji map optimization is what made me open ChatGPT, and then at the end there, in the CLs that demonstrate multiple techniques section, there's this header 3.3X performance in index serving speed! and when you click on it you'll read that they "found a number of performance issues when planning a switch from on-disk to in-memory index serving in 2001. This change fixed many of these problems and took us from 150 to over 500 in-memory queries per second (for a 2 GB in-memory index on dual processor Pentium III machine)" and then you realize what an impressive cathedral of software engineering Google's infrastructure is. Click here for a good time, I'm telling you.
-
The TUI renaissance isn't over: Will McGugan just released Toad, a "unified experience for AI in the terminal." Taking inspiration from Jupyter notebooks is very smart and I love those little UI interactions he built. Good stuff.
-
The title is "Prompt caching: 10x cheaper LLM tokens, but how?" so you might think that this is about prompt caching, but, haha, that's silly. Listen, this is about everything. It's one of the best all-in-one explainers of how transformers work that I've come across. It's by Sam Rose, who's very good at visual explanations, and here he does a full explanation of how text goes into an LLM and text comes out the other end, including visuals, pseudo-code, in-depth explanations. It's very, very good. If you don't know how a transformer works, do yourself a favor and read this. If you do know how it works, look at this and smile at the visualizations.
-
Imagine you're holding two rocks. One has written on it: "terminals can display images now, thanks to kitty's terminal graphics protocol". The other: "when you think about it, a GUI framework does nothing but create images and display them, right?" Now the question is: what happens if you smash those two rocks together? This: "DVTUI" (note the quotes!), which takes a GUI framework (DVUI), gets it to save PNGs instead of rendering them to the screen, and then uses a TUI framework (libvaxis) to render those images in the terminal. To quote: "All that happens every single frame. And yet it works."
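If you're curious what "render those images in the terminal" means at the byte level: kitty's graphics protocol is essentially base64-encoded image data wrapped in APC escape sequences and sent in chunks. Here is a minimal Go sketch of the transmit-and-display case (my own simplification for illustration, not DVTUI's or libvaxis's actual code):

    package main

    import (
        "encoding/base64"
        "fmt"
        "os"
    )

    // showPNG writes a PNG to the terminal via kitty's graphics protocol:
    // base64-encode the bytes and send them in <=4096-byte chunks inside
    // APC escape sequences (ESC _ G ... ESC \).
    func showPNG(path string) error {
        raw, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        data := base64.StdEncoding.EncodeToString(raw)
        const chunkSize = 4096
        first := true
        for len(data) > 0 {
            n := min(chunkSize, len(data))
            chunk, rest := data[:n], data[n:]
            // m=1 means "more chunks follow", m=0 closes the transmission.
            more := 0
            if len(rest) > 0 {
                more = 1
            }
            if first {
                // a=T: transmit and display; f=100: payload is PNG data.
                fmt.Printf("\x1b_Ga=T,f=100,m=%d;%s\x1b\\", more, chunk)
                first = false
            } else {
                fmt.Printf("\x1b_Gm=%d;%s\x1b\\", more, chunk)
            }
            data = rest
        }
        return nil
    }

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: showpng file.png")
            os.Exit(2)
        }
        if err := showPNG(os.Args[1]); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }

Real implementations also negotiate terminal support and handle placement, resizing, and cleanup; this only covers the happy path.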
-
As you know, I'm a sucker for lists like this one: Tom Whitwell's 52 things I learned in 2025. Wonderful.
-
⌠and it brought me to this: write to escape your default setting. "Writing forces you to tidy that mental clutter. To articulate things with a level of context and coherence the mind alone can't achieve." Yes. Now, in times of LLMs, it's probably more apparent than ever before that writing (real writing; writing you do) is thinking.
-
How I wrote JustHTML using coding agents: "After writing the parser, I still don't know HTML5 properly. The agent wrote it for me. I guided it when it came to API design and corrected bad decisions at the high level, but it did ALL of the gruntwork and wrote all of the code." I bet there's a lot of people who read this and think "ha! so he doesn't know HTML5 still!" And yet I wonder: was that the goal? It's a very good post. A very calm, practical post, but one that raises a fundamental question: JustHTML is now "3,000 lines of Python with 8,500+ tests passing" and "passes 100% of the html5lib test suite, has zero dependencies, and includes a CSS selector query API" -- how many more dependencies could we turn into that now?
-
Martin Kleppmann: "I find it exciting to think that we could just specify in a high-level, declarative way the properties that we want some piece of code to have, and then to vibe code the implementation along with a proof that it satisfies the specification. That would totally change the nature of software development: we wouldn't even need to bother looking at the AI-generated code any more, just like we don't bother looking at the machine code generated by a compiler."
-
"The perfection of snow in the paintings of Danish artist Peder Mørk Mønsted."
-
Stripe Press: Tacit. "The mechanism for developing tacit knowledge is straightforward but slow: repeated practice that gradually moves skills from conscious effort to automatic execution. The mechanism for transmitting it is even slower: apprenticeship, where a learner works alongside someone experienced, observing and imitating until their own judgment develops. This is why tacit knowledge often concentrates in lineages, unbroken chains of practitioners passing expertise to the next generation. [âŚ] AI has elevated the distinction between what is tacit and what is not. Language models can summarize and automate, but when they attempt to create something that carries the signature of human craft, the result is often flat." In the words of Tamara Winter: Tacit is a series of mini-documentaries that are " vignettes of craftspeople who provide a pretty compelling answer to the question, 'after AI, does mastery still matter?'"
-
I need to try this: Geoffrey Litt's JIT Guide Workflow.
-
This fantastic post by Jakob Schwichtenberg shifted something in my head: "Our very definition of intelligence encodes the bias toward speed. The modern definition of intelligence is extremely narrow. It simply describes the speed at which you can solve well-defined problems. Consider this: if you get access to an IQ test weeks in advance, you could slowly work through all the problems and memorize the solutions. The test would then score you as a genius. This reveals what IQ tests actually measure. It's not whether you can solve problems, but how fast you solve them." And then: "In fact, it's not hard to imagine how raw processing speed can be counterproductive. People who excel at quickly solving well-defined problems tend to gravitate toward... well-defined problems. They choose what to work on based on what they're good at, not necessarily what's worth doing."
-
⌠but then there's James Somers saying "Speed matters: Why working quickly is more important than it seems." And Nat Friedman is saying: "It's important to do things fast. You learn more per unit time because you make contact with reality more frequently. Going fast makes you focus on what's important; there's no time for bullshit." And Patrick Collison is collecting fast projects. Then here I am, wondering, and possibly assuring myself: yeah, we're not all doing the same things, are we?
-
antirez' Reflections on AI at the end of 2025. "The fundamental challenge in AI for the next 20 years is avoiding extinction."
-
Yes, this is in The New Yorker: "I trust in TextEdit. It doesn't redesign its interface without warning, the way Spotify does; it doesn't hawk new features, and it doesn't demand I update the app every other week, as Google Chrome does. I've tried out other software for keeping track of my random thoughts and ideas in progress--the personal note-storage app Evernote; the task-management board Trello; the collaborative digital workspace Notion, which can store and share company information. Each encourages you to adapt to a certain philosophy of organization, with its own formats and filing systems. But nothing has served me better than the brute simplicity of TextEdit, which doesn't try to help you at all with the process of thinking." Great title too: TextEdit and the Relief of Simple Software.
-
Also The New Yorker, on performative reading, and reading, and books, and social media: "Reading a book is antithetical to scrolling; online platforms cannot replicate the slow, patient, and complex experience of reading a weighty novel. [...] The only way that an internet mind can understand a person reading a certain kind of book in public is through the prism of how it would appear on a feed: as a grotesquely performative posture, a false and self-flattering manipulation, or a desperate attempt to attract a romantic partner."
-
LLMs and physical laws? Maybe: "The dynamics of LLM generation are quite unique. Compared to traditional rule-based programs, LLM-based generation exhibits diverse and adaptive outputs. [âŚ] To model the dynamic behavior of LLMs, we embed the generative process of LLM within a given agent framework, viewing it as a Markov transition process in its state space. [âŚ] Based on this model, we propose a method to measure this underlying potential function based on a least action principle. By experimentally measuring the transition probabilities between states, we statistically discover [âŚ] To our knowledge, this is the first discovery of a macroscopic physical law in LLM generative dynamics that does not depend on specific model details."
-
"'Climbing Everest solo without bottled oxygen in 1980 was the hardest thing I've done. I was alone up there, completely alone. I fell down a crevasse at night and almost gave up. Only because I had this fantasy - because for two years I had been pregnant with this fantasy of soloing Everest - was I able to continue.' This is how Messner talks about how his will was governed."
-
I regularly remind myself and sometimes even others of Jason Fried's Give it five minutes. It's one of the most influential things I've read in the past ten years. I constantly think of it and I'm convinced it's improved my mental well-being and my connections to other people like few other things. Yes, I know how this sounds, but, I guess, an idea and a specific phrase that sticks with you can go a long way as far as life-changing is concerned. Now, all of that is just context, because what I want to actually share is this Jason Fried piece here: Idea protectionism. I re-found and re-read it after sharing the other Jason Fried piece and wanting to share the Jony Ive quote in this one and, yup, stumbled across it by chance. Lucky.
-
Reuters reports on China's Manhattan Project. This is it, baby! This has it all: corporate espionage, ASML, lithography, "one veteran Chinese engineer from ASML recruited to the project was surprised to find that his generous signing bonus came with an identification card issued under a false name", EUV systems that "are roughly the size of a school bus, and weigh 180 tons", Germany's Carl Zeiss AG, "networks of intermediary companies are sometimes used to mask the ultimate buyer", "employees assigned to semiconductor teams often sleep on-site and are barred from returning home during the work week, with phone access restricted for teams handling more sensitive tasks", and, of course, the tension at the heart of it all: "Starting in 2018, the United States began pressuring the Netherlands to block ASML from selling EUV systems to China. The restrictions expanded in 2022, when the Biden administration imposed sweeping export controls designed to cut off China's access to advanced semiconductor technology. No EUV system has ever been sold to a customer in China, ASML told Reuters."
-
I didn't know this is a thing, this was funny: the Beckham rumour that refuses to die.
-
At work, we ended up talking about Christmas traditions and while I was explaining that where I live the magical entity that makes presents appear is called "christkind" (christ child), I was also trying to find proof on Wikipedia so I'd seem less weird and found this map. Note the filename: Christmas-gift-bringers-Europe.jpg. Great name. But now see where the green and the brown mix, in the middle of Germany? That's where I live. So not only does one legend say it's Baby Jesus bringing presents, it's also that in the next town over it's the Christmas Man. And that dude looks an awful lot like its American cousin Santa Claus, who has a lot more media appearances and higher popularity in the younger-than-10 demographic. Try to keep your story straight when you talk to a 4-year-old who keeps asking you whether she'll get a computer for Christmas. How grand it must be to live in Iceland, where, according to that map, the Christmas Lads live.
If you also feel a bit, let's say, joy & curiosity about computers doing new things, you should subscribe:
-
-
đ Andrew Healey's Blog A Fair, Cancelable Semaphore in Go rss
Building a fair, cancelable semaphore in Go and the subtle concurrency issues involved.
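The subtleties are the point of the post, but the shape of the problem is easy to sketch. "Fair" usually means permits are granted in FIFO order, and "cancelable" means a blocked Acquire can give up when its context is canceled. One way to get both (a sketch of mine under those assumptions, not necessarily the post's implementation) is a mutex-guarded queue of waiter channels:

    package semaphore

    import (
        "container/list"
        "context"
        "sync"
    )

    // Semaphore hands out permits in FIFO order and lets a blocked
    // Acquire abort when its context is canceled.
    type Semaphore struct {
        mu      sync.Mutex
        permits int        // currently available permits
        waiters *list.List // of chan struct{}, one per blocked Acquire
    }

    func New(n int) *Semaphore {
        return &Semaphore{permits: n, waiters: list.New()}
    }

    // Acquire blocks until a permit is available or ctx is canceled.
    func (s *Semaphore) Acquire(ctx context.Context) error {
        s.mu.Lock()
        // Fast path: take a permit only if nobody is queued ahead of us.
        if s.permits > 0 && s.waiters.Len() == 0 {
            s.permits--
            s.mu.Unlock()
            return nil
        }
        ready := make(chan struct{})
        elem := s.waiters.PushBack(ready)
        s.mu.Unlock()

        select {
        case <-ready:
            return nil
        case <-ctx.Done():
            s.mu.Lock()
            select {
            case <-ready:
                // Release already handed us a permit; pass it on so it
                // isn't leaked, then report cancelation.
                s.mu.Unlock()
                s.Release()
            default:
                s.waiters.Remove(elem)
                s.mu.Unlock()
            }
            return ctx.Err()
        }
    }

    // Release returns a permit, waking the oldest waiter if there is one.
    func (s *Semaphore) Release() {
        s.mu.Lock()
        defer s.mu.Unlock()
        if front := s.waiters.Front(); front != nil {
            s.waiters.Remove(front)
            close(front.Value.(chan struct{}))
            return
        }
        s.permits++
    }

The tricky case is the ctx.Done branch: cancelation can race with a Release that has already granted this waiter a permit, and that permit must be passed along rather than leaked. golang.org/x/sync/semaphore deals with the same race using a similar waiter queue.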
-
- December 20, 2025
-
đ IDA Plugin Updates IDA Plugin Updates on 2025-12-20 rss
IDA Plugin Updates on 2025-12-20
New Releases:
Activity:
- AiDA
- 7d7c9556: Merge pull request #12 from CheckForUpdates/main
- augur
- a1320f1d: chore: update dependencies
- haruspex
- 59546b3c: chore: update dependencies
- IDAPluginList
- 7a4bb0c8: Update
- rhabdomancer
- fd74d128: chore: update dependencies
- AiDA
-
đ Jeremy Fielding (YouTube) Machining Parts for Wall-E. Episode 03 rss
Order custom parts or PCB's from PCBWayđ https://pcbway.com/g/4fU4Ha If you want to join my community of makers and Tinkers consider getting a YouTube membership đ https://www.youtube.com/@JeremyFieldingSr/join
If you want to chip in a few bucks to support these projects and teaching videos, please visit my Patreon page or Buy Me a Coffee. đ https://www.patreon.com/jeremyfieldingsr đ https://www.buymeacoffee.com/jeremyfielding
Social media, websites, and other channel
Instagram https://www.instagram.com/jeremy_fielding/?hl=en Twitter đhttps://twitter.com/jeremy_fielding TikTok đhttps://www.tiktok.com/@jeremy_fielding0 LinkedIn đhttps://www.linkedin.com/in/jeremy-fielding-749b55250/ My websites đ https://www.jeremyfielding.com đhttps://www.fatherhoodengineered.com My other channel Fatherhood engineered channel đ https://www.youtube.com/channel/UC_jX1r7deAcCJ_fTtM9x8ZA
Notes: Check out the Formlabs 4L Printer đhttps://bit.ly/4590tau
WALL-E Playlist here đ https://www.youtube.com/playlist?list=PL4njCTv7IRbwHiU2GX5WXI8d0NzBzbMsV
Technical corrections
Nothing yet
-
đ jj-vcs/jj v0.35.0 release
About
jj is a Git-compatible version control system that is both simple and powerful. See
the installation instructions to get started.
Release highlights
- Workspaces can now have their own separate configuration. For instance, you can use jj config set --workspace to update a configuration option only in the current workspace.
- After creating a local bookmark, it is now possible to use jj bookmark track to associate the bookmark with a specific remote before pushing it. When pushing a tracked bookmark, it is not necessary to use --allow-new.
- The new jj git colocation enable and jj git colocation disable commands allow converting between colocated and non-colocated workspaces.
Breaking changes
- The remote_bookmarks(remote=pattern) revset now includes Git-tracking bookmarks if the specified pattern matches git. The default is remote=~exact:"git" as before.
- The deprecated flag --summary of jj abandon has been removed.
- The deprecated command jj backout has been removed, use jj revert instead.
- The following deprecated config options have been removed: signing.sign-all, core.watchman.register_snapshot_trigger, diff.format
Deprecations
- jj bisect run --command <cmd> is deprecated in favor of jj bisect run -- <cmd>.
- jj metaedit --update-committer-timestamp was renamed to jj metaedit --force-rewrite since the old name (and help text) incorrectly suggested that the committer name and email would not be updated.
New features
-
Workspaces may have an additional layered configuration, located at
.jj/workspace-config.toml. jj config subcommands which took layer options
like --repo now also support --workspace. -
jj bookmark track can now associate new local bookmarks with remote.
Tracked bookmarks can be pushed without --allow-new.
#7072 -
The new
jj git colocation command provides sub-commands to show the
colocation state (status), to convert a non-colocated workspace into
a colocated workspace (enable), and vice-versa (disable). -
New
jj tag set/delete commands to create/update/delete tags locally.
Created/updated tags are currently always exported to Git as lightweight
tags. If you would prefer them to be exported as annotated tags, please give
us feedback on #7908. -
Templates now support a
.split(separator, [limit]) method on strings to
split a string into a list of substrings. -
-G is now available as a short form of --no-graph in jj log, jj evolog,
jj op log, jj op show and jj op diff. -
jj metaedit now accepts -m/--message option to non-interactively update
the change description. -
The
CryptographicSignature.key() template method now also works for SSH
signatures and returns the corresponding public key fingerprint. -
Added
template-aliases.empty_commit_marker. Users can override this value in
their config to change the "(empty)" label on empty commits. -
Add support for
--when.workspaces config scopes. -
Add support for
--when.hostnames config scopes. This allows configuration to
be conditionally applied based on the hostname set in operation.hostname. -
jj bisect run accepts the command and arguments to pass to the command
directly as positional arguments, such as
jj bisect run --range=..main -- cargo check --all-targets. -
Divergent changes are no longer marked red in immutable revisions. Since the
revision is immutable, the user shouldn't take any action, so the red color
was unnecessarily alarming. -
New commit template keywords
local/remote_tags to show only local/remote
tags. These keywords may be useful in non-colocated Git repositories where
local and exported @git tags can point to different revisions. -
jj git clone now supports the --branch option to specify the branch(es)
to fetch during clone. If present, the first matching branch is used as the
working-copy parent. -
Revsets now support logical operators in string patterns.
Fixed bugs
-
jj metaedit --author-timestamp twice with the same value no longer
edits the change twice in some cases. -
jj squash: fixed improper revision rebase when both--insert-afterand
--insert-beforewere used. -
jj undocan now revert "fetch"/"import" operation that involves tag updates.
#6325 -
Fixed parsing of
files(expr)revset expression including parentheses.
#7747 -
Fixed
jj describe --stdinto append a final newline character.
Contributors
Thanks to the people who made this release happen!
- Alpha Chen (@kejadlen)
- Angel Ezquerra (@AngelEzquerra)
- ase (@adamse)
- Austin Seipp (@thoughtpolice)
- Benjamin Brittain (@benbrittain)
- bipul (@bipulmgr)
- Brian Schroeder (@bts)
- Bryce Berger (@bryceberger)
- Cole Helbling (@cole-h)
- Daniel Luz (@mernen)
- David Higgs (@higgsd)
- Defelo (@Defelo)
- Fedor (@sheremetyev)
- Gabriel Goller (@kaffarell)
- GaĂŤtan Lehmann (@glehmann)
- George Christou (@gechr)
- Ilya Grigoriev (@ilyagr)
- Isaac Corbrey (@icorbrey)
- James Coman (@jamescoman)
- Joseph Lou (@josephlou5)
- Lander Brandt (@landaire)
- Martin von Zweigbergk (@martinvonz)
- Michael Chirico (@MichaelChirico)
- Owen Brooks (@owenbrooks)
- Peter Schilling (@schpet)
- Philip Metzger (@PhilipMetzger)
- Remo Senekowitsch (@senekor)
- Ross Smyth (@RossSmyth)
- Scott Taylor (@scott2000)
- Steve Fink (@hotsphink)
- Steve Klabnik (@steveklabnik)
- Theo Buehler (@botovq)
- Theodore Dubois (@tbodt)
- Theodore Keloglou (@sirodoht)
- Yuya Nishihara (@yuja)
-
-
đ jj-vcs/jj v0.36.0 release
About
jj is a Git-compatible version control system that is both simple and powerful. See
the installation instructions to get started.
Release highlights
- The documentation has moved from https://jj-vcs.github.io/jj/ to
https://docs.jj-vcs.dev/.
301 redirects are being issued towards the new domain, so any existing links
should not be broken.-
Fixed race condition that could cause divergent operations when running
concurrentjjcommands in colocated repositories. It is now safe to
continuously run e.g.jj logwithout--ignore-working-copyin one
terminal while you're running other commands in another terminal.
#6830 -
jjnow ignores$PAGERset in the environment and usesless -FRXon most
platforms (:builtinon Windows). See the docs for
more information, and #3502 for
motivation.
Breaking changes
-
In filesets or path patterns, glob matching
is enabled by default. You can usecwd:"path"to match literal paths. -
In the following commands, string pattern
arguments are now parsed the same way they
are in revsets and can be combined with logical operators:jj bookmark delete/forget/list/move,jj tag delete/list,jj git clone/fetch/push -
In the following commands, unmatched bookmark/tag names is no longer an
error. A warning will be printed instead:jj bookmark delete/forget/move/track/untrack,jj tag delete,jj git clone/push -
The default string pattern syntax in revsets will be changed to
glob:in a
future release. You can opt in to the new default by setting
ui.revsets-use-glob-by-default=true. -
Upgraded
scm-recordfrom v0.8.0 to v0.9.0. See release notes at
https://github.com/arxanas/scm-record/releases/tag/v0.9.0. -
The minimum supported Rust version (MSRV) is now 1.89.
-
On macOS, the deprecated config directory
~/Library/Application Support/jj
is not read anymore. Use$XDG_CONFIG_HOME/jjinstead (defaults to
~/.config/jj). -
Sub-repos are no longer tracked. Any directory containing
.jjor.git
is ignored. Note that Git submodules are unaffected by this.
Deprecations
-
The
--destination/-darguments forjj rebase,jj split,jj revert,
etc. were renamed to--onto/-o. The reasoning is that--onto,
--insert-before, and--insert-afterare all destination arguments, so
calling one of them--destinationwas confusing and unclear. The old names
will be removed at some point in the future, but we realize that they are
deep in muscle memory, so you can expect an unusually long deprecation period. -
jj describe --editis deprecated in favor of--editor. -
The config options
git.auto-local-bookmarkandgit.push-new-bookmarksare
deprecated in favor ofremotes.<name>.auto-track-bookmarks. For example:[remotes.origin]auto-track-bookmarks = "glob:*"
For more details, refer to
the docs.- The flag
--allow-newonjj git pushis deprecated. In order to push new
bookmarks, please track them withjj bookmark track. Alternatively, consider
setting up an auto-tracking configuration to avoid the chore of tracking
bookmarks manually. For example:[remotes.origin]auto-track-bookmarks = "glob:*"
For more details, refer to
the docs.New features
-
jj commit,jj describe,jj squash, andjj splitnow accept
--editor, which ensures an editor will be opened with the commit
description even if one was provided via--message/-m. -
All
jjcommands show a warning when the providedfilesetexpression
doesn't match any files. -
Added
files()template function toDiffStats. This supports per-file stats
likelines_added()andlines_removed() -
Added
join()template function. This is different fromseparate()in that
it adds a separator between all arguments, even if empty. -
RepoPathtemplate type now has aabsolute() -> Stringmethod that returns
the absolute path as a string. -
Added
format_path(path)template alias that controls how file paths are printed
withjj file list. -
New built-in revset aliases
visible()andhidden(). -
Unquoted
*is now allowed in revsets.bookmarks(glob:foo*)no longer
needs quoting. -
jj prev/next --no-editnow generates an error if the working-copy has some
children. -
A new config option
remotes.<name>.auto-track-bookmarkscan be set to a
string pattern. New bookmarks matching it will be automatically tracked for
the specified remote. See
the docs. -
jj lognow supports a--countflag to print the number of commits instead
of displaying them.
Fixed bugs
-
jj fixnow prints a warning if a tool failed to run on a file.
#7971 -
Shell completion now works with nonânormalized paths, fixing the previous
panic and allowing prefixes containing.or..to be completed correctly.
#6861 -
Shell completion now always uses forward slashes to complete paths, even on
Windows. This renders completion results viable when using jj in Git Bash.
#7024 -
Unexpected keyword arguments now return a parse failure for the
coalesce()
andconcat()templating functions. -
Nushell completion script documentation add
-foption, to keep it up to
date.
#8007 -
Ensured that with Git submodules, remnants of your submodules do not show up
in the working copy after runningjj new.
#4349
Contributors
Thanks to the people who made this release happen!
- abgox (@abgox)
- ase (@adamse)
- BjĂśrn Kautler (@Vampire)
- Bryce Berger (@bryceberger)
- Chase Naples (@cnaples79)
- David Higgs (@higgsd)
- edef (@edef1c)
- Evan Mesterhazy (@emesterhazy)
- Fedor (@sheremetyev)
- GaĂŤtan Lehmann (@glehmann)
- George Christou (@gechr)
- Hubert Lefevre (@Paluche)
- Ilya Grigoriev (@ilyagr)
- Jonas Greitemann (@jgreitemann)
- Joseph Lou (@josephlou5)
- Julia DeMille (@judemille)
- Kaiyi Li (@06393993)
- Kyle Lippincott (@spectral54)
- Lander Brandt (@landaire)
- Lucio Franco (@LucioFranco)
- Luke Randall (@lukerandall)
- Martin von Zweigbergk (@martinvonz)
- Matt Stark (@matts1)
- Mitchell Skaggs (@magneticflux-)
- Peter Schilling (@schpet)
- Philip Metzger (@PhilipMetzger)
- QingyaoLin (@QingyaoLin)
- Remo Senekowitsch (@senekor)
- Scott Taylor (@scott2000)
- Stephen Jennings (@jennings)
- Steve Klabnik (@steveklabnik)
- Tejas Sanap (@whereistejas)
- Tommi Virtanen (@tv42)
- Velociraptor115 (@Velociraptor115)
- Vincent Ging Ho Yim (@cenviity)
- Yuya Nishihara (@yuja)
-
đ r/wiesbaden Wo arbeiten von Bar / CafĂŠ nach 19 Uhr? rss
Hi everyone,
I work on my laptop and like to sit in cafés while I do. But I'm often not really done by 9 p.m., and most cafés are closed by then at the latest.
Do you know any cafés or bars that would be suitable? Music is no problem for me, as long as you're allowed to sit there with a laptop.
submitted by /u/CalmSorry
[link] [comments] -
đ r/LocalLLaMA Xiaomiâs MiMo-V2-Flash (309B model) jumping straight to the big leagues rss
submitted by /u/98Saman
[link] [comments]
đ Anton Zhiyanov Go feature: Modernized go fix rss
Part of the Accepted! series: Go proposals and features explained in simple terms.
The modernized go fix command uses a fresh set of analyzers and the same infrastructure as go vet.
Ver. 1.26 • Tools • Medium impact
Summary
The go fix command is re-implemented using the Go analysis framework, the same one go vet uses. While go fix and go vet now use the same infrastructure, they have different purposes and use different sets of analyzers:
- Vet is for reporting problems. Its analyzers describe actual issues, but they don't always suggest fixes, and the fixes aren't always safe to apply.
- Fix is (mostly) for modernizing the code to use newer language and library features. Its analyzers produce fixes that are always safe to apply, but don't necessarily indicate problems with the code.
See the full set of fix's analyzers in the Analyzers section.
Motivation
The main goal is to bring modernization tools from the Go language server (gopls) to the command line. If go fix includes the modernize suite, developers can easily and safely update their entire codebase after a new Go release with just one command.
Re-implementing go fix also makes the Go toolchain simpler. The unified go fix and go vet use the same backend framework and extension mechanism. This makes the tools more consistent, easier to maintain, and more flexible for developers who want to use custom analysis tools.
Description
Implement the new go fix command:
    usage: go fix [build flags] [-fixtool prog] [fix flags] [packages]
Fix runs the Go fix tool (cmd/fix) on the named packages and applies suggested fixes. It supports these flags:
    -diff    instead of applying each fix, print the patch as a unified diff
The -fixtool=prog flag selects a different analysis tool with alternative or additional fixers.
By default, go fix runs a full set of analyzers (see the list below). To choose specific analyzers, use the -NAME flag for each one, or use -NAME=false to run all analyzers except the ones you turned off. For example, here we only enable the forvar analyzer:
    go fix -forvar .
And here, we enable all analyzers except omitzero:
    go fix -omitzero=false .
Currently, there's no way to suppress specific analyzers for certain files or sections of code.
The -fixtool=prog flag selects a different analysis tool instead of the default one. For example, you can build and run the "stringintconv" analyzer, which fixes string(int) conversions, by using these commands:
    go install golang.org/x/tools/go/analysis/passes/stringintconv/cmd/stringintconv@latest
    go fix -fixtool=$(which stringintconv)
Alternative fix tools should be built atop unitchecker, which handles the interaction with go fix.
Analyzers
Here's the list of fixes currently available in go fix, along with examples.
any • bloop • fmtappendf • forvar • hostport • inline • mapsloop • minmax • newexpr • omitzero • plusbuild • rangeint • reflecttypefor • slicescontains • slicessort • stditerators • stringsbuilder • stringscut • stringscutprefix • stringsseq • testingcontext • waitgroup
any
Replace interface{} with any:
    // before
    func main() {
        var val interface{}
        val = 42
        fmt.Println(val)
    }
    // after
    func main() {
        var val any
        val = 42
        fmt.Println(val)
    }
bloop
Replace for-range over b.N with b.Loop and remove unnecessary manual timer control:
    // before
    func Benchmark(b *testing.B) {
        s := make([]int, 1000)
        for i := range s {
            s[i] = i
        }
        b.ResetTimer()
        for range b.N {
            Calc(s)
        }
    }
    // after
    func Benchmark(b *testing.B) {
        s := make([]int, 1000)
        for i := range s {
            s[i] = i
        }
        for b.Loop() {
            Calc(s)
        }
    }
fmtappendf
Replace []byte(fmt.Sprintf) with fmt.Appendf to avoid intermediate string allocation:
    // before
    func format(id int, name string) []byte {
        return []byte(fmt.Sprintf("ID: %d, Name: %s", id, name))
    }
    // after
    func format(id int, name string) []byte {
        return fmt.Appendf(nil, "ID: %d, Name: %s", id, name)
    }
forvar
Remove unnecessary shadowing of loop variables:
    // before
    func main() {
        for x := range 4 {
            x := x
            go func() {
                fmt.Println(x)
            }()
        }
    }
    // after
    func main() {
        for x := range 4 {
            go func() {
                fmt.Println(x)
            }()
        }
    }
hostport
Replace network addresses created with fmt.Sprintf by using net.JoinHostPort instead, because host-port pairs made with %s:%d or %s:%s format strings don't work with IPv6:
    // before
    func main() {
        host := "::1"
        port := 8080
        addr := fmt.Sprintf("%s:%d", host, port)
        net.Dial("tcp", addr)
    }
    // after
    func main() {
        host := "::1"
        port := 8080
        addr := net.JoinHostPort(host, fmt.Sprintf("%d", port))
        net.Dial("tcp", addr)
    }
inline
Inline function calls according to the go:fix inline comment directives:
    // before
    //go:fix inline
    func Square(x float64) float64 { return math.Pow(float64(x), 2) }
    func main() {
        fmt.Println(Square(5))
    }
    // after
    //go:fix inline
    func Square(x float64) float64 { return math.Pow(float64(x), 2) }
    func main() {
        fmt.Println(math.Pow(float64(5), 2))
    }
mapsloop
Replace explicit loops over maps with calls to the maps package (Copy, Insert, Clone, or Collect depending on the context):
    // before
    func copyMap(src map[string]int) map[string]int {
        dest := make(map[string]int, len(src))
        for k, v := range src {
            dest[k] = v
        }
        return dest
    }
    // after
    func copyMap(src map[string]int) map[string]int {
        dest := make(map[string]int, len(src))
        maps.Copy(dest, src)
        return dest
    }
minmax
Replace if/else statements with calls to min or max:
    // before
    func calc(a, b int) int {
        var m int
        if a > b {
            m = a
        } else {
            m = b
        }
        return m * (b - a)
    }
    // after
    func calc(a, b int) int {
        var m int
        m = max(a, b)
        return m * (b - a)
    }
newexpr
Replace custom "pointer to" functions with new(expr):
    // before
    type Pet struct {
        Name  string
        Happy *bool
    }
    func ptrOf[T any](v T) *T { return &v }
    func main() {
        p := Pet{Name: "Fluffy", Happy: ptrOf(true)}
        fmt.Println(p)
    }
    // after
    type Pet struct {
        Name  string
        Happy *bool
    }
    //go:fix inline
    func ptrOf[T any](v T) *T { return new(v) }
    func main() {
        p := Pet{Name: "Fluffy", Happy: new(true)}
        fmt.Println(p)
    }
omitzero
Remove omitempty from struct-type fields because this tag doesn't have any effect on them:
    // before
    type Person struct {
        Name string `json:"name"`
        Pet  Pet    `json:"pet,omitempty"`
    }
    type Pet struct {
        Name string
    }
    // after
    type Person struct {
        Name string `json:"name"`
        Pet  Pet    `json:"pet"`
    }
    type Pet struct {
        Name string
    }
plusbuild
Remove obsolete //+build comments:
    //go:build linux && amd64
    // +build linux,amd64
    package main
    func main() {
        var _ = 42
    }

    //go:build linux && amd64
    package main
    func main() {
        var _ = 42
    }
rangeint
Replace 3-clause for loops with for-range over integers:
    // before
    func main() {
        for i := 0; i < 5; i++ {
            fmt.Print(i)
        }
    }
    // after
    func main() {
        for i := range 5 {
            fmt.Print(i)
        }
    }
reflecttypefor
Replace reflect.TypeOf(x) with reflect.TypeFor when the type is known at compile time:
    // before
    func main() {
        n := uint64(0)
        typ := reflect.TypeOf(n)
        fmt.Println("size =", typ.Bits())
    }
    // after
    func main() {
        typ := reflect.TypeFor[uint64]()
        fmt.Println("size =", typ.Bits())
    }
slicescontains
Replace loops with slices.Contains or slices.ContainsFunc:
    // before
    func find(s []int, x int) bool {
        for _, v := range s {
            if x == v {
                return true
            }
        }
        return false
    }
    // after
    func find(s []int, x int) bool {
        return slices.Contains(s, x)
    }
slicessort
Replace sort.Slice with slices.Sort for basic types:
    // before
    func main() {
        s := []int{22, 11, 33, 55, 44}
        sort.Slice(s, func(i, j int) bool { return s[i] < s[j] })
        fmt.Println(s)
    }
    // after
    func main() {
        s := []int{22, 11, 33, 55, 44}
        slices.Sort(s)
        fmt.Println(s)
    }
stditerators
Use iterators instead of Len/At-style APIs for certain types in the standard library:
    // before
    func main() {
        typ := reflect.TypeFor[Person]()
        for i := range typ.NumField() {
            field := typ.Field(i)
            fmt.Println(field.Name, field.Type.String())
        }
    }
    // after
    func main() {
        typ := reflect.TypeFor[Person]()
        for field := range typ.Fields() {
            fmt.Println(field.Name, field.Type.String())
        }
    }
stringsbuilder
Replace repeated += with strings.Builder:
    // before
    func abbr(s []string) string {
        res := ""
        for _, str := range s {
            if len(str) > 0 {
                res += string(str[0])
            }
        }
        return res
    }
    // after
    func abbr(s []string) string {
        var res strings.Builder
        for _, str := range s {
            if len(str) > 0 {
                res.WriteString(string(str[0]))
            }
        }
        return res.String()
    }
stringscut
Replace some uses of strings.Index and string slicing with strings.Cut or strings.Contains:
    // before
    func nospace(s string) string {
        idx := strings.Index(s, " ")
        if idx == -1 {
            return s
        }
        return strings.ReplaceAll(s, " ", "")
    }
    // after
    func nospace(s string) string {
        found := strings.Contains(s, " ")
        if !found {
            return s
        }
        return strings.ReplaceAll(s, " ", "")
    }
stringscutprefix
Replace strings.HasPrefix/TrimPrefix with strings.CutPrefix and strings.HasSuffix/TrimSuffix with strings.CutSuffix:
    // before
    func unindent(s string) string {
        if strings.HasPrefix(s, "> ") {
            return strings.TrimPrefix(s, "> ")
        }
        return s
    }
    // after
    func unindent(s string) string {
        if after, ok := strings.CutPrefix(s, "> "); ok {
            return after
        }
        return s
    }
stringsseq
Replace ranging over strings.Split/Fields with strings.SplitSeq/FieldsSeq:
    // before
    func main() {
        s := "go is awesome"
        for _, word := range strings.Fields(s) {
            fmt.Println(len(word))
        }
    }
    // after
    func main() {
        s := "go is awesome"
        for word := range strings.FieldsSeq(s) {
            fmt.Println(len(word))
        }
    }
testingcontext
Replace context.WithCancel with t.Context in tests:
    // before
    func Test(t *testing.T) {
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()
        if ctx.Err() != nil {
            t.Fatal("context should be active")
        }
    }
    // after
    func Test(t *testing.T) {
        ctx := t.Context()
        if ctx.Err() != nil {
            t.Fatal("context should be active")
        }
    }
waitgroup
Replace wg.Add + wg.Done with wg.Go:
    // before
    func main() {
        var wg sync.WaitGroup
        wg.Add(1)
        go func() {
            defer wg.Done()
            fmt.Println("go!")
        }()
        wg.Wait()
    }
    // after
    func main() {
        var wg sync.WaitGroup
        wg.Go(func() {
            fmt.Println("go!")
        })
        wg.Wait()
    }
đŁ 71859 đĽ Alan Donovan, Jonathan Amsterdam
*[Medium impact]: Likely impact for an average Go developer
-
đ r/wiesbaden kĂźnstler*innenviertel? rss
I can't find anything about it online, but has the Künstlerviertel stop on bus line 18 recently started being announced as "Künstlerinnenviertel"? Or am I imagining things and it's always been that way?
submitted by /u/imdrixn
[link] [comments] -
đ r/LocalLLaMA Of course it works, in case you are wondering... and it's quite faster. rss
submitted by /u/JLeonsarmiento
[link] [comments]
đ r/LocalLLaMA Open source LLM tooling is getting eaten by big tech rss
I was using TGI for inference six months ago. Migrated to vLLM last month. Thought it was just me chasing better performance, then I read the LLM Landscape 2.0 report. Turns out 35% of projects from just three months ago already got replaced. This isn't just my stack. The whole ecosystem is churning.
The deeper I read, the crazier it gets. Manus blew up in March, OpenManus and OWL launched within weeks as open source alternatives, both are basically dead now. TensorFlow has been declining since 2019 and still hasn't hit bottom. The median project age in this space is 30 months.
Then I looked at what's gaining momentum. NVIDIA drops Dynamo, optimized for NVIDIA hardware. Google releases Gemini CLI with Google Cloud baked in. OpenAI ships Codex CLI that funnels you into their API. That's when it clicked.
Two years ago this space was chaotic but independent. Now the open source layer is becoming the customer acquisition layer. We're not choosing tools anymore. We're being sorted into ecosystems.
submitted by /u/Inevitable_Wear_9107
[link] [comments] -
đ r/LocalLLaMA Key Highlights of NVIDIAâs New Open-Source Vision-to-Action Model: NitroGen rss
- NitroGen is a unified vision-to-action model designed to play video games directly from raw frames. It takes video game footage as input and outputs gamepad actions.
- NitroGen is trained purely through large-scale imitation learning on videos of human gameplay.
- NitroGen works best on games designed for gamepad controls (e.g., action, platformer, and racing games) and is less effective on games that rely heavily on mouse and keyboard (e.g., RTS, MOBA).
How does this model work?
- RGB frames are processed through a pre-trained vision transformer (SigLip2).
- A diffusion matching transformer (DiT) then generates actions, conditioned on SigLip output.
Model - https://huggingface.co/nvidia/NitroGen
submitted by /u/Dear-Success-1441
[link] [comments]
đ r/LocalLLaMA Japan's Rakuten is going to release a 700B open weight model in Spring 2026 rss
https://news.yahoo.co.jp/articles/0fc312ec3386f87d65e797ab073db56c230757e1
Hope it works well in real life. Then it can not only be an alternative to the Chinese models but also prompt the US companies to release big models.
submitted by /u/Ok_Warning2146
[link] [comments] -
đ Filip Filmar note to self: do not remove `.bazelversion` rss
note to self: do not remove .bazelversion from projects
The other day, I was pondering whether to keep pinning a particular bazel version in projects. I even removed some pins, to see what would become of it. Since I use the bazelisk installation method, I get automatic bazel updates when a new version is released. It turns out that this remains a bad idea. An auto-update to bazel 8.5.0 caused some of my CI workflows to fail, likely because bazel 8.
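For the record, the pin itself is tiny: .bazelversion is a one-line file at the workspace root that bazelisk consults before downloading anything, containing nothing but a version string. The version below is purely illustrative, not a recommendation from the post:

    8.4.0

Keeping that file checked in means CI builds with a known-good release until you bump it on purpose.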
-