- September 17, 2025
-
🔗 charmbracelet/crush v0.9.0 release
Attribution settings + Powernap/LSP
-
🔗 r/wiesbaden Football questions rss
Hello everyone! I'm moving to Wiesbaden in October. Are there any adult football leagues? 11-a-side, pickup football, or indoor football? (men's, women's, mixed)
submitted by /u/FunnyCryptographer38
-
🔗 jesseduffield/lazygit v0.55.1 release
This hotfix release fixes two bugs that crept into v0.55.0: one is a regression that broke displaying the enter key in the keybindings menu; the other is a problem with a newly added feature that didn't work quite correctly. See below for details.
For the changes in 0.55.0, see https://github.com/jesseduffield/lazygit/releases/tag/v0.55.0.
What's Changed
Fixes 🔧
- Don't hide keybindings that match the confirmMenu key in the keybindings menu by @stefanhaller in #4880
- Fix staging when using the new useExternalDiffGitConfig config by @stefanhaller in #4895
Full Changelog: v0.55.0...v0.55.1
-
🔗 r/reverseengineering R.E.L.I.V.E. -- open-source re-implementation of Oddworld: Abe's Exoddus and Oddworld: Abe's Oddysee rss
submitted by /u/r_retrohacking_mod2
-
🔗 Locklin on science Best quantum computing paper of 2025 rss
Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog by Peter Gutmann and Stephan Neuhaus. Supposedly AI investors are poised to fuel this horse shit with your retirement money, now that the air is going out of that bubble. This paper comes in the nick […]
-
🔗 r/LocalLLaMA Magistral Small 2509 has been released rss
https://huggingface.co/mistralai/Magistral-Small-2509-GGUF https://huggingface.co/mistralai/Magistral-Small-2509
Magistral Small 1.2
Building upon Mistral Small 3.2 (2506), with added reasoning capabilities, undergoing SFT from Magistral Medium traces and RL on top, it's a small, efficient reasoning model with 24B parameters. Magistral Small can be deployed locally, fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized. Learn more about Magistral in our blog post. The model was presented in the paper Magistral.
Updates compared with [Magistral Small 1.1](https://huggingface.co/mistralai/Magistral-Small-2507)
- Multimodality: The model now has a vision encoder and can take multimodal inputs, extending its reasoning capabilities to vision.
- Performance upgrade: Magistral Small 1.2 should give you significantly better performance than Magistral Small 1.1, as seen in the benchmark results.
- Better tone and persona: You should experience better LaTeX and Markdown formatting, and shorter answers on easy general prompts.
- Finite generation: The model is less likely to enter infinite generation loops.
- Special think tokens: [THINK] and [/THINK] special tokens encapsulate the reasoning content in a thinking chunk. This makes it easier to parse the reasoning trace and prevents confusion when the '[THINK]' token is given as a string in the prompt (see the parsing sketch after this list).
- Reasoning prompt: The reasoning prompt is given in the system prompt.
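Since the notes call out how these tokens simplify parsing, here is a minimal hedged sketch in C (my own illustration, not from Mistral's docs) that treats the special tokens as plain substrings and splits a completion into the reasoning chunk and the final answer:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Example completion; real text would come from the model. */
    const char *out = "[THINK]2+2: add the units.[/THINK]The answer is 4.";
    const char *open = strstr(out, "[THINK]");
    const char *close = strstr(out, "[/THINK]");
    if (open && close && close >= open + strlen("[THINK]")) {
        const char *body = open + strlen("[THINK]");
        printf("reasoning: %.*s\n", (int)(close - body), body);
        printf("answer: %s\n", close + strlen("[/THINK]"));
    } else {
        printf("answer: %s\n", out); /* no thinking chunk present */
    }
    return 0;
}
```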
Key Features
- Reasoning: Capable of long chains of reasoning traces before providing an answer.
- Multilingual: Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, and Farsi.
- Vision: Vision capabilities enable the model to analyze images and reason based on visual content in addition to text.
- Apache 2.0 License: Open license allowing usage and modification for both commercial and non-commercial purposes.
- Context Window: A 128k context window. Performance might degrade past 40k, but Magistral should still give good results. Hence we recommend leaving the maximum model length at 128k and only lowering it if you encounter low performance.
submitted by /u/jacek2023
-
🔗 r/LocalLLaMA China bans its biggest tech companies from acquiring Nvidia chips, says report — Beijing claims its homegrown AI processors now match H20 and RTX Pro 6000D rss
submitted by /u/balianone
-
🔗 pixelspark/sushitrain v2.1.57 (released for iOS) release
On macOS, the current release is v2.1.54.
-
🔗 r/wiesbaden Oktoberfest rss
Any ideas where I can get a good Lederhosen outfit that won’t break the bank but is somewhat authentic?
submitted by /u/BigDavidJ
-
🔗 sacha chua :: living an awesome life Getting a Google Docs draft ready for Mailchimp via Emacs and Org Mode rss
I've been volunteering to help with the Bike Brigade newsletter. I like that there are people who are out there helping improve food security by delivering food bank hampers to recipients. Collecting information for the newsletter also helps me feel more appreciation for the lively Toronto biking scene, even though I still can't make it out to most events. The general workflow is:
- collect info
- draft the newsletter somewhere other volunteers can give feedback on
- convert the newsletter to Mailchimp
- send a test message
- make any edits requested
- schedule the email campaign
We have the Mailchimp Essentials plan, so I can't just export HTML for the whole newsletter. Someday I should experiment with services that might let me generate the whole newsletter from Emacs. That would be neat. Anyway, with Mailchimp's block-based editor, at least I can paste in HTML code for the text/buttons. That way, I don't have to change colours or define links by hand.
The logistics volunteers coordinate via Slack, so a Slack Canvas seemed like a good way to draft the newsletter. I've previously written about my workflow for copying blocks from a Slack Canvas and then using Emacs to transform the rich text, including recolouring the links in the section with light text on a dark background. However, copying rich text from a Slack Canvas turned out to be unreliable. Sometimes it would copy what I wanted, and sometimes nothing would get copied. There was no way to export HTML from the Slack Canvas, either.
I switched to using Google Docs for the drafts. It was a little less convenient to add items from Slack messages and I couldn't easily right-click to download the images that I pasted in. It was more reliable in terms of copying, but only if I used xclip to save the clipboard into a file instead of trying to do the whole thing in memory.
I finally got to spend a little time automating a new workflow. This time I exported the Google Doc as a zip that had the HTML file and all the images in a subdirectory. The HTML source is not very pleasant to work with. It has lots of extra markup I don't need. Here's what an entry looks like:
Figure 1: Exported HTML for an entry

Things I wanted to do with the HTML:
- Remove the google.com/url redirection for the links. Mailchimp will add its own redirection for click-tracking, but at least the links can look simpler when I paste them in.
- Remove all the extra classes and styles.
- Turn [ call to action ] into fancier Mailchimp buttons.
Also, instead of transforming one block at a time, I decided to make an Org Mode document with all the different blocks I needed. That way, I could copy and paste things in quick succession.
Here's what the result looks like. It makes a table of contents, adds the sign-up block, and adds the different links and blocks I need to paste into Mailchimp.
Figure 2: Screenshot of newsletter Org file with blocks for easy copying

I need to copy and paste the image filenames into the upload dialog on Mailchimp, so I use my custom Org Mode link type for copying to the clipboard. For the HTML code, I use
`#+begin_src html ... #+end_src` instead of `#+begin_export html ... #+end_export` so that I can use Embark and embark-org to quickly copy the contents of the source block. (That doesn't work for export blocks yet.) I have `C-.` bound to `embark-act`, the source block is detected by the functions that `embark-org.el` added to `embark-target-finders`, and the `c` binding in `embark-org-src-block-map` calls `embark-org-copy-block-contents`. So all I need to do is `C-. c` in a block to copy its contents.

Here's the code to process the newsletter draft:

```elisp
(defun my-brigade-process-latest-newsletter-draft (date)
  "Create an Org file with the HTML for different blocks."
  (interactive (list (if current-prefix-arg
                         (org-read-date nil t nil "Date: ")
                       (org-read-date nil t "+Sun"))))
  (when (stringp date) (setq date (date-to-time date)))
  (let ((default-directory "~/Downloads/newsletter")
        file dom sections)
    (call-process "unzip" nil nil nil "-o"
                  (my-latest-file "~/Downloads" "\\.zip$"))
    (setq file (my-latest-file default-directory))
    (with-temp-buffer
      (insert-file-contents-literally file)
      (goto-char (point-min))
      (my-transform-html '(my-brigade-save-newsletter-images) (buffer-string))
      (setq dom (my-brigade-simplify-html
                 (libxml-parse-html-region (point-min) (point-max))))
      (setq sections (my-html-group-by-tag
                      'h1 (dom-children (dom-by-tag dom 'body)))))
    (with-current-buffer (get-buffer-create "*newsletter*")
      (erase-buffer)
      (org-mode)
      (insert (format-time-string "%B %-e, %Y" date) "\n"
              "* In this e-mail\n#+begin_src html\n"
              "<p>Hi Bike Brigaders! Here’s what's happening this week, with quick signup links. In this e-mail:</p>"
              (my-transform-html
               '(my-brigade-remove-meta-recursively my-brigade-just-headings)
               (copy-tree dom))
              "\n#+end_src\n\n")
      (insert "* Sign-up block\n\n#+begin_src html\n"
              (my-brigade-copy-signup-block date)
              "\n#+end_src\n\n")
      (dolist (sec '("Bike Brigade" "In our community"))
        (insert "* " sec "\n"
                (mapconcat
                 (lambda (group)
                   (let* ((item (apply 'dom-node 'div nil
                                       (append
                                        (list (dom-node 'h2 nil (car group)))
                                        (cdr group))))
                          (image (my-brigade-image (car group))))
                     (format "** %s\n\n%s\n%s\n\n#+begin_src html\n%s\n#+end_src\n\n"
                             (car group)
                             (if image (org-link-make-string (concat "copy:" image)) "")
                             (or (my-html-last-link-href item) "")
                             (my-transform-html
                              (delq nil
                                    (list 'my-transform-html-remove-images
                                          'my-transform-html-remove-italics
                                          'my-brigade-simplify-html
                                          'my-brigade-format-buttons
                                          (when (string= sec "In our community")
                                            'my-brigade-recolor-recursively)))
                              item))))
                 (my-html-group-by-tag 'h2 (cdr (assoc sec sections 'string=)))
                 "")))
      (insert "* Other updates\n"
              (format "#+begin_src html\n%s\n#+end_src\n\n"
                      (my-transform-html
                       '(my-transform-html-remove-images
                         my-transform-html-remove-italics
                         my-brigade-simplify-html)
                       (car (cdr (assoc "Other updates" sections 'string=))))))
      (goto-char (point-min))
      (display-buffer (current-buffer)))))

(defun my-html-group-by-tag (tag dom-list)
  "Use TAG to divide DOM-LIST into sections.
Return an alist of (section . children)."
  (let (section-name current-section results)
    (dolist (node dom-list)
      (if (and (eq (dom-tag node) tag)
               (not (string= (string-trim (dom-texts node)) "")))
          (progn
            (when current-section
              (push (cons section-name (nreverse current-section)) results)
              (setq current-section nil))
            (setq section-name (string-trim (dom-texts node))))
        (when section-name
          (push node current-section))))
    (when current-section
      (push (cons section-name (reverse current-section)) results)
      (setq current-section nil))
    (nreverse results)))

(defun my-html-last-link-href (node)
  "Return the last link HREF in NODE."
  (dom-attr (car (last (dom-by-tag node 'a))) 'href))

(defun my-brigade-image (heading)
  "Find the latest image related to HEADING."
  (car (nreverse
        (directory-files my-brigade-newsletter-images-directory t
                         (regexp-quote
                          (my-brigade-newsletter-heading-to-image-file-name heading))))))
```
Some of the functions it uses are in my config, particularly the section on Transforming HTML clipboard contents with Emacs to smooth out Mailchimp annoyances: dates, images, comments, colours.
Along the way, I learned that `svg-print` is a good way to turn document object models back into HTML.

When I saw two more events and one additional link that I wanted to include, I was glad I already had this code sorted out. It made it easy to paste the images and details into the Google Doc, reformat it slightly, and get the info through the process so that it ended up in the newsletter with a usefully-named image and correctly-coloured links.
I think this is a good combination of Google Docs for getting other people's feedback and letting them edit, and Org Mode for keeping myself sane as I turn it into whatever Mailchimp wants.
My next step for improving this workflow might be to check out other e-mail providers in case I can get Emacs to make the whole template. That way, I don't have to keep switching between applications and using the mouse to duplicate blocks and edit the code.
This is part of my Emacs configuration. You can comment on Mastodon or e-mail me at sacha@sachachua.com.
-
🔗 @binaryninja@infosec.exchange Last chance. Our annual RE survey closes at 2pm EDT! mastodon
Last chance. Our annual RE survey closes at 2pm EDT! https://binary.ninja/survey/
-
🔗 @malcat@infosec.exchange First steps with [#malcat](https://infosec.exchange/tags/malcat)? Here is a mastodon
First steps with #malcat? Here is a tutorial video, courtesy of @invokereversing :
-
🔗 r/wiesbaden Why are almost all hotel rooms booked out today/tomorrow? rss
Hi,
I have an important appointment in Wiesbaden tomorrow, so I already booked a hotel room in the city center two weeks ago. Even back then I had enormous trouble finding a free room. What's happening in Wiesbaden today/tomorrow that makes the hotels so full on a random Wednesday?
submitted by /u/Ok-Camp-890
-
🔗 r/wiesbaden Loud bangs rss
For your information: if, like many others, you were wondering what caused the loud bangs, it was apparently a jet interception exercise.
Today:
>> Big commotion in Wiesbaden over loud bangs
Today, within a very short time, there were numerous emergency calls about two very loud bangs in the city area. Many citizens couldn't see anything and were unsettled.
After consulting with other authorities, it is now clear: it was the sonic boom of a fighter jet.
No danger to the public! <<
submitted by /u/thisismang0
-
🔗 r/wiesbaden Name wanted: Wiesbaden plans to introduce on-demand buses in December rss
submitted by /u/Transportschaden
-
🔗 r/wiesbaden Explosion? rss
What were those two explosion-like sounds from central Wiesbaden just now?
submitted by /u/Triebii
-
🔗 r/wiesbaden Best St. Martin's goose (because it's almost that time again) rss
We always used to go to my mother's, who made the best St. Martin's goose, with green / Thuringian dumplings. She would stand in the kitchen all day working for us, but it was worth it. For some time now we've wanted to spare her the work, but we haven't really found a good alternative in a restaurant.
Weinhaus Sinz in Frauenstein was disappointing: meat too tough, skin too soft.
Goldstein by Gollners: a bit too posh.
Die Bratkartoffel: the best so far, but the restaurant is a bit small, a bit too much of a corner pub. Do you have a tip and experiences, or do we have to send her back into the kitchen? ;-)
submitted by /u/Masil-
-
🔗 pydantic/pydantic-ai v1.0.8 (2025-09-16) release
What's Changed
- Tools can now return AG-UI events separate from result sent to model by @DouweM in #2922
- Fix bug causing doubled reasoning tokens usage by deepcopying by @DouweM in #2920
- Fix auto-detection of HTTP proxy settings by @maxnilz in #2917
- Fix `new_messages()` and `capture_run_messages()` when history processors are used by @DouweM in #2921
- chore: Remove 'text' from RunUsage docstrings by @alexmojaki in #2919
New Contributors
Full Changelog: v1.0.7...v1.0.8
-
🔗 charmbracelet/crush nightly: chore: task run release
Verifying the artifacts
First, download the `checksums.txt` file, for example, with `wget`:

```
wget 'https://github.com/charmbracelet/crush/releases/download//checksums.txt'
```

Then, verify it using `cosign`:

```
cosign verify-blob \
  --certificate-identity 'https://github.com/charmbracelet/meta/.github/workflows/goreleaser.yml@refs/heads/main' \
  --certificate-oidc-issuer 'https://token.actions.githubusercontent.com' \
  --cert 'https://github.com/charmbracelet/crush/releases/download//checksums.txt.pem' \
  --signature 'https://github.com/charmbracelet/crush/releases/download//checksums.txt.sig' \
  ./checksums.txt
```

If the output is `Verified OK`, you can safely use it to verify the checksums of other artifacts you downloaded from the release using `sha256sum`:

```
sha256sum --ignore-missing -c checksums.txt
```

Done! Your artifacts are now verified!
Thoughts? Questions? We love hearing from you. Feel free to reach out on X, Discord, Slack, The Fediverse, Bluesky.
-
🔗 Drew DeVault's blog A better future for JavaScript that won't happen rss
In the wake of the largest supply-chain attack in history, the JavaScript community could have a moment of reckoning and decide: never again. As the panic and shame subsides, after compromised developers finish re-provisioning their workstations and rotating their keys, the ecosystem might re-orient itself towards solving the fundamental flaws that allowed this to happen.
After all, people have been sounding the alarm for years that this approach to dependency management is reckless and dangerous and broken by design. Maybe this is the moment when the JavaScript ecosystem begins to understand the importance and urgency of this problem, and begins its course correction. It could leave behind its sprawling dependency trees full of micro-libraries, establish software distribution based on relationships of trust, and incorporate the decades of research and innovation established by more serious dependency management systems.
Perhaps Google and Mozilla, leaders in JavaScript standards and implementations, will start developing a real standard library for JavaScript, which makes micro-dependencies like left-pad a thing of the past. This could be combined with a consolidation of efforts, merging micro-libraries into larger packages with a more coherent and holistic scope and purpose, which prune their own dependency trees in turn.
This could be the moment where npm comes to terms with its broken design, and with a well-funded effort (recall that, ultimately, npm is GitHub is Microsoft, market cap $3 trillion USD), will develop and roll out the next generation of package management for JavaScript. It could incorporate the practices developed and proven in Linux distributions, which rarely suffer from these sorts of attacks, by de-coupling development from packaging and distribution, establishing package maintainers who assemble and distribute curated collections of software libraries. By introducing universal signatures for packages of executable code, smaller channels and webs of trust, reproducible builds, and the many other straightforward, obvious techniques used by responsible package managers.
Maybe the ecosystems of other languages that depend on this broken dependency management model, such as Cargo, PyPI, RubyGems, and many more, are watching this incident and know that the very same crisis looms in their future. Maybe they will change course, too, before the inevitable.
Imagine if other large corporations who depend on and profit from this massive pile of recklessly organized software committed their money and resources to it, through putting their engineers to the task of fixing these problems, through coming together to establish and implement new standards, through direct funding of their dependencies and by distributing money through institutions like NLNet, ushering in an era of responsible, sustainable, and secure software development.
This would be a good future, but it’s not the future that lies in wait for us. The future will be more of the same. Expect symbolic gestures – mandatory 2FA will be rolled out in more places, certainly, and the big players will write off meager donations in the name of “OSS security and resilience” in their marketing budgets.
No one will learn their lesson. This has been happening for decades and no one has learned anything from it yet. This is the defining hubris of this generation of software development.
-
- September 16, 2025
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2025-09-16 rss
IDA Plugin Updates on 2025-09-16
New Releases:
Activity:
- ghidra
- ghidra-chinese
- IDA-MCP
- idapatch
- 0b1cf4fb: chore: update readme (tested versions)
- f7ec4c9c: chore: update readme
- 7ab19dc7: feat: add customizable loop max sleep into global settings
- ba6681dc: chore: cleaned up logs - added prefix to all of them
- 4883e2ce: fix: deal with module unloading (tradeoff: no-longer stops looping af…
- 69563cdd: fix linux compatibility
- f6ffe110: feat: threaded loop until all patches are applied
- python-learn
- 65cbe371: .so-related changes
- RealworldFirmware
- 4133f1fe: Update README.md
- tenrec
- 6c35f8d1: Updated version
- 38e3b162: Merge pull request #1 from axelmierczuk/dev-plugin-installer
- 54873b44: Updated plugins in toml
- 4c61b90a: Updated plugins in toml
- d34bfff2: Updated plugins in toml
- 0a16c888: Updated documentation
- 82ea4fdb: Fixed plugin removal in config.py
- 1cbf4ad1: Fixed plugin removal in config.py
- 18e28de8: Fixed plugin removal in config.py
- 1ad95417: Fixed plugin removal in config.py
- a74adaad: Fixed plugin removal in config.py
- wp81IdaDriverAnalyzer
- 1aeb7a2c: wip
-
🔗 r/LocalLLaMA The Qwen of Pain. rss
submitted by /u/-Ellary-
-
🔗 r/reverseengineering smb1-bugfix -- NES Super Mario Bros. disassembly with bugfixes, QoL improvements & more rss
submitted by /u/r_retrohacking_mod2
-
🔗 charmbracelet/crush v0.8.3 release
Watcher nuances, better Gemini, and a Windows bugfix
This release adds further tuning for file watching. It also includes a hotfix for tools in Gemini (thanks @dvcrn!) and a bugfix on Windows.
Thanks for the love and support: you crush. 💘
Changelog
Fixed!
- `e2e952b`: fix: only enable watcher for git repos (#1060) (@raphamorim)
- `a9d3080`: fix(gemini): ensure tool responses have the user role (@dvcrn)
- `9a8574b`: fix: bump fang to v0.4.1 to fix a Windows issue #1041 (@aymanbagabas)

Docs

- `0146161`: docs(readme): add cerebras API key to table (@meowgorithm)
Verifying the artifacts
First, download the `checksums.txt` file, for example, with `wget`:

```
wget 'https://github.com/charmbracelet/crush/releases/download/v0.8.3/checksums.txt'
```

Then, verify it using `cosign`:

```
cosign verify-blob \
  --certificate-identity 'https://github.com/charmbracelet/meta/.github/workflows/goreleaser.yml@refs/heads/main' \
  --certificate-oidc-issuer 'https://token.actions.githubusercontent.com' \
  --cert 'https://github.com/charmbracelet/crush/releases/download/v0.8.3/checksums.txt.pem' \
  --signature 'https://github.com/charmbracelet/crush/releases/download/v0.8.3/checksums.txt.sig' \
  ./checksums.txt
```

If the output is `Verified OK`, you can safely use it to verify the checksums of other artifacts you downloaded from the release using `sha256sum`:

```
sha256sum --ignore-missing -c checksums.txt
```

Done! Your artifacts are now verified!
Thoughts? Questions? We love hearing from you. Feel free to reach out on X, Discord, Slack, The Fediverse, Bluesky.
-
🔗 Confessions of a Code Addict What Makes System Calls Expensive: A Linux Internals Deep Dive rss
Cover: A Flamegraph highlighting performance overhead due to system calls
System calls are how user programs talk to the operating system. They include opening files, reading the current time, creating processes, and more. They're unavoidable, but they're also not cheap.
If you've ever looked at a flame graph, you'll notice system calls often show up as hot spots. Engineers spend a lot of effort cutting them down, and whole features such as io_uring for batching I/O or eBPF for running code inside the kernel exist just to reduce how often programs have to cross into kernel mode.
Why are they so costly? The obvious part is the small bit of kernel code that runs for each call. The bigger cost comes from what happens around it: every transition into the kernel makes the CPU drop its optimizations, flush pipelines, and reset predictor state, then rebuild them again on return. This disruption is what makes system calls much more expensive than they appear in the source code.
In this article, we'll look at what really happens when you make a system call on Linux x86-64. We'll follow the kernel entry and exit path, analyse the direct overheads, and then dig into the indirect microarchitectural side-effects that explain why minimizing system calls is such an important optimization.
CodeRabbit: Free AI Code Reviews in CLI (Sponsored)
CodeRabbit CLI: A Code Review Agent to Review your AI Generated Code
As developers increasingly turn to CLI coding agents like Claude Code for rapid development, a critical gap emerges: who reviews the AI-generated code? CodeRabbit CLI fills this void by delivering senior-level code reviews directly in your terminal, creating a seamless workflow where code generation flows directly into automated validation. Review uncommitted changes, catch AI hallucinations, and get one-click fixes - all without leaving your command line. It's the quality gate that makes autonomous coding truly possible, ensuring every line of AI-generated code meets production standards before it ships.
Background on System Calls
Let's start with a quick overview of system calls. These are routines inside the kernel that provide specific services to user space. They live in the kernel because they need privileged access to registers, instructions, or hardware devices. For example, reading a file from disk requires talking to the disk controller, and creating a new process requires allocating hardware resources. Both are privileged operations, which is why they are system calls.
Calling a system call requires a special mechanism to switch execution from user space to kernel space. On x86-64 this is done using the `syscall` instruction: you place the syscall number in `rax` and the arguments in the registers `rdi`, `rsi`, `rdx`, `r10`, `r8`, `r9`, then invoke `syscall`:

```asm
# set args for calling the read syscall
movq $0, %rax      # syscall number: 0 = read on x86-64 Linux
movq $0, %rdi      # fd 0 (stdin)
movq $buf, %rsi    # destination buffer
movq $size, %rdx   # number of bytes to read
syscall            # we enter the kernel here
movq %rax, %rbx    # save the return value
```
On encountering this instruction, the processor switches to kernel mode and jumps to the registered syscall entry path. The kernel completes the context switch (switching the page tables and stack) and then jumps to the specific syscall implementation.
When the syscall finishes, it places the return value in `rax` and returns. Returning requires another privilege mode switch, reversing everything done on entry: restoring the user page table, stack, and registers.

The following diagram illustrates the sequence of steps required to execute a system call (`read` in this case).

Flow of a `read` system call: user space sets up arguments and invokes `syscall`, control transfers to the kernel entry handler, the kernel executes the system call (`ksys_read`), and then returns control back to user space.

In the figure:
- User space code sets up arguments for the `read` system call.
- It invokes the system call using the `syscall` instruction.
- The instruction switches to kernel mode and enters the syscall entry handler, where the kernel switches to its own page table and stack.
- The kernel then jumps to the implementation of the `read` system call.
- After returning, the kernel restores the user space page table and stack, then control resumes at the next user instruction.
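To observe these steps from user space, here is a minimal C program (an illustration of mine, not from the article) that makes exactly one read crossing; running it as `echo hi | strace -e trace=read ./a.out` shows the single call as the boundary is crossed:

```c
#include <unistd.h>

int main(void) {
    char buf[16];
    /* The libc wrapper loads 0 (SYS_read) into rax, places the three
       arguments in rdi/rsi/rdx, and executes the syscall instruction,
       triggering the entry path described above. */
    ssize_t n = read(0, buf, sizeof buf);
    return n < 0 ? 1 : 0;
}
```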
Now that we have this high-level overview, let's look inside the Linux kernel's syscall handler to understand each step in more detail.
Inside the Linux Syscall Handler
When a system call is invoked, the CPU jumps into the kernel's designated system call handler. The following diagram shows the Linux kernel code for this handler for the x86-64 architecture from the file entry_64.S. In the diagram, you can see the set of steps the kernel needs to perform before it can actually execute the system call. Let's briefly discuss each of these.
Actual x86-64 syscall entry code from the Linux kernel (`entry_64.S`), annotated to show the steps the kernel performs before invoking the system call.

Swapping the GS Register
GS is a segment register in the x86 architecture. In user space it is primarily used for thread-local storage (TLS). In kernel space it holds per-cpu variables, such as a pointer to the currently executing task. So, the first thing that the kernel does is restore the kernel mode value of the GS register.
Switching to Kernel Page Table and Kernel Stack
The Linux kernel has its own page table with mappings for kernel memory pages. To be able to access its memory it must restore this page table, which it does by calling the `SWITCH_TO_KERNEL_CR3` macro.

On x86, the CR3 control register is designated to store the address of the root of the page table. This is why the macro for switching page tables is called `SWITCH_TO_KERNEL_CR3`.

Separately, the kernel has its own fixed-size stack for executing kernel-side code. At this point the `rsp` register still points to the user space stack, so the kernel saves it in a scratch space and then restores its own stack pointer from a per-cpu variable.

When returning from the system call, the kernel restores the user page table and stack by reversing these operations. This code is not shown in the diagram but happens right after the "`call do_syscall_64`" step.

Saving User Space Registers
At this time, the CPU registers still contain the values they had while executing user space code. They would be overwritten as kernel code executes, so the kernel saves their values on the kernel stack. After that, it sanitizes those registers for security. All of this can be seen in boxes 3 and 4 in the diagram.
Mitigations Against Speculative Execution Attacks
The next three steps in the code are:
- Enabling IBRS (indirect branch restricted speculation)
- Untraining the return stack buffer
- Clearing the branch history buffer
These are there to mitigate speculative execution attacks, such as Spectre (v1 and v2) and Retbleed. Speculative execution is an optimization in modern processors where they predict the outcome of branches in the code and speculatively execute instructions along the predicted path. When done accurately, this significantly improves the performance of the code.
However, vulnerabilities have been found where a malicious user program may train the branch predictor in ways that cause the CPU to speculatively execute along attacker‑chosen paths inside the kernel. While these speculative paths do not change the logical flow of kernel execution, they can leak information through microarchitectural side‑channels such as the cache.
These mitigations prevent user‑controlled branch predictor state from influencing speculative execution in the kernel. But they also come at a great performance cost. We will revisit them in detail later, when discussing the impact of system calls on branch prediction.
Executing the System Call and Returning Back to User Space
After all of this setup, the kernel finally calls the function `do_syscall_64`. This is where the actual system call gets invoked. We will not look inside this function because our focus is on performance impact rather than a walkthrough of kernel code.

Once the system call is done, the `do_syscall_64` function returns. The kernel then restores the user space state, including registers, page table, and stack, and returns control back to user space. The following diagram shows the code after the `do_syscall_64` call to highlight this part.

Actual x86-64 syscall exit path code from the Linux kernel (`entry_64.S`), showing how the kernel restores user registers, page tables, and state before returning control to user space.

Now that we have seen all the code the kernel executes to enter and exit a system call, we are ready to discuss the overheads introduced. There are two categories:
- Direct overhead from the code executed on entry and return.
- Indirect overhead from microarchitectural side-effects (e.g. clearing the branch history buffer and return stack buffer).
The major focus of this article is on discussing the indirect overhead induced due to system calls. But before we go any further, let's do a quick benchmark to measure the impact of the direct overheads.
Writing these deep dives takes 100+ hours of work. If you find this valuable and insightful, please consider upgrading to a paid subscription to keep this work alive.
Direct Overhead of System Calls
Direct overhead is largely fixed across all system calls, since each system call must perform the same entry and exit steps. We can do a rough measurement of this overhead with a simple benchmark by comparing the number of cycles taken to execute the clock_gettime system call in the kernel versus executing it in the user space.
The `clock_gettime` system call reads a system clock, such as the realtime clock (seconds since the Unix epoch) or the monotonic clock (seconds since kernel boot). It is very frequently used in software. For example, Java's `System.currentTimeMillis()` and Python's `time.time()` and `time.perf_counter()` use it under the hood.

Because system calls are expensive, Linux provides an optimization called vDSO (virtual dynamic shared object). This is a user-space shortcut for selected system calls where the kernel maps the system call's code into each process's address space so that it can be executed like a normal function call, avoiding kernel entry.

So, we can create a benchmark that measures the time taken to execute `clock_gettime` in user space using the vDSO and compare it against the time taken inside the kernel using the syscall interface. The following code shows the benchmarking program.

```c
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>

int main() {
    const int ITERS = 100000;
    uint32_t cpuid;
    struct timespec ts;

    // Warm up both syscall and libc versions
    for (int i = 0; i < 10000; i++) {
        syscall(SYS_clock_gettime, CLOCK_MONOTONIC, &ts);
        clock_gettime(CLOCK_MONOTONIC, &ts);
    }

    // Test 1: Direct syscall interface
    _mm_lfence();
    uint64_t start1 = __rdtsc();
    long sink1 = 0;
    for (int i = 0; i < ITERS; i++) {
        long ret = syscall(SYS_clock_gettime, CLOCK_MONOTONIC, &ts);
        sink1 += ret + ts.tv_sec + ts.tv_nsec; // use the results to prevent optimization
    }
    uint64_t end1 = __rdtscp(&cpuid);
    _mm_lfence();

    // Test 2: libc clock_gettime
    _mm_lfence();
    uint64_t start2 = __rdtsc();
    long sink2 = 0;
    for (int i = 0; i < ITERS; i++) {
        int ret = clock_gettime(CLOCK_MONOTONIC, &ts);
        sink2 += ret + ts.tv_sec + ts.tv_nsec; // use the results to prevent optimization
    }
    uint64_t end2 = __rdtscp(&cpuid);
    _mm_lfence();

    // Prevent dead-code removal
    if (sink1 == 42 || sink2 == 42) fprintf(stderr, "x\n");

    double cycles_per_syscall = (double)(end1 - start1) / ITERS;
    double cycles_per_libc = (double)(end2 - start2) / ITERS;
    printf("Direct syscall cycles per call ~ %.1f\n", cycles_per_syscall);
    printf("Libc wrapper cycles per call ~ %.1f\n", cycles_per_libc);
    printf("Difference ~ %.1f cycles (%.1f%% %s)\n",
           cycles_per_libc - cycles_per_syscall,
           100.0 * (cycles_per_libc - cycles_per_syscall) / cycles_per_syscall,
           cycles_per_libc > cycles_per_syscall ? "slower" : "faster");
    return 0;
}
```
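To reproduce the numbers below, an ordinary optimized build is enough; assuming the file is saved as clock_gettime_comparison.c, something like `gcc -O2 -o clock_gettime_comparison clock_gettime_comparison.c` works on an x86-64 Linux machine.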
A note on rdtsc: Normally, you would use `clock_gettime()` to measure timings. But here we are benchmarking `clock_gettime()` itself, so we need something more precise. `rdtsc` is an x86 instruction that reads the value of a 64‑bit timestamp counter (TSC) in the CPU. This counter ticks at a fixed frequency (e.g. 2.3 GHz on my machine). By measuring its value before and after, we can know how many cycles an operation took.

The program produces the following output on my laptop:
```
➜ ./clock_gettime_comparison
Direct syscall cycles per call ~ 1428.8
Libc wrapper cycles per call ~ 157.0
Difference ~ -1271.9 cycles (-89.0% faster)
```
The vDSO version is an order of magnitude faster, showing how costly the syscall entry/exit path is compared to a plain function call.
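To put the cycle counts in time units: at the 2.3 GHz TSC frequency mentioned in the rdtsc note above, 1428.8 cycles work out to roughly 1428.8 / 2.3 ≈ 621 ns per syscall, versus about 157 / 2.3 ≈ 68 ns for the vDSO path.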
We should take this estimate with a grain of salt because in the benchmark we are measuring inside a loop, and the performance of the loop itself can suffer from the indirect side-effects of entering and exiting the kernel, which is our next topic.
While this benchmark isolates direct overhead, real‑world performance also suffers from indirect costs due to CPU microarchitectural effects. Let's explore those next.
Indirect Overhead of System Calls
System calls also incur indirect costs, because the kernel's entry path disturbs the CPU's microarchitectural state. Losing this state can introduce a transient degradation in the performance of the user space code after the call returns.
At the microarchitecture level, the CPU implements several optimizations such as instruction pipelining, superscalar execution and branch prediction. These are designed to improve the instruction throughput of the program, i.e., how many instructions the CPU can execute each cycle. A higher throughput means faster program execution.
It can take a few cycles for the CPU to get to a steady state where these optimizations start to pay off, but making system calls can lead to the loss of this state and a drop in the performance of the program.
We will cover the indirect costs of system calls by discussing the different components of the microarchitecture that are impacted, starting from the instruction pipeline, followed by the branch predictor buffers.
Effect on the Instruction Pipeline
We didn't see any code in the Linux kernel that touches the instruction pipeline; rather, this is done by the CPU itself. Before switching to kernel mode, the CPU drains the instruction pipeline to ensure that the user space code does not interfere when the kernel code executes. This impacts the performance of the user space code when the system call returns. To understand how, we need to revisit the basics of instruction pipelining.
CPUs have multiple execution resources, such as registers, execution units, and load and store buffers. To use all of these effectively, the CPU must execute multiple program instructions in parallel; this is made possible through instruction pipelining and superscalar architecture.
Instruction pipelining breaks down the execution of an instruction into several stages, like the assembly pipeline in a factory. An instruction moves from one stage to the next in each CPU cycle, enabling the CPU to start executing one new instruction each cycle.
For example, the following diagram shows a 5-stage pipeline. You can see that it takes five cycles for the pipeline to fill completely and for the first instruction to retire. After this point, the pipeline is in a steady state and can provide a throughput of one instruction per cycle. This is a very simplistic example; modern x86 processors have much deeper pipelines, e.g. 20-30 stages.
Example of a simple 5-stage instruction pipeline (Fetch, Decode, Memory Read, ALU, Memory Write), showing how multiple instructions overlap in execution across cycles.
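The throughput benefit compounds: in steady state, a k-stage pipeline finishes N independent instructions in roughly N + k - 1 cycles instead of N·k. For the 5-stage example, 100 instructions take about 104 cycles rather than 500, approaching the ideal rate of one instruction per cycle.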
Modern processors are also superscalar. They have multiple such pipelines to issue and execute multiple new instructions each cycle. For example, a 4-wide processor can start executing up to 4 new instructions each cycle and it can retire up to 4 new instructions each cycle. If such a CPU has a pipeline depth of 20, then it can have up to 80 instructions in flight in a steady state.
This means that the processor is normally busy executing dozens of user-space instructions in parallel. But when a system call occurs, the CPU must first ensure all pending user instructions finish before it can jump into the kernel.
So, when the system call returns to user space, you can imagine that the instruction pipeline is almost empty, because the CPU did not allow the instructions following `syscall` to enter the pipeline. At this point the pipeline has to start almost from scratch, and it can take a while until it reaches a steady throughput again.
Contrast this with the scenario where no system call occurs: the CPU remains in its steady state, pipelines stay full, and instruction throughput stays high. In other words, a single system call can derail the momentum of dozens of in‑flight instructions.
On x86-64, the syscall instruction is used to execute a system call. The Intel manual has this note about it:
"Instruction ordering: Instructions following a SYSCALL may be fetched from memory before earlier instructions complete execution, but they will not execute (even speculatively) until all instructions prior to the SYSCALL have completed execution (the later instructions may execute before data stored by the earlier instructions have become globally visible)."
This confirms that the CPU drains the pipeline before transferring control to the kernel.
Effect on Branch Prediction
The next major indirect impact system calls have on user space performance is through the clearing of the branch predictor buffers. These can be grouped as three mitigations the kernel applies that we saw in the kernel code above.
- Clearing the branch history buffer
- Untraining the return stack buffer
- Enabling/disabling the IBRS
The first two of these have a profound indirect impact on user code performance. The enabling/disabling of IBRS does not impact user space performance; rather, it only adds a direct overhead to syscall execution. However, I will discuss it here because logically it goes with the topic of branch prediction. In this section, we will first review branch prediction and then talk about each of these.
Understanding Branch Prediction
Instruction pipelining and superscalar execution enable CPUs to execute multiple instructions in parallel, and they execute these instructions out-of-order.
When the CPU comes across a branching instruction, such as an if condition, it may not know the result of the condition yet, because the instructions that compute it may still be executing. If the CPU waited for those instructions to finish to learn the branch outcome, the pipeline could stall for a long time, which means poor performance.
To optimize this, CPUs come with a feature called the branch predictor that predicts the target address of these branches based on past branching patterns. This enables the CPU to speculatively execute instructions from the predicted address and stay busy. If the prediction turns out to be correct, the CPU saves a lot of cycles and instruction throughput remains high.
However, when the prediction is wrong, the CPU has to discard the results of these speculatively executed instructions, flush the instruction pipeline, and fetch the instructions from the right address. This can cost 20-30 cycles on modern CPUs (depending on the depth of the pipeline).
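To feel this cost directly, here is a small self-contained C sketch (my own illustration, in the spirit of the benchmark above): it sums elements above a threshold over random data, then over sorted data. With random data the branch mispredicts roughly half the time; with sorted data it becomes predictable. Depending on your compiler, which may emit branchless code at high optimization levels, the gap can be large or vanish:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>

#define N 1000000

/* The if() below is the branch whose predictability we vary. */
static uint64_t sum_above(const uint8_t *a, int n) {
    uint64_t s = 0;
    for (int i = 0; i < n; i++)
        if (a[i] >= 128) s += a[i];
    return s;
}

static int cmp(const void *x, const void *y) {
    return *(const uint8_t *)x - *(const uint8_t *)y;
}

int main(void) {
    static uint8_t a[N];
    for (int i = 0; i < N; i++) a[i] = rand() & 0xff;

    uint64_t t0 = __rdtsc();
    volatile uint64_t s1 = sum_above(a, N);   /* random order */
    uint64_t t1 = __rdtsc();

    qsort(a, N, 1, cmp);                      /* make the branch predictable */

    uint64_t t2 = __rdtsc();
    volatile uint64_t s2 = sum_above(a, N);   /* sorted order */
    uint64_t t3 = __rdtsc();

    printf("random: %llu cycles, sorted: %llu cycles (sums %llu/%llu)\n",
           (unsigned long long)(t1 - t0), (unsigned long long)(t3 - t2),
           (unsigned long long)s1, (unsigned long long)s2);
    return 0;
}
```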
Clearing the Branch History Buffer
We saw in the kernel code that it invokes the macro `CLEAR_BRANCH_HISTORY`, which clears the branch history buffer (BHB).

The BHB is a buffer in the branch predictor that learns branching history patterns at a global level. This helps the branch predictor predict the outcomes of deeply nested and complex branching patterns more accurately. You can think of it as remembering the last few intersections you passed to better predict where you'll turn next.
But it can take a while for the BHB to collect enough history for the branch predictor to generate accurate predictions. So, whenever you execute a system call in your code, if the kernel clears the BHB, you lose all that state. As a result, your user space code may experience an increased rate of branch mispredictions after returning from the system call. This can significantly degrade the performance of user space applications.
Note on recent CPUs: This clearance of BHB was added to the kernel as a mitigation against speculative execution attacks, such as Spectre V2. In recent years, CPU vendors have introduced hardware mitigations which obviate the need for the kernel to clear the BHB. For example, the Intel advisory says that if your CPU comes with the "enhanced IBRS" (we discuss IBRS below) feature, then there is no need to clear the BHB. So, not all CPUs suffer degraded performance due to this.
If you want to check whether your kernel clears the BHB, you can check the lscpu output. If you see "`BHI SW loop`" in the vulnerabilities section, it means that the kernel clears the BHB during system calls.

Also, if you believe that you will never execute untrusted code, you can manually disable the mitigation through a boot-time flag.
Untraining the Return Stack Buffer
Next in line is the untraining of the return stack buffer (RSB). The RSB is another buffer in the branch predictor that is used to predict the return address of function calls.
But why does it need to predict the return address? It again comes down to out-of-order execution. The CPU may want to execute the return instruction even though other instructions of the function may still be executing. At this point, the CPU does not know the return address. The return address is stored on the process's stack memory, but accessing memory is slow. So, the CPU uses the RSB to predict the return address.
On every function call, the CPU pushes the return address onto the RSB. While executing the return instruction, the CPU pops this buffer and jumps to that address. Because this buffer is right in the CPU, it is very fast to access.
However, this also led to vulnerabilities such as Retbleed. In this attack, carefully chosen user‑space code could influence how the CPU predicted kernel return addresses, so that the CPU speculatively executed instructions at the wrong place inside the kernel. While this speculative execution did not change the actual kernel logic, it could leak information through side‑channels. To prevent this, the kernel untrains the RSB on entering the kernel.
Untraining the RSB impacts the performance of the user space code when the system call returns, because the RSB no longer has its trained state. Without a trained RSB, the CPU falls back to a slower indirect branch predictor which may have a higher chance of making a mistake.
Note on CPUs impacted: The kernel does not clear the RSB on all CPU models. The vulnerabilities that require clearing the RSB (Retbleed and SRSO) have only been known to impact AMD CPUs. Also, if your CPU has hardware mitigations, such as enhanced IBRS, then the kernel does not perform this step (the `UNTRAIN_RET` macro becomes a no-op on such devices).

Again, the kernel allows you to disable the mitigation, but do this only when you are sure that you will never run untrusted code.
IBRS Entry and Exit
Finally, let's talk about indirect branch restricted speculation (IBRS). We saw that the kernel executes `IBRS_ENTER` on entering the syscall and `IBRS_EXIT` while returning. So, what is IBRS and what is its impact on performance?

IBRS is a hardware feature which restricts the indirect branch predictor when executing in kernel mode. Effectively, it prevents user space training of the indirect branch predictor from having any effect on indirect branch prediction inside the kernel.
Indirect branches are those branches in code where the target address is not part of the instruction but is known only at runtime. A common example is calling through a function pointer in C (e.g., `(*fp)()`), where the actual target depends on which function the pointer holds at that moment. Another example is a virtual function call in C++, or a jump table generated for a large switch statement. In all these cases, the CPU can use the indirect branch predictor to guess the likely target address based on past branching history.

When Spectre and related vulnerabilities were found, one of the attack vectors involved tricking the CPU into mispredicting indirect branch targets inside the kernel. By influencing the branch predictor state from user space, attackers could cause the CPU to speculatively execute instructions at unintended locations in the kernel, which could leak sensitive kernel data through side-channels such as the cache.
The mitigation for this attack is to restrict the indirect branch predictor when executing in kernel mode via the IBRS mechanism. Enabling and disabling IBRS itself doesn't have any impact on the performance of the user space code, but the act of executing extra instructions to do this during each system call adds overhead.
However, recent CPUs have a feature called enhanced IBRS which automatically enables IBRS when switching to kernel mode. On such devices, the `IBRS_ENTER` and `IBRS_EXIT` macros in the kernel become no-ops.
Together, these mitigations explain why the indirect cost of system calls can vary significantly across CPU generations and configurations. In practice, this means a single system call can not only drain the pipeline but also leave the branch predictor partially blind, forcing the CPU to relearn patterns and slowing down your code until it recovers.

The important point is that the true cost of a system call is not just the handful of instructions executed in the kernel, but also the disruption it causes to the CPU's optimizations. This makes system calls far more expensive than they look on the surface, and it is why minimizing them can be such a powerful optimization strategy. That said, CPU vendors are slowly adding hardware mitigations that make these software-based mitigations obsolete and reduce the performance overheads.
Practical Ways to Reduce System Calls
So what can you do as a developer? A few practical ideas:
- Use vDSO: For calls like `clock_gettime`, prefer the vDSO path to avoid kernel entry.
- Cache cheap values: Some values obtained through system calls rarely change during a program's lifetime. If you can safely cache them once and reuse them, you can avoid repeated system calls (see the first sketch after this list).
- Optimize I/O system calls: There are various strategies and patterns that you can use to optimize I/O-related system calls. For example:
  - Prefer buffered I/O instead of raw read/write system calls.
  - Use scatter/gather operations like `readv`/`writev` to batch multiple buffers (see the second sketch after this list).
  - If your system allows, use `mmap` instead of repeated read/write calls.
- Batch operations: Interfaces like io_uring let you submit many I/O requests to a shared queue in user space, which the kernel can then process in batches. This reduces the number of times your program needs to cross into the kernel.
- Push work into the kernel: With eBPF it is increasingly possible to move parts of application logic into the kernel itself. Beyond traditional use cases like packet filtering, newer frameworks let you offload tasks such as policy enforcement, monitoring, and even parts of data processing. In these cases, instead of making repeated system calls, the user program loads small programs into the kernel that run directly when events occur, avoiding crossings altogether.
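Two minimal sketches of the ideas above, both my own illustrations rather than code from the article. First, caching a syscall-backed value that never changes while the process runs (`my_page_size` is a hypothetical helper name):

```c
#include <unistd.h>

/* Fetch the page size via sysconf once (syscall-backed on first use),
   then serve every later caller from the cached copy; the benign race
   of two threads writing the same value is harmless here. */
long my_page_size(void) {
    static long cached = 0;            /* 0 means "not fetched yet" */
    if (cached == 0)
        cached = sysconf(_SC_PAGESIZE);
    return cached;
}
```

Second, gathering several buffers into one `writev` call, one kernel crossing instead of three separate `write` calls (fd 1, stdout, is assumed):

```c
#include <string.h>
#include <sys/uio.h>

int main(void) {
    const char *parts[] = { "one ", "two ", "three\n" };
    struct iovec iov[3];
    for (int i = 0; i < 3; i++) {
        iov[i].iov_base = (void *)parts[i];
        iov[i].iov_len  = strlen(parts[i]);
    }
    /* A single syscall writes all three buffers in order. */
    return writev(1, iov, 3) < 0 ? 1 : 0;
}
```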
None of these tricks are magic, but they all follow the same principle: fewer crossings means less disruption. Every time you avoid a system call, you're saving not just a function call into the kernel, but also the hidden costs of the CPU recovering its state.
Wrapping Up
We've gone through a lot of detail for what looks like just a small stretch of kernel code. The point is simple: the cost of a system call goes beyond the small number of instructions that execute in the kernel. It disrupts the CPU's rhythm by draining pipelines, resetting predictors, and forcing everything to start fresh. That's why they show up as hot spots in profiles and why people try so hard to avoid them.
The strategies we looked at earlier (vDSO, caching, optimizing I/O, batching with io_uring, and pushing work into the kernel) are all ways to cut down on this disruption. They won't remove the cost of system calls entirely, but they can make the difference between code that spends most of its time waiting on the kernel and code that keeps the CPU running at full speed.
System calls are the interface to the kernel and the hardware. They are necessary, but they come at a cost. Understanding and managing that cost is a key part of writing faster software.
If you read till here, there is a good chance you find this insightful. This work is supported by readers such as you. Consider becoming a paid subscriber to keep this going.
-
-
🔗 @binaryninja@infosec.exchange Our annual RE survey closes in 24 hours and we want to hear from you! Don’t mastodon
Our annual RE survey closes in 24 hours and we want to hear from you! Don’t miss your chance to win FREE Binja licenses, a ticket to RE//verse, and more! https://binary.ninja/survey/
-
🔗 r/wiesbaden Ringkirche Chuck Ragan rss
Hello everyone! Unfortunately I'm too late and couldn't get a ticket anymore for the Chuck Ragan concert on 05.12. If anyone has a spare one, please get in touch. I'd be glad if I can still get one :) Many thanks in advance and have a nice evening!
submitted by /u/Hejel_oder_wat
-
🔗 News Minimalist 🐢 Rolling Stone owner sues Google + 9 more stories rss
In the last 3 days ChatGPT read 84973 top news stories. After removing previously covered events, there are 10 articles with a significance score over 5.9.
[5.5] Rolling Stone, Billboard owner Penske sues Google over AI Overviews — usnews.com (+23)
Penske Media, owner of Rolling Stone and Variety, sued Google, alleging its AI Overviews use its journalism without consent, reducing website traffic and hurting the publisher's revenue.
Marking the first such lawsuit by a major U.S. publisher, Penske alleges Google leverages its search dominance to use content without payment. The publisher reports its affiliate revenue has dropped by over a third.
Google called the lawsuit meritless, stating AI Overviews improve search and send traffic to websites. The legal action comes as other AI companies are signing licensing deals with news organizations.
[5.8] UK and US partner on modular nuclear reactor projects in Britain — theguardian.com (+15)
The UK and US announced several deals to build modular nuclear reactors in Britain, a partnership intended to enhance energy security, create jobs, and streamline regulatory safety approvals.
The agreements include a plan for Centrica and X-energy to build 12 reactors in Hartlepool, creating 2,500 jobs. The UK and US will also accept each other's safety checks, halving licensing times.
Other ventures involve nuclear-powered data centers and ports. The broader partnership aims to support AI growth and end reliance on Russian nuclear material by the close of 2028.
Highly covered news with significance over 5.5
[6.5] US and China agree on framework for TikTok's US operations ownership — bbc.com (+177)
[6.2] US Army deploys Typhon missile system to Japan for the first time — twz.com (+11)
[6.0] Australia and Papua New Guinea to integrate defense forces under new security pact — apnews.com (+14)
[5.8] AI can now predict who will go blind, years before doctors can — sciencedaily.com (+3)
[5.6] Chinese migration into Siberia grows under new visa-free travel agreement — rbc.ua (Ukrainian) (+2)
[5.7] Gestational diabetes linked to cognitive decline in mothers, increased risk of neurodevelopmental disorders in children — medicalxpress.com (+4)
[5.5] Alphabet's market capitalization surpasses $3 trillion after favorable antitrust ruling — euronews.com (+17)
[6.4] UN predicts ozone layer recovery by mid-century — letemps.ch (French) (+7)
Thanks for reading!
— Vadim
You can set up and personalize your own newsletter like this with premium.
-
🔗 benji.dog rss
I think I have all the pieces set up and Umbrella is ready to go. It is a simple 11ty starter site that has IndieAuth, Micropub, and Webmentions built in using Netlify functions.
I've been using all of these pieces on my site for a bit but I wanted to put them all together in one place just in case it could help others start their own site.
-
🔗 r/wiesbaden Looking For Movie Stores rss
Hello. I’m looking to buy more secondhand dvds. What are some good stores to check out? I’ve been to Oxfam already.
submitted by /u/RacingBlind
[link] [comments] -
🔗 r/wiesbaden The Wiesbaden "Feuerlöschpolizei" (fire-fighting police) in January 1939 rss
submitted by /u/Transportschaden
-
🔗 r/LocalLLaMA I bought a modded 4090 48GB in Shenzhen. This is my story. rss
A few years ago, before ChatGPT became popular, I managed to score a Tesla P40 on eBay for around $150 shipped. With a few tweaks, I installed it in a Supermicro chassis. At the time, I was mostly working on video compression and simulation. It worked, but the card consistently climbed to 85°C. When DeepSeek was released, I was impressed and installed Ollama in a container. With 24GB of VRAM, it worked, but slowly. After trying Stable Diffusion, it became clear that an upgrade was necessary.

The main issue was finding a modern GPU that could actually fit in the server chassis. Standard 4090/5090 cards are designed for desktops: they're too large, and the power plug is inconveniently placed on top. After watching the LTT video featuring a modded 4090 with 48GB (and a follow-up from Gamers Nexus), I started searching the only place I knew might have one: Alibaba.com. I contacted a seller and got a quote: CNY 22,900. Pricey, but cheaper than expected. However, Alibaba enforces VAT collection, and I've had bad experiences with DHL; there was a non-zero chance I'd be charged twice for taxes. I was already over €700 in taxes and fees. Just for fun, I checked Trip.com and realized that for the same amount of money, I could fly to Hong Kong and back, with a few days to explore. After confirming with the seller that they'd meet me at their business location, I booked a flight and an Airbnb in Hong Kong.

For context, I don't speak Chinese at all. Finding the place using a Chinese address was tricky. Google Maps is useless in China, Apple Maps gave some clues, and Baidu Maps was beyond my skill level. With a little help from DeepSeek, I decoded the address and located the place in an industrial estate outside the city center. Thanks to Shenzhen's extensive metro network, I didn't need a taxi.

After arriving, the manager congratulated me for being the first foreigner to find them unassisted. I was given the card from a large batch; they're clearly producing these in volume at a factory elsewhere in town (I was proudly shown videos of the assembly line). I asked them to retest the card so I could verify its authenticity. During the office tour, it was clear that their next frontier is repurposing old mining cards. I saw a large collection of NVIDIA Ampere mining GPUs. I was also told that modded 5090s with over 96GB of VRAM are in development. After the test was completed, I paid in cash (a lot of banknotes!) and returned to Hong Kong with my new purchase.

submitted by /u/king_priam_of_Troy
[link] [comments]
-
🔗 charmbracelet/crush v0.8.2 release
“Too many files open?” Not anymore.
Our past release included improvements to LSP support. Some users notified us about a regression where Crush would error with "too many files open". This was happening in particular when opening Crush inside a directory with many files, such as a big monorepo or $HOME. We've shipped a fix for it, but we know it can still happen in some scenarios. The next release will likely include further enhancements in this area. Thanks for your awesome support!
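If you want to check whether you're close to this class of error yourself, the per-process limit on open file descriptors is easy to inspect and, for the current shell, raise on Unix-like systems (the value below is illustrative):
ulimit -n        # show the current soft limit on open file descriptors
ulimit -n 8192   # raise it for this shell session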
Changelog
Fixed!
- 9a132dc: fix: introduce notify ignore files (@raphamorim)
- de3d46b: fix: make the limit really high on non-unix (@raphamorim)
- fd6b617: fix: remove ulimt as go 1.19 automatically raises file descriptors (@raphamorim)
- d401aa3: fix: request MaximizeOpenFileLimit for unix (@raphamorim)
- 8cae314: fix(lint): windows number formatting (@raphamorim)
- 7ff6ba9: refactor: make func unexported (@andreynering)
- 18ea1c9: fix: handle ctx cancel event (@vadiminshakov)
Other stuff
- 6ec5a77: chore(devs): update github.com/raphamorim/notify to v0.9.4 (@andreynering)
Verifying the artifacts
First, download the checksums.txt file, for example, with wget:
wget 'https://github.com/charmbracelet/crush/releases/download/v0.8.2/checksums.txt'
Then, verify it using cosign:
cosign verify-blob \
  --certificate-identity 'https://github.com/charmbracelet/meta/.github/workflows/goreleaser.yml@refs/heads/main' \
  --certificate-oidc-issuer 'https://token.actions.githubusercontent.com' \
  --cert 'https://github.com/charmbracelet/crush/releases/download/v0.8.2/checksums.txt.pem' \
  --signature 'https://github.com/charmbracelet/crush/releases/download/v0.8.2/checksums.txt.sig' \
  ./checksums.txt
If the output is Verified OK, you can safely use it to verify the checksums of other artifacts you downloaded from the release using sha256sum:
sha256sum --ignore-missing -c checksums.txt
Done! Your artifacts are now verified!
Thoughts? Questions? We love hearing from you. Feel free to reach out on X, Discord, Slack, The Fediverse, Bluesky.
-
🔗 @cxiao@infosec.exchange It's pretty ridiculous that people are being investigated and fired for just mastodon
It's pretty ridiculous that people are being investigated and fired for just saying, in plain language, who Charlie Kirk was and what he believed
"Charlie Kirk Didn’t Shy Away From Who He Was. We Shouldn’t Either."
-
- September 15, 2025
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2025-09-15 rss
IDA Plugin Updates on 2025-09-15
Activity:
- IDA-MCP
- abdc4815: Merge branch 'master' of https://github.com/jelasin/IDA-MCP
- 45453d64: update req
- tenrec
- wp81IdaDriverAnalyzer
- 0daa11a9: wip
-
🔗 pydantic/pydantic-ai v1.0.7 (2025-09-15) release
What's Changed
- Added MCP metadata and annotations to ToolDefinition.metadata for use in filtering by @ChuckJonas in #2880
- When starting run with message history ending in ModelRequest, make its content available in RunContext.prompt by @DouweM in #2891
- Let FunctionToolset take default values for strict, sequential, requires_approval, metadata by @DouweM in #2909
- Don't require mcp or logfire to use Temporal or DBOS by @DouweM in #2908
- Combine consecutive AG-UI user and assistant messages into the same model request/response by @DouweM in #2912
- Fix new_messages() when deferred_tool_results is used with message_history ending in ToolReturnParts by @DouweM in #2913
Full Changelog :
v1.0.6...v1.0.7
-
🔗 ryoppippi/ccusage v16.2.5 release
🚀 Features
- Make logger injectable in ResponsiveTable class - by @ryoppippi in #628 (8c2c9)
View changes on GitHub
-
🔗 organicmaps/organicmaps 2025.09.15-18-android release
• OSM data as of September 13
• Show Postcode/zip code for addresses
• New roundabout icons in Android Auto
• Removed minor islands from the World map
• Fixed wrong map centering around the current position
• Bookmark colors are preserved in GPX
• Lighting shops
• Draw national park borders
• Draw archaeological sites from zoom 12 in outdoor mode
• Show campsites and caravan sites in navigation mode
• Fixed secondary highway color in navigation mode
…more at omaps.org/news
See a detailed announcement on our website when app updates are published in all stores.
You can get automatic app updates from GitHub using Obtainium.
sha256sum:
60978fdb7ebe49b606e55cac8eb71de50f4c3e8db89addd969bd2de5fb811a66 OrganicMaps-25091518-web-release.apk
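One way to check the downloaded APK against that checksum on Linux, run from the download directory:
echo '60978fdb7ebe49b606e55cac8eb71de50f4c3e8db89addd969bd2de5fb811a66  OrganicMaps-25091518-web-release.apk' | sha256sum -c -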
-
🔗 pixelspark/sushitrain v2.1.55 release
No content.
-
🔗 pixelspark/sushitrain v2.1.56 release
No content.
-
🔗 r/wiesbaden The old Kurhaus in Wiesbaden rss
What do you think went on there back in the day? I find the old pictures of it really interesting.
The first two photos were taken around 1900; the last one is from 1871
submitted by /u/Cherrywaterx
[link] [comments] -
🔗 r/wiesbaden Westend - worried parents rss
I'm moving to Wiesbaden's Westend soon. My parents are worried because it's supposedly an "unsafe neighborhood". How can I put their minds at ease?
submitted by /u/Old-Bus-6698
[link] [comments] -
🔗 sacha chua :: living an awesome life 2025-09-15 Emacs news rss
There were lots of Emacs-related discussions on Hacker News, mainly because of two posts about Emacs's extensibility: an Org Mode example and a completion example. This guide on connecting to databases from Org Mode Babel blocks started a few discussions. There were a couple of Emacs Carnival posts on obscure packages, too. Also, if you want to present at EmacsConf 2025 (great way to meet like-minded folks), please send your proposal in by this Friday (Sept 19). Thanks!
- Upcoming events (iCal file, Org):
- London Emacs (in person): M-x drinks https://www.meetup.com/london-emacs-hacking/events/310874314/ Tue Sep 16 1800 Europe/London
- M-x Research: TBA https://m-x-research.github.io/ Wed Sep 17 0800 America/Vancouver - 1000 America/Chicago - 1100 America/Toronto - 1500 Etc/GMT - 1700 Europe/Berlin - 2030 Asia/Kolkata - 2300 Asia/Singapore
- Emacs Berlin (hybrid, in English) https://emacs-berlin.org/ Wed Sep 24 0930 America/Vancouver - 1130 America/Chicago - 1230 America/Toronto - 1630 Etc/GMT - 1830 Europe/Berlin - 2200 Asia/Kolkata – Thu Sep 25 0030 Asia/Singapore
- Emacs APAC: Emacs APAC meetup (virtual) https://emacs-apac.gitlab.io/announcements/ Sat Sep 27 0130 America/Vancouver - 0330 America/Chicago - 0430 America/Toronto - 0830 Etc/GMT - 1030 Europe/Berlin - 1400 Asia/Kolkata - 1630 Asia/Singapore
- Emacs meetup in person in Novosibirsk, Russia ichernyshovvv@gmail.com - Sept 27 20:00 - 23:00 Asia/Novosibirsk
- Emacs configuration:
- dandrake/casual-eww: Emacs transient "casual" menu for the emacs web browser eww (@ddrake@mathstodon.xyz)
- Emacs workflow ideas: How I use Hydra in a minor mode (15:50, Reddit)
- ZeniesQis/Emacsn: Installation and multiple configuration management for Emacs - Codeberg.org (@Zenie@piaille.fr)
- Everything with Emacs (@jnpn@mastodon.social) - vianney.leboutellier's Emacs configuration
- meow-edit/meow: Yet another modal editing on Emacs / 猫态编辑 (recent discussion on HN)
- Emacs Lisp:
- Sharing some thoughts on page-ext and building your own alternatives to packages (Reddit)
- Listful Andrew: Mondo — World's scripts, languages, places in any language (Emacs package)
- Listful Andrew: Vexil — Create flags of countries and subdivisions (Emacs package)
- Sacha Chua: Emacs and dom.el: quick notes on parsing HTML and turning DOMs back into HTML
- Ep699 Emacs Lisp defvar-keymap, define-keymap , part 2 (01:01:15)
- [12] Emacs Reader's Development: Working on Partial & Tiled Rendering (Part III) - 9/14/2025, 2:31:27 PM - Dyne.org TV
- How to Create and Test an Emacs Tree-Sitter Major Mode (emacs-devel)
- Appearance:
- Navigation:
- Writing:
- Powerful emacs hacks: patching markdown-mode (HN) - getting images from a different directory
- Powerful emacs hacks: paste images to markdown from clipboard
- Irreal: Semantic Line Breaks
- Org Mode:
- Emacs: a paradigm shift (Reddit, HN, HN) - automatically sorting in Org Mode
- Jeremy Friesen: Dynamic Org-Mode Block to List Books Read
- (Update) org-supertag 5.1: Implement Interactive Schema View for Tag Management (Github)
- Using Emacs Org-Mode With Databases: A getting-started guide (Reddit, HN, lobste.rs)
- RangHo/svelte-preprocess-org: Svelte preprocessor for Org, the most superior markup language. (Reddit)
- Sync Emacs Org Agenda and Google Calendar with the Org-Gcal Package (06:30)
- Generating a website/blog from Org files with Hakyll
- Org Social Preview Generator
- Coding:
- dotemacs/lisp-comment-dwim.el: Do What I Mean mode for toggling comments in Lisp (@dotemacs@mastodon.xyz)
- tomekw/doom-ada: Doom Emacs Ada language module with syntax highlighting, LSP and Alire support (HN) - also usable from vanilla Emacs
- Elisp For Clojure Developers
- Clojuredocs.el from within emacs
- A baseline indent rule for tree-sitter major modes (Reddit)
- Integrate Emacs and Jira with Ejira3
- James Dyer: Debugging Software Breakage with Git Stash and Emacs
- Magit 4.4, Forge 0.6, Ghub 5.0 and Transient 0.10 released
- Shells:
- Web:
- Doom Emacs:
- Fun:
- AI:
- Community:
- Other:
- Tip about using M-x decipher to solve cryptoquotes (single alphabet substitution ciphers)
- Improved emacsclient-wrapper.sh. to be used as $VISUAL, $EDITOR
- buffer-terminator.el: Safely terminate Emacs buffers automatically to enhance performance and reduce clutter in the buffer list (Release 1.2.0) (Reddit)
- Announcing Numeri - an Emacs package for Roman number translation (Reddit, Irreal)
- stripspace.el: Ensure Emacs Automatically removes trailing whitespace before saving a buffer (Release 1.0.2) (Reddit)
- arkoinad/autopaste: Autopaste clipboard to current buffer - polls the MacOS clipboard with pbpaste
- September Emacs Carnival: Obscure Packages
- Emacs development:
- New packages:
- async1: Unroll chain of async callbacks, parallel and sequential (MELPA)
- blue: BLUE build system interface (MELPA)
- consult-gh-nerd-icons: Nerd icons Integration for consult-gh (MELPA)
- doxymacs: Emacs integration with Doxygen (MELPA)
- meep: Lightweight modal editing (MELPA)
- quick-fasd: Integration for the command-line tool `fasd' (MELPA)
- thanks: Say thanks to the authors of all your installed packages (MELPA)
- verse-mode: Major mode for Verse (MELPA)
Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, r/planetemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!
You can comment on Mastodon or e-mail me at sacha@sachachua.com.
- Upcoming events (iCal file, Org):
-
🔗 MetaBrainz You are invited to MetaBrainz Summit 25 rss
MetaBrainz Summit 25 is upon us! September 15-19 in Barcelona, Spain.
We would love for you to join us remotely. If you are reading this, you are qualified to attend. Congratulations! Read on for more information.
You can join us on Zoom or watch us on YouTube.
- Click here for daily stream links
- All chat is on the Development/MetaBrainz channel in ChatBrainz
You are also invited to add agenda topic suggestions to the wiki, if there is something you would like to see discussed (or leave them in the post comments and some kind person (me) will probably move them to the wiki for you!)
-
🔗 r/wiesbaden Vinyl flea market rss
Record store pop-up Vol. 4 at the Budiker in Mainz. With pizza, waffles, drinks & vinyl!
submitted by /u/EmploymentUnique2066
[link] [comments] -
🔗 r/wiesbaden The Marktkirche in the best light rss
submitted by /u/SchoeneBunteKnete
[link] [comments] -
🔗 r/wiesbaden Where to report speeding cars? rss
Hello everyone,
I live in the lovely Biebrich district and every day, including evenings and nights, I have to watch cars with very high speeds and fat exhausts tear through the residential area….
The adjacent street here is even a 30 km/h zone (school, kindergarten, etc.)
Which authority can I report this to so that something might finally be done about it?
Kind regards
submitted by /u/xoffxwhite
[link] [comments] -
🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.
submitted by /u/AutoModerator
[link] [comments] -
🔗 r/LocalLLaMA Update: we got our revenge and now beat Deepmind, Microsoft, Zhipu AI and Alibaba rss
Three weeks ago we open-sourced our agent that uses mobile apps like a human. At that moment, we were #2 on AndroidWorld (behind Zhipu AI).
Since then, we've worked hard and improved the agent's performance: we're now officially #1 on the AndroidWorld leaderboard, surpassing DeepMind, Microsoft Research, Zhipu AI and Alibaba.
It handles mobile tasks: booking rides, ordering food, navigating apps, just like a human would. Still working on improvements and building an RL gym for fine-tuning :)
The agent is completely open-source: github.com/minitap-ai/mobile-use
What mobile tasks would you want an AI agent to handle for you? Always looking for feedback and contributors!
submitted by /u/Connect-Employ-4708
[link] [comments] -
🔗 r/LocalLLaMA Completed 8xAMD MI50 - 256GB VRAM + 256GB RAM rig for $3k rss
Hello everyone,
A few months ago I posted about how I was able to purchase 4xMI50 for $600 and run them using my consumer PC. Each GPU could run at PCIE3.0 x4 speed and my consumer PC did not have enough PCIE lanes to support more than 6x GPUs. My final goal was to run all 8 GPUs at proper PCIE4.0 x16 speed. I was finally able to complete my setup.
Cost breakdown:
- ASRock ROMED8-2T Motherboard with 8x32GB DDR4 3200Mhz and AMD Epyc 7532 CPU (32 cores), dynatron 2U heatsink - $1000
- 6xMI50 and 2xMI60 - $1500
- 10x blower fans (all for $60), 1300W PSU ($120) + 850W PSU (already had this), 6x 300mm riser cables (all for $150), 3xPCIE 16x to 8x8x bifurcation cards (all for $70), 8x PCIE power cables and fan power controller (for $100)
- GTX 1650 4GB for video output (already had this)
In total, I spent around ~$3k for this rig, all used parts. The ASRock ROMED8-2T was an ideal motherboard for me due to its seven x16 full physical PCIE4.0 slots. Attached photos below:
(Photo: 8xMI50/60 32GB with GTX 1650, top view)
(Photo: 8xMI50/60 32GB in an open frame rack with motherboard and PSU; my consumer PC is on the right side, not used here)
I have not done many LLM tests yet. The PCIE4.0 connection was not stable since I am using longer PCIE risers, so I kept the speed for each PCIE slot at 3.0 x16. Some initial performance metrics are below. Installed Ubuntu 24.04.3 with ROCm 6.4.3 (needed to copy over the gfx906 Tensile files to work around the deprecated support).
- CPU alone: gpt-oss 120B (65GB Q8) runs at ~25t/s with ~120t/s prompt processing (llama.cpp)
- 2xMI50: gpt-oss 120B (65GB Q8) runs at ~58t/s with 750t/s prompt processing (llama.cpp)
- 8xMI50: qwen3 235B Q4_1 runs at ~21t/s with 350t/s prompt processing (llama.cpp; see the launch sketch after this list)
- 2xMI60 vllm gfx906: llama3.3 70B AWQ: 25t/s with ~240 t/s prompt processing
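For context, a multi-GPU llama.cpp launch like the 8xMI50 qwen3 run above might look roughly like the sketch below; this is an illustration, not the poster's exact command (model filename and context size are made up):
# Offload all layers and spread them across the 8 cards
llama-server -m qwen3-235B-Q4_1.gguf \
  --n-gpu-layers 999 \
  --split-mode layer \
  --tensor-split 1,1,1,1,1,1,1,1 \
  -c 8192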
Idle power consumption is around ~400W (20W for each GPU, 15W for each blower fan, ~100W for the motherboard, RAM, fan and CPU). llama.cpp inference averages around 750W (using a wall meter); for a few seconds during inference, the power spikes up to 1100W. I will do some more performance tests. Overall, I am happy with what I was able to build and run. Fun fact: the entire rig costs around the same as a single RTX 5090 (variants like the ASUS TUF).
submitted by /u/MLDataScientist
[link] [comments]
-
- September 14, 2025
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2025-09-14 rss
IDA Plugin Updates on 2025-09-14
New Releases:
Activity:
- auto_re
- 9e3da6d6: feat: Added mipsel api wrappers support
- ida-terminal-plugin
- idawilli
- tenrec
- 3293939e: Update README to remove video and add link
- ec9677b1: Version 0.1.2 - Demo and documentation improvements
- 4000161e: Moved tests to tenrec folder
- d7b2afb5: Fixed xref graph function AttributeError when callee_func is None
- 082dda4b: Updated README.md
- ecfd9187: Configuration improvements and simplification of entrypoints
- dd58dfdd: Updated logo URL and badges
- 0828fe53: Updated logo URL and badges
-
🔗 r/LocalLLaMA Spent 4 months building Unified Local AI Workspace - ClaraVerse v0.2.0 instead of just dealing with 5+ Local AI Setup like everyone else rss
ClaraVerse v0.2.0 - Unified Local AI Workspace (Chat, Agent, ImageGen, Rag & N8N)
Spent 4 months building ClaraVerse instead of just using multiple AI apps like a normal person. Posted here in April when it was pretty rough and got some reality checks from the community. Kept me going though - people started posting about it on YouTube and stuff.
The basic idea: Everything's just LLMs and diffusion models anyway, so why do we need separate apps for everything? Built ClaraVerse to put it all in one place.
What's actually working in v0.2.0:
- Chat with local models (built-in llama.cpp) or any provider with MCP, Tools, N8N workflow as tools
- Generate images with ComfyUI integration
- Build agents with visual editor (drag and drop automation)
- RAG notebooks with 3D knowledge graphs
- N8N workflows for external stuff
- Web dev environment (LumaUI)
- Community marketplace for sharing workflows
The modularity thing: Everything connects to everything else. Your chat assistant can trigger image generation, agents can update your knowledge base, workflows can run automatically. It's like LEGO blocks but for AI tools.
Reality check: Still has rough edges (it's only 4 months old). But 20k+ downloads and people are building interesting stuff with it, so the core idea seems to work.
Everything runs local, MIT licensed. Built-in llama.cpp with model downloads and a model manager, but it works with any provider.
Links: GitHub: github.com/badboysm890/ClaraVerse
Anyone tried building something similar? Curious if this resonates with other people or if I'm just weird about wanting everything in one app.
submitted by /u/BadBoy17Ge
[link] [comments]
-
🔗 sacha chua :: living an awesome life Anchoring my thoughts with a sketch rss
Text and links from sketch: Anchoring my thoughts with a sketch
I keep most of my notes in text files. This is great for searching, but the sameness of the typography makes things blur together.
I have to read a lot to remember what things felt like, and I still feel so much is missing. Some people can evoke lush word-pictures. I'm not there yet.
Lately I've been giving myself more time to draw, to colour, to doodle.
"Today: A+ kept giving me hugs as we walked home from the supermarket."
Even my simple sketches give me a surprisingly good sense of what I felt, what I cared about.
I made a font from my handwriting, but real handwritten text says so much more.
Comics are very expressive. I wonder how they do that. How do they draw something so specific and yet so resonant?
I take a tangled thought, coax a bit of it into a drawing, and see where that takes me.
"A drawing is simply a line going for a walk." - Paul Klee
Sometimes I do an audio braindump to feel my way around it or to capture lots of details. That gives me a wall of text. Too much, and at the same time, not enough.
I might try to make an outline and expand it, but I often lose steam.
I like organizing and fleshing out the sketch. Drawing it is fun.
Then I can write the text. I often add lots of details and links. Sometimes I feel lost in the weeds. The sketch becomes my map.
I want to finish writing so that you can see my sketch! (and so it makes sense to you and my future self)
Sometimes I just keep playing with the drawing until something interesting emerges.
I've been drawing more lately. It's slow, but more fun. I like looking at my sketches from years ago. I think I will like these ones years from now.
I feel like drawings do a good job of reminding me what I feel about a topic, why I want to write about it, and what the overall shape of the topic is, which is important so that I don't run out of steam a couple thousand words into a post. The drawing also encourages me to finish the post so that I can put it out there.
Other related posts:
- My Emacs writing experience (2025) - how the text side works
- Finding the shape of my thoughts (2025) - similar to this one
- Through blogging, we discover our thoughts and other people (2025)
- Updating my audio braindump workflow to take advantage of WhisperX (2024)
- How sketchnotes fit into my personal knowledge management (2024)
- Integrating visual outlining into my writing process (2013)
- Working with the flow of ideas (2023)
- Working with fragmented thoughts (2015)
- Drawing thoughts on index cards (2015)
- How I organize and publish my sketches (2013) I don't use Flickr and Evernote any more. I built my own sketch viewer, and I use text files in the same directory as my sketches to make them locally searchable.
Elsewhere:
-
I find when I look back over these notes that the memories really come flooding back in really high detail because then, I spent a little more time documenting when I [relive it].
- Sketching and Reflecting. I’ve been using visual thinking… | by Chris Spalton | UX Planet - getting more out of reviewing and reflecting on sketchnotes
- Practical skill building and application of sketchnoting and visual thinking (Troy Schubert). I think the section on "Deeper Thinking: Cliché to Metaphor" might be good for expanding my visual vocabulary, and figure 31 (Polarity Management – Results of Self-Facilitation) reminds me of how I like to use sketches to explore my thoughts.
You can e-mail me at sacha@sachachua.com.
-
🔗 pixelspark/sushitrain v2.1.54 release
No content.
-
🔗 r/LocalLLaMA ROCm 7.0 RC1 More than doubles performance of LLama.cpp rss
EDIT: Added Vulkan data. My thought now is whether we can use Vulkan for token generation (tg) and ROCm for prompt processing (pp) :)
I was running a 9070 XT and compiling llama.cpp for it. Since performance fell a bit short vs my other 5070 Ti, I decided to try the new ROCm drivers. The difference is impressive.
(Benchmark screenshots: ROCm 6.4.3, ROCm 7.0 RC1, Vulkan)
I installed ROCm following these instructions: https://rocm.docs.amd.com/en/docs-7.0-rc1/preview/install/rocm.html
And I hit a compilation issue that required a new flag: -DCMAKE_POSITION_INDEPENDENT_CODE=ON
The full compilation flags:
HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" ROCBLAS_USE_HIPBLASLT=1 \
cmake -S . -B build \
  -DGGML_HIP=ON \
  -DAMDGPU_TARGETS=gfx1201 \
  -DGGML_HIP_ROCWMMA_FATTN=ON \
  -DCMAKE_BUILD_TYPE=Release \
  -DBUILD_SHARED_LIBS=OFF \
  -DCMAKE_POSITION_INDEPENDENT_CODE=ON
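For apples-to-apples numbers like the ROCm 6.4.3 vs 7.0 RC1 vs Vulkan comparison above, llama.cpp ships a llama-bench tool. A minimal sketch, assuming the default CMake build layout (the model path is illustrative):
# Measure prompt processing (-p) and token generation (-n) throughput
./build/bin/llama-bench -m model.gguf -ngl 999 -p 512 -n 128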
submitted by /u/no_no_no_oh_yes
[link] [comments]
-
🔗 r/wiesbaden Coolest street names rss
Which street names in Wiesbaden do you find the coolest/most unique?
submitted by /u/Just-Recognition6465
[link] [comments] -
🔗 r/wiesbaden Dog-friendly bakeries, restaurants, shops/supermarkets? rss
Anyone know of any bakeries, restaurants, kiosks/supermarkets in Wiesbaden Mitte that would allow me to bring my dog inside? I know this is generally rare but it would be helpful to pick up bread or other items while walking him rather than going out twice. Any business that would be okay with this would get me as a customer far more often.
submitted by /u/monsieur_melancholy
[link] [comments] -
🔗 Register Spill Joy & Curiosity #54 rss
As I'm standing here and packing for a ten-day trip, first to Mexico and then to San Francisco, I can hear the three hearts beating in my chest again.
The first one, with its ba-dum ba-dum ba-dum spelling out its wishes in morse code, tells me that I should be envious of those hand-luggage-only people. True travellers; agile, light, flexible; their whole life in the overhead compartment; they probably laugh at those idiots standing around the luggage carousel when they walk past. Actually, they probably are somewhere else already.
My second heart, its rhythm clashing, instead laughs at the hand-luggage-only people. "Just wait until they get chocolate ice cream all over their white t-shirts," it assures me, "and then sit down into some jam-filled pastry someone dropped on a chair at breakfast, right before someone tells them they can't wear those shoes to dinner, which comes right before the rain, for which they're unprepared. What if they pee their pants, dude?" It tells me to take the bigger suitcase.
The miraculous third heart is wise. It's calm. Its ba-dum whispers and says "it's fine, they're both right, let's do what we always do." What it means, of course, is that I should do both. Pack a lot (a lot) but then also get stressed out about not having enough clothes (watch out for those pastries…) and obsessively ration the clothes I brought -- like wearing the same shorts for an amount of time that's barely accepted by society while telling myself that no one will notice that stain anyway. It'll come right out if you just, see, brush over it like this.
-
Really, really, really good: Behind the Scenes of Bun Install. Clear writing based on clear thoughts, with explanations at just the right level of abstraction, consistently so; this is what technical writing should be. Oh, and, of course, it's a lot of fun and makes me want to make things faster.
-
Also really, really, really good: Defeating Nondeterminism in LLM Inference. I've always been curious about why some LLMs can be non-deterministic even at temperature 0, and when, a few weeks back, I tweeted about LLMs being non-deterministic, people in the replies fought not only over whether they are non-deterministic, but also over what the source of that non-determinism could be. Floating point operations? The hardware, GPUs? Here's the answer. It's both, but not really: "In other words, the primary reason nearly all LLM inference endpoints are nondeterministic is that the load (and thus batch-size) nondeterministically varies!" They are ultimately non-deterministic because multiple requests are being sent through the model at the same time. Again: very good blog post! And also, note that this is Thinking Machines, the startup founded by Mira Murati, the ex-CTO of OpenAI; the startup that's raised $2 billion; the startup that, so far, hasn't published anything else yet. As far as I know, this post is the first thing they put out into the world. Well done.
-
You have to watch this and I'm sorry but not really and I'm not going to tell you what it is before you click, so just click here. I have to admit that I also cloned the code and tried to make it work for our Amp TUI. Admission #2: I didn't know about the "classic" script that was referenced and that's apparently by The Onion.
-
"I believe we have both the power and the responsibility to shape this technology's future. That begins with a clear-eyed diagnosis of the present. One of the most useful diagnostic tools I've found for this comes from computer scientist Melanie Mitchell. In a seminal paper back in 2021, she identified what she claims are four foundational fallacies, four deeply embedded assumptions that explain to a large extent our collective confusion about AI, and what it can and cannot do."
-
PostHog has a new homepage and it looks like an operating system in the browser. That in itself isn't new, but this one's very cute and it's now the homepage of a company that's raised a Series D this year and is valued at $920M. Let's see how long it stays. But I guess even if they rip it out in 4 weeks, they've created some buzz. Good move.
-
Term.Everything allows you to run "every GUI app in the terminal!" The demos look very cool and the README is cool and the description of how it works is very inspiring. I want to play around with chafa now.
-
In February, Apple released the iPhone 16e, including the C1 modem, about which Mark Gurman wrote: "The C1 Apple modem is a monumental technical achievement. A several billion dollar effort that has been in the works for 7 years. In the end it gets two sentences in the press release and 15 seconds in the announcement video. Apple is clearly downplaying this intentionally." I quoted Gurman here, back in February, and provided some more links to more comments that described what an achievement it is. Now, this week, Apple released the iPhone Air, that comes with "N1, a new Apple-designed wireless networking chip that enables Wi-Fi 7, Bluetooth 6, and Thread." A HackerNews comment says: "Congrats to Apple for finally designing out Broadcom and vertically integrating the wireless chip." I have to admit that I'm essentially clueless when it comes to global hardware manufacturing, but, man, I'm intrigued.
-
Talking about HackerNews: I found this whole discussion interesting. Linked article states that we're all being sucked into the hole of short-form video but the top comment says: "Too simple of a narrative. At the same time, YouTube videos are getting longer, and people are watching more YouTube videos on TVs than on mobile devices. […] So I think we're seeing more of a bifurcation: in-depth longform videos are becoming 30, 40, 60, even 90 minutes long, whereas anything shorter than 10 minutes is being compressed to 30-60 seconds." I've never installed TikTok on my phone, so I can't comment on that, but I can say that my brain seems to be immune to YouTube Shorts, they just don't do anything to me. I can watch one and stop. Twitter, on the other hand, well…
-
Another good comment: "Saying boilerplate shouldn't exist is like saying we shouldn't need nails or screws if we just designed furniture to be cut perfectly as one piece from the tree. The response is 'I mean, sure, that'd be great, not sure how you'll actually accomplish that though'."
-
But back to Apple. Here's some engineering porn for you: "Memory Integrity Enforcement (MIE) is the culmination of an unprecedented design and engineering effort, spanning half a decade, that combines the unique strengths of Apple silicon hardware with our advanced operating system security to provide industry-first, always-on memory safety protection across our devices." But what is it? In their bombastic words, it's "the industry's first ever, comprehensive, always-on memory-safety protection covering key attack surfaces -- including the kernel and over 70 userland processes -- built on the Enhanced Memory Tagging Extension (EMTE) and supported by secure typed allocators and tag confidentiality protections." But then you read it and think, god damn, that's impressive, I'd be bombastic about this too.
-
Things you can do with a debugger but not with print debugging. 2025 is the year in which I discovered debuggers for myself, and after feeling pretty snobby about it for the first few weeks, I'm now back down on earth and use both print debugging and the debugger.
-
"My heart goes out to the man who does his work when the 'boss' is away, as well as when he is home. And the man who, when given a letter for Garcia, quietly takes the missive, without asking any idiotic questions, and with no lurking intention of chucking it into the nearest sewer, or of doing aught else but deliver it, never gets "laid off," nor has to go on strike for higher wages. Civilization is one long anxious search for just such individuals. Anything such a man asks will be granted; his kind is so rare that no employer can afford to let him go. He is wanted in every city, town, and village - in every office, shop, store and factory. The world
cries out for such; he is needed, and needed badly--the man who can Carry a message to Garcia."
-
My iPhone 8 Refuses to Die: Now It's a Solar-Powered Vision OCR Server. Sounds like a ton of fun and I have a very old iPad lying around here…
-
Some very popular NPM packages got compromised, but it seems like we all got lucky, since the attackers only wanted to steal crypto stuff. That phishing email looks impressively real though.
-
"The Babel fish is a small, bright yellow fish, which can be placed in someone's ear in order for them to be able to hear any language translated into their first language." And now it's here. Well, not here here, if you're in the EU, but here. Isn't that, scusa, fucking crazy?
-
Being good isn't enough. I think this is directionally correct, but I'd make two changes: I think technical skills, in this industry, are the foundation on which everything else needs to rest. When they write that "the biggest gains come from combining disciplines. […] technical skill, product thinking, project execution, and people skills", I'd argue that you shouldn't think of a pie chart, but a pyramid and the base layer is very thick and has the "technical skill" label. The other change: I'd underline, three times, the part about agency. I agree that it's "more powerful than smarts or credentials or luck", but if, years ago, you'd told me that you had seen programmers from bumfuck nowhere outwork Stanford graduates, not with "smarts or credentials", but with grit, discipline, humility, reliability, and attention to detail? I guess I still wouldn't have believed you.
-
I remember when people got pissed off at Sublime Text for showing "How about you pay us some money for this software?" every few restarts. Now? "The ROI is obvious, but budget for $1000-1500/month for a senior engineer going all-in on AI development. It's also reasonable to expect engineers to get more efficient with AI spend as they get good with it, but give them time." Wild times.
-
"The real risk is not taking a risk. The scaling maximalism of the last decade allowed us to avoid many hard choices -- now, we have to think strategically. […] The tragedy is that most teams are still fighting the wrong battle. They're running the 'more GPUs' playbook in a world where the real bottleneck is the data supply chain. If your team is asking for more compute but can't explain their data roadmap, send them back to the drawing board."
Travel light? I'm sure you can fit this subscription in:
-
-