to read (pdf)
- A Protocol for Package Management | Andrew Nesbitt
- No management needed: anti-patterns in early-stage engineering teams | Antoine Boulanger
- Reconstructing Program Semantics from Go Binaries
- Long time ago, I was looking for game with some hidden rules, browsing random wi... | Hacker News
- Keychron's Nape Pro turns your mechanical keyboard into a laptop-style trackball rig: Hands-on at CES 2026 - Yanko Design
- January 23, 2026
-
r/reverseengineering Organized Traffer Gang on the Rise Targeting Web3 Employees and Crypto Holders rss
submitted by /u/CyberMasterV
-
remorses/critique critique@0.1.43 release
- Show full submodule diffs instead of just commit hashes:
  - Added `--submodule=diff` flag to git commands
  - Strip submodule header lines (`Submodule name hash1..hash2:`) before parsing
  - Works with TUI, `--web`, and `review` commands
-
r/reverseengineering How to detect arguments in a decompiler (rev.ng hour 2023-10-13) rss
submitted by /u/aleclm
-
r/LocalLLaMA OpenAI CFO hinting at "Outcome-Based Pricing" (aka royalties on your work)? Makes the case for local even stronger. rss
UPDATE: My bad on this one, guys. I got caught by the clickbait.
Thanks to u/evilbarron2 for digging up the original Business Insider source.
CFO was actually talking about "Outcome-Based Pricing" for huge enterprise deals (e.g., if AI helps a Pharma company cure a disease, OpenAI wants a cut of that specific win).
There is basically zero evidence this applies to us regular users, indie devs, or the API. I'm keeping the post up because the concept is still interesting to debate, but definitely take the headline with a huge grain of salt.
Original Post:
Saw some screenshots floating around about OpenAI planning to "take a cut" of customer discoveries (like pharma drugs, etc).
I tried to dig up the primary source to see if it's just clickbait. The closest official thing is a recent blog post from their CFO Sarah Friar talking about "outcome-based pricing" and "sharing in the value created" for high-value industries.
~~Even if the "royalty" headlines are sensationalized by tech media, the direction is pretty clear. They are signaling a shift from "paying for electricity" (tokens) to "taxing the factory output" (value).~~
It kind of reminds me of the whole Grid vs. Solar debate. Relying on the Grid (Cloud APIs) is cheap and powerful, but you don't control the terms. If they decide your specific use case is "high value" and want a percentage, you're locked in.
Building a local stack is like installing solar/batteries. Expensive upfront, pain in the ass to maintain, but at least nobody knocks on your door asking for 5% of your project revenue just because you used their weights to run the math.
Link to article: https://www.gizmochina.com/2026/01/21/openai-wants-a-cut-of-your-profits-inside-its-new-royalty-based-plan-and-other-business-models/
Link to the actual source: https://www.businessinsider.com/openai-cfo-sarah-friar-future-revenue-sources-2026-1
submitted by /u/distalx
-
Servo Blog December in Servo: multiple windows, proxy support, better caching, and more! rss
Servo 0.0.4 and our December nightly builds now support multiple windows (@mrobinson, @mukilan, #40927, #41235, #41144)! This builds on features that landed in Servo's embedding API last month. We've also landed support for several web platform features, both old and new:
- `contrast-color()` in CSS color values (@webbeef, #41542)
- partial support for `<meta charset>` (@simonwuelker, #41376)
- partial support for encoding sniffing (@simonwuelker, #41435)
- `background` and `bgcolor` attributes on `<table>`, `<thead>`, `<tbody>`, `<tfoot>`, `<tr>`, `<td>`, `<th>` (@simonwuelker, #41272)
- tee() on readable byte streams (@Taym95, #35991)
Note: due to a known issue, servoshell on macOS may not be able to directly open new windows, depending on your system settings.
For better compatibility with older web content, we now support vendor-prefixed CSS properties like `-moz-transform` (@mrobinson, #41350), as well as window.clientInformation (@Taym95, #41111).
We've continued shipping the SubtleCrypto API, with full support for ChaCha20-Poly1305, RSA-OAEP, RSA-PSS, and RSASSA-PKCS1-v1_5 (see below), plus importKey() for ML-KEM (@kkoyung, #41585) and several other improvements (@kkoyung, @PaulTreitel, @danilopedraza, #41180, #41395, #41428, #41442, #41472, #41544, #41563, #41587, #41039, #41292):
Algorithm |
---|---
ChaCha20-Poly1305 | (@kkoyung, #40978, #41003, #41030)
RSA-OAEP | (@kkoyung, @TimvdLippe, @jdm, #41225, #41217, #41240, #41316)
RSA-PSS | (@kkoyung, @jdm, #41157, #41225, #41240, #41287)
RSASSA-PKCS1-v1_5 | (@kkoyung, @jdm, #41172, #41225, #41240, #41267)

When using servoshell on Windows, you can now see `--help` and log output, as long as servoshell was started in a console (@jschwe, #40961). Servo diagnostics options are now accessible in servoshell via the `SERVO_DIAGNOSTICS` environment variable (@atbrakhi, #41013), in addition to the usual `-Z`/`--debug=` arguments.

Servo's devtools now partially support the Network > Security tab (@jiang1997, #40567), allowing you to inspect some of the TLS details of your requests. We've also made it compatible with Firefox 145 (@eerii, #41087), and use fewer IPC resources (@mrobinson, #41161).
We've fixed rendering bugs related to `float`, `order`, `max-width`, `max-height`, `:link` selectors, `<audio>` layout, and getClientRects(), affecting intrinsic sizing (@Loirooriol, #41513), anonymous blocks (@Loirooriol, #41510), incremental layout (@Loirooriol, #40994), flex item sizing (@Loirooriol, #41291), selector matching (@andreubotella, #41478), replaced element layout (@Loirooriol, #41262), and empty fragments (@Loirooriol, #41477).
Servo now fires `toggle` events on `<dialog>` (@lukewarlow, #40412). We've also improved the conformance of `wheel` events (@mrobinson, #41182), `hashchange` events (@Taym95, #41325), `dblclick` events on `<input>` (@Taym95, #41319), `resize` events on `<video>` (@tharkum, #40940), `seeked` events on `<video>` and `<audio>` (@tharkum, #40981), and the `transform` property in getComputedStyle() (@mrobinson, #41187).
Embedding API

Servo now has basic support for HTTP proxies (@Narfinger, #40941). You can set the proxy URL in the http_proxy (@Narfinger, #41209) or HTTP_PROXY (@treeshateorcs, @yezhizhen, #41268) environment variables, or via `--pref network_http_proxy_uri`.

We now use the system root certificates by default (@Narfinger, @mrobinson, #40935, #41179), on most platforms. If you don't want to trust the system root certificates, you can instead continue to use Mozilla's root certificates with `--pref network_use_webpki_roots`. As always, you can also add your own root certificates via Opts::certificate_path (`--certificate-path=`).

We have a new SiteDataManager API for managing localStorage, sessionStorage, and cookies (@janvarga, #41236, #41255, #41378, #41523, #41528), and a new NetworkManager API for managing the cache (@janvarga, @mrobinson, #41255, #41474, #41386). To clear the cache, call NetworkManager::clear_cache, and to list cache entries, call NetworkManager::cache_entries.

Simple dialogs — that is, alert(), confirm(), and prompt() — are now exposed to embedders via a new SimpleDialog type in EmbedderControl (@mrobinson, @mukilan, #40982). This new interface is harder to misuse, and no longer requires boilerplate for embedders that wish to ignore simple dialogs.

Web console messages, including messages from the Console API, are now accessible via ServoDelegate::show_console_message and WebViewDelegate::show_console_message (@atbrakhi, #41351).

Servo, the main handle for controlling Servo, is now cloneable for sharing within the same thread (@mukilan, @mrobinson, #41010). To shut down Servo, simply drop the last Servo handle or let it go out of scope. Servo::start_shutting_down and Servo::deinit have been removed (@mukilan, @mrobinson, #41012).
Several interfaces have also been renamed:
- Servo::clear_cookies is now SiteDataManager::clear_cookies (@janvarga, #41236, #41255)
- DebugOpts::disable_share_style_cache is now Preferences::layout_style_sharing_cache_enabled (@atbrakhi, #40959)
- The rest of DebugOpts has been moved to DiagnosticsLogging, and the options have been renamed (@atbrakhi, #40960)

Perf and stability
We can now evict entries from our HTTP cache (@Narfinger, @gterzian, @Taym95, #40613), rather than having it grow forever (or get cleared by an embedder). about:memory now tracks SVG-related memory usage (@d-kraus, #41481), and we've fixed memory leaks in <video> and <audio> (@tharkum, #41131).
Servo now does less work when matching selectors (@webbeef, #41368), when focus changes (@mrobinson, @Loirooriol, #40984), and when reflowing boxes whose size did not change (@Loirooriol, @mrobinson, #41160).
To allow for smaller binaries, gamepad support is now optional at build time (@WaterWhisperer, #41451).
We've fixed some undefined behaviour around garbage collection (@sagudev, @jdm, @gmorenz, #41546, mozjs#688, mozjs#689, mozjs#692). To better avoid other garbage-collection-related bugs (@sagudev, mozjs#647, mozjs#638), we've continued our work on defining (and migrating to) safer interfaces between Servo and the SpiderMonkey GC (@sagudev, #41519, #41536, #41537, #41520, #41564).
We've fixed a crash that occurs when `<link rel="shortcut icon">` has an empty `href` attribute, which affected chiptune.com (@webbeef, #41056), and we've also fixed crashes in:
- `background-repeat` (@mrobinson, #41158)
- <audio> layout (@Loirooriol, #41262)
- custom elements (@mrobinson, #40743)
- AudioBuffer (@WaterWhisperer, #41253)
- AudioNode (@Taym95, #40954)
- ReportingObserver (@Taym95, #41261)
- Uint8Array (@jdm, #41228)
- the fonts system, on FreeType platforms (@simonwuelker, #40945)
- IME usage, on OpenHarmony (@jschwe, #41570)
Donations

Thanks again for your generous support! We are now receiving 7110 USD/month (+10.5% over November) in recurring donations. This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and funding maintainer work that helps more people contribute to Servo.

Servo is also on thanks.dev, and already 30 GitHub users (+2 over November) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.

We now have sponsorship tiers that allow you or your organisation to donate to the Servo project with public acknowledgement of your support. A big thanks from Servo to our newest Bronze Sponsors: Anthropy, Niclas Overby, and RxDB! If you're interested in this kind of sponsorship, please contact us at join@servo.org.

Use of donations is decided transparently via the Technical Steering Committee's public funding request process, and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.

Conference talks and blogs
We've recently published one talk and one blog post:
-
Web engine CI on a shoestring budget (slides; transcript) — Delan Azabani (@delan) spoke about the CI system that keeps our builds and tryjobs moving fast, running nearly two million tests in under half an hour.
-
Servo 2025 Stats — Manuel Rego (@mrego) wrote about the growth of the Servo project, and how our many new contributors have enabled that.
We also have two upcoming talks at FOSDEM 2026 in Brussels later this month:
-
The Servo project and its impact on the web platform ecosystem — Manuel Rego (@mrego) is speaking on Saturday 31 January at 14:00 local time (13:00 UTC), about Servo's impact on spec issues, interop bugs, test cases, and the broader web platform ecosystem.
-
Implementing Streams Spec in Servo web engine — Taym Haddadi (@Taym95) is speaking on Saturday 31 January at 17:45 local time (16:45 UTC), about our experiences writing a new implementation of the Streams API that is independent of the one in SpiderMonkey.
Servo developers Martin Robinson (@mrobinson) and Delan Azabani (@delan) will also be attending FOSDEM 2026, so it would be a great time to come along and chat about Servo!
-
- January 22, 2026
-
IDA Plugin Updates IDA Plugin Updates on 2026-01-22 rss
IDA Plugin Updates on 2026-01-22
New Releases:
Activity:
- capa
- CrystalRE
- DeepExtractIDA
- distro
- a26bdc26: Add enhanced analysis and predecessor state tracking to remnux-diag
- DriverBuddy-7.4-plus
- 1d3f9a8d: Sync workflows-sync.yml from .github repo
- dylib_dobby_hook
- efiXplorer
- 374644da: update links after transition to the new org (#130)
- HappyIDA
- fb17ecaf: fix: add missing early return after handling non-citem paste-type case
- 2b46a4ff: feat: enable copy-type when cursor is on function prototype header
- 5af7cf18: release: v1.0.4
- d0ceea53: feat: enable paste-type when cursor is on function prototype header
- 989450e7: fix: navigate through nested struct/union member function pointers
- a85631fc: fix: eh_start_list might be empty
- ida-hcli
- msc-thesis-LLMs-to-rank-decompilers
- cf472500: refactored server - wsgi
- 12bc9d8d: Merge branch 'main' of https://github.com/Lurpigi/msc-thesis-LLMs-to-…
- 758f7bdb: pre refactor
- playlist
- 9e65ca6b: The chepes and the chapas
- plugin-ida
- 8f16b166: Merge pull request #97 from RevEngAI/revert-95-fix-PLU-239-crash-when…
- 34a05fa1: Revert "fix(PLU-239): Crash when plugin not actively used"
- a9a041ce: Merge pull request #95 from RevEngAI/fix-PLU-239-crash-when-plugin-no…
- b6008583: Merge pull request #92 from RevEngAI/feat-PLU-232-show-renamed-funcs
- c01609c1: Merge pull request #93 from RevEngAI/feat-PLU-231-filter-functions-fo…
-
r/LocalLLaMA Am I the only one who feels that, with all the AI boom, everyone is basically doing the same thing? rss
Lately I go on Reddit and I keep seeing the same idea repeated over and over again. Another chat app, another assistant, another "AI tool" that, in reality, already exists — or worse, already exists in a better and more polished form.
Many of these are applications that could be solved perfectly with an extension, a plugin, or a simple feature inside an app we already use. I'm not saying AI is bad — quite the opposite, it's incredible. But there are people pouring all their money into Anthropic subscriptions or increasing their electricity bill just to build a less polished version of things like OpenWebUI, Open Code, Cline, etc.
submitted by /u/Empty_Enthusiasm_167
-
@cxiao@infosec.exchange RE: mastodon
RE: https://social.troll.academy/@mushu/115937976404644181
https://runjak.codes/posts/2026-01-21-adversarial-coding-test/
Seems really similar to a recently reported variant of a North Korean state-aligned campaign, ContagiousInterview. They've moved to VS Code tasks now.
https://www.jamf.com/blog/threat-actors-expand-abuse-of-visual-studio-code/
https://opensourcemalware.com/blog/contagious-interview-vscode
#DPRK #ContagiousInterview #lazarus #LazarusGroup #FamousChollima
-
r/reverseengineering Symphony of the Night Decomp Updates! It's Getting Closer rss
submitted by /u/chicagogamecollector
-
News Minimalist 🟢 Renewables overtake fossil fuels in EU + 9 more stories rss
In the last 3 days ChatGPT read 95356 top news stories. After removing previously covered events, there are 10 articles with a significance score over 5.5.
[5.6] Wind and solar power lead EU energy mix for the first time in 2025 — lavenir.net (French) (+15)
For the first time, wind and solar power surpassed fossil fuels in the European Union's electricity production mix in 2025, marking a major milestone in the region's energy transition.
According to think tank Ember, these renewables generated 841 terawatt-hours, accounting for 30.1% of EU electricity. This exceeded fossil fuels, which fell to 29% as solar production hit record highs and coal reached a historic low of 9.2%.
[5.7] Hundreds of artists, including Scarlett Johansson and Cate Blanchett, launch anti-AI campaign against unauthorized use of copyrighted work — variety.com (+3)
Scarlett Johansson, Cate Blanchett, and Joseph Gordon-Levitt joined 700 creators in launching a campaign condemning tech companies for training artificial intelligence on copyrighted work without authorization, labeling it theft.
The group asserts that unauthorized scraping endangers millions of jobs and economic growth. They urge developers to adopt transparent licensing agreements, proving that technological advancement can coexist with the protection of creators' intellectual property rights and legal authorship.
This initiative follows several high-profile legal disputes, notably involving Johansson, who has previously taken action against the unauthorized use of her name, voice, and likeness in AI-generated advertisements and content.
Highly covered news with significance over 5.5
[6.3] Personalized mRNA vaccine shows lasting benefits for high-risk melanoma patients — euronews.com (+6)
[6.2] South Korea enacts comprehensive laws to regulate artificial intelligence — japantimes.co.jp (+5)
[5.9] Snap settles social media addiction lawsuit, avoiding trial — nytimes.com (+5)
[5.8] Moldova begins withdrawal from the Commonwealth of Independent States — pravda.com.ua (Ukrainian) (+7)
[5.7] Israel demolishes UNRWA headquarters in East Jerusalem — dw.com (Russian) (+22)
[5.6] Trump unveils Board of Peace initiative with global leaders — irishtimes.com (+38)
[5.5] Argentina welcomes large shipment of Chinese electric vehicles as it eases import restrictions — apnews.com (+10)
[5.6] Canada prepares for potential US invasion following Trump's provocations — fr.de (German) (+23)
Thanks for reading!
— Vadim
You can create your own personal newsletter like this with premium.
-
r/reverseengineering apk.sh makes reverse engineering Android apps easier, automating repetitive tasks like pulling, decoding, rebuilding and patching an APK. It supports direct bytecode manipulation with no decompilation, which avoids decompilation/recompilation errors. rss
submitted by /u/Happy_Youth_1970
-
langchain-ai/deepagents deepagents-cli==0.0.13 release
Changes since deepagents-cli==0.0.12
release(sdk): bump version (#879)
fix(sdk): make sure that tool truncation applies to execute (#547)
fix(cli): hitl spinner errors (#861)
fix(cli): Improve HITL approval UX (#859)
release: patch release 0.3.7 (#869)
fix(cli): resume should load previous thread messages (#862)
chore(cli): clean checks in cli spacing (#854)
fix(cli): avoid jumping (#853)
chore(cli): turn on more linting (#852)
Add current model display to status bar (#844)
Disable double message submission while agent is working
update remember prompt to work with memory and skills (#842)
feat(cli): focus input when clicking anywhere in the terminal (#826)
feat: summarization offloading (#742)
Bump version to 0.3.7a1 (#817)
chore(deps): bump the uv group across 5 directories with 1 update (#811)
fix(infra): exclude `build/` from typechecking (#808)
fix(sdk): BaseSandbox.ls_info() to return absolute paths (#797)
chore: bump deepagents-cli to 0.0.13a2 (#795)
docs: add testing readme (#788)
fix(cli): include tcss and py.typed in package data (#781)
feat(cli): format file tree with markdown (#782)
fix(cli): add explicit package inclusion for setuptools (#780)
add prompt seeding with -m flag (#755)
docs: update model configuration details in README (#772)
fix: import rules (#763)
release(deepagents-cli): 0.0.13a1 (#756)
cli-token-tracking-fixes (#706)
release: deepagents 0.3.6 (#752)
chore: automatically sort imports (#740)
Add LangSmith tracing status to welcome banner (#741)
feat(cli): inject local context into system prompt via LocalContextMiddleware
fix: don't allow Rich markup from user content (#704)
fix(cli): remove duplicate version from welcome.py (#737)
feat(cli): add `--version` / `/version` commands (#698)
minor release(deepagents): bump version to 0.3.5 (#695)
Port SDK Memory to CLI (#691)
fix thread id (#692)
chore(ci): add uv lock checks (#681)
update version bounds (#687)
CLI Refactor to Textual (#686)
Fix invalid YAML in skill-creator SKILL.md frontmatter (#675)
feat(deepagents): add skills to sdk (#591)
docs: replace gemini 1.5 (#653)
feat(cli): show version in splash screen (#610)
chore(cli): expose version (#609)
fix(cli): handle read_file offset exceeding file length by returning all lines (issue #559) (#568)
chore(cli): remove line (#601) -
langchain-ai/deepagents deepagents==0.3.8 release
-
r/wiesbaden Could e-scooters be banned in the city center? rss
I'm fed up... sorry in advance for the rant. For months now I've been reluctant to go into the city center, because in February I had an accident with a young hotshot on an e-scooter. He sped through the shopping street at full speed and hit me. Fractured forearm. It healed quickly, but it hurt like hell. There was nothing I could do, because he just got back on and rode off.
Today it almost happened again. Going down Marktstraße on the way to the Bürgerbüro, a guy between 16 and 20 raced past me like there was no tomorrow.
I'm really getting fed up with having to fear for my safety in a pedestrian zone, just because the new trend apparently is to barrel through there at 20-30 km/h.
submitted by /u/Winston_Duarte
-
r/reverseengineering What Nobody Tells You About Becoming a Vulnerability Researcher rss
submitted by /u/shine-rose
-
@HexRaysSA@infosec.exchange Jump Anywhere in IDA 9.3 makes everyday navigation faster and more mastodon
Jump Anywhere in IDA 9.3 makes everyday navigation faster and more responsive — especially on large databases.
Hereâs how it works and whatâs improved.
https://hex-rays.com/blog/ida-9.3-jump-anywhere -
Hex-Rays Blog Jump Anywhere: Unified Navigation Gets an Upgrade in IDA 9.3 rss
An IDA database stores many different kinds of information: functions, named global variables, types, and more. Jump Anywhere, introduced in IDA 9.2, is a unified "quick navigation" dialog that lets you search across those database items from a single place. It also supports resolving simple expressions that the user could have entered into the "Jump to address" dialog.
-
r/wiesbaden Date ideas for winter in Wiesbaden (public-transport-friendly) rss
Hello everyone,
I'm currently planning a date with my boyfriend. Since I won't be around on Valentine's Day, I'd like to make up for it and plan something nice. I'd like to go to the Kaiser-Friedrich-Therme with him and to the theater in the evening. Do you have good ideas for how we could pass the time in between? Unfortunately I don't know the area, but I'd be very happy about any tips!
Thanks in advance :)
submitted by /u/ichbineindummeraffe
-
remorses/critique critique@0.1.42 release
- New `--image` flag for all diff commands:
  - Generates WebP images of terminal output (saved to /tmp)
  - Splits long diffs into multiple images (70 lines per image)
  - Uses takumi for high-performance image rendering
  - `@takumi-rs/core` and `@takumi-rs/helpers` added as optional dependencies
  - Library export: `import { renderTerminalToImages } from "critique/src/image.ts"`
- Web output: Use default theme to enable dark/light mode switching based on system preference
- `review` command:
  - Improved AI prompt: order hunks by code flow, think upfront before writing, split heavy logic across sections
- Dependencies:
  - Update opentui to `367a9408`
-
r/LocalLLaMA Qwen have open-sourced the full family of Qwen3-TTS: VoiceDesign, CustomVoice, and Base, 5 models (0.6B & 1.8B), support for 10 languages rss
Github: https://github.com/QwenLM/Qwen3-TTS
Hugging Face: https://huggingface.co/collections/Qwen/qwen3-tts
Blog: https://qwen.ai/blog?id=qwen3tts-0115
Paper: https://github.com/QwenLM/Qwen3-TTS/blob/main/assets/Qwen3_TTS.pdf
Hugging Face Demo: https://huggingface.co/spaces/Qwen/Qwen3-TTS
submitted by /u/Nunki08
-
r/LocalLLaMA Qwen3 TTS just dropped rss
-
r/LocalLLaMA Qwen dev on Twitter!! rss
submitted by /u/Difficult-Cap-7527
-
r/wiesbaden Football pub rss
Hello, does anyone know of a proper football pub in Wiesbaden or the surrounding area, ideally one with plenty of football decor on the walls? I'd appreciate any tips :)
submitted by /u/Turbulent_Life_5826
-
Anton Zhiyanov Interfaces and traits in C rss
Everyone likes interfaces in Go and traits in Rust. Polymorphism without class-based hierarchies or inheritance seems to be the sweet spot. What if we try to implement this in C?
Interfaces in Go • Traits in Rust • Toy example • Interface definition • Interface data • Method table • Method table in implementor • Type assertions • Final thoughts
Interfaces in Go
An interface in Go is a convenient way to define a contract for some useful behavior. Take, for example, the honored io.Reader:

```go
// Reader is the interface that wraps the basic Read method.
type Reader interface {
	// Read reads up to len(p) bytes into p. It returns the number of bytes
	// read (0 <= n <= len(p)) and any error encountered.
	Read(p []byte) (n int, err error)
}
```

Anything that can read data into a byte slice provided by the caller is a Reader. Quite handy, because the code doesn't need to care where the data comes from — whether it's memory, the file system, or the network. All that matters is that it can read the data into a slice:

```go
// work processes the data read from r.
func work(r io.Reader) int {
	buf := make([]byte, 8)
	n, err := r.Read(buf)
	if err != nil && err != io.EOF {
		panic(err)
	}
	// ...
	return n
}
```

We can provide any kind of reader:

```go
func main() {
	var total int
	b := bytes.NewBufferString("hello world")
	// bytes.Buffer implements io.Reader, so we can use it with work.
	total += work(b)
	total += work(b)
	fmt.Println("total =", total)
}
```

total = 11

Go's interfaces are structural, which is similar to duck typing. A type doesn't need to explicitly state that it implements io.Reader; it just needs to have a Read method:

```go
// Zeros is an infinite stream of zero bytes.
type Zeros struct{}

func (z Zeros) Read(p []byte) (n int, err error) {
	clear(p)
	return len(p), nil
}
```

The Go compiler and runtime take care of the rest:

```go
func main() {
	var total int
	var z Zeros
	// Zeros implements io.Reader, so we can use it with work.
	total += work(z)
	total += work(z)
	fmt.Println("total =", total)
}
```

total = 16

Traits in Rust
A trait in Rust is also a way to define a contract for certain behavior. Here's the std::io::Read trait:

```rust
// The Read trait allows for reading bytes from a source.
pub trait Read {
    // Readers are defined by one required method, read(). Each call to read()
    // will attempt to pull bytes from this source into a provided buffer.
    fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize>;
    // ...
}
```

Unlike in Go, a type must explicitly state that it implements a trait:

```rust
// An infinite stream of zero bytes.
struct Zeros;

impl io::Read for Zeros {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        buf.fill(0);
        Ok(buf.len())
    }
}
```

The Rust compiler takes care of the rest:

```rust
// Processes the data read from r.
fn work(r: &mut dyn io::Read) -> usize {
    let mut buf = [0; 8];
    match r.read(&mut buf) {
        Ok(n) => n,
        Err(e) => panic!("Error: {}", e),
    }
}

fn main() {
    let mut total = 0;
    let mut z = Zeros;
    // Zeros implements Read, so we can use it with work.
    total += work(&mut z);
    total += work(&mut z);
    println!("total = {}", total);
}
```

total = 16

Either way, whether it's Go or Rust, the caller only cares about the contract (defined as an interface or trait), not the specific implementation.
Toy example
Let's make an even simpler version of Reader — one without any error handling (Go):

```go
// Reader is an interface that wraps the basic Read method.
// Read reads up to len(p) bytes into p.
type Reader interface {
	Read(p []byte) int
}
```

Usage example:

```go
// Zeros is an infinite stream of zero bytes.
type Zeros struct {
	total int // total number of bytes read
}

// Read reads len(p) bytes into p.
func (z *Zeros) Read(p []byte) int {
	clear(p)
	z.total += len(p)
	return len(p)
}

// work processes the data read from r.
func work(r Reader) int {
	buf := make([]byte, 8)
	return r.Read(buf)
}

func main() {
	z := new(Zeros)
	work(z)
	work(z)
	fmt.Println("total =", z.total)
}
```

total = 16

Let's see how we can do this in C!
Interface definition
The main building blocks in C are structs and functions, so let's use them. Our Reader will be a struct with a single field called Read. This field will be a pointer to a function with the right signature:

```c
// An interface that wraps the basic Read method.
// Read reads up to len(p) bytes into p.
typedef struct {
    size_t (*Read)(void* self, uint8_t* p, size_t len);
} Reader;
```

To make Zeros fully dynamic, let's turn it into a struct with a Read function pointer (I know, I know — just bear with me):

```c
// An infinite stream of zero bytes.
typedef struct {
    size_t (*Read)(void* self, uint8_t* p, size_t len);
    size_t total;
} Zeros;
```

Here's the Zeros_Read "method" implementation:

```c
// Reads up to len(p) bytes into p.
size_t Zeros_Read(void* self, uint8_t* p, size_t len) {
    Zeros* z = (Zeros*)self;
    for (size_t i = 0; i < len; i++) {
        p[i] = 0;
    }
    z->total += len;
    return len;
}
```

The work function is pretty obvious:

```c
// Does some work reading from r.
size_t work(Reader* r) {
    uint8_t buf[8];
    return r->Read(r, buf, sizeof(buf));
}
```

And, finally, the main function:

```c
int main(void) {
    Zeros z = {.Read = Zeros_Read, .total = 0};
    Reader* r = (Reader*)&z;
    work(r);
    work(r);
    printf("total = %zu\n", z.total);
}
```

total = 16

See how easy it is to turn a Zeros into a Reader: all we need is (Reader*)&z. Pretty cool, right?

Not really. Actually, this implementation is seriously flawed in almost every way (except for the Reader definition).

Memory overhead. Each Zeros instance has its own function pointers (8 bytes per function on a 64-bit system) as "methods", which isn't practical even if there are only a few of them. Regular objects should store data, not functions.

Layout dependency. Converting from Zeros* to Reader* like (Reader*)&z only works if both structures have the same Read field as their first member. If we try to implement another interface:

```c
// Reader interface.
typedef struct {
    size_t (*Read)(void* self, uint8_t* p, size_t len);
} Reader;

// Closer interface.
typedef struct {
    void (*Close)(void* self);
} Closer;

// Zeros implements both Reader and Closer.
typedef struct {
    size_t (*Read)(void* self, uint8_t* p, size_t len);
    void (*Close)(void* self);
    size_t total;
} Zeros;
```

Everything will fall apart:

```c
int main(void) {
    Zeros z = {
        .Read = Zeros_Read,
        .Close = Zeros_Close,
        .total = 0,
    };
    Closer* c = (Closer*)&z;  // (!)
    c->Close(c);
}
```

Segmentation fault: 11

Closer and Zeros have different layouts, so the type conversion marked (!) is invalid and causes undefined behavior.

Lack of type safety. Using a void* as the receiver in Zeros_Read means the caller can pass any type, and the compiler won't even show a warning:

```c
int main(void) {
    int x = 42;
    uint8_t buf[8];
    Zeros_Read(&x, buf, sizeof(buf));  // bad decision
}

size_t Zeros_Read(void* self, uint8_t* p, size_t len) {
    Zeros* z = (Zeros*)self;
    // ...
    z->total += len;  // consequences
    return len;
}
```

Abort trap: 6

C isn't a particularly type-safe language, but this is just too much. Let's try something else.
Interface data
A better way is to store a reference to the actual object in the interface:

```c
// An interface that wraps the basic Read method.
// Read reads up to len(p) bytes into p.
typedef struct {
    size_t (*Read)(void* self, uint8_t* p, size_t len);
    void* self;
} Reader;
```

We could have the Read method in the interface take a Reader instead of a void*, but that would make the implementation more complicated without any real benefits. So, I'll keep it as void*.

Then Zeros will only have its own fields:

```c
// An infinite stream of zero bytes.
typedef struct {
    size_t total;
} Zeros;
```

We can make the Zeros_Read method type-safe:

```c
// Reads len(p) bytes into p.
size_t Zeros_Read(Zeros* z, uint8_t* p, size_t len) {
    for (size_t i = 0; i < len; i++) {
        p[i] = i % 256;
    }
    z->total += len;
    return len;
}
```

To make this work, we add a Zeros_Reader method that returns the instance wrapped in a Reader interface:

```c
// Returns a Reader implementation for Zeros.
Reader Zeros_Reader(Zeros* z) {
    return (Reader){
        .Read = (size_t (*)(void*, uint8_t*, size_t))Zeros_Read,
        .self = z,
    };
}
```

The work and main functions remain quite simple:

```c
// Does some work reading from r.
size_t work(Reader r) {
    uint8_t buf[8];
    return r.Read(r.self, buf, sizeof(buf));
}

int main(void) {
    Zeros z = {0};
    Reader r = Zeros_Reader(&z);
    work(r);
    work(r);
    printf("total = %zu\n", z.total);
}
```

total = 16

This approach is much better than the previous one:
- The
Zerosstruct is lean and doesn't have any interface-related fields. - The
Zeros_Readmethod takes aZeros*instead of avoid*. - The cast from
ZerostoReaderis handled inside theZeros_Readermethod. - We can implement multiple interfaces if needed.
Since our
Zerostype now knows about theReaderinterface (through theZeros_Readermethod), our implementation is more like a basic version of a Rust trait than a true Go interface. For simplicity, I'll keep using the term "interface".There is one downside, though: each
Readerinstance has its own function pointer for every interface method. SinceReaderonly has one method, this isn't an issue. But if an interface has a dozen methods and the program uses a lot of these interface instances, it can become a problem.Let's fix this.
Method table
Let's extract interface methods into a separate strucute â the method table. The interface references its methods though the
mtabfield:// An interface that wraps the basic Read method. // Read reads up to len(p) bytes into p. typedef struct { size_t (*Read)(void* self, uint8_t* p, size_t len); } ReaderTable; typedef struct { const ReaderTable* mtab; void* self; } Reader;ZerosandZeros_Readdon't change at all:// An infinite stream of zero bytes. typedef struct { size_t total; } Zeros; // Reads len(p) bytes into p. size_t Zeros_Read(Zeros* z, uint8_t* p, size_t len) { for (size_t i = 0; i < len; i++) { p[i] = i % 256; } z->total += len; return len; }The
Zeros_Readermethod initializes the static method table and assigns it to the interface instance:// Returns a Reader implementation for Zeros. Reader Zeros_Reader(Zeros* z) { // The method table is only initialized once. static const ReaderTable impl = { .Read = (size_t (*)(void*, uint8_t*, size_t))Zeros_Read, }; return (Reader){.mtab = &impl, .self = z}; }The only difference in
workis that it calls theReadmethod on the interface indirectly using the method table (r.mtab->Readinstead ofr.Read):// Does some work reading from r. size_t work(Reader r) { uint8_t buf[8]; return r.mtab->Read(r.self, buf, sizeof(buf)); }mainstays the same:int main(void) { Zeros z = {0}; Reader r = Zeros_Reader(&z); work(r); work(r); printf("total = %zu\n", z.total); } total = 16Now the
Readerinstance always has a single pointer field for its methods. So even for large interfaces, it only uses 16 bytes (mtab+selffields). This approach also keeps all the benefits from the previous version:- Lightweight
Zerosstructure. - Easy conversion from
ZerostoReader. - Supports multiple interfaces.
We can even add a separate
Reader_Readhelper so the client doesn't have to worry aboutr.mtab->Readimplementation detail:// Reads len(p) bytes into p. size_t Reader_Read(Reader r, uint8_t* p, size_t len) { return r.mtab->Read(r.self, p, len); } // Does some work reading from r. size_t work(Reader r) { uint8_t buf[8]; return Reader_Read(r, buf, sizeof(buf)); }Nice!
Alternative: Method table in implementor
There's another approach I've seen out there. I don't like it, but it's still worth mentioning for completeness.
Instead of embedding the `Reader` method table in the interface, we can place it in the implementation (`Zeros`):

```c
// An interface that wraps the basic Read method.
// Read reads up to len(p) bytes into p.
typedef struct {
    size_t (*Read)(void* self, uint8_t* p, size_t len);
} ReaderTable;

typedef ReaderTable* Reader;

// An infinite stream of zero bytes.
typedef struct {
    Reader mtab;
    size_t total;
} Zeros;
```

We initialize the method table in the `Zeros` constructor:

```c
// Returns a new Zeros instance.
Zeros NewZeros(void) {
    static const ReaderTable impl = {
        .Read = (size_t (*)(void*, uint8_t*, size_t))Zeros_Read,
    };
    return (Zeros){
        .mtab = (Reader)&impl,
        .total = 0,
    };
}
```

`work` now takes a `Reader` pointer:

```c
// Does some work reading from r.
size_t work(Reader* r) {
    uint8_t buf[8];
    return (*r)->Read(r, buf, sizeof(buf));
}
```

And `main` converts `Zeros*` to `Reader*` with a simple type cast:

```c
int main(void) {
    Zeros z = NewZeros();
    Reader* r = (Reader*)&z;
    work(r);
    work(r);
    printf("total = %zu\n", z.total);
}
```

```
total = 16
```

This keeps `Zeros` pretty lightweight, only adding one extra `mtab` field. But the `(Reader*)&z` cast only works because `Reader mtab` is the first field in `Zeros`. If we try to implement a second interface, things will break, just like in the very first solution.

I think the "method table in the interface" approach is much better.
Bonus: Type assertions
Go has an `io.Copy` function that copies data from a source (a reader) to a destination (a writer):

```go
func Copy(dst Writer, src Reader) (written int64, err error)
```

There's an interesting comment in its documentation:

> If `src` implements `WriterTo`, the copy is implemented by calling `src.WriteTo(dst)`. Otherwise, if `dst` implements `ReaderFrom`, the copy is implemented by calling `dst.ReadFrom(src)`.

Here's what the function looks like:

```go
func Copy(dst Writer, src Reader) (written int64, err error) {
    // If the reader has a WriteTo method, use it to do the copy.
    // Avoids an allocation and a copy.
    if wt, ok := src.(WriterTo); ok {
        return wt.WriteTo(dst)
    }
    // Similarly, if the writer has a ReadFrom method, use it to do the copy.
    if rf, ok := dst.(ReaderFrom); ok {
        return rf.ReadFrom(src)
    }
    // The default implementation using regular Reader and Writer.
    // ...
}
```

`src.(WriterTo)` is a type assertion that checks if the `src` reader is not just a `Reader`, but also implements the `WriterTo` interface. The Go runtime handles these kinds of dynamic type checks.

Can we do something like this in C? I'd prefer not to make it fully dynamic, since trying to recreate parts of the Go runtime in C probably isn't a good idea.

What we can do is add an optional `AsWriterTo` method to the `Reader` interface:

```c
// An interface that wraps the basic Read method.
// Read reads up to len(p) bytes into p.
typedef struct {
    // required
    size_t (*Read)(void* self, uint8_t* p, size_t len);
    // optional
    WriterTo (*AsWriterTo)(void* self);
} ReaderTable;

typedef struct {
    const ReaderTable* mtab;
    void* self;
} Reader;
```

Then we can easily check if a given `Reader` is also a `WriterTo`:

```c
void work(Reader r) {
    // Check if r implements WriterTo.
    if (r.mtab->AsWriterTo) {
        WriterTo wt = r.mtab->AsWriterTo(r.self);
        // Use r as WriterTo...
        return;
    }
    // Use r as a regular Reader...
    return;
}
```

Still, this feels a bit like a hack. I'd rather avoid using type assertions unless it's really necessary.
Final thoughts
Interfaces (traits, really) in C are possible, but they're not as simple or elegant as in Go or Rust. The method table approach we discussed is a good starting point. It's memory-efficient, as type-safe as possible given C's limitations, and supports polymorphic behavior.
Here's the full source code if you are interested:
```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

// An interface that wraps the basic Read method.
// Read reads up to len(p) bytes into p.
typedef struct {
    size_t (*Read)(void* self, uint8_t* p, size_t len);
} ReaderTable;

typedef struct {
    const ReaderTable* mtab;
    void* self;
} Reader;

// Reads len(p) bytes into p.
size_t Reader_Read(Reader r, uint8_t* p, size_t len) {
    return r.mtab->Read(r.self, p, len);
}

// An infinite stream of zero bytes.
typedef struct {
    size_t total;
} Zeros;

// Reads len(p) bytes into p.
size_t Zeros_Read(Zeros* z, uint8_t* p, size_t len) {
    for (size_t i = 0; i < len; i++) {
        p[i] = 0;
    }
    z->total += len;
    return len;
}

// Returns a Reader implementation for Zeros.
Reader Zeros_Reader(Zeros* z) {
    // The method table is only initialized once.
    static const ReaderTable impl = {
        .Read = (size_t (*)(void*, uint8_t*, size_t))Zeros_Read,
    };
    return (Reader){.mtab = &impl, .self = z};
}

// Does some work reading from r.
size_t work(Reader r) {
    uint8_t buf[8];
    return Reader_Read(r, buf, sizeof(buf));
}

int main(void) {
    Zeros z = {0};
    Reader r = Zeros_Reader(&z);
    work(r);
    work(r);
    printf("total = %zu\n", z.total);
}
```

```
total = 16
```

Cheers!
- The
-
đ r/wiesbaden Time travel on the platform: a steam locomotive in Wiesbaden rss
submitted by /u/DealKompassDE
[link] [comments] -
đ r/wiesbaden Bicycle repair in the city center? rss
Hello everyone! I'm looking for a good place to get my e-cargo bike (Tern GSD) repaired in Wiesbaden's city center. Lucky Bike on Mainzer Straße used to be my go-to, but they have now moved to Biebrich. Ambrosius in Biebrich is also good, but neither is easy for me to reach. Does anyone know a workshop in the city center that can handle this? I'd be grateful for any tips!
submitted by /u/eggsplorer
[link] [comments] -
đ @cxiao@infosec.exchange RE: mastodon
RE: https://infosec.exchange/@watchTowr/115935944816059052
another incredible post about the power of simply reverse engineering patches
no spoilers but a preview:
-
đ r/LocalLLaMA Fei Fei Li dropped a non-JEPA world model, and the spatial intelligence is insane rss
Fei-Fei Li, the "godmother of modern AI" and a pioneer in computer vision, founded World Labs a few years ago with a small team and $230 million in funding. Last month, they launched https://marble.worldlabs.ai/, a generative world model that's not JEPA, but instead built on Neural Radiance Fields (NeRF) and Gaussian splatting. It's insanely fast for what it does, generating explorable 3D worlds in minutes. For example: this scene. Crucially, it's not video. The frames aren't rendered on-the-fly as you move. Instead, it's a fully stateful 3D environment represented as a dense cloud of Gaussian splats, each with position, scale, rotation, color, and opacity. This means the world is persistent, editable, and supports non-destructive iteration. You can expand regions, modify materials, and even merge multiple worlds together. You can share your world, others can build on it, and you can build on theirs. It natively supports VR (Vision Pro, Quest 3), and you can export splats or meshes for use in Unreal, Unity, or Blender via USDZ or GLB. It's early, there are (very literally) rough edges, but it's crazy to think about this in 5 years. For free, you get a few generations to experiment; $20/month unlocks a lot. I just did one month so I could actually play, and definitely didn't max out credits. Fei-Fei Li is an OG AI visionary, but zero hype. She's been quiet, especially about this. So Marble hasn't gotten the attention it deserves. At first glance, visually, you might think "meh"... but there's no triangle-based geometry here, no real-time rendering pipeline, no frame-by-frame generation. Just a solid, exportable, editable, stateful pile of splats. The breakthrough isn't the image though, it's the spatial intelligence. Y'all should play around, it's wild. I know this is a violation of Rule #2 but honestly there just aren't that many subs with people smart enough to appreciate this; no hard feelings if it needs to be removed though. submitted by /u/coloradical5280
[link] [comments]
-
đ badlogic/pi-mono v0.49.3 release
Added
markdown.codeBlockIndentsetting to customize code block indentation in rendered output (#855 by @terrorobe)- Added
inline-bash.tsexample extension for expanding!{command}patterns in prompts (#881 by @scutifer) - Added
antigravity-image-gen.tsexample extension for AI image generation via Google Antigravity (#893 by @benvargas) - Added
PI_SHARE_VIEWER_URLenvironment variable for custom share viewer URLs (#889 by @andresaraujo) - Added Alt+Delete as hotkey for delete word forwards (#878 by @Perlence)
Changed
- Tree selector: changed label filter shortcut from
ltoShift+Lso users can search for entries containing "l" (#861 by @mitsuhiko) - Fuzzy matching now scores consecutive matches higher for better search relevance (#860 by @mitsuhiko)
Fixed
- Fixed error messages showing hardcoded
~/.pi/agent/paths instead of respectingPI_CODING_AGENT_DIR(#887 by @aliou) - Fixed
writetool not displaying errors in the UI when execution fails (#856) - Fixed HTML export using default theme instead of user's active theme (#870 by @scutifer)
- Show session name in the footer and terminal / tab title (#876 by @scutifer)
- Fixed 256color fallback in Terminal.app to prevent color rendering issues (#869 by @Perlence)
- Fixed viewport tracking and cursor positioning for overlays and content shrink scenarios
- Fixed autocomplete to allow searches with
/characters (e.g.,folder1/folder2) (#882 by @richardgill) - Fixed autolinked emails displaying redundant
(mailto:...)suffix (#888 by @terrorobe) - Fixed
@file autocomplete adding space after directories, breaking continued autocomplete into subdirectories
-
đ @cxiao@infosec.exchange **Content warning:** cwing my further deranged mark carney speech thoughts for mastodon
Content warning: cwing my further deranged mark carney speech thoughts for the sanctity of ur timeline, also NDP leadership race
Speaking of the other federal parties having absolutely no foreign policy vision to counter the Liberals: This is what the NDP leadership candidates have said so far about foreign policy as of January 15th. Frankly this is embarrassing. Like if none of the leadership candidates can even produce any coherent thoughts on US relations what are we even doing here.
Also not in this image: Immigration. But if you check out the summary page with all policy proposals (linked) it's still "No policy released". There should be several slam dunks here about immigrant worker rights, about protection from exploitation, about the concerning rise in anti immigrant sentiment, about the key role immigrants are going to play in nation building, about the building of Canada's own immigration security and border apparatus which is extremely concerning, etc. Having absolutely no thoughts on this is completely unacceptable.
All of the nice domestic promises in the other policy images, the climate promises, the affordability promises, depend on us having a functional economy, on immigrants to power it, on resources and services from other countries to build it, and on not being blockaded or invaded! Some of the most critical threats to workers in several sectors across the country right now is directly due to tariffs, and companies' response to tariffs!
I hope more comes out before the next leadership debate but right now, if we are going to criticize the Carney speech from the left, it's kind of concerning that the contenders to be leader of the main left party seem to be unable to formulate any international relations thoughts
https://progresscanada.substack.com/p/tracking-the-policy-commitments- in
-
đ Console.dev newsletter qmd rss
Description: CLI search for local Markdown.
What we like: On-device text search. Indexes anything Markdown. Supports keyword and natural language search. Embeds a local LLM to help with ranking results. Includes an MCP server so you can integrate with your AI of choice. Various output formats (text, JSON, CSV, Markdown).
What we dislike: Not multi-platform - only works on macOS.
-
đ Console.dev newsletter RepoBar rss
Description: Access GitHub from your status bar.
What we like: Exposes key GitHub primitives in your status bar - issues, PRs, releases, actions, checks, latest activity. Dig into each repo to browse open issues, PRs, etc. Shows local Git state so you can easily switch worktrees. Will automatically discover local repos. Bundles a basic CLI to get the data in your terminal.
What we dislike: Source is available on GitHub, but thereâs no license (yet?).
-
đ Rust Blog Announcing Rust 1.93.0 rss
The Rust team is happy to announce a new version of Rust, 1.93.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via
`rustup`, you can get 1.93.0 with:

```
$ rustup update stable
```

If you don't have it already, you can get
rustupfrom the appropriate page on our website, and check out the detailed release notes for 1.93.0.If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (
rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!What's in 1.93.0 stable
Update bundled musl to 1.2.5
The various
`*-linux-musl` targets now all ship with musl 1.2.5. This primarily affects static musl builds for `x86_64`, `aarch64`, and `powerpc64le`, which bundled musl 1.2.3. This update comes with several fixes and improvements, and a breaking change that affects the Rust ecosystem.

For the Rust ecosystem, the primary motivation for this update is to receive major improvements to musl's DNS resolver, which shipped in 1.2.4 and received bug fixes in 1.2.5. When using musl targets for static linking, this should make portable Linux binaries that do networking more reliable, particularly in the face of large DNS records and recursive nameservers.

However, 1.2.4 also comes with a breaking change: the removal of several legacy compatibility symbols that the Rust libc crate was using. A fix for this was shipped in libc 0.2.146 in June 2023 (2.5 years ago), and we believe it has sufficiently widely propagated that we're ready to make the change in Rust targets.
See our previous announcement for more details.
Allow the global allocator to use thread-local storage
Rust 1.93 adjusts the internals of the standard library to permit global allocators written in Rust to use std's
`thread_local!` and `std::thread::current` without re-entrancy concerns, by using the system allocator instead.

See docs for details.
`cfg` attributes on `asm!` lines
cfg'd, the fullasm!block would need to be repeated with and without that section. In 1.93,cfgcan now be applied to individual statements within theasm!block.asm!( // or global_asm! or naked_asm! "nop", #[cfg(target_feature = "sse2")] "nop", // ... #[cfg(target_feature = "sse2")] a = const 123, // only used on sse2 );Stabilized APIs
- `<[MaybeUninit<T>]>::assume_init_drop`
- `<[MaybeUninit<T>]>::assume_init_ref`
- `<[MaybeUninit<T>]>::assume_init_mut`
- `<[MaybeUninit<T>]>::write_copy_of_slice`
- `<[MaybeUninit<T>]>::write_clone_of_slice`
- `String::into_raw_parts`
- `Vec::into_raw_parts`
- `<iN>::unchecked_neg`
- `<iN>::unchecked_shl`
- `<iN>::unchecked_shr`
- `<uN>::unchecked_shl`
- `<uN>::unchecked_shr`
- `<[T]>::as_array`
- `<[T]>::as_mut_array`
- `<*const [T]>::as_array`
- `<*mut [T]>::as_mut_array`
- `VecDeque::pop_front_if`
- `VecDeque::pop_back_if`
- `Duration::from_nanos_u128`
- `char::MAX_LEN_UTF8`
- `char::MAX_LEN_UTF16`
- `std::fmt::from_fn`
- `std::fmt::FromFn`
Other changes
Check out everything that changed in Rust, Cargo, and Clippy.
Contributors to 1.93.0
Many people came together to create Rust 1.93.0. We couldn't have done it without all of you. Thanks!
-
- January 21, 2026
-
đ IDA Plugin Updates IDA Plugin Updates on 2026-01-21 rss
IDA Plugin Updates on 2026-01-21
New Releases:
Activity:
- DeepExtractIDA
- gdbsync
- 640b8d06: update
- ghidra-chinese
- 30dfc342: Merge pull request #87 from TC999/sync
- HappyIDA
- hrtng
- 6c1a88d9: a few improvements:
- IDA-DataExportPlus
- idalib
- LUDA
- d21ba11a: Update README.md
- bb520059: Update README.md
- 381b9e2a: Merge branch 'master' of https://github.com/stolevchristian/LUDA
- f2433318: Func
- msc-thesis-LLMs-to-rank-decompilers
-
đ langchain-ai/deepagents deepagents==0.3.7 release
Changes since deepagents==0.3.6
fix(sdk): don't dedent summarization prompt (#870)
feat(deepagents): truncate old write / edit calls in message history (#806)
release: patch release 0.3.7 (#869)
chore(deepagents): add end to end tests to confirm file reducer working properly in state backend (#754)
chore: improve filesystem and subagents tool descriptions (#807)
docs(SDK): clarify usage of file system backend (#850)
nit: standardize naming (#849)
docs(sdk): `FilesystemBackend` refs fixes (#791)
feat: show allowed tools for each skill in skill list (#837)
fix(sdk): FilesystemMiddleware forward 'name' attribute in large_tool_result from the original tool msg (#825)
docs(sdk): docstring formatting nits (#824)
feat: summarization offloading (#742)
Bump version to 0.3.7a1 (#817)
Add Async ops to Store Backend (#816)
chore(deepagents): add tests for grep in end to end tests (#805)
chore(deepagents): bump langchain in lock file (#800)
chore(deps): bump the uv group across 5 directories with 1 update (#811)
fix(deepagents): respect continuation markers when reading files (#809)
fix(infra): exclude `build/` from typechecking (#808)
feat: support `SystemMessage` for parity w/ `create_agent` (#803)
chore(deepagents): end to end tests for agent writing/editing files (#804)
fix(sdk): BaseSandbox.ls_info() to return absolute paths (#797)
fix(deepagents): truncate lines on read (#784)
chore(deps): bump the uv group across 3 directories with 3 updates (#796)
fix: refinements for `test_summarization` (#786)
docs: fix old URLs (#787)
docs: add testing readme (#788)
fix: added error catching for file operations without permissions (#734)
docs(deepagents): update subagent spec (#785)
chore(deepagents): add mini eval for summarization (#751)
docs(sdk): improve `FileSystemBackend` ref docs (#783)
đ remorses/critique critique@0.1.41 release
reviewcommand:- Filter
--resumereviews by current working directory (only shows reviews from cwd or subdirectories) - Use ACP
unstable_listSessionsfor OpenCode instead of parsing JSON files directly - Falls back to file-based parsing for Claude Code when ACP method unavailable
- Add instruction to always close code blocks before new text (fixes unclosed diagram blocks)
- Filter
-
đ r/LocalLLaMA 8x AMD MI50 32GB at 26 t/s (tg) with MiniMax-M2.1 and 15 t/s (tg) with GLM 4.7 (vllm-gfx906) rss
- MiniMax-M2.1 AWQ 4bit @ 26.8 tok/s (output) // 3000 tok/s (input of 30k tok) on vllm-gfx906 with MAX context length (196608)
- GLM 4.7 AWQ 4bit @ 15.6 tok/s (output) // 3000 tok/s (input of 30k tok) on vllm-gfx906 with context length 95000
GPUs cost: $880 for 256GB VRAM (early 2025 prices)
Power draw: 280W (idle) / 1200W (inference)
Goal: reach one of the most cost-effective setups in the world for fast, intelligent local inference.
Credits: BIG thanks to the global open source community! All setup details here: https://github.com/ai-infos/guidances- setup-8-mi50-glm47-minimax-m21/tree/main
Feel free to ask any questions and/or share any comments.
PS: a few weeks ago, I posted this setup of 16 MI50s with DeepSeek V3.2 here: https://www.reddit.com/r/LocalLLaMA/comments/1q6n5vl/16x_amd_mi50_32gb_at_10_ts_tg_2k_ts_pp_with/
After a few more tests/dev on it, I could have reached 14 tok/s, but it was still not stable beyond ~18k tokens of input context (generating garbage output), so almost useless for me. Whereas the above models (MiniMax M2.1 and GLM 4.7) are pretty stable at long context, so usable for coding-agent use cases etc.
submitted by /u/ai-infos
[link] [comments]
-
đ remorses/critique critique@0.1.40 release
reviewcommand:- Increased session/review picker limits from 10/20 to 25 for both ACP sessions and
--resume
- Increased session/review picker limits from 10/20 to 25 for both ACP sessions and
-
đ remorses/critique critique@0.1.39 release
0.1.39
reviewcommand:- Enhanced splitting rules in system prompt: never show hunks larger than 10 lines
- Added files must be split into parts with descriptions for each function/method
- More aggressive chunk splitting for reduced cognitive load
- Track review status:
in_progress(interrupted) orcompleted - Interrupted reviews saved on Ctrl+C/exit and can be restarted via
--resume - Use ACP session ID as review ID
- Show status indicator in review picker (yellow for in progress)
- JSON file only written on exit/completion to prevent concurrent access issues
0.1.38
reviewcommand:- Add
--resumeflag to view previously saved reviews - Reviews are automatically saved to
~/.critique/reviews/on completion - Select from recent reviews with interactive picker (ordered by creation time)
- Resume supports
--webflag to generate shareable URL - AI now generates a
titlefield in YAML for better review summaries - Keeps last 50 reviews, auto-cleans older ones
- Add
0.1.37
reviewcommand:- Add
--model <id>option to specify which model to use for review - Model format depends on agent:
- OpenCode:
provider/model-id(e.g.,anthropic/claude-sonnet-4-20250514) - Claude Code:
model-id(e.g.,claude-sonnet-4-20250514) - Shows available models with helpful error message if invalid model specified
- Add
-
đ remorses/critique critique@0.1.36 release
reviewcommand:- Use Unicode filled arrows (
â¶,â,âŒ) in diagram examples for proper parsing - Use
secondarytheme color for diagram text (purple in github theme)
- Use Unicode filled arrows (
-
đ r/reverseengineering capa in the browser - fully local static analysis to detect binary capabilities and behaviors rss
submitted by /u/Nightlark192
[link] [comments] -
đ @malcat@infosec.exchange A quick update on Malcat's MacOS development (apple silicon): mastodon
A quick update on Malcat's MacOS development (apple silicon):
A couple of visual glitches, but the analysis & UI are now functional \o/
-
đ r/LocalLLaMA Fix for GLM 4.7 Flash has been merged into llama.cpp rss
The world is saved! FA for CUDA in progress: https://github.com/ggml-org/llama.cpp/pull/18953 submitted by /u/jacek2023
[link] [comments]
-
đ r/wiesbaden Wiesbaden Taco Bell Party Details rss
Location: P+R Parkplatz Berliner Straße near Kasse 4 (next to BRITA-Arena (edit))
Date/Time: Friday 23 January 2026 at 1730
Cost: Free! First come, first served. I will accept donations or beer, but not required.
RSVP: Please comment below with the number of people attending, followed by any requests.
Request for items: You can request items but no guarantees. Cut off time is 1700 on 22 January.
What am I providing: I will buy an assortment of tacos (hard, soft & Dorito (if in stock)), burritos, quesadillas, etc., along with hot sauce.
Drinks: BYOB (bring your own beverage); there is no Baja Blast anyways, so you aren't missing out on anything you can't already get.
submitted by /u/OldBayExorcism
[link] [comments] -
đ @cxiao@infosec.exchange **Content warning:** cwing my further deranged mark carney speech thoughts for mastodon
Content warning: cwing my further deranged mark carney speech thoughts for the sanctity of ur timeline
Some good Takes™ from other people on this:
1) This position of middle-power tightrope walking, with different alliances among capricious partners for different purposes, is what the Global South has had to deal with for years (including all the countries now dealing with a China that has colonial interests...)
2) The "when we only negotiate bilaterally with a hegemon, we negotiate from weakness, we compete with each other to be the most accommodating" feels in some ways like...a very subtle shade throw to countries which did not really care when Canada's sovereignty was being threatened last year and who mostly rolled over with trade deals -
đ r/LocalLLaMA vLLM v0.14.0 released rss
submitted by /u/jinnyjuice
[link] [comments]
-
đ @cxiao@infosec.exchange And honestly, the more I think about it, the more I realize that the other mastodon
And honestly, the more I think about it, the more I realize that the other parties really have no coherent foreign policy, and the more I think that is a huge problem. The CPC's foreign policy is ?????, swinging wildly between loving Trump (for the base) and criticizing the Liberals for not standing up to Trump (but never saying how they themselves would walk that tightrope). The NDP can be forgiven for not having one now. But none of the leadership candidates seem to have any big ideas about how to steer through this dangerous world we're in now. We can't do all of the nice domestic things either of the parties promise, if our sovereignty isn't even guaranteed...
-
đ @cxiao@infosec.exchange RE: [https://flipboard.com/@cbcnews/politics-2qr4m137z/-/a-zVLYs- mastodon
RE: https://flipboard.com/@cbcnews/politics-2qr4m137z/-/a-zVLYs- OUTPKU827dJAL8Nw%3Aa%3A107108217-%2F0
Yes. This was a really important speech and a really important signal. Whether we can meet this vision is a different question. But I think regardless of that, I think we will all see it, in 3 years, as a turning point.
I think many of us in Canada who follow the news maybe do not think it is as consequential because some of the latter half of the speech is talking points we have heard before, and because we have been dealing with the US breathing down our necks more directly than others. Carney has been saying to us for a year already that things have fundamentally changed (and we are tired of hearing it from him, and frustrated with how slow the change has been).
But the clarity of the speech, the direct positioning of Canada as a middle power leader, and the frank assessment of what the global order is like now - it is a huge contrast to what everyone else who is dealing with the US, especially the Europeans, has been saying. Especially with Trump posting last night the image of Greenland, Canada, and Venezuela all covered by the US flag, no one seems to have been conveying an attitude that matched the seriousness of the moment.
The blunt assessment and lofty vision in the speech sets this government up for a very challenging task with sky high expectations. But honestly, neither the NDP or the Conservatives have a foreign policy vision that is nearly as cohesive as this, and that comes close to matching the danger of the moment we're in now. That's really bad and those parties must do better, because there are huge risks with this new vision and we need opposition to seriously grapple with those risks. It is easy to say "we will simply stand up more to the US", but it is honestly hard to believe the opposition parties when they say this, because there is no coherent vision for how they propose to navigate differently.
I hate living through an era of Canadian history that feels like it came from my school textbooks. But now this is what we are all dealing with.
-
đ Mitchell Hashimoto Don't Trip[wire] Yourself: Testing Error Recovery in Zig rss
(empty) -
đ Rust Blog crates.io: development update rss
Time flies! Six months have passed since our last crates.io development update, so it's time for another one. Here's a summary of the most notable changes and improvements made to crates.io over the past six months.
Security Tab
Crate pages now have a new "Security" tab that displays security advisories from the RustSec database. This allows you to quickly see if a crate has known vulnerabilities before adding it as a dependency.

The tab shows known vulnerabilities for the crate along with the affected version ranges.
This feature is still a work in progress, and we plan to add more functionality in the future. We would like to thank the OpenSSF (Open Source Security Foundation) for funding this work and Dirkjan Ochtman for implementing it.
Trusted Publishing Enhancements
In our July 2025 update, we announced Trusted Publishing support for GitHub Actions. Since then, we have made several enhancements to this feature.
GitLab CI/CD Support
Trusted Publishing now supports GitLab CI/CD in addition to GitHub Actions. This allows GitLab users to publish crates without managing API tokens, using the same OIDC-based authentication flow.
Note that this currently only works with GitLab.com. Self-hosted GitLab instances are not supported yet. The crates.io implementation has been refactored to support multiple CI providers, so adding support for other platforms like Codeberg/Forgejo in the future should be straightforward. Contributions are welcome!
Trusted Publishing Only Mode
Crate owners can now enforce Trusted Publishing for their crates. When enabled in the crate settings, traditional API token-based publishing is disabled, and only Trusted Publishing can be used to publish new versions. This reduces the risk of unauthorized publishes from leaked API tokens.
Blocked Triggers
The `pull_request_target` and `workflow_run` GitHub Actions triggers are now blocked from Trusted Publishing. These triggers have been responsible for multiple security incidents in the GitHub Actions ecosystem and are not worth the risk.
Source Lines of Code
Crate pages now display source lines of code (SLOC) metrics, giving you insight into the size of a crate before adding it as a dependency. This metric is calculated in a background job after publishing using the tokei crate. It is also shown on OpenGraph images:

Thanks to XAMPPRocky for maintaining the `tokei` crate!
Publication Time in Index
A new `pubtime` field has been added to crate index entries, recording when each version was published. This enables several use cases:
- Cargo can implement cooldown periods for new versions in the future
- Cargo can replay dependency resolution as if it were a past date, though yanked versions remain yanked
- Services like Renovate can determine release dates without additional API requests
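The replay use case lends itself to a small sketch. Index entries are stored one JSON object per version; the `pubtime` field name comes from this post, but the exact entry shape and the RFC 3339 timestamp format below are assumptions for illustration only:

```typescript
// Sketch: filter index entries as of a past date. Only `pubtime` is from the
// announcement; the entry shape and timestamp format are assumed here.
type IndexEntry = { vers: string; yanked: boolean; pubtime?: string };

function visibleAsOf(entries: IndexEntry[], cutoff: Date): IndexEntry[] {
  return entries.filter(
    (e) =>
      !e.yanked && // yanked versions remain yanked, even when replaying the past
      (e.pubtime === undefined ||
        new Date(e.pubtime).getTime() <= cutoff.getTime()),
  );
}

const entries: IndexEntry[] = [
  { vers: "1.0.0", yanked: false, pubtime: "2025-06-01T00:00:00Z" },
  { vers: "1.1.0", yanked: true, pubtime: "2025-09-01T00:00:00Z" },
  { vers: "1.2.0", yanked: false, pubtime: "2026-01-15T00:00:00Z" },
];

console.log(
  visibleAsOf(entries, new Date("2025-12-31T00:00:00Z")).map((e) => e.vers),
);
// only "1.0.0" survives: 1.1.0 is yanked, 1.2.0 was published after the cutoff
```

This is the mechanism a cooldown or "resolve as of date X" feature would need: a cutoff comparison per version, with yanking applied unconditionally.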
Thanks to Rene Leonhardt for the suggestion and Ed Page for driving this forward on the Cargo side.
Svelte Frontend Migration
At the end of 2025, the crates.io team evaluated several options for modernizing our frontend and decided to experiment with porting the website to Svelte. The goal is to create a one-to-one port of the existing functionality before adding new features.
This migration is still considered experimental and is a work in progress. Using a more mainstream framework should make it easier for new contributors to work on the frontend. The new Svelte frontend uses TypeScript and generates type-safe API client code from our OpenAPI description, so types flow from the Rust backend to the TypeScript frontend automatically.
Thanks to eth3lbert for the helpful reviews and guidance on Svelte best practices. We'll share more details in a future update.
Miscellaneous
These were some of the more visible changes to crates.io over the past six months, but a lot has happened "under the hood" as well.
-
Cargo user agent filtering : We noticed that download graphs were showing a constant background level of downloads even for unpopular crates due to bots, scrapers, and mirrors. Download counts are now filtered to only include requests from Cargo, providing more accurate statistics.
-
HTML emails : Emails from crates.io now support HTML formatting.
-
Encrypted GitHub tokens : OAuth access tokens from GitHub are now encrypted at rest in the database. While we have no evidence of any abuse, we decided to improve our security posture. The tokens were never included in the daily database dump, and the old unencrypted column has been removed.
-
Source link : Crate pages now display a "Browse source" link in the sidebar that points to the corresponding docs.rs page. Thanks to Carol Nichols for implementing this feature.
-
Fastly CDN : The sparse index at index.crates.io is now served primarily via Fastly to conserve our AWS credits for other use cases. In the past month, static.crates.io served approximately 1.6 PB across 11 billion requests, while index.crates.io served approximately 740 TB across 19 billion requests. A big thank you to Fastly for providing free CDN services through their Fast Forward program!
-
OpenGraph image improvements : We fixed emoji and CJK character rendering in OpenGraph images, which was caused by missing fonts on our server.
-
Background worker performance : Database indexes were optimized to improve background job processing performance.
-
CloudFront invalidation improvements : Invalidation requests are now batched to avoid hitting AWS rate limits when publishing large workspaces.
Feedback
We hope you enjoyed this update on the development of crates.io. If you have any feedback or questions, please let us know on Zulip or GitHub. We are always happy to hear from you and are looking forward to your feedback!
-
- January 20, 2026
-
đ IDA Plugin Updates IDA Plugin Updates on 2026-01-20 rss
IDA Plugin Updates on 2026-01-20
New Releases:
Activity:
- binary-code-similarity-detection
- capa
- CrystalRE
- DeepExtractIDA
- FeelingLucky
- GoResolver
- 8446475a: GoResolver CLI extension & Plugin in-SRE analysis.
- HappyIDA
- 0d5aa0bd: docs: update labeler toggle usage
- ea759c5d: release: v1.0.2
- 0e3784e9: docs: update README.md
- eb288081: fix: BWN_HEXVIEW <9.1 compatibility issue
- 34d09a7b: refactor: move actions to seperate folder
- 5c56d82e: feat: add toggle for labeler
- 847ff8d3: docs: update README.md for copy address feature
- 652329f4: feat: add copy address action for disasm, hexdump, and decompiled views
- IDA-DataExportPlus
- 76074c71: Merge pull request #5 from secretlay3r/main
- 39fdc139: Fix method calling and field enabling logic in Form class
- 9e2273a2: Uniform the format of class method names
- a441d91b: Merge branch 'main' into pr/secretlay3r/5
- 42535129: Set the option to preserve comments and names when exporting assemblyâŠ
- 30522f66: Update README.zh_CN.md
- a86528d3: Update README.md
- 0355b154: Reconstruct address selection logic to support input of end address
- IDA-NO-MCP
- 7e347ef2: Merge pull request #4 from PwnYouLin/main
- IDAPlugins
- dc1c308a: update: Migrate plugin format to hcli-supported format
- idawilli
- rhabdomancer
-
đ r/LocalLLaMA Current GLM-4.7-Flash implementation confirmed to be broken in llama.cpp rss
Recent discussion in https://github.com/ggml-org/llama.cpp/pull/18936 seems to confirm my suspicions that the current llama.cpp implementation of GLM-4.7-Flash is broken.
There are significant differences in logprobs compared to vLLM. That could explain the looping issues, overthinking, and general poor experiences people have been reporting recently.
Edit:
There is a potential fix already in this PR thanks to Piotr:
https://github.com/ggml-org/llama.cpp/pull/18980
submitted by /u/Sweet_Albatross9772
[link] [comments] -
đ r/reverseengineering This open-source Windows XP alternative finally gets a much-awaited speed boost rss
submitted by /u/Jeditobe
[link] [comments] -
đ @HexRaysSA@infosec.exchange We're heading to D.C. for mastodon
We're heading to D.C. for @DistrictCon this weekend and would love to connect.
Our Head of Marketing is available to discuss content collaborations and other partnerships, and our Product Evangelist is always eager for product feedback, user insights, and more.
Book a few minutes with us during the conference: https://meetings-eu1.hubspot.com/justine-benjamin/districtcon-2026
-
đ r/LocalLLaMA You have 64gb ram and 16gb VRAM; internet is permanently shut off: what 3 models are the ones you use? rss
No more internet: you have 3 models you can run
What local models are you using?
submitted by /u/Adventurous-Gold6413
[link] [comments] -
đ r/reverseengineering Google Meet Reactions: Reverse Engineering the WebRTC Channel for Emoji rss
submitted by /u/ArtemFinland
[link] [comments] -
đ sacha chua :: living an awesome life Emacs and whisper.el :Trying out different speech-to-text backends and models rss
I was curious about parakeet because I heard that it was faster than Whisper on the HuggingFace leaderboard. When I installed it and got it running on my laptop (CPU only, no GPU), it seemed like my results were a little faster than whisper.cpp with the large model, but much slower than whisper.cpp with the base model. The base model is decent for quick dictation, so I got curious about other backends and other models.
In order to try natrys/whisper.el with other backends, I needed to work around how whisper.el validates the model names and sends requests to the servers. Here's the quick and dirty code for doing so, in case you want to try it out for yourself.
```elisp
(defvar my-whisper-url-format "http://%s:%d/transcribe")

(defun whisper--transcribe-via-local-server ()
  "Transcribe audio using the local whisper server."
  (message "[-] Transcribing via local server")
  (whisper--setup-mode-line :show 'transcribing)
  (whisper--ensure-server)
  (setq whisper--transcribing-process
        (whisper--process-curl-request
         (format my-whisper-url-format whisper-server-host whisper-server-port)
         (list "Content-Type: multipart/form-data")
         (list (concat "file=@" whisper--temp-file)
               "temperature=0.0"
               "temperature_inc=0.2"
               "response_format=json"
               (concat "model=" whisper-model)
               (concat "language=" whisper-language)))))

(defun whisper--check-model-consistency () t)
```

Then I have this function for trying things out.
```elisp
(defun my-test-whisper-api (url &optional args)
  (with-temp-buffer
    (apply #'call-process "curl" nil t nil "-s" url
           (append
            (mapcan (lambda (h) (list "-H" h))
                    (list "Content-Type: multipart/form-data"))
            (mapcan (lambda (h) (list "-F" h))
                    (list (concat "file=@" whisper--temp-file)
                          "temperature=0.0"
                          "temperature_inc=0.2"
                          "response_format=verbose_json"
                          (concat "language=" whisper-language)))
            args))
    (message "%s %s" (buffer-string) url)))
```

Here's the audio file. It is around 10 seconds long. I run the benchmark 3 times and report the average time.
Code for running the benchmarks:

```elisp
(mapcar
 (lambda (group)
   (let ((whisper--temp-file "/home/sacha/recordings/whisper/2026-01-19-14-17-53.wav"))
     ;; warm up the model
     (eval (cadr group))
     (list (format "%.3f"
                   (/ (car (benchmark-call (lambda () (eval (cadr group))) times))
                      times))
           (car group))))
 '(("parakeet"
    (my-test-whisper-api
     (format "http://%s:%d/v1/audio/transcriptions" whisper-server-host 5092)))
   ("whisper.cpp base-q4_0"
    (my-test-whisper-api
     (format "http://%s:%d/inference" whisper-server-host 8642)))
   ("speaches whisper-base"
    (my-test-whisper-api
     (format "http://%s:%d/v1/audio/transcriptions" whisper-server-host 8001)
     (list "-F" "model=Systran/faster-whisper-base")))
   ("speaches whisper-base.en"
    (my-test-whisper-api
     (format "http://%s:%d/v1/audio/transcriptions" whisper-server-host 8001)
     (list "-F" "model=Systran/faster-whisper-base.en")))
   ("speaches whisper-small"
    (my-test-whisper-api
     (format "http://%s:%d/v1/audio/transcriptions" whisper-server-host 8001)
     (list "-F" "model=Systran/faster-whisper-small")))
   ("speaches whisper-small.en"
    (my-test-whisper-api
     (format "http://%s:%d/v1/audio/transcriptions" whisper-server-host 8001)
     (list "-F" "model=Systran/faster-whisper-small.en")))
   ("speaches lorneluo/whisper-small-ct2-int8"
    (my-test-whisper-api
     (format "http://%s:%d/v1/audio/transcriptions" whisper-server-host 8001)
     (list "-F" "model=lorneluo/whisper-small-ct2-int8")))
   ;; needed export TORCH_FORCE_NO_WEIGHTS_ONLY_LOAD=1
   ("whisperx-server Systran/faster-whisper-small"
    (my-test-whisper-api
     (format "http://%s:%d/transcribe" whisper-server-host 8002)))))
```

| Avg time (s) | Backend |
|---|---|
| 3.694 | parakeet |
| 2.484 | whisper.cpp base-q4_0 |
| 1.547 | speaches whisper-base |
| 1.425 | speaches whisper-base.en |
| 4.076 | speaches whisper-small |
| 3.735 | speaches whisper-small.en |
| 2.870 | speaches lorneluo/whisper-small-ct2-int8 |
| 4.537 | whisperx-server Systran/faster-whisper-small |

I tried it with:
- parakeet
- whisper.cpp (as whisper.el sets it up)
- speaches, which is a front-end for faster-whisper, and
- whisperx-server, which is a front-end for whisperx
Looks like speaches + faster-whisper-base is the winner for now. I like how speaches lets me switch models on the fly, so maybe I can use base.en generally and switch to base when I want to try dictating in French. Here's how I've set it up to use the server I just set up.
```elisp
(setq whisper-server-port 8001
      whisper-model "Systran/faster-whisper-base.en"
      my-whisper-url-format "http://%s:%d/v1/audio/transcriptions")
```

At some point, I'll override `whisper--ensure-server` so that starting it up is smoother.
Benchmark notes: I have a Lenovo P52 laptop (released 2018) with an Intel Core i7-8850H (6 cores, 12 threads; 2.6 GHz base / 4.3 GHz turbo), 64GB RAM, and an SSD. I haven't figured out how to get the GPU working under Ubuntu yet.
You can comment on Mastodon or e-mail me at sacha@sachachua.com.
-
đ r/wiesbaden Any fun events this weekend rss
Hi, I'm new to Germany. I'm 19, male. I'm interested in anything from a big fun soccer game to a small party, just let me know
submitted by /u/GuavaCool4628
[link] [comments] -
đ r/LocalLLaMA Liquid AI released the best thinking Language Model Under 1GB rss
Liquid AI released LFM2.5-1.2B-Thinking, a reasoning model that runs entirely on-device. What needed a data centre two years ago now runs on any phone with 900 MB of memory.
-> Trained specifically for concise reasoning
-> Generates internal thinking traces before producing answers
-> Enables systematic problem-solving at edge-scale latency
-> Shines on tool use, math, and instruction following
-> Matches or exceeds Qwen3-1.7B (thinking mode) across most performance benchmarks, despite having 40% fewer parameters. At inference time, the gap widens further, outperforming both pure transformer models and hybrid architectures in speed and memory efficiency.
LFM2.5-1.2B-Thinking is available today, with broad day-one support across the on-device ecosystem.
Hugging Face: https://huggingface.co/LiquidAI/LFM2.5-1.2B-Thinking
LEAP: https://leap.liquid.ai/models?model=lfm2.5-1.2b-thinking
Liquid AI Playground: https://playground.liquid.ai/login?callbackUrl=%2F
submitted by /u/PauLabartaBajo
[link] [comments]
-
đ r/LocalLLaMA 768Gb Fully Enclosed 10x GPU Mobile AI Build rss
I haven't seen a system with this format before, but with how successful the result was, I figured I might as well share it. Specs:
- Threadripper Pro 3995WX w/ ASUS WS WRX80E-SAGE WiFi II
- 512GB DDR4
- 256GB GDDR6X/GDDR7 (8x 3090 + 2x 5090)
- EVGA 1600W + ASRock 1300W PSUs
- Case: Thermaltake Core W200
- OS: Ubuntu
- Est. expense: ~$17k

The objective was to make a system for running extra-large MoE models (Deepseek and Kimi K2 specifically) that is also capable of lengthy video generation and rapid high-detail image gen (the system will be supporting a graphic designer). The challenges/constraints: the system should be easily movable, and it should be enclosed. The result technically satisfies the requirements, with only one minor caveat.

Capital expense was also an implied constraint. We wanted to get the most potent system possible with the best technology currently available, without going down the path of needlessly spending tens of thousands of dollars for diminishing returns on performance/quality/creativity potential. Going all 5090s or 6000 PROs would have been unfeasible budget-wise and in the end likely unnecessary: two 6000s alone could have eaten the entire amount spent on the project, and if not for the two 5090s the final expense would have been much closer to ~$10k (still an extremely capable system, but this graphic artist would really benefit from the image/video gen time savings that only a 5090 can provide).

The biggest hurdle was the enclosure problem. I've seen mining frames zip-tied to a rack on wheels as a solution for mobility, but not only is this aesthetically unappealing, build construction and sturdiness quickly get called into question. This system would be living under the same roof as multiple cats, so an enclosure was almost beyond a nice-to-have: the hardware needs a physical barrier between the expensive components and curious paws. Mining frames were quickly ruled out altogether after a failed experiment.
Enter the W200, a platform that I'm frankly surprised I haven't heard suggested before in forum discussions about planning multi-GPU builds, and which is the main motivation for this post. The W200 is intended to be a dual-system enclosure, but when the motherboard is installed upside-down in its secondary compartment, this makes a perfect orientation to connect risers to mounted GPUs in the "main" compartment. If you don't mind working in dense compartments to get everything situated (the sheer overall density of the system is among its only drawbacks), this approach reduces the jank of mining frame + wheeled rack solutions significantly. A few zip ties were still required to secure GPUs in certain places, but I don't feel remotely as anxious about moving the system to a different room, or letting cats inspect my work, as I would if it were any other configuration.

Now the caveat. Because of the specific GPU choices made (3 of the 3090s are AIO hybrids), one of the W200's fan mounting rails had to go on the main compartment side in order to mount their radiators (pic shown with the glass panel open, but it can be closed all the way). This means the system technically should not run without this panel at least slightly open so it doesn't impede exhaust, but if these AIO 3090s were blower/air cooled, I see no reason why this couldn't run fully closed all the time as long as fresh air intake is adequate.

The final case pic shows the compartment where the actual motherboard is installed (it is, however, very dense with risers and connectors, so unfortunately it is hard to actually see much of anything), with one of the 5090s removed. Airflow is very good overall (I believe 12x 140mm fans were installed throughout), GPU temps remain in good operating range under load, and it is surprisingly quiet when inferencing.
Honestly, given how many fans and high-power GPUs are in this thing, I am impressed by the acoustics. I don't have a sound meter to measure dB, but to me it doesn't seem much louder than my gaming rig. I typically power limit the 3090s to 200-250W and the 5090s to 500W depending on the workload.

Benchmarks:

| Model | GPU offload | Tokens generated | Time to first token | Token gen rate |
|---|---|---|---|---|
| Deepseek V3.1 Terminus Q2XXS | 100% | 2338 | 1.38s | 24.92 tps |
| GLM 4.6 Q4KXL | 100% | 4096 | 0.76s | 26.61 tps |
| Kimi K2 TQ1 | 87% | 1664 | 2.59s | 19.61 tps |
| Hermes 4 405b Q3KXL | 100% | (was so underwhelmed by the response quality I forgot to record, lol) | 1.13s | 3.52 tps |
| Qwen 235b Q6KXL | 100% | 3081 | 0.42s | 31.54 tps |

I've thought about doing a cost breakdown here, but with price volatility and the fact that so many components have gone up since I got them, I feel like there wouldn't be much of a point and it may only mislead someone. Current RAM prices alone would completely change the estimated cost of doing the same build today by several thousand dollars. Still, I thought I'd share my approach on the off chance it inspires or is interesting to someone.
submitted by /u/SweetHomeAbalama0
[link] [comments]
-
đ r/wiesbaden FreizeitfuĂball / casual football rss
Is anyone here playing casual football? My fiancĂ© (33M) has just moved to Wiesbaden and said heâd like to play once or twice a week.
submitted by /u/SillyRate1329
[link] [comments] -
đ r/reverseengineering frida-ipa-extract rss
submitted by /u/lvculic
[link] [comments] -
đ r/LocalLLaMA It's been one year since the release of Deepseek-R1 rss
submitted by /u/Recoil42
[link] [comments]
-
đ r/reverseengineering I have made an app to collect, decompile apk with apktool and jadx to have a reference, recompile it, sign it, zipalign it and install it. rss
submitted by /u/Swimming-Ad-5583
[link] [comments] -
đ @cxiao@infosec.exchange "To Every American Who's Sorry" mastodon
"To Every American Who's Sorry"
https://www.reddit.com/r/greenland/comments/1qhhijq/to_every_american_whos_sorry/
We see similar behaviour from Americans in Canadian online spaces, and offline as well (for example, https://www.ctvnews.ca/vancouver/article/anonymous-american-apologizes-to-canadians-on-vancouver-billboards/).
I agree with this post: It is annoying, tiring to deal with, and not useful. It serves only the purpose of making Americans feel better by dumping their guilt externally. Americans should redirect their energy elsewhere.
-
đ r/LocalLLaMA Bartowski comes through again. GLM 4.7 flash GGUF rss
-
đ @cxiao@infosec.exchange RE: mastodon
RE: https://flipboard.com/@cbcnews/calgary-s2m5l3ffz/-/a-4N3WgbkgTGaoZbWQVTDYZQ%3Aa%3A107108217-%2F0
"cyber threats are a risk"
looks inside
"The report states the City of Calgary's rate of clicking on malicious links between May and August 2024 was up to 15 times higher than other regional or similar-sized organizations."
đ how the f is this a valid way of measuring cyber risk in the year of our lord 2026
-
đ r/LocalLLaMA Unsloth GLM 4.7-Flash GGUF rss
-
đ r/reverseengineering On the Coming Industrialisation of Exploit Generation with LLMs rss
submitted by /u/tnavda
[link] [comments] -
đ r/reverseengineering Conditions in the Intel 8087 floating-point chip's microcode rss
submitted by /u/tnavda
[link] [comments] -
đ matklad Vibecoding #2 rss
Vibecoding #2
Jan 20, 2026
I feel like I got substantial value out of Claude today, and want to document it. I am at the tail end of AI adoption, so I don't expect to say anything particularly useful or novel. However, I am constantly complaining about the lack of boring AI posts, so it's only proper if I write one.
Problem Statement
At TigerBeetle, we are big on deterministic simulation testing. We even use it to track performance, to some degree. Still, it is crucial to verify performance numbers on a real cluster in its natural high-altitude habitat.
To do that, you need to procure six machines in a cloud, get your custom version of the `tigerbeetle` binary onto them, connect the cluster's replicas together, and hit them with load. It feels like, a quarter of a century into the third millennium, "run stuff on six machines" should be a problem just a notch harder than opening a terminal and typing `ls`, but I personally don't know how to solve it without wasting a day. So, I spent a day vibecoding my own square wheel.
The general shape of the problem is that I want to spin up a fleet of ephemeral machines with given specs on demand and run ad-hoc commands in a SIMD fashion on them. I don't want to manually type slightly different commands into a six-way terminal split, but I also do want to be able to ssh into a specific box and poke around.
Solution
My idea for the solution comes from these three sources:
- https://github.com/catern/rsyscall
- https://peter.bourgon.org/blog/2011/04/27/remote-development-from-mac-to-linux.html
- https://github.com/dsherret/dax
The big idea of `rsyscall` is that you can program a distributed system in direct style. When programming locally, you do things by issuing syscalls:

```
const fd = open("/etc/passwd");
```

This API works for doing things on remote machines too, if you specify which machine you want to run the syscall on:

```
const fd_local = open(.host, "/etc/passwd");
const fd_cloud = open(.{.addr = "1.2.3.4"}, "/etc/passwd");
```

Direct manipulation is the most natural API, and it pays to extend it over the network boundary.
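The same direct-style idea can be sketched in TypeScript; the API below is hypothetical (it is not rsyscall's actual interface), and only illustrates that local and remote calls share one shape:

```typescript
// Hypothetical direct-style API: the call site is identical for local and
// remote operations; only the `where` argument changes.
type Where = { kind: "local" } | { kind: "remote"; addr: string };

// Stand-in for issuing the syscall at a location; a real implementation
// would proxy remote calls over a connection to the target host.
function openAt(where: Where, path: string): string {
  const host = where.kind === "local" ? "localhost" : where.addr;
  return `fd(${host}:${path})`; // descriptor tagged with its location
}

const fdLocal = openAt({ kind: "local" }, "/etc/passwd");
const fdCloud = openAt({ kind: "remote", addr: "1.2.3.4" }, "/etc/passwd");
console.log(fdLocal, fdCloud);
```

The point is that the descriptor carries its location with it, so the rest of the program doesn't branch on "local vs remote" at every call site.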
Peter's post is an application of a similar idea to the narrow, mundane task of developing on Mac and testing on Linux. Peter suggests two scripts:
`remote-sync` synchronizes a local and a remote project. If you run `remote-sync` inside the `~/p/tb` folder, then `~/p/tb` materializes on the remote machine. `rsync` does the heavy lifting, and the wrapper script implements DWIM behaviors.
It is typically followed by
`remote-run some --command`, which runs the command on the remote machine in the matching directory, forwarding output back to you.
So, when I want to test local changes to
`tigerbeetle` on my Linux box, I have roughly the following shell session:

```shell
$ cd ~/p/tb/work
$ code .  # hack here
$ remote-sync
$ remote-run ./zig/zig build test
```

The killer feature is that shell completion works. I first type the command I want to run, taking advantage of the fact that local and remote commands are the same, paths and all, then hit
`^A` and prepend `remote-run` (in reality, I have an `rr` alias that combines sync & run).
The big thing here is not the commands per se, but the shift in the mental model. In a traditional ssh & vim setup, you have to juggle two machines with separate state, the local one and the remote one. With
`remote-sync`, the state is the same across the machines; you only choose whether you want to run commands here or there.
With just two machines, the difference feels academic. But if you want to run your tests across six machines, the ssh approach fails: you don't want to re-vim your changes to source files six times; you really do want to separate the place where the code is edited from the place(s) where the code is run. This is a general pattern: if you are not sure about a particular aspect of your design, try increasing the cardinality of the core abstraction from 1 to 2.
The third component, the `dax` library, is pretty mundane: just a JavaScript library for shell scripting. The notable aspects there are:
- JavaScript's template literals, which allow implementing command interpolation in a safe-by-construction way. When processing `` $`ls ${paths}` ``, a string is never materialized; it's arrays all the way to the `exec` syscall (more on the topic).
- JavaScript's async/await, which makes managing concurrent processes (local or remote) natural:

```typescript
await Promise.all([
  $`sleep 5`,
  $`remote-run sleep 5`,
]);
```
- Additionally, deno specifically valiantly strives to impose process-level structured concurrency, ensuring that no processes spawned by the script outlive the script itself unless explicitly marked `detached`: a sour spot of UNIX.
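The template-literal point above can be made concrete with a small sketch; this is not dax's implementation, just an illustration of how a tagged template keeps interpolated values as separate argv entries instead of splicing them into one shell string:

```typescript
// Sketch of safe-by-construction interpolation: the tag function receives
// the literal chunks and the interpolated values separately, so arguments
// stay an argv array and no shell string is ever built.
function sh(
  strings: TemplateStringsArray,
  ...values: (string | string[])[]
): string[] {
  const argv: string[] = [];
  strings.forEach((chunk, i) => {
    // literal chunks split on whitespace into individual argv words
    for (const word of chunk.split(/\s+/)) if (word) argv.push(word);
    if (i < values.length) {
      const v = values[i];
      // an interpolated value becomes one argv entry (or several, for arrays)
      argv.push(...(Array.isArray(v) ? v : [v]));
    }
  });
  return argv;
}

const paths = ["a file.txt", "b;rm -rf /"];
console.log(sh`ls -l ${paths}`);
// each path stays a single argument, spaces and semicolons included
```

Because the values never pass through a shell parser, spaces, semicolons, and other metacharacters in interpolated data cannot change the command's structure.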
Combining the three ideas, I now have a deno script, called `box`, that provides a multiplexed interface for running ad-hoc code on ad-hoc clusters.
A session looks like this:
```shell
# Switch to project with local modifications
$ cd ~/p/tb/work
$ git status --short
 M src/lsm/forest.zig

# Spin up 3 machines, print their IPs
$ box create 3
108.129.172.206,52.214.229.222,3.251.67.25
$ box list
0 108.129.172.206
1 52.214.229.222
2 3.251.67.25

# Move my code to remote machines
$ box sync 0,1,2

# Run pwd&ls on machine 0; now the code is there:
$ box run 0 pwd
/home/alpine/p/tb/work
$ box run 0 ls
CHANGELOG.md LICENSE README.md build.zig docs/ src/ zig/

# Setup dev env and run build on all three machines.
$ box run 0,1,2 ./zig/download.sh
Downloading Zig 0.14.1 release build...
Extracting zig-x86_64-linux-0.14.1.tar.xz...
Downloading completed (/home/alpine/p/tb/work/zig/zig)! Enjoy!

# NB: using local commit hash here (no git _there_).
$ box run 0,1,2 \
    ./zig/zig build -Drelease -Dgit-commit=$(git rev-parse HEAD)

# ?? is replaced by machine id
$ box run 0,1,2 \
    ./zig-out/bin/tigerbeetle format \
    --cluster=0 --replica=?? --replica-count=3 \
    0_??.tigerbeetle
2026-01-20 19:30:15.947Z info(io): opening "0_0.tigerbeetle"...

# Cleanup machines (they also shutdown themselves after 8 hours)
$ box destroy 0,1,2
```

I like this! I haven't used it in anger yet, but this is something I wanted for a long time, and now I have it.
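One mechanic worth calling out from the session: the `??` placeholder that becomes the machine id. A trivial sketch of what the substitution presumably does (a hypothetical helper, not the actual `box` code):

```typescript
// Per-machine argument substitution: before running a command on machine i,
// every "??" in the argument list is replaced with that machine's id.
function substitute(args: string[], machineId: number): string[] {
  // split/join avoids regex escaping concerns for the literal "??"
  return args.map((arg) => arg.split("??").join(String(machineId)));
}

console.log(substitute(["--replica=??", "0_??.tigerbeetle"], 2));
// each "??" becomes "2": --replica=2 and 0_2.tigerbeetle
```

Doing the substitution on the argv array (rather than on a command string) keeps the same safe-by-construction property as the template-literal interpolation.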
Structure
The problem with implementing the above is that I have zero practical experience with the modern cloud. I only created my AWS account today, and just looking at the console interface ignited the urge to re-read The Castle. Not my cup of pu-erh. But I had a hypothesis that AI should be good at wrangling baroque cloud APIs, and it mostly held.
I started with a couple of paragraphs of rough, super high-level description of what I want to get. Not a specification at all, just a general gesture towards unknown unknowns. Then I asked ChatGPT to expand those two paragraphs into a more or less complete spec to hand down to an agent for implementation.
This phase surfaced a bunch of unknowns for me. For example, I wasn't thinking at all about how I would need to identify machines; ChatGPT suggested using random hex numbers, and I realized that I need a 0,1,2 naming scheme to concisely specify batches of machines. While thinking about this, I realized that a sequential numbering scheme also has the advantage that I can't have two concurrent clusters running, which is a desirable property for my use case. If I forgot to shut down a machine, I'd rather get an error when trying to re-create a machine with the same name than silently avoid the clash. Similarly, it turns out that questions of permissions and network access rules are something to think about, as well as what region and what image I need.
With the spec document in hand, I turned over to Claude Code for actual implementation work. The first step was to further refine the spec, asking Claude if anything was unclear. There were a couple of interesting clarifications there.
First, the original ChatGPT spec didn't get what I meant with my "current directory mapping" idea: that I want to materialize a local
`~/p/tb/work` as a remote `~/p/tb/work`, even if the `~`s are different. ChatGPT generated an incorrect description and an incorrect example. I manually corrected the example, but wasn't able to write a concise and correct description. Claude fixed that, working from the example. I feel like I need to internalize this more: for the current crop of AI, examples seem to be far more valuable than rules.
Second, the spec included my desire to auto-shutdown machines once I no longer use them, just to make sure I don't forget to turn the lights off when leaving the room. Claude grilled me on what precisely I want there, and I asked it to DWIM the thing.
The spec ended up being 6KiB of English prose. The final implementation was 14KiB of TypeScript. I wasn't keeping the spec and the implementation perfectly in sync, but I think they ended up pretty close in the end. Which means that prose specifications are somewhat more compact than code, but not much more compact.
My next step was to try to just one-shot this. Ok, this is embarrassing, and I usually avoid swearing in this blog, but I just typoed that as "one-shit", and, well, that is one flavorful description I won't be able to improve upon. The result was just not good (more on why later), so I almost immediately decided to throw it away and start a more incremental approach.
In my previous vibe-post, I noticed that LLMs are good at closing the loop. A variation here is that LLMs are good at producing results, and not necessarily good code. I am pretty sure that, if I had let the agent iterate on the initial script and actually run it against AWS, I would have gotten something working. I didn't want to go that way for three reasons:
- Spawning VMs takes time, and that significantly reduces the throughput of agentic iteration.
- No way I let the agent run with a real AWS account, given that AWS doesn't have a fool-proof way to cap costs.
- I am fairly confident that this script will be a part of my workflow for at least several years, so I care more about long-term code maintenance than the immediate result.
And, as I said, the code didn't feel good, for these specific reasons:
- It wasn't the code that I would have written; it lacked my character, which made it hard for me to understand it at a glance.
- The code lacked any character whatsoever. It could have worked, it wasn't "naively bad" like the first code you write when you are learning programming, but there wasn't anything good there.
- I never know what the code should be up-front. I don't design solutions, I discover them in the process of refactoring. Some of my best work was spending a quiet weekend rewriting large subsystems implemented before me, because, with an implementation at hand, it was possible for me to see the actual, beautiful core of what needs to be done. With a slop-dump, I just don't get to even see what could be wrong.
- In particular, while you are working the code (as in âwrought ironâ), you often go back to requirements and change them. Remember that ambiguity of my request to âshut down idle clusterâ? Claude tried to DWIM and created some horrific mess of bash scripts, timestamp files, PAM policy and systemd units. But the right answer there was âlets maybe not have that feature?â (in contrast, simply shutting the machine down after 8 hours is a one-liner).
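To make the one-liner claim concrete, here is a hedged sketch (not the script's actual code) of what "shut down after 8 hours" could amount to: a single `shutdown` line in the instance's user-data, base64-encoded the way the `run-instances` call expects it.

```typescript
// Sketch only: the whole "idle shutdown" feature replaced by one line
// of cloud-init user-data that halts the box 8 hours (480 min) after boot.
const userData = [
  "#!/bin/sh",
  "shutdown -h +480",
].join("\n");

// AWS expects user-data base64-encoded.
const userDataBase64 = btoa(userData);
console.log(userDataBase64);
```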
The incremental approach worked much better; Claude is good at filling in the blanks. The very first thing I did for `box-v2` was manually typing in:

```typescript
type CLI =
  | CLICreate
  | CLIDestroy
  | CLIList
  | CLISync

type BoxList = string[];

type CLICreate = { tag: "create"; count: number };
type CLIDestroy = { tag: "destroy"; boxes: BoxList };
type CLIList = { tag: "list" };
type CLISync = { tag: "sync"; boxes: BoxList; };

function fatal(message: string): never {
  console.error(message);
  Deno.exit(1);
}

function CLIParse(args: string[]): CLI {
}
```

Then I asked Claude to complete the `CLIParse` function, and I was happy with the result.

Note (Show, Don't Tell): I am not asking Claude to avoid throwing an exception and fail fast instead. I just give the `fatal` function, and it code-completes the rest.

I can't say that the code inside `CLIParse` is top-notch. I'd probably have written something more spartan. But the important part is that, at this level, I don't care. The abstraction for parsing CLI arguments feels right to me, and the details I can always fix later. This is how this overall vibe-coding session transpired: I was providing structure, Claude was painting by the numbers.

In particular, with that CLI parsing structure in place, Claude had little problem adding new subcommands and new arguments in a satisfactory way. The only snag was that, when I asked to add an optional path to `sync`, it went with `string | null`, while I strongly prefer `string | undefined`. Obviously, it's better to pick your null in JavaScript and stick with it. The fact that `undefined` is unavoidable predetermines the winner. Given that the argument was added as an incremental small change, course-correcting was trivial.

The null vs undefined issue perhaps illustrates my complaint about the code lacking character. `| null` is the default non-choice. `| undefined` is an insight, which I personally learned from the VS Code LSP implementation.

The hand-written skeleton/vibe-coded guts worked not only for the CLI. I wrote
```typescript
async function main() {
  const cli = CLIParse(Deno.args);
  if (cli.tag === "create") return await mainCreate(cli.count);
  if (cli.tag === "destroy") return await mainDestroy(cli.boxes);
  ...
}

async function mainDestroy(boxes: string[]) {
  for (const box of boxes) {
    await instanceDestroy(box);
  }
}

async function instanceDestroy(id: string) {
}
```

and then asked Claude to write the body of a particular function according to the SPEC.md.
instanceXXXis the AWS-level operation on a single box, andmainXXXis the CLI-level control flow that deals with looping and parallelism. When I asked Claude to implementbox run, without myself doing themain/instancesplit, Claude failed to notice it and needed a course correction.Implementation
However , Claude was massively successful with the actual logic. It would have taken me hours to acquire specific, non-reusable knowledge to write:
```typescript
// Create spot instance
const instanceMarketOptions = JSON.stringify({
  MarketType: "spot",
  SpotOptions: { InstanceInterruptionBehavior: "terminate" },
});
const tagSpecifications = JSON.stringify([
  { ResourceType: "instance", Tags: [{ Key: moniker, Value: id }] },
]);
const result = await $`aws ec2 run-instances \
  --image-id ${image} \
  --instance-type ${instanceType} \
  --key-name ${moniker} \
  --security-groups ${moniker} \
  --instance-market-options ${instanceMarketOptions} \
  --user-data ${userDataBase64} \
  --tag-specifications ${tagSpecifications} \
  --output json`.json();
const instanceId = result.Instances[0].InstanceId;

// Wait for instance to be running
await $`aws ec2 wait instance-status-ok --instance-ids ${instanceId}`;
```
Then there's synthesis: with several instance commands implemented, I noticed that many started with querying AWS to resolve a symbolic machine name, like "1", to the AWS name/IP. At that point I realized that resolving symbolic names is a fundamental part of the problem, and that it should only happen once, which resulted in the following refactored shape of the code:
```typescript
async function main() {
  const cli = CLIParse(Deno.args);
  const instances = await instanceMap();
  if (cli.tag === "create") return await mainCreate(instances, cli.count);
  if (cli.tag === "destroy") return await mainDestroy(instances, cli.boxes);
  ...
}
```

Claude was OK with extracting the logic, but messed up the overall code layout, so the final code motions were on me. "Context" arguments go first, not last, and a common prefix is more valuable than a common suffix because of visual alignment.

The original "one-shotted" implementation also didn't do up-front querying. This is an example of a shape of a problem I only discover when working with the code closely.
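To make that shape concrete, here is a sketch of what such once-only resolution might look like. The `Instance` fields and the stubbed query are my assumptions, not the script's actual implementation:

```typescript
// Hypothetical shape: query AWS once, then every subcommand works off
// the resulting map from symbolic names ("1") to AWS details.
type Instance = { instanceId: string; ip: string };
type InstanceMap = Map<string, Instance>;

async function instanceMap(): Promise<InstanceMap> {
  // The real version would run one `aws ec2 describe-instances`
  // filtered by tag; stubbed here for illustration.
  return new Map([
    ["1", { instanceId: "i-0abc", ip: "192.0.2.10" }],
    ["2", { instanceId: "i-0def", ip: "192.0.2.11" }],
  ]);
}

function resolve(instances: InstanceMap, box: string): Instance {
  const found = instances.get(box);
  if (found === undefined) throw new Error(`no such box: ${box}`);
  return found;
}
```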
Of course, the script didn't work perfectly the first time, and we needed quite a few iterations on the real machines, both to fix coding bugs as well as gaps in the spec. That was an interesting experience of speed-running rookie mistakes. Claude made naive bugs, but was also good at fixing them.
For example, when I first tried to `box ssh` after `box create`, I got an error. Pasting it into Claude immediately showed the problem. Originally, the code was doing `aws ec2 wait instance-running` and not `aws ec2 wait instance-status-ok`. The former checks if the instance is logically created, the latter waits until the OS is booted. It makes sense that these two exist, and the difference is clear (and it's also clear that OS booted != SSH daemon started). Claude's value here is in providing specific names for the concepts I already knew to exist.
Another fun one was about the disk. I noticed that, while the instance had an SSD, it wasn't actually used. I asked Claude to mount it as home, but that didn't work. Claude immediately asked me to run `$ box run 0 cat /var/some/unintuitive/long/path.log`, and that log immediately showed the problem. This is remarkable! 50% of my typical Linux debugging day is wasted not knowing that a useful log exists, and the other 50% is spent searching for the log I know should exist somewhere.
/home, we were overwriting ssh keys configured prior.There were couple of more iterations like that. Rookie mistakes were made, but they were debugged and fixed much faster than my personal knowledge allows (and again, I feel that is trivia knowledge, rather than deep reusable knowledge, so I am happy to delegate it!).
It worked satisfactorily in the end, and, what's more, I am happy to maintain the code, at least to the extent that I personally need it. It's kinda hard to measure the productivity boost here, but, given just the sheer number of CLI flags required to make this work, I am pretty confident that time was saved, even factoring in the writing of the present article!
Coda
I've recently read The Art of Doing Science and Engineering by Hamming (of distance and code), and one story stuck with me:
> A psychologist friend at Bell Telephone Laboratories once built a machine with about 12 switches and a red and a green light. You set the switches, pushed a button, and either you got a red or a green light. After the first person tried it 20 times, they wrote a theory of how to make the green light come on. The theory was given to the next victim, and they had their 20 tries and wrote their theory, and so on, endlessly. The stated purpose of the test was to study how theories evolved.
>
> But my friend, being the kind of person he was, had connected the lights to a random source! One day he observed to me that no person in all the tests (and they were all high-class Bell Telephone Laboratories scientists) ever said there was no message. I promptly observed to him that not one of them was either a statistician or an information theorist, the two classes of people who are intimately familiar with randomness. A check revealed I was right!