- About KeePassXC's Code Quality Control – KeePassXC
- How to build a remarkable command palette
- Leaderboard - compar:IA, the AI chatbot arena
- Who needs Graphviz when you can build it yourself? | SpiderMonkey JavaScript/WebAssembly Engine
- Automerge
- November 28, 2025
-
🔗 r/LocalLLaMA Apparently Asus is working with Nvidia on a 784GB "Coherent" Memory desktop PC with 20 PFLOPS AI Performance rss
Somehow the announcement went under the radar, but back in May, alongside the Ascent GX10, Asus announced the ExpertCenter Pro ET900N G3 with GB300 Blackwell. They don't really say what "Coherent" memory is, but my guess is it's another term for unified memory, like Apple and AMD use.
The announcement and the specs are very dry on details, but given the GB300, we might get very decent memory bandwidth without it looking like a hideous Frankenstein monster.
This might be r/LocalLLaMA's wet dream. If they manage to price it well and fix the memory bandwidth (which plagued the Spark), they have my money.
EDIT: As many pointed out in the comments, it's based on the Nvidia DGX Station, announced back in March, which is rumored to be 80k. ServeTheHome had a nice article about it back in March.
The official specs:
- 496GB LPDDR5X CPU memory at 396GB/s (Micron SOCAMM, so it seems it will be modular, not soldered!)
- 288GB HBM3e GPU memory at 8TB/s.
submitted by /u/waiting_for_zban
-
-
- November 27, 2025
-
🔗 maxgoedjen/secretive 3.0.4 release
Bug fixes and updated translations for 3.0.
Fixes
- Fix bug where sometimes agent would hang and not respond. (#765)
- Fix bug where ruby net/ssh would not work properly (#747)
New in 3.0:
A huge update! UI Refresh for macOS 26 Tahoe and Liquid Glass, Post-Quantum Key Support, Enhanced Security, and lots more!
Features
- Secretive's UI has been updated for macOS 26 Tahoe and Liquid Glass, and has just gotten a facelift overall (#606, #609, #612, #657, #697, #701, #703, #704, #707, #714)
- Most key signing operations are now performed using CryptoKit (#628)
- On macOS Tahoe, MLDSA-65 and MLDSA-87 keys are now supported (#631)
- Secretive is now built with "Enhanced Security" configuration enabled. (#618)
- SocketController has been rewritten for concurrency support and overall performance (#634)
- Data payloads from agent are now parsed in isolated XPC service for improved security (#675, #681)
- Update checks now happen in isolated XPC service for improved security (#675, #681).
- New "Integrations" window for easier configuration (#657)
- Keys can now have an optional "key attribution," which is commonly an email address (#628)
- Add "Reveal in Finder" button to public key section (#669)
- Key name is now returned in SSH agent identities response (#647)
- Add help notice for post-reboot when keys sometimes fail to load (#671)
- Secretive is now built with Swift 6 and concurrency checks enabled (#578, #617)
- Secretive is now built with "String Memory Checking" enabled (#683)
- More complete SSH agent protocol parsing (#673)
- New localizations: Catalan (#558), Korean (#537), Japanese (#546), Polish (#585), Russian (#553)
- Localized strings are now type-safe (#607)
- GitHub attestation is enabled (#614, #616, #666, #667)
Special Thanks
- A big special thank you to @ultrasecreth, @orazioedoardo, @nliechti, @StephenKing, @multipleofzero, @gmessir, @dmulloy2, @HuaDeity, and @KizzyCode for their help reporting, gathering logs, and testing a fix for the agent hang.
Minimum macOS Version
14.0.0
Build
https://github.com/maxgoedjen/secretive/actions/runs/19746755792
Attestation
https://github.com/maxgoedjen/secretive/attestations/14106317
Full Changelog
-
🔗 trailofbits/algo AlgoVPN 2.0.1 release
A maintenance release focused on Ansible 12 compatibility, cloud provider fixes, and dependency updates.
🔧 Bug Fixes
Ansible 12 Compatibility
- Fixed Ansible 12 boolean type checking (#14834) - Resolved deployments breaking due to stricter boolean handling
- Fixed Ansible 12 double-templating issues (#14836) - Corrected Jinja2 spacing issues causing template failures
- Fixed Ansible 12 compatibility issues (#14840) - General compatibility fixes for the Ansible 12.x series
- Fixed AWS EC2 and Lightsail deployment failures (#14861) - Resolved Ansible 12-specific API issues
- Fixed GCE deployment error (#14860) - Corrected JSON parsing issues with Ansible 12
Cloud Provider Fixes
- Fixed Vultr deployment issues (#14852, #14853) - Resolved regions string conversion bug and startup script JSON serialization
- Fixed Scaleway deployment (#14848) - Replaced broken organization_info module
- Fixed DigitalOcean API error handling (#14830) - Improved debugging for API errors
- Fixed update-users not working (#14859) - Resolved user management failures
Other Fixes
- Added missing ansible.utils collection (#14880) - Fixed 'No filter named ipmath' errors for ansible-core users
- Removed unused dependencies - Cleaned up pyopenssl and boto dependencies
⚠️ Breaking Changes
- Removed Exoscale support (#14841) - CloudStack API has been deprecated by Exoscale
🚀 Infrastructure Updates
- Updated Hetzner server type (#14874) - Changed from deprecated cpx11 to cpx22
- Added pre-commit hooks (#14831) - Comprehensive code quality checks
- Switched Dependabot to uv (#14862) - Improved dependency management
- Added Claude Code GitHub Actions (#14873) - Automated issue triage and PR reviews
📦 Dependency Updates
- ansible 11.9.0 → 12.2.0
- boto3 1.40.3 → 1.41.5
- azure-identity 1.23.1 → 1.25.1
- azure-mgmt-compute 35.0.0 → 37.1.0
- hcloud 2.5.4 → 2.11.1
- google-auth 2.40.3 → 2.43.0
- linode-api4 5.33.1 → 5.38.0
- openstacksdk 4.6.0 → 4.8.0
- pyyaml 6.0.2 → 6.0.3
- requests 2.32.4 → 2.32.5
📝 Full Changelog
-
🔗 r/reverseengineering I built SentinelNav, a binary file visualization tool to help me understand file structures (and it became way more powerful than I expected) rss
submitted by /u/FiddleSmol
-
🔗 r/wiesbaden Commuting: Taunusstein - Rüsselsheim rss
Good day,
does anyone here have experience with commuting daily from Taunusstein to Rüsselsheim via the Bundesstraße 54? Should I expect a lot of congestion in the morning traffic?
submitted by /u/CarefulRabbit684
-
🔗 r/wiesbaden Gym in Wiesbaden? rss
What's a good gym in Wiesbaden that has:
- Squat racks
- Bench presses
- A deadlift platform
and allows me to get a day pass? I will travel to Wiesbaden for work and want to keep training.
submitted by /u/earl-the-grey
-
🔗 r/LocalLLaMA Yes it is possible to uncensor gpt-oss-20b - ArliAI/gpt-oss-20b-Derestricted rss
Original discussion of the initial Arli AI GLM-4.5-Air-Derestricted model, ablated using u/grimjim's new ablation method, is here: The most objectively correct way to abliterate so far - ArliAI/GLM-4.5-Air-Derestricted. (Note: "Derestricted" is the name Arli AI gives to models created with this method; the method itself is called Norm-Preserving Biprojected Abliteration by u/grimjim.)
Hey everyone, Owen here from Arli AI again. In my previous post, I got a lot of requests to attempt this derestricting on OpenAI's gpt-oss models, since they are intelligent but infamous for being very... restricted. I thought it would be a big and interesting challenge, so that was the next model I decided to derestrict. The 120b version is more unwieldy to transfer around and load in and out of VRAM/RAM while experimenting, so I started with the 20b version first, but I will get to the 120b next, which should be super interesting.
As for the 20b model here, it seems to have worked! The model can now respond to questions that OpenAI never would have approved of answering (lol!). It also seems to have cut down the wasteful looping in its reasoning where it decides whether it can or cannot answer a question based on a non-existent policy, although this isn't completely removed yet. I suspect a more customized harmful/harmless dataset to specifically target this behavior might help, so that is what I need to work on. Otherwise I think this is an outright improved model over the original: it is much more useful now, where the original would flag a lot of false positives and be absolutely useless in certain situations just because of "safety".
In order to modify the weights of the model, I had to start from a BF16-converted version, since the model, as you all might know, was released in MXFP4 format; attempting the ablation on the BF16-converted model seems to work well. I think this proves that this new method of essentially "direction-based" abliteration is really flexible and probably works well for any model. As for quants, I'm not one to worry about making GGUFs myself, because I'm sure the GGUF makers will get to it fast and do a better job than I can. Also, there are no FP8 or INT8 quants for now, because the model is pretty small and those who run FP8 or INT8 quants usually have a substantial GPU setup anyway. Try it out and have fun! This time it's really for r/LocalLLaMA, because we don't even run this model on our Arli AI API service.
submitted by /u/Arli_AI
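For readers curious what "direction-based" abliteration means mechanically: the common formulation finds a "refusal direction" in activation space and projects it out of the model's weight matrices. The numpy sketch below is my illustration of plain directional ablation only, not Arli AI's code; the norm-preserving biprojected variant named above additionally restores the original weight norms after the projection.

```python
import numpy as np

def ablate_direction(W, d):
    """Project the refusal direction d out of weight matrix W.

    W: (d_out, d_in) weight matrix writing into the residual stream.
    d: (d_out,) refusal direction (need not be normalized).
    After ablation, W can no longer produce any component along d.
    """
    d = d / np.linalg.norm(d)          # unit refusal direction
    return W - np.outer(d, d @ W)      # subtract the rank-1 component along d
```

Applied to every matrix that writes into the residual stream, this makes the "refuse" feature unreachable while leaving the orthogonal complement of the weights untouched.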
-
🔗 r/wiesbaden Concert tickets for tonight to give away - Electric Callboy in Frankfurt rss
Hello,
giving away on short notice due to illness: 2 tickets for Electric Callboy tonight in Frankfurt. To be picked up from me in Wiesbaden, Westend district.
submitted by /u/Rote_Gazelle
-
🔗 r/LocalLLaMA deepseek-ai/DeepSeek-Math-V2 · Hugging Face rss
submitted by /u/Dark_Fire_12
-
🔗 r/reverseengineering Released Zero the Hero (0tH) – a Rust-based Mach-O analysis tool for macOS rss
submitted by /u/gabriele70
-
🔗 r/LocalLLaMA Anthropic just showed how to make AI agents work on long projects without falling apart rss
Most AI agents forget everything between sessions, which means they completely lose track of long tasks. Anthropic's new article shows a surprisingly practical fix. Instead of giving an agent one giant goal like "build a web app," they wrap it in a simple harness that forces structure, memory, and accountability.
First, an initializer agent sets up the project. It creates a full feature list, marks everything as failing, initializes git, and writes a progress log. Then each later session uses a coding agent that reads the log and git history, picks exactly one unfinished feature, implements it, tests it, commits the changes, and updates the log. No guessing, no drift, no forgetting.
The result is an AI that can stop, restart, and keep improving a project across many independent runs. It behaves more like a disciplined engineer than a clever autocomplete. It also shows that the real unlock for long-running agents may not be smarter models, but better scaffolding.
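The initializer/session split can be sketched in a few lines. This is a hedged illustration of the pattern, not Anthropic's actual harness: the file names (`features.json`, `progress.log`) and the `implement` callback standing in for the model call are hypothetical, and the real version also commits every step to git.

```python
import json
from pathlib import Path

# Hypothetical state files; in the real harness, state also lives in git history.
FEATURES = Path("features.json")
LOG = Path("progress.log")

def initialize(feature_list):
    """Initializer agent: record every feature as failing and start the log."""
    FEATURES.write_text(json.dumps({f: "failing" for f in feature_list}))
    LOG.write_text("project initialized\n")

def run_session(implement):
    """One independent session: read the log, pick exactly one unfinished
    feature, implement and test it, then update the state and the log.
    Returns the feature worked on, or None when everything passes."""
    features = json.loads(FEATURES.read_text())
    todo = [name for name, state in features.items() if state == "failing"]
    if not todo:
        return None  # project complete
    target = todo[0]
    implement(target, LOG.read_text())  # placeholder for the model call
    features[target] = "passing"
    FEATURES.write_text(json.dumps(features))
    with LOG.open("a") as f:
        f.write(f"implemented and tested: {target}\n")
    return target
```

Each call to `run_session` can happen in a completely fresh model context; continuity comes from the files on disk, not from the model's memory.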
Read the article here:
https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents
submitted by /u/purealgo
-
🔗 r/LocalLLaMA Where did the Epstein emails dataset go rss
Removed from Hugging Face (link)
Removed from GitHub (link)
Reddit account deleted (last post)
submitted by /u/egomarker
-
🔗 Console.dev newsletter Requestly rss
Description: API client + interceptor.
What we like: Issue requests to APIs with support for environments, variables, and API collections. Intercept requests and replace, modify, redirect. Create API mocks. Local first or set up collaboration over Git or cloud drives. Import from common alternatives e.g. Postman.
What we dislike: Desktop app built in Electron.
-
🔗 Console.dev newsletter Prisma 7 rss
Description: TypeScript ORM.
What we like: Faster performance with a smaller ESM-first bundle. Drop in update from previous version. Generated client no longer buried in node_modules. Now supports mapped enums. New Prisma Studio UI visualizes database inspection.
What we dislike: Using an ORM is probably a mistake for all but the smallest side project.
-
🔗 Ampcode News Opus 4.5: Better, Faster, Often Cheaper rss
Claude Opus 4.5 is the new main model in Amp's smart mode, two days after we shipped it for you to try out.
Only a week ago, we changed Amp's main model to Gemini 3 – a historic change, we said. It was the first time since Amp's creation that we switched away from Claude. Now we're switching again and you may ask: why? Why follow a historic change with another one, in a historically short amount of time?
We love Gemini 3, but, once rolled out, its impressive highs came with lows. What we internally experienced as rough edges turned into some very frustrating behaviors for our users. Frustrating and costly.
Then, not even a week later, Opus 4.5 comes out. Opus 4.5, on the other hand, seems as capable as Gemini 3. Its highs might not be as brilliant as Gemini 3's, but it also seems to do away with the lows. It seems more polished. It's faster, even.
We're also pleasantly surprised by Opus's cost-efficiency. Yes, Opus tokens are more expensive, but it needs fewer tokens to do the job, makes fewer token-wasting mistakes, and needs less human intervention (which results in a higher cache hit rate, which means lower costs and latency).
| | Sonnet 4.5 | Gemini 3 Pro | Opus 4.5 |
|---|---|---|---|
| Internal Evals | 37.1% | 53.7% | 57.3% |
| Avg. Thread Cost | $2.75 | $2.04 | $2.05 |
| 0-200k Tokens Only[^1] | $1.48 | $1.19 | $2.05 |
| Off-the-Rails Cost[^2] | 8.4% | 17.8% | 2.4% |
| Speed (p50, preliminary) | 2.4 min | 4.3 min | 3.5 min |

In words:
- If you use long threads (200k+ tokens): Opus will be a lot cheaper. It's currently limited to 200k tokens of context, which forces you to use small threads - our strong recommendation anyway, for both quality and cost. If you need to temporarily keep using Sonnet's long context, use the `"amp.model.sonnet": true` setting or `--use-sonnet` CLI flag.
- If Sonnet or Gemini frequently struggles for you or has hit a capability ceiling: Opus will be far more capable and accurate, and often cheaper too (by avoiding wasted tokens).
- If you loved Gemini 3 Pro: Opus will be ~40% more expensive but faster and more tolerant of ambiguous prompts. (This describes most of the Amp team, and we still find Opus worth it.)
- If you were perfectly satisfied with Sonnet 4.5: Opus will be ~35% more expensive for the same task. The real win comes from getting outside your comfort zone and giving it harder tasks where Sonnet would struggle.
Staying on the frontier means sometimes shipping despite issues – and sometimes shipping something better a week later.
-
- November 26, 2025
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2025-11-26 rss
New Releases:
- ghidra-chinese 20251126 - 19689512396
- ida-hcli v0.14.2
- ida-hcli v0.14.1
- ida-settings v3.3.0
- ida-settings v3.2.3
- plugin-ida v3.2.0
- plugin-ida v3.2.0-rc1
Activity:
- d810-ng
- 1a386c25: fix(qtshim): fixes #12 - for PyQt5, QtShortCut is in QtWidgets and Qt…
- distro
- 1e0a27bd: Updated dex2jar
- dotfiles
- ghidra
- 14ead1aa: Merge remote-tracking branch 'origin/GP-6173_d-millar_tree_logic_bug'
- 307f0827: GP-6173: fix for searchForSuitable
- aff51b9c: Merge remote-tracking branch
- 895ff8d6: Merge remote-tracking branch 'origin/GP-6163_d-millar_emu_ss'
- 9dab7774: Merge remote-tracking branch 'origin/GP-1-dragonmacher-save-state-fix…
- 614fd2c1: Merge remote-tracking branch 'origin/Ghidra_12.0'
- 9499199f: Merge remote-tracking branch 'origin/GP-6120_emteere_PPC64ThunkPatter…
- ghidra-chinese
- 3d69a475: Merge pull request #72 from TC999/sync
- hrtng
- d816763d: add logo.jpg
- ida-hcli
- ida-settings
- ida-sigmaker
- 5d645fa3: #18: Add cancelable search functionality (#20)
- plugin-ida
- ed65f5b4: build: release version 3.2.0
- 2f51e612: Merge pull request #85 from RevEngAI/fix-PLU-200-incorrect-rebase-on-…
- f13dad20: Merge pull request #84 from RevEngAI/fix-PLU-201-database-has-unrecov…
- a9bf4dc6: Merge pull request #86 from RevEngAI/feat-latest-sdk
- ee82cea8: feat: update to the latest SDK
- 2988d422: fix(PLU-200): fix cause of rename warning when name has a reserved pr…
- a32984d8: fix(PLU-200): now passing a delta when we rebase program
- 4915d67c: fix(PLU-201): removed call to idaapi.auto_wait() in PLUGIN_ENTRY
- tomsons_RE_scripts
-
🔗 r/wiesbaden Taunusstein aircraft noise rss
Good day,
has anyone here lived in Taunusstein and can give an assessment regarding aircraft noise?
submitted by /u/CarefulRabbit684
-
🔗 r/LocalLLaMA Qwen3 Next almost ready in llama.cpp rss
After over two months of work, it's now approved and looks like it will be merged soon. Congratulations to u/ilintar for completing a big task!
GGUFs:
https://huggingface.co/lefromage/Qwen3-Next-80B-A3B-Instruct-GGUF
https://huggingface.co/ilintar/Qwen3-Next-80B-A3B-Instruct-GGUF
For speeeeeed (on NVIDIA) you also need CUDA-optimized ops:
https://github.com/ggml-org/llama.cpp/pull/17457 - SOLVE_TRI
https://github.com/ggml-org/llama.cpp/pull/16623 - CUMSUM and TRI
submitted by /u/jacek2023
-
🔗 r/reverseengineering Learn to Crawl Sites with Nuclei rss
submitted by /u/SUmidcyber
-
🔗 News Minimalist 🐢 Scientists may have found dark matter + 11 more stories rss
Hi! There's a quick message for you after the news.
In the last 5 days ChatGPT read 158,847 top news stories. After removing previously covered events, there are 11 articles with a significance score over 5.5.

[5.8] Scientists may have detected dark matter for the first time – space.com (+19)
Scientists using NASA's Fermi telescope may have detected dark matter for the first time, observing a unique gamma-ray signature originating from the center of the Milky Way galaxy.
The gamma-ray signal's shape and energy match predictions for annihilating dark matter particles known as WIMPs. Researchers say no other known astronomical phenomena can easily explain the observation.
Led by a University of Tokyo team, the research was published Tuesday. The scientific community will require more data to confirm if this is the first direct observation of dark matter.
[6.2] Ukraine and Russia near peace deal as Zelenskyy prepares for potential D.C. visit – cbsnews.com (+708)
Ukraine has agreed to a U.S.-brokered peace deal to end its war with Russia, with officials stating that only minor details need to be finalized.
Ukrainian President Volodymyr Zelenskyy may visit Washington this month to complete the agreement. The news comes as U.S. Army Secretary Dan Driscoll meets with Russian officials in Abu Dhabi to advance negotiations brokered by the Trump administration.
The deal follows talks in Geneva based on a White House proposal. Past drafts reportedly included Ukrainian concessions on territory and its ambitions to join NATO in exchange for security guarantees.
[5.5] GLP-1 drug withdrawal causes weight regain and health decline in most users, study finds – arstechnica.com (+9)
A new study suggests that stopping GLP-1 drugs causes most users to regain weight and lose cardiovascular health benefits, indicating the medication may require long-term use.
In a clinical trial published this week, 82% of participants who stopped taking tirzepatide regained significant weight. They also saw reversals in blood pressure, cholesterol, and blood sugar control improvements.
Experts suggest rebranding these as "weight management" drugs for chronic disease. More research is needed on potential strategies for safely tapering off the medication, as the study involved an abrupt withdrawal.
Highly covered news with significance over 5.5
[6.6] Trump signs executive order for AI project called Genesis Mission to boost scientific discoveries – apnews.com (+26)
[5.9] Canada, India, and Australia launch technology partnership – ctvnews.ca (+106)
[6.4] Canadian-U.S. study makes breakthrough in aggressive brain tumour treatment – ctvnews.ca (+4)
[5.9] 3-year-old boy gets world-first gene therapy to treat life-threatening Hunter syndrome – manchestereveningnews.co.uk (+8)
[5.8] RSF declares unilateral ceasefire in Sudan, faces war crime accusations – yle.fi (Swedish) (+40)
[5.7] President Trump begins designating Muslim Brotherhood branches in Lebanon, Jordan, and Egypt as terrorist organizations – letemps.ch (French) (+33)
[5.7] Obesity drug semaglutide fails to slow Alzheimer's – bbc.co.uk (+26)
[5.7] China launches global mining initiative with allied nations – livemint.com (+3)
You may have noticed that the gap between these emails got longer. Today we had the longest break in almost a year - 5 days between emails.
As a quick reminder - I only send an email once there's enough significant news to warrant an update. If nothing significant happens - nothing gets sent.
I haven't made any changes to the selection process, and the scoring system works as before.
But it seems we've caught an interesting pattern - by the end of the year the world slows down a bit. This happened last year too - in December we had the longest break between emails ever - 10 days.
So enjoy the quiet. Everything is working as before, and you arenât missing anything - there are just fewer things to miss.
As always, thanks for reading!
– Vadim
You can track significant news in your country with premium.
-
🔗 r/LocalLLaMA Open-source just beat humans at ARC-AGI (71.6%) for $0.02 per task - full code available rss
German researchers achieved 71.6% on ARC-AGI (humans average 70%) using three clever techniques that run on a regular GPU for 2 cents per task. OpenAI's o3 gets 87% but costs $17 per task - that's 850x more expensive.
The breakthrough uses:
- Product of Experts (viewing puzzles from 16 angles)
- Test-Time Training (model adapts to each puzzle)
- Depth-First Search (efficient solution exploration)
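A toy sketch of the product-of-experts idea: score each candidate answer grid under several transformed views and multiply the per-view probabilities (sum the log-probs), so only candidates that every "expert" view agrees on survive. This is my illustration of the principle, not the paper's code; the `logprob` callback stands in for the model, and the sketch uses the 8 dihedral grid transforms rather than the paper's 16 views.

```python
import numpy as np

def views(grid):
    """Yield the 8 dihedral transforms (rotations + mirrored rotations) of a grid."""
    g = np.asarray(grid)
    for k in range(4):
        r = np.rot90(g, k)
        yield r
        yield np.fliplr(r)

def product_of_experts(candidates, logprob):
    """Pick the candidate with the highest total log-probability across views.

    Summing log-probs over views is a product of the per-view probabilities,
    so a candidate that any single view finds implausible is punished hard.
    """
    return max(candidates, key=lambda c: sum(logprob(v) for v in views(c)))
```

In the actual system the candidates come from depth-first search over the model's sampling tree, and the model is first fine-tuned on the test puzzle's demonstration pairs (test-time training) before scoring.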
I made a technical breakdown video explaining exactly how it works and why this matters for democratizing AI: https://youtu.be/HEIklawkoMk
The code is fully open-source: https://github.com/da-fr/Product-of-Experts-ARC-Paper
Paper: https://arxiv.org/abs/2505.07859
What's remarkable is they used Qwen-32B (not even the largest model) and achieved this with smart engineering rather than raw compute. You can literally run this tonight on your own machine.
Has anyone here tried implementing this yet? I'm curious what other problems these techniques could solve.
submitted by /u/Proof-Possibility-54
-
🔗 HexRaysSA/plugin-repository commits sync repo: ~1 changed rss
sync repo: ~1 changed
Changes:
- [ida-settings-editor](https://github.com/williballenthin/ida-settings):
  - 1.0.2: archive contents changed, download URL changed
-
đ r/reverseengineering Quantum Silicon Core Loader v5.6 Update rss
submitted by /u/ComputerGlobal1249
-
🔗 Anton Zhiyanov Go proposal: Goroutine metrics rss
Part of the Accepted! series, explaining the upcoming Go changes in simple terms.
Export goroutine-related metrics from the Go runtime.
Ver. 1.26 • Stdlib • Medium impact
Summary
New metrics in the runtime/metrics package give better insight into goroutine scheduling:
- Total number of goroutines since the program started.
- Number of goroutines in each state.
- Number of active threads.
Motivation
Go's runtime/metrics package already provides a lot of runtime stats, but it doesn't include metrics for goroutine states or thread counts.
Per-state goroutine metrics can be linked to common production issues. An increasing waiting count can show a lock contention problem. A high not-in-go count means goroutines are stuck in syscalls or cgo. A growing runnable backlog suggests the CPUs can't keep up with demand.
Observability systems can track these counters to spot regressions, find scheduler bottlenecks, and send alerts when goroutine behavior changes from the usual patterns. Developers can use them to catch problems early without needing full traces.
Description
Add the following metrics to the runtime/metrics package:

- /sched/goroutines-created:goroutines – Count of goroutines created since program start.
- /sched/goroutines/not-in-go:goroutines – Approximate count of goroutines running or blocked in a system call or cgo call.
- /sched/goroutines/runnable:goroutines – Approximate count of goroutines ready to execute, but not executing.
- /sched/goroutines/running:goroutines – Approximate count of goroutines executing. Always less than or equal to /sched/gomaxprocs:threads.
- /sched/goroutines/waiting:goroutines – Approximate count of goroutines waiting on a resource (I/O or sync primitives).
- /sched/threads/total:threads – The current count of live threads that are owned by the Go runtime.

The per-state numbers are not guaranteed to add up to the live goroutine count (/sched/goroutines:goroutines, available since Go 1.16). All metrics use uint64 counters.
Example
Start some goroutines and print the metrics after 100 ms of activity:
```go
func main() {
	go work() // omitted for brevity
	time.Sleep(100 * time.Millisecond)

	fmt.Println("Goroutine metrics:")
	printMetric("/sched/goroutines-created:goroutines", "Created")
	printMetric("/sched/goroutines:goroutines", "Live")
	printMetric("/sched/goroutines/not-in-go:goroutines", "Syscall/CGO")
	printMetric("/sched/goroutines/runnable:goroutines", "Runnable")
	printMetric("/sched/goroutines/running:goroutines", "Running")
	printMetric("/sched/goroutines/waiting:goroutines", "Waiting")

	fmt.Println("Thread metrics:")
	printMetric("/sched/gomaxprocs:threads", "Max")
	printMetric("/sched/threads/total:threads", "Live")
}

func printMetric(name string, descr string) {
	sample := []metrics.Sample{{Name: name}}
	metrics.Read(sample)
	// Assuming a uint64 value; don't do this in production.
	// Instead, check sample[0].Value.Kind and handle accordingly.
	fmt.Printf("  %s: %v\n", descr, sample[0].Value.Uint64())
}
```

Output:

```
Goroutine metrics:
  Created: 52
  Live: 12
  Syscall/CGO: 0
  Runnable: 0
  Running: 4
  Waiting: 8
Thread metrics:
  Max: 8
  Live: 4
```

No surprises here: we read the new metric values the same way as before – using metrics.Read.
Further reading
Proposal 15490 • CL 690397, 690398, 690399
P.S. If you are into goroutines, check out my interactive book on concurrency
*[Medium impact]: Likely impact for an average Go developer
-
🔗 Cryptography & Security Newsletter The Legend of Kipp Hickman rss
Working on the short news for this month's newsletter, I came across the Cypherpunks Hall of Fame, which has a long list of people who have contributed to encryption, privacy, and similar causes. Looking at the list, I couldn't help but feel that it's missing one very important person who made a significant contribution.
-
🔗 r/LocalLLaMA New Open-source text-to-image model from Alibaba is just below Seedream 4, Coming today or tomorrow! rss
submitted by /u/abdouhlili
-
🔗 organicmaps/organicmaps 2025.11.26-5-android release
• Wikipedia articles in Turkish, Japanese, and Chinese: type ?wiki to see them on the map
• New setting to visually highlight downloaded regions on the map
• Import Google Takeout's bookmarks and saved places via GeoJSON
• OSM data as of November 23
• Improved audio playback and sound interruptions
• Fixed crashes in the Editor and when downloading maps
• Brighter roads in dark vehicle mode
• Improved bicycle routing in Austria
…more at omaps.org/news
See a detailed announcement on our website when app updates are published in all stores.
You can get automatic app updates from GitHub using Obtainium.
sha256sum:
ca07f7c992eef3adc11b5e95c4ed27709a2ae9ff10040eb2d3471aedc5026450 OrganicMaps-25112605-web-release.apk -
🔗 Simon Willison Highlights from my appearance on the Data Renegades podcast with CL Kao and Dori Wilson rss
I talked with CL Kao and Dori Wilson for an episode of their new Data Renegades podcast titled Data Journalism Unleashed with Simon Willison.
I fed the transcript into Claude Opus 4.5 to extract this list of topics with timestamps and illustrative quotes. It did such a good job I'm using what it produced almost verbatim here - I tidied it up a tiny bit and added a bunch of supporting links.
-
What is data journalism and why it's the most interesting application of data analytics [02:03]
"There's this whole field of data journalism, which is using data and databases to try and figure out stories about the world. It's effectively data analytics, but applied to the world of news gathering. And I think it's fascinating. I think it is the single most interesting way to apply this stuff because everything is in scope for a journalist."
-
The origin story of Django at a small Kansas newspaper [02:31]
"We had a year's paid internship from university where we went to work for this local newspaper in Kansas with this chap Adrian Holovaty. And at the time we thought we were building a content management system."
-
Building the "Downloads Page" - a dynamic radio player of local bands [03:24]
"Adrian built a feature of the site called the Downloads Page. And what it did is it said, okay, who are the bands playing at venues this week? And then we'll construct a little radio player of MP3s of music of bands who are playing in Lawrence in this week."
-
Working at The Guardian on data-driven reporting projects [04:44]
"I just love that challenge of building tools that journalists can use to investigate stories and then that you can use to help tell those stories. Like if you give your audience a searchable database to back up the story that you're presenting, I just feel that's a great way of building more credibility in the reporting process."
-
Washington Post's opioid crisis data project and sharing with local newspapers [05:22]
"Something the Washington Post did that I thought was extremely forward thinking is that they shared [the opioid files] with other newspapers. They said, 'Okay, we're a big national newspaper, but these stories are at a local level. So what can we do so that the local newspaper and different towns can dive into that data for us?'"
-
NICAR conference and the collaborative, non-competitive nature of data journalism [07:00]
"It's all about trying to figure out what is the most value we can get out of this technology as an industry as a whole."
-
ProPublica and the Baltimore Banner as examples of nonprofit newsrooms [09:02]
"The Baltimore Banner are a nonprofit newsroom. They have a hundred employees now for the city of Baltimore. This is an enormously, it's a very healthy newsroom. They do amazing data reporting... And I believe they're almost breaking even on subscription revenue [correction, not yet], which is astonishing."
-
The "shower revelation" that led to Datasette - SQLite on serverless hosting [10:31]
"It was literally a shower revelation. I was in the shower thinking about serverless and I thought, 'hang on a second. So you can't use Postgres on serverless hosting, but if it's a read-only database, could you use SQLite? Could you just take that data, bake it into a blob of a SQLite file, ship that as part of the application just as another asset, and then serve things on top of that?'"
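The pattern in that quote maps directly onto Python's stdlib sqlite3: bake the data into a file at build time, then open it in read-only mode when serving. A minimal sketch (the file name and table are illustrative):

```python
import sqlite3

# Build step: bake the data into a SQLite file, shipped as another deploy asset.
conn = sqlite3.connect("baked.db")
conn.execute("CREATE TABLE IF NOT EXISTS facts (k TEXT, v TEXT)")
conn.execute("DELETE FROM facts")
conn.execute("INSERT INTO facts VALUES ('answer', '42')")
conn.commit()
conn.close()

# Serve step: mode=ro means writes fail instead of mutating the asset.
ro = sqlite3.connect("file:baked.db?mode=ro", uri=True)
print(ro.execute("SELECT v FROM facts WHERE k = 'answer'").fetchone()[0])  # 42
try:
    ro.execute("INSERT INTO facts VALUES ('x', 'y')")
except sqlite3.OperationalError:
    print("write rejected")
```

Because the database is immutable, it needs no write locks or connection pooling, which is what makes it a good fit for serverless hosting.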
-
Datasette's plugin ecosystem and the vision of solving data publishing [12:36]
"In the past I've thought about it like how Pinterest solved scrapbooking and WordPress solved blogging, who's going to solve data like publishing tables full of data on the internet? So that was my original goal."
-
Unexpected Datasette use cases: Copenhagen electricity grid, Brooklyn Cemetery [13:59]
"Somebody was doing research on the Brooklyn Cemetery and they got hold of the original paper files of who was buried in the Brooklyn Cemetery. They digitized those, loaded the results into Datasette and now it tells the story of immigration to New York."
-
Bellingcat using Datasette to investigate leaked Russian food delivery data [14:40]
"It turns out the Russian FSB, their secret police, have an office that's not near any restaurants and they order food all the time. And so this database could tell you what nights were the FSB working late and what were the names and phone numbers of the FSB agents who ordered food... And I'm like, 'Wow, that's going to get me thrown out of a window.'"
Bellingcat: Food Delivery Leak Unmasks Russian Security Agents
-
The frustration of open source: no feedback on how people use your software [16:14]
"An endless frustration in open source is that you really don't get the feedback on what people are actually doing with it."
-
Open office hours on Fridays to learn how people use Datasette [16:49]
"I have an open office hours Calendly, where the invitation is, if you use my software or want to use my software, grab 25 minutes to talk to me about it. And that's been a revelation. I've had hundreds of conversations in the past few years with people."
-
Data cleaning as the universal complaint - 95% of time spent cleaning [17:34]
"I know every single person I talk to in data complains about the cleaning that everyone says, 'I spend 95% of my time cleaning the data and I hate it.'"
-
Version control problems in data teams - Python scripts on laptops without Git [17:43]
"I used to work for a large company that had a whole separate data division and I learned at one point that they weren't using Git for their scripts. They had Python scripts, littering laptops left, right and center and lots of notebooks and very little version control, which upset me greatly."
-
The Carpentries organization teaching scientists Git and software fundamentals [18:12]
"There's an organization called The Carpentries. Basically they teach scientists to use Git. Their entire thing is scientists are all writing code these days. Nobody ever sat them down and showed them how to use the UNIX terminal or Git or version control or write tests. We should do that."
-
Data documentation as an API contract problem [21:11]
"A coworker of mine said, you do realize that this should be a documented API interface, right? Your data warehouse view of your project is something that you should be responsible for communicating to the rest of the organization and we weren't doing it."
-
The importance of "view source" on business reports [23:21]
"If you show somebody a report, you need to have view source on those reports... somebody would say 25% of our users did this thing. And I'm thinking I need to see the query because I knew where all of the skeletons were buried and often that 25% was actually a 50%."
-
Fact-checking process for data reporting [24:16]
"Their stories are fact checked, no story goes out the door without someone else fact checking it and without an editor approving it. And it's the same for data. If they do a piece of data reporting, a separate data reporter has to audit those numbers and maybe even produce those numbers themselves in a separate way before they're confident enough to publish them."
-
Queries as first-class citizens with version history and comments [27:16]
"I think the queries themselves need to be first class citizens where like I want to see a library of queries that my team are using and each one I want to know who built it and when it was built. And I want to see how that's changed over time and be able to post comments on it."
-
Two types of documentation: official docs vs. temporal/timestamped notes [29:46]
"There's another type of documentation which I call temporal documentation where effectively it's stuff where you say, 'Okay, it's Friday, the 31st of October and this worked.' But the timestamp is very prominent and if somebody looks that in six months time, there's no promise that it's still going to be valid to them."
-
Starting an internal blog without permission - instant credibility [30:24]
"The key thing is you need to start one of these without having to ask permission first. You just one day start, you can do it in a Google Doc, right?... It gives you so much credibility really quickly because nobody else is doing it."
-
Building a search engine across seven documentation systems [31:35]
"It turns out, once you get a search engine over the top, it's good documentation. You just have to know where to look for it. And if you are the person who builds the search engine, you secretly control the company."
-
The TIL (Today I Learned) blog approach - celebrating learning basics [33:05]
"I've done TILs about 'for loops' in Bash, right? Because okay, everyone else knows how to do that. I didn't... It's a value statement where I'm saying that if you've been a professional software engineer for 25 years, you still don't know everything. You should still celebrate figuring out how to learn 'for loops' in Bash."
-
Coding agents like Claude Code and their unexpected general-purpose power [34:53]
"They pretend to be programming tools but actually they're basically a sort of general agent because they can do anything that you can do by typing commands into a Unix shell, which is everything."
-
Skills for Claude - markdown files for census data, visualization, newsroom standards [36:16]
"Imagine a markdown file for census data. Here's where to get census data from. Here's what all of the columns mean. Here's how to derive useful things from that. And then you have another skill for here's how to visualize things on a map using D3... At the Washington Post, our data standards are this and this and this."
-
The absurd 2025 reality: cutting-edge AI tools use 1980s terminal interfaces [38:22]
"The terminal is now accessible to people who never learned the terminal before 'cause you don't have to remember all the commands because the LLM knows the commands for you. But isn't that fascinating that the cutting edge software right now is it's like 1980s styleâ I love that. It's not going to last. That's a current absurdity for 2025."
-
Cursor for data? Generic agent loops vs. data-specific IDEs [38:18]
"More of a notebook interface makes a lot more sense than a Claude Code style terminal 'cause a Jupyter Notebook is effectively a terminal, it's just in your browser and it can show you charts."
-
Future of BI tools: prompt-driven, instant dashboard creation [39:54]
"You can copy and paste a big chunk of JSON data from somewhere into [an LLM] and say build me a dashboard. And they do such a good job. Like they will just decide, oh this is a time element so we'll do a bar chart over time and these numbers feel big so we'll put those in a big green box."
-
Three exciting LLM applications: text-to-SQL, data extraction, data enrichment [43:06]
"LLMs are stunningly good at outputting SQL queries. Especially if you give them extra metadata about the columns. Maybe a couple of example queries and stuff."
-
LLMs extracting structured data from scanned PDFs at 95-98% accuracy [43:36]
"You file a freedom of information request and you get back horrifying scanned PDFs with slightly wonky angles and you have to get the data out of those. LLMs for a couple of years now have been so good at, 'here's a page of a police report, give me back JSON with the name of the arresting officer and the date of the incident and the description,' and they just do it."
-
Data enrichment: running cheap models in loops against thousands of records [44:36]
"There's something really exciting about the cheaper models, Gemini Flash 2.5 Lite, things like that. Being able to run those in a loop against thousands of records feels very valuable to me as well."
-
Multimodal LLMs for images, audio transcription, and video processing [45:42]
"At one point I calculated that using Google's least expensive model, if I wanted to generate captions for like 70,000 photographs in my personal photo library, it would cost me like $13 or something. Wildly inexpensive."
Correction: with Gemini 1.5 Flash 8B it would cost 173.25 cents
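The arithmetic behind that correction: 173.25 cents spread over 70,000 photos is a fraction of a hundredth of a cent per caption. The figures below come from the correction above, not from a price sheet.

```python
# Sanity-check the corrected cost figure.
photos = 70_000
total_cents = 173.25            # corrected total from the post
total_dollars = total_cents / 100       # about $1.73 for the whole library
per_image_cents = total_cents / photos  # about 0.0025 cents per caption
```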
-
First programming language: hated C++, loved PHP and Commodore 64 BASIC [46:54]
"I hated C++ 'cause I got my parents to buy me a book on it when I was like 15 and I did not make any progress with Borland C++ compiler... Actually, my first program language was Commodore 64 BASIC. And I did love that. Like I tried to build a database in Commodore 64 BASIC back when I was like six years old or something."
-
Biggest production bug: crashing The Guardian's MPs expenses site with a progress bar [47:46]
"I tweeted a screenshot of that progress bar and said, 'Hey, look, we have a progress bar.' And 30 seconds later the site crashed because I was using SQL queries to count all 17,000 documents just for this one progress bar."
-
Favorite test dataset: San Francisco's tree list, updated several times a week [48:44]
"There's 195,000 trees in this CSV file and it's got latitude and longitude and species and age when it was planted... and get this, it's updated several times a week... most working days, somebody at San Francisco City Hall updates their database of trees, and I can't figure out who."
-
Showrunning TV shows as a management model - transferring vision to lieutenants [50:07]
"Your job is to transfer your vision into their heads so they can go and have the meetings with the props department and the set design and all of those kinds of things... I used to sniff at the idea of a vision when I was young and stupid. And now I'm like, no, the vision really is everything because if everyone understands the vision, they can make decisions you delegate to them."
The Eleven Laws of Showrunning by Javier Grillo-Marxuach
-
Hot take: all executable code with business value must be in version control [52:21]
"I think it's inexcusable to have executable code that has business value that is not in version control somewhere."
-
Hacker News automation: GitHub Actions scraping for notifications [52:45]
"I've got a GitHub actions thing that runs a piece of software I wrote called shot-scraper that runs Playwright, that loads up a browser in GitHub actions to scrape that webpage and turn the results into JSON, which then get turned into an atom feed, which I subscribe to in NetNewsWire."
-
Dream project: whale detection camera with Gemini AI [53:47]
"I want to point a camera at the ocean and take a snapshot every minute and feed it into Google Gemini or something and just say, is there a whale yes or no? That would be incredible. I want push notifications when there's a whale."
-
Favorite podcast: Mark Steel's in Town (hyperlocal British comedy) [54:23]
"Every episode he goes to a small town in England and he does a comedy set in a local venue about the history of the town. And so he does very deep research... I love that sort of like hyperlocal, like comedy, that sort of British culture thing."
Mark Steel's in Town available episodes
-
Favorite fiction genre: British wizards caught up in bureaucracy [55:06]
"My favorite genre of fiction is British wizards who get caught up in bureaucracy... I just really like that contrast of like magical realism and very clearly researched government paperwork and filings."
Colophon
I used a Claude Project for the initial analysis, pasting in the HTML of the transcript since that included `<span data-timestamp="425">` elements. The project uses the following custom instructions:

You will be given a transcript of a podcast episode. Find the most interesting quotes in that transcript - quotes that best illustrate the overall themes, and quotes that introduce surprising ideas or express things in a particularly clear or engaging or spicy way. Answer just with those quotes - long quotes are fine.
I then added a follow-up prompt saying:
Now construct a bullet point list of key topics where each item includes the mm:ss in square braces at the end
Then suggest a very comprehensive list of supporting links I could find
Then one more follow-up:
Add an illustrative quote to every one of those key topics you identified
Here's the full Claude transcript of the analysis.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
-
đ tonsky.me How to get hired in 2025 rss

It's 2025 and you are applying for a software engineer position. They give you a test assignment. You complete it yourself, send it over, and get rejected. Why?
Because it looked like AI.
Unfortunately, it's 2025, AI is spreading like glitter in a kindergarten, and it's really easy to mistake hard human labor for soulless, uninspired machine slop.
Following are the main red flags in test assignments that should be avoided:
- The assignment was read and understood in full.
- All parts are implemented.
- Industry-standard tools and frameworks are used.
- The code is split into small, readable functions.
- Variables have descriptive names.
- Complex parts have comments.
- Errors are handled, error messages are easy to follow.
- Source files are organized reasonably.
- The web interface looks nice.
- There are tests.
Avoid these AI giveaways and spread the word!
-
- November 25, 2025
-
đ IDA Plugin Updates IDA Plugin Updates on 2025-11-25 rss
IDA Plugin Updates on 2025-11-25
New Releases:
Activity:
- capa
- distro
- c1cb3600: Updated ilspycmd
- dotfiles
- Graffiti
- 324c0238: :bug: Fix padding on light theme
- HexRaysCodeXplorer
- hrtng
- dcc0f0ec: Merge branch 'master' of https://github.com/KasperskyLab/hrtng
- b232df83: cache cmake's IDASDK_DIR
- 4939824c: Create build.yml
- fe77a18d: upd ida-plugin.json
- ida-pro-mcp
-
đ @binaryninja@infosec.exchange You can now pull Ghidra databases straight into your workflow in Binary Ninja mastodon
You can now pull Ghidra databases straight into your workflow in Binary Ninja 5.2! Open a .gbf on its own, import Ghidra data into an existing session, or bring parts of a full project into a Binary Ninja project on Commercial and above. Mixed tool workflows get a lot easier and this update sets the stage for future export support. https://binary.ninja/2025/11/13/binary-ninja-5.2-io.html#ghidra-import
-
đ r/LocalLLaMA You can now do FP8 reinforcement learning locally! (<5GB VRAM) rss
Hey r/LocalLlama! We're getting close to our last release of 2025! Thanks so much for all the support this year. The DeepSeek team back in Jan showcased how powerful FP8 RL can be with GRPO. Well, you can now try it on your local hardware using only 5GB VRAM! RTX 50x, 40x series all work! Unsloth GitHub: https://github.com/unslothai/unsloth Why should you do FP8 training?
NVIDIA's research finds FP8 training can match BF16 accuracy whilst getting 1.6x faster inference time. We collabed with TorchAO from PyTorch to introduce FP8 RL training, making FP8 GRPO possible on home GPUs with no accuracy loss!
- Qwen3-4B FP8 GRPO works on just 6GB VRAM. Qwen3-1.7B on 5GB
- 1.4x faster RL training and 2Ă longer context vs BF16/FP16
- 60% less VRAM and 10Ă longer context than other FP8 RL implementations
- Unsloth is the only framework that makes FP8 RL LoRA work on consumer GPUs (e.g. NVIDIA RTX 40 & 50 Series). Also runs on H100, H200, B200.
- You may notice Unsloth now uses much less VRAM than before, enabling even longer context. We're also implementing faster training soon. Blog coming soon
- Our notebooks use 24GB L4s which fit Qwen3-14B as Tesla T4s don't support FP8.
- Our FP8 RL incorporates Unslothâs weight sharing, Standby, Flex Attention + more.
- Works on any NVIDIA RTX 40, 50 series and H100, B200 etc. GPUs
- Use `load_in_fp8 = True` within `FastLanguageModel` to enable FP8 RL.
You can read our blogpost for our findings and more: https://docs.unsloth.ai/new/fp8-reinforcement-learning Llama 3.2 1B FP8 Colab Notebook: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama_FP8_GRPO.ipynb In the notebook, you can plug in any of our previous reward functions or RL environment examples, including our auto kernel creation and our 2048 game notebooks. To enable fp8:
```python
import os; os.environ['UNSLOTH_VLLM_STANDBY'] = "1" # Saves 30% VRAM
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Qwen3-8B",
    max_seq_length = 2048,
    load_in_4bit = False, # False for LoRA 16bit
    fast_inference = True, # Enable vLLM fast inference
    max_lora_rank = 32,
    load_in_fp8 = True, # Float8 RL / GRPO!
)
```
Hope you all have a lovely Thanksgiving, a lovely rest of the week and I'll be here to answer any and all questions! =) submitted by /u/danielhanchen
[link] [comments]
-
đ r/LocalLLaMA Flux 2 can be run on 24gb vram!!! rss
I don't know why people are complaining... submitted by /u/Brave-Hold-9389
[link] [comments]
-
đ r/wiesbaden Bought from rewe @ hauptbahnhof đŤ rss
submitted by /u/Electrical-You-6513
[link] [comments] -
đ r/LocalLLaMA LLaDA2.0 (103B/16B) has been released rss
LLaDA2.0-flash is a diffusion language model featuring a 100BA6B Mixture-of-Experts (MoE) architecture. As an enhanced, instruction-tuned iteration of the LLaDA2.0 series, it is optimized for practical applications.
https://huggingface.co/inclusionAI/LLaDA2.0-flash
LLaDA2.0-mini is a diffusion language model featuring a 16BA1B Mixture-of-Experts (MoE) architecture. As an enhanced, instruction-tuned iteration of the LLaDA series, it is optimized for practical applications.
https://huggingface.co/inclusionAI/LLaDA2.0-mini
llama.cpp support in progress https://github.com/ggml-org/llama.cpp/pull/17454
previous version of LLaDA is supported https://github.com/ggml-org/llama.cpp/pull/16003 already (please check the comments)
submitted by /u/jacek2023
[link] [comments] -
đ Hex-Rays Blog Use idalib To Power Your Tools and Products (And Do It For Free During Dev) rss
If you've worked with IDA beyond the UI (scripting, automation, headless tasks, or custom tooling), you may have heard of idalib. For those who haven't, it's the programmatic interface to the IDA analysis engine, designed to let you call IDA as a library, run analysis headlessly, or integrate IDA logic into your own systems.

-
đ HexRaysSA/plugin-repository commits sync repo: +2 plugins, +2 releases rss
sync repo: +2 plugins, +2 releases ## New plugins - [funcfiletree](https://github.com/rand-tech/idaplugins) (1.0) - [navigator](https://github.com/rand-tech/idaplugins) (1.3) -
đ syncthing/syncthing v2.0.12-rc.1 release
Major changes in 2.0
-
Database backend switched from LevelDB to SQLite. There is a migration on
first launch which can be lengthy for larger setups. The new database is
easier to understand and maintain and, hopefully, less buggy. -
The logging format has changed to use structured log entries (a message
plus several key-value pairs). Additionally, we can now control the log
level per package, and a new log level WARNING has been inserted between
INFO and ERROR (which was previously known as WARNING...). The INFO level
has become more verbose, indicating the sync actions taken by Syncthing. A
new command line flag `--log-level` sets the default log level for all
packages, and the `STTRACE` environment variable and GUI have been updated
to set log levels per package. The `--verbose` and `--logflags` command
line options have been removed and will be ignored if given. -
Deleted items are no longer kept forever in the database, instead they are
forgotten after fifteen months. If your use case require deletes to take
effect after more than a fifteen month delay, set the
--db-delete-retention-intervalcommand line option or corresponding
environment variable to zero, or a longer time interval of your choosing. -
Modernised command line options parsing. Old single-dash long options are
no longer supported, e.g. `-home` must be given as `--home`. Some options
have been renamed, others have become subcommands. All serve options are
now also accepted as environment variables. See `syncthing --help` and
`syncthing serve --help` for details. -
Rolling hash detection of shifted data is no longer supported as this
effectively never helped. Instead, scanning and syncing is faster and more
efficient without it. -
A "default folder" is no longer created on first startup.
-
Multiple connections are now used by default between v2 devices. The new
default value is to use three connections: one for index metadata and two
for data exchange. -
The following platforms unfortunately no longer get prebuilt binaries for
download at syncthing.net and on GitHub, due to complexities related to
cross compilation with SQLite:- dragonfly/amd64
- solaris/amd64
- linux/ppc64
- netbsd/*
- openbsd/386 and openbsd/arm
- windows/arm
- The handling of conflict resolution involving deleted files has changed. A
delete can now be the winning outcome of conflict resolution, resulting in
the deleted file being moved to a conflict copy.
This release is also available as:
-
APT repository: https://apt.syncthing.net/
-
Docker image:
`docker.io/syncthing/syncthing:2.0.12-rc.1` or `ghcr.io/syncthing/syncthing:2.0.12-rc.1`
(`{docker,ghcr}.io/syncthing/syncthing:2` to follow just the major version)
What's Changed
Other
- chore: update quic-go, adapt to lack of write tracking by @calmh in #10456
- chore(cli): clean up generated usage strings for config commands (fixes #10462) by @acolomb in #10463
Full Changelog :
v2.0.11...v2.0.12-rc.1 -
-
đ r/LocalLLaMA NVIDIA RTX PRO 6000 Blackwell desktop GPU drops to $7,999 rss
Do you guys think that an RTX Quadro 8000 situation could happen again? submitted by /u/panchovix
[link] [comments]
-
đ Rust Blog Interview with Jan David Nose rss
On the Content Team, we had our first whirlwind outing at RustConf 2025 in Seattle, Washington, USA. There we had a chance to speak with folks about interesting things happening in the Project and the wider community.
Jan David Nose, Infrastructure Team
In this interview, Xander Cesari sits down with Jan David Nose, then one of the full-time engineers on the Infrastructure Team, which maintains and develops the infrastructure upon which Rust is developed and deployed -- including CI/CD tooling and crates.io.
We released this video on an accelerated timeline, some weeks ago, in light of the recent software supply chain attacks, but the interview was conducted prior to the news of compromised packages in other languages and ecosystems.
Check out the interview here or click below.
Transcript
Xander Cesari : Hey, this is Xander Cesari with the Rust Project Content Team, recording on the last hour of the last day of RustConf 2025 here in Seattle. So it's been a long and amazing two days. And I'm sitting down here with a team member from the Rust Project Infra Team, the unsung heroes of the Rust language. Want to introduce yourself and kind of how you got involved?
Jan David Nose : Yeah, sure. I'm JD. Jan David is the full name, but especially in international contexts, I just go with JD. I've been working for the Rust Foundation for the past three years as a full-time employee and I essentially hit the jackpot to work full-time on open source and I've been in the Infra Team of the Rust Project for the whole time. For the past two years I've led the team together with Jake. So the Infra Team is kind of a thing that lets Rust happen and there's a lot of different pieces.
Xander Cesari : Could you give me an overview of the responsibility of the Infra Team?
Jan David Nose : Sure. I think on a high level, we think about this in terms of, we serve two different groups of people. On one side, we have users of the language, and on the other side, we really try to provide good tooling for the maintainers of the language.
Jan David Nose : Starting with the maintainer side, this is really everything about how Rust is built. From the moment someone makes a contribution or opens a PR, we maintain the continuous integration that makes sure that the PR actually works. There's a lot of bots and tooling helping out behind the scenes to kind of maintain a good status quo, a sane state. Lots of small things like triage tools on GitHub to set labels and ping people and these kinds of things. And that's kind of managed by the Infra Team at large.
Jan David Nose : And then on the user side, we have a lot of, or the two most important things are making sure users can actually download Rust. We don't develop crates.io, but we support the infrastructure to actually ship crates to users. All the downloads go through content delivery networks that we provide. The same for Rust releases. So if I don't do my job well, which has happened, there might be a global outage of crates.io and no one can download stuff. But those are kind of the two different buckets of services that we run and operate.
Xander Cesari : Gotcha. So on the maintainer side, the Rust organization on GitHub is a large organization with a lot of activity, a lot of code. There's obviously a lot of large code bases being developed on GitHub, but there are not that many languages the size of Rust being developed on GitHub. Are there unique challenges to developing a language and the tooling that's required versus developing other software projects?
Jan David Nose : I can think of a few things that have less to do with the language specifically, but with some of the architecture decisions that were made very early on in the life cycle of Rust. So one of the things that actually caused a lot of headache for mostly GitHub, and then when they complained to us, for us as well, is that for a long, long time, the index for crates.io was a Git repo on GitHub. As Rust started to grow, the activity on the repo became so big that it actually caused some issues, I would say, in a friendly way on GitHub, just in terms of how much resources that single repository was consuming. That then kind of started this work on a web-based, HTTP-based index to shift that away. That's certainly one area where we've seen how Rust has struggled a little bit with the platform, but also the platform provider struggled with us.
Jan David Nose : I think for Rust itself, especially when we look at CI, we really want to make sure that Rust works well on all of the targets and all the platforms we support. That means we have an extremely wide CI pipeline where, for every Tier 1 target, we want to run all the tests, we want to build the release artifacts, we want to upload all of that to S3. We want to do as much as we reasonably can for Tier 2 targets and, to a lesser extent, maybe even test some stuff on Tier 3. That has turned into a gigantic build pipeline. Marco gave a talk today on what we've done with CI over the last year. One of the numbers that came out of doing the research for this talk is that we accumulate over three million build minutes per month, which is about six years of CPU time every month.
Jan David Nose : Especially when it comes to open source projects, I think we're one of the biggest consumers of GitHub Actions in that sense. Not the biggest in total; there are definitely bigger commercial projects. But that's a unique challenge for us to manage because we want to provide as good a service as we can to the community and make sure that what we ship is high quality. That comes at a huge cost in terms of scaling. As Rust gets more popular and we want to target more and more platforms, this is like a problem that just continues to grow.
Jan David Nose : We'll probably never remove a lot of targets, so there's an interesting challenge to think about. If it's already big now, how does this look in 5 years, 10 years, 15 years, and how can we make sure we can maintain the level of quality we want to ship? When you build and run for a target in the CI pipeline, some of those Tier 1 targets you can just ask a cloud service provider to give you a VM running on that piece of hardware, but some of them are probably not things that you can just run in the cloud.
Xander Cesari : Is there some HIL (Hardware-In-the-Loop) lab somewhere?
Jan David Nose : So you're touching on a conversation that's happening pretty much as we speak. So far, as part of our target tier policy, there is a clause that says it needs to be able to run in CI. That has meant being very selective about only promoting things to Tier 1 that we can actually run and test. For all of this, we had a prerequisite that it runs on GitHub Actions. So far we've used very little hardware that is not natively supported or provided by GitHub.
Jan David Nose : But this is exactly the point with Rust increasing in popularity. We just got requests to support IBM platforms and RISC-V, and those are not natively supported on GitHub. That has kicked off an internal conversation about how we even support this. How can we as a project enable companies that can provide us hardware to test on? What are the implications of that?
Jan David Nose : On one side, there are interesting constraints and considerations. For example, you don't want your PRs to randomly fail because someone else's hardware is not available. We're already so resource-constrained on how many PRs we can merge each day that adding noise to that process would really slow down contributions to Rust. On the other side, there are security implications. Especially if we talk about promoting something to Tier 1 and we want to build release artifacts on that hardware, we need to make sure that those are actually secure and no one sneaks a back door into the Rust compiler target for RISC-V.
Jan David Nose : So there are interesting challenges for us, especially in the world we live in where supply chain security is a massive concern. We need to figure out how we can both support the growth of Rust and the growth of the language, the community, and the ecosystem at large while also making sure that the things we ship are reliable, secure, and performant. That is becoming an increasingly relevant and interesting piece to work on. So far we've gotten away with the platforms that GitHub supports, but it's really cool to see that this is starting to change and people approach us and are willing to provide hardware, provide sponsorship, and help us test on their platforms. But essentially we don't have a good answer for this yet. We're still trying to figure out what this means, what we need to take into consideration, and what our requirements are to use external hardware.
Xander Cesari : Yeah, everyone is so excited about Rust will run everywhere, but there's a maintenance cost there that is almost exponential in scope.
Jan David Nose : It's really interesting as well because there's a tension there. I think with IBM, for example, approaching us, it's an interesting example. Who has IBM platforms at home? The number of users for that platform is really small globally, but IBM also invests heavily in Rust, tries to make this happen, and is willing to provide the hardware.
Jan David Nose : For us, that leads to a set of questions. Is there a line? Is there a certain requirement? Is there a certain amount of usage that a platform would need for us to promote it? Or do we say we want to promote as much as we can to Tier 1? This is a conversation we haven't really had to have yet. It's only now starting to creep in as Rust is adopted more widely and companies pour serious money and resources into it. That's exciting to see.
Jan David Nose : In this specific case, companies approach the Infra Team to figure out how we can add their platforms to CI as a first step towards Tier 1 support. But it's also a broader discussion we need to have with larger parts of the Rust Project. For Tier 1 promotions, for example, the Compiler Team needs to sign off, Infra needs to sign off. Many more people need to be involved in this discussion of how we can support the growing needs of the ecosystem at large.
Xander Cesari : I get the feeling that's going to be a theme throughout this interview.
Jan David Nose : 100%.
Xander Cesari : So one other tool that's part of this pipeline that I totally didn't know about for a long time, and I think a talk at a different conference clued me into it, is Crater. It's a tool that attempts to run all of the Rust code it can find on the internet. Can you talk about what that tool does and how it integrates into the release process?
Jan David Nose : Whenever someone creates a pull request on GitHub to add a new feature or bug fix to the Rust compiler, they can start what's called a Crater run, or an experiment. Crater is effectively a large fleet of machines that tries to pull in as many crates as it can. Ideally, we would love to test all crates, but for a variety of reasons that's not possible. Some crates simply don't build reliably, so we maintain lists to exclude those. From the top of my head, I think we currently test against roughly 60% of crates.
Jan David Nose : The experiment takes the code from your pull request, builds the Rust compiler with it, and then uses that compiler to build all of these crates. It reports back whether there are any regressions related to the change you proposed. That is a very important tool for us to maintain backwards compatibility with new versions and new features in Rust. It lets us ask: does the ecosystem still compile if we add this feature to the compiler, and where do we run into issues? Then, and this is more on the Compiler Team side, there's a decision about how to proceed. Is the breakage acceptable? Do we need to adjust the feature? Having Crater is what makes that conversation possible because it gives us real data on the impact on the wider ecosystem.
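To make the idea concrete, here is a small, purely illustrative Rust sketch of the core comparison a Crater experiment performs: build every crate with the baseline toolchain and with the candidate toolchain from the pull request, then report the crates that went from passing to failing. The types and names are hypothetical and are not Crater's actual code.

```rust
use std::collections::HashMap;

// Hypothetical build outcome for one crate under one toolchain.
#[derive(Clone, Copy, PartialEq, Debug)]
enum BuildResult {
    Pass,
    Fail,
}

// A regression is a crate that built with the baseline compiler but
// fails with the compiler built from the pull request.
fn regressions(
    baseline: &HashMap<&str, BuildResult>,
    candidate: &HashMap<&str, BuildResult>,
) -> Vec<String> {
    let mut out: Vec<String> = baseline
        .iter()
        .filter(|(name, &b)| {
            b == BuildResult::Pass
                && candidate.get(*name) == Some(&BuildResult::Fail)
        })
        .map(|(name, _)| name.to_string())
        .collect();
    out.sort(); // deterministic report order
    out
}

fn main() {
    use BuildResult::*;
    let baseline = HashMap::from([("serde", Pass), ("rand", Pass), ("old-crate", Fail)]);
    let candidate = HashMap::from([("serde", Pass), ("rand", Fail), ("old-crate", Fail)]);
    // Only `rand` regressed: it passed on the baseline and fails with the PR.
    println!("{:?}", regressions(&baseline, &candidate)); // prints ["rand"]
}
```

Crates that were already failing on the baseline (like `old-crate` above) are not counted, which mirrors the point that Crater measures the impact of the change, not the ecosystem's absolute health.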
Xander Cesari : I think that's so interesting because as more and more companies adopt Rust, they're asking whether the language is going to be stable and backward compatible. You hear about other programming languages that had a big version change that caused a lot of drama and code changes. The fact that if you have code on crates.io, the Compiler Team is probably already testing against it for backwards compatibility is pretty reassuring.
Jan David Nose : Yeah, the chances are high, I would say. Especially looking at the whole Python 2 to Python 3 migration, I think as an industry we've learned a lot from those big version jumps. I can't really speak for the Compiler Team because I'm not a member and I wasn't involved in the decision-making, but I feel this is one of the reasons why backwards compatibility is such a big deal in Rust's design. We want to make it as painless as possible to stay current, stay up to date, and make sure we don't accidentally break the language or create painful migration points where the entire ecosystem has to move at once.
Xander Cesari : Do you know if there are other organizations pulling in something like Crater and running it on their own internal crate repositories, maybe some of the big tech companies or other compiler developers or even other languages? Or is this really bespoke for the Rust compiler team?
Jan David Nose : I don't know of anyone who runs Crater itself as a tool. Crater is built on a sandboxing framework that we also use in other places. For example, docs.rs uses some of the same underlying infrastructure to build all of the documentation. We try to share as much as we can of the functionality that exists in Crater, but I'm not aware of anyone using Crater in the same way we do.
Xander Cesari : Gotcha. The other big part of your job is that the Infra Team works on supporting maintainers, but it also supports users and consumers of Rust who are pulling from crates.io. It sounds like crates.io is not directly within your team, but you support a lot of the backend there.
Jan David Nose : Yeah, exactly. crates.io has its own team, and that team maintains the web application and the APIs. The crates themselves, all the individual files that people download, are hosted within our infrastructure. The Infra Team maintains the content delivery network that sits in front of that. Every download of a crate goes through infrastructure that we maintain. We collaborate very closely with the crates.io team on this shared interface. They own the app and the API, and we make sure that the files get delivered to the end user.
Xander Cesari : So it sounds like there's a lot of verification of the files that get uploaded and checks every time someone pushes a new version to crates.io. That part all happens within crates.io as an application.
Jan David Nose : Cargo uses the crates.io API to upload the crate file. crates.io has a lot of internal logic to verify that it is valid and that everything looks correct. For us, as the Infra Team, we treat that as a black box. crates.io does its work, and if it is happy with the upload, it stores the file in S3. From that point onward, infrastructure makes sure that the file is accessible and can be downloaded so people can start using your crate.
Xander Cesari : In this theme of Rust being a bit of a victim of its own success, I assume all of the traffic graphs and download graphs are very much up and to the right.
Jan David Nose : On the Foundation side, one of our colleagues likes to check how long it takes for one billion downloads to happen on crates.io, and that number has been falling quickly. I don't remember what it was three years ago, but it has come down by orders of magnitude. In our download traffic we definitely see exponential growth. Our traffic tends to double year over year, and that trend has been pretty stable. It really seems like Rust is getting a lot of adoption in the ecosystem and people are using it for more and more things.
Xander Cesari : How has the Infra Team scaled with that? Are you staying ahead of it, or are there a lot of late nights?
Jan David Nose : There have definitely been late nights. In the three years I've been working in the Infra Team, every year has had a different theme that was essentially a fire to put out.
Jan David Nose : It changes because we fix one thing and then the next thing breaks. So far, luckily, those fires have been mostly sequential, not parallel. When I joined, bandwidth was the big topic. Over the last year, it has been more about CI. About three years ago, we hit this inflection point where traffic was doubling and the sponsorship capacity we had at the time was reaching its limits.
Jan David Nose : Two or three years ago, Fastly welcomed us into their Fast Forward program and has been sponsoring all of our bandwidth since then. That has mostly helped me sleep at night. It has been a very good relationship. They have been an amazing partner and have helped us at every step to remove the fear that we might hit limits. They are very active in the open source community at large; most famously they also sponsor PyPI and the Python ecosystem, compared to which we're a tiny fish in a very big pond. That gives us a lot of confidence that we can sustain this growth and keep providing crates and releases at the level of quality people expect.
Xander Cesari : In some ways, Rust did such a good job of making all of that infrastructure feel invisible. You just type Cargo commands into your terminal and it feels magical.
Jan David Nose : I'm really happy about that. It's an interesting aspect of running an infrastructure team in open source. If you look at the ten-year history since the first stable release, or even the fifteen years since Rust really started, infrastructure was volunteer-run for most of that time. I've been here for three years, and I was the first full-time infrastructure engineer. So for ten to twelve years, volunteers ran the infrastructure.
Jan David Nose : For them, it was crucial that things just worked, because you can't page volunteers in the middle of the night because a server caught fire or downloads stopped working. From the beginning, our infrastructure has been designed to be as simple and as reliable as possible. The same is true for our CDNs. I always feel a bit bad because Fastly is an amazing sponsor. Every time we meet them at conferences or they announce new features, they ask whether we want to use them or talk about how we use Fastly in production. And every time I have to say: we have the simplest configuration possible. We set some HTTP headers. That's pretty much it.
Jan David Nose : It's a very cool platform, but we use the smallest set of features because we need to maintain all of this with a very small team that is mostly volunteer-based. Our priority has always been to keep things simple and reliable and not chase every fancy new technology, so that the project stays sustainable.
Xander Cesari : Volunteer-based organizations seem to have to care about work-life balance, which is probably terrific, and there are lessons to be learned there.
Jan David Nose : Yeah, it's definitely a very interesting environment to work in. It has different rules than corporations or commercial teams. We have to think about how much work we can do in a given timeframe in a very different way, because it's unpredictable when volunteers have time, when they're around, and what is happening in their lives.
Jan David Nose : Over the last few years, we've tried to reduce the number of fires that can break out. And when they do happen, we try to shield volunteers from them and take that work on as full-time employees. That started with me three years ago. Last year Marco joined, which increased the capacity we have, because there is so much to do on the Infra side that even with me working full-time, we simply did not have enough people.
Xander Cesari : So you're two full-time and everything else is volunteer.
Jan David Nose : Exactly. The team is around eight people. Marco and I work full-time and are paid by the Rust Foundation to focus exclusively on infrastructure. Then we have a handful of volunteers who work on different things.
Jan David Nose : Because our field of responsibility is so wide, the Infra Team works more in silos than other teams might. We have people who care deeply about very specific parts of the infrastructure. Otherwise there is simply too much to know for any one person. It has been a really nice mix, and it's amazing to work with the people on the team.
Jan David Nose : As someone who is privileged enough to work full-time on this and has the time and resources, we try to bear the bigger burden and create a space that is fun for volunteers to join. We want them to work on exciting things where there is less risk of something catching fire, where it's easier to come in, do a piece of work, and then step away. If your personal life takes over for two weeks, that's okay, because someone is there to make sure the servers and the lights stay on.
Jan David Nose : A lot of that work lives more on the maintainer side: the GitHub apps, the bots that help with triage. It's less risky if something goes wrong there. On the user side, if you push the wrong DNS setting, as someone might have done, you can end up in a situation where for 30 minutes no one can download crates. And in this case, "no one" literally means no user worldwide. That's not an experience I want volunteers to have. It's extremely stressful, and it was ultimately one of the reasons I joined in the first place: there was a real feeling of burnout from carrying that responsibility.
Jan David Nose : It's easier to carry that as a full-timer. We have more time and more ways to manage the stress. I'm honestly extremely amazed by what the Infra Team was able to do as volunteers. It's unbelievable what they built and how far they pushed Rust to get to where we are now.
Xander Cesari : I think anyone who's managing web traffic in 2025 is talking about traffic skyrocketing due to bots and scrapers for AI or other purposes. Has that hit the Rust network as well?
Jan David Nose : Yeah, we've definitely seen that. It's handled by a slightly different team, but on the docs.rs side in particular we've seen crawlers hit us hard from time to time, and that has caused noticeable service degradation. We're painfully aware of the increase in traffic that comes in short but very intense bursts when crawlers go wild.
Jan David Nose : That introduces a new challenge for our infrastructure. We need to figure out how to react to that traffic and protect our services from becoming unavailable to real users who want to use docs.rs to look up something for their work. On the CDN side, our providers can usually handle the traffic. It is more often the application side where things hurt.
Jan David Nose : On the CDN side we also see people crawling crates.io, presumably to vacuum up the entire crates ecosystem into an LLM. Fortunately, over the last two years we've done a lot of work to make sure crates.io as an application is less affected by these traffic spikes. Downloads now bypass crates.io entirely and go straight to the CDN, so the API is not hit by these bursts. In the past, this would have looked like a DDoS attack, with so many requests from so many sources that we couldn't handle it.
Jan David Nose : We've done a lot of backend work to keep our stack reliable, but it's definitely something that has changed the game over the last year. We can clearly see that crawlers are much more active than before.
Xander Cesari : That makes sense. I'm sure Fastly is working on this as well. Their business has to adapt to be robust to this new internet.
Jan David Nose : Exactly. For example, one of the conversations we're having right now is about docs.rs. It's still hosted on AWS behind CloudFront, but we're talking about putting it behind Fastly because through Fastly we get features like bot protection that can help keep crawlers out.
Jan David Nose : This is a good example of how our conversations have changed in the last six months. At the start of the year I did not think this would be a topic we would be discussing. We were focused on other things. For docs.rs we have long-term plans to rebuild the infrastructure that powers it, and I expected us to spend our energy there. But with the changes in the industry and everyone trying to accumulate as much data as possible, our priorities have shifted. The problems we face and the order in which we tackle them have changed.
Xander Cesari : And I assume as one of the few paid members of a mostly volunteer team, you often end up working on the fires, not the interesting next feature that might be more fun.
Jan David Nose : That is true, although it sounds a bit negative to say I only get to work on fires. Sometimes it feels like that because, as with any technology stack, there is a lot of maintenance overhead. We definitely pay that price on the infrastructure side.
Jan David Nose : Marco, for example, spent time this year going through all the servers we run, cataloging them, and making sure they're patched and on the latest operating system version. We updated our Ubuntu machines to the latest LTS. It feels a bit like busy work: you just have to do it because it's important and necessary, but it's not the most exciting project.
Jan David Nose : On the other hand, when it comes to things like CDN configuration and figuring out how bot protection features work and whether they are relevant to us, that is also genuinely interesting work. It lets us play with new tools vendors provide, and we're working on challenges that the wider industry is facing. How do you deal with this new kind of traffic? What are the implications of banning bots? How high is the risk of blocking real users? Sometimes someone just misconfigures a curl script, and from the outside it looks like they're crawling our site.
Jan David Nose : So it's an interesting field to work in, figuring out how we can use new features and address new challenges. That keeps it exciting even for us full-timers who do more of the "boring" work. We get to adapt alongside how the world around us is changing. If there's one constant, it's change.
Xander Cesari : Another ripped-from-the-headlines change around this topic is software supply chain security, and specifically xz-utils and the conversation around open source security. How much has that changed the landscape you work in?
Jan David Nose : The xz-utils compromise was scary. I don't want to call it a wake-up call, because we've been aware that supply chain security is a big issue and this was not the first compromise. But the way it happened felt very unsettling. You saw an actor spend a year and a half building social trust in an open source project and then using that to introduce a backdoor.
Jan David Nose : Thinking about that in the context of Rust: every team in the project talks about how we need more maintainers, how there's too much workload on the people who are currently contributing, and how Rust's growth puts strain on the organization as a whole. We want to be an open and welcoming project, and right now we also need to bring new people in. If someone shows up and says, "I'm willing to help, please onboard me," and they stick around for a year and then do something malicious, we would be susceptible to that. I don't think this is unique to Rust. This is an inherent problem in open source.
Xander Cesari : Yeah, it's antithetical to the culture.
Jan David Nose : Exactly. So we're trying to think through how we, as a project and as an ecosystem, deal with persistent threat actors who have the time and resources to play a long game. Paying someone to work full-time on open source for a year is a very different threat model than what we used to worry about.
Jan David Nose : I used to joke that the biggest threat to crates.io was me accidentally pulling the plug on a CDN. I think that has changed. Today the bigger threat is someone managing to insert malicious code into our releases, our supply chain, or crates.io itself. They could find ways to interfere with our systems in ways we're simply not prepared for, where, as a largely volunteer organization, we might be too slow to react to a new kind of attack.
Jan David Nose : Looking back over the last three years, this shift became very noticeable, especially after the first year. Traffic was doubling, Rust usage was going up a lot, and there were news stories about Rust being used in the Windows kernel, in Android, and in parts of iOS. Suddenly Rust is everywhere. If you want to attack "everywhere," going after Rust becomes attractive. That definitely puts a target on our back and has changed the game.
Jan David Nose : I'm very glad the Rust Foundation has a dedicated security engineer who has done a lot of threat modeling and worked with us on infrastructure security. There's also a lot of work happening specifically around the crates ecosystem and preventing supply chain attacks through crates. Luckily, it's not something the Infra side has to solve alone. But it is getting a lot more attention, and I think it will be one of the big challenges for the future: how a mostly volunteer-run project keeps up with this looming threat.
Xander Cesari : And it is the industry at large. This is not a unique problem to the Rust package manager. All package registries, from Python to JavaScript to Nix, deal with this. Is there an industry-wide conversation about how to help each other out and share learnings?
Jan David Nose : Yeah, there's definitely a lot happening. I have to smile a bit because, with a lot of empathy but also a bit of relief, we sometimes share news when another package ecosystem gets compromised. It is a reminder that it's not just us, sometimes it's npm this time.
Jan David Nose : We really try to stay aware of what's happening in the industry and in other ecosystems: what new threats or attack vectors are emerging, what others are struggling with. Sometimes that is security; sometimes it's usability. A year and a half ago, for example, npm had the "everything" package where someone declared every package on npm as a dependency, which blew up the index. We look at incidents like that and ask whether crates.io would struggle with something similar and whether we need to make changes.
Jan David Nose : On the security side we also follow closely what others are doing. In the packaging community, the different package managers are starting to come together more often to figure out which problems everyone shares. There is a bit of a joke that we're all just shipping files over the internet. Whether it's an npm package or a crate, ultimately it's a bunch of text files in a zip. So from an infrastructure perspective the problems are very similar.
Jan David Nose : These communities are now talking more about what problems PyPI has, what problems crates.io has, what is happening in the npm space. One thing every ecosystem has seen, even the very established ones, is a big increase in bandwidth needs, largely connected to the emergence of AI. PyPI, for example, publishes download charts, and it's striking. Python had steady growth, slightly exponential but manageable, for many years. Then a year or two ago you see a massive hockey stick. People discovered that PyPI was a great distribution system for their models. There were no file size limits at the time, so you could publish precompiled GPU models there.
Jan David Nose : That pattern shows up everywhere. It has kicked off a new era for packaging ecosystems to come together and ask: in a time where open source is underfunded and traffic needs keep growing, how can we act together to find solutions to these shared problems? crates.io is part of those conversations. It's interesting to see how we, as an industry, share very similar problems across ecosystems: Python, npm, Rust, and others.
Xander Cesari : With a smaller, more hobbyist-focused community, you can have relaxed rules about what goes into your package manager. Everyone knows the spirit of what you're trying to do and you can get away without a lot of hard rules and consequences. Is the Rust world going to have to think about much harder rules around package sizes, allowed files, and how you're allowed to distribute things?
Jan David Nose : Funnily enough, we're coming at this from the opposite direction. Compared to other ecosystems, we've always had fairly strict limits. A crate can be at most around ten megabytes in size. There are limits on what kinds of files you can put in there. Ironically, those limits have helped us keep traffic manageable in this period.
Jan David Nose : At the same time, there is a valid argument that these limits may not serve all Rust use cases. There are situations where you might want to include something precompiled in your crate because it is hard to compile locally, takes a very long time, or depends on obscure headers no one has. I don't think we've reached the final state of what the crates.io package format should look like.
Jan David Nose : That has interesting security implications. When we talk about precompiled binaries or payloads, we all have that little voice in our head every time we see a curl | sh command: can I trust this? The same is true if you download a crate that contains a precompiled blob you cannot easily inspect.
Jan David Nose : The Rust Foundation is doing a lot of work and research here. My colleague Adam, who works on the crates.io team, is working behind the scenes to answer some of these questions. For example: what kind of security testing can we do before we publish crates to make sure they are secure and don't contain malicious payloads? How do we surface this information? How do we tell a publisher that they included files that are not allowed? And from the user's perspective, when you visit crates.io, how can you judge how well maintained and how secure a crate is?
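As a rough illustration of the kind of pre-publish checks discussed here, a sketch in Rust might enforce a size cap and reject certain file types before accepting an upload. The ten-megabyte figure echoes the limit mentioned earlier in the conversation; the specific file rules and function names are invented for the example and are not crates.io's real policy.

```rust
// Illustrative pre-publish validation, not crates.io's actual code.
const MAX_CRATE_BYTES: u64 = 10 * 1024 * 1024; // ~10 MB cap, as discussed above

fn check_upload(size: u64, files: &[&str]) -> Result<(), String> {
    if size > MAX_CRATE_BYTES {
        return Err(format!(
            "crate is {size} bytes, above the {MAX_CRATE_BYTES} byte limit"
        ));
    }
    for f in files {
        // Hypothetical rule: reject precompiled binaries in the archive,
        // since they cannot be easily inspected by users.
        if f.ends_with(".so") || f.ends_with(".dll") || f.ends_with(".dylib") {
            return Err(format!("disallowed file type: {f}"));
        }
    }
    Ok(())
}

fn main() {
    assert!(check_upload(1024, &["src/lib.rs", "Cargo.toml"]).is_ok());
    assert!(check_upload(1024, &["src/lib.rs", "libfoo.so"]).is_err());
    assert!(check_upload(20 * 1024 * 1024, &["src/lib.rs"]).is_err());
    println!("all example checks behaved as expected");
}
```

The interesting design question raised in the interview is exactly which rules belong in a gate like this, and how the results get surfaced to publishers and to users browsing crates.io.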
Jan David Nose : Those conversations are happening quite broadly in the ecosystem. On the Infra side we're far down the chain. Ultimately we integrate with whatever security scanning infrastructure crates.io builds. We don't have to do the security research ourselves, but we do have to support it.
Jan David Nose : There's still a lot that needs to happen. As awesome as Rust already is, and as much as I love using it, it's important to remember that we're still a very young ecosystem. Python is now very mature and stable, but it's more than 25 years old. Rust is about ten years old as a stable language. We still have a lot to learn and figure out.
Xander Cesari : Is the Rust ecosystem running into problems earlier than other languages because we're succeeding at being foundational software and Rust is used in places that are even more security-critical than other languages, so you have to hit these hard problems earlier than the Python world did?
Jan David Nose : I think that's true. Other ecosystems probably had more time to mature and answer these questions. We're operating on a more condensed timeline. There is also simply more happening now. Open source has been very successful; it's everywhere. That means there are more places where security is critical.
Jan David Nose : So this comes with the success of open source, with what is happening in the ecosystem at large, and with the industry we're in. It does mean we have less time to figure some things out. On the flip side, we also have less baggage. We have less technical debt and fifteen fewer years of accumulated history. That lets us be on the forefront in some areas, like how a package ecosystem can stay secure and what infrastructure a 21st century open source project needs.
Jan David Nose : Here I really want to call out the Rust Foundation. They actively support this work: hiring people like Marco and me to work full-time on infrastructure, having Walter and Adam focus heavily on security, and as an organization taking supply chain considerations very seriously. The Foundation also works with other ecosystems so we can learn and grow together and build a better industry.
Jan David Nose : Behind the scenes, colleagues constantly work to open doors for us as a relatively young language, so we can be part of those conversations and sit at the table with other ecosystems. That lets us learn from what others have already gone through and also help shape where things are going. Sustainability is a big part of that: how do we fund the project long term? How do we make sure we have the human resources and financial resources to run the infrastructure and support maintainers? I definitely underestimated how much of my job would be relationship management and budget planning, making sure credits last until new ones arrive.
Xander Cesari : Most open core business models give away the thing that doesn't cost much (the software) and charge for the thing that scales with use (the service). In Rust's case, it's all free, which is excellent for adoption, but it must require a very creative perspective on the business side.
Jan David Nose : Yeah, and that's where different forces pull in opposite directions. As an open source project, we want everyone to be able to use Rust for free. We want great user experience. When we talk about downloads, there are ways for us to make them much cheaper, but that might mean hosting everything in a single geographic location. Then everyone, including people in Australia, would have to download from, say, Europe, and their experience would get much worse.
Jan David Nose : Instead, we want to use services that are more expensive but provide a better experience for Rust users. There's a real tension there. On one side we want to do the best we can; on the other side we need to be realistic that this costs money.
Xander Cesari : I had been thinking of infrastructure as a binary: it either works or it doesn't. But you're right, it's a slider. You can pick how much money you want to spend and what quality of service you get. Are there new technologies coming, either for the Rust Infra Team or the packaging world in general, to help with these security problems? New sandboxing technologies or higher-level support?
Jan David Nose : A lot of people are working on this problem from different angles. Internally we've talked a lot about it, especially in the context of Crater. Crater pulls in all of those crates to build them and get feedback from the Rust compiler. That means if someone publishes malicious code, we will download it and build it.
Jan David Nose : In Rust this is a particular challenge because build scripts can essentially do anything on your machine. For us that means we need strong sandboxing. We've built our own sandboxing framework so every crate build runs in an isolated container, which prevents malicious code from escaping and messing with the host systems.
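A minimal sketch of that idea, assuming a Docker-based sandbox: each build runs in a throwaway container with networking disabled and resource caps, so a malicious build script cannot reach the network or exhaust the host. The image name, flags, and limits here are illustrative assumptions, not Crater's actual configuration.

```rust
use std::process::Command;

// Construct (but do not run) a container invocation that isolates a
// single `cargo build`. Illustrative only; Crater uses its own
// sandboxing framework rather than this exact setup.
fn sandboxed_build(crate_dir: &str) -> Command {
    let mut cmd = Command::new("docker");
    cmd.args([
        "run",
        "--rm",              // throw the container away afterwards
        "--network=none",    // build scripts get no network access
        "--memory=2g",       // cap memory so a runaway build can't hurt the host
        "--cpus=1",
        "-e", "CARGO_TARGET_DIR=/tmp/target", // write artifacts inside the container
        "-v",
    ]);
    cmd.arg(format!("{crate_dir}:/crate:ro")); // mount the source read-only
    cmd.args(["rust:latest", "cargo", "build", "--manifest-path=/crate/Cargo.toml"]);
    cmd
}

fn main() {
    // Inspect the command we would run for a hypothetical crate checkout.
    let cmd = sandboxed_build("/tmp/some-crate");
    println!("{:?}", cmd);
}
```

The key property is that the untrusted code (the crate's build script) only ever executes inside the container boundary; the host orchestrator merely launches containers and collects results.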
Jan David Nose : We feel that pain in Crater, but if we can solve it in a way that isn't exclusive to Craterâif it also protects user machines from the same vulnerabilitiesâthat would be ideal. People like Walter on the Foundation side are actively working on that. I'm sure there are conversations in the Cargo and crates teams as well, because every team that deals with packages sees a different angle of the problem. We all have to come together to solve it, and there is a lot of interesting work happening in that area.
Xander Cesari : I hope help is coming.
Jan David Nose : I'm optimistic.
Xander Cesari : We have this exponential curve with traffic and everything else. It seems like at some point it has to taper off.
Jan David Nose : We'll see. Rust is a young language. I don't know when that growth will slow down. I think there's a good argument that it will continue for quite a while as adoption grows.
Jan David Nose : Being at a conference like RustConf, it's exciting to see how the mix of companies has changed over time. We had a talk from Rivian on how they use Rust in their cars. We've heard from other car manufacturers exploring it. Rust is getting into more and more applications that a few years ago would have been hard to imagine or where the language simply wasn't mature enough yet.
Jan David Nose : As that continues, I think we'll see new waves of growth that sustain the exponential curve we currently have, because we're moving into domains that are new for us. It's amazing to see who is talking about Rust and how they're using it, sometimes in areas like space that you wouldn't expect.
Jan David Nose : I'm very optimistic about Rust's future. With this increase in adoption, we'll see a lot of interesting lessons about how to use Rust and a lot of creative ideas from people building with it. With more corporate adoption, I also expect a new wave of investment into the ecosystem: companies paying people to work full-time on different parts of Rust, both in the ecosystem and in the core project. I'm very curious what the next ten years will look like, because I genuinely don't know.
Xander Cesari : The state of Rust right now does feel a bit like the dog that caught the car and now doesn't know what to do with it.
Jan David Nose : Yeah, I think that's a good analogy. Suddenly we're in a situation where we realize we haven't fully thought through every consequence of success. It's fascinating to see how the challenges change every year. We keep running into new growing pains where something that wasn't an issue a year ago suddenly becomes one because growth keeps going up.
Jan David Nose : We're constantly rebuilding parts of our infrastructure to keep up with that growth, and I don't see that stopping soon. As a user, that makes me very excited. With the language and the ecosystem growing at this pace, there are going to be very interesting things coming that I can't predict today.
Jan David Nose : For the project, it also means there are real challenges: financing the infrastructure we need, finding maintainers and contributors, and creating a healthy environment where people can work without burning out. There is a lot of work to be done, but it's an exciting place to be.
Xander Cesari : Well, thank you for all your work keeping those magic Cargo commands I can type into my terminal just working in the background. If there's any call to action from this interview, it's that if you're a company using Rust, maybe think about donating to keep the Infra Team working.
Jan David Nose : We always love new Rust Foundation members. Especially if you're a company, that's one of the best ways to support the work we do. Membership gives us a budget we can use either to fund people who work full-time on the project or to fill gaps in our infrastructure sponsorship where we don't get services for free and have to pay real money.
Jan David Nose : And if you're not a company, we're always looking for people to help out. The Infra Team has a lot of Rust-based bots and other areas where people can contribute relatively easily.
Xander Cesari : Small scoped bots that you can wrap your head around and help out with.
Jan David Nose : Exactly. It is a bit harder on the Infra side because we can't give people access to our cloud infrastructure. There are areas where it's simply not possible to contribute as a volunteer because you can't have access to the production systems. But there is still plenty of other work that can be done.
Jan David Nose : Like every other team in the project, we're a bit short-staffed. So when you're at conferences, come talk to me or Marco. We have work to do.
Xander Cesari : Well, thank you for doing the work that keeps Rust running.
Jan David Nose : I'm happy to.
Xander Cesari : Awesome. Thank you so much.