- About KeePassXC’s Code Quality Control – KeePassXC
- How to build a remarkable command palette
- Leaderboard - compar:IA, the AI chatbot arena
- Who needs Graphviz when you can build it yourself? | SpiderMonkey JavaScript/WebAssembly Engine
- Automerge
- November 18, 2025
-
🔗 r/wiesbaden Welche Location wäre das in Wiesbaden? rss
Mainz is already discussing this too, what do you think?
submitted by /u/valentino_nero
[link] [comments] -
🔗 Simon Willison Trying out Gemini 3 Pro with audio transcription and a new pelican benchmark rss
Google released Gemini 3 Pro today. Here's the announcement from Sundar Pichai, Demis Hassabis, and Koray Kavukcuoglu, their developer blog announcement from Logan Kilpatrick, the Gemini 3 Pro Model Card, and their collection of 11 more articles. It's a big release!
I had a few days of preview access to this model via AI Studio. The best way to describe it is that it's Gemini 2.5 upgraded to match the leading rival models.
Gemini 3 has the same underlying characteristics as Gemini 2.5. The knowledge cutoff is the same (January 2025). It accepts 1 million input tokens, can output up to 64,000 tokens, and has multimodal inputs across text, images, audio, and video.
Benchmarks
Google's own reported numbers (in the model card) show it scoring slightly higher than Claude Sonnet 4.5 and GPT-5.1 on most of the standard benchmarks. As always I'm waiting for independent confirmation, but I have no reason to believe those numbers are inaccurate.

Pricing
In terms of pricing it's a little more expensive than Gemini 2.5 but still cheaper than Claude Sonnet 4.5. Here's how it fits in with those other leading models:

| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| GPT-5.1 | $1.25 | $10.00 |
| Gemini 2.5 Pro | ≤ 200k tokens: $1.25; > 200k tokens: $2.50 | ≤ 200k tokens: $10.00; > 200k tokens: $15.00 |
| Gemini 3 Pro | ≤ 200k tokens: $2.00; > 200k tokens: $4.00 | ≤ 200k tokens: $12.00; > 200k tokens: $18.00 |
| Claude Sonnet 4.5 | ≤ 200k tokens: $3.00; > 200k tokens: $6.00 | ≤ 200k tokens: $15.00; > 200k tokens: $22.50 |
| Claude Opus 4.1 | $15.00 | $75.00 |

Trying it out against a complex image
That screenshot of the benchmarks from above looked like a good test for Gemini 3's multimodal support. I fed it that image URL and asked it to generate alt text for the image:
```
llm -m gemini-3-pro-preview \
  -a https://static.simonwillison.net/static/2025/gemini-3-benchmarks.jpg \
  'Alt text for this image, include all figures and make them comprehensible to a screen reader user'
```

Here's what I got back:
A comprehensive table comparing four AI models across various performance benchmarks. The columns listed are: Benchmark Name, Description, Gemini 3 Pro (highlighted in blue), Gemini 2.5 Pro, Claude Sonnet 4.5, and GPT-5.1.
Humanity's Last Exam (Academic reasoning)
- No tools: Gemini 3 Pro 37.5%, Gemini 2.5 Pro 21.6%, Claude Sonnet 4.5 13.7%, GPT-5.1 26.5%.
- With search and code execution: Gemini 3 Pro 45.8% (others have no data).
ARC-AGI-2 (Visual reasoning puzzles; ARC Prize Verified)
- Gemini 3 Pro 31.1%, Gemini 2.5 Pro 4.9%, Claude Sonnet 4.5 13.6%, GPT-5.1 17.6%.
GPQA Diamond (Scientific knowledge; No tools)
- Gemini 3 Pro 91.9%, Gemini 2.5 Pro 86.4%, Claude Sonnet 4.5 83.4%, GPT-5.1 88.1%.
AIME 2025 (Mathematics)
- No tools: Gemini 3 Pro 95.0%, Gemini 2.5 Pro 88.0%, Claude Sonnet 4.5 87.0%, GPT-5.1 94.0%.
- With code execution: Gemini 3 Pro 100%, Claude Sonnet 4.5 100%.
MathArena Apex (Challenging Math Contest problems)
- Gemini 3 Pro 23.4%, Gemini 2.5 Pro 0.5%, Claude Sonnet 4.5 1.6%, GPT-5.1 1.0%.
MMMU-Pro (Multimodal understanding and reasoning)
- Gemini 3 Pro 81.0%, Gemini 2.5 Pro 68.0%, Claude Sonnet 4.5 68.0%, GPT-5.1 76.0%.
ScreenSpot-Pro (Screen understanding)
- Gemini 3 Pro 72.7%, Gemini 2.5 Pro 11.4%, Claude Sonnet 4.5 36.2%, GPT-5.1 3.5%.
CharXiv Reasoning (Information synthesis from complex charts)
- Gemini 3 Pro 81.4%, Gemini 2.5 Pro 69.6%, Claude Sonnet 4.5 68.5%, GPT-5.1 69.5%.
OmniDocBench 1.5 (OCR; Overall Edit Distance, lower is better)
- Gemini 3 Pro 0.115, Gemini 2.5 Pro 0.145, Claude Sonnet 4.5 0.145, GPT-5.1 0.147.
Video-MMMU (Knowledge acquisition from videos)
- Gemini 3 Pro 87.6%, Gemini 2.5 Pro 83.6%, Claude Sonnet 4.5 77.8%, GPT-5.1 80.4%.
LiveCodeBench Pro (Competitive coding problems; Elo Rating, higher is better)
- Gemini 3 Pro 2,439; Gemini 2.5 Pro 1,775; Claude Sonnet 4.5 1,418; GPT-5.1 2,243.
Terminal-Bench 2.0 (Agentic terminal coding; Terminus-2 agent)
- Gemini 3 Pro 54.2%, Gemini 2.5 Pro 32.6%, Claude Sonnet 4.5 42.8%, GPT-5.1 47.6%.
SWE-Bench Verified (Agentic coding; Single attempt)
- Gemini 3 Pro 76.2%, Gemini 2.5 Pro 59.6%, Claude Sonnet 4.5 77.2%, GPT-5.1 76.3%.
t2-bench (Agentic tool use)
- Gemini 3 Pro 85.4%, Gemini 2.5 Pro 54.9%, Claude Sonnet 4.5 84.7%, GPT-5.1 80.2%.
Vending-Bench 2 (Long-horizon agentic tasks; Net worth (mean), higher is better)
- Gemini 3 Pro $5,478.16; Gemini 2.5 Pro $573.64; Claude Sonnet 4.5 $3,838.74; GPT-5.1 $1,473.43.
FACTS Benchmark Suite (Held out internal grounding, parametric, MM, and search retrieval benchmarks)
- Gemini 3 Pro 70.5%, Gemini 2.5 Pro 63.4%, Claude Sonnet 4.5 50.4%, GPT-5.1 50.8%.
SimpleQA Verified (Parametric knowledge)
- Gemini 3 Pro 72.1%, Gemini 2.5 Pro 54.5%, Claude Sonnet 4.5 29.3%, GPT-5.1 34.9%.
MMMLU (Multilingual Q&A)
- Gemini 3 Pro 91.8%, Gemini 2.5 Pro 89.5%, Claude Sonnet 4.5 89.1%, GPT-5.1 91.0%.
Global PIQA (Commonsense reasoning across 100 Languages and Cultures)
- Gemini 3 Pro 93.4%, Gemini 2.5 Pro 91.5%, Claude Sonnet 4.5 90.1%, GPT-5.1 90.9%.
MRCR v2 (8-needle) (Long context performance)
- 128k (average): Gemini 3 Pro 77.0%, Gemini 2.5 Pro 58.0%, Claude Sonnet 4.5 47.1%, GPT-5.1 61.6%.
- 1M (pointwise): Gemini 3 Pro 26.3%, Gemini 2.5 Pro 16.4%, Claude Sonnet 4.5 (not supported), GPT-5.1 (not supported).
I have not checked every line of this but a loose spot-check looks accurate to me.
That prompt took 1,105 input and 3,901 output tokens, at a cost of 5.6824 cents.
I ran this follow-up prompt:
```
llm -c 'Convert to JSON'
```

You can see the full output here, which starts like this:
{ "metadata": { "columns": [ "Benchmark", "Description", "Gemini 3 Pro", "Gemini 2.5 Pro", "Claude Sonnet 4.5", "GPT-5.1" ] }, "benchmarks": [ { "name": "Humanity's Last Exam", "description": "Academic reasoning", "sub_results": [ { "condition": "No tools", "gemini_3_pro": "37.5%", "gemini_2_5_pro": "21.6%", "claude_sonnet_4_5": "13.7%", "gpt_5_1": "26.5%" }, { "condition": "With search and code execution", "gemini_3_pro": "45.8%", "gemini_2_5_pro": null, "claude_sonnet_4_5": null, "gpt_5_1": null } ] },Analyzing a city council meeting
To try it out against an audio file I extracted the 3h33m of audio from the video Half Moon Bay City Council Meeting - November 4, 2025. I used yt-dlp to get that audio:

```
yt-dlp -x --audio-format m4a 'https://www.youtube.com/watch?v=qgJ7x7R6gy0'
```

That gave me a 74MB m4a file, which I ran through Gemini 3 Pro like this:

```
llm -m gemini-3-pro-preview -a /tmp/HMBCC\ 11⧸4⧸25\ -\ Half\ Moon\ Bay\ City\ Council\ Meeting\ -\ November\ 4,\ 2025\ \[qgJ7x7R6gy0\].m4a 'Output a Markdown transcript of this meeting. Include speaker names and timestamps. Start with an outline of the key meeting sections, each with a title and summary and timestamp and list of participating names. Note in bold if anyone raised their voices, interrupted each other or had disagreements. Then follow with the full transcript.'
```

That failed with an "Internal error encountered" message, so I shrunk the file down to a more manageable 38MB using ffmpeg:

```
ffmpeg -i "/private/tmp/HMB.m4a" -ac 1 -ar 22050 -c:a aac -b:a 24k "/private/tmp/HMB_compressed.m4a"
```

Then ran it again like this (for some reason I had to use `--attachment-type` this time):

```
llm -m gemini-3-pro-preview --attachment-type /tmp/HMB_compressed.m4a 'audio/aac' 'Output a Markdown transcript of this meeting. Include speaker names and timestamps. Start with an outline of the key meeting sections, each with a title and summary and timestamp and list of participating names. Note in bold if anyone raised their voices, interrupted each other or had disagreements. Then follow with the full transcript.'
```

This time it worked! The full output is here, but it starts like this:
Here is the transcript of the Half Moon Bay City Council meeting.
Meeting Outline
1. Call to Order, Updates, and Public Forum
- Summary: Mayor Brownstone calls the meeting to order. City Manager Chidester reports no reportable actions from the closed session. Announcements are made regarding food insecurity volunteers and the Diwali celebration. During the public forum, Councilmember Penrose (speaking as a citizen) warns against autocracy. Citizens speak regarding lease agreements, downtown maintenance, local music events, and homelessness outreach statistics.
- Timestamp: 00:00:00 - 00:13:25
- Participants: Mayor Brownstone, Matthew Chidester, Irma Acosta, Deborah Penrose, Jennifer Moore, Sandy Vella, Joaquin Jimenez, Anita Rees.
2. Consent Calendar
- Summary: The Council approves minutes from previous meetings and a resolution authorizing a licensing agreement for Seahorse Ranch. Councilmember Johnson corrects a pull request regarding abstentions on minutes.
- Timestamp: 00:13:25 - 00:15:15
- Participants: Mayor Brownstone, Councilmember Johnson, Councilmember Penrose, Vice Mayor Ruddick, Councilmember Nagengast.
3. Ordinance Introduction: Commercial Vitality (Item 9A)
- Summary: Staff presents a new ordinance to address neglected and empty commercial storefronts, establishing maintenance and display standards. Councilmembers discuss enforcement mechanisms, window cleanliness standards, and the need for objective guidance documents to avoid subjective enforcement.
- Timestamp: 00:15:15 - 00:30:45
- Participants: Karen Decker, Councilmember Johnson, Councilmember Nagengast, Vice Mayor Ruddick, Councilmember Penrose.
4. Ordinance Introduction: Building Standards & Electrification (Item 9B)
- Summary: Staff introduces updates to the 2025 Building Code. A major change involves repealing the city's all-electric building requirement due to the 9th Circuit Court ruling (California Restaurant Association v. City of Berkeley). Public speaker Mike Ferreira expresses strong frustration and disagreement with "unelected state agencies" forcing the City to change its ordinances.
- Timestamp: 00:30:45 - 00:45:00
- Participants: Ben Corrales, Keith Weiner, Joaquin Jimenez, Jeremy Levine, Mike Ferreira, Councilmember Penrose, Vice Mayor Ruddick.
5. Housing Element Update & Adoption (Item 9C)
- Summary: Staff presents the 5th draft of the Housing Element, noting State HCD requirements to modify ADU allocations and place a measure on the ballot regarding the "Measure D" growth cap. There is significant disagreement from Councilmembers Ruddick and Penrose regarding the State's requirement to hold a ballot measure. Public speakers debate the enforceability of Measure D. Mike Ferreira interrupts the vibe to voice strong distaste for HCD's interference in local law. The Council votes to adopt the element but strikes the language committing to a ballot measure.
- Timestamp: 00:45:00 - 01:05:00
- Participants: Leslie (Staff), Joaquin Jimenez, Jeremy Levine, Mike Ferreira, Councilmember Penrose, Vice Mayor Ruddick, Councilmember Johnson.
Transcript
Mayor Brownstone [00:00:00] Good evening everybody and welcome to the November 4th Half Moon Bay City Council meeting. As a reminder, we have Spanish interpretation services available in person and on Zoom.
Victor Hernandez (Interpreter) [00:00:35] Thank you, Mr. Mayor, City Council, all city staff, members of the public. [Spanish instructions provided regarding accessing the interpretation channel on Zoom and in the room.] Thank you very much.
Those first two lines of the transcript already illustrate something interesting here: Gemini 3 Pro chose NOT to include the exact text of the Spanish instructions, instead summarizing them as "[Spanish instructions provided regarding accessing the interpretation channel on Zoom and in the room.]".
I haven't spot-checked the entire 3hr33m meeting, but I've confirmed that the timestamps do not line up. The transcript closes like this:
Mayor Brownstone [01:04:00] Meeting adjourned. Have a good evening.
That actually happens at 3h31m5s and the mayor says:
Okay. Well, thanks everybody, members of the public for participating. Thank you for staff. Thank you to fellow council members. This meeting is now adjourned. Have a good evening.
I'm disappointed about the timestamps, since mismatches there make it much harder to jump to the right point and confirm that the summarized transcript is an accurate representation of what was said.
This took 320,087 input tokens and 7,870 output tokens, for a total cost of $1.42.
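That figure is consistent with the over-200k-token pricing tier from the table earlier, since the prompt exceeded 200k tokens. As a rough sanity check (a sketch added here, not part of the original post):

```rust
// Recompute the reported cost from the token counts, using Gemini 3 Pro's
// "> 200k tokens" tier: $4.00 per 1M input tokens, $18.00 per 1M output tokens.
fn main() {
    let input_tokens = 320_087.0_f64;
    let output_tokens = 7_870.0_f64;
    let cost = input_tokens * 4.00 / 1e6 + output_tokens * 18.00 / 1e6;
    println!("${cost:.2}"); // prints "$1.42"
}
```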
And a new pelican benchmark
Gemini 3 Pro has a new concept of a "thinking level" which can be set to low or high (and defaults to high). I tried my classic Generate an SVG of a pelican riding a bicycle prompt at both levels.
Here's low - Gemini decided to add a jaunty little hat (with a comment in the SVG that says `<!-- Hat (Optional Fun Detail) -->`):
And here's high. This is genuinely an excellent pelican, and the bicycle frame is at least the correct shape:

Honestly though, my pelican benchmark is beginning to feel a little bit too basic. I decided to upgrade it. Here's v2 of the benchmark, which I plan to use going forward:
> Generate an SVG of a California brown pelican riding a bicycle. The bicycle must have spokes and a correctly shaped bicycle frame. The pelican must have its characteristic large pouch, and there should be a clear indication of feathers. The pelican must be clearly pedaling the bicycle. The image should show the full breeding plumage of the California brown pelican.

For reference, here's a photo I took of a California brown pelican recently (sadly without a bicycle):

Here's Gemini 3 Pro's attempt at high thinking level for that new prompt:

And for good measure, here's that same prompt against GPT-5.1 - which produced this dumpy little fellow:

And Claude Sonnet 4.5, which didn't do quite as well:

None of the models seem to have caught on to the crucial detail that the California brown pelican is not, in fact, brown.
-
🔗 sacha chua :: living an awesome life 2025-11-17 Emacs news rss
- Upcoming events (iCal file, Org):
- London Emacs (in person): Emacs London meetup https://www.meetup.com/london-emacs-hacking/events/311781816/ Tue Nov 18 1800 Europe/London
- M-x Research: TBA https://m-x-research.github.io/ Wed Nov 19 0800 America/Vancouver - 1000 America/Chicago - 1100 America/Toronto - 1600 Etc/GMT - 1700 Europe/Berlin - 2130 Asia/Kolkata – Thu Nov 20 0000 Asia/Singapore
- FSF Hackathon https://www.fsf.org/events/fsf40-hackathon-1 Fri Nov 21 0700 America/Vancouver - 0900 America/Chicago - 1000 America/Toronto - 1500 Etc/GMT - 1600 Europe/Berlin - 2030 Asia/Kolkata - 2300 Asia/Singapore
- Emacs APAC: Emacs APAC meetup (virtual) https://emacs-apac.gitlab.io/announcements/ Sat Nov 22 0030 America/Vancouver - 0230 America/Chicago - 0330 America/Toronto - 0830 Etc/GMT - 0930 Europe/Berlin - 1400 Asia/Kolkata - 1630 Asia/Singapore
- Fundamental training (2 days) - Paris, in person https://emacsboost.com/ - Monday 24 to Tuesday 25 November 2025
- Advanced training (2 days) - Paris, in person https://emacsboost.com/ - Monday 27 to Tuesday 28 November 2025
- Emacs Berlin (hybrid, in English) https://emacs-berlin.org/ Wed Nov 26 0930 America/Vancouver - 1130 America/Chicago - 1230 America/Toronto - 1730 Etc/GMT - 1830 Europe/Berlin - 2300 Asia/Kolkata – Thu Nov 27 0130 Asia/Singapore
- EmacsConf will be in less than a month!
- Beginner:
- GNU Emacs do Zero - Episódio 02 (19:18)
- Emacs configuration:
- Emacs Lisp:
- marcoxa/magiciel: Emacs Lisp implementation of UN*X magic(5) and file(1) handling. (@marcoxa@mastodon.social)
- Creating an Emacs Package from Concept to MELPA (Part 1) (55:56) - rare-words.el
- Ep712 Emacs Lisp, Move File to Parent Dir. car cdr cons, Supreme Functional Programing No Variable (01:11:25)
- Appearance:
- Navigation:
- Writing:
- Org Mode:
- TAONAW - Emacs and Org Mode: Handling project in Emacs - the 2025 version (Irreal)
- Marcin Borkowski: Showing size of Org mode subtrees
- TAONAW: org-clone-subtree-with-time-shift (Irreal)
- Emacs carnival: Ode to org Babel (@ericsfraga@fediscience.org)
- How to disable GNOME notifications during org-pomodoro focus sessions
- (Update) org-supertag: Fixes Database Corruption & Improves Node View UX
- turning Org Social posts into RSS feeds
- Org development: ol-bbdb: Pre-populate the description for BBDB links
- Completion:
- Coding:
- IDEmacs: aimed at Common Lisp and Scheme programmers, similar to Portacle or Guile Studio. (HN)
- Simple Rust Guix Emacs development environment (Reddit)
- eglot-header-line.el: Show language server protocol breadcrumb information in the Emacs header-line using Eglot.
- Tip about changing magit-list-refs-sortby to sort by commit date
- Shells:
- Mail, news, and chat:
- Fun:
- AI:
- Community:
- Other:
- Knockknock - Unobtrusive notifications for Emacs (Reddit)
- Add a fuzzy clock to your modeline (Reddit)
- Readonly Characters Are a Big Deal (lobste.rs)
- ZeniesQis/screenshot-capture: An Emacs package to do Screen capture and Screenshots using ffmpeg, image-magick and scrot. (@Zenie@piaille.fr)
- New package: dag-draw.el (draw DAGs in ASCII, SVG, DOT) on melpa (Reddit)
- Easy Setup for Drew Adams' Essential Emacs Packages on Emacswiki - YouTube
- A huge increase in windows EMACS startup speed
- Emacs development:
- emacs-devel:
- [PATCH 0/2] Proper XOAUTH2 support for smtpmail (Gnus pending) - Arsen Arsenović
- Re: MS Windows menu lifecycle - Eli Zaretskii some notes on how menus are handled on Windows
- New hideshow option 'hs-cycle-filter' for visibility-cycling with 'TAB'
- Improve foldout-exit-fold with negative arg (bug#77370)
- ; * etc/NEWS (Ibuffer): Announce new faces. (Bug#79786)
- VC revert commands: Facilities to entirely delete revisions
- Revised bindings for diff-revert-and-kill-hunk
- emacs-devel:
- New packages:
- cabal-mode: Support for Cabal packages (MELPA)
- gptel-agent: Agentic LLM use for gptel (MELPA)
- org-tag-tree: Define Org-mode tag hierarchies from Org subtrees (MELPA)
- trailing-newline-indicator: Show an indicator for the trailing newline (MELPA)
Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!
You can e-mail me at sacha@sachachua.com.
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release

## New releases

- [fwhunt-ida](https://github.com/binarly-io/fwhunt-ida): 1.0.2 -
🔗 r/LocalLLaMA Gemini 3 is launched rss
submitted by /u/Several-Republic-609
[link] [comments] -
🔗 r/LocalLLaMA My local AI server is up and running, while ChatGPT and Claude are down due to Cloudflare's outage. Take that, big tech corps! rss
Local servers for the win!
submitted by /u/alex_bit_
[link] [comments] -
🔗 seanmonstar hyper User Survey 2025 rss
I’m excited to announce the inaugural hyper user survey!
Part of being user-driven is knowing what our users want, what they’re trying to do, and then we can figure out how best to help.
This survey should only take about 5 minutes, no questions are required, and responses are anonymous. The survey will be open for a month, until December 17th, and then we’ll analyze the results and share with you all what we find.
Thanks for taking the time to help make hyper better!
-
🔗 r/reverseengineering MetroY Pro VS Sermoon S1 Shiny Object rss
submitted by /u/Few-Consideration483
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +2 releases rss
sync repo: +1 plugin, +2 releases

## New plugins

- [idapcode](https://github.com/binarly-io/idapcode) (1.0.1, 1.0.0) -
🔗 Rust Blog Google Summer of Code 2025 results rss
As we announced previously this year, the Rust Project participated in Google Summer of Code (GSoC) for the second time. Almost twenty contributors have been working very hard on their projects for several months. Same as last year, the projects had various durations, so some of them ended in September, while the last ones concluded in the middle of November. Now that the final reports of all projects have been submitted, we are happy to announce that 18 out of 19 projects were successful! We had a very large number of projects this year, so we consider this number of successfully finished projects to be a great result.
We had awesome interactions with our GSoC contributors over the summer, and through a video call, we also had a chance to see each other and discuss the accepted GSoC projects. Our contributors have learned a lot of new things and collaborated with us on making Rust better for everyone, and we are very grateful for all their contributions! Some of them have even continued contributing after their project has ended, and we hope to keep working with them in the future, to further improve open-source Rust software. We would like to thank all our Rust GSoC 2025 contributors. You did a great job!
Same as last year, Google Summer of Code 2025 was overall a success for the Rust Project, this time with more than double the number of projects. We think that GSoC is a great way of introducing new contributors to our community, and we are looking forward to participating in GSoC (or similar programs) again in the near future. If you are interested in becoming a (GSoC) contributor, check out our GSoC project idea list and our guide for new contributors.
Below you can find a brief summary of our GSoC 2025 projects. You can find more information about the original goals of the projects here. For easier navigation, here is an index of the project descriptions in alphabetical order:
- ABI/Layout handling for the automatic differentiation feature by Marcelo Domínguez
- Add safety contracts by Dawid Lachowicz
- Bootstrap of rustc with rustc_codegen_gcc by Michał Kostrubiec
- Cargo: Build script delegation by Naman Garg
- Distributed and resource-efficient verification by Jiping Zhou
- Enable Witness Generation in cargo-semver-checks by Talyn Veugelers
- Extend behavioural testing of std::arch intrinsics by Madhav Madhusoodanan
- Implement merge functionality in bors by Sakibul Islam
- Improve bootstrap by Shourya Sharma
- Improve Wild linker test suites by Kei Akiyama
- Improving the Rustc Parallel Frontend: Parallel Macro Expansion by Lorrens Pantelis
- Make cargo-semver-checks faster by Joseph Chung
- Make Rustup Concurrent by Francisco Gouveia
- Mapping the Maze of Rust's UI Test Suite with Established Continuous Integration Practices by Julien Robert
- Modernising the libc Crate by Abdul Muiz
- Prepare stable_mir crate for publishing by Makai
- Prototype an alternative architecture for cargo fix using cargo check by Glen Thalakottur
- Prototype Cargo Plumbing Commands by Vito Secona
And now strap in, as there is a ton of great content to read about here!
ABI/Layout handling for the automatic differentiation feature
- Contributor: Marcelo Domínguez
- Mentors: Manuel Drehwald, Oli Scherer
- Final report
The `std::autodiff` module allows computing gradients and derivatives in the calculus sense. It provides two autodiff macros, which can be applied to user-written functions and automatically generate modified versions of those functions, which also compute the requested gradients and derivatives. This functionality is very useful especially in the context of scientific computing and the implementation of machine-learning models.

Our autodiff frontend was facing two challenges.
- First, we would generate a new function through our macro expansion, however, we would not have a suitable function body for it yet. Our autodiff implementation relies on an LLVM plugin to generate the function body. However, this plugin only gets called towards the end of the compilation pipeline. Earlier optimization passes, either on the LLVM or the Rust side, could look at the placeholder body and either "optimize" or even delete the function since it has no clear purpose yet.
- Second, the flexibility of our macros was causing issues, since it allows requesting derivative computations on a per-argument basis. However, when we start to lower Rust arguments to our compiler backends like LLVM, we do not always have a 1:1 match of Rust arguments to LLVM arguments. As a simple example, an array with two double values might be passed as two individual double values on LLVM level, whereas an array with three doubles might be passed via a pointer.
Marcelo helped rewrite our `autodiff` macros to not generate hacky placeholder function bodies, but instead introduced a proper `autodiff` intrinsic. This is the proper way for us to declare that an implementation of this function is not available yet and will be provided later in the compilation pipeline. As a consequence, our generated functions were no longer deleted or incorrectly optimized. The intrinsic PR also allowed removing some previous hacks and therefore reduced the total lines of code in the Rust compiler by over 500! You can find more details in this PR.

Beyond autodiff work, Marcelo also initiated work on GPU offloading intrinsics, and helped with multiple bugs in our argument handling. We would like to thank Marcelo for all his great work!
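For context, here is a rough sketch of how the autodiff macros are used. The attribute name and the "activity" annotations below are assumptions based on the autodiff RFC and the nightly-only feature, and may differ from the current API:

```rust
#![feature(autodiff)]
use std::autodiff::autodiff_reverse;

// Expands to an additional function, `d_square`, that computes square(x)
// together with the derivative d(square)/dx via reverse-mode AD. The exact
// generated signature depends on the activity annotations.
#[autodiff_reverse(d_square, Active, Active)]
fn square(x: f64) -> f64 {
    x * x
}
```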
Add safety contracts
- Contributor: Dawid Lachowicz
- Mentor: Michael Tautschnig
- Final report
The Rust Project has an ambitious goal to instrument the Rust standard library with safety contracts, moving from informal comments that specify the safety requirements of `unsafe` functions to executable Rust code. This transformation represents a significant step toward making Rust's safety guarantees more explicit and verifiable. To prioritize which functions should receive contracts first, there is a verification contest ongoing.

Given that Rust contracts are still in their early stages, Dawid's project was intentionally open-ended in scope and direction. This flexibility allowed Dawid to identify and tackle several key areas that would add substantial value to the contracts ecosystem. His contributions were in the following three main areas:
- **Pragmatic Contracts Integration**: Refactoring contract HIR lowering to ensure no contract code is executed when contract checks are disabled. This has major impact, as it ensures that contracts have no runtime cost when contract checks are disabled.
- **Variable Reference Capability**: Adding the ability to refer to variables from preconditions within postconditions (a sketch follows this list). This fundamental enhancement to the contracts system has been fully implemented and merged into the compiler. This feature provides developers with much more expressive power when writing contracts, allowing them to establish relationships between input and output states.
- **Separation Logic Integration**: The bulk of Dawid's project involved identifying, understanding, and planning the introduction of owned and block ownership predicates for separation-logic style reasoning in contracts for unsafe Rust code. This work required extensive research and collaboration with experts in the field. Dawid engaged in multiple discussions with authors of Rust validation tools and Miri developers, both in person and through Zulip discussion threads. The culmination of this research is captured in a comprehensive MCP (Major Change Proposal) that Dawid created.
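To illustrate the second item, a minimal sketch of what such a contract can express. The attribute paths and closure signature are assumptions about the unstable contracts feature, not its final API:

```rust
#![feature(contracts)]

// The postcondition refers to `divisor` and `dividend`, variables that also
// appear in the precondition: exactly the capability described above.
#[core::contracts::requires(divisor != 0)]
#[core::contracts::ensures(move |result: &u32| result * divisor <= dividend)]
fn div(dividend: u32, divisor: u32) -> u32 {
    dividend / divisor
}
```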
Dawid's work represents crucial foundational progress for Rust's safety contracts initiative. By successfully implementing variable reference capabilities and laying the groundwork for separation logic integration, he has positioned the contracts feature for significant future development. His research and design work will undoubtedly influence the direction of this important safety feature as it continues to mature. Thank you very much!
Bootstrap of rustc with rustc_codegen_gcc
- Contributor: Michał Kostrubiec
- Mentor: antoyo
- Final report
The goal of this project was to improve the Rust GCC codegen backend (`rustc_codegen_gcc`), so that it would be able to compile the "stage 2"[^1] Rust compiler (`rustc`) itself again.

You might remember that Michał already participated in GSoC last year, where he was working on his own .NET Rust codegen backend, and he did an incredible amount of work. This year, his progress was somehow even faster. Even before the official GSoC implementation period started (!), he essentially completed his original project goal and managed to build `rustc` with GCC. This was no small feat, as he had to investigate and fix several miscompilations that occurred when functions marked with `#[inline(always)]` were called recursively or when the compiled program was trying to work with 128-bit integers. You can read more about this initial work at his blog.

After that, he immediately started working on the stretch goals of his project. The first one was to get a "stage 3" `rustc` build working, for which he had to vastly improve the memory consumption of the codegen backend.

Once that was done, he moved on to yet another goal, which was to build `rustc` for a platform not supported by LLVM. He made progress on this for DEC Alpha and m68k. He also attempted to compile `rustc` on AArch64, which led to him finding an ABI bug. Ultimately, he managed to build a `rustc` for m68k (with a few workarounds that we will need to fix in the future). That is a very nice first step to porting Rust to new platforms unsupported by LLVM, and is important for initiatives such as Rust for Linux.
We were really impressed with what Michał was able to achieve, and we really appreciated working with him this summer. Thank you for all your work, Michał!
Cargo: Build script delegation
- Contributor: Naman Garg
- Mentor: Ed Page
- Final report
Cargo build scripts come at a compile-time cost, because even to run `cargo check`, they must be built as if you ran `cargo build`, so that they can be executed during compilation. Even though we try to identify ways to reduce the need to write build scripts in the first place, that may not always be doable. However, if we could shift build scripts from being defined in every package that needs them, into a few core build script packages, we could both reduce the compile-time overhead, and also improve their auditability and transparency. You can find more information about this idea here.

The first step required to delegate build scripts to packages is to be able to run multiple build scripts per crate, so that is what Naman was primarily working on. He introduced a new unstable `multiple-build-scripts` feature to Cargo, implemented support for parsing an array of build scripts in `Cargo.toml`, and extended Cargo so that it can now execute multiple build scripts while building a single crate. He also added a set of tests to ensure that this feature will work as we expect it to.

Then he worked on ensuring that the execution of build scripts is performed in a deterministic order, and that crates can access the output of each build script separately. For example, if you have the following configuration:

```toml
[package]
build = ["windows-manifest.rs", "release-info.rs"]
```

then the corresponding crate is able to access the `OUT_DIR`s of both build scripts using `env!("windows-manifest_OUT_DIR")` and `env!("release-info_OUT_DIR")`.
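To make that concrete, here is a hypothetical sketch of what one of those build scripts and its consumer could look like (file names come from the example above; the generated constant and version string are invented for illustration):

```rust
// release-info.rs (one of the build scripts listed above): write a small
// generated file into this script's own OUT_DIR.
use std::{env, fs, path::Path};

fn main() {
    let out_dir = env::var("OUT_DIR").unwrap();
    let dest = Path::new(&out_dir).join("release.rs");
    fs::write(dest, "pub const RELEASE: &str = \"1.2.3\";").unwrap();
}

// In the consuming crate, the generated file would then be pulled in via the
// per-script environment variable described above:
// include!(concat!(env!("release-info_OUT_DIR"), "/release.rs"));
```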
As future work, we would like to implement the ability to pass parameters to build scripts through metadata specified in `Cargo.toml`, and then implement the actual build script delegation to external build scripts using artifact-dependencies.

We would like to thank Naman for helping improve Cargo and laying the groundwork for a feature that could have compile-time benefits across the Rust ecosystem!
Distributed and resource-efficient verification
- Contributor: Jiping Zhou
- Mentor: Michael Tautschnig
- Final report
The goal of this project was to address critical scalability challenges of formally verifying Rust's standard library by developing a distributed verification system that intelligently manages computational resources and minimizes redundant work. The Rust standard library verification project faces significant computational overhead when verifying large codebases, as traditional approaches re-verify unchanged code components. With Rust's standard library containing thousands of functions and continuous development cycles, this inefficiency becomes a major bottleneck for practical formal verification adoption.
Jiping implemented a distributed verification system with several key innovations:
- **Intelligent Change Detection**: The system uses hash-based analysis to identify which parts of the codebase have actually changed, allowing verification to focus only on modified components and their dependencies (a sketch follows this list).
- **Multi-Tool Orchestration**: The project coordinates multiple verification backends, including the Kani model checker, with careful version pinning and compatibility management.
- **Distributed Architecture**: The verification workload is distributed across multiple compute nodes, with intelligent scheduling that considers both computational requirements and dependency graphs.
- **Real-time Visualization**: Jiping built a comprehensive web interface that provides live verification status, interactive charts, and detailed proof results. You can check it out here!
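A minimal sketch of what hash-based change detection can look like (illustrative only, not Jiping's actual implementation):

```rust
// Hash each source file, compare against a cached manifest from the previous
// run, and re-verify only the items whose files changed.
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

fn file_hash(contents: &str) -> u64 {
    let mut h = DefaultHasher::new();
    contents.hash(&mut h);
    h.finish()
}

/// Returns the paths whose hashes differ from the cached run.
fn changed_files(
    current: &HashMap<String, String>, // path -> file contents
    cache: &HashMap<String, u64>,      // path -> hash from the previous run
) -> Vec<String> {
    current
        .iter()
        .filter(|(path, contents)| {
            cache.get(path.as_str()).copied() != Some(file_hash(contents.as_str()))
        })
        .map(|(path, _)| path.clone())
        .collect()
}
```

A real system would also consult the dependency graph, re-verifying anything that depends on a changed file, per the scheduling described above.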
You can find the created distributed verification tool in this repository. Jiping's work established a foundation for scalable formal verification that can adapt to the growing complexity of Rust's ecosystem, while maintaining verification quality and completeness, which will go a long way towards ensuring that Rust's standard library remains safe and sound. Thank you for your great work!
Enable Witness Generation in cargo-semver-checks
- Contributor: Talyn Veugelers
- Mentor: Predrag Gruevski
- Final report
`cargo-semver-checks` is a Cargo subcommand for finding SemVer API breakages in Rust crates. Talyn's project aimed to lay the groundwork for it to tackle our most vexing limitation: the inability to catch SemVer breakage due to type changes.

Imagine a crate makes the following change to its public API:

```rust
// baseline version
pub fn example(value: i64) {}

// new version
pub fn example(value: String) {}
```

This is clearly a major breaking change, right? And yet `cargo-semver-checks` with its hundreds of lints is still unable to flag this. While this case seems trivial, it's just the tip of an enormous iceberg. Instead of changing `i64` to `String`, what if the change was from `i64` to `impl Into<i64>`, or worse, into some monstrosity like:

```rust
pub fn example<T, U, const N: usize>(
    value: impl for<'a> First<'a, T> + Second<U, N> + Sync
) {}
```

Figuring out whether this change is breaking requires checking whether the original `i64` parameter type can "fit" into that monstrosity of an `impl Trait` type. But reimplementing a Rust type checker and trait solver inside `cargo-semver-checks` is out of the question! Instead, we turn to a technique created for a previous study of SemVer breakage on crates.io: we generate a "witness" program that will fail to compile if, and only if, there's a breaking change between the two versions.

The witness program is a separate crate that can be made to depend on either the old or the new version of the crate being scanned. If our `example` function comes from a crate called `upstream`, its witness program would look something like:

```rust
// take the same parameter type as the baseline version
fn witness(value: i64) {
    upstream::example(value);
}
```

This example is cherry-picked to be easy to understand. Witness programs are rarely this straightforward!

Attempting to `cargo check` the witness while plugging in the new version of `upstream` forces the Rust compiler to decide whether `i64` matches the new `impl Trait` parameter. If `cargo check` passes without errors, there's no breaking change here. But if there's a compilation error, then this is concrete, incontrovertible evidence of breakage!

Over the past 22+ weeks, Talyn worked tirelessly to move this from an idea to a working proof of concept. For every problem we foresaw needing to solve, ten more emerged along the way. Talyn did a lot of design work to figure out an approach that would be able to deal with crates coming from various sources (crates.io, a path on disk, a git revision), would support multiple rustdoc JSON formats for all the hundreds of existing lints, and do so in a fashion that doesn't get in the way of adding hundreds more lints in the future.

Even the above list of daunting challenges fails to do justice to the complexity of this project. Talyn created a witness generation prototype that lays the groundwork for robust checking of type-related SemVer breakages in the future. The success of this work is key to the `cargo-semver-checks` roadmap for 2026 and beyond. We would like to thank Talyn for their work, and we hope to continue working with them on improving witness generation in the future.

Extend behavioural testing of std::arch intrinsics
- Contributor: Madhav Madhusoodanan
- Mentor: Amanieu d'Antras
- Final report
The `std::arch` module contains target-specific intrinsics (low-level functions that typically correspond to single machine instructions) which are intended to be used by other libraries. These are intended to match the equivalent intrinsics available as vendor-specific extensions in C.

The intrinsics are tested with three approaches. We test that:
- The signatures of the intrinsics match the one specified by the architecture.
- The intrinsics generate the correct instruction.
- The intrinsics have the correct runtime behavior.
These behavior tests are implemented in the intrinsics-test crate. Initially, this test framework only covered the AArch64 and AArch32 targets, where it was very useful in finding bugs in the implementation of the intrinsics. Madhav's project was about refactoring and improving this framework to make it easier (or really, possible) to extend it to other CPU architectures.
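To make the idea of a runtime-behaviour check concrete, here is a hand-written sketch of the kind of assertion involved. The real framework generates such tests and compares results against the C vendor intrinsics rather than hardcoded values:

```rust
// Check that an x86 SIMD intrinsic produces the expected lane-wise result.
#[cfg(target_arch = "x86_64")]
fn check_mm_add_epi32() {
    use std::arch::x86_64::*;
    // SAFETY: SSE2 is available on all x86_64 targets.
    unsafe {
        let a = _mm_set_epi32(1, 2, 3, 4);
        let b = _mm_set_epi32(10, 20, 30, 40);
        let sum = _mm_add_epi32(a, b);
        let mut out = [0i32; 4];
        _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, sum);
        // `_mm_set_epi32` takes lanes from high to low, so the lowest lane
        // in memory holds 4 + 40.
        assert_eq!(out, [44, 33, 22, 11]);
    }
}
```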
First, Madhav split the codebase into a module with shared (architecturally independent) code and a module with ARM-specific logic. Then he implemented support for testing intrinsics for the x86 architecture, which is Rust's most widely used target. In doing so, he allowed us to discover real bugs in the implementation of some intrinsics, which is a great result! Madhav also did a lot of work in optimizing how the test suite is compiled and executed, to reduce CI time needed to run tests, and he laid the groundwork for supporting even more architectures, specifically LoongArch and WebAssembly.
We would like to thank Madhav for all his work on helping us make sure that Rust intrinsics are safe and correct!
Implement merge functionality in bors
- Contributor: Sakibul Islam
- Mentor: Jakub Beránek
- Final report
The main Rust repository uses a pull request merge queue bot that we call `bors`. Its current Python implementation has a lot of issues and is difficult to maintain. The goal of this GSoC project was thus to implement the primary merge queue functionality in our Rust rewrite of this bot.

Sakibul first examined the original Python codebase to figure out what it was doing, and then he implemented several bot commands that allow contributors to approve PRs, set their priority, delegate approval rights, temporarily close the merge tree, and many others. He also implemented an asynchronous background process that checks whether a given pull request is mergeable or not (this process is relatively involved, due to how GitHub works), which required implementing a specialized synchronized queue for deduplicating mergeability check requests to avoid overloading the GitHub API. Furthermore, Sakibul also reimplemented (a nicer version of) the merge queue status webpage that can be used to track which pull requests are currently being tested on CI, which ones are approved, etc.
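The deduplicating queue idea can be sketched like this (a minimal synchronous illustration; bors' actual implementation is async and differs in detail):

```rust
use std::collections::{HashSet, VecDeque};
use std::sync::{Condvar, Mutex};

/// A synchronized queue that drops duplicate work items, so a PR already
/// waiting for a mergeability check is not enqueued a second time.
struct DedupQueue<T: Eq + std::hash::Hash + Clone> {
    inner: Mutex<(VecDeque<T>, HashSet<T>)>,
    cond: Condvar,
}

impl<T: Eq + std::hash::Hash + Clone> DedupQueue<T> {
    fn new() -> Self {
        Self {
            inner: Mutex::new((VecDeque::new(), HashSet::new())),
            cond: Condvar::new(),
        }
    }

    /// Enqueue `item` unless an identical request is already pending.
    fn push(&self, item: T) {
        let mut guard = self.inner.lock().unwrap();
        if guard.1.insert(item.clone()) {
            guard.0.push_back(item);
            self.cond.notify_one();
        }
    }

    /// Block until an item is available, then take it.
    fn pop(&self) -> T {
        let mut guard = self.inner.lock().unwrap();
        loop {
            if let Some(item) = guard.0.pop_front() {
                guard.1.remove(&item);
                return item;
            }
            guard = self.cond.wait(guard).unwrap();
        }
    }
}
```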
After the groundwork was prepared, Sakibul could work on the merge queue itself, which required him to think about many tricky race conditions and edge cases to ensure that bors doesn't e.g. merge the wrong PR into the default branch or merge a PR multiple times. He covered these edge cases with many integration tests, to give us more confidence that the merge queue will work as we expect it to, and also prepared a script for creating simulated PRs on a test GitHub repository so that we can test bors "in the wild". And so far, it seems to be working very well!
After we finish the final piece of the merge logic (creating so-called "rollups") together with Sakibul, we will start using bors fully in the main Rust repository. Sakibul's work will thus be used to merge all `rust-lang/rust` pull requests. Exciting!

Apart from working on the merge queue, Sakibul made many other awesome contributions to the codebase, like refactoring the test suite or analyzing performance of SQL queries. In total, Sakibul sent around fifty pull requests that were already merged into bors! What can we say, other than: Awesome work Sakibul, thank you!
Improve bootstrap
- Contributor: Shourya Sharma
- Mentors: Jakub Beránek, Jieyou Xu, Onur Özkan
- Final report
bootstrap is the build system of Rust itself, which is responsible for building the compiler, standard library, and pretty much everything else that you can download through `rustup`. This project's goal was very open-ended: "improve bootstrap".

And Shourya did just that! He made meaningful contributions to several parts of bootstrap. First, he added much-needed documentation to several core bootstrap data structures and modules, which were quite opaque and hard to understand without any docs. Then he moved on to improving command execution: each bootstrap invocation invokes hundreds of external binaries, and it was difficult to track them. Shourya finished a long-standing refactoring that routes almost all executed commands through a single place. This allowed him to also implement command caching and command profiling, which shows us which commands are the slowest.
After that, Shourya moved on to refactoring config parsing. This was no easy task, because bootstrap has A LOT of config options; the single function that parses them had over a thousand lines of code (!). A set of complicated config precedence rules was frequently causing bugs when we had to modify that function. It took him several weeks to untangle this mess, but the result is worth it. The refactored function is much less brittle and easier to understand and modify, which is great for future maintenance.
The final area that Shourya improved was bootstrap tests. He made it possible to run them using bare `cargo`, which enables debugging them e.g. in an IDE, and, most importantly, he found a way to run the tests in parallel, which makes contributing to bootstrap itself much more pleasant, as it reduced the time to execute the tests from a minute to under ten seconds. These changes required refactoring many bootstrap tests that were using global state, which was not compatible with parallel execution.

Overall, Shourya made more than 30 PRs to bootstrap since April! We are very thankful for all his contributions, as they made bootstrap much easier to maintain. Thank you!
Improve Wild linker test suites
- Contributor: Kei Akiyama
- Mentor: David Lattimore
- Final report
Wild is a very fast linker for Linux that’s written in Rust. It can be used to build executables and shared objects.
Kei’s project was to leverage the test suite of one of the other Linux linkers to help test the Wild linker. This goal was accomplished. Thanks to Kei’s efforts, we now run the Mold test suite against Wild in our CI. This has helped to prevent regressions on at least a couple of occasions and has also helped to show places where Wild has room for improvement.
In addition to this core work, Kei also undertook numerous other changes to Wild during GSoC. Of particular note was the reworking of argument parsing to support `--help`, which we had wanted for some time. Kei also fixed a number of bugs and implemented various previously missing features. This work has helped to expand the range of projects that can use Wild to build executables.

Kei has continued to contribute to Wild even after the GSoC project finished and has now contributed over seventy PRs. We thank Kei for all the hard work and look forward to continued collaboration in the future!
Improving the Rustc Parallel Frontend: Parallel Macro Expansion
- Contributor: Lorrens Pantelis
- Mentors: Sparrow Li, Vadim Petrochenkov
- Final report
The Rust compiler has a (currently unstable) parallel compilation mode in which some compiler passes run in parallel. One major part of the compiler that is not yet affected by parallelization is name resolution. It has several components, but those selected for this GSoC project were import resolution and macro expansion (which are in fact intermingled into a single fixed-point algorithm). Besides the parallelization itself, another important point of the work was improving the correctness of import resolution by eliminating accidental order dependencies in it, as those also prevent parallelization.
We should note that this was a very ambitious project, and we knew from the beginning that it would likely be quite challenging to reach the end goal within the span of just a few months. And indeed, Lorrens did in fact run into several unexpected issues that showed us that the complexity of this work is well beyond a single GSoC project, so he didn't actually get to parallelizing the macro expansion algorithm. Nevertheless, he did a lot of important work to improve the name resolver and prepare it for being parallelized.
The first thing that Lorrens had to do was actually understand how Rust name resolution works and how it is implemented in the compiler. That is, to put it mildly, a very complex piece of logic, and it is affected by legacy burden in the form of backward-compatibility lints, outdated naming conventions, and other technical debt. Even this learned knowledge is itself incredibly useful, as the set of people who understand Rust's name resolution today is very small, so it is important to grow it.
Using this knowledge, he made a lot of refactorings to separate significant mutability in name resolver data structures from "cache-like" mutability used for things like lazily loading otherwise immutable data from extern crates, which was needed to unblock parallelization work. He split various parts of the name resolver, got rid of unnecessary mutability and performed a bunch of other refactorings. He also had to come up with a very tricky data structure that allows providing conditional mutable access to some data.
These refactorings allowed him to implement something called "batched import resolution", which splits unresolved imports in the crate into "batches", where all imports in a single batch can be resolved independently and potentially in parallel, which is crucial for parallelizing name resolution. We have to resolve a few remaining language compatibility issues, after which the batched import resolution work will hopefully be merged.
Lorrens laid important groundwork for fixing potential correctness issues around name resolution and macro expansion, which unblocks further work on parallelizing these compiler passes, which is exciting. His work also helped unblock some library improvements that had been stuck for a long time. We are grateful for your hard work on improving tricky parts of Rust and its compiler, Lorrens. Thank you!
Make cargo-semver-checks faster
- Contributor: Joseph Chung
- Mentor: Predrag Gruevski
- Final report
`cargo-semver-checks` is a Cargo subcommand for finding SemVer API breakages in Rust crates. It is adding SemVer lints at an exponential pace: the number of lints has been doubling every year, and currently stands at 229. More lints mean more work for `cargo-semver-checks` to do, as well as more work for its test suite, which runs over 250,000 lint checks!

Joseph's contributions took three forms:
- Improving `cargo-semver-checks` runtime performance: on large crates, our query runtime went from ~8s to ~2s, a 4x improvement!
- Improving the test suite's performance, enabling us to iterate faster. Our test suite used to take ~7min and now finishes in ~1min, a 7x improvement!
- Improving our ability to profile query performance and inspect performance anomalies, both of which were proving a bottleneck for our ability to ship further improvements.
Joseph described all the clever optimization tricks leading to these results in his final report. To encourage you to check out the post, we'll highlight a particularly elegant optimization described there.
`cargo-semver-checks` relies on rustdoc JSON, an unstable component of Rust whose output format often has breaking changes. Since each release of `cargo-semver-checks` supports a range of Rust versions, it must also support a range of rustdoc JSON formats. Fortunately, each file carries a version number that tells us which version's `serde` types to use to deserialize the data.

Previously, we used to deserialize the JSON file twice: once with a `serde` type that only loaded the `format_version: u32` field, and a second time with the appropriate `serde` type that matches the format. This works fine, but many large crates generate rustdoc JSON files that are 500 MiB+ in size, requiring us to walk all that data twice. While `serde` is quite fast, there's nothing as fast as not doing the work twice in the first place!

So we used a trick: optimistically check if the `format_version` field is the last field in the JSON file, which happens to be the case every time (even though it is not guaranteed). Rather than parsing JSON, we merely look for a `,` character in the last few dozen bytes, then look for `:` after the `,` character, and for `format_version` between them. If this is successful, we've discovered the version number while avoiding going through hundreds of MB of data! If we failed for any reason, we just fall back to the original approach, having only wasted the effort of looking at 20ish extra bytes.
cargo- semver-checksfaster for everyone, with awesome results. Thank you very much for your work!Make Rustup Concurrent
- Contributor: Francisco Gouveia
- Mentor: rami3l
- Final report
As a very important part of the Rustup team's vision of migrating the rustup codebase to using async IO since the introduction of the global `tokio` runtime in #3367, this project's goal was to introduce proper concurrency to rustup. Francisco did that by attacking two aspects of the codebase at once:

- He created a new set of user interfaces for displaying concurrent progress.
- He implemented a new toolchain update checking & installation flow that is idiomatically concurrent.

As a warmup, Francisco made `rustup check` concurrent, resulting in a rather easy 3x performance boost in certain cases. Along the way, he also introduced a new indicatif-based progress bar for reporting progress of concurrent operations, which replaced the original hand-rolled solution.

After that, the focus of the project moved on to the toolchain installation flow used in commands like `rustup toolchain install` and `rustup update`. In this part, Francisco developed two main improvements:

- The possibility of downloading multiple components at once when setting up a toolchain, controlled by the `RUSTUP_CONCURRENT_DOWNLOADS` environment variable. Setting this variable to a value greater than 1 is particularly useful in certain internet environments where the speed of a single download connection could be restricted by QoS (Quality of Service) limits.
- The ability to interleave component network downloads and disk unpacking. For the moment, unpacking will still happen sequentially, but disk and net I/O can finally be overlapped! This introduces a net gain in toolchain installation time, as only the last component being downloaded will have noticeable unpacking delays. In our tests, this typically results in a reduction of 4-6 seconds (on fast connections, that's ~33% faster!) when setting up a toolchain with the `default` profile.
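To make the interleaving idea concrete, here is a rough sketch of the general shape (an assumption for illustration, not rustup's actual code; `download` and `unpack` are stand-in stubs):

```rust
use tokio::sync::mpsc;

async fn download(name: &str) -> Vec<u8> {
    // stub: fetch the component archive over the network
    let _ = name;
    Vec::new()
}

async fn unpack(name: &str, bytes: &[u8]) {
    // stub: write the unpacked component to disk
    let _ = (name, bytes);
}

async fn install(components: Vec<String>) {
    let (tx, mut rx) = mpsc::channel::<(String, Vec<u8>)>(4);
    for name in components {
        let tx = tx.clone();
        // Downloads run concurrently...
        tokio::spawn(async move {
            let bytes = download(&name).await;
            let _ = tx.send((name, bytes)).await;
        });
    }
    drop(tx); // close the channel once all senders are done
    // ...while unpacking happens sequentially, overlapped with the
    // remaining network I/O.
    while let Some((name, bytes)) = rx.recv().await {
        unpack(&name, &bytes).await;
    }
}
```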
We have to say that these results are very impressive! While a few seconds shorter toolchain installation might not look so important at a first glance, rustup is ubiquitously used to install Rust toolchains on CI of tens of thousands of Rust projects, so this improvement (and also further improvements that it unlocks) will have an enormous effect across the Rust ecosystem. Many thanks to Francisco Gouveia's enthusiasm and active participation, without which this wouldn't have worked out!
Mapping the Maze of Rust's UI Test Suite with Established Continuous Integration Practices
- Contributor: Julien Robert
- Mentor: Jieyou Xu
- Final report
The snapshot-based UI test suite is a crucial part of the Rust compiler's test suite. It contains a lot of tests: over 19000 at the time of writing. The organization of this test suite is thus very important, for at least two reasons:
- We want to be able to find specific tests, identify related tests, and have some sort of logical grouping of related tests.
- We have to ensure that no directory contains so many entries that GitHub gives up rendering the directory.
Furthermore, having informative test names and having some context for each test is particularly important, as otherwise contributors would have to reverse-engineer test intent from `git blame` and friends.

Over the years, we have accumulated a lot of unorganized stray test files in the top-level `tests/ui` directory, and have a lot of generically named `issue-*.rs` tests in the `tests/ui/issues/` directory. The former makes it annoying to find more meaningful subdirectories, while the latter makes it completely non-obvious what each test is about.

Julien's project was about introducing some order into the chaos. And that was indeed achieved! Through Julien's efforts (in conjunction with efforts from other contributors), we now have:
- No more stray tests under the immediate `tests/ui/` top-level directory; they are now organized into more meaningful subdirectories. We were then able to introduce a style check to prevent new stray tests from being added.
- A top-level document containing TL;DRs for each of the immediate subdirectories.
- Substantially fewer generically named `issue-*.rs` tests under `tests/ui/issues/`.
Test organization (and more generally, test suite ergonomics) is an often under-appreciated aspect of maintaining complex codebases. Julien spent a lot of effort improving test ergonomics of the Rust compiler, both in last year's GSoC (where he vastly improved our "run-make" test suite), and then again this year, where he made our UI test suite more ergonomic. We would like to appreciate your meticulous work, Julien! Thank you very much.
Modernising the libc Crate
- Contributor: Abdul Muiz
- Mentor: Trevor Gross
- Final report
`libc` is a crucial crate in the Rust ecosystem (on average, it has ~1.5 million daily downloads), providing bindings to system C APIs. This GSoC project had two goals: improve testing for what we currently have, and make progress toward a stable 1.0 release of `libc`.

Test generation is handled by the `ctest` crate, which creates unit tests that compare properties of the Rust API to properties of the C interfaces it binds. Prior to the project, `ctest` used an obsolete Rust parser that had stopped receiving major updates about eight years ago, meaning `libc` could not easily use any syntax newer than that. Abdul completely rewrote `ctest` to use `syn` as its parser and make it much easier to add new tests, then went through and switched everything over to the more modern `ctest`. After this change, we were able to remove a number of hacks that had been needed to work with the old parser.
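For a sense of what such generated tests assert, here is a simplified, hand-written sketch (the real harness derives the expected values by compiling the corresponding C headers rather than hardcoding them):

```rust
// Check that the Rust binding's layout matches the C type it binds.
#[cfg(all(target_os = "linux", target_arch = "x86_64"))]
#[test]
fn timeval_layout_matches_c() {
    use std::mem::{align_of, size_of};
    // struct timeval is two 8-byte fields (tv_sec, tv_usec) on this target.
    assert_eq!(size_of::<libc::timeval>(), 16);
    assert_eq!(align_of::<libc::timeval>(), 8);
}
```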
The other part of the project was to make progress toward the 1.0 release of `libc`. Abdul helped with this by going through and addressing a number of issues that need to be resolved before the release, many of which were made possible by all the `ctest` changes.

While there is still a lot of work left to do before `libc` can reach 1.0, Abdul's improvements will go a long way toward making that work easier, as they give us more confidence in the test suite, which is now much easier to modify and extend. Thank you very much for all your work!

Prepare stable_mir crate for publishing
- Contributor: Makai
- Mentor: Celina Val
- Final report
This project's goal was to prepare the Rust compiler's `stable_mir` crate (eventually renamed to `rustc_public`), which provides a way to interface with the Rust compiler for analyzing Rust code, for publication on crates.io. While the existing crate provided easier APIs for tool developers, it lacked proper versioning and was tightly coupled to compiler versions. The goal was to enable independent publication with semantic versioning.

The main technical work involved restructuring `rustc_public` and `rustc_public_bridge` (previously named `rustc_smir`) by inverting their dependency relationship. Makai resolved circular dependencies by temporarily merging the crates and then gradually separating them with the new architecture. They also split the existing compiler interface to separate public APIs from internal compiler details.

Furthermore, Makai established infrastructure for dual maintenance: keeping an internal version in the Rust repository to track compiler changes while developing the publishable version in a dedicated repository. Makai automated a system to coordinate between the versions, and developed custom tooling to validate compiler version compatibility and to run tests.

Makai successfully completed the core refactoring and infrastructure setup, making it possible to publish `rustc_public` independently, with proper versioning support for the Rust tooling ecosystem! As a bonus, Makai contributed several bug fixes and implemented new APIs that had been requested by the community. Great job, Makai!

Prototype an alternative architecture for cargo fix using cargo check
- Contributor: Glen Thalakottur
- Mentor: Ed Page
- Final report
The `cargo fix` command applies fixes suggested by lints, which makes it useful for cleaning up sloppy code, reducing the annoyance of toolchain upgrades when lints change, and helping with edition migrations and new lint adoption. However, it has a number of issues: it can be slow, it only applies a subset of possible lints, and it doesn't provide an easy way to select which lints to fix.

These problems are caused by its current architecture; it is implemented as a variant of `cargo check` that replaces `rustc` with `cargo` being run in a special mode that calls `rustc` in a loop, applying fixes until there are none left. While this special `rustc`-proxy mode is running, a cross-process lock is held to force only one build target to be fixed at a time, to avoid race conditions. This ensures correctness at the cost of performance, and it makes it difficult to make the `rustc` proxy interactive.

Glen implemented a proof of concept of an alternative design called cargo-fixit.
`cargo fixit` spawns `cargo check` in a loop, determining which build targets are safe to fix in a given pass and then applying the suggestions. This puts the top-level program in charge of which fixes get applied, making coordination easier. It also allows the locking to be removed and opens the door to an interactive mode.
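To make that design concrete, here is a minimal sketch of the general shape of such a driver loop, written in Python for brevity; it is an illustration, not Glen's actual cargo-fixit code. The `cargo check --message-format=json` interface and the `suggested_replacement` span field are real rustc/cargo machinery, while everything around them is simplified (real `cargo fix` also filters suggestions by applicability):

```
import json
import subprocess

def collect_suggestions():
    # Run `cargo check` once and gather suggested replacements from the
    # JSON diagnostics. Suggestions usually live in "children" diagnostics.
    out = subprocess.run(
        ["cargo", "check", "--message-format=json"],
        capture_output=True, text=True,
    ).stdout
    fixes = []
    for line in out.splitlines():
        msg = json.loads(line)
        if msg.get("reason") != "compiler-message":
            continue
        diag = msg["message"]
        for d in [diag] + diag.get("children", []):
            for span in d.get("spans", []):
                if span.get("suggested_replacement") is not None:
                    fixes.append(span)
    return fixes

def apply_suggestions(spans):
    # Splice replacements into each file, rightmost spans first so that
    # earlier byte offsets stay valid. (Overlap handling is elided.)
    by_file = {}
    for s in spans:
        by_file.setdefault(s["file_name"], []).append(s)
    for path, ss in by_file.items():
        with open(path, "rb") as f:
            data = f.read()
        for s in sorted(ss, key=lambda s: s["byte_start"], reverse=True):
            repl = s["suggested_replacement"].encode()
            data = data[: s["byte_start"]] + repl + data[s["byte_end"] :]
        with open(path, "wb") as f:
            f.write(data)

# Top-level loop: keep checking and fixing until a pass comes back clean.
while True:
    suggestions = collect_suggestions()
    if not suggestions:
        break
    apply_suggestions(suggestions)
```

The key difference from today's `cargo fix` is visible in the structure: the loop lives above `cargo check` rather than inside a `rustc` proxy, so no cross-process lock is needed.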
Glen ran various benchmarks to test how the new approach performs. In some of them, `cargo fixit` was able to finish within a few hundred milliseconds, where the same task had previously taken `cargo fix` almost a minute! As always, there are trade-offs; the new approach comes at the cost that fixes in packages lower in the dependency tree can cause later packages to be rebuilt multiple times, slowing things down, so there were also benchmarks where the old design was a bit faster. The initial results are still very promising and impressive!

Further work remains to be done on `cargo-fixit` to investigate how it could be optimized further and what its interface should look like before it is stabilized. We thank Glen for all the hard work on this project, and we hope that one day the new design will become the default in Cargo, bringing faster and more flexible fixing of lint suggestions to everyone!

Prototype Cargo Plumbing Commands
- Contributor: Vito Secona
- Mentors: Cassaundra, Ed Page
- Final report
The goal of this project was to move forward our Project Goal of creating low-level ("plumbing") Cargo subcommands, to make it easier for other tools to reuse parts of Cargo.
Vito created a prototype of several plumbing commands in the cargo-plumbing crate. The idea was to better understand what the plumbing commands should look like and what is needed from Cargo to implement them. Vito had to make compromises in some of these commands so as not to be blocked on making changes to the current Cargo Rust APIs, and he helpfully documented those blockers. For example, instead of solely relying on the manifests that the user passed in, the plumbing commands re-read the manifests within each command, which prevents callers from editing them to get specific behavior out of Cargo, e.g. dropping all workspace members to allow resolving dependencies on a per-package basis.
Vito did a lot of work, as he implemented seven different plumbing subcommands:
- `locate-manifest`
- `read-manifest`
- `read-lockfile`
- `lock-dependencies`
- `write-lockfile`
- `resolve-features`
- `plan-build`
As future work, we would like to deal with some unresolved questions around how to integrate these plumbing commands within Cargo itself, and extend the set of plumbing commands.
We thank Vito for all his work on improving the flexibility of Cargo.
Conclusion
We would like to thank all the contributors who participated in Google Summer of Code 2025 with us! It was a blast, and we cannot wait to see which projects GSoC contributors will come up with next year. We would also like to thank Google for organizing the Google Summer of Code program and for allowing us to have so many projects this year. And last, but not least, we would like to thank all the Rust mentors who tirelessly helped our contributors complete their projects. Without you, Rust GSoC would not be possible.
-
🔗 Ampcode News Gemini 3 Pro rss
Gemini 3 Pro is now the new main model in Amp, powering the
`smart` agent mode.

This is a historic change.
Ever since we started work on Amp, back in February, Anthropic's Claude has been the main model. Not for lack of trying: we've experimented with different models for the same role, including GPT-5, Gemini 2.5, Grok, and others.
But Claude has stuck, because it has been the only model that managed to strike the balance between intelligence, speed, and the willingness and ability to use tools.
Then, Gemini 3 came along. (And, on the horizon, there are even more models lined up to take the crown.)
In just a few days, the number of ecstatic messages it got in our Slack was higher than for any other model:
- "crazy, this is incredible"
- "guys this is really good, I love it, it's a ton of fun"
- "I didn't expect a model drop to affect how much I'm enjoying using amp this much"
- "Wow. It's very persistent - in a great way."
- "Hot dog!! I haven’t tried it yet but you all are getting my hopes up"
- "I'm really loving this model. Feels like a great mix of eagerness mixed with experience. Doesn't feel like a bull in a china shop."
- "am I imaginging things? it is that good, right?"
Gemini 3 checked off all the boxes that so far only Claude had checked: smart, fast, follows instructions very well, works hand-in-hand with the user if needed, very eager to use tools and uses them with high dexterity.
But, as we found out with delight, it does it all and it's better at it.
In our first internal, not-even-optimized run of Terminal-Bench 2.0, the score went up by 17 percentage points after switching from Sonnet 4.5 to Gemini 3.
It also feels smarter, it feels better at following instructions, it's impressively clever in the way it uses tools, it cleans up after itself, it follows existing patterns in a codebase to a degree which we haven't seen before, its writing is uncannily good. It doesn't use emojis when it shouldn't. It hasn't once said that we're absolutely right. (It does have other imperfections though.)
And yet: should we really make it the default model in Amp? That's a big decision.
We went back and forth, multiple times. Every time we did, someone would invite the rest of the team: "Okay, push back, tell me I'm hallucinating, tell me why we shouldn't do it?"
But, in the end, that question was answered with Gemini 3 passing the ultimate test of all tools. When it quickly went down while early-access versions were switched, our team immediately despaired: "no, please, they can't take this away from me!"
We hope you enjoy it, too. Happy hacking!
If you want to keep using Amp with Sonnet 4.5, you need to downgrade to an older version of Amp.
Visual Studio Code, Cursor, Windsurf:
- Open the command palette with `⌘+P` and use `Extensions: Install Specific Version of Extension...`
- Pick version `0.0.1763398025`
Amp CLI:
```
$ npm install -g @sourcegraph/amp@0.0.1763397734-g58e8bc
$ npx -y @sourcegraph/amp@0.0.1763397734-g58e8bc
$ pnpm add -g @sourcegraph/amp@0.0.1763397734-g58e8bc
$ yarn global add @sourcegraph/amp@0.0.1763397734-g58e8bc
```
It's better, but it's not perfect.
Here is a list of issues we've seen and that we're still actively trying to prompt out of it:
- Sometimes it just "thinks" forever
- Other times, thinking-like prose leaks into the output, e.g. "I'm going to x, I did x, I'm going to y." This is usually associated with a very, very large number of output tokens.
- Control characters and fake tool calls leaking into the output, `}}` at the end of the message, repeated words at the end of the message
- Unrequested git commits
- Use of non-absolute file paths
We still think that even with these issues, it's a great choice as the main model in Amp.
-
- November 17, 2025
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2025-11-17 rss
IDA Plugin Updates on 2025-11-17
New Releases:
- efiXplorer efiXplorer v6.1.1
- FindFunc v1.5
- fwhunt-ida v1.0.1
- ida-hcli v0.13.1
- ida-hcli v0.13.0
- panda v1.8.76 @ refs/heads/dev
- plugin-ida v3.0.1
Activity:
- capa
- 8d39765e: ci: bump binja minor version (#2763)
- dec0bcfe: build(deps-dev): bump js-yaml from 4.1.0 to 4.1.1 in /web/explorer (#…
- 99ccecba: build(deps): bump humanize from 4.13.0 to 4.14.0 (#2762)
- af27463c: build(deps-dev): bump pyinstaller from 6.14.1 to 6.16.0 (#2761)
- f4f47b4d: build(deps): bump protobuf from 6.31.1 to 6.33.1 (#2760)
- adc24011: build(deps): bump pycparser from 2.22 to 2.23 (#2759)
- dotfiles
- a26a4faf: update
- 15a0c1e1: update
- 0c1e1348: update
- 8eb4f170: update
- 6615c4c3: update
- ed496755: update
- 9a657623: update
- fca3c5df: update
- 963159ab: update
- d679721e: update
- 7ef897e0: update
- c7c56ae4: update
- 24463aaf: update
- 8bce65ce: update
- 608a0785: update
- 4067f602: update
- b5139f84: update
- 154d88ca: update
- bcd2e4cb: update
- efiXplorer
- FindFunc
- 4f04c67f: bump version v1.5
- fwhunt-ida
- 176ef4cc: add ida-plugin.json, bump version (#8)
- ida-hcli
- IDA-MCP
- ida-semray
- ed31b10c: feat: add SemRay IDA plugin with CodeDumper integration
- ida_domain_mcp
- mcrit
- 8498d322: bugfix for wrong importly imported flags
- panda
- plugin-ida
-
🔗 r/LocalLLaMA 20,000 Epstein Files in a single text file available to download (~100 MB) rss
I've processed all the text and image files (~25,000 document pages/emails) within the individual folders released last Friday into a two-column text file. I used Google's Tesseract OCR library to convert the JPGs to text.
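For anyone who wants to reproduce the OCR step, here is a minimal sketch of what it can look like using the common pytesseract wrapper around Tesseract; the folder name and output file are placeholder assumptions, and this is an illustration rather than the exact pipeline used for the dataset:

```
from pathlib import Path

import pytesseract
from PIL import Image

# Walk a folder of scanned pages and emit a two-column (path, text) file.
with open("epstein_files_20k.tsv", "w", encoding="utf-8") as out:
    for jpg in sorted(Path("epstein_files").rglob("*.jpg")):
        text = pytesseract.image_to_string(Image.open(jpg))
        flat = " ".join(text.split())  # collapse whitespace into one row
        out.write(f"{jpg}\t{flat}\n")
```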
You can download it here: https://huggingface.co/datasets/tensonaut/EPSTEIN_FILES_20K
I uploaded it yesterday, but some of the files were incomplete. This version is full. For each document, I've included the full path to the original Google Drive folder from the House Oversight Committee so you can link and verify contents.
I used Mistral 7B to extract entities and relationships and build a basic Graph RAG. There are some new "associations" that have not been reported in the news, but I couldn't find any breakthrough content. Also, my entity/relationship extraction was quick and dirty. I'm sharing this dataset for people interested in getting into RAG and digging deeper to find more insight than what meets the eye.
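For those who want to try the same kind of quick-and-dirty entity/relationship extraction, a sketch like the following is one way to start. The prompt wording and the `ask_llm` helper are hypothetical stand-ins for however you call your local model (the post used Mistral 7B), and `documents` is assumed to be an iterable of document texts from the dataset:

```
import networkx as nx

PROMPT = (
    "Extract (subject, relation, object) triples from the text below. "
    "Output one triple per line, pipe-separated.\n\n{text}"
)

graph = nx.MultiDiGraph()
for doc in documents:  # assumption: iterable of document texts
    reply = ask_llm(PROMPT.format(text=doc))  # hypothetical LLM call
    for line in reply.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            subj, rel, obj = parts
            graph.add_edge(subj, obj, relation=rel)

# A basic Graph RAG can then retrieve a node's neighborhood and feed
# those triples to the model as extra context.
```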
In using this dataset, please be sensitive to the privacy of the people involved (and remember that many of these people were certainly not involved in any of the actions which precipitated the investigation.) - Quoted from Enron Email Dataset release
EDIT (NOV 18 Update): These files were released last Friday by the House Oversight Committee. I will post an update as soon as today's files are released and processed.
submitted by /u/tensonaut
[link] [comments] -
🔗 r/reverseengineering Advanced Memory Analysis with Volatility 3: Uncover Hidden Threats rss
submitted by /u/SUmidcyber
[link] [comments] -
🔗 r/LocalLLaMA NanoGPT 124m from scratch using a 4090 and a billion tokens of Fineweb in a cave with a box of scraps. rss
Need a buddy and only have a few hours to make one?

I was recently doing some digging into NanoGPT, Karpathy's couple-of-years-old repo to recreate GPT-2 124M using 10 billion tokens of FineWeb and 8xA100 40GB over the course of four days. More recently, I saw that speedrunning efforts have started to train the same model to 3.28 loss as fast as possible with 8xH100, and currently the speed record on that setup is less than 3 minutes to train from scratch.

That led me to think... with all of the advancements that have been made in the last few years, how fast could I train the same model to that 3.28 loss range on a single 4090? The answer? 115 minutes flat. It ran through 0.92 billion tokens in the process, with 130-140k t/s speeds during training.

What does this mean? If you ever find yourself lonely in a cave with a box of scraps, a 4090, and a billion FineWeb tokens... you can build your own teeny-jarvis in a couple of hours flat and then chat with it. I've provided training code and inference code, and the trained model if you want to mess with it for some odd reason. I set up a little GitHub repo as well, so if you feel like trying your hand at modifying my training run and beating it, drop a PR with your results/log/training run and I'll add it to the speedrun chart:
https://github.com/Deveraux-Parker/nanoGPT_1GPU_SPEEDRUN

I haven't bothered with any post-training/fine-tuning/etc., this is just the base model trained up from nothing. I might go through and add a little instruct tune on top of it so that I can create a teeny little ChatGPT. Here's the list of things it's implementing:
Computation & Precision Optimizations

- FP8 Quantization - 8-bit floating-point numbers (float8) for matrix multiplications instead of the usual 16 or 32-bit. This cuts memory use and speeds up math operations dramatically.
- Mixed Precision Training (bfloat16) - Most computations happen in bfloat16, which is faster than float32 while maintaining good numerical stability.
- Custom Triton Kernels - Hand-written GPU kernels for specific operations like symmetric matrix multiplication (X·X^T), which are faster than PyTorch's default implementations.
- torch.compile - PyTorch 2.0's JIT compilation that fuses operations and optimizes the computational graph.
- Flash Attention - Ultra-fast attention implementation that reduces memory usage and speeds up the attention mechanism.
Novel Optimizer & Training Techniques
- Muon Optimizer - A custom momentum-based optimizer that uses orthogonalization (keeping gradient directions independent) for better convergence.
- Polar Express Orthogonalization - A specific algorithm to maintain orthogonality in the Muon optimizer's updates.
- NorMuon Variance Estimator - Adaptive second moment estimation that helps Muon scale gradients appropriately.
- Multiple Optimizers - Using Adam for embeddings/scalars and Muon for weight matrices, each optimized for their parameter type.
- Alternating Optimizer Steps - Muon runs every other step, both optimizers on odd steps, reducing computational overhead.
- Gradient Accumulation - Accumulating gradients over 32 micro-batches to simulate larger batch sizes without running out of memory (see the sketch right after this section).
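As a concrete illustration of the gradient accumulation item above (and of the `set_to_none=True` zeroing listed further down), here is a minimal PyTorch sketch; the tiny model, optimizer, and data are stand-ins, not the actual speedrun code:

```
import torch

model = torch.nn.Linear(16, 1)  # stand-in for the real network
optimizer = torch.optim.Adam(model.parameters())
loader = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(64)]
loss_fn = torch.nn.MSELoss()

accum_steps = 32  # sum gradients over 32 micro-batches, then step once
optimizer.zero_grad(set_to_none=True)
for i, (x, y) in enumerate(loader):
    # Scale each micro-batch loss so the accumulated gradient is an average.
    loss = loss_fn(model(x), y) / accum_steps
    loss.backward()
    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)  # cheaper than zeroed tensors
```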
Architecture Innovations
- YaRN (Yet another RoPE extensioN) - Extends the context length capability of Rotary Position Embeddings beyond what the model was trained on.
- RoPE (Rotary Position Embeddings) - More efficient positional encoding than absolute positions.
- RMS Normalization - Simpler and faster than LayerNorm while being equally effective.
- Squared ReLU Activation - Using ReLU(x)² instead of GELU, which is faster and works well.
- Skip Connections with Learnable Gates - U-Net-style architecture where early layers connect to later layers through learned gates.
- Value Embeddings - Separate embedding tables that inject information directly into attention values.
- Smear Gating - Mixes each token with the previous token using a learned gate.
- Backout Connections - Subtracts certain layer outputs to prevent feature redundancy.
- Attention Gating - Per-head gates that learn to selectively use attention outputs.
Learning Rate & Schedule Optimizations
- Custom LR Multipliers - Different learning rates for embeddings (75x), scalars (5x), etc.
- Custom Weight Decay Multipliers - Different regularization strength for different parameter types.
- Warmup-Stable-Decay Schedule - Linear warmup (100 steps), stable plateau (80% of training), then cosine decay.
- Dynamic Muon Momentum - Momentum coefficient that changes during training (0.85→0.95→0.85).
- Adaptive Hyperparameter Tuning - Automatically adjusts learning rate and weight decay based on train/val loss dynamics.
Memory & Data Optimizations
- Expandable Memory Segments - PyTorch memory allocator setting that reduces fragmentation.
- Kernel Warmup - Pre-compiling and warming up kernels before actual training to avoid first-step slowdown.
- Asynchronous Data Loading - Background threads preload the next data shard while training continues.
- BOS-Aligned Batching - Sequences are aligned to document boundaries (BOS tokens) for more natural training.
- Pin Memory - Keeps data in page-locked memory for faster CPU→GPU transfers.
- Non-Blocking Transfers - Async GPU transfers that overlap with computation.
- set_to_none=True - More efficient way to zero gradients than setting them to zero tensors.
Training Efficiency Tricks
- Variable Attention Window Sizes - Different layers use different block masking sizes (some see more context, some less).
- Logit Capping - Applies 30·sigmoid(logits/7.5) to prevent extreme values (see the sketch after this list).
- Vocabulary Size Rounding - Rounds vocab to multiples of 128 for better GPU utilization.
- Strategic Initialization - Zero initialization for output projections, uniform bounded for inputs.
- Checkpoint Resumption - Can pause and resume training without losing progress.
- Early Stopping - Automatically stops when target validation loss is reached.
- Frequent Checkpointing - Saves model every validation step to prevent data loss.
- Efficient Gradient Zeroing - Only zeroes gradients after they're used, not before.
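A few of the items above are simple enough to show in a couple of lines. Here is a minimal PyTorch sketch of the squared-ReLU activation, the logit capping formula from the list, and the warmup-stable-decay schedule; the base learning rate is an arbitrary placeholder, while the 100-step warmup, 80% plateau, and capping constants follow the descriptions above:

```
import math

import torch

def relu_squared(x):
    # Squared ReLU: ReLU(x)^2 instead of GELU.
    return torch.relu(x).square()

def cap_logits(logits):
    # Logit capping: 30 * sigmoid(logits / 7.5) bounds values to (0, 30).
    return 30.0 * torch.sigmoid(logits / 7.5)

def wsd_lr(step, total_steps, base_lr=1e-3, warmup=100, stable_frac=0.8):
    # Warmup-stable-decay: linear warmup, flat plateau, then cosine decay.
    stable_end = int(total_steps * stable_frac)
    if step < warmup:
        return base_lr * step / warmup
    if step < stable_end:
        return base_lr
    t = (step - stable_end) / max(1, total_steps - stable_end)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * t))
```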
submitted by /u/teachersecret
[link] [comments]
🔗 @binaryninja@infosec.exchange Free Candy anyone? Well, not quite free candy, but it might seem like it if mastodon
Free Candy anyone? Well, not quite free candy, but it might seem like it if you’re a user of the Free edition of Binary Ninja! In 5.2, we’re adding many new features from the paid versions to Free: Objective-C workflow, WARP plugin, DWARF Import and TTD support! https://binary.ninja/2025/11/13/binary-ninja-5.2-io.html#free-candy
-
🔗 HexRaysSA/plugin-repository commits sync repo: +2 plugins, +2 releases rss
sync repo: +2 plugins, +2 releases ## New plugins - [efiXplorer](https://github.com/binarly-io/efixplorer) (6.1.1) - [fwhunt-ida](https://github.com/binarly-io/fwhunt-ida) (1.0.1) -
🔗 r/wiesbaden Looking for catering around Wiesbaden/Mainz rss
Hello! I'm getting married next summer at a venue without catering, about 40 km west of Wiesbaden. We'll have around 70 guests and are now looking for a good caterer for the buffet.
Do you have any recommendations?
submitted by /u/ThreePenguins
[link] [comments] -
🔗 @HexRaysSA@infosec.exchange ⌨️ Introducing HCLI: The Modern Command-Line Interface for IDA. mastodon
⌨️ Introducing HCLI: The Modern Command-Line Interface for IDA.
Automate workflows, simplify plugin development across platforms, manage licenses and more.
-
🔗 r/LocalLLaMA How come Qwen is getting popular with such amazing options in the open source LLM category? rss
To be fair, apart from Qwen, there is also Kimi K2. Why this uptick in their popularity? OpenRouter shows a 20% share for Qwen, and the various evaluations certainly favor the Qwen models when compared with Claude and DeepSeek.

The main points I feel are working in Qwen's favor are its cheap prices and the open-source models. This model doesn't appear to be sustainable, however: it will require a massive inflow of resources and talent to keep up with giants like Anthropic and OpenAI, or Qwen will quickly become a thing of the past. The recent wave of frontier model updates means Qwen must show sustained progress to maintain market relevance.

What's your take on Qwen's trajectory? I'm curious how it stacks up against Claude and ChatGPT in your real-world use cases.

submitted by /u/Puzzleheaded_Toe5074
[link] [comments]
🔗 Hex-Rays Blog Introducing HCLI: The Modern Command-Line Interface for IDA rss
If you've been working with IDA Pro for any length of time, you know the toil of maintaining a tidy reverse engineering workspace: manually downloading installers from the portal, hunting for the right license file, copying plugins into obscure directories, and wrestling with SDK paths when building native plugins. For years, these were just accepted as part of the IDA experience. Today, we're going to show you a better way.

-
🔗 HexRaysSA/plugin-repository commits fix mirroring of plugin logos rss
fix mirroring of plugin logos closes #3 and https://github.com/HexRaysSA/ida-hcli/issues/115 -
🔗 HexRaysSA/plugin-repository commits Merge pull request #2 from 0xdea/v1 rss
Merge pull request #2 from 0xdea/v1 update sync workflow to rebuild every 4 hours -
🔗 HexRaysSA/plugin-repository commits fix deployed content preview rss
fix deployed content preview -
🔗 HexRaysSA/plugin-repository commits update known-repos with existing entries from plugins.hex-rays.com rss
update known-repos with existing entries from plugins.hex-rays.com ``` curl 'https://api.eu.hex-rays.com/plugin-repository/search?limit=1000' \ -X 'GET' \ -H 'Pragma: no-cache' \ -H 'Accept: application/json, text/plain, */*' | jq ".plugins.hits[].slug" | sort ``` -
🔗 HexRaysSA/plugin-repository commits add justfile rss
add justfile -
🔗 HexRaysSA/plugin-repository commits hugo: fix missing directory rss
hugo: fix missing directory closes #116 -
🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.
submitted by /u/AutoModerator
[link] [comments] -
🔗 Jessitron Speak So They Can Hear You rss
"My manager needs to put an ROI on everything," an SRE leader told me at KubeCon this week. Yeah, I feel that. We know the value of SRE, smoothing all other work so that production can produce. Putting that into dollars though? It's hard to count the cut costs of incidents that didn't happen. Yet someone has to justify each effort in ROI terms to keep the team funded.
On the way to KubeCon I read about Louis Agassiz, a famous naturalist in the nineteenth century. Back then, people didn't value studying nature for its own sake. Learning the names of plants, observing the world in detail, and keeping detailed notes were frivolous activities, shiftless, wasteful. A better person engaged in industry every hour of the day, with busy hands crafting objects of obvious use. But Agassiz made taxonomy sound important! He justified it in religious terms: by studying nature, he could learn about God's hierarchy. He could find moral instruction in the differences between species.
This sounds ridiculous now, but it spoke to the values of the day. It let Agassiz fund teams and expeditions full of scientists to catalog the world, increasing human knowledge. Today, we value increased knowledge for its own sake. Back then, knowledge was not respected, but virtue was. Agassiz put an ROI on science, in the currency of the day.
The taxonomists and botanists, fans of fishes and lovers of leaves, felt the importance of their work. It took a charismatic leader like Agassiz to make it relevant to people who controlled resources.
SREs, developers, and the managers and directors who establish and protect those programs feel intuitively that this work is crucial. Their work safeguarding the software systems lets everyone keep working, lets the business move forward with options. It takes a leader to know this, and work it into the kind of narrative that VPs need to convince the executives, that executives need to satisfy the board.
Cultural value systems change, personal perspectives shift. I have advanced from "code quality is a virtue" to "appropriate attention to malleability keeps our options open." While the people I respect value system health within complexity, capitalist business culture demands we linearize into an ROI story.
Someone has to say that upgrading our software observability is going to save money in terms of shorter incidents and increased development speed, with a bonus of lowering the risk of AI-generated code. Find some ways to put numbers around that. No, it is not really measurable. From developers to CTO and CEO, we know system health is not a linear outcome. We know there's more to the story, and still we need the story.
Agassiz believed his story, though. After On the Origin of Species came out, scientists around the world gradually shifted to working from evolution, finding family trees in the fishes and ferns. Agassiz never came around. Every species was a thought of God, and he would not give that up.
We don't have to believe that the real reason to upgrade our libraries and architecture is some specific dollar savings. We know that it eases everything we build on top. We can choose our timing and latch on to some feature that is perceived as profitable, and we can point to our upgrade as facilitating that in particular. Keep both stories in our heads, the deep benefits and the nominal ones.
When we understand the deeper value of what we're trying to do, then expressing what we need in terms relevant to people who control the resources is not lying. It's leadership.
-
🔗 r/LocalLLaMA ChatGPT understands its creator rss
Even ChatGPT knows "Open Source" seems unlikely when it comes to OpenAI

submitted by /u/mtmttuan
[link] [comments]
🔗 Rust Blog Launching the 2025 State of Rust Survey rss
It’s time for the 2025 State of Rust Survey!
The Rust Project has been collecting valuable information about the Rust programming language community through our annual State of Rust Survey since 2016, which means that this year marks the tenth edition of the survey!
We invite you to take this year’s survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. The results will allow us to more deeply understand the global Rust community and how it evolves over time.
Like last year, the 2025 State of Rust Survey will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until December 17. Trends and key insights will be shared on blog.rust-lang.org as soon as possible.
We are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the main survey page:
- English
- Chinese (Simplified)
- Chinese (Traditional)
- French
- German
- Japanese
- Ukrainian
- Russian
- Spanish
- Portuguese (Brazil)
Note: the non-English translations of the survey are provided in a best-effort manner. If you find any issues with the translations, we would be glad if you could send us a pull request to improve the quality of the translations!
Please help us spread the word by sharing the survey link via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.
This survey would not be possible without the time, resources, and attention of the Rust Survey Team, the Rust Foundation, and other collaborators. We would also like to thank the following contributors who helped with translating the survey (in no particular order):
- @jieyouxu
- @adriantombu
- @llogiq
- @Marcono1234
- @tanakakz
- @YohDeadFall
- @Kivooeo
- @avrong
- @igarai
- @weihanglo
- @tyranron
- @leandrobbraga
Thank you!
If you have any questions, please see our frequently asked questions.
We appreciate your participation!
Click here to read a summary of last year's survey findings.
By the way, the Rust Survey team is looking for new members. If you like working with data and coordinating people, and would like to help us out with managing various Rust surveys, please drop by our Zulip channel and say hi.
-
🔗 Will McGugan The Toad Report #3 rss
Welcome to the third issue of the Toad Report. If you are new here, Toad is a universal interface for AI that I am currently building.
To date, most of my work on Toad has been focussed on building the "conversation" view, where the user can converse and interact with their chosen AI agent. This has come together rather well, with a polished enough user experience for Hugging Face to start building integrations with their technology! And all before Toad's official release.
Some technologies are indistinguishable from magic... @willmcgugan pic.twitter.com/6idsU3dIDa
-- Shaun Smith (@evalstate) November 17, 2025
The conversation view can be considered the body of Toad. But a body needs a head. In the case of Toad, it needs an interface where the user can find, install, and launch agents. Which is what I've been working on recently.
The following will be the first view the user will see if they launch `toad` with no arguments:
From here the user can navigate the (growing) list of compatible agents. When they select one of interest, Toad will pop up a dialog with information (in beautiful markdown) regarding the agent.
In this dialog is a pull down list of possible actions associated with the agent—where there will typically be at least an “install” action.

Clicking the “Go” button downloads and installs the agent without ever leaving the app:

When an agent is installed, it is placed in the "quick launch" area. Each entry in the quick launch has an associated digit, so the user can jump straight to that agent.

When an agent is selected (quick launch or other), the user can press space to launch the conversation view and begin their agentic coding session.
I wanted the experience to be as friction-free as possible. There are many coding agents out there, but not all of them have the same visibility as those offered by big tech. I'm hoping this will promote the less well-funded agents.
Here's a video I recorded of the installation process. The interface is a little older than the screenshots, but you should get the idea…
Found this interesting?
Follow me on the socials where I will be posting regular updates. You can also join the Textual Discord Server if you want to chat with me or the Textual community.
Join the `#toad` channel if you would like an invite to the Toad repository. I'll be sending more out in a week or two.

Thanks for reading!
-
- November 16, 2025
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2025-11-16 rss
IDA Plugin Updates on 2025-11-16
New Releases:
- IDA-VTableExplorer Release 2025.11.16
- parascope parascope v0.3.0
- sig-importer Signature Importer v1.0.5 for IDA Pro 9.x
Activity:
- dotfiles
- FindFunc
- ida-pro-mcp
- IDA-VTableExplorer
- ida_ifl
- parascope
- 42e23e31: add PyPI based distribution and plugin for hcli (#1)
- pwnable
- 0a3b5229: add sizcaller solution
- sig-importer
-
🔗 jellyfin/jellyfin 10.11.3 release
🚀 Jellyfin Server 10.11.3
We are pleased to announce the latest stable release of Jellyfin, version 10.11.3! This minor release brings several bugfixes to improve your Jellyfin experience. As always, please ensure you take a full backup before upgrading!
You can find more details about and discuss this release on our forums.
Changelog (25)
📈 General Changes
- Fix series DateLastMediaAdded not updating when new episodes are added [PR #15472], by @theguymadmax
- Fix movie titles using folder name when NFO saver is enabled [PR #15529], by @theguymadmax
- Fix tmdbid not detected in single movie folder [PR #14955], by @theguymadmax
- Fix font extraction for certain transcoding settings [PR #15502], by @Iksas
- Fix playlist DateCreated and DateLastMediaAdded not being set [PR #15508], by @theguymadmax
- Add 1 minute tolerance for NFO change detection [PR #15514], by @theguymadmax
- Remove InheritedTags and update tag filtering logic [PR #15493], by @theguymadmax
- Fix .ignore handling for directories [PR #15501], by @theguymadmax
- Fix gitignore-style not working properly on windows. [PR #15487], by @CBPJ
- Improve season folder parsing [PR #15404], by @theguymadmax
- Don't error out when searching for marker files fails [PR #15466], by @Bond-009
- Fix NullReferenceException in GetPathProtocol when path is null [PR #15462], by @theguymadmax
- Resolve symlinks for static media source infos [PR #15263], by @revam
- Check if target exists before trying to follow it [PR #15468], by @Bond-009
- Fix collection grouping in mixed libraries [PR #15373], by @theguymadmax
- Fix AncestorIds not migrating [PR #15446], by @theguymadmax
- Fix System.NullReferenceException when people's role is null (10.11.z) [PR #15441], by @IceStormNG
- Invalidate parent folder's cache on deletion/creation [PR #15423], by @theguymadmax
- Fix name filters to use only SortName [PR #15381], by @theguymadmax
- Don't enforce a minimum amount of free space for the tmp and log dirs [PR #15390], by @Bond-009
- Fixed missing sort argument [PR #15413], by @JPVenson
- Fix item count display for collapsed items [PR #15380], by @theguymadmax
- Update branding in Swagger page [PR #15422], by @nielsvanvelzen
- Fix search terms using diacritics [PR #15435], by @theguymadmax
- feat(sqlite): add timeout config [PR #15369], by @evanreichard
-
🔗 @cxiao@infosec.exchange RE: mastodon
RE: https://hachyderm.io/@seri/115561181196872932
Really great post on both Windows reversing AND some great tips on using Rust to write a small shim utility DLL! Appreciate the specific settings and fun linker flags mentioned here too, to get the Rust DLL as small as possible.
#rustlang #systemsprogramming #windows #reversing #ReverseEngineering
-
🔗 r/reverseengineering SNES CD is Coming Back; progress on reverse engineering a non-existent console rss
submitted by /u/chicagogamecollector
[link] [comments] -
🔗 r/LocalLLaMA AMD Ryzen AI Max 395+ 256/512 GB Ram? rss
I'm looking at the new AI boxes using the Ryzen AI Max+ 395 (GMKtec EVO-X2, Minisforum's upcoming units, etc.) and I'm wondering if we'll actually see higher-end RAM configs — specifically 256GB or even 512GB LPDDR5X.

Right now most spec sheets cap out at 128GB LPDDR5X, but the platform itself has a very wide memory bus and is clearly built for AI workloads, not just typical mini-PC use cases. Since these boxes are heavily marketed for local LLM inference, higher RAM would make a massive difference (loading larger models, running multiple models in parallel, bigger context windows, etc.). We also know these boxes can be interconnected / clustered for distributed inference, which is great — but a single node with 256–512GB would still be incredibly useful for running larger models without sharding everything.

So I'm curious what the community thinks:

1. Is 256GB or 512GB technically feasible on the 395 platform given LPDDR5X packaging, power, and controller limits?
2. Is the current 128GB ceiling just an OEM choice, or is there a hard limit?
3. Would you personally buy a 256GB/512GB configuration for local LLM work?
4. Or do you think the future is more about multi-box interconnect setups instead of big single-node memory pools?

Very interested to hear from anyone who follows AMD's memory controller architecture or has insight on what GMKtec / Minisforum might be planning next. Anyone have some leaked information about what is next?

submitted by /u/quantier
[link] [comments]
🔗 r/LocalLLaMA Heretic: Fully automatic censorship removal for language models rss
Dear fellow Llamas, your time is precious, so I won't waste it with a long introduction. I have developed a program that can automatically remove censorship (aka "alignment") from many language models. I call it Heretic (https://github.com/p-e-w/heretic).

If you have a Python environment with the appropriate version of PyTorch for your hardware installed, all you need to do in order to decensor a model is run `pip install heretic-llm` and then `heretic Qwen/Qwen3-4B-Instruct-2507` (replace with the model of your choice).

That's it! No configuration, no Jupyter, no parameters at all other than the model name. Heretic will
- Load the model using a fallback mechanism that automatically finds a dtype that works with your setup
- Load datasets containing "harmful" and "harmless" example prompts
- Benchmark your system to determine the optimal batch size for maximum evaluation speed on your hardware
- Perform directional ablation (aka "abliteration") driven by a TPE-based stochastic parameter optimization process that automatically finds abliteration parameters that minimize both refusals and KL divergence from the original model (see the sketch after this list)
- Once finished, give you the choice to save the model, upload it to Hugging Face, chat with it to test how well it works, or any combination of those actions
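For readers new to the underlying technique, here is a minimal sketch of the directional-ablation step itself (the fourth bullet above). This is the textbook form of abliteration under simplifying assumptions, not Heretic's actual optimized implementation, and the tensors are random stand-ins for real hidden states and weights:

```
import torch

def refusal_direction(h_harmful, h_harmless):
    # Difference of mean hidden states on "harmful" vs. "harmless" prompts,
    # normalized to a unit vector. Shapes: (num_prompts, hidden_dim).
    d = h_harmful.mean(dim=0) - h_harmless.mean(dim=0)
    return d / d.norm()

def ablate(weight, direction):
    # Project the direction out of a weight matrix that writes into the
    # residual stream: W <- (I - v v^T) W, so the layer can no longer
    # produce an output component along v.
    v = direction.unsqueeze(1)           # (hidden_dim, 1)
    return weight - v @ (v.T @ weight)   # (hidden_dim, in_dim)

# Stand-in data: 8 prompts per set, hidden size 16, one 16x16 weight.
h_bad, h_ok = torch.randn(8, 16), torch.randn(8, 16)
v = refusal_direction(h_bad, h_ok)
W = ablate(torch.randn(16, 16), v)
print((W.T @ v).norm())  # ~0: the ablated weight no longer writes along v
```

Heretic's contribution is automating the search over where and how strongly to apply this, trading off refusal rate against KL divergence.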
Running unsupervised with the default configuration, Heretic can produce decensored models that rival the quality of abliterations created manually by human experts:

| Model | Refusals for "harmful" prompts | KL divergence from original model for "harmless" prompts |
|---|---|---|
| google/gemma-3-12b-it (original) | 97/100 | 0 (by definition) |
| mlabonne/gemma-3-12b-it-abliterated-v2 | 3/100 | 1.04 |
| huihui-ai/gemma-3-12b-it-abliterated | 3/100 | 0.45 |
| p-e-w/gemma-3-12b-it-heretic (ours) | 3/100 | 0.16 |

As you can see, the Heretic version, generated without any human effort, achieves the same level of refusal suppression as other abliterations, but at a much lower KL divergence, indicating less damage to the original model's capabilities.
Heretic supports most dense models, including many multimodal models, and several different MoE architectures. It does not yet support SSMs/hybrid models, models with inhomogeneous layers, or certain novel attention systems.
You can find a collection of models that have been decensored using Heretic on Hugging Face.
Feedback welcome!
submitted by /u/-p-e-w-
[link] [comments] -
🔗 r/LocalLLaMA Finally a good use case for your local setups rss
https://www.bbc.com/news/articles/c0rpy7envr5o

submitted by /u/lakySK
[link] [comments]
🔗 r/wiesbaden Marktkirche rss
submitted by /u/Constant_Intern_8986
[link] [comments] -
🔗 @malcat@infosec.exchange [#Kesakode](https://infosec.exchange/tags/Kesakode) updated to 1.0.45! mastodon
#Kesakode updated to 1.0.45!
● New malware entries: Fullmetal, Laplas, RoningLoader, ShadowRat, Silentsweeper and SystemShock
● Updated malware entries: 29
● FP-fixed signatures: 931
● 16587 new clean programs whitelisted
● 3452882 new functions
● 165257 new strings -
🔗 r/reverseengineering iOS 26 - Reversing Swift Like a Pro rss
submitted by /u/gangana3
[link] [comments] -
🔗 r/reverseengineering AI-Assisted Reverse Engineering with Ghidra rss
submitted by /u/GuyWizStupidComments
[link] [comments] -
🔗 Register Spill Joy & Curiosity #62 rss
Here we go again, packing for a trip. I'm flying to San Francisco for the week. Or as my dad, who's never been to the U.S., and nobody else I've spoken to in the last decade, calls it: Frisco.
And this time? Hand luggage only. But I'm bringing the black shirt.
-
Incredible: Dithering, Part 1. Incredible in the literal sense: after the tenth or so illustration you start to think, no way, no way they built all of this. But yes they did, they did. What a gem! And exactly like a Bartosz Ciechanowski post, this also made me wonder: imagine if all learning could look like this.
-
Yours truly wrote about the context window and context management in Amp. I had a lot of fun with this: making the diagrams in Monodraw was fun, creating the orbs with Midjourney and GPT-5 was fun, using this one model that I'm falling in love with to add the orbs was fun, building a script to invert the colors of the diagrams was fun. Hope you have fun reading it.
-
"People wouldn't watch the robot Olympics, for example. People don't watch the chess computer world championships. They watch the chess world championships because they're interested in other people going through the journey of life and wrestling with the same things they're wrestling with." This is from this interview with comedy writer Madeleine Brettingham. And here's one of Drew Breunig's thoughts on the interview: "I had never really thought about what virtuosity meant as a concept, but the way it was discussed in this forum has since solidified the idea of it in my brain: virtuosity can only be achieved when the audience can perceive the risks being taken by the performer." Both are worth reading.
-
Yes: "The fundamental number that has defined software development is a simple one: $150/hour. $150/hour is a reasonable approximation of the cost of a professional senior software developer in the United States at this time. That number is large, and the reasons for it are many, but fundamentally it is a simple question of supply and demand. [...] Virtually every aspect of how software development is done has evolved around that $150/hour number. With developers being rare and expensive, every line of code has to justify a very high cost. Decisions around how software should be designed, built, and tested are made not with respect to how to make the software the best it can be, but rather to optimize around that grinding $150/hour number. [...] So what happens when that brutal economics changes? Five months ago, it did, with the initial release of agentic AI for software development. While software developers have to do many more things at their jobs than coding, that $150/hour was justified purely by the fact that only software developers could create code. Worse they could only create it through essentially handcrafted processes that were only some constant factor better than scribing it into punch cards. As of five months ago, that justification became false." This is from Software Development in the Time of Strange New Angels. Read the whole thing. It's very good. If you scoffed at that quote: yes, read it.
-
Nano Banana can be prompt engineered for extremely nuanced AI image generation. As someone who has only recently started to dive into Midjourney and is now using Midjourney and ChatGPT to generate images and send them back and forth between models: this was fantastic! But even as someone who's generated a few images, the kicker at the end, when the model takes HTML and produces a render of the page… Well, that's something else entirely. Wow.
-
I've heard Tyler Cowen say we "should write for the AIs" before, but I could never really make sense of it -- I'm writing on the Internet, aren't I? Isn't that writing for the AIs? Should I address them? Say hello and thank you? This article here -- Baby Shoggoth Is Listening -- digs into the idea. Gwern is quoted too. But… I don't know, I don't know. I still don't think I know what it means. But it's an interesting thought, so here we are.
-
"Personality basins are a mental model that I use to reason about humans within their environment: from modelling why people are the way they are, how they change over time, how mental illnesses and addiction function along with how we should look for their cures, and how the attention economy optimizes itself to consume all of your free time."
-
"Google are killing XSLT!" is the headline at xslt.rip and… Look, I didn't know what XSLT was, and now that I do I'm not sure whether it's a bad thing to kill? But what I am sure about is this: that website is amazing. Click on that link.
-
Here's another amazing website with a URL to match: how-did-i-get-here.net. Very well done. And the writing, too: "The Internet is often described as an open, almost anarchistic network connecting computers, some owned by people like you and me, and some owned by companies. In reality, the Internet is a network of corporation-owned networks, access and control to which is governed by financial transactions and dripping with bureaucracy." And now I'm listening to this again.
-
Tom MacWright, one of the co-founders and the CTO of val.town, wrote this honest, direct, unfluffy, can't-believe-how-honest-actually retrospective on Val Town 2023-2025. "One thing I've thought for a long time is that people building startups are building complicated machines. They carry out a bunch of functions, maybe they proofread your documents or produce widgets, or whatever, but the machine also has a button on it that says 'make money.' And everything kind of relates to that button as you're building it, but you don't really press it."
-
I really, really, really wanted to scoff at curated.supply and say something like "who the hell puts a Porsche 911 and a Rolex and a freaking tea kettle on the same page?" but then I got sucked in and now I want to kind of buy this orb lamp.
-
"If you are having a problem with some code and seeking help, preparing a Short, Self Contained, Correct Example (SSCCE) is very useful. But what is an SSCCE?" If you haven't worked on a popular open-source project, your guess as to how many people struggle with producing a proper bug report is likely off, very off. I had always assumed everyone knows what a good ticket looks like -- until I worked on Zed. Now, if someone submits a bug report with an SSCCE, I treat it as if I had found a gold coin in the pocket of my jacket.
-
When Your Hash Becomes a String: Hunting Ruby's Million-to-One Memory Bug. This was great. I'm fascinated that they even managed to reproduce it. I guess at some point you'd run into it, but, wow, someone got lucky there in a very unlucky situation.
-
I always love listening to comedians talk shop and this one was great: Louis C.K. on a podcast with David Spade and Dana Carvey.
-
This post about lazygit was very interesting. I've not really used it, except for starting it a handful of times, but the section "What's amazing in lazygit?" is interesting because, yes, it's about lazygit, but it's also about TUIs and terminal programs in general and right now a lot of coding agents are in the terminal and… well, it's interesting, isn't it?
-
Anthropic is reporting that they have been "disrupting the first reported AI-orchestrated cyber espionage campaign" and while, as you know, I love to read "state-sponsored group" in connection with cyber attacks, this one was… strange. Take these two paragraphs: "At this point they had to convince Claude--which is extensively trained to avoid harmful behaviors--to engage in the attack. They did so by jailbreaking it, effectively tricking it to bypass its guardrails. They broke down their attacks into small, seemingly innocent tasks that Claude would execute without being provided the full context of their malicious purpose. They also told Claude that it was an employee of a legitimate cybersecurity firm, and was being used in defensive testing. The attackers then initiated the second phase of the attack, which involved Claude Code inspecting the target organization's systems and infrastructure and spotting the highest-value databases. Claude was able to perform this reconnaissance in a fraction of the time it would've taken a team of human hackers. It then reported back to the human operators with a summary of its findings." Now, tell me, why did they put this sentence in: *Claude was able to perform this reconnaissance in a fraction of the time it would've taken a team of human hackers.* Are you reporting on an attack that you averted, while telling us that your "extensively trained" model has been jailbroken, and then, kind of, brag? Is this a security report, or an advertisement? I was asking myself that until I made it to this paragraph: "Overall, the threat actor was able to use AI to perform 80-90% of the campaign, with human intervention required only sporadically (perhaps 4-6 critical decision points per hacking campaign). The sheer amount of work performed by the AI would have taken vast amounts of time for a human team. At the peak of its attack, the AI made thousands of requests, often multiple per second--an attack speed that would have been, for human hackers, simply impossible to match."
-
Jason Bateman talking to Marc Maron about being a director: "Just exercise taste. Just sit there and watch the results of other people's work and say yes to this, no to that, a little more of this, a little less of that. It's the one person on the set who doesn't have a job. So you don't actually need to do anything. I'm not suggesting that's the right way to be a director but I worked with a million who worked that way. It's very common. You need not be overwhelmed by 'I gotta do a bunch of shit.' Walk before you run. So just sit there and be the arbiter of taste."
-
"Yes: the Referendum gets unattractively self-righteous and judgmental. Quite a lot of what passes itself off as a dialogue about our society consists of people trying to justify their own choices as the only right or natural ones by denouncing others' as selfish or pathological or wrong. So it's easy to overlook that hidden beneath all this smug certainty is a poignant insecurity, and the naked 3 A.M. terror of regret. The problem is, we only get one chance at this, with no do-overs. Life is, in effect, a non-repeatable experiment with no control."
-
I've had thoughts similar to those expressed here -- things have changed dramatically in the last two years, most people apparently haven't realized it yet, and things will change even more -- but, man, was I surprised that it's Will Larson writing this: "In the 2010s, the morality tale was that it was all about empowering engineers as a fundamental good. Sure, I can get excited for that, but I don't really believe that narrative: it happened because hiring was competitive. In the 2020s, the morality tale is that bureaucratic middle management have made organizations stale and inefficient. The lack of experts has crippled organizational efficiency. Once again, I can get behind that-there's truth here-but the much larger drivers aren't about morality, it's about ZIRP-ending and optimism about productivity gains from AI tooling." Highly, highly recommend reading this. If you haven't noticed the shift yet, I hope this gives you a glimpse.
Have you also gone hand luggage only, thinking "I can do this, yeah, I can do this", and then heard this voice in your head that wonders "but what if I spill black coffee on myself on the day I already fell into a chocolate ice cream cart?" -- yeah? You should subscribe:
-
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
sync repo: +1 plugin, +1 release ## New plugins - [parascope](https://github.com/xorpse/parascope) (0.3.0) -
🔗 Kevin Lynagh Pindakaas party, plywood problems, and CNC daydreams rss
Peanut butter tasting (!!!)
I recently re-read Gwern's On Having Enough Socks and, uh, now I'm hosting a blind tasting of a dozen peanut butters this Saturday, November 22 in Amsterdam. If you're in town, email me for the event details. Together we will find the best 100% pindakaas met stukjes.
A shoe tower
The weather has turned chilly, and my girlfriend is highly encouraging me to build some bedroom storage so our coats and sweaters are more accessible.
Since making a standard box wardrobe out of plywood would be more expensive and time consuming yet aesthetically indistinguishable from Ikea, I decided to explore a lightweight frame using the hardwood dowels left over from my floating castle desk.
In particular, I wanted to cut a groove in the dowels to hold (and hide) the raw edge of thin plywood shelves.
I 3D-printed a custom shoe for my lil' trim router:

but this yielded poor results, so I ended up buying a full-sized plunge router.
As it turns out, it takes a lot to hold round wood down securely to a table and route a groove exactly down the center. I eventually got there with enough scrap material, clamps, and a track saw track as a straight edge:

Since I was using decorative pan head screws, I needed both through holes (for the screw shafts) and counterbores (for the shiny heads) along the dowels. Free-handing with a drill worked about as well as you might expect -- turns out it's very obvious when a 15mm diameter counterbore isn't exactly centered on a 20mm diameter dowel -- so I ended up buying a benchtop drill press:

This Bosch PBD 40 has some awesome features like electronic speed control and digital depth readout. Unfortunately, it also came with noticeable wiggle in the chuck and about 1mm deviation between the base plate's v-groove and the drill center, both of which make it impossible to throw material down and get a repeatable hole location -- you have to carefully line everything up first.
As far as I can tell (please let me know if I'm wrong!) there's not really anything better on the market until you're willing to spend for the $1,400 Nova Viking benchtop drill press.
After much drilling, I decided there was absolutely no way I was going to enjoy building a shoe tower using dowels (much less an entire bedroom storage system), so I demoted this prototype down to "plant stand", which ended up turning out OK:

(Just ignore all those extra holes.)
I used brass tube to hide the screw threads. Originally I thought I'd be able to easily cut brass tube with a hacksaw -- turns out absolutely not; brass is way too gummy and it's impossible (for me, anyway) to maintain a consistent cut line by hand. Luckily, I found one of these $10 things which, pleasingly, is extremely good at its one job:

(Notice the right-side piece was crushed in a drill chuck and has an all-over-the-place failed cut from me earlier attempting to cut it by spinning it against a clamped-down hacksaw.)
For my second shoe tower attempt, I wanted a lightweight plywood frame combined with fabric -- something in the direction of Mieke Meijer's Airframe cabinet (which looks cool as hell).
That build went much more smoothly, and the final product turned out better than I expected:

The only hiccup during the build was discovering that my dowel jig won't work with material thinner than 15mm (my plywood was 12mm). I made a sort of jig/guide using my new drill press and a bit of scrap wood, but the final joints still turned out a bit wonky.
This is a typical woodworking experience for me -- a seemingly simple task like "drill holes centered in a plywood edge" takes two hours longer than expected, and even then doesn't come out quite right.
¯\_(ツ)_/¯

Have you tried rubbing some computer on it?
Back when I had a CNC router in my closet, I used to think it'd be so much faster to work in a more traditional workshop with proper saws and drill presses and stuff.
But now that I'm frustrated in my back garden shed, discovering that I'm missing the appropriate drill size / pattern bit / alignment jig and spending hours carefully drilling not-quite-centered holes, I've started daydreaming that, surely, a "conversational" CNC router workflow could be better: clamp stuff down to the machine bed, get a high resolution / magnified image of the work, and specify exactly where and what to cut by pointing at stuff in the computer.
Ideally you'd get an orthographic image -- a top-down projection without any perspective distortion.
Since I'm not about to shell out a few grand for a telecentric lens, I started falling down a rabbit hole of building my own camera by using the machine gantry to move a large CCD line-sensor across the bed.
The TCD1304, for example, has 3648 pixels over a 29mm sensor line, runs about $10 on AliExpress, and is used by solid looking open-source hardware and firmware (they're popular sensors for making your own spectrometer, apparently).
However, I realized that I know basically nothing about optics. Seeing the details of this gigapixel camera from flatbed scanner CCD disabused me of the notion that I'd have a snowball's chance in hell of making a usable camera.
I spent some time with ChatGPT's Deep Research exploring other ideas, but it didn't come up with much besides reminding me that I've been spewing tokens on this bullshit for a while now:

Good ol' YouTube came to the rescue, though -- Paper Tools cleverly avoids the entire perspective distortion problem by only allowing you to pick coordinates from the center of the camera image.
I.e., rather than giving you a static orthographic image that you then CAD/CAM on top of, their workflow requires that you literally drive your CNC machine around to the exact point locations every time you select one.
Quite an obvious solution in hindsight, and I'm curious to watch how that project unfolds.
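For what it's worth, the core of the trick fits in a few lines. Here's a minimal sketch -- the Machine class is a made-up stand-in for a CNC controller, not Paper Tools' actual code -- of why center-only picking needs no lens calibration:

```python
# Minimal sketch of the "pick only from the camera center" idea.
# Machine is a made-up stand-in for a CNC controller. Because a
# feature is only ever read when it sits on the optical axis, lens
# distortion and perspective never enter the math: the picked
# coordinate IS the gantry position at confirm time.

class Machine:
    def __init__(self):
        self.x, self.y = 0.0, 0.0

    def jog(self, dx, dy):
        # In reality the operator drives this from a keyboard/pendant
        # while watching a live crosshair overlaid on the camera feed.
        self.x += dx
        self.y += dy

    def position(self):
        return (self.x, self.y)


machine = Machine()
picked = []

# Drive the crosshair onto each feature, then confirm:
machine.jog(120.5, 88.0)
picked.append(machine.position())
machine.jog(-40.0, 15.25)
picked.append(machine.position())

print(picked)  # [(120.5, 88.0), (80.5, 103.25)]
```

The tradeoff is speed -- every pick is a physical machine move -- but you sidestep lens distortion entirely (you'd still need the fixed camera-to-spindle offset).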
Next: Drawers
After a week of daydreaming about alternative, fluid conversational CAD/CAM interfaces for the CNC router currently across the Atlantic Ocean from me, I came back to my senses and decided to keep prototyping furniture ideas sans computer.
I figured solid wood might be more satisfying than plywood. Of course, I don't have a jointer, a planer, or any experience working with actual hardwoods, but I do have some leftover 2x softwood I can start with.
My current idea is to lean heavily on bright colors -- paint, 3d-printed plastic, and paracord.
I'm experimenting with cross-bracing using the latter two materials:

For the life of me I can't find a stopper knot that I can slide or otherwise tie under tension -- if you have any ideas or suggestions for how to get a tidy-looking knot that'll slide to the left, I'd love to hear them!
Misc. stuff
- My powered air respirator now works great: I ended up going with a dual-fan design and dropped the belt clip in favor of an easier-to-don sling bag.
- I love the engineering competence and details in this NTSB interview with a submersible expert. The names are all redacted, but you might be able to guess the interviewee from his first answer: "Well, I'm sure you're familiar with my film Titanic". I also particularly liked one of the safety discussions: "I think the most dangerous part of our whole operation was these young software engineers puking over the railing in a high sea."
- The Biochemical Beauty of Retatrutide: How GLP-1s Actually Work. I've never studied physiology, and this article was a perfect overview of one of the most exciting medical advances of recent years. I was also delighted when the author connected the biochemistry to familiar endurance-running feelings.
- Speaking of nicer visual / probing systems for CNCs, this now-on-Kickstarter Nestworks C500 has a bunch of nice-looking quality-of-life details. I swear it was just yesterday I dropped $10k for a Shopbot Desktop with a sketchy USB connection. I'm so lucky to have such deflationary hobbies.
- Your cheap furniture has a secret. A history of rubberwood.
- I'm renting a place with a gas cooktop, but I wanted induction, so I bought the €40 Ikea single-burner and am extremely happy with it. I'm normally not a fan of touch controls, but these are extremely responsive. The protocol between the display and the induction coil has also been reverse engineered, in case you want to add your own control loop. (I don't need another project, I don't need another project..)
- "They needed to connect the organ promptly to the patient manually, with a technique called anastomosis. The medical sutures of the time were too thick, and the needles too large, to avoid damaging delicate blood vessels. To overcome such challenges, the world needed a skilled seamstress."
- "Complexity is free in 3d printing, the limit of design geometry is mostly how much time you're willing to spend in CAD. I wanted to print the most complicated art piece I could think of."
- I was curious how the support structures were generated, so I asked Claude to read the code and it gave me a great explanation for $0.04. What a world!
- SpreadSheesh! by Dennis Heihoff -- a Clojure Electric-based spreadsheet with live code evaluation and custom UI rendering.