- Nviso vshell report
- jj auto-track none
- Transparent Leadership Beats Servant Leadership
- Writing a good CLAUDE.md | HumanLayer Blog
- My Current global CLAUDE.md
- December 12, 2025
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
## New plugins
- [ida-terminal-plugin](https://github.com/HexRaysSA/ida-terminal-plugin) (0.0.6)
🔗 p05wn/SuperHint v1.1.0 release
Migrate struct hint database from file system to netnodes
-
- December 11, 2025
-
🔗 Simon Willison GPT-5.2 rss
OpenAI reportedly declared a "code red" on the 1st of December in response to increasingly credible competition from the likes of Google's Gemini 3. It's less than two weeks later and they just announced GPT-5.2, calling it "the most capable model series yet for professional knowledge work".
Key characteristics of GPT-5.2
The new model comes in two variants: GPT-5.2 and GPT-5.2 Pro. There's no Mini variant yet.
GPT-5.2 is available via their UI in both "instant" and "thinking" modes, presumably still corresponding to the API concept of different reasoning effort levels.
The knowledge cut-off date for both variants is now August 31st 2025. This is significant: GPT-5.1 and GPT-5 both had a September 30, 2024 cut-off, and GPT-5 mini was May 31, 2024.
Both of the 5.2 models have a 400,000 token context window and 128,000 max output tokens - no different from 5.1 or 5.
Pricing-wise, 5.2 is a rare increase: it's 1.4x the cost of GPT-5.1, at $1.75/million input tokens and $14/million output tokens. GPT-5.2 Pro is $21.00/million input and a hefty $168.00/million output, putting it up there with their previous most expensive models, o1 Pro and GPT-4.5.
So far the main benchmark results we have are self-reported by OpenAI. The most interesting ones are a 70.9% score on their GDPval "Knowledge work tasks" benchmark (GPT-5 got 38.8%) and a 52.9% on ARC-AGI-2 (up from 17.6% for GPT-5.1 Thinking).
The ARC Prize Twitter account provided this interesting note on the efficiency gains for GPT-5.2 Pro:
A year ago, we verified a preview of an unreleased version of @OpenAI o3 (High) that scored 88% on ARC-AGI-1 at est. $4.5k/task
Today, we’ve verified a new GPT-5.2 Pro (X-High) SOTA score of 90.5% at $11.64/task
This represents a ~390X efficiency improvement in one year
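As a sanity check on that multiple (my own arithmetic, not from the ARC Prize post), dividing the two quoted per-task costs gives roughly the claimed figure:

```python
# Cost-per-task figures quoted above, from ARC Prize verification runs
o3_cost_per_task = 4500.0    # est. $/task, o3 (High), a year ago
gpt52_cost_per_task = 11.64  # $/task, GPT-5.2 Pro (X-High)

ratio = o3_cost_per_task / gpt52_cost_per_task
print(round(ratio))  # → 387
```

That is the raw cost ratio; the "~390X efficiency" framing also folds in the slightly higher score (90.5% vs 88%).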
GPT-5.2 can be accessed in OpenAI's Codex CLI tool like this:
```console
codex -m gpt-5.2
```
There are three new API models:
- gpt-5.2
- gpt-5.2-chat-latest - the model used by ChatGPT
- gpt-5.2-pro
OpenAI have published a new GPT-5.2 Prompting Guide.
It's better at vision
One note from the announcement that caught my eye:
GPT‑5.2 Thinking is our strongest vision model yet, cutting error rates roughly in half on chart reasoning and software interface understanding.
I had disappointing results from GPT-5 on an OCR task a while ago. I tried it against GPT-5.2 and it did much better:
```console
llm -m gpt-5.2 ocr -a https://static.simonwillison.net/static/2025/ft.jpeg
```
Here's the result from that, which cost 1,520 input tokens and 1,022 output tokens for a total of 1.6968 cents.
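A quick check of that cost against the listed GPT-5.2 prices (my arithmetic, not from the post):

```python
# Token counts from the OCR run above; prices in $/million tokens for GPT-5.2
input_tokens, output_tokens = 1520, 1022
input_price, output_price = 1.75, 14.00

cost_dollars = (input_tokens * input_price + output_tokens * output_price) / 1_000_000
print(round(cost_dollars * 100, 4))  # → 1.6968 (cents)
```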
Rendering some pelicans
For my classic "Generate an SVG of a pelican riding a bicycle" test:
```console
llm -m gpt-5.2 "Generate an SVG of a pelican riding a bicycle"
```
And for the more advanced alternative test, which tests instruction following in a little more depth:
```console
llm -m gpt-5.2 "Generate an SVG of a California brown pelican riding a bicycle. The bicycle must have spokes and a correctly shaped bicycle frame. The pelican must have its characteristic large pouch, and there should be a clear indication of feathers. The pelican must be clearly pedaling the bicycle. The image should show the full breeding plumage of the California brown pelican."
```

-
🔗 pydantic/mcp-run-python v0.0.22 (2025-12-11) release
What's Changed
- feat: when verbose logging, also log outputs of deno to console by @Kigstn in #23
- feat: add stateless streamable http mode (#8) by @PeterRodenkirchAA in #15
- pin deno to v2.5.5 by @samuelcolvin in #33
- Ignore `node_modules` when copying the TS code for deno by @tonyxwz in #32
New Contributors
- @Kigstn made their first contribution in #23
- @PeterRodenkirchAA made their first contribution in #15
- @tonyxwz made their first contribution in #32
Full Changelog: v0.0.21...v0.0.22
-
🔗 r/reverseengineering [pe-signgen] - Generate signatures & offsets for functions across all Windows 10/11 versions rss
-
🔗 idursun/jjui v0.9.8 release
Release Summary
This release includes experimental Lua scripting support for custom commands, several bug fixes, and UI improvements. The streaming command handler has been reworked to remove the 100ms delay incurred on every refresh (you should feel the difference), and issues with leader keys, parser colour handling, and preview panel focus have been resolved.
Key Highlights
🚀 Major Features
- Lua Scripting in Custom Commands (#415): Experimental support for writing custom commands using Lua scripts.
Initial version includes API for revision navigation, JJ command execution, clipboard operations, revset manipulation and displaying flash messages.
These are the currently available functions but expect the list to grow and change with each release.
Available Functions (v1):
- `revisions.current()` - Get currently selected change ID
- `revisions.checked()` - Get list of checked change IDs
- `revisions.refresh({keep_selections?, selected_revision?})` - Refresh revisions view
- `revisions.navigate({by?, page?, target?, to?, fallback?, ensureView?, allowStream?})` - Navigate revisions
- `revisions.start_squash({files?})` - Begin squash workflow
- `revisions.start_rebase({source?, target?})` - Start rebase operation
- `revisions.open_details()` - Open revision details view
- `revisions.start_inline_describe()` - Open inline describe editor
- `revset.set(value)` - Set custom revset
- `revset.reset()` - Reset to default revset
- `revset.current()` - Get active revset string
- `revset.default()` - Get default revset string
- `jj_async({...})` - Run JJ command asynchronously
- `jj({...})` - Run JJ command synchronously (returns output, err)
- `flash(message)` - Display flash message
- `copy_to_clipboard(text)` - Copy text to clipboard
Here are a couple of examples:
-
Appends `| ancestors(<change id of the current revision>, 2)` to the end of the revset and bumps the number with each execution:

```toml
[custom_commands.append_to_revset]
key = ["+"]
lua = '''
local change_id = revisions.current()
if not change_id then return end

local current = revset.current()
local bumped = false
local updated = current:gsub("ancestors%(" .. change_id .. "%s*,%s*(%d+)%)", function(n)
  bumped = true
  return "ancestors(" .. change_id .. ", " .. (tonumber(n) + 1) .. ")"
end, 1)

if not bumped then
  updated = current .. " | ancestors(" .. change_id .. ", 2)"
end

revset.set(updated)
'''
```
-
Inserts a new commit after the selected one and then starts inline describe on the new revision:

```toml
[custom_commands.new_then_describe]
key = ["N"]
lua = '''
jj("new", "-A", revisions.current())
revisions.refresh()
local new_change_id = jj("log", "-r", "@", "-T", "change_id.shortest()", "--no-graph")
revisions.navigate{to=new_change_id}
revisions.start_inline_describe()
'''
```
-
Copy to clipboard example:

```toml
[custom_commands.copy_to_clipboard]
key = ["X"]
lua = '''
local selections = revisions.checked()
if #selections == 0 then flash("none selected") end
local content = table.concat(selections, ",")
copy_to_clipboard(content)
'''
```
✨ Enhancements
- Key Sequences for Custom Commands (#420): Custom commands can now be triggered with multi-key sequences using the `key_sequence` property. Also adds a `desc` property for command descriptions. An overlay shows available sequences after pressing the first key.
Example:
```toml
[custom_commands.bookmark_list]
key_sequence = ["w", "b", "l"]
desc = "bookmarks list"
lua = '''
revset.set("bookmarks() | remote_bookmarks()")
'''
```
-
Faster Refresh (#412): Improved streaming command handling, eliminating the 100ms delay and making refreshes instant. Previously jjui would fail to launch or get stuck when jj emitted warning messages (e.g. deprecated config options like `git.push-new-bookmarks`).
-
Quick Search Highlighting (#414): Case-insensitive search with visual highlighting of all matches in the revisions view
-
Remember Unsaved Descriptions (#417): Descriptions are now preserved when you cancel, preventing accidental loss of work. Addresses the common frustration of accidentally hitting ESC and losing long commit messages with no way to recover them.
-
Squash Operation Toggle (#405): New `--use-destination-message` option for squash operations
🐛 Bug Fixes
- Preview Panel Focus Issue (#390): Fixed preview panel showing full commit diff instead of selected file diff when terminal regains focus
- EOF Error Handling (#418): Proper error messages when revset contains no revisions instead of getting stuck
- Parser Color Agnostic (#413): Fixed parsing issues when users configure ChangeID/CommitID/IsDivergent with same colors.
- Leader Key Timing (#416): Fixed leader key processing to prevent race conditions. Leader keys were completely non-functional in versions after v0.9.3 - the options would appear in the UI but do nothing when selected.
🎨 UI/UX Improvements
- Clear selected revisions with ESC key when not in editing/overlay/focused operations (#419)
- Better menu spacing for git and bookmarks
- Reduced preview debounce time back to 50ms for snappier response (#410). The 200ms debounce made the UI feel sluggish when navigating between revisions.
⚙️ Internal Improvements
- Introduced intent-based architecture for better separation of concerns (only implemented for revisions, flash for now)
- Moved flash intents to dedicated package
- Simplified details view rendering
- Better configuration organisation
What's Changed
- operation: Add use destination message to squash operation by @woutersmeenk in #405
- Preview panel shows whole commit diff instead of selected file's diff when terminal regains focus by @abourget in #390
- fix(streamer): handle warning messages by @idursun in #412
- parser: stringify log/evolog prefixes to be color agnostic by @baggiiiie in #413
- revisions: add highlight to QuickSearch, make search case insensitive by @baggiiiie in #414
- feat: Lua scripting in custom commands by @idursun in #415
- revisions: handle EOF error for revset without revisions by @baggiiiie in #418
- revisions: clear selected revisions on cancel by @baggiiiie in #419
- feat: custom commands with sequence keys by @idursun in #420
New Contributors
- @woutersmeenk made their first contribution in #405
- @abourget made their first contribution in #390
Full Changelog: v0.9.7...v0.9.8
-
🔗 jesseduffield/lazygit v0.54.0 release
Again we don't have any major new features this time (unless you count the support for alt-backspace for deleting words in the commit message editor, which is one of my favorite additions), but lots of smaller quality-of-life improvements and bug fixes. The most notable one is probably the fix for the stale index.lock problem, which was a very long-standing bug that seemed to affect some users much more than others for some reason.
Breaking Changes
-
The default sort order for local and remote branches has changed: it used to be 'recency' (based on reflog) for local branches, and 'alphabetical' for remote branches. Both of these have been changed to 'date' (which means committerdate). If you liked the old defaults better, you can revert to them with the following config:
```yaml
git:
  localBranchSortOrder: recency
  remoteBranchSortOrder: alphabetical
```
-
The default selection mode in the staging and custom patch building views has been changed to hunk mode. This is the more useful mode in most cases, as it usually saves a lot of keystrokes. If you want to switch back to the old line mode default, you can do so by adding the following to your config:
```yaml
gui:
  useHunkModeInStagingView: false
```
What's Changed
Enhancements 🔥
- Add confirmation for hard reset by @stefanhaller in #4704
- Provide user config defaults for UI-changeable settings by @stefanhaller in #4717
- Improve mouse handling of suggestions panel by @stefanhaller in #4726
- Add new command "Checkout previous branch" by @kyu08 in #4728
- Add confirmation for nuking the working tree by @DawidPietrykowski in #4727
- Support Alt+Backspace for word deletion in text areas by @rtzll in #4741
- Don't use hunk mode for added or deleted files even when useHunkModeInStagingView config is on by @stefanhaller in #4758
- Show [0] keybinding in main view title by @stefanhaller in #4754
- Draw divergence from base branch right-aligned in branches view by @stefanhaller in #4785
- Enable hunk staging mode by default by @stefanhaller in #4780
Fixes 🔧
- Fix scrolling hunk into view when selecting next hunk by @stefanhaller in #4709
- Fix stale main view content when entering/exiting filtering view by @stefanhaller in #4719
- Detect double-clicks properly by @stefanhaller in #4725
- Fix commit searching during rebase or in divergence from upstream view by @stefanhaller in #4730
- Fix amending commits whose commit message is empty by @aidancz in #4732
- Several small fixes to filtering mode (by path or author) by @stefanhaller in #4749
- Show diff for renamed file when filtering by path by @stefanhaller in #4750
- Allow rewording or dropping commits in filtering mode by @stefanhaller in #4756
- Fix index out of bounds panic when repository has massive tags by @chojs23 in #4776
- When pressing `a` to stage all files, don't include untracked files when showing only tracked files by @stefanhaller in #4779
- Fix commit hash colors when filtering by path or author by @stefanhaller in #4789
- Improve temp dir handling by @stefanhaller in #4784
- Terminate git processes more gracefully to avoid the stale index.lock problem by @stefanhaller in #4782
Maintenance ⚙️
- Raise sponsors PRs as a draft by @jesseduffield in #4694
- Update the peter-evans/create-pull-request action to v7 by @stefanhaller in #4695
- Update release workflow by @stefanhaller in #4703
- Clean up the .gitignore file by @stefanhaller in #4706
- Remove unused code and texts by @stefanhaller in #4715
- Remove deprecated edit configs by @stefanhaller in #4716
- Bump minimum required git version to 2.32 by @stefanhaller in #4718
- Use a better way of pinning the version of golangci-lint by @stefanhaller in #4733
- Make the minimum required git version a placeholder in the error text by @stefanhaller in #4778
- refactor: use slices.Equal to simplify code by @jishudashu in #4764
Docs 📖
- Fix broken markdown in auto-generated keybindings documentation by @KEY60228 in #4690
- Remove the homebrew tap from the readme by @stefanhaller in #4705
I18n 🌎
- Update translations from Crowdin by @stefanhaller in #4791
Performance Improvements 📊
- Fix performance regression on startup in repos with many tags by @stefanhaller in #4777
New Contributors
- @KEY60228 made their first contribution in #4690
- @DawidPietrykowski made their first contribution in #4727
- @rtzll made their first contribution in #4741
- @chojs23 made their first contribution in #4776
- @jishudashu made their first contribution in #4764
Full Changelog: v0.53.0...v0.54.0
-
-
🔗 @cxiao@infosec.exchange It's also very cool how the artists for the time cover (Simon Baek / Mingjue mastodon
It's also very cool how the artists for the time cover (Simon Baek / Mingjue Helen Chen / Scott Watanabe) adapted a magazine cover that they did originally as a prop in the movie!
-
🔗 jesseduffield/lazygit v0.55.0 release
Breaking Changes
-
The 'redo' command, which used to be bound to ctrl-z, is now bound to shift-Z instead. This is because ctrl-z is now used for suspending the application; it is a commonly known keybinding for that in the Linux world. If you want to revert this change, you can do so by adding the following to your config:
```yaml
keybinding:
  universal:
    suspendApp: <disabled>
    redo: <c-z>
```
-
The `git.paging.useConfig` option has been removed. If you were relying on it to configure your pager, you'll have to explicitly set the pager again using the `git.paging.pager` option.
What's Changed
Enhancements 🔥
- Allow filtering the keybindings menu by keybinding by @stefanhaller in #4821
- Add support for suspending LazyGit with Ctrl+Z on Unix systems by @cowboy8625 in #4757
- Add "CopyToClipboard" command to `ConfirmationController` by @kyu08 in #4810
- Add a user config for using git's external diff command for paging by @stefanhaller in #4832
- Log the hash of dropped stashes by @stefanhaller in #4850
Fixes 🔧
- Fix right-alignment of divergence from base branch for branch checked out in a worktree by @stefanhaller in #4824
- Support Azure DevOps vs-ssh.visualstudio.com SSH remotes as hosting provider by @Kahitar in #4822
- Improve display of "esc" keybinding in the keybindings status bar by @stefanhaller in #4819
- Use external diff command in stashes panel by @stefanhaller in #4836
- Remove the git.paging.useConfig option by @stefanhaller in #4837
- Don't auto-forward branches that are checked out in another worktree by @stefanhaller in #4833
- Fix dropping range selection of filtered stashes by @stefanhaller in #4849
- Fix rare crash in interactive rebase (merge command without comment) by @stefanhaller in #4872
- Make it possible to rebind the Confirm keybinding by @stefanhaller in #4860
Maintenance ⚙️
- Pass only Git-tracked Go files to gofumpt by @kyu08 in #4809
- Update donation wording so that it's clear there's no strings attached by @jesseduffield in #4827
- Enhance PR/Issue templates readability by @kyu08 in #4829
- Run label check workflow only on label events and open pr event by @kyu08 in #4830
Docs 📖
- Add installation with gah by @marverix in #4820
- docs(VISION): fix "Dicoverability" typo by @Rudxain in #4866
- Add dev container feature as installation method to README by @HenningLorenzen-ext-bayer in #4876
I18n 🌎
- Update translations from Crowdin by @stefanhaller in #4873
New Contributors
- @marverix made their first contribution in #4820
- @Kahitar made their first contribution in #4822
- @cowboy8625 made their first contribution in #4757
- @Rudxain made their first contribution in #4866
- @HenningLorenzen-ext-bayer made their first contribution in #4876
Full Changelog: v0.54.2...v0.55.0
-
-
🔗 @cxiao@infosec.exchange [https://time.com/7338690/breakthrough-of-the-year-2025-kpop-demon- mastodon
-
🔗 @cxiao@infosec.exchange the only TIME 2025 people of the year i will accept mastodon
the only TIME 2025 people of the year i will accept
-
🔗 News Minimalist 🐢 Denmark classifies USA as a threat + 10 more stories rss
In the last 2 days ChatGPT read 62599 top news stories. After removing previously covered events, there are 11 articles with a significance score over 5.5.

[6.0] Denmark now classifies the USA as a threat — tagesspiegel.de (German) (+13)
For the first time, Denmark’s military intelligence service has classified the United States as a threat in its annual assessment, placing it alongside Russia and China.
The assessment states the U.S. now uses its economic and technological power against allies. This follows tensions over President Donald Trump’s claims on Greenland, creating uncertainty about security guarantees.
Despite this development, the intelligence chief reiterated that Russia remains the primary threat. The report also highlights China's increasing security interests and military presence in the Arctic.
[5.5] The Pentagon deploys Google Gemini AI — gizmodo.com (+4)
The U.S. Department of Defense announced a new platform, GenAI.mil, that will initially use Google's Gemini model to integrate artificial intelligence into military operations for millions of users.
The platform, accessible to over three million Pentagon personnel, will use Google’s government-specific AI. Officials state it will speed up administrative tasks, analyze intelligence, and be used to simulate conflicts.
The move follows a $200 million contract Google secured in July. Other AI firms, including OpenAI and xAI, also have defense deals and may be integrated into the platform later.
Highly covered news with significance over 5.5
[6.4] Disney partners with OpenAI, investing $1 billion to integrate Marvel, Pixar, and Star Wars characters into AI video creation — variety.com (+5)
[6.1] China launches world's largest drone mothership — japantimes.co.jp (+4)
[6.1] Many mental illnesses share genetic links — science.orf.at (German) (+12)
[5.8] Astronomers find first direct evidence of “Monster Stars” from the cosmic dawn — cfa.harvard.edu (+11)
[5.7] A supermassive black hole generated ultra-fast winds, a phenomenon never before observed — dw.com (Spanish) (+9)
[5.5] Israel responsible for nearly half of global journalist killings this year, report finds — irishtimes.com (+7)
[5.5] Mexico approves up to 50% tariffs on China and other countries — bbc.com (+22)
[5.5] German women surpass 30% in top management — zeit.de (German) (+2)
[5.8] Sperm from donor with cancer-causing gene was used to conceive almost 200 children — bbc.com (+73)
Thanks for reading!
— Vadim
You can create your own personalized newsletter like this with premium.
-
🔗 r/reverseengineering Impressive project: "Eaglercraft" - A full decompilation and port of Minecraft 1.8 to TeaVM/JavaScript rss
-
🔗 r/LocalLLaMA New in llama.cpp: Live Model Switching rss
-
🔗 MetaBrainz We can’t have nice things… because of AI scrapers rss
In the past few months the MetaBrainz team has been fighting a battle against unscrupulous AI companies ignoring common courtesies (such as robots.txt) and scraping the Internet in order to build up their AI models. Rather than downloading our dataset in one complete download, they insist on loading all of MusicBrainz one page at a time. This of course would take hundreds of years to complete and is utterly pointless. In doing so, they are overloading our servers and preventing legitimate users from accessing our site.
Now the AI scrapers have found ListenBrainz and are hitting a number of our API endpoints for their nefarious data gathering purposes. In order to protect our services from becoming overloaded, we've made the following changes:
- The /metadata/lookup API endpoints (GET and POST versions) now require the caller to send an Authorization token in order for this endpoint to work.
- The ListenBrainz Labs API endpoints for mbid-mapping, mbid-mapping-release and mbid-mapping-explain have been removed. Those were always intended for debugging purposes and will soon be replaced with new endpoints for our upcoming improved mapper.
- LB Radio will now require users to be logged in to use it (and API endpoint users will need to send the Authorization header). The error message for logged in users is a bit clunky at the moment; we'll fix this once we've finished the work for this year's Year in Music.
Sorry for these hassles and no-notice changes, but they were required in order to keep our services functioning at an acceptable level.
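For API consumers, the practical change is attaching a ListenBrainz user token via the `Authorization` header. A minimal sketch in Python; the query parameters and token are illustrative placeholders, so check the ListenBrainz API docs for the exact endpoint contract:

```python
import urllib.request

TOKEN = "YOUR_LISTENBRAINZ_TOKEN"  # hypothetical; real tokens come from your ListenBrainz settings

req = urllib.request.Request(
    "https://api.listenbrainz.org/1/metadata/lookup"
    "?artist_name=Nirvana&recording_name=Lithium",
    headers={"Authorization": f"Token {TOKEN}"},
)
# The request is only built here, not sent; pass it to urllib.request.urlopen(req) to call the API.
print(req.get_header("Authorization"))  # → Token YOUR_LISTENBRAINZ_TOKEN
```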
-
🔗 r/LocalLLaMA Mistral’s Vibe CLI now supports a 200K token context window (previously 100K) rss
-
🔗 r/LocalLLaMA Leaked footage from Meta's post-training strategy meeting. rss
-
🔗 Rust Blog Announcing Rust 1.92.0 rss
The Rust team is happy to announce a new version of Rust, 1.92.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via `rustup`, you can get 1.92.0 with:

```console
$ rustup update stable
```

If you don't have it already, you can get `rustup` from the appropriate page on our website, and check out the detailed release notes for 1.92.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (`rustup default beta`) or the nightly channel (`rustup default nightly`). Please report any bugs you might come across!

What's in 1.92.0 stable
Deny-by-default never type lints
The language and compiler teams continue to work on stabilization of the never type. In this release the `never_type_fallback_flowing_into_unsafe` and `dependency_on_unit_never_type_fallback` future-compatibility lints were made deny-by-default, meaning they will cause a compilation error when detected.
It's worth noting that while this can result in compilation errors, it is still a lint; these lints can all be `#[allow]`ed. These lints also will only fire when building the affected crates directly, not when they are built as dependencies (though a warning will be reported by Cargo in such cases).
These lints detect code which is likely to be broken by the never type stabilization. It is highly advised to fix them if they are reported in your crate graph.
We believe there to be approximately 500 crates affected by this lint. Despite that, we believe this to be acceptable, as lints are not a breaking change and it will allow for stabilizing the never type in the future. For more in-depth justification, see the Language Team's assessment.
`unused_must_use` no longer warns about `Result<(), UninhabitedType>`

Rust's `unused_must_use` lint warns when ignoring the return value of a function, if the function or its return type is annotated with `#[must_use]`. For instance, this warns if ignoring a return type of `Result`, to remind you to use `?`, or something like `.expect("...")`.

However, some functions return `Result`, but the error type they use is not actually "inhabited", meaning you cannot construct any values of that type (e.g. the `!` or `Infallible` types).

The `unused_must_use` lint now no longer warns on `Result<(), UninhabitedType>`, or on `ControlFlow<UninhabitedType, ()>`. For instance, it will not warn on `Result<(), Infallible>`. This avoids having to check for an error that can never happen.

```rust
use core::convert::Infallible;

fn can_never_fail() -> Result<(), Infallible> {
    // ...
    Ok(())
}

fn main() {
    can_never_fail();
}
```

This is particularly useful with the common pattern of a trait with an associated error type, where the error type may sometimes be infallible:

```rust
trait UsesAssocErrorType {
    type Error;
    fn method(&self) -> Result<(), Self::Error>;
}

struct CannotFail;
impl UsesAssocErrorType for CannotFail {
    type Error = core::convert::Infallible;
    fn method(&self) -> Result<(), Self::Error> {
        Ok(())
    }
}

struct CanFail;
impl UsesAssocErrorType for CanFail {
    type Error = std::io::Error;
    fn method(&self) -> Result<(), Self::Error> {
        Err(std::io::Error::other("something went wrong"))
    }
}

fn main() {
    CannotFail.method(); // No warning
    CanFail.method();    // Warning: unused `Result` that must be used
}
```

Emit unwind tables even when `-Cpanic=abort` is enabled on Linux

Backtraces with `-Cpanic=abort` previously worked in Rust 1.22 but were broken in Rust 1.23, as we stopped emitting unwind tables with `-Cpanic=abort`. In Rust 1.45 a workaround in the form of `-Cforce-unwind-tables=yes` was stabilized.

In Rust 1.92 unwind tables will be emitted by default even when `-Cpanic=abort` is specified, allowing backtraces to work properly. If unwind tables are not desired then users should use `-Cforce-unwind-tables=no` to explicitly disable emitting them.

Validate input to `#[macro_export]`

Over the past few releases, many changes were made to the way built-in attributes are processed in the compiler. This should greatly improve the error messages and warnings Rust gives for built-in attributes and especially make these diagnostics more consistent among all of the over 100 built-in attributes.

To give a small example, in this release specifically, Rust became stricter in checking what arguments are allowed to `macro_export` by upgrading that check to a "deny-by-default lint" that will be reported in dependencies.

Stabilized APIs
- `NonZero<u{N}>::div_ceil`
- `Location::file_as_c_str`
- `RwLockWriteGuard::downgrade`
- `Box::new_zeroed`
- `Box::new_zeroed_slice`
- `Rc::new_zeroed`
- `Rc::new_zeroed_slice`
- `Arc::new_zeroed`
- `Arc::new_zeroed_slice`
- `btree_map::Entry::insert_entry`
- `btree_map::VacantEntry::insert_entry`
- `impl Extend<proc_macro::Group> for proc_macro::TokenStream`
- `impl Extend<proc_macro::Literal> for proc_macro::TokenStream`
- `impl Extend<proc_macro::Punct> for proc_macro::TokenStream`
- `impl Extend<proc_macro::Ident> for proc_macro::TokenStream`
These previously stable APIs are now stable in const contexts:
Other changes
Check out everything that changed in Rust, Cargo, and Clippy.
Contributors to 1.92.0
Many people came together to create Rust 1.92.0. We couldn't have done it without all of you. Thanks!
-
🔗 Console.dev newsletter Watt rss
Description: Node.js application server.
What we like: Uses SO_REUSEPORT built into the Linux kernel for connection distribution, eliminating significant overhead. Handles crash restarts, graceful shutdown, monitoring, deployments for multiple applications. Shared HTTP cache across workers. Allows integration of databases, APIs, and multiple frameworks within a single app server.
What we dislike: Supports observability through Prometheus and Jaeger, but needs extra work to get data into services like DataDog unless you ingest through OTLP.
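The `SO_REUSEPORT` option Watt leans on is a kernel feature, not anything Node-specific: multiple sockets (typically one per worker) bind the same port, and the kernel distributes incoming connections among them. A minimal illustration in Python rather than Watt's own code (Linux/macOS only):

```python
import socket

# First socket opts in to SO_REUSEPORT before binding
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
a.bind(("127.0.0.1", 0))
port = a.getsockname()[1]

# Second socket binds the SAME port; without SO_REUSEPORT this raises EADDRINUSE
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
b.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
b.bind(("127.0.0.1", port))

a.listen()
b.listen()  # the kernel now load-balances incoming connections between a and b
print("both listening on port", port)
a.close()
b.close()
```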
-
🔗 Console.dev newsletter Renovate rss
Description: Automated dependency updates.
What we like: Creates pull requests for dependency updates from auto-discovered packages. Designed as a more configurable replacement for Dependabot. Can create a “dashboard” tracking issue to easily see pending updates and manage them (individually or as a group). Supports most popular forges, not just GitHub. Run it independently through CI, not just as a hosted service.
What we dislike: Seems to be gradually being absorbed by an enterprise cloud platform (Mend).
-
🔗 Ampcode News Look At This rss
Amp can now look at PDFs, images, and other media files with a goal in mind.
Using the new `look_at` tool, Amp sends the file to a separate model — one with its own context window — and gets back only the information it requested.
That means the main agent never has to process the full file, saving valuable tokens in the main context window.
To try it, just tell Amp to look at a media file with a purpose, like extracting the structure of a binary file format from the 477-page PDF spec defining it, and watch it distill the relevant bits out of the file.

-
🔗 Ampcode News Thread Labels rss
You can now add labels to threads to organize your work and find conversations later.
After a few weeks with Amp, you'll have dozens of threads. Some are one-offs, but others represent ongoing work: a feature you're iterating on, a bug you keep revisiting, or research you want to reference later. Labels help you find those threads again.

Click the tag icon on any thread to add a label. As you type, you'll see suggestions from labels you've already used—no need to remember exact names.
Click any label to jump to all threads with that label, or use the filter dropdown on the threads page to narrow down by multiple labels at once.
-
🔗 Ampcode News Thread Map rss
Two days ago, Lewis wrote about working with short threads — a lot of them, connected via handoff and thread mentions and forks. In his post, he showed a diagram, a map of one feature spread across 13 threads.
That map? It exists now, we built it:
Run `threads: map` in the Amp CLI command palette to try it out.
You'll see a top-down view of all threads connected to your current thread via mentions, handoffs, or forks. If you hit `Enter`, you'll open the selected thread and can continue your work there.
(Yes, it's only available in the Amp CLI right now, but coming to other clients soon.)
If you only use handoff and forks occasionally, you might not need this yet. But if you do work with many short, connected threads — like Lewis or Igor — this map might make it even easier, because you can see the shape of your work.
Here are some patterns we've noticed so far:
1. Hub-and-Spokes
One thread will form a core from which many other threads can be created. This might be an initial implementation thread, or a context-gathering research thread. The spokes might be refactoring threads, or subfeatures. They don't need the context of the other spokes—by linking only to the hub thread, the context window of each spoke remains lean and relevant.
2. Chain
Many short threads chained together. This is a common pattern when one change depends on another. This pattern often emerges when using the handoff feature to extract only the relevant context from a previous thread, allowing you to keep threads short but still continue serially dependent work. This is common in research or exploratory tasks, where the desired state is unknown. It's not uncommon for the end of a chain to lead to the central node of a hub-and-spokes pattern; a desired state is found and work can be more easily parallelised.
What's Next?
Our bet is that there are many more patterns out there, waiting to be recognised. Let us know what you find.
-
- December 10, 2025
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-10 rss
IDA Plugin Updates on 2025-12-10
Activity:
- FakePDB
- 7beeb88a: Merge pull request #57 from Abbas-MG/addExcepRootHeader
- ghidra
- d405e0d4: Merge remote-tracking branch 'origin/GP-6200-dragonmacher-help-npe'
- 113b0571: GP-6200 - Fixed help NPE
- eb9e9252: GP-0: Initialing set in RegisterValuesSarifMgr (Closes #8753)
- 02f72912: Merge remote-tracking branch
- 0735b97e: Updated message variable to track the last message
- 758e3891: Merge remote-tracking branch 'origin/patch'
- 7fca130f: GP-6198: Fixed program tab not updating on rename
- ghidra-chinese
- d405e0d4: Merge remote-tracking branch 'origin/GP-6200-dragonmacher-help-npe'
- 113b0571: GP-6200 - Fixed help NPE
- eb9e9252: GP-0: Initialing set in RegisterValuesSarifMgr (Closes #8753)
- 02f72912: Merge remote-tracking branch
- 0735b97e: Updated message variable to track the last message
- 758e3891: Merge remote-tracking branch 'origin/patch'
- 7fca130f: GP-6198: Fixed program tab not updating on rename
-
🔗 r/LocalLLaMA Collection of every GPU from AMD and Nvidia rss
Source: https://youtu.be/g7MpS0X9Ru0?si=aLz_7sOnqUEuNgpa submitted by /u/No_Palpitation7740
-
🔗 Simon Willison Useful patterns for building HTML tools rss
I've started using the term HTML tools to refer to HTML applications that I've been building which combine HTML, JavaScript, and CSS in a single file and use them to provide useful functionality. I have built over 150 of these in the past two years, almost all of them written by LLMs. This article presents a collection of useful patterns I've discovered along the way.
First, some examples to show the kind of thing I'm talking about:
- svg-render renders SVG code to downloadable JPEGs or PNGs
- pypi-changelog lets you generate (and copy to clipboard) diffs between different PyPI package releases.
- bluesky-thread provides a nested view of a discussion thread on Bluesky.
These are some of my recent favorites. I have dozens more like this that I use on a regular basis.
You can explore my collection on tools.simonwillison.net - the by month view is useful for browsing the entire collection.
If you want to see the code and prompts, almost all of the examples in this post include a link in their footer to "view source" on GitHub. The GitHub commits usually contain either the prompt itself or a link to the transcript used to create the tool.
- The anatomy of an HTML tool
- Prototype with Artifacts or Canvas
- Switch to a coding agent for more complex projects
- Load dependencies from CDNs
- Host them somewhere else
- Take advantage of copy and paste
- Build debugging tools
- Persist state in the URL
- Use localStorage for secrets or larger state
- Collect CORS-enabled APIs
- LLMs can be called directly via CORS
- Don't be afraid of opening files
- You can offer downloadable files too
- Pyodide can run Python code in the browser
- WebAssembly opens more possibilities
- Remix your previous tools
- Record the prompt and transcript
- Go forth and build
The anatomy of an HTML tool
These are the characteristics I have found to be most productive in building tools of this nature:
- A single file: inline JavaScript and CSS in a single HTML file means the least hassle in hosting or distributing them, and crucially means you can copy and paste them out of an LLM response.
- Avoid React, or anything with a build step. The problem with React is that JSX requires a build step, which makes everything massively less convenient. I prompt "no react" and skip that whole rabbit hole entirely.
- Load dependencies from a CDN. The fewer dependencies the better, but if there's a well known library that helps solve a problem I'm happy to load it from CDNjs or jsdelivr or similar.
- Keep them small. A few hundred lines means the maintainability of the code doesn't matter too much: any good LLM can read them and understand what they're doing, and rewriting them from scratch with help from an LLM takes just a few minutes.
The end result is a few hundred lines of code that can be cleanly copied and pasted into a GitHub repository.
Prototype with Artifacts or Canvas
The easiest way to build one of these tools is to start in ChatGPT or Claude or Gemini. All three have features where they can write a simple HTML+JavaScript application and show it to you directly.
Claude calls this "Artifacts", ChatGPT and Gemini both call it "Canvas". Claude has the feature enabled by default, ChatGPT and Gemini may require you to toggle it on in their "tools" menus.
Try this prompt in Gemini or ChatGPT:
Build a canvas that lets me paste in JSON and converts it to YAML. No React.
Or this prompt in Claude:
Build an artifact that lets me paste in JSON and converts it to YAML. No React.
I always add "No React" to these prompts, because otherwise they tend to build with React, resulting in a file that is harder to copy and paste out of the LLM and use elsewhere. I find that attempts which use React take longer to display (since they need to run a build step) and are more likely to contain crashing bugs for some reason, especially in ChatGPT.
All three tools have "share" links that provide a URL to the finished application. Examples:
- ChatGPT JSON to YAML Canvas made with GPT-5.1 Thinking - here's the full ChatGPT transcript
- Claude JSON to YAML Artifact made with Claude Opus 4.5 - here's the full Claude transcript
- Gemini JSON to YAML Canvas made with Gemini 3 Pro - here's the full Gemini transcript
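The conversion at the heart of this example tool is normally a one-liner with a YAML library loaded from a CDN (e.g. js-yaml). Purely for illustration - this sketch is mine, not from any of the transcripts above - here is a hand-rolled version that handles objects, arrays and scalars:

```javascript
// Minimal JSON -> YAML sketch. A real tool would load js-yaml from a
// CDN instead; this only covers objects, arrays and scalar values.
function jsonToYaml(value, indent = 0) {
  const pad = "  ".repeat(indent);
  if (Array.isArray(value)) {
    if (value.length === 0) return pad + "[]";
    return value
      .map((v) =>
        typeof v === "object" && v !== null
          ? pad + "-\n" + jsonToYaml(v, indent + 1)
          : pad + "- " + JSON.stringify(v)
      )
      .join("\n");
  }
  if (typeof value === "object" && value !== null) {
    const keys = Object.keys(value);
    if (keys.length === 0) return pad + "{}";
    return keys
      .map((k) => {
        const v = value[k];
        if (typeof v === "object" && v !== null) {
          return pad + k + ":\n" + jsonToYaml(v, indent + 1);
        }
        return pad + k + ": " + JSON.stringify(v);
      })
      .join("\n");
  }
  return pad + JSON.stringify(value);
}

console.log(jsonToYaml(JSON.parse('{"name": "demo", "tags": ["a", "b"]}')));
```

In the actual tool this function would be wired to a textarea's input event, with the result shown in a second textarea next to a "Copy to clipboard" button.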
Switch to a coding agent for more complex projects
Coding agents such as Claude Code and Codex CLI have the advantage that they can test the code themselves while they work on it using tools like Playwright. I often upgrade to one of those when I'm working on something more complicated, like my Bluesky thread viewer tool shown above.
I also frequently use asynchronous coding agents like Claude Code for web to make changes to existing tools. I shared a video about that in Building a tool to copy-paste share terminal sessions using Claude Code for web.
Claude Code for web and Codex Cloud run directly against my simonw/tools repo, which means they can publish or upgrade tools via Pull Requests (here are dozens of examples) without me needing to copy and paste anything myself.
Load dependencies from CDNs
Any time I use an additional JavaScript library as part of my tool I like to load it from a CDN.
The three major LLM platforms support specific CDNs as part of their Artifacts or Canvas features, so often if you tell them "Use PDF.js" or similar they'll be able to compose a URL to a CDN that's on their allow-list.
Sometimes you'll need to go and look up the URL on cdnjs or jsDelivr and paste it into the chat.
CDNs like these have been around for long enough that I've grown to trust them, especially for URLs that include the package version.
The alternative to CDNs is to use npm and have a build step for your projects. I find this reduces my productivity at hacking on individual tools and makes it harder to self-host them.
Host them somewhere else
I don't like leaving my HTML tools hosted by the LLM platforms themselves for a couple of reasons. First, LLM platforms tend to run the tools inside a tight sandbox with a lot of restrictions. They're often unable to load data or images from external URLs, and sometimes even features like linking out to other sites are disabled.
The end-user experience often isn't great either. They show warning messages to new users, often take additional time to load and delight in showing promotions for the platform that was used to create the tool.
They're also not as reliable as other forms of static hosting. If ChatGPT or Claude are having an outage I'd like to still be able to access the tools I've created in the past.
Being able to easily self-host is the main reason I like insisting on "no React" and using CDNs for dependencies - the absence of a build step makes hosting tools elsewhere a simple case of copying and pasting them out to some other provider.
My preferred provider here is GitHub Pages because I can paste a block of HTML into a file on github.com and have it hosted on a permanent URL a few seconds later. Most of my tools end up in my simonw/tools repository which is configured to serve static files at tools.simonwillison.net.
Take advantage of copy and paste
One of the most useful input/output mechanisms for HTML tools comes in the form of copy and paste.
I frequently build tools that accept pasted content, transform it in some way and let the user copy it back to their clipboard to paste somewhere else.
Copy and paste on mobile phones is fiddly, so I frequently include "Copy to clipboard" buttons that populate the clipboard with a single touch.
Most operating system clipboards can carry multiple formats of the same copied data. That's why you can paste content from a word processor in a way that preserves formatting, but if you paste the same thing into a text editor you'll get the content with formatting stripped.
These rich copy operations are available in JavaScript paste events as well, which opens up all sorts of opportunities for HTML tools.
- hacker-news-thread-export lets you paste in a URL to a Hacker News thread and gives you a copyable condensed version of the entire thread, suitable for pasting into an LLM to get a useful summary.
- paste-rich-text lets you copy from a page and paste to get the HTML - particularly useful on mobile where view-source isn't available.
- alt-text-extractor lets you paste in images and then copy out their alt text.
Build debugging tools
The key to building interesting HTML tools is understanding what's possible. Building custom debugging tools is a great way to explore these options.
clipboard-viewer is one of my most useful. You can paste anything into it (text, rich text, images, files) and it will loop through and show you every type of paste data that's available on the clipboard.

This was key to building many of my other tools, because it showed me the invisible data that I could use to bootstrap other interesting pieces of functionality.
More debugging examples:
- keyboard-debug shows the keys (and KeyCode values) currently being held down.
- cors-fetch reveals if a URL can be accessed via CORS.
- exif displays EXIF data for a selected photo.
Persist state in the URL
HTML tools may not have access to server-side databases for storage but it turns out you can store a lot of state directly in the URL.
I like this for tools I may want to bookmark or share with other people.
- icon-editor is a custom 24x24 icon editor I built to help hack on icons for the GitHub Universe badge. It persists your in-progress icon design in the URL so you can easily bookmark and share it.
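A minimal sketch of this pattern, assuming JSON-serializable state; in the browser the encoded string would be written to and read from location.hash, which is the only part not shown here:

```javascript
// Round-trip tool state through the URL fragment so a bookmark or a
// shared link restores the tool exactly as it was.
function encodeState(state) {
  // encodeURIComponent keeps the result safe to place after '#'
  return encodeURIComponent(JSON.stringify(state));
}

function decodeState(hash) {
  if (!hash) return null;
  return JSON.parse(decodeURIComponent(hash.replace(/^#/, "")));
}

// In a real tool:
//   location.hash = encodeState(state);        // on every edit
//   const state = decodeState(location.hash);  // on page load
const saved = encodeState({ pixels: [0, 1, 1, 0], size: 24 });
console.log(decodeState("#" + saved));
```

Plain JSON-in-the-fragment gets verbose for something like a pixel grid; a tool like the icon editor would typically use a more compact custom encoding, but the round-trip structure is the same.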
Use localStorage for secrets or larger state
The localStorage browser API lets HTML tools store data persistently on the user's device, without exposing that data to the server.
I use this for larger pieces of state that don't fit comfortably in a URL, or for secrets like API keys which I really don't want anywhere near my server - even static hosts might have server logs that are outside of my influence.
- word-counter is a simple tool I built to help me write to specific word counts, for things like conference abstract submissions. It uses localStorage to save as you type, so your work isn't lost if you accidentally close the tab.
- render-markdown uses the same trick - I sometimes use this one to craft blog posts and I don't want to lose them.
- haiku is one of a number of LLM demos I've built that request an API key from the user (via the prompt() function) and then store that in localStorage. This one uses Claude Haiku to write haikus about what it can see through the user's webcam.
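The save-as-you-type pattern is a few lines of code. This is a sketch (the key name is hypothetical, and the in-memory fallback exists only so the snippet also runs outside a browser, e.g. under Node):

```javascript
// Autosave draft text to localStorage so closing the tab loses nothing.
// Falls back to an in-memory store when localStorage is unavailable.
const store =
  globalThis.localStorage ??
  (() => {
    const m = new Map();
    return {
      getItem: (k) => (m.has(k) ? m.get(k) : null),
      setItem: (k, v) => m.set(k, String(v)),
    };
  })();

const KEY = "word-counter-draft"; // hypothetical key name

function save(text) {
  store.setItem(KEY, text);
}

function restore() {
  return store.getItem(KEY) ?? "";
}

save("draft text survives a closed tab");
console.log(restore());
```

In a tool, save() would be called from the textarea's input event and restore() once on page load; the same getItem/setItem pair works for API keys collected via prompt().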
Collect CORS-enabled APIs
CORS stands for Cross-origin resource sharing. It's a relatively low-level detail which controls if JavaScript running on one site is able to fetch data from APIs hosted on other domains.
APIs that provide open CORS headers are a goldmine for HTML tools. It's worth building a collection of these over time.
Here are some I like:
- iNaturalist for fetching sightings of animals, including URLs to photos
- PyPI for fetching details of Python packages
- GitHub because anything in a public repository in GitHub has a CORS-enabled anonymous API for fetching that content from the raw.githubusercontent.com domain, which is behind a caching CDN so you don't need to worry too much about rate limits or feel guilty about adding load to their infrastructure.
- Bluesky for all sorts of operations
- Mastodon has generous CORS policies too, as used by applications like phanpy.social
GitHub Gists are a personal favorite here, because they let you build apps that can persist state to a permanent Gist through making a cross-origin API call.
- species-observation-map uses iNaturalist to show a map of recent sightings of a particular species.
- zip-wheel-explorer fetches a .whl file for a Python package from PyPI, unzips it (in browser memory) and lets you navigate the files.
- github-issue-to-markdown fetches issue details and comments from the GitHub API (including expanding any permanent code links) and turns them into copyable Markdown.
- terminal-to-html can optionally save the user's converted terminal session to a Gist.
- bluesky-quote-finder displays quotes of a specified Bluesky post, which can then be sorted by likes or by time.
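As a concrete sketch of two of these endpoints, here are URL builders for the raw.githubusercontent.com pattern and PyPI's JSON API; the repo and package names below are just placeholders, and in a tool you'd pass the result straight to fetch():

```javascript
// Any file in a public GitHub repo is fetchable cross-origin from
// raw.githubusercontent.com: /{owner}/{repo}/{ref}/{path}
function rawGitHubUrl(owner, repo, ref, path) {
  return `https://raw.githubusercontent.com/${owner}/${repo}/${ref}/${encodeURI(path)}`;
}

// PyPI exposes CORS-enabled package metadata as JSON.
function pypiJsonUrl(packageName) {
  return `https://pypi.org/pypi/${encodeURIComponent(packageName)}/json`;
}

console.log(rawGitHubUrl("simonw", "tools", "main", "ocr.html"));
console.log(pypiJsonUrl("sqlite-utils"));
// In the browser:
//   const info = await fetch(pypiJsonUrl("sqlite-utils")).then(r => r.json());
```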
LLMs can be called directly via CORS
All three of OpenAI, Anthropic and Gemini offer JSON APIs that can be accessed via CORS directly from HTML tools.
Unfortunately you still need an API key, and if you bake that key into your visible HTML anyone can steal it and use it to rack up charges on your account.
I use the localStorage secrets pattern to store API keys for these services. This sucks from a user experience perspective - telling users to go and create an API key and paste it into a tool is a lot of friction - but it does work.
Some examples:
- haiku uses the Claude API to write a haiku about an image from the user's webcam.
- openai-audio-output generates audio speech using OpenAI's GPT-4o audio API.
- gemini-bbox demonstrates Gemini 2.5's ability to return complex shaped image masks for objects in images, see Image segmentation using Gemini 2.5.
Don't be afraid of opening files
You don't need to upload a file to a server in order to make use of the <input type="file"> element. JavaScript can access the content of that file directly, which opens up a wealth of opportunities for useful functionality.
Some examples:
- ocr is the first tool I built for my collection, described in Running OCR against PDFs and images directly in your browser. It uses PDF.js and Tesseract.js to allow users to open a PDF in their browser which it then converts to an image-per-page and runs through OCR.
- social-media-cropper lets you open (or paste in) an existing image and then crop it to common dimensions needed for different social media platforms - 2:1 for Twitter and LinkedIn, 1.4:1 for Substack etc.
- ffmpeg-crop lets you open and preview a video file in your browser, drag a crop box within it and then copy out the ffmpeg command needed to produce a cropped copy on your own machine.
You can offer downloadable files too
An HTML tool can generate a file for download without needing help from a server.
The JavaScript library ecosystem has a huge range of packages for generating files in all kinds of useful formats.
- svg-render lets the user download the PNG or JPEG rendered from an SVG.
- social-media-cropper does the same for cropped images.
- open-sauce-2025 is my alternative schedule for a conference that includes a downloadable ICS file for adding the schedule to your calendar. See Vibe scraping and vibe coding a schedule app for Open Sauce 2025 entirely on my phone for more on that project.
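One library-free way to do this is a data: URL attached to an <a download> element. A sketch of my own (not taken from any of the tools above), with the browser-only part left in comments:

```javascript
// Build a data: URL carrying a generated text file. btoa() wants a
// "binary string", so non-Latin-1 characters are encoded to bytes first.
function textFileDataUrl(text, mime = "text/plain") {
  const bytes = new TextEncoder().encode(text);
  let bin = "";
  for (const b of bytes) bin += String.fromCharCode(b);
  return `data:${mime};base64,${btoa(bin)}`;
}

// In a real tool:
//   const a = document.createElement("a");
//   a.href = textFileDataUrl(icsText, "text/calendar");
//   a.download = "schedule.ics";
//   a.click();
console.log(textFileDataUrl("BEGIN:VCALENDAR", "text/calendar"));
```

For binary formats like PNG you'd build a Blob and use URL.createObjectURL instead, but the <a download> mechanism is the same.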
Pyodide can run Python code in the browser
Pyodide is a distribution of Python that's compiled to WebAssembly and designed to run directly in browsers. It's an engineering marvel and one of the most underrated corners of the Python world.
It also cleanly loads from a CDN, which means there's no reason not to use it in HTML tools!
Even better, the Pyodide project includes micropip - a mechanism that can load extra pure-Python packages from PyPI via CORS.
- pyodide-bar-chart demonstrates running Pyodide, Pandas and matplotlib to render a bar chart directly in the browser.
- numpy-pyodide-lab is an experimental interactive tutorial for Numpy.
- apsw-query demonstrates the APSW SQLite library running in a browser, using it to show EXPLAIN QUERY plans for SQLite queries.
WebAssembly opens more possibilities
Pyodide is possible thanks to WebAssembly. WebAssembly means that a vast collection of software originally written in other languages can now be loaded in HTML tools as well.
Squoosh.app was the first example I saw that convinced me of the power of this pattern - it makes several best-in-class image compression libraries available directly in the browser.
I've used WebAssembly for a few of my own tools:
- ocr uses the pre-existing Tesseract.js WebAssembly port of the Tesseract OCR engine.
- sloccount is a port of David Wheeler's Perl and C SLOCCount utility to the browser, using a big ball of WebAssembly duct tape. More details here.
- micropython is my experiment using @micropython/micropython-webassembly-pyscript from NPM to run Python code with a smaller initial download than Pyodide.
Remix your previous tools
The biggest advantage of having a single public collection of 100+ tools is that it's easy for my LLM assistants to recombine them in interesting ways.
Sometimes I'll copy and paste a previous tool into the context, but when I'm working with a coding agent I can reference them by name - or tell the agent to search for relevant examples before it starts work.
The source code of any working tool doubles as clear documentation of how something can be done, including patterns for using editing libraries. An LLM with one or two existing tools in their context is much more likely to produce working code.
I built pypi-changelog by telling Claude Code:
Look at the pypi package explorer tool
And then, after it had found and read the source code for zip-wheel-explorer:
Build a new tool pypi-changelog.html which uses the PyPI API to get the wheel URLs of all available versions of a package, then it displays them in a list where each pair has a "Show changes" clickable in between them - clicking on that fetches the full contents of the wheels and displays a nicely rendered diff representing the difference between the two, as close to a standard diff format as you can get with JS libraries from CDNs, and when that is displayed there is a "Copy" button which copies that diff to the clipboard
Here's the full transcript.
See Running OCR against PDFs and images directly in your browser for another detailed example of remixing tools to create something new.
Record the prompt and transcript
I like keeping (and publishing) records of everything I do with LLMs, to help me grow my skills at using them over time.
For HTML tools I built by chatting with an LLM platform directly I use the "share" feature for those platforms.
For Claude Code or Codex CLI or other coding agents I copy and paste the full transcript from the terminal into my terminal-to-html tool and share that using a Gist.
In either case I include links to those transcripts in the commit message when I save the finished tool to my repository. You can see those in my tools.simonwillison.net colophon.
Go forth and build
I've had so much fun exploring the capabilities of LLMs in this way over the past year and a half, and building tools in this way has been invaluable in helping me understand both the potential for building tools with HTML and the capabilities of the LLMs that I'm building them with.
If you're interested in starting your own collection I highly recommend it! All you need to get started is a free GitHub repository with GitHub Pages enabled (Settings -> Pages -> Source -> Deploy from a branch -> main) and you can start copying in .html pages generated in whatever manner you like.
Bonus transcript: Here's how I used Claude Code and shot-scraper to add the screenshots to this post.
-
🔗 r/LocalLLaMA I bought a Grace-Hopper server for €7.5k on Reddit and converted it into a desktop. rss
I have been looking for a big upgrade for the brain for my GLaDOS Project, and so when I stumbled across a Grace-Hopper system being sold for 10K euro here on r/LocalLLaMA, my first thought was "obviously fake." My second thought was "I wonder if he'll take 7.5K euro?". This is the story of how I bought enterprise-grade AI hardware designed for liquid-cooled server racks that was converted to air cooling, and then back again, survived multiple near-disasters (including GPUs reporting temperatures of 16 million degrees), and ended up with a desktop that can run 235B parameter models at home. It's a tale of questionable decisions, creative problem-solving, and what happens when you try to turn datacenter equipment into a daily driver. If you've ever wondered what it takes to run truly large models locally, or if you're just here to watch someone disassemble $80,000 worth of hardware with nothing but hope and isopropanol, you're in the right place. You can read the full story here. submitted by /u/Reddactor
-
🔗 Andrew Ayer - Blog Certificate Authorities Are Once Again Issuing Certificates That Don't Work rss
Twice a year, the Certificate Transparency ecosystem undergoes a transition as certificate authorities start to submit certificates to new semiannual log partitions. And recently, the ecosystem has started transitioning to the new static-ct-api specification. Unfortunately, despite efforts to make these transitions extremely easy for certificate authorities, in the past week I have detected 16 certificate authorities who have bungled these transitions, issuing certificates that are rejected by some or all mainstream web browsers with an error message like "This Connection Is Not Private" or ERR_CERTIFICATE_TRANSPARENCY_REQUIRED.
If you're not familiar, Certificate Transparency (CT) is a system for publishing SSL certificates in public logs. Certificate Transparency monitors like Cert Spotter download the logs to help you track certificate expiration and detect unauthorized certificates for your domains.
At a high level, Certificate Transparency works like this:
- Before issuing a certificate, the certificate authority (CA) creates a "precertificate" containing the details of the certificate it intends to issue.
- The CA submits the precertificate to multiple Certificate Transparency logs.
- Each log returns a receipt, called a Signed Certificate Timestamp (SCT), which confirms submission of the precertificate.
- The CA embeds the SCTs in the certificate which it gives to the site operator.
- When a browser loads a website, it makes sure the website's certificate has SCTs from a sufficient number of recognized logs. If it doesn't, the browser throws up an error page and refuses to load the website.
Billions of SSL certificates are issued and logged to CT every year. To prevent logs from growing indefinitely, logs only accept (pre)certificates which expire within a certain range, typically six months long. Every log will eventually contain only expired certificates, allowing it to be shut down. Meanwhile, new logs are created to contain certificates expiring further in the future.
How do CAs know what logs to submit precertificates to? It's easy: Apple and Chrome each publish a JSON file containing a list of logs. (Firefox and Edge use Chrome's list.) Apple's is at https://valid.apple.com/ct/log_list/current_log_list.json and Chrome's is at https://www.gstatic.com/ct/log_list/v3/log_list.json. Each log object contains the log's name, URL, public key, range of expiration dates accepted by the log, and crucially, the log's state.
{ "description": "Sectigo 'Elephant2027h1'", "log_id": "YEyar3p/d18B1Ab8kg3ImesLHH34yVIb+voXdzuXi8k=", "key": "MFkwEwYHKoZIzj0CAQYIKoZI...AScw2woA==", "url": "https://elephant2027h1.ct.sectigo.com/", "mmd": 86400, "state": { "usable": { "timestamp": "2025-07-22T01:33:20Z" } }, "temporal_interval": { "start_inclusive": "2027-01-01T00:00:00Z", "end_exclusive": "2027-07-01T00:00:00Z" } }
The state is very simple: if it's "usable", then CAs should use it. If it's something else, CAs should not use it.
The full process of logging is a bit more complicated, because CAs have to include SCTs from a sufficiently-diverse set of logs, but when it comes to finding the initial set of logs to consider, it's hard to imagine how it could be any easier for CAs. They just need to download the Apple and Chrome lists and find the logs whose state is Usable in both lists and whose expiration range covers the expiration date of the certificate.
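That selection step can be sketched as a filter over the two log lists. Field names below follow the JSON log-list format shown earlier; this is an illustration of the rule, not any CA's actual implementation:

```javascript
// A log is a candidate only if it is Usable in BOTH the Apple and
// Chrome lists AND its temporal interval covers the cert's expiry.
function usableLogs(appleLogs, chromeLogs, certExpiry) {
  const usableById = (logs) =>
    new Map(
      logs
        .filter((l) => l.state && "usable" in l.state)
        .map((l) => [l.log_id, l])
    );
  const apple = usableById(appleLogs);
  return [...usableById(chromeLogs).values()].filter((l) => {
    if (!apple.has(l.log_id)) return false;
    const t = l.temporal_interval;
    return (
      certExpiry >= new Date(t.start_inclusive) &&
      certExpiry < new Date(t.end_exclusive)
    );
  });
}
```

A log in the Qualified state fails the `"usable" in l.state` check in either list, which is exactly why the late-qualified 2027h1 logs discussed below should have been excluded.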
Despite this, a number of CAs appear to either disregard the state or only consider Chrome's log list. Historically, this has not caused problems because new logs have become Usable in both Chrome and Apple before they were needed for new certificates. Since the maximum certificate lifetime is 398 days, logs for certificates expiring in the first half of 2027 (2027h1) needed to be Usable by November 29, 2025. Unfortunately, not all 2027h1 logs were Usable by this date.
First, Google's 2027h1 logs (Argon 2027h1 and Xenon 2027h1) were added to Chrome 40 days later than they should have been. Normally, new logs are added to Chrome after 30 days of successful monitoring, but this process is still very manual and human error led to Chrome setting a 70 day timer instead of a 30 day timer. Consequently, these logs are still in the Qualified state in Chrome. Although Qualified logs are recognized by up-to-date installations of Chrome (and Firefox and Edge), there may be out-of-date installations which do not recognize them, making it a very bad idea for CAs to use Qualified logs if they care about compatibility. Chrome, Firefox, and Edge automatically disable Certificate Transparency enforcement once they become 70 days out-of-date, so Argon and Xenon 2027h1 will become Usable on December 27, 2025, which is 70 days after they became Qualified. (Argon and Xenon 2027h1 are already Usable in Apple's list.)
Second, DigiCert's 2027h1 logs (Sphinx 2027h1 and Wyvern 2027h1) don't appear at all in Apple's log list. Since Apple doesn't use a public bug tracker for their CT log program like Chrome, I have no idea what went wrong. Did DigiCert forget to tell Apple about their new logs, or is Apple slow-rolling them for some reason? Certificates which rely on either DigiCert log won't work at all on Apple platforms. (They are already Usable in Chrome's list.)
While the late addition of logs is not ideal, it should not have been a problem, because there are plenty of other 2027h1 logs which became Usable for both Apple and Chrome in time.
I first became aware of issues last Tuesday when Arabella Barks posted a message to Mozilla's dev-security-policy mailing list referencing a certificate issued by Certum with SCTs from DigiCert Wyvern 2027h1. Sensing that this could be a widespread problem, I decided to investigate. My company, SSLMate, maintains a 51TB PostgreSQL database with the contents of every Certificate Transparency log. The database's primary purpose is to power our Certificate Transparency monitoring service, Cert Spotter, and our Certificate Transparency Search API, but it's also very handy for investigating ecosystem issues.
I ran a query to find all precertificates logged to Google's and DigiCert's 2027h1 logs. This alone was not sufficient to identify broken certificates, since CAs could be submitting precertificates to these logs but not including the SCTs in the final certificate, or including more than the minimum number of required SCTs. Therefore, for every precertificate, I looked to see if the corresponding final certificate had been logged anywhere. If it had, I ran it through SSLMate's CT Policy Analyzer to see if it had enough SCTs from broadly Usable logs. If the final certificate wasn't available for analysis, I counted how many other logs the precertificate was logged to. If fewer than three of these logs were Usable, then there was no way the corresponding certificate could have enough SCTs.
I posted my findings to the ct-policy mailing list later that day, alerting CAs to the problem. Since then, I've found even more certificates relying on logs that are not broadly Usable. As of publication time, the following CAs have issued such certificates:
- Certum
- Cybertrust Japan (fixed)
- Disig
- GDCA
- GlobalSign (fixed)
- HARICA
- IdenTrust (fixed)
- Izenpe (fixed)
- Microsec
- NAVER
- SECOM
- SSL.com
- SHECA
- TWCA (fixed)
- certSIGN
- emSign
Of those, only the five indicated above have fixed their systems. The others have all issued broken certificates within the last two days, even though it has been a week since my first public posting.
Unfortunately, logging to non-Usable logs wasn't the only problem. Last Wednesday, Cert Spotter began alerting me about certificates issued by Cybertrust Japan containing SCTs with invalid signatures. I noticed that the SCTs with invalid signatures were all from static-ct-api logs.
To address shortcomings with the original Certificate Transparency specification (RFC6962), the ecosystem has been transitioning to logs based on the static-ct-api specification. Almost half of the 2027h1 logs use static-ct-api. However, while static-ct-api requires major changes for log monitors, it uses the exact same protocol for CAs to submit (pre)certificates. This was an intentional decision to make static-ct-api easier to adopt, so that it wouldn't suffer the same fate as RFC9162, which was intended to replace RFC6962 but was dead-on-arrival in part because it completely broke compatibility with the existing ecosystem.
However, there is one teeny tiny difference with static-ct-api: whereas RFC6962 logs always return SCTs with an empty extensions field, static-ct-api logs return SCTs with non-empty extensions. This should not be a problem - the extensions field is just an opaque byte array and CAs do not need to understand what static-ct-api logs place in it. They just need to copy it through to the final certificate, which they should have been doing anyways with RFC6962 logs. But Cybertrust Japan was always leaving the extensions field empty regardless of what the log returned, breaking the SCT's signature. Since SCTs with invalid signatures are disregarded by browsers, this left their certificates with an insufficient number of SCTs, dooming them to rejection.
After publication of this post, Cert Spotter alerted me to invalid SCT signatures in certificates issued by NAVER. In this case, the SCT extensions were non-empty but encoded in base64, indicating that NAVER wasn't decoding the base64 from the JSON response when copying it to the SCT. On one hand, I don't love RFC6962's wording about the extensions field: while the other JSON fields, like id and signature, are clearly indicated as "base64 encoded", it's only implied that extensions is base64-encoded (it says "Clients should decode the base64-encoded data and include it in the SCT"). On the other hand, if NAVER were verifying the signature of SCTs before embedding them in certificates, they almost certainly would have caught this mistake, since successful verification relies on correctly decoding the JSON response. And we know from past incidents that it's very important for CAs to verify SCT signatures.
Unfortunately, we'll probably never learn the root cause of these failures or what CAs are doing to prevent them from happening again. Normally, when a CA violates a policy, they are required to publish a public incident report, answer questions from the community, and note the failure in their next audit. If their incident response is bad or they keep having the same incident, they run the risk of being distrusted. However, Certificate Transparency is not a policy requirement in the traditional sense - CAs are free to issue certificates which violate CT requirements; those certificates just won't work in CT-enforcing browsers. This allows CAs to issue unlogged certificates to customers who don't want their certificates to be public knowledge (and don't need them to work in browsers). Of course, that's not what the CAs here were doing - they were clearly trying to issue certificates that work in browsers; they just did a bad job of it.
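The correct copy-through amounts to base64-decoding the extensions value from the log's JSON response and embedding the resulting raw bytes verbatim. A sketch (the response shape follows RFC6962's add-pre-chain endpoint; shown with Node's Buffer, since this is CA-side code, not browser code):

```javascript
// Extract the raw extension bytes a CA must embed in the SCT.
// Leaving the field empty (Cybertrust Japan's bug) or embedding the
// base64 text itself (NAVER's bug) both break the SCT signature.
function sctExtensionBytes(logResponse) {
  // logResponse is the parsed JSON from add-pre-chain, e.g.
  // { sct_version: 0, id: "...", timestamp: ..., extensions: "...", signature: "..." }
  return Buffer.from(logResponse.extensions ?? "", "base64");
}

const demo = { extensions: Buffer.from([0, 5, 1, 2, 3, 4, 5]).toString("base64") };
console.log(sctExtensionBytes(demo)); // raw bytes, not base64 text
```

Verifying the SCT's signature before embedding it would catch either mistake, since verification only succeeds against the correctly decoded bytes.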
Previously:
-
🔗 r/LocalLLaMA Mistral AI drops 3x as many LLMs in a single week as OpenAI did in 6 years rss
Here are the GGUF links to Mistral AI’s "collected works" from the past week – all ready for local use:
Cutting-edge coding models:
- 24B parameters: https://huggingface.co/bartowski/mistralai_Devstral-Small-2-24B-Instruct-2512-GGUF
- 123B parameters: https://huggingface.co/bartowski/mistralai_Devstral-2-123B-Instruct-2512-GGUF
Top-tier reasoning models – perfectly sized for consumer hardware:
- 3B parameters: https://huggingface.co/bartowski/mistralai_Ministral-3-3B-Reasoning-2512-GGUF
- 8B parameters: https://huggingface.co/bartowski/mistralai_Ministral-3-8B-Reasoning-2512-GGUF
- 14B parameters: https://huggingface.co/bartowski/mistralai_Ministral-3-14B-Reasoning-2512-GGUF
Powerful instruct models for local setups:
- 3B parameters: https://huggingface.co/bartowski/mistralai_Ministral-3-3B-Instruct-2512-GGUF
- 8B parameters: https://huggingface.co/bartowski/mistralai_Ministral-3-8B-Instruct-2512-GGUF
- 14B parameters: https://huggingface.co/bartowski/mistralai_Ministral-3-14B-Instruct-2512-GGUF
Mistral’s most advanced instruct model:
- 675B parameters: https://huggingface.co/bartowski/mistralai_Mistral-Large-3-675B-Instruct-2512-GGUF
Licensing: All models under Apache 2.0, Devstral 2 with a modified MIT license.
What an insane achievement for a company that’s still small compared to OpenAI! Huge thanks to Mistral AI! <3
submitted by /u/Snail_Inference
[link] [comments] -
🔗 r/LocalLLaMA We did years of research so you don’t have to guess your GGUF datatypes rss
Hey r/LocalLLaMA, We've been working on ShapeLearn, a method that learns optimal datatypes for aggressive quantization while preserving quality. Instead of hand-picking formats and hoping for the best, it uses gradient descent to choose per-tensor (or per-group) bitlengths automatically. We're starting to release GGUF models produced with ShapeLearn, beginning with popular bases. We provide variants from ~5 bits down to ~2.7 bits per weight. The low-bit regime is where ShapeLearn really shines: it keeps quality high where traditional heuristic, experience-based approaches usually start to fall apart. While we're currently focused on LLMs and GGUF, the method itself is general. We can optimize any model, task, quantization method, or datatype family (INT/FP/BFP/etc). We're targeting the llama.cpp ecosystem first. Each release comes with:
- quality–vs–size–vs–speed tradeoffs,
- benchmarks on multiple hardware targets (RTX 5090, Intel i7, Raspberry Pi), and
- comparisons against other popular llama.cpp-style quantizers (shoutout to Unsloth, we use their work as a strong baseline and really like what they’re doing 💙).
If you want the deeper technical dive, the full write-up is on our blog: https://byteshape.com/blogs/Qwen3-4B-I-2507/ If you want to try the models directly, you can grab them here: https://huggingface.co/byteshape We’d really appreciate feedback, especially from folks who can test on their own hardware and workloads. Happy to answer questions, share more details, or maybe add extra benchmarks in the future if there’s interest. About us We’re ByteShape , a small team spun out of a University of Toronto research group, focused on making AI much more efficient. ShapeLearn’s goal is to remove the guesswork from choosing datatypes: it automatically adapts precision for each tensor, at any granularity, while keeping quality high even at very low bitlengths. submitted by /u/enrique-byteshape
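ShapeLearn chooses bitlengths by gradient descent; as a toy stand-in for the underlying idea (not ShapeLearn's method or API), here is a brute-force per-tensor bitlength picker that makes the quality-vs-size tradeoff concrete:

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization of x to the given bitlength."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.round(x / scale).clip(-qmax, qmax) * scale

def pick_bitlength(x, budget_mse):
    """Smallest bitlength whose reconstruction error stays under budget.
    ShapeLearn replaces this brute-force scan with gradient descent
    over datatype parameters, per tensor or per group."""
    for bits in range(2, 9):
        err = float(np.mean((x - quantize(x, bits)) ** 2))
        if err <= budget_mse:
            return bits, err
    return 8, err

rng = np.random.default_rng(0)
tensor = rng.normal(size=10_000).astype(np.float32)
bits, err = pick_bitlength(tensor, budget_mse=1e-3)
```

Different tensors end up with different bitlengths, which is exactly why a learned, per-tensor choice beats a single hand-picked format.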
[link] [comments]
-
🔗 r/LocalLLaMA zai-org/GLM-TTS · Hugging Face rss
- Zero-shot Voice Cloning: Clone any speaker's voice with just 3-10 seconds of prompt audio.
- RL-enhanced Emotion Control: Utilizes a multi-reward reinforcement learning framework (GRPO) to optimize prosody and emotion.
- High-quality Synthesis: Generates speech comparable to commercial systems with reduced Character Error Rate (CER).
- Phoneme-level Control: Supports "Hybrid Phoneme + Text" input for precise pronunciation control (e.g., polyphones).
- Streaming Inference: Supports real-time audio generation suitable for interactive applications.
- Bilingual Support: Optimized for Chinese and English mixed text.
submitted by /u/Dark_Fire_12
[link] [comments]
-
🔗 r/LocalLLaMA You can now train LLMs 3x faster with 30% less memory! (<3.9GB VRAM) rss
Hey [r/LocalLlama](/r/LocalLlama)! We're excited to release new Triton kernels and smart auto packing support to enable you to train models 3x (sometimes even 5x) faster with 30-90% less VRAM - all with no accuracy degradation. Unsloth GitHub: https://github.com/unslothai/unsloth
- This means you can now train LLMs like Qwen3-4B not only on just 3.9GB VRAM, but also 3x faster
- But how? It's all due to our new custom RoPE and MLP Triton kernels, plus our new smart auto uncontaminated packing integration
- Speed and VRAM optimizations will depend on your setup (e.g. dataset)
- You'll also see improved SFT loss stability and more predictable GPU utilization
- No need to enable these new additions as they're smartly enabled by default, e.g. auto padding-free uncontaminated packing is on for all training runs without any accuracy changes. Benchmarks show training losses match non-packing runs exactly.
Detailed breakdown of optimizations:
- 2.3x faster QK Rotary Embedding fused Triton kernel with packing support
- Updated SwiGLU, GeGLU kernels with int64 indexing for long context
- 2.5x to 5x faster uncontaminated packing with xformers, SDPA, FA3 backends
- 2.1x faster padding free, 50% less VRAM , 0% accuracy change
- We launched Unsloth with a Triton RoPE kernel in Dec, 2023. We’ve now merged the two Q/K kernels into one and added variable-length RoPE for pad-free packing.
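For intuition about what "uncontaminated" packing means, here is a minimal sketch of the general idea (not Unsloth's implementation): several short samples share one sequence, but a block-diagonal causal mask keeps them from attending to each other.

```python
import numpy as np

def packed_attention_mask(sample_lengths):
    """Block-diagonal causal mask for uncontaminated packing:
    each token may only attend (causally) within its own sample,
    so packed samples never leak into one another."""
    total = sum(sample_lengths)
    mask = np.zeros((total, total), dtype=bool)
    start = 0
    for n in sample_lengths:
        for i in range(n):
            # Causal attention restricted to this sample's block.
            mask[start + i, start : start + i + 1] = True
        start += n
    return mask

# Two samples of lengths 3 and 2 packed into one length-5 sequence:
mask = packed_attention_mask([3, 2])
```

Because the mask is equivalent to running the samples separately, training losses match non-packing runs while padding tokens disappear.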
You can read our educational blogpost for detailed analysis, benchmarks and more: https://docs.unsloth.ai/new/3x-faster-training-packing

And you can of course train any model using our new features and kernels via our free fine-tuning notebooks: https://docs.unsloth.ai/get-started/unsloth-notebooks

To update Unsloth to automatically make training faster, do:
```shell
pip install --upgrade --force-reinstall --no-cache-dir --no-deps unsloth
pip install --upgrade --force-reinstall --no-cache-dir --no-deps unsloth_zoo
```

And to enable manual packing support (we already do padding free, which should already provide a boost!) do:
```python
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig

model, tokenizer = FastLanguageModel.from_pretrained("unsloth/Qwen3-14B")
trainer = SFTTrainer(
    model = model,
    processing_class = tokenizer,
    train_dataset = dataset,
    args = SFTConfig(..., packing = True,),
)
trainer.train()
```

Hope you all have a lovely rest of the week! :)
submitted by /u/danielhanchen
[link] [comments]
-
🔗 Locklin on science The first AI bubble rss
The first investment bubble in “AI” happened in the 1980s. As I mentioned before, one of the things which kicked it off was Japanese investment in the fifth generation computing project. Go look at Blade Runner for an idea of how people thought of Japan back then: everyone figured they were the country of the […]
-
🔗 r/LocalLLaMA new CLI experience has been merged into llama.cpp rss
https://github.com/ggml-org/llama.cpp/pull/17824
submitted by /u/jacek2023
[link] [comments]
-
🔗 r/wiesbaden Where did the City Flitzer parking spots near Elsässer Platz go? rss
I was somewhat shocked yesterday evening when I couldn't find the parking spots for the Flitzer cars anymore. Most recently there were 6 of them in Nettelbeckstraße near the construction site. I then had no choice but to park at the roundabout.
Does anyone know whether there are new spots, or whether they'll come back once the construction work winds down?
submitted by /u/Altruistic-Flow5932
[link] [comments] -
🔗 r/reverseengineering The stack circuitry of the Intel 8087 floating point chip, reverse-engineered rss
submitted by /u/tnavda
[link] [comments] -
🔗 @cxiao@infosec.exchange with the complete travel bans, the open racism against somalis from the mastodon
with the complete travel bans, the open racism against somalis from the highest levels, discussing "remigration" like it's no big deal, the insane national security strategy....it really feels like in the last few weeks the US has crossed a new level of Cooked
-
🔗 @cxiao@infosec.exchange and as always, the restrictions being applied to people from privileged mastodon
and as always, the restrictions being applied to people from privileged countries are just a fraction of what people from less privileged countries have had to deal with, including full travel bans that nobody seems very concerned about :/
-
🔗 @cxiao@infosec.exchange it's always a good day to post vance_egghead.png on your public social media, mastodon
it's always a good day to post vance_egghead.png on your public social media, today most of all
-
🔗 @cxiao@infosec.exchange RE: mastodon
RE: https://journa.host/@w7voa/115692772629850738
canadians escaped for now but only because we don't require an ESTA....
i am NOT scrubbing vance_egghead.png off my social media just to have the (increasingly dubious) privilege of entering the US 😭
-
🔗 r/wiesbaden Looking for new friends/acquaintances in Mainz/Wiesbaden – anime, manga, gaming, Magic the Gathering rss
submitted by /u/Aggressive-Pizza9184
[link] [comments] -
🔗 Ampcode News Agent Skills rss
Amp now supports Claude Skills natively. Skills let the agent lazily load specific instructions on how to use local tools. We like skills because they improve the agent's tool-use performance in a very context-efficient way.
If you have existing skills in your
.claude/skills directory, Amp picks them up automatically. Zero config.

Our team has been doing a ton of experimenting with skills over the past few weeks. Here are a few that we have found particularly useful:
- Agent Sandbox: Isolated execution environment for running untrusted code safely.
- Agent Skill Creator: Meta-skill for creating Claude agents autonomously with comprehensive skill architecture patterns.
- BigQuery: Expert use of the bq cli tool for querying BigQuery datasets.
- Tmux: Run servers and long-running tasks in the background.
- Web Browser: Interact with web pages via Chrome DevTools Protocol for clicking, filling forms, and navigation.
Read more about how to use skills in our manual https://ampcode.com/manual#agent-skills
-
🔗 Ampcode News Amp Python SDK rss
For all of you who swear by tabs and clean syntax, the Amp Python SDK is now live.
You can run Amp programmatically from your Python code, just like you already do in TypeScript.
Here, for example, is how you instruct Amp to migrate React components with custom toolbox tools to validate changes:
```python
import asyncio
import os

from amp_sdk import execute, AmpOptions

prompt = """
Goal: Migrate all React components from React 17 to React 18.
1. Find all React component files (.tsx, .jsx)
2. For each component:
   - Update deprecated lifecycle methods
   - Replace ReactDOM.render with createRoot
3. Track any components that fail migration with the reason
4. Run the typecheck_test_tool after each change
5. Output a summary: migrated count, failed list with reasons
"""

async def main():
    # Use the toolbox directory to share tools with Amp
    toolbox_dir = os.path.join(os.getcwd(), "toolbox")
    async for message in execute(
        prompt,
        AmpOptions(
            cwd=os.getcwd(),
            toolbox=toolbox_dir,
            visibility="workspace",
            dangerously_allow_all=True,
        ),
    ):
        if message.type == "result":
            if message.is_error:
                print(f"Error: {message.error}")
            else:
                print(f"Summary: {message.result}")

if __name__ == "__main__":
    asyncio.run(main())
```

To get started, install the pip package and the Amp CLI:
```shell
# Install the Amp SDK with pip
$ pip install amp-sdk
# Install the Amp CLI globally
$ npm install -g @sourcegraph/amp
```

Now you can build anything with Amp in any Python runtime environment. To get more ideas and get familiar with the SDK, take a look at the examples in the manual.
-
- December 09, 2025
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-09 rss
IDA Plugin Updates on 2025-12-09
New Releases:
Activity:
- climacros
- b21dcb09: Delete CLAUDE.md
- IDA-VTableExplorer
- e7e1bace: feat(build): update builder with all the missing ones included
- IDAPluginList
- 82898a0a: Update
- plugin-ida
- quokka
- 2a7cbecf: Merge pull request #72 from quarkslab/dependabot/github_actions/actio…
- SuperHint
- climacros
-
🔗 Simon Willison Under the hood of Canada Spends with Brendan Samek rss
I talked to Brendan Samek about Canada Spends, a project from Build Canada that makes Canadian government financial data accessible and explorable using a combination of Datasette, a neat custom frontend, Ruby ingestion scripts, sqlite-utils and pieces of LLM-powered PDF extraction.
Here's the video on YouTube.
Sections within that video:
- 02:57 Data sources and the PDF problem
- 05:51 Crowdsourcing financial data across Canada
- 07:27 Datasette demo: Search and facets
- 12:33 Behind the scenes: Ingestion code
- 17:24 Data quality horror stories
- 20:46 Using Gemini to extract PDF data
- 25:24 Why SQLite is perfect for data distribution
Build Canada and Canada Spends
Build Canada is a volunteer-driven non-profit that launched in February 2025 - here's some background information on the organization, which has a strong pro-entrepreneurship and pro-technology angle.
Canada Spends is their project to make Canadian government financial data more accessible and explorable. It includes a tax sources and sinks visualizer and a searchable database of government contracts, plus a collection of tools covering financial data from different levels of government.
Datasette for data exploration
The project maintains a Datasette instance at api.canadasbilding.com containing the data they have gathered and processed from multiple data sources - currently more than 2 million rows plus a combined search index across a denormalized copy of that data.

Processing PDFs
The highest quality government financial data comes from the audited financial statements that every Canadian government department is required to publish. As is so often the case with government data, these are usually published as PDFs.
Brendan has been using Gemini to help extract data from those PDFs. Since this is accounting data the numbers can be summed and cross-checked to help validate the LLM didn't make any obvious mistakes.
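A sketch of that kind of consistency check (hypothetical helper and data, not the Canada Spends codebase): because audited statements print their own totals, the extracted line items can be summed and compared against the stated total to flag likely LLM extraction errors.

```python
def validate_extraction(line_items, stated_total, tolerance=0.01):
    """Cross-check LLM-extracted accounting rows: the line items of an
    audited statement must sum to the printed total, so a mismatch
    flags a likely extraction error (misread or hallucinated digits)."""
    computed = sum(amount for _, amount in line_items)
    return abs(computed - stated_total) <= tolerance, computed

# Hypothetical rows extracted from one statement page:
rows = [
    ("Salaries", 1_200_000.00),
    ("Transfers", 850_500.25),
    ("Capital", 99_499.75),
]
ok, total = validate_extraction(rows, stated_total=2_150_000.00)
```

Pages that fail the check can be routed back for re-extraction or manual review instead of landing in the database.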
Further reading
- datasette.io, the official website for Datasette
- sqlite-utils.datasette.io for more on sqlite-utils
- Canada Spends
- BuildCanada/CanadaSpends on GitHub
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 r/wiesbaden Fiber from OXG rss
Dear Wiesbadeners,
this week we were approached about whether we want a fiber connection. Fundamentally a great thing - but it's not Telekom doing it, "the way you'd expect", but OXG in cooperation with Vodafone. OXG does it free of charge.
Go for it, or wait for Telekom?
Any experiences with OXG?
Thanks - I look forward to the exchange
submitted by /u/Key_Entrepreneur_762
[link] [comments] -
🔗 p05wn/SuperHint fixed struct tracking method release
- Changed struct identification method from comment-based to ordinal-based
- Struct hints are now tracked by ordinal, which remains stable across renames
- No longer requires magic values in struct comments
Full Changelog: v1.0.0...v1.0.1
-
🔗 News Minimalist 🐢 Australia bans under-16s from social media + 13 more stories rss
In the last 4 days ChatGPT read 124018 top news stories. After removing previously covered events, there are 14 articles with a significance score over 5.5.

[5.6] Australia bans under-16s from social media — theguardian.com (+77)
Australia enacted a world-first social media ban for users under 16, forcing major platforms like TikTok and Instagram to remove millions of accounts or face significant fines.
The law requires platforms to remove under-16 accounts and block new sign-ups, with non-compliance fines up to $49.5m. Initial implementation has faced issues, with some teens bypassing age-verification tests, though most major platforms have agreed to comply with the ban.
[6.3] Trump reveals nationalist 'America First' strategy — swissinfo.ch (French) (+138)
The Trump administration has released a new nationalist National Security Strategy, prioritizing an "America First" foreign policy and predicting Europe's "civilizational erasure" due to mass migration and other trends.
The 33-page document calls for fighting mass migration, restoring U.S. supremacy in Latin America, and ending America's role in upholding the global order. It also plans to realign military presence toward the Americas.
The strategy, a departure from the previous administration's focus on Russia and China, omits any assessment of a Russian threat and urges Japan and South Korea to increase support for Taiwan.
[5.5] Attack compromises Chernobyl containment dome — huffingtonpost.fr (French) (+36)
A February drone attack damaged Chernobyl’s protective dome, which has now lost its main safety functions, the International Atomic Energy Agency confirmed in a report issued last Friday.
The IAEA found the attack created a 15-square-meter hole and caused a fire, rendering the dome no longer airtight. However, inspectors found no permanent damage to the containment structure's load-bearing supports or monitoring systems.
Ukraine has blamed a Russian drone for the attack, which Moscow denies. Authorities reported that radiation levels have remained normal and stable, with no radioactive leaks detected since the incident.
Highly covered news with significance over 5.5
[6.1] EU adopts Denmark's hardline migration policy — politico.eu (+63)
[6.1] Scientist breeds mosquitoes in Brazil to fight dengue disease — nature.com (+3)
[6.1] RSF seized Sudan's largest oil field — ctvnews.ca (+4)
[6.0] US advisers scrap infant hepatitis B vaccine recommendation — rnz.co.nz (+93)
[5.9] Trump allows Nvidia to sell chips in China — nbcnews.com (+61)
[5.9] German scientists found a gene causing mental illness — farodevigo.es (Spanish) (+10)
[5.9] China posts record $1 trillion trade surplus — fortune.com (+38)
[5.6] Federal judge overturned Trump's wind power ban — cnbc.com (+17)
[5.5] EU fines X for transparency violations — dn.se (Swedish) (+107)
[5.5] PFAS in pregnant women’s drinking water puts their babies at higher risk, study finds — theconversation.com (+2)
[6.6] New therapy reverses incurable blood cancer — bbc.com (+15)
Thanks for reading!
— Vadim
You can customize this newsletter with premium.
-
🔗 r/LocalLLaMA Devstral-Small-2-24B-Instruct-2512 on Hugging Face rss
submitted by /u/paf1138
[link] [comments]
-
🔗 r/LocalLLaMA Introducing: Devstral 2 and Mistral Vibe CLI. | Mistral AI rss
submitted by /u/YanderMan
[link] [comments]
-
🔗 r/reverseengineering Declarative Binary Parsing for Security Research with Kaitai Struct rss
submitted by /u/Beneficial_Cattle_98
[link] [comments] -
🔗 Anton Zhiyanov Go proposal: Secret mode rss
Part of the Accepted! series, explaining the upcoming Go changes in simple terms.
Automatically erase used memory to prevent secret leaks.
Ver. 1.26 • Stdlib • Low impact
Summary
The new
runtime/secret package lets you run a function in secret mode. After the function finishes, it immediately erases (zeroes out) the registers and stack it used. Heap allocations made by the function are erased as soon as the garbage collector decides they are no longer reachable.

```go
secret.Do(func() {
    // Generate a session key and
    // use it to encrypt the data.
})
```

This helps make sure sensitive information doesn't stay in memory longer than needed, lowering the risk of attackers getting to it.
The package is experimental and is mainly for developers of cryptographic libraries, not for application developers.
Motivation
Cryptographic protocols like WireGuard or TLS have a property called "forward secrecy". This means that even if an attacker gains access to long-term secrets (like a private key in TLS), they shouldn't be able to decrypt past communication sessions. To make this work, session keys (used to encrypt and decrypt data during a specific communication session) need to be erased from memory after they're used. If there's no reliable way to clear this memory, the keys could stay there indefinitely, which would break forward secrecy.
In Go, the runtime manages memory, and it doesn't guarantee when or how memory is cleared. Sensitive data might remain in heap allocations or stack frames, potentially exposed in core dumps or through memory attacks. Developers often have to use unreliable "hacks" with reflection to try to zero out internal buffers in cryptographic libraries. Even so, some data might still stay in memory where the developer can't reach or control it.
The solution is to provide a runtime mechanism that automatically erases all temporary storage used during sensitive operations. This will make it easier for library developers to write secure code without using workarounds.
Description
Add the
runtime/secret package with Do and Enabled functions:

```go
// Do invokes f.
//
// Do ensures that any temporary storage used by f is erased in a
// timely manner. (In this context, "f" is shorthand for the
// entire call tree initiated by f.)
// - Any registers used by f are erased before Do returns.
// - Any stack used by f is erased before Do returns.
// - Any heap allocation done by f is erased as soon as the garbage
//   collector realizes that it is no longer reachable.
// - Do works even if f panics or calls runtime.Goexit. As part of
//   that, any panic raised by f will appear as if it originates from
//   Do itself.
func Do(f func())

// Enabled reports whether Do appears anywhere on the call stack.
func Enabled() bool
```

The current implementation has several limitations:
- Only supported on linux/amd64 and linux/arm64. On unsupported platforms, Do invokes f directly.
- Protection does not cover any global variables that f writes to.
- Trying to start a goroutine within f causes a panic.
- If f calls runtime.Goexit, erasure is delayed until all deferred functions are executed.
- Heap allocations are only erased if ➊ the program drops all references to them, and ➋ then the garbage collector notices that those references are gone. The program controls the first part, but the second part depends on when the runtime decides to act.
- If f panics, the panicked value might reference memory allocated inside f. That memory won't be erased until (at least) the panicked value is no longer reachable.
- Pointer addresses might leak into data buffers that the runtime uses for garbage collection. Do not put confidential information into pointers.
The last point might not be immediately obvious, so here's an example. If an offset in an array is itself secret (you have a
data array and the secret key always starts at data[100]), don't create a pointer to that location (don't create a pointer p to &data[100]). Otherwise, the garbage collector might store this pointer, since it needs to know about all active pointers to do its job. If someone launches an attack to access the GC's memory, your secret offset could be exposed.

The package is mainly for developers who work on cryptographic libraries. Most apps should use higher-level libraries that use
secret.Do behind the scenes.

As of Go 1.26, the
runtime/secret package is experimental and can be enabled by setting GOEXPERIMENT=runtimesecret at build time.

Example
Use
secret.Do to generate a session key and encrypt a message using AES-GCM:

```go
// Encrypt generates an ephemeral key and encrypts the message.
// It wraps the entire sensitive operation in secret.Do to ensure
// the key and internal AES state are erased from memory.
func Encrypt(message []byte) ([]byte, error) {
    var ciphertext []byte
    var encErr error
    secret.Do(func() {
        // 1. Generate an ephemeral 32-byte key.
        // This allocation is protected by secret.Do.
        key := make([]byte, 32)
        if _, err := io.ReadFull(rand.Reader, key); err != nil {
            encErr = err
            return
        }
        // 2. Create the cipher (expands key into round keys).
        // This structure is also protected.
        block, err := aes.NewCipher(key)
        if err != nil {
            encErr = err
            return
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            encErr = err
            return
        }
        nonce := make([]byte, gcm.NonceSize())
        if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
            encErr = err
            return
        }
        // 3. Seal the data.
        // Only the ciphertext leaves this closure.
        ciphertext = gcm.Seal(nonce, nonce, message, nil)
    })
    return ciphertext, encErr
}
```

Note that
secret.Do protects not just the raw key, but also the cipher.Block structure (which contains the expanded key schedule) created inside the function.

This is a simplified example, of course — it only shows how memory erasure works, not a full cryptographic exchange. In real situations, the key needs to be shared securely with the receiver (for example, through key exchange) so decryption can work.
Links & Credits
𝗣 21865 👥 Dave Anderson, Filippo Valsorda, Jason A. Donenfeld, Russ Cox
𝗖𝗟 704615 👥 Daniel Morsing, Keith Randall
*[Low impact]: Likely impact for an average Go developer
-
🔗 HexRaysSA/plugin-repository commits sync repo: +2 plugins, +2 releases rss
sync repo: +2 plugins, +2 releases ## New plugins - [vt-ida-plugin](https://github.com/VirusTotal/vt-ida-plugin) (1.0.6) - [yarka](https://github.com/AzzOnFire/yarka) (0.7.2) -
🔗 r/LocalLLaMA Check on lil bro rss
submitted by /u/k_means_clusterfuck
[link] [comments]
-
🔗 matklad Do Not Optimize Away rss
Do Not Optimize Away
Dec 9, 2025
Compilers are sneaky beasts. If you time code like this:
```zig
var total: u32 = 0;
for (0..N) |i| total += i;
print("total={}", .{total});
```

You will discover that LLVM is as smart as a little kid named Gauss, and replaces the summation with the equivalent closed-form formula N(N+1)/2.
What’s more, if you write something more complicated like
total += i + 2*i*i - i*i*i, you’ll see that LLVM figures out a closed-form expression for that as well (a generalization of the Gauss trick I proudly figured out in 11th grade). See for yourself: https://godbolt.org/z/T9EcTb8zq

Usually, this kind of thing is desirable — code runs faster! Except when you are trying to benchmark your code, and instead end up benchmarking an elaborate no-op.
There are two pitfalls with benchmarking. First, in
```zig
const start = now();
_ = computation();
const elapsed = now() - start;
```

a reasonable compiler can notice that
computation’s result is not used, and optimize the entire computation away.

Second, in
```zig
const parameter_a = 1_000_000;
const parameter_b = 1_000;
const start = now();
_ = computation(parameter_a, parameter_b);
const elapsed = now() - start;
```

even if the computation is not elided as a whole, the compiler can constant-fold parts of it, taking advantage of the fact that the values of the parameters are known at compile time.
Time To Be Killing The Dragon Again
Usually languages provide some sort of an explicit “please do not optimize this away” function, like Rust’s
hint::black_box or Zig’s mem.doNotOptimizeAway, but they always felt like dragon oil to me:

- Their meaning is tricky. The whole compilation pipeline is based on erasing everything about the original form of the code, maintaining only the semantics. But
black_box is transparent in the semantic spectrum! It is unexplainable using the normal “semantics-preserving transformations” compiler vocabulary.
It’s easier to explain via an example. Let’s say I am benchmarking binary search:
fn insertion_point(xs: []const u32, x: u32) usize { ... }I would use the following benchmarking scaffold:
fn benchmark(arena: Allocator) !void { const element_count = try parameter("element_count", 1_000_000); const search_count = try parameter("search_count", 10_000); const elements: []const u32 = make_elements(arena, element_count); const searches: []const u32 = make_searches(arena, search_count); const start = now(); var hash: u32 = 0; for (searches) |key| { hash +%= insertion_point(elements, key); } const elapsed = now().duration_since(start); print("hash={}\n", .{hash}); print("elapsed={}\n", .{elapsed}); } fn parameter(comptime name: []const u8, default: u64) !u64 { const value = if (process.hasEnvVarConstant(name)) try process.parseEnvVarInt(name, u64, 10) else default; print(name ++ "={}\n", .{value}); }On the input side, the
parameterfunction takes a symbolic name and a default value. It looks up the value among the environmental variables, with fallback. Because the value can be specified at runtime, compiler can’t optimize assuming a particular constant. And you also get a convenient way to re-run benchmark with a different set of parameters without recompiling.On the output side, we compute an (extremely weak) “hash” of the results. For our binary search — just the sum of all the indexes. Then we print this hash together with the timing information. Because we use the results of our computation, compiler can’t optimize them away!
Similarly to the
parameterfunction, we also get a bonus feature for free. You know who also loves making code faster by deleting “unnecessary” functionality? I do! Though I am not as smart as a compiler, and usually end up deleting code that actually is required to get the right answer. With the hash, if I mess my optimization work to the point of getting a wrong answer, I immediately see that reflected in an unexpected value of the hash.Consider avoiding black boxes for your next benchmark. Instead, stick to natural anti-optimizing-compiler remedies:
- Make input parameters runtime overridable (with compile time defaults),
- print the result (or the hash thereof).
- Their meaning is tricky. The whole compilation pipeline is based on erasing everything about the original form of the code, maintaining only the semantics. But
-
🔗 Ampcode News 200k Tokens Is Plenty rss
After Opus 4.5 became Amp's main model a few weeks ago, quite a few of us had to get used to a 200k token window again.
But not Lewis. He doesn't mind, because he "loves short threads" and now he wrote down how he uses them and why he would probably stop his threads early even if more tokens were available.
-



























