to read (pdf)
- I don't want your PRs anymore
- JitterDropper | OALABS Research
- DomainTools Investigations | DPRK Malware Modularity: Diversity and Functional Specialization
- EXHIB: A Benchmark for Realistic and Diverse Evaluation of Function Similarity in the Wild
- Neobrutalism components - Start making neobrutalism layouts today
- May 04, 2026
-
๐ anthropics/claude-code v2.1.128 release
What's changed
- Bare `/color` (no args) now picks a random session color
- `/mcp` now shows the tool count for connected servers and flags servers that connected with 0 tools
- `--plugin-dir` now accepts `.zip` plugin archives in addition to directories
- `--channels` now works with console (API key) authentication – console orgs with managed settings must set `channelsEnabled: true` to enable
- Updated `/model` picker: collapsed duplicate Opus 4.7 entries, and current Opus now shows as "Opus" instead of "Opus 4.7"
- Subprocesses (Bash, hooks, MCP, LSP) no longer inherit `OTEL_*` environment variables, so OTEL-instrumented apps run via the Bash tool no longer pick up the CLI's own OTLP endpoint
- MCP: `workspace` is now a reserved server name – existing servers with that name will be skipped with a warning
- Reconnecting MCP servers no longer flood the conversation with full tool-name lists on every reconnect – re-announced tools are summarized by server prefix
- SDK hosts now receive a persistent `localSettings` suggestion for Bash permission prompts, so "Always allow" writes to `.claude/settings.local.json`
- `EnterWorktree` now creates the new branch from local HEAD as documented, instead of `origin/<default-branch>` – unpushed commits are no longer dropped
- Auto mode: when the classifier can't evaluate an action, the error now includes a hint (retry, `/compact`, or run with `--debug`)
- Fixed focus mode briefly dimming the previous response when submitting a new prompt
- Fixed stray "4;0;" desktop notification on every `/exit` in Kitty and other terminals that interpret OSC 9 as a notification
- Fixed Remote Control showing an empty "Opening your options…" message on rate limit instead of actionable upsell options
- Fixed drag-and-drop image upload hanging on "Pasting text…" when the image read fails
- Fixed crash loop when piping very large input (>10 MB) to `claude -p` via stdin
- Fixed long URLs not being individually clickable on every wrapped row in fullscreen mode
- Fixed `/plugin` Components panel showing "Marketplace 'inline' not found" for plugins loaded via `--plugin-dir`
- Fixed MCP tool results dropping images when the server returns both structured content and content blocks
- Fixed fenced code blocks inside list items carrying leading whitespace into the clipboard on copy-paste
- Fixed tab navigation in `/config` stranding focus – the tab header now stays focused so arrows and Esc keep working
- Fixed markdown link labels being lost on terminals without OSC 8 hyperlink support – links now render as `label (url)` instead of just the URL
- Fixed sessions on 1M-context models with a smaller autocompact window being falsely blocked with "Prompt is too long" before reaching the actual API limit
- Fixed parallel shell tool calls: a failing read-only command (grep, git diff, ls) no longer cancels sibling calls
- Fixed banner showing "with X effort" on models that don't support effort
- Fixed `/fast` on 3P providers fuzzy-matching to an unrelated skill instead of showing "not available"
- Fixed Bedrock default model resolving to `global.*` instead of the region-appropriate prefix
- Fixed vim mode: `Space` in NORMAL mode now moves the cursor right, matching standard vi/vim behavior
- Fixed terminal progress indicator (OSC 9;4) flickering off between tool calls – stays visible across the full turn
- Fixed `/rename` without args failing on resumed sessions whose last entry is a compact boundary
- Fixed stale "remote-control is active" status lines from prior sessions appearing after `--resume`/`--continue`
- Fixed stale `installed_plugins.json` entries pointing at deleted cache directories polluting PATH
- Fixed MCP stdio servers receiving corrupted arguments when `CLAUDE_CODE_SHELL_PREFIX` is set and an argument contains spaces or shell metacharacters
- Fixed sub-agent progress summaries missing the prompt cache (~3× `cache_creation` reduction)
- Fixed `/plugin update` never detecting new versions of npm-sourced plugins
- Fixed sub-agent summaries firing repeatedly while a sub-agent's transcript is static, capping worst-case token cost on idle sub-agents
- Headless `--output-format stream-json`: `init.plugin_errors` now includes `--plugin-dir` load failures in addition to dependency demotions
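One of the fixes above (corrupted arguments under `CLAUDE_CODE_SHELL_PREFIX`) is an instance of a classic shell-quoting hazard: a wrapper that flattens its arguments into one string re-splits them on whitespace. A minimal sketch of the failure mode and the fix, using hypothetical wrapper functions (not Claude Code's actual implementation):

```shell
# Broken wrapper: expanding $* unquoted flattens the argument list,
# so "hello world" is re-split into two separate arguments.
run_naive() {
  prefix="$1"; shift
  $prefix $*
}

# Correct wrapper: "$@" forwards each argument unchanged, so spaces
# and shell metacharacters inside an argument survive.
run_safe() {
  prefix="$1"; shift
  "$prefix" "$@"
}

run_naive env printf '%s\n' "hello world" | wc -l   # 2 lines: argument was split
run_safe  env printf '%s\n' "hello world" | wc -l   # 1 line: argument preserved
```

The same rule applies to any prefix-wrapper design: build the command as an argument vector and pass it through verbatim, never as a concatenated string.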
-
๐ obra/superpowers v5.1.0 release
Removals
- Legacy slash commands removed – `/brainstorm`, `/execute-plan`, and `/write-plan` are gone. They were deprecated stubs that did nothing but tell the user to invoke the corresponding skill. Invoke `superpowers:brainstorming`, `superpowers:executing-plans`, and `superpowers:writing-plans` directly instead. (#1188)
- `superpowers:code-reviewer` named agent removed – the agent was the plugin's only named agent and was used by exactly two skills, while every other reviewer/implementer subagent in the repo dispatches `general-purpose` with a prompt template alongside its skill. The agent's persona and checklist have been merged into `skills/requesting-code-review/code-reviewer.md` as a self-contained Task-dispatch template. Anyone dispatching `Task (superpowers:code-reviewer)` should switch to `Task (general-purpose)` with the prompt template instead. (PR #1299)
- Integration sections removed from skills – these were a legacy of the time before agents had native skills systems and didn't help with steering.
Worktree Skills Rewrite
- `using-git-worktrees` and `finishing-a-development-branch` now detect when the agent is already running inside an isolated worktree and prefer the harness's native worktree controls before falling back to `git worktree`. Behavior was TDD-validated and cross-platform-checked across five harnesses. (PRI-974, PR #1121)
- Environment detection – both skills check `GIT_DIR != GIT_COMMON` before doing anything; if already in a linked worktree, creation is skipped entirely. A submodule guard prevents false detection.
- Consent before creating worktrees – `using-git-worktrees` no longer creates worktrees implicitly; the skill asks the user first. Fixes #991 (subagent-driven-development was auto-creating worktrees without consent).
- Native tool preference (Step 1a) – when the harness exposes its own worktree tool (e.g. Codex), the skill defers to it. The user's stated preference is respected when expressed.
- Provenance-based cleanup – `finishing-a-development-branch` only cleans up worktrees inside `.worktrees/` (created by superpowers); anything outside is left alone. Fixes #940 (Option 2 was incorrectly cleaning up worktrees), #999 (merge-then-remove ordering), and #238 (`cd` to repo root before `git worktree remove`).
- Detached HEAD handling – the finishing menu collapses to two options when there is no branch to merge from.
- Hardcoded `/Users/jesse` paths in skill examples replaced with generic placeholders. (#858, PR #1122)
Contributor Guidelines for AI Agents
Two new sections at the top of `CLAUDE.md` (symlinked to `AGENTS.md`) speak directly to AI agents. An audit of the last 100 closed PRs against this repo showed a 94% rejection rate driven by AI-generated slop: agents that didn't read the PR template, opened duplicates, fabricated problem descriptions, or pushed fork- or domain-specific changes upstream.
- Pre-submission checklist – read the PR template, search for existing PRs, verify a real problem exists, confirm the change belongs in core, and show the human partner the complete diff before submitting.
- What we will not accept – third-party dependencies, "compliance" rewrites of skill content, project-specific configuration, bulk PRs, speculative fixes, domain-specific skills, fork-specific changes, fabricated content, and bundled unrelated changes.
- New harness PRs require a session transcript – most past new-harness integrations copied skill files or wrapped with `npx skills` instead of loading the `using-superpowers` bootstrap at session start. The acceptance test ("Let's make a react todo list" must auto-trigger `brainstorming` in a clean session) and a complete transcript are now required.
Codex Plugin Mirror Tooling
New `sync-to-codex-plugin` script mirrors superpowers into the OpenAI Codex plugin marketplace as `prime-radiant-inc/openai-codex-plugins`. Path/user-agnostic so any team member can run it. (PR #1165)
- Clones the fork fresh into a temp directory per run, regenerates overlays inline, and opens a PR; auto-detects upstream from the script's own location and preflights `rsync`/`git`/`gh auth`/`python3`.
- `--bootstrap` flag for first-time setup; `EXCLUDES` patterns anchored to source root; `assets/` excluded.
- Mirrors `CODE_OF_CONDUCT.md`; drops the `agents/openai.yaml` overlay.
- Seeds `interface.defaultPrompt` in the mirrored `plugin.json`. (PR #1180 by @arittr)
- Codex plugin files are committed to the source repo so the sync script uses canonical versions; Codex marketplace metadata is preserved.
OpenCode
- Bootstrap content cached at module level – `getBootstrapContent()` was calling `fs.existsSync` + `fs.readFileSync` + frontmatter regex on every agent step (the `experimental.chat.messages.transform` hook fires on every step in OpenCode's agent loop). Now read once, cached for the session lifetime, with a null sentinel for the missing-file case. 15 regression tests cover cache behavior, fs call counts, the injection guard, the missing-file sentinel, and cache reset. (Fixes #1202)
- Integration tests modernized.
- Install caveats clarified in the README.
Code Review Consolidation
- `requesting-code-review` is now self-contained: the persona, checklist, and dispatch template live in `skills/requesting-code-review/code-reviewer.md` and the skill dispatches `Task (general-purpose)` directly. (PR #1299)
- Single source of truth – the persona/checklist that previously lived in both `agents/code-reviewer.md` and the skill's placeholder template (and drifted independently) is now one file.
- `subagent-driven-development` follows suit – its `code-quality-reviewer-prompt.md` now dispatches `Task (general-purpose)` instead of the named agent.
- Behavioral test added – `tests/claude-code/test-requesting-code-review.sh` plants real bugs (SQL injection, plaintext password handling, credential logging) into a tiny project and asserts the dispatched reviewer flags every planted issue at Critical/Important severity and refuses to approve the diff.
- Codex and Copilot workaround docs trimmed – the "Named agent dispatch" sections in `references/codex-tools.md` and `references/copilot-tools.md` documented how to flatten a named agent into a generic dispatch. With no named agents shipping, the workaround is unnecessary; both sections were dropped.
Subagent-Driven Development
- No more pause every 3 tasks – the "review after each batch (3 tasks)" cadence in `requesting-code-review` (originally for `executing-plans`) was leaking into `subagent-driven-development`. Replaced with "each task or at natural checkpoints" plus an explicit continuous-execution directive.
- SDD integration test now runs its assertions – three independent bugs caused the test to silently bail before printing any verification results: an unresolved `..` segment in the working-dir path, a `set -euo pipefail` interaction with `find | sort | head -1` (SIGPIPE on the producer killed the script), and a missing `--plugin-dir` on the `claude -p` invocation that caused the test to load the installed plugin instead of the working tree. All three fixed; six verification tests now actually run against a real end-to-end SDD run.
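The `pipefail` interaction called out above is easy to reproduce in isolation. A minimal generic repro (not the actual test script): once `head` exits, the producer is killed with SIGPIPE, the pipeline's status becomes non-zero, and under `set -e` that aborts the script even though `head` already produced the right answer.

```shell
set -uo pipefail   # deliberately without -e, so we can inspect the status

# head exits after one line; seq keeps writing, fills the pipe buffer,
# and is killed with SIGPIPE (status 128 + 13 = 141 on most systems).
# With pipefail, the pipeline as a whole reports that failure.
seq 1 100000 | head -n 1 >/dev/null
echo "pipeline status: $?"

# Workaround for cases like this: append `|| true` (or inspect the
# status explicitly) so a producer-side SIGPIPE cannot trip `set -e`.
first=$(seq 1 100000 | sort -n | head -n 1) || true
echo "first: $first"
```

The subtle part is that the command whose output you care about (`head`) succeeded; only the producer "failed", and only because the consumer stopped reading early.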
Cursor
- Windows SessionStart hook routed through `run-hook.cmd` instead of invoking the extensionless `session-start` script directly. Fixes Windows opening the file in an editor instead of running it. Also removed an accidental UTF-8 BOM from `hooks-cursor.json`.
Gemini CLI
- Subagent dispatch mapping – Gemini's `Task` dispatch now maps to `@agent-name`/`@generalist`, with parallel subagent dispatch documented for independent tasks.
Skills
- Terminology cleanups across skill content.
Documentation & Install
- Factory Droid installation instructions added to README.
- Quickstart install links in README. (PR #1293 by @arittr)
- Codex plugin install guidance updated. (PR #1288 by @arittr)
- Codex `wait` mapping corrected to `wait_agent` in the tools reference.
- Install order reorganized; Codex install instructions cleaned up.
- Removed vestigial `CHANGELOG.md` in favor of `RELEASE-NOTES.md` as the single source. (PR #1163 by @shaanmajid)
- Discord invite link fixed; release announcements link and a detailed Discord description added to the Community section.
-
๐ r/wiesbaden A Wiesbaden resident built the first guillotine rss
submitted by /u/Happycosinus
[link] [comments] -
๐ r/Yorkshire Spotted in Guernica, Spain rss
submitted by /u/hillboy286
[link] [comments] -
๐ r/Leeds Cars constantly parked in cycle lanes / Cycle Superhighway – anything actually being done? rss
Just wanted to see if anyone else has had this issue or knows what can actually be done about it.
I've started using the cycle lanes and Cycle Superhighway a lot more recently and honestly really like them – I've ended up replacing most of my car journeys around Leeds with cycling.
The problem is there are constantly cars parked in the cycle lanes, especially along the superhighway, which kind of defeats the point and often forces you out into traffic or onto the pavement.
I've already emailed the council and CityConnect about it a few times but never seem to get a response.
Is there a better way to report this? Or anyone specific that actually deals with enforcement? Just feels a bit pointless having the infrastructure if it's not kept clear.
submitted by /u/_testingdude
[link] [comments] -
๐ r/reverseengineering Reverse-engineering Final Fantasy X (PS3) trophy system with Ghidra rss
submitted by /u/JoshLeaves
[link] [comments] -
๐ r/Yorkshire Yorkshire, Yorkshire! Spotted in Downtown Toronto rss
submitted by /u/Del_213
[link] [comments] -
๐ r/reverseengineering Where do i find reverse engineers for actuators? Ideally in Shenzhen rss
submitted by /u/Sad-Lack8225
[link] [comments] -
๐ badlogic/pi-mono v0.73.0 release
New Features
- Xiaomi MiMo API billing and regional Token Plan providers – `xiaomi` now uses API billing, with separate `xiaomi-token-plan-{cn,ams,sgp}` providers. See docs/providers.md#api-keys and README.md#providers--models. (#4112 by @Phoen1xCode)
- Incremental bash output streaming – Bash tool output now appears while commands run instead of only after completion. (#4145)
- Compact read rendering – Interactive `read` output for Pi docs, context files, and skills is collapsed by default and shows selected line ranges.
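Incremental output consumption of the kind described above can be sketched generically in shell: read the producer's output line-by-line as it arrives, instead of collecting it after the command exits (a toy illustration, not pi-mono's actual TypeScript implementation):

```shell
# A producer that emits output gradually, like a long-running build.
slow_producer() {
  for i in 1 2 3; do
    echo "chunk $i"
    sleep 0.2
  done
}

# Streaming consumer: each line is rendered the moment the producer
# flushes it, rather than after slow_producer has exited.
slow_producer | while IFS= read -r line; do
  printf 'render: %s\n' "$line"
done
```

The same shape applies regardless of host language: attach to the child's stdout pipe and render per line (or per chunk), rather than buffering until exit.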
Breaking Changes
- Switched the built-in `xiaomi` provider from Token Plan AMS to Xiaomi's API billing endpoint, and renamed its `/login` display from "Xiaomi MiMo Token Plan" to "Xiaomi MiMo". `XIAOMI_API_KEY` now refers to the API billing key from platform.xiaomimimo.com. Users on Token Plan should switch to the appropriate `xiaomi-token-plan-*` provider and set the corresponding env var (#4112 by @Phoen1xCode).
Added
- Added three Xiaomi MiMo Token Plan regional providers visible in `/login`: `xiaomi-token-plan-cn` (`XIAOMI_TOKEN_PLAN_CN_API_KEY`), `xiaomi-token-plan-ams` (`XIAOMI_TOKEN_PLAN_AMS_API_KEY`), `xiaomi-token-plan-sgp` (`XIAOMI_TOKEN_PLAN_SGP_API_KEY`). Each defaults to `mimo-v2.5-pro` (#4112 by @Phoen1xCode).
Changed
- Changed `read` tool rendering to collapse Pi documentation, AGENTS/CLAUDE context files, and `SKILL.md` contents by default in interactive output.
Fixed
- Fixed generated OpenAI-compatible model metadata for Qwen 3.5/3.6 and MiniMax M2.7, so those models work through the built-in provider catalog (#4110 by @jsynowiec).
- Fixed Bedrock Claude Opus 4.7 `xhigh` thinking requests by preserving the provider's native effort value.
- Fixed OpenAI Codex WebSocket transport to fall back to SSE when setup fails before streaming starts, and surface transport diagnostics in the assistant message (#4133).
- Fixed OpenAI Codex WebSocket transport keeping `--print` and JSON mode processes alive after the response by closing cached WebSocket sessions during session shutdown (#4103).
- Fixed compact `read` tool calls to render directly and include selected line ranges in interactive output.
- Fixed interactive sessions to exit when terminal input is lost instead of continuing in a broken state.
- Fixed bash tool output to stream incrementally while commands run instead of waiting for command completion (#4145).
- Fixed selector and autocomplete fuzzy ranking to prioritize exact matches.
-
๐ r/reverseengineering [CrackMe] PyVMP v6 : The Fortress. I dare you to break it (again x2). rss
submitted by /u/PynaBola
[link] [comments] -
๐ r/wiesbaden Good place to eat and watch football rss
Hello everyone,
I hope this question fits here:
I'll be in Wiesbaden on business on Wednesday and am looking for a place where you can have a good dinner in the evening but also watch the Champions League second leg.
Many thanks for your tips.
submitted by /u/Julansda
[link] [comments] -
๐ sacha chua :: living an awesome life From David Dimagid: What we talk about when we talk about recommending Emacs packages rss
David Dimagid wrote this post for Emacs Carnival May 2026: "May I recommend…". Here it is!
Someone recently said on emacs-devel that they'd like to talk about recommending ELPA packages. Someone else said we should first ask what "recommending" actually means. RMS opened a thread asking that very question. It's still open, and you can follow it there (ELPA: to curate or not to curate).
I think we could apply Rich Hickey's technique here and start by looking up the definition of "recommend" in the dictionary. I invite everyone to do so with whatever dictionary you have at hand and to trust your definitions.
Now, we could evaluate ELPA packages for recommendation based on whether they complement or improve functionality already present in the core. For example, diff-hl by Dmitry Gutov. Its description says:
diff-hl-mode highlights uncommitted changes on the side of the window, allows you to jump between and revert them selectively. In buffers controlled by Git, you can stage and unstage the changes.
That last feature โstaging partial hunksโ is missing from VC, and diff-hl adds it seamlessly. We could say diff-hl complements the core.
Then there are major mode packages, like csv-mode, markdown-mode, cobol-mode, and so on. They add functionality that doesn't exist in the core. They have no direct equivalent. We could call them standalone packages.
Now consider another excellent package, like diff-hl, that depends only on the core: expreg, by Yuan Fu, the region expansion package. With a single key, it expands the region based on context. The core already offers this through sexp movement commands, but not with a single keybinding โ you need several. Some will prefer the native core way; others will prefer the package. We could say expreg improves or, depending on how you look at it, duplicates the core's functionality.
So, in my opinion, package recommendations should be structured around their relationship with the Emacs core. I believe the best-regarded ELPA packages should be those that encourage users to use what the core already offers, first and foremost, and then try those packages because they extend a feature the core lacks or complement it. This would also help more people discover lesser-known core features, increase bug reports, and, over time, bring more contributors to Emacs. That way, the Emacs community could have a package repository it can trust for as long as Emacs exists. Perhaps the person who wrote Elfeed would have known about Newsticker and would have contributed to that package instead. Perhaps if we recommended what Emacs already offers, the Elisp we write would be Elisp of and for Emacs.
If you e-mail me your comments, I can forward them to David!
You can e-mail me at sacha@sachachua.com.
-
๐ sacha chua :: living an awesome life Emacs Carnival May 2026: "May I recommend..." rss
It's May and I like puns, so I'm going to suggest "May I recommend…" as our Emacs Carnival theme this month, building on lively conversations about people's favourite packages on lobste.rs, Reddit, and Hacker News. Let's go beyond packages and talk workflows, tips, practices, perspectives… whatever you'd recommend!
It was pretty nice having a wiki page that people could edit without needing to wait for me, so if you write about this topic, feel free to add your link there. If you run into problems doing that, please e-mail me and I can add the link for you.
People have already started sharing their recommendations:
- May Emacs Carnival
- May I Recommend EWM | Dilip's Log
- From David Dimagid: What we talk about when we talk about recommending Emacs packages
I'll also do a round-up post at the end of the month so that it shows up in people's RSS feeds.
Looking forward to seeing what y'all recommend!
You can e-mail me at sacha@sachachua.com.
-
๐ r/Leeds Loneliness rss
Damn the loneliness, after 9-5 all i can do is get some beers. There is nothing much to do , no one to talk to. Anyone who has been in my shoes - advice how did you get better ? I migrated here in March.
submitted by /u/FarziiHu
[link] [comments] -
๐ r/Yorkshire River Nidd rss
The river at Little Ribston; unexpectedly beautiful. But then, it's Yorkshire. submitted by /u/Inevitable-Debt4312
[link] [comments] -
๐ r/wiesbaden Moving here -> which internet provider? rss
Since I'm moving to Wiesbaden in 2 months, and yes, even though this often has no real connection to the city: which provider causes the fewest headaches and is thoroughly recommendable in terms of price and service?
The address check has already been done. All the usual providers are an option.
Thank you! :-)
submitted by /u/allroundurso
[link] [comments] -
๐ r/Yorkshire More Whitby! rss
submitted by /u/SectorSensitive116
[link] [comments] -
๐ r/york Typical old pubs in York rss
Hi everyone, I'm looking for some quaint pubs in York to drink and eat. I'll be visiting the city for a couple of days this weekend!
submitted by /u/Resident-Direction86
[link] [comments] -
๐ r/Leeds Both Queens Court and The Bridge have closed down in absolutely devastating news for the LGBTQ+ Scene in leeds rss
submitted by /u/28peteslater
[link] [comments] -
๐ r/reverseengineering [WIP] Resolve indirect calls in Binary Ninja with DynamoRIO instrumentation rss
submitted by /u/Weird_Field_8518
[link] [comments] -
๐ sacha chua :: living an awesome life 2026-05-04 Emacs news rss
Thanks to everyone who shared their thoughts on the April 2026 Emacs Carnival theme of Newbies and Starter Kits. Check out that post to see all the entries people have shared so far. I enjoyed chatting with Prot about the topic, and he shared some defaults that even experienced users have been trying out. The carnival theme for May 2026 is "May I recommend…". Looking forward to reading your posts!
- Upcoming events (iCal file, Org):
- Emacs.si (in person): Emacs.si meetup #5 2026 (v #živo) https://dogodki.kompot.si/events/b4192df7-3da4-41b8-95a3-532b93923656 Mon May 4 1900 CET
- EmacsATX: Emacs Social https://www.meetup.com/emacsatx/events/314341747/ Thu May 7 1600 America/Vancouver - 1800 America/Chicago - 1900 America/Toronto - 2300 Etc/GMT – Fri May 8 0100 Europe/Berlin - 0430 Asia/Kolkata - 0700 Asia/Singapore
- Atelier Emacs Montpellier (in person) https://lebib.org/date/atelier-emacs Fri May 8 1800 Europe/Paris
- London Emacs (in person): Emacs London meetup https://www.meetup.com/london-emacs-hacking/events/314540885/ Tue May 12 1800 Europe/London
- Emacs Berlin: In-Person-Only Emacs-Berlin Stammtisch https://emacs-berlin.org/ Tue May 12 1900 Europe/Berlin
- OrgMeetup (virtual) https://orgmode.org/worg/orgmeetup.html Wed May 13 0900 America/Vancouver - 1100 America/Chicago - 1200 America/Toronto - 1600 Etc/GMT - 1800 Europe/Berlin - 2130 Asia/Kolkata – Thu May 14 0000 Asia/Singapore
- Sacha Chua: May 14: Sacha, Prot, and Philip Kaludercic Talk Emacs: Newcomer Experience (Protesilaos)
- Beginner:
- Emacs configuration:
- Must-have Emacs packages you should know about [Updated] (Reddit)
- Jiewawa: Overriding keybindings with Meow
- Magnus: Follow-up on switching to eglot - more about use-package
- Emacs config (15:08)
- badele/idem: Doom Emacs configuration for DevOps workflows (bash, go, json, python, terraform, typescript, etc…) (@jesuislibre.org on Bluesky)
- Sharing my emacs.d while cleaning up my folder a bit. (Reddit)
- My Emacs Config (Reddit)
- Been working on my emacs config lately (Reddit)
- My configuration and workflow for game development in emacs with Godot
- Emacs Lisp:
- Contributing to ELPA (@pkal@social.sdfeu.org, Reddit)
- compat 31.0.0.0 released, stabilization in progress (@minad@mastodon.world)
- Dave's blog: Writing an automated test to try to find an Emacs bug
- NeLisp v1.0 โ Emacs Lisp implemented in Elisp, plus a small Rust runtime that runs it without Emacs (Reddit)
- Appearance:
- Navigation:
- Writing:
- How I use quick-sdcv to get the Oxford English Dictionary in my Emacs
- Dave Pearson: blogmore.el v4.3.0 - blogmore-toggle-invite-comments, blogmore-invite-comments-to
- Denote:
- Org Mode:
- Stupidly Simple Notes Taking With Emacs - Linux Renaissance (@darth@watch.linuxrenaissance.com)
- I built an org-mode weekday repeater, .+wd
- Jonathan Chu: Introducing grove.el - note-taking workflow for Org
- Experimental/personal PDF-viewing/notetaking minor mode I (sort of) vibe-coded. (Reddit) dired + pdfview + org
- Import, export, and integration:
- Implementing a minimal evergreen blog in HTML and Emacs Lisp (Reddit, HN)
- Randy Ridenour: Managing Multiple-Choice Questions With Org Mode
- jamesendreshowell/org-teach-worksheet: Emacs lisp and Org macros for authoring classroom worksheets - Codeberg.org (@jameshowell@fediscience.org)
- schue/org-canvas: upload Org mode files directly into an instance of the Canvas LMS. (@schuemaa@ecoevo.social)
- canvas.el/canvas.org - interact with the Canvas learning management system (@locallytrivial@mathstodon.xyz)
- From Org-mode to Trilium Notes, via Obsidian · El blog de Lázaro (@elblogdelazaro)
- tykayn/orgmode-to-gemini-blog - Source Bliss: As Manon would say, sources matter. (@tykayn@mastodon.cipherbliss.com)
- Completion:
- History: delete old duplicates, but still rank by frecency (@minad@mastodon.world)
- vertico-posframe-preview: a preview sidecar for vertico-posframe (Reddit)
- VOMPECCC from Scratch: Picking Fruits and Veggies with ICR (YouTube 51:06, Reddit, HN) - incremental completing read with vertico, consult, marginalia, etc.
- Coding:
- Code to run magit-status on a project (@robjperez@fosstodon.org)
- Wireframe.el Keyboard-first wireframe prototyping inside GNU Emacs.
- Auto-mark rules, snooze, marking and filters for GitHub notifications in Emacs (Reddit)
- eglot, emscripten, and clangd (@robjperez@fosstodon.org)
- Einar Mostad: Fix Emacs python-mode REPL and org code block with python evaluation problems
- uv.el – a declarative Emacs interface for the uv Python package manager (experimental) (Reddit)
- Magnus: Secrets when connecting to DBs
- Using our new Lua debugger, LuaProbe, we made an Emacs package for it (Reddit)
- Package announcement: go-prettify-mode.el (Reddit)
- Emacs is a fantastic SQL editor - see the comments for more recommendations
- Mail, news, and chat:
- Evil mode:
- Multimedia:
- Fun:
- Server play support in nethack-el: Help lobby for support on popular Nethack servers
- AI:
- macher-agent: Similar to gptel-agent but within the macher context (Reddit)
- adds $ completion for Codex skills in agent-shell buffers (Reddit)
- Agent's major mode kit (Reddit)
- Emacs manager for OpenAI Codex conversations (Reddit)
- anvil.el v1.0.0 – first stable, anvil-ide split, anvil-pkg sister, and a no-Emacs path via NeLisp (Reddit) - lets AI agents use Emacs as a workbench via MCP
- Community:
- Emacs Carnival April 2026:
- Emacs Carnival in May (and in general) (Reddit)
- The gravitational pull of Emacs โ baty.net (@jbaty@social.lol)
- Kent Pitman and Ramin Honary join on #commonLisp #lisp #IDE #emacs #schemacs #UX #lispyGopherClimate - toobnix (@screwtape@toobnix.org)
- SimHacker/NeMACS: UniPress Emacs 2.20 for NeWS ยท GitHub (released 1989) (@kickingvegas@sfba.social)
- Kent Pitman #demo 1977-1984 #MIT #ITS #DDT #TECO #EMACS #LISP #MACLISP - toobnix (@screwtape@toobnix.org)
- A Report on Burnout in Open Source Software Communities (2025, PDF) (@yantar92@fosstodon.org) - not Emacs-specific, but good to think about long-term
- Other:
- Emacs development:
- The emacs-31 branch will be cut in one week (Reddit, Irreal)
- Demote 'completion-preview-is-calling'
- Project prompters always default to current project, if any
- New variable 'completion-preview-is-calling'
- Always compile w32image.c on MinGW (Bug#80924)
- New VC commands for remote unintegrated changes
- New commands to report diffs of all local changes
- New packages:
- emcp: Lets your agent talk to Emacs (MELPA)
- forgejo: Emacs Forgejo Front-end (GNU ELPA)
- grove: Obsidian-like note-taking for org files (MELPA)
- keymap-popup: Described keymaps with popup help (GNU ELPA)
- mysql: Pure Elisp MySQL wire protocol client (MELPA)
- outline-stars: Outshine-style star headings for outline-minor-mode (MELPA)
- simulacrum: Inject custom event types into the event stream (MELPA)
- sql-bigquery: Adds BigQuery support to SQLi mode (MELPA)
- tmux-csi-u: Tmux CSI-u decoder (MELPA)
- ttx-mode: TrueType/OpenType font viewer using ttx (MELPA)
Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!
You can e-mail me at sacha@sachachua.com.
-
๐ r/LocalLLaMA Llama.cpp MTP support now in beta! rss
Happy to report that llama.cpp MTP support is now in beta, thanks to Aman (and all the others that have pushed the various issues in the meantime). This has the potential to actually get merged soon-ish. Currently contains support for Qwen3.5 MTP, but other models are likely to follow suit. Between this and the maturing tensor-parallel support, expect most performance gaps between llama.cpp and vLLM, at least when it comes to token generation speeds, to be erased. submitted by /u/ilintar
[link] [comments] -
๐ r/Yorkshire Taking the long way round. I could wander these dry stone wall paths forever and still find a new view to admire rss
submitted by /u/HammersAndPints
[link] [comments] -
๐ r/reverseengineering IDA-MCP Is Now RE-MCP With Ghidra Support rss
submitted by /u/jtsylve
[link] [comments] -
๐ @malcat@infosec.exchange [#Malcat](https://infosec.exchange/tags/Malcat) 0.9.14 is out! mastodon
#Malcat 0.9.14 is out!
This is a maintenance build, with some bonuses:
- AccessDB parsing
- RAR unpacking
- UPX (static) unpacking
- Improved __noreturn detection
- ... and as usual, up-to-date signature, constants and Kesakode DBs.
Happy reversing!
-
๐ r/reverseengineering Reverse-engineered the BLE protocol of the LuckPrinter-SDK family of thermal pocket printers (DP-L1S) โ Python CLI + Web Bluetooth client + full command reference rss
submitted by /u/ChiaraCannolee
[link] [comments] -
๐ r/york My favourite therapeutic loop. I could walk this a thousand times and never get bored rss
submitted by /u/Coffee000Oopss
[link] [comments] -
๐ r/Harrogate Recommendations for someone to lay a shed base in Harrogate? rss
Hi all,
Looking for a bit of help/recommendations.
I need to get a shed base put in at the bottom of my garden and I'm weighing up either concrete or paving slabs. It's not a massive job, but I want it done properly so it's solid and lasts.
Does anyone know someone reliable in the Harrogate area who could take this on? Ideally someone you've used yourself and would recommend.
Thanks in advance
submitted by /u/Logical_Yogurt_520
[link] [comments] -
๐ r/LocalLLaMA it's time to update your Gemma 4 GGUFs rss
The chat template was fixed a few days ago.
Choose your fav dealer:
https://huggingface.co/bartowski/google_gemma-4-31B-it-GGUF
https://huggingface.co/bartowski/google_gemma-4-26B-A4B-it-GGUF
https://huggingface.co/bartowski/google_gemma-4-E4B-it-GGUF
https://huggingface.co/bartowski/google_gemma-4-E2B-it-GGUF
https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF
https://huggingface.co/unsloth/gemma-4-31B-it-GGUF
https://huggingface.co/unsloth/gemma-4-E4B-it-GGUF
https://huggingface.co/unsloth/gemma-4-E2B-it-GGUF
submitted by /u/jacek2023
[link] [comments] -
๐ r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.
submitted by /u/AutoModerator
[link] [comments] -
๐ r/Yorkshire Whitby rss
submitted by /u/Phil-pot
[link] [comments] -
๐ r/york Morning Yorkies! Going into York today and wondered if there are any really good deli sandwich shops? rss
Craving a really good sandwich with fresh bread if so.
submitted by /u/Yorkshire_Pudding02
[link] [comments] -
๐ Stavros' Stuff Latest Posts Adding a feature to a closed-source app rss
Who needs source code? I use Audiobookshelf (abbreviated ABS) for all my legal audiobooks that I bought legally, and I really like it. I also use the Smart Audiobook Player (abbreviated SABP) Android app, which I also bought (leg
-
๐ Rust Blog Rust is participating in Outreachy rss
The Rust Project has been building up a good history of participating in various open-source mentorship programs, including Google Summer of Code for three years (including this year) and previously OSPP. We're happy to announce that this year we are also participating in Outreachy starting in the May 2026 cohort.
Each of these mentorship programs has different criteria for eligibility depending on who they target and the motivations of the program. Outreachy provides internships in open source to people from any background who face underrepresentation, systemic bias, or discrimination in the technical industry where they are living. You can learn more about the Outreachy program on their website.
What is Outreachy and how is it different from Google Summer of Code
Outreachy is similar to Google Summer of Code (GSoC) in some aspects, but different in others. First off, unlike GSoC, Outreachy interns first apply to the overall program and only then can apply to specific communities. Second, while oftentimes GSoC applicants submit various contributions prior to their application, Outreachy has a dedicated period where contributions are not just optional, but required. Finally, Outreachy applicants submit an application similar to GSoC applications and communities pick interns based on those applications and the interns' contributions. Outreachy has two internship periods per year, one running from May to August (in which we are currently participating) and one from December to March.
The other major difference between Google Summer of Code and Outreachy is the source of intern stipends. For GSoC, Google graciously covers contributor stipends and overhead. For Outreachy, communities instead cover the interns' stipends and overhead.
We are mentoring 4 interns for the May 2026 cohort
Because of limited funding availability and mentoring capacity, the Rust Project decided to select four interns for mentorship. We'll briefly share these projects below.
Calling overloaded C++ functions from Rust
Ajay Singh has been selected, mentored by teor, Taylor Cramer, and Ethan Smith.
This project aims to implement an experimental feature for calling overloaded C++ functions from Rust, and to begin testing that feature in a few representative use cases.
Code coverage of the Rust compiler at scale
Akintewe Oluwasola has been selected, mentored by Jack Huey.
This project aims to develop the workflows to run and analyze code coverage of the compiler at the scale of the entire compiler test suite and on ecosystem crates detected by crater. The hope is to be able to detect when the compiler is inadequately tested, both within the compiler and in the ecosystem, and to build tools to do continuous analysis on this.
Fuzzing the a-mir-formality type system implementation
Tunde-Ajayi Olamiposi has been selected, mentored by Niko Matsakis, Rรฉmy Rakic, and tiif.
This project aims to implement fuzzing for a-mir-formality, an in-progress model for Rust's type and trait system. The goal is to generate programs in order to identify rules with underspecified semantics in a-mir-formality.
Improve the security of GitHub Actions of the Rust Project
oghenerukevwe Sandra Idjighere has been selected, mentored by Marco Ieni and Ubiratan Soares.
This project aims to improve the security of the GitHub Actions workflows of the repositories owned by the Rust Project. It will develop tools and workflows, integrating with existing software, to analyze GitHub repositories and detect whether they follow security best practices, fix existing issues, and ensure that good security practices are followed in the future.
What's next
Over the next 3 months, the interns will work closely with their mentors to make progress on their projects. When the internship period is over, we'll write another blog post to share the results! See you then!
We also want to thank all the people who submitted applications and made contributions. It was quite tough to decide which applicants to select. Hopefully we will participate in Outreachy again in the future, and there will be other opportunities to take part. We also very much welcome you to stick around and continue being involved - there are plenty of places in the Rust Project with opportunities to get involved.
-
๐ Julia Evans Links to CSS colour palettes rss
A while back I decided to stop using Tailwind for new projects and to just write vanilla CSS instead.
But one thing I missed about Tailwind was the colour palette (here as CSS). If I wanted a light blue I could just use blue-100, and if I didn't like it maybe try blue-200 or blue-50. I'm not very good with colours so it makes a big difference to me to have a reasonable colour palette that somebody who is better at colour than me has thought about.
But I'm also a little tired of those Tailwind colours, so I asked on Mastodon today what other colour palettes were out there. And then a friend said they wanted links to those colour palettes, so here's a blog post so my friend can see them, and all the rest of you too :)
my favourites
The ones I liked the most were:
- uchū (css file, FAQ)
- flexoki (css file)
- reasonable colours, which seems to have a focus on accessibility (css file)
more colour palettes
colourscheme generators
Folks also linked to a bunch of colour palette generators
I've always found these types of generators too hard to use but maybe one day I will get better enough at colour that I'm able to use a colour palette generator successfully so I'll leave those links there anyway.
and more colour tools:
- colorhexa has some info about colorblindness
- Generative colors with CSS gives an example of how to use the oklch CSS function to dynamically generate colors. -
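The trick behind generative palettes like that is to hold hue and chroma fixed in oklch() and vary only the lightness channel. A minimal Python sketch of the idea; the hue angle (250, roughly blue), the chroma of 0.12, and the Tailwind-style step names and lightness values below are made-up illustrative numbers, not taken from the linked post:

```python
def oklch_ramp(name, hue, chroma=0.12):
    """Emit CSS custom properties for a lightness ramp in oklch().

    Each step keeps the same hue and chroma and only changes lightness,
    which is what keeps the ramp looking like "the same colour, lighter
    or darker". Step names mimic Tailwind's 50-700 convention.
    """
    steps = {50: 0.97, 100: 0.93, 200: 0.86, 300: 0.76,
             400: 0.65, 500: 0.55, 600: 0.45, 700: 0.37}
    return "\n".join(
        f"--{name}-{step}: oklch({lightness:.2f} {chroma} {hue});"
        for step, lightness in steps.items()
    )

print(oklch_ramp("blue", 250))
```

Dropping the printed block into a `:root` rule gives a blue-50 through blue-700 ramp similar in spirit to the Tailwind palette mentioned above.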
๐ exe.dev Dev, Test, Prod: Choose One, Two, or Three rss
Industry-wide, we often develop our software in three distinct environments. Perhaps your laptop is a Mac; your CI system is hosted GitHub Actions, and your prod is k8s.
Three-in-One
For some use cases, you need not bother with the complexity; use one exe.dev vm for all three. A blog, a dashboard, a link shortener, a bot, and so on: these work well with the environments collapsed. Add features by asking Shelley to do so. Set up continuous deployment by asking Shelley to poll every hour. Use git for a backup if it calls for it. Voila!
Our internal tools sport an "Edit with Shelley" ribbon. They either point straight to the "vm.shelley.exe.xyz" domain, or link to exe.dev/new with a pre-filled prompt and pre-filled tags, just like the link here.
Just Dev
Use an exe.dev vm (or many) to work on your software. Set up the GitHub integration (docs) to make cloning easy. Some people work serially. Some people work using multiple worktrees on one vm. Some people have one vm per task or project. Clone your VMs using "cp" or configure them using setup scripts.
Using remote VMs opens up the convenience of mobile, opportunities for sharing, not to mention isolation from your other projects.
Why now? Many, many companies have tried remote development before. There is an entire graveyard of failed startups in this space. The big difference is agents. If your development is increasingly chat-based, the old arguments about getting your environment and dot-rc files just right fade away. The convenience of starting a task from your phone overwhelms the decades-old bashrc file and finely crafted PS1. As a bonus, you get the ability to share with your co-workers. Pull requests are so yesterday; send them a link to a working demo instead.
Just Test
Exe.dev VMs are a great place to riff on an idea. Perhaps you want to explore a particular open source project. Or you want to do some data analysis and share it with your co-workers? Or prototype your next idea? Or find your flakes by running your tests over and over again. Or let loose Shelley, our agent, on your app with its built-in browser? Or send off a security review. Or even just run a GitHub Actions runner.
Because you pick what access you want to give your VMs, and because they're persistent, exe.dev VMs are great places to test stuff out.
Just Prod
You can host real, production software in exe. We support custom domains with a bit of DNS configuration (docs).
If you're incredulous that this is a good idea, remember that the entirety of Stack Overflow ran on just a few machines. Reach out to us if you want to enlarge your VM as far as modern hardware can go.
Private, Internal, or Public
Once you build it, you'll want to share it. You can keep it to yourself, and that's the default. Or you can share it with your team or with share links. Or you can share it publicly. Sharing a VM's website is as easy as sharing any other online doc.
-
๐ Armin Ronacher Content for Contentโs Sake rss
Language is constantly evolving, particularly in some communities. Not everybody is ready for it at all times. I, for instance, cannot stand that my community is now constantly "cooking" or "cooked", that people in it are "locked in" or "cracked." I don't like it, because the use of the words primarily signals membership of a group rather than one's individuality.
But some of the changes to that language might now be coming from โฆ machines? Or maybe not. I don't know. I, like many others, noticed that some words keep showing up more than before, and the obvious assumption is that LLMs are at fault. What I did was take 90 days' worth of my local coding sessions and look for medium-frequency words where their use is inflated compared to what wordfreq would assume their frequency should be. Then I looked for the more common of these words and did a Google Trends search (filtered to the US). Note that some words like "capability" are more likely going to show up in coding sessions just because of the nature of the problem, so the actual increase is much more pronounced than you would expect.
You can click through it; this is what the change over time looks like. Note that these are all words from agent output in my coding sessions that are inflated compared to historical norms:
The interactive word trend chart requires JavaScript.
Something is going on for sure. Google Trends, in theory, reflects words that people search for. In theory, maybe agents are doing some of the Googling, but it might just be humans Googling for stuff that is LLM-generated; I don't know. This data set might be a complete fabrication, but for all the words I checked and selected, I also saw an increase on Google Trends.
So how did I select the words to check in the first place? First, I looked for the highest-frequency words. They were, as you would expect, things like "add", "commit", "patch", etc. Then I had an LLM generate a word list of words that it thought were engineering-related, and I excluded them entirely from the list. Then I also removed the most common words to begin with. In the end, I ended up with the list above, plus some other ones that are internal project names. For instance, habitat and absurd, as well as some other internal code names, were heavily over-represented, and I had to remove those. As you can see, not entirely scientific. But of the resulting list of words with a high divergence compared to wordfreq, they all also showed spikes on Google Trends.
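The pipeline described above - count words in the session logs, compare each word's share of the text to a baseline frequency, drop stopwords and engineering terms, and rank by the ratio - can be sketched in a few lines of Python. This is a guess at the shape of the analysis, not the actual code; the baseline numbers below are invented stand-ins for what the wordfreq library would provide:

```python
from collections import Counter

def inflated_words(text, baseline, stoplist, top_n=5):
    """Rank words whose share of `text` most exceeds a baseline frequency.

    `baseline` maps word -> expected relative frequency (the analysis in
    the post used wordfreq for this); `stoplist` holds excluded words,
    e.g. engineering terms like "commit" or "patch".
    """
    words = [w for w in text.lower().split() if w.isalpha()]
    counts = Counter(words)
    total = sum(counts.values())
    scores = {}
    for word, count in counts.items():
        if word in stoplist:
            continue
        expected = baseline.get(word, 1e-8)  # tiny floor for unseen words
        scores[word] = (count / total) / expected  # observed / expected
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Toy session log and made-up baseline frequencies, purely illustrative.
log = "add commit substrate patch substrate robust commit add robust substrate"
baseline = {"add": 0.01, "commit": 0.005, "patch": 0.004,
            "substrate": 0.00001, "robust": 0.0002}
print(inflated_words(log, baseline, stoplist={"add", "commit", "patch"}))
```

With the toy numbers, "substrate" comes out far more over-represented than "robust", which is exactly the kind of divergence the post then cross-checked against Google Trends.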
There might also be explanations other than LLM generation for what is going on, but I at least found it interesting that my coding session spikes also show up as spikes on Google Trends.
The Rise of LLM Slop
The choice of words is one thing; the way in which LLMs form sentences is another. It's not hard to spot LLM-generated text, but I'm increasingly worried that I'm starting to write like an LLM because I just read so much more LLM text. The first time I became aware of this was that I used the word "substrate" in a talk I gave earlier this year. I am not sure where I picked it up, but I really liked it for what I wanted to express and I did not want to use the word "foundation". Since then, however, I am reading this word everywhere. This, in itself, might be a case of the Baader–Meinhof phenomenon, but you can also see from the selection above that my coding agent loves substrate more than it should, and that Google Trends shows an increase.
We have all been exposed to LLM-generated text now, but I feel like this is getting worse recently. A lot of the tweet replies I get and some of the Hacker News comments I see read like they are LLM-generated, and that includes people I know are real humans. It's really messing with my brain because, on the one hand, I really want to tell people off for talking and writing like LLMs; on the other hand, maybe we all are increasingly actually writing and speaking like LLMs?
I was listening to a talk recording recently (which I intentionally will not link) where the speaker used the same sentence structure that is over-represented in LLM-generated text. Yes, the speaker might have used an LLM to help him generate the talk, but at the same time, the talk sounded natural. So either it was super well-rehearsed, or it was natural.
Engage and Farm
At least on Twitter, LinkedIn, and elsewhere, there is a huge desire among people to write content and be read. Shutting up is no longer an option and, as a result, people try to get reach and build their profile by engaging with anything that is popular or trending. In the same way that everybody has gazillions of Open Source projects all of a sudden, everybody has takes on everything.
My inbox is a disaster of companies sending me AI-generated nonsense and I now routinely see AI-generated blog posts (or at least ones that look like they are AI-generated) being discussed in earnest on Hacker News and elsewhere.
Genuine human discourse had already been an issue because of social media algorithms before, but now it has become incredibly toxic. As more and more people discover that they can use LLMs to optimize their following, they are entering an arms race with the algorithms and real genuine human signal is losing out quickly. There are entire companies now that just exist to automate sending LLM-generated shit and people evidently pay money for it.
Speed Should Kill
If we take into account the idea that the highest-quality content should win out, then the speed element would not matter. If a human-generated comment comes in 15 minutes after a clanker-generated one, but outperforms it by being better, then this whole LLM nonsense would show up less. But I think that LLM-generated noise actually performs really well. We see this plenty with Open Source now. Someone builds an interesting project, puts it on GitHub and within hours, there are "remixes" and "reimplementations" of that codebase. Not only that, many of those forks come with sloppy marketing websites, paid-for domains, and a whole story on socials about why this is the path to take.
I have complained before that Open Source is quickly deteriorating because people now see the opportunity to build products on top of useful Open Source projects, but the underlying mechanics are the same as why we see so much LLM slop. Someone has a formed opinion (hopefully) at lunch, and then has a clanker-made post 3 minutes later. It just does not take that much time to build it. For the tweets, I think it's worse because I suspect that some people have scripts running to mostly automate the engagement.
And surely, we should hate all of this. These low-effort posts, tweets, and Open Source projects should not make it anywhere. But they do! Whatever they play into, whether in the algorithms or with human engagement, they are not punished enough for how little effort goes into them.
Friction and Rate Limiting
That increases in speed and ease of access can turn into problems is a long-understood issue. ID cards are a very unpopular thing in the UK because the British are suspicious of misuse of a central database after what happened in Nazi Germany. Likewise the US has the Firearm Owners Protection Act from 1986, which also bans the US from creating a central database of gun owners. The gun-tracing methodologies that result from not having such a database look like something out of a Wes Anderson movie. We have known for a long time that certain things should not be easy, because of the misuse that happens.
We know it in engineering; we know it when it comes to governmental overreach. Now we are probably going to learn the same lesson in many more situations because LLMs make almost anything that involves human text much easier. This is hitting existing text-based systems quickly. Take, for instance, the EU complaints system, which is now buckling under the pressure of AI. Or take any AI-adjacent project's issue tracker. Pi is routinely getting AI-generated issue requests, sometimes even without the knowledge of the author.
Trust Erosion and Gaslighting
I know that's a lot of complaining for "I am getting too many emails, shitty Twitter mentions, and GitHub issues." I really think, though, that now that we know that it's happening, we have to change how we interact with people who are increasingly automating themselves. Not only do they produce a lot of shitty slop that we all have to sit through; they are also influencing the world in much more insidious ways, in that they are influencing our interactions with each other. The moment I start distrusting people I otherwise trust, because they have started picking up LLM phrasing, it erodes trust all over society.
You also can't completely ban people for bad behavior, because some of this increasingly happens accidentally. You sending Polsia spam to me? You're dead to me. You sending me an AI-generated issue request and following up with an apology five minutes later? Well, I guess mistakes happen. Yet, in many ways, what is going on and will continue to go on is unsettling.
I recently talked with my friend Ben who said he forced someone to call him to continue a conversation because he was no longer convinced he was talking to a human.
Not all of us have been exposed to the extreme cases of this yet, but I had a handful of interactions in which I questioned reality due to the behavior of the person on the other side. I struggle with this, and I consider myself to be pretty open to new technologies and AI in particular. But how will my children react to stuff like this? My mother? I have strong doubts that technology is going to solve this for us.
Suggestions for Change
The reason I don't think technology is going to solve this for us is that while it can hide some spam and label some generated text, it won't fix us humans. What is being damaged here are social interactions across the board: the assumption that when someone writes to you, there is a person on the other side who has put some care into the interaction. I would rather have someone ghost me or reject me than send me back some AI-generated slop.
Change has to start with awareness, and an unfortunate development is that LLMs don't just influence the text we read; they also influence the text we write, even when we don't use them. Given the resulting ambiguity, we need to become more aware of how easily we can turn into energy vampires when we use agents to back us up in interactions with others. Consider that every time someone reads text coming from you, they will increasingly have to make a judgment call about whether it was you, an LLM, or you and an LLM together that produced the interaction. Transparency in either direction, when there is ambiguity, can go a long way.
When someone sends us undeclared slop, we need to change how we engage with them. If we care about them, we should tell them. If we don't care about them, we should not give them visibility and not engage.
When it comes to creating platforms and interfaces where text can be submitted, we need to throw more wrenches in. The fact that it was cheap for you to produce does not make it cheap for someone else to receive, and we need to find more creative ways to increase the backpressure. GitHub, or whatever wants to replace it, will have a lot to improve here, some of which might go against its core KPIs. More engagement is increasingly the wrong thing to look at if you want a long-term healthy platform.
Whatever we can do to rate-limit social interactions is something we should try: more in-person meetings, more platforms where trust has to be earned, and maybe more acceptance that sometimes the right response is no response at all.
And as for AI assistance on this blog, I have had an AI transparency disclaimer for a while. In this particular blog post I used Pi as an agent to help me generate the dynamic visualization and to write the code to analyze and scrape Google Trends.
-
๐ Ampcode News GPT-5.5 In Deep rss
GPT-5.5 now powers Amp's deep mode. It is a better coding agent than GPT-5.4: more steerable, more interactive, and better at staying inside constraints.
More Agent-Shaped
GPT-5.5 is better at the actual agent loop: read enough code, make the change, verify it, explain what happened. Whereas with GPT-5.4, prompts often had to spell out the process.
With GPT-5.5 we found it's best to clearly describe the outcome and put the rules and repeatable steps into the guidance files and tools.
If the task is vague, it can still solve the wrong problem cleanly. Good prompts matter more, not less.
Reasoning Effort
With GPT-5.5 we lowered deep's default effort from high to medium (deep²). Do not assume higher reasoning is always better: in our eval, GPT-5.5 high cost more than medium and performed worse. xhigh (deep³) is for cases where maximum quality matters more than cost. As before, you can toggle the thinking effort directly in the CLI with Opt+D (Alt+D), cycling through low (deep), medium, and xhigh.
How To Use It
The most important guideline to follow: tell GPT-5.5 what success looks like.
A few patterns have worked well for us:
- Give it the outcome and the constraints. Example: โRefactor transcript caching into a separate module. Keep the public API unchanged. Perf logging should only run behind this env var. Cache growth should be capped. Run the focused tests and typecheck.โ
- Give it a way to prove the fix. Example: โThis CLI focus bug should be verified in the actual CLI, not just by inspection. Reproduce it interactively, check focus state, then run the focused test.โ
- Use it for planning when the shape of the fix is unclear. Example: โAnalyze this protocol deadlock. Is it an infrastructure bug, a protocol bug, or something the client must recover from? Propose 2โ3 options with tradeoffs and pseudo-code. Do not implement yet.โ
Update Amp to the latest version by running amp update and you're ready to go.
Model Card
We wrote up the full GPT-5.5 model card with evals, reasoning guidance, prompt changes, and caching/ZDR caveats.
-
- May 03, 2026
-
๐ r/york Why does The Shambles make such a big thing out of Harry Potter? rss
submitted by /u/eques_99
[link] [comments] -
๐ r/reverseengineering GitHub - 03DSmoothie/minecraft-cpp-versions: Minecraft recoded in C++ (multiple versions) rss
submitted by /u/03D_DEV
[link] [comments] -
๐ r/LocalLLaMA AMD Strix Halo refresh with 192gb! rss
Looks like the next Strix Halo, the Gorgon Halo 495 Max, will have more than 128GB! I already bought a Strix Halo mini a couple of months ago since the 2026 refresh rumors were not interesting. I was not planning on getting another until the bigger 2027 refresh and linking them together, but was planning to add an external GPU for running smaller dense models until then. CPU and GPU rumors pointed to smaller improvements, and I heard nothing about more memory. But idk, having 320GB of memory would allow running some of these newer huge MoE models... maybe I drop the external GPU thoughts for now. Of course these are rumors for now; need to wait. For those who have not bought one yet, a single 192GB machine would mean running all these recent 122b models at q8 with fullish context! submitted by /u/mindwip
[link] [comments] -
๐ r/LocalLLaMA One bash permission slipped... rss
How? It kept getting chained bash commands wrong, with wrong escapes. So it created many bad directories, and tried "fixing" its mistake. It offered to run a large bash command with rm -rf inside, and stupid me missed it. I'm glad I push everything often. But the disruption is massive. FAQ: No, I don't run this on my personal computer. It's an isolated proxmox VM for coding with LLMs.
submitted by /u/TheQuantumPhysicist
[link] [comments] -
๐ r/Leeds Wand and Tankard at St John's Centre rss
After walking past several times and being confused as to what the place actually is (thought it was for kids), today I approached the owner/manager. Wand and Tankard is an outside pub/beer garden that adjoins Merrion Gardens. Anyone can basically go, order drinks and chill out in the cabins or garden they have. Inside the St John's Centre is the Hole in Wand (Indoor golf). Felt a bit sorry for the guys today as they had 3 great buskers on, a seemingly good bar concept, yet was nearly empty.
submitted by /u/Puzzleheaded_Bunch44
[link] [comments] -
๐ r/Yorkshire Whitby, moody version rss
submitted by /u/SectorSensitive116
[link] [comments] -
๐ r/Yorkshire Now and then rss
submitted by /u/Still_Function_5428
[link] [comments] -
๐ r/reverseengineering Automated RASP Bypass with Frida + AI Agent | nutcracker & aipwn demo rss
submitted by /u/Aggravating_Lie_5779
[link] [comments] -
๐ r/york love this place! rss
submitted by /u/TrueNeighborhood7624
[link] [comments] -
๐ r/york Which pubs have darts boards in York? rss
I'm hoping to organise a darts pub crawl for a friend's birthday around York but I don't know many pubs that have boards (the White Horse and the Old Bank are the only two I've seen). Do you have any recommendations?
submitted by /u/ZealousidealRange269
[link] [comments] -
๐ r/Yorkshire Whitby Beach rss
submitted by /u/Glittering_Vast938
[link] [comments] -
๐ r/Yorkshire Whitby Harbour rss
submitted by /u/Glittering_Vast938
[link] [comments] -
๐ r/Yorkshire Grade II listed bid for 1960s Shipley Clock Tower rejected rss
submitted by /u/Kagedeah
[link] [comments] -
๐ r/Yorkshire Pen-y-ghent. rss
Taken the morning I did the Yorkshire 3 Peaks. Probably one of the best photos I've taken. submitted by /u/Pearls_of_Rizzdom
[link] [comments] -
๐ r/reverseengineering Please critique my reverse engineering ctf platform. It is meant for beginners but I would like input from serious reverse engineers. It is functionally done but I need criticism for further refinements, thank you! rss
submitted by /u/ComplaintDirect4335
[link] [comments] -
๐ r/wiesbaden Neighbour wants to sell his car. Who might want it, and who should one contact? rss
Hi,
my neighbours are moving back to America and are selling their German car. They don't know much about this sort of thing, and honestly neither do I, since I've never owned a car. But I'd like to help them.
It's a well-maintained Hyundai i10 with 30,000 km on the clock. It's 3 or 4 years old, has 49 kW (that used to be measured in PS, right?) and is to be sold for 10,000 euros in about 2 months. Where do you advertise that in Wiesbaden to find a local buyer? Can you simply hand it over to a good car dealership? What would be a good dealership?
submitted by /u/Individual-Handle676
[link] [comments] -
๐ r/Leeds Who would be responsible for these stairs on Whitehall Rd leading to the canal? Is it the council or the canal trust? rss
Earlier today a lady in front of me slipped and fell forwards on these stairs. Normally that wouldn't be an issue, other than the fact that these stairs are rusted asf with sharp jagged edges. She cut her hand pretty badly; I had to take her to my apartment around the corner to clean her hand up and then put her in an Uber to the hospital for what will definitely be a few stitches.
This area is really busy now, with all the new apartments and offices nearby, so it's surely about time they replaced this staircase.
It's covered in rust, most of the metal grates are broken and full of sharp jagged edges, and it's slippery asf when wet.
I was hoping to write to whoever owns it, along with a few pictures of the lady's hand; a threat of liability and compensation might also give them a bit of urgency. However, I can't work out whether it is the canal trust or the council that would be responsible for this. Any help appreciated. Thanks.
submitted by /u/MiserableSandwich36
[link] [comments] -
๐ r/Leeds Name of band at the corn exchange 02/05/2026 rss
Did anyone catch the name of this band that were playing outside the corn exchange yesterday?
They were killing it but the wind made their sign blow over so I have no idea who they are!
Thank you, /r/Leeds
submitted by /u/baldursbae
[link] [comments] -
๐ r/Yorkshire Sheffield's Reform Candidate Who Was Told R*cists Not Wanted Here Has A Dumb Idea: Scrap Clean Air! rss
submitted by /u/johnsmithoncemore
[link] [comments] -
๐ Register Spill Joy & Curiosity #84 rss
No big intro today. No time. I have to tweak some orbs, there's a big release coming.
-
Evan Phoenix: Agile in the Age of AI. There's so much in there and it's all really good. Highly recommended.
-
This is one of the most interesting analyses of What's Going On With Software Right Now that I've read in recent weeks: "To be a little less vague, I suspect that we're likely (not certain, but likely) to be entering into a period of unprecedented software degradation, and we're going to be seeing an increasing frequency of outages like this across many high profile products. But IMO the cause is actually not just the-one-thing-that-everyone-is-always-talking-about, it's a number of things that have all been bubbling away at just below critical levels for a long time.[…]" You know this joke about the fish and the water, right: old fish asks young fishes "morning! how's the water?" and the young fish are confused and ask "what's water?" It's easy (and probably not that wrong) to point at AI and declare it the cause of every change we see, but I think it's equally likely that only now that we're out of the ZIRP-era do we see what ZIRP has actually done to this industry.
-
Ghostty Is Leaving GitHub: "It's not a fun place for me to be anymore. I want to be there but it doesn't want me to be there. I want to get work done and it doesn't want me to get work done. I want to ship software and it doesn't want me to ship software. I want it to be better, but I also want to code. And I can't code with GitHub anymore. I'm sorry. After 18 years, I've got to go. I'd love to come back one day, but this will have to be predicated on real results and improvements, not words and promises." The times they are a-changing. Don't forget to read Mitchell's comment here. I don't have the time right now to spell out how much GitHub means to me, but I can safely say that without GitHub I wouldn't have the life I have today. And for many, many years I thought working at GitHub would be the best job in the world.
-
This chart made the rounds and kinda set the record straight: "I don't work on reliability & scaling at GitHub, but the people who do aren't bad at their jobs. They're dealing with unprecedented scale from agents. It's easy to shit on GitHub from the outside if you're not in charge of 30X-ing capacity within a few months. Have some grace."
-
I found Armin's commentary on the whole GitHub situation to be very good: Before Github. This, for example: "GitHub is currently losing some of what made it feel inevitable. Maybe that's just the life and death of large centralized platforms: they always disappoint eventually. Right now people are tired of the instability, the product churn, the Copilot AI noise, the unclear leadership, and the feeling that the platform is no longer primarily designed for the community that made it valuable. Obviously, GitHub also finds itself in the midst of the agentic coding revolution and that causes enormous pressure on the folks over there. But the site has no leadership! It's a miracle that things are going as well as they are." (Sidenote: I can't be the only one who's never used the word 'forge' before and now sees it everywhere as if there had been a big "this is the new word we're going to use now" memo going around.)
-
Mat Duggan on the GitHub he'd build if he were "rich like a man who owns a submarine he's never been inside. Rich like a man whose third wife has a skincare line. Tech-titan rich -- the kind of money that buys you a compound in Wyoming and the confidence to wear the same gray t-shirt to congressional testimony." Doesn't look like what I'd envisioned but some of the points are very interesting, especially this one: "My local copy of the repo should be a representation of the entire repo, not just the code. I should be able to approve a PR from the same VCS I use to check in the code. I should be able to go through my issues by looking through local files." It's kinda funny that over the last decade git and GitHub haven't really merged. It's always been repository here and rest over there.
-
Highly, highly, highly recommend you read this piece by Kevin Kelly on Our Uncertain Uncertainties: "In other words, we have a sustained, extended period of uncertainty. Not just a few years, but a decade or more. As AI continues to progress, rather than resolving our perplexity, it expands it. So for the next 10-15 years we have perpetual, continuous, severe uncertainty. This is a burdensome weight because people hate uncertainty more than bad news. […] what should we do about it? The most effective response to this multi-layered persistent uncertainty is not to seek impossible stability, but to cultivate radical adaptability and radical optionality. Give up on having a reliable prediction of what happens next. Instead cultivate multiple scenarios of what could happen, and endeavor with each of them to maximize your options. Goals should be considered as disposable hypotheses, constantly ready to be discarded and replaced by better-fitting concepts later on." As much as I don't like to say it, I think it's true. I think the last 30 years will look incredibly calm compared to the next 10. But hey, when the going gets tough, the tough get going, right? Or, as the Hunter S. Thompson quote that I had pinned to my teenage bedroom wall goes: when the going gets weird, the weird turn pro.
-
My friend Tomas Senart is looking for a founding engineer to work with him on Perfloop. I worked with Tomas for many years at Sourcegraph, he's a true hardcore programmer, incredibly high agency (probably came out of the womb with his sleeves rolled up), and has a great sense of humor. Also: I trust him blindly to order sushi for me whenever we go out. If you're into AI and systems programming and performance optimizations: talk to him!
-
Had you asked me, when I started this newsletter, whether I'd ever link to something in the National Catholic Register, I probably would've laughed and said "What? What is that? What's in there? Why would I link to it?" But now we're here and I think this is one of the best things I've read on AI and education, or actually: education in general, in a long, long time: Repairing the Ruins: Why AI Can't Replace Education. Listen to this: "Education worthy of the name has always understood this. Its end is not the delivery of content, however accurate. It is the formation of persons capable of judgment, attention and intellectual honesty. That formation requires a genuine encounter with difficulty -- the friction of a hard text, the resistance of a problem that does not yield quickly, the discomfort of revising what one believed. It requires embodiment as much as intellect: reading slowly, speaking in one's own voice, accepting the cost of standing behind one's words. A person does not become capable of truth by managing information alone. Wisdom is formed in contact with reality, not in its simulation." Amen.
-
Big oof: "Copy Fail is a straight-line logic flaw -- it needs neither. The same 732-byte Python script roots every Linux distribution shipped since 2017."
-
The West Forgot How to Make Things. Now It's Forgetting How to Code: "Five to ten years from now, we'll need senior engineers. People who understand systems end to end, who can debug distributed failures at 2 AM, who carry institutional knowledge that exists nowhere in the codebase. Those engineers don't exist yet because we're not creating them. The juniors who should be learning right now are either not being hired or developing what a DoD-funded workforce study calls "AI-mediated competence." They can prompt an AI. They can't tell you what the AI got wrong. It's Fogbank for code. When juniors skip debugging and skip the formative mistakes, they don't build the tacit expertise. And when my generation of engineers retires, that knowledge doesn't transfer to the AI."
-
This was delicious. Daniel Lemire "created something I call the SIMD Quad algorithm," which beats binary search thanks to parallelism. Essentially: divide your list into blocks of 16 elements, then divide the list of blocks into quarters, check independently which quarter must contain your target (checks the CPU can run in parallel), repeat until you end up with a single block, then check all 16 elements at once. Slick!
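For intuition, here is a scalar Python sketch of that quartering idea, reconstructed from the description above (no actual SIMD; `quad_search` and the exact block layout are my own naming and guesswork, not Lemire's code):

```python
def quad_search(sorted_list, target, block=16):
    # Scalar sketch of the "SIMD Quad" idea: instead of halving the range
    # (binary search), split the range of 16-element blocks into quarters
    # and pick the quarter whose first element brackets the target. A real
    # implementation does the boundary comparisons, and the final
    # 16-element scan, with SIMD compares instead of branches.
    lo, hi = 0, (len(sorted_list) + block - 1) // block  # block index range
    while hi - lo > 1:
        step = (hi - lo) // 4 or 1
        b1, b2, b3 = lo + step, lo + 2 * step, lo + 3 * step
        if b3 < hi and sorted_list[b3 * block] <= target:
            lo = b3
        elif b2 < hi and sorted_list[b2 * block] <= target:
            lo, hi = b2, min(b3, hi)
        elif sorted_list[b1 * block] <= target:
            lo, hi = b1, min(b2, hi)
        else:
            hi = b1
    # final block: the SIMD version checks all (up to) 16 elements at once
    start = lo * block
    for i in range(start, min(start + block, len(sorted_list))):
        if sorted_list[i] == target:
            return i
    return -1
```

The speedup in the real thing comes from the three boundary comparisons being independent (no data-dependent branch chain), not from doing fewer comparisons overall.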
-
Very interesting (but maybe a bit shallow) profile of Mistral in Forbes. This is brutal: "But Mistral has slipped ever further behind in leaderboards ranking AI performance. It's so bad that Mistral's best model would lose in a face-off against a version of Anthropic's Claude that was released nine months earlier, per one popular benchmark. Worse, it's also bested by a new crop of open-weight models from Chinese startup DeepSeek and tech giant Alibaba." But there is a But: "But Mensch bets that a smaller, cheaper model made in Europe is better suited for governments and global companies than an American closed-source LLM with far more horsepower. Plus, it's too risky for serious Western companies to depend on Chinese models, says Mistral investor Jeannette zu Furstenberg of venture fund General Catalyst. The strategy has worked to the tune of $200 million in revenue in 2025. And Mensch says Mistral is on track to start making around $80 million monthly by December." Very interesting. But (another one), as a European myself, I have to say that I can't stand it anymore when European tech companies pitch their product with what essentially boils down to: "at least we're not [US company]." Yeah, the product might be worse, yeah, it doesn't work as well as the other thing, but hey, at least we're not …, at least we don't store your data in the US, at least … As a colleague of mine once said about a similar-sounding marketing campaign by Opera, the browser company, twenty years ago, in which it essentially said "at least we don't track you": that's not what a winner would say.
-
3 constraints before I build anything. This was fascinating, because my first reaction was: yes, constraints #1 and #2 are right, but what does #3 even mean? But now, re-reading it, I think that even #2 can be argued. And, hey, #1 too, actually. It is interesting though that they all have some value and I'd definitely say it's three things to consider before building anything, but [turning around and pointing at the choir behind me: now everybody!] it depends.
-
Why fat tailed costs emerge at scale: "I find that analysis of AI business models consistently underestimates the impact of unit economics. When people say AI startups face margin squeeze, they point to external competitors or monopolistic GPU pricing as contributing factors. But it seems that the internal resource variance would still exert pressure, even if there was only one LLM provider and chips were abundant." We'll probably never get it, but an in-depth blog post by one of the inference providers or model houses on exactly this would be very interesting.
-
Hell yes: "I like art that feels like it was made by a free person. I like to see how a person chooses things. I like art before it gets noted and workshopped and homogenized. I like art that preserves the rough edges of the person. Polish can be taught, so it's less interesting to me than that which can't be. I like when I can sense how someone really talks, feels, and thinks. I mean consciously so, but also unconsciously so. Every choice communicates. Even the 'errors.' I embrace the errors." That's why I like to listen to live music a lot. As our admin on the Led Zeppelin bootleg forum in 2007 said: "They always bit off more than they could chew -- and then chewed it."
-
Very, very interesting: Inside macOS window internals: how SkyLight enables multi-cursor background agents.
-
Zed 1.0 is out! Congratulations!
-
CorridorKey: "When you film something against a green screen, the edges of your subject inevitably blend with the green background. This creates pixels that are a mix of your subject's color and the green screen's color. Traditional keyers struggle to untangle these colors, forcing you to spend hours building complex edge mattes or manually rotoscoping. [...] I built CorridorKey to solve this unmixing problem. You input a raw green screen frame, and the neural network completely separates the foreground object from the green screen. For every single pixel, even the highly transparent ones like motion blur or out-of-focus edges, the model predicts the true, un-multiplied straight color of the foreground element, alongside a clean, linear alpha channel. It doesn't just guess what is opaque and what is transparent; it actively reconstructs the color of the foreground object as if the green screen was never there." Crazy that this even works without a green screen.
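The "unmixing" the post describes is the standard compositing equation run in reverse; a tiny illustrative Python sketch (my own, not CorridorKey's code) shows why a predicted alpha makes the straight foreground color recoverable:

```python
def composite(F, G, alpha):
    # Forward model: an observed pixel C is a linear blend of the straight
    # (un-multiplied) foreground color F and the known screen color G.
    return tuple(alpha * f + (1 - alpha) * g for f, g in zip(F, G))

def unmix(C, G, alpha):
    # Inverse: once alpha is known (CorridorKey predicts it per pixel),
    # the foreground color follows directly. Recovering both F and alpha
    # from C alone is underdetermined -- hence the neural network.
    return tuple((c - (1 - alpha) * g) / alpha for c, g in zip(C, G))
```

Round-tripping a half-transparent pixel through `composite` and `unmix` recovers the original foreground color exactly; the hard part the network solves is estimating alpha (and G, when there is no clean screen) in the first place.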
-
So, Henrick Johansson, this Twitter European VC parody account that often hit a bit too close to home is… real?! No, that can't be, right? So my theory is: it started as a parody account, but then Comp AI took over the account, changed the avatar to this actor's image, and now uses it to run ads for compliance while keeping the parody going. Anyone know more?
-
Staring at walls to improve focus and productivity. I don't know, man. On one hand: whew, wow, wow. On the other: if it works? On the third: it's meditating
-
Beautiful: I just learned I only have months to live.
Yes, these lines are drawn by hand (well: mouse). Each one. Every time. Yes, that got you, didn't it?
-
-
๐ r/reverseengineering "AccountDumpling": Hunting Down the Google-Sent Phishing Wave Compromising 30,000+ Facebook Accounts rss
submitted by /u/RasheedaDeals
[link] [comments] -
๐ HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release ## New releases - [BinSync](https://github.com/binsync/binsync): 5.15.0 -
๐ r/LocalLLaMA Qwen3.6-27B vs Coder-Next rss
Burned about 20 hours of side-by-side compute on my two RTX PRO 6000 Blackwells trying to get a definitive answer on which of these two models is clearly better. As with many things in life, after many tokens and kWh, the answer was "it depends."
These models are actually crazy well matched in the aggregate -- scoring similarly overall across a wide range of tests and scenarios, hitting and missing on different things, failing and succeeding in different ways. Across the 4 cells I ran at N=10, Coder-Next shipped 25/40 and 27B-thinking 30/40 -- statistically tied with overlapping Wilson CIs. On the face of it, that kind of makes sense: 27B is a later-gen dense model that leans heavily on thinking, while Coder-Next has roughly 3x the parameters to work with but only activates 3B at a time. Depending on what you're trying to do, either could be the correct choice.
Kind of interestingly, 27B with thinking disabled was the most consistent shipper of work -- 95.8% across the full 12-cell grid at N=10 (Wilson 95% [90.5%, 98.2%]). Same model weights as 27B-thinking, just --no-think. A side-by-side hand-graded read on the both-ship cells found substantive output is preserved; the difference is verbosity of reasoning prose, not output decisions. The "thinking-trace as loop substrate" mechanism turned out to be real -- the documented word-trim loop on doc-synthesis halves with no-think (4/10 → 2/10). 3.6-35B-A3B fell flat on its face so often that it didn't seem worth carrying on comparing it against the other two; the folder is kept as failure-mode evidence.
I tossed a lot of crazy stuff at these models over the course of a few days and kept my two GPUs very warm and very busy in the process. I jumped into this mainly because, for lack of a better term, I felt like the traditional benchmarks were being gamed. So I wanted to just chuck these guys in the dirt and abuse them and see what happened: give them tasks they could win, tasks where they were essentially destined to fail, and study how they won and failed and what that looked like.
The most lopsided single result: Coder-Next 0/10 on a live market-research task where 27B was 8/10 (Wilson 95% [0%, 27.8%] for the Coder-Next collapse, reproducible). The inverse: Coder-Next ships 10/10 on bounded business-memo and doc-synthesis tasks at 60-100x lower cost-per-shipped-run than either 27B variant. Same models, very different shapes of "good at."
There's a ton of data, and I tried to make it easy to sort through; right now this is all pretty much just about thoroughly comparing these two models. Either way, I'm sleepy now. Let me know your thoughts or if you have any questions; the repo is below. I'll talk more about this when I'm not looking to pass out lol. https://github.com/Light-Heart-Labs/MMBT-Messy-Model-Bench-Tests
submitted by /u/Signal_Ad657
[link] [comments]
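The Wilson intervals quoted above are straightforward to reproduce. Here is a small Python helper (the standard Wilson score formula; the 115/120 count below is my back-calculation from "95.8% across the full 12-cell grid at N=10", not a number from the post):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    # Wilson score interval for a binomial proportion: unlike the plain
    # normal approximation, it stays inside [0, 1] and behaves sensibly
    # at small n and extreme rates (including 0/10).
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half
```

`wilson_ci(115, 120)` comes out near (0.906, 0.982), matching the quoted [90.5%, 98.2%], and `wilson_ci(0, 10)` gives (0.0, 0.278), matching the [0%, 27.8%] collapse interval.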
-
๐ matklad Minimal Viable Zig Error Contexts rss
Minimal Viable Zig Error Contexts
May 3, 2026
```zig
fn process_file(io: Io, path: []const u8) !void {
    errdefer log.err("path={s}", .{path});
    const fd = try Io.Dir.cwd().openFile(io, path, .{});
    defer fd.close(io);
    // ...
}
```

Out of the box, Zig provides minimal and sufficient facilities for error handling -- strongly-typed error codes. Error reporting is left to the user. The idiomatic solution is to pass a `Diagnostics` out parameter ("sink") to materialize human-readable strings as needed.

The Diagnostics pattern works well for "production" code, but for more script-y code it adds too much friction relative to the default option of a plain `try fallible()`, which of course gives a less than ideal message on failure:

```
λ zig build
error: FileNotFound
~/.cache/zig/p/../lib/std/Io/Threaded.zig:4866:35: 0x1044126c7 in dirOpenFilePosix (fail)
    .NOENT => return error.FileNotFound,
              ^
~/.cache/zig/p/../lib/std/Io/Dir.zig:578:5: 0x104347d8b in openFile (fail)
    return io.vtable.dirOpenFile(io.userdata, dir, sub_path, options);
    ^
~/fail/main.zig:10:16: 0x10443da5f in f (fail)
    const fd = try Io.Dir.cwd().openFile(io, path, .{});
               ^
~/fail/main.zig:6:5: 0x10443db47 in main (fail)
    try process_file(io, "data.txt");
    ^
```

The error trace is helpful, but knowing which file is the problem is even more so.

The first attempt at finding a middle ground between the fully-fledged diagnostics sink pattern and a plain `try` is something like this:

```zig
const fd = dir.openFile(io, path, .{}) catch |err| {
    log.err("failed to open file '{s}': {t}", .{ path, err });
    return err;
};
```

Unsatisfactory. The friction is high: you need to come up with a reasonably-sounding error message, the "happy path" of the code is obscured, and you need to repeat this for every fallible operation.

A worse-is-better version of the above code is:

```zig
errdefer log.err("path={s}", .{path});
const fd = try dir.openFile(io, path, .{});
```

That is, just log error context as `key=value` pairs, guarded by `errdefer`. The result is not pretty, but passable:

```
λ zig build
error: path=./data.txt
error: FileNotFound
~/.cache/zig/p/../lib/std/Io/Threaded.zig:4866:35: 0x1044126c7 in dirOpenFilePosix (fail)
    .NOENT => return error.FileNotFound,
              ^
~/.cache/zig/p/../lib/std/Io/Dir.zig:578:5: 0x104347d8b in openFile (fail)
    return io.vtable.dirOpenFile(io.userdata, dir, sub_path, options);
    ^
~/fail/main.zig:10:16: 0x10443da5f in f (fail)
    const fd = try Io.Dir.cwd().openFile(io, path, .{});
               ^
~/fail/main.zig:6:5: 0x10443db47 in main (fail)
    try process_file(io, "data.txt");
    ^
```

The friction is reduced a lot:
- No need to come up with any error messages beyond existing variable names.
- No need to change any of the `try`s.
- The context is set per-block. If a function does several fallible operations on a file, the path needs to be specified only once.
- The context is "telescopic": every function in the call-stack can add its own context.
There's one huge drawback though -- the error message is logged even if the error is subsequently handled. This is especially important in Zig 0.16, where cancelation (serendipitous success) is a possible error for any IO-ing operation, and one which is intended to be handled rather than reported.
Generalizing:
- Happy path adds context to all operations in-progress.
- Errors materialize current context.
This does feel like a better error management strategy than decorating errors individually, when they happen. I wonder which language features facilitate this style?
This article, https://goldstein.lol/posts/error-progress/, rather convincingly argues that the answer might be "none"?
-
- May 02, 2026
-
๐ IDA Plugin Updates IDA Plugin Updates on 2026-05-02 rss
IDA Plugin Updates on 2026-05-02
New Releases:
Activity:
- binsync
- Flare-On
- c076e8a8: Add Copilot Ch9 solution
- python-elpida_core.py
- 571d10f5: [HERMES-ROUTED] Phase 3 routing artifact 2026-05-02T23:57Z
- b2c8810a: [HERMES-ROUTED] Phase 3 routing artifact 2026-05-02T23:36Z
- 61258e7c: [HERMES-ROUTED] Phase 3 routing artifact 2026-05-02T23:19Z
- 5f377075: [HERMES-ROUTED] Phase 3 routing artifact 2026-05-02T22:57Z
- 39919049: [HERMES-ROUTED] Phase 3 routing artifact 2026-05-02T22:36Z
- d7ebf47a: [HERMES-ROUTED] Phase 3 routing artifact 2026-05-02T22:19Z
- e20890db: [HERMES-ROUTED] Phase 3 routing artifact 2026-05-02T21:58Z
- 81067ca3: [HERMES-ROUTED] Phase 3 routing artifact 2026-05-02T21:39Z
- 00fa11cc: [HERMES-ROUTED] Phase 3 routing artifact 2026-05-02T21:21Z
- 2a31da83: [HERMES-ROUTED] Phase 3 routing artifact 2026-05-02T20:59Z
- 72ab2d30: [HERMES-ROUTED] Phase 3 routing artifact 2026-05-02T20:39Z
-
๐ r/LocalLLaMA I made a visualizer for Hugging Face models rss
I built hfviewer.com, a small tool for visually exploring Hugging Face model architectures. You can paste a Hugging Face URL and get an interactive visualization of the architecture, which can make it easier to understand how different models are structured and compare them at a glance.
Here is the recent Qwen3.6-27B model as an example: https://hfviewer.com/Qwen/Qwen3.6-27B
And here is a side-by-side view of the Gemma 4 family: https://hfviewer.com/family/gemma-4
Feel free to try it out and give me feedback on how it can be improved! :)
submitted by /u/Course_Latter
[link] [comments]
-
๐ r/Yorkshire Goathland is one heck of a beauty village.... rss
submitted by /u/leodis95
[link] [comments]
-
๐ r/Yorkshire Seabirds at Bempton Cliffs rss
submitted by /u/DentistKitchen
[link] [comments] -
๐ backnotprop/plannotator v0.19.7 release
Follow @plannotator on X for updates
Missed recent releases? Release | Highlights
---|---
v0.19.6 | Non-blocking Pi browser sessions, agent picker dropdown for OpenCode, annotate-last file resolution fix
v0.19.5 | All-files diff view, clickable code file paths, server-side hide whitespace, non-ASCII path support
v0.19.4 | All-files diff type, code file viewer, hide whitespace, quick-settings popover
v0.19.3 | Configurable feedback messages, hide merged PRs in stacked PR selector
v0.19.2 | Stacked PR review, source line numbers in feedback, diff type dialog re-show, ghost dot removal, docs cleanup
v0.19.1 | Hook-native annotation, custom base branch, OpenCode workflow modes, quieter plan diffs, anchor navigation
v0.19.0 | Code Tour agent, GitHub-flavored Markdown, copy table as Markdown/CSV, flexible Pi planning mode, session-log ancestor-PID walk
v0.18.0 | Annotate focus & wide modes, OpenCode origin detection, word-level inline plan diff, Markdown content negotiation, color swatches
v0.17.10 | HTML and URL annotation, loopback binding by default, Safari scroll fix, triple-click fix, release pipeline smoke tests
v0.17.9 | Hotfix: pin Bun to 1.3.11 for macOS binary codesign regression
v0.17.8 | Configurable default diff type, close button for sessions, annotate data loss fix, markdown rendering polish
v0.17.7 | Fix "fetch did not return a Response" error in OpenCode web/serve modes
What's New in v0.19.7
v0.19.7 brings Codex plan review to Plannotator. Four PRs, two from first-time contributors, extend full plan review support to OpenAI's Codex CLI and add Codex-callable skills for review, annotate, and annotate-last. Two UX improvements for the plan and code review editors round out the release.
Codex Plan Review via Stop Hook
Plannotator now intercepts Codex plan submissions through Codex's `Stop` hook system. When Codex proposes a plan, the hook extracts the plan from the agent's rollout transcript, opens Plannotator in the browser, and lets you review, annotate, and approve or deny exactly as you would with Claude Code. Denying sends structured feedback back to Codex with continuation instructions, so the agent revises and resubmits.
The implementation parses Codex's output format to find the latest plan content, handles both fresh plans and revised resubmissions, and supports the full Plannotator feature set: version history, plan diff, archive, and annotation. All three install scripts (`install.sh`, `install.ps1`, `install.cmd`) now configure Codex hooks automatically during installation when a Codex config directory is detected.
The release pipeline also gained smoke tests that exercise the compiled binary's server startup across plan review, code review, and annotate subcommands, catching integration failures before artifacts are published.
- #577 by @ivanov17andrey, closing #497 (Codex hook support, requested by @myohei) and #105 (Codex support, requested by @muskio1)
Codex Skills for Review, Annotate, and Last
Codex's app interface doesn't accept `!` shell commands in chat, which made invoking Plannotator awkward. This PR adds three OpenAI-compatible skill definitions under `apps/skills/` that make Plannotator callable via `$plannotator-review`, `$plannotator-annotate`, and `$plannotator-last` directly from the Codex app.
Each skill instructs the agent to run the Plannotator command, wait for the browser session to complete, and then act on the returned feedback without requiring an extra follow-up message from the user. This closes a workflow gap where Codex would receive annotation feedback but sit idle until explicitly prompted to continue.
- #644 by @leoreisdias
Additional Changes
- Auto-close sidebar when TOC is empty. Documents with no level 1-3 headings (common in annotate and annotate-last sessions) no longer force-open the Table of Contents sidebar. The collapsed sidebar strip remains accessible for manual opening, and the preference resets when the document changes. Archive and annotate-folder modes are unaffected. (#651 by @backnotprop)
- Right-click to copy path or filename in file tree. The code review file tree now has a context menu on each file row with "Copy path" (repo-relative), "Copy filename", and "Copy full path" options. Full path resolves against the active worktree or agent CWD and is hidden in PR-review mode where files aren't on local disk. (#650 by @backnotprop)
Install / Update
macOS / Linux:

```
curl -fsSL https://plannotator.ai/install.sh | bash
```

Windows:

```
irm https://plannotator.ai/install.ps1 | iex
```

Claude Code Plugin: Run `/plugin` in Claude Code, find plannotator, and click "Update now".

OpenCode: Clear cache and restart:

```
rm -rf ~/.bun/install/cache/@plannotator
```

Then in `opencode.json`:

```
{ "plugin": ["@plannotator/opencode@latest"] }
```

Pi: Install or update the extension:

```
pi install npm:@plannotator/pi-extension
```
What's Changed
- feat: add Codex Stop-hook plan review by @ivanov17andrey in #577
- feat: add Codex Plannotator skills by @leoreisdias in #644
- feat(review): right-click file tree row to copy path or filename by @backnotprop in #650
- feat(editor): auto-close left sidebar when TOC is empty by @backnotprop in #651
New Contributors
- @ivanov17andrey made their first contribution in #577
- @leoreisdias made their first contribution in #644
Contributors
@ivanov17andrey authored the Codex Stop-hook plan review integration (#577), bringing full plan review support to a new agent platform. The PR included Codex session parsing, install script updates across all three platforms, CI smoke tests, a reproducible E2E test harness, and documentation updates. First contribution to the project.
@leoreisdias added Codex skill definitions (#644) that bridge the gap between Codex app's UI and Plannotator's shell-based invocation. Also a first contribution.
Community members who requested Codex support:
Full Changelog: v0.19.6...v0.19.7
-
๐ r/Leeds Charity scam around city centre โEducate and empower โ rss
I don't know if you guys have noticed, but every few days there's a group of guys in blue jackets with "Educate and Empower" written on them who stop people asking for donations for special-needs kids. It sounds super fishy to me: the council guy has stopped them often as well, they don't have any proof to show where the money we give them is going, and the website they show (https://educateempower.net) is really basic and just feels like a bunch of lies. Have you guys encountered them?
Edit: people from other cities DMed me and said it's happening there as well, so I'm tagging the relevant cities.
submitted by /u/Historical_Ad9327
[link] [comments] -
๐ r/Yorkshire Scarborough Appreciation Post rss
submitted by /u/lovebun2222
[link] [comments] -
๐ r/reverseengineering How to build .NET obfuscator - Part II rss
submitted by /u/kant2002
[link] [comments] -
๐ r/Leeds Peregrine Falcon babies are here! rss
I'm not sure if everybody in Leeds already knows this, but peregrine falcons have been nesting at the University of Leeds Parkinson Building on and off since 2018, and there's a live webcam feed on the University website!
A few days ago, I saw she was sitting on about 4 eggs, and yesterday I saw babies!!
I'd encourage everyone to have a look; it's such a wholesome part of Leeds for me ♥️
Hope links are allowed! https://sustainability.leeds.ac.uk/our-work/biodiversity/university-of-leeds-peregrines/peregrine-camera-2/
submitted by /u/Kazekageshinobigaara
[link] [comments] -
๐ sacha chua :: living an awesome life May 14: Sacha, Prot, and Philip Kaludercic Talk Emacs: Newcomer Experience rss
Philip Kaludercic wanted to continue the conversation from YE24: Sacha and Prot Talk Emacs - Newbies/Starter Kits. He's spent a lot of time thinking about this as one of the main contributors to newcomers-presets, so there'll probably be much to cover!
(America/Toronto -0400) = Thu May 14 1030H EDT / 0930H CDT / 0830H MDT / 0730H PDT / 1430H UTC / 1630H CEST / 1730H EEST / 2000H IST / 2230H +08 / 2330H JST
We'll probably talk about:
- Emacs 31 or Emacs 32 directions towards improving the newcomer experience
- How the newcomers-presets package fits into the bigger picture
- Documentation and guides
- How to get more feedback from newbies (virtual focus group? mailing list? office hours?)
- Informal community resources
- Other things we can do to help
Related links:
- A proposal for a "beginners" (user-option) theme - Philip Kaludercic
- Re: some file-related options to consider for newcomers-presets - Philip Kaludercic
- A newcomer's feedback on newcomer presets - Abdulnafe Toulaimat
You can e-mail me at sacha@sachachua.com.
-
๐ r/LocalLLaMA Bruh rss
Do reporting bots even do anything?
submitted by /u/Icy_Butterscotch6661
[link] [comments]
-
๐ r/Leeds Feeling lonely in Leeds rss
Hi guys, I'm 23M and feeling a bit lonely in Leeds this weekend. I don't really know anyone my own age here and with it being a long bank holiday, I'm starting to feel it a bit.
Is there anything going on or anything you'd recommend for someone in my position so I'm not just sat at home? Appreciate any suggestions, thank you.
submitted by /u/Solid_Antelope902
[link] [comments] -
๐ sacha chua :: living an awesome life June 4: Emacs Chat with Ben Zanin (@gnomon@mastodon.social) rss
On June 4, I'll chat with Ben Zanin about Emacs and life.
- Ben Zanin (@gnomon@mastodon.social) - Mastodon: Robertson screwdriver owner, believer in the value of personal-scale computing and skeptic of the value of computing scales any larger than that
- ~gnomon's git repositories
(America/Toronto -0400) = Thu Jun 4 1030H EDT / 0930H CDT / 0830H MDT / 0730H PDT / 1430H UTC / 1630H CEST / 1730H EEST / 2000H IST / 2230H +08 / 2330H JST
This session will be recorded, and I'll update this blog post with notes.
You can add the iCal for upcoming Emacs Chat episodes to your calendar. https://sachachua.com/topic/upcoming-emacs-chats.ics
Find more Emacs Chats or join the fun: https://sachachua.com/emacs-chat
You can e-mail me at sacha@sachachua.com.
-
๐ r/wiesbaden Kaiser Friedrich Therme rss
Has anyone visited the Kaiser Friedrich Therme? Can you tell me what it's like?
Should I expect everyone to be completely naked, without wearing a towel the whole time, the way it is at the Friedrichsbad, or is it different?
submitted by /u/Neither-Garage-876
[link] [comments] -
๐ r/reverseengineering libghidra - SDK for automating Ghidra from Python, Rust, and C++ rss
submitted by /u/e80000000058
[link] [comments] -
๐ r/Yorkshire Richmond Mayfest off to a sunny start rss
submitted by /u/Still_Function_5428
[link] [comments]
-
๐ r/reverseengineering Release: Open-source CAN bus reverse engineering suite tailored for offline ML signal decoding, MitM injection, and UDS analysis. rss
submitted by /u/Repulsive_Factor5654
[link] [comments] -
๐ badlogic/pi-mono v0.72.1 release
No content.
-
๐ r/LocalLLaMA We are finally there: Qwen3.6-27B + agentic search; 95.7% SimpleQA on a single 3090, fully local rss
LDR maintainer here. Thanks to the strong support of the r/LocalLLaMA community, LDR has come very far. I haven't reported in a while because I thought I was not ready for another prominent post in one of the leading outlets of local LLM research.
But I think the LDR community is finally there again, and it is finally time to report.
Setup
- RTX 3090, 24GB
- Ollama backend (qwen3.6:27b)
- LDR's `langgraph_agent` strategy → LangChain `create_agent()` with tool-calling, parallel subtopic decomposition, up to 50 iterations
- LLM grader: qwen3.6:27b self-graded (I have used Opus to review examples and it generally only underestimates accuracy)
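The self-grading setup above can be pictured with a rough sketch (my own illustration, not LDR's actual code; `askModel` is a hypothetical stand-in for whatever client talks to the local qwen3.6:27b model):

```javascript
// Sketch of an LLM-as-grader step (illustration only, not LDR's code).
// askModel(prompt) is a hypothetical async call to the local model.
function buildGraderPrompt(question, goldAnswer, modelAnswer) {
  return [
    "You are grading a question-answering system.",
    `Question: ${question}`,
    `Gold answer: ${goldAnswer}`,
    `System answer: ${modelAnswer}`,
    'Reply with exactly "CORRECT" or "INCORRECT".',
  ].join("\n");
}

// Tolerant parse of the grader's reply: accept any reply that starts
// with "correct" (case-insensitive), since small models add commentary.
function parseGrade(reply) {
  return /^\s*correct\b/i.test(reply);
}

async function gradeAnswer(askModel, question, goldAnswer, modelAnswer) {
  const reply = await askModel(buildGraderPrompt(question, goldAnswer, modelAnswer));
  return parseGrade(reply);
}
```

Reviewing a sample of grades with a stronger model (as the post describes doing with Opus) is a cheap sanity check on a self-graded pipeline like this.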
Benchmarks (fully local LLM with web search)
Model | SimpleQA | xbench-DeepSearch
---|---|---
Qwen3.6-27B | 95.7% (287/300) | 77.0% (77/100)
Qwen3.5-9B | 91.2% (182/200) | 59.0% (59/100)
gpt-oss-20B | 85.4% (295/346) | n/a
The sample size is small and the benchmarks were not rerun multiple times, but you can see from the other rows that this is unlikely to be just chance. Full leaderboard: https://huggingface.co/datasets/local-deep-research/ldr-benchmarks
Important framing: these are agent + search scores, not closed-book.
However, also note that these are similar benchmark results to Perplexity Deep Research (93.9%), Tavily (93.3%), etc. [Tavily forces the LLM to answer only from retrieved docs (a pure retrieval test). Perplexity Deep Research is an end-to-end agent and discloses no grader or sample size.]
Even if our results were only 90%, it would already be a great success.
Also, I can confirm from using it daily that these results feel consistent with how it performs on the random queries I ask every day.
Caveats:
- SimpleQA contamination risk on newer base models is real
- LLM-judge noise + Sampling error
- xbench-DeepSearch is in Chinese, so it's an advantage for the Chinese Qwen models
- No BrowseComp / GAIA numbers yet. But I also don't believe we are good at those benchmarks yet; I will have to run some benchmarks to verify the current state
The thing that surprised me:
Results seem to track tool-calling quality more than raw size for local deep research. The `langgraph_agent` strategy hammers the model with multi-iteration tool calls, parallel subagent decomposition, and structured output, exactly the axis where the newer Qwen generations have improved most. Hypothesis only; if anyone wants to design an ablation we'd love the data.
Some cool LDR features that I want to additionally highlight:
- Journal Quality System (shipped v1.6.0) - academic source grading using OpenAlex, DOAJ. I haven't seen this anywhere else in the open-source deep-research space.
- Per-user SQLCipher AES-256 DB (PBKDF2-HMAC-SHA512, 256k iterations): admins can't read your data at rest. No password recovery; we don't hold the keys.
- Zero telemetry: no analytics, no tracking.
- Cosign-signed Docker images with SLSA provenance + SBOMs.
- MIT licensed; everything is open source.
Repo: https://github.com/LearningCircuit/local-deep-research
Happy to share strategy configs, help reproduce the Qwen runs
Thanks to all the academic and other open source foundational work that made this repo possible.
submitted by /u/ComplexIt
-
๐ tomasz-tomczyk/crit v0.10.4 release
What's Changed
Resizable sidebars
The file-tree panel and the comments panel both have drag handles on their inner edge. Widths persist across runs (consolidated into a single `crit-settings` cookie alongside the other UI prefs).
- feat: resizable file-tree and comments-panel sidebars by @tomasz-tomczyk in #422 (thanks @hbogaeus for suggesting!)
General
- feat: print "Next round" command on review exit + restructure agent prompts by @tomasz-tomczyk in #421
- feat: consolidate settings cookies, restore update dismiss by @tomasz-tomczyk in #418
- docs: add SECURITY.md by @tomasz-tomczyk in #420
- fix: confirm before discarding non-empty comment draft on Escape by @tomasz-tomczyk in #415
- fix: prevent review comment form re-opening pre-populated after submit by @tomasz-tomczyk in #419
- fix: skip stack autodetect in file mode; remove CRIT_NO_AUTODETECT by @tomasz-tomczyk in #423
- fix: route crit comment --json bulk to alt review file by reply ID by @tomasz-tomczyk in #424
Internal refactors
- chore(deps-dev): bump eslint from 10.2.1 to 10.3.0 by @dependabot in #416
- chore(deps-dev): bump stylelint from 17.9.0 to 17.9.1 by @dependabot in #417
Full Changelog: v0.10.3...v0.10.4
-
๐ r/york What kind of solicitor do I need and can you recommend one. rss
I want to leave my partner in the house I own and buy another one. I'm thinking of putting my house in a trust for my children. I don't know who to consult.
submitted by /u/ExtensionPrice3535
-
๐ r/york Lgbtq football team- rss
I'm moving to York soon and I'm thinking of setting up an LGBTQ football team/league.
Start off small and just try to get enough people to have a kick-about once a week.
If we can grow it, then make smaller 7-a-side teams and get a league going? Would anyone be interested if I get the ball rolling?
submitted by /u/Total_Bed_3882
-
๐ r/LocalLLaMA A Dark-Money Campaign Is Paying Influencers to Frame Chinese AI as a Threat rss
Build American AI, a nonprofit linked to a super PAC bankrolled by executives at OpenAI and Andreessen Horowitz, is funding a campaign to spread pro-AI messaging and stoke fears about China.
So local LLMs are important... always! We need to support those giving us open source models and weights. Last month, half of the open models came from there. submitted by /u/pmttyji
-
๐ r/Leeds Comic & Gaming stores in Leeds centre. rss
I used to be a regular in Leeds but aside from a couple of quick stops I haven't had a good wander in maybe fifteen years so whilst I can still find my way around I have no idea what's there.
My sister asked me late last night if I could take my nephew into town for Free Comic Book Day as she's working so I was wondering if anybody could give me a list of comic and gaming stores in the city centre.
Thanks!
submitted by /u/RetroSquadDX3
-
๐ backnotprop/plannotator v0.19.6 release
Follow @plannotator on X for updates
Missed recent releases? Release | Highlights
---|---
v0.19.5 | All-files diff view, clickable code file paths, server-side hide whitespace, non-ASCII path support
v0.19.4 | All-files diff type, code file viewer, hide whitespace, quick-settings popover
v0.19.3 | Configurable feedback messages, hide merged PRs in stacked PR selector
v0.19.2 | Stacked PR review, source line numbers in feedback, diff type dialog re-show, ghost dot removal, docs cleanup
v0.19.1 | Hook-native annotation, custom base branch, OpenCode workflow modes, quieter plan diffs, anchor navigation
v0.19.0 | Code Tour agent, GitHub-flavored Markdown, copy table as Markdown/CSV, flexible Pi planning mode, session-log ancestor-PID walk
v0.18.0 | Annotate focus & wide modes, OpenCode origin detection, word-level inline plan diff, Markdown content negotiation, color swatches
v0.17.10 | HTML and URL annotation, loopback binding by default, Safari scroll fix, triple-click fix, release pipeline smoke tests
v0.17.9 | Hotfix: pin Bun to 1.3.11 for macOS binary codesign regression
v0.17.8 | Configurable default diff type, close button for sessions, annotate data loss fix, markdown rendering polish
v0.17.7 | Fix "fetch did not return a Response" error in OpenCode web/serve modes
v0.17.6 | Bun.serve error handlers for diagnostic 500 responses, install.cmd cache fix
What's New in v0.19.6
v0.19.6 ships 3 PRs closing 4 long-standing issues. Two changes stand out: Pi browser sessions are now fully async, so review and annotation commands no longer lock the chat while the UI is open; and OpenCode gets a visible agent picker on the Approve button, replacing the buried Settings-only agent switch. A fix for annotate-last mode rounds out the release.
Non-Blocking Browser Sessions for Pi
Every Plannotator command in Pi that opens a browser (plan review, code review, annotation, and annotate-last) previously blocked the chat session until the browser tab was closed. The chat input was locked the entire time. If the review server crashed, or the user simply forgot to close the tab, the Pi session hung indefinitely with no way to recover except restarting.
This release rewrites the Pi browser session lifecycle from the ground up. All four browser-based flows now return control to the chat immediately after opening the browser. Users can continue working, ask follow-up questions, or start entirely new tasks while the review UI stays open in a separate tab. When the user submits feedback, approves a plan, or closes the browser, the decision is forwarded back into the Pi session automatically.
The implementation introduces a `BrowserDecisionSession` abstraction that wraps the server lifecycle, browser launch, and decision forwarding into a single async pattern. Each session tracks its own state and cleanup, so multiple review sessions can coexist without interfering. A `stop()` mechanism ensures sessions are cleaned up even if the browser tab is abandoned, preventing zombie server processes.
This also required extracting assistant message parsing into a dedicated module (`assistant-message.ts`) and building a session state tracker (`current-pi-session.ts`) to correctly scope tool availability and saved state across the new async boundaries.
- #645 by @backnotprop
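The description suggests a shape roughly like the following (a hypothetical sketch inferred from the release notes, not Plannotator's actual code; the injected `startServer` and `openBrowser` callbacks are stand-ins):

```javascript
// Hypothetical sketch of a non-blocking "browser decision session".
// startServer(onDecision) starts a review server and returns { url, close };
// openBrowser(url) opens the review UI. Both are supplied by the caller.
class BrowserDecisionSession {
  constructor({ startServer, openBrowser }) {
    this.startServer = startServer;
    this.openBrowser = openBrowser;
    this.state = "idle";
    // The eventual user decision; the chat loop can await this later
    // instead of blocking while the tab is open.
    this.decision = new Promise((resolve) => { this._resolve = resolve; });
  }

  // Start the server, open the browser, and return immediately.
  async open() {
    this.server = await this.startServer((d) => this._finish(d));
    this.state = "open";
    await this.openBrowser(this.server.url);
  }

  _finish(decision) {
    if (this.state !== "open") return; // ignore late/duplicate submissions
    this.state = "done";
    this._resolve(decision);
    this.stop();
  }

  // Cleanup even for abandoned tabs, so no zombie server process survives.
  stop() {
    if (this.server) this.server.close();
    if (this.state === "open") {
      this.state = "stopped";
      this._resolve(null);
    }
  }
}
```

Because each session owns its own promise, server handle, and state, several review sessions can be open at once without stepping on each other, which matches the behavior the release notes describe.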
Agent Picker Dropdown for OpenCode
OpenCode users can now select which agent to switch to directly from a split Approve button, without opening Settings. The button label shows exactly what will happen: "Approve → Build", "Approve → Orchestrator", or just "Approve" when switching is disabled. Click the chevron to see all available agents from the OpenCode API, plus a "No switch" option that prevents any agent mode change on approval.
Before this change, agent switching was controlled by a setting buried in the Plannotator Settings panel. Most users didn't know it existed. The default was "Build", which meant every plan approval silently switched the CLI to Build mode. Users running Orchestrator workflows, Sisyphus mode, or custom agent topologies would find their agent unexpectedly changed after approving a plan.
The "No switch" option also changes the approval prompt itself. When selected, the "approved with notes" message no longer includes "Proceed with implementation," letting the current agent decide its own next step. This is particularly useful for planning-only workflows where approval shouldn't trigger immediate implementation.
Non-OpenCode origins (Claude Code, Gemini CLI, Copilot CLI) are completely unaffected and continue using the standard Approve button.
- #648, closing #575 (agent mode switching unexpectedly, reported by @luyanfeng), #114 (wrong agent on approval, reported by @xitex), #106 (Sisyphus mode compatibility, requested by @tensam), and #159 (approve without triggering implementation, requested by @gustavocaiano)
Fix Code File and Linked Doc Resolution in Annotate-Last Mode
When annotating the last assistant message (`plannotator last` or `/plannotator-annotate last`), clicking a code file path or linked document returned a 404. The annotate-last mode sets `filePath` to the literal string `"last-message"`, which was passed through as a filesystem base directory for resolving relative paths. The server tried to resolve files against `"last-message"` as a directory, which doesn't exist.
The fix guards both the code file popout and linked doc URL builders to skip non-path values and fall back to the project root. Normal file and folder annotation are unaffected since those use real filesystem paths.
- #647 by @backnotprop
Install / Update
macOS / Linux: `curl -fsSL https://plannotator.ai/install.sh | bash`
Windows: `irm https://plannotator.ai/install.ps1 | iex`
Claude Code Plugin: Run `/plugin` in Claude Code, find plannotator, and click "Update now".
OpenCode: Clear cache and restart: `rm -rf ~/.bun/install/cache/@plannotator`, then in `opencode.json`: `{ "plugin": ["@plannotator/opencode@latest"] }`
Pi: Install or update the extension: `pi install npm:@plannotator/pi-extension`
What's Changed
- feat(pi): make browser review and annotation sessions async by @backnotprop in #645
- feat(opencode): agent picker dropdown on Approve button by @backnotprop in #648
- fix(ui): code file and linked doc resolution in annotate-last mode by @backnotprop in #647
Community
Four issues drove the agent picker feature, all from users who were surprised or frustrated by invisible agent switching:
- @luyanfeng reported in #575 that their CLI switched from Orchestrator to Build mode after using Plannotator
- @xitex reported in #114 that approving a plan switched to the wrong agent, with follow-up discussion from @thoroc
- @tensam requested Sisyphus mode compatibility in #106
- @gustavocaiano requested the ability to approve a plan without triggering implementation in #159, with discussion from @arden-shackelford-q2, @emanuelsan, and @pencoyd
Full Changelog: v0.19.5...v0.19.6
-
๐ Probably Dance Apple is Holding my Pictures Hostage Until I Accept Their New Terms of Service rss
It started off a few months ago, when my iPad suddenly couldn't play videos that it had recorded any more. It would show the first frame, but hitting the play button wouldn't do anything. I googled around but couldn't find anything. I had recently installed an update, so I figured I'd just wait for the next update to fix things.
But the next update didn't fix things, and it turns out the reason is actually a little dystopian: Apple has deleted my local copies of my videos and will only give them back if I sign their new terms of service.
You may have noticed that the preview pictures in the video above are often blurry. Soon after this, the pictures on my iPad also started looking oddly blurry. Eventually we noticed a little info icon next to the pictures that, when tapped, says "Unable to Load Photo. An error occurred while loading a higher quality version of this photo."
What is it talking about? Why would the thumbnail load but not the full picture? But at least this is a message I can google for. It turns out I need to change the "iCloud -> Photos" settings to "Download and Keep Originals."
I never paid for Apple's iCloud service so I am a little surprised that not only were my pictures uploaded, the local copy was deleted. Here is a picture of the storage in my iPad:
See that yellow bar for "Photos"? It's almost not there because my local photos were almost all deleted.
This scared me. I knew I should have backed these upโฆ Google said I should go to the "iCloud" settings but those are grayed out, untappable:
Since I never paid for iCloud, this is not too surprising, but what do I do now? I start by installing updates. That doesn't help, but eventually I find the right spot:
So I agreed to the new terms of service. And as soon as I do, my videos are back. I immediately turn on the setting to keep a local copy of all my photos and videos.
Are they Allowed to Do this?
I obviously didn't read the new terms of service before accepting them. They're long. I asked Claude if they say anything about holding my pictures hostage, but there is no clear sentence saying they can withhold the content when they update their terms of service. They only say that they can
- change the terms of service with 30 days notice (Section I.E, "Changing the Service")
- terminate my service if I violate the agreement (Section VII.B, "Termination by Apple")
- ban me if my use of the service "intentionally or unintentionally threatens Apple's ability to provide the Service" (Section I.C, "Limitations on Use")
- take "steps" they believe are "reasonably necessary or appropriate" to enforce compliance with any part of the agreement. (Section V.E, "Access to Account and Content")
- remove the service "for indefinite periods of time", or cancel the service "in accordance with the terms of this agreement" (Section IX, "DISCLAIMER OF WARRANTIES; LIMITATION OF LIABILITY")
Number 1 happened, they didn't do number 2 or number 3 here. Number 4 might apply superficially, because they want to force me to sign the new agreement, but I didn't break any terms of the version of the agreement that I actually agreed to. But maybe none of that actually matters because number 5 says they are free to not provide the service anyway. But on the other hand "in accordance with the terms of the agreement" should mean that they're limited by the previous terms (though I'm not a lawyer).
So neither Claude nor I see anything here that says they can block access to my videos until I sign the new terms of service. In particular the "Access to Account or Content" section is really weak, which makes me think they're not allowed to block me from my own videos.
In theory.
In practice I signed the new agreement like everyone else, because what's supposed to happen if they do something that they're not allowed to do according to their own agreement? That's just how this works. The agreement limits me, not them.
So What Evil Thing Happened Here?
It's hard to say which act exactly led to this dystopian setting where a company can make a copy of your picture, delete the originals, put a new agreement in front of you, and force you to agree to it in order to get your pictures back. But what exactly is the evil thing? Every individual step was not so bad:
- They upload the pictures to iCloud even though I didn't sign up.
- They always give you 5GB for free so might as well use them to have a backup of your photos. That's probably a good thing to do, because people (like me) are irresponsible and don't do backups on their own.
- They delete local pictures by default.
- This is probably very useful for people who have lots of online storage, more than they have space on their iPad.
- They don't allow you to use iCloud if you don't agree to the new Terms of Service.
- What else are you supposed to do? According to their own agreement they probably have to still provide access to the pictures, but that seems hard.
- Repeatedly force you to agree to new Terms of Service that are too long for any sane person to read
- South Park did an episode about this, but the new terms of service are probably not bad and they probably made changes for a reason.
- Ship a locked down device where you can't run arbitrary software and regularly have to install updates and which automatically does weird things like the above.
- Yeah there's definitely a bit of "Stallman was right" here. In particular I probably would have done backups already if I could run my normal backup software on there. But that's a terminal app and I don't think I can ssh onto an iPad.
I don't know. None of these are really evil. I can see good people doing these things for not bad reasons. And nothing really bad happened here. I just had to tap "I Agree" to some agreement that I didn't read. But it sure doesn't feel good. I thought you'd at least get a warning before buying the device: "This device may delete your pictures. We'll make a copy and promise to give them back as long as you agree to all future terms of service."
Also what's the lesson for me personally, if I want to not be evil? Which of the above things should I refuse to do? Every individual step isn't so bad. I guess you have to notice the patterns, notice when other people think what you're doing is bad (FSF, South Park), and then actually take that seriously, not just dismiss itโฆ
And what should Apple do? Obviously they should allow access to the pictures and videos even if I haven't signed the new agreement. That small step would move them from "dystopian" back into good territory. But I do notice that they have already walked into a part of the good territory where it's very narrow and small missteps move you into bad areasโฆ
Will this change my behavior in any way? I already don't buy most apple products, precisely for the "walled garden" reason. I have this iPad and that's it. I'll try to not buy more.
-
๐ Julia Evans Testing Vue components in the browser rss
Hello! One of my long term projects on here is figuring out how to write frontend Javascript without using Node or any other server JS runtime.
One issue I run into a lot in my frontend JS projects is that I don't know how to write tests for them. I've tried to use Playwright in the past, but it felt slow and unwieldy to be starting these new browser processes all the time, and it involved some Node code to orchestrate the tests.
The result is that I just don't test my frontend code which doesn't feel great. Usually I don't update my projects much either so it doesn't come up that much, but it would be nice to be able to make changes with more confidence! So a way to do frontend testing that I like has been on my wishlist for a long time.
idea: just run the tests in the browser tab
Alex Chan wrote a great post a while back called Testing JavaScript without a (third-party) framework, in response to one of my previous posts in this series, that explains how to write a tiny unit-testing framework that runs in a page in the browser.
I loved this post at the time, but it only talked about unit testing and I wanted to write end-to-end integration tests for my Vue components, and I didn't know how to do that.
So when I was talking to Marco the other day and he said something like "you know, you can just run tests for your Vue components in the browser", I thought "hey, I should try that again!!!"
I just did all of this yesterday so certainly there's a lot to improve but I wanted to write down a few things I noticed about the process before I forget.
This was a bit tricky for me because the Vue site usually assumes that you're using Node as part of your build process in some way (there's a lot of "step 1: `npm install THING`"), and I didn't want to use Node/Deno/etc. But it turned out to not be too complicated.
The project I'm going to talk about testing is this zine feedback site I wrote in 2023.
the test framework: QUnit
I used QUnit. It worked great but I don't have anything interesting to say about how it works so I'll leave it at that. I think that Alex's "write your own test framework" approach would have worked too. I followed these directions.
I did appreciate that QUnit has a "rerun test" button that will only rerun 1 test. Because there are so many network requests in my tests, having a way to run just 1 test makes it a lot less confusing to debug the test.
step 1: set up the component for testing
The first thing I needed to do was get my Vue components set up in the test environment.
I changed my main app to put all my components in `window._components`, kind of like this:

```js
const components = {
    'Feedback': FeedbackComponent,
    ...
}
window._components = components;
```

Then I was able to write a `mountComponent` function which does basically exactly the same thing my normal main app does (render a tiny template with the component I want to use). The only differences are:
- I can optionally pass some extra data to use as its props.
- It mounts the component to a temporary invisible div which will get removed from the DOM after the test is done. The div is positioned off the page (`position: absolute; top: -10000, ...`) so you can't see it.
Here's what using the `mountComponent` function looks like:

```js
const {div} = mountComponent(
    '<Page :feedbacks="feedbacks" id=2 />',
    {feedbacks: [testFeedback]},
);
```

and here's the code for it:

```js
function mountComponent(template, data) {
    const app = Vue.createApp({
        template: template,
        data: () => data,
    })
    for (const [c, v] of Object.entries(window._components)) {
        app.component(c, v);
    }
    const div = document.getElementById('qunit-fixture')
        .appendChild(document.createElement('div'));
    app.mount(div);  // mount the app into the temporary div
    return {div, app};
}
```

The result is a div where I can programmatically click, fill in form data, check that the right content appears, etc.
step 2: add some fixture data
Because I was writing end-to-end integration tests to make sure my client JS worked properly with my server, I needed to have some test data in my database. So I wrote ~25 lines of SQL to set up some test data in my database, and added an endpoint to my dev server to run the SQL to reset the test data to a known state.
```js
async function reset() {
    return fetch('/api/reset_test_data', {method: "POST"})
}
```

Then I just run `await reset()` at the beginning of any test that needs the test data.
My `reset()` function actually doesn't always totally reset everything, which is kind of bad, but it was workable to start with and can always be improved.
step 3: a basic test
Here's what a basic test looks like! Basically we're rendering the div and making sure it contains some approximately correct data.

```js
QUnit.test('renders feedback content', async function (assert) {
    const {div} = mountComponent(
        '<Page :feedbacks="feedbacks" id=2 image=2 page_hash=2 />',
        {feedbacks: [testFeedback]},
    );
    assert.ok(div.textContent.includes('loved this section'));
})
```

Those are all the basic pieces! Now here are a few issues I ran into along the way.
waiting for parts of the page to render
I have a lot of network requests in my tests, and it takes time for them to finish and for the Vue code to do what it has to do with the results and update the DOM.
I think we all learned a long time ago that putting random `sleep()` calls in your tests and hoping that the timings are right is slow and flaky and extremely frustrating, so I needed a different way.
As far as I can tell the normal way to deal with this is to figure out a way to tell from the DOM whether it's okay to proceed or not. Like "if this button is visible, we can proceed".
So I wrote a little `waitFor()` function that polls every 20ms to see if a condition has finished yet. It times out after 2 seconds.
Here's what using it looks like:
```js
QUnit.test("click item", async function (assert) {
    const {div} = mountComponent(
        '<Feedback zine_id="test123" image_width="800px" />', {});
    const item = await waitFor(() => div.querySelector('.feedback-item'));
    item.click();
    // rest of test goes here...
})
```

It looks like there are a lot of implementations of this concept out there and they're all better thought-through than mine. (from a quick Google: qunit-wait-for, playwright expect.poll)
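The post doesn't show `waitFor()` itself; a minimal version matching the described behavior (poll every 20ms, give up after 2 seconds) could look like this sketch:

```javascript
// Poll condition() until it returns a truthy value, resolving with that
// value; reject with an Error if it stays falsy past the timeout.
function waitFor(condition, timeoutMs = 2000, intervalMs = 20) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    function poll() {
      const result = condition();
      if (result) {
        resolve(result);
      } else if (Date.now() - start > timeoutMs) {
        reject(new Error("waitFor: timed out after " + timeoutMs + "ms"));
      } else {
        setTimeout(poll, intervalMs);
      }
    }
    poll();
  });
}
```

Resolving with the condition's return value is what makes `const item = await waitFor(() => div.querySelector('.feedback-item'))` work: the found element falls out of the wait.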
figuring out the right thing to wait for is not straightforward
In some cases I thought I'd identified the right thing to wait for in the DOM ("just wait for this textarea to appear!") but it turned out that because of some internal details of how my program works, actually I needed to wait for something else later on which was hard to pin down.
I ended up changing one of my components to add some random value to the DOM when it finished an important action (like `data-this-thing-is-ready=true`), which didn't feel great.
data-this-thing-is- ready=true) which didn't feel great.My best guess is that the right way to fix this kind of test issue is a refactor that also makes the app more reliable for the users: if there's an element in the DOM that isn't actually ready for the user to interact with, maybe I shouldn't be displaying it yet!
adding some CSS classes to identify things (but is that right?)
I ended up adding a few classes to HTML elements that I needed to find in the tests, either because I needed to click on them or wait for them to appear in the DOM.
I might want to change this approach later - frontend testing frameworks seem to suggest avoiding CSS classes and instead using something like getByRole or, as a last resort, something like a data-testid. Feels like there's a way to make the app more accessible and easier to test at the same time.
filling out forms is tricky
To fill out a form, I can't just set the `value`, I also need to dispatch an event to tell Vue that the element has changed. For example, `checkbox` and `textarea` need different kinds of events.

```js
textarea.value = 'banana banana banana';
textarea.dispatchEvent(new Event('input'));
checkbox.checked = true;
checkbox.dispatchEvent(new Event('change'));
```

This is kind of annoying and it made me realize why I might want to use some kind of UI testing library, for example:
- Testing Library's example of filling out a form looks extremely different from what I'm doing
- Vue Test Utils: their section on form handling looks like it simplifies this a lot.
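In the meantime, the two dispatch patterns can be wrapped in one small helper (my own sketch, not from the post) so tests don't have to remember which event goes with which element:

```javascript
// Set a form control's value and dispatch the event Vue listens for:
// 'change' for checkboxes/radios, 'input' for text-like controls.
function setFieldValue(el, value) {
  if (el.type === 'checkbox' || el.type === 'radio') {
    el.checked = Boolean(value);
    el.dispatchEvent(new Event('change'));
  } else {
    el.value = value;
    el.dispatchEvent(new Event('input'));
  }
}

// Usage in a test:
//   setFieldValue(textarea, 'banana banana banana');
//   setFieldValue(checkbox, true);
```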
test coverage
I want to have an idea of what my test coverage was, and it turns out that Chrome actually has a built-in code coverage feature for JS and CSS!
My JS is bundled into a file called `bundle.js` with esbuild, so I could just look at `bundle.js` and see which lines weren't covered.
this was so fun!
As usual with these posts I've never really worked as a frontend or backend developer (other than for myself!) and I feel like I'm constantly learning how to do super basic tasks.
I really had a blast doing this. My frontend projects always feel so fragile because they're untested, and maybe one day I'll have a test suite I'm confident in!
Some things I'm still thinking about:
- While writing this post I found this frontend testing library called Testing Library that has a lot of guidelines for how to write tests that are very different from my initial ideas. I experimented with rewriting everything to use Testing Library and it felt pretty good, so we'll see how that goes. They distribute a `.umd.js` file that works without Node.
- I'm not sure how I feel about not having a way to run these tests on the command line at all. Maybe there's a simple way to work primarily in the browser but have a way to run them in CI too if I want?
-