🏡


to read (pdf)

  1. I don't want your PRs anymore
  2. JitterDropper | OALABS Research
  3. DomainTools Investigations | DPRK Malware Modularity: Diversity and Functional Specialization
  4. EXHIB: A Benchmark for Realistic and Diverse Evaluation of Function Similarity in the Wild
  5. Neobrutalism components - Start making neobrutalism layouts today

  1. May 08, 2026
    1. 🔗 backnotprop/plannotator v0.19.11 release

      Follow @plannotator on X for updates


      Missed recent releases?

      Release | Highlights
      ---|---
      v0.19.10 | Revert unreviewed bypass-clear-reminder permission mode
      v0.19.9 | OpenCode user-managed workflow, Pi model switch fix, Codex skill install, shimmer removal
      v0.19.8 | 49 themes with syntax highlighting, keyboard shortcut registry, smart code-file path validation, remote URL notifications
      v0.19.7 | Codex Stop-hook plan review, Codex skills, sidebar auto-close, file tree context menu
      v0.19.6 | Non-blocking Pi browser sessions, agent picker dropdown for OpenCode, annotate-last file resolution fix
      v0.19.5 | All-files diff view, clickable code file paths, server-side hide whitespace, non-ASCII path support
      v0.19.4 | All-files diff type, code file viewer, hide whitespace, quick-settings popover
      v0.19.3 | Configurable feedback messages, hide merged PRs in stacked PR selector
      v0.19.2 | Stacked PR review, source line numbers in feedback, diff type dialog re-show, ghost dot removal, docs cleanup
      v0.19.1 | Hook-native annotation, custom base branch, OpenCode workflow modes, quieter plan diffs, anchor navigation
      v0.19.0 | Code Tour agent, GitHub-flavored Markdown, copy table as Markdown/CSV, flexible Pi planning mode, session-log ancestor-PID walk


      What's New in v0.19.11

      v0.19.11 adds Jujutsu (jj) as a first-class VCS backend for code review and refines the review UI with slimmer separators, a cleaner header layout, and proper multi-line gutter selection. One of the two PRs in this release is from a first-time contributor.

      Jujutsu (jj) Code Review

      Plannotator's code review now works natively with Jujutsu, the Git-compatible VCS. When you run /plannotator-review in a jj workspace, the VCS is auto-detected and four jj-specific diff modes appear in the diff type picker:

      • Current (jj-current) shows the working-copy changes
      • Last (jj-last) shows the previous change
      • Line (jj-line) shows the full line of work from the current change back to the trunk bookmark
      • All (jj-all) shows all local changes not yet on the remote

      Compare-target selection adapts to jj's model. Instead of branch-based base selection, the picker offers remote bookmarks. The feedback exported to your agent includes jj-appropriate local diff instructions so it can reproduce the same view.

      Under the hood, this required a significant refactor. Diff collection, compare-target semantics, and file-content retrieval were pulled into a provider-based VCS abstraction in packages/shared/vcs-core.ts. Git, jj, and P4 each implement the same provider interface. The review server and UI consume provider-supplied metadata instead of branching on VCS-specific flags. This abstraction makes adding future VCS backends straightforward.

      For colocated repos (both .git and .jj present), jj takes priority. Pass --git to /plannotator-review to override.
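      The precedence rule can be pictured in a few lines of shell. This is a hypothetical sketch, not Plannotator's actual code: the detect_vcs helper and its --git handling are invented for illustration.

      ```shell
      # Sketch of the detection precedence described above: in a colocated
      # workspace (.git and .jj both present), jj wins unless git is forced.
      detect_vcs() {
        dir=$1
        force_git=${2:-}
        if [ -d "$dir/.jj" ] && [ "$force_git" != "--git" ]; then
          echo jj
        elif [ -d "$dir/.git" ]; then
          echo git
        else
          echo none
        fi
      }

      ws=$(mktemp -d)
      mkdir "$ws/.git" "$ws/.jj"            # simulate a colocated repo
      default_vcs=$(detect_vcs "$ws")       # jj takes priority
      forced_vcs=$(detect_vcs "$ws" --git)  # explicit override
      echo "$default_vcs / $forced_vcs"     # prints "jj / git"
      ```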

      Review UI Refinements

      Several quality-of-life improvements to the code review interface:

      Slimmer hunk separators. The expand/collapse bars between diff hunks are now 24px (down from 32px), with semi-transparent theme-integrated backgrounds. Text and buttons fade with lower opacity for a subtler look that puts the focus on the code.

      Cleaner header layout. Sidebar toggles (Annotations, AI, Agents) moved to the far right of the header bar, with the options menu to their left. A visual divider separates the file tree button from the repo label.

      Collapse viewed files. Marking a file as viewed in all-files review mode now automatically collapses it, keeping only unreviewed files expanded.

      Multi-line gutter selection fix. Click-and-drag on the gutter annotation button now correctly selects a range of lines. The previous implementation used a deprecated Pierre API that never entered the selection mode, so dragging always reported a single line.


      Install / Update

      macOS / Linux:

      curl -fsSL https://plannotator.ai/install.sh | bash
      

      Windows:

      irm https://plannotator.ai/install.ps1 | iex
      

      Claude Code Plugin: Run /plugin in Claude Code, find plannotator, and click "Update now".

      OpenCode: Clear cache and restart:

      rm -rf ~/.bun/install/cache/@plannotator
      

      Then in opencode.json:

      {
        "plugin": ["@plannotator/opencode@latest"]
      }
      

      Pi: Install or update the extension:

      pi install npm:@plannotator/pi-extension
      

      What's Changed

      New Contributors

      Community

      @graemefolk built full jj support from scratch, implementing the VCS provider, diff modes, compare-target picker, and feedback export in a single well-structured PR. The VCS abstraction layer they introduced benefits the entire codebase.

      @JohannesKlauss reported the multi-line gutter selection bug in #679, with a clear screen recording that made the root cause obvious.

      @festive-onion requested the collapse-on-viewed behavior in #682, a small change that meaningfully improves the review workflow for large diffs.

      Full Changelog : v0.19.10...v0.19.11

    2. 🔗 keeweb/keeweb v1.18.9 release

      What's Changed

      Full Changelog : 1.18.8...v1.18.9

    3. 🔗 r/reverseengineering SASS King Part 2: reverse-engineering ptxas heuristic decisions and what the compiled binary actually reveals rss
    4. 🔗 livestorejs/livestore v0.4.0-dev.24 release

      Release 0.4.0-dev.24

    5. 🔗 r/reverseengineering I just released a C++ rewrite of **Minecraft rd-20090515** (May 15, 2009 — one of the earliest pre-Classic versions). If you find it interesting, a ⭐ on GitHub would mean a lot and help the project grow! rss
  2. May 07, 2026
    1. 🔗 anthropics/claude-code v2.1.133 release

      What's changed

      • Added worktree.baseRef setting (fresh | head) to choose whether --worktree, EnterWorktree, and agent-isolation worktrees branch from origin/<default> or local HEAD. Note: the default fresh changes EnterWorktree's base back to origin/<default> (it has been local HEAD since 2.1.128) — set worktree.baseRef: "head" to keep unpushed commits in new worktrees
      • Added sandbox.bwrapPath and sandbox.socatPath managed settings (Linux/WSL) to specify custom bubblewrap and socat binary locations
      • Added parentSettingsBehavior admin-tier key ('first-wins' | 'merge') to let admins opt SDK managedSettings (parent tier) into the policy merge
      • Hooks now receive the active effort level via the effort.level JSON input field and the $CLAUDE_EFFORT environment variable, and Bash tool commands can read $CLAUDE_EFFORT
      • Improved focus mode behavior
      • Improved memory usage by releasing warm-spare background workers under memory pressure
      • Fixed parallel sessions all dead-ending at 401 after a refresh-token race wiped shared credentials
      • Fixed Edit/Write allow rules scoped to a drive root (C:\) or POSIX / matching incorrectly and always prompting
      • Fixed an unhandled rejection (ECOMPROMISED) when a history or session-log file lock is compromised by clock skew or slow disk
      • Fixed pressing Esc during conversation compaction showing a spurious "Error compacting conversation" notification
      • Fixed HTTP(S)_PROXY / NO_PROXY / mTLS not being respected for the full MCP OAuth flow including discovery, dynamic client registration, token exchange, and token refresh
      • Fixed Read/Write/Edit being denied on mapped network drives passed via --add-dir / SDK additionalDirectories
      • Fixed Remote Control stop/interrupt from claude.ai not fully canceling the CLI session the same way local Esc does, causing queued messages to never advance after interrupting a stuck tool or prompt
      • Fixed /effort in one session unexpectedly changing the effort level of other concurrent sessions, and a related issue where an IDE effort change could be silently dropped
      • Fixed subagents not discovering project, user, or plugin skills via the Skill tool
      • claude --help now lists --remote-control alongside --remote-control-session-name-prefix
      • [VSCode] Fixed claudeCode.claudeProcessWrapper failing with "Unsupported platform" when the extension build doesn't bundle a Claude binary
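      The effort plumbing from the changelog is easy to picture. Below is a hedged sketch of a hook (or Bash tool command) branching on $CLAUDE_EFFORT — the variable name comes from the release notes above, but the high/low policy is purely illustrative:

      ```shell
      # Simulate the environment Claude Code provides to hooks and Bash commands.
      export CLAUDE_EFFORT=high   # set here for the demo; Claude Code sets it for real hooks

      # Illustrative policy: do more work at higher effort levels.
      case "${CLAUDE_EFFORT:-medium}" in
        high) note="run the full check suite" ;;
        low)  note="skip slow checks" ;;
        *)    note="run default checks" ;;
      esac
      echo "hook: $note"   # prints "hook: run the full check suite"
      ```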
    2. 🔗 r/LocalLLaMA Collected the infinity stones rss

      2.3 TB of ram in here. 400+ vCores. All that's left is plugging it into the blackwell with the driver to do RDMA, and it's over. Using Blackwells for prefill, RDMA to the studio mesh for decode. I think this would be the first heterogeneous cluster. I do, however, need help with the Tinygrad Driver to make this work. If anyone with any knowledge on these domains would like to collaborate, let me know via PM. We are very close here. submitted by /u/Street-Buyer-2428
      [link] [comments]

    3. 🔗 r/Leeds What should I do about the stressed koi at a restaurant? rss

      I'm at a restaurant in Leeds, I'm sure you could figure out which one, which has a koi pond in the middle of the restaurant. It's covered by a large bridge and a thick mesh, and the fish are showing classic signs of stress (not moving, sitting near the bottom, jumping out of the water, and gasping at the surface). Is there a way for me to advocate for better health for them or is it a lost cause as they are the restaurant's property and technically taken care of? Sorry if this is silly it just makes me sad to see them in a bad state.

      submitted by /u/moonstone7152
      [link] [comments]

    4. 🔗 r/york Goose on Dame Judi Dench Walk rss

      Honk submitted by /u/NervousEnergy
      [link] [comments]

    5. 🔗 crosspoint-reader/crosspoint-reader SD Card Fonts (m1-b4) release

      Pre-built .cpfont font files for CrossPoint Reader.

      Download individual files or use Settings > System > Download Fonts on the device.

      See SD Card Fonts documentation for details.

    6. 🔗 r/Leeds I love this spot. rss

      Sidenote : anyone going warehouse this coming Tuesday ?

      submitted by /u/Auriv3x
      [link] [comments]

    7. 🔗 r/york York City Parade rss

      View from the bus! submitted by /u/York_shireman
      [link] [comments]

    8. 🔗 r/reverseengineering The first FREE online WebAssembly Reverse Engineering workbench (and how we built it) rss
    9. 🔗 earendil-works/pi v0.74.0 release

      Changed

      • Updated repository links and package references for the move to earendil-works/pi-mono and @earendil-works/* package scopes.
    10. 🔗 The Pragmatic Engineer The Pulse: AI load breaks GitHub – why not other vendors? rss

      Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover Big Tech and startups through the lens of senior engineers and engineering leaders. Today, we cover one out of four topics from last week's The Pulse issue. Full subscribers received the article below seven days ago. If you've been forwarded this email, you can subscribe here.

      GitHub's reliability has been beyond unacceptable recently: last month, third-party measurements pinned it at one nine (right at 90%). This month, reliability has been down to zero nines - 86% - as per a third-party tracker, and last week things got even worse: a frankly embarrassing data integrity incident, more outages, and, eventually, a partial explanation from GitHub.

      Data integrity incident

      Last Thursday (23 April), this happened: PRs merged via the merge queue using the squash merge method produced incorrect merge commits when the merge group contained more than one PR. Commits were reverted from subsequent merges: basically, commits were "lost" from the code that was merged!

      Thanks to a bug GitHub introduced, the service broke its integrity promise that pull requests would be merged as expected when using squash merge, a technique typically used to combine multiple small commits into a single, meaningful commit. This is a big deal: data integrity promises are among the most important ones for a service like GitHub.

      A total of 2,092 pull requests were impacted, and companies hit by the outage included Modal and Zipline. Effectively, GitHub pushed a bunch of work onto affected customers, who had to manually untangle and recover lost commits - something GitHub could offer zero assistance with.

      Customers had to manually go through their git history and restore missing code. After following manual recovery steps (reverting the squash commit and re-applying commits one by one), all commits should have been recovered.
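      The revert-then-cherry-pick flow can be simulated end-to-end in a throwaway repo. This is a hedged sketch of the recovery steps described above, not GitHub's official remediation script; the branch names, commit messages, and file contents are invented for the demo:

      ```shell
      set -e
      repo=$(mktemp -d) && cd "$repo"
      git init -q .
      git config user.email demo@example.com && git config user.name demo

      echo base > file.txt && git add file.txt && git commit -q -m "base"
      trunk=$(git rev-parse --abbrev-ref HEAD)

      # Two small commits that the (simulated) broken squash merge will lose.
      git checkout -q -b feature
      echo one >> file.txt && git commit -q -a -m "feature: one"
      echo two >> file.txt && git commit -q -a -m "feature: two"
      c1=$(git rev-parse feature~1) && c2=$(git rev-parse feature)

      # Simulate the bad squash commit: it lands on trunk but loses the content.
      git checkout -q "$trunk"
      echo broken >> file.txt && git commit -q -a -m "squash feature (broken)"

      # Recovery: revert the bad squash, then re-apply the commits one by one.
      git revert --no-edit HEAD >/dev/null
      git cherry-pick "$c1" "$c2" >/dev/null
      recovered=$(grep -c . file.txt)   # base + one + two = 3 lines restored
      ```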

      GitHub later emailed the list of affected commits to customers, but it's odd that GitHub executives seemed to downplay the nature of this outage. After all, an outage that messes with data integrity is a much bigger deal than something like a fall in availability where no data is corrupted.

      Can Duruk, software engineer at Modal, was unhappy about GitHub's muted response to the outage:

      "The COO going out of their way to find a huge denominator to make the impact appear small feels very dishonest; versus a sincere apology about how this invalidates their entire promise to their customers. We had to dig into their status page about this to even realize they just casually f***ed up our repo."

      Outages don't stop

      On Monday (27 April), pull requests and issues disappeared from GitHub's web UI:

      [Image: Pull requests go missing. Source: Mario Zechner] [Image: Issues also not to be found. Source: David Cramer]

      This had to do with an Elasticsearch outage on GitHub's backend: the cluster became overloaded and went down. So, while pull requests, issues, and projects didn't vanish altogether, they also didn't show up during the 6-hour-long outage.

      There were other outages this week:

      Also on Tuesday (28 April), security firm Wiz disclosed a critical security issue, where a bad actor could get access to all repositories on GitHub and GitHub Enterprise server by using only a git push command. GitHub fixed the issue on GitHub.com within six hours, but GitHub Enterprise servers that were not updated remain vulnerable.

      Famous open source contributor quits GitHub in frustration

      On Tuesday, Mitchell Hashimoto, founder of HashiCorp and creator of Ghostty, announced that GitHub was unfit for professional work and that he was moving Ghostty, the open source terminal that's his main focus, off the platform. Mitchell's reasoning was dead simple: being on GitHub makes him unproductive (emphasis mine):

      "The past month I've kept a journal where I put an "X" next to every date where a GitHub outage has negatively impacted my ability to work. Almost every day has an X. On the day I am writing this post, I've been unable to do any PR review for ~2 hours because there is a GitHub Actions outage. This is no longer a place for serious work if it just blocks you out for hours per day, every day.

      It's not a fun place for me to be anymore. I want to be there, but it doesn't want me to be there. I want to get work done and it doesn't want me to get work done. I want to ship software and it doesn't want me to ship software.

      I want it to be better, but I also want to code. And I can't code with GitHub anymore. I'm sorry. After 18 years, I've got to go. I'd love to come back one day, but this will have to be predicated on real results and improvements, not words and promises."

      Mitchell's experience suggests that GitHub's official status page is inaccurate from the point of view of a heavy user like him. The third-party "missing GitHub status page" is likely a better estimate: it puts GitHub's reliability at zero nines - 85.51% uptime. That means part of GitHub was down for roughly 3.5 hours per day, on average, over the last 90 days (!!)
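      As a sanity check on that uptime figure, the arithmetic is quick (awk is used here just for the floating-point math):

      ```shell
      # 85.51% uptime over a 24-hour day leaves this much downtime, on average:
      downtime=$(awk 'BEGIN { printf "%.1f", (1 - 0.8551) * 24 }')
      echo "$downtime hours/day"   # prints "3.5 hours/day"
      ```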

      [Image: Reliability woes: GitHub "not a place for serious work." Source: The Missing GitHub Status Page]

      Mitchell's complaint sounds straightforward:

      1. As a professional software engineer, it's important to have tools that help you get work done
      2. For months, GitHub has got in the way of his work on open source projects via a flood of outages
      3. It makes no sense to use a product unfit for professional work
      4. As GitHub shows no signs of improvement, it's worthwhile to move to a different solution which just works

      CTO blames AI agent-fuelled load spike

      GitHub CTO Vlad Fedorov shared an update on why reliability has been terrible at GitHub for months. He identified load from agents being much bigger than expected as the culprit. GitHub shared charts illustrating this:

      [Chart: agent-driven load over time - no Y axis. Source: GitHub]

      This chart looks eye-catching - but there's just one tiny issue: no Y axis! So, while it tells the story of the load going up slowly and then very fast, we're not told by how much. However, I managed to get data from GitHub, and below is the chart showing the actual load increase over two years:

      [Chart: actual load increase over two years]

      A load increase of ~3.5x, spread across two years, doesn't seem so brutal at first glance. It is nothing like a load increase of 10x in a month, and a good chunk of it occurred in recent months. So, why can't GitHub handle it? In a blog post, Fedorov said:

      "A pull request can touch Git storage, mergeability checks, branch protection, GitHub Actions, search, notifications, permissions, webhooks, APIs, background jobs, caches, and databases. At large scale, small inefficiencies compound: queues deepen, cache misses become database load, indexes fall behind, retries amplify traffic, and one slow dependency can affect several product experiences."

      Here's how the per-second load numbers from January 2023 and today compare:

      [Chart: per-second load, January 2023 vs today]

      GitHub took 15 years to achieve the 2023 numbers, and maybe it expected to continue growing in a comparable way in the future. If so, some engineering decisions about long-term infrastructure improvements would have been made obsolete by the arrival of AI agents.
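      For context, "~3.5x across two years" implies a steep but not unheard-of annualized rate. A rough compound-growth estimate (again via awk, just for the math):

      ```shell
      # 3.5x over 2 years => sqrt(3.5) per year as the annualized multiplier.
      rate=$(awk 'BEGIN { printf "%.0f", (3.5 ^ (1 / 2) - 1) * 100 }')
      echo "~${rate}% load growth per year"   # prints "~87% load growth per year"
      ```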

      To add to GitHub's challenges, the company is in the midst of a migration from its own data centers to Azure. In October last year, GitHub started the move - a project expected to take 12 months - because it was already hitting capacity constraints in its own data centers.

      Such large-scale infrastructure migrations are hard enough when the load on a service is relatively stable; just making sure nothing breaks takes a lot of effort. But moving at a time when load is spiking means that bugs can cause more visible outages. Of course, GitHub can secure a lot more compute capacity on Azure, now they know what to expect.

      But other major companies prepared for a 10x increase in infra load, so why not Microsoft / GitHub? A year ago, I did research on how Big Tech was preparing to respond to the impact of AI on their business. Google was improving its internal systems to accommodate a 10x increase in load. As we covered in The Pragmatic Engineer in July last year:

      "Google is preparing for 10x more code to be shipped. A former Google Site Reliability Engineer (SRE) told me:

      "What I'm hearing from SRE friends is that they are preparing for 10x the lines of code making their way into production."

      If any company has data on the likely impact of AI tools, it's Google. 10x as much code generated will likely also mean 10x more: code review, deployments, feature flags, source control footprint and, perhaps, even bugs and outages, if not handled with care."

      Predicted enormous load increases were not secret knowledge within the industry, yet it seems GitHub was blissfully ignorant of their potential size. According to Vlad, GitHub did eventually plan for a 10x capacity increase, but only in October 2025, months later. In February 2026, the company adjusted that expectation to 30x. He wrote:

      "We started executing our plan to increase GitHub's capacity by 10X in October 2025 with a goal of substantially improving reliability and failover. By February 2026, it was clear that we needed to design for a future that requires 30X today's scale."

      There's also the question of whether GitHub miscalculated how much time it had to prepare for explosive load growth, and whether it was caught off guard when that growth materialized months sooner than expected at the start of this year.

      Given GitHub only started to prepare for a major load increase in October, its current problems are unsurprising. At the scale of GitHub, it's common enough for each team owning a service to plan a year ahead on how much load their service will have, and hardware resources like storage, VMs, and networking are allocated accordingly. Load planning can account for up to half of the preparations, and when reality doesn't conform to plans, some systems can struggle to scale up.

      So, on one hand, dealing with a 3.5x increase in load over 2 years should not be such a big deal for most services; especially not ones which can be horizontally scaled (when there's not much state, and scaling is achieved simply by adding new nodes.) But GitHub probably stores a lot more state with pull requests, workflows, projects, etc. This probably makes scaling more tricky when it comes to databases and systems running workflows.

      GitHub also has 18 years of tech debt on its hands, and thousands of staff to align - organizational overhead. As its service load grows faster than before, responding is harder due to all that accumulated debt:

      • Tech debt: many systems at the company are 10+ years old and are likely patched up, making them more difficult and risky to change
      • Organizational debt: around 4,000 people work at GitHub, of whom 1,000 are engineers. Teams have dependencies with each other, and even seemingly simple work can require dozens of engineers to work together
      • Customer expectations: GitHub cannot break customer workflows, even if doing so would mean changes to systems happen faster

      GitHub finds itself in the 'innovator's dilemma': the company became successful because it built developer workflows that made sense, pre-AI, and it used to be able to accurately forecast service load changes. But now that engineering teams' workflows include AI agents, GitHub's own workflows are not necessarily the best fit, and the company failed to forecast service-level changes.

      Other vendors floored by AI load? Not really

      One thing that doesn't add up about the situation is that other vendors, who are presumably experiencing similar load spikes, don't appear to be suffering from reliability issues as much. Vercel, Linear, Resend, Railway, Sentry, and other infra providers see record-level growth thanks to AI, but keep up with the load.

      Yes, it's true that AI vendors like Anthropic, OpenAI, and Cursor have some reliability issues, but it's not at the scale of GitHub's. GitHub's direct competitors, GitLab and Bitbucket, presumably see load going up similarly, but they're not going down as much.

      An obvious question is how much of GitHub's pain is self-inflicted. With Microsoft as owner, it has more resources at its disposal than any competitor or startup, yet it failed to predict load increases and is too big to respond with the nimbleness of a startup.

      It's undeniable that solving for a major load increase is a hard challenge; it's where the difference between average and standout engineering teams becomes apparent. GitHub hasn't been responding like a world-class engineering org.

      GitHub alternatives?

      Every regular user of GitHub feels the pain of ongoing outages. As a dev, you can either hope Microsoft will eventually improve reliability, or seek alternatives. As covered above, Mitchell has chosen to quit and is currently deciding where to take Ghostty.

      The obvious alternatives are GitHub's biggest competitors, GitLab and Bitbucket. Each offers Git hosting, and neither comes with the uptime woes that GitHub is suffering from.

      Self-hosted solutions are also an option, like self-hosting your git repo, or going with a self-hosted forge like Forgejo, which is an open source, local-first GitHub alternative.

      I also suspect that, soon enough, we'll see startups offering GitHub-like code hosting with more robust uptime, architected to handle the 30x-or-more scale which GitHub hopes one day to support.

      Read the full issue of last week's The Pulse, or check out this week's The Pulse. This week's issue covers:

      1. Did Anthropic turn hostile on devs because capacity was running low?
      2. Amazon finally allows Claude Code and Codex usage
      3. Meta forcefully assigns engineers to data labelling ahead of job cuts
      4. New trend: small "AI-forward" teams
      5. Industry Pulse: why Meta tracks employees' computer activity, OpenAI starts to move off Datadog, Apple lets slip it uses Claude Code, GitHub -> Xbox transfers at Microsoft, VS Code inserted "co-authored by Copilot" even when Copilot did nothing, analysis of the Coinbase layoffs
    11. 🔗 r/wiesbaden Dinner out for two on a Friday? rss

      Hi, I'd like to go out to eat with a friend on a Friday in Wiesbaden. It should be cozy and not too loud - an atmosphere where you can have a good conversation. There should be vegan/vegetarian options. I'd be very grateful for your tips, since I don't know the area well.

      submitted by /u/JohnTheMonkey2
      [link] [comments]

    12. 🔗 r/Leeds why is everyone in fancy dress? rss

      I'm in the city centre right now and just wondering why everyone is dressed up? I thought it was the otley run but now I'm unsure because the people in fancy dress are everywhere. This is just me being nosey but I can't find any info about it online so I was wondering if anyone knows.

      submitted by /u/MeowTS13
      [link] [comments]

    13. 🔗 Simon Willison Notes on the xAI/Anthropic data center deal rss

      There weren't a lot of big new announcements from Anthropic at yesterday's Code w/ Claude event, but the biggest by far was the deal they've struck with SpaceX/xAI to use "all of the capacity of their Colossus data center".

      As I mentioned in my live blog of the keynote, that's the one with the particularly bad environmental record. The gas turbines installed to power the facility initially ran without Clean Air Act permits or pollution control devices, which they got away with by classifying them as "temporary". Credible reports link it to increases in hospital admissions relating to low air quality.

      Andy Masley, one of the most prolific voices pushing back against misleading rhetoric about data centers (see The AI water issue is fake and Data center land issues are fake), had this to say about Colossus:

      I would simply not run my computing out of this specific data center

      I get that Anthropic are severely compute-constrained, but in a world where the very existence of "AI data centers" is a red-hot political issue (see recent news out of Utah for a fresh example), signing up with this particular data center is a really bad look.

      There was a lot of initial chatter about how this meant xAI were clearly giving up on their own Grok models, since all of their capacity would be sold to Anthropic instead. That was a misconception - Anthropic are getting Colossus 1, but xAI are keeping their larger Colossus 2 data center for their own work.

      As an interesting side note, the night before the Anthropic announcement, xAI sent out a deprecation notice for Grok 4.1 Fast and several other models providing just two weeks' notice before shutdown, reported here by @xlr8harder from SpeechMap:

      Effective May 15, 2026 at 12:00pm PT, the following models will be retired from the xAI API: grok-4-1-fast-reasoning, grok-4-1-fast-non-reasoning, grok-4-fast-reasoning, grok-4-fast-non-reasoning, grok-4-0709, grok-code-fast-1, grok-3, grok-imagine-image-pro. After May 15, 2026, requests to these models will no longer work.

      This is terrible @xai. I just spent time and money to migrate to grok 4.1 fast, and you're disabling it with less than two weeks notice, after releasing it in November, with no migration path to a fast/cheap alternative.

      I will never depend on one of your products again.

      Here's SpeechMap's detailed explanation of how they selected Grok 4.1 Fast for their project in March.

      Were xAI serving those models out of Colossus 1?

      xAI owner Elon Musk (who previously delighted in calling Anthropic "Misanthropic") tweeted the following:

      By way of background for those who care, I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed. [...]

      After that, I was ok leasing Colossus 1 to Anthropic, as SpaceXAI had already moved training to Colossus 2.

      And then shortly afterwards:

      Just as SpaceX launches hundreds of satellites for competitors with fair terms and pricing, we will provide compute to AI companies that are taking the right steps to ensure it is good for humanity.

      We reserve the right to reclaim the compute if their AI engages in actions that harm humanity.

      Presumably the criteria for "harm humanity" are decided by Elon himself. Sounds like a new form of supply chain risk for Anthropic to me!

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    14. 🔗 r/wiesbaden Experiences with Autohaus Can on Wiesbadener Str.? rss

      Who has experience with the dealer named above? Reputable or not?

      submitted by /u/HagebuddneLard
      [link] [comments]

    15. 🔗 r/LocalLLaMA WARNING: Open-OSS/privacy-filter MALWARE rss

      There's this new "model" on Hugging Face titled Open-OSS/privacy-filter which is actually a customized infostealer virus. It's a fake version of the OpenAI privacy filter, and it uses a Python-based dropper (loader.py) that downloads a malicious PowerShell command from the internet, which spawns another PowerShell command, downloads a shady EXE file, and runs it via Task Scheduler.

      Here's a behavior analysis of what the EXE does: https://tria.ge/260507-tnftrsfx5x/behavioral1

      I also reported both the dropper and the EXE to Microsoft.

      I also reported the repo to HF.

      If you use Linux (which is easier to use for AI/ML) you are unaffected as this is a Windows virus.

      submitted by /u/charles25565
      [link] [comments]

    16. 🔗 tomasz-tomczyk/crit v0.11.0 release

      What's Changed

      Big milestone! Crit crossed 500 commits and 250 stars. You can now install it directly from Homebrew, and we released a Windows version!

      Thank you to everyone who contributed to get us here! I'd appreciate if you would share it with your colleagues or on Twitter! It helps a lot!


      crit is now in homebrew-core — no tap needed. If you installed from the tap, upgrade once with:

      brew uninstall crit && brew install crit
      

      Future updates will arrive via brew upgrade like any other formula.

      Windows + WSL support

      feat: add Windows + WSL support replaces Unix-only syscalls with cross-platform abstractions, adds rundll32 browser launch on native Windows, and keeps the existing WSL fallback chain. crit now works end-to-end on Windows natively.

      General

      Full Changelog : v0.10.5...v0.11.0

    17. 🔗 earendil-works/pi v0.73.1 release

      New Features

      • Self-update support for the npm scope migration : pi update --self now supports the upcoming package rename from @mariozechner/pi-coding-agent to @earendil-works/pi-coding-agent. After the new package is published, existing global installs can update through the normal self-update flow; pi will uninstall the old global package and install the package name returned by the version check endpoint.
      • Interactive OAuth login selection : OAuth providers can now present multiple login choices in /login, enabling provider-specific interactive authentication flows. See Providers.
      • JSONC-style models.json parsing: models.json now allows comments and trailing commas, making custom provider and model configuration easier to maintain. See Providers and Custom Providers.
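With JSONC parsing, a models.json like the following now loads without errors. This is an illustrative sketch only; the keys shown are hypothetical, not Pi's documented schema:

```jsonc
{
  // Custom provider entry -- comments are now allowed
  "providers": {
    "my-local-proxy": {
      "baseUrl": "http://localhost:8080/v1",
      "models": ["my-model"], // trailing commas are fine too
    },
  },
}
```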

      Added

      • Added interactive login selection support so OAuth providers can present multiple login choices (#4190 by @mitsuhiko).

      Changed

      • Changed pi update --self to honor the active package name returned by the Pi version check endpoint, defaulting to the current package when omitted and uninstalling the old global package before installing a renamed package.
      • Changed extension loading to use upstream jiti 2.7 instead of the @mariozechner/jiti fork (#4244 by @pi0).
      • Changed models.json parsing to allow comments and trailing commas (#4162 by @julien-c).

      Fixed

      • Fixed pi -p treating prompts that start with YAML frontmatter as extension flags instead of user messages (#4163).
      • Fixed pending tool results not updating in the live TUI after toggling thinking block visibility while the tool is running (#4167).
      • Fixed /copy reporting success on Linux without writing the clipboard on Wayland-only compositors (Hyprland, Niri, ...) by skipping the X11-only native addon on Linux and routing through wl-copy/xclip/xsel instead (#4177).
      • Fixed HTML session exports to strip skill wrapper XML from rendered user messages (#4234 by @aliou).
      • Fixed OpenAI-compatible chat completion streams that interleave content and tool-call deltas in the same choice.
      • Fixed OpenAI Codex OAuth refresh failures writing directly to stderr while the TUI is active (#4141).
      • Fixed OpenAI Codex Responses requests to send a non-empty system prompt (#4184).
      • Fixed Kimi For Coding model resolution for the Kimi K2 P6 alias (#4218).
      • Fixed Kitty inline image redraws to stay within TUI-owned terminal regions and avoid writing below the active viewport.
      • Fixed Kitty inline image rendering by letting the terminal allocate image ids and bounding parsed image ids to valid values.
      • Fixed inline image capability detection to disable inline images in cmux terminals.
    18. 🔗 r/Leeds Leeds cycle lane network is a 'step in the right direction', say campaigners rss

      Just wanted to add a bit of positivity around the new cycle lanes in Leeds, as there seems to be a lot of negativity whenever the topic comes up.

      Speaking from personal experience, they’ve genuinely changed my life for the better. Up until last year, I hadn’t really ridden a bike since I was a teenager. But after seeing more segregated cycle lanes appear around my area, I realised I could get from my house into the city centre in under 30 minutes almost entirely on protected infrastructure.

      I've started cycling regularly, and eventually I sold my car altogether. I now use my bike every other day for commuting, trips into town, canal rides etc etc. I’m healthier, happier, saving loads of money, and honestly enjoy getting around Leeds far more now. It's hilly in parts but stick to a low gear and it's perfectly manageable, ebikes are great alternatives too and can be purchased through the cycle to work schemes (I saved hundreds on my bike).

      I also cycle year-round, and I think people massively overestimate how “hardcore” cycling is in the UK. Our weather really isn’t that different from places like the Netherlands. Most of the time you’re completely fine with a decent jacket.

      I know the network still has gaps and improvements to make, but for me it’s been a massive step in the right direction and has made cycling feel accessible to normal people again, not just super confident road cyclists.

      Just wondering if anyone else has had a similar experience or enjoys using the bike lanes too?

      submitted by /u/_testingdude
      [link] [comments]

    19. 🔗 r/reverseengineering VLC Media Player MKV Exploit Analysis rss
    20. 🔗 r/york Different angles on one perfect subject 💫 rss

      submitted by /u/Coffee000Oopss
      [link] [comments]

    21. 🔗 r/Yorkshire 'We're all human': Reform response to Sheffield candidate accused of Nazi praise rss
    22. 🔗 r/LocalLLaMA Qwen3.6 27B uncensored heretic v2 Native MTP Preserved is Out Now With KLD 0.0021, 6/100 Refusals and the Full 15 MTPs Preserved and Retained, Available in Safetensors, GGUFs and NVFP4s formats. rss

      llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved: https://huggingface.co/llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved

      llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-GGUF: https://huggingface.co/llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-GGUF

      llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-NVFP4-GGUF: https://huggingface.co/llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-NVFP4-GGUF

      llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-NVFP4: https://huggingface.co/llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-NVFP4

      llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-NVFP4-MLP-Only: https://huggingface.co/llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-NVFP4-MLP-Only

      llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-GPTQ-Int4: https://huggingface.co/llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-GPTQ-Int4

      All are confirmed to have their full 15 MTPs retained and preserved.

      Comes with benchmark too.

      Find all my models here: HuggingFace- LLMFan46

      submitted by /u/LLMFan46
      [link] [comments]

    23. 🔗 r/Leeds Does anyone else remember when you could buy cats at kirkgate market ? rss

      And pirated DVDs, before 2010, and other crazy stuff. Or am I confusing it with the wrong place? I'm pretty sure we got a cat from there some time in the 2000s, but I could be wrong.

      submitted by /u/TipAdditional4625
      [link] [comments]

    24. 🔗 Console.dev newsletter honker rss

      Description: Durable queues for SQLite.

      What we like: Adds pub/sub, task queue, and event streams to SQLite. No need for client polling or a broker. Shipped as a SQLite extension with bindings for Python, Node, Rust, Go, Ruby, etc. Allows an INSERT and enqueue as part of the same transaction (with rollback). Also supports cron.

      What we dislike: Polling is via a SELECT per millisecond per database, which should be lightweight, but is an extra high-frequency query. Still experimental.
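The transactional-enqueue guarantee described above can be illustrated with plain SQLite. This is a generic sketch using a hand-rolled jobs table, not honker's actual extension or API: a business-row INSERT and its queue entry commit or roll back together.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT);
    CREATE TABLE jobs (id INTEGER PRIMARY KEY, topic TEXT, payload TEXT);
""")

def place_order(item: str) -> None:
    # One transaction: the order row and its queue entry land together
    with conn:
        cur = conn.execute("INSERT INTO orders (item) VALUES (?)", (item,))
        conn.execute(
            "INSERT INTO jobs (topic, payload) VALUES (?, ?)",
            ("order.created", str(cur.lastrowid)),
        )

place_order("widget")

# A failure mid-transaction rolls back both inserts
try:
    with conn:
        conn.execute("INSERT INTO orders (item) VALUES (?)", ("gadget",))
        raise RuntimeError("enqueue failed")
except RuntimeError:
    pass

print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 1
print(conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0])    # 1
```

Because both inserts share a transaction, a consumer can never see a queued job without its backing row, or vice versa.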

    25. 🔗 Console.dev newsletter Plow rss

      Description: HTTP benchmarking.

      What we like: Runs HTTP requests and benchmarks latency and response codes. Configurable concurrency, duration, request count, and ramp up time. Outputs stats to the terminal in real time. Supports JSON output and provides a web UI.

      What we dislike: Pretty straightforward HTTP request support, including different methods e.g. POST (with body). For more complex benchmarks, k6 is a good, scriptable alternative.

  3. May 06, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-05-06 rss

      Activity:

    2. 🔗 backnotprop/plannotator v0.19.10 release

      Follow @plannotator on X for updates


      Missed recent releases? Release | Highlights
      ---|---
      v0.19.9 | OpenCode user-managed workflow, Pi model switch fix, Codex skill install, shimmer removal
      v0.19.8 | 49 themes with syntax highlighting, keyboard shortcut registry, smart code-file path validation, remote URL notifications
      v0.19.7 | Codex Stop-hook plan review, Codex skills, sidebar auto-close, file tree context menu
      v0.19.6 | Non-blocking Pi browser sessions, agent picker dropdown for OpenCode, annotate-last file resolution fix
      v0.19.5 | All-files diff view, clickable code file paths, server-side hide whitespace, non-ASCII path support
      v0.19.4 | All-files diff type, code file viewer, hide whitespace, quick-settings popover
      v0.19.3 | Configurable feedback messages, hide merged PRs in stacked PR selector
      v0.19.2 | Stacked PR review, source line numbers in feedback, diff type dialog re-show, ghost dot removal, docs cleanup
      v0.19.1 | Hook-native annotation, custom base branch, OpenCode workflow modes, quieter plan diffs, anchor navigation
      v0.19.0 | Code Tour agent, GitHub-flavored Markdown, copy table as Markdown/CSV, flexible Pi planning mode, session-log ancestor-PID walk
      v0.18.0 | Annotate focus & wide modes, OpenCode origin detection, word-level inline plan diff, Markdown content negotiation, color swatches
      v0.17.10 | HTML and URL annotation, loopback binding by default, Safari scroll fix, triple-click fix, release pipeline smoke tests


      What's New in v0.19.10

      v0.19.10 reverts an unreviewed permission mode change that shipped in v0.19.9. The bypass-with-clear-reminder feature (PR #668) relied on Claude Code surfacing systemMessage from hook output, which it does not currently support. That PR has been fully reverted. All other v0.19.9 changes remain intact.

      Reverted: Bypass Permissions with /clear Reminder

      PR #668 added a synthetic bypassPermissionsClearReminder permission mode that was supposed to emit a system message reminding users to run /clear after plan approval. Testing revealed that Claude Code does not surface systemMessage fields from hook output to the conversation, so the feature had no visible effect. The entire PR has been reverted to avoid shipping dead UI options.

      Users on v0.19.9 who selected the "Bypass + /clear Reminder" mode in Settings will fall back to the default permission mode ("Accept Edits") after updating.


      Install / Update

      macOS / Linux:

      curl -fsSL https://plannotator.ai/install.sh | bash
      

      Windows:

      irm https://plannotator.ai/install.ps1 | iex
      

      Claude Code Plugin: Run /plugin in Claude Code, find plannotator , and click "Update now".

      OpenCode: Clear cache and restart:

      rm -rf ~/.bun/install/cache/@plannotator
      

      Then in opencode.json:

      {
        "plugin": ["@plannotator/opencode@latest"]
      }
      

      Pi: Install or update the extension:

      pi install npm:@plannotator/pi-extension
      

      What's Changed

      Full Changelog : v0.19.9...v0.19.10

    3. 🔗 backnotprop/plannotator v0.19.9 release

      Follow @plannotator on X for updates


      Missed recent releases? Release | Highlights
      ---|---
      v0.19.8 | 49 themes with syntax highlighting, keyboard shortcut registry, smart code-file path validation, remote URL notifications
      v0.19.7 | Codex Stop-hook plan review, Codex skills, sidebar auto-close, file tree context menu
      v0.19.6 | Non-blocking Pi browser sessions, agent picker dropdown for OpenCode, annotate-last file resolution fix
      v0.19.5 | All-files diff view, clickable code file paths, server-side hide whitespace, non-ASCII path support
      v0.19.4 | All-files diff type, code file viewer, hide whitespace, quick-settings popover
      v0.19.3 | Configurable feedback messages, hide merged PRs in stacked PR selector
      v0.19.2 | Stacked PR review, source line numbers in feedback, diff type dialog re-show, ghost dot removal, docs cleanup
      v0.19.1 | Hook-native annotation, custom base branch, OpenCode workflow modes, quieter plan diffs, anchor navigation
      v0.19.0 | Code Tour agent, GitHub-flavored Markdown, copy table as Markdown/CSV, flexible Pi planning mode, session-log ancestor-PID walk
      v0.18.0 | Annotate focus & wide modes, OpenCode origin detection, word-level inline plan diff, Markdown content negotiation, color swatches
      v0.17.10 | HTML and URL annotation, loopback binding by default, Safari scroll fix, triple-click fix, release pipeline smoke tests
      v0.17.9 | Hotfix: pin Bun to 1.3.11 for macOS binary codesign regression


      What's New in v0.19.9

      v0.19.9 adds a user-managed workflow mode for OpenCode, fixes model switching in Pi's plan approval flow, and introduces a bypass-with-reminder permission mode for Claude Code. Five PRs total, two from first-time contributors.

      User-Managed Workflow Mode for OpenCode

      OpenCode's plugin previously offered two extremes: manual (slash commands only, no submit_plan tool) and plan-agent/all-agents (full automation with prompt injection and permission overrides). Users who wanted the submit_plan tool without Plannotator modifying their system prompts or agent configuration had no middle ground.

      The new user-managed workflow mode fills that gap. It registers the submit_plan tool and all slash commands, but does not inject planning prompts, rewrite tool definitions, or modify OpenCode's agent configuration. Users manage their own prompts and permissions, and Plannotator stays out of the way. To enable it, set workflow: "user-managed" in your opencode.json plugin options.

      Existing configurations are unaffected. The default remains plan-agent, and unknown workflow strings continue to fall back to it.
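Assuming the plugin options sit alongside the plugin entry in opencode.json (the plannotator options key below is an illustrative guess, not a documented shape), enabling the mode looks roughly like:

```json
{
  "plugin": ["@plannotator/opencode@latest"],
  "plannotator": {
    "workflow": "user-managed"
  }
}
```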

      Pi Model Switch on Plan Approval

      When Pi users configured separate models for planning and execution (e.g., think with one model, execute with another), approving a plan was supposed to switch to the execution model. It didn't. Pi snapshots its model selection at the start of each agent.prompt() call, so calling pi.setModel() mid-loop had no effect until the next user-initiated turn. The user had to manually prompt the agent after every approval to trigger the switch.

      The fix terminates the current agent loop on plan approval by returning terminate: true from the tool response, then uses a deferred sendUserMessage in the agent_end handler to start a fresh turn that picks up the executing model. Execution now continues automatically on the correct model without manual intervention.
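The control flow can be sketched in plain Python (every name here is an illustrative stand-in, not Pi's real extension API): the loop snapshots its model once, so the only way to pick up a new model is to end the loop and start a fresh turn.

```python
class Agent:
    """Toy agent loop that snapshots its model once per prompt() call."""

    def __init__(self) -> None:
        self.model = "planning-model"

    def prompt(self, handle_turn) -> str:
        active = self.model            # snapshot: changing self.model mid-loop is invisible
        while True:
            result = handle_turn(self)
            if result.get("terminate"):
                return active          # loop ends; next prompt() re-reads self.model

def approve_plan(agent: Agent) -> dict:
    agent.model = "executing-model"    # switch takes effect on the *next* loop...
    return {"terminate": True}         # ...so terminate this one immediately

agent = Agent()
planned_on = agent.prompt(approve_plan)
# the deferred follow-up message then starts a fresh turn
executed_on = agent.prompt(lambda a: {"terminate": True})
print(planned_on, executed_on)  # planning-model executing-model
```

The fix's `terminate: true` return plays the role of `approve_plan` ending the loop, and the deferred `sendUserMessage` plays the role of the second `prompt()` call.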

      Bypass Permissions with /clear Reminder

      Claude Code's bypass-permissions mode skips the permission prompt on plan approval, but users often forget to run /clear afterward to reset context. A new "Bypass + /clear Reminder" option in the permission mode dropdown pairs bypass-permissions with a system message nudging the user to clear context after approval.

      The mode is available in both the Settings dropdown and the Approve button's extra options. It decomposes to the standard bypassPermissions wire value plus a clearContextNudge flag, so the hook output format is unchanged. Cookie validation was also hardened in this PR: stale or invalid permission mode values now safely fall back to the default instead of flowing through unchecked.
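The decomposition and the hardened fallback described above can be sketched as a pure function. The mode identifiers other than bypassPermissionsClearReminder are assumptions inferred from the release notes, not Plannotator's exact values:

```python
# Wire values Claude Code understands (assumed set, for illustration)
VALID_WIRE_MODES = {"default", "acceptEdits", "plan", "bypassPermissions"}

def decompose_permission_mode(mode: str) -> tuple[str, bool]:
    """Map a (possibly synthetic) UI mode to (wire value, clear-context nudge)."""
    if mode == "bypassPermissionsClearReminder":
        # Synthetic mode: standard wire value plus the nudge flag
        return "bypassPermissions", True
    if mode not in VALID_WIRE_MODES:
        # Hardened cookie validation: stale values fall back to the default
        return "acceptEdits", False
    return mode, False

print(decompose_permission_mode("bypassPermissionsClearReminder"))
print(decompose_permission_mode("garbage-from-old-cookie"))
```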

      Additional Changes

      • Remove shimmer animation from clickable file paths. The repeating shimmer on .code-file-link elements was distracting while reading plans. File path links now render as normal inline code with cursor and hover cues only. (#676 by @backnotprop, closing #672 reported by @academo)
      • Install Plannotator command skills under Codex home. The installer now places command-overlap skills (plannotator-review, plannotator-annotate, plannotator-last) in ~/.codex/skills/ when Codex is detected, and keeps shared-agent skills in ~/.agents/skills/. Stale cross-scope copies are cleaned up, and a ~/.codex/skills/ directory created by a previous install no longer triggers false Codex detection. (#669 by @backnotprop)
      • Avoid Pi bundled skill conflicts. The installer configures Pi's settings.json to disable bundled Plannotator skills when shared global skills are already installed, eliminating "duplicate skill" warnings. BOM-free UTF-8 writes on Windows prevent encoding issues with Pi's JSON parser.

      Install / Update

      macOS / Linux:

      curl -fsSL https://plannotator.ai/install.sh | bash
      

      Windows:

      irm https://plannotator.ai/install.ps1 | iex
      

      Claude Code Plugin: Run /plugin in Claude Code, find plannotator , and click "Update now".

      OpenCode: Clear cache and restart:

      rm -rf ~/.bun/install/cache/@plannotator
      

      Then in opencode.json:

      {
        "plugin": ["@plannotator/opencode@latest"]
      }
      

      Pi: Install or update the extension:

      pi install npm:@plannotator/pi-extension
      

      What's Changed

      • feat(opencode-plugin): add user-managed workflow mode by @saksmt in #667
      • feat: expose bypass clear reminder permission mode by @AgileInnov8tor in #668
      • fix: install Plannotator command skills under Codex home by @backnotprop in #669
      • fix(ui): remove shimmer animation from clickable file paths by @backnotprop in #676
      • fix(pi): terminate agent loop on plan approval so model switch takes effect by @backnotprop in #677

      New Contributors

      Contributors

      @saksmt identified the gap between OpenCode's manual and automated workflow modes and contributed the user-managed option (#667), giving users fine-grained control over prompt injection and tool registration. First contribution to the project.

      @AgileInnov8tor built the bypass-with-clear-reminder permission mode (#668), including the synthetic mode decomposition, cookie validation hardening, and the live settings sync fix. First contribution to the project.

      Community members whose reports drove fixes in this release:

      • @snowmead: #674 (Pi model switch not taking effect on plan approval)
      • @academo: #672 (shimmer animation distracting on file path links)

      Full Changelog : v0.19.8...v0.19.9

    4. 🔗 r/Harrogate Harrogate Traffic Relief rss

      The traffic in and around Harrogate is a joke, and has been commented on for as long as I can remember.

      But I’m curious: I’ve no idea how to solve it, so what are people’s suggestions? It seems to me there’s just nowhere to speed up flow or reroute around bottlenecks.

      Better buses? Bypasses? How do we fix it?

      submitted by /u/CyclePrevious9043
      [link] [comments]

    5. 🔗 anthropics/claude-code v2.1.132 release

      What's changed

      • Added CLAUDE_CODE_SESSION_ID environment variable to the Bash tool subprocess environment, matching the session_id passed to hooks
      • Added CLAUDE_CODE_DISABLE_ALTERNATE_SCREEN=1 env var to opt out of the fullscreen alternate-screen renderer and keep the conversation in the terminal's native scrollback
      • Added a "Pasting" footer hint while a Ctrl+V image paste is being read from the clipboard
      • Fixed external SIGINT (e.g. IDE stop button, kill -INT) not running graceful shutdown — terminal modes are now restored and the --resume hint is printed instead of an abrupt exit
      • Fixed an uncaught exception when the terminal is closed or SSH disconnects mid-session under the native build
      • Fixed --resume failing with no low surrogate in string when a tool error truncation split an emoji; pre-corrupted sessions are sanitized on load
      • Fixed --permission-mode flag being ignored when resuming a plan-mode session with -p --continue/--resume, and plan mode not being re-applied after ExitPlanMode within the same session
      • Fixed fullscreen mode showing a blank screen after laptop sleep/wake or Ctrl+Z/fg until the next keystroke or stream output
      • Fixed cursor landing mid-grapheme on Ctrl+E/A/K/U/arrow keys when an Indic conjunct or ZWJ emoji wraps across lines
      • Fixed vim operators corrupting text containing decomposed (NFD) accented characters
      • Fixed pasting text starting with / silently swallowing the input or triggering an unknown-command reply
      • Fixed pasting dumping stray escape sequences into the prompt when focus events or mouse-tracking reports interleave with the bracketed paste
      • Fixed mouse wheel scrolling being too fast in Cursor and VS Code 1.92–1.104 due to an upstream xterm.js bug
      • Fixed scroll-wheel handling in JetBrains IDE 2025.2 terminals (spurious arrow keys, wrong-direction events, runaway acceleration)
      • Fixed /usage Ctrl+S hanging when copying the stats screenshot to the clipboard on Linux/X11
      • Fixed /terminal-setup showing a contradictory error in Windows Terminal — Shift+Enter is natively supported there
      • Fixed /effort picker not reflecting the CLAUDE_CODE_EFFORT_LEVEL env var override
      • Fixed /status showing the wrong default model for some users
      • Fixed slash command autocomplete popup being capped at ~3–5 visible commands instead of scaling with terminal height
      • Fixed statusline context_window token counts reflecting cumulative session totals instead of current context usage
      • Fixed Alt+T (thinking toggle) not working on macOS terminals without "Option as Meta" enabled (iTerm2, Terminal.app defaults)
      • Fixed dead keyboard input on Windows after re-opening a background session from claude agents
      • Fixed unbounded memory growth (10GB+ RSS) when a stdio MCP server writes non-protocol data to stdout
      • Fixed MCP servers that connect but fail tools/list silently showing 0 tools — they now retry once and show "connected · tools fetch failed" in /mcp
      • Fixed unauthorized claude.ai MCP connectors showing as "failed" instead of "needs auth", and headless -p mode retrying non-transient 4xx connection failures
      • Improved visual consistency in slash command dialogs and /login, /upgrade, /extra-usage dialog spacing
      • Updated the /tui fullscreen startup banner to describe additional renderer benefits (lower memory usage, mouse support, auto-copy on select)
      • Fixed Bedrock and Vertex 400 errors when ENABLE_PROMPT_CACHING_1H is set
    6. 🔗 @HexRaysSA@infosec.exchange New training updates, plus Spring discounts: mastodon

      New training updates, plus Spring discounts:
      • On-demand Starter → 20% off with code STR20
      • AI-powered Intermediate → 40% off (May 12) with code AI-INTER40
      • Malware, Decompiler & Programming → 30% off with code SPRING30

      Details + course breakdown: https://hex-rays.com/blog/spring-training-sale-2026
      *Limited time offer, check blog for expiration dates!

    7. 🔗 r/LocalLLaMA ZAYA1-8B: Frontier intelligence density, trained on AMD rss

      submitted by /u/carbocation
      [link] [comments]

    8. 🔗 r/york Moving back - flat hunting rss

      I'm coming home! So excited to be moving back but slightly worried about finding a flat after a few years abroad. I know the drill since the last time I lived there, but wanted to see if anything has changed - do things still move at the speed of light - by the time something hits Rightmove, it's already full of viewings and likely to be gone tomorrow - is that still the case?

      I can't remember what month most student lets turn over / when the most availability is...? (I know the new system may impact this)

      Should I just book a hotel and wait till I'm in town to sort out viewings? (and trust I'll find somewhere within a week?)

      Budget is 1.1-1.5k, would like to be relatively near the uni. I know the dust is still settling from the new Renters' rights and I've read so many posts on here about where to look/ agents to avoid etc, but curious how things feel locally lately.

      Last but not least - any anecdotes for getting pets approved since the rule changes? Any differences between getting a cat approved (vs dogs)?

      Thanks!

      submitted by /u/fruitloopfitness
      [link] [comments]

    9. 🔗 r/Leeds Does anyone remember Toyworld Megastore? rss

      As a kid I loved this toy shop. It was on the Headrow, attached to the Headrow Shopping Centre (later turned into The Core, now demolished), to the right of the entrance; the same unit later became GAME. It seems to have had a very short lifespan, opening and closing in the mid-2000s, though there was another store on the top floor of the Headrow Shopping Centre in the 90s.

      Some of the only info I can find online, is my own reddit post from 3 years ago, https://www.reddit.com/r/Leeds/comments/z57afp/does_anyone_remember_toyworld_megastore/

      I'd love to find a photo of the store, or literally any info/memories - it's basically all gone and I'm so annoyed at myself for not having saved the one photo that existed 3 years ago.

      Thank you in advance!

      submitted by /u/Same_Ability3423
      [link] [comments]

    10. 🔗 r/Yorkshire Silktone Waggonway rss

      I make short videos about forgotten history around Yorkshire, specifically Barnsley. Here's my latest short on the Silkstone Waggonway.

      submitted by /u/9arke1
      [link] [comments]

    11. 🔗 Hex-Rays Blog New Training Formats, New Workflows, New Skills rss

      We’ve made meaningful updates to our training lineup with the introduction of a new on-demand format for beginners, integration of AI into our Intermediate course, and expanded hands-on content across advanced trainings.

    12. 🔗 Simon Willison Live blog: Code w/ Claude 2026 rss

      I'm at Anthropic's Code w/ Claude event today. Here's my live blog of the morning keynote sessions.


    13. 🔗 r/Leeds An afternoon in Leeds. rss

      Today I got a lovely change of pace, a hot tap in Leeds just 2 miles from where I live which is great because I've been perpetually up near Consett and Seal Sands sorting out P11's and staying in impersonal hotels and pubs.

      So I had a wander into Leeds City Centre on a weekday after sorting out the permits; the change of pace compared to the weekend is huge. It's been years since I've been into Leeds during the week for leisure.

      Found it nice to just wander; I'm just having a coffee in the indoor market. My wife's coming through after she finishes work and I'm treating us to Blue Sakura.

      Just some aimless musing. Leeds is a good place and it deserves some aimless musing over a nice coffee.

      submitted by /u/EdwardJSuperman
      [link] [comments]

    14. 🔗 r/LocalLLaMA None of this will ever get stolen rss

      It's crazy that they're thinking of doing this. There are problems with people stealing catalytic converters off people's cars and now they want to put a rack outside your house!?

      submitted by /u/martin_xs6
      [link] [comments]

    15. 🔗 r/york Lost keys rss

      I lost a set of keys with a black carabiner on them, two old style keys and one modern one, within the nunnery lane area.

      Any leads?
      I'm really worried😓

      submitted by /u/soupygirls
      [link] [comments]

    16. 🔗 Simon Willison Vibe coding and agentic engineering are getting closer than I'd like rss

      I recently talked with Joseph Ruscio about AI coding tools for Heavybit's High Leverage podcast: Ep. #9, The AI Coding Paradigm Shift with Simon Willison. Here are some of my highlights, including my disturbing realization that vibe coding and agentic engineering have started to converge in my own work.

      One thing I really enjoy about podcasts is that they sometimes push me to think out loud in a way that exposes an idea I've not previously been able to put into words.

      Vibe coding and agentic engineering are starting to overlap

      A few weeks after vibe coding was first coined I published Not all AI-assisted programming is vibe coding (but vibe coding rocks), where I firmly staked out my belief that "vibe coding" is a very different beast from responsible use of AI to write code, which I've since started to call agentic engineering.

      When Joseph brought up the distinction between the two I had a sudden realization that they're not nearly as distinct for me as they used to be:

      Weirdly though, those things have started to blur for me already, which is quite upsetting.

      I thought we had a very clear delineation where vibe coding is the thing where you're not looking at the code at all. You might not even know how to program. You might be a non-programmer who asks for a thing, and gets a thing, and if the thing works, then great! And if it doesn't, you tell it that it doesn't work and cross your fingers.

      But at no point are you really caring about the code quality or any of those additional constraints. And my take on vibe coding was that it's fantastic, provided you understand when it can be used and when it can't.

      A personal tool for you, where if there's a bug it hurts only you, go ahead!

      If you're building software for other people, vibe coding is grossly irresponsible because it's other people's information. Other people get hurt by your stupid bugs. You need to have a higher level than that.

      This contrasts with agentic engineering where you are a professional software engineer. You understand security and maintainability and operations and performance and so forth. You're using these tools to the highest of your own ability. I'm finding the scope of challenges I can take on has gone up by a significant amount because I've got the support of these tools.

      But I'm still leaning on my 25 years of experience as a software engineer.

      The goal is to build high quality production systems: if you're building lower quality stuff faster, I think that's bad. I want to build higher quality stuff faster. I want everything I'm building to be better in every way than it was before.

      The problem is that as the coding agents get more reliable, I'm not reviewing every line of code that they write anymore, even for my production level stuff.

      I know full well that if you ask Claude Code to build a JSON API endpoint that runs a SQL query and outputs the results as JSON, it's just going to do it right. It's not going to mess that up. You have it add automated tests, you have it add documentation, you know it's going to be good.

      But I'm not reviewing that code. And now I've got that feeling of guilt: if I haven't reviewed the code, is it really responsible for me to use this in production?

      The thing that really helps me is thinking back to when I've worked at larger organizations where I've been an engineering manager. Other teams are building software that my team depends on.

      If another team hands over something and says, "hey, this is the image resize service, here's how to use it to resize your images"... I'm not going to go and read every line of code that they wrote.

      I'm going to look at their documentation and I'm going to use it to resize some images. And then I'm going to start shipping my own features. And if I start running into problems where the image resizer thing appears to have bugs or the performance isn't good, that's when I might dig into their Git repositories and see what's going on. But for the most part I treat that as a semi-black box that I don't look at until I need to.

      I'm starting to treat the agents in the same way. And it still feels uncomfortable, because human beings are accountable for what they do. A team can build a reputation. I can say "I trust that team over there. They built good software in the past. They're not going to build something rubbish because that affects their professional reputations."

      Claude Code does not have a professional reputation! It can't take accountability for what it's done. But it's been proving itself anyway - time and time again it's churning out straightforward things and doing them right in the style that I like.

      There's an element of the normalization of deviance here - every time a model turns out to have written the right code without me monitoring it closely there's a risk that I'll trust it at the wrong moment in the future and get burned.

      The new challenge of evaluating software

      It used to be if you found a GitHub repository with a hundred commits and a good readme and automated tests and stuff, you could be pretty sure that the person writing that had put a lot of care and attention into that project.

      And now I can knock out a git repository with a hundred commits and a beautiful readme and comprehensive tests of every line of code in half an hour! It looks identical to those projects that have had a great deal of care and attention. Maybe it is as good as them. I don't know. I can't tell from looking at it. Even for my own projects, I can't tell.

      So I realized what I value more than the quality of the tests and documentation is that I want somebody to have used the thing. If you've got a vibe coded thing which you have used every day for the past two weeks, that's much more valuable to me than something that you've just spat out and hardly even exercised.

      The bottlenecks have shifted

      If you can go from producing 200 lines of code a day to 2,000 lines of code a day, what else breaks? The entire software development lifecycle was, it turns out, designed around the idea that it takes a day to produce a few hundred lines of code. And now it doesn't.

      It's not just the downstream stuff, it's the upstream stuff as well. I saw a great talk by Jenny Wen, who's the design leader at Anthropic, where she said we have all of these design processes that are based around the idea that you need to get the design right - because if you hand it off to the engineers and they spend three months building the wrong thing, that's catastrophic.

      There's this whole very extensive design process that you put in place because getting the design wrong leads to expensive work. But if it doesn't take three months to build, maybe the design process can be a whole lot riskier, because the cost of getting something wrong has been reduced so much.

      Why I'm still not afraid for my career

      When I look at my conversations with the agents, it's very clear to me that this is moon language for the vast majority of human beings.

      There are a whole bunch of reasons I'm not scared that my career as a software engineer is over now that computers can write their own code, partly because these things are amplifiers of existing experience. If you know what you're doing, you can run so much faster with them. [...]

      I'm constantly reminded as I work with these tools how hard the thing that we do is. Producing software is a ferociously difficult thing to do. And you could give me all of the AI tools in the world and what we're trying to achieve here is still really difficult. [...]

      Matthew Yglesias, who's a political commentator, yesterday tweeted, "Five months in, I think I've decided that I don't want to vibecode — I want professionally managed software companies to use AI coding assistance to make more/better/cheaper software products that they sell to me for money." And that feels about right to me. I can plumb my house if I watch enough YouTube videos on plumbing. I would rather hire a plumber.

      On the threat to SaaS providers of companies rolling their own solutions instead:

      I just realized it's the thing I said earlier about how I only want to use your side project if you've used it for a few weeks. The enterprise version of that is I don't want a CRM unless at least two other giant enterprises have successfully used that CRM for six months. [...] You want solutions that are proven to work before you take a risk on them.

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    17. 🔗 r/reverseengineering pyghidra-mcp Meets Ghidra GUI: Drive Project-Wide RE with Local AI rss
    18. 🔗 r/york York station gateway what do you think? rss

      York station gateway what do you think? | submitted by /u/Coffee000Oopss
      [link] [comments]
      ---|---

    19. 🔗 r/Leeds I bought a job lot of antique postcards from Leeds off eBay rss

      When I saw 50 antique postcards of Leeds on eBay for £20 it was a no-brainer of a buy!

      Most date to the first decade of the 20th century and they include lovely, stylised images of streets that look so familiar but also very different. Some also have messages on the back, frankly irresistible to a nosy person such as myself.

      I've posted a gallery of some of the best ones on my Leeds history newsletter, Bury the Leeds, which is free to read and to subscribe to.

      https://burytheleeds.substack.com/p/looking-back-at-leeds-through-antique

      My favourite is the image of Headingley from 1909 which includes the beast of a stump of the Shire Oak, an ancient tree that was said to have stood on Otley Road for 1,000 years. By the 20th century, only a hulking stump remained before that was destroyed during a storm in 1941. The Original Oak pub is named after it and so is the Skyrack, which is an old timey derivation of 'Shire Oak'.

      I also love the one of the fashionable ladies promenading down Woodhouse Moor in 1904 and the very evocative shots of Briggate and Boar Lane, when trams ruled. You can really imagine how these busy streets must have sounded back then.

      I'm giving the postcards away with a book I've made featuring some of my most interesting and unusual stories about the city. I know several r/Leeds redditors have ordered copies. I'm celebrating one year of this project now so thanks for the support and to the mods!

      submitted by /u/bluetrainlinesss
      [link] [comments]

    20. 🔗 r/LocalLLaMA Bad news: Apple drops high-memory Mac Studio configs rss

      Bad news: Apple drops high-memory Mac Studio configs | Looks like Apple has quietly killed off the higher-memory Mac Studio options. The M3 Ultra Mac Studio is now only available with 96GB RAM. The 512GB option was already removed back in March, and now the 256GB config is gone too. Apple has said both the Mac Studio and Mac mini will stay supply-constrained for the next few months. The Mac mini is also stuck at 48GB RAM max for now. Probably their high-memory chip stock got too expensive to keep producing. This is a real bummer for us! Big unified memory configs were one of the few (relatively) affordable ways to run large models locally. I am glad I own the M3 Ultra 512, will definitely keep this one (my favorite local model is Qwen 397b atm). submitted by /u/jzn21
      [link] [comments]
      ---|---

    21. 🔗 r/Yorkshire Please get out there and vote May 7th (tomorrow.) rss

      The North is often neglected by the government, so the best chance that YOU have to get the work done in your area is by voting in the local election tomorrow.

      If you don’t know who to vote for, do your research and see who aligns more with your community. Vote based on who you believe will help your local area the most.

      This isn’t a political soapbox post, I won’t tell you who to vote for. Just please, use your voice. There are a lot of cunts who just wanna use your seat and sit on it, and nothing will ever change. This is an important election with a lot of new voices who could genuinely help your local ward. I wish the best for your local area in the next 4 years and that’s why I’m making this post!

      We don’t get a lot of chances to enact change, so it’s best to use it when we can.

      submitted by /u/coolfunkDJ
      [link] [comments]

    22. 🔗 tomasz-tomczyk/crit Spotify popup-relay preview (bb4d9fb) release

      WIP build of crit with share_flow: "popup" config support for SSO-protected crit-web instances.

      Setup instructions: SPOTIFY-PREVIEW.md

      Pair with crit-web branch share-receiver-elixir (commit ed01b25).

      Built from commit bb4d9fb of branch share-receiver.

      Feedback / issues: tomasz-tomczyk/crit-web#50

    23. 🔗 Anton Zhiyanov Solod v0.1: Go ergonomics, practical stdlib, native C interop rss

      Solod (So) is a system-level language with Go syntax and zero runtime. It's designed for two main audiences:

      • Go developers who want low-level control and zero-cost C interop, without having to learn a new language or standard library.
      • C developers who like Go's style.

      The initial version (let's call it v0) was focused on picking a subset of Go and translating it to C. The next logical step was to port Go's standard library and make it easier to interop with C. That's what the v0.1 release I'm presenting today is all about.

      Standard library • SQLite bindings • Persistent map • Store and retrieve • Command-line interface • Performance • Wrapping up

      Standard library

      Solod v0.1 ships with the following stdlib packages ported from Go:

      • io, bufio, and fmt — Abstractions and types for general-purpose I/O.
      • bytes, strings, strconv, and unicode/utf8 — Common byte and text operations.
      • slices and maps — Generic heap-allocated data structures.
      • crypto/rand and math/rand — Generating random data.
      • flag, os, and path — Working with the command line and files.
      • log/slog — Structured logging.
      • time — Measuring and displaying time.

      And a couple of its own packages:

      • mem — Memory allocation with a pluggable allocator interface.
      • c — Low-level C interop helpers.

      Stdlib documentation

      In the following sections, I'll demonstrate some of the v0.1 features using a simple example: a persistent key-value store backed by SQLite.

      SQLite bindings

      Since So doesn't provide database/sql yet, we'll call SQLite directly through its C API. To do this, let's import the necessary headers with the so:include directive and generate extern declarations using the sobind tool:

      package main
      
      import "solod.dev/so/c"
      
      //so:include <sqlite3.h>
      
      // SQLite constants.
      //
      //so:extern SQLITE_OK
      const sqliteOK = 0
      //so:extern SQLITE_ROW
      const sqliteRow = 100
      //so:extern SQLITE_DONE
      const sqliteDone = 101
      
      // SQLite types.
      //
      //so:extern
      type sqlite3 struct{}
      //so:extern
      type sqlite3_stmt struct{}
      //so:extern
      type sqlite3_value struct{}
      //so:extern
      type sqlite3_callback func(any, int32, **c.Char, **c.Char) int32
      
      // SQLite functions.
      func sqlite3_open(filename string, ppDb **sqlite3) int32
      func sqlite3_prepare_v2(db *sqlite3, zSql string, nByte int32, ppStmt **sqlite3_stmt, pzTail **c.ConstChar) int32
      func sqlite3_step(arg0 *sqlite3_stmt) int32
      func sqlite3_finalize(pStmt *sqlite3_stmt) int32
      func sqlite3_close(arg0 *sqlite3) int32
      func sqlite3_exec(arg0 *sqlite3, sql string, callback sqlite3_callback, arg3 any, errmsg **c.Char) int32
      
      // more declarations...
      

      The so:extern directive is required for constants (sqliteOK) and types (sqlite3_stmt). As for functions (sqlite3_prepare_v2), we can just declare them without a body — the transpiler will treat them as extern declarations even without so:extern.
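      This bodyless-declaration pattern has parallels in other FFI systems. As a comparison (not Solod itself), Python's ctypes also works by stating a foreign function's signature and letting the loader resolve the symbol. A minimal sketch using libc's strlen, assuming a Unix-like system where the process links libc:

```python
import ctypes

# Load the symbols already linked into the current process (libc on
# Unix-like systems) -- analogous to resolving an extern declaration.
libc = ctypes.CDLL(None)

# Declare the foreign signature; the "body" lives in libc itself.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello"))  # → 5
```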

      Persistent map

      With the SQLite API in place, let's implement a key-value type that wraps the database connection:

      // SQLMap is a simple key-value store backed by an SQLite database.
      type SQLMap struct {
          db *sqlite3
      }
      

      Add a constructor that connects to an SQLite database and creates a table to store the items:

      var ErrCreate = errors.New("sqlmap: create schema failed")
      const sqlCreate = "create table if not exists kv (key text primary key, val)"
      
      // NewSQLMap creates a new SQLMap using the provided connection string.
      // It opens a connection to the SQLite database and creates the underlying
      // key-value table if it does not already exist.
      //
      // The caller is responsible for calling Close on the returned SQLMap
      // when it is no longer needed.
      func NewSQLMap(connStr string) (SQLMap, error) {
          var db *sqlite3
          rc := sqlite3_open(connStr, &db)
          if rc != sqliteOK {
              return SQLMap{}, ErrCreate
          }
      
          rc = sqlite3_exec(db, sqlCreate, nil, nil, nil)
          if rc != sqliteOK {
              sqlite3_close(db)
              return SQLMap{}, ErrCreate
          }
          return SQLMap{db}, nil
      }
      
      // Close releases resources associated with the SQLMap.
      func (m *SQLMap) Close() {
          sqlite3_close(m.db)
      }
      

      As you can see, this So code looks a lot like regular Go code. However, there are some key differences:

      • When compiled, the code is first translated to plain C, then compiled into a native binary using GCC or Clang.
      • Unlike Go, there is no runtime (no automatic heap memory allocation, no garbage collection, no goroutine scheduler).
      • There is no overhead when calling C functions, unlike Go's Cgo.
      • The interop syntax is a bit cleaner. For example, Go's string (sqlCreate in the sqlite3_exec call) automatically decays to C's const char*.

      Store and retrieve

      First, let's implement the Set method:

      var (
          ErrPrepare = errors.New("sqlmap: prepare failed")
          ErrExec    = errors.New("sqlmap: exec failed")
      )
      
      const sqlSet = "insert or replace into kv (key, val) values (?, ?)"
      
      // Set stores a string value for the specified key.
      func (m *SQLMap) Set(key string, val string) error {
          var stmt *sqlite3_stmt
          rc := sqlite3_prepare_v2(m.db, sqlSet, -1, &stmt, nil)
          if rc != sqliteOK {
              return ErrPrepare
          }
          defer sqlite3_finalize(stmt)
      
          sqlite3_bind_text(stmt, 1, key, int32(len(key)), nil)
          sqlite3_bind_text(stmt, 2, val, int32(len(val)), nil)
      
          rc = sqlite3_step(stmt)
          if rc != sqliteDone {
              return ErrExec
          }
          return nil
      }
      

      No surprises here, just a bunch of SQLite API calls.

      The Get method is more interesting:

      var ErrNotFound = errors.New("sqlmap: not found")
      const sqlGet = "select val from kv where key = ?"
      
      // Get returns the value associated with the specified key.
      // The caller owns the returned string and must free it with mem.FreeString.
      func (m *SQLMap) Get(a mem.Allocator, key string) (string, error) {
          var stmt *sqlite3_stmt
          rc := sqlite3_prepare_v2(m.db, sqlGet, -1, &stmt, nil)
          if rc != sqliteOK {
              return "", ErrPrepare
          }
          defer sqlite3_finalize(stmt)
      
          sqlite3_bind_text(stmt, 1, key, int32(len(key)), nil)
          rc = sqlite3_step(stmt)
          if rc == sqliteDone {
              return "", ErrNotFound
          }
          if rc != sqliteRow {
              return "", ErrExec
          }
      
          text := sqlite3_column_text(stmt, 0)
          tmp := c.String(text)
          result := strings.Clone(a, tmp)
          return result, nil
      }
      

      The pointer returned by sqlite3_column_text is managed by SQLite. It becomes invalid after calling sqlite3_finalize (which Get does before returning). Because of this, we need to allocate a copy of the returned value, using strings.Clone in this case.

      So's approach to memory allocation is similar to Zig's — all heap allocations must be done explicitly by providing a specific instance of the mem.Allocator interface.

      The caller, of course, must free the allocated string:

      func main() {
          m, err := NewSQLMap(":memory:")
          if err != nil {
              panic(err)
          }
          defer m.Close()
      
          m.Set("name", "Alice")
          name, err := m.Get(mem.System, "name")
          if err != nil {
              panic(err)
          }
          println("name =", name)
          mem.FreeString(mem.System, name)
      }
      
      
      
      name = Alice
      

      Here, mem.System is a specific allocator that uses libc's malloc and free. Alternatively, we could use mem.Arena or any other implementation of the mem.Allocator interface:

      var buf [1024]byte // stack-allocated
      arena := mem.NewArena(buf[:])
      
      name, _ := m.Get(&arena, "name")
      mem.FreeString(&arena, name) // no-op for arena; can be omitted
      

      Command-line interface

      With the SQLMap type in place, let's create a simple CLI using the flag package:

      var (
          opFlag  string
          keyFlag string
          valFlag string
      )
      
      func parseFlags() {
          flag.StringVar(&opFlag, "op", "", "operation: get, set, or del")
          flag.StringVar(&keyFlag, "key", "", "key name")
          flag.StringVar(&valFlag, "val", "", "value (for set operation)")
          flag.Parse()
      }
      
      func main() {
          parseFlags()
          // ...
      }
      

      Then add command routing:

      m, err := NewSQLMap("sqlmap.db")
      check(err)
      defer m.Close()
      
      switch opFlag {
      case "set":
          err = m.Set(keyFlag, valFlag)
          check(err)
      case "get":
          val, err := m.Get(mem.System, keyFlag)
          check(err)
          println(val)
          mem.FreeString(mem.System, val)
      case "del":
          err = m.Delete(keyFlag)
          check(err)
      default:
          flag.Usage()
          os.Exit(1)
      }
      
      
      
      sqlmap -op=set -key=name -val=alice
      sqlmap -op=get -key=name
      alice
      

      Again, no surprises here — the flag package works just as it does in Go.

      Performance

      Solod isn't trying to outperform hand-tuned C. Still, performance matters: the code is benchmarked and optimized to run reasonably fast. Since So compiles to plain C and then to native code with full optimizations, the results are sometimes better than Go's.

      Here are some highlights from the benchmarks:

      • Buffered I/O is 3x faster than Go.
      • String and byte operations are up to 2.5x faster.
      • Maps are 1.5x faster for modifications.
      • Integer formatting is 2x faster.

      There are no GC pauses and no Cgo bridge cost when calling C libraries. The tradeoff is that you have to handle memory yourself, but as the SQLite example above shows, So's allocator interface makes that pretty manageable.

      Solod vs. Go benchmarks

      Wrapping up

      Solod is still in its early days, but with the v0.1 release, it's ready for hobby projects. The already-ported parts of the Go standard library make it easy to write command-line tools (check out the cat, head, sort, and wc examples). Plus, with native C interop, you can build just about anything else you need.

      The next release (v0.2) will likely focus on networking, concurrency, or both — along with more stdlib packages.

      If you're interested, take a look at So's readme — it has all the information you need to get started. Or try So online without installing anything.

    24. 🔗 r/york York Dungeon investigates 'poltergeist' after tumblers fall from shelves rss
    25. 🔗 sacha chua :: living an awesome life La semaine du 27 avril au 3 mai rss

      Monday, April 27

      I added real-time navigation to my subed.el package. It was already very handy for adding the chapters to the transcript of my conversation with John Wiegley and Karthik Chikmagalur. It still needs a small tweak to convert the notes I took during the conversation.

      I took my daughter to her gymnastics class. There was a substitute coach. I was delighted to see that the substitute wore a KN-95 mask without being asked.

      I arranged with my mother to install the BDO Pay app on my phone.

      I got the pieces ready to sew my hat, like the one I had sewn for my daughter.

      Tuesday 28

      I took my daughter to Adventure Alley to play with her friends. It was a bit expensive, but she had fun, so it's not a problem if we go there from time to time.

      Wednesday 29

      The replacement screen arrived at the Apple store, so I'll go there tomorrow.

      I rewrote part of the EmacsNewbie page on the EmacsWiki.

      My daughter sewed my hat.

      In Stardew Valley, we bought a pig and a sheep. We upgraded the coop to a big coop and added a kitchen to our house.

      Thursday 30

      I had a delightful conversation with Prot about the Emacs editor experience for beginners.

      My husband, my daughter, and I went cycling with her friend and her friend's father.

      In Stardew, my daughter noticed that I had accidentally bought a cow, which I named Goat, instead of the goat I had planned to buy for the community center. Oops! She found this very funny and asked me, when I finally buy a goat, to name it Cow. The animals will be very confused, and so will I. I did it anyway.

      Friday, May 1

      The school had a substitute teacher and my daughter didn't want to attend, so I told the school she'd be absent and we compromised between her homework and games.

      We went to the Stockyards to buy fabric for her swimsuit. She found the two colours she wanted, but only one yard of one colour was left. We'll have to plan carefully. We bought thread at Michaels. She also bought a box of mochi puffs at Marry Me Mochi.

      She sewed some of the seams on my hat.

      Saturday 2

      For breakfast, my daughter made a big omelette using six eggs. We feasted.

      My daughter was grumpy because I drew attention to her fidgeting and she felt I was on her case.

      The Apple store couldn't repair my tablet's screen, so they replaced it with a new tablet for a small fee. It turned out the Apple Pencil was covered by my AppleCare+ warranty after all, but unfortunately it was out of stock all over town, so I'll have to wait about a week.

      Once home, I found that my daughter had calmed down. She and I played with Duplo, which is also a LEGO product, but bigger than the usual bricks. I used them to show my daughter math concepts like permutations and combinations.

      Sunday 3

      My husband and I cycled downtown with my daughter in my cargo bike. My daughter and I tried the mochi at Kibo (it was delicious) before continuing to MEC to look for a new water bottle to replace the one I lost. She didn't see anything she liked. We also bought a wooden mannequin to make sewing prototypes easier, and some watercolour pencils to explore.

      Once home, my husband baked a sourdough loaf that he'll give to our daughter's friend's father, following their conversation on Friday. My daughter and I worked on the plan for her swimsuit. She wanted a wrap bodice and a tulip-hem skirt. For the back, she wanted crossed straps with a small drop back.

      I was tired, so I took a nap. My daughter came to wake me up. I noticed my eyes were very dry, so she negotiated to bring me eye drops and administered them for 25 cents.

      You can e-mail me at sacha@sachachua.com.

    26. 🔗 tomasz-tomczyk/crit v0.10.5 release

      What's Changed

      A maintenance release with broad fixes across the GitHub PR roundtrip, the comment-sync push/pull pipeline, and the local review UI — plus accessibility polish on the sidebar resize handles and a distinct "Approved" state on the review-finish modal.

      General

      Fixes

      Documentation

      Internal refactors

      Full Changelog : v0.10.4...v0.10.5

    27. 🔗 r/york First-Time DM looking for DnD players in York! rss

      Hey everyone! I've been wanting to DM something for a while now and I've been planning a campaign that I'm pretty excited about.

      I've got one player on board so far, so I just need three more players to be able to start playing! The two of us are 26/27, so ideally we're looking for people around the same age.

      If you're interested, just let me know and I'll DM you with more details 😄

      submitted by /u/WeirdoWolfBoy
      [link] [comments]

    28. 🔗 r/LocalLLaMA 2.5x faster inference with Qwen 3.6 27B using MTP - Finally a viable option for local agentic coding - 262k context on 48GB - Fixed chat template - Drop-in OpenAI and Anthropic API endpoints rss

      2026-05-07 edit: I have updated the hardware-based recommendations with more focus on quality. I no longer recommend q4_0 KV cache beyond 64k context. After multiple rounds of testing with the different size quants, it appears 3 is the optimal number for draft speculative decoding. The fastest and best quality quant is q8_0-mtp. F16, which I have also uploaded, is actually better but ultra slow (6x slower than q8_0). Many keep saying 8bit is virtually lossless compared to 16bit, and 6bit almost as good as 8bit, but this is simply not true: time and time again I have noticed huge differences in quality and correctness between 8bit and 16bit versions of various models.

      The recent PR to llama.cpp brings MTP support to Qwen 3.6 27B. It uses the model's built-in tensor layers for speculative decoding. None of the existing GGUFs have it, as they need to be converted with this PR.

      I have tested it locally on my mac M2 Max 96GB, and the results are amazing: 2.5x speed increase, bringing it to 28 tok/s!

      I have converted the most useful quants and uploaded them to HF. Even if you are using apple silicon, you should use those instead of MLX. You can download them here:

      https://huggingface.co/froggeric/Qwen3.6-27B-MTP-GGUF

      This also includes 7 fixes I made to the original jinja chat template, due to vLLM specificity which broke in other tools:

      https://huggingface.co/froggeric/Qwen-Fixed-Chat-Templates

      For now, you will need to compile your own version of llama.cpp to use them. It is fairly simple to do:

      ```bash
      git clone --depth 1 https://github.com/ggml-org/llama.cpp.git
      cd llama.cpp
      git fetch origin pull/22673/head:mtp-pr && git checkout mtp-pr

      cmake -B build -DGGML_METAL=ON -DCMAKE_BUILD_TYPE=Release
      cmake --build build --target llama-cli llama-server
      ```

      Then to start serving with the API endpoint, use a command similar to:

      ```bash
      llama-server -m Qwen3.6-27B-Q5_K_M-mtp.gguf \
        --spec-type mtp --spec-draft-n-max 3 \
        --cache-type-k q8_0 --cache-type-v q8_0 \
        -np 1 -c 262144 --temp 0.7 --top-k 20 -ngl 99 --port 8081
      ```

      Vision currently crashes llama.cpp when used alongside MTP. Reported 2026-05-06 in the current PR.

      That's it. Three optimizations in one command:

      Flag | What it does | Impact
      ---|---|---
      --spec-type mtp --spec-draft-n-max 3 | Multi-Token Prediction (built into the model) | 2.5x faster generation
      --cache-type-k q8_0 --cache-type-v q8_0 | 8-bit KV cache (instead of 16-bit) | Half the KV memory , negligible quality loss
      -c 262144 | 262K context window | Full native context on 48 GB Mac with q8_0 KV

      Adjust -m, -c, and --cache-type-k/v for your hardware, according to the tables below.

      Here are my recommendations based on your hardware:

      Apple Silicon

      Qwen3.6-27B is a hybrid model — only 16 of 65 layers use KV cache (verified). The other 49 are linear attention (fixed 898 MiB recurrent state). KV memory is ~4× less than a standard dense model. Runtimes that don't handle this (e.g. vLLM) allocate KV for all 65 layers and show much higher memory usage.

      Numbers below are total memory used (model + KV cache + 0.9 GB recurrent state). Must leave ≥ 8 GB for macOS (16 GB Macs excepted).
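      As a sanity check on those numbers, a back-of-envelope sketch (Python; the byte sizes are rough illustrative figures, and q8_0 in practice carries a small per-block scale overhead):

```python
dense_layers = 65   # total layers in Qwen3.6-27B
kv_layers = 16      # only these keep a KV cache (per the post)

# KV memory scales with the number of KV-bearing layers, so relative
# to a dense model of the same depth:
saving = dense_layers / kv_layers
print(f"KV memory vs dense: ~{saving:.1f}x smaller")  # ~4.1x

# Quantizing the KV cache from f16 (2 bytes/elem) to q8_0 (~1 byte/elem)
# roughly halves whatever KV memory remains:
f16_bytes, q8_bytes = 2, 1
print(f"q8_0 KV cache: ~{f16_bytes / q8_bytes:.0f}x less KV memory")  # ~2x
```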

      RAM | Quant | KV cache | Max context | Total used | Vision
      ---|---|---|---|---|---
      16 GB | IQ2_M | q8_0 | 42K | 12.0 GB | ✗
      24 GB | IQ3_M | | 46K | 16.0 GB | ✗
      24 GB | IQ3_M | q8_0 | 91K | 16.0 GB | ✗
      32 GB | Q5_K_M | | 74K | 24.0 GB | ✗
      32 GB | Q5_K_M | q8_0 | 147K | 24.0 GB | ✗
      32 GB | Q4_K_M | | 99K | 24.0 GB | ✓
      48 GB | Q6_K | | 262K | 39.7 GB | ✓
      48 GB | Q8_0 | | 173K | 40.0 GB | ✓
      48 GB | Q8_0 | q8_0 | 262K | 37.3 GB | ✓
      64 GB | Q8_0 | | 262K | 45.8 GB | ✓
      96 GB | Q8_0 | | 262K | 45.8 GB | ✓

      NVIDIA GPU

      Same model memory as Apple Silicon, plus ~1 GB CUDA overhead.

      VRAM | Quant | KV cache | Max context | Total VRAM used | Vision
      ---|---|---|---|---|---
      12 GB | IQ2_M | q8_0 | 11K | 12.0 GB | ✗
      16 GB | IQ3_M | | 30K | 16.0 GB | ✗
      16 GB | IQ3_M | q8_0 | 60K | 16.0 GB | ✗
      24 GB | Q4_K_M | | 83K | 24.0 GB | ✓
      24 GB | Q4_K_M | q8_0 | 167K | 24.0 GB | ✓
      24 GB | Q5_K_M | | 58K | 24.0 GB | ✗
      48 GB | Q6_K | | 262K | 40.7 GB | ✓
      48 GB | Q8_0 | | 262K | 46.8 GB | ✓
      80 GB | Q8_0 | | 262K | 46.8 GB | ✓

      16 GB Mac: IQ2_M/q8_0 — 42K text-only. No vision.

      24 GB Mac: IQ3_M — 46K (f16 KV) or 91K (q8_0). Vision at 32–65K.

      32 GB Mac: Q5_K_M — 74K text-only (f16 KV), 147K (q8_0). Q4_K_M for vision at 99K.

      48 GB Mac: Q6_K/f16 KV — 262K with vision. Q8_0/q8_0 KV for 262K at higher model quality.

      64 GB+ Mac: Q8_0/f16 KV — 262K with vision. Maximum quality at practical speed.

      12 GB GPU: IQ2_M/q8_0 — 11K. Very limited, no vision.

      16 GB GPU: IQ3_M — 30K (f16 KV) or 60K (q8_0). No vision.

      24 GB GPU: Q4_K_M — 83K with vision (f16 KV). Q5_K_M — 58K text-only (f16 KV), 116K (q8_0).

      48 GB+ GPU: Q6_K/f16 KV — 262K with vision. Q8_0 for max quality.

      Leave KV cache at f16 (blank column) for best quality. Use q8_0 KV only when f16 doesn't give enough context. q4_0 KV should not exceed 64K context.

      Vision adds ~0.9 GB for mmproj. macOS needs ≥ 8 GB for itself (16 GB Macs excepted — use ~4 GB). You can increase available memory by raising the wired memory limit, e.g. for a 96 GB Mac: sudo sysctl iogpu.wired_limit_mb=90112 (88 GB). NVIDIA reserves ~1 GB for CUDA.

      submitted by /u/ex-arman68
      [link] [comments]

    29. 🔗 r/wiesbaden Fine Line Tattoo Artist rss

      Hey,

      Does anyone know a good tattoo studio or a good tattoo artist for abstract fine-line tattoos in Wiesbaden or the surrounding area? Otherwise anywhere else too :)

      submitted by /u/heyheyheyoooooo
      [link] [comments]

    30. 🔗 anthropics/claude-code v2.1.131 release

      What's changed

      • Fixed VS Code extension failing to activate on Windows due to a hardcoded build path in the bundled SDK (createRequire polyfill bug)
      • Fixed Mantle endpoint authentication failing with missing x-api-key header
    31. 🔗 tomasz-tomczyk/crit Windows pre-release 1 (PR #459) release

      Pre-release Windows binaries for testing the windows-wsl-support branch (PR #459).

      This release is not published to Homebrew and is not a stable release. It exists so reviewers can test Windows + WSL support without merging the PR.

      Install

      1. Download the matching binary below (crit-windows-amd64.exe for most machines, crit-windows-arm64.exe for ARM64).
      2. Rename it to crit.exe.
      3. Drop it on your PATH.
      4. Run crit in a git repo with changed files.

      Linux/macOS binaries are included for convenience, but the supported install path on those platforms remains Homebrew (brew install tomasz-tomczyk/tap/crit).

    32. 🔗 r/Leeds When is Uniqlo going to open? rss

      I was so excited when this opening was announced last year. At Christmas it said "opening soon", then it changed to fall/winter 2026.

      That's a long time to fit out a shop.

      submitted by /u/used2bfat69
      [link] [comments]

    33. 🔗 r/LocalLLaMA Quality comparison between Qwen 3.6 27B quantizations (BF16, Q8_0, Q6_K, Q5_K_XL, Q4_K_XL, IQ4_XS, IQ3_XXS,...) rss

      The following is a non-comprehensive test I came up with to measure the quality difference (a.k.a. degradation) between different quantizations of Qwen 3.6 27B. I want to figure out the best quant to run on my 16 GB VRAM setup.

      WHAT WE ARE TESTING

      First, the prompt:

      Given this PGN string of a chess game: 1. b3 e5 2. Nf3 h5 3. d4 exd4 4. Nxd4 Nf6 5. f4 Ke7 6. Qd3 d5 7. h4 * Figure out the current state of the chessboard, create an image in SVG code, also highlight the last move.
      

      I want to see if the models can:

      • Track the state of the board after each move to reach the final position (the first half of move 7)
      • Generate the correct SVG image of the board, place the pieces correctly, and highlight the last move

      And yes, in case you're wondering: it's possible the model was trained on exactly this task with existing chess games, so I came up with some random moves, the kind of moves no player above 300 Elo would ever have played. For those who aren't chess players, this is how the board is supposed to look after move 7. h4. By the way, you're supposed to judge the piece positions and the board orientation, not the image quality, because this is just a screenshot from Lichess. https://preview.redd.it/6lsfvzy8wfzg1.png?width=1586&format=png&auto=webp&s=94634b461528a6ecc6728eefd23072ab28c3769d

      CAN OTHER MODELS SOLVE IT?

      Before we get to the main part, let me show the results from some other models. I find it interesting that not many models were able to figure out the board state, let alone render it correctly.

      Qwen 3.5 27B: It mostly figured out the final position of the pieces, but still rendered the original board state on top. It highlighted the wrong squares, and the board orientation is wrong. https://preview.redd.it/oanbebp9xfzg1.png?width=1078&format=png&auto=webp&s=b72af75a10f4a9f4d897699b404580370bd29d9e

      Gemma 4 31B: Nice chess.com flagship board style. I would say it figured out the board state, but it failed to render it correctly. The square pattern is also messed up. https://preview.redd.it/w5jwi05nxfzg1.png?width=1640&format=png&auto=webp&s=33e6f21f56c4e98df92c828103ac10714e578973

      Qwen3 Coder Next: I don't know what to say; quite disappointing. https://preview.redd.it/knltp8h1yfzg1.png?width=1348&format=png&auto=webp&s=1e9207cd1dfd08b049eaa13727703be732d2cb96

      Qwen3.6 35B A3B: As expected, 35B is always the fastest Qwen model, but at the same time it managed to fail the task successfully in many different ways. This is why I decided to find a way to squeeze 27B into my 16 GB card; the speed alone just isn't worth it. https://preview.redd.it/orti5kdhyfzg1.png?width=3360&format=png&auto=webp&s=c29a3aae9683e5ceaa15c59ae32adecabdd1b6b6

      HOW DOES QWEN3.6 27B SOLVE IT?
All the models here are tested with the same set of llama.cpp parameters:

      • temp 0.6
      • top-p 0.95
      • top-k 20
      • min-p 0.0
      • presence_penalty 1.0
      • context window 65536

      The BF16 version was tested via OpenRouter, the Q8_0 to Q4_K_XL versions were run on an L40S server, and the rest on my RTX 5060 Ti. The SVG code was generated directly in the llama.cpp Web UI without any tools or MCP enabled (I originally ran this test in the Pi agent, only to find out that the model tried to peek into the parent folders, found the existing SVG diagrams from higher quants, and copied most of them).

      BF16 - Full precision: This is the baseline of the test. It has everything I need: right positions, right board orientation, right piece colors, right highlight. The dotted blue line was unexpected, but it's also interesting, because as you'll see later, not many of the higher quants generate it. https://preview.redd.it/lgizkjklzfzg1.png?width=1424&format=png&auto=webp&s=d7867b55735d3d875e0e36aecbaf3c3f0d1dbd58

      Q8_0: As expected, Q8 retains pretty much everything from full precision except the line. https://preview.redd.it/6wjnq6ff0gzg1.png?width=1610&format=png&auto=webp&s=f0d20ff4717b972efffced49ac8d43075fa97eb5

      Q6_K: We start to see some quality loss here, namely the placement of the rank-5 pawns. The look of the pieces is mostly because Q6 decided to use a different font; none of the models in this test tried to draw their own pieces. https://preview.redd.it/kcqj81vl0gzg1.png?width=1608&format=png&auto=webp&s=66c7a219e79a8f6ecf44e27489f337b4016185b5

      Q5_K_XL: Looks very similar to Q8, but it's worth noting that the SVG code from the Q5 version is 7.1 KB, while Q8's is 4.7 KB. https://preview.redd.it/6wshu7g01gzg1.png?width=1506&format=png&auto=webp&s=289db354fea59c456d8bd2dc7abdbcc1e4282ffd

      Q4_K_XL and IQ4_XS: If you ignore the font choice, Q4_K_XL is the more complete solution, because it has the board coordinates.
      https://preview.redd.it/pzdghdtm1gzg1.png?width=3326&format=png&auto=webp&s=10c3d7758459f223d195107353f1ec76565cd31d

      Q3_K_XL and Q3_K_M: https://preview.redd.it/56gttur62gzg1.png?width=3330&format=png&auto=webp&s=4af27d8a652e2deef6c14485d0fff4bd3651097f

      IQ3_XXS: Now here's the interesting part: everything was mostly correct, the piece placements and the highlight, and there's the line on the last move! But IQ3_XXS gets the board orientation wrong; see the light square on the bottom left? https://preview.redd.it/7jnzxy324gzg1.png?width=1608&format=png&auto=webp&s=178f72f51e65866497f16e861b04c0c448fce774

      Q2_K_XL: This is just a waste of time. But hey, it got all the piece positions right; the board is just not aligned at all. https://preview.redd.it/3z63d7bv4gzg1.png?width=1604&format=png&auto=webp&s=f6723b28248327c55bede4e42a4a0cfbe962fb74

      SO, WHAT DO I USE?

      I know a single test is not enough to draw any conclusions. But personally, I will never go below IQ4_XS after this test (I had bad experiences with Q3_K_XL and below in other tries). On my RTX 5060 Ti, I got about pp 100 tps and tg 8 tps for IQ4_XS with vanilla llama.cpp (q8 for both ctk and ctv, fit on). But with TheTom's TurboQuant fork, I managed to get up to pp 760 tps and tg 22 tps by forcing GPU offload for all layers (-ngl 99), which is quite usable.

      llama-cpp-turboquant/build/bin/llama-server -fa 1 -c 75000 -np 1 --no-mmap --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.0 --presence_penalty 1.0 -ctk turbo4 -ctv turbo2 -ub 128 -b 256 -m Qwen3.6-27B-IQ4_XS.gguf -ngl 99
      

      The only downside is that I have to keep the context window below 75K and use turbo4/turbo2 for the KV cache quants. Below are some examples of different KV cache quants. https://preview.redd.it/y0y7o6h09gzg1.png?width=3320&format=png&auto=webp&s=bd7c855100ff63c9bb666a4f4a61b966ad6eebca https://preview.redd.it/dyrru7z19gzg1.png?width=3314&format=png&auto=webp&s=d54238d7a31c6cd8858f84df67ff588dc22d726b You can see all the results here: https://qwen3-6-27b-benchmark.vercel.app/

      submitted by /u/bobaburger
      [link] [comments]

    34. 🔗 r/reverseengineering ant4g0nist/pyre: Ghidra decompiler in your browser rss
    35. 🔗 anthropics/claude-code v2.1.129 release

      What's changed

      • Added --plugin-url <url> flag to fetch a plugin .zip archive from a URL for the current session
      • Added CLAUDE_CODE_FORCE_SYNC_OUTPUT=1 env var to force-enable synchronized output on terminals that auto-detection misses (e.g. Emacs eat)
      • Added CLAUDE_CODE_PACKAGE_MANAGER_AUTO_UPDATE: when set on Homebrew or WinGet installations, Claude Code runs the upgrade command in the background and prompts to restart
      • Plugin manifests: themes and monitors should now be declared under "experimental": { ... }. Top-level declarations still work but claude plugin validate will warn
      • Gateway /v1/models discovery for the /model picker is now opt-in via CLAUDE_CODE_ENABLE_GATEWAY_MODEL_DISCOVERY=1 (was automatic in 2.1.126–2.1.128)
      • Ctrl+R history picker now defaults to searching all prompts across all projects, matching pre-2.1.124 behavior. Press Ctrl+S to narrow to the current project or session
      • Third-party deployments (Bedrock, Vertex, Foundry, or ANTHROPIC_BASE_URL gateway) no longer see spinner tips pointing at first-party Anthropic surfaces
      • skillOverrides setting now works: off hides from model and /, user-invocable-only hides from model only, name-only collapses description
      • The claude_code.pull_request.count OTel metric now counts PRs/MRs created via MCP tools, not just shell commands
      • Policy refusal error messages now include the API Request ID for easier support debugging
      • Fixed API errors with unrecognized 400 status codes showing raw JSON instead of the underlying error message
      • Fixed /clear not resetting the terminal tab title after a conversation
      • Fixed session title chip from /rename disappearing while a permission or other dialog is active
      • Fixed agent panel below the prompt being hidden when subagents are running (regression in 2.1.122)
      • Fixed external-editor handoff (Ctrl+G) blanking the conversation history above the prompt
      • Fixed /context dumping its rendered ASCII visualization grid into the conversation, wasting ~1.6k tokens per call
      • Fixed /agents Library list arrow-key navigation: the highlighted agent now stays visible when the list exceeds the viewport
      • Fixed /branch success message not including the new branch's session id for /resume
      • Fixed bold headers with keycap/ZWJ/skin-tone emoji losing trailing characters in fullscreen mode
      • Fixed server-managed settings policy not applying for enterprise/team users whose stored OAuth credentials lacked the user:inference scope
      • Fixed OAuth refresh race after wake-from-sleep that could log out all running sessions
      • Fixed 1-hour prompt cache TTL being silently downgraded to 5 minutes
      • Fixed cache-miss warning appearing spuriously after /clear or compaction when changing /effort or /model
      • Fixed Bash(mkdir *), Bash(touch *) and similar allow rules not being honored for in-project paths
      • Fixed deniedMcpServers patterns with a *:// scheme wildcard not matching mixed-case hostnames
      • Fixed harmless WebSocket warning being logged as an error in --debug during voice mode
      • [VSCode] Fixed /clear not clearing the conversation context and displayed transcript
    36. 🔗 HexRaysSA/plugin-repository commits sync repo: ~1 changed rss
      sync repo: ~1 changed
      
      ## Changes
      - [HashDB](https://github.com/oalabs/hashdb-ida):
        - 1.10.0: archive contents changed, download URL changed
      
    37. 🔗 Ampcode News Amp, Rebuilt rss

      Today we're starting to roll out the new Amp.

      Not all of it, not yet. But the first piece: a rebuilt Amp CLI. Codename: Neo.

      In The Coding Agent is Dead we wrote about where this is going: agents with longer leashes, less handholding, and many more places to run. Not just one agent in one terminal. Agents prompted from anywhere, running everywhere.

      That's the new Amp we're building.

      But the terminal still matters and will matter. There will be moments where you want the agent right next to you.

      So we rebuilt the CLI first. It is still Amp in your terminal. But it's running on a completely new architecture: remote-controllable, compaction-first, plugin-powered, and much faster. Built for what's coming.

      Let's walk through it.

      Remote Control

      When you start a thread in the new Amp CLI, you can now remote control it from ampcode.com.

      You'll not only get live updates but you can also send messages, queue and dequeue them, or cancel what the agent is currently doing:

      The architecture that enables this is the reason we rewrote Amp. And remote control is just the start.

      No More Manual Context Management

      A core principle behind the rebuild: build for what the frontier models can do now, in 2026, and what they will be able to do in the future. Do not build for what once was.

      Today's leading frontier models are great at handling compaction.

      So Amp now manages context for you.

      You don't have to watch context percentages anymore, or decide when to handoff, or extract information from a thread in a panic.

      When the context window fills up, Amp now compacts the thread: it summarizes the current context, starts a fresh window with that summary, and keeps going.

      Compaction now runs automatically when the context window is 90% full.

      It was also the first thing we added to the new architecture. During one migration, we had to shut it off for a day and everyone complained. One beta-user reported: "I love having auto-compaction. NOT missing handoff..."

      So handoff is out. Compaction is in.

      Plugins

      With this release we're officially releasing the Amp Plugin API.

      Amp plugins can:

      • Handle events — amp.on(...) for tool calls, tool results, and agent lifecycle events
      • Add tools — amp.registerTool(...) for custom tools the agent can call
      • Add commands — amp.registerCommand(...) for command palette actions
      • Show UI elements — ctx.ui.notify(...), ctx.ui.confirm(...), ctx.ui.input(...), and ctx.ui.select(...)
      • Ask AI questions — amp.ai.ask(...) for yes/no classification with confidence and reasoning

      Here, for example, is a plugin that registers a tool called ask_user_choice. The agent can use it to present the user with options:

      // .amp/plugins/ask-user-choice.ts
      
      import type { PluginAPI } from '@ampcode/plugin'
      
      export default function (amp: PluginAPI) {
          amp.registerTool({
              name: 'ask_user_choice',
              description:
                  'Present the user with a multiple choice question when there are several possible approaches and you need them to pick one. Use when you have 2-5 concrete options to choose from.',
              inputSchema: {
                  type: 'object',
                  properties: {
                      question: { type: 'string', description: 'The question to ask the user' },
                      options: {
                          type: 'array',
                          items: { type: 'string' },
                          description: 'The options to choose from (2-5 items)',
                      },
                  },
                  required: ['question', 'options'],
              },
              async execute(input, ctx) {
                  const question = input.question as string
                  const options = input.options as string[]
                  const optionsList = options.map((opt, i) => `${i + 1}. ${opt}`).join('\n')
      
                  const answer = await ctx.ui.input({
                      title: question,
                      helpText: `${optionsList}\n\nType the number of your choice`,
                      submitButtonText: 'Select',
                  })
      
                  if (!answer) return 'User dismissed the question without choosing.'
      
                  const index = parseInt(answer.trim(), 10) - 1
                  if (index >= 0 && index < options.length) {
                      return `User selected option ${index + 1}: ${options[index]}`
                  }
                  return `User responded with: ${answer}`
              },
          })
      }
      

      That's it: a single file in .amp/plugins and Amp gets a new tool. It looks like this:

      The ask_user_choice tool in action

      The Amp Plugin API documentation has more examples, including a full permissions plugin.

      Queuing & Steering

      Queuing messages is now the default. When you send a message while the agent is busy, it'll get added to the queue instead of stopping and interrupting the agent.

      This, too, we think fits the models of today and tomorrow better. They work for longer and need fewer mid-flight yanks.

      If you want to fast-track a queued message, you can steer.

      Steering lets you send a queued message as soon as possible, not just when the agent becomes idle. The next time a tool result is sent up to the agent, for example.

      Use ↑ to select a queued message, then steer it with ⏎:

      You can also hit Esc Esc to interrupt the agent and send immediately.

      Permissions

      Amp will no longer ask for permission before running tools.

      What was once the --dangerously-allow-all flag is now the default behavior for users who have not configured permissions.

      The old permissions system still exists. It's now a built-in plugin. If your existing Amp settings already opt into permissions — through amp.permissions, amp.dangerouslyAllowAll: false, or amp.guardedFiles.allowlist — Amp loads that plugin and works as before. (When the plugin is active, it applies in both amp and amp --execute.)

      Why change the default?

      A year ago tool calls were simpler to check: inspect the name, inspect the arguments, do string-based matching, allow or deny. Now, frontier models write throwaway scripts to get stuff done. They chain shell commands.

      It's near-impossible to determine statically whether a tool invocation will be destructive or not.

      When a model writes five 20-line Python scripts in parallel to do something, checking whether a tool call contains rm -rf gives you a false sense of security.

      On top of that, there are now custom skills and scripts, specifically built for agents. And different organizations have different policies around which model is allowed to call which tool.

      So permissions now live in the Plugin API.

      If you need a policy, build the one that matches your setup. Point Amp at the Amp Plugin API and ask it to help you.

      Performance & Efficiency

      The old Amp CLI got slow with huge threads. Neo doesn't. Here's a comparison, using a thread with around 5000 messages:

      Metric | Old | New | Improvement
      ---|---|---|---
      CPU% (mean ± sd) | 84.1% ± 1.6% | 17.4% ± 8.8% | 79% less CPU
      CPU% (peak) | 86.3% | 25.8% | —
      Memory (idle) | 1814 MB | 540 MB | 70% less memory

      Rendering performance has improved, too.

      Before:

      After:

      What's Gone

      We also removed features. Of course we did, otherwise it wouldn't be an Amp release, would it?

      Our goal is to keep you on the frontier. Amp should not make you work like it's still 2025.

      Some features made sense when models needed more babysitting, more manual context management, more careful steering. They don't anymore. When a feature starts tying you to the old way of using agents, it goes.

      Handoff is gone. As described above, compaction made it obsolete. There are some valid use cases for Handoff even when there's enough space left in the context, but we don't think it warrants the complexity introduced by many small, connected threads.

      You can also still reference other threads and Amp will read them and extract the relevant information.

      For example, you can use Ctrl+O and thread: new to create a new thread, then hit Enter to quickly insert a reference to the previous thread. Amp will use that reference along with the rest of your prompt to read the previous thread.

      Amp no longer rolls back file changes when you edit or restore a message. We've found ourselves using this less and less as models advanced. The models are now good enough to undo changes for you, with more finesse than a rollback. And, the truth is, the rollback feature was always best-effort: if the agent wrote and ran code that generated files, we didn't keep track of that without elaborate snapshotting.

      Skill management: Amp still supports Agent Skills but we no longer offer commands or subcommands to add, remove, or update skills. That's better done by separate tools, such as skills.

      User-invokable skills: We also removed support for user-invokable skills. The latest generation of models now invokes skills reliably.

      Themes: Custom themes made it harder to keep the CLI legible, polished, and recognizably Amp. We’d rather ship one good interface than support many broken-looking ones.

      Manual bash invocation: in the old Amp CLI you could invoke bash commands by using $ and $$ in the prompt editor. It was an interesting idea a year ago, but with models now ever more capable of running commands on their own without blowing up their context window (and that context window being practically unlimited), it's no longer useful.

      Rollout

      We’re rolling Neo out over the next few days. If you want to skip the line, send us an email. We'll flip the switch for you.

      This is the first piece of the new Amp.

      More soon.

  4. May 05, 2026
    1. 🔗 r/reverseengineering Resident Evil: Code Veronica X is able to play the opening FMV from the decompiled PS2 source! rss
    2. 🔗 imfing/hextra v0.12.3 release

      What's Changed

      This version focuses on bug fixes and small maintenance updates since v0.12.2.
      For the full release notes and the upgrade guide for v0.12, please visit:
      https://imfing.github.io/hextra/blog/v0.12/

      • fix: remove inline TOC click handler so default Hextra can avoid unsafe-inline (CSP) by @jecc1982 in #981
      • fix: add Hugo compatibility helpers for deprecated multilingual APIs by @imfing in #983
      • fix: add hx:mx-auto to the footer by @luigimorel in #982
      • fix: avoid publishing demo cast in theme builds by @imfing in #985
      • fix(test): accessibility test for YouTube iframe internals by @imfing in #986
      • fix(sidebar): fall back to content tree when mobile menu has no entries by @imfing in #991
      • fix: resolve page-relative URLs in details shortcode by @muit in #989

      New Contributors

      Full Changelog: v0.12.2...v0.12.3

    3. 🔗 r/york TOMORROW (WEDNESDAY). Rising post punk band The 113 headlines the Fulford Arms. Not to be missed! £9 advance tickets available from SeeTickets and Fulford Arms website. rss
    4. 🔗 r/LocalLLaMA DeepSeek V4 being 17x cheaper got me to actually measure what I send to cloud vs what I could run locally. the results are stupid. rss

      That foodtruck bench post showing DeepSeek V4 matching GPT-5.2 at 17x cheaper got me thinking: if frontier cloud models are that overpriced for equivalent quality, how much of my daily work even needs cloud at all?

      I ran my normal coding workflow for 10 days. Every task got logged: what it was, tokens in/out, and whether local Qwen 3.6 27B (on a 3090) could have done it. I didn't use benchmarks; I just re-ran a random sample of 150 tasks on both.

      results:

      - File reads, project scanning, "explain this code": local matched cloud 97% of the time. This was 35% of my workload. Paying for cloud here is genuinely throwing money away.

      - Test writing, boilerplate, single-file edits: local matched 88%. Another 30% of tasks. The 12% misses were edge cases I could catch in review.

      - Debugging with multi-file context: local dropped to 61%. Cloud is still better, but not 17x-the-price better. About 20% of my work.

      - Architecture decisions, complex refactors across 5+ files: local at 29%. Cloud is genuinely needed here. Only 15% of my tasks.

      So 65% of my daily coding work runs identically on a model that costs me only electricity. Another 20% is close enough that I accept the occasional miss. Only 15% actually justifies cloud pricing.
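
      The split is simple arithmetic; a quick sanity check (bucket shares and bill figures are the post's numbers, the 74% reduction is derived from them):

      ```shell
      # Buckets routed locally: reads/scans (35%) + boilerplate/edits (30%)
      echo "fully local share: $((35 + 30))%"              # 65%
      # Adding the "close enough" debugging bucket (20%):
      echo "routed-local share: $((35 + 30 + 20))%"        # 85%
      # Observed bill: $85/month down to $22/month
      echo "bill reduction: $(( (85 - 22) * 100 / 85 ))%"  # 74%
      ```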

      I started routing by task type: local for the first two buckets, cloud for the last two. My API bill went from $85/month to about $22, and the 3090 was already sitting there doing nothing.

      The DeepSeek post is right that the price gap is insane, but the bigger insight is that most of us don't even need cloud for most of what we do. We're just too lazy to measure it.

      submitted by /u/spencer_kw
      [link] [comments]

    5. 🔗 r/Leeds Help me find the same sandwich please - Bánh Mì Cî Út rss

      Went to New York last week and had a sandwich that was so good it genuinely brought tears to my eyes (that may, however, have been jet lag). I'm absolutely desperate to find one as close as possible to the one I had. The one I tried was the

      No. 1 - Pork belly, boiled Vietnamese ham, fried Vietnamese ham, jambon, pate, mayo,cucumber, “cilantro”, carrot and daikon.

      Happy to pay more than that one cost; at $8, I suspected I might have to. I can travel a bit, but preferably somewhere in Leeds.

      submitted by /u/Thieves-like-us
      [link] [comments]

    6. 🔗 r/LocalLLaMA I know this isn’t technically an LLM but OmniVoice is FUCKING AMAZING. rss

      Literally one-shot voice cloning, and it's so easy. What the FUCK. It's everything I've ever dreamed of.

      submitted by /u/Borkato
      [link] [comments]

    7. 🔗 r/reverseengineering Reverse-engineering the 1998 Ultima Online demo server rss
    8. 🔗 r/york Looking for a tenant to take over my old house rss

      Hello all, hoping this doesn't count as commercial spam and can stay up, as I know people sometimes come on here looking for housing.

      I've just bought a house (hooray, adulthood!) and, in an effort not to have to pay out the rest of my tenancy on my old house, the landlord's agreed we can be released from the tenancy early if they can find a new tenant. (The Renters' Rights Act has come in just too late to help us out, unfortunately.)

      It's a 2-bed mid-terrace in Heworth, about a 20-minute walk and an even shorter cycle to the city centre. It's the nicest rental property I've ever had, and I've had a few. It's a good size, in generally decent condition, and has a garage, a parking space, and a little courtyard. I can promise I didn't leave any disastrous messes or unpaid bills there!

      Shoot me a message if you have the kind of questions letting agents won't answer! If you're interested, give them a call via the details on Rightmove. I don't think they've had much interest, which surprises me.

      submitted by /u/hollyviolet96
      [link] [comments]

    9. 🔗 sacha chua :: living an awesome life The week of April 13 to 19 rss

      lundi 13

      Ma fille a séché les cours toute la journée. Elle a dit qu'elle était fatiguée. Elle est restée à la maison au lieu d'aller à son cours de gymnastique.

      J'ai configurĂ© obs-websocket pour lancer et arrĂȘter la diffusion en direct depuis Emacs.

      Il faisait trÚs beau, donc je me suis assise dehors et j'ai lu la configuration d'Emacs de tecosaur. Non seulement sa configuration était trÚs détaillée, mais elle était aussi magnifiquement mise en page.

      J'ai préparé mon bulletin d'information sur Emacs pendant que je diffusais en direct.

      Le glacier était toujours fermé, donc nous avons acheté de la crÚme glacée au supermarché à la place.

      À l'heure du coucher, ma fille a dit qu'elle aurait aimĂ© rester une enfant. Elle a dit qu'elle aimait bien KidSpark, qui est rĂ©servĂ© aux enfants jusqu'Ă  10 ans.

      mardi 14

      Ma fille a suivi son cours. AprÚs l'école, nous avons fait du vélo au parc pour jouer avec ses amies, qui en faisaient aussi.

      J'ai continué à améliorer obs-websocket pour gérer mon direct depuis Emacs. J'ai aussi réécrit mon correctif pour l'opération « sentence-at-point » sur Org Mode.

      J'Ă©tais fatiguĂ©e et j'avais un peu mal Ă  la tĂȘte.

      mercredi 15

      Ma fille s'est réveillée tard, mais elle a participé à son cours toute seule.

      J'ai mis Ă  jour mon OBS pour ajouter socialstream.ninja via une source navigateur. Maintenant, je peux afficher les commentaires et je peux envoyer un message depuis Emacs sur YouTube.

      J'ai travaillé un peu comme consultante. Le design du profil avait besoin d'une petite correction.

      Ma fille et moi avons joué à Stardew Valley.

      Mon mari avait une course prÚs du Musée des beaux-arts de l'Ontario. Ma fille était heureuse de sécher les cours l'aprÚs-midi parce que l'école avait une remplaçante. J'ai emmené ma fille là-bas et nous avons passé du temps à essayer les activités au musée et à dessiner sur nos tablettes.

      AprÚs le dßner, nous nous sommes entraßnées à peindre des yeux avec des aquarelles.

      jeudi 16

      J'avais rendez-vous avec Protesilaos pour l'informer de mes progrÚs depuis notre conversation précédente et lui poser mes nouvelles questions. J'ai fait fonctionner mon code pour lancer ma vidéo à partir d'un horodatage et j'ai écrit une fonction pour calculer la conversion entre l'heure réelle et le temps écoulé.

      Ma fille et moi avons joué à la Play-Doh, au sungka (un jeu traditionnel philippin), et aux charades.

      vendredi 17

      J'ai révisé les sous-titres de ma conversation avec Prot d'hier. J'ai ajouté deux fonctions pour gérer l'étiquette d'interlocuteur quand on divise ou fusionne des sous-titres. J'ai aussi programmé trois conversations sur Emacs et j'ai publié les événements sur YouTube et sur mon site grùce à d'autres fonctions. J'ai aussi modifié ma bibliothÚque pour publier mon site afin qu'elle n'inclue pas les fichiers privés.

      J'ai travaillé sur nos impÎts.

      Ma fille s'est réveillée toute seule ce matin, à temps pour le petit-déjeuner, notre routine matinale, et son interrogation de mathématiques à l'école. Mais elle a séché les cours l'aprÚs-midi et elle s'est assise tout l'aprÚs-midi contre sa porte. Au lieu de se détendre, elle s'est davantage braquée contre moi. Je ne sais pas quoi faire dans cette situation.

      Saturday 18

      For breakfast, I made crĂȘpes with the leftover whipped cream. There was only a little cream left, so I couldn't whip it in the mixer; I whipped it by hand. I also used the frozen whipped cream I had made several months ago. I ate them with peaches and mango. It was perfect.

      Reading tecosaur's literate Emacs configuration makes me jealous of his layout, so I spent some time improving the export of my own configuration. It's very long: the PDF is 736 pages, and the table of contents alone is 15 pages. I want to add more comments and implement more LaTeX exports for my link types.

      My daughter was grumpy with me in the morning, but in the afternoon she reappeared and wanted to spend time with me.

      We played Minecraft to try out the new sulphur blocks. We spawned a Warden and gave it a cube that gave us a mushroom block. The Warden had fun with the cube.

      We played with Play-Doh. I rolled it out very thin and we cut it into lots of pieces. She braided them. She wanted to try a crown braid, so I braided her hair.

      For dinner, we made sushi.

      We played Stardew Valley Expanded again. We made good progress on the community center bundles, even though I forgot to get the community center fertilizer after the Egg Festival to speed up the strawberries. Oh well.

      My daughter practiced her French vocabulary by telling the story of Eevee's family.

      Sunday 19

      My daughter woke up at 8:00 today. She finds it easier to wake up when there's no school. It's a good thing I hadn't started a livestream.

      My daughter and I biked to the Stockyards to buy fabric for sewing a summer hat. She had window-shopped but hadn't found one that suited her, so we have to make it ourselves. She chose yellow Pokémon fabric. She also wanted yarn to crochet a blanket.

      We had Panda Express for lunch. The kids' meal was enough for me.

      I dropped her off at home and brought donations to Goodwill as part of the big clean-up. I also did the grocery shopping. Once I got home, my daughter proudly showed me that she had made the beds like a hotel.

      We played Stardew Valley Expanded after dinner. Summer has started. I think I need to plant more butternut for the quality crops bundle, which requires 5 gold-quality crops.

      You can e-mail me at sacha@sachachua.com.

    10. 🔗 sacha chua :: living an awesome life The week of April 20 to 26 rss

      Monday, April 20

      My daughter woke up early on her own, so we finished our morning routine. But she was thrown when her password didn't work for logging into school. I helped her and she attended her classes. I thought she was doing fine, but when I went to see her at recess, I found she was grumpy. She skipped class again.

      To my great surprise, after the lunch break and a little playtime, she was participating in school.

      A few points:

      ‱ Like everyone, she has good days and bad days. When her body hurts, everything is hard.
      ‱ We know that group classes don't suit her for now. This is an experiment to gather data.
      ‱ It's not the end of the world. Maybe the school is more lenient than I think. I can let them tell me when there's a real problem. It's possible this isn't a problem at all.
      ‱ It's very difficult (maybe impossible) to help someone who doesn't want to be helped, especially since part of her resistance comes from her desire for autonomy.
      ‱ Nagging is useless and ineffective. If I try to use punishment, I make it harder for her to choose a good way forward herself.
      ‱ If she wants something different, we can find something different.
      ‱ So I need to manage my own emotions and be supportive. I need to trust that she wants a good outcome for herself. She can handle it, or she can ask for help. If I stay calm, it's easier for her to ask for help.

      Tuesday 21

      I think I've found a way to protect myself against accidents during a livestream. If I stream with a delay to another OBS instance, I can cut the feed as soon as I notice I'm sharing something by accident.

      I also wrote a function to format events in Org Mode format for export to iCalendar.

      I answered some e-mails, including one in French. I updated the entries in my Planet Emacslife aggregator. I modified it to always use IPv4 and to parse article bodies correctly.

      To relieve her boredom, I helped my daughter work through math worksheets for 6th graders, which she managed to complete with a few small hints. She was very proud, because it was more interesting than her homework.

      After school, I took my daughter to the park to play with all her best friends. They were having so much fun that other kids wanted to join them, which made the place too noisy for my daughter, who moved to the sandbox to play quietly. Once the other kids left, my daughter rejoined her friends.

      My daughter rediscovered suncatchers and painted a few with acrylic paints. She wanted green paint, but we didn't have any, so she mixed blue and yellow paint to make some.

      She also talked about her idea for a small mannequin for presenting dress prototypes. We looked at options online, but everything was too expensive or not right for her. We might buy a small mannequin from Ikea.

      I was a little tired.

      Wednesday 22

      I wrote a few posts to announce my livestreams.

      I offered to work on more advanced math with my daughter, but she didn't need my help today.

      After school, my daughter and I biked to the park. We were early for our playdate with her friends, so we played in the playground near the street that has a big sandbox. I brought the sand toys, which let my daughter pretend to run a bakery. After playing, we went to the other playground on the slope. Our friends were late, but that wasn't a problem. There were other friends there, and once they had to leave, we played on the swings until our other friends arrived. It was sunny and a bit warm. My daughter ate two of the yogurt, strawberry, and honey popsicles she had made the night before, and she offered them to her friends.

      Her friends came on foot. My daughter wanted to walk them home, so we all went on foot. I hitched her bike to mine with the Bakkie bag and pushed my bike while they walked.

      One of her friends fell and hurt her knee. She screamed. My daughter offered a Pokémon bandage. The friend screamed again, which was too loud for my daughter, who started crying too. They needed a few moments to calm down.

      I was surprised that my daughter wanted to walk her friends almost all the way home. Well, the sun was shining, and I can always carry my daughter home if she gets too tired.

      For dinner, my husband made chicken cutlets.

      Thursday 23

      I did some consulting work.

      I took my daughter to Dufferin Grove Park to play. Once we arrived, she saw that her best friends were busy playing with a girl she doesn't get along with, so she decided to play with me or with her dad instead, who joined us by bike. She played on the swing and the slide. She also played in the sand with other kids.

      At home, we made giant bubbles.

      Friday 24

      I had a wonderful conversation with John Wiegley and Karthik Chikmagalur about John's workflow for managing his tasks in Emacs and Org Mode.

      My daughter was a bit grumpy because I was busy with my conversation and her dad was busy making dinner. Once I was free, she wanted to play a dominoes game that we had given away more than a year ago. She was disappointed, then decided to make a similar game out of LEGO. She had fun.

      I accidentally dropped my Apple Pencil and it broke.

      Saturday 25

      I went to the Apple Store to try to replace my Apple Pencil and get my tablet's screen repaired under AppleCare+. I got nothing done. They didn't have the parts in stock for the screen repair, so the technician ordered them and will notify me once they arrive. He found that my Apple Pencil wasn't automatically covered by AppleCare+ even though I had bought it at the same time as my tablet. The technician told me I needed to call Apple support to attach my Apple Pencil to the AppleCare+ plan, which took 35 minutes to sort out. By the time I finished, the technician had already moved on to another customer. The store was very busy, and I couldn't get my appointment back. If I had wanted to book another one, I would have had to wait more than an hour and a half. I was overstimulated, so I chose to go home.

      My daughter wanted to play Stardew Valley with me. It was the last few days before fall. She started destroying her blueberry bushes. When I asked her what she was doing, she stormed off because she felt I was on her back. I apologized, and I also let her know that blueberries give one more harvest right at the end of the season. She didn't know that.

      Sunday 26

      I wrote a small function to save a screenshot at the current position in the video and add it with a timestamp to the current caption, which makes it easier to include the images in the post. Karthik and I discussed video processing.

      The weather was beautiful, so my daughter and I biked to Corktown Commons for the first time. She had a lot of fun on the slides. We also made several sand cakes in the sandbox with the few containers I had brought.

      After dinner, my daughter wanted to play Stardew Valley with me. She asked me if it was okay for her to sell some gold ore. I asked her what she wanted to do, what her goal was
 She got grumpy and walked off. I realized she might have wanted to make room in her inventory, which can also be solved with a chest, something I had actually planned to make. Well, she needs to develop her own self-regulation. She eventually came back from her room and asked me for a hug because her nose hurt, poor dear. We did the evening routine with some tears.

      You can e-mail me at sacha@sachachua.com.

    11. 🔗 r/Leeds Childfree people of Leeds? rss

      Heya! Random one, but are there many childfree people in Leeds on here?

      I’ve been thinking about setting up a Discord or something just to chat, maybe find people for games or last-minute plans, but not sure if there’d actually be much interest. I'd probably make it for people around my age, like 25+ year olds or something

      For me personally it feels like a lot of social stuff ends up revolving around kids/schedules and it’d be nice to have a space that’s a bit more flexible, and to also have conversations that don't involve how Timmy shat his pants in Morrisons cafe

      Would anyone be up for something like that? I'm up for making one and sending some invites out - or if this space already exists please do let me know so I can get involved!

      EDIT - I’m gonna make a server - if you want an invite leave a comment/send me a dm :)

      submitted by /u/amzlrr
      [link] [comments]

    12. 🔗 r/york Let's talk about York's hidden past! rss

      Hey r/york!

      We're Uncomfortable York, an academic-led tour organisation focusing on the underrepresented stories and people that make up one of the UK's favourite cities.

      On our tour we talk about the lived experience of diverse individuals living and working in York across its 2000 years of history. We also examine York's connections to the world as a seat of power from the Roman Empire to a manufacturing hub for the chocolate industry.

      We've taken to Reddit to ask some important questions:

      • Do you feel represented in York's heritage landscape?

      • What topics, themes, people, periods, etc. would you like to see examined with a more critical eye?

      If you're interested in checking out our work feel free to head over to our website!

      submitted by /u/Uncomfortable_Tours
      [link] [comments]

    13. 🔗 r/LocalLLaMA Gemma 4 MTP released rss

      Blog post:

      https://blog.google/innovation-and-ai/technology/developers-tools/multi-token-prediction-gemma-4/

      MTP draft models:

      https://huggingface.co/google/gemma-4-31B-it-assistant

      https://huggingface.co/google/gemma-4-26B-A4B-it-assistant

      https://huggingface.co/google/gemma-4-E4B-it-assistant

      https://huggingface.co/google/gemma-4-E2B-it-assistant

      This model card is for the Multi-Token Prediction (MTP) drafters for the Gemma 4 models. MTP is implemented by extending the base model with a smaller, faster draft model. When used in a Speculative Decoding pipeline, the draft model predicts several tokens ahead, which the target model then verifies in parallel. This results in significant decoding speedups (up to 2x) while guaranteeing the exact same quality as standard generation, making these checkpoints perfect for low-latency and on-device applications.
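
      As a sketch of the idea (greedy decoding only, with toy stand-in models rather than Gemma's actual MTP heads), the draft-then-verify loop behind speculative decoding looks roughly like this:

```python
# Toy sketch of greedy speculative decoding. Illustrative only: the two
# "models" are plain functions mapping a token list to the next token.
# The draft proposes k tokens cheaply; the target re-scores each proposed
# position (one batched pass on real hardware) and keeps the longest
# agreeing prefix, so the output is identical to target-only decoding.

def speculative_decode(target_next, draft_next, prompt, k=4, max_new=16):
    """target_next/draft_next: fn(tokens) -> next token (greedy)."""
    tokens = list(prompt)
    end = len(prompt) + max_new
    while len(tokens) < end:
        # 1. Draft proposes up to k tokens autoregressively (cheap).
        proposal = []
        for _ in range(k):
            proposal.append(draft_next(tokens + proposal))
        # 2. Target verifies every proposed position against the same
        #    base context extended by the accepted draft prefix.
        base = list(tokens)
        for i, drafted in enumerate(proposal):
            expected = target_next(base + proposal[:i])
            tokens.append(expected)      # always emit the target's token
            if expected != drafted or len(tokens) >= end:
                break                    # stop at the first disagreement
    return tokens[:end]
```

      Because every emitted token is the target model's own greedy choice, the speedup comes purely from verifying several positions per target pass; quality is unchanged, matching the "exact same quality as standard generation" guarantee described above.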

      submitted by /u/rerri
      [link] [comments]

    14. 🔗 r/reverseengineering Inside Faxanadu series — deep dive into how this NES title works rss
    15. 🔗 r/reverseengineering EMBA v2.0.1 with interactive firmware dependency map available - Check it out and let us know what you are missing rss
    16. 🔗 r/LocalLLaMA Heretic 1.3 released: Reproducible models, integrated benchmarking system, reduced peak VRAM usage, broader model support, and more rss

      Dear fellow Llamas, it is my distinct pleasure to announce the immediate availability of version 1.3 of Heretic (https://github.com/p-e-w/heretic), the leading software for removing censorship from language models.

      This was a long and eventful release cycle, during which Heretic became a high-profile open source project with 20,000 GitHub stars and more than 13 million total model downloads (not counting the models from a certain "competitor" who was recently found to have been using a plagiarized fork of Heretic under the hood). The topic of model decensoring has exploded in popularity, with many clones and forks popping up, some of them clouding their techniques in mystique, technical jargon, or tens of thousands of lines of LLM-written junk code.

      I am happy to say that Heretic is moving in the exact opposite direction. Instead of making it more difficult to understand what is going on, the new release makes it easier and more transparent. The headline feature in Heretic 1.3 is reproducible runs. This was a much more difficult problem to solve than it might appear to be at first glance, because the results of tensor operations can depend on the PyTorch version, the GPU, the driver, the accelerator library, and whether Saturn is Ascendant or not. This means that in order to ensure reproducibility, all of that information must be collected and preserved. This mammoth task was taken up by long-time contributor Vinay-Umrethe, who wrote the majority of the code in the course of an intense multi-week collaboration in which over 250 comments were exchanged.

      As a result, when publishing an abliterated model to Hugging Face, you now have the option to have Heretic generate a reproduce directory in the repository, which contains everything another person needs to know in order to generate a byte-for-byte identical model themselves (example of such a directory). Gone are the days of "I can't seem to get such low numbers on my own machine"; you now can! While the reproducibility system is already immensely helpful and educational by itself, in the future it will form the backbone of something even more ambitious and exciting, which I will announce soon. Please note that publishing reproducibility information is completely optional, and Heretic always prompts before doing so. You are in control of what is uploaded at all times.
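
      The core idea of capturing the run environment can be sketched like this (field names are invented for illustration; Heretic's actual reproduce directory also records GPU model, driver, and accelerator-library details, which this stdlib-only sketch omits):

```python
import json
import platform
import sys

def environment_manifest(extra=None):
    """Collect facts a tensor computation's results can depend on.

    Illustrative sketch only: a real reproducibility system would add
    GPU, driver, and framework versions (e.g. from torch.version) via
    the `extra` dict, then store the manifest alongside the run config
    and seeds so another machine can detect environment mismatches.
    """
    manifest = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "machine": platform.machine(),
    }
    manifest.update(extra or {})
    # Sorted keys give a stable, diff-friendly serialization.
    return json.dumps(manifest, indent=2, sort_keys=True)
```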

      There's more! You know how it can be difficult to tell with certainty whether an abliterated model has incurred significant damage to its capabilities? Heretic now includes the world's simplest benchmarking system , allowing you to run standard benchmarks like MMLU, EQ-Bench, GSM8K, and HellaSwag directly from Heretic, without having to fumble with any configuration and without even having to export the model first. This makes it much easier to decide whether a model is worth publishing, or whether you should look at another trial instead. The system is based on lm-evaluation-harness, the academic gold standard for running LLM benchmarks, allowing the resulting metrics to be directly compared against numbers published online.

      In the course of a typical run, Heretic computes various functions on tensors. This can involve intermediate tensors being manifested in GPU memory that take up large amounts of VRAM. magiccodingman analyzed this in detail, and implemented optimizations that substantially reduce peak VRAM usage , allowing larger models to be processed.

      Model architectures continue to evolve and become more complex, and Heretic is keeping up! farolone and MoonRide303 improved Heretic's layer and module handling logic, making it far more generic and allowing it to process latest-generation models like Qwen3.5 and Gemma 4 , among others.

      Please see the release notes for the full list of improvements and fixes. More exciting stuff is coming in future versions!

      Cheers :)

      submitted by /u/-p-e-w-
      [link] [comments]

    17. 🔗 r/Yorkshire Glorious day along the Wall rss

      A bit rainy & windy, but still a brilliant day out.

      submitted by /u/TitanicDays
      [link] [comments]

    18. 🔗 r/Leeds Favorite spot to read books? rss

      I'm new to the city and looking for recommendations for anywhere I can just chill out, have tea or coffee, and read a book. I really enjoyed Sonder and Sociable Folk. Any other similar spots?

      submitted by /u/nimblebaroness
      [link] [comments]

    19. 🔗 r/Leeds Is this “d” an upside down “P” on the Leeds sign? rss
    20. 🔗 r/york Why didn't they take this rss

      York recycling bin men left this? York council are a bloody joke (or they would be, if they weren't providing such a shitty service and wasting OUR money).

      submitted by /u/DarkBytes
      [link] [comments]

    21. 🔗 r/york The Doom Stone in the Crypt at York Minster rss

      ⚔ Beneath the floor of York Minster lies one of the most chilling reminders of medieval England's belief in death and judgement: The Doom Stone. Carved over 800 years ago, this fragment was once part of a great tympanum above a church doorway. Its original paint and detailed imagery warned every visitor of the Last Judgement – heaven or hell, salvation or damnation. In this film, we explore the stone, the medieval mindset that created it, and how faith shaped the lives and deaths of all who passed beneath it. Featuring rare imagery of medieval Doom paintings, manuscripts, and iconography, this short documentary brings the forgotten stone and its message back into the light. There is NO AI Imagery in this Film, and all Motion Graphics were created by hand. Step into the shadows of England's past.

      00:00 The Doom Stone Beneath York Minster
      00:50 What is the Doom Stone?
      02:30 Medieval Last Judgement Explained
      03:55 Heaven & Hell
      04:50 Fear of Death and Judgement
      06:00 Conclusion – A Warning in Stone

      submitted by /u/The_Black_Banner_UK
      [link] [comments]

    22. 🔗 r/Yorkshire Mornings like this are all I need❀‍đŸ©č rss

      submitted by /u/Coffee000Oopss
      [link] [comments]

    23. 🔗 r/york Struggling to find a place for 3 sharers/2 households rss

      My partner, a friend of ours, and I are looking for a place to live within the next month or so. I keep telling letting agents that my partner and I are long-term dating and count as two households, but for some reason they still consider us 3 sharers, and any advice I find online just says "say two of you are dating so it counts as two households", which we don't need to lie about because we are actually dating. Does anyone know which areas with a decent commute to the city centre would be more okay with that? Two of us are students, but one of us is graduating in the next month, so student accommodation isn't possible. Really not sure what to do.

      submitted by /u/Rainecats
      [link] [comments]

    24. 🔗 r/reverseengineering Copy.fail: Why Internal LLMs Are Non-Negotiable for Security rss
    25. 🔗 backnotprop/plannotator v0.19.8 release

      Follow @plannotator on X for updates


      Missed recent releases? Release | Highlights
      ---|---
      v0.19.7 | Codex Stop-hook plan review, Codex skills, sidebar auto-close, file tree context menu
      v0.19.6 | Non-blocking Pi browser sessions, agent picker dropdown for OpenCode, annotate-last file resolution fix
      v0.19.5 | All-files diff view, clickable code file paths, server-side hide whitespace, non-ASCII path support
      v0.19.4 | All-files diff type, code file viewer, hide whitespace, quick-settings popover
      v0.19.3 | Configurable feedback messages, hide merged PRs in stacked PR selector
      v0.19.2 | Stacked PR review, source line numbers in feedback, diff type dialog re-show, ghost dot removal, docs cleanup
      v0.19.1 | Hook-native annotation, custom base branch, OpenCode workflow modes, quieter plan diffs, anchor navigation
      v0.19.0 | Code Tour agent, GitHub-flavored Markdown, copy table as Markdown/CSV, flexible Pi planning mode, session-log ancestor-PID walk
      v0.18.0 | Annotate focus & wide modes, OpenCode origin detection, word-level inline plan diff, Markdown content negotiation, color swatches
      v0.17.10 | HTML and URL annotation, loopback binding by default, Safari scroll fix, triple-click fix, release pipeline smoke tests
      v0.17.9 | Hotfix: pin Bun to 1.3.11 for macOS binary codesign regression
      v0.17.8 | Configurable default diff type, close button for sessions, annotate data loss fix, markdown rendering polish


      What's New in v0.19.8

      v0.19.8 is a UI infrastructure release. Nine PRs introduce a 49-theme gallery with matched syntax highlighting, a declarative keyboard shortcut system, smart code-file path validation in plans, and several reliability fixes for remote sessions and the code review editor. One PR comes from a first-time contributor.

      49 Themes with Matched Syntax Highlighting

      The entire color system is now themeable. Settings includes a new Theme tab with 49 editor-grade themes ported from the VS Code ecosystem — dark, light, and dual-mode options including Tokyo Night, Nord, One Dark Pro, Poimandres, Everforest, Vesper, and more. Each theme pairs its UI palette with a matched syntax highlighting scheme, so code blocks in plan and code review render with consistent colors.

      A preview mode lets you cycle through themes without committing: the selected theme applies instantly, and clicking away or pressing Escape reverts to your saved choice. Theme preference persists in cookies and travels across sessions on the same machine.

      Keyboard Shortcut Registry

      Plan review and code review now have a declarative keyboard shortcut system. Shortcuts are defined as scopes (annotation toolbar, comment popover, file tree, AI panel, etc.) with platform-aware bindings — Cmd on macOS, Ctrl elsewhere. Double-tap and hold modifiers are supported for power-user workflows.

      The system auto-generates a reference page on the marketing site at build time, so documentation stays in sync with code without manual maintenance. A help modal (press ?) surfaces all available shortcuts in context.
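
      A toy sketch of how a declarative, platform-aware registry can work (the schema, scope names, and actions here are invented for illustration, not Plannotator's real definitions):

```python
# Hypothetical shortcut registry: scopes map to declarative bindings,
# and a "mod" placeholder expands per platform (Cmd on macOS, Ctrl
# elsewhere). A registry like this can drive both runtime dispatch and
# build-time documentation generation from one source of truth.

SHORTCUTS = {
    "file-tree": [
        {"key": "mod+p", "action": "open-file-picker"},
    ],
    "annotation-toolbar": [
        {"key": "mod+enter", "action": "submit-comment"},
    ],
}

def render_binding(key, platform):
    """Expand 'mod' and capitalize parts for display, e.g. 'Cmd+Enter'."""
    mod = "Cmd" if platform == "darwin" else "Ctrl"
    return "+".join(mod if part == "mod" else part.capitalize()
                    for part in key.split("+"))
```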

      Smart Code-File Path Validation

      When plans reference file paths (e.g., src/components/App.tsx), Plannotator now validates them against the actual project tree. Paths that exist get clickable links to a code file viewer. Ambiguous paths (multiple matches) show a picker. Missing paths display a subtle indicator so you know the reference is stale or wrong before approving the plan.

      Resolution is fuzzy — it handles relative paths, paths missing a leading directory, and common shorthand. The validation batches requests to avoid hammering the filesystem on large documents. This works across plan mode, annotate mode, and linked docs.
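
      A minimal sketch of suffix-based resolution with the three outcomes described above (exact, ambiguous, missing); this illustrates the idea rather than Plannotator's actual matcher:

```python
# Hypothetical path resolver: a referenced path is "exact" if exactly
# one project-tree entry matches it (verbatim or as a path suffix),
# "ambiguous" if several do, and "missing" if none do.

def resolve_path(ref, tree):
    ref = ref.lstrip("./")  # tolerate './' and leading-slash shorthand
    matches = [p for p in tree
               if p == ref or p.endswith("/" + ref)]
    if len(matches) == 1:
        return ("exact", matches[0])
    if matches:
        return ("ambiguous", sorted(matches))
    return ("missing", None)
```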

      Remote Session URL Notifications for OpenCode and Pi

      Remote users on OpenCode and Pi previously couldn't see the Plannotator URL when running over SSH or in containers. The server would start, but the URL was only passed to the browser-open function, which silently fails in headless environments.

      Both plugins now explicitly surface the URL through their native notification APIs: client.app.log() for OpenCode and ctx.ui.notify() for Pi. The URL appears in the TUI/UI regardless of whether a browser actually opens.

      Ghost Session Detection for /plannotator-last

      Running /clear in Claude Code creates a new session log file, but the previous file remains on disk without a registered session ID. When plannotator last resolved the most recent log by modification time, it would find the ghost file instead of the active session, causing it to always show the same stale message.

      The fix checks whether a candidate log's session ID is registered in Claude Code's session metadata. Unregistered (ghost) files from /clear are skipped, and the resolver falls through to the next candidate. This restores correct behavior for users who clear their session mid-conversation.
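
      The fall-through logic might be sketched like this (names and signatures are invented for illustration; the real implementation reads Claude Code's session metadata from disk):

```python
# Hypothetical resolver: walk candidate log files newest-first and skip
# any whose session ID is not registered -- those are "ghost" files left
# behind by /clear -- falling through to the next candidate.

def resolve_session_log(logs, registered_ids, mtime, session_id):
    """Pick the newest log whose session ID is registered.

    logs: iterable of log paths; mtime(path) -> sortable timestamp;
    session_id(path) -> str; registered_ids: set of known session IDs.
    Returns None if no registered log exists.
    """
    for path in sorted(logs, key=mtime, reverse=True):
        if session_id(path) in registered_ids:
            return path  # active session
        # otherwise: ghost file from /clear, keep falling through
    return None
```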

      Additional Changes

      • Stop diff list re-rendering on every comment keystroke. The code review annotation toolbar's form state was lifting re-renders into the entire diff view, causing visible jank when typing comments. Moved toolbar state into an isolated component with imperative communication back to the parent. (#660 by @backnotprop)
      • bun link support for local development. Contributors can now run bun link in a checkout to get a global plannotator command pointing at the local source. A thin Node wrapper in bin/plannotator.js spawns Bun with the source entrypoint. (#656 by @codythatsme)
      • plannotator-setup-goal skill. A new Claude Code skill that scaffolds project goals from a markdown template using plannotator annotate --gate for interactive review. Includes an OpenAI agent YAML definition and a Python scaffold script. (#665 by @backnotprop)
      • Co-authors and community contributors on homepage. The marketing site's contributor strip now pulls from a broader list including co-authors from commit trailers and community members who've contributed through issues, discussions, and feedback. (#662 by @backnotprop)

      Install / Update

      macOS / Linux:

      curl -fsSL https://plannotator.ai/install.sh | bash
      

      Windows:

      irm https://plannotator.ai/install.ps1 | iex
      

      Claude Code Plugin: Run /plugin in Claude Code, find plannotator , and click "Update now".

      OpenCode: Clear cache and restart:

      rm -rf ~/.bun/install/cache/@plannotator
      

      Then in opencode.json:

      {
        "plugin": ["@plannotator/opencode@latest"]
      }
      

      Pi: Install or update the extension:

      pi install npm:@plannotator/pi-extension
      

      What's Changed

      • feat(ui): shortcut registry foundation by @backnotprop in #652
      • fix(ui): smart resolution + existence-validation for code-file paths by @backnotprop in #654
      • feat: add bun link support for local CLI development by @codythatsme in #656
      • fix(review): stop diff list re-rendering on every comment keystroke by @backnotprop in #660
      • fix(session-log): detect ghost sessions from /clear to resolve correct log by @backnotprop in #661
      • feat(marketing): include co-authors and community contributors on homepage by @backnotprop in #662
      • fix(remote): notify URL in remote mode for OpenCode and Pi by @backnotprop in #663
      • feat(ui): 49 themes with matched syntax highlighting + preview mode by @backnotprop in #664
      • feat(skills): add plannotator-setup-goal skill by @backnotprop in #665

      New Contributors

      Contributors

      @codythatsme contributed bun link support for local development (#656), making it straightforward for contributors to test against a live source checkout without rebuilding the compiled binary. First contribution to the project.

      Community members whose reports drove fixes in this release:

      Full Changelog : v0.19.7...v0.19.8

    26. 🔗 r/Harrogate Bilton Triangle Development rss

      Hi all, I remember a while ago there being some chat regarding developing the Bilton triangle farmer's field for housing. Is anyone aware of any updates? Thanks!

      submitted by /u/Leading_Roof407
      [link] [comments]