

to read (pdf)

  1. I don't want your PRs anymore
  2. JitterDropper | OALABS Research
  3. DomainTools Investigations | DPRK Malware Modularity: Diversity and Functional Specialization
  4. EXHIB: A Benchmark for Realistic and Diverse Evaluation of Function Similarity in the Wild
  5. Neobrutalism components - Start making neobrutalism layouts today

  1. April 29, 2026
    1. 🔗 r/wiesbaden Finding new friends, 25-36± rss

      Hello, I'm 34, single, and new to Wiesbaden. Since my friends barely leave the house anymore thanks to kids, I'm looking for young, active people who'd like to meet up regularly. Not so easy in WI :( Bumble BFF and Gemeinsam Erleben unfortunately didn't work for me at all, and randomly starting a dance class or something similar isn't really my thing either.

      I love being out and about and just want to get out and celebrate more often again, go to street festivals and bars, or simply take a walk. I'm just as happy chilling at home, having a games night, cooking something tasty, and starting a film/series marathon. I'm sporty and can get excited about most things.

      Would be cool to meet like-minded people, preferably around my age, give or take 😁

      submitted by /u/M0zep5

    2. 🔗 backnotprop/plannotator v0.19.3 release

      Follow @plannotator on X for updates


      Missed recent releases?

      Release | Highlights
      ---|---
      v0.19.2 | Stacked PR review, source line numbers in feedback, diff type dialog re-show, ghost dot removal, docs cleanup
      v0.19.1 | Hook-native annotation, custom base branch, OpenCode workflow modes, quieter plan diffs, anchor navigation
      v0.19.0 | Code Tour agent, GitHub-flavored Markdown, copy table as Markdown/CSV, flexible Pi planning mode, session-log ancestor-PID walk
      v0.18.0 | Annotate focus & wide modes, OpenCode origin detection, word-level inline plan diff, Markdown content negotiation, color swatches
      v0.17.10 | HTML and URL annotation, loopback binding by default, Safari scroll fix, triple-click fix, release pipeline smoke tests
      v0.17.9 | Hotfix: pin Bun to 1.3.11 for macOS binary codesign regression
      v0.17.8 | Configurable default diff type, close button for sessions, annotate data loss fix, markdown rendering polish
      v0.17.7 | Fix "fetch did not return a Response" error in OpenCode web/serve modes
      v0.17.6 | Bun.serve error handlers for diagnostic 500 responses, install.cmd cache fix
      v0.17.5 | Fix VCS detection crash when p4 not installed, install script cache path fix
      v0.17.4 | Vault browser merged into Files tab, Kanagawa themes, Pi idle session tool fix
      v0.17.3 | Sticky lane repo/branch badge overflow fix


      What's New in v0.19.3

      v0.19.3 makes feedback messages fully configurable and cleans up the stacked PR selector for teams working with long PR chains. Three PRs, one from an external contributor.

      Configurable Feedback Messages

      Every message Plannotator sends to your agent is now customizable through ~/.plannotator/config.json. Plan approvals, plan denials, review approvals, review feedback suffixes, and annotation feedback all flow through a shared prompt pipeline with {{variable}} template interpolation.

      The config supports generic overrides that apply to all runtimes, plus per-runtime overrides for cases where Claude Code, OpenCode, and Pi need different phrasing. A four-level resolution order (runtime-specific, generic, runtime built-in default, global default) means you can be as granular or as broad as you want. Users who don't touch the config get identical behavior to previous versions.
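
      A rough sketch of how that four-level resolution and {{variable}} interpolation might behave (key names and config structure here are illustrative guesses, not the actual schema):

```python
import re

# Hypothetical sketch of the four-level message resolution order.
def resolve_message(key, runtime, config, builtin_defaults, global_default):
    for candidate in (
        config.get(runtime, {}).get(key),            # 1. runtime-specific override
        config.get("generic", {}).get(key),          # 2. generic override
        builtin_defaults.get(runtime, {}).get(key),  # 3. runtime built-in default
    ):
        if candidate is not None:
            return candidate
    return global_default                            # 4. global default

def interpolate(template, variables):
    # Replace {{name}} placeholders; unknown names are left untouched.
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(variables.get(m.group(1), m.group(0))),
                  template)

config = {"generic": {"plan_approved": "Plan approved: {{plan_title}}"}}
builtins = {"opencode": {"plan_approved": "Approved."}}
msg = resolve_message("plan_approved", "opencode", config, builtins, "OK")
print(interpolate(msg, {"plan_title": "Refactor auth"}))  # → Plan approved: Refactor auth
```

      Note how the generic override beats the runtime built-in default, matching the stated order.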

      This started with @oorestisime's PR adding configurable review approval prompts (#561), which was then expanded to cover all 17 hardcoded feedback strings across the hook, OpenCode, and Pi integrations (#627). The full pipeline includes 72 tests (55 unit, 17 integration) covering template resolution, config merging, backward compatibility, and end-to-end disk-to-output flow.

      A new documentation page at Custom Feedback walks through the config format, available template variables, and a context-anchoring pattern contributed by @aviadshiber.

      Hide Merged PRs in Stacked PR Selector

      When reviewing a long chain of stacked PRs, merged PRs would show up alongside open ones in the stack tree and PR selector. For teams that iterate through a stack over several sessions, this made it harder to see which PRs still needed review.

      A "Hide merged" toggle now appears in both the stack tree popover and the PR selector dropdown. When enabled, merged PRs are removed from the list and a summary count shows how many are hidden. When visible, merged PRs appear dimmed with a strikethrough title and a "merged" badge, and they're not clickable. The toggle state persists via cookie across sessions. Tree indentation was also tightened to 2px per level to prevent horizontal overflow on deep stacks (10+ nodes).


      Install / Update

      macOS / Linux:

      curl -fsSL https://plannotator.ai/install.sh | bash
      

      Windows:

      irm https://plannotator.ai/install.ps1 | iex
      

      Claude Code Plugin: Run /plugin in Claude Code, find plannotator, and click "Update now".

      OpenCode: Clear cache and restart:

      rm -rf ~/.bun/install/cache/@plannotator
      

      Then in opencode.json:

      {
        "plugin": ["@plannotator/opencode@latest"]
      }
      

      Pi: Install or update the extension:

      pi install npm:@plannotator/pi-extension
      

      What's Changed

      • feat(review): add configurable approval prompts by @oorestisime in #561
      • feat(review): hide/de-emphasize merged PRs in stacked PR selector by @backnotprop in #626
      • feat(feedback): configurable plan, annotation, and review feedback by @backnotprop in #627

      Contributors

      @oorestisime filed #558 requesting commit-on-approve for code review sessions, then contributed #561 adding configurable review approval prompts. That PR seeded the broader feedback customization pipeline shipped in this release.

      Community members whose issues shaped this release:

      • @JohannesKlauss filed #624 requesting customizable feedback prompts for the build agent handoff
      • @leoreisdias filed #625 requesting that merged PRs be hidden from the stacked PR selector, with a detailed description of the 10+ PR workflow that motivated the change
      • @aviadshiber contributed a context-anchoring prompt pattern featured in the custom feedback documentation

      Full Changelog : v0.19.2...v0.19.3

    3. 🔗 r/Yorkshire Collapsing Labour vote in Barnsley sees some choosing between Greens and Reform rss
    4. 🔗 fareedfauzi/PseudoNote v1.2.1 release
      • Bug fixes
      • Add few features include "Copy function tree" and "Copy xref tree Global Variable"
    5. 🔗 r/wiesbaden Bernd Zehner deletes a third of the reviews at his restaurant (opened in February) rss
    6. 🔗 anthropics/claude-code v2.1.123 release

      What's changed

      • Fixed OAuth authentication failing with a 401 retry loop when CLAUDE_CODE_DISABLE_EXPERIMENTAL_BETAS=1 is set
  2. April 28, 2026
    1. 🔗 r/york My bike was stolen on campus west near courtyard on 26/4 between 7pm and 11pm rss

      My bike was stolen on campus west near courtyard on 26/4 between 7pm and 11pm. Any information would be greatly appreciated, as I require my bike for work.

      submitted by /u/MidnightFar3298

    2. 🔗 anthropics/claude-code v2.1.122 release

      What's changed

      • Added ANTHROPIC_BEDROCK_SERVICE_TIER environment variable to select a Bedrock service tier (default, flex, or priority), sent as the X-Amzn-Bedrock-Service-Tier header
      • Pasting a PR URL into the /resume search box now finds the session that created that PR (GitHub, GitHub Enterprise, GitLab, and Bitbucket)
      • /mcp now shows claude.ai connectors hidden by a manually-added server with the same URL, with a hint to remove the duplicate
      • Clarified the /mcp message shown when an MCP server is still unauthorized after the browser sign-in flow
      • OpenTelemetry: numeric attributes on api_request/api_error log events are now emitted as numbers, not strings
      • OpenTelemetry: added claude_code.at_mention log event for @-mention resolution
      • Fixed /branch producing forks that fail with "tool_use ids were found without tool_result blocks" when the source session contained entries from rewound timelines
      • Fixed /model not showing the Effort option for Bedrock application inference profile ARNs, and those ARNs not receiving output_config.effort
      • Fixed Vertex AI / Bedrock returning invalid_request_error: output_config: Extra inputs are not permitted on session-title generation and other structured-output queries
      • Fixed Vertex AI count_tokens endpoint returning 400 errors for users behind proxy gateways
      • Fixed spinnerTipsOverride.excludeDefault not suppressing the time-based spinner tips
      • Fixed ToolSearch missing MCP tools that connected after session start in nonblocking mode
      • Fixed !exit / !quit in bash mode terminating the CLI instead of running as a shell command
      • Fixed images sent to newer models being resized to 2576px per side instead of the correct 2000px maximum
      • Fixed remote control session idle status redrawing twice per second, which could flood tmux -CC control pipes and pause the terminal
      • Fixed assistant messages appearing blank in some sessions due to a stale view preference
      • Fixed a malformed hooks entry in settings.json no longer invalidating the entire file
      • Voice mode: keybindings bound to Caps Lock now show an error since terminals don't deliver Caps Lock as a key event
    3. 🔗 oxigraph/oxigraph v0.5.8 release
      • HTTP server: add /sparql path that serves both SPARQL queries and updates.
      • GeoSPARQL: add a significant set of new functions.
      • RocksDB backend: fixes some transactions where reading-your-own-writes was not working correctly.
    4. 🔗 r/Leeds Wheelchair accessible taxi services rss

      Hey everyone, I’m a full time wheelchair user from London. I have quadriplegic cerebral palsy so can’t walk at all. I’m looking to study electronic music production at Leeds Conservatoire in September of this year and have to travel up to Leeds for accommodation viewings on Thursday. I was wondering if anyone could give me some taxi companies that do/may provide wheelchair accessible taxi services with full ramp access?

      Uber, at least in London is a bit hit and miss so that’s why I’m asking for taxi services rather than just using Uber. I also wanted to ask, is there a taxi rank at Leeds station and do they have wheelchair accessible vehicles there?

      Thanks in advance and feel free to add any tips or experiences of travelling in Leeds as a wheelchair user. Even if you are able bodied, please let me know if there’s anything you think I should bear in mind while navigating the city in general.

      Thanks again everyone!

      submitted by /u/LORDLUK3

    5. 🔗 @binaryninja@infosec.exchange To help us track down bugs faster, 5.3 introduces opt-in crash reporting. This mastodon

      To help us track down bugs faster, 5.3 introduces opt-in crash reporting. This feature is disabled by default in paid versions and enabled by default in our free version. Either way, you can change the setting whenever you want. Details in our latest blog post: https://binary.ninja/2026/04/13/binary-ninja-5.3-jotunheim.html#crash-reporting

    6. 🔗 r/york Bees on Gillygate rss

      Hi!

      I don’t suppose anyone saw the swarm of bees all over Gillygate around the Tesco today?

      Just wondered if anyone knows if it’s cleared up or what caused it?

      This was about 13:45, and apparently they weren’t there in the morning.

      submitted by /u/SadAndGloomy

    7. 🔗 r/reverseengineering Building a perfect clone of 1993 game SimTower (via RE) rss
    8. 🔗 r/LocalLLaMA Something from Mistral (Vibe) tomorrow rss

      Model(s) or tool upgrade/new tool?

      Source Tweet: https://xcancel.com/mistralvibe/status/2049147645894021147#m

      submitted by /u/pmttyji

    9. 🔗 Locklin on science Bouncing droplet “quantum mechanics” rss

      I was always a fan of de Broglie and Bohm’s “pilot wave” idea. This is a fully deterministic theory of quantum mechanics which physicists don’t like because “le hidden variables” (also it isn’t yet relativistic I guess). The original pilot wave idea didn’t work out because de Broglie couldn’t calculate scattering cross sections, though Bohm […]

    10. 🔗 r/Leeds nightclub interview?? rss

      Hey guys! I have an interview for a bartender position at Backrooms nightclub tomorrow and I’ve never had an interview in a club but I really wanna work there bc I love the whole vibe of clubs and want to get into bartending. What kind of things do they ask you for these roles?? If anyone has any personal experience too it would be massively appreciated

      submitted by /u/WhereasFar9745

    11. 🔗 r/reverseengineering How I reverse-engineered a SQLite WAL database inside a VS Code extension - custom merge engine, header byte patching, and protobuf decoding without a schema rss
    12. 🔗 r/york Does anyone know if there is an update regarding foss islands chimney? rss

      I noticed the temporary fencing looks to now be permanent, which is a shame. It was a handy shortcut to Halfords and vice versa!

      submitted by /u/UnhingedSerialKiller

    13. 🔗 r/reverseengineering AI solved our CTF in 6min rss
    14. 🔗 r/LocalLLaMA meantime on r/vibecoding rss

      words of wisdom

      submitted by /u/jacek2023

    15. 🔗 r/LocalLLaMA Qwen 3.6 27B BF16 vs Q4_K_M vs Q8_0 GGUF evaluation rss

      Evaluated Qwen 3.6 27B across BF16, Q4_K_M, and Q8_0 GGUF quant variants with llama-cpp-python using Neo AI Engineer. Benchmarks used:

      • HumanEval: code generation
      • HellaSwag: commonsense reasoning
      • BFCL: function calling

      Total samples:

      • HumanEval: 164
      • HellaSwag: 100
      • BFCL: 400

      Results: BF16

      • HumanEval: 56.10% 92/164
      • HellaSwag: 90.00% 90/100
      • BFCL: 63.25% 253/400
      • Avg accuracy: 69.78%
      • Throughput: 15.5 tok/s
      • Peak RAM: 54 GB
      • Model size: 53.8 GB

      Q4_K_M

      • HumanEval: 50.61% 83/164
      • HellaSwag: 86.00% 86/100
      • BFCL: 63.00% 252/400
      • Avg accuracy: 66.54%
      • Throughput: 22.5 tok/s
      • Peak RAM: 28 GB
      • Model size: 16.8 GB

      Q8_0

      • HumanEval: 52.44% 86/164
      • HellaSwag: 83.00% 83/100
      • BFCL: 63.00% 252/400
      • Avg accuracy: 66.15%
      • Throughput: 18.0 tok/s
      • Peak RAM: 42 GB
      • Model size: 28.6 GB

      What stood out: Q4_K_M looks like the best practical variant here. It keeps BFCL almost identical to BF16, drops about 5.5 points on HumanEval, and is still only 4 points behind BF16 on HellaSwag. The tradeoff is pretty good:

      • 1.45x faster than BF16
      • 48% less peak RAM
      • 68.8% smaller model file
      • nearly identical function calling score
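
      Those tradeoff figures follow directly from the tables above; a quick recomputation of the Q4_K_M vs BF16 numbers:

```python
# Recompute the Q4_K_M vs BF16 tradeoff figures from the reported benchmarks.
bf16 = {"tok_s": 15.5, "ram_gb": 54, "size_gb": 53.8}
q4   = {"tok_s": 22.5, "ram_gb": 28, "size_gb": 16.8}

speedup   = q4["tok_s"] / bf16["tok_s"]           # 1.45x faster
ram_drop  = 1 - q4["ram_gb"] / bf16["ram_gb"]     # ~48% less peak RAM
size_drop = 1 - q4["size_gb"] / bf16["size_gb"]   # ~68.8% smaller model file

print(f"{speedup:.2f}x, {ram_drop:.0%} RAM, {size_drop:.1%} size")
# → 1.45x, 48% RAM, 68.8% size
```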

      Q8_0 was a bit underwhelming in this run. It improved HumanEval over Q4_K_M by ~1.8 points, but used 42 GB RAM vs 28 GB and was slower. It also scored lower than Q4_K_M on HellaSwag in this eval. For local/CPU deployment, I would probably pick Q4_K_M unless the workload is heavily code-generation focused. For maximum quality, BF16 still wins. Evaluation setup:

      • GGUF via llama-cpp-python
      • n_ctx: 32768
      • checkpointed evaluation
      • HumanEval, HellaSwag, and BFCL all completed
      • BFCL had 400 function calling samples

      This evaluation was done using Neo AI Engineer, which built the GGUF eval setup, handled checkpointed runs, and consolidated the benchmark results. I manually reviewed the outcome as well. The complete case study, with benchmarking results, approach, and code snippets, is in the comments below 👇

      submitted by /u/gvij

    16. 🔗 livestorejs/livestore LiveStore DevTools artifact 0.4.0-dev.22 (dt-20260428-05ca43eb) release

      Public-safe build id: dt-20260428-05ca43eb
      DevTools version: 0.4.0-dev.22
      Package: @livestore/devtools-vite
      Tarball SHA-256: 37552cd2670decb442a124c7695221eca673ac5186ad6b5384ee24d434a16f6c

      This release contains sanitized package artifacts only. Private source provenance is stored separately.

    17. 🔗 backnotprop/plannotator v0.19.2 release

      Follow @plannotator on X for updates


      Missed recent releases?

      Release | Highlights
      ---|---
      v0.19.1 | Hook-native annotation, custom base branch, OpenCode workflow modes, quieter plan diffs, anchor navigation
      v0.19.0 | Code Tour agent, GitHub-flavored Markdown, copy table as Markdown/CSV, flexible Pi planning mode, session-log ancestor-PID walk
      v0.18.0 | Annotate focus & wide modes, OpenCode origin detection, word-level inline plan diff, Markdown content negotiation, color swatches
      v0.17.10 | HTML and URL annotation, loopback binding by default, Safari scroll fix, triple-click fix, release pipeline smoke tests
      v0.17.9 | Hotfix: pin Bun to 1.3.11 for macOS binary codesign regression
      v0.17.8 | Configurable default diff type, close button for sessions, annotate data loss fix, markdown rendering polish
      v0.17.7 | Fix "fetch did not return a Response" error in OpenCode web/serve modes
      v0.17.6 | Bun.serve error handlers for diagnostic 500 responses, install.cmd cache fix
      v0.17.5 | Fix VCS detection crash when p4 not installed, install script cache path fix
      v0.17.4 | Vault browser merged into Files tab, Kanagawa themes, Pi idle session tool fix
      v0.17.3 | Sticky lane repo/branch badge overflow fix
      v0.17.2 | Supply-chain hardening, sticky toolstrip and badges, overlay scrollbars, external annotation highlighting, Conventional Comments


      What's New in v0.19.2

      v0.19.2 adds stacked PR review, source line numbers in exported feedback, and several UX fixes. Five PRs, one from a first-time contributor.

      Code Review

      Stacked PR Review

      Reviewing a PR that belongs to a stack used to mean reviewing it in isolation. You could see the diff for that one branch, but not how it fit into the larger chain. Switching to a different PR in the stack meant closing the review and starting a new session.

      Stacked PR review keeps you in a single session across every PR in the stack. A stack tree popover shows the full chain with clickable navigation. Each PR gets its own worktree checkout, so switching PRs recomputes the diff against the correct base without mixing changes between layers. Two scope modes let you toggle between viewing a single PR's changes (layer) and all accumulated changes from the default branch (full-stack).
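
      In effect, the scope toggle changes which base ref each diff is computed against. A minimal sketch of that selection logic (function and field names are my own, not Plannotator's internals):

```python
# Hypothetical sketch: pick the diff base for a PR branch in a stack.
# "layer" diffs against the PR's parent branch in the chain;
# "full-stack" diffs against the repository default branch.
def diff_base(pr_branch, parents, default_branch, scope):
    if scope == "layer":
        # The bottom of the stack has no parent, so fall back to default.
        return parents.get(pr_branch, default_branch)
    if scope == "full-stack":
        return default_branch
    raise ValueError(f"unknown scope: {scope}")

parents = {"feat-2": "feat-1", "feat-3": "feat-2"}  # feat-1 sits on main
print(diff_base("feat-2", parents, "main", "layer"))       # → feat-1
print(diff_base("feat-2", parents, "main", "full-stack"))  # → main
```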

      Multi-PR posting lets you submit review feedback to multiple PRs at once. A confirmation dialog shows exactly where comments will go before posting to GitHub or GitLab, with parallel submission and partial-failure retry. Annotations from full-stack diffs can't be mapped to a single PR's line numbers, so they're surfaced as copyable markdown rather than silently dropped.

      A new "Branch" option in the default diff type setting (and first-run dialog) gives users who work primarily with committed changes a one-click default.

      Source Line Numbers in Exported Feedback

      When Claude received annotation feedback, it got the block content and the highlighted text, but it had no way to locate the annotation in the source file. For large documents with repeated headings or similar paragraphs, this ambiguity forced extra round-trips.

      Exported annotations now include source line numbers. Single-line blocks show (line 42), multi-line blocks show (lines 10–14). Code blocks account for fence lines when computing ranges. Files with YAML frontmatter are offset-corrected so line numbers match the original file, not the parsed output.
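
      A minimal sketch of the kind of frontmatter offset correction involved (my own toy version, not the shipped code):

```python
def frontmatter_offset(text):
    # Lines consumed by a leading YAML frontmatter block, including both
    # '---' fence lines, so line numbers computed against the parsed body
    # can be mapped back to positions in the original file.
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return 0
    for i, line in enumerate(lines[1:], start=2):
        if line.strip() == "---":
            return i
    return 0  # unterminated fence: treat as ordinary content

doc = "---\ntitle: Example\n---\n# Heading\nbody"
offset = frontmatter_offset(doc)   # 3 lines of frontmatter
print(1 + offset)  # parsed line 1 ('# Heading') is source line 4
```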

      For converted content (HTML files rendered through Turndown, URLs fetched via Jina Reader), the feedback includes a caveat that line numbers refer to the converted markdown rather than the original source. When viewing a linked HTML document within a plan, the conversion flag is derived per-document so mixed collections of markdown and HTML files each get the correct label.

      UX

      Diff Type Dialog Re-Presented

      Many users who set up Plannotator before v0.17.8 never saw the "Committed" option (branch diff vs. the default branch) because the first-run dialog only showed at install time. Users were asking how to set committed changes as their default without realizing the option existed.

      The dialog is now re-presented to existing users with clearer descriptions, a wider layout with a 60/40 split, and a hover-to-zoom preview of the toolbar dropdown. The dialog reminds users they can switch views anytime during a review. Existing preferences are preserved — this only re-shows the picker, it doesn't reset anyone's choice.

      Options Menu Ghost Dot Removed

      The pulsing notification dot on the Options menu was meant to flag new settings after an update. In practice, the dot appeared on every session and users couldn't figure out how to dismiss it. The entire new-settings-hint system has been removed. Settings changes are communicated through release notes instead.

      Additional Changes

      • Docs: toolbar inventory updated. Documentation references to "Insert" and "Replace" annotation types have been scrubbed to match the shipped UI, which uses Delete, Comment, Quick Label, Looks Good, Global Comment, and Copy. — #618 by @vxio, closing #617
      • Docs: OpenCode plugin configuration. Clarified plugin setup instructions for OpenCode users. — commit 33f409a

      Install / Update

      macOS / Linux:

      curl -fsSL https://plannotator.ai/install.sh | bash
      

      Windows:

      irm https://plannotator.ai/install.ps1 | iex
      

      Claude Code Plugin: Run /plugin in Claude Code, find plannotator, and click "Update now".

      OpenCode: Clear cache and restart:

      rm -rf ~/.bun/install/cache/@plannotator
      

      Then in opencode.json:

      {
        "plugin": ["@plannotator/opencode@latest"]
      }
      

      Pi: Install or update the extension:

      pi install npm:@plannotator/pi-extension
      

      What's Changed

      • feat: stacked PR review — PR switching, scope toggling, multi-PR posting by @backnotprop in #620
      • feat(plan,annotate): include source line numbers in exported feedback by @backnotprop in #623
      • docs: scrub Insert/Replace from docs to match shipped UI by @vxio in #618
      • fix: remove ghost dot on Options menu (new-settings-hint system) by @backnotprop in commit 7ab2d8f
      • fix: re-show diff type setup dialog with clearer options and toolbar hint by @backnotprop in commits aaad89e, 03d4e8b

      New Contributors

      • @vxio made their first contribution in #618

      Community

      @vxio noticed the docs still referenced Insert and Replace annotation types that were removed from the UI, filed #617, and contributed the fix in #618. First contribution to the project.

      Full Changelog : v0.19.1...v0.19.2

    18. 🔗 r/Leeds Firstbus app update shenanigans rss

      If you use the Firstbus app for tickets, be warned, they are rolling out an update. The update has gone so well that they have a banner on the website pointing to a separate FAQ specifically for the update with a big list of reasons why you will probably have to call them to get access to your tickets...

      https://www.firstbus.co.uk/help-support/help-and-support/first-bus-app-update

      submitted by /u/awesomeweles

    19. 🔗 r/reverseengineering Example structure for evidence-based vulnerability reports rss
    20. 🔗 r/reverseengineering DeepZero - Automated Vulnerability Research rss
    21. 🔗 r/LocalLLaMA I'm done with using local LLMs for coding rss

      I think I gave it a fair shot over the past few weeks, forcing myself to use local models for non-work tech tasks. I use Claude Code at my job, so that's what I'm comparing to.

      I used Qwen 27B and Gemma 4 31B, which are considered the best local models below the multi-hundred-billion-parameter tier. I also tried multiple agentic apps. My verdict is that the loss of productivity is not worth the advantages.

      I'll give a brief overview of my main issues.

      Shitty decision-making and tool-calls

      This is a big one. Claude seems to read my mind in most cases, but Qwen 27B makes me give it the Carlo Ancelotti eyebrow more often than not. The LLM just isn't proceeding how I would proceed.

      I was mainly using local LLMs for OS/Docker tasks. Is this considered much harder than coding or something?

      To give an example, tasks like "Here's a GitHub repo, I want you to Dockerize it." I'd expect any dummy to follow the README's instructions and execute them. (EDIT: full prompt here: https://reddit.com/r/LocalLLaMA/comments/1sxqa2c/im_done_with_using_local_llms_for_coding/oiowcxe/ )

      Issues like a 'docker build' that takes longer than the default timeout, which sends the model off on unrelated follow-ups (as if the task had failed) instead of checking whether it's still running. I had Qwen repeat the installation commands on the host (also Ubuntu) to see what would happen. It started assuming "it must have failed because of torchcodec" just like that, pulling this entirely out of its ass, instead of checking the output.

      I tried to meet the models half-way, having this in AGENTS.md: "If you run a Docker build command, or any other command that you think will have a lot of debug output, then do the following: 1. run it in a subagent, so we don't pollute the main context, 2. pipe the output to a temporary file, so we can refer to it later using tail and grep." And yet twice in a row I came back to a broken session with 250k input tokens because the LLM was reading all the output of 'docker build' or 'docker compose up'.

      I know there are huge AGENTS.md files that treat the LLM like a programmable robot, giving it long, elaborate protocols because they don't expect decent self-guidance; I didn't try those, tbh. And none of them go into details like not reading the output of 'docker build'. I stuck to the default prompts of the agentic apps I used, plus a few guidelines in my AGENTS.md.

      Performance

      Not only are the LLMs slow, but no matter which app I'm using, the prompt cache frequently seems to break. Translation: long pauses where nothing seems to happen.

      For Claude Code specifically, this is made worse by the fact that it doesn't print the LLM's output to the user. It's one of the reasons I often preferred Qwen Code. It's very frustrating when not only is the outcome looking bad, but I'm not getting rapid feedback.

      I'm not learning anything

      Other than changing the URL of the Chat Completions server, there's no difference between using a local LLM and a cloud one, just more grief.

      There's definitely experience to be gained in learning how to prompt an LLM. But I think coding tasks are just too hard for the small ones; it's like playing a game on Hardcore. I'm looking for a sweet spot on the learning curve, and this is just not worth it.

      What now

      For my coding and OS stuff, I'm gonna put some money on OpenRouter and exclusively use big boys like Kimi. If one model pisses me off, move on to the next one. If I find a favorite, I'll sign up to its yearly plan to save money.

      I'll still use small local models for automation, basic research, and language tasks. I've had fun writing basic automation skills/bots that run stuff on my PC, and these will always be useful.

      I also love using local LLMs for writing or text games. Speed isn't an issue there, the prompt cache's always being hit. Technically you could also use a cloud model for this too, but you'd be paying out the ass because after a while each new turn is sending like 100k tokens.

      Thanks for reading my blog.

      submitted by /u/dtdisapointingresult

    22. 🔗 Jessitron Communication is hard, but sometimes I can fix it. rss

      We used to type code to tell the computer what to do. When that got tedious, we made libraries and functions until the code was more communicative.

      Now I type English words to tell the agent what to tell the computer what to do. Sometimes that gets tedious, and then I need to find new ways to make it easier.

      Here’s an example.

      Iterating could be easier.

      The work: I’m getting Claude to build a program that turns Claude conversation logs into a vertical HTML comic. As we iterate on this, I ask it a lot of questions about the output. This way, I learn something about the problem domain (how Claude Code records conversations). And then I get it to tweak the output to my liking. In the example above, I wondered where the Background command "Start dev server on alternate ports" notification came from, so I asked Claude how I could know. To ask it, I had to cut and paste the text from the HTML, and then Claude had to grep the HTML to see what I was talking about, and also grep the JSONL to find the input. What if later, a very similar message appeared? It couldn’t tell exactly what I was talking about. I can’t just point to the UI.

      This wasn't the first time I struggled to refer to a panel in the comic. This time, my frustration served as an alarm: do something about it, Jess. There has to be a better way to tell it which panel I'm talking about.

      When communication gets difficult, that’s a signal. I can change this.

      So I made it make a way to point to the UI.

      In this case, I asked Claude to add a reference tag to each panel. The reference tag for each panel contains the line number (that was its idea) and filename (that was my idea) of the JSONL line represented by this panel. I push ‘r’ to toggle whether these reference tags show (my idea). When I click one, the value is copied (its idea).
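
      Tags like that are simple to model; a toy version (helper names are mine, and the exact format is inferred from the post):

```python
# Hypothetical helpers for panel reference tags that pair a JSONL
# filename with a line number, e.g. "episode-8-before:L63".
def format_ref(filename, line_no):
    return f"{filename}:L{line_no}"

def parse_ref(tag):
    name, sep, line = tag.rpartition(":L")
    if not sep or not line.isdigit():
        raise ValueError(f"not a reference tag: {tag}")
    return name, int(line)

print(format_ref("episode-8-before", 63))  # → episode-8-before:L63
print(parse_ref("episode-8-before:L63"))   # → ('episode-8-before', 63)
```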

      the html comic with references.

      Now I can ask the same question more succinctly: How can I find out where episode-8-before:L63 came from?

      Claude understood and added a hover effect that highlights the originating bash tool call.

      That hover effect is OK; I used it a few times. Those reference tags are gold! I've used them a dozen times already, and development is smoother for it. Claude can find the panel I’m talking about quickly both in the input JSONL and the output HTML. Our communication is streamlined.

      This was a great idea. Iterating is much easier now!

      I am in the loop and on the loop.

      There are (at least) two feedback loops running here. One is the development loop, with Claude doing what I ask and then me checking whether that is indeed what I want. Here, I’m a human in the loop with the AI. This works well since we’re prototyping, learning the domain and discovering what output I want.

      Then there’s a meta-level feedback loop, the “is this working?” check when I feel resistance. Frustration, tedium, annoyance: these feelings are a signal to me that maybe this work could be easier. I step back and think about how the AI could work more accurately and smoothly. Annie Vella called this the “middle loop,” and Kief Morris renamed it “human on the loop.”

      Here, I’m both in the development loop with the AI, and I’m “on the loop” as a thoughtful collaborator, smoothing the development loop when it gets rough.

      Resistance will be assimilated.

      As developers using software to build software, we have the potential to mold our own work environment. With AI making software change superfast, changing our program to make debugging easier pays off immediately. Also, this is fun!

    23. 🔗 r/wiesbaden Up the Eiserne Hand on a Vespa rss

      A short and sweet question for the moped/scooter riders.

      My girlfriend has to commute to Taunusstein and is considering switching to a scooter.

      So my question:

      Does a small 50cc Vespa/moped make it up the Eiserne Hand? At a reasonable speed, that is?

      Has anyone here done this?

      Thanks in advance for the answers :)

      submitted by /u/metaldog
      [link] [comments]

    24. 🔗 r/Leeds best tuna melt paninis? rss

      i’m craving a tuna melt really badly right now and i’m in the city centre for lunch tomorrow and want to get something good. does anyone have any recommendations? cheese, tuna, and toasted panini bread is all i need right now 🙏

      submitted by /u/Shoddy_Day
      [link] [comments]

    25. 🔗 anthropics/claude-code v2.1.121 release

      What's changed

      • Added alwaysLoad option to MCP server config — when true, all tools from that server skip tool-search deferral and are always available
      • Added claude plugin prune to remove orphaned auto-installed plugin dependencies; plugin uninstall --prune cascades
      • Added a type-to-filter search box to /skills so you can find a skill in long lists without scrolling
      • PostToolUse hooks can now replace tool output for all tools via hookSpecificOutput.updatedToolOutput (previously MCP-only)
      • Fullscreen mode: typing into the prompt no longer jumps scroll back to the bottom after you've scrolled up to read earlier output
      • Dialogs that overflow the terminal are now scrollable with arrow keys, PgUp/PgDn, home/end, and mouse wheel in both fullscreen and non-fullscreen modes
      • Clicking any line of a long URL that wraps across rows in fullscreen mode now opens the full URL
      • SDK and claude -p: CLAUDE_CODE_FORK_SUBAGENT=1 now works in non-interactive sessions
      • --dangerously-skip-permissions no longer prompts for writes to .claude/skills/, .claude/agents/, and .claude/commands/
      • /terminal-setup now enables iTerm2's "Applications in terminal may access clipboard" setting so /copy works, including from tmux
      • MCP servers that hit a transient error during startup now auto-retry up to 3 times instead of staying disconnected
      • The terminal tab session title is now generated in your configured language setting
      • Claude.ai connectors with the same upstream URL are now deduplicated instead of appearing as duplicates
      • Vertex AI: support X.509 certificate-based Workload Identity Federation (mTLS ADC)
      • Faster startup after upgrading: removed the Recent Activity panel from the release-notes splash
      • LSP diagnostic summaries now expand on click/ctrl+o and show the expand hint
      • SDK: mcp_authenticate now supports redirectUri for custom scheme completion and claude.ai connectors
      • OpenTelemetry: added stop_reason, gen_ai.response.finish_reasons, and user_system_prompt (gated behind OTEL_LOG_USER_PROMPTS) to LLM request spans
      • [VSCode] Voice dictation now respects the accessibility.voice.speechLanguage setting when no Claude Code language is configured
      • [VSCode] /context now opens a native token usage dialog
      • Fixed unbounded memory growth (multi-GB RSS) when processing many images in a session
      • Fixed /usage leaking up to ~2GB of memory on machines with large transcript histories
      • Fixed memory leak when long-running tools fail to emit a clear progress event
      • Fixed Bash tool becoming permanently unusable when the directory Claude was started in is deleted or moved mid-session
      • Fixed --resume crashing on startup in external builds
      • Fixed --resume failing on large sessions when a transcript line was corrupted by an unclean shutdown — the corrupt line is now skipped
      • Fixed thinking.type.enabled is not supported error when using Bedrock application inference profile ARNs
      • Fixed Microsoft 365 MCP OAuth failing with duplicate or unsupported prompt parameter
      • Fixed scrollback duplication when pressing Ctrl+L or triggering a redraw in non-fullscreen mode on tmux, GNOME Terminal, Windows Terminal, and Konsole
      • Fixed claude.ai MCP connectors silently disappearing when the connector-list fetch hits a transient auth error at startup
      • Fixed "Always allow" rules for built-in tools in remote sessions not surviving worker restarts
      • Fixed NO_PROXY not being respected for all HTTP clients when set via managed-settings.json under the native build
      • Fixed managed settings approval prompt exiting the session even when accepted — now applies settings and continues
      • Fixed /usage returning "rate limited" after a stale OAuth token — now refreshes automatically
      • Fixed invalid legacy enum values in settings.json invalidating the entire settings file
      • Fixed /usage dialog content being clipped when no-flicker mode is off
      • Fixed /focus showing "Unknown command" when the fullscreen renderer is off — now explains how to enable it
      • Fixed embedded grep/find/rg shell wrappers failing when the running binary is deleted mid-session — now falls back to installed tools
      • Reduced peak file descriptor usage during find in the Bash tool on large directory trees
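
      The PostToolUse change above can be sketched as a small hook script. This is a sketch under assumptions: `hookSpecificOutput.updatedToolOutput` is from the release note, the surrounding event-field names follow the existing hook conventions, and the redaction rule is invented purely for illustration:

      ```python
      #!/usr/bin/env python3
      """Sketch of a PostToolUse hook that rewrites tool output for any tool
      via hookSpecificOutput.updatedToolOutput (per the release note above)."""
      import json
      import sys

      def rewrite(event: dict) -> dict:
          # The hook event carries the tool's result; transform it before
          # the model sees it. The SECRET redaction is a made-up example.
          output = str(event.get("tool_response", ""))
          return {
              "hookSpecificOutput": {
                  "hookEventName": "PostToolUse",
                  "updatedToolOutput": output.replace("SECRET", "[redacted]"),
              }
          }

      if __name__ == "__main__":
          # Claude Code pipes the event in on stdin and reads JSON from stdout.
          print(json.dumps(rewrite(json.load(sys.stdin))))
      ```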
    26. 🔗 Mitchell Hashimoto Ghostty Is Leaving GitHub rss
      (empty)
    27. 🔗 Armin Ronacher Before GitHub rss

      GitHub was not the first home of my Open Source software. SourceForge was.

      Before GitHub, I had my own Trac installation. I had Subversion repositories, tickets, tarballs, and documentation on infrastructure I controlled. Later I moved projects to Bitbucket, back when Bitbucket still felt like a serious alternative place for Open Source projects, especially for people who were not all-in on Git yet.

      And then, eventually, GitHub became the place, and I moved all of it there.

      It is hard for me to overstate how important GitHub became in my life. A large part of my Open Source identity formed there. Projects I worked on found users there. People found me there, and I found other people there. Many professional relationships and many friendships started because some repository, issue, pull request, or comment thread made two people aware of each other.

      That is why I find what is happening to GitHub today so sad and so disappointing. I do not look at it as just the folks at Microsoft making product decisions I dislike. GitHub was part of the social infrastructure of Open Source for a very long time. For many of us, it was not merely where the code lived; it was where a large part of the community lived.

      So when I think about GitHub's decline, I also think about what came before it, and what might come after it. I have written a few times over the years about dependencies, and in particular about the problem of micro dependencies. In my mind, GitHub gave life to that phenomenon. It was something I definitely did not completely support, but it also made Open Source more inclusive. GitHub changed how Open Source feels, and later npm and other systems changed how dependencies feel. Put them together and you get a world in which publishing code is almost frictionless, consuming code is almost frictionless, and the number of projects in the world explodes.

      That has many upsides. But it is worth remembering that Open Source did not always work this way.

      A Smaller World

      Before GitHub, Open Source was a much smaller world. Not necessarily in the number of people who cared about it, but in the number of projects most of us could realistically depend on.

      There were well-known projects, maintained over long periods of time by a comparatively small number of people. You knew the names. You knew the mailing lists. You knew who had been around for years and who had earned trust. That trust was not perfect, and the old world had plenty of gatekeeping, but reputation mattered in a very direct way. We took pride (and got frustrated) when the Debian folks came and told us our licensing stuff was murky or the copyright headers were not up to snuff, because they packaged things up.

      A dependency was not just a package name. It was a project with a history, a website, a maintainer, a release process, a lot of friction, and often a place in a larger community. You did not add dependencies casually, because the act of depending on something usually meant you had to understand where it came from.

      Not all of this was necessarily intentional, but because these projects were comparatively large, they also needed to bring their own infrastructure. Small projects might run on a university server, and many of them were on SourceForge, but the larger ones ran their own show. They grouped together into larger collectives to make it work.

      We Ran Our Own Infrastructure

      My first Open Source projects lived on infrastructure I ran myself. There was a Trac installation, Subversion repositories, tarballs, documentation, and release files served from my own machines or from servers under my control. That was normal. If you wanted to publish software, you often also became a small-time system administrator. Georg and I ran our own collective for our Open Source projects: Pocoo. We shared server costs and the burden of maintaining Subversion and Trac, mailing lists and more.

      Subversion in particular made this "running your own forge" natural. It was centralized: you needed a server, and somebody had to operate it. The project had a home, and that home was usually quite literal: a hostname, a directory, a Trac instance, a mailing list archive.

      When Mercurial and Git arrived, they were philosophically the opposite. Both were distributed. Everybody could have the full repository. Everybody could have their own copy, their own branches, their own history. In principle, those distributed version control systems should have reduced the need for a single center. But despite all of this, GitHub became the center.

      That is one of the great ironies of modern Open Source. The distributed version control system won, and then the world standardized on one enormous centralized service for hosting it.

      What GitHub Gave Us

      It is easy now to talk only about GitHub's failures, of which there are currently many, but that would be unfair: GitHub was, and continues to be, a tremendous gift to Open Source.

      It made creating a project easy and it made discovering projects easy. It made contributing understandable to people who had never subscribed to a development mailing list in their life. It gave projects issue trackers, pull requests, release pages, wikis, organization pages, API access, webhooks, and later CI. It normalized the idea that Open Source happens in the open, with visible history and visible collaboration. And it was an excellent and reasonable default choice for a decade.

      But maybe the most underappreciated thing GitHub did was archival work: GitHub became a library. It became an index of a huge part of the software commons because even abandoned projects remained findable. You could find forks, and old issues and discussions all stayed online. For all the complaints one can make about centralization, that centralization also created discoverable memory. The leaders there once cared a lot about keeping GitHub available even in countries that were sanctioned by the US.

      I know what the alternative looks like, because I was living it. Some of my earliest Open Source projects are technically still on PyPI, but the actual packages are gone. The metadata points to my old server, and that server has long stopped serving those files.

      That was normal before the large platforms. A personal domain expired, a VPS was shut down, a developer passed away, and with them went the services they paid for. The web was once full of little software homes, and many of them are gone.[1]

      npm and the Dependency Explosion

      The micro-dependency problem was not just that people published very small packages. The hosted infrastructure of GitHub and npm made it feel as if there was no cost to create, publish, discover, install, and depend on them.

      In the pre-GitHub world, reputation and longevity were part of the dependency selection process almost by necessity, and it often required vendoring. Plenty of our early dependencies were just vendored into our own Subversion trees by default, in part because we could not even rely on other services being up when we needed them and because maintaining scripts that fetched them, in the pre-API days, was painful. The implied friction forced some reflection, and it resulted in different developer behavior. With npm-style ecosystems, the package graph can grow faster than anybody's ability to reason about it.

      The problem that this type of thinking created also meant that solutions had to be found along the way. GitHub helped compensate for the accountability problem and it helped with licensing. At one point, the newfound influx of developers and merged pull requests left a lot of open questions about what the state of licenses actually was. GitHub even attempted to rectify this with their terms of service.

      The thinking for many years was that if I am going to depend on some tiny package, I at least want to see its repository. I want to see whether the maintainer exists, whether there are issues, whether there were recent changes, whether other projects use it, whether the code is what the package claims it is. GitHub became part of the system that provides trust, and more recently it has even become one of the few systems that can publish packages to npm and other registries with trusted publishing.

      That means when trust in GitHub erodes, the problem is not isolated to source hosting. It affects the whole supply chain culture that formed around it.

      GitHub Is Slowly Dying

      GitHub is currently losing some of what made it feel inevitable. Maybe that's just the life and death of large centralized platforms: they always disappoint eventually. Right now people are tired of the instability, the product churn, the Copilot AI noise, the unclear leadership, and the feeling that the platform is no longer primarily designed for the community that made it valuable.

      Obviously, GitHub also finds itself in the midst of the agentic coding revolution and that causes enormous pressure on the folks over there. But the site has no leadership! It's a miracle that things are going as well as they are.

      For a while, leaving GitHub felt like a symbolic move mostly made by smaller projects or by people with strong views about software freedom. I definitely cringed when Zig moved to Codeberg! But I now see people with real weight and signal talking about leaving GitHub. The most obvious one is Mitchell Hashimoto, who announced that Ghostty will move. Where it will move is not clear, but it's a strong signal. But there are others, too. Strudel moved to Codeberg and so did Tenacity. Will they cause enough of a shift? Probably not, but I find myself on non-GitHub properties more frequently again compared to just a year ago.

      One can argue that this is good: it is healthy for Open Source to stop pretending that one company should be the default home of everything. Git itself was designed for a world with many homes.

      Dispersion Has a Cost

      Going back to many forges, many servers, many small homes, and many independent communities will increase decentralization, and in many ways it will force systems to adapt. This can restore autonomy and make projects less dependent on the whims of Microsoft leadership. It can also allow different communities to choose different workflows. What's happening in Pi's issue tracker currently is largely a result of GitHub's product choices not working in the present-day world of Open Source. It was built for engagement, not for maintainer sanity.

      It can also make the web forget again. I quite like software that forgets because it has a cleansing element. Maybe the real risk of loss will make us reflect more on actually taking advantage of a distributed version control system.

      But if projects move to something more akin to self-hosted forges, to their own self-hosted Mercurial or cgit servers, we run the risk of losing things that we don't want to lose. The code might be distributed in theory, but the social context often is not. Issues, reviews, design discussions, release notes, security advisories, and old tarballs are fragile. They disappear much more easily than we like to admit. Mailing lists, which carried a lot of this in earlier years, have not kept up with the needs of today, and are largely a user experience disaster.

      We Need an Archive

      As much as I like the idea of things fading out of existence, we absolutely need libraries and archives.

      Regardless of whether GitHub is here to stay or projects find new homes, what I would like to see is some public, boring, well-funded archive for Open Source software. Something with the power of an endowment or public funding to keep it afloat. Something whose job is not to win the developer productivity market but just to make sure that the most important things we create do not disappear.

      The bells and whistles can be someone else's problem, but source archives, release artifacts, metadata, and enough project context to understand what happened should be preserved somewhere that is not tied to the business model or leadership mood of a single company.

      GitHub accidentally became that archive because it became the center of Open Source activity. Once that no longer holds, we should not assume some magic archival function will emerge or that GitHub will continue to function as such. We have already seen what happens when project homes are just personal servers and good intentions, and we have seen what happened to Google Code and Bitbucket.

      I hope GitHub recovers, I really do, in part because a lot of history lives there and because the people still working on it inherited something genuinely important. But I no longer think it is responsible to let the continued memory of Open Source depend on GitHub remaining a healthy product.

      The world before GitHub had more autonomy and more loss, and in some ways, we're probably going to move back there, at least for a while. Whatever people want to start building next should try to keep the memory and lose the dependence. It should be easier to move projects, easier to mirror their social context, easier to preserve releases, and harder for one company's drift to become a cultural crisis for everyone else.

      I do not want to go back to the old web of broken tarball links and abandoned Trac instances. I also do not want Open Source to pretend that the last twenty years were normal or permanent. GitHub wrote a remarkable chapter of Open Source, and if that chapter is ending, the next one should learn from it and also from what came before.

      [1] This is also a good reminder that we rely so very much on the Internet Archive for many projects of the time.
  3. April 27, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-27 rss

      IDA Plugin Updates on 2026-04-27

      Activity:

      • binsync
        • 7ccbd7cc: Fix documentation links (#520)
      • capa
        • 87f0970d: Update README with dynamic capa heading (#3060)
      • ida-hcli
        • e8367b55: Merge pull request #195 from HexRaysSA/idat-isolation
        • 19e1a266: Add better venv detection
        • 112f24d8: Respect idapythonrc.py venv for python detection
        • d6c2c1aa: Isolate IDAUSR to avoid loading plugins during python version detection
      • python-elpida_core.py
        • 3829ddf5: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T23:54Z
        • 12409dce: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T23:32Z
        • 4a279d04: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T23:09Z
        • 00d970a9: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T22:45Z
        • af252e8e: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T22:25Z
        • 75dca59f: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T21:59Z
        • 4d2465d4: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T21:36Z
        • 06a6c379: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T21:11Z
        • 31158572: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T20:43Z
        • 811516f3: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T20:19Z
    2. 🔗 r/Leeds Scam companies to avoid rss

      I will attach pictures showing what to look out for. Additionally, be careful of anyone promising high pay. These people compliment you and essentially groom you into an extremely low-wage, door-to-door sales job, whilst promising greater things, e.g. quick career progression.

      submitted by /u/Fit-Librarian5590
      [link] [comments]

    3. 🔗 r/LocalLLaMA Microsoft Presents "TRELLIS.2": An Open-Source, 4b-Parameter, Image-To-3D Model Producing Up To 1536³ PBR Textured Assets, Built On Native 3D VAES With 16× Spatial Compression, Delivering Efficient, Scalable, High-Fidelity Asset Generation. rss

      Microsoft Presents "TRELLIS.2": An Open-Source, 4b-Parameter, Image-To-3D Model Producing Up To 1536³ PBR Textured Assets, Built On Native 3D VAES With 16× Spatial Compression, Delivering Efficient, Scalable, High-Fidelity Asset Generation. | TRELLIS.2 is a state-of-the-art large 3D generative model (4B parameters) designed for high-fidelity image-to-3D generation. It leverages a novel "field-free" sparse voxel structure termed O-Voxel to reconstruct and generate arbitrary 3D assets with complex topologies, sharp features, and full PBR materials.


      Link to the Paper:

      Link to the Code:

      Link to Try Out A Live Demo:

      submitted by /u/44th--Hokage
      [link] [comments]

    4. 🔗 r/york Where do parents buy baby/child car seats now that Paul Stride has closed? rss

      Where is there nearby that is good for buying car seats? Don’t know what you’ve got until it’s gone, Paul Stride was amazing and we now need a replacement for our 3 year old.

      submitted by /u/amusedfridaygoat
      [link] [comments]

    5. 🔗 MetaBrainz MusicBrainz Server update, 2026-04-27 rss

      This release mostly consists of a very substantial rewrite of the external links editor code, to make that section of our editors more efficient. While doing that we also fixed a few long-standing links editor bugs. While we kept this code in beta for quite a while so the community could help us catch most new bugs, do not hesitate to report any issues you might find.

      A new release of MusicBrainz Docker is also available that matches this update of MusicBrainz Server. See the release notes for update instructions.

      Thanks to rinsuki for having contributed to the code. Thanks to fabe56, HibiscusKazeneko and Lioncat6 for having reported bugs and suggested improvements. Thanks to Besnik, DenilsonSama, Khaled Salama, Marc Riera, ShimiDoki, Vaclovas Intas, cerberuzzz, coldified_, dddrnzv, dulijuong_artist, imgradeone, karpuzikov, mfmeulenbelt, salo.rock, smreo1590, syntariavoxmortem, wileyfoxyx and yyb987 for updating the translations. And thanks to all others who tested the beta version!

      The git tag is v-2026-04-27.0.

      Fixed Bug

      • [MBS-8570] - "This relationship already exists" error message does not go away when one duplicate URL is removed
      • [MBS-12032] - Adding a duplicate URL rel moves link to new section
      • [MBS-14307] - Wikipedia extracts are not displaying
      • [MBS-14309] - Can't click documentation/help links

      Improvement

      • [MBS-14279] - Support Amazon Belgium links
      • [MBS-14280] - Block archive.today, archive.is, archive.ph, archive.li, archive.fo, archive.md and archive.vn links

      Task

      • [MBS-11521] - Refactor error handling in the external links editor
      • [MBS-11889] - Refactor state handling in the external links editor
      • [MBS-13716] - Update React to v19
    6. 🔗 Simon Willison Tracking the history of the now-deceased OpenAI Microsoft AGI clause rss

      For many years, Microsoft and OpenAI's relationship has included a weird clause saying that, should AGI be achieved, Microsoft's commercial IP rights to OpenAI's technology would be null and void. That clause appeared to end today. I decided to try and track its expression over time on openai.com.

      OpenAI, July 22nd 2019 in Microsoft invests in and partners with OpenAI to support us building beneficial AGI (emphasis mine):

      OpenAI is producing a sequence of increasingly powerful AI technologies, which requires a lot of capital for computational power. The most obvious way to cover costs is to build a product, but that would mean changing our focus. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.

      But what is AGI? The OpenAI Charter was first published in April 2018 and has remained unchanged at least since this March 11th 2019 archive.org capture:

      OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

      Here's the problem: if you're going to sign an agreement with Microsoft that is dependent on knowing when "AGI" has been achieved, you need something a little more concrete.

      In December 2024 The Information reported the details (summarized here outside of their paywall by TechCrunch):

      Last year’s agreement between Microsoft and OpenAI, which hasn’t been disclosed, said AGI would be achieved only when OpenAI has developed systems that have the ability to generate the maximum total profits to which its earliest investors, including Microsoft, are entitled, according to documents OpenAI distributed to investors. Those profits total about $100 billion, the documents showed.

      So AGI is now whenever OpenAI's systems are capable of generating $100 billion in profit?

      In October 2025 the process changed to being judged by an "independent expert panel". In The next chapter of the Microsoft–OpenAI partnership:

      The agreement preserves key elements that have fueled this successful partnership—meaning OpenAI remains Microsoft’s frontier model partner and Microsoft continues to have exclusive IP rights and Azure API exclusivity until Artificial General Intelligence (AGI). [...]

      Once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel. [...]

      Microsoft’s IP rights to research, defined as the confidential methods used in the development of models and systems, will remain until either the expert panel verifies AGI or through 2030, whichever is first.

      OpenAI on February 27th, 2026 in Joint Statement from OpenAI and Microsoft:

      AGI definition and processes are unchanged. The contractual definition of AGI and the process for determining if it has been achieved remains the same.

      OpenAI today, April 27th 2026 in The next phase of the Microsoft OpenAI partnership (emphasis mine):

      • Microsoft will continue to have a license to OpenAI IP for models and products through 2032. Microsoft’s license will now be non-exclusive.
      • Microsoft will no longer pay a revenue share to OpenAI.
      • Revenue share payments from OpenAI to Microsoft continue through 2030, independent of OpenAI’s technology progress, at the same percentage but subject to a total cap.

      As far as I can tell "independent of OpenAI’s technology progress" is a declaration that the AGI clause is now dead. Here's The Verge coming to the same conclusion: The AGI clause is dead.

      My all-time favorite commentary on OpenAI's approach to AGI remains this 2023 hypothetical by Matt Levine:

      And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    7. 🔗 r/york Askham Tesco recycling rss

      Does anyone know when the big cardboard recycling skip gets emptied? It's been full for weeks now and is in a state

      submitted by /u/Isla_Nooblar
      [link] [comments]

    8. 🔗 @binaryninja@infosec.exchange The debugger got some real love in our latest update. Hardware breakpoints and mastodon

      The debugger got some real love in our latest update. Hardware breakpoints and conditional breakpoints have both landed, and the new debug adapters make things faster and more reliable across a range of workflows. Read more from the latest blog: https://binary.ninja/2026/04/13/binary-ninja-5.3-jotunheim.html#debugger

    9. 🔗 r/reverseengineering rfcat-py3 rss
    10. 🔗 r/wiesbaden Anyone up for driving to a concert in Cologne (Aries) with me this Wednesday? I'll pay for the ticket rss

      I (21M) live near Wiesbaden and am going to a concert in Cologne this Wednesday. The artist is called Aries and goes roughly in the indie/pop/rock/hip-hop direction (here's a sample). I'm really looking forward to it. My only problem is that I don't have a car, and with public transport I'd get back home around 6 a.m.

      If one of you takes me along (there and back), I'd pay for your ticket + €20 in fuel money. So if you're up for something like that, feel free to message me within the next 24 hours.

      Edit: if you have other ideas for what I should do if this doesn't work out, let me know. My current backup plan is getting there via BlaBlaCar and walking through the crowd at the concert holding a cardboard sign:

      Köln -> Frankfurt

      Anybody?

      submitted by /u/BullfrogMiserable554
      [link] [comments]

    11. 🔗 r/york Thinking of buying a Persimmon new build home in Selby. There’s so many mixed reviews about this company. Was wondering on people’s experiences with this company. rss
    12. 🔗 r/LocalLLaMA Luce DFlash: Qwen3.6-27B at up to 2x throughput on a single RTX 3090 rss

      Luce DFlash: Qwen3.6-27B at up to 2x throughput on a single RTX 3090 | Hey fellow Llamas, your time is precious, so I'll keep it short. We built a GGUF port of DFlash speculative decoding. Standalone C++/CUDA stack on top of ggml, runs on a single 24 GB RTX 3090, hosts the new Qwen3.6-27B. We call it Luce DFlash (https://github.com/Luce-Org/lucebox-hub; MIT): ~1.98x mean over autoregressive on Qwen3.6 across HumanEval / GSM8K / Math500, with zero retraining (z-lab published a matched Qwen3.6-DFlash draft on 2026-04-26, still under training, so AL should keep climbing). If you have CUDA 12+ and an NVIDIA GPU (RTX 3090 / 4090 / 5090, DGX Spark, other Blackwell, or Jetson AGX Thor with CUDA 13+), all you need is:

        # After cloning the repo (link in the first comment):
        cd lucebox-hub/dflash
        cmake -B build -S . -DCMAKE_BUILD_TYPE=Release
        cmake --build build --target test_dflash -j
        # Fetch target (~16 GB)
        huggingface-cli download unsloth/Qwen3.6-27B-GGUF Qwen3.6-27B-Q4_K_M.gguf --local-dir models/
        # Matched 3.6 draft is gated: accept terms + set HF_TOKEN first
        huggingface-cli download z-lab/Qwen3.6-27B-DFlash --local-dir models/draft/
        # Run
        DFLASH_TARGET=models/Qwen3.6-27B-Q4_K_M.gguf python3 scripts/run.py --prompt "def fibonacci(n):"

      That's it. No Python runtime in the engine, no llama.cpp install, no vLLM, no SGLang. The binary links libggml*.a and never libllama. Luce DFlash will

      • Load Qwen3.6-27B Q4_K_M target weights (~16 GB) plus the matched DFlash bf16 draft (~3.46 GB) and run DDTree tree-verify speculative decoding (block size 16, default budget 22, greedy verify).
      • Compress the KV cache to TQ3_0 (3.5 bpv, ~9.7x vs F16) and roll a 4096-slot target_feat ring so 256K context fits in 24 GB. Q4_0 is the legacy path and tops out near 128K.
      • Auto-bump the prefill ubatch from 16 to 192 for prompts past 2048 tokens (~913 tok/s prefill on 13K prompts).
      • Apply sliding-window flash attention at decode (default 2048-token window, 100% speculative acceptance retained) so 60K context still decodes at 89.7 tok/s instead of 25.8 tok/s.
      • Serve over an OpenAI-compatible HTTP endpoint or a local chat REPL.
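
      The draft/verify loop described above (block size 16, greedy verify) can be sketched roughly as follows. This is only a shape-of-the-algorithm illustration, not the project's implementation; `draft_block` and `target_greedy` are hypothetical stand-ins for the real model calls:

      ```python
      from typing import Callable, List

      def speculative_step(
          ctx: List[int],
          draft_block: Callable[[List[int], int], List[int]],
          target_greedy: Callable[[List[int]], int],
          block_size: int = 16,
      ) -> List[int]:
          """One draft/verify round; returns the newly accepted tokens."""
          proposal = draft_block(ctx, block_size)
          accepted: List[int] = []
          for tok in proposal:
              # Greedy verify: accept only while the target's own greedy pick
              # agrees. (Real engines score the whole block in a single target
              # forward pass; this loop is the sequential equivalent.)
              if target_greedy(ctx + accepted) != tok:
                  break
              accepted.append(tok)
          if len(accepted) < block_size:
              # On a mismatch, emit the target's token so every round advances.
              accepted.append(target_greedy(ctx + accepted))
          return accepted
      ```

      Because verification is greedy and exact, the output is token-for-token identical to plain autoregressive decoding; the speedup comes purely from accepting several draft tokens per target pass.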

      Running on RTX 3090, Qwen3.6-27B UD-Q4_K_XL (unsloth Dynamic 2.0) target, 10 prompts/dataset, n_gen=256:

      | Bench | AR tok/s | DFlash tok/s | AL | Speedup |
      |---|---|---|---|---|
      | HumanEval | 34.90 | 78.16 | 5.94 | 2.24x |
      | Math500 | 35.13 | 69.77 | 5.15 | 1.99x |
      | GSM8K | 34.89 | 59.65 | 4.43 | 1.71x |
      | Mean | 34.97 | 69.19 | 5.17 | 1.98x |

      As you can see, the speedup is real on consumer hardware, not a paper number. The target graph produces bit-identical output to autoregressive in AR mode; the draft graph matches the z-lab PyTorch reference at cosine similarity 0.999812. Q4_0 KV costs ~3% AL at short context (8.56 to 8.33) and wins at long context where F16 won't fit anyway. Constraints: CUDA only, greedy verify only (temperature/top_p on the OpenAI server are accepted and ignored), no Metal / ROCm / multi-GPU. The repo started single-3090; recent community PRs added support for RTX 5090, DGX Spark / GB10, other Blackwell cards, and Jetson AGX Thor (sm_110 + CUDA 13). Feedback more than welcome!

      submitted by /u/sandropuppo
      [link] [comments]
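
      The Speedup column is just DFlash tok/s divided by AR tok/s; a quick arithmetic check against the figures quoted above confirms the numbers are self-consistent:

      ```python
      # Per-benchmark (AR tok/s, DFlash tok/s) figures as quoted in the post.
      results = {
          "HumanEval": (34.90, 78.16),
          "Math500": (35.13, 69.77),
          "GSM8K": (34.89, 59.65),
      }

      def speedup(ar: float, dflash: float) -> float:
          """Throughput ratio, rounded the way the table reports it."""
          return round(dflash / ar, 2)

      for name, (ar, df) in results.items():
          print(f"{name}: {speedup(ar, df)}x")  # 2.24x, 1.99x, 1.71x

      # The "Mean" row is the ratio of the mean throughputs.
      ar_mean = sum(ar for ar, _ in results.values()) / len(results)
      df_mean = sum(df for _, df in results.values()) / len(results)
      print(f"Mean: {speedup(ar_mean, df_mean)}x")  # 1.98x
      ```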

    13. 🔗 r/Leeds Problem neighbours rss

      We have a house of multiple occupancy next door to our house, with adjoining garages. One of the garages is rented out to someone who does not live in any of the nearby houses and just rents the garage. It is in very frequent use by the guy renting it, who is habitually working on his car, or multiple cars, with groups of noisy people, dragging equipment around and using power tools weekend after weekend whenever the weather is good. We have a lovely quiet area, apart from when this guy and his cohort show up, and they don't even live here.

      Is there any department in LCC we can contact for help with this? It is starting to really affect our quality of life and put us off spending time in our own garden, and I imagine it is affecting other neighbours too. Or does anyone know how I can find out who owns the property next door?

      Imagine if every Sunday it was like having a mechanic's workshop or building site going full tilt all afternoon. It's amazing how thoughtless people can be.

      Thanks

      submitted by /u/sanchez599
      [link] [comments]

    14. 🔗 r/Leeds Best pub chips in Leeds rss

      Looking for the best pub chips in Leeds. Must be CHUNKY chips, strictly NO fries. Include pics if poss. Countryside areas preferred (to pair with a walk)

      TIA 🥔🥔🥔🥔

      submitted by /u/Educational_Clue7522
      [link] [comments]

    15. 🔗 r/reverseengineering Using Google's Gemma 4 E4B local AI model to Reverse Engineer a simple Crackme rss
    16. 🔗 r/Leeds Gym friend rss

      Hey everyone,

      I’m looking for a gym partner to train with regularly. Ideally someone who can spot me on certain lifts and help with general accountability.

      I’m 26M and work in the city centre. I’m planning to join either The Edge or PureGym at the Merrion Centre. My main focus is building overall strength and improving general health, so it would be great to find someone with similar goals.

      My preferred training times are:

      Weekdays: after 6pm (or possibly before 8am)

      Weekends: flexible

      I’m relatively new—trained consistently for about 6 months last year but fell out of the routine, so I’m keen to get back into it properly. If you already have a workout plan you’re following, I’d be happy to tag along.

      My main goal right now is improving my bench press, along with bodyweight exercises like pull-ups.

      submitted by /u/CraftyBrie
      [link] [comments]

    17. 🔗 HexRaysSA/plugin-repository commits sync plugin-repository.json rss
      sync plugin-repository.json
      
      No plugin changes detected
      
    18. 🔗 r/Harrogate Has the gentrification of Bilton begun? rss

      Lots of new movers, young and from Leeds. Will this lead to businesses popping up to support their tastes? The Knox is already pricier than some town centre spots!

      submitted by /u/MechanicAggressive16
      [link] [comments]

    19. 🔗 sacha chua :: living an awesome life 2026-04-27 Emacs news rss

      There was a big discussion on lobste.rs about people's favourite Emacs packages and that sparked similar conversations on Reddit and HN. Discussions like that are a great source of inspiration. I added a couple of small improvements to my config based on this week's Emacs news, like diff-hl.

      Also, lots of people expressed their appreciation for Chris Wellons, who is moving on to other editors for now. Me, I've enjoyed using simple-httpd, impatient, and skewer, and I'm glad Chris made and shared them. Many of his packages already have new maintainers, and the rest are up for adoption. Perhaps we'll see him around again someday!

      Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

      You can e-mail me at sacha@sachachua.com.

    20. 🔗 r/Leeds Anyone looking for more Alt/Rock Friends? like going Key Club, Spoons, NQ64, Pixel Bar etc?.. Join our Alt/Rock/Emo Whatsapp Social Group! xo rss

      Love Keyclub (Slamdunk, FUEL, GARAGE Clubnights), NQ64, Pixel Bar, Wetherspoons, Pubs etc but have a lack of alternative friends to go with? Just want to make more alternative friends, have fun chats & get involved in social events?

      A few of us from Reddit, Facebook etc have banded together from previous appeals and have a new fun Whatsapp Alt/Rock/Emo Social Group chat now, 100+ members and counting!

      We had a successful recruitment post on here a few months ago which blew up and got overwhelming, so we had to trickle people in. There are too many to go through now, so we're starting a fresh post to add more people.

      The group is roughly 18-35 age range & currently around 50/50 gender mix so plenty of people of different age/genders etc, very inclusive and everyone is getting on great together.

      We have regular nights out especially on Weekends (Keyclub Club Nights, Spoons, Bars, NQ64, Pixel Bar, Flight Club, Cinema trips.. anything fun really!) which can get anywhere from 10-15 people attending. Spoons & Key Club on Saturdays is a particular fave. but we are always planning social events, mid week chill things etc

      We also have a discord for chill voice chats & casual gaming etc.

      If you'd like to join then leave a comment with your age/gender & I'll DM you an invite! all welcome

      I will invite people in slowly so as to keep the ratio of ages, sexes etc. balanced, so there's always people of a similar age.

      Leave a comment & I'll DM an invite when available! x

      PLEASE CHECK DMS FOR INVITES

      submitted by /u/rmonkey100
      [link] [comments]

    21. 🔗 r/york Flowers make this city even better somehow🥹💐🪻 rss

      submitted by /u/Wedding-Beauty
      [link] [comments]

    22. 🔗 r/Yorkshire Cherry trees colouring the world. rss
    23. 🔗 r/Leeds Does anyone have spare beer bottles? rss

      I am brewing my own beer and I need bottles, preferably brown. If you work in a pub and have empties, I can come and collect. My local only does alcohol-free bottles and doesn't sell many. Thanks!

      submitted by /u/DiligentPotential960
      [link] [comments]

    24. 🔗 tomasz-tomczyk/crit v0.10.1 release

      What's Changed

      Comments panel redesign

      The comments panel has been rebuilt with a segmented filter (All / Open / Resolved) and collapsible groups. Pair it with the new "hide resolved comments" setting (h shortcut) to focus on what's still open during a review.

      CleanShot 2026-04-27 at 08 36 01@2x

      • feat: redesign comments panel with segmented filter and collapsible groups by @tomasz-tomczyk in #354 - thanks @omervk for suggestions in this area!

      General

      Internal refactors

      Full Changelog : v0.10.0...v0.10.1

    25. 🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator
      [link] [comments]

    26. 🔗 r/wiesbaden Need help with moving rss

      Hey guys!

      My girlfriend and I just moved to Wiesbaden for university (Daimlerstraße, 65197). I rented a van myself and drove all our stuff here. But now we have a problem: we can't get our washing machine from the van up to our apartment on the 4th floor.

      Any tips, or is anyone nearby free at short notice to help us carry it up? Happy to compensate, of course!

      Thanks a lot in advance!

      submitted by /u/Orph3us_151
      [link] [comments]

  4. April 26, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-26 rss

      IDA Plugin Updates on 2026-04-26

      New Releases:

      Activity:

    2. 🔗 r/Yorkshire Sunrise at Cow and Calf rocks, Ilkley, West Yorkshire rss

      📸 Tatiana Hepplewhite

      submitted by /u/Wedding-Beauty
      [link] [comments]

    3. 🔗 r/LocalLLaMA Confirmed: SWE Bench is now a benchmaxxed benchmark rss

      submitted by /u/rm-rf-rm
      [link] [comments]

    4. 🔗 r/Harrogate Almost drove into bus station rss

      First time driving in Harrogate on my way to Leeds, and I got confused at the turning because everyone was indicating right (so I assumed I could drive straight ahead, which is where I thought the sign was pointing).

      I realised my mistake almost immediately, stopped, and quickly reversed out after having only gone in a little bit (not even sure it would count as having entered).

      How much would the fine be if it is registered as an offence, and would I be able to contest it? I'm a relatively new driver, if that helps my case.

      submitted by /u/stonecoldtruecel67
      [link] [comments]

    5. 🔗 r/wiesbaden Wie fandet ihr den Vinothon? rss

      The question is basically in the title. I unfortunately had to cancel at short notice due to illness, but I'm curious how you experienced the first Vinothon.

      Was it well organised? What was on offer at the tasting stations? Did everyone get something, or was there too little? Did you have fun? :D

      The weather was great, at least.

      submitted by /u/itsKoeri
      [link] [comments]

    6. 🔗 r/Yorkshire Then & Now Pt. 4 rss
    7. 🔗 r/Yorkshire Then & Now Pt 3 rss
    8. 🔗 r/Yorkshire Then & Now pt 2. rss
    9. 🔗 r/Yorkshire Then & Now Pt 1 rss
    10. 🔗 r/wiesbaden Gibt es hier Gruppentreffs? rss

      Hi,

      Are there by any chance game nights or WhatsApp groups around here for connecting with people?

      submitted by /u/Right_Drawing_5299
      [link] [comments]

    11. 🔗 r/Yorkshire It’s Grim Up North rss

      Near Wetherby.

      submitted by /u/Pitiful-Hearing5279
      [link] [comments]

    12. 🔗 r/Harrogate Any one interested for a game of Snooker or pool ? rss

      I'm 30M, from HG1.

      submitted by /u/ObjectDelicious3427
      [link] [comments]

    13. 🔗 r/Yorkshire Friday rss

      I work in Liversedge and live in Halifax, and the bus journey gets a bit dull. So on Friday morning, after a night shift, I got a bus to Leeds, then another across to Pickering. I had a few hours there and then got the bus over the moors to Whitby, took the coast bus down to Scarborough, and then the Coastliner back to Leeds. Thoroughly recommend it if you're ever at a loose end for a day.

      submitted by /u/kitty_pickle
      [link] [comments]

    14. 🔗 r/LocalLLaMA HauhauCS (of "Uncensored Aggressive" fame) published an abliteration package that plagiarizes Heretic without attribution, and violates its license rss

      HauhauCS (u/hauhau901) publishes uncensored LLM models on HuggingFace with 5M+ combined monthly downloads across 22 models (verified via the HuggingFace API, April 2026). Every model card claims "0/465 refusals, zero capability loss." When asked about methodology on HuggingFace, the response was: "Currently it's my own private methods and tools :) Not interested in any donations."

      We recovered the deleted source code from PyPI's CDN. It's a fork of Heretic (AGPL-3.0).

      Full 17-point code breakdown, benchmark analysis, and SHA-256 verified downloads: dreamfast.github.io/reaper-analysis

      The evidence

      • 7/7 module filenames preserved from Heretic v1.2.0
      • 30/32 refusal markers character-for-character identical, including "i an ai" missing the "m" and "i can'" missing the "t"
      • 30+ shared function and class names including get_readme_intro, DatasetSpecification, batchify
      • Identical Optuna parameter bounds: (0.4, 0.9) and (0.6, 1.0) multiplied by last_layer_index
      • The config was renamed from Heretic's good_prompts/bad_prompts to safe_prompts/harmful_prompts, but the internal variables were left as good_residuals/bad_residuals, matching Heretic exactly
      • The entire analyser geometry pipeline reproduced step for step: geometric median computation, PaCMAP with n_neighbors=30, atan2 rotation with the same [[ct, -st], [st, ct]] rotation matrix. Heretic's author notes he has "never seen" the geometric median approach in abliteration literature.
      • A source comment in config.py reads: "kept as a module-level tuple so the literal does not duplicate line-for-line with any fork." A human hiding a fork would not document the evasion. An LLM asked to refactor code would describe the rationale as written.
      • SPDX headers identical format across all core files, just the copyright holder swapped

      View 17 hand-picked code snippet comparisons in the side-by-side comparison.
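
      For readers unfamiliar with the flagged construction: recovering an angle with atan2 and building the standard [[ct, -st], [st, ct]] 2-D rotation matrix looks like this. This is a generic illustration of the pattern the analysis describes, not code from either project:

      ```python
      import math

      def rotation_matrix(dx: float, dy: float):
          """Standard 2-D rotation matrix for the angle of the vector (dx, dy)."""
          theta = math.atan2(dy, dx)
          ct, st = math.cos(theta), math.sin(theta)
          return [[ct, -st], [st, ct]]

      # For a vector pointing straight up (theta = pi/2) this gives
      # approximately [[0, -1], [1, 0]].
      m = rotation_matrix(0.0, 1.0)
      ```

      The construction itself is textbook linear algebra; what the analysis treats as evidence is the identical pipeline around it (geometric median, PaCMAP with n_neighbors=30) appearing in both codebases.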

      Heretic's author confirms derivation

      Philipp Emanuel Weidmann, the creator of Heretic, reviewed the recovered source code and stated: "I can say with certainty that this package was plagiarized from Heretic, and then probably refactored using an LLM in an attempt to hide this." He identified the same SPDX headers, the geometric median approach he has "never seen in literature," the DatasetSpecification fields including residual_plot_label and residual_plot_color, the cascading dtype fallback, the good/bad naming convention, and more. He calls it "a clear violation of Sections 4 and 5 of the AGPL. It's also a clear violation of every ethical standard imaginable, and an obvious case of outright plagiarism." Full quote on the analysis page.

      License violation

      Heretic is AGPL-3.0, which requires modified versions to preserve original copyright notices, identify as derivative works, and remain under AGPL-3.0. Reaper removed all copyright notices, does not identify itself as a derivative work of Heretic, and relicensed to PolyForm Noncommercial.

      Verify it yourself

      Grab the files here

      submitted by /u/nathandreamfast
      [link] [comments]

    15. 🔗 r/Leeds Mid 20s F looking to meet new people rss

      I'm looking to meet new people around my age and really struggling with it at the moment. I've joined several groups but always end up fading out. Can anyone recommend some places, or does anyone want to meet? 😊

      Seen several posts but looking to see if there’s anything new?

      submitted by /u/Exciting_Shoulder_88
      [link] [comments]

    16. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      ## New releases
      - [tc_deer](https://github.com/arkup/tc_deer): 0.1.3
      
    17. 🔗 r/LocalLLaMA Qwen3.6 35B A3B Heretic (KLD 0.0015!) Incredible model. Best 35B I have found! rss

      Been using this for a few days. It is BY FAR the best uncensored model I have found for Qwen 3.6 35B. With IQ4XS, Q8 KV cache, and 262K context, it fits in 24 GB of VRAM and does not fail on multi-turn tool calls. I honestly feel like it is smarter than the original model (call me crazy). The model also has a very low KLD, so in theory it should stay close to the original model on harmless prompts. llmfan's 3.5 35B model actually benchmarks higher than the original in the UGI NatInt section, so I have a solid hunch this 3.6 35B will benchmark higher than the original 3.6 model as well. Y'all should give it a try.

      submitted by /u/My_Unbiased_Opinion
      [link] [comments]

    18. 🔗 r/york York is beautiful from every angle✨🌹 rss

      submitted by /u/No_Donut1433
      [link] [comments]

    19. 🔗 r/york Efl sticker swap shop in York? rss

      Anybody know of any EFL sticker swap event in York, or interested in organising one? (I'm a man in my 30s, I know...) I've got 150 spares and still need 400 to finish the album. Any tips on where is stocking them at the moment would also be much appreciated.

      submitted by /u/Beans-4862
      [link] [comments]

    20. 🔗 r/york Charity bike ride setting off today from the Eye of York rss

      200 bikes riding up to Huby, raising money for the palliative care centre at York Hospital. Looking great with the blue sky over Clifford's Tower!

      submitted by /u/York_shireman
      [link] [comments]

    21. 🔗 r/Harrogate Cheapest option to London rss

      I need to travel to London once a week for a few months for a job. What's the best and cheapest way to book this? I've found booking via Uber gets 10% credits and Avios points. Are there any others?

      submitted by /u/Odd_Bookkeeper_6027
      [link] [comments]

    22. 🔗 Register Spill Joy & Curiosity #83 rss

      This is a time of great technological change. You could even wring a "once in a lifetime" out of me. Many times per week now I say to either myself or someone who just shared some news: this is crazy, man.

      The numbers, the pace, the demand, the bottlenecks shifting, the new capabilities emerging, and, man, the predictions. The predictions. AI will do that, AI will do this, in the future we'll do all of this and none of that, but surely this will still be that and that thing will be the most important thing.

      I've done it too, of course. I've predicted quite a few things in past issues of this newsletter and, hey, yes, I was right a few times. And so were others.

      But we're talking about technological progress here and that is very hard to predict, especially its second-order effects. So, as you read through the things I shared below, I want you to keep the following quote in mind, because it's been stuck in mine for many weeks now and I found it helpful to carry around with me:

      He did not create a world that went as he wanted, but he created a world that went well. We have many examples of that. Trains and bicycles come in, and we get feminism because it's easier for people, especially women, to move freely and independently. They can organize. They can mobilize. We get suffragettes. Did the inventor of the train intend for there to be women's liberation? No. Did it go the way he imagined? No. Did it go well? Yes.

      Or consider this:

      After the Great War, the Haber-Bosch process was used throughout the world to fix nitrogen on a grand scale. […] It was synthetic fertilizer that enabled Europe, the Americas, China and India to escape mass starvation and consign famine largely to the history books: the annual death rate from famine in the 1960s was 100 times greater than in the 2010s. […] If Haber and Bosch had not achieved their near-impossible innovation, the world would have ploughed every possible acre, felled every forest and drained every wetland, yet would be teetering on the brink of starvation, just as William Crookes had forecast.

      That was after the war. Here's what Bosch and Haber did with their process during the war:

      Then in September 1914 Bosch made the famous 'saltpetre promise' that he could convert the Oppau plant so that it turned ammonia into nitrate, using a newly discovered iron-bismuth catalyst. He built an even bigger plant at Leuna, producing huge quantities of nitrate and thus probably prolonging the war. Haber, in the meantime, had invented gas warfare, personally presiding over the first chlorine attack at Ypres in March 1915.

      Now, who would've predicted going from that to that?

      • Amp's smart mode now uses Opus 4.7. I think it's a great model. I now often switch between smart and deep mode. One plans, the other reviews, and vice versa.

      • Last week I re-read Mike Acton's Expectations of Professional Software Engineers and, man, is it good. So, so good. If you haven't, you need to read this right now. This is software engineering in a team, in a company, in a business. Hacking isn't programming isn't engineering, but what he describes here, that's the real thing. And -- of course you have to say this, Thorsten -- yes: this all still applies when using AI. Maybe even more so. Just like The Basics.

      • For many, many years I've come across strong recommendations to watch this talk by Richard Hamming: You and Your Research. Not considering myself a scientist, I shrugged off those recommendations and never saw it. I can tell you now: that was a huge mistake. This morning, right after waking up, still in bed, I read this transcript, start to end, and let me tell you this: watch the talk or read the transcript! If you're here, reading this newsletter, I'm certain you will get something out of it. It's fantastic.

      • Highly, highly recommend you watch this interview with Dylan Patel on the current state of tokenomics. Really: if you only have a vague idea of what "compute constrained" means, you have to watch this. (Also, the last ten minutes, in which Dylan talks about the optics of the model companies, are kinda separate from tokenomics, but worth it alone.)

      • Talking of which: "Cursor has also given SpaceX the right to acquire Cursor later this year for $60 billion or pay $10 billion for our work together." $60 billion (!) now sounds like $60 million did in 2012.

      • Kevin Kwok's thoughts on Cursor's and SpaceX's partnership are interesting, but I disagree with him on the premise that model and harness have to go hand in hand. I don't think the causality of the loop is there: Claude 3.5's ability and eagerness for tool calls was the Urknall of agents. That's what led us to build Amp and Anthropic to build Claude Code.

      • Bonkers numbers: Google wants to invest up to $60B in Anthropic. The Hacker News comments are interesting.

      • Justin Jackson is asking: what has technology done to us? I very much don't agree with the quoted statement of "technology will always do its worst thing" (and neither does Justin, it sounds like.)

      • It's cool to care: "Whenever somebody asks why, I don't have a good answer. Because it's fun? Because it's moving? Because I enjoy it? I feel the need to justify it, as if there's some logical reason that will make all of this okay. But maybe I don't have to. Maybe joy doesn't need justification. […] So much of our culture tells us that it's not cool to care. It's better to be detached, dismissive, disinterested. Enthusiasm is cringe. Sincerity is weakness. I've certainly felt that pressure - the urge to play it cool, to pretend I'm above it all. To act as if I only enjoy something a 'normal' amount. Well, fuck that."

      • Take some time to play around with ChatGPT Images 2.0. It's mind-blowing. If they can accurately reconstruct screenshots like that, regardless of whether that's the "image" model part or the "thinking" model part, I think something just shifted. Also, what a sick landing page.

      • This was great: What will be scarce? The question that leads to the one in the title is this: "If advanced AI brings material abundance--if machines can produce many if not all forms of human production at very low marginal cost--does economics become irrelevant?" The whole piece explains the possible mechanisms at play and answers the question of whether economics will become irrelevant, but even more interesting is the prediction on the future of work: "The economics of structural change tells us that when technology makes one type of production cheap, the economy doesn't collapse. It transforms. It shifts toward the things that technology can't make cheap. For AI, those things are exactly the ones where human involvement carries inherent, irreplaceable value." And that means the "durable jobs will be in the relational sector, where the human element is the product itself." Or, in other words: "You don't need to be Picasso. You need to be the person whose involvement makes the product feel like it was made for someone, by someone."

      • "A parasite that has been eating people for 3,500 years is about to be wiped off the planet. It infected 3.5 million people in 1986. Last year, it infected 10. And I have not seen it make a single front page." Believe it or not, but in seventh grade I gave a presentation in biology class on the Guinea worm. Use Google Image search if you're as brave as I was in seventh grade. Yeah, thought so.

      • This is from December last year, so the numbers are even crazier now, which makes this even more interesting: Liar's Valuation. I knew about "take last month's revenue and multiply by twelve," but the tiered investment rounds were new to me, and so was the "give heavy discount in year one, but then report year three bookings as ARR."

      • The annotated Unicode map. More of this!

      • Yes, it's Sky Sports News of all places: "Pressure is a privilege. And if you're feeling any pressure or the weight of any expectation, you are breathing rare air, that very few of us get to live inside." Good frame.

      • Or, as Josh Kushner said: "Every experience is training you for the next one… In order to become king, God didn't give David a crown, he gave him Goliath."

      • Tim Cook is stepping down as Apple's CEO. This Stratechery reflection was very interesting: Tim Cook's Impeccable Timing. For example, I had no clue that Apple in China (as in: moving its manufacturing to China) was the work of Cook. For me, Cook will always be the CEO who was at the helm when the M1 shipped, one of the most remarkable engineering achievements I've witnessed.

      • Apple's incoming CEO John Ternus in 2024 in a commencement speech: "At some point in my first year, I found myself at a supplier facility. I was far away from home, it was well past midnight. I was using a magnifying glass to count the number of grooves on the head of this screw, which, remember, lives on the back of the display. And I was arguing with the supplier because these parts had 35 grooves, they were supposed to have 25. I distinctly remember stepping back for a minute and thinking to myself, 'What the hell am I doing? Is this normal?' And I thought about it, and I realized it might not be normal, but it's right. It's right because I'd already spent months working on that product, and if you're going to spend that much time on something, you should put in your very best effort. Maybe a customer notices, maybe they don't, but either way, whenever I saw one of those displays on someone's desk, it mattered to me to know that my teammates and I had considered everything about it and done the very best job we could." There's a lot more good stuff in there. I'm excited.

      • After probably ten years of using Alfred I switched over to Raycast two years ago and one thing that I've sporadically but consistently missed was Alfred's "Large Type" feature: you type a bit of text in, hit a shortcut, and boom, the text is now as big as your display. Very helpful when you want to show someone in the room the wifi password, for example. So, this week I thought: surely there's a Raycast plugin for that? And there is, but the text isn't that large. But guess what, there's also this: large-type.com. How good is that?

      • Adam Mastroianni again with some very good writing on capital-S science: Nothing ever dies. It merely becomes embarrassing. I didn't know that ego depletion doesn't reproduce! While reading I had to think of Brandolini's Law: "The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it." (In 2015 both Brandolini and I both gave a talk at a Ruby conference in Wrocław, Poland, and we chatted for half an hour at the airport and, not sure exactly why, but I'm oddly proud of that.)

      • Orson Scott Card, author of Ender's Game: "Those changes made, I sent it to Ben again. I did not remind him of what he had advised me to do. I merely told him I liked my title, and said, 'I have addressed your other concerns,' which was true. I figured he wouldn't remember what his exact words had been. My answer was a check. [...] Did Ben's feedback help? Yes -- but his specific advice was not right, and I knew it. [...] Editors don't know more than you about your story. They especially don't know why they decide to accept or reject stories. YOU have to know what your story needs to be, and take only advice that you believe in."

      • Reminded me a lot of Bill Hader on feedback: "When people give you notes on something, when they tell you it's wrong, they're usually right. When they tell you how to fix it, they're wrong."

      • exe.dev raised a Series A: "We are building a cloud that makes sense for the current and future state of software development. One that includes the features needed for fast, secure development out of the box. A cloud developers actually enjoy using. We want to revitalize the spirit of projects like early Heroku (though our technology is very different) and ship features that bring you joy." (Not to take away from this announcement, hence the parenthetical: the impact Heroku had on a certain generation of programmers working on developer tooling is hard to overstate. I bring it up a lot, and so do my teammates who are close to my age and worked with web technologies in the early 2010s.) I'm very excited to see what they'll do! I like using exe.dev a lot.

      • I also really like David's personal statement that goes along with the funding announcement: I am building a cloud.

      • Just a reminder: chat jimmy exists. Try it. You have to. Try it and then imagine what we could do if one of today's frontier models ran at even half that speed. Send me a letter if you know whether that's physically impossible.

      • New Larry David biography is coming out this year. Pretty, pretty, pretty good.

      • Elad Gil's Random thoughts while gazing at the misty AI Frontier. Lots of interesting things in there. AI researchers' distributed IPO, compute constraints, hidden layoffs, and also this bit: "It is not just the model you use, but the environment, prompting, etc you build around it that helps impact your choice. Brand also matters more than many people think. At some point, either one coding model breaks very far ahead, or they stay neck and neck."

      • Maggie Appleton: One Developer, Two Dozen Agents, Zero Alignment. I think I see the same future that Maggie sees. And we're building it at Amp.

      • That's a title worthy of a book, not a post, but the content is still fascinating: Fabric is harder than steel. As someone who's been chasing the perfect t-shirt for years and who has a very deep fascination with "tech shirts" (not company logos, but high-quality shirts made of "functional" textiles), this was very cool. I've often wondered: how can car seats be this good for so long? Well, turns out it's engineering.

      • Jeff Geerling: New 10 GbE USB adapters are cooler, smaller, cheaper. I could read blog posts like this one five times every day.

      • I Found It: The Best Free Restaurant Bread in America. This was fantastic. Go read it if you have an hour and want to smile and enjoy some great writing. There are many quote-worthy sentences in there, but I'll let you read them yourself. Instead, here's a free bread anecdote. Once upon a time, I was working on a farm in Australia, along with around ten other backpackers. Handful of Germans, handful of French people, two Brits. One day we were sitting around the big table in this "shed" (actually a big house, with a shed-like quality, if you will) we were living in, chit-chatting about stuff. What do you miss the most from home? came up as a question and after someone said that they miss a proper shower and feeling clean for once many of us nodded. Yes, that'd be something. Then someone said: I really, really miss the bread. And everybody, because we've all seen and tasted what the Australians call bread, let out a big sigh and said, oh yes, the bread, I miss the bread. And precisely one second later, the room split into two factions and the Germans stared at the French and the French stared at the Germans and both factions, at the same time, said something to the effect of: wait, what the fuck, why do you miss bread, your bread fucking sucks, our bread is good bread, your bread is garbage, shut up. But, sadly, the French wouldn't see how wrong they were, thinking their long, dumb, comic book bread is any good. And I'm pretty sure that created a rift in our little community of grape pickers. Anyway, hopefully I pissed off all the Australians and French people reading this -- your bread sucks. So, go read the article and have some fun.

      Know which bread's the best? You should subscribe:

    23. 🔗 r/reverseengineering Importing GTA IV texture dictionary natively in Unreal rss