to read (pdf)

  1. I don't want your PRs anymore
  2. JitterDropper | OALABS Research
  3. DomainTools Investigations | DPRK Malware Modularity: Diversity and Functional Specialization
  4. EXHIB: A Benchmark for Realistic and Diverse Evaluation of Function Similarity in the Wild
  5. Neobrutalism components - Start making neobrutalism layouts today

  1. May 09, 2026
    1. 🔗 tomasz-tomczyk/crit v0.12.0 release

      chore: bump Nix flake version to v0.12.0

    2. 🔗 r/wiesbaden Child outlet protectors rss

      Hi all,

      I’m on the hunt for some outlet covers/protectors. I’ve checked at Saturn and Media Markt with no luck. So I was curious if anyone had any ideas or knows where to get them? My son is obsessed with trying to put his fingers in the outlets. Thanks!

      submitted by /u/daddyciwa

    3. 🔗 r/wiesbaden Breakfast in Wiesbaden or Mainz rss

      Hi everyone,

      Where can you have a really good, long, cozy breakfast? Vegan/vegetarian options should be available. It would also be great if you could make a reservation there.

      Your tips last time for a cozy meal out were spot on. :) Thanks

      submitted by /u/JohnTheMonkey2

    4. 🔗 idursun/jjui v0.10.5 release

      This release adds several usability and customisation improvements, including dynamic theme switching, better list and input handling, new UI configuration options, and fixes for a few revision workflow issues.

      New features

      • Added dynamic light/dark theme switching, with support for terminal theme change notifications and a polling fallback where those notifications are unavailable. #641
      • Added ui.mouse_support = false to completely disable jjui mouse handling. #663 #661
      • Added ui.set_window_title = false to disable jjui's custom terminal window title. #341
      • Lua/input customisation improvements:
        • input({ value = ... }) can now open with a pre-filled value. #660 #653
        • revisions.open_set_bookmark({ value = ... }) can now pre-fill bookmark names. #652

      Improvements

      • Added ctrl+n / ctrl+p as up/down aliases in input-driven lists, so list navigation works without leaving the home row. #667 #666
      • Improved the details view styling so explanatory hints remain readable when a file row is selected.
      • Custom TOML actions can now define their binding inline with the action, instead of requiring a separate [[bindings]] entry:

        [[actions]]
        name = "new after"
        lua = '''
        jj("new", "-A", context.change_id(), "--no-edit")
        revisions.refresh()
        '''
        seq = ["\\", "n", "a"]
        scope = "revisions"

      Fixes

      • Fixed Lua scripts that chain built-in actions so each step waits for the previous one to finish before continuing.
      • Added @ and f bindings in revert mode for jumping to the working copy and ace-jump navigation. #662
      • Fixed squash preview refresh when moving to the target revision. #658
      • Fixed split hints in details view so they follow checked files instead of the currently hovered unchecked file. #659
      • Fixed revision selection after rewrite operations so focus stays on the intended row.

      What's Changed

      • refactor: inline theme styles into renderers by @idursun in #651
      • Dynamic light/dark switching by @zerowidth in #641
      • feat(bookmark): pre-fill bookmark name via open_set_bookmark value arg by @rdeaton in #652
      • feat(input): allow pre-filled value for input field by @baggiiiie in #660
      • feat(ui): config setting to completely disable mouse support by @shoce in #663
      • feat(ui): add ctrl-n/p list navigation aliases by @vvvu in #667

      New Contributors

      Full Changelog : v0.10.4...v0.10.5

    5. 🔗 hyprwm/Hyprland v0.55.0 release

      A massive update brought to you by the All Hyprland Corp!

      Breaking changes

      • dwindle:pseudotile has been removed as it wasn't doing anything
      • decoration:shadow:ignore_window has been removed (defaults to enabled)
      • render:cm_fs_passthrough has been removed, should be automatic with render:cm_auto_hdr
      • misc:vfr moved to debug: as it's a debug variable that should not be changed in prod environments
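
      For the last item, a minimal before/after migration sketch, assuming standard hyprland.conf section syntax (values shown are illustrative):

      ```
      # v0.54 and earlier
      misc {
          vfr = true
      }

      # v0.55: vfr is now a debug variable
      debug {
          vfr = true
      }
      ```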

      New features:

      • algo/scroll: add center for centering the current col (#14059)
      • algo/scrolling: add config options for focus and swapcol wrapping (#13518)
      • algo/scrolling: add expel, consume, and consume_or_expel (#13869)
      • animations: add springs (#14171)
      • binds: add an auto_consuming flag (#13919)
      • config/lua: add ExpressionVec2, allow using a table for vec2 rules (#14197)
      • config/lua: add clear tag api (#14273)
      • config/lua: add noop
      • config/lua: add simple layout API (#14258)
      • config/workspacerule: add animation style (#13380)
      • config: add device tags (#13728)
      • debug-tools: add flame
      • desktop/window: add alpha container for alpha calculations
      • desktop/windowRule: add confine_pointer window rule (#13379)
      • desktop/windowRule: add parser switch for confine pointer (#14263)
      • dispatchers: add moveintoorcreategroup (#13325)
      • dwindle: add rotatesplit layoutmsg and tests (#13235)
      • gestures: add live pinch cursor zoom (#14049)
      • gestures: add scroll_move (#14063)
      • groups: add groupbar middle_click_close option (#14242)
      • hl.mata.lua: add string to NotificationOptions's icon param. (#14334)
      • hyprctl: add hw cursor flag
      • hyprland.pc.in: add src include flag
      • i18n: add Greek translations (#13865)
      • i18n: add Punjabi translations (#13807)
      • input: add device specific binds (#13073)
      • layerrules: add dynamically registered rules for plugins (#13331)
      • layout/windowTarget: add visualBox (#13626)
      • render/cm: add ICC profile pipeline (#12711)
      • renderer/deco: add glow decoration (#13862)
      • renderer: add a cm settings cache
      • window/rules: add scrolling_width (#13754)
      • windows/focus: add fallbacks when focussing workspaces (#14270)

      Fixes:

      • config/descriptions: add missing desc entry
      • cmake: add -fno-omit-frame-pointer to debug
      • InputManager: add guards to confineToRegion to avoid issues (#14269)
      • algo/dwindle: add back splitratio (#13498)
      • algo/dwindle: fix precise mouse setting (#13678)
      • algo/master: fix crash after dpms (#13522)
      • algo/master: fix crash on null target in getNextTarget
      • algo/scroll: fix std::clamp assertion crash on resume from suspend (#13737)
      • algo/scroll: fix unsigned wrap (#13634)
      • algo/scrolling: fix offset on removeTarget (#13515)
      • algo/scrolling: fix rare crash
      • algo/scrolling: various scrolling view related bugfixes (#13974)
      • build: add glaze dependency with FetchContent fallback (#13666)
      • build: add format-check and format-fix Makefile targets (#13936)
      • build: fix build on gcc 16.x after #6b2c08d (#13429)
      • clang-tidy: fix duplicate entry in .clang-tidy (#14045)
      • cmake: fix permissions for directories by default
      • cmakelists: fixup errors failing build on arch ci (#14259)
      • compositor: fix floating input/visual z-order desync after fullscreen (#14015)
      • compositor: fix focus edge detection (#13425)
      • compositor: fix missing recheckWorkArea to prevent CReservedArea assert failure (#13590)
      • config/actions: fix misuse of ActionResult's error type (#14221)
      • config/legacy: fix crash on getConfigValue of plugin fns
      • config/legacy: fix missing fallbacks crashing device getters
      • config/lua: fix device bool int reads (#14313)
      • config/lua: fix dispatcher shapes to not be callable (#14268)
      • config/lua: fix unbind behavior (#14199)
      • config/lua: fix window object to selector logic
      • config/refresher: fix refreshing of cursor zooms (#14283)
      • config: fix crash in safe mode due to null Config::mgr() (#13855)
      • config: fix propRefresher to not run on first launch
      • config: fix safe mode config generation (#14024)
      • config: fix type confusion in getOption with complex types
      • core: fix i586 build (#13550)
      • deco/border: fix damage region
      • deco/border: fix damageEntire
      • desktop/group: fix movegroupwindow not following focus (#13426)
      • desktop/rule: fix matching for content type by str
      • desktop/rules: fix empty workspace handling (#13544)
      • desktop/rules: fix static rules and content type. (#13725)
      • desktop/view: fix SIGABRT in CWindow::onUnmap when monitor is expired (#14148)
      • desktop/window: fix floating windows being auto-grouped (#13475)
      • desktop/window: fix idealBB reserved (#13421)
      • desktop/windowRule: fix matching CONTENT (#13636)
      • desktop/workspace: fix visibility criteria matching (#14349)
      • example/hyprland.lua: fix wiki links for new stuff (#14172)
      • examples: fix missing permissions entry in lua example config (#14177)
      • groups: fix movewindoworgroup when moving from group to group (#14086)
      • hyprctl: fix bools in getoption
      • hyprctl: fix buffer overflowing writes to the socket
      • hyprctl: fix getoption with custom types (#14243)
      • hyprctl: fix invalid type cast
      • hyprctl: fix json output for the submap command (#13726)
      • hyprctl: fix lib64 pkgconfig for version-checking (#14051)
      • hyprctl: fix workspace dynamic effect reloading (#13537)
      • hyprpm: fix url sanitization in add
      • input: fix device configs for pointer devices
      • input: fix focus_on_close=2 (MRU) routing to cursor path instead of getNextCandidate (#13969)
      • input: fix the multimon touch fix (#13819)
      • input: fix touch monitor focus ordering (#14310)
      • input: fix touch screen focus on multi monitor (#13764)
      • internal: fix relative path header locations (#13650)
      • keybinds: fix keycode matching on lua (#14254)
      • keybinds: fix missing z-order update on floating toggle (#14100)
      • keybinds: fix wrong space assignment in pin (#14061)
      • keybinds: fixup changegroupactive
      • layershell: fix popup crash with nullptr mon (#13763)
      • layout/algo: fix swar on removing a target (#13427)
      • layout/groupTarget: fix crash on null space assignment (#13614)
      • layout/master: fix rollprev/rollnext focusing the wrong window (#14209)
      • layout/scroll: fix configuredWidths not setting properly on new workspaces (#13476)
      • layout/scrolling: fix edge detection in recalculate() (#14359)
      • layout/scrolling: fix size_t underflow in idxForHeight (#13465)
      • layout/windowTarget: fix size_limits_tiled (#13445)
      • layout: fix crash on monitor reconnect due to stale workspace state
      • layout: fix drag_threshold window snap regression (rebased for #12890) (#13140)
      • layout: fix null deref in focalPointForDir and moveInDirection (#13652)
      • layouts: fix crash on missed relayout updates (#13444)
      • meta/stubs: fix notification icon type (#14320)
      • misc: fix missing noreturn attribute for throwError (#13746)
      • monitor: fix centered floating windows off-screen in special workspace (#14203)
      • opengl/shadow: fix shadow offset rendering (#14156)
      • overridableVar: fix reassignment
      • pointer: fix hardware cursor rendering on rotated/flipped monitors (#13574)
      • propRefresher: fix misnamed value
      • protocols/compositor: fix presentFeedback being blocked
      • protocols/sessionLock: fix crash when monitor is gone during lock surface creation
      • protocols: fix image-copy-capture stop handling and remove non protocol errors (#13706)
      • render/pass: fix debug:pass rendering
      • render: fix SIGFPE in addWindowToRenderUnfocused when misc:render_unfocused_fps is 0 (#13973)
      • render: fix layer blur_popups ignoring ignore_alpha when blur is off (#13947)
      • renderer/groupbar: fix a group indicator rounding bug (#13975)
      • renderer/groupbar: fix gradients rendering (#13875)
      • renderer: Various CM fixes, part 8 of refactors (#13860)
      • renderer: fix blockBlurOptimization check (#13685)
      • renderer: fix crash on mirrored outputs needing recalc (#13534)
      • renderer: fix crash on null blur framebuffer during monitor disconnect
      • renderer: fix crash when shader path isn't a file (#13756)
      • renderer: fix crash with nullptr FBs (#13641)
      • renderer: fix decoration colors with linear FP16 (#14361)
      • renderer: fix sdr mod (#13630)
      • renderer: fix shadow CM calculations (#14364)
      • renderer: fix share window projection (#13695)
      • renderer: more FP16 fixes (#14070)
      • renderer: refactor part 7: api fixes (#13631)
      • renderer: small fixes in OpenGL.cpp and OpenGL.hpp (#13842)
      • screencopy: fix crash in screensharing toplevel with invalid handle (#13781)
      • screencopy: fix isOutputBeingSSd (#13586)
      • screencopy: fix minor crash (#13566)
      • screencopy: fix nullptr deref if shm format is weird
      • screenshare: round captureBox after scaling to fix region capture at fractional scales (#14257)
      • seat/compositor: fix minor issues (#13958)
      • seat: fix dropped wl_keyboard.enter after stale keyboardFocusResource (#14143)
      • tests/workspace: fix one test case failing
      • tests: Fix more tests failing on CI (#14159)
      • tests: fix ConfigLuaValueTypes - boolBadType test, 0 and 1 are allowed integer values for bool type (#14240)
      • tests: fix gtests crashing (#14244)
      • workspace: fix missing null access guard (#14119)
      • xwayland: fix compiler warnings (#13920)

      Other:

      • CI/Nix/Test: check gtest exit status
      • CI/Nix: use org-wide actions
      • CI/build: remove commented-out clang-format action (#13893)
      • Nix: always test in debug mode
      • NotificationOverlay: take reserved space into account (#14184)
      • algo/dwindle: Respect force_split when moving windows to workspaces (#13038)
      • algo/dwindle: do NOT use smart_split for overridden focal point (#13635)
      • algo/dwindle: don't crash on empty swapsplit (#13533)
      • algo/dwindle: use focal point correctly for x-ws moves (#13514)
      • algo/scroll: improve directional moves (#13423)
      • algo/scroll: reverse horizontal dir mapping of vertical scroll directions (#13647)
      • algo/scrolling: improve behavior with focus_fit_method = center (#13795)
      • animation: avoid redundant damage calls in tick
      • build: bump hyprgraphics to 0.5.1 (#14013)
      • build: bump hyprutils to 0.13.1 (#14365)
      • build: remove auto-generated hyprctl/hw-protocols/ files during make clear (#13399)
      • build: remove legacy clang-format workflow (#13887)
      • clang-format: run formatter
      • cleanup: avoid repeated weak_ptr lock() calls in conditions (#14057)
      • cleanup: avoid repeated weak_ptr::lock() usage in MasterAlgorithm (#14226)
      • cmake: install the default example hyprland.lua (#14174)
      • cmake: remove dependence on hyprland.conf
      • cmakelists: search for any possible lua package name (#14204)
      • compositor: When processing fullscreen states, only use effective mode where necessary (#13607)
      • compositor: be more selective about how we expand the window box in getting coord (#13720)
      • compositor: damage monitors on workspace attachment updates
      • compositor: move SessionLockManager init from STAGE_LATE to STAGE_BASICINIT (#14272)
      • compositor: recalculate workspace state after fs state update (#14369)
      • config/actions: remove spammy errors and make them silent
      • config/errors: Report and categorize errors properly for actions (#14192)
      • config/executor: actually execute exec-shutdown (#13872)
      • config/legacy: default to active window for movetoworkspace dispatchers (#14170)
      • config/legacy: translate default window args properly
      • config/lua: cannot disable animation (#14215)
      • config/lua: don't pop up an error if no target was found (#14175)
      • config/lua: expand properties in the workspace object (#14194)
      • config/lua: init lua config manager, use lua if available (#13817)
      • config/lua: workspace.move/rename should accept "workspace" instead of "id" as a parameter (#14232)
      • config/refresher: refresh watcher state properly (#14307)
      • config/workspace-rules: support modifying persistent and monitor (#14217)
      • config: allow hashes for parsing colors (#14337)
      • config: always call refresh after config reload (#14346)
      • config: cleanup the entire config infrastructure (#13785)
      • config: find lua paths first (#14335)
      • config: move misc:vfr to debug: (#14021)
      • config: refresh window states on border_size changes (#14201)
      • config: use lua by default, generate lua if no config present
      • data/dnd: guard against expired dndPointerFocus and ensure consistent usage (#13996)
      • debug/overlay: optimize rendering, cleanup and nicetify (#14097)
      • decoration/border: simplify damage callback
      • desktop/group: respect direction when moving window out of group (#13490)
      • desktop/history: include ranges header (#14000)
      • desktop/layerRule: use variants for storage internally
      • desktop/popup: cache popup extents
      • desktop/popup: cache tree count
      • desktop/reserved: do not crash on invalid box init (#13880)
      • desktop/rule: cleanup inheritance, use templates to avoid dup
      • desktop/rule: recheck eating the applied rule (#14362)
      • desktop/rule: use Numeric for number parsing
      • desktop/window: don't group modals
      • desktop/window: expand hidden into proper states
      • desktop/window: guard null monitor in xwaylandSizeToReal (#13876)
      • desktop/window: optimize getRealBorderSize()
      • desktop/window: reduce window deco updates (#13980)
      • desktop/window: refactor over fullscreen state
      • desktop/windowRule: use variants for storage internally
      • desktop/workspaceHistory: small refactor to work better with multi monitor setups (#13632)
      • egl: move over to use hyprgraphics (#12988)
      • errorOverlay: modernize, refactor, use GPU rendering (#14122)
      • example: remove old .conf file
      • examples: merge config blocks in lua example as demo
      • format: safeguard drmGetFormat functions (#13416)
      • gitignore: ignore pointer scroll test artifact
      • helpers/systemInfo: extract info fns (#14222)
      • hyprtester: minor refactoring/restructure (#14154)
      • i18n: update Tatar translations (#13930)
      • i18n: update Vietnamese translations (#13489)
      • i18n: update brazillian portuguese (pt_BR) translation (#14248)
      • init: drop CAP_SYS_NICE from ambient set after gaining SCHED_RR (#14082)
      • input: allow focus to switch to most recently used window on closed (#13769)
      • input: avoid repeated weak_ptr::lock() and ensure consistent usage (#14039)
      • input: focus monitor on touch down events (#13773)
      • input: implement follow_mouse_shrink (#13707)
      • input: keep pointer focus on layer surfaces during keyboard refocus (#14018)
      • input: lazy cache getWindowIdeal()
      • internal: improve cursor size logging (#14180)
      • internal: include setByUser in CConfigManager::getConfigValue (#14155)
      • internal: removed Herobrine
      • internal: rewrite deviceNameToInternalString using a single range pipeline (#13806)
      • internal: silence compiler warnings about unused return values (#13997)
      • keybind/actions: cycle_next w/ tiled = true doesn't choose only tiled windows (#14164)
      • keybindMgr: use legacy behavior for single-key binds on lua (#14176)
      • keybinds: Remove removed keybinds (#13605)
      • layersurface: simulate mouse movement on layer change (#13747)
      • layout/algo: preserve focused target if applicable on layout switches (#14058)
      • layout/algos: use binds:window_direction_monitor_fallback for moves (#13508)
      • layout/dwindle,master: return invalid layoutmsg errors
      • layout/scrolling: handle fullscreen manually (#14190)
      • layout/windowTarget: damage before and after moves (#13496)
      • layout/windowTarget: don't use swar on maximized (#13501)
      • layout/windowTarget: override maximized box status in updateGeom (#13535)
      • layout: guard null workspace in CWindowTarget::updatePos() (#13861)
      • layout: replace string comparison with ID-based matching in WorkspaceAlgoMatcher (#13943)
      • layout: revert "replace string comparison with ID-based matching in WorkspaceAlgoMatcher (#13943)"
      • layout: store and preserve size and pos after fullscreen (#13500)
      • layouts/dwindle: override force after window drags (#14002)
      • logging: update uri of debug log in ConfigManager to reflect change in wiki (#14185)
      • main: improve error reporting during initialization in main.cpp (#14181)
      • meta/stubs: update gesture hints to match new fields (#14195)
      • miscfunctions: reuse monitor pointer instead of repeated calls (#13977)
      • monitor: centralize solitary and scanout eligibility checks
      • monitor: damage old special monitor on change
      • monitor: ensure swapchain is updated before mode test (#14065)
      • monitor: keep workspace monitor bindings on full reconnect (#13384)
      • monitor: set format back after failing DS activation (#14168)
      • monitor: update pinned window states properly on changeWorkspace (#13441)
      • monocle: avoid repeated workspace monitor lock() calls (#14085)
      • nix/tests: print gtests logs
      • nix: separate overlay with deps
      • notifications: move and small refactor (#14094)
      • notifications: optimize rendering (#14088)
      • opengl: minor egl changes (#14147)
      • pass/surface: cache texBox
      • pointer: damage entire buffer in begin of rendering hw
      • protocolMgr: set m_self properly when updating mirrored outputs
      • protocols/workspace: schedule done after output update (#13743)
      • protocols: allow xdg-foreign to be used by sandboxed apps (#13854)
      • protocols: avoid repeated per-client work in hot paths
      • protocols: prune stale subsurface refs in hot traversals
      • protocols: reimplement unstable/xdg-foreign-v2 (#13716)
      • refactor: improve readability of monitor rule comparison (#13884)
      • render/decoration: cache input extents as well
      • render/decoration: improve extent calculations
      • render/decorations: improve cache performance
      • render/opengl: optimize getShaderVariant's map access
      • render/pass: optimize simplification and blur calculations
      • render: scale background to monitor resolution (#14250)
      • renderer/cm: Support wp-cm-v1 version 2 (#12817)
      • renderer: don't damage decos individually in damageWindow
      • renderer: extract window skip conditions into named booleans (#14005)
      • renderer: guard against null monitor in renderMonitor (#13823)
      • renderer: handle HDR -> SDR with cm_auto_hdr (#14102)
      • renderer: move m_renderData to renderer (#13474)
      • renderer: only set presentationmode when required (#14252)
      • renderer: refactor Texture, Framebuffer and Renderbuffer (#13437)
      • renderer: refactor gl renderer (#13488)
      • renderer: refactor projection setting (#13485)
      • renderer: refactor render elements (#13438)
      • renderer: refactor resources and flags (#13471)
      • renderer: shader variants refactor (#13434)
      • renderer: simplify renderWorkspaceWindowsFullscreen
      • renderer: simplify shadows (#14047)
      • renderer: skip redundant render-path work
      • renderer: swizzle on shm screencopy (#14167)
      • repo: ignore the autogen file meta/hl.meta.lua (#14336)
      • rules: make rule prop reset less cursed (#14003)
      • scheduler: keep a strong monitor ref in frame callbacks
      • screencopy: check share session state (#13839)
      • screencopy: clear buffer before rendering (#14064)
      • screencopy: scale window region for toplevel export (#13442)
      • screenshare/frame: set m_copied after shm copy succeeds (#14165)
      • screenshare: adjust session cleanup and event emission order (#14229)
      • screenshare: improve destroy logic of objects (#13554)
      • scroll: clamp column widths properly
      • seat: store surface in pointerFocus before sendEnter (#13941)
      • sessionLock: send locked instead of denied when missing a lock frame for 5 seconds (#14271)
      • shader: delete shader on success path (#13682)
      • socket2: emit kill event (hyprctl kill) (#13104)
      • source: c-f for new clang version
      • splashes: update splashes
      • subsurface: use geometry-aware damage and recurse into nested trees (#13933)
      • tests: add unit tests for ByteOperations helpers (#13886)
      • tests: add unit tests for CDamageRing (#13995)
      • tests: add unit tests for CHyprColor (#13891)
      • tests: add unit tests for CMType helpers (#13888)
      • tests: add unit tests for CMonitorRuleParser (#13895)
      • tests: add unit tests for CTagKeeper (#13970)
      • tests: add unit tests for Direction helpers (#13885)
      • tests: add unit tests for Format utilities (#13923)
      • tests: add unit tests for Math transform utilities (#13935)
      • tests: add unit tests for Math::CExpression (#13924)
      • tests: add unit tests for MiscFunctions helpers (#13934)
      • tests: add unit tests for TransferFunction helpers (#13889)
      • tests: add unit tests for match engine types (#13903)
      • tests: skip pointer tests in CI due to missing input environment (#14238)
      • tests: stabilize CI by relaxing env-dependent checks and timing-sensitive assertions (#14142)
      • tests: tolerate plugin config mismatch in CI (#14173)
      • treewide: alejandra -> nixfmt
      • view: consolidate group flags and apply window rules (#13694)
      • workspace: remove deprecated and unused members (#14198)
      • xdg-foreign-v2: Keep invalid imported objects alive (#14166)
      • xdg-shell: queue state updates for toplevel (#14227)
      • xwayland: handle transient read errors in selection transfer (#14135)
      • xwayland: pipe through monitor in coordinate mapping (#13700)
      • xwayland: prevent potential buffer overflow in socket path handling (#13797)

      Special Thanks

      As always, special thanks to these people / companies for supporting Hyprland's continued development:

      Sponsors

      Diamond

      37Signals

      Gold

      Framework, Butterfly

      Donators

      Top Supporters:

      Tonao Paneguini, Semtex, soy_3l.beantser, Seishin, Nox Æterna, Illyan, Snorezor, Bonsai, Joshua Weaver, ExBhal, DHH, Mikko_Nyman, Kay, iain, TyrHeimdal, miget.com, alexmanman5, Hunter Wesson, --, RaymondLC92, Theory_Lukas, Brandon Wang, Insprill, lzieniew, 3RM, johndoe42, Jas Singh, RayJameson, MadCatX, Xoores, d, Ammar Hossain, Ki☆, inittux111, Arkevius, John Shelburne, DeWattaUnk, ari-cake, gfunnymoney, alukortti, taigrr

      New Monthly Supporters:

      tubid2wenty, Uros Cotman, yafantik, Guy, goblin_engineer, Julius John Puno, Peter Buijs, mb, StellaBuckley, haikuolin, Antibaddy, sludge10123, C Money, Lipski, KampotKaca, Kazuhide Takahashi, Skeptomai, bombadurelli, Rebellen, Álan, StreamCyper, taras, Yury, Sherab, Filinto Delgado, Taddelladius

      One-time Donators:

      Quuton, Selvan, Tyler Adams, tonis, Sam, Dimitrios Liappis, Chivtar, Eric, aponsasan888, bkode, LonestarF1, Chris, Dogmatic Polack, Larry, maxx, MonolithImmortal, edrix, I like GameNative, take my money., nyxloom, Frederic Toemboel, Schmendiey, himes, brandonia, Xphelus, New user, Miguel Flores-Acton, R3dGh0st, Glen, Vitor Moura GUEDES, Anersyum, le_04, Dan, AT, chorr, Awesome, IdeaSpring, Jacobrale, anonymous, Elias Griffin, w00z4, Marcus Edvardsson, Gerhard, Bashmaks, Benjaneb, R4dicalEdward, Matýsek ^^, Michael, Gene Raymond, naivesheep, Neginja, anarchuser, Uta, Francois KERISIT, ay4, Lorenzo santacreu, Gitznik, Jure S, Oliver, Pipes, Mein, ironick, Nlight, Pfoid, DasCleverle, Jaf Endee, DIEBUSTER, senorBeard, alex, Mike, luxxa, JasonPettys, One, Daniel, Sven Eppler, L3rdy, Ilunn, Thorff, XurxoMF, Wonkhester, Brian, Doc O, Mortja, Spook, Miguel Cordero Collar, bennyzen, deah, Sean, Higor, nanea808, Torsten Schieber, I3lack5hield, Kevin Steffer, Zarenno, vfosterm, Nikola, EGB, Dietmar, KilahDentist, Wilf Lin, Rad, Yuza, Supporter, nooob, esseonline, Naresh, darquill, BrnPrs, Pani, BYK, Amaury, nythix, Mika, Patriarch, Gambit, GoatCedric, Adam, MirasM, bl4ckb1rd, Loon, KevOlek, AsciiWolf, Brian Barrow, Anon, Kilian, Cristian M., abhinavmishra094, Dejv78, LinoDB, Trofim, Konstantin, JoaquinCamposPlaza(Ximo), Gabo, Phil, dev2and0m, Neil Brown, zarilion, JavierArias(Javi), Thank you, Mystrasun, Skrazzo, MeguminLoli, revitalist, barcellos-pedro, Juh, Goldie, benabrig, mynus, Daniel Zudel, Grant, Jacob Felknor, Noah, e033x, Nick, Niklas, mkami, Slippy, joenu, Oleksandr, t.i.m., Joss001, M4CETO, Nighty, Donater, David N, Cameron, Ekoban, Kieran, brotiii, Doug, Hypruser#0224975, Shadesofastar, sonicbhoc, GKL, Damien, João Seixas, mothmashine, James Freiwirth, Mek, Krizzkrozz, Panzer, mika.dev, Franky Valley, Sycho sMILEz, Roy, Amundis, willibenmula ❤, Justin, marvelousIT, pablo, Alex, Ryan, cito, Juergen, Eric Koslow, valerius21, jfk, Andrejs, tyforupdate, skwrl, DaintyFox

      Full Changelog : v0.54.0...v0.55.0

    6. 🔗 r/LocalLLaMA 80 tok/sec and 128K context on 12GB VRAM with Qwen3.6 35B A3B and llama.cpp MTP rss

      Just wanted to share my config in hopes of helping other 12GB GPU owners achieve what I see as very respectable token generation speeds with modest VRAM. Using the latest llama.cpp build + MTP PR, I got over 80 tok/sec with an 80%+ draft acceptance rate on the benchmark found here: https://gist.githubusercontent.com/am17an/228edfb84ed082aa88e3865d6fa27090/raw/7a2cee40ee1e2ca5365f4cef93632193d7ad852a/mtp-bench.py

      Here's my PC specs:

      OS: CachyOS (HIGHLY recommended)
      CPU: AMD Ryzen 7 9700X
      RAM: 48GB DDR5-6000 EXPO
      GPU: RTX 4070 Super 12GB

      Results with other hardware may vary.

      To run llama.cpp with MTP support, you need to build it from source and apply a draft PR that hasn't been merged into the master branch yet. You can find a very nice guide on how to do that here, and also download the Qwen3.6 MTP GGUF: https://huggingface.co/havenoammo/Qwen3.6-35B-A3B-MTP-GGUF - Thanks u/havenoammo!

      llama.cpp command:

      llama-server \
        -m Qwen3.6-35B-A3B-MTP-UD-Q4_K_XL.gguf \
        -fitt 1536 \
        -c 131072 \
        -n 32768 \
        -fa on \
        -np 1 \
        -ctk q8_0 \
        -ctv q8_0 \
        -ctkd q8_0 \
        -ctvd q8_0 \
        -ctxcp 64 \
        --no-mmap \
        --mlock \
        --no-warmup \
        --spec-type mtp \
        --spec-draft-n-max 2 \
        --chat-template-kwargs '{"preserve_thinking": true}' \
        --temp 0.6 \
        --top-p 0.95 \
        --top-k 20 \
        --min-p 0.0 \
        --presence-penalty 0.0 \
        --repeat-penalty 1.0

      The most important parameter here is -fitt 1536. Since part of the model is offloaded to the CPU because of its size, this tells llama.cpp to balance the load across the GPU and CPU for the best possible performance, and it leaves 1536 MB of free memory for the MTP draft model and KV cache. Since I'm running my dGPU as a secondary GPU (monitor plugged into the iGPU), I can use the full 12GB of VRAM for inference. 1536 might be too small if you use your dGPU as your primary GPU, so test it out first.

      You can also try different values for --spec-draft-n-max. I got slightly better tok/sec with 3, but a much better acceptance rate with 2, so the trade-off was not worth it. With MTP, you want to maximize both speed AND acceptance, so you need to find the best balance between the two.
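
      The trade-off between draft length and acceptance rate can be sketched with a back-of-envelope model of speculative decoding. This is my own simplification, not code from llama.cpp; the function names and the draft_cost parameter are hypothetical:

      ```python
      def expected_tokens_per_step(p: float, n: int) -> float:
          """Expected tokens committed per target-model pass, assuming each of
          the n drafted tokens is independently accepted with probability p,
          acceptance stops at the first rejection, and the target model always
          contributes one token of its own."""
          return 1.0 + sum(p ** k for k in range(1, n + 1))

      def relative_throughput(p: float, n: int, draft_cost: float = 0.05) -> float:
          """Tokens per unit time, normalized so a plain (non-speculative)
          decode step costs 1.0; each drafted token adds draft_cost, assumed
          small for MTP heads that reuse the trunk's forward pass."""
          return expected_tokens_per_step(p, n) / (1.0 + draft_cost * n)

      # Diminishing returns: each extra draft token is accepted with
      # shrinking probability, so longer drafts help less and less.
      print(relative_throughput(0.8, 2), relative_throughput(0.8, 3))
      ```

      Under this toy model, raising n helps only while the acceptance rate stays high, which is why it pays to benchmark a couple of values rather than maximizing the draft length.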

      Benchmark results:

      mtp-bench.py
      code_python       pred= 192  draft= 132  acc= 125  rate=0.947  tok/s=80.8
      code_cpp          pred=  58  draft=  40  acc=  37  rate=0.925  tok/s=81.8
      explain_concept   pred= 192  draft= 152  acc= 114  rate=0.750  tok/s=70.0
      summarize         pred=  53  draft=  40  acc=  32  rate=0.800  tok/s=75.4
      qa_factual        pred= 192  draft= 144  acc= 119  rate=0.826  tok/s=77.8
      translation       pred=  22  draft=  16  acc=  13  rate=0.812  tok/s=81.9
      creative_short    pred= 192  draft= 160  acc= 111  rate=0.694  tok/s=69.2
      stepwise_math     pred= 192  draft= 144  acc= 119  rate=0.826  tok/s=76.5
      long_code_review  pred= 192  draft= 148  acc= 117  rate=0.790  tok/s=73.2
      

      If you have any questions, feel free to ask :)

      Cheers.

      submitted by /u/janvitos
      [link] [comments]

    7. 🔗 r/Leeds Petition · Stop water pollution from misconnections in the Gledhow Valley rss

      The Friends of Gledhow Valley Woods water monitoring team have been out again this week along the length of Gledhow Beck.

      They found that the culvert on Allerton Grange Way is again pouring a thick brown liquid from a misconnection into the Beck. This has been reported to Yorkshire Water and the Environment Agency, but there is no evidence of any action.

      This is on top of the 368.75 hours of untreated sewage discharges into the Beck and Lake in 2025 (latest figures) from the 4 Combined Sewer outfalls in the Gledhow Valley, and the toxic mix of chemicals and heavy metals running off Gledhow Valley Road into the Beck. Analysis this week demonstrates that levels of lead and zinc from this source are likely to have an adverse impact on invertebrates in Gledhow Beck - a key food source for fish and birds.

      Please support our campaign to clean up this mess for both nature and the local community.

      Sign our petition!

      submitted by /u/blissedandgone
      [link] [comments]

    8. 🔗 r/wiesbaden When is the €800,000 for the Helmut-Schön-Sportpark coming? rss

      The €800,000 was surely wired by fax ages ago, but unfortunately Wiesbaden's city hall ran out of thermal paper. More likely, though, the money was transferred straight to McKinsey as a consulting fee, for a 200-page report meant to clarify why our municipal infrastructure always ends up rotting away.

      submitted by /u/LethisXia
      [link] [comments]

    9. 🔗 r/Harrogate Harlow Moor Drive rss

      Hi Everyone,

      We are considering the move to Harrogate, and we love the Harlow Moor Drive area. We have noticed an empty property, which I understand was a care home (Avon Lodge), which has since been sold to developers.

      Does anyone know if/what the plans for this property would be? Trying to understand if/how it would affect houses in the area (construction/expansion etc!)

      Thank you so much for your help!

      submitted by /u/RefuseElectrical10
      [link] [comments]

    10. 🔗 r/LocalLLaMA Shel Silverstein predicts LLMs (and their hallucinations), circa 1981 rss

      Shel Silverstein predicts LLMs (and their hallucinations), circa 1981 | Ran across this cartoon / poem by accident as I was reminiscing about my favorite childhood poet, Shel Silverstein, and couldn't help thinking of LLMs, of course! submitted by /u/spanielrassler
      [link] [comments]
      ---|---

  2. May 08, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-05-08 rss

      IDA Plugin Updates on 2026-05-08

      New Releases:

      Activity:

      • capa
        • 5a60f3a0: fix: register all data-ref addresses for imports in Ghidra helpers
        • 99b3cfe0: fix: use singular get_segment_at API in binja file string extractor
        • a28fcce7: fix: linter tests needing placeholder rule sets to function
        • 5ca6c3e3: gitignore: script test temp files
        • b505ba76: fix: remove unused imports and un-suppress F401
        • 309231f2: fix: ghidra and binja file strings yield FileOffsetAddress
        • 57e730fa: fix: binja embedded PE yields FileOffsetAddress via segment data_offset
        • c9cb43a8: fix: elffile imports use AbsoluteVirtualAddress for ELF r_offset
        • 9b93e90e: fix: wrap binja function name addresses in AbsoluteVirtualAddress
        • 4e804007: fix: ghidra: don't emit VAs for embedded PEs
        • 330b6413: fix: ida: correctly emit file offsets for embedded PEs
        • 43d65361: gitignore: CLAUDE.local.md
        • 8fca21f8: linter: validate dynamic example offsets
        • 8e464e60: fix: formatting
        • 555bbdec: fix: guard getByteDef against None for unmapped addresses in viv insn

        • c8d47085: fix: remove unused imports from cache-ruleset.py, detect-binexport2-c

        • 7a8a0aca: fix: remove dead except ValueError clause in capa2sarif.py so JSONDec

        • 7d871409: fix: dedent bulk-process.py main() body so explicit argv is used
        • a938c87f: fix: guard statistics calls in compare-backends.py against empty dura

        • 604fae35: fix: replace zipfile with pyzipper in minimize_vmray_results.py so ou

      • ida-x64dbg-mcp
        • f679d5ba: Harden x64dbg runtime workflows
        • bb3d39cf: Add x64dbg runtime snapshot workflow
      • IDAPluginList
        • 90a9d234: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
    2. 🔗 r/Yorkshire The Gannet, RSPB Bempton Cliffs, Yorkshire rss
    3. 🔗 Hex-Rays Blog Announcing the 2025 Plugin Contest Winners rss

      Announcing the 2025 Plugin Contest Winners

      The 2025 Hex-Rays IDA Plugin Contest is officially wrapped up, and we are excited to announce this year's winners! This edition drew 25 submissions from both returning participants and newcomers, each bringing fresh ideas to extend and improve IDA Pro.

    4. 🔗 r/LocalLLaMA vLLM ROCm has been added to Lemonade as an experimental backend rss

      vLLM ROCm has been added to Lemonade as an experimental backend | vLLM has the ability to run .safetensors LLMs before they are converted to GGUF and represents a new engine to explore. I personally had never tried it out until u/krishna2910-amd, u/mikkoph and u/sa1sr1 made it as easy as running llama.cpp in Lemonade:

      lemonade backends install vllm:rocm
      lemonade run Qwen3.5-0.8B-vLLM

      This is an experimental backend for us in the sense that the essentials are implemented, but there are known rough edges. We want the community's feedback to see where and how far we should take this. If you find it interesting, please let us know your thoughts! Quick start guide: https://lemonade-server.ai/news/vllm-rocm.html GitHub: https://github.com/lemonade-sdk/lemonade Discord: https://discord.gg/5xXzkMu8Zk submitted by /u/jfowers_amd
      [link] [comments]
      ---|---

    5. 🔗 @HexRaysSA@infosec.exchange 🔩 PLUGIN SPOTLIGHT: ida-cyberchef mastodon

      🔩 PLUGIN SPOTLIGHT: ida-cyberchef

      This is a new open source plugin that embeds CyberChef's data transformation engine directly into IDA Pro, with a Qt interface that sits alongside your disassembly as a side panel.

      Data flows top to bottom through three panels for input, recipe, and output.

      https://hex-rays.com/blog/ida-pro-meet-cyberchef

    6. 🔗 r/york Driving lessons rss

      I’m looking for driving lessons. I had about 6-12 months of experience 2 years ago, but I haven’t driven since. I was wondering if anyone knew of any lessons I could take before my test that are somewhat cheap and good.

      submitted by /u/sheetpost00
      [link] [comments]

    7. 🔗 r/Yorkshire Hebden Bridge illustration rss

      Hebden Bridge illustration | Hey folks! Thought you'd enjoy this new little illustration I just finished of Hebden Bridge. This took around 18 hours based on my own photos :) submitted by /u/zacrosso_art
      [link] [comments]
      ---|---

    8. 🔗 r/wiesbaden In Germany soon rss

      Hi. I would like to ask what things I need to prepare. I will arrive in Hessen, Germany this coming June, and I am from the Philippines. Any answers/suggestions will be a great help for me. Thank you.

      submitted by /u/No_Manner_2072
      [link] [comments]

    9. 🔗 r/LocalLLaMA Unpopular Opinion: The DGX Spark Forum community of devs is talented AF and will make the crippled hardware a success through their sheer force of will. rss

      There is a lot of disdain for DGX Sparks here on the sub. And I get it. A lot of people say “It could have been great if it had better memory bandwidth”, “SM-121 is a fake/second-class Blackwell chip”, yadda, yadda. These criticisms are valid.

      I bought one anyway because I’m pursuing a Masters in AI and I wanted it for training models, tool dev, testing, etc.
      I was an early adopter, and like many, I was disappointed by the inference performance and software stack initially. Recently, my opinion and experience has changed.

      NVIDIA has an “official” DGX Spark Development community forum that is thriving. The people in the DGX forum community are some of the kindest, smartest, most tenacious group of developers I’ve met. These dudes have one common goal: Squeeze every last drop of performance out of this hardware to prove to themselves and the world that they didn’t make a bad purchase by buying a Spark. I know that sounds snarky, but I don’t think it’s a bad goal.

      The vibe on the forum is like “Ok bros, we all bought this thing, the peeps over at r/LocalLLama are all laughing at us right now, let’s show those sons-of-bitches what we can do.” I mean, none of them would actually say that, because they are all really nice and helpful people, but that’s the vibe I get when I’m browsing through the posts. Everyone there has the same goal: optimize the hell out of DGX Spark to the highest level possible. It’s wild seeing such a harmonious atmosphere. No one really argues, trolls, or rage-baits, none of that. Just everyone in the same boat, working together and encouraging each other, sharing benchmarks, code, vLLM recipes, etc. Reminds me of the vibe of this sub like 2 years ago before all the bot posts flooded the place.

      If you don’t believe me, about the DGX dev community, go check it out for yourself:

      https://forums.developer.nvidia.com/c/accelerated-computing/dgx-spark-gb10

      Check out some of the cool projects they’ve spun up like Sparkrun (http://sparkrun.dev), PrismaQuant, Spark Leaderboard, eugr vLLM, and all the other amazing projects these guys are working on.

      The one big advantage of the DGX hardware for these developers is the fact that the HW and OS is all exactly the same for everyone. You know your shit is going to work on every other Spark box that is out there and that is powerful for a unified community with one common goal.

      So yes, DGX Spark could have been a lot better and was probably crippled by design, but that’s not stopping the DGX Spark Forum community, these MFers are going to use their sheer force of will and talent to make this thing a success just to spite all the naysayers. My two cents, agree or disagree?

      submitted by /u/Porespellar
      [link] [comments]

    10. 🔗 r/york Anyone looking for D&D groups or events in York? rss

      I've been working with some local communities recently in york and I'm trying to improve outreach involving a lot of upcoming Dungeons and Dragons related things

      submitted by /u/JunkDrawerTheatreCo
      [link] [comments]

    11. 🔗 r/wiesbaden Outdoor location wanted rss

      Hello!

      For a graduation party, I'm looking for an outdoor location, ideally with a tent, for up to 50 people in Wiesbaden or the surrounding area.
      Bayleaf Events in Frankfurt Höchst is a very nice location with a tent and decorations, and my absolute favorite, but unfortunately it is not available on May 23 and 24.
      If you have any other ideas, I would definitely be happy to hear them!

      submitted by /u/Levi_Ackermann_1304
      [link] [comments]

    12. 🔗 r/reverseengineering Ghidra-SNES: A Ghidra extension for reverse engineering SNES ROMs (first public release, feedback welcome!) rss
    13. 🔗 tomasz-tomczyk/crit v0.11.0 release

      What's Changed

      Big milestone! Crit crossed more than 500 commits and 250 stars. You can now install it directly from homebrew and we released a Windows version!

      Thank you to everyone who contributed to get us here! I'd appreciate it if you would share it with your colleagues or on Twitter! It helps a lot!


      crit is now in homebrew-core — no tap needed. If you installed from the tap, upgrade once with:

      brew uninstall crit && brew untap tomasz-tomczyk/scratch && brew update && brew install crit
      

      Future updates will arrive via brew upgrade like any other formula.

      Windows + WSL support

      feat: add Windows + WSL support replaces Unix-only syscalls with cross-platform abstractions, adds rundll32 browser launch on native Windows, and keeps the existing WSL fallback chain. crit now works end-to-end on Windows natively.

      General

      Full Changelog : v0.10.5...v0.11.0

    14. 🔗 r/Leeds Roundhay park warning rss

      Hi there!

      Just wanted to write that whilst walking my dog- I had a strange encounter with an older man.

      I was up the back of roundhay park lake (taking the pathway through the woods) at 11:30 this morning/ afternoon- he was in a very isolated part of the walking trail, and after staring at me walking past, I said ‘good afternoon’ and he replied by telling me he thought I was ‘very beautiful’ - I got a bad gut feeling and decided to leave straight away, he was saying more stuff as I was leaving but I didn’t hear him as he was very quiet.

      I just wanted to say to be cautious if you are in roundhay park and to stick to the main path by the lake if possible. Thanks!

      submitted by /u/SadEntertainment5259
      [link] [comments]

    15. 🔗 r/reverseengineering Reverse-engineered DaVinci Resolve's activation check with Claude — Frida runtime tracing + radare2 rss
    16. 🔗 r/Yorkshire Richmond Castle in Yorkshire standing tall after nearly 1000 years. rss
    17. 🔗 r/york Policemen with assault rifles running around rss

      Does anyone know anything about the policemen running around with automatic weapons near Hungate apartments? Quite anxiety inducing to see that

      submitted by /u/Reduxtion
      [link] [comments]

    18. 🔗 r/york Love the cobbled or setts, and the whole atmosphere of Shambles is just magical, really brings out the history and charm of the place! đŸŒș rss

      Love the cobbled or setts, and the whole atmosphere of Shambles is just magical, really brings out the history and charm of the place! đŸŒș | submitted by /u/Coffee000Oopss
      [link] [comments]
      ---|---

    19. 🔗 r/york Restaurant for 25 people central rss

      Can anyone recommend a place for a lunch for 25 people? Central to York? Thank you!

      submitted by /u/DoctorImpossible89
      [link] [comments]

    20. 🔗 tomasz-tomczyk/crit Spotify popup-relay preview (bb4d9fb) release

      WIP build of crit with share_flow: "popup" config support for SSO-protected crit-web instances.

      Setup instructions: SPOTIFY-PREVIEW.md

      Pair with crit-web: docker image ghcr.io/tomasz-tomczyk/crit-web:spotify-preview (release, built from branch share-receiver-elixir).

      Built from commit bb4d9fb of branch share-receiver.

      Feedback / issues: tomasz-tomczyk/crit-web#50

    21. 🔗 r/Yorkshire There's no better place to drink a tea and reboot yourself than the Dales rss

      There's no better place to drink a tea and reboot yourself than the Dales | Image by Dan Silcock submitted by /u/Seabeachlover10
      [link] [comments]
      ---|---

    22. 🔗 r/reverseengineering SASS King Part 2: reverse-engineering ptxas heuristic decisions and what the compiled binary actually reveals rss
    23. 🔗 r/reverseengineering I just released a C++ rewrite of **Minecraft rd-20090515** (May 15, 2009 — one of the earliest pre-Classic versions).If you find it interesting, a ⭐ on GitHub would mean a lot and help the project grow! rss
    24. 🔗 r/LocalLLaMA Multi-Token Prediction (MTP) for LLaMA.cpp - Gemma 4 speedup by 40% rss

      Multi-Token Prediction (MTP) for LLaMA.cpp - Gemma 4 speedup by 40% | Implemented Multi-Token Prediction for LLaMA.cpp. Quantized Gemma 4 assistant models into GGUF format. Ran tests on a MacBook Pro M5 Max. Gemma 26B with MTP drafts tokens 40% faster. Prompt: Write a Python program to find the nth Fibonacci number using recursion. Outputs:
      LLaMA.cpp: 97 tokens/s
      LLaMA.cpp + MTP: 138 tokens/s
      Gemma4-assistant GGUF quantized models: https://huggingface.co/collections/AtomicChat/gemma-4-assistant-gguf Local AI models app: http://atomic.chat Patched llama.cpp: https://github.com/AtomicBot-ai/atomic-llama-cpp-turboquant submitted by /u/gladkos
      [link] [comments]
      ---|---

    25. 🔗 jank blog jank now has its own custom IR rss

      Good news, everyone! jank has a new custom intermediate representation (IR) and we're using it to optimize jank to compete with the JVM. We'll dive into more of that today, but first I want to say thank you to my Github sponsors and to Clojurists Together for sponsoring me this whole year. You all are helping a great deal. I am still searching for a way to continue working on jank full-time with an income which will cover rent and groceries, so if you've not yet chipped in a sponsorship, now's a great time!

    26. 🔗 matklad Steering Zig Fmt rss

      Steering Zig Fmt

      May 8, 2026

      Two tips on using zig fmt effectively. Read this if you are writing Zig, or if you are implementing a code formatter.

      For me, zig fmt is better than any other formatter I've used: rustfmt, the one in IntelliJ, deno fmt. zig fmt is steerable. For every syntactic construct, it has several variations for how it might be laid out. The variation used is selected by looking at what’s currently in the file.

      Easier to show a pair of examples:

          f(1, 2,
            3);
      
      // -> zig fmt ->
      
          f(1, 2, 3);
      
      
          f(1, 2,
            3,);
      
      // -> zig fmt ->
      
          f(
              1,
              2,
              3,
          );
      

      Depending on the trailing comma, function call is formatted on a single line, or with one argument per line.

      The way this plays out in practice is that you decide how you want to lay out the code, add a couple of trailing commas, hit the reformat shortcut (,p is mine), and zig fmt does the rest. For me, this works better than the alternative of the formatter guessing. 90% of great formatting is blank lines between logical blocks and a tasteful choice of intermediate variables, so you might as well lean into key choices rather than eliminate them.

      I know of one non-trivial formatting customization point: columnar layout for arrays:

          .{ 1, 2, 3,
             4, 5, 6, 7, 8, 9, 10, 11,  };
      

      One would think that trailing comma would lead to a number-per-line layout, but, for arrays, zig fmt also takes note of the first line break. In this case, the line break comes after the first three items, so we get three numbers per line, aligned:

          .{
              1,  2,  3,
              4,  5,  6,
              7,  8,  9,
              10, 11,
          };
      

      How cool is that!

      Furthermore, with judicious use of ++ (array concatenation), you can vary the number of items per line. When I need to pass --key value pairs to subprocess, I often go for formatting like this:

      try run(&(.{ "aws", "s3", "sync", path, url } ++ .{
          "--include",            "*.html",
          "--include",            "*.xml",
          "--metadata-directive", "REPLACE",
          "--cache-control",      "max-age=0",
      }));
      
    27. 🔗 Armin Ronacher Pushing Local Models With Focus And Polish rss

      I really, really want local models to work.

      I want them to work in the very practical sense that I can open my coding agent, pick a local model, and get something that feels competitive enough that I do not immediately switch back to a hosted API after five minutes. There are a lot of reasons why I want this, but the biggest quite frankly is that we're so early with this stuff, and the thought of locking all the experimentation away from the average developer really upsets me.

      Frustratingly, right now that is still much harder than it should be but for reasons that have little to do with the complexity of the task or the quality of the models.

      We have an enormous amount of activity around local inference, which is great. We have good projects, fast kernels, and people are doing great quantization work. A lot of very smart people are making all of this better, and yet the experience for someone trying to make this work with a coding agent is worse than it has any right to be.

      Putting an API key into Pi and using a hosted model is a very boring operation. You select the provider, paste the key and then you are done thinking about how to get tokens. Doing the same thing locally, even when you have a high-end Mac with a lot of memory, is a completely different experience. You choose an inference engine, then a model, then a quantization, then a template, then a context size, then you've got to throw a bunch of JSON configs into different parts of the stack and then you discover that one of those choices quietly made the model worse or that something just does not work at all.

      That is the gap I am interested in.

      Runnable Is Not Finished

      A lot of local model work optimizes for making models runnable. That is necessary, but it is not the same thing as making them feel finished. I give you a very basic example here to illustrate this gap: tool parameter streaming.

      For whatever reason, most of the stuff you run locally does not support tool parameter streaming. I cannot quite explain it, but the consequences of that are actually surprisingly significant. If you are not familiar with how these APIs work, the simplest way to think about them is that they are emitting tokens as they become available. For text that is trivial, but for tool calls that is often not done, despite the completions API supporting this. As a result you only see what edits are being done on a file once the model has finished streaming the entire tool call.

      This is bad for a lot of reasons:

      • A dead connection is a weird connection: local models are slow, so when you don't get any tokens for 5 minutes you can't tell if the connection died or if nothing has come through yet. This means you need to increase the inactivity timeouts to the point where they are pointless.

      • You won't see what will happen: if you are somewhat hands-on, not seeing what bash invocation the system is slowly concocting in the background means potentially wasted tokens, and also means that you won't be able to interrupt it until way too late.

      • It's just not SOTA. We can do better, and we should aim for having the best possible experience. Tool parameter streaming is as important as token streaming in other places.

      Having a model spit out tokens doesn't take long, but making the experience great end to end does take a lot more energy.
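      To make the point above concrete, here is a minimal client-side sketch of accumulating streamed tool-call arguments. It assumes OpenAI-style streaming chunks, where partial JSON fragments arrive in delta.tool_calls[...].function.arguments; the chunk dicts below are simulated stand-ins for what a real server delivers as SSE "data:" lines, and the tool name and paths are hypothetical.

```python
def accumulate_tool_call(chunks):
    """Fold streamed deltas into a (name, arguments) pair, surfacing partial
    argument text as it arrives so a UI can render the call in progress."""
    name, parts = None, []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        for tc in delta.get("tool_calls") or []:
            fn = tc.get("function", {})
            name = fn.get("name", name)
            if "arguments" in fn:
                parts.append(fn["arguments"])       # partial JSON fragment
                print("partial:", "".join(parts))   # stream to the UI here
    return name, "".join(parts)

# Simulated stream: without argument streaming, a client sees nothing
# until the second chunk completes; with it, each fragment is visible.
chunks = [
    {"choices": [{"delta": {"tool_calls": [
        {"function": {"name": "edit_file", "arguments": '{"path": '}}]}}]},
    {"choices": [{"delta": {"tool_calls": [
        {"function": {"arguments": '"main.py"}'}}]}}]},
]
```

With a server that only emits the tool call once it is complete, the loop body runs exactly once at the end, which is precisely the dead-connection experience described above.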

      Fragmentation

      The local stack is fragmented across many engines and layers. There is llama.cpp, Ollama, LM Studio, MLX, Transformers, vLLM, and many other pieces depending on hardware and taste. All of these are amazing projects! The problem is not that they exist or that there are that many of them (even though, quite frankly, I'm getting big old Python packaging vibes), the problem is that for a given model, the actual behavior you get depends on a long chain of small decisions that most users just don't have the energy for.

      Did the chat template render exactly right? Are the reasoning tokens handled in the intended way? Is the tool-call format translated correctly? Is the context window real? Are the KV caches actually working for a coding agent? Did I pick the right quantized model from Hugging Face? Are you accidentally leaving a lot of performance on the table because the model is just mismatched for your hardware? Does streaming usage work across all channels? Does the model need its previous reasoning content preserved in assistant messages? Is the coding agent set up correctly for it?

      You also need to install many different things in addition to just your coding agent.

      All of these things matter. They matter a lot.

      The result is that people try a local model and get a result that is neither a fair evaluation of the model nor a polished product experience and this results in both people dismissing local models and energy being distributed across way too many separate efforts instead of getting one effort going great end to end.

      This is a terrible way to build confidence.

      Too Little Critical Mass

      In line with our general "slow the fuck down" mantra, I want to reiterate once more how fast this industry is moving.

      Every week there is a new model and a new vibeslopped thing. The attention immediately moves to making the next thing run instead of making one thing run really, really well in one harness. I get the excitement and dopamine hit, but it also means that too little critical mass accumulates behind any one model, hardware, inference engine, harness combo to find out how good it can really become when the entire stack is built around it.

      Hosted model providers do not ship a bag of weights and ask you to figure out the rest, and we need to approach that line of thinking for local models too. I want someone to pick one model and pair it up with one serving path, directly within a coding agent. Initially just for one hardware configuration, then for more. Pick a winner hard. If a tool call breaks, that is a product bug, and it gets fixed no matter where in the stack it failed. If the model's reasoning stream is malformed, that is a product bug. If latency is much worse than it should be, that is a product bug. We need to start applying that mentality to local models too.

      And not for every model! That is the point. Let's pick one winner and polish the hell out of it. Learn what it takes to make that one configuration good, then take those learnings to the next config.

      The DS4 Bet

      This is why I am excited about ds4.c. It's Salvatore Sanfilippo's deliberately narrow inference engine for DeepSeek V4 Flash on Macs with 128GB+ of RAM only. It is not a generic GGUF runner and it is not trying to be a framework. It is a model-specific native engine with a Metal path, model-specific loading, prompt rendering, KV handling, server API glue, and tests.

      DeepSeek V4 Flash is a good candidate for this kind of experiment because it has a combination of properties that are unusual for local use. It is large enough to feel meaningfully different from many smaller dense models, but sparse enough that the active parameter count makes it plausible to run. It has a very large context window. Since ds4.c targets Macs and Metal only, it can move KV caches into SSDs which greatly helps the kind of workloads we expect from coding agents.

      To run ds4.c you don't need MLX, Ollama or anything else. It's the whole package.

      Embedding It In Pi

      Which is why I built pi-ds4, a Pi extension that embeds the whole thing directly into Pi itself, taking what ds4 is and dogfooding the hell out of it with a coding agent and zero configuration. It answers the question: how good can the local model experience become if Pi treats this as a first-class provider rather than as a pile of manual configuration?

      The extension registers ds4/deepseek-v4-flash, compiles and starts ds4-server on demand, downloads and builds the runtime if needed, chooses the quantization based on the machine, keeps a lease while Pi is using it, exposes logs, and shuts the server down again through a watchdog when no clients are left. It doesn't even give you knobs right now, because I want to figure out how to set the knobs automatically.

      This is not about hiding the fact that local inference is complicated. It is about putting the complexity in one place where it can be improved, because there is a lot that we need to improve along the stack to make it work better.

      I think we can do better with caching and there is probably some performance that can be gained if we all put our heads together.
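      The lease-and-watchdog lifecycle described above (start the server on first use, shut it down when no clients remain) can be sketched roughly as follows. This is a generic illustration of the pattern, not the actual pi-ds4 implementation; the class and parameter names are invented for the example.

```python
import threading

class ServerLease:
    """Start a backing server on first acquire; after the last release,
    a watchdog stops it once `grace` seconds pass with no new clients."""

    def __init__(self, start, stop, grace=30.0):
        self._start, self._stop, self._grace = start, stop, grace
        self._count = 0
        self._lock = threading.Lock()
        self.running = False

    def acquire(self):
        with self._lock:
            if self._count == 0 and not self.running:
                self._start()   # e.g. spawn the inference server on demand
                self.running = True
            self._count += 1

    def release(self):
        with self._lock:
            self._count -= 1
            if self._count == 0:
                # Defer shutdown so a quick reconnect keeps the server warm.
                threading.Timer(self._grace, self._watchdog).start()

    def _watchdog(self):
        with self._lock:
            if self._count == 0 and self.running:
                self._stop()    # no clients left: shut the server down
                self.running = False
```

The grace period is the interesting knob: too short and every pause in a coding session pays the full model-load cost again; too long and the weights sit in memory for nothing.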

      Focusing and Learning

      The experiment I want to run is not "can a local model run?" because we already know that it can. I want to know if, for people with beefed-out Macs for a start, we can get as close as possible to the ergonomics of a hosted provider with decent tool-calling performance: how to get caches to work well, how to improve the way we expose tools in harnesses for these models, and then scale it gradually to more hardware configs and later models.

      I also want everybody to have access to this. Engineers need hammers and a hammer that's locked behind a subscription in a data center in another country does not qualify. I know that the price tag on a Mac that can run this is itself astronomical, but I think it's more likely that this will go down. Even worse, Apple right now due to the RAM shortage does not even sell the Mac Studio with that much RAM. So yes, it's a selected group of people where ds4.c will start out.

      But despite all of that, what matters is that a critical mass of people start to focus their efforts on one thing, tinker with it, improve it, not locked away but out in the open, and most importantly not limited by what the hyperscalers make available.

      But if you have the right hardware and you care about local agents, I would love for you to try it within pi:

      pi install https://github.com/mitsuhiko/pi-ds4
      

      My hope is that this becomes a useful forcing function to really polish one coding agent experience. But really, the focal point should be ds4.c itself.

  3. May 07, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-05-07 rss

      IDA Plugin Updates on 2026-05-07

      Activity:

    2. 🔗 r/LocalLLaMA Collected the infinity stones rss

      Collected the infinity stones | 2.3 TB of RAM in here. 400+ vCores. All that's left is plugging it into the Blackwell with the driver to do RDMA, and it’s over. Using Blackwells for prefill, RDMA to the studio mesh for decode. I think this would be the first heterogeneous cluster. I do, however, need help with the Tinygrad driver to make this work. If anyone with any knowledge of these domains would like to collaborate, let me know via PM. We are very close here. submitted by /u/Street-Buyer-2428
      [link] [comments]
      ---|---

    3. 🔗 r/Yorkshire Whitby - North Yorkshire rss

      Whitby - North Yorkshire | submitted by /u/tomthefear
      [link] [comments]
      ---|---

    4. 🔗 r/Leeds What should I do about the stressed koi at a restaurant? rss

      I'm at a restaurant in Leeds, I'm sure you could figure out which one, which has a koi pond in the middle of the restaurant. It's covered by a large bridge and a thick mesh, and the fish are showing classic signs of stress (not moving, sitting near the bottom, jumping out of the water, and gasping at the surface). Is there a way for me to advocate for better health for them or is it a lost cause as they are the restaurant's property and technically taken care of? Sorry if this is silly it just makes me sad to see them in a bad state.

      submitted by /u/moonstone7152
      [link] [comments]

    5. 🔗 r/york Goose on Dame Judi Dench Walk rss

      Goose on Dame Judi Dench Walk | Honk submitted by /u/NervousEnergy
      [link] [comments]
      ---|---

    6. 🔗 sacha chua :: living an awesome life Emacs Chat 22: Shae Erisson rss

      Transcript, yay!

      I chatted with Shae Erisson about Emacs, keyboards, Org Mode, and life.

      View it via the Internet Archive, watch/comment on YouTube, read the transcript online, download the video / MP3 / transcript, or e-mail me your thoughts!

      Chapters

      • 0:07 Intro
      • 1:01 1999, IRC, community building in Haskell
      • 2:02 Emacs as a light-weight build-your-own-editor toolkit
      • 2:55 LSP, treesitter, Magit, jujutsu, C++, Python, Haskell, rust
      • 3:38 how does a new person experience Emacs? Emacs is always fun.
      • 4:07 Markov keyboard project, moving to Finland, right-handed Dvorak, split keyboard; Jeff Raskin; I am not a koala
      • 6:45 Purpose-specific function keys
      • 7:34 Trackballs, scroll
      • 8:17 1" trackpad rings
      • 8:58 Pair programming: ttyshare, shwim
      • 13:20 Recurse Center, "What is that keyboard? What is that editor?!", Emacs bankruptcy and starter kits
      • 16:09 hippie-expand
      • 17:18 yasnippet
      • 19:01 Function keys
      • 20:05 Org Mode
      • 21:17 Show Org agenda when idle
      • 22:03 Programmers want flow. When programming, light turns red
      • 24:27 ef-themes and modus-themes, season
      • 25:58 htmlize (does this still work on Wayland?)
      • 26:40 lsp-ui-imenu, jumping through rust code
      • 28:30 laptop with 126GB of RAM
      • 29:48 LSP coolness, Haskell, treesitter
      • 32:02 Combobulate
      • 32:52 What else are you using your 126 gigabytes of RAM for?
      • 33:27 TalonVoice
      • 34:46 NixOS, following Steve Purcell about 5 years behind
      • 35:06 envrc
      • 35:54 time-tracking
      • 37:05 taxes with Org Mode, remote lookup
      • 41:02 finding notes with C-s
      • 42:38 Org Mode, managing inbox
      • 46:30 Timestamps
      • 49:14 Org timers
      • 53:56 Org Mode snippets
      • 57:16 Compilation finish function: handle success

      Transcript

      0:00 Intro

      Sacha: Okay, so I'm going to actually remember to hit go live. I've got a 10 second delay, so if we need to panic, we can panic. Okay, so let's see. I think we are live. Hi, everyone. This is Emacs Chat number 22 after a long hiatus. And today, I'm here with Shae Erisson, who is also like an Emacs friend from a long time back. So this is it. As you were just saying, this is the first time we're actually talking live. And I'm looking forward to hearing about your configuration, how you use Emacs, Shae. But before we dive into that, can you give us a little bit of context? Who you are, what sorts of things you do, and how you use Emacs for that?

      0:57 1999, IRC, community building in Haskell

      Shae: I would say that... I guess I started using Emacs in 1999 when I moved to Finland. And I remember about the same time I was on IRC and I was really frustrated. I remember I got on the Perl IRC channel and I was like, hey, I want an editor that has syntax highlighting. I want to see colors to these words when I'm typing them. And they were like, noob, and they kick-banned me. And I was like, well, maybe I don't want to learn Perl, which I never did. And I guess that was an early introduction into I wanted to be part of communities where people were sharing positive things and building up each other. Actually, I ended up starting the Haskell IRC channel a couple of years later, and that became a very big thing. I would say that I'm mostly known for my work in community building in the Haskell programming language community, because I did that for, I don't know, 15 or 20 years. But I really like Emacs.

      1:58 Emacs as a light-weight build-your-own-editor toolkit

      Shae: So, like, last week at the same time I had the standing chat with a friend of mine who is also a programmer, and he said, oh, so you're going to do this thing in a week, do you want to give me like a preview of the talk? And I was like, yeah, I guess so. And some of the things that were really interesting: he was like, I've never really tried Emacs, I don't know much about it, I kind of have this impression that it is a very lightweight build-your-own-editor toolkit. And I was kind of taken aback because, you know, I guess I still have this long-ago-and-far-away view. I don't know if you remember "8 Megs and Constantly Swapping" is what people used to call Emacs, and things like that. It was just kind of... I realized I'm still in my little echo chamber. And this is why I like to talk to other people all the time: because I want to have some exposure to what other people are doing.

      2:51 LSP, treesitter, Magit, jujutsu, C++, Python, Haskell, rust

      Shae: I guess things about Emacs that really changed stuff for me are language server protocol and TreeSitter. Those, I think, are two very powerful tools that are much more generic than... I mean, Magit, of course, is like magic. Although I've mostly switched to jujutsu lately instead, for the last year. Let's see, I had, I guess, let's see, I did C++, I did Python, I did a whole lot of Python. And then I had Haskell jobs for five or six years. And then I switched to Rust about a year and a half ago. I now have a Rust job. And one of the things that Prot had asked, I think, or you had asked, and I forget exactly how this went.

      3:35 how does a new person experience Emacs? Emacs is always fun.

      Shae: It was great fun watching your livestream. And it was, how does a new person kind of get comfortable with using Emacs for a particular purpose. And I look for things, in fact, like how do I use Emacs for Rust, Rust development? And I found a couple of good guides, and I was able to follow most of them, although my yasnippet stuff is broken and I don't exactly know why tab doesn't work, right? But, you know, like there's always, Emacs is always fun, right? There's so many cool things you could do with it.

      4:03 Markov keyboard project, moving to Finland, right-handed Dvorak, split keyboard; Jeff Raskin; I am not a koala

      Shae: I noticed, I actually hadn't seen your preview page and I noticed that you found my Markov keyboard.

      Sacha: When you say Emacs is fun, I'm reminded of all of your fun, crazy keyboard experiments. It's like, what? I have a feeling you like to tinker with things.

      Shae: Yeah, so I think actually the influences as to how I got to where I am are pretty interesting. So the person that I ended up moving to Finland to for dating her, we started a company, we did projects, and I was the programmer. We had this pretty big project. I guess it was like 350,000 euros. And I mean, that was going to be over four years and we had to kind of complete the whole thing, and I was the programmer and we'd had the lowest bid... I had an IBM model M, you know, the super clicky with like all the... And about three years into it, my arm started really hurting a lot. But I was the only programmer. And nobody else knew all the code. And we had to ship it, because that's how we got paid. And so I ended up pushing through. And at the end of it, my arm just didn't work anymore. So for about a year and three months, what I did was I actually taught myself to type right hand. ...Dvorak, because I was already using two-hand Dvorak, and so I kept programming, but I just... One of the things was... like, I like programming, I like using computers, I don't want to wear out my arms again, I don't want to blow them out, so I ended up switching to split keyboards, and I will show you. This is very much the kind of thing that I like to use, and that is like this.

      image from video 00:05:44.800

      Shae: This is an Ergodox Infinity, but there's a lot of other keyboard flavors like this. And one of the things that I particularly like about this... So around the same time I met Jeff Raskin, who wrote The Humane Interface. And so for this particular thing, this is like Control and Alt and Hyper and Super and Shift. And this means that under one thumb, I have a lot more modifier keys than you get off of a standard. And it also means... A lot of my problems started with Emacs pinky, the dreaded, the infamous... I think that one of my... I made a keyboard layout called "I am not koala." You may not know this, but koalas have two thumbs. They have one on each side. And that's cool, but I don't have two thumbs, and I realized that when I was trying to grab something, I didn't put my pinky on it. That would be silly, right? I want to put my thumb around it. And so I decided I would move all of my chording keys under my thumbs. And that's kind of how I...

      6:43 Purpose-specific function keys

      Shae: And another thing I did was when I was really only able to use one hand, was I made my function keys mostly purpose-specific. And that was from Jeff Raskin's writings in The Humane Interface. So I guess I'm a programmer who really likes writing code, doesn't want to wear out my arms, and likes to do fun keyboard things, yeah.

      Sacha: Definitely. You're in it for the long term. You don't want to use up all of your arm capacity now and not be able to keep programming in the future. And now there's hardware to make that easier. So I'm glad. Split keyboards with extra thumb keys seem to be very popular in the Emacs community. I'm now tempted to find space in my desk in order to make that happen.

      7:30 Trackballs, scroll

      image from video 00:07:37.067

      Shae: Another thing I ended up switching to was I started using trackballs. Oh yeah, yeah. I tend to go completely overboard when trying out new things, so I bought 20 different models of trackballs and ended up settling on this one. The nice thing about this one is that this is how you scroll, and it has four buttons.

      Sacha: That is really cool. I like using ThinkPads, so I've been just living off the tiny little mouse in the middle of the keyboard. But back in the day, I also used a trackball. If I can get to the point where I want to take my hands off the keyboard again in order to do mouse things, that would probably be the direction I would go.

      8:14 1" trackpad rings

      Shae: I had an experiment in that area, which is where I purchased a one-inch touchpad, and I strapped it to my finger. And it was a PS2, and it had a USB converter plugged into it. And the idea was I could keep typing, and then I could move the mouse around without taking my hands off the keyboard. And now they actually have touchpad rings. They came out six months or a year ago. It's relatively recent. But the idea is no change in context.

      Sacha: I've only seen the scroll rings, but now there's a touchpad version. That is interesting.

      Shae: Yeah, I think that's pretty cool stuff. Hardware is actually improving things.

      8:54 Pair programming: ttyshare, shwim

      Shae: Oh, another thing, one of the things you talked about with Prot was how do you learn other people's stuff? And one of the things that I use for pairing, so I have one coworker, and it's a strange, interesting job. I like it a lot. And I met this coworker at a previous job, and one of the things, let's see if I can find it. So we used to, at the previous job, we used this thing called ttyshare. Have you heard of it? ttyshare. It's great. You can run it in a terminal and then you can effectively share your terminal with someone else. And so you have multiplayer terminals and that's neat. It was kind of a pain to set up. You had to make sure that you weren't NATed, you know, like you had to have effectively... someone had to have a public IP. You had to do a couple of other things. And as part of my job, I'm now, I guess, part maintainer for Magic Wormhole, the software.

      image from video 00:09:58.467

      Shae: And so one of the things that my coworker wrote was this nifty thing called ShWiM. And it's basically "shell with me." And it's a wrapper around TTY share so that with one single command, you can share a terminal. And the way that we use this is... We both run Emacs as a server, and then we use emacsclient in the terminal to connect.

      image from video 00:10:41.967

      Shae: I don't know if you've ever done this, but I can have a terminal right next to this, and if I run emacsclient in a window, then I'm sharing the same thing. This is a graphical chat with Sacha, in the terminal or in the UI, and both of them are updated.

      Sacha: That's fantastic. I remember people were using tmate for something similar before where you could share that. But yeah, it's just making it seamless, making it frictionless. And on the other side, I have also just been using wormhole to send large files back and forth between Karthik and John Wiegley because we have this other Emacs chat thing where we're going to post it eventually, once I finish figuring out how to redact all the personal information and Org files. But yeah, it's great for being able to send things without having to worry about, oh, you know, what's my public IP? Can I tunnel all the different things to get past whatever firewalls there are? So if this also works for terminal things plus Emacs client, that sounds really, really exciting.

      Shae: We've tried some other experiments. One of the things we tried to do was, and the only downside is like, what if my terminal has a different size, then you have to kind of shrink and match. And so we tried to honestly directly bridge to Emacs clients. And because I don't know if you're aware that there's effectively a local socket for the Emacs client that you can have multiple things connect to. But it turns out there's some sort of like system so I couldn't like reach across the network and directly use my co-workers Emacs session and he couldn't use mine. Weird things happened when we tried to do this cross host. As far as I can tell the Emacs client only works in the same host.

      Sacha: That's interesting. Lately, I've also been experimenting with CRDT, which has that Emacs-less plant as well. So that's been nice. But yeah, of course, a lot of people will be kind of stuck with the first challenge of finding someone that they can pair in Emacs with.

      Shae: I understand. And I think I'm honestly very happy that my one single coworker at this job is also a big Emacs user. And so we exchanged cool ideas and worked on stuff. And I'm very happy about that.

      Sacha: Were they already an Emacs person before they joined? Or did you pick the coworker because they were an Emacs person?

      Shae: They picked me. They were pretty much the person who started this thing. And they picked me because they'd worked with me at the previous job. Although I did have an experience like that. I had this massive Emacs config file, like 20,000 lines, and half of it was comments because it had accrued over 20 years.

      13:13 Recurse Center, "What is that keyboard? What is that editor?!", Emacs bankruptcy and starter kits

      Shae: And in 2019, when I first went to the Recurse Center, well, my first batch, I just was extremely extroverted and social. But my second immediate following batch, which is not the common pattern, I was like, okay, my goal is to write a bunch of Haskell, get some Haskell jobs, And so I went to the quiet room on the quiet floor. But then someone else came in, Marianne, my favorite programming friend. And she was like, what is that keyboard you're using? And I was like, ah, this is an Ergodox thing. And then she's like, what is this editor you're using? And I was like, oh, that's Emacs. And I was kind of a grumpy, like, I'm trying to get stuff done. But she was persistent. She was like, show me this thing. And so I was like, I'll show you Emacs. And she was like, this is great. And I was like. This thing? OK, cool. And I was like, I don't think you want my config. You'll probably want a starter kit. And she was like, well, what are starter kits? And I was like, well, I've heard about Spacemacs. I've heard about Doom. And I would try one of those. So she tried Spacemacs. And I guess this next part happened over several months. She tried Spacemacs. And then she was like, I like it, but it's slow. So I'm switching to Doom Emacs. And I would pair with her. And I was like, wow, look at all these cool things that the starter kits can do. I ended up flushing my entire 20-year-old config and kind of starting over and stealing a lot of great ideas from the starter kits. And Marianne is very ambitious, independent, hardworking, very focused. I'm not very focused. But I've learned a lot of things from her and watching her kind of... I haven't done C in Emacs in a long time so it's great fun to watch her learn these new things and then I learned stuff too and yeah it's good to have collaborative people to work with.

      Sacha: So it sounds like if people would like to encourage more people to talk to them about Emacs, feel free to use your strange keyboards out in public.

      Shae: I like that. That's good. That is good. Yeah I think that's reasonable.

      Sacha: Yeah, and I've just recently started digging into the starter kits too, because I realized I don't know much about them. It is really interesting going through them and discovering all these Emacs 31 options that you can enable to simplify your config or improve your workflow and all that stuff. So there's a lot of good stuff in starter kits, even for people who are not newcomers.

      Shae: I agree. And I think there's nothing wrong with just learning a bunch of new things, trying them out, and also throwing them away if you don't like them.

      Sacha: Now that you've declared Emacs bankruptcy and rebuilt your Emacs on top of other people's starter kits, what has made it into your config? What have you kept from those 20 years of tinkering with Emacs that you really wanted to stick around?

      16:06 hippie-expand

      Shae: I think the only thing that has absolutely stuck around is my use of hippie-expand, which is, I believe, a very old... an ancient tool from a different time. Most of the other stuff is kind of gone. Gone to the wayside. But I really like, I honestly really like hippie-expand. And I know that like, I have rarely heard of other people who use hippie-expand. But you use it? I think you just muted yourself.

      Sacha: I also vote for hippie-expand. It's a nice way to try different functions and just say, I just want all these different possible completions to go in there.

      Shae: Yeah. The thing for me that really sold me on hippie-expand is that most of the time when I am... When I'm doing something, I want to say, like, I can already see that word, just pick that one. And so I'll type the first characters and hit, like, meta forward slash, and ta-da, it's usually there. But then sometimes I do really want, like, some Elisp or some other stuff. And so I actually spent a lot of time tuning this the first time.
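      For readers who want to try the setup described here, a minimal hippie-expand configuration might look like the following. The M-/ binding matches what Shae mentions; the ordering of try-functions is an illustrative choice, not his actual tuning.

      ```elisp
      ;; Bind M-/ to hippie-expand (the default binding is dabbrev-expand).
      (global-set-key (kbd "M-/") #'hippie-expand)

      ;; Try plain in-buffer word completion first, then file names and
      ;; Lisp symbols -- reorder to taste.
      (setq hippie-expand-try-functions-list
            '(try-expand-dabbrev
              try-expand-dabbrev-all-buffers
              try-complete-file-name-partially
              try-complete-file-name
              try-complete-lisp-symbol-partially
              try-complete-lisp-symbol))
      ```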

      17:14 yasnippet

      Shae: I actually only changed it for the first time recently because I was reading a how to write Rust well inside Emacs and they said oh well you want to use yasnippet and so I you know the funny thing is that yasnippet I believe is the thing that got me into Emacs like in 1999 I met this Finnish person Erno Kuusela in Oulu, Finland. Really cool guy. I was like, wow, how do you do this? As soon as you open a file, it's got a substructure and a skeleton. And when you type part of a function or something, it just populates it. And he was like, I'm using this snippet command in Emacs. That's why I was like, what's Emacs? It was very exciting. And at the time, I was using Vim. And Vim was not as, I don't want to say, automatable.

      Sacha: Yeah, now with Neovim and Lua, people are writing more extensions for it. But before, you had to know a lot of magic in order to customize Vim.

      Shae: Right, right. I agree. Let's see, what else do I do? I run my own email server, and I, of course, read my email in Emacs. In Gnus, no less. Which is, I know, an NNTP reader, but it's still also a great... I used to use twiddle compile and I think that stopped working like six years ago, so I need to get rid of this comment, but there's still a lot of kind of cruft from earlier times.

      18:52 Function keys

      Shae: Remember how I said that I use function keys to have like purpose-specific stuff? This was especially true because, I mean, I had my left arm strapped to my chest for like a year and three months before I even started regaining any flexibility, and that meant that...

      Sacha: I'm amazed that you could just map them directly to single commands instead of giving in to the temptation to make them prefixes for longer keystrokes.

      Shae: I didn't really have the choice because I had only one arm that worked. It was just a lot harder to do any chording at the time. I still have a lot of these. F3 I use a lot, which is like, oh, what am I working on right now? That is org-clock-goto. A lot of times, I want to have a terminal that's in Emacs, so that's vterm.
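      A sketch of the purpose-specific function keys described above, assuming the org and vterm packages are installed. F3 is the binding Shae names; the vterm key is a hypothetical choice for illustration.

      ```elisp
      ;; One key, one task: no chording needed.
      (global-set-key (kbd "<f3>") #'org-clock-goto)  ; jump to the currently clocked task
      (global-set-key (kbd "<f4>") #'vterm)           ; terminal inside Emacs (hypothetical key)
      ```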

      20:02 Org Mode

      image from video 00:20:17.133

      Shae: And I actually really do use the calendar all the time. This is like just switch to whatever it is. Of course, my email is here. You know what, let's see... So this... I don't know, have you seen this before? Have you seen this thing called STARTED in an Org mode file?

      Sacha: I use a STARTED state, yes.

      Shae: Well, I got it from you! So if I look at like, my Org Mode configuration, a lot of this STARTED stuff I have from you, I don't know when, but you were the person who introduced me to it.

      Sacha: It's the reminder that I did start working on this. I tend to get distracted by intermediate tasks, so it's nice to be able to say, try to finish these ones first before you move on to the next thing, maybe?

      Shae: I agree. I have the same thing, yeah. And I keep meaning, because this is... I know that you can put Org Mode configuration into the first TODO item. I would really like to move it into the elisp and I just haven't gotten around to it. And it's been 10 years. I mean, maybe I should just do it.
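      Moving a STARTED keyword from the top of an Org file into Elisp, as Shae describes wanting to do, takes one form. This is a minimal sketch, not his actual keyword set.

      ```elisp
      ;; Equivalent to a "#+TODO: TODO STARTED | DONE" line in each Org file.
      (setq org-todo-keywords
            '((sequence "TODO" "STARTED" "|" "DONE")))
      ```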

      21:14 Show Org agenda when idle

      image from video 00:21:23.933

      Shae: One of the things I did that I found fun... I really have written almost zero Elisp, but I did actually puzzle my way through this a year ago. Since so much of my life is in Org Mode, I learned how to make timers. This is very close to what you get directly out of how to do timers in Emacs. After some amount of time, I want my Org agenda to pop up because I want to say like, oh, what is the stuff I'm supposed to be doing? And what am I forgetting? What has been scheduled? And what is on my to-do list? And I also like to look at what is the stuff I've been working on lately? And I really like that a lot.
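      The idle-agenda trick can be done with a single run-with-idle-timer call; the 10-minute threshold here is an assumption for illustration, not Shae's actual value.

      ```elisp
      ;; After 10 idle minutes, pop up the Org agenda as a gentle reminder
      ;; of what is scheduled and what is still on the to-do list.
      (run-with-idle-timer (* 10 60) t #'org-agenda-list)
      ```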

      21:58 Programmers want flow. When programming, light turns red

      image from video 00:22:16.067

      Shae: Another thing that I realized is that I had a blog post that was wildly popular. Where did I put it? And it was all about Emacs. I don't know if you saw the... Here we go. It was... Ah, here it is. So here it is in... This is very much an Emacs...

      Sacha: Oh, yeah, I remember that one. I put it in Emacs News. I thought it was great.

      Shae: All right, cool.

      Sacha: I would like the kiddo to sometimes be able to acknowledge this, but this is not happening. Still, yes.

      Shae: Right, right. Yeah, and so this was really fun because, like... I had a friend who was in development, and there were like millions of dollars spent on how do you detect whether a programmer is in flow, and it came down to: if they're typing, they're probably in flow. And that was it, because they tried to look at EEGs and all kinds of other stuff, but it was like, if they're typing, don't interrupt them. And I don't know, because I do so much in Emacs, I'm not sure how accurate this was. But basically, that's where I learned to do timers the first time. Or maybe... I don't remember which one I did first. And the idea then was as soon as basically my average typing into Emacs has gone up a certain amount, then it will actually switch to busy. And it works just fine. It was a lot of fun to write.

      Sacha: So yeah, interesting use of getting the activity. I've seen other fun implementations of this. I think there's a c-c-c-combo package that makes some fun animation appear if you're typing really quickly.

      Shae: Oh, oh, yeah. I'm guessing because I think Atom, the Atom editor had that for a while. I guess that's where it came from.

      Sacha: So yeah, because you can instrument Emacs and play around with it, you can certainly do all sorts of things based on that information. Okay, so you've got it, you've got it set up so that when you come back to your computer, it'll show you the stuff that you've been working on. And when you're working on the things, you can tell it to tell the rest of the world not to bug you. Gotcha.

      Shae: That's right.

      Sacha: What other fun stuff do you have in there?

      24:25 ef-themes and modus-themes, season

      Shae: I discovered that I love the EF themes. I love the Modus themes. They make me very happy. They're just unreasonably pleasant. As someone who has tried every single Emacs theme ever, they're just my favorite themes.

      image from video 00:24:41.000

      Shae: And so, at the moment, it's summer... Where did my summer go? How can this be? There we go. How come I'm in spring? Wait, isn't spring over? Hasn't summer just started? You know what I was thinking would be fun would be take the time of day, and you know that the EF themes has spring, summer, autumn, and winter, and I'm not sure if there are dark versions of each of those, but I thought, like I know that Modus themes will do this like check for the local time of when it turns dark, and then it will go from the light theme to the dark theme as soon as the sun hits, and I was like, well, what if I do that for seasons, you know, wouldn't that be cool?

      Sacha: There's this subtle sense of change as you go through the year. But of course you also have this thing there where you just randomize it.

      Shae: Well, I like that. Sometimes it's like I'm just kind of like, ah, I'm bored. I'm just bored of what I'm looking at. And so I will just change my thing. And it's just time for something. I don't know. It seems to work. It's like it gives me a little brain break from what I was staring at. And I did not know I was going to reset the effects scale, but that's fine. Interesting. What else do I have in here?
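      The season-based theme idea could be sketched like this, assuming the ef-themes package (ef-spring, ef-summer, ef-autumn, and ef-winter are themes it ships); the function name is made up for illustration.

      ```elisp
      ;; Pick an ef-theme matching the current calendar season.
      (defun my/ef-theme-for-season ()
        (let ((month (string-to-number (format-time-string "%m"))))
          (cond ((<= 3 month 5)  'ef-spring)
                ((<= 6 month 8)  'ef-summer)
                ((<= 9 month 11) 'ef-autumn)
                (t               'ef-winter))))

      (load-theme (my/ef-theme-for-season) t)
      ```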

      25:56 htmlize (does this still work on Wayland?)

      Shae: Oh, Emacs HTMLize. I'm a little sad. I switched to Wayland. And if I remember correctly, HTMLize only works with, or maybe HTMLize still works, and it's the SVG one that doesn't work. Emacs SVG is a thing that if you're running with an X11 backend, you can turn your current screen directly into an SVG, which is really cute. It does not work in Wayland. I think HTMLize does still work. What other things do I have in here? I don't know. I guess a lot of it lately has been trying to make Rust things work smoothly. I've been trying to do some... I wonder does... Oh, cool. That was not what I expected.

      26:37 lsp-ui-imenu, jumping through rust code

      image from video 00:26:41.100

      Shae: I just started doing this thing with imenu. imenu integrates nicely with LSP.

      Sacha: That is a very pretty sidebar thing, and I need to learn how to do that.

      Shae: So because I have all these extra modifiers, my s-i is lsp-ui-imenu. And the reason that what I mostly use that for is when I have like a bunch of Rust code and I want to quickly jump through the structure of it. Basically that integrates with LSP, finds all the definitions, and I can quickly jump through it. I used to use lsp-treemacs for that, but lsp-treemacs puts things in its own order, not quite the same order I want, although treemacs is quite nice. I think that the thing to do is that you and I at some time maybe the next time if we do this again we should set up with a Shwim connection and you and I can both share our Emacs and then you can show me cool things that you do and I can show you cool things that I do and then we can start filing over some of the things. How about that?

      Sacha: That sounds fantastic. I know we'd wanted to experiment with pair programming a long time ago, so that sounds like a seamless way to do it. And therefore I will go and figure out how to install ShWiM and get it working. I will probably need your help to actually test it. I don't know, I think I can rustle something up. Maybe it'll work off my phone. You haven't tried that. But lsp-ui, okay, so I've just been using straight-up imenu, like a Neanderthal, but lsp-ui has this fancy grouping of things and colors and stuff, so I definitely want to check that out.

      Shae: I'm a fan, yeah. I don't know. Do I have anything else exciting that goes with this in here?
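      Reproducing the s-i binding Shae mentions takes one line, assuming the lsp-ui package is installed.

      ```elisp
      ;; Super-i opens the lsp-ui sidebar index for jumping through definitions.
      (global-set-key (kbd "s-i") #'lsp-ui-imenu)
      ```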

      28:25 laptop with 126GB of RAM

      Shae: I will say that at the moment, the system I'm working on, I like buying unreasonably powerful laptops. And so, like, this system has 128 gigs of RAM and 24 cores. My previous laptop has 192 gigs of RAM. Long story short, I end up in a lot of cases where I want to use more memory. I've got all these cores. Can you do something with them? Perhaps you've already seen things like LSP doctor, which will say, have you tried this thing? Have you done this other thing? LSP has really changed

      Sacha: I have not. Sorry, would you like to show me this LSP doctor thing? Because I have not ever seen it.

      Shae: Yeah. Do you use language servers much for your development?

      Sacha: I am only just getting used to having a relatively modern 2018 instead of 2010 laptop. And so I have the red squigglies and various things, but I don't know what to do with them yet.

      Shae: Well, I mean, I'm doing a lot of this. So I have...

      29:46 LSP coolness, Haskell, treesitter

      Shae: Originally for me it was like I spent a lot of time with the Haskell language server because I was doing so much Haskell and it was a super powerful thing. In fact, somebody decided to hammer in half of a proof assistant into the Haskell language server and that was magic. You could do incredible stuff with that because you could just grab all of your local variables and transform the whole shape of your function and you could just write little snippets and just have it work. And that was amazing. It wasn't quite... One of the goals that I believe is... For future development of all programming editors, I believe that something like Emacs macros, but instead for abstract syntax trees, I believe this is an essential ingredient that we do not yet have. And I think that TreeSitter is the first step towards there. We now have one of the hats, right? Which is where we can take... TreeSitter is, you know, if you've used it... It is like you write some effectively C code to produce a really fast parser. Or is it like JavaScript that then compiles to C code? I forget exactly how it works. But the nice thing about TreeSitter is, I don't know if you remember, I'm sure you do remember, that if you were writing Python code and you used a triple-quoted string, you had to then add a comment with another quote because regular expressions is how Emacs was doing all the syntax highlighting. And honestly, that was kind of crap. And then there were projects like the Semantic Bovinator that made a full parsing suite in Elisp, which to me is half brilliant and half insane. And then there was TreeSitter, which kind of took over the world because it was... I think that the language server and TreeSitter are the first two of these editor generic pieces, and I suspect there will be more. I think that something where you can modify the abstract syntax tree and then put back to the source is one of those potential paths forward. I hope so.

      Sacha: Yeah, that would be great if you could just do the manipulations and then roundtrip it back into source code. Just regenerate the changed part of your code. That sounds fantastic. So it sounds like you were able to do some kind of manipulation with the Haskell use case that you were describing. Any chance you can show us like the awesomeness?

      Shae: Sadly, that does not work anymore.

      31:58 Combobulate

      Shae: But you know, if you're looking for something in that area, have you heard of an Emacs library called Combobulate?

      Sacha: I have heard of it. I haven't dug into it.

      Shae: So it uses TreeSitter for source code manipulation. It's a lot closer to the way that, in Org Mode, you can hold meta and arrow keys to kind of move things around. It uses TreeSitter to let you both move around in the context as well as actually alter the shape. And to me, this is the first step towards this tool that I want, where I can write a keyboard macro and have it edit an abstract syntax tree and then spit the results back into the buffer. Yeah.
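      As a small taste of the AST access that tools like Combobulate build on, Emacs 29's built-in `treesit` functions let Elisp inspect the node at point. This is a sketch, not Combobulate's own code; the function name is mine, and it assumes a `treesit`-enabled major mode such as `python-ts-mode`:

      ```elisp
      ;; Requires Emacs 29+ built with tree-sitter support.
      (require 'treesit)

      (defun my/show-node-at-point ()
        "Echo the Tree-sitter node at point and its parent.
      A small illustration of the AST access that structural-editing
      packages like Combobulate are built on."
        (interactive)
        (let* ((node (treesit-node-at (point)))
               (parent (treesit-node-parent node)))
          (message "node: %s  parent: %s"
                   (treesit-node-type node)
                   (treesit-node-type parent))))
      ```

      From a node you can also walk siblings and children (`treesit-node-next-sibling`, `treesit-node-child`), which is the raw material for the meta-arrow-style movement described above.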

      Sacha: All right.

      32:46 What else are you using your 126 gigabytes of RAM for?

      Sacha: What else are you using your 126 gigabytes of RAM for?

      Shae: Let's see. Honestly, I'm going to tell you that Rust Analyzer can take a lot of memory, and a Rust compilation can take a lot of cores. And I'm okay with that. I will say that this laptop is actually from this year, so it's brand new, top of the line. But then I've also got what I think is a bunch of matrix multiplication hardware. How do I use that from Emacs? I don't know. I'm sure I can find something, you know.

      33:25 TalonVoice

      Sacha: Maybe voice computing?

      Shae: Oh, that's an idea. Yeah, one of my friends, she's using Talon. Have you heard of Talon?

      Sacha: Yeah, I've heard of Talon. There are a couple of videos about people using Talon to code by voice, usually involving memorizing kind of a different alphabet for very quickly accessing different shortcuts. But it sounds really cool, and you sound like you've got the hardware to do something amazing with it.

      Shae: That's true. Well, you know, Talon actually lets you do something very similar to Combobulate, where you can navigate the AST of your source code. You can kind of move around very quickly. I don't know, like, are we, like, at the end of our time? No, no, we're halfway through, right?

      Sacha: We're halfway through. I have about 28 minutes before the kiddo runs out and starts demanding lunch.

      Shae: Okay, well, I feel like I've been driving the structure of our conversation, just kind of dumping random things. Did you have any questions or anything you wanted to cover?

      Sacha: This is all amazing. I come in with no preconceived notions. I'm just like, okay, shapr does cool things with Emacs. Let's hear about it. Let's go, let's go.

      Shae: That works for me. Yeah. I mean, a lot of it's been focused on Rust development lately. Rust and Jujutsu.

      34:45 NixOS, following Steve Purcell about 5 years behind

      Shae: I've been doing a lot of Nix. I'm running NixOS. I don't know if you're familiar, but that's been great fun. It's funny, I feel like I've been following Steve Purcell around from a technical perspective. I'm always about five years behind Steve.

      35:03 envrc

      Shae: I was like, oh, you know, NixOS is kind of a pain with Emacs. And just like that, Steve was like, oh, well, have you tried my library, envrc? And I was like, what's that? And he was like, well, now each buffer can have its own direnv environment. And I was like, it's perfect. That's exactly what I need. Because previously, every time I switched buffers, it would then go load all of the local everything in Nix. And sometimes that could take a long time, especially if I'm doing Haskell; that could take 10 seconds, and I really don't want that sort of lag. And so Steve Purcell's brilliant library, envrc, says, you know what? Every single buffer can just keep such a thing, and then you only reload it when you need to. And that's pretty awesome.
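      A minimal setup for Steve Purcell's envrc looks something like this sketch; it assumes direnv itself is installed and on your `PATH`:

      ```elisp
      ;; envrc applies each buffer's direnv environment buffer-locally,
      ;; instead of mutating the global Emacs process environment.
      (use-package envrc
        :ensure t
        ;; The envrc README suggests enabling the global mode late in
        ;; init so its hook runs after other modes' hooks.
        :hook (after-init . envrc-global-mode))
      ```

      With this enabled, opening a file under a directory containing an allowed `.envrc` gives that buffer the corresponding environment, and `M-x envrc-reload` refreshes it on demand.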

      Sacha: That sounds cool, and I should check that out too.

      35:52 time-tracking

      Sacha: @JacksonScholberg has a question. He says, "I was curious about what you were tracking your time working on, how you track it." Is it just Org Clock? So this is how you keep track of the things you're working on and what got interrupted by the new thing that you just added to the stack and so forth?

      Shae: Right. In fact, I have this thing. Honestly, when I sit down at my computer, I just clock in. You'll notice in the bottom right here, we have chat with Sacha, right? And so I just kind of clock in stuff. I really kind of need to reorganize my Org Mode files, because I've been naming them per host. I previously had a work Org Mode file and a home Org Mode file, and now my home hardware is also my work hardware, I guess. So I still have my previous laptop's things where I'm keeping my events. I really need to reorganize things. But I mean, yeah, I schedule things. Oh, you know, I've got a weird thing to show you.

      37:01 taxes with Org Mode, remote lookup

      image from video 00:37:09.900

      Shae: I decided that it would be great fun to do my taxes.

      Sacha: You are showing me your taxes, do I need to like black out this whole thing?

      Shae: Well, this is actually just an example from the docs. So I could actually share my taxes on it because I mostly don't care. But I think in fact you can figure out exactly how much money I'm making by looking at the open whatever. So the thing about this is that I decided to file all of my tax forms directly into Org Mode spreadsheets and then do remote lookups. So basically each spreadsheet was one particular form. And then once I'd gotten to the bottom, like I need this result, like what's my estimated income? And then I would use the lookup, kind of this cross spreadsheet lookup. And that's how I did my taxes for last year. And then my de facto mother-in-law, she's an accountant, and she didn't exactly do this thing, but it was pretty close. She was like, you've got all your taxes in the spreadsheet. I was like, yeah. And then she looked at it and she was like, what is that? And I was like, anyway. So I got to kind of file everything back out into TurboTax, but that was a fun thing to build.
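      Cross-spreadsheet lookups like the ones described use Org's `remote()` reference in a table formula. A small sketch with hypothetical table names and numbers (not Shae's actual forms):

      ```org
      #+NAME: form-a
      | Wages | Interest | Total |
      |-------+----------+-------|
      | 50000 |      120 | 50120 |
      #+TBLFM: $3=$1+$2

      #+NAME: summary
      | Item             | Value |
      |------------------+-------|
      | Estimated income | 50120 |
      #+TBLFM: @2$2=remote(form-a, @2$3)
      ```

      Recalculating the `summary` table with `C-u C-c *` pulls the total straight out of the named `form-a` table, so each tax form can live in its own named table and feed the final result.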

      Sacha: Yeah, I have something like that too. So for example, whenever I do my tax paperwork, I just have to have like, you know, the step by step checklist. Okay, this is where I need to go to get this number. This is where I can put it in. And then eventually it spits out a table that says, okay, put this in box 11, put this in box 13, so that I don't have to do the steps by hand. Because even before the, you know, for me, I use like simple stacks or whatever, it's web based. But before you get to the point where you can put the numbers in the form, you gotta go to this website, calculate this thing, and Org just makes all of that so much easier.

      Shae: I agree. Yeah.

      Sacha: And this remote lookup thing is something I'm always looking up because Org tables are so powerful, but also I need more examples in my life to remember how to use them.

      Shae: Well, I think it took me four hours the first time to get it all figured out. But I can send you an example without showing it here. I can send you an example because I figured out, I think I've hammered the remote lookup down very thoroughly.

      Sacha: And once you've got it right, you can just keep filling that in or copy and paste it. You have an example of the syntax and that's already all you need.

      Shae: Right. I did run across some limitations of the evaluation method of Org mode spreadsheets. But maybe I've been using them a little too hard, if that makes any sense.

      Sacha: Oh, what kind of limitation?

      Shae: Honestly, I think I finally found a way to say every single... Because it was... So really, the way that spreadsheets work is they're much more like dataflow. You either work from the endpoint, which is much more Haskell-style evaluation, where you're like, I need to start here, what depends on this? But in the case where you have a whole bunch of different Org Mode spreadsheets, I ended up with this little hack where I just ran it a bunch of times. So it's like evaluate, evaluate, evaluate. Because remote lookups... I ran, you know, I don't remember. And I think I took notes, but I don't remember. That's one of the great things about Org Mode: I swear half of my brain is in my Org Mode notes. And whenever I'm like, oh, what was that thing? Fortunately, with my terrible short-term memory, I took copious notes, because otherwise I would never be able to get back to it.

      40:55 finding notes with C-s

      Sacha: What is your favorite way of finding those notes?

      Shae: I actually use a lot of C-s, just because I kind of have some idea of where they are in my tree structure. I'll also say I use a lot of Org capture templates, and they're not super complicated. I have a to-do, I have a journal, I have ideas, and random ideas will float into my head. Like, you saw Markov keyboard, right? It is like the weirdest art piece you've seen all day, right? And Markov keyboard shows up on the front page of Hacker News once a year or so, and people are like, programmers have gone too far, this cannot possibly be usable by humans, or something. And I'm like, well, I don't know. I think it was art. And so a lot of times those things will drop into my head while I'm trying to do something else. And so I will quickly write down the idea, and then I've gotten it out of my head enough that I can continue with what I was doing. And so I have a long list of strange ideas. A recent one: you've probably had your teeth worked on once or twice, and you know that the dentist always has to move the light around. And I'm like, but we have really good eye tracking. Wouldn't it make sense to figure out what the dentist or the car mechanic is looking at, and then have the light move around behind them to actually light up the place they're looking at, right? We've got vision tracking. Why don't we do this? But I don't really, yeah. I decided maybe I don't want to work on that one right now.

      Sacha: It sounds like an involved project. Yeah. Yeah, yeah, yeah. Okay, so you're capturing, you're stuffing a lot of these ideas into an inbox.

      42:35 Org Mode, managing inbox

      Sacha: A lot of people are probably in the same boat where they've got these inboxes full of ideas. How do you deal?

      Shae: I archive stuff when I'm done with it.

      Sacha: Oh yeah?

      Shae: Yeah, so a lot of times, and I find this very valuable, is like if I look at... Do I have it? Oops, that was not what I meant to do.

      Sacha: Alright, so you basically just do aggressive speed commands, archive, archive, archive, or look at the agenda and just mark a whole bunch of things and say, that's it, that's gone. It was written down and then it can go.

      Shae: Yeah, well, when I'm really done with something, when the thing is finished, then I will just archive it. I mean, do you use Archive much?

      Sacha: I do. I have a function that goes through my inbox file and just archives anything that was marked as done.

      Shae: Oh, nice!

      Sacha: Because that way it clears it up, right? So I'll refile things where I'm like, okay, it's done, but it has important information; I want to put it somewhere else. But if it's just a transitory task that I'm using to remind myself, tomorrow I have to do this, go find the water bottle, then when it's done, I don't need to know about it in the future. So it's left in my inbox because I checked it off, and then periodically I'll say, clean up inbox. Not only will it remove all of the done things, but if I leave a tag in the title of the task, or if the task matches certain regular expressions, it will refile it to the appropriate place in my kind of more permanent file. So I can say, okay, all of my Emacs-related tasks will get automatically refiled to my Emacs category without my having to do that manually.

      Shae: So you're using tagging because I kept trying to do tagging and never quite did it.

      Sacha: I use tagging sometimes when I remember it, but this is also why I use the regular expression match against the title. I'm using Orgzly on Android to capture the thing on my phone. I might want to say this is a consulting task. File it in the right place so it doesn't get lost in my inbox.

      Shae: Wow. When is your interview so I can learn from your tricks?

      Sacha: This is now. Here we go! You can ask questions. The nice thing about conversations is that we jostle different ideas, and we are like, oh yeah, maybe I should write a blog post about that, because I take it for granted. So now apparently I have to write a blog post about my cleaning up process. My inbox is very long. The other thing, speaking of dealing with really long lists that I picked up from John Wiegley was I also sometimes remember to check this list of random items. So in my agenda, there's also like this, you know, random selection of things that I have not gotten around to thinking about further, but it's there just in case serendipity or boredom make me do something.

      Shae: You know, that's... I've thought about having... Because, you know, I've got this little timer that pops up my agenda, but I've thought about maybe adding a section. I don't know if I could add a section here, but it would be something that says, at the bottom, here are two or three random to-dos that have been open for a while, just for garbage collection. Because I know that in Jujutsu, I've got a cool little query that says, if you have any change sets that are more than two weeks old and are not in a permanent branch state, maybe you should do something about them. It's just called todo. It'd be kind of nice to have that for Org Mode as well.
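      Shae's exact revset wasn't shown, but a Jujutsu alias of that shape might look something like this sketch in `~/.config/jj/config.toml` (the predicate choices are a guess, and `committer_date()` needs a reasonably recent jj):

      ```toml
      # Hypothetical "todo" alias: my mutable changes whose last
      # commit is older than two weeks, i.e. stale work in progress.
      [revset-aliases]
      "todo" = 'mine() & mutable() & committer_date(before:"2 weeks ago")'
      ```

      Then `jj log -r todo` lists the stale change sets that might deserve attention.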

      Sacha: Yeah, it's just, you know, and our brains do these strange things with randomness, right? They're like, oh, I want to see what's new now.

      Shae: Right, right, yeah. Oh, I have a question. You have this thing where you had...

      46:28 Timestamps

      Shae: I saw you taking notes with Prot, and you had this timestamp.

      Sacha: Oh, yeah, yeah, yeah. I'm using it now. Okay, okay. So I have it bound two ways now. I have it as a dabbrev, so dynamic abbreviation, and I also have it as a yasnippet, because sometimes I'm using it with either SPC or tab to complete it. And I don't really want to think, I just want to get the timestamp in and then move on. Abbrevs can run functions, so you can insert the timestamp that way. Or yasnippet, of course, can evaluate the thing. And now I have those. It's basically just a wall-clock time so that I can go back and plop in the chapters as time offsets, which are automatically calculated from the YouTube data on when the stream started. So I don't have to manually calculate my chapters. But it's super useful to have these times everywhere. And in this case, during a conversation, I want to be able to say, hey, we talked about something interesting, and then be able to go back to that point in the video later on.

      Shae: So you're matching? Oh, oh, wow.

      Sacha: So my shortcut for yasnippet is "ot" because I never type "ot" elsewhere, and it's close enough. I use Dvorak, so my O is on home row, and T is close by. Also, on the other hand... There you go.

      Shae: Did I already show you that this is actually Dvorak?

      Sacha: Oh, there you go. Now I can see the keycaps. Yeah, earlier it was kind of blurry, but now, yes, yes. So yes, that is my shortcut for inserting the timestamp. I previously added seconds as well, but then I realized that might be false precision. So I just use the minute at the moment and then go back and adjust the timestamps a little bit later. But yeah, you can use abbreviations for all sorts of things, including times and dates and stuff.

      Shae: Have you ever tried Org timestamp?

      Sacha: Yeah, Org timer. So Org timer gives you a relative timestamp, right? You can say Org timer. Oh, okay. So, sorry. Are you talking about the C-u C-c ! or something of that sort? That's actually what I initially was doing, but then it was too many keystrokes and modifiers to remember, and then I had to press RET to select the, you know, thing. So now I just have an abbreviation insert the Org mode formatted timestamp for me. And then I have this code that searches for the Org timestamp regular expression and then does the calculation and conversion and stuff.

      49:12 Org timers

      image from video 00:53:52.300

      Sacha: So Org timer is a separate thing. It's useful for meetings and things like that. You would say, okay, your Org timer starts at the beginning of the meeting, and then you can have a list, and if you Alt-Shift-Enter or something like that in the list, it'll automatically insert the right relative timer. There you go. So there's an org-timer-start. But the reason I didn't go that approach was because then you A. have to remember to actually start the timer, and B. have to synchronize your time with the video time, which might not have started at the same time. So now I'm just like, okay, wall clock for everything, and then I can do the transformation with whatever I like. And since I'm editing my subtitles in Emacs, I can say, hey, this file started at this time, according to YouTube, and then just map all of the wall clocks to the appropriate subtitle times.

      Shae: Wow. That's really cool.

      Sacha: Anyway, so timers, relative, absolute, and using abbreviations is great. Which I think is actually a thing that I picked up from Karl Voit, because he also likes to use... He has an abbreviation, not at the Emacs level, but at his system level, with his window manager, so he can use this timestamp trick anywhere, including in Etherpad or wherever else you want to insert the date and time. That's V-o-i-t, by the way. But yeah, times are a great way to just leave yourself a pointer to that moment so you can go back to it later.

      Shae: Now I'm curious, how well does that integrate with this sort of thing? Because I really like looking back at my history agenda.

      Sacha: If you have it insert an inactive timestamp, I think it should still show up there. I think it will be a little like those.

      Shae: Yeah, it looks like the... Well, it looks like these two are showing up.

      Sacha: Yeah, yeah, yeah. Yeah, so that's a basic thing that I would have inserted by my either abbrev or... So it's not even dabbrev. It's just regular abbrev in Emacs.

      Shae: What's the difference?

      Sacha: dabbrev is like hippie... Okay, let me just double check here. I feel like dabbrev is sort of hippie-expand-ish. It looks in your buffer or possibly other buffers. And I think hippie-expand and dabbrev kind of work together; it's an option to have them work together. Okay, so... oh, I see. hippie-expand is the more advanced version of dabbrev. dabbrev is dynamic abbrev expansion, and hippie-expand says, yes, that, but try a whole bunch of other things first. But my timestamp thing is actually just done by a regular abbrev, and I will find the thing in my config for "ot". Oh, yeah. I will put it in my chat.

      Shae: My spelling, most people say my emails are spelled really well, but it's only because I have ispell set up.

      Sacha: Yeah, ispell is great. I am learning French and therefore...

      Shae: Oh, c'est trÚs bien. Je parle un peu de français aussi. [Oh, that's very good. I speak a little French too.]

      Sacha: Oh, oui. I'm keeping a journal in French on my blog and I have the Tatoeba Project with all the example sentences and I have a consult interface to look up stuff in them so I can just borrow other people's words and try to make it sound more natural. Plus of course the usual searching for words in dictionaries and stuff. Anyway, in the chat, I put in my global abbrev table definition for insert format time string. In case you want to steal that, it's right there.
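      The abbrev definition Sacha pastes into the chat wasn't captured in the transcript, but an entry of that shape in `global-abbrev-table` might look like this sketch (the "ot" trigger and the timestamp format are from the conversation; the exact code is my guess):

      ```elisp
      ;; Typing "ot" followed by SPC or TAB runs the hook, which
      ;; inserts a wall-clock timestamp instead of expansion text.
      (define-abbrev global-abbrev-table "ot" ""
        (lambda ()
          (insert (format-time-string "[%Y-%m-%d %a %H:%M]")))
        :system t)
      ```

      Because the expansion string is empty, all the work happens in the hook function; `abbrev-mode` must be enabled in the buffer for the trigger to fire.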

      Shae: I will definitely save that into my notes here.

      53:53 Org Mode snippets

      Shae: Another thing I use a lot is Org Mode snippets. I love the fact that Org Mode snippets are just executable; I can just run them. I guess two jobs, three jobs ago, because I would keep the results around and look at them, there was a case where, a couple of months before something got shipped to a customer, I noticed our database schema had changed, and I prevented a tremendous amount of upset and emergency by being like, this doesn't look great. I got one from two weeks ago, and it does not match. Something's wrong here. Everybody's like, I don't think so, Shae. And I'm, like, no no no, we do have a problem, we've got to fix this. And they were, like, oh crap! And then I was like, yeah, solved a problem!

      Sacha: Yeah, I basically try to do as much in a snippet instead of in, you know, in a scratch buffer or whatever, just because having that record, the fact that I did it, and also any notes that I had leading up to it and the output of it, it's just so helpful.

      image from video 00:55:39.300

      Shae: Oh, I've got a cool thing that I'm doing for work. Our README file is not only a readme; the demonstration of our actual thing is done using dependent snippets. That means we are using the results of earlier commands in later places. Perhaps this is something everyone already knows, I don't know. And the other nice thing about that is that we effectively have doc tests, right? When we want to check and see if our software works the way it does in the README, we evaluate the final Org Mode snippet, which then calls forward, calls forward, and then either something blows up or it doesn't. If it does, well, I guess I need to fix something. And so it was pretty exciting to put Org Mode niftiness into our, into my README file, you know?
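      Chained ("dependent") source blocks in Org pass named results forward with `:var`; evaluating the last block forces the earlier ones to run, which is what gives the README its doctest-like property. A minimal sketch (block names and values are illustrative, and shell blocks assume `ob-shell` is enabled):

      ```org
      #+NAME: version
      #+begin_src shell
      echo "1.2.3"
      #+end_src

      #+NAME: check-version
      #+begin_src shell :var v=version
      # Re-runs the "version" block and uses its result, doctest-style.
      test "$v" = "1.2.3" && echo OK
      #+end_src
      ```

      Hitting `C-c C-c` on `check-version` evaluates `version` first; if the pipeline's output drifts from what the README claims, the final block fails instead of printing OK.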

      Sacha: Nice, nice. And you did mention your other coworker is on board with the whole Emacs thing. So that's one of the things that people are often like, I want to use Org Mode and I want to use it for like the documentation or the testing or whatever, but they got to get everyone else on board with the thing. Otherwise it's Jupyter Notebooks or whatever else, right?

      Shae: Right. Okay, so I have a joke for you that I came up with a long time ago, and that is, do you know the only way, there's only one way that Sauron could have organized the invasion of Middle-earth, and do you know what he used?

      Sacha: What?

      Shae: Orc Mode. It's a terrible joke, isn't it?

      Sacha: That's okay. I'm sure someone in the comments will come up with an even worse pun.

      Shae: I'm excited! It's going to be great!

      Sacha: Never underestimate the punniness of the Emacs community.

      Shae: I completely agree. I don't know. Do I have anything else exciting in here?

      57:15 Compilation finish function: handle success

      image from video 00:57:48.300

      Shae: I actually really like this one. I used to run all of my tests in compile; I have F12 bound to compile. And one of the things I wanted was: if the compile is successful, don't show me the results, because everything's good. Since I'm doing stuff in Rust, when I run all the tests, it leaves the buffer up, and I need to get around to actually doing stuff like this for Rustic mode as well, where when the tests pass, it just goes away, because it's all good. And when the tests don't pass, show me where I need to look at the problem. And I got this from Enberg in Emacs, I don't know, 20 years ago. Maybe it was less than 20 years ago, but it probably wasn't. So yeah, there's just so much good stuff. And I also like to, oh, look, here we go. You can see that this is long gone, by the way. It's not there anymore.
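      The widely shared EmacsWiki snippet attributed to "enberg" does roughly this; a sketch of the idea, not necessarily Shae's exact version:

      ```elisp
      ;; Close the *compilation* window after a successful compile,
      ;; but keep it around when there are errors to look at.
      (add-hook 'compilation-finish-functions
                (lambda (buf status)
                  ;; STATUS is a string like "finished\n" on success.
                  (when (string-match "finished" status)
                    ;; Let the "Compilation finished" message show briefly,
                    ;; then remove any window displaying the buffer.
                    (run-at-time 1 nil #'delete-windows-on buf)
                    (message "Compilation successful"))))
      ```

      Each function on `compilation-finish-functions` is called with the compilation buffer and its exit status, so the same hook works for `M-x compile` runs of tests as well as builds.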

      Sacha: I have a proper, you know, it's sachachua.com/dotemacs. A lot easier to remember. But yeah, and I think that's, yeah, yeah, I remember that now. defadvice is also obsolete. The new hotness is advice-add or something like that.
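      For reference, the modern replacement for obsolete `defadvice` is `advice-add`; a generic sketch with hypothetical function names:

      ```elisp
      ;; Old style: (defadvice save-buffer (after my-advice activate) ...)
      ;; New style: a named function attached with advice-add.
      (defun my/after-save-note (&rest _args)
        "Hypothetical advice run after `save-buffer'."
        (message "Saved %s" (buffer-name)))

      (advice-add 'save-buffer :after #'my/after-save-note)

      ;; And it can be cleanly detached again:
      ;; (advice-remove 'save-buffer #'my/after-save-note)
      ```

      Using a named function rather than an inline lambda is what makes the advice removable later, which is one of the main improvements over `defadvice`.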

      Shae: Oh, really? I'm going to make another TODO item for there.

      Sacha: I was digging through my notes trying to find, do you share your config anywhere?

      Shae: No, but you know, at this point, if I share it on YouTube, I might as well just throw it up somewhere. Why not? It's not very exciting. Not like someone like Ross Baker, who has magic. Like, wow, is there some magic coming in from Ross Baker? I'm so excited to see more stuff from him. I guess I feel like, compared to almost everybody else I know, I feel like a power user. Because I'm like, you know, I wish I could do this thing, and a lot of times someone I know is like, well, I did that thing and here's a library. And I'm like, yeah, I'll have to do it. So I guess I feel like I'm a power user. And on the good side, I really haven't written that much Elisp ever, like I was saying in the comments during your interview with Prot. It's just, I guess, never quite gotten to the top of my stack. And I did decide it was time for me to send money to Prot, at least for the themes, if not for, like, please teach me some Elisp. Because, you know, it's not that Elisp is hard. It's more like, what are the things I interact with? What are the words? What's the vocabulary of working with Emacs? I don't actually really know. As a user, sure, I can do cool stuff. I can do Lisp macros. I've done Scheme and Lisp some in the past, but not inside Emacs.

      Sacha: Alright, so let me clarify. After more than 20 years of using Emacs, did you say you feel like a power user or do not feel like a power user?

      Shae: I definitely feel like a power user, but I don't feel like someone who does much of anything with Elisp. I don't really feel like someone who has much of a clue about the internals. And that's not entirely true. I have some of the ideas. But for the most part, I haven't actually needed to know that much about the internals. And sure, I've dug into things like how you efficiently work with large buffers in your editor, like the rope data structure and stuff like that. That was more for fun, although it is something that Emacs does and does extremely well. But I'd kind of like to... There's a lot of things I'd kind of like to change, and I don't really have enough understanding of how I would write the Elisp to do it. Here's a good example. When I hit F3, it takes me to the one I'm currently clocked into, unless I haven't clocked into something since I started Emacs. And honestly, I would like to use something like org-ql, the Org query language: if I've just started Emacs and Org does not know about something, I just want you to go search for it. I have so many cores and so much memory, just go find it.
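      org-ql could express roughly that lookup with its documented `clocked` predicate; a sketch, untested against Shae's setup:

      ```elisp
      ;; Show an agenda-like view of headings clocked in the last 7 days,
      ;; searched across all agenda files. A fallback for "where was I
      ;; clocked in before I restarted Emacs?"
      (require 'org-ql)

      (org-ql-search (org-agenda-files)
        '(clocked :from -7))
      ```

      From the resulting buffer you can jump to the most recently clocked heading and clock back in, which approximates the F3 behavior even in a fresh session.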

      Sacha: That sounds like an excellent reason to go learn Emacs Lisp, so that you can have it... If you're not currently clocked in, go find the most recently clocked-in task and go there, or maybe present you with a list of things and go from there. I would love to hear about your Emacs Lisp learning journey, because that's one of the big things that moves people from being power users, yes, but still users, to using Emacs as a lightweight editor toolkit for something that's custom-fit to exactly what their workflow is. And on that note, I'm going to try to wrap up gracefully before the kiddo just drags me out of here. Thank you so much for doing this. I look forward to more conversations. I'm going to post the transcript and other things like that pretty quickly, I think, because I have this nice workflow now that lets me take screenshots and everything, but there's so much here that I want to unpack. But I hear the kiddo. Bye!


      Chat

      • JacksonScholberg: Emacs is fun
      • JacksonScholberg: Apple's touchpad is another option
      • JacksonScholberg: Trackpad
      • JacksonScholberg: Lol
      • JacksonScholberg: I was curious about what you are tracking your time working on
      • JacksonScholberg: How you track it.
      • JacksonScholberg: You clock in and out to what you are working on. I like that idea.
      • Bezaar.musicc: That's great!
      • PuercoPop: the buffer api (properties) is the hardest part for me
      • charliemcmackin4859: I think you still have a timer going, btw

      Find more Emacs Chats or join the fun: https://sachachua.com/emacs-chat

      You can e-mail me at sacha@sachachua.com.

    7. 🔗 r/Leeds I love this spot. rss

      Sidenote : anyone going warehouse this coming Tuesday ?

      submitted by /u/Auriv3x
      [link] [comments]

    8. 🔗 r/york York City Parade rss

      York City Parade | View from the bus!

      submitted by /u/York_shireman
      [link] [comments]

    9. 🔗 r/reverseengineering The first FREE online WebAssembly Reverse Engineering workbench (and how we built it) rss
    10. 🔗 r/Leeds Moved to Yeadon rss

      Evening all, I’m 24 yo woman and I’ve just bought my first place in Yeadon. I love it here I think it’s really nice, I’ve come from Baildon so used to the quiet and older population but I was expecting a bit more of a young professional area! Is there anyone on here who also lives round here that maybe wants to go for a drink? I live right by the Robin Hood pub but wouldn’t mind trying some independent bars/restaurants and get to know the area more :)

      submitted by /u/Agitated-Yoghurt5258
      [link] [comments]

    11. 🔗 earendil-works/pi v0.74.0 release

      Changed

      • Updated repository links and package references for the move to earendil-works/pi-mono and @earendil-works/* package scopes.
    12. 🔗 The Pragmatic Engineer The Pulse: AI load breaks GitHub – why not other vendors? rss

      Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover Big Tech and startups through the lens of senior engineers and engineering leaders. Today, we cover one out of four topics from last week's The Pulse issue. Full subscribers received the article below seven days ago. If you've been forwarded this email, you can subscribe here.

      GitHub's reliability has been beyond unacceptable recently: last month, third party measurements pinned it at one nine (right at 90%). This month, reliability has been down to zero nines - 86% - as per a third-party tracker, and last week, things got even worse: a frankly embarrassing data integrity incident, more outages, and a partial explanation from GitHub, eventually.

      Data integrity incident

      Last Thursday (23 April), this happened: PRs merged via the merge queue using the squash merge method produced incorrect merge commits when the merge group contained more than one PR. Commits were reverted from subsequent merges: basically, commits were "lost" in the code that was merged!

      Thanks to a bug GitHub introduced, the service broke its integrity promise that pull requests would be merged as expected when using squash merge, a technique typically used to combine multiple small commits into a single, meaningful commit. This is a big deal: data integrity promises are among the most important ones a service like GitHub makes.

      A total of 2,092 pull requests were impacted, and companies hit by the outage included Modal and Zipline. Effectively, GitHub pushed a bunch of work on affected customers who had to manually untangle and recover lost commits, which GitHub could offer zero assistance with.

      Customers had to manually go through their git history and restore missing code. After following manual recovery steps (reverting the squash commit and re-applying commits one by one), all commits should have been recovered.
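      As a rough sketch of what that recovery looks like in plain git terms (the repository layout, file names, and commit names here are hypothetical illustrations, not GitHub's actual guidance):

```shell
set -e
# Throwaway repo: a feature branch gets squash-merged into main, then we
# recover as if the squash commit had come out wrong, by reverting it and
# re-applying the original commits one by one.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main repo && cd repo
git config user.email you@example.com
git config user.name you
echo base > base.txt && git add . && git commit -qm base
git checkout -qb feature
echo one > one.txt && git add . && git commit -qm f1
echo two > two.txt && git add . && git commit -qm f2
git checkout -q main
git merge --squash feature >/dev/null && git commit -qm "squash: feature"
# Recovery step 1: revert the (supposedly bad) squash commit.
git revert --no-edit HEAD >/dev/null
# Recovery step 2: re-apply the feature branch's commits individually.
git cherry-pick main..feature >/dev/null
```

      The `main..feature` range works because a squash merge creates a brand-new commit, so the original feature commits are not ancestors of main and can still be cherry-picked from the branch.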

      GitHub later emailed the list of affected commits to customers, but it's odd that GitHub executives seemed to downplay the nature of this outage. After all, an outage that messes with data integrity is a much bigger deal than something like a fall in availability where no data is corrupted.

      Can Duruk, software engineer at Modal, was unhappy about GitHub's muted response to the outage:

      "The COO going out of their way to find a huge denominator to make the impact appear small feels very dishonest; versus a sincere apology about how this invalidates their entire promise to their customers. We had to dig into their status page about this to even realize they just casually f***ed up our repo."

      Outages don't stop

      On Monday (27 April), pull requests and issues disappeared from GitHub's web UI:

      [Image] Pull requests go missing. Source: Mario Zechner
      [Image] Issues also not to be found. Source: David Cramer

      This had to do with an Elasticsearch outage on GitHub's backend: the cluster became overloaded and went down. So, while pull requests, issues, and projects didn't vanish altogether, they also didn't show up during the 6-hour-long outage.

      There were other outages this week:

      On Tuesday (28 April), security firm Wiz disclosed a critical security issue where a bad actor could get access to all repositories on GitHub and GitHub Enterprise Server using only a git push command. GitHub fixed the issue on GitHub.com within six hours, but GitHub Enterprise servers that have not been updated remain vulnerable.

      Famous open source contributor quits GitHub in frustration

      On Tuesday, Mitchell Hashimoto, founder of HashiCorp and creator of Ghostty, announced that GitHub was unfit for professional work and that he was moving Ghostty, the open source terminal that's now his main focus, off the platform. Mitchell's reasoning was dead simple: being on GitHub makes him unproductive (emphasis mine):

      "The past month I've kept a journal where I put an "X" next to every date where a GitHub outage has negatively impacted my ability to work. Almost every day has an X. On the day I am writing this post, I've been unable to do any PR review for ~2 hours because there is a GitHub Actions outage. This is no longer a place for serious work if it just blocks you out for hours per day, every day.

      It's not a fun place for me to be anymore. I want to be there, but it doesn't want me to be there. I want to get work done and it doesn't want me to get work done. I want to ship software and it doesn't want me to ship software.

      I want it to be better, but I also want to code. And I can't code with GitHub anymore. I'm sorry. After 18 years, I've got to go. I'd love to come back one day, but this will have to be predicated on real results and improvements, not words and promises."

      Mitchell's experience suggests that GitHub's official status page is inaccurate from the point of view of a heavy user like him. The third-party "missing GitHub status page" is likely a better estimate: it puts GitHub's reliability at zero nines, 85.51% uptime. That means some part of GitHub was down for about 3.5 hours per day, on average, over the last 90 days (!!)
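      The arithmetic behind "zero nines" is easy to check; a quick sketch, using the 85.51% uptime figure quoted above:

```python
import math

uptime = 0.8551                      # 90-day uptime figure from the tracker
downtime = 1 - uptime

# "Nines" of availability: 99.9% is three nines, 99% is two, 90% is one;
# anything below 90% rounds down to "zero nines".
nines = -math.log10(downtime)
hours_down_per_day = downtime * 24

print(round(nines, 2))               # 0.84 -> zero nines
print(round(hours_down_per_day, 2))  # 3.48 hours of some outage per day
```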

      altReliability woes: GitHub "not a place for serious work." Source: The Missing GitHub Status Page

      Mitchell's complaint sounds straightforward:

      1. As a professional software engineer, it's important to have tools that help you get work done.
      2. For months, GitHub has got in the way of his work on open source projects via a flood of outages.
      3. It makes no sense to use a product unfit for professional work.
      4. As GitHub shows no signs of improvement, it's worthwhile to move to a different solution which just works.

      CTO blames AI agent-fuelled load spike

      GitHub CTO Vlad Fedorov shared an update on why reliability has been terrible for months at GitHub. He identified agent-driven load far bigger than expected as the culprit, and GitHub shared charts illustrating this:

      [Chart: load over time, with no Y axis shown]

      This chart looks eye-catching - but there's just one tiny issue: no Y axis! So, while it tells the story of the load going up slowly and then very fast, we're not told by how much. However, I managed to get data from GitHub, and below is the chart showing the actual load increase over two years:

      [Chart: actual load increase over two years]

      A load increase of ~3.5x, spread across two years, doesn't seem so brutal at first glance: it is nothing like a 10x increase in a month, even if a good chunk of it occurred in recent months. So, why can't GitHub handle it? In a blog post, Fedorov said:

      "A pull request can touch Git storage, mergeability checks, branch protection, GitHub Actions, search, notifications, permissions, webhooks, APIs, background jobs, caches, and databases. At large scale, small inefficiencies compound: queues deepen, cache misses become database load, indexes fall behind, retries amplify traffic, and one slow dependency can affect several product experiences."

      Here's how the per-second load numbers from January 2023 and today compare:

      [Chart: per-second load, January 2023 vs today]

      GitHub took 15 years to achieve the 2023 numbers, and maybe it expected to continue growing in a comparable way in the future. If so, some engineering decisions about long-term infrastructure improvements would have been made obsolete by the arrival of AI agents.

      To add to GitHub's challenges, the company is in the midst of a migration from its own data centers to Azure. In October last year, GitHub started the move, a project expected to take 12 months, because it was already hitting capacity constraints in its own data centers.

      Such large-scale infrastructure migrations are hard enough when the load on a service is relatively stable; just making sure nothing breaks takes a lot of effort. But moving at a time when load is spiking means that bugs can cause more visible outages. Of course, GitHub can secure a lot more compute capacity on Azure, now that it knows what to expect.

      But other major companies prepared for a 10x increase in infra load, so why not Microsoft / GitHub? A year ago, I did research on how Big Tech was preparing to respond to the impact of AI on their business. Google was improving its internal systems to accommodate for a 10x increase in load. As we covered in The Pragmatic Engineer, in July last year:

      "Google is preparing for 10x more code to be shipped. A former Google Site Reliability Engineer (SRE) told me:

      "What I'm hearing from SRE friends is that they are preparing for 10x the lines of code making their way into production."

      If any company has data on the likely impact of AI tools, it's Google. 10x as much code generated will likely also mean 10x more: code review, deployments, feature flags, source control footprint and, perhaps, even bugs and outages, if not handled with care."

      Predicted enormous load increases were not secret knowledge within the industry, yet it seems GitHub was blissfully ignorant of their potential size. According to Fedorov, GitHub did eventually plan for a need to increase capacity by 10x, but only in October 2025, months later. In February 2026, the company adjusted that expectation to 30x. He wrote:

      "We started executing our plan to increase GitHub's capacity by 10X in October 2025 with a goal of substantially improving reliability and failover. By February 2026, it was clear that we needed to design for a future that requires 30X today's scale."

      There's also the question of whether GitHub miscalculated how much time it had to prepare for explosive load growth, and whether it was caught off guard when that growth materialized months sooner than expected at the start of this year.

      Given GitHub only started to prepare for a major load increase in October, its current problems are unsurprising. At the scale of GitHub, it's common enough for each team owning a service to plan a year ahead on how much load their service will have, and hardware resources like storage, VMs, and networking are allocated accordingly. Load planning can account for up to half of the preparations, and when reality doesn't conform to plans, some systems can struggle to scale up.

      So, on one hand, dealing with a 3.5x increase in load over 2 years should not be such a big deal for most services; especially not ones which can be horizontally scaled (when there's not much state, and scaling is achieved simply by adding new nodes.) But GitHub probably stores a lot more state with pull requests, workflows, projects, etc. This probably makes scaling more tricky when it comes to databases and systems running workflows.

      GitHub also has 18 years of tech debt on its hands, and thousands of staff to align as "organizational overhead." As its service load grows faster than before, responding is harder due to all that accumulated "debt":

      • Tech debt: many systems at the company are 10+ years old and are likely patched up, making them more difficult and risky to change
      • Organizational debt: around 4,000 people work at GitHub, of whom 1,000 are engineers. Teams have dependencies with each other, and even seemingly simple work can require dozens of engineers to work together
      • Customer expectations: GitHub cannot break customer workflows, even if doing so would mean changes to systems happen faster

      GitHub finds itself in the 'innovator's dilemma': the company became successful because it built developer workflows that made sense, pre-AI, and it used to be able to accurately forecast service load changes. But now that engineering teams' workflows include AI agents, GitHub's own workflows are not necessarily the best fit, and the company failed to forecast service-level changes.

      Other vendors floored by AI load? Not really

      One thing that doesn't add up about the situation is that other vendors who are presumably experiencing similar load spikes don't appear to be suffering from reliability issues as much. Vercel, Linear, Resend, Railway, Sentry, and other infra providers are seeing record growth thanks to AI, but keep up with the load.

      Yes, it's true that AI vendors like Anthropic, OpenAI, and Cursor have some reliability issues, but it's not at the scale of GitHub's. GitHub's direct competitors, GitLab and Bitbucket, presumably see load going up similarly, but they're not going down as much.

      An obvious question is how much of GitHub's pain is self-inflicted. With Microsoft as owner, it has more resources at its disposal than any competitor or startup, yet it failed to predict load increases and is too big to respond with the nimbleness of a startup.

      It's undeniable that solving for a major load increase is a hard challenge; it's when the difference between average and standout engineering teams is apparent. GitHub hasn't been responding like a world-class engineering org.

      GitHub alternatives?

      Every regular user of GitHub feels the pain of ongoing outages. As a dev, you can either hope Microsoft will eventually improve reliability, or seek alternatives. As covered above, Mitchell has chosen to quit and is currently deciding where to take Ghostty.

      The obvious alternatives are GitHub's biggest competitors, GitLab and Bitbucket. Each offers Git hosting, and neither comes with the uptime woes that GitHub is suffering from.

      Self-hosting is also an option: run your own git server, or go with a self-hosted forge like Forgejo, an open source, local-first GitHub alternative.

      I also suspect that, soon enough, we'll see startups offering GitHub-like code hosting capabilities, while offering more robust uptime and being architected to handle the 30x-or-more scale which GitHub hopes one day to support.

      Read the full issue of last week's The Pulse, or check out this week's The Pulse. This week's issue covers:

      1. Did Anthropic turn hostile on devs because capacity was running low?
      2. Amazon finally allows Claude Code and Codex usage
      3. Meta forcefully assigns engineers to data labelling ahead of job cuts
      4. New trend: small "AI-forward" teams
      5. Industry Pulse: why Meta tracks employees' computer activity, OpenAI starts to move off Datadog, Apple lets slip it uses Claude Code, GitHub -> Xbox transfers at Microsoft, VS Code inserted "co-authored by Copilot" even when Copilot did nothing, analysis of the Coinbase layoffs
    13. 🔗 r/wiesbaden Freitags essen gehen zu zweit? rss

      Moin wĂŒrde gerne mit einer Freundin an einem Freitag in Wiesbaden essen gehen, es sollte gemĂŒtlich sein und nicht so laut. Also eine AtmosphĂ€re haben die es her gibt das man sich gut unterhalten kann. Es sollte vegan/vegetarische Optionen geben. Ich wĂ€re sehr dankbar fĂŒr eure Tipps da ich mich nicht so gut auskenne.

      submitted by /u/JohnTheMonkey2
      [link] [comments]

    14. 🔗 r/Leeds why is everyone in fancy dress? rss

      I'm in the city centre right now and just wondering why everyone is dressed up? I thought it was the otley run but now I'm unsure because the people in fancy dress are everywhere. This is just me being nosey but I can't find any info about it online so I was wondering if anyone knows.

      submitted by /u/MeowTS13
      [link] [comments]

    15. 🔗 Simon Willison Notes on the xAI/Anthropic data center deal rss

      There weren't a lot of big new announcements from Anthropic at yesterday's Code w/ Claude event, but the biggest by far was the deal they've struck with SpaceX/xAI to use "all of the capacity of their Colossus data center".

      As I mentioned in my live blog of the keynote, that's the one with the particularly bad environmental record. The gas turbines installed to power the facility initially ran without Clean Air Act permits or pollution control devices, which they got away with by classifying them as "temporary". Credible reports link it to increases in hospital admissions relating to low air quality.

      Andy Masley, one of the most prolific voices pushing back against misleading rhetoric about data centers (see The AI water issue is fake and Data center land issues are fake), had this to say about Colossus:

      I would simply not run my computing out of this specific data center

      I get that Anthropic are severely compute-constrained, but in a world where the very existence of "AI data centers" is a red-hot political issue (see recent news out of Utah for a fresh example), signing up with this particular data center is a really bad look.

      There was a lot of initial chatter about how this meant xAI were clearly giving up on their own Grok models, since all of their capacity would be sold to Anthropic instead. That was a misconception - Anthropic are getting Colossus 1, but xAI are keeping their larger Colossus 2 data center for their own work.

      As an interesting side note, the night before the Anthropic announcement, xAI sent out a deprecation notice for Grok 4.1 Fast and several other models providing just two weeks' notice before shutdown, reported here by @xlr8harder from SpeechMap:

      Effective May 15, 2026 at 12:00pm PT, the following models will be retired from the xAI API: grok-4-1-fast-reasoning, grok-4-1-fast-non-reasoning, grok-4-fast-reasoning, grok-4-fast-non-reasoning, grok-4-0709, grok-code-fast-1, grok-3, grok-imagine-image-pro. After May 15, 2026, requests to these models will no longer work.

      This is terrible @xai. I just spent time and money to migrate to grok 4.1 fast, and you're disabling it with less than two weeks notice, after releasing it in November, with no migration path to a fast/cheap alternative.

      I will never depend on one of your products again.

      Here's SpeechMap's detailed explanation of how they selected Grok 4.1 Fast for their project in March.

      Were xAI serving those models out of Colossus 1?

      xAI owner Elon Musk (who previously delighted in calling Anthropic "Misanthropic") tweeted the following:

      By way of background for those who care, I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed. [...]

      After that, I was ok leasing Colossus 1 to Anthropic, as SpaceXAI had already moved training to Colossus 2.

      And then shortly afterwards:

      Just as SpaceX launches hundreds of satellites for competitors with fair terms and pricing, we will provide compute to AI companies that are taking the right steps to ensure it is good for humanity.

      We reserve the right to reclaim the compute if their AI engages in actions that harm humanity.

      Presumably the criteria for "harm humanity" are decided by Elon himself. Sounds like a new form of supply chain risk for Anthropic to me!

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    16. 🔗 r/wiesbaden Erfahrungen mit Autohaus Can in Wiesbadener Str. ? rss

      Wer hat Erfahrung mit dem oben genannten HÀndler? Seriös oder nicht ?

      submitted by /u/HagebuddneLard
      [link] [comments]

    17. 🔗 r/LocalLLaMA WARNING: Open-OSS/privacy-filter MALWARE rss

      There's a new "model" on Hugging Face titled Open-OSS/privacy-filter which is actually a customized infostealer. It poses as a version of the OpenAI privacy filter and uses a Python-based dropper (loader.py) that fetches a malicious PowerShell command from the internet; that command spawns another PowerShell command, downloads a shady EXE file, and sets it to run via Task Scheduler.

      Here's a behavior analysis of what the EXE does: https://tria.ge/260507-tnftrsfx5x/behavioral1

      I also reported both the dropper and the EXE to Microsoft.

      I also reported the repo to HF.

      If you use Linux (which is easier to use for AI/ML) you are unaffected as this is a Windows virus.

      submitted by /u/charles25565
      [link] [comments]

    18. 🔗 earendil-works/pi v0.73.1 release

      New Features

      ‱ Self-update support for the npm scope migration: pi update --self now supports the upcoming package rename from @mariozechner/pi-coding-agent to @earendil-works/pi-coding-agent. After the new package is published, existing global installs can update through the normal self-update flow; pi will uninstall the old global package and install the package name returned by the version check endpoint.
      ‱ Interactive OAuth login selection: OAuth providers can now present multiple login choices in /login, enabling provider-specific interactive authentication flows. See Providers.
      ‱ JSONC-style models.json parsing: models.json now allows comments and trailing commas, making custom provider and model configuration easier to maintain. See Providers and Custom Providers.

      Added

      • Added interactive login selection support so OAuth providers can present multiple login choices (#4190 by @mitsuhiko).

      Changed

      • Changed pi update --self to honor the active package name returned by the Pi version check endpoint, defaulting to the current package when omitted and uninstalling the old global package before installing a renamed package.
      • Changed extension loading to use upstream jiti 2.7 instead of the @mariozechner/jiti fork (#4244 by @pi0).
      • Changed models.json parsing to allow comments and trailing commas (#4162 by @julien-c).

      Fixed

      • Fixed pi -p treating prompts that start with YAML frontmatter as extension flags instead of user messages (#4163).
      • Fixed pending tool results not updating in the live TUI after toggling thinking block visibility while the tool is running (#4167).
      • Fixed /copy reporting success on Linux without writing the clipboard on Wayland-only compositors (Hyprland, Niri, ...) by skipping the X11-only native addon on Linux and routing through wl-copy/xclip/xsel instead (#4177).
      • Fixed HTML session exports to strip skill wrapper XML from rendered user messages (#4234 by @aliou).
      • Fixed OpenAI-compatible chat completion streams that interleave content and tool-call deltas in the same choice.
      • Fixed OpenAI Codex OAuth refresh failures writing directly to stderr while the TUI is active (#4141).
      • Fixed OpenAI Codex Responses requests to send a non-empty system prompt (#4184).
      • Fixed Kimi For Coding model resolution for the Kimi K2 P6 alias (#4218).
      • Fixed Kitty inline image redraws to stay within TUI-owned terminal regions and avoid writing below the active viewport.
      • Fixed Kitty inline image rendering by letting the terminal allocate image ids and bounding parsed image ids to valid values.
      • Fixed inline image capability detection to disable inline images in cmux terminals.
    19. 🔗 r/Leeds Leeds cycle lane network is a 'step in the right direction', say campaigners rss

      Just wanted to add a bit of positivity around the new cycle lanes in Leeds, as there seems to be a lot of negativity whenever the topic comes up.

      Speaking from personal experience, they’ve genuinely changed my life for the better. Up until last year, I hadn’t really ridden a bike since I was a teenager. But after seeing more segregated cycle lanes appear around my area, I realised I could get from my house into the city centre in under 30 minutes almost entirely on protected infrastructure.

      I've started cycling regularly, and eventually I sold my car altogether. I now use my bike every other day for commuting, trips into town, canal rides etc etc. I’m healthier, happier, saving loads of money, and honestly enjoy getting around Leeds far more now. It's hilly in parts but stick to a low gear and it's perfectly manageable, ebikes are great alternatives too and can be purchased through the cycle to work schemes (I saved hundreds on my bike).

      I also cycle year-round, and I think people massively overestimate how “hardcore” cycling is in the UK. Our weather really isn’t that different from places like the Netherlands. Most of the time you’re completely fine with a decent jacket.

      I know the network still has gaps and improvements to make, but for me it’s been a massive step in the right direction and has made cycling feel accessible to normal people again, not just super confident road cyclists.

      Just wondering if anyone else has had a similar experience or enjoys using the bike lanes too?

      submitted by /u/_testingdude
      [link] [comments]

    20. 🔗 crosspoint-reader/crosspoint-reader sd-fonts-m1-b4: fix: Roundraff theme home menu offset with no recent books (#1845) release

      Summary

      With the Roundraff theme selected and no recent books, the home screen menu still shows a "Continue Reading" option, and menu handling is offset by one ("Continue Reading" actually does "Browse Files", "File Transfer" actually does "Settings", etc.). The simple fix is to omit the "Continue Reading" menu item when there are no recent books.


      AI Usage

      While CrossPoint doesn't have restrictions on AI tools in contributing,
      please be transparent about their usage as it
      helps set the right context for reviewers.

      Did you use AI tools to help write this code? NO

    21. 🔗 r/reverseengineering VLC Media Player MKV Exploit Analysis rss
    22. 🔗 r/york Different angles on one perfect subject đŸ’« rss

      Different angles on one perfect subject đŸ’«

      submitted by /u/Coffee000Oopss
      [link] [comments]

    23. 🔗 r/Yorkshire 'We're all human': Reform response to Sheffield candidate accused of Nazi praise rss
    24. 🔗 r/LocalLLaMA Qwen3.6 27B uncensored heretic v2 Native MTP Preserved is Out Now With KLD 0.0021, 6/100 Refusals and the Full 15 MTPs Preserved and Retained, Available in Safetensors, GGUFs and NVFP4s formats. rss

      llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved: https://huggingface.co/llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved

      llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-GGUF: https://huggingface.co/llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-GGUF

      llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-NVFP4-GGUF: https://huggingface.co/llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-NVFP4-GGUF

      llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-NVFP4: https://huggingface.co/llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-NVFP4

      llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-NVFP4-MLP-Only: https://huggingface.co/llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-NVFP4-MLP-Only

      llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-GPTQ-Int4: https://huggingface.co/llmfan46/Qwen3.6-27B-uncensored-heretic-v2-Native-MTP-Preserved-GPTQ-Int4

      All are confirmed to have their full 15 MTPs retained and preserved.

      Comes with benchmark too.

      Find all my models here: HuggingFace- LLMFan46

      submitted by /u/LLMFan46
      [link] [comments]

    25. 🔗 r/Leeds Does anyone else remember when you could buy cats at kirkgate market ? rss

      And pirated DVDs, before 2010, and other crazy stuff. Or am I confusing it with the wrong place? I'm pretty sure we got a cat from there some time in the 2000s, but I could be wrong.

      submitted by /u/TipAdditional4625
      [link] [comments]

    26. 🔗 Console.dev newsletter honker rss

      Description: Durable queues for SQLite.

      What we like: Adds pub/sub, task queue, and event streams to SQLite. No need for client polling or a broker. Shipped as a SQLite extension with bindings for Python, Node, Rust, Go, Ruby, etc. Allows an INSERT and enqueue as part of the same transaction (with rollback). Also supports cron.

      What we dislike: Polling is via a SELECT per millisecond per database, which should be lightweight, but is an extra high-frequency query. Still experimental.
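      Setting honker's own API aside (which isn't shown here), the transactional pattern described above, writing a row and enqueuing its event atomically, can be sketched in plain sqlite3 with a hypothetical queue table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT);
CREATE TABLE queue  (id INTEGER PRIMARY KEY, topic TEXT, payload TEXT);
""")

# Atomic write + enqueue: both inserts commit together or not at all.
with conn:
    conn.execute("INSERT INTO orders (item) VALUES (?)", ("book",))
    conn.execute("INSERT INTO queue (topic, payload) VALUES (?, ?)",
                 ("orders.created", "book"))

# A failure mid-transaction rolls back BOTH writes, so the queue never
# sees an event for a row that was never committed.
try:
    with conn:
        conn.execute("INSERT INTO orders (item) VALUES (?)", ("pen",))
        raise RuntimeError("crash before commit")
except RuntimeError:
    pass

print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 1
print(conn.execute("SELECT COUNT(*) FROM queue").fetchone()[0])   # 1
```

      This is the same guarantee a broker-based setup needs an outbox pattern for; keeping the queue inside the database makes it a single transaction.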

    27. 🔗 Console.dev newsletter Plow rss

      Description: HTTP benchmarking.

      What we like: Runs HTTP requests and benchmarks latency and response codes. Configurable concurrency, duration, request count, and ramp up time. Outputs stats to the terminal in real time. Supports JSON output and provides a web UI.

      What we dislike: Pretty straightforward HTTP request support, including different methods e.g. POST (with body). For more complex benchmarks, k6 is a good, scriptable alternative.

  4. May 06, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-05-06 rss


    2. 🔗 r/Leeds Anybody in Leeds that can help get hired? rss

      I have administrative experience, hospitality, and customer service exec experience, plus overseas experience with places like hotels and holiday parks.

      I'm really trying to get anything, cash in hand or a full-time contract, literally any job, as long as I'm getting the hours every week and not being blown off.

      The job could literally be anything. I genuinely have no ego when it comes to the title or the work I'm doing; I'm almost 20 and, from the outside looking in, I'm about to be cooked by the system.

      Feels like I'm running out of time to build something out of myself, and the bills gotta be paid.

      submitted by /u/BeingRemarkable2117
      [link] [comments]

    3. 🔗 r/Harrogate Harrogate Traffic Relief rss

      The traffic in and around Harrogate is a joke, and has been commented on for as long as I can remember.

      But I'm curious: I've no idea how to solve it, so what are people's suggestions? It seems to me there's just nowhere to speed up flow or reroute around bottlenecks.

      Better buses? Bypasses? How do we fix it?

      submitted by /u/CyclePrevious9043
      [link] [comments]

    4. 🔗 r/Leeds Anyone walked around Black Carr woods? rss

      Evening all. Recently found out about the Greenside tunnel under Pudsey, and found Black Carr wood quite close by. Been thinking of taking a bit of a hike around the wood and wondered if anyone can tell me if it's worth it. It mostly looks nice, but I did see some pictures of old rusted cars, and since it's right next to some industrial parts of Tyersal, I wondered if it is a nice wooded walk or a bit of a dumping ground. Would be starting from Fulneck way.

      submitted by /u/spiderham42
      [link] [comments]

    5. 🔗 Jeremy Fielding (YouTube) Wall-E Is Getting Complicated. rss

      If you want to join my community of makers and tinkerers, consider getting a YouTube membership 👉 https://www.youtube.com/@JeremyFieldingSr/join

      If you want to chip in a few bucks to support these projects and teaching videos, please visit my Patreon page or Buy Me a Coffee. 👉 https://www.patreon.com/jeremyfieldingsr 👉 https://www.buymeacoffee.com/jeremyfielding

      Social media, websites, and other channel

      Instagram https://www.instagram.com/jeremy_fielding/?hl=en Twitter 👉https://twitter.com/jeremy_fielding TikTok 👉https://www.tiktok.com/@jeremy_fielding0 LinkedIn 👉https://www.linkedin.com/in/jeremy-fielding-749b55250/ My websites 👉 https://www.jeremyfielding.com 👉https://www.fatherhoodengineered.com My other channel Fatherhood engineered channel 👉 https://www.youtube.com/channel/UC_jX1r7deAcCJ_fTtM9x8ZA

      Notes:

      Technical corrections

      Nothing yet

    6. 🔗 @HexRaysSA@infosec.exchange New training updates, plus Spring discounts: mastodon

      New training updates, plus Spring discounts:
      ‱ On-demand Starter → 20% off with code STR20
      ‱ AI-powered Intermediate → 40% off (May 12) with code AI-INTER40
      ‱ Malware, Decompiler & Programming → 30% off with code SPRING30

      Details + course breakdown: https://hex-rays.com/blog/spring-training-sale-2026
      *Limited time offer, check blog for expiration dates!

    7. 🔗 r/LocalLLaMA ZAYA1-8B: Frontier intelligence density, trained on AMD rss

      ZAYA1-8B: Frontier intelligence density, trained on AMD | submitted by /u/carbocation
      [link] [comments]
      ---|---

    8. 🔗 r/york Moving back - flat hunting rss

      I'm coming home! So excited to be moving back but slightly worried about finding a flat after a few years abroad. I know the drill since the last time I lived there, but wanted to see if anything has changed - do things still move at the speed of light - by the time something hits Rightmove, it's already full of viewings and likely to be gone tomorrow - is that still the case?

      I can't remember what month most student lets turn over / when the most availability is...? (I know the new system may impact this)

      Should I just book a hotel and wait till I'm in town to sort out viewings? (and trust I'll find somewhere within a week?)

      Budget is 1.1-1.5k, would like to be relatively near the uni. I know the dust is still settling from the new Renters' rights and I've read so many posts on here about where to look/ agents to avoid etc, but curious how things feel locally lately.

      Last but not least - any anecdotes for getting pets approved since the rule changes? Any differences between getting a cat approved (vs dogs)?

      Thanks!

      submitted by /u/fruitloopfitness
      [link] [comments]

    9. 🔗 r/Leeds Does anyone remember Toyworld Megastore? rss

      As a kid I loved this toy shop. It was on the Headrow, attached to the Headrow Shopping Centre (later turned into the Core, now demolished), to the right of the entrance; the same unit later became GAME. It seems to have had a very short lifespan, opening and closing in the mid-2000s, though there was another store on the top floor of the Headrow Shopping Centre in the 90s.

      Some of the only info I can find online, is my own reddit post from 3 years ago, https://www.reddit.com/r/Leeds/comments/z57afp/does_anyone_remember_toyworld_megastore/

      I'd love to find a photo of the store, or literally any info/memories - it's basically all gone and I'm so annoyed at myself for not having saved the one photo that existed 3 years ago.

      Thank you in advance!

      submitted by /u/Same_Ability3423
      [link] [comments]

    10. 🔗 r/Yorkshire Silkstone Waggonway rss

      I create short forgotten-history videos around Yorkshire, and specifically Barnsley. Here's my latest short, on the Silkstone Waggonway.

      submitted by /u/9arke1
      [link] [comments]

    11. 🔗 Hex-Rays Blog New Training Formats, New Workflows, New Skills rss

      New Training Formats, New Workflows, New Skills

      We’ve made meaningful updates to our training lineup with the introduction of a new on-demand format for beginners, integration of AI into our Intermediate course, and expanded hands-on content across advanced trainings.

    12. 🔗 Simon Willison Live blog: Code w/ Claude 2026 rss

      I'm at Anthropic's Code w/ Claude event today. Here's my live blog of the morning keynote sessions.

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    13. 🔗 r/LocalLLaMA None of this will ever get stolen rss

      It's crazy that they're thinking of doing this. There are problems with people stealing catalytic converters off people's cars and now they want to put a rack outside your house!?

      submitted by /u/martin_xs6
      [link] [comments]

    14. 🔗 r/york Lost keys rss

      I lost a set of keys with a black carabiner on them, two old style keys and one modern one, within the nunnery lane area.

      Any leads?
      I'm really worried😓

      submitted by /u/soupygirls
      [link] [comments]

    15. 🔗 Simon Willison Vibe coding and agentic engineering are getting closer than I'd like rss

      I recently talked with Joseph Ruscio about AI coding tools for Heavybit's High Leverage podcast: Ep. #9, The AI Coding Paradigm Shift with Simon Willison. Here are some of my highlights, including my disturbing realization that vibe coding and agentic engineering have started to converge in my own work.

      One thing I really enjoy about podcasts is that they sometimes push me to think out loud in a way that exposes an idea I've not previously been able to put into words.

      Vibe coding and agentic engineering are starting to overlap

      A few weeks after vibe coding was first coined I published Not all AI-assisted programming is vibe coding (but vibe coding rocks), where I firmly staked out my belief that "vibe coding" is a very different beast from responsible use of AI to write code, which I've since started to call agentic engineering.

      When Joseph brought up the distinction between the two I had a sudden realization that they're not nearly as distinct for me as they used to be:

      Weirdly though, those things have started to blur for me already, which is quite upsetting.

      I thought we had a very clear delineation where vibe coding is the thing where you're not looking at the code at all. You might not even know how to program. You might be a non-programmer who asks for a thing, and gets a thing, and if the thing works, then great! And if it doesn't, you tell it that it doesn't work and cross your fingers.

      But at no point are you really caring about the code quality or any of those additional constraints. And my take on vibe coding was that it's fantastic, provided you understand when it can be used and when it can't.

      A personal tool for you, where if there's a bug it hurts only you, go ahead!

      If you're building software for other people, vibe coding is grossly irresponsible because it's other people's information. Other people get hurt by your stupid bugs. You need to have a higher level than that.

      This contrasts with agentic engineering where you are a professional software engineer. You understand security and maintainability and operations and performance and so forth. You're using these tools to the highest of your own ability. I'm finding the scope of challenges I can take on has gone up by a significant amount because I've got the support of these tools.

      But I'm still leaning on my 25 years of experience as a software engineer.

      The goal is to build high quality production systems: if you're building lower quality stuff faster, I think that's bad. I want to build higher quality stuff faster. I want everything I'm building to be better in every way than it was before.

      The problem is that as the coding agents get more reliable, I'm not reviewing every line of code that they write anymore, even for my production level stuff.

      I know full well that if you ask Claude Code to build a JSON API endpoint that runs a SQL query and outputs the results as JSON, it's just going to do it right. It's not going to mess that up. You have it add automated tests, you have it add documentation, you know it's going to be good.

      But I'm not reviewing that code. And now I've got that feeling of guilt: if I haven't reviewed the code, is it really responsible for me to use this in production?

      The thing that really helps me is thinking back to when I've worked at larger organizations where I've been an engineering manager. Other teams are building software that my team depends on.

      If another team hands over something and says, "hey, this is the image resize service, here's how to use it to resize your images"... I'm not going to go and read every line of code that they wrote.

      I'm going to look at their documentation and I'm going to use it to resize some images. And then I'm going to start shipping my own features. And if I start running into problems where the image resizer thing appears to have bugs or the performance isn't good, that's when I might dig into their Git repositories and see what's going on. But for the most part I treat that as a semi-black box that I don't look at until I need to.

      I'm starting to treat the agents in the same way. And it still feels uncomfortable, because human beings are accountable for what they do. A team can build a reputation. I can say "I trust that team over there. They built good software in the past. They're not going to build something rubbish because that affects their professional reputations."

      Claude Code does not have a professional reputation! It can't take accountability for what it's done. But it's been proving itself anyway - time and time again it's churning out straightforward things and doing them right in the style that I like.

      There's an element of the normalization of deviance here - every time a model turns out to have written the right code without me monitoring it closely there's a risk that I'll trust it at the wrong moment in the future and get burned.

      The new challenge of evaluating software

      It used to be if you found a GitHub repository with a hundred commits and a good readme and automated tests and stuff, you could be pretty sure that the person writing that had put a lot of care and attention into that project.

      And now I can knock out a git repository with a hundred commits and a beautiful readme and comprehensive tests of every line of code in half an hour! It looks identical to those projects that have had a great deal of care and attention. Maybe it is as good as them. I don't know. I can't tell from looking at it. Even for my own projects, I can't tell.

      So I realized what I value more than the quality of the tests and documentation is that I want somebody to have used the thing. If you've got a vibe coded thing which you have used every day for the past two weeks, that's much more valuable to me than something that you've just spat out and hardly even exercised.

      The bottlenecks have shifted

      If you can go from producing 200 lines of code a day to 2,000 lines of code a day, what else breaks? The entire software development lifecycle was, it turns out, designed around the idea that it takes a day to produce a few hundred lines of code. And now it doesn't.

      It's not just the downstream stuff, it's the upstream stuff as well. I saw a great talk by Jenny Wen, who's the design leader at Anthropic, where she said we have all of these design processes that are based around the idea that you need to get the design right - because if you hand it off to the engineers and they spend three months building the wrong thing, that's catastrophic.

      There's this whole very extensive design process that you put in place because that design results in expensive work. But if it doesn't take three months to build, maybe the design process can be a whole lot riskier because cost, if you get something wrong, has been reduced so much.

      Why I'm still not afraid for my career

      When I look at my conversations with the agents, it's very clear to me that this is moon language for the vast majority of human beings.

      There are a whole bunch of reasons I'm not scared that my career as a software engineer is over now that computers can write their own code, partly because these things are amplifiers of existing experience. If you know what you're doing, you can run so much faster with them. [...]

      I'm constantly reminded as I work with these tools how hard the thing that we do is. Producing software is a ferociously difficult thing to do. And you could give me all of the AI tools in the world and what we're trying to achieve here is still really difficult. [...]

      Matthew Yglesias, who's a political commentator, yesterday tweeted, "Five months in, I think I've decided that I don't want to vibecode — I want professionally managed software companies to use AI coding assistance to make more/better/cheaper software products that they sell to me for money." And that feels about right to me. I can plumb my house if I watch enough YouTube videos on plumbing. I would rather hire a plumber.

      On the threat to SaaS providers of companies rolling their own solutions instead:

      I just realized it's the thing I said earlier about how I only want to use your side project if you've used it for a few weeks. The enterprise version of that is I don't want a CRM unless at least two other giant enterprises have successfully used that CRM for six months. [...] You want solutions that are proven to work before you take a risk on them.

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    16. 🔗 r/reverseengineering pyghidra-mcp Meets Ghidra GUI: Drive Project-Wide RE with Local AI rss
    17. 🔗 r/york York station gateway what do you think? rss

      submitted by /u/Coffee000Oopss
      [link] [comments]

    18. 🔗 r/Leeds I bought a job lot of antique postcards from Leeds off eBay rss

      When I saw 50 antique postcards of Leeds on eBay for ÂŁ20 it was a no-brainer of a buy!

      Most date to the first decade of the 20th century and they include lovely, stylised images of streets that look so familiar but also very different. Some also have messages on the back, frankly irresistible to a nosy person such as myself.

      I've posted a gallery of some of the best ones on my Leeds history newsletter, Bury the Leeds, which is free to read and to subscribe to.

      https://burytheleeds.substack.com/p/looking-back-at-leeds-through-antique

      My favourite is the image of Headingley from 1909 which includes the beast of a stump of the Shire Oak, an ancient tree that was said to have stood on Otley Road for 1,000 years. By the 20th century, only a hulking stump remained before that was destroyed during a storm in 1941. The Original Oak pub is named after it and so is the Skyrack, which is an old timey derivation of 'Shire Oak'.

      I also love the one of the fashionable ladies promenading down Woodhouse Moor in 1904 and the very evocative shots of Briggate and Boar Lane, when trams ruled. You can really imagine how these busy streets must have sounded back then.

      I'm giving the postcards away with a book I've made featuring some of my most interesting and unusual stories about the city. I know several r/Leeds redditors have ordered copies. I'm celebrating one year of this project now so thanks for the support and to the mods!

      submitted by /u/bluetrainlinesss
      [link] [comments]

    19. 🔗 r/LocalLLaMA Bad news: Apple drops high-memory Mac Studio configs rss

      Looks like Apple has quietly killed off the higher-memory Mac Studio options. The M3 Ultra Mac Studio is now only available with 96GB RAM. The 512GB option was already removed back in March, and now the 256GB config is gone too. Apple has said both the Mac Studio and Mac mini will stay supply-constrained for the next few months. The Mac mini is also stuck at 48GB RAM max for now. Probably their high-memory chip stock got too expensive to keep producing.

      This is a real bummer for us! Big unified memory configs were one of the few (relatively) affordable ways to run large models locally. I am glad I own the M3 Ultra 512, and will definitely keep it (my favorite local model is Qwen 397b atm).

      submitted by /u/jzn21
      [link] [comments]

    20. 🔗 r/Yorkshire Please get out there and vote May 7th (tomorrow). rss

      The North is often neglected by the government, so the best chance that YOU have to get the work done in your area is by voting in the local election tomorrow.

      If you don’t know who to vote for, do your research and see who aligns most with your community. Vote based on who you believe will help your local area the most.

      This isn’t a political soapbox post, I won’t tell you who to vote for. Just please, use your voice. There are a lot of cunts who just wanna use your seat and sit on it, and nothing will ever change. This is an important election with a lot of new voices who could genuinely help your local ward. I wish the best for your local area in the next 4 years and that’s why i’m making this post!

      We don’t get a lot of chances to enact change, so it’s best to use it when we can.

      submitted by /u/coolfunkDJ
      [link] [comments]

    21. 🔗 Anton Zhiyanov Solod v0.1: Go ergonomics, practical stdlib, native C interop rss

      Solod (So) is a system-level language with Go syntax and zero runtime. It's designed for two main audiences:

      • Go developers who want low-level control and zero-cost C interop, without having to learn a new language or standard library.
      • C developers who like Go's style.

      The initial version (let's call it v0) was focused on picking a subset of Go and translating it to C. The next logical step was to port Go's standard library and make it easier to interop with C. That's what the v0.1 release I'm presenting today is all about.

      Standard library ‱ SQLite bindings ‱ Persistent map ‱ Store and retrieve ‱ Command-line interface ‱ Performance ‱ Wrapping up

      Standard library

      Solod v0.1 ships with the following stdlib packages ported from Go:

      • io, bufio, and fmt — Abstractions and types for general-purpose I/O.
      • bytes, strings, strconv, and unicode/utf8 — Common byte and text operations.
      • slices and maps — Generic heap-allocated data structures.
      • crypto/rand and math/rand — Generating random data.
      • flag, os, and path — Working with the command line and files.
      • log/slog — Structured logging.
      • time — Measuring and displaying time.

      And a couple of its own packages:

      • mem — Memory allocation with a pluggable allocator interface.
      • c — Low-level C interop helpers.

      Stdlib documentation

      In the following sections, I'll demonstrate some of the v0.1 features using a simple example: a persistent key-value store backed by SQLite.

      SQLite bindings

      Since So doesn't provide database/sql yet, we'll call SQLite directly through its C API. To do this, let's import the necessary headers with the so:include directive and generate extern declarations using the sobind tool:

      package main
      
      import "solod.dev/so/c"
      
      //so:include <sqlite3.h>
      
      // SQLite constants.
      //
      //so:extern SQLITE_OK
      const sqliteOK = 0
      //so:extern SQLITE_ROW
      const sqliteRow = 100
      //so:extern SQLITE_DONE
      const sqliteDone = 101
      
      // SQLite types.
      //
      //so:extern
      type sqlite3 struct{}
      //so:extern
      type sqlite3_stmt struct{}
      //so:extern
      type sqlite3_value struct{}
      //so:extern
      type sqlite3_callback func(any, int32, **c.Char, **c.Char) int32
      
      // SQLite functions.
      func sqlite3_open(filename string, ppDb **sqlite3) int32
      func sqlite3_prepare_v2(db *sqlite3, zSql string, nByte int32, ppStmt **sqlite3_stmt, pzTail **c.ConstChar) int32
      func sqlite3_step(arg0 *sqlite3_stmt) int32
      func sqlite3_finalize(pStmt *sqlite3_stmt) int32
      func sqlite3_close(arg0 *sqlite3) int32
      func sqlite3_exec(arg0 *sqlite3, sql string, callback sqlite3_callback, arg3 any, errmsg **c.Char) int32
      
      // more declarations...
      

      The so:extern directive is required for constants (sqliteOK) and types (sqlite3_stmt). As for functions (sqlite3_prepare_v2), we can just declare them without a body — the transpiler will treat them as extern declarations even without so:extern.

      Persistent map

      With the SQLite API in place, let's implement a key-value type that wraps the database connection:

      // SQLMap is a simple key-value store backed by an SQLite database.
      type SQLMap struct {
          db *sqlite3
      }
      

      Add a constructor that connects to an SQLite database and creates a table to store the items:

      var ErrCreate = errors.New("sqlmap: create schema failed")
      const sqlCreate = "create table if not exists kv (key text primary key, val)"
      
      // NewSQLMap creates a new SQLMap using the provided connection string.
      // It opens a connection to the SQLite database and creates the underlying
      // key-value table if it does not already exist.
      //
      // The caller is responsible for calling Close on the returned SQLMap
      // when it is no longer needed.
      func NewSQLMap(connStr string) (SQLMap, error) {
          var db *sqlite3
          rc := sqlite3_open(connStr, &db)
          if rc != sqliteOK {
              return SQLMap{}, ErrCreate
          }
      
          rc = sqlite3_exec(db, sqlCreate, nil, nil, nil)
          if rc != sqliteOK {
              sqlite3_close(db)
              return SQLMap{}, ErrCreate
          }
          return SQLMap{db}, nil
      }
      
      // Close releases resources associated with the SQLMap.
      func (m *SQLMap) Close() {
          sqlite3_close(m.db)
      }
      

      As you can see, this So code looks a lot like regular Go code. However, there are some key differences:

      • When compiled, the code is first translated to plain C, then compiled into a native binary using GCC or Clang.
      • Unlike Go, there is no runtime (no automatic heap memory allocation, no garbage collection, no goroutine scheduler).
      • There is no overhead when calling C functions, unlike Go's Cgo.
      • The interop syntax is a bit cleaner. For example, Go's string (sqlCreate in the sqlite3_exec call) automatically decays to C's const char*.

      Store and retrieve

      First, let's implement the Set method:

      var (
          ErrPrepare = errors.New("sqlmap: prepare failed")
          ErrExec    = errors.New("sqlmap: exec failed")
      )
      
      const sqlSet = "insert or replace into kv (key, val) values (?, ?)"
      
      // Set stores a string value for the specified key.
      func (m *SQLMap) Set(key string, val string) error {
          var stmt *sqlite3_stmt
          rc := sqlite3_prepare_v2(m.db, sqlSet, -1, &stmt, nil)
          if rc != sqliteOK {
              return ErrPrepare
          }
          defer sqlite3_finalize(stmt)
      
          sqlite3_bind_text(stmt, 1, key, int32(len(key)), nil)
          sqlite3_bind_text(stmt, 2, val, int32(len(val)), nil)
      
          rc = sqlite3_step(stmt)
          if rc != sqliteDone {
              return ErrExec
          }
          return nil
      }
      

      No surprises here, just a bunch of SQLite API calls.

      The Get method is more interesting:

      var ErrNotFound = errors.New("sqlmap: not found")
      const sqlGet = "select val from kv where key = ?"
      
      // Get returns the value associated with the specified key.
      // The caller owns the returned string and must free it with mem.FreeString.
      func (m *SQLMap) Get(a mem.Allocator, key string) (string, error) {
          var stmt *sqlite3_stmt
          rc := sqlite3_prepare_v2(m.db, sqlGet, -1, &stmt, nil)
          if rc != sqliteOK {
              return "", ErrPrepare
          }
          defer sqlite3_finalize(stmt)
      
          sqlite3_bind_text(stmt, 1, key, int32(len(key)), nil)
          rc = sqlite3_step(stmt)
          if rc == sqliteDone {
              return "", ErrNotFound
          }
          if rc != sqliteRow {
              return "", ErrExec
          }
      
          text := sqlite3_column_text(stmt, 0)
          tmp := c.String(text)
          result := strings.Clone(a, tmp)
          return result, nil
      }
      

      The pointer returned by sqlite3_column_text is managed by SQLite. It becomes invalid after calling sqlite3_finalize (which Get does before returning). Because of this, we need to allocate a copy of the returned value, using strings.Clone in this case.
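
      The same borrow-then-copy pattern can be sketched in plain C. The snippet below is an illustrative toy (the `borrow` function is a hypothetical stand-in, not SQLite itself): the callee hands back a pointer into an internal buffer that the next call overwrites, so the caller must duplicate the contents while the pointer is still valid, just as `Get` does with `strings.Clone`.

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Toy stand-in for an API whose return value points into an internal
       * buffer that the next call clobbers -- the same contract as
       * sqlite3_column_text(), whose pointer dies at sqlite3_finalize(). */
      static char internal[32];
      static const char *borrow(const char *s) {
          snprintf(internal, sizeof internal, "%s", s);
          return internal;
      }

      int main(void) {
          const char *p = borrow("Alice");

          /* Copy while the borrowed pointer is still valid
           * (what strings.Clone does in the So example). */
          char *owned = malloc(strlen(p) + 1);
          strcpy(owned, p);

          borrow("Bob");      /* invalidates the contents behind p */

          printf("owned = %s, borrowed now = %s\n", owned, p);
          assert(strcmp(owned, "Alice") == 0);
          free(owned);
          return 0;
      }
      ```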

      So's approach to memory allocation is similar to Zig's — all heap allocations must be done explicitly by providing a specific instance of the mem.Allocator interface.

      The caller, of course, must free the allocated string:

      func main() {
          m, err := NewSQLMap(":memory:")
          if err != nil {
              panic(err)
          }
          defer m.Close()
      
          m.Set("name", "Alice")
          name, err := m.Get(mem.System, "name")
          if err != nil {
              panic(err)
          }
          println("name =", name)
          mem.FreeString(mem.System, name)
      }
      
      
      
      name = Alice
      

      Here, mem.System is a specific allocator that uses libc's malloc and free. Alternatively, we could use mem.Arena or any other implementation of the mem.Allocator interface:

      var buf [1024]byte // stack-allocated
      arena := mem.NewArena(buf[:])
      
      name, _ := m.Get(&arena, "name")
      mem.FreeString(&arena, name) // no-op for arena; can be omitted
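
      For readers coming from Go's garbage-collected world, the arena idea may be unfamiliar. Here is a minimal bump-allocator sketch in plain C, illustrating the pattern behind mem.Arena under stated assumptions (this is not So's actual implementation): allocation just advances an offset into a caller-provided buffer, and per-object frees are no-ops because the whole arena is released at once.

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdio.h>
      #include <string.h>

      /* A toy bump allocator over a caller-provided buffer, mirroring the
       * idea behind mem.NewArena(buf[:]): allocating advances an offset,
       * and individual frees are no-ops -- reset releases everything. */
      typedef struct {
          unsigned char *buf;
          size_t cap, off;
      } Arena;

      static void *arena_alloc(Arena *a, size_t n) {
          if (a->off + n > a->cap) return NULL;   /* out of space */
          void *p = a->buf + a->off;
          a->off += n;
          return p;
      }

      static void arena_reset(Arena *a) { a->off = 0; }

      int main(void) {
          unsigned char buf[1024];                /* stack-allocated backing store */
          Arena a = { buf, sizeof buf, 0 };

          char *name = arena_alloc(&a, 6);
          memcpy(name, "Alice", 6);               /* includes the NUL terminator */
          printf("name = %s\n", name);
          assert(a.off == 6);

          arena_reset(&a);                        /* "frees" everything at once */
          assert(a.off == 0);
          return 0;
      }
      ```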
      

      Command-line interface

      With the SQLMap type in place, let's create a simple CLI using the flag package:

      var (
          opFlag  string
          keyFlag string
          valFlag string
      )
      
      func parseFlags() {
          flag.StringVar(&opFlag, "op", "", "operation: get, set, or del")
          flag.StringVar(&keyFlag, "key", "", "key name")
          flag.StringVar(&valFlag, "val", "", "value (for set operation)")
          flag.Parse()
      }
      
      func main() {
          parseFlags()
          // ...
      }
      

      Then add command routing:

      m, err := NewSQLMap("sqlmap.db")
      check(err)
      defer m.Close()
      
      switch opFlag {
      case "set":
          err = m.Set(keyFlag, valFlag)
          check(err)
      case "get":
          val, err := m.Get(mem.System, keyFlag)
          check(err)
          println(val)
          mem.FreeString(mem.System, val)
      case "del":
          err = m.Delete(keyFlag)
          check(err)
      default:
          flag.Usage()
          os.Exit(1)
      }
      
      
      
      sqlmap -op=set -key=name -val=alice
      sqlmap -op=get -key=name
      alice
      

      Again, no surprises here — the flag package works just as it does in Go.

      Performance

      Solod isn't trying to outperform hand-tuned C. Still, performance matters: the code is benchmarked and optimized to run reasonably fast. Since So compiles to plain C and then to native code with full optimizations, the results are sometimes better than Go's.

      Here are some highlights from the benchmarks:

      • Buffered I/O is 3x faster than Go.
      • String and byte operations are up to 2.5x faster.
      • Maps are 1.5x faster for modifications.
      • Integer formatting is 2x faster.

      There are no GC pauses and no Cgo bridge cost when calling C libraries. The tradeoff is that you have to handle memory yourself, but as the SQLite example above shows, So's allocator interface makes that pretty manageable.

      Solod vs. Go benchmarks

      Wrapping up

      Solod is still in its early days, but with the v0.1 release, it's ready for hobby projects. The already-ported parts of the Go standard library make it easy to write command-line tools (check out the cat, head, sort, and wc examples). Plus, with native C interop, you can build just about anything else you need.

      The next release (v0.2) will likely focus on networking, concurrency, or both — along with more stdlib packages.

      If you're interested, take a look at So's readme — it has all the information you need to get started. Or try So online without installing anything.

    22. 🔗 r/york York Dungeon investigates 'poltergeist' after tumblers fall from shelves rss
    23. 🔗 sacha chua :: living an awesome life La semaine du 27 avril au 3 mai rss

      Monday, April 27

      I added real-time navigation to my subed.el package. It was already very handy for adding chapters to the transcript of my conversation with John Wiegley and Karthik Chikmagalur. It needs a small modification to convert the notes I took during the conversation.

      I took my daughter to her gymnastics class. There was a substitute teacher. I was delighted to see that the substitute wore a KN-95 mask without being asked.

      I arranged with my mother to install the BDO Pay app on my phone.

      I prepared the pieces to sew my hat, like the hat I had sewn for my daughter.

      Tuesday the 28th

      I took my daughter to Adventure Alley to play with her friends. It was a bit expensive, but my daughter had fun, so it's no problem if we go there from time to time.

      Wednesday the 29th

      The replacement screen arrived at the Apple store, so I'll go there tomorrow.

      I rewrote part of the EmacsNewbie page on the EmacsWiki.

      My daughter sewed my hat.

      In Stardew Valley, we bought a pig and a sheep. We upgraded the coop to a big coop and added a kitchen to our house.

      Thursday the 30th

      I had a delightful conversation with Prot about the Emacs editing experience for beginners.

      My husband, my daughter, and I went cycling with her friend and her friend's father.

      In Stardew, my daughter noticed that I had accidentally bought a cow that I named Goat instead of the goat I had planned to buy for the community center. Oops! She found it hilarious and asked me, when I finally buy a goat, to name it Cow. The animals will be very confused, and so will I. I did it anyway.

      Friday, May 1

      School had a substitute teacher and she didn't want to attend, so I notified the school of her absence and we compromised between her homework and games.

      We went to the Stockyards to buy fabric for her swimsuit. She found the two colors she wanted, but there was only one yard left of one color. We'll have to plan carefully. We bought thread at Michaels. She also bought a box of mochi puffs at Marry Me Mochi.

      She sewed seams on my hat.

      Saturday the 2nd

      For breakfast, my daughter made a big omelette using six eggs. We feasted.

      My daughter was grumpy because I drew attention to her fidgeting and she felt I was on her back about it.

      The Apple store couldn't repair my tablet's screen, so they replaced it with a new tablet for a small fee. The Apple Pencil was, it turned out, covered by my AppleCare+ warranty, but unfortunately it was out of stock everywhere in town, so I had to wait about a week.

      Once home, I found my daughter had calmed down. She and I played with Duplo, which is also a LEGO product, but bigger than normal. I used the bricks to show my daughter math concepts like permutations and combinations.

      Sunday the 3rd

      My husband and I biked downtown with my daughter in my cargo bike. My daughter and I tried the mochi at Kibo (it was delicious) before continuing on to MEC to look for a new water bottle to replace the one I lost. She didn't see anything she liked. We also bought a wooden mannequin to make sewing prototypes easier, and some watercolor pencils to explore.

      Once home, my husband baked a sourdough loaf to give to our daughter's friend's father, following their conversation on Friday. My daughter and I worked on the plan for her swimsuit. She wanted a dress with a wrap bodice and a tulip-hem skirt. For the back, she wanted crossed straps with a small drop back.

      I was tired, so I took a nap. My daughter came to wake me. I noticed my eyes were very dry, so she negotiated bringing me eye drops and administering them for 25 cents.

      You can e-mail me at sacha@sachachua.com.

    24. 🔗 tomasz-tomczyk/crit v0.10.5 release

      What's Changed

      A maintenance release with broad fixes across the GitHub PR roundtrip, the comment-sync push/pull pipeline, and the local review UI — plus accessibility polish on the sidebar resize handles and a distinct "Approved" state on the review-finish modal.

      General

      Fixes

      Documentation

      Internal refactors

      Full Changelog : v0.10.4...v0.10.5

    25. 🔗 r/york First-Time DM looking for DnD players in York! rss

      Hey everyone! I've been wanting to DM something for a while now and I've been planning a campaign that I'm pretty excited about.

      I've got one player on board so far, so I just need three more players to be able to start playing! The two of us are 26/27, so ideally we're looking for people around the same age.

      If you're interested, just let me know and I'll DM you with more details 😄

      submitted by /u/WeirdoWolfBoy
      [link] [comments]

    26. 🔗 r/LocalLLaMA 2.5x faster inference with Qwen 3.6 27B using MTP - Finally a viable option for local agentic coding - 262k context on 48GB - Fixed chat template - Drop-in OpenAI and Anthropic API endpoints rss

      2026-05-07 edit: I have updated the hardware-based recommendations with more focus on quality. I no longer recommend q4_0 KV cache beyond 64k context. After multiple rounds of testing with the different-size quants, it appears 3 is the optimal number of draft tokens for speculative decoding. The fastest and best-quality quant is q8_0-mtp. F16, which I have also uploaded, is actually better but ultra slow (6x slower than q8_0). Many keep saying 8-bit is virtually lossless compared to 16-bit, and 6-bit almost as good as 8-bit, but this is simply not true: time and time again I have noticed huge differences in quality and correctness between 8-bit and 16-bit versions of various models.

      The recent PR to llama.cpp brings MTP support to Qwen 3.6 27B. This uses the built-in tensor layers for speculative decoding. None of the existing GGUFs have it, as they need to be reconverted with this PR.
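      For intuition, MTP is a draft-and-verify scheme where the draft tokens come from the model's own extra prediction heads rather than a separate draft model. Here is a toy sketch of the accept loop; `draft` and `verify` are hypothetical stand-ins, not llama.cpp's API:

```python
# Toy draft-then-verify loop: the draft proposes up to n tokens and the
# main model keeps the longest agreed prefix. One main-model pass can
# thus yield several tokens, which is where the speedup comes from.
def speculative_step(draft, verify, n_draft=3):
    accepted = []
    for tok in draft(n_draft):
        if not verify(accepted, tok):   # main model disagrees: stop here
            break
        accepted.append(tok)
    return accepted
```

      With `--spec-draft-n-max 3`, up to three draft tokens are proposed per step, which the post found to be the sweet spot.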

      I have tested it locally on my Mac (M2 Max, 96 GB), and the results are amazing: a 2.5x speed increase, bringing it to 28 tok/s!

      I have converted the most useful quants and uploaded them to HF. Even if you are using Apple silicon, you should use these instead of MLX. You can download them here:

      https://huggingface.co/froggeric/Qwen3.6-27B-MTP-GGUF

      This also includes 7 fixes I made to the original Jinja chat template, which relied on vLLM-specific behaviour that broke in other tools:

      https://huggingface.co/froggeric/Qwen-Fixed-Chat-Templates

      For now, you will need to compile your own version of llama.cpp to use them. It is fairly simple to do:

```bash
git clone --depth 1 https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
git fetch origin pull/22673/head:mtp-pr && git checkout mtp-pr

cmake -B build -DGGML_METAL=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build --target llama-cli llama-server
```

      Then to start serving with the API endpoint, use a command similar to:

```bash
llama-server -m Qwen3.6-27B-Q5_K_M-mtp.gguf \
  --spec-type mtp --spec-draft-n-max 3 \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  -np 1 -c 262144 --temp 0.7 --top-k 20 -ngl 99 --port 8081
```

      Vision currently crashes llama.cpp when used alongside MTP. Reported 2026-05-06 in the current PR.

      That's it. Three optimizations in one command:

      Flag | What it does | Impact
      ---|---|---
      --spec-type mtp --spec-draft-n-max 3 | Multi-Token Prediction (built into the model) | 2.5x faster generation
      --cache-type-k q8_0 --cache-type-v q8_0 | 8-bit KV cache (instead of 16-bit) | Half the KV memory, negligible quality loss
      -c 262144 | 262K context window | Full native context on 48 GB Mac with q8_0 KV

      Adjust -m, -c, and --cache-type-k/v for your hardware, according to the tables below.

      Here are my recommendations based on your hardware:

      Apple Silicon

      Qwen3.6-27B is a hybrid model — only 16 of 65 layers use KV cache (verified). The other 49 are linear attention (fixed 898 MiB recurrent state). KV memory is ~4× less than a standard dense model. Runtimes that don't handle this (e.g. vLLM) allocate KV for all 65 layers and show much higher memory usage.
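      The ~4× saving can be sanity-checked with a back-of-the-envelope KV-cache formula. The 16-of-65 layer split is from the post; the head count and head dimension below are hypothetical placeholders, not Qwen3.6's real config:

```python
# Rough KV-cache size: K and V each store ctx_len x n_kv_heads x head_dim
# elements per attention layer. n_kv_heads and head_dim are HYPOTHETICAL
# placeholders here, chosen only to illustrate the layer-count ratio.
def kv_cache_bytes(ctx_len, n_kv_layers, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    return 2 * n_kv_layers * ctx_len * n_kv_heads * head_dim * bytes_per_elem

hybrid = kv_cache_bytes(262_144, n_kv_layers=16)   # only the 16 attention layers
dense = kv_cache_bytes(262_144, n_kv_layers=65)    # if every layer kept KV
print(dense / hybrid)  # -> 4.0625, i.e. the ~4x figure
```

      Whatever the real head geometry is, the ratio between hybrid and fully dense depends only on the layer counts, 65/16 ≈ 4.06.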

      Numbers below are total memory used (model + KV cache + 0.9 GB recurrent state). Must leave ≄ 8 GB for macOS (16 GB Macs excepted).

      RAM | Quant | KV cache | Max context | Total used | Vision
      ---|---|---|---|---|---
      16 GB | IQ2_M | q8_0 | 42K | 12.0 GB | ✗
      24 GB | IQ3_M | | 46K | 16.0 GB | ✗
      24 GB | IQ3_M | q8_0 | 91K | 16.0 GB | ✗
      32 GB | Q5_K_M | | 74K | 24.0 GB | ✗
      32 GB | Q5_K_M | q8_0 | 147K | 24.0 GB | ✗
      32 GB | Q4_K_M | | 99K | 24.0 GB | ✓
      48 GB | Q6_K | | 262K | 39.7 GB | ✓
      48 GB | Q8_0 | | 173K | 40.0 GB | ✓
      48 GB | Q8_0 | q8_0 | 262K | 37.3 GB | ✓
      64 GB | Q8_0 | | 262K | 45.8 GB | ✓
      96 GB | Q8_0 | | 262K | 45.8 GB | ✓

      NVIDIA GPU

      Same model memory as Apple Silicon, plus ~1 GB CUDA overhead.

      VRAM | Quant | KV cache | Max context | Total VRAM used | Vision
      ---|---|---|---|---|---
      12 GB | IQ2_M | q8_0 | 11K | 12.0 GB | ✗
      16 GB | IQ3_M | | 30K | 16.0 GB | ✗
      16 GB | IQ3_M | q8_0 | 60K | 16.0 GB | ✗
      24 GB | Q4_K_M | | 83K | 24.0 GB | ✓
      24 GB | Q4_K_M | q8_0 | 167K | 24.0 GB | ✓
      24 GB | Q5_K_M | | 58K | 24.0 GB | ✗
      48 GB | Q6_K | | 262K | 40.7 GB | ✓
      48 GB | Q8_0 | | 262K | 46.8 GB | ✓
      80 GB | Q8_0 | | 262K | 46.8 GB | ✓

      16 GB Mac: IQ2_M/q8_0 — 42K text-only. No vision.

      24 GB Mac: IQ3_M — 46K (f16 KV) or 91K (q8_0). Vision at 32–65K.

      32 GB Mac: Q5_K_M — 74K text-only (f16 KV), 147K (q8_0). Q4_K_M for vision at 99K.

      48 GB Mac: Q6_K/f16 KV — 262K with vision. Q8_0/q8_0 KV for 262K at higher model quality.

      64 GB+ Mac: Q8_0/f16 KV — 262K with vision. Maximum quality at practical speed.

      12 GB GPU: IQ2_M/q8_0 — 11K. Very limited, no vision.

      16 GB GPU: IQ3_M — 30K (f16 KV) or 60K (q8_0). No vision.

      24 GB GPU: Q4_K_M — 83K with vision (f16 KV). Q5_K_M — 58K text-only (f16 KV), 116K (q8_0).

      48 GB+ GPU: Q6_K/f16 KV — 262K with vision. Q8_0 for max quality.

      Leave KV cache at f16 (blank column) for best quality. Use q8_0 KV only when f16 doesn't give enough context. q4_0 KV should not exceed 64K context.

      Vision adds ~0.9 GB for mmproj. macOS needs ≄ 8 GB for itself (16 GB Macs excepted — use ~4 GB). You can increase available memory by raising the wired memory limit, e.g. for a 96 GB Mac: sudo sysctl iogpu.wired_limit_mb=90112 (88 GB). NVIDIA reserves ~1 GB for CUDA.

      submitted by /u/ex-arman68
      [link] [comments]

    27. 🔗 r/wiesbaden Fine Line Tattoo Artist rss

      Hey,

      Does anyone know a good tattoo studio or a good tattoo artist for abstract fine-line tattoos in Wiesbaden or the surrounding area? Otherwise anywhere else too :)

      submitted by /u/heyheyheyoooooo
      [link] [comments]

    28. 🔗 r/Leeds When is Uniqlo going to open? rss

      I was so excited when this opening was announced last year. At Christmas it said "opening soon", then it changed to fall/winter 2026.

      It's a long time to fit out a shop.

      submitted by /u/used2bfat69
      [link] [comments]

    29. 🔗 r/LocalLLaMA Quality comparison between Qwen 3.6 27B quantizations (BF16, Q8_0, Q6_K, Q5_K_XL, Q4_K_XL, IQ4_XS, IQ3_XXS,...) rss

      The following is a non-comprehensive test I came up with to measure the quality difference (a.k.a. degradation) between different quantizations of Qwen 3.6 27B. I want to figure out the best quant to run on my 16 GB VRAM setup.

      WHAT WE ARE TESTING

      First, the prompt:

      Given this PGN string of a chess game: 1. b3 e5 2. Nf3 h5 3. d4 exd4 4. Nxd4 Nf6 5. f4 Ke7 6. Qd3 d5 7. h4 * Figure out the current state of the chessboard, create an image in SVG code, also highlight the last move.
      

      I want to see if the models can:

      ‱ Track the state of the board after each move, reaching the final state (first half of move 7)
      ‱ Generate the right SVG image of the board, correctly placing the pieces and highlighting the last move
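
      As a quick sanity check, the target position can be reproduced by replaying the PGN in plain Python, with each SAN move resolved to explicit from/to squares by hand (no chess library needed):

```python
# Replay 1.b3 e5 2.Nf3 h5 3.d4 exd4 4.Nxd4 Nf6 5.f4 Ke7 6.Qd3 d5 7.h4
# on a square->piece dict. Uppercase = white, lowercase = black.
files = "abcdefgh"
board = {}
for i, f in enumerate(files):
    board[f + "2"] = "P"               # white pawns
    board[f + "7"] = "p"               # black pawns
    board[f + "1"] = "RNBQKBNR"[i]     # white back rank
    board[f + "8"] = "rnbqkbnr"[i]     # black back rank

moves = [  # (from, to), resolved by hand from the SAN above
    ("b2", "b3"), ("e7", "e5"),
    ("g1", "f3"), ("h7", "h5"),
    ("d2", "d4"), ("e5", "d4"),   # exd4 captures the d4 pawn
    ("f3", "d4"), ("g8", "f6"),   # Nxd4 recaptures
    ("f2", "f4"), ("e8", "e7"),
    ("d1", "d3"), ("d7", "d5"),
    ("h2", "h4"),                 # the last move, to be highlighted
]
for src, dst in moves:
    board[dst] = board.pop(src)   # captures overwrite the target square

print(board["d4"], board["e7"], board["h4"])  # -> N k P
```

      A model that tracks the game correctly should place the white knight on d4, the black king on e7, and highlight h2–h4.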

      And yes, in case you are wondering: it is possible the model was trained on existing chess games, so I came up with some random moves, the kind that no player above 300 Elo would ever play. For those who are not chess players, this is how the board is supposed to look after 7. h4. By the way, judge the piece positions and the board orientation, not the image quality, because this is just a screenshot from Lichess. https://preview.redd.it/6lsfvzy8wfzg1.png?width=1586&format=png&auto=webp&s=94634b461528a6ecc6728eefd23072ab28c3769d

      CAN OTHER MODELS SOLVE IT?

      Before we get to the main part, let me show the results from some other models. I find it interesting that not many models were able to figure out the board state, let alone render it correctly.

      Qwen 3.5 27B: It mostly figured out the final position of the pieces, but still rendered the original board state on top. It highlighted the wrong squares, and the board orientation is wrong. https://preview.redd.it/oanbebp9xfzg1.png?width=1078&format=png&auto=webp&s=b72af75a10f4a9f4d897699b404580370bd29d9e

      Gemma 4 31B: Nice chess.com flagship board style. I would say it figured out the board state, but failed to render it correctly; the square pattern is also messed up. https://preview.redd.it/w5jwi05nxfzg1.png?width=1640&format=png&auto=webp&s=33e6f21f56c4e98df92c828103ac10714e578973

      Qwen3 Coder Next: I don't know what to say, quite disappointed. https://preview.redd.it/knltp8h1yfzg1.png?width=1348&format=png&auto=webp&s=1e9207cd1dfd08b049eaa13727703be732d2cb96

      Qwen3.6 35B A3B: As expected, the 35B is always the fastest Qwen model, but at the same time it managed to fail the task in many different ways. This is why I decided to find a way to squeeze the 27B into my 16 GB card; the speed alone is just not worth it. https://preview.redd.it/orti5kdhyfzg1.png?width=3360&format=png&auto=webp&s=c29a3aae9683e5ceaa15c59ae32adecabdd1b6b6

      HOW DOES QWEN3.6 27B SOLVE IT?

      All the models here are tested with the same set of llama.cpp parameters:

      • temp 0.6
      • top-p 0.95
      • top-k 20
      • min-p 0.0
      • presence_penalty 1.0
      • context window 65536

      The BF16 version was from OpenRouter, the Q8 to Q4_K_XL versions ran on an L40S server, and the rest on my RTX 5060 Ti. The SVG code was generated directly in the llama.cpp Web UI without any tools or MCP enabled (I originally ran this test in the Pi agent, only to find out that the model peeked into the parent folders, found the existing SVG diagrams from higher quants, and copied most of them).

      BF16 - Full precision: This is the baseline of the test. It has everything I need: right positions, right board orientation, right piece colors, right highlight. The dotted blue line was unexpected, but also interesting, because as you will see, not many of the higher quants generate it. https://preview.redd.it/lgizkjklzfzg1.png?width=1424&format=png&auto=webp&s=d7867b55735d3d875e0e36aecbaf3c3f0d1dbd58

      Q8_0: As expected, Q8 retains pretty much everything from full precision except the line. https://preview.redd.it/6wjnq6ff0gzg1.png?width=1610&format=png&auto=webp&s=f0d20ff4717b972efffced49ac8d43075fa97eb5

      Q6_K: We start to see some quality loss here, namely the placement of the rank-5 pawns. The look of the pieces is mostly because Q6 decided to use a different font; none of the models in this test tried to draw their own pieces. https://preview.redd.it/kcqj81vl0gzg1.png?width=1608&format=png&auto=webp&s=66c7a219e79a8f6ecf44e27489f337b4016185b5

      Q5_K_XL: Looks very similar to Q8, but it is worth noting that the SVG code of the Q5 version is 7.1 KB, while Q8's is 4.7 KB. https://preview.redd.it/6wshu7g01gzg1.png?width=1506&format=png&auto=webp&s=289db354fea59c456d8bd2dc7abdbcc1e4282ffd

      Q4_K_XL and IQ4_XS: If you ignore the font choice, Q4_K_XL is the more complete solution, because it has the board coordinates. https://preview.redd.it/pzdghdtm1gzg1.png?width=3326&format=png&auto=webp&s=10c3d7758459f223d195107353f1ec76565cd31d

      Q3_K_XL and Q3_K_M: https://preview.redd.it/56gttur62gzg1.png?width=3330&format=png&auto=webp&s=4af27d8a652e2deef6c14485d0fff4bd3651097f

      IQ3_XXS: Now here's the interesting part: everything was mostly correct, the piece placements and the highlight, and there's the line on the last move! But IQ3_XXS got the board orientation wrong; see the light square on the bottom left? https://preview.redd.it/7jnzxy324gzg1.png?width=1608&format=png&auto=webp&s=178f72f51e65866497f16e861b04c0c448fce774

      Q2_K_XL: This is just a waste of time. But hey, it got all the piece positions right; the board is just not aligned at all. https://preview.redd.it/3z63d7bv4gzg1.png?width=1604&format=png&auto=webp&s=f6723b28248327c55bede4e42a4a0cfbe962fb74

      SO, WHAT DO I USE?

      I know a single test is not enough to draw any conclusions. But personally, I will never go below IQ4_XS after this test (I had bad experiences with Q3_K_XL and below in other tries). On my RTX 5060 Ti, I got about pp 100 tps and tg 8 tps for IQ4_XS with vanilla llama.cpp (q8 for both ctk and ctv, fit on). But with TheTom's TurboQuant fork, I managed to get up to pp 760 tps and tg 22 tps by forcing GPU offload for all layers (-ngl 99), quite usable.

      llama-cpp-turboquant/build/bin/llama-server -fa 1 -c 75000 -np 1 --no-mmap --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.0 --presence_penalty 1.0 -ctk turbo4 -ctv turbo2 -ub 128 -b 256 -m Qwen3.6-27B-IQ4_XS.gguf -ngl 99
      

      The only downside is that I have to keep the context window below 75k and use turbo4/turbo2 for the KV cache quants. Below are some examples of different KV cache quants. https://preview.redd.it/y0y7o6h09gzg1.png?width=3320&format=png&auto=webp&s=bd7c855100ff63c9bb666a4f4a61b966ad6eebca https://preview.redd.it/dyrru7z19gzg1.png?width=3314&format=png&auto=webp&s=d54238d7a31c6cd8858f84df67ff588dc22d726b You can see all the results directly here: https://qwen3-6-27b-benchmark.vercel.app/

      submitted by /u/bobaburger
      [link] [comments]

    30. 🔗 r/reverseengineering ant4g0nist/pyre: Ghidra decompiler in your browser rss
    31. 🔗 HexRaysSA/plugin-repository commits sync repo: ~1 changed rss
      sync repo: ~1 changed
      
      ## Changes
      - [HashDB](https://github.com/oalabs/hashdb-ida):
        - 1.10.0: archive contents changed, download URL changed
      
    32. 🔗 Ampcode News Amp, Rebuilt rss

      Today we're starting to roll out the new Amp.

      Not all of it, not yet. But the first piece: a rebuilt Amp CLI. Codename: Neo.

      In The Coding Agent is Dead we wrote about where this is going: agents with longer leashes, less handholding, and many more places to run. Not just one agent in one terminal. Agents prompted from anywhere, running everywhere.

      That's the new Amp we're building.

      But the terminal still matters and will matter. There will be moments where you want the agent right next to you.

      So we rebuilt the CLI first. It is still Amp in your terminal. But it's running on a completely new architecture: remote-controllable, compaction-first, plugin-powered, and much faster. Built for what's coming.

      Let's walk through it.

      Remote Control

      When you start a thread in the new Amp CLI, you can now remote control it from ampcode.com.

      You'll not only get live updates but you can also send messages, queue and dequeue them, or cancel what the agent is currently doing:

      The architecture that enables this is the reason we rewrote Amp. And remote control is just the start.

      No More Manual Context Management

      A core principle behind the rebuild: build for what the frontier models can do now, in 2026, and what they will be able to do in the future. Do not build for what once was.

      Today's leading frontier models are great at handling compaction.

      So Amp now manages context for you.

      You don't have to watch context percentages anymore, decide when to hand off, or extract information from a thread in a panic.

      When the context window fills up, Amp now compacts the thread: it summarizes the current context, starts a fresh window with that summary, and keeps going.

      Compaction now runs automatically when the context window is 90% full.

      It was also the first thing we added to the new architecture. During one migration, we had to shut it off for a day and everyone complained. One beta-user reported: "I love having auto-compaction. NOT missing handoff..."

      So handoff is out. Compaction is in.
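
      The trigger described above amounts to something like this sketch; the message shape and `summarize()` are hypothetical stand-ins for whatever Amp actually does internally, only the 90% threshold is from this post:

```python
# Minimal sketch of auto-compaction at a 90%-full context window.
# summarize() and the message dicts are HYPOTHETICAL stand-ins.
COMPACT_AT = 0.90

def maybe_compact(messages, token_count, window, summarize):
    """Return the thread unchanged, or a fresh one seeded with a summary."""
    if token_count < COMPACT_AT * window:
        return messages                     # plenty of room: no-op
    summary = summarize(messages)           # condense the whole thread
    return [{"role": "system", "content": summary}]  # fresh window seed
```

      The agent then keeps appending to the fresh window as if nothing happened, which is why no manual handoff step is needed.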

      Plugins

      With this release we're officially releasing the Amp Plugin API.

      Amp plugins can:

      • Handle events — amp.on(...) for tool calls, tool results, and agent lifecycle events
      • Add tools — amp.registerTool(...) for custom tools the agent can call
      • Add commands — amp.registerCommand(...) for command palette actions
      • Show UI elements — ctx.ui.notify(...), ctx.ui.confirm(...), ctx.ui.input(...), and ctx.ui.select(...)
      • Ask AI questions — amp.ai.ask(...) for yes/no classification with confidence and reasoning

      Here, for example, is a plugin that registers a tool called ask_user_choice. The agent can use it to present the user with options:

      // .amp/plugins/ask-user-choice.ts
      
      import type { PluginAPI } from '@ampcode/plugin'
      
      export default function (amp: PluginAPI) {
          amp.registerTool({
              name: 'ask_user_choice',
              description:
                  'Present the user with a multiple choice question when there are several possible approaches and you need them to pick one. Use when you have 2-5 concrete options to choose from.',
              inputSchema: {
                  type: 'object',
                  properties: {
                      question: { type: 'string', description: 'The question to ask the user' },
                      options: {
                          type: 'array',
                          items: { type: 'string' },
                          description: 'The options to choose from (2-5 items)',
                      },
                  },
                  required: ['question', 'options'],
              },
              async execute(input, ctx) {
                  const question = input.question as string
                  const options = input.options as string[]
                  const optionsList = options.map((opt, i) => `${i + 1}. ${opt}`).join('\n')
      
                  const answer = await ctx.ui.input({
                      title: question,
                      helpText: `${optionsList}\n\nType the number of your choice`,
                      submitButtonText: 'Select',
                  })
      
                  if (!answer) return 'User dismissed the question without choosing.'
      
                  const index = parseInt(answer.trim(), 10) - 1
                  if (index >= 0 && index < options.length) {
                      return `User selected option ${index + 1}: ${options[index]}`
                  }
                  return `User responded with: ${answer}`
              },
          })
      }
      

      That's it: a single file in .amp/plugins and Amp gets a new tool. It looks like this:

      The ask_user_choice tool in action

      The Amp Plugin API documentation has more examples, including a full permissions plugin.

      Queuing & Steering

      Queuing messages is now the default. When you send a message while the agent is busy, it'll get added to the queue instead of stopping and interrupting the agent.

      This, too, we think fits the models of today and tomorrow better. They work for longer and need fewer mid-flight yanks.

      If you want to fast-track a queued message, you can steer.

      Steering lets you send a queued message as soon as possible, not just when the agent becomes idle. The next time a tool result is sent up to the agent, for example.

      Use ↑ to select a queued message, then steer it with ⏎:

      You can also hit Esc Esc to interrupt the agent and send immediately.

      Permissions

      Amp will no longer ask for permission before running tools.

      What was once the --dangerously-allow-all flag is now the default behavior for users who have not configured permissions.

      The old permissions system still exists. It's now a built-in plugin. If your existing Amp settings already opt into permissions — through amp.permissions, amp.dangerouslyAllowAll: false, or amp.guardedFiles.allowlist — Amp loads that plugin and works as before. (When the plugin is active, it applies in both amp and amp --execute.)

      Why change the default?

      A year ago tool calls were simpler to check: inspect the name, inspect the arguments, do string-based matching, allow or deny. Now, frontier models write throwaway scripts to get stuff done. They chain shell commands.

      It's near-impossible to determine statically whether a tool invocation will be destructive or not.

      When a model writes five 20-line Python scripts in parallel to do something, checking whether a tool call contains rm -rf gives you a false sense of security.
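
      The point is easy to demonstrate: a generated script can assemble a destructive command at runtime so the pattern never appears statically. A toy checker and a payload that slips past it (both illustrative, not Amp code):

```python
# A naive string-matching permission check, and a script that defeats it
# by building the dangerous flag at runtime. Toy example, not Amp code.
def check_naive(script: str) -> bool:
    """True means the matcher considers the script 'safe'."""
    return "rm -rf" not in script

evasive = '''
import subprocess
flag = "-" + "rf"                               # assembled at runtime
subprocess.run(["rm", flag, "/tmp/project"])    # same effect, no literal
'''
print(check_naive(evasive))  # -> True: waved through
```

      Catching this reliably would require executing or deeply analyzing the script, which is exactly the "near-impossible" static check described above.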

      On top of that, there are now custom skills and scripts, specifically built for agents. And different organizations have different policies around which model is allowed to call which tool.

      So permissions now live in the Plugin API.

      If you need a policy, build the one that matches your setup. Point Amp at the Amp Plugin API and ask it to help you.

      Performance & Efficiency

      The old Amp CLI got slow with huge threads. Neo doesn't. Here's a comparison, using a thread with around 5000 messages:

      Metric Old New Improvement
      CPU% (mean ± sd) 84.1% ± 1.6% 17.4% ± 8.8% 79% less CPU
      CPU% (peak) 86.3% 25.8% —
      Memory (idle) 1814 MB 540 MB 70% less memory

      Rendering performance has improved, too.

      Before:

      After:

      What's Gone

      We also removed features. Of course we did, otherwise it wouldn't be an Amp release, would it?

      Our goal is to keep you on the frontier. Amp should not make you work like it's still 2025.

      Some features made sense when models needed more babysitting, more manual context management, more careful steering. They don't anymore. When a feature starts tying you to the old way of using agents, it goes.

      Handoff is gone. As described above, compaction made it obsolete. There are some valid use cases for Handoff even when there's enough space left in the context, but we don't think they warrant the complexity introduced by many small, connected threads.

      You can also still reference other threads and Amp will read them and extract the relevant information.

      For example, you can use Ctrl+O and thread: new to create a new thread, then hit Enter to quickly insert a reference to the previous thread. Amp will use that reference along with the rest of your prompt to read the previous thread.

      Amp no longer rolls back file changes when you edit or restore a message. We've found ourselves using this less and less as models advanced. The models are now good enough to undo changes for you, with more finesse than a rollback. And, the truth is, the rollback feature was always best-effort: if the agent wrote and ran code that generated files, we didn't keep track of that without elaborate snapshotting.

      Skill management: Amp still supports Agent Skills but we no longer offer commands or subcommands to add, remove, or update skills. That's better done by separate tools, such as skills.

      User-invokable skills: We also removed support for user-invokable skills. The latest generation of models now invokes skills reliably.

      Themes: Custom themes made it harder to keep the CLI legible, polished, and recognizably Amp. We’d rather ship one good interface than support many broken-looking ones.

      Manual bash invocation: in the old Amp CLI you could invoke bash commands by using $ and $$ in the prompt editor. An interesting idea a year ago, but now that models are ever more capable of running commands on their own without blowing up their context window (and that context window is practically unlimited), it's no longer useful.

      Rollout

      We’re rolling Neo out over the next few days. If you want to skip the line, send us an email. We'll flip the switch for you.

      This is the first piece of the new Amp.

      More soon.