to read (pdf)
- I don't want your PRs anymore
- JitterDropper | OALABS Research
- DomainTools Investigations | DPRK Malware Modularity: Diversity and Functional Specialization
- EXHIB: A Benchmark for Realistic and Diverse Evaluation of Function Similarity in the Wild
- Neobrutalism components - Start making neobrutalism layouts today
- April 30, 2026
-
🔗 Cryptography & Security Newsletter ECH Is Done, But Can We Make It Work? rss
Some technologies are easier to deploy than others. Take TLS, for example. Once enough time passes and we upgrade the servers and clients, we’re done. Encrypted Client Hello (ECH) is not one of those technologies. To get it to be effective, we first need to go through the usual upgrade cycle, iron out the last kinks, and then also get enough of the ecosystem to opt in to achieve safety in numbers.
-
🔗 r/Yorkshire Now and Then rss
submitted by /u/Still_Function_5428
[link] [comments] -
🔗 Textualize/textual The Antsy Release release
This release adds support for dedicated ansi themes, which are also exposed from the command palette.
There were a few tweaks to the theming system which may result in broken snapshots, but there should be no visual changes.
[8.2.5] - 2026-04-30
Added
Changed
`App.ansi_color` may now be `None` to use the `ansi` value from the theme. #6513
-
🔗 r/Leeds Varsity night - A grumble rss
It was varsity night in Headingley last night. We live around the Trelawns, and the whole damn street is littered with glass. Shattered pint glasses, beer bottles.
We've been out this morning sweeping it up. I honestly cannot fathom the lack of basic respect.
submitted by /u/Swivials
[link] [comments] -
🔗 tomasz-tomczyk/crit v0.10.2 release
What's Changed
- feat: send + cache verified author identity on share by @tomasz-tomczyk in #371
- fix: persist verified user_id on auth login by @tomasz-tomczyk in #393
Note: You might need to run `crit auth login` again to link your profile properly in the future.
- feat: distinct "Approved" state for review-finish modal by @tomasz-tomczyk in #381
- feat: improve agent integrations with global install + aider automation by @tomasz-tomczyk in #373
- fix: patch hljs markdown grammar and re-enable for diff view by @tomasz-tomczyk in #388 (Thanks @hbogaeus for reporting!)
- fix: keep SSE alive past idle timeout (Safari "Connection lost") by @tomasz-tomczyk in #376 (thanks Jared for reporting!)
- fix: expand hljs language coverage via alias resolution by @tomasz-tomczyk in #378
- fix: Ctrl+Enter to save when editing replies (#382) by @tomasz-tomczyk in #386 (Thanks @hbogaeus for reporting!)
- fix: align light theme with modern GitHub for visible diff highlights by @tomasz-tomczyk in #387 (Thanks @hbogaeus for reporting!)
- fix: Change comment submit button text to 'Add comment' by @TalAmuyal in #385 - Thank you!
- fix: strip GIT_* env from test process to prevent worktree corruption by @tomasz-tomczyk in #383
- docs: Docker recipe for sandboxed agents by @tomasz-tomczyk in #372 (Thanks Jared for the suggestion!)
- chore: wait for unit + e2e uploads before codecov status by @tomasz-tomczyk
- chore: pre-release audit fixes (Go backend) by @tomasz-tomczyk in #389
- chore: pre-release audit fixes (frontend) by @tomasz-tomczyk in #390
- refactor: return errors from installAider; unify integration list by @tomasz-tomczyk in #394
- chore: wire markdown-patch smoke test into CI by @tomasz-tomczyk in #395
- chore: move mise-trust to pre-start so worktree shell can load mise by @tomasz-tomczyk
New Contributors
- @TalAmuyal made their first contribution in #385
Full Changelog :
v0.10.1...v0.10.2 -
🔗 r/reverseengineering HexDig 1.0.0 a lightweight binwalk alternative working both on Windows and Linux, written in C++, give it a try! rss
submitted by /u/gcarmix1
[link] [comments] -
🔗 r/reverseengineering GitHub - iss4cf0ng/CVE-2026-31431-Linux-Copy-Fail: Rust implementation Exploit/PoC of CVE-2026-31431-Linux-Copy-Fail, allow executing customized shellcode (such as Meterpreter). rss
submitted by /u/AcrobaticMonitor9992
[link] [comments] -
🔗 r/Yorkshire Culloden tower rising above the Swale. Can you spot the Mallard duck? rss
submitted by /u/Still_Function_5428
[link] [comments] -
🔗 r/Yorkshire Yorkshire Water Seeks Views On Multimillion-Pound Scarborough Investment rss
submitted by /u/willfiresoon
[link] [comments] -
🔗 keeweb/keeweb 1.19.0 release
keeweb-1.19.0
-
🔗 keeweb/keeweb v1.18.8 release
What's Changed
- fix: specify puppeteer argument to fix test builds for ci by @Aetherinox in #2143
- Add verify workflow by @HarlemSquirrel in #2142
- fix: add legacy support for npm run dev by @Aetherinox in #2144
- Bump electron to 13 by @HarlemSquirrel in #2052
- fix: addresses not being able to unset a keyfile once added to a vault by @Aetherinox in #2146
- fix: support multiple otpauth url structures by @Aetherinox in #2148
- repo: convert issue templates into forms by @Aetherinox in #2150
- fix: convert space character to non-breaking space on password reveal by @Aetherinox in #2151
- Fixed issue with Csv parser parse (#1904) by @R3dIO in #1944
- Update to new UUID for firefox extension by @HarlemSquirrel in #2174
- Hotfix: Downgrade gdrive scope to drive.file by @vanceism7 in #2208
New Contributors
- @Aetherinox made their first contribution in #2143
- @R3dIO made their first contribution in #1944
- @vanceism7 made their first contribution in #2208
Full Changelog :
v1.18.7...v1.18.8 -
🔗 Evan Schwartz Your Clippy Config Should Be Stricter rss
“If it compiles, it works.” This feeling is one of the main things Rust engineers love most about Rust, and a reason why using it with coding agents is especially nice. After debugging some code that compiled but mysteriously stopped in production, I realized that it’s useful to enable more Clippy lints to catch bugs that the compiler won't prevent by itself. It's especially useful as guardrails for coding agents, but stricter linting can make your code safer, whether or not you’re coding with LLMs.
Motivating Bug: UTF-8-Oblivious String Slicing
Scour is the personalized content feed that I work on. Every Friday, Scour sends an email digest to each user with the top posts that matched their interests. On a recent Friday, the email sending job mysteriously stopped. This was puzzling because I had already put in place multiple type-system-level safeguards and tests to ensure that it would log any errors and keep going.
After digging into the logs, I found the culprit to be `thread 'tokio-runtime-worker' panicked... byte index 200 is not a char boundary`. A function naively truncated article summaries without checking for UTF-8 character boundaries, which caused a panic and stopped the Tokio worker thread running the email sending loop.
The solution for this particular bug was a safer method for truncating article summaries that respects UTF-8 character boundaries. However, this problem was reminiscent enough of the 2025 Cloudflare `unwrap` bug that "broke the internet" that I wanted a more general solution.
Rust's compiler prevents many types of bugs, but there are still production problems it can't catch. Panics will either crash your program or quietly kill Tokio worker threads. Deadlocks and dropped futures can make work silently stop. And plenty of numeric operations can silently cause incorrect behavior.
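Here's a minimal sketch of that kind of boundary-respecting truncation. This is my illustration, not Scour's actual code, and `truncate_at_boundary` is a hypothetical helper:

```rust
/// Truncate `s` to at most `max_bytes`, backing up to the nearest UTF-8
/// character boundary instead of panicking like `&s[..max_bytes]` would.
fn truncate_at_boundary(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    let mut end = max_bytes;
    // Walk back to a boundary; is_char_boundary(0) is always true,
    // so this loop terminates.
    while !s.is_char_boundary(end) {
        end -= 1;
    }
    // `get` returns an Option, so even a logic error above can't panic.
    s.get(..end).unwrap_or(s)
}

fn main() {
    let summary = "café au lait";
    // Byte 4 falls in the middle of the two-byte 'é';
    // naive slicing would panic here.
    assert_eq!(truncate_at_boundary(summary, 4), "caf");
}
```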
We can stave off many of these types of bugs by making Clippy even stricter than it already is.
This is especially relevant in the age of coding agents. A seasoned Rust engineer might naturally avoid patterns that could cause problems. An agent or a junior colleague might not. Stricter Clippy rules make it easier to rely on code you didn't personally write. Also, enabling new lints on an existing codebase is tedious, and exactly the kind of task that is good to hand to a coding agent.
Enabling More Clippy Lints
Clippy ships with hundreds of lints that are disabled by default. Some are disabled because they might have false positives and some are style choices which you might reasonably not want.
Which lints should we enable to help us get back the "if it compiles [and passes Clippy], it works" feeling?
Why Not Enable Lint Categories?
Clippy's lints are grouped into categories: Correctness, Suspicious, Complexity, Perf, Style, Pedantic, Restriction, Cargo, Nursery, and Deprecated.
Unfortunately, none of these categories cleanly map onto "don't let this panic or do the wrong thing in production".
In fact, the Clippy docs say that "The `restriction` category should, emphatically, not be enabled as a whole." Clippy even includes a dedicated lint, `blanket_clippy_restriction_lints`, to discourage you from enabling this category. While the `restriction` category includes many useful lints, it also includes some that directly contradict one another. For example, it contains lints to enforce both `big_endian_bytes` and `little_endian_bytes`.
The docs say "Lints should be considered on a case-by-case basis before enabling". Of course, you can enable whole categories like `pedantic` and `restriction` and then `allow` specific ones you want to disable, but I'm outlining a selective opt-in here.
Lints That Don't Fire Are Still Useful
Even if you don't use a certain pattern in your code base today, it's not bad to enable the lint anyway. Inapplicable lints serve as cheap tripwires in case the given pattern is ever added later, whether by you, a colleague, or a coding agent.
My Lints
Every project is different and you should look through the available lints to see which ones make sense for your project.
Also, check when lints landed in stable if your Minimum Supported Rust Version predates 1.95, as some of these may have been added after your MSRV.
With those caveats out of the way, here are the lints I enabled, roughly categorized by what kind of behavior they prevent. You can skip to the bottom if you just want to copy my config.
Don't Panic
This group prevents panics from unwraps and unsafe slicing or indexing into arrays and strings.
Note that some of these, like `string_slice` and `indexing_slicing`, may produce many warnings throughout your code base. That may be annoying to fix. However, using safe methods like `.get()` and iterators instead of slicing prevents pretty severe footguns, so I would argue that it's worth it (the safer patterns are sketched after the list).
- `string_slice` - `&s[a..b]` on `&str` (UTF-8 boundary panic). This would have caught my initial bug.
- `indexing_slicing` - `arr[i]` / `&arr[a..b]`
- `unwrap_used` - `Option::unwrap` / `Result::unwrap`
- `panic` - `panic!()` calls
- `todo` / `unimplemented` / `unreachable` - placeholder-panic macros
- `get_unwrap` - `vec.get(i).unwrap()`
- `unwrap_in_result` - `.unwrap()` inside functions that return a `Result`
- `unchecked_time_subtraction` - `Instant - Instant` panics if the second is larger
- `panic_in_result_fn` - `panic!` / `assert!` inside a function that returns a `Result`
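For a feel of those safer patterns, a small illustrative example (the values are made up):

```rust
fn main() {
    let s = "héllo";
    let v = vec![1, 2, 3];

    // `&s[..2]` would panic: byte 2 is in the middle of the two-byte 'é'.
    let prefix: Option<&str> = s.get(..2); // None instead of a panic

    // `v[10]` would panic; `get` makes the out-of-bounds case explicit.
    let item: Option<&i32> = v.get(10); // None

    println!("{prefix:?} {item:?}");
}
```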
You might or might not want to enable `expect_used`. Calling `.expect` on an `Option` or `Result` can result in a panic. However, the message you pass to `expect` should already document why that thing shouldn't happen. Enabling the lint and then selectively disabling it throughout your code with `#[expect(expect_used, reason = "...")]` may end up duplicating the same rationale for using it in the first place.
Another lint that is a real judgement call is `arithmetic_side_effects`. This can prevent overflows and division by zero. However, it will cause Clippy to warn you about every place you use math operators: `+`, `-`, `*`, `<<`, `/`, and `%`. I tried enabling it in my code base and would estimate that around 15% of the warnings caught real issues and 85% were just noise. A sketch of the explicit alternatives it pushes you toward follows.
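This is a minimal sketch with illustrative values, not a recommendation for any particular strategy:

```rust
fn main() {
    let a: u8 = 200;
    let b: u8 = 100;

    // `a + b` overflows u8: it panics in debug builds and wraps in release.
    let checked = a.checked_add(b);      // None on overflow
    let saturated = a.saturating_add(b); // clamps to u8::MAX (255)

    println!("{checked:?} {saturated}");
}
```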
Don't Fail Silently
- `let_underscore_future` - `let _ = future` drops without awaiting
- `let_underscore_must_use` - `let _ = result_returning()` swallows errors
- `unused_result_ok` - `result.ok();` silently drops `Err`
- `map_err_ignore` - `.map_err(|_| MyErr)` loses the source error
- `assertions_on_result_states` - `assert!(r.is_ok())` discards the error message
Don't Do Bad Async Stuff
These prevent various concurrency bugs and deadlocks:
- `await_holding_lock` - `MutexGuard` across `.await` (sketched below)
- `await_holding_refcell_ref` - `RefCell::borrow_mut` across `.await`
- `if_let_mutex` (only relevant if you're using an earlier edition than 2024) - `if let _ = mutex.lock() { other_lock() }` deadlock pattern. The scoping was fixed in the 2024 edition, so this is no longer an issue.
- `large_futures` - a `Future` that is too large can cause a stack overflow
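To make the first item concrete, here's a minimal hedged sketch of the pattern `await_holding_lock` flags; the function names are hypothetical:

```rust
use std::sync::Mutex;

async fn send_notification() { /* stand-in for real async work */ }

// BAD: the guard stays alive across the `.await`, so any other task that
// needs the lock is blocked until the async work finishes - a deadlock recipe.
async fn flagged(counter: &Mutex<u64>) {
    if let Ok(mut guard) = counter.lock() {
        *guard += 1;
        send_notification().await; // guard still held across this await
    }
}

// OK: scope the guard so it is dropped before awaiting.
async fn fixed(counter: &Mutex<u64>) {
    if let Ok(mut guard) = counter.lock() {
        *guard += 1;
    } // guard dropped here
    send_notification().await;
}
```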
Don't Do Unsafe Things with Memory
- `mem_forget` - `mem::forget` leaks
- `undocumented_unsafe_blocks` - every `unsafe {}` needs a `// SAFETY:` comment (see the sketch below)
- `multiple_unsafe_ops_per_block` - one unsafe op per block (one comment per op)
- `unnecessary_safety_doc` / `unnecessary_safety_comment` - only document safety where it belongs
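Here's a hedged sketch of the style these two lints enforce, using a hypothetical function:

```rust
fn first_byte(bytes: &[u8]) -> u8 {
    assert!(!bytes.is_empty());
    // SAFETY: the assert above guarantees the slice is non-empty,
    // so index 0 is in bounds.
    unsafe { *bytes.get_unchecked(0) }
}
```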
Don't Do Potentially Incorrect Things with Numbers
- `float_cmp` - `a == b` on floats (example below)
- `float_cmp_const` - stricter, also flags comparisons against constants
- `lossy_float_literal` - silently-rounded float literals (`16_777_217.0_f32`)
- `cast_sign_loss` - `(-1_i8) as u64` wraps to `u64::MAX`
- `invalid_upcast_comparisons` - `(x: i32 as i64) > i32::MAX as i64` always false
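As a quick illustration of why `float_cmp` exists; the tolerance here is a made-up, problem-specific value:

```rust
fn main() {
    let a = 0.1_f64 + 0.2;
    let b = 0.3_f64;

    // Direct equality fails: a is actually 0.30000000000000004.
    // (This comparison is exactly what `float_cmp` would flag.)
    assert!(a != b);

    // Compare within a tolerance instead; pick one suited to your domain.
    const TOLERANCE: f64 = 1e-9;
    assert!((a - b).abs() < TOLERANCE);
}
```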
The lints `cast_possible_wrap`, `cast_precision_loss`, and `cast_possible_truncation` effectively force you to document invariants when doing lossy casts between numeric types. You might or might not find that useful.
Don't Do Bad Things That are Easy to Avoid
- `rc_mutex` - `Rc<Mutex<_>>` (`Rc` is single-threaded)
- `debug_assert_with_mut_call` - `debug_assert!(stack.pop().is_some())` differs in debug vs release
- `iter_not_returning_iterator` - method named `iter` returning non-`Iterator`
- `expl_impl_clone_on_copy` - manual `Clone` impl that disagrees with `Copy`
- `infallible_try_from` - `TryFrom` impl whose error is `Infallible` should be `From`
- `dbg_macro` - `dbg!` calls should be removed after debugging
Don't `allow` Your Way Around These Lints
These two are especially useful if you're using a coding agent. Instead of letting the agent write `#[allow(lint_we_wanted_to_enable)]`, it should provide a reason wherever it's disabling a lint (example after the list).
- `allow_attributes` - every `#[allow]` becomes `#[expect(..., reason = "…")]`
- `allow_attributes_without_reason` - every `#[expect]` requires a reason
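A small sketch of what that looks like in practice; the function and the reason string are hypothetical:

```rust
// Rejected under `allow_attributes`:
// #[allow(clippy::indexing_slicing)]
#[expect(clippy::indexing_slicing, reason = "slice is asserted non-empty before indexing")]
fn first<'a>(parts: &[&'a str]) -> &'a str {
    assert!(!parts.is_empty());
    parts[0]
}
```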
Workaround for Workspace Inheritance
If you're using a Cargo workspace, you'll want to enable these lints in the workspace Cargo.toml. Unfortunately, each workspace crate needs to opt in to inheriting lints with `lints.workspace = true`, rather than inheriting the lints by default. On nightly, there's a `missing_lints_inheritance` lint that specifically checks for this.
If you're using stable Rust, you can use `cargo-workspace-lints` or a simple shell script run on CI to make sure you don't forget to make a workspace crate inherit the lints.
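For reference, the per-crate opt-in looks like this (standard Cargo syntax; the crate path is hypothetical):

```toml
# member-crate/Cargo.toml
[lints]
workspace = true  # inherit [workspace.lints.clippy] from the workspace root
```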
Warn or Deny?
When enabling lints, you can either set Clippy to `warn` or `deny` them. Either works, but I personally prefer setting these to `warn` and running Clippy with `-D warnings` before committing and on CI. This makes local iteration marginally easier because you can compile your code initially without fixing all the lints right away.
Note: if you set Clippy on CI to deny warnings, you should make sure to specify a specific Rust version. Otherwise, lints added in newer versions will cause your build to fail. (Thanks to u/scook0 for pointing this out!)
My Configs
```toml
# Workspace Cargo.toml
[workspace.lints.clippy]

# Don't Panic - prevent panics from unwraps and unsafe slicing or indexing
string_slice = "warn"
indexing_slicing = "warn"
unwrap_used = "warn"
panic = "warn"
todo = "warn"
unimplemented = "warn"
unreachable = "warn"
get_unwrap = "warn"
unwrap_in_result = "warn"
unchecked_time_subtraction = "warn"
panic_in_result_fn = "warn"
# Optional - see post for caveats
# expect_used = "warn"
# arithmetic_side_effects = "warn"

# Don't Fail Silently - prevent dropped futures and swallowed errors
let_underscore_future = "warn"
let_underscore_must_use = "warn"
unused_result_ok = "warn"
map_err_ignore = "warn"
assertions_on_result_states = "warn"

# Don't Do Bad Async Stuff - prevent deadlocks and concurrency bugs
await_holding_lock = "warn"
await_holding_refcell_ref = "warn"
if_let_mutex = "warn"  # only relevant on editions before 2024
large_futures = "warn"

# Don't Do Unsafe Things with Memory
mem_forget = "warn"
undocumented_unsafe_blocks = "warn"
multiple_unsafe_ops_per_block = "warn"
unnecessary_safety_doc = "warn"
unnecessary_safety_comment = "warn"

# Don't Do Potentially Incorrect Things with Numbers
float_cmp = "warn"
float_cmp_const = "warn"
lossy_float_literal = "warn"
cast_sign_loss = "warn"
invalid_upcast_comparisons = "warn"
# Optional - these effectively force you to document numeric invariants
# cast_possible_wrap = "warn"
# cast_precision_loss = "warn"
# cast_possible_truncation = "warn"

# Don't Do Bad Things That are Easy to Avoid
rc_mutex = "warn"
debug_assert_with_mut_call = "warn"
iter_not_returning_iterator = "warn"
expl_impl_clone_on_copy = "warn"
infallible_try_from = "warn"
dbg_macro = "warn"

# Don't `allow` Your Way Around These Lints - every suppression must be
# a deliberate #[expect(..., reason = "…")] rather than a silent #[allow]
allow_attributes = "warn"
allow_attributes_without_reason = "warn"
```
```toml
# Workspace clippy.toml
allow-indexing-slicing-in-tests = true
allow-panic-in-tests = true
allow-unwrap-in-tests = true
allow-expect-in-tests = true
allow-dbg-in-tests = true
```
Conclusion
Ultimately, as Clippy's docs say, "You can choose how much Clippy is supposed to ~~annoy~~ help you." But especially in the age of coding agents, I think it's worth tightening the guardrails so you end up with even fewer mysterious bugs in production and more code where you can say "if it compiles and lints, it should work."
Discuss on r/rust, Lobsters, or Hacker News.
-
🔗 Console.dev newsletter goshs rss
Description: Simple web server.
What we like: Supports multiple protocols as well as HTTP, including SMB, DNS, WebDAV, SMTP. Includes file-based ACLs so you can use it to set up file sharing. SSL handled through Let’s Encrypt or providing your own keys. Can embed static files. Written in Go so can be shipped as a single binary.
What we dislike: The non-HTTP servers are mainly designed for pentesting and CTFs rather than as fully functional server replacements; it even includes a reverse shell generator, an odd digression for a web server. If you want a pure Go web server, you'll probably just use Caddy.
-
🔗 Console.dev newsletter Quarkdown rss
Description: Markdown meets LaTeX.
What we like: Use Markdown to write typeset reports, docs, static websites, slides. Includes live preview with fast compilation so you can avoid LaTeX dependencies. Has enhancements like figures, formulae, code, bibliography. Include data from files and manipulate it with variables and scripting.
What we dislike: Academic writing in LaTeX (or equivalent) is the dream, but most work really just happens in Word or Google Docs, especially if you’re collaborating with multiple authors!
-
🔗 Servo Blog March in Servo: keyboard navigation, better debugging, FreeBSD support, and more! rss
Servo 0.1.0 represents Servo’s biggest month ever, with a record 530 commits and our first ever release on crates.io! For security fixes, see § Security.
With this release Servo becomes more accessible, thanks to tab navigation (@mrobinson, @Loirooriol, #42952, #43019, #43058, #43246, #43267, #43067), keyboard navigation with Alt+Shift and the accesskey attribute (@mrobinson, #43031, #43144, #43434), and keyboard scrolling with Space and Shift+Space (@mrobinson, #43322).
We’ve shipped several new web platform features:
- <input type=range> (@BudiArb, @rayguo17, @mrobinson, #41562)
- <script blocking=render> (@TimvdLippe, #43150)
- <svg width> and <svg height> (@Loirooriol, #43583)
- ‘X-Frame-Options’ (@TimvdLippe, #43539, #43708)
- ‘Content-Security-Policy: frame-ancestors’ (@TimvdLippe, #43630)
- ‘::first-letter’ styling (@minghuaw, @xiaochengh, @Loirooriol, #43027)
- ‘::placeholder’ styling (@stevennovaryo, #43053)
- ‘::file-selector-button’ styling (@lukewarlow, @AlexVasiluta, #43498)
- ‘background-blend-mode’ (@mrobinson, #43666)
- ‘content’ on ‘::marker’ (@niyabits, @Loirooriol, #43515)
- ‘list-style-type: <string>’ (@Loirooriol, #43111)
- ‘attr(namespace|local)’ and ‘clamp(none)’ (@Loirooriol, #43045)
- <system-color> (@longvatrong111, @mrobinson, #42529, #43105, #43107)
- <step-position> values ‘jump-start’, ‘jump-end’, ‘jump-none’, and ‘jump-both’ (@yezhizhen, #43061)
Plus a bunch of new DOM APIs:
- CommandEvent (@lukewarlow, #43190)
- moveBefore() on Node (@lukewarlow, #41238)
- relatedTarget on MouseEvent and PointerEvent (@simonwuelker, #42989)
- command on HTMLButtonElement (@lukewarlow, #43190)
- selectedOptions on HTMLSelectElement (@jakubadamw, #43017)
- url on LargestContentfulPaint (@shubhamg13, #42901, #42949)
- crypto.subtle.digest() for TurboSHAKE (@kkoyung, #43551)
- crypto.subtle.getPublicKey() for ECDH, ECDSA, Ed25519, RSASSA-PKCS1-v1_5, RSA-PSS, RSA-OAEP, and X25519 (@kkoyung, @Taym95, #43073, #43093, #43106, #43115)
servoshell is now installed as `servoshell` or `servoshell.exe`, rather than `servo` or `servo.exe` (@jschwe, @mrobinson, #42958). `--userscripts` has been removed for now, but anyone who uses it is welcome to reinstate it as a wrapper around `UserContentManager::add_script` (@jschwe, #43573). We’ve fixed a bug where link hover status lines are sometimes not legible (@simartin, #43320), and we’re working on getting servoshell signed for macOS to avoid getting blocked by Gatekeeper (@jschwe, #42912).
After a long effort by @valpackett, @dlrobertson, and more recently @nortti0 and @sagudev (#43116, #43134), we can now build Servo for FreeBSD! Note that Servo 0.1.0 still has some issues that need to be worked around, but you can get all the details in #44601.
A great deal of work went into making the crates.io release possible, including renaming `libservo` to just `servo` (@jschwe, #43141), making each package self-contained (@jschwe, #43180, #43165), fixing build issues (@delan, @jschwe, #43170, #43458, #43463) and crates.io compliance issues (@jschwe, #43459), configuring package metadata (@jschwe, @StaySafe020, #43078, #43264, #43451, #43457, #43654), and organising our dependency tree (@jschwe, @yezhizhen, @webbeef, @mrobinson, #42916, #43243, #43263, #43516, #43526, #43552, #43615, #43622, #43273, #43092). As a result, you can now take your first step towards embedding Servo in a Rust app with:
$ cargo add servo
This is another big update, so here’s an outline:
Security
crypto.subtle.deriveBits() for X25519 checking for all-zero secrets, and verify() for HMAC comparing signatures, are now done in constant time (@kkoyung, #43775, #43773). ‘Content-Security-Policy’ now handles redirects correctly (@TimvdLippe, #43438), and sends violation reports with the correct blockedURI and referrer (@TimvdLippe, #43367, #43645, #43483). The policy in <meta> now combines with the policy sent in HTTP headers, rather than overriding it (@TimvdLippe, @elomscansio, #43063). When checking nonces, we now reject elements with duplicate attributes (@dyegoaurelio, #43216). The document containing an <iframe> can no longer access the contents of error pages (@TimvdLippe, #43539), and CSP violations inside an <iframe> are now correctly reported (@TimvdLippe, #43652).
Work in progress
We’ve landed more work towards supporting IndexedDB, under --pref dom_indexeddb_enabled (@arihant2math, @gterzian, @Taym95, @jerensl, #42139, #42727, #43096, #43041, #42451, #43721, #43754, #42786), and towards supporting IntersectionObserver, under --pref dom_intersection_observer_enabled (@stevennovaryo, @mrobinson, #42251).
We’re continuing to implement document.execCommand() for rich text editing (@TimvdLippe, #43177), under --pref dom_exec_command_enabled. ‘beforeinput’ and ‘input’ events are now fired when executing supported and enabled commands (@TimvdLippe, #43087), the ‘defaultParagraphSeparator’ and ‘styleWithCSS’ commands are now supported (@TimvdLippe, #43028), and the ‘delete’ command is partially supported (@TimvdLippe, #43016, #43082).
We’re also working on the Font Loading API (@simonwuelker, #43286), under --pref dom_fontface_enabled. new FontFace() now accepts ArrayBuffer in its source argument (@simonwuelker, #43281). All of the features above are enabled in servoshell’s experimental mode.
Work on accessibility support for web contents continues under --pref accessibility_enabled. There was a breaking change in the embedding API (@delan, @alice, #43029), and we’ve landed support for “grafting” the accessibility tree of a document into that of its containing webview (@delan, @alice, #43012, #43013, #43556). As a result, when you navigate, separate documents can have separate accessibility trees without complicating the embedder.
<link rel=modulepreload> is now partially supported (@Gae24, #42964), though recursive fetching of descendants is gated by --pref dom_allow_preloading_module_descendants (@Gae24, #43353).
For a long time, Servo has had some support for the Web Bluetooth API under --pref dom_bluetooth_enabled. We’ve recently reworked our implementation to adopt btleplug, the cross-platform Rust-native Bluetooth LE library (@webbeef, #43529, #43581).
We’re now implementing the Web Animations API, starting with AnimationTimeline and DocumentTimeline (@mrobinson, #43711).
We’ve landed more fixes to Servo’s async parser (@simonwuelker, #42930, #42959), under --pref dom_servoparser_async_html_tokenizer_enabled. If we can get the feature working more reliably (#37418), it could halve the energy Servo spends on parsing, lower latency for pages that don’t use document.write(), and even improve the html5ever API for the ecosystem.
For developers
Servo’s DevTools feature now has partial support for inspecting service workers (@CynthiaOketch, #43659), as well as using the navigation controls along the top of the UI (@brentschroeter, @eerii, #43026).
In the Inspector tab, we’ve fixed a bug where the UI stops updating when navigating to a new page (@brentschroeter, #43153).
In the Console tab, you can now evaluate JavaScript in web workers and service workers (@SharanRP, #43361, #43492).
In the Debugger tab, you can now Step In, Step Out, and Step Over (@eerii, @atbrakhi, #42907, #43040, #43042, #43135). We’ve landed partial support for the Scopes panel (@eerii, @atbrakhi, #43166, #43167, #43232), the Call stack panel (@atbrakhi, @eerii, #43015, #43039), and showing you information when hovering over objects, arrays, functions, and other values (@atbrakhi, @eerii, #43319, #43356, #43456, #42996, #42936, #42994).
We’ve fixed some long-outstanding bugs where the DevTools UI may stop responding due to protocol desyncs (@brentschroeter, @eerii, #43230, #43236), or due to messages from multiple Servo threads being interleaved (@brentschroeter, @eerii, #43472).
For developers of Servo itself, mach can be a bit opaque at times. To make mach more transparent and composable, we’ve added `mach print-env` and `mach exec` commands (@jschwe, #42888).
Embedding and automation
Breaking changes:
- Servo::set_accessibility_active() is now WebView::set_accessibility_active() (@delan, @alice, #43029), to make the API harder to misuse (see the docs for more details).
- What was previously named WebView::pinch_zoom() has been renamed to adjust_pinch_zoom(), and we’ve added a pinch_zoom() method that lets you read the current pinch zoom level (@chrisduerr, #43228).
- WebView::set_delegate(), set_clipboard_delegate(), and set_gamepad_provider() are now WebViewBuilder::delegate(), clipboard_delegate(), and gamepad_delegate() (@mrobinson, #43205, #43233). Note that set_gamepad_provider() is now gamepad_delegate(), consistent with the GamepadProvider rename below.
- WebViewDelegate::show_bluetooth_device_dialog() has been reworked to use the same “request object” pattern as the request_*() methods, giving you a BluetoothDeviceSelectionRequest with clear methods (@webbeef, #43580).
- GamepadProvider has been renamed to GamepadDelegate, and gamepad_provider() on WebView has been renamed to gamepad_delegate() (@mrobinson, #43233).
- The empty default implementation of EventLoopWaker::wake has been removed, because it almost never makes sense for a new custom impl to leave the method empty (@chrisduerr, @mrobinson, #43250).
- Opts::print_pwm is now DiagnosticsLogging::progressive_web_metrics (@mrobinson, #43209).
Removed from our API:
- Opts::nonincremental_layout (@mrobinson, #43207) – no replacement. This only really worked in legacy layout.
- Opts::user_stylesheets (@mrobinson, #43206) – use UserContentManager::add_stylesheet() instead. This is how servoshell’s --user-stylesheet option works.
You can now read and write cookies with SiteDataManager::cookies_for_url() and set_cookie_for_url() (@longvatrong111, #43600). ClipboardDelegate and StringRequest are now exposed to the public API, allowing you to implement custom clipboard delegates (@jdm, @chrisduerr, #43203, #43261). You can pass your custom delegate to WebViewBuilder::clipboard_delegate(). You can now get the EmbedderControlId associated with an InputMethodControl by calling InputMethodControl::id() (@chrisduerr, #43248). PixelFormat now implements Debug (@chrisduerr, @mrobinson, #43249).
We’ve improved the docs for Servo, ServoBuilder, WebViewBuilder, RenderingContext (@chrisduerr, #43229), EmbedderControlId, EmbedderControlRequest, EmbedderControlResponse, SimpleDialogRequest, AlertResponse, ConfirmResponse, PromptResponse, EmbedderMsg (@mukilan, #43564), ResourceReaderMethods (@jschwe, @mrobinson, #43769), servo::input_events (@mukilan, #43681), and WheelDelta (@yezhizhen, @mrobinson, #43210).
We fixed a deadlock in WebDriver that occurs under heavy use of actions from multiple input sources (@yezhizhen, #43202, #43169, #43262, #43275, #43301), and ‘pointerMove’ actions with a ‘duration’ are now smoothly interpolated (@yezhizhen, #42946, #43076). Add Cookie is now more conformant (@yezhizhen, #43690), which led to Servo developers landing a spec patch. ‘pause’ actions are now slightly more efficient (@yezhizhen, #43014), and we’ve fixed a bug where ‘wheel’ actions fail to interleave with other actions (@yezhizhen, #43126).
More on the web platform
Carets now blink in text fields (@mrobinson, #43128). You can configure or disable blinking carets with --pref editing_caret_blink_time=0 or a duration in milliseconds. Clicking to move the caret is more forgiving now (@mrobinson, #43238), and moving the caret by a word at a time is more conventional on Windows and Linux, with Ctrl instead of Alt (@mrobinson, #43436). We’ve also fixed a bug where pressing the arrow keys in text fields both moves the caret (good) and scrolls the page (bad), and fixed a bug where the caret fails to render on empty lines (@mrobinson, @freyacodes, #43247, #42218).
The lang attribute is now taken into account when shaping, which is important for the correct rendering of Chinese and Japanese text (@RichardTjokroutomo, @mrobinson, #43447). ‘font-weight’ is now matched more accurately when no available font is an exact match (@shubhamg13, #43125).
Navigation is one of the most complicated parts of HTML: navigating can run some JavaScript that replaces the page, just run some JavaScript, or depending on the response, do nothing at all. < iframe> makes navigation doubly complicated: the document containing an <iframe> can observe and interact with the document inside the <iframe> in various ways, often synchronously. This has been the source of many bugs over the years, but we’ve recently fixed one of those major issues (@jdm, #43496).


javascript: URLs are a massive special case with many quirks, and <iframe> has its own big edge cases.
new Worker() now supports JS modules (@pylbrecht, @Gae24, #40365), and CanvasRenderingContext2D now supports drawing text with Variation Selectors, allowing you to control things like emoji presentation and CJK shaping (@yezhizhen, #43449).
Servo now fires ‘pointerover’ , ‘pointerout’ , ‘pointerenter’ , and ‘pointerleave’ events on web content (@webbeef, #42736), ‘scroll’ events on VisualViewport (@stevennovaryo, #42771), and ‘scrollend’ events on Document , Element , and VisualViewport (@abdelrahman1234567, @mrobinson, #38773). We also fire ‘error’ events when event handler attributes contain syntax errors (@simonwuelker, #43178).
We’ve improved the default appearance of <summary> (@Loirooriol, #43111), <select> (@lukewarlow, #43175), <input type=file> (@lukewarlow, @AlexVasiluta, #43498, #43186), and <textarea>, <input type=text>, and friends (@mrobinson, #43132), plus ‘::marker’ in mixed LTR/RTL content (@Loirooriol, #43201). <select> also now requires user interaction to open the picker (@SharanRP, #43485).
<form action>, <iframe src>, open(url) on XMLHttpRequest, new EventSource(url), and new Worker(url) now correctly resolve the URL with the page encoding (@SharanRP, @jdm, @jayant911, @Veercodeprog, @sabbCodes, #43521, #43554, #43572, #43537, #43634, #43588).
‘direction’ now works on grid containers (@nicoburns, #42118), SVG images can now be used in ‘border-image’ (@shubhamg13, #42566), ‘linear-gradient()’ now dithers to reduce banding (@Messi002, #43603), ‘letter-spacing’ no longer applies to invisible zero-width formatting characters (@simonwuelker, #42961), and ‘:active’ now matches disabled or non-focusable elements too, as long as they are being clicked (@webbeef, #42935).
DOMContentLoaded timings in PerformanceNavigationTiming are more accurate (@simonwuelker, #43151). PerformancePaintTiming and LargestContentfulPaint are more accurate too, taking <iframe> into account (@shubhamg13, #42149), and checking for and ignoring things like broken images and transparent backgrounds (@shubhamg13, #42833, #42975, #43475).
We’ve improved the conformance of JS modules (@Gae24, #43585), <button command> (@lukewarlow, #42883), <font size> (@shubhamg13, #43103), <link media> and <link type> (@TimvdLippe, #43043), <option selected> (@SharanRP, #43582), <script integrity> and <style integrity> (@Gae24, #42931), EventSource (@mishop-15, #42179), SubtleCrypto (@kkoyung, #42984, #43315, #43533, #43519), Worker (@simonwuelker, #43329), HTMLVideoElement (@shubhamg13, #43341), dataset on Element (@TimvdLippe, #43046), and querySelector() and querySelectorAll() (@simonwuelker, #42991).
We’ve fixed bugs related to error reporting (@simonwuelker, @xZaisk, @yezhizhen, @eyupcanakman, #43191, #43323, #43101, #43560), event loops (@jayant911, #43523), focus (@jakubadamw, #43431), quirks mode (@mrobinson, @Loirooriol, @lukewarlow, #42960, #43368), <iframe> (@TimvdLippe, @jdm, #43539, #43732), the ‘animationstart’ and ‘animationend’ events (@simonwuelker, #43454), the ‘touchmove’ event (@yezhizhen, #42926), CanvasRenderingContext2D (@simonwuelker, #43218), Worker (@bruno-j-nicoletti, #43213), ‘:active’ on <input> (@mrobinson, #43722), ‘overflow: scroll’ on ‘::before’ and ‘::after’ (@stevennovaryo, #43231), ‘position: absolute’ (@yoursanonymous, @Loirooriol, #43084), and <img> and <svg> without width or height attributes (@Loirooriol, #42666). Fixing that last bug led to Servo developers finding two spec issues!
We’ve landed partial support for using CSS counters in ‘list-style-type’ on ‘display: list-item’ and ‘content’ on ‘::marker’, but the counter values themselves are not calculated yet, so all list items still read as 0. or similar. In any case, you can use a <string> or ‘symbols()’ in ‘list-style-type’, and ‘counter()’ and ‘counters()’ in ‘content’ (@Loirooriol, #43111). We’ve also landed partial support for <marquee> and the HTMLMarqueeElement interface, including basic layout, but the contents are not animated yet (@mrobinson, @lukewarlow, #43520, #43610).
Servo now exposes several attributes that have no direct effect, but are needed for web compatibility (@lukewarlow, #43500, #43499, #43502, #43518):
- noHref on HTMLAreaElement
- hreflang, type, and charset on HTMLAnchorElement
- useMap on HTMLInputElement and HTMLObjectElement
- longDesc on HTMLIFrameElement and HTMLFrameElement
Performance and stability
We’ve fixed sluggish scrolling on long documents like this page on docs.rs (@webbeef, @yezhizhen, #43074, #43138), and reduced the memory usage of BoxFragment by 10% (@stevennovaryo, #43056). about:memory now has a Force GC button (@webbeef, #42798), and no longer reports all processes as content processes in multiprocess mode (@webbeef, #42923).
Web fonts are no longer fetched more than once, and they no longer cause reflow when they fail to load (@minghuaw, #43382, #43595). We’re also working towards better caching for shaping results (@mrobinson, @lukewarlow, @Loirooriol, #43653). Event handler attribute lookup is more efficient now (@Narfinger, #43337), and we’ve made DOM tree walking more efficient in many cases (@Narfinger, #42781, #42978, #43476).
crypto.subtle.encrypt(), decrypt(), sign(), verify(), digest(), importKey(), unwrapKey(), decapsulateKey(), and decapsulateBits() are more efficient now (@kkoyung, #42927), thanks to a recent spec update. More of Servo now uses cheaper crossbeam channels instead of IPC channels, unless Servo is running in multiprocess mode, or avoids IPC altogether (@Narfinger, @jschwe, @Taym95, #42077, #43309, #42966). We’ve also reduced clones, allocations, conversions, comparisons, and borrow checks in many parts of Servo (@simonwuelker, @kkoyung, @mrobinson, @Narfinger, @yezhizhen, @TG199, #43212, #43055, #43066, #43304, #43452, #43717, #43780, #43088, #43226).
DOM data structures (#[dom_struct]) can refer to one another, with the help of garbage collection. But when DOM objects are being destroyed, those references can become invalid for a brief moment, depending on the order the GC finalizers run in. This can be unsound if those references are accessed, which is a very easy mistake to make if the type has an impl Drop. To help prevent that class of bug, we’re reworking our DOM types so that none of them have #[dom_struct] and impl Drop at the same time (@willypuzzle, #42937, #42982, #43018, #43071, #43222, #43288, #43544, #43563, #43631).
We’ve fixed a crash caused by an IPC resource leak when making many requests over time (@yezhizhen, #43381), and some bugs found by ThreadSanitizer and --debug-mozjs (@jdm, @Loirooriol, #42976, #42963, #43487). We’ve also fixed crashes in CanvasRenderingContext2D (@yezhizhen, #43449), Crypto (@rogerkorantenng, #43501), devtools (@simonwuelker, #43133), event handler attributes (@simonwuelker, #43178), Promise (@Narfinger, @jdm, #43470), and WebDriver (@Tarmil, @yezhizhen, #42739, #43381).
We’ve continued our long-running effort to use the Rust type system to make certain kinds of dynamic borrow failures impossible (@Narfinger, @Gae24, @Uiniel, @TimvdLippe, @yezhizhen, @sagudev, @PuercoPop, @pylbrecht, @arabson99, @jayant911, #42957, #43108, #43130, #43215, #43183, #43219, #43245, #43220, #43252, #43268, #43184, #43277, #43278, #43284, #43302, #43312, #43348, #43327, #43362, #43365, #43383, #43432, #43259, #43439, #43473, #43481, #43480, #43479, #43525, #43535, #43543, #43549, #43570, #43571, #43569, #43579, #43584, #43657, #43713).
Thanks to a wide range of people, many of whom were contributing to Servo for the first time, we’ve also landed a bunch of architectural improvements (@elomscansio, @mukilan, #43646), cleanups (@simartin, @SharanRP, @TG199, @sabbCodes, @niyabits, @eerii, @atbrakhi, #43276, #43285, #43532, #43778, #43771, #43566, #43567, #43587, #43140, #43316), and refactors (@sabbCodes, @arabson99, @jayant911, @StaySafe020, @saydmateen, @eerii, @TimvdLippe, @elomscansio, @CynthiaOketch, #43614, #43641, #43619, #43642, #43623, #43656, #43644, #43672, #43664, #43676, #43684, #43679, #43678, #43655, #43675, #43731, #43729, #43728, #43740, #43751, #43748, #43747, #43752, #43745, #43724, #43723, #43765, #43767, #43181, #43269, #43270, #43279, #43437, #43597, #43607, #43602, #43616, #43609, #43612, #43647, #43651, #43662, #43714, #43774).
Donations
Thanks again for your generous support! We are now receiving 7167 USD/month (+2.6% from February) in recurring donations. This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and maintainer work that helps more people contribute to Servo.
Servo is also on thanks.dev, and already 37 GitHub users (+5 from February) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.
We now have sponsorship tiers that allow you or your organisation to donate to the Servo project with public acknowledgement of your support. If you’re interested in this kind of sponsorship, please contact us at join@servo.org.
Use of donations is decided transparently via the Technical Steering Committee’s public funding request process , and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.
-
- April 29, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-29 rss
IDA Plugin Updates on 2026-04-29
Activity:
- ida-mcp-server
- 13f82c62: fix: function-size pre-filter (16 KB threshold) restores MAX_FUNCSIZE…
- eb63c538: fix: extend pathological-func pre-filter for Rust deep generics
- 86e2d687: feat: lazy-init C++ class recovery on first decompile
- f384fe25: feat: add Itanium C++ ABI class recovery tool (recover_cpp_classes)
- e8112416: feat: tier-4 raw disassembly fallback - guarantees 100% coverage
- ed57dab4: feat: handle extern symbols + bump MAX_FUNCSIZE for "too big function"
- dbf27026: fix: handle thunks/trampolines + null-JSON in decompile_function
- 6ac2f0e4: fix: tighten Go-symbol regex - require trailing '.' to avoid C++ fals…
- python-elpida_core.py
- 5a88e62b: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T23:37Z
- 15a038c2: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T23:15Z
- 6698cd11: HERMES correction note: clear stale items before daily-13
- 7628fb89: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T22:53Z
- 2d9c8c09: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T22:30Z
- d24ffc93: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T22:05Z
- f184eab8: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T21:40Z
- 9f5d78c2: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T21:12Z
- c7e897f3: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T20:44Z
- bd98077e: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T20:20Z
- quokka
- 43316396: Merge pull request #110 from quarkslab/dependabot/github_actions/acti…
- ida-mcp-server
-
🔗 livestorejs/livestore "v0.4.0-dev.23" release
Release 0.4.0-dev.23. Posted on behalf of @schickling

field | value
---|---
agent_name | 🌱 co1-alder
agent_session_id | 6c624c62-4e38-435e-b267-475ef99d9340
agent_tool | Codex CLI
agent_tool_version | codex-cli 0.124.0
agent_runtime | Codex CLI codex-cli 0.124.0
agent_model | unknown
worktree | megarepo-all/schickling/2026-04-26-livestore-release
machine | dev3
tooling_profile | dotfiles@19cf6f4 -
🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release
New releases
- [clang-include](https://github.com/oxikkk/ida-clang-include): 1.1.0 -
🔗 r/Yorkshire Flamborough Cliffs rss
The amazing cliffs today at Flamborough. submitted by /u/J_1989_EDI
[link] [comments] -
🔗 r/Yorkshire How driving Yorkshire Dales B road in the evening is like rss
submitted by /u/alanas4201
[link] [comments] -
🔗 Simon Willison LLM 0.32a0 is a major backwards-compatible refactor rss
I just released LLM 0.32a0, an alpha release of my LLM Python library and CLI tool for accessing LLMs, with some consequential changes that I've been working towards for quite a while.
Previous versions of LLM modeled the world in terms of prompts and responses. Send the model a text prompt, get back a text response.
```python
import llm

model = llm.get_model("gpt-5.5")
response = model.prompt("Capital of France?")
print(response.text())
```
This made sense when I started working on the library back in April 2023. A lot has changed since then!
LLM provides an abstraction over thousands of different models via its plugin system. The original abstraction - of text input that returns text output - was no longer able to represent everything I needed it to.
Over time LLM itself has grown attachments to handle image, audio, and video input, then schemas for outputting structured JSON, then tools for executing tool calls. Meanwhile LLMs kept evolving, adding reasoning support and the ability to return images and all kinds of other interesting capabilities.
LLM needs to evolve to better handle the diversity of input and output types that can be processed by today's frontier models.
The 0.32a0 alpha has two key changes: model inputs can be represented as a sequence of messages, and model responses can be composed of a stream of differently typed parts.
Prompts as a sequence of messages
LLMs accept input as text, but ever since ChatGPT demonstrated the value of a two-way conversational interface, the most common way to prompt them has been to treat that input as a sequence of conversational turns.
The first turn might look like this:
```
user: Capital of France?
assistant:
```
(The model then gets to fill out the reply from the assistant.)
But each subsequent turn needs to replay the entire conversation up to that point, as a sort of screenplay:
```
user: Capital of France?
assistant: Paris
user: Germany?
assistant:
```
Most of the JSON APIs from the major vendors follow this pattern. Here's what the above looks like using the OpenAI chat completions API, which has been widely imitated by other providers:
```bash
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.5",
    "messages": [
      { "role": "user", "content": "Capital of France?" },
      { "role": "assistant", "content": "Paris" },
      { "role": "user", "content": "Germany?" }
    ]
  }'
```
Prior to 0.32, LLM modeled these as conversations:
```python
model = llm.get_model("gpt-5.5")
conversation = model.conversation()
r1 = conversation.prompt("Capital of France?")
print(r1.text())  # Outputs "Paris"
r2 = conversation.prompt("Germany?")
print(r2.text())  # Outputs "Berlin"
```
This worked if you were building a conversation with the model from scratch, but it didn't provide a way to feed in a previous conversation from the start. This made tasks like building an emulation of the OpenAI chat completions API much harder than they should have been.
The `llm` CLI tool worked around this through a custom mechanism for persisting and inflating conversations using SQLite, but that never became a stable part of the LLM API - and there are many places you might want to use the Python library without committing to SQLite as the storage layer.
The new alpha now supports this:
```python
import llm
from llm import user, assistant

model = llm.get_model("gpt-5.5")
response = model.prompt(messages=[
    user("Capital of France?"),
    assistant("Paris"),
    user("Germany?"),
])
print(response.text())
```
The `llm.user()` and `llm.assistant()` functions are new builder functions designed to be used within that `messages=[]` array.
The previous `prompt=` option still works, but LLM upgrades it to a single-item messages array behind the scenes.
You can also now reply to a response, as an alternative to building a conversation:
```python
response2 = response.reply("How about Hungary?")
print(response2)  # Default __str__() calls .text()
```
Streaming parts
The other major new interface in the alpha concerns streaming results back from a prompt.
Previously, LLM supported streaming like this:
```python
response = model.prompt("Generate an SVG of a pelican riding a bicycle")
for chunk in response:
    print(chunk, end="")
```
Or this async variant:
```python
import asyncio
import llm

model = llm.get_async_model("gpt-5.5")
response = model.prompt("Generate an SVG of a pelican riding a bicycle")

async def run():
    async for chunk in response:
        print(chunk, end="", flush=True)

asyncio.run(run())
```
Many of today's models return mixed types of content. A prompt run against Claude might return reasoning output, then text, then a JSON request for a tool call, then more text content.
Some models can even execute tools on the server-side, for example OpenAI's code interpreter tool or Anthropic's web search. This means the results from the model can combine text, tool calls, tool outputs and other formats.
Multi-modal output models are starting to emerge too, which can return images or even snippets of audio intermixed into that streaming response.
The new LLM alpha models these as a stream of typed message parts. Here's what that looks like as a Python API consumer:
```python
import asyncio
import llm

model = llm.get_model("gpt-5.5")
prompt = "invent 3 cool dogs, first talk about your motivations"

def describe_dog(name: str, bio: str) -> str:
    """Record the name and biography of a hypothetical dog."""
    return f"{name}: {bio}"

def sync_example():
    response = model.prompt(
        prompt,
        tools=[describe_dog],
    )
    for event in response.stream_events():
        if event.type == "text":
            print(event.chunk, end="", flush=True)
        elif event.type == "tool_call_name":
            print(f"\nTool call: {event.chunk}(", end="", flush=True)
        elif event.type == "tool_call_args":
            print(event.chunk, end="", flush=True)

async def async_example():
    model = llm.get_async_model("gpt-5.5")
    response = model.prompt(
        prompt,
        tools=[describe_dog],
    )
    async for event in response.astream_events():
        if event.type == "text":
            print(event.chunk, end="", flush=True)
        elif event.type == "tool_call_name":
            print(f"\nTool call: {event.chunk}(", end="", flush=True)
        elif event.type == "tool_call_args":
            print(event.chunk, end="", flush=True)

sync_example()
asyncio.run(async_example())
```
Sample output (from just the first sync example):
My motivation: create three memorable dogs with distinct “cool” styles—one cinematic, one adventurous, and one charmingly chaotic—so each feels like they could star in their own story.
Tool call: describe_dog({"name": "Nova Jetpaw", "bio": "A sleek silver-gray whippet who wears tiny aviator goggles and loves sprinting along moonlit beaches. Nova is fearless, elegant, and rumored to outrun drones just for fun."}
Tool call: describe_dog({"name": "Mochi Thunderbark", "bio": "A fluffy corgi with a dramatic black-and-gold bandana and the confidence of a rock star. Mochi is short, loud, loyal, and leads a neighborhood 'security patrol' made entirely of squirrels."}
Tool call: describe_dog({"name": "Atlas Snowfang", "bio": "A massive white husky with ice-blue eyes and a backpack full of trail snacks. Atlas is calm, heroic, and always knows the way home—even during blizzards, fog, or confusing camping trips."}At the end of the response you can call
response.execute_tool_calls()to actually run the functions that were requested, or send aresponse.reply()to have those tools called and their return values sent back to the model:print(response.reply("Tell me about the dogs"))
This new mechanism for streaming different token types means the CLI tool can now display "thinking" text in a different color from the text in the final response. The thinking text goes to stderr so it won't affect results that are piped into other tools.
This example uses Claude Sonnet 4.6 (with an updated streaming event version of the llm-anthropic plugin) as Anthropic's models return their reasoning text as part of the response:
```bash
llm -m claude-sonnet-4.6 'Think about 3 cool dogs then describe them' \
  -o thinking_display 1
```
You can suppress the output of reasoning tokens using the new `-R/--no-reasoning` flag. Surprisingly, that ended up being the only CLI-facing change in this release.
A mechanism for serializing and deserializing responses
As mentioned earlier, LLM has quite inflexible code at the moment for persisting conversations to SQLite. I've added a new mechanism in 0.32a0 that should provide Python API users a way to roll their own alternative:
```python
serializable = response.to_dict()
# serializable is a JSON-style dictionary
# store it anywhere you like, then inflate it:
response = Response.from_dict(serializable)
```
The dictionary this returns is actually a `TypedDict` defined in the new llm/serialization.py module.
What's next?
I'm releasing this as an alpha so I can upgrade various plugins and exercise the new design in real world environments for a few days. I expect the stable 0.32 release will be very similar to this alpha, unless alpha testing reveals some design flaw in the way I've put this all together.
There's one remaining large task: I'd like to redesign the SQLite logging system to better capture the more finely grained details that are returned by this new abstraction.
Ideally I'd like to model this as a graph, to best support situations like an OpenAI-style chat completions API where the same conversations are constantly extended and then repeated with every prompt. I want to be able to store those without duplicating them in the database.
I'm undecided as to whether that should be a feature in 0.32 or I should hold it for 0.33.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 r/york Help me reach £500 donations for York's homeless before tomorrow? rss
Hi all! Some of you might remember my last post and how much amazing support I got from our local Reddit group when I first began fundraising. This will be my last update before the sleep out actually takes place! Tomorrow evening I will be taking part in York's annual Charity Sleep Out to help raise money for some of the wonderful charities in York who provide food and other essential support to those in our local area who are homeless or otherwise in need. I've had the absolute pleasure of volunteering with Hoping Kitchen on Sundays and I know how well-loved KEYS is, so it's a really worthy cause. Whilst it won't be even close to what those who sleep rough experience on a daily basis, I am the kind of person who had to borrow a wooly hat from a friend because I would very much usually rather be indoors doing literally anything outside ever. Most importantly, my pet parrots and bunnies will miss me very much and probably give me a few nips upon my return for leaving them without their usual bedtime snuggles for an evening. Would be really great to get to £500 before the event begins tomorrow! I'll try and remember to post some pictures whilst we're camping out tomorrow to keep you all updated https://www.givewheel.com/fundraising/14777/kayleighs-york-charity-sleepout-2026/ submitted by /u/kittywenham
[link] [comments] -
🔗 sacha chua :: living an awesome life Working on the Emacs newbie experience rss
The Emacs Carnival April 2026 theme of newbies/starter kits nudged me to think about how new users can learn what they need in order to get started. In particular, I wanted to think about these questions that newbies might have:
- Is it worth it?
- How do I start?
- Should I use a starter kit? How?
- I'm stuck, how can I get help?
- This is overwhelming. How do I make it more manageable?
I worked on some pages in the EmacsWiki:
- EmacsWiki: Emacs Newbie
- I removed or deemphasized some links that might be confusing for newbies.
- EmacsWiki: Learning Emacs
- I reorganized the items and added some more notes.
- EmacsWiki: Emacs Screencasts
- I tweaked the beginner information section.
- I added a section for starter kits.
- EmacsWiki: Starter Kits
- I added "Things to know before you start" to help newbies who might not have Git installed or who might not know how to get to the command line. I also organized the starter kits by type.
- EmacsWiki: Keybinding Guide
- Replaced the link with Mastering Emacs. I'd add https://www.gnu.org/software/emacs/manual/html_node/efaq/Binding-keys-to-commands.html, but it's not responding to me at the moment even though downforeveryoneorjustme says that it's up.
People often recommend Emacs News to people who want to learn more about what's going on in the Emacs community, so I added some notes to that one as well.
- I added an introduction to the Emacs News category page to direct new people to some tips for making the most of Emacs News
- I moved the e-mail subscription above the RSS feed, since people are more familiar with e-mail as a subscription mechanism.
- I added a tutorial for setting up newsticker within Emacs.
- I set up some shorter URLs (sachachua.com/emacs-news, sach.ac/emacs-news, yayemacs.com/news).
Just gotta find some newbies to test these ideas with… Email me! =)
You can e-mail me at sacha@sachachua.com.
-
🔗 sacha chua :: living an awesome life Emacs beginner resources rss
Updated my page from 2014 with more recent resources.
Welcome to Emacs! Thank you for considering this strange and wonderful text editor. Here are some resources that can help you on your journey.
- GNU Emacs: A Guided Tour: This page has screenshots and a short tutorial.
- The EmacsNewbie page on EmacsWiki
- An Emacs Tutorial: Beginner's Guide to Emacs - Mastering Emacs
Many people use Emacs just for Org Mode. Here are some resources for getting started:
- Org mode beginning at the basics
- Top (Org Mode Compact Guide)
- james-stoup/emacs-org-mode-tutorial: A primer for users trying to make sense of Org Mode · GitHub
You can view 1 comment or e-mail me at sacha@sachachua.com.
-
🔗 r/Leeds T&A link - Tuesday 28th - Briggate: "4 teen boys - aged 13 to 16 - arrested following city centre stabbing incident" rss
Reports that a 34‑year‑old man was taken to hospital after reportedly being stabbed during an altercation near the McDonalds on Briggate on Tuesday night.
Also in the YEP:
Of course it was outside the McDonalds :(
I hope those responsible are dealt with robustly to send the right message.
submitted by /u/thetapeworm
[link] [comments] -
🔗 r/Leeds Bike stolen city centre rss
Victoria Pendleton bike stolen today from outside Leeds train station between 12:30-16:30 :(
Please DM if you have any information, thank you
submitted by /u/Few_Health_5530
[link] [comments] -
🔗 r/Yorkshire Whitby steam trains return delayed rss
submitted by /u/CaptainYorkie1
[link] [comments] -
🔗 Andrew Ayer - Blog FastCGI: 30 Years Old and Still the Better Protocol for Reverse Proxies rss
HTTP reverse proxying is a minefield. Just the other week, a researcher disclosed a desync vulnerability in Discord's media proxy that allowed spying on private attachments. This is not unusual; these vulnerabilities just keep coming.
The problem is the widespread use of HTTP as the protocol between reverse proxies and backends, even though it's unfit for the job. But we don't have to use HTTP here. There's a 30-year-old protocol for proxy-to-backend communication that avoids HTTP's pitfalls. It's called FastCGI, and its specification was released 30 years ago today.
FastCGI is a Wire Protocol, not a Process Model
It's true that some web servers can automatically spawn FastCGI processes to handle requests for files with the `.fcgi` extension, much like they would for `.cgi` files. But you don't have to use FastCGI this way - you can also use the FastCGI protocol just like HTTP, with requests sent over a TCP or UNIX socket to a long-running daemon that handles them as if they were HTTP requests.

For example, in Go all you have to do is import the net/http/fcgi standard library package and replace `http.Serve` with `fcgi.Serve`:

Go HTTP:

```go
l, _ := net.Listen("tcp", "127.0.0.1:8080")
http.Serve(l, handler)
```

Go FastCGI:

```go
l, _ := net.Listen("tcp", "127.0.0.1:8080")
fcgi.Serve(l, handler)
```

Everything else about your app stays the same - even your handler, which continues to use the standard `http.ResponseWriter` and `http.Request` types.
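For completeness, here is what a whole FastCGI backend looks like with those pieces assembled. A minimal sketch, assuming the reverse proxy is pointed at a loopback TCP socket; only the standard library is involved:

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
	"net/http/fcgi"
)

func main() {
	// Listen where the reverse proxy expects the backend;
	// a UNIX socket works the same way.
	l, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "hello via FastCGI, %s\n", r.URL.Path)
	})

	// Speak FastCGI on the socket instead of HTTP; the handler is unchanged.
	log.Fatal(fcgi.Serve(l, handler))
}
```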
Popular proxies like Apache, Caddy, nginx, and HAProxy support FastCGI backends, and the configuration is simple:

nginx HTTP:

```
proxy_pass http://localhost:8080;
```

nginx FastCGI:

```
fastcgi_pass localhost:8080;
include fastcgi_params;
```

More config examples:

Apache HTTP:

```
ProxyPass / http://localhost:8080/
```

Apache FastCGI:

```
ProxyPass / fcgi://localhost:8080/
```

Caddy HTTP:

```
reverse_proxy localhost:8080 {
    transport http {
    }
}
```

Caddy FastCGI:

```
reverse_proxy localhost:8080 {
    transport fastcgi {
    }
}
```

HAProxy HTTP:

```
backend app_backend
    server s1 localhost:8080
```

HAProxy FastCGI:

```
fcgi-app fcgi_app
    docroot /

backend app_backend
    use-fcgi-app fcgi_app
    server s1 localhost:8080 proto fcgi
```

Why HTTP Sucks for Reverse Proxies: Desync Attacks / Request Smuggling
HTTP/1.1 has the tragic property of looking simple on the surface (it's just text!) but actually being a nightmare to parse robustly. There are so many different ways to format the same HTTP message, and there are too many edge cases and ambiguities for implementations to handle consistently. As a result, no two HTTP/1.1 implementations are exactly the same, and the same message can be parsed differently by different parsers.
The most serious problem is that there is no explicit framing of HTTP messages - the message itself describes where it ends, and there are multiple ways for a message to do that, all with their own edge cases. Implementations can disagree about where a message ends, and consequently, where the next message begins. This is the foundation of HTTP desync attacks, also known as request smuggling, wherein a reverse proxy and a backend disagree about the boundaries between HTTP messages, causing all sorts of nightmare security issues, such as the Discord vulnerability I linked above.
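To make the ambiguity concrete, here is the textbook CL.TE case from the request-smuggling literature (an illustration of the class of bug, not the specific Discord flaw):

```
POST / HTTP/1.1
Host: example.com
Content-Length: 13
Transfer-Encoding: chunked

0

SMUGGLED
```

A proxy that honors `Content-Length` forwards all 13 body bytes as one request; a backend that honors `Transfer-Encoding` sees the body end at the empty chunk and treats `SMUGGLED` as the start of the next request on the reused connection.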
A lot of people seem to think you can just patch the parser divergences, but this is a losing strategy. James Kettle just keeps finding new ones. After finding another batch last year, he declared "HTTP/1.1 must die".
HTTP/2, when consistently used between the proxy and backend, fixes desync by putting clear boundaries around messages, but FastCGI has been doing that since 1996 with a simpler protocol. For context, nginx has supported FastCGI backends since its first release, but only got support for HTTP/2 backends in late 2025. Apache's support for HTTP/2 backends is still "experimental".
Why HTTP Sucks for Reverse Proxies: Untrusted Headers
If desync attacks were the only problem, you could just use HTTP/2 and call it a day. Unfortunately, there's another problem: HTTP has no robust way for the proxy to convey trusted information about the request, such as the real client IP address, authenticated username (if the proxy handles authentication), or client certificate details (if mTLS is used).
The only option is to stick this information in HTTP headers, alongside the headers proxied from the client, without a clear structural distinction between trusted headers from the proxy and untrusted headers from a potential attacker. For example, the `X-Real-IP` header is often used to convey the client's real IP address. In theory, if your proxy correctly deletes all instances of the `X-Real-IP` header (not just the first, and including case variations like `x-REaL-ip`) before adding its own, you're safe.

In practice, this is a minefield and there are an awful lot of ways your backend can end up trusting attacker-controlled data. Your proxy really needs to delete not just `X-Real-IP`, but any header that's used for this sort of thing, just in case some part of your stack relies on it without your knowledge. For example, the Chi middleware determines the client's real IP address by looking at the `True-Client-IP` header first. Only if `True-Client-IP` doesn't exist does it use `X-Real-IP`. So even if your proxy does the right thing with `X-Real-IP`, you can still be pwned by an attacker sending a `True-Client-IP` header.

FastCGI completely avoids this class of problem by providing domain separation between headers from the client and information added by the proxy. Though trusted data from the proxy and HTTP request headers are transmitted to the backend in the same key/value parameter list, HTTP header names are prefixed with the string "HTTP_", making it structurally impossible for clients to send a header that would be interpreted as trusted data.
FastCGI defines some standard parameters such as `REMOTE_ADDR` to convey the real client IP address. Go's `net/http/fcgi` package automatically uses this parameter to populate the `RemoteAddr` field of `http.Request`, rendering middleware unnecessary. It Just Works. Proxies can also use non-standard parameters to report whether HTTPS was used, what TLS ciphersuite was negotiated, and what client certificate was presented, if any. Go automatically sets the `Request`'s `TLS` field to a non-nil (but empty) value if the request used HTTPS, which is very handy for enforcing the use of HTTPS. The `fcgi.ProcessEnv` function can be used to access the full set of trusted parameters sent by the proxy.
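Concretely, a Go backend can read all of that directly off the request. A minimal sketch (the `SERVER_NAME` lookup is just one example of a standard parameter a proxy may send):

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
	"net/http/fcgi"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Populated by net/http/fcgi from the proxy's REMOTE_ADDR parameter,
	// so it can't be spoofed with a client-supplied header.
	clientIP := r.RemoteAddr

	// A non-nil TLS field means the proxy reported that HTTPS was used.
	if r.TLS == nil {
		http.Error(w, "HTTPS required", http.StatusForbidden)
		return
	}

	// The full key/value parameter list sent by the proxy.
	params := fcgi.ProcessEnv(r)
	fmt.Fprintf(w, "client %s reached %s\n", clientIP, params["SERVER_NAME"])
}

func main() {
	l, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(fcgi.Serve(l, http.HandlerFunc(handler)))
}
```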
Closing Thoughts

If FastCGI is the better protocol, why isn't it more popular? Maybe it's the name - while capitalizing on CGI's popularity made sense in 1996, CGI feels dated in 2026. There's also an enduring lack of awareness of the security problems with HTTP reverse proxying. Watchfire described desync attacks in 2005, and gave a prescient warning of their intractability, but the attacks were inexplicably ignored for over a decade. In an alternate timeline, Watchfire's research was taken seriously and people went looking for other protocols for reverse proxies.
FastCGI is very usable today, and has been in production use at SSLMate for over 10 years. That said, using a vintage technology has some downsides. It was never updated to support WebSockets. The tooling is not as good. For example, curl has no way to make requests to a FastCGI server. It supports FTP, Gopher, and even SMTP (however that works), but not FastCGI. When I benchmarked Go's FastCGI server behind a variety of reverse proxies, some workloads had worse throughput compared to HTTP/1.1 or HTTP/2. I don't think that's inherent to the protocol, but a reflection that FastCGI code paths have not been optimized as much as HTTP.
Despite these shortcomings, I still think FastCGI is worth using. I don't use WebSockets, and it's fast enough for my use case (and maybe yours too). If it ever became the bottleneck, I'd rather buy more hardware than deal with the nightmare of HTTP reverse proxying.
Happy 30th birthday, FastCGI!
-
🔗 r/LocalLLaMA mistralai/Mistral-Medium-3.5-128B · Hugging Face rss
https://huggingface.co/unsloth/Mistral-Medium-3.5-128B-GGUF

Mistral Medium 3.5 128B
Mistral Medium 3.5 is our first flagship merged model. It is a dense 128B model with a 256k context window, handling instruction-following, reasoning, and coding in a single set of weights. Mistral Medium 3.5 replaces its predecessor Mistral Medium 3.1 and Magistral in Le Chat. It also replaces Devstral 2 in our coding agent Vibe. Concretely, expect better performance on instruct, reasoning, and coding tasks in a new unified model compared with our previously released models. Reasoning effort is configurable per request, so the same model can answer a quick chat reply or work through a complex agentic run. We trained the vision encoder from scratch to handle variable image sizes and aspect ratios. Find more information on our blog.
Key Features
Mistral Medium 3.5 includes the following architectural choices:
- Dense 128B parameters.
- 256k context length.
- Multimodal input : Accepts both text and image input, with text output.
- Instruct and Reasoning functionalities with function calls (reasoning effort configurable per request).
Mistral Medium 3.5 offers the following capabilities:
- Reasoning Mode : Toggle between fast instant reply mode and reasoning mode, boosting performance with test-time compute when requested.
- Vision : Analyzes images and provides insights based on visual content, in addition to text.
- Multilingual : Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, and Arabic.
- System Prompt : Strong adherence and support for system prompts.
- Agentic : Best-in-class agentic capabilities with native function calling and JSON output.
- Large Context Window : Supports a 256k context window.
We release this model under a Modified MIT License: an open-source license for both commercial and non-commercial use, with exceptions for companies with large revenue.
Recommended Settings
- Reasoning Effort :
  - `'none'` → Do not use reasoning
  - `'high'` → Use reasoning (recommended for complex prompts and agentic usage)
  Use `reasoning_effort="high"` for complex tasks and agentic coding.
- Temperature : 0.7 for `reasoning_effort="high"`. Temperature between 0.0 and 0.7 for `reasoning_effort="none"`, depending on the task. Generally, lower values give answers that are more to the point, while higher values let the model be more creative. It is good practice to try different values to tune the model's performance to your needs.
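As a concrete illustration of those settings, here is a minimal sketch of a chat request against an OpenAI-compatible endpoint serving the model. The URL, the model id, and the acceptance of a top-level `reasoning_effort` field are assumptions for illustration, not confirmed API details:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical local server (e.g. llama.cpp or vLLM serving the GGUF).
	url := "http://localhost:8000/v1/chat/completions"

	body, _ := json.Marshal(map[string]any{
		"model":            "mistral-medium-3.5", // assumed model id
		"reasoning_effort": "high",               // recommended for agentic work
		"temperature":      0.7,                  // recommended for high effort
		"messages": []map[string]string{
			{"role": "user", "content": "Outline a plan to refactor this module."},
		},
	})

	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```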
submitted by /u/jacek2023
[link] [comments] -
🔗 Jessitron Span or Attribute? in OpenTelemetry custom instrumentation rss
TL;DR: Attribute. More information on one event gives us more correlation power. It’s also cheaper.
When you want to add some information to your tracing telemetry, you could emit a log, create a span, or add a piece of data to your current span. Adding a piece of data to your current span is the best! Usually.

Attributes are the best, and also the cheapest.
If you have request name, user ID, request properties, feature flags, and notes about what happened in a single event, then you can correlate
- feature flags with error rate
- number of items with latency
- which users hit the same stack trace

The more data on the top-level span, the more answers you can get to “What is different about the requests that failed?”[1]
More information in one place is better! You can say `trace.getCurrentSpan().setAttribute("my_module.items.count", items.length)` anywhere in your code, and accumulate data on a single event. This might be my favorite thing about OpenTelemetry tracing.

Providers like Honeycomb that charge per event make adding attributes nearly free. (There’s still network, and long-term storage if you use that.)
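In Go, the same move looks like this. A minimal sketch using the OpenTelemetry Go API; the function and the `items` slice are illustrative:

```go
package app

import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

// recordItemCount adds a count to whatever span is already active;
// no new span is created.
func recordItemCount(ctx context.Context, items []string) {
	span := trace.SpanFromContext(ctx)
	span.SetAttributes(attribute.Int("my_module.items.count", len(items)))
}
```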
Spans are for important units of work.
But sometimes it’s better to create a whole new span!
When to start a new span:
- Incoming request - Gotta create a top-level span to represent the work, so that you can add all those sweet attributes to it! This might be a root span (incoming work from outside, new trace) or a server span (continuing a propagated trace). In services, these come from instrumentation libraries.
- Network boundaries - spans are great for seeing dependencies between components. When you’re calling out to another service or database, it’s normal to make a client span for the outgoing call. These are created by many instrumentation libraries.
- Async boundaries - spans are great for seeing what ran in parallel and what waited.
- Performance concerns - spans are great for seeing what is slow.
Logs are useful sometimes.
If something might happen more than once, then a single-valued attribute can’t record them all. If you want to track how long that thing took, use a span. If it’s a fixed-time event (like an interrupt or error), then a log is good![2]
For example, if there’s only way an exception could be thrown in the scope of the span, then putting
exception.messageon the span is great. But if it’s possible for another exception to be thrown, that message would be overwritten! This is a good time to emit a log. Make sure the log participates in the trace (it includes trace and span ID), and then it will show up on your current span in the trace view. It doesn’t hurt to put that message on the span as well.These are suggestions.
These are guidelines, but the choice is yours. What do you want your trace to look like? What do you want to see called out in the trace waterfall, and what do you want to have together for correlation? Maybe you want both: an attribute on the root span, and a span that shows duration and detail.
Tracing tells the story of your application. Tell it the way that works for you.
Prompt
Get the AI to tell the story to you, and to verify that it works by testing. Here’s some advice to add to give your AI when coding:
```
## Observability Practices

- add important data to the current span as attributes. Examples:
  - request parameters, especially internal IDs
  - feature flag values
  - anything that the code branches on
  - counts of how many times a loop was iterated
  - results of downstream calls
- Name attributes like: <application>.<module>.<field>
- Do not create span events, they're expensive.
- Create logs only on exceptions
- bring in instrumentation libraries for frameworks and client libraries to create the span structure
- when kicking off async work, create a new span around each async task so that we can see what happens concurrently and what waits.
- Use the Honeycomb MCP to check that your attributes and spans show up correctly after testing.
```
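For the async-work guideline in that prompt, wrapping each task in its own span might look like this in Go (a minimal sketch; the tracer name and task shape are illustrative):

```go
package app

import (
	"context"
	"sync"

	"go.opentelemetry.io/otel"
)

// runAll starts one span per async task, so the trace waterfall
// shows what ran concurrently and what waited.
func runAll(ctx context.Context, tasks []func(context.Context)) {
	tracer := otel.Tracer("my_app")
	var wg sync.WaitGroup
	for _, task := range tasks {
		wg.Add(1)
		go func(task func(context.Context)) {
			defer wg.Done()
			taskCtx, span := tracer.Start(ctx, "async_task")
			defer span.End()
			task(taskCtx)
		}(task)
	}
	wg.Wait()
}
```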
[1] The data doesn’t have to be on the same span to correlate it; Honeycomb can query across spans and logs in a trace. But it’s faster and easier when the data is on the same span, and BubbleUp (“what is different?”) works on single events.
[2] You might wonder, why a log instead of a span event? They are the same inside Honeycomb. Logs are sent immediately and are more likely to arrive. This matters in web clients, where people close the tab and the span never ends.
-
🔗 r/LocalLLaMA 16x DGX Sparks - What should I run? rss
Let’s build the biggest ever DGX Spark cluster at home. This is going into my home lab server rack, 2TB of unified memory.
- 16x Sparks
- 1x 200Gbps FS 24 x 200Gb QSFP56 switch
- 16x QSFP56 DAC cables
Should be all set up by tomorrow afternoon. What should I run?
submitted by /u/Kurcide
[link] [comments] -
🔗 r/reverseengineering I built a free open-source CAN bus reverse engineering workstation in Python — 15 tabs, offline ML, dual AI engines, MitM gateway rss
submitted by /u/Repulsive_Factor5654
[link] [comments] -
🔗 r/reverseengineering I'm not an expert but a beginner. Using guides, I've tried every way to intercept an app's network traffic. Even Frida didn't work. The app doesn't even work on a rooted device. I tried decompiling and changing the network config, but after install the app redirects to the Play Store for an update. rss
submitted by /u/Educational-Tip8889
[link] [comments] -
🔗 r/york tansy beetle on clifton sands !! rss
submitted by /u/whtmynm
[link] [comments] -
🔗 r/LocalLLaMA What it feels like to have to have Qwen 3.6 or Gemma 4 running locally rss
Well, or pretty close to it; they are excellent workhorses. I run them in real work scenarios, doing some of the work I used to do myself as a skilled expert in my field, billing $200 an hour. Of course the key is building a system around their weaknesses, and I already had LLM systems doing expert work years ago when the first ones came out (shout out Nous Hermes 2 Mistral!). But yeah, pretty neat, especially since you can have 3.6 27B fly on a single 3090.
submitted by /u/GodComplecs
[link] [comments] -
🔗 r/wiesbaden Finding new friends, 25-36ish rss
Hello, I'm 34, single, and new to Wiesbaden. Since my friends hardly leave the house anymore thanks to kids, I'm looking for young, active people who'd like to meet up regularly. Not at all easy in WI :( Bumble BFF and Gemeinsam Erleben unfortunately didn't work for me at all, and randomly starting a dance class or something similar isn't really my thing either.
I love being out and about and just want to get out more often again and celebrate: street festivals, bars, or simply going for a walk. I'm just as happy chilling at home, having a games night, cooking something tasty, and starting a film/series marathon. I'm sporty and easy to get excited about lots of other things too.
Would be cool to meet like-minded people, preferably around my age, give or take 😁
submitted by /u/M0zep5
[link] [comments] -
🔗 backnotprop/plannotator v0.19.3 release
Follow @plannotator on X for updates
Missed recent releases?

Release | Highlights
---|---
v0.19.2 | Stacked PR review, source line numbers in feedback, diff type dialog re-show, ghost dot removal, docs cleanup
v0.19.1 | Hook-native annotation, custom base branch, OpenCode workflow modes, quieter plan diffs, anchor navigation
v0.19.0 | Code Tour agent, GitHub-flavored Markdown, copy table as Markdown/CSV, flexible Pi planning mode, session-log ancestor-PID walk
v0.18.0 | Annotate focus & wide modes, OpenCode origin detection, word-level inline plan diff, Markdown content negotiation, color swatches
v0.17.10 | HTML and URL annotation, loopback binding by default, Safari scroll fix, triple-click fix, release pipeline smoke tests
v0.17.9 | Hotfix: pin Bun to 1.3.11 for macOS binary codesign regression
v0.17.8 | Configurable default diff type, close button for sessions, annotate data loss fix, markdown rendering polish
v0.17.7 | Fix "fetch did not return a Response" error in OpenCode web/serve modes
v0.17.6 | Bun.serve error handlers for diagnostic 500 responses, install.cmd cache fix
v0.17.5 | Fix VCS detection crash when p4 not installed, install script cache path fix
v0.17.4 | Vault browser merged into Files tab, Kanagawa themes, Pi idle session tool fix
v0.17.3 | Sticky lane repo/branch badge overflow fix
What's New in v0.19.3
v0.19.3 makes feedback messages fully configurable and cleans up the stacked PR selector for teams working with long PR chains. Three PRs, one from an external contributor.
Configurable Feedback Messages
Every message Plannotator sends to your agent is now customizable through `~/.plannotator/config.json`. Plan approvals, plan denials, review approvals, review feedback suffixes, and annotation feedback all flow through a shared prompt pipeline with `{{variable}}` template interpolation.

The config supports generic overrides that apply to all runtimes, plus per-runtime overrides for cases where Claude Code, OpenCode, and Pi need different phrasing. A four-level resolution order (runtime-specific, generic, runtime built-in default, global default) means you can be as granular or as broad as you want. Users who don't touch the config get identical behavior to previous versions.
This started with @oorestisime's PR adding configurable review approval prompts (#561), which was then expanded to cover all 17 hardcoded feedback strings across the hook, OpenCode, and Pi integrations (#627). The full pipeline includes 72 tests (55 unit, 17 integration) covering template resolution, config merging, backward compatibility, and end-to-end disk-to-output flow.
A new documentation page at Custom Feedback walks through the config format, available template variables, and a context-anchoring pattern contributed by @aviadshiber.
- #561 by @oorestisime, closing #558
- #627 by @backnotprop, closing #624
Hide Merged PRs in Stacked PR Selector
When reviewing a long chain of stacked PRs, merged PRs would show up alongside open ones in the stack tree and PR selector. For teams that iterate through a stack over several sessions, this made it harder to see which PRs still needed review.
A "Hide merged" toggle now appears in both the stack tree popover and the PR selector dropdown. When enabled, merged PRs are removed from the list and a summary count shows how many are hidden. When visible, merged PRs appear dimmed with a strikethrough title and a "merged" badge, and they're not clickable. The toggle state persists via cookie across sessions. Tree indentation was also tightened to 2px per level to prevent horizontal overflow on deep stacks (10+ nodes).
- #626 by @backnotprop, closing #625
Install / Update
macOS / Linux: `curl -fsSL https://plannotator.ai/install.sh | bash`

Windows: `irm https://plannotator.ai/install.ps1 | iex`

Claude Code Plugin: Run `/plugin` in Claude Code, find plannotator, and click "Update now".

OpenCode: Clear cache and restart: `rm -rf ~/.bun/install/cache/@plannotator`

Then in `opencode.json`: `{ "plugin": ["@plannotator/opencode@latest"] }`

Pi: Install or update the extension: `pi install npm:@plannotator/pi-extension`
What's Changed
- feat(review): add configurable approval prompts by @oorestisime in #561
- feat(review): hide/de-emphasize merged PRs in stacked PR selector by @backnotprop in #626
- feat(feedback): configurable plan, annotation, and review feedback by @backnotprop in #627
Contributors
@oorestisime filed #558 requesting commit-on-approve for code review sessions, then contributed #561 adding configurable review approval prompts. That PR seeded the broader feedback customization pipeline shipped in this release.
Community members whose issues shaped this release:
- @JohannesKlauss filed #624 requesting customizable feedback prompts for the build agent handoff
- @leoreisdias filed #625 requesting that merged PRs be hidden from the stacked PR selector, with a detailed description of the 10+ PR workflow that motivated the change
- @aviadshiber contributed a context-anchoring prompt pattern featured in the custom feedback documentation
Full Changelog: v0.19.2...v0.19.3 -
🔗 r/Yorkshire Collapsing Labour vote in Barnsley sees some choosing between Greens and Reform rss
submitted by /u/johnsmithoncemore
[link] [comments] -
🔗 r/wiesbaden Bernd Zehner deletes a third of the reviews of his restaurant (opened in February) rss
submitted by /u/Traditional_Face_984
[link] [comments]
-
- April 28, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-28 rss
IDA Plugin Updates on 2026-04-28
New Releases:
Activity:
- capa
- claude-of-alexandria
- fe1d2580: chore(deps-dev): bump the minor-and-patch group (#11)
- ida-domain
- ida-structor
- 141a4d46: feat: Add early stopping and ordered xref scanning for type validation
- mips_call_analyzer
- aeaecb84: init
- python-elpida_core.py
- 2f09280a: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T23:41Z
- 0466d82c: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T23:21Z
- 2216d956: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T22:57Z
- 57c73e44: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T22:33Z
- 295cf3f4: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T22:08Z
- 5cc39a47: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T21:43Z
- 80b56fe0: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T21:18Z
- 55613c14: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T20:52Z
- b45ffb00: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T20:25Z
- a4772cd4: Constitutional event: strip-fix restored PROCEED, A3 voice, P055 norm…
- scripts
- 9e0ee439: added script for c2 extraction from EchoGather
-
🔗 r/york My bike was stolen on campus west near courtyard on 26/4 between 7pm and 11pm rss
Any information would be greatly appreciated, as I need my bike for work.
submitted by /u/MidnightFar3298
[link] [comments] -
🔗 r/Leeds Wheelchair accessible taxi services rss
Hey everyone, I’m a full time wheelchair user from London. I have quadriplegic cerebral palsy so can’t walk at all. I’m looking to study electronic music production at Leeds Conservatoire in September of this year and have to travel up to Leeds for accommodation viewings on Thursday. I was wondering if anyone could give me some taxi companies that do/may provide wheelchair accessible taxi services with full ramp access?
Uber, at least in London is a bit hit and miss so that’s why I’m asking for taxi services rather than just using Uber. I also wanted to ask, is there a taxi rank at Leeds station and do they have wheelchair accessible vehicles there?
Thanks in advance and feel free to add any tips or experiences of travelling in Leeds as a wheelchair user. Even if you are able bodied, please let me know if there’s anything you think I should bear in mind while navigating the city in general.
Thanks again everyone!
submitted by /u/LORDLUK3
[link] [comments] -
🔗 @binaryninja@infosec.exchange To help us track down bugs faster, 5.3 introduces opt-in crash reporting. This mastodon
To help us track down bugs faster, 5.3 introduces opt-in crash reporting. This feature is disabled by default in paid versions and enabled by default in our free version. Either way, you can change the setting whenever you want. Details in our latest blog post: https://binary.ninja/2026/04/13/binary-ninja-5.3-jotunheim.html#crash-reporting
-
🔗 r/york Bees on Gillygate rss
Hi!
I don’t suppose anyone saw the swarm of bees all over Gillygate around the Tesco today?
Just wondered if anyone knows if it’s cleared up or what caused it?
This was about 13:45, and apparently they weren’t there in the morning.
submitted by /u/SadAndGloomy
[link] [comments] -
🔗 badlogic/pi-mono v0.70.6 release
New Features
- Cloudflare Workers AI provider support with `CLOUDFLARE_API_KEY` / `CLOUDFLARE_ACCOUNT_ID` setup. See docs/providers.md#api-keys. (#3851 by @mchenco)
- Pi update checks now use `pi.dev` and identify Pi with a `pi/<version>` user agent. See docs/packages.md. (#3877 by @mitsuhiko)
Added
- Added Cloudflare Workers AI as a built-in provider with `CLOUDFLARE_API_KEY` / `CLOUDFLARE_ACCOUNT_ID` setup, default model resolution, `/login` support, and provider documentation (#3851 by @mchenco).
Changed
- Changed Pi version checks to identify Pi with a `pi/<version>` user agent (#3877 by @mitsuhiko).
Fixed
- Fixed config selector scroll indicators to show item counts instead of line counts (#3820 by @aliou).
- Fixed exported HTML to escape embedded image data and session metadata, preventing crafted session content from injecting markup (#3819 by @justinpbarnett, #3883 by @justinpbarnett).
- Fixed Bun-based package manager startup by locating global `node_modules` relative to Bun's install layout (#3861 by @thirtythreeforty).
- Fixed Bedrock inference profile capability checks by normalizing profile ARNs to the underlying model name.
- Fixed file discovery to fall back to `fdfind` when `fd` is unavailable.
- Fixed `pi update` to skip self-update reinstalls when the installed version is already current (#3853).
- Fixed Cloudflare Workers AI attribution headers to honor the install telemetry setting.
- Fixed `pi update --self` detection and execution for Windows package-manager shim installs, including symlinked global package roots, and print the manual fallback command when self-update fails (#3857).
-
🔗 r/reverseengineering Building a perfect clone of 1993 game SimTower (via RE) rss
submitted by /u/scatematica
[link] [comments] -
🔗 r/LocalLLaMA Something from Mistral (Vibe) tomorrow rss
Model(s) or tool upgrade / new tool? Source tweet: https://xcancel.com/mistralvibe/status/2049147645894021147#m
submitted by /u/pmttyji
[link] [comments] -
🔗 r/Yorkshire Looking for a Lost Super Street Fighter 2 Arcade Cabinet (Sheffield/Yorkshire – early 2000s) rss
I’m trying to track down an arcade cabinet I used to play in the early 2000s, and I’m hoping someone in Yorkshire might know its current location.
Between 2002–2004, I regularly played a Super Street Fighter 2 machine in a takeaway called Pizza Metro on London Road in Sheffield.
Details I remember:
- Small black cabinet
- Dragon symbol on the side (green or possibly yellow)
- Standard 6-button layout (Street Fighter style, diagonal)
- One joystick was slightly larger than the other (not sure which side)
- It was Super Street Fighter 2 (not Super Turbo — not the version with Akuma)
I used to play it a lot during a brief period living in Sheffield about 23 years ago, so it's quite nostalgic for me.
Around 2005, the shop returned the cabinet to the arcade vendor they rented it from, and the vendor later sold it to someone else. I managed to contact the vendor at the time, but they couldn't remember who it was sold to.
Ideally, I’d be interested in buying the cabinet if it still exists. However, if it’s not for sale, I’d really just like to confirm the exact joystick and button setup.
If someone believes they’ve found the right machine, I’m happy to:
Confirm from clear photos/videos and arrange to see it in person to verify details.
I’m offering £100 for a solid, verifiable lead (e.g. correct cabinet identification, owner info, or confirmed hardware details.
If anyone remembers this cabinet, knows the vendor, or has any leads at all, I’d really appreciate it. I know it's a long shot but I've decided to try anyway.
submitted by /u/goldstand
[link] [comments] -
-
🔗 Locklin on science Bouncing droplet “quantum mechanics” rss
I was always a fan of de Broglie and Bohm’s “pilot wave” idea. This is a fully deterministic theory of quantum mechanics which physicists don’t like because “le hidden variables” (also it isn’t yet relativistic I guess). The original pilot wave idea didn’t work out because de Broglie couldn’t calculate scattering cross sections, though Bohm […]
-
🔗 r/Leeds nightclub interview?? rss
Hey guys! I have an interview for a bartender position at Backrooms nightclub tomorrow and I’ve never had an interview in a club but I really wanna work there bc I love the whole vibe of clubs and want to get into bartending. What kind of things do they ask you for these roles?? If anyone has any personal experience too it would be massively appreciated
submitted by /u/WhereasFar9745
[link] [comments] -
🔗 r/reverseengineering How I reverse-engineered a SQLite WAL database inside a VS Code extension - custom merge engine, header byte patching, and protobuf decoding without a schema rss
submitted by /u/PangolinConfident163
[link] [comments] -
🔗 r/york Does anyone know if there is an update regarding foss islands chimney? rss
I noticed the temporary fencing now looks to be permanent, which is a shame - it was a handy shortcut to Halfords and vice versa!
submitted by /u/UnhingedSerialKiller
[link] [comments] -
🔗 r/reverseengineering AI solved our CTF in 6min rss
submitted by /u/eshard-cybersec
[link] [comments] -
🔗 r/LocalLLaMA meantime on r/vibecoding rss
words of wisdom
submitted by /u/jacek2023
[link] [comments] -
🔗 r/LocalLLaMA Qwen 3.6 27B BF16 vs Q4_K_M vs Q8_0 GGUF evaluation rss
Evaluated Qwen 3.6 27B across BF16, Q4_K_M, and Q8_0 GGUF quant variants with llama-cpp-python using Neo AI Engineer. Benchmarks used:
- HumanEval: code generation
- HellaSwag: commonsense reasoning
- BFCL: function calling
Total samples:
- HumanEval: 164
- HellaSwag: 100
- BFCL: 400
Results:

BF16
- HumanEval: 56.10% 92/164
- HellaSwag: 90.00% 90/100
- BFCL: 63.25% 253/400
- Avg accuracy: 69.78%
- Throughput: 15.5 tok/s
- Peak RAM: 54 GB
- Model size: 53.8 GB
Q4_K_M
- HumanEval: 50.61% 83/164
- HellaSwag: 86.00% 86/100
- BFCL: 63.00% 252/400
- Avg accuracy: 66.54%
- Throughput: 22.5 tok/s
- Peak RAM: 28 GB
- Model size: 16.8 GB
Q8_0
- HumanEval: 52.44% 86/164
- HellaSwag: 83.00% 83/100
- BFCL: 63.00% 252/400
- Avg accuracy: 66.15%
- Throughput: 18.0 tok/s
- Peak RAM: 42 GB
- Model size: 28.6 GB
What stood out: Q4_K_M looks like the best practical variant here. It keeps BFCL almost identical to BF16, drops about 5.5 points on HumanEval, and is still only 4 points behind BF16 on HellaSwag. The tradeoff is pretty good:
- 1.45x faster than BF16
- 48% less peak RAM
- 68.8% smaller model file
- nearly identical function calling score
Q8_0 was a bit underwhelming in this run. It improved HumanEval over Q4_K_M by ~1.8 points, but used 42 GB RAM vs 28 GB and was slower. It also scored lower than Q4_K_M on HellaSwag in this eval. For local/CPU deployment, I would probably pick Q4_K_M unless the workload is heavily code-generation focused. For maximum quality, BF16 still wins.

Evaluation setup:
- GGUF via llama-cpp-python
- n_ctx: 32768
- checkpointed evaluation
- HumanEval, HellaSwag, and BFCL all completed
- BFCL had 400 function calling samples
This evaluation was done using Neo AI Engineer, which built the GGUF eval setup, handled checkpointed runs, and consolidated the benchmark results. I manually reviewed the outcome as well. Complete case study with benchmarking results, approach, and code snippets mentioned in the comments below 👇
submitted by /u/gvij
[link] [comments] -
🔗 backnotprop/plannotator v0.19.2 release
Follow @plannotator on X for updates
Missed recent releases?

Release | Highlights
---|---
v0.19.1 | Hook-native annotation, custom base branch, OpenCode workflow modes, quieter plan diffs, anchor navigation
v0.19.0 | Code Tour agent, GitHub-flavored Markdown, copy table as Markdown/CSV, flexible Pi planning mode, session-log ancestor-PID walk
v0.18.0 | Annotate focus & wide modes, OpenCode origin detection, word-level inline plan diff, Markdown content negotiation, color swatches
v0.17.10 | HTML and URL annotation, loopback binding by default, Safari scroll fix, triple-click fix, release pipeline smoke tests
v0.17.9 | Hotfix: pin Bun to 1.3.11 for macOS binary codesign regression
v0.17.8 | Configurable default diff type, close button for sessions, annotate data loss fix, markdown rendering polish
v0.17.7 | Fix "fetch did not return a Response" error in OpenCode web/serve modes
v0.17.6 | Bun.serve error handlers for diagnostic 500 responses, install.cmd cache fix
v0.17.5 | Fix VCS detection crash when p4 not installed, install script cache path fix
v0.17.4 | Vault browser merged into Files tab, Kanagawa themes, Pi idle session tool fix
v0.17.3 | Sticky lane repo/branch badge overflow fix
v0.17.2 | Supply-chain hardening, sticky toolstrip and badges, overlay scrollbars, external annotation highlighting, Conventional Comments
What's New in v0.19.2
v0.19.2 adds stacked PR review, source line numbers in exported feedback, and several UX fixes. Five PRs, one from a first-time contributor.
Code Review
Stacked PR Review
Reviewing a PR that belongs to a stack used to mean reviewing it in isolation. You could see the diff for that one branch, but not how it fit into the larger chain. Switching to a different PR in the stack meant closing the review and starting a new session.
Stacked PR review keeps you in a single session across every PR in the stack. A stack tree popover shows the full chain with clickable navigation. Each PR gets its own worktree checkout, so switching PRs recomputes the diff against the correct base without mixing changes between layers. Two scope modes let you toggle between viewing a single PR's changes (layer) and all accumulated changes from the default branch (full-stack).
Multi-PR posting lets you submit review feedback to multiple PRs at once. A confirmation dialog shows exactly where comments will go before posting to GitHub or GitLab, with parallel submission and partial-failure retry. Annotations from full-stack diffs can't be mapped to a single PR's line numbers, so they're surfaced as copyable markdown rather than silently dropped.
A new "Branch" option in the default diff type setting (and first-run dialog) gives users who work primarily with committed changes a one-click default.
- #620 by @backnotprop
Source Line Numbers in Exported Feedback
When Claude received annotation feedback, it got the block content and the highlighted text but had no way to locate the annotation in the source file. For large documents with repeated headings or similar paragraphs, this ambiguity forced extra round-trips.
Exported annotations now include source line numbers. Single-line blocks show `(line 42)`, multi-line blocks show `(lines 10–14)`. Code blocks account for fence lines when computing ranges. Files with YAML frontmatter are offset-corrected so line numbers match the original file, not the parsed output.

For converted content (HTML files rendered through Turndown, URLs fetched via Jina Reader), the feedback includes a caveat that line numbers refer to the converted markdown rather than the original source. When viewing a linked HTML document within a plan, the conversion flag is derived per-document so mixed collections of markdown and HTML files each get the correct label.
- #623 by @backnotprop
UX
Diff Type Dialog Re-Presented
Many users who set up Plannotator before v0.17.8 never saw the "Committed" option (branch diff vs. the default branch) because the first-run dialog only showed at install time. Users were asking how to set committed changes as their default without realizing the option existed.
The dialog is now re-presented to existing users with clearer descriptions, a wider layout with a 60/40 split, and a hover-to-zoom preview of the toolbar dropdown. The dialog reminds users they can switch views anytime during a review. Existing preferences are preserved — this only re-shows the picker, it doesn't reset anyone's choice.
Options Menu Ghost Dot Removed
The pulsing notification dot on the Options menu was meant to flag new settings after an update. In practice, the dot appeared on every session and users couldn't figure out how to dismiss it. The entire new-settings-hint system has been removed. Settings changes are communicated through release notes instead.
Additional Changes
- Docs: toolbar inventory updated. Documentation references to "Insert" and "Replace" annotation types have been scrubbed to match the shipped UI, which uses Delete, Comment, Quick Label, Looks Good, Global Comment, and Copy. — #618 by @vxio, closing #617
- Docs: OpenCode plugin configuration. Clarified plugin setup instructions for OpenCode users. — commit `33f409a`
Install / Update
macOS / Linux: `curl -fsSL https://plannotator.ai/install.sh | bash`

Windows: `irm https://plannotator.ai/install.ps1 | iex`

Claude Code Plugin: Run `/plugin` in Claude Code, find plannotator, and click "Update now".

OpenCode: Clear cache and restart: `rm -rf ~/.bun/install/cache/@plannotator`

Then in `opencode.json`: `{ "plugin": ["@plannotator/opencode@latest"] }`

Pi: Install or update the extension: `pi install npm:@plannotator/pi-extension`
What's Changed
- feat: stacked PR review — PR switching, scope toggling, multi-PR posting by @backnotprop in #620
- feat(plan,annotate): include source line numbers in exported feedback by @backnotprop in #623
- docs: scrub Insert/Replace from docs to match shipped UI by @vxio in #618
- fix: remove ghost dot on Options menu (new-settings-hint system) by @backnotprop in commit `7ab2d8f`
- fix: re-show diff type setup dialog with clearer options and toolbar hint by @backnotprop in commits `aaad89e`, `03d4e8b`
New Contributors
Community
@vxio noticed the docs still referenced Insert and Replace annotation types that were removed from the UI, filed #617, and contributed the fix in #618. First contribution to the project.
Full Changelog: v0.19.1...v0.19.2 -
🔗 r/Leeds Firstbus app update shenanigans rss
If you use the Firstbus app for tickets, be warned, they are rolling out an update. The update has gone so well that they have a banner on the website pointing to a separate FAQ specifically for the update with a big list of reasons why you will probably have to call them to get access to your tickets...
https://www.firstbus.co.uk/help-support/help-and-support/first-bus-app-update
submitted by /u/awesomeweles
[link] [comments] -
🔗 r/reverseengineering Example structure for evidence-based vulnerability reports rss
submitted by /u/RoutineWeary6823
[link] [comments] -
🔗 r/LocalLLaMA Duality of r/LocalLLaMA rss
submitted by /u/HornyGooner4402
[link] [comments] -
🔗 r/reverseengineering DeepZero - Automated Vulnerability Research rss
submitted by /u/watchdogsrox
[link] [comments] -
🔗 r/LocalLLaMA I'm done with using local LLMs for coding rss
I think I gave it a fair shot over the past few weeks, forcing myself to use local models for non-work tech asks. I use Claude Code at my job, so that's what I'm comparing to.
I used Qwen 27B and Gemma 4 31B, which are considered the best local models short of the multi-hundred-billion ones. I also tried multiple agentic apps. My verdict is that the loss of productivity is not worth the advantages.
I'll give a brief overview of my main issues.
Shitty decision-making and tool-calls
This is a big one. Claude seems to read my mind in most cases, but Qwen 27B makes me give it the Carlo Ancelotti eyebrow more often than not. The LLM just isn't proceeding how I would proceed.
I was mainly using local LLMs for OS/Docker tasks. Is this considered much harder than coding or something?
To give an example, tasks like " Here's a Github repo, I want you to Dockerize it." I'd expect any dummy to follow the README's instructions and execute them. (EDIT: full prompt here: https://reddit.com/r/LocalLLaMA/comments/1sxqa2c/im_done_with_using_local_llms_for_coding/oiowcxe/ )
Issues like having a 'docker build' that takes longer than the default timeout, which sends them on unrelated follow-ups (as if the task failed), instead of checking if it's still running. I had Qwen try to repeat the installation commands on the host (also Ubuntu) to see what happens. It started assuming "it must have failed because of torchcodec" just like that, pulling this entirely out of its ass, instead of checking output.
I tried to meet the models half-way. Having this in AGENTS.md: " If you run a Docker build command, or any other command that you think will have a lot of debug output, then do the following: 1. run it in a subagent, so we don't pollute the main context, 2. pipe the output to a temporary file, so we can refer to it later using tail and grep." And yet twice in a row I came back to a broken session with 250k input tokens because the LLM is reading all the output of 'docker build' or 'docker compose up'.
I know there are huge AGENTS.md files that treat the LLM like a programmable robot, giving it long elaborate protocols because they don't expect it to have decent self-guidance. I didn't try those, tbh. And tbh none of them go into details like not reading the output of 'docker build'. I stuck to the default prompts of the agentic apps I used, plus a few guidelines in my AGENTS.md.
Performance
Not only are the LLMs slow, but no matter which app I'm using, the prompt cache frequently seems to break. Translation: long pauses where nothing seems to happen.
For Claude Code specifically, this is made worse by the fact that it doesn't print the LLM's output to the user. It's one of the reasons I often preferred Qwen Code. It's very frustrating when not only is the outcome looking bad, but I'm not getting rapid feedback.
I'm not learning anything
Other than changing the URL of the Chat Completions server, there's no difference between using a local LLM and a cloud one, just more grief.
There's definitely experience to be gained learning how to prompt an LLM. But I think coding tasks are just too hard for the small ones; it's like playing a game on Hardcore. I'm looking for a sweet spot on the learning curve, and this is just not worth it.
What now
For my coding and OS stuff, I'm gonna put some money on OpenRouter and exclusively use big boys like Kimi. If one model pisses me off, move on to the next one. If I find a favorite, I'll sign up to its yearly plan to save money.
I'll still use small local models for automation, basic research, and language tasks. I've had fun writing basic automation skills/bots that run stuff on my PC, and these will always be useful.
I also love using local LLMs for writing or text games. Speed isn't an issue there, the prompt cache's always being hit. Technically you could also use a cloud model for this too, but you'd be paying out the ass because after a while each new turn is sending like 100k tokens.
Thanks for reading my blog.
submitted by /u/dtdisapointingresult
[link] [comments] -
🔗 Jessitron Communication is hard, but sometimes I can fix it. rss
We used to type code to tell the computer what to do. When that got tedious, we made libraries and functions until the code was more communicative.
Now I type English words to tell the agent what to tell the computer what to do. Sometimes that gets tedious, and then I need to find new ways to make it easier.
Here’s an example.
Iterating could be easier. The work: I'm getting Claude to build a program that turns Claude conversation logs into a vertical HTML comic. As we iterate on this, I ask it a lot of questions about the output. This way, I learn something about the problem domain (how Claude Code records conversations). And then I get it to tweak the output to my liking. In the example above, I wondered where the Background command "Start dev server on alternate ports" notification came from, so I asked Claude how I could know. To ask it, I had to cut and paste the text from the HTML, and then Claude had to grep the HTML to see what I was talking about, and also grep the JSONL to find the input. What if later, a very similar message appeared? It couldn't tell exactly what I was talking about. I can't just point to the UI.
This wasn't the first time I struggled to refer to a panel in the comic. This time, my frustration served as an alarm: do something about it, Jess. There has to be a better way to tell it which panel I'm talking about.
When communication gets difficult, that’s a signal. I can change this.
So I made it make a way to point to the UI.
In this case, I asked Claude to add a reference tag to each panel. The reference tag for each panel contains the line number (that was its idea) and filename (that was my idea) of the JSONL line represented by this panel. I push ‘r’ to toggle whether these reference tags show (my idea). When I click one, the value is copied (its idea).

Now I can ask the same question more succinctly: How can I find out where episode-8-before:L63 came from?
Claude understood and added a hover effect that highlights the originating bash tool call.

That hover effect is OK; I used it a few times. Those reference tags are gold! I've used them a dozen times already, and development is smoother for it. Claude can find the panel I’m talking about quickly both in the input JSONL and the output HTML. Our communication is streamlined.
This was a great idea. Iterating is much easier now!
I am in the loop and on the loop.
There are (at least) two feedback loops running here. One is the development loop, with Claude doing what I ask and then me checking whether that is indeed what I want. Here, I’m a human in the loop with the AI. This works well since we’re prototyping, learning the domain and discovering what output I want.
Then there’s a meta-level feedback loop, the “is this working?” check when I feel resistance. Frustration, tedium, annoyance: these feelings are a signal to me that maybe this work could be easier. I step back and think about how the AI could work more accurately and smoothly. Annie Vella called this the “middle loop,” and Kief Morris renamed it "human on the loop."
Here, I’m both in the development loop with the AI, and I’m “on the loop” as a thoughtful collaborator, smoothing the development loop when it gets rough.
Resistance will be assimilated.
As developers using software to build software, we have the potential to mold our own work environment. With AI making software change superfast, changing our program to make debugging easier pays off immediately. Also, this is fun!
-
🔗 r/wiesbaden Eiserne Hand on a Vespa rss
A short and simple question for the moped/scooter riders.
My girlfriend has to commute to Taunusstein and is considering switching to a scooter.
Hence my question:
Can a small 50cc Vespa/moped make it up the Eiserne Hand? At a reasonable speed, that is?
Has any of you done this before?
Thanks in advance for the answers :)
submitted by /u/metaldog
[link] [comments] -
🔗 r/Leeds best tuna melt paninis? rss
i’m craving a tuna melt really badly right now and i’m in the city centre for lunch tomorrow and want to get something good. does anyone have any recommendations? cheese, tuna, and toasted panini bread is all i need right now 🙏
submitted by /u/Shoddy_Day
[link] [comments] -
🔗 Mitchell Hashimoto Ghostty Is Leaving GitHub rss
(empty) -
🔗 Armin Ronacher Before GitHub rss
GitHub was not the first home of my Open Source software. SourceForge was.
Before GitHub, I had my own Trac installation. I had Subversion repositories, tickets, tarballs, and documentation on infrastructure I controlled. Later I moved projects to Bitbucket, back when Bitbucket still felt like a serious alternative place for Open Source projects, especially for people who were not all-in on Git yet.
And then, eventually, GitHub became the place, and I moved all of it there.
It is hard for me to overstate how important GitHub became in my life. A large part of my Open Source identity formed there. Projects I worked on found users there. People found me there, and I found other people there. Many professional relationships and many friendships started because some repository, issue, pull request, or comment thread made two people aware of each other.
That is why I find what is happening to GitHub today so sad and so disappointing. I do not look at it as just the folks at Microsoft making product decisions I dislike. GitHub was part of the social infrastructure of Open Source for a very long time. For many of us, it was not merely where the code lived; it was where a large part of the community lived.
So when I think about GitHub's decline, I also think about what came before it, and what might come after it. I have written a few times over the years about dependencies, and in particular about the problem of micro dependencies. In my mind, GitHub gave life to that phenomenon. It was something I definitely did not completely support, but it also made Open Source more inclusive. GitHub changed how Open Source feels, and later npm and other systems changed how dependencies feel. Put them together and you get a world in which publishing code is almost frictionless, consuming code is almost frictionless, and the number of projects in the world explodes.
That has many upsides. But it is worth remembering that Open Source did not always work this way.
A Smaller World
Before GitHub, Open Source was a much smaller world. Not necessarily in the number of people who cared about it, but in the number of projects most of us could realistically depend on.
There were well-known projects, maintained over long periods of time by a comparatively small number of people. You knew the names. You knew the mailing lists. You knew who had been around for years and who had earned trust. That trust was not perfect, and the old world had plenty of gatekeeping, but reputation mattered in a very direct way. We took pride (and got frustrated) when the Debian folks came and told us our licensing stuff was murky or the copyright headers were not up to snuff, because they packaged things up.
A dependency was not just a package name. It was a project with a history, a website, a maintainer, a release process, a lot of friction, and often a place in a larger community. You did not add dependencies casually, because the act of depending on something usually meant you had to understand where it came from.
Not all of this was necessarily intentional, but because these projects were comparatively large, they also needed to bring their own infrastructure. Small projects might run on a university server, and many of them were on SourceForge, but the larger ones ran their own show. They grouped together into larger collectives to make it work.
We Ran Our Own Infrastructure
My first Open Source projects lived on infrastructure I ran myself. There was a Trac installation, Subversion repositories, tarballs, documentation, and release files served from my own machines or from servers under my control. That was normal. If you wanted to publish software, you often also became a small-time system administrator. Georg and I ran our own collective for our Open Source projects: Pocoo. We shared server costs and the burden of maintaining Subversion and Trac, mailing lists and more.
Subversion in particular made this "running your own forge" natural. It was centralized: you needed a server, and somebody had to operate it. The project had a home, and that home was usually quite literal: a hostname, a directory, a Trac instance, a mailing list archive.
When Mercurial and Git arrived, they were philosophically the opposite. Both were distributed. Everybody could have the full repository. Everybody could have their own copy, their own branches, their own history. In principle, those distributed version control systems should have reduced the need for a single center. But despite all of this, GitHub became the center.
That is one of the great ironies of modern Open Source. The distributed version control system won, and then the world standardized on one enormous centralized service for hosting it.
What GitHub Gave Us
It is easy now to talk only about GitHub's failures, of which there are currently many, but that would be unfair: GitHub was, and continues to be, a tremendous gift to Open Source.
It made creating a project easy and it made discovering projects easy. It made contributing understandable to people who had never subscribed to a development mailing list in their life. It gave projects issue trackers, pull requests, release pages, wikis, organization pages, API access, webhooks, and later CI. It normalized the idea that Open Source happens in the open, with visible history and visible collaboration. And it was an excellent and reasonable default choice for a decade.
But maybe the most underappreciated thing GitHub did was archival work: GitHub became a library. It became an index of a huge part of the software commons because even abandoned projects remained findable. You could find forks, and old issues and discussions all stayed online. For all the complaints one can make about centralization, that centralization also created discoverable memory. The leaders there once cared a lot about keeping GitHub available even in countries that were sanctioned by the US.
I know what the alternative looks like, because I was living it. Some of my earliest Open Source projects are technically still on PyPI, but the actual packages are gone. The metadata points to my old server, and that server has long stopped serving those files.
That was normal before the large platforms. A personal domain expired, a VPS was shut down, a developer passed away, and with them went the services they paid for. The web was once full of little software homes, and many of them are gone.¹
npm and the Dependency Explosion
The micro-dependency problem was not just that people published very small packages. The hosted infrastructure of GitHub and npm made it feel as if there was no cost to create, publish, discover, install, and depend on them.
In the pre-GitHub world, reputation and longevity were part of the dependency selection process almost by necessity, and it often required vendoring. Plenty of our early dependencies were just vendored into our own Subversion trees by default, in part because we could not even rely on other services being up when we needed them and because maintaining scripts that fetched them, in the pre-API days, was painful. The implied friction forced some reflection, and it resulted in different developer behavior. With npm-style ecosystems, the package graph can grow faster than anybody's ability to reason about it.
The problems this frictionless model created also meant that solutions had to be found along the way. GitHub helped compensate for the accountability problem, and it helped with licensing. At one point, the newfound influx of developers and merged pull requests left a lot of open questions about what the state of licenses actually was; GitHub even attempted to rectify this through its terms of service.
The thinking for many years was that if I am going to depend on some tiny package, I at least want to see its repository. I want to see whether the maintainer exists, whether there are issues, whether there were recent changes, whether other projects use it, whether the code is what the package claims it is. GitHub became part of the system that provides trust, and more recently it has even become one of the few systems that can publish packages to npm and other registries with trusted publishing.
That means when trust in GitHub erodes, the problem is not isolated to source hosting. It affects the whole supply chain culture that formed around it.
GitHub Is Slowly Dying
GitHub is currently losing some of what made it feel inevitable. Maybe that's just the life and death of large centralized platforms: they always disappoint eventually. Right now people are tired of the instability, the product churn, the Copilot AI noise, the unclear leadership, and the feeling that the platform is no longer primarily designed for the community that made it valuable.
Obviously, GitHub also finds itself in the midst of the agentic coding revolution, and that puts enormous pressure on the folks over there. But the site has no leadership! It's a miracle that things are going as well as they are.
For a while, leaving GitHub felt like a symbolic move mostly made by smaller projects or by people with strong views about software freedom. I definitely cringed when Zig moved to Codeberg! But I now see people with real weight and signal talking about leaving GitHub. The most obvious one is Mitchell Hashimoto, who announced that Ghostty will move. Where it will move is not clear, but it's a strong signal. But there are others, too. Strudel moved to Codeberg and so did Tenacity. Will they cause enough of a shift? Probably not, but I find myself on non-GitHub properties more frequently again compared to just a year ago.
One can argue that this is good: it is healthy for Open Source to stop pretending that one company should be the default home of everything. Git itself was designed for a world with many homes.
Dispersion Has a Cost
Going back to many forges, many servers, many small homes, and many independent communities will increase decentralization, and in many ways it will force systems to adapt. This can restore autonomy and make projects less dependent on the whims of Microsoft leadership. It can also allow different communities to choose different workflows. What's happening in Pi's issue tracker currently is largely a result of GitHub's product choices not working in the present-day world of Open Source. It was built for engagement, not for maintainer sanity.
It can also make the web forget again. I quite like software that forgets because it has a cleansing element. Maybe the real risk of loss will make us reflect more on actually taking advantage of a distributed version control system.
But if projects move to something more akin to self-hosted forges, to their own self-hosted Mercurial or cgit servers, we run the risk of losing things that we don't want to lose. The code might be distributed in theory, but the social context often is not. Issues, reviews, design discussions, release notes, security advisories, and old tarballs are fragile. They disappear much more easily than we like to admit. Mailing lists, which carried a lot of this in earlier years, have not kept up with the needs of today, and are largely a user experience disaster.
We Need an Archive
As much as I like the idea of things fading out of existence, we absolutely need libraries and archives.
Regardless of whether GitHub is here to stay or projects find new homes, what I would like to see is some public, boring, well-funded archive for Open Source software. Something with the power of an endowment or public funding to keep it afloat. Something whose job is not to win the developer productivity market but just to make sure that the most important things we create do not disappear.
The bells and whistles can be someone else's problem, but source archives, release artifacts, metadata, and enough project context to understand what happened should be preserved somewhere that is not tied to the business model or leadership mood of a single company.
GitHub accidentally became that archive because it became the center of Open Source activity. Once that no longer holds, we should not assume some magic archival function will emerge or that GitHub will continue to function as such. We have already seen what happens when project homes are just personal servers and good intentions, and we have seen what happened to Google Code and Bitbucket.
I hope GitHub recovers, I really do, in part because a lot of history lives there and because the people still working on it inherited something genuinely important. But I no longer think it is responsible to let the continued memory of Open Source depend on GitHub remaining a healthy product.
The world before GitHub had more autonomy and more loss, and in some ways, we're probably going to move back there, at least for a while. Whatever people want to start building next should try to keep the memory and lose the dependence. It should be easier to move projects, easier to mirror their social context, easier to preserve releases, and harder for one company's drift to become a cultural crisis for everyone else.
I do not want to go back to the old web of broken tarball links and abandoned Trac instances. I also do not want Open Source to pretend that the last twenty years were normal or permanent. GitHub wrote a remarkable chapter of Open Source, and if that chapter is ending, the next one should learn from it and also from what came before.
1. This is also a good reminder that we rely so very much on the Internet Archive for many projects of the time. ↩
-
- April 27, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-27 rss
IDA Plugin Updates on 2026-04-27
Activity:
- binsync
- 7ccbd7cc: Fix documentation links (#520)
- capa
- 87f0970d: Update README with dynamic capa heading (#3060)
- ida-hcli
- python-elpida_core.py
- 3829ddf5: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T23:54Z
- 12409dce: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T23:32Z
- 4a279d04: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T23:09Z
- 00d970a9: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T22:45Z
- af252e8e: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T22:25Z
- 75dca59f: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T21:59Z
- 4d2465d4: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T21:36Z
- 06a6c379: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T21:11Z
- 31158572: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T20:43Z
- 811516f3: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T20:19Z
-
🔗 r/Leeds Scam companies to avoid rss
I will attach pictures showing what to look out for. Additionally, be careful of any promising high pay. These people compliment you and essentially groom you into an extremely low-wage, door-to-door sales job whilst promising greater things, e.g. quick career progression.
submitted by /u/Fit-Librarian5590
[link] [comments] -
🔗 r/LocalLLaMA Microsoft Presents "TRELLIS.2": An Open-Source, 4B-Parameter, Image-To-3D Model Producing Up To 1536³ PBR Textured Assets, Built On Native 3D VAEs With 16× Spatial Compression, Delivering Efficient, Scalable, High-Fidelity Asset Generation. rss
TRELLIS.2 is a state-of-the-art large 3D generative model (4B parameters) designed for high-fidelity image-to-3D generation. It leverages a novel "field-free" sparse voxel structure termed O-Voxel to reconstruct and generate arbitrary 3D assets with complex topologies, sharp features, and full PBR materials.
Link to the Paper:
Link to the Code:
Link to Try Out A Live Demo:
submitted by /u/44th--Hokage
[link] [comments]
---|--- -
🔗 badlogic/pi-mono v0.70.5 release
Fixed
- Fixed HTML export preserving ANSI-renderer trailing padding as extra blank wrapped lines.
-
🔗 badlogic/pi-mono v0.70.4 release
Fixed
- Fixed packaged `pi` startup failing because the session selector imported a source-only utility path.
-
🔗 r/york Where do parents buy baby/child car seats now that Paul Stride has closed? rss
Where nearby is good for buying car seats? You don't know what you've got until it's gone; Paul Stride was amazing, and we now need a replacement for our 3-year-old.
submitted by /u/amusedfridaygoat
[link] [comments] -
🔗 MetaBrainz MusicBrainz Server update, 2026-04-27 rss
This release mostly consists of a very substantial rewrite of the external links editor code, to make that section of our editors more efficient. While doing that, we also fixed a few long-standing links editor bugs. We kept this code in beta for quite a while so the community could help us catch most new bugs, but do not hesitate to report any issues you might find.
A new release of MusicBrainz Docker is also available that matches this update of MusicBrainz Server. See the release notes for update instructions.
Thanks to rinsuki for having contributed to the code. Thanks to fabe56, HibiscusKazeneko and Lioncat6 for having reported bugs and suggested improvements. Thanks to Besnik, DenilsonSama, Khaled Salama, Marc Riera, ShimiDoki, Vaclovas Intas, cerberuzzz, coldified_, dddrnzv, dulijuong_artist, imgradeone, karpuzikov, mfmeulenbelt, salo.rock, smreo1590, syntariavoxmortem, wileyfoxyx and yyb987 for updating the translations. And thanks to all others who tested the beta version!
The git tag is v-2026-04-27.0.
Fixed Bug
- [MBS-8570] - "This relationship already exists" error message does not go away when one duplicate URL is removed
- [MBS-12032] - Adding a duplicate URL rel moves link to new section
- [MBS-14307] - Wikipedia extracts are not displaying
- [MBS-14309] - Can't click documentation/help links
Improvement
- [MBS-14279] - Support Amazon Belgium links
- [MBS-14280] - Block archive.today, archive.is, archive.ph, archive.li, archive.fo, archive.md and archive.vn links
-
🔗 badlogic/pi-mono v0.70.3 release
New Features
- `pi update` can now update pi itself in addition to installed pi packages. See docs/packages.md. (#3680 by @mitsuhiko)
- Azure Cognitive Services endpoint support for Azure OpenAI Responses deployments. See docs/providers.md#api-keys. (#3799 by @marcbloech)
- Suppressible Anthropic extra-usage billing warning via `warnings.anthropicExtraUsage` in `/settings`. See docs/settings.md. (#3808)
- Extension-controlled working row visibility via `ctx.ui.setWorkingVisible()`, allowing extensions to hide the built-in loader row and render custom working state. See docs/extensions.md and examples/extensions/border-status-editor.ts. (#3674)
Added
- Added `pi update` support for updating pi itself in addition to installed pi packages (#3680 by @mitsuhiko).
- Added Azure Cognitive Services endpoint support for Azure OpenAI Responses base URLs (#3799 by @marcbloech).
- Added `warnings.anthropicExtraUsage` and a `/settings` warnings submenu to suppress the Anthropic extra usage billing warning (#3808).
- Added `ctx.ui.setWorkingVisible()` so extensions can hide the built-in interactive working loader row without reserving layout space, plus a border-status editor example that moves working state into a custom editor border (#3674).
Fixed
- Fixed duplicate printable characters from Kitty keyboard protocol CSI-u plus raw character input on layouts such as Italian (#3780).
- Fixed API-key environment discovery and Bun startup to fall back to `/proc/self/environ` when Bun's sandbox leaves `process.env` empty (#3801 by @mdsjip).
- Fixed Bun sandboxed package-manager commands when `process.env` is empty (#3807 by @mdsjip).
- Fixed symlinked packages, resources, skills, and sessions being duplicated in selectors and loaders (#3818 by @aliou).
- Fixed Bedrock prompt-caching and adaptive-thinking capability checks for inference profile ARNs (#3527 by @anirudhmarc).
- Fixed OpenAI Codex Responses default verbosity to `low` when no verbosity is specified.
- Stopped sending empty `tools` arrays to providers that reject them when tools are disabled (#3650 by @HQidea).
- Fixed Anthropic SSE parsing to ignore unknown proxy events such as OpenAI-style `done` terminators (#3708).
- Fixed provider registration with override-only `models.json` entries to preserve built-in model lists (#3651).
- Fixed `/login` to show auth supplied by `models.json` provider definitions.
- Fixed HTML export whitespace around extension-rendered tool output and expandable output hints.
- Fixed bash executor temp output streams leaking file descriptors when output was truncated by line count (#3786)
- Fixed extension `pi.setSessionName()` updates to refresh the interactive terminal title immediately (#3686)
- Fixed `/tree` cancellation via `session_before_tree` leaving the session stuck in compaction state (#3688)
- Fixed Escape interrupt handling when extensions hide the built-in working loader row (#3674)
- Fixed coding-agent test expectations for current default models and missing-auth guidance.
- Fixed long local-LLM SSE streams aborting at 5 minutes with `UND_ERR_BODY_TIMEOUT` by disabling undici `bodyTimeout`/`headersTimeout` on the global dispatcher; provider SDKs continue to enforce their own deadlines via `retry.provider.timeoutMs` (#3715)
-
🔗 Simon Willison Tracking the history of the now-deceased OpenAI Microsoft AGI clause rss
For many years, Microsoft and OpenAI's relationship has included a weird clause saying that, should AGI be achieved, Microsoft's commercial IP rights to OpenAI's technology would be null and void. That clause appeared to end today. I decided to try and track its expression over time on openai.com.
OpenAI, July 22nd 2019 in Microsoft invests in and partners with OpenAI to support us building beneficial AGI (emphasis mine):
OpenAI is producing a sequence of increasingly powerful AI technologies, which requires a lot of capital for computational power. The most obvious way to cover costs is to build a product, but that would mean changing our focus. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.
But what is AGI? The OpenAI Charter was first published in April 2018 and has remained unchanged at least since this March 11th 2019 archive.org capture:
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.
Here's the problem: if you're going to sign an agreement with Microsoft that is dependent on knowing when "AGI" has been achieved, you need something a little more concrete.
In December 2024 The Information reported the details (summarized here outside of their paywall by TechCrunch):
Last year’s agreement between Microsoft and OpenAI, which hasn’t been disclosed, said AGI would be achieved only when OpenAI has developed systems that have the ability to generate the maximum total profits to which its earliest investors, including Microsoft, are entitled, according to documents OpenAI distributed to investors. Those profits total about $100 billion, the documents showed.
So AGI is now whenever OpenAI's systems are capable of generating $100 billion in profit?
In October 2025 the process changed to being judged by an "independent expert panel". In The next chapter of the Microsoft–OpenAI partnership:
The agreement preserves key elements that have fueled this successful partnership—meaning OpenAI remains Microsoft’s frontier model partner and Microsoft continues to have exclusive IP rights and Azure API exclusivity until Artificial General Intelligence (AGI). [...]
Once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel. [...]
Microsoft’s IP rights to research, defined as the confidential methods used in the development of models and systems, will remain until either the expert panel verifies AGI or through 2030, whichever is first.
OpenAI on February 27th, 2026 in Joint Statement from OpenAI and Microsoft:
AGI definition and processes are unchanged. The contractual definition of AGI and the process for determining if it has been achieved remains the same.
OpenAI today, April 27th 2026 in The next phase of the Microsoft OpenAI partnership (emphasis mine):
- Microsoft will continue to have a license to OpenAI IP for models and products through 2032. Microsoft’s license will now be non-exclusive.
- Microsoft will no longer pay a revenue share to OpenAI.
- Revenue share payments from OpenAI to Microsoft continue through 2030, independent of OpenAI’s technology progress, at the same percentage but subject to a total cap.
As far as I can tell "independent of OpenAI’s technology progress" is a declaration that the AGI clause is now dead. Here's The Verge coming to the same conclusion: The AGI clause is dead.
My all-time favorite commentary on OpenAI's approach to AGI remains this 2023 hypothetical by Matt Levine:
And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 r/york Askham Tesco recycling rss
Does anyone know when the big cardboard recycling skip gets emptied? It's been full for weeks now and is in a state
submitted by /u/Isla_Nooblar
[link] [comments] -
🔗 @binaryninja@infosec.exchange The debugger got some real love in our latest update. Hardware breakpoints and mastodon
The debugger got some real love in our latest update. Hardware breakpoints and conditional breakpoints have both landed, and the new debug adapters make things faster and more reliable across a range of workflows. Read more from the latest blog: https://binary.ninja/2026/04/13/binary-ninja-5.3-jotunheim.html#debugger
-
🔗 r/LocalLLaMA MIMO V2.5 PRO rss
submitted by /u/Namra_7
[link] [comments] -
🔗 r/reverseengineering rfcat-py3 rss
submitted by /u/qucrypt
[link] [comments] -
🔗 r/wiesbaden Anyone fancy driving to a concert in Cologne (Aries) with me this Wednesday? I'll pay for the ticket rss
I (21M) live near Wiesbaden and am going to a concert in Cologne this Wednesday. The artist is called Aries and leans in the direction of indie/pop/rock/hip-hop (here's a taste). I'm really looking forward to it. My only problem is that I don't have a car, and with public transport I wouldn't get home until around 6 a.m.
If someone takes me along (there and back), I'd pay for the ticket plus €20 for fuel. So if you're up for something like that, message me within the next 24 hours.
Edit: if you have other ideas about what I should do if this doesn't work out, I'm all ears. My current backup plan is the outbound trip with BlaBlaCar, then walking through the crowd at the concert holding a cardboard sign:
Köln -> Frankfurt
Anybody?
submitted by /u/BullfrogMiserable554
[link] [comments] -
🔗 r/york Thinking of buying a Persimmon new build home in Selby. There’s so many mixed reviews about this company. Was wondering on people’s experiences with this company. rss
submitted by /u/Stumbling_Gecko_473
[link] [comments] -
🔗 r/york Big group for breakfast / brunch rss
Can anyone recommend somewhere for breakfast or brunch please? 15 of us in total.
submitted by /u/sheffieldpud
[link] [comments] -
🔗 r/LocalLLaMA Luce DFlash: Qwen3.6-27B at up to 2x throughput on a single RTX 3090 rss
Hey fellow Llamas, your time is precious, so I'll keep it short. We built a GGUF port of DFlash speculative decoding. Standalone C++/CUDA stack on top of ggml, runs on a single 24 GB RTX 3090, hosts the new Qwen3.6-27B. We call it Luce DFlash (https://github.com/Luce-Org/lucebox-hub; MIT): ~1.98x mean speedup over autoregressive on Qwen3.6 across HumanEval / GSM8K / Math500, with zero retraining (z-lab published a matched Qwen3.6-DFlash draft on 2026-04-26, still under training, so AL should keep climbing). If you have CUDA 12+ and an NVIDIA GPU (RTX 3090 / 4090 / 5090, DGX Spark, other Blackwell, or Jetson AGX Thor with CUDA 13+), all you need is

```
# After cloning the repo (link in the first comment):
cd lucebox-hub/dflash
cmake -B build -S . -DCMAKE_BUILD_TYPE=Release
cmake --build build --target test_dflash -j
# Fetch target (~16 GB)
huggingface-cli download unsloth/Qwen3.6-27B-GGUF Qwen3.6-27B-Q4_K_M.gguf --local-dir models/
# Matched 3.6 draft is gated: accept terms + set HF_TOKEN first
huggingface-cli download z-lab/Qwen3.6-27B-DFlash --local-dir models/draft/
# Run
DFLASH_TARGET=models/Qwen3.6-27B-Q4_K_M.gguf python3 scripts/run.py --prompt "def fibonacci(n):"
```

That's it. No Python runtime in the engine, no llama.cpp install, no vLLM, no SGLang. The binary links libggml*.a and never libllama. Luce DFlash will

- Load Qwen3.6-27B Q4_K_M target weights (~16 GB) plus the matched DFlash bf16 draft (~3.46 GB) and run DDTree tree-verify speculative decoding (block size 16, default budget 22, greedy verify).
- Compress the KV cache to TQ3_0 (3.5 bpv, ~9.7x vs F16) and roll a 4096-slot target_feat ring so 256K context fits in 24 GB. Q4_0 is the legacy path and tops out near 128K.
- Auto-bump the prefill ubatch from 16 to 192 for prompts past 2048 tokens (~913 tok/s prefill on 13K prompts).
- Apply sliding-window flash attention at decode (default 2048-token window, 100% speculative acceptance retained) so 60K context still decodes at 89.7 tok/s instead of 25.8 tok/s.
- Serve over an OpenAI-compatible HTTP endpoint or a local chat REPL (see the client sketch below).
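Since the engine exposes an OpenAI-compatible HTTP endpoint (last bullet above), any stock OpenAI client should be able to talk to it. Here is a minimal Python sketch; the base URL, port, and model id are illustrative assumptions, not documented defaults of the project:

```python
# Minimal sketch of a client for Luce DFlash's OpenAI-compatible server.
# The base_url, port, and model id are assumptions; check the repo docs
# for the actual defaults before relying on them.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed host/port
    api_key="unused",                     # local server; no auth expected
)

resp = client.chat.completions.create(
    model="qwen3.6-27b",                  # assumed model id
    messages=[{"role": "user", "content": "def fibonacci(n):"}],
    # Per the post, temperature/top_p are accepted but ignored
    # (greedy verify only), so sampling knobs have no effect here.
)
print(resp.choices[0].message.content)
```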
Running on RTX 3090, Qwen3.6-27B UD-Q4_K_XL (unsloth Dynamic 2.0) target, 10 prompts/dataset, n_gen=256:
| Bench | AR tok/s | DFlash tok/s | AL | Speedup |
|---|---|---|---|---|
| HumanEval | 34.90 | 78.16 | 5.94 | 2.24x |
| Math500 | 35.13 | 69.77 | 5.15 | 1.99x |
| GSM8K | 34.89 | 59.65 | 4.43 | 1.71x |
| Mean | 34.97 | 69.19 | 5.17 | 1.98x |

As you can see, the speedup is real on consumer hardware, not a paper number. The target graph produces bit-identical output to autoregressive in AR mode; the draft graph matches the z-lab PyTorch reference at cos sim 0.999812. Q4_0 KV costs ~3% AL at short context (8.56 to 8.33) and wins at long context where F16 won't fit anyway. Constraints: CUDA only, greedy verify only (temperature/top_p on the OpenAI server are accepted and ignored), no Metal / ROCm / multi-GPU. The repo started single-3090; recent community PRs added support for RTX 5090, DGX Spark / GB10, other Blackwell cards, and Jetson AGX Thor (sm_110 + CUDA 13). Feedback more than welcome!

submitted by /u/sandropuppo
[link] [comments] -
🔗 r/Leeds Problem neighbours rss
We have a house of multiple occupancy next door to our house which has adjoining garages. One of the garages is rented out to someone who does not live in any of the nearby houses and just rents the garage. This garage is in very frequent use by the guy renting it, who is habitually working on his car, or multiple cars, with groups of noisy people, dragging equipment around and using power tools weekend after weekend whenever the weather is good. We have a lovely quiet area apart from when this guy and his cohort show up, and they don't even live here.
Is there any department in LCC we can contact to get help with this? It is starting to really affect our quality of life and put us off spending time in our own garden, and I imagine it is affecting other neighbours too. Or does anyone know how I can find out who owns the property next door?
Imagine if every Sunday it was like having a mechanic's / building site going full tilt all afternoon. It's amazing how thoughtless people can be.
Thanks
submitted by /u/sanchez599
[link] [comments] -
🔗 r/Leeds Best pub chips in Leeds rss
Looking for the best pub chips in Leeds. Must be CHUNKY chips, strictly NO fries. Include pics if poss. Countryside areas preferred (to pair with a walk)
TIA 🥔🥔🥔🥔
submitted by /u/Educational_Clue7522
[link] [comments] -
🔗 r/Leeds Going to Leeds city centre tomorrow what’s the best club to go to for jungle D&B? rss
What would be best for a Tuesday night?
submitted by /u/No_Excitement678
[link] [comments] -
🔗 r/reverseengineering Using Google's Gemma 4 E4B local AI model to Reverse Engineer a simple Crackme rss
submitted by /u/CatAffectionate6618
[link] [comments] -
🔗 r/Leeds Gym friend rss
Hey everyone,
I’m looking for a gym partner to train with regularly. Ideally someone who can spot me on certain lifts and help with general accountability.
I’m 26M and work in the city centre. I’m planning to join either The Edge or PureGym at the Merrion Centre. My main focus is building overall strength and improving general health, so it would be great to find someone with similar goals.
My preferred training times are:
Weekdays: after 6pm (or possibly before 8am)
Weekends: flexible
I’m relatively new—trained consistently for about 6 months last year but fell out of the routine, so I’m keen to get back into it properly. If you already have a workout plan you’re following, I’d be happy to tag along.
My main goal right now is improving my bench press, along with bodyweight exercises like pull-ups.
submitted by /u/CraftyBrie
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync plugin-repository.json rss
sync plugin-repository.json No plugin changes detected -
🔗 r/Yorkshire Fuel costs soar 65% for Yorkshire Air Ambulance rss
submitted by /u/Kagedeah
[link] [comments] -
🔗 r/Harrogate Has the gentrification of Bilton begun? rss
Lots of new movers, young and from Leeds. Will this lead to businesses popping up supporting their tastes? The Knox is pricier than some town center spots already!
submitted by /u/MechanicAggressive16
[link] [comments] -
🔗 sacha chua :: living an awesome life 2026-04-27 Emacs news rss
There was a big discussion on lobste.rs about people's favourite Emacs packages and that sparked similar conversations on Reddit and HN. Discussions like that are a great source of inspiration. I added a couple of small improvements to my config based on this week's Emacs news, like diff-hl.
Also, lots of people expressed their appreciation for Chris Wellons, who is moving on to other editors for now. Me, I've enjoyed using simple-httpd, impatient, and skewer, and I'm glad Chris made and shared them. Many of his packages already have new maintainers, and the rest are up for adoption. Perhaps we'll see him around again someday!
- Help wanted:
- Upcoming events (iCal file, Org):
- Emacs Berlin: Emacs-Berlin Hybrid Meetup https://emacs-berlin.org/ Wed Apr 29 1000 America/Vancouver - 1200 America/Chicago - 1300 America/Toronto - 1700 Etc/GMT - 1900 Europe/Berlin - 2230 Asia/Kolkata – Thu Apr 30 0100 Asia/Singapore
- M-x Research: TBA https://m-x-research.github.io/ Fri May 1 0800 America/Vancouver - 1000 America/Chicago - 1100 America/Toronto - 1500 Etc/GMT - 1700 Europe/Berlin - 2030 Asia/Kolkata - 2300 Asia/Singapore
- Emacs.si (in person): Emacs.si meetup #5 2026 (v #živo) https://dogodki.kompot.si/events/b4192df7-3da4-41b8-95a3-532b93923656 Mon May 4 1900 CET
- EmacsATX: Emacs Social https://www.meetup.com/emacsatx/events/314341747/ Thu May 7 1600 America/Vancouver - 1800 America/Chicago - 1900 America/Toronto - 2300 Etc/GMT – Fri May 8 0100 Europe/Berlin - 0430 Asia/Kolkata - 0700 Asia/Singapore
- Atelier Emacs Montpellier (in person) https://lebib.org/date/atelier-emacs Fri May 8 1800 Europe/Paris
- Other stuff:
- Sacha Chua: April 30 Yay Emacs: Sacha and Prot Talk Emacs - Newbies/Starter Kits (Prot)
- Battle of the Editors - Satellite Event - Tue Jun 30 4:30 PM Aachen, Seffenterweg 23 / Kopernikusstr. 6 (IT Center) for hackathon participants and guests
- Sacha Chua: May 4: Emacs Chat with Amin Bandali
- Emacs configuration:
- Emacs Lisp:
- What are some common code smells that inexperienced Elispers make?
- Dave Pearson: expando.el v1.6 - expand macro in a different window; fix keybinding
- Protesilaos: Emacs livestream: Maintaining Denote, TMR, and more (YouTube 3:06:05)
- Ideas for things to bind to C-z (@oantolin@mathstodon.xyz)
- Appearance:
- Navigation:
- Dave Pearson: itch.el v1.3.0 - switch to the scratch buffer
- Tip: repeat-map and expreg-expand (@plantarum@ottawa.place)
- The Definitive Guide to Code Folding in Emacs (Reddit, Irreal)
- Writing:
- Dave Pearson: blogmore.el v4.2 - cycle image extensions
- Dave Pearson: kbdify.el v1.0.0 - marking up keys in Markdown
- Denote:
- Org Mode:
- (emacs) org mode - your life in plain text (09:49)
- Spacemacs | Org-contacts Agenda Anniversaires | Productivité (02:22)
- How I use org-roam - The Universe of Joshua Blais
- Spacemacs | Org-roam Notes avec tags | Productivité (00:59)
- Import, export, and integration:
- Quick tutorial to get a blog online from Org mode thanks to Org Social | Andros Fenollosa (@andros@activity.andros.dev, in Spanish, @hispaemacs@fosstodon.org)
- Como colorear los bloques de código en Org-mode | Andros Fenollosa (2016, @hispaemacs@fosstodon.org)
- Code for org-edit-special, eglot, and Python (@anoncheg@mastodontech.de)
- Get ready for Orgy in 15 minutes — Bastien Guerry (Irreal, JC Helary) - static site generator
- Tony Zorman: Writing Literate Blog Posts
- Completion:
- Coding:
- Math:
- Shells:
- Web:
- Multimedia:
- AI:
- Community:
- Fortnightly Tips, Tricks, and Questions — 2026-04-21 / week 16
- Your sources for inspiration
- Sacha Chua: YE20 braindump: Emacs Carnival: Newbies/starter kits (YouTube, 1:03:50)
- Randy Ridenour: Emacs and the Sunk Cost Fallacy
- Emacs Philosophy and Infinite Depth with Protesilaos - The Universe of Joshua Blais (YouTube, 1:40:55)
- A month of Elisp · Perpetually Curious Blog
- Other:
- I made a TaskJuggler major mode for Emacs (Reddit)
- Charles Choi: Some nice to know keybindings when using the mouse in Emacs (Irreal)
- Marcin Borkowski: How I use my numeric keypad with Emacs Ledger mode
- anju v1.2: center and fill menus, edit - duplicate, look up; improve mouse interactions in Emacs (@kickingvegas@sfba.social)
- Rahul Juliato: Getting Emacs proced.el to Show CPU and Memory on macOS (Reddit)
- Emacs development:
- Re: About "prefixed-core" - Philip Kaludercic
- Add treesit-query-with-fallback
- New user option compilation-search-extra-path
- ; * etc/NEWS: Announce "setrgbf" and "setrgbb" terminfo capabilities
- Add language-environment and input methods for Syriac
- Rebind 'tab-bar-mouse-close-tab' from <down-mouse-2> to <mouse-2>
- Show executed tests from erts files via the ERT results buffer
- New packages:
- denote-wordcloud: Generate a word cloud (MELPA)
- dmsg: Timestamped debug messages with backtrace support (GNU ELPA)
- evil-ghostel: Evil-mode integration for ghostel (MELPA)
- mozc-modeless: Modeless Japanese input with Mozc (MELPA)
- org-lark: Export Lark docs to Org (MELPA)
- verdict: Generic test runner with treemacs results UI (MELPA)
- with-command-redo: Repeat commands with automatic undo (MELPA)
Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!
You can comment on Mastodon or e-mail me at sacha@sachachua.com.
-
🔗 r/Leeds Anyone looking for more Alt/Rock Friends? like going Key Club, Spoons, NQ64, Pixel Bar etc?.. Join our Alt/Rock/Emo Whatsapp Social Group! xo rss
Love Keyclub (Slamdunk, FUEL, GARAGE Clubnights), NQ64, Pixel Bar, Wetherspoons, Pubs etc but have a lack of alternative friends to go with? Just want to make more alternative friends, have fun chats & get involved in social events?
A few of us from Reddit, Facebook etc have banded together from previous appeals and have a new fun Whatsapp Alt/Rock/Emo Social Group chat now, 100+ members and counting!
We had a successful recruitment post on here a few months ago which blew up and got overwhelming, so we had to trickle people in, but there are too many to go through, so we're starting a fresh new post to add more people.
The group is roughly 18-35 age range & currently around 50/50 gender mix so plenty of people of different age/genders etc, very inclusive and everyone is getting on great together.
We have regular nights out, especially on weekends (Keyclub club nights, Spoons, bars, NQ64, Pixel Bar, Flight Club, cinema trips... anything fun really!), which can get anywhere from 10-15 people attending. Spoons & Key Club on Saturdays is a particular fave, but we are always planning social events, midweek chill things etc.
We also have a discord for chill voice chats & casual gaming etc.
If you'd like to join then leave a comment with your age/gender & I'll DM you an invite! all welcome
I will invite people in slowly so as to keep the ratio of ages, sexes etc balanced, so there's always people of a similar age etc.
Leave a comment & I'll DM an invite when available! x
PLEASE CHECK DMS FOR INVITES
submitted by /u/rmonkey100
[link] [comments] -
🔗 r/york Flowers make this city even better somehow🥹💐🪻 rss
submitted by /u/Wedding-Beauty
[link] [comments] -
🔗 r/Leeds Is this a scam job? rss
Has anyone had any experience of Pentagon Solutions claiming to be based in Mabgate Business Centre? The email (received after submitting my CV on Indeed) and website aren't very professional and I suspect it is another scam company. Can anyone confirm?
submitted by /u/eupatorium60
[link] [comments] -
🔗 r/LocalLLaMA To 16GB VRAM users, plug in your old GPU rss
For those who want to run the latest dense ~30b models with only 16GB of VRAM: if you have an old card with 6GB of VRAM or more, plug it in.
It matters that everything fits in VRAM, even split across two cards, and even if one of them is quite weak.
I have a 5070 Ti 16GB and an old 2060 6GB. The common idea is that you need two identical GPUs to maximize performance. But one day I was struck by the idea: why not give it a try?
Let's see: if you did not buy a motherboard just for LLMs, it's very possible you have one true PCI-E x16 slot and a couple that look like x16 but are actually wired as x4, just like me. That's a perfect slot for an old card.
16GB + 6GB = 22GB, which gets close to a 24GB-class card. If you have a better old card, lucky you!
Then you run llama-server with a config like this:
```
[*]
jinja = true
cache-prompt = true
n-gpu-layers = 999
no-mmap = true
mlock = false
np = 1
t = 0

[qwen/qwen3.6-27b]
model = ./Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf
mmproj = ./Qwen3.6-27B-GGUF/mmproj-Qwen3.6-27B-BF16.gguf
reasoning = on
dev = Vulkan1,Vulkan2
c = 128000
no-mmproj-offload = true
cache-type-k = q8_0
cache-type-v = q8_0
```

A couple of specific points:
- `dev = Vulkan1,Vulkan2` enables the two GPUs; run `llama-server.exe --list-devices` to see what you should set.
- `no-mmap = true` and `mlock = false` keep the model out of your RAM
- `np = 1`, `no-mmproj-offload` (or not supplying an mmproj model), and the `cache-type-k`/`cache-type-v` settings minimize the VRAM needed
- `n-gpu-layers = 999` prefers GPU offloading; this may be unnecessary, but I'd keep it
- `split-mode = layer` splits the layers asymmetrically across the devices; "layer" is the default, though, so you don't see it above.
- `c = 128000` could be a bit of a stretch, but it works well enough for me.

BTW, I also have an Intel integrated GPU that my monitors are plugged into, which is Vulkan0.
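For a rough sanity check that a model plus KV cache will fit across both cards before launching, here is a back-of-envelope sketch in Python. The model size and VRAM totals come from this post; the KV and compute-buffer numbers are loose assumptions, not measurements:

```python
# Back-of-envelope VRAM fit check for a two-GPU llama-server setup.
# model_gib and vram_gib are from the post; kv_gib and overhead_gib
# are rough assumptions; measure your own setup before trusting them.
model_gib = 15.40    # Qwen3.6-27B Q4_K_M file size
kv_gib = 4.0         # assumed q8_0 KV cache footprint at long context
overhead_gib = 1.5   # assumed compute buffers / scratch allocations
vram_gib = 16 + 6    # 5070 Ti + 2060, as in this post

needed_gib = model_gib + kv_gib + overhead_gib
verdict = "should fit" if needed_gib <= vram_gib else "will not fit"
print(f"~{needed_gib:.1f} GiB needed of {vram_gib} GiB total: {verdict}")
```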
Some numbers: at 128k max context and 71k actual context usage, pp = 186 t/s and tg = 19 t/s, which is quite a usable speed compared to the 4 t/s on a single card.
```
[56288] prompt eval time = 5761.53 ms / 1076 tokens (5.35 ms per token, 186.76 tokens per second)
[56288]        eval time = 58000.15 ms / 1114 tokens (52.06 ms per token, 19.21 tokens per second)
[56288]       total time = 63761.69 ms / 2190 tokens
[56288] slot release: id 0 | task 654 | stop processing: n_tokens = 71703, truncated = 0
```
Edit:
Some folks want numbers, so here is llama-bench, this time with CUDA instead. Runs with `--device CUDA0` are on a single GPU; runs without it use all GPUs. It's fairly clear that fitting on GPU, even partly on a second weak one, matters a lot for tg speed, especially at long context.
```
llama-b8948-bin-win-cuda-12.4-x64/llama-bench.exe \
  --model ./lmstudio-community/Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf \
  --device CUDA0 --fit-target 64 -d 8192,16384

| model                    | size      | params  | backend | ngl | dev   | fitt | test           | t/s            |
| ------------------------ | --------: | ------: | ------- | --: | ----- | ---: | -------------: | -------------: |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 | CUDA0 |   64 | pp512 @ d8192  | 903.13 ± 26.25 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 | CUDA0 |   64 | tg128 @ d8192  |   16.54 ± 0.14 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 | CUDA0 |   64 | pp512 @ d16384 |  663.60 ± 9.22 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 | CUDA0 |   64 | tg128 @ d16384 |   12.03 ± 0.08 |

llama-b8948-bin-win-cuda-12.4-x64/llama-bench.exe \
  --model ./lmstudio-community/Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf \
  --fit-target 64 -d 8192,16384

| model                    | size      | params  | backend | ngl | fitt | test           | t/s           |
| ------------------------ | --------: | ------: | ------- | --: | ---: | -------------: | ------------: |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 |   64 | pp512 @ d8192  | 769.00 ± 4.50 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 |   64 | tg128 @ d8192  |  25.40 ± 0.30 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 |   64 | pp512 @ d16384 | 668.83 ± 2.83 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 |   64 | tg128 @ d16384 |  24.31 ± 0.09 |

llama-b8948-bin-win-cuda-13.1-x64/llama-bench.exe \
  --model ./lmstudio-community/Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf \
  --device CUDA0 --fit-target 64 -d 8192,16384

| model                    | size      | params  | backend | ngl | dev   | fitt | test           | t/s            |
| ------------------------ | --------: | ------: | ------- | --: | ----- | ---: | -------------: | -------------: |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 | CUDA0 |   64 | pp512 @ d8192  | 981.43 ± 27.91 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 | CUDA0 |   64 | tg128 @ d8192  |   16.87 ± 0.17 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 | CUDA0 |   64 | pp512 @ d16384 | 751.15 ± 16.03 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 | CUDA0 |   64 | tg128 @ d16384 |   12.08 ± 0.12 |

llama-b8948-bin-win-cuda-13.1-x64/llama-bench.exe \
  --model ./lmstudio-community/Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf \
  --fit-target 64 -d 8192,16384

| model                    | size      | params  | backend | ngl | fitt | test           | t/s           |
| ------------------------ | --------: | ------: | ------- | --: | ---: | -------------: | ------------: |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 |   64 | pp512 @ d8192  | 807.61 ± 7.40 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 |   64 | tg128 @ d8192  |  24.85 ± 1.57 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 |   64 | pp512 @ d16384 | 732.96 ± 3.86 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA    |  99 |   64 | tg128 @ d16384 |  24.40 ± 0.07 |
```

submitted by /u/akira3weet
[link] [comments] -
🔗 r/Yorkshire Cherry trees colouring the world. rss
submitted by /u/Still_Function_5428
[link] [comments] -
🔗 r/Leeds Does anyone have spare beer bottles? rss
I am brewing my own beer and I need bottles, preferably brown. If you work in a pub and have empties, I can come and collect. My local only does alcohol-free bottles and doesn't sell many. Thanks
submitted by /u/DiligentPotential960
[link] [comments] -
🔗 tomasz-tomczyk/crit v0.10.1 release
What's Changed
Comments panel redesign
The comments panel has been rebuilt with a segmented filter (All / Open / Resolved) and collapsible groups. Pair it with the new "hide resolved comments" setting (`h` shortcut) to focus on what's still open during a review.
- feat: redesign comments panel with segmented filter and collapsible groups by @tomasz-tomczyk in #354 - thanks @omervk for suggestions in this area!
General
- feat: redesign disconnected state as a sticky banner by @tomasz-tomczyk in #347 - Thanks @vereis for inspiration!
- feat: add hide-resolved setting for inline comments by @tomasz-tomczyk in #353 - Thanks @vereis for the suggestion!
- feat: store CLI args in review file and include in share payload by @tomasz-tomczyk in #349
- feat: replace custom LCS word-diff with @sanity/diff-match-patch by @tomasz-tomczyk in #348
- fix: remove blur/scrim overlay from disconnected state by @tomasz-tomczyk in #352
- fix: fetch comment replies from crit-web during share sync by @tomasz-tomczyk in #350
- fix: hide TOC panel when buildToc is called with no headings by @tomasz-tomczyk in #360
- fix: clarify Hide resolved comments label in settings by @tomasz-tomczyk in #364
- fix: hide comment-line highlight when 'h' hides resolved comments by @tomasz-tomczyk in #365
- fix: collapse reply form after submit; auto-close empty comment forms by @tomasz-tomczyk in #366
- fix: preserve replies on fingerprint-matched comments + cleanup by @tomasz-tomczyk in #367
Internal refactors
- docs: update plugin install instructions to claude CLI syntax by @tomasz-tomczyk in #351
- docs: rule on cookies vs localStorage for persisted settings by @tomasz-tomczyk
- chore: remove releasing section from AGENTS.md by @tomasz-tomczyk
- chore: add Codecov integration for unit and e2e coverage by @tomasz-tomczyk in #359
- chore: Exclude vendored Go packages from coverage profile by @tomasz-tomczyk in #361
- test: add unit tests for high-value uncovered functions by @tomasz-tomczyk in #362
- test: add comprehensive tests for server handlers, session, auth, and daemon by @tomasz-tomczyk in #363
- chore: update GitHub Actions to latest versions, add dependabot by @tomasz-tomczyk in #355
- chore(deps-dev): bump stylelint from 17.7.0 to 17.9.0 by @dependabot in #356
- chore(deps): bump mermaid from 11.13.0 to 11.14.0 by @dependabot in #357
- chore(deps-dev): bump eslint from 10.2.0 to 10.2.1 by @dependabot in #358
- chore: copy mermaid 11.14.0 to frontend/ by @tomasz-tomczyk
- chore: add mise trust to wt.toml post-start and fix e2e-share rate limiting by @tomasz-tomczyk
- refactor: hide-resolved state, persist filter, restore switch CSS by @tomasz-tomczyk in #368
- refactor: port hook lifecycle and a11y fixes from crit-web for parity by @tomasz-tomczyk in #369
Full Changelog: v0.10.0...v0.10.1
🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.
submitted by /u/AutoModerator
[link] [comments] -
🔗 r/wiesbaden Need help with moving rss
Hey guys!
My girlfriend and I just moved to Wiesbaden for university (Daimlerstraße, 65197). I rented a van myself and drove all our stuff here to the new apartment. But now we have a problem: we can't get our washing machine from the van up to our apartment on the 4th floor.
Any suggestions, or is there maybe even someone with time on short notice to help us carry it up? Happy to compensate, of course!
Thanks a lot in advance!
submitted by /u/Orph3us_151
[link] [comments]
-