to read (pdf)
- I don't want your PRs anymore
- JitterDropper | OALABS Research
- DomainTools Investigations | DPRK Malware Modularity: Diversity and Functional Specialization
- EXHIB: A Benchmark for Realistic and Diverse Evaluation of Function Similarity in the Wild
- Neobrutalism components - Start making neobrutalism layouts today
- April 30, 2026
-
🔗 r/wiesbaden Found keys on Mainzer Str rss
submitted by /u/Wll-jiiz
[link] [comments] -
🔗 Evan Schwartz Scour - April Update rss
Hi friends,
In April, Scour scoured 778,059 posts from 25,790 feeds. This month, my focus was on ranking improvements and adding a number of new features:
🔃 Ranking Improvements
Scour is designed to find hidden gems that interest you, while trying to avoid using popularity signals or pigeonholing you into a narrow slice of content simply because you clicked on one thing (you can read the ranking philosophy here).
Your Scour feed now subtly adjusts based on which content you click on, like, or dislike. Interests whose related content you like will get a small boost, as well as posts from domains that you tend to like. This effect is intentionally subtle.
The feed is also much better now at balancing across your different interests. I revamped the way it does the final content selection to have an explicit diversification step that balances the feed based on your interests, the sources, and other criteria.
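The final-selection diversification step can be sketched as a round-robin over per-interest rankings. The function below is a hypothetical illustration, not Scour's actual code, and the real ranking also balances sources and other criteria:

```python
def diversify(ranked_by_interest: dict[str, list[str]], k: int) -> list[str]:
    """Round-robin across interests so no single topic dominates the top
    of the feed. Each pass takes at most one post per interest; exhausted
    interests drop out."""
    iters = [iter(posts) for posts in ranked_by_interest.values()]
    feed: list[str] = []
    while iters and len(feed) < k:
        survivors = []
        for it in iters:
            post = next(it, None)
            if post is not None:
                feed.append(post)
                survivors.append(it)
        iters = survivors
    return feed[:k]

# Without diversification, "ml" posts would fill all three top slots.
print(diversify({"ml": ["m1", "m2", "m3"], "rust": ["r1"]}, 3))
```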
↕️ Tap to Expand
Scour's interface has undergone a number of iterations this month. Now, you click or tap a post to expand it. The expanded view contains a short snippet from the post with a link to read more, as well as buttons to save, react, report it, etc.
📑 Saved Posts
Want to save an item to read for later? You can now save items, which is separate from liking them. Saved items are private and don't affect your feed's ranking at all. Also, Scour will occasionally resurface a couple of your saved items while you're browsing your feed so you can revisit things you might not have had time to read before.
📖 Reading Posts on Scour
You can read post summaries and some entire posts directly on Scour. Click on Read More, which is shown when you click on a post, to go to the post preview page. That page has better styling now, so it should be nicer to read. Plus, code blocks now get automatic syntax highlighting.
🍱 Browse Interests by Category
You can now browse popular interests by category. Technology is broken out into subcategories, or you can easily skip past it to find other topics like Science & Nature, Food & Cooking, Arts & Design, etc.
🌐 Post List by Domain
Clicking on a post's domain now brings you to a chronological list of all the posts from that site and, optionally, all the subdomains. You can easily block domains on that page if you don't want any of their content appearing in your feed, or just browse to see what else was published.
🔢 Pagination by Default
The default feed view switched from infinite scrolling to paginated. You can click the link at the bottom of the page to use infinite scroll, or toggle this in your settings.
🙏 Thanks
Thanks to Gordon McLean for the Scour mention in Why I Still Like the Internet!
And thanks to everyone whose feedback shaped the roadmap this month:
- Thanks to Qiang Huang for requesting an easier way to see the post preview!
- Thanks to Shane Sveller for lots of UI feedback and requesting the ability to block multiple subdomains!
- Thanks to Phil Eaton and Gordon McLean for pointing out that the footer was impossible to reach (it's now hidden completely when infinite scroll is enabled)! Thanks also to Phil for asking to see all posts from a domain!
- Thanks to u/goma_goma for suggesting adding Saved Posts!
- Thanks to Adam Benenson and Patrick Wadström for the feedback that led to the categorized interests view!
🔖 Some of My Favorite Posts
Here were some of my favorite posts that I found on Scour in April:
- TurboPuffer wrote an interesting blog post about efficiently merging recency and other numeric signals into lexical (BM25) scores for documents. I'm currently working on adding lexical scoring to Scour, so this was very timely for me: Mixing numeric attributes into text search for better first-stage relevance.
- On the topic of search, Doug Turnbull had a good post discussing Can agents replace the search stack? and Daniel Tunkelang wrote about using multiple documents to represent a search query in Distilling Retrieval Pipelines to a Single Embedding Model. I'm not switching Scour's architecture to either of these just yet, but they're interesting food for thought.
- I uninstalled Ollama, the tool for running local LLMs, after reading: Friends Don't Let Friends Use Ollama.
- This is a gem of a comment and historical tidbit in the SQLite source code that Avinash Sajjanshetty found while working on the Turso rewrite: SQLite prefixes its temp files with etilqs_.
- On the non-software front, this article makes an unfortunately compelling point: Iran didn’t have a nuclear weapon before this war. But you can see why it would develop one now.
For Rust developers, I also wrote up this blog post: Your Clippy Config Should Be Stricter.
Have ideas for how to make Scour better? Post them on the feedback board!
Happy Scouring!
- Evan
-
🔗 r/wiesbaden New job rss
I'm a QA/test automation engineer in the software development field, and I also have a legal background. Since I live in Wiesbaden (WI), I'm looking for something that suits me.
submitted by /u/NikolaBilbil
[link] [comments] -
🔗 r/Leeds Time to get real, Leeds rss
submitted by /u/loudribs
[link] [comments] -
🔗 tomasz-tomczyk/crit session-backup-3860575: fix: e2e — adjust range-mode tests for header redesign + popover changes release
Range-mode E2E suite had eight failures on CI, flagged against changes
made earlier in this branch. All are mismatches between the test
assertions (written for the original PR's design) and the redesigned
header chip + stack popover. The behavior is intentional; the tests are
updated to match.

- Restore the .stack-popover-default class on the root marker. The class
  was dropped as "dead CSS" since nothing styled it, but several tests
  rely on it as a stable selector for the non-interactive default-branch
  row. Adding it back is a visual no-op and gives the tests a documented
  anchor again.
- Update "current entry has reviewing marker" → "current entry is
  aria-current". The (reviewing) text marker was deliberately removed
  earlier; the brand-tinted background + aria-current="page" convey
  current state without the extra text. Tests now assert the role/aria
  signal instead.
- Rewrite "chip is hidden when stack has 0 or 1 entries" → "chip stays
  visible; popover shows no-stack placeholder". Page-load UX prefers
  immediate chip render (the label paints from focus data without
  waiting for /api/picker), so the chip is always visible in range mode.
  Empty stacks now render a "No surrounding stack" placeholder inside
  the popover.
- Migrate the scope-toggle-files test from #diffScopeToggle (the legacy
  diff-area-header bar) to the in-popover scope rows
  (#stackPopover [data-action="scope"]). The legacy bar is intentionally
  hidden; the toggle moved into the popover with one-line subcopy.
- Set showScope = true unconditionally in renderStackChip's "Compare
  against" section. The previous condition (focus.is_stacked ||
  !!focus.default_sha) hid the rows entirely when neither held, which
  was confusing for users of unstacked range mode. Now Layer is always
  rendered as the canonical default, and full-stack is rendered but
  disabled (with an explanatory title) when default_sha is missing.
Co-Authored-By: Claude Opus 4.7 (1M context) noreply@anthropic.com
-
-
🔗 r/Leeds Some snapshots I took last weekend rss
submitted by /u/waterflowingdown
[link] [comments] -
🔗 r/york Meeting new people rss
Where would be a good place to go to try and meet new people and make friends? I've been left in York on my lonesome and I wanted to try and change that, but no luck so far. Something within my age range would be nice (I'm 23).
submitted by /u/ChibiXenovia
[link] [comments] -
🔗 backnotprop/plannotator v0.19.4 release
Follow @plannotator on X for updates
Missed recent releases?

Release | Highlights
---|---
v0.19.3 | Configurable feedback messages, hide merged PRs in stacked PR selector
v0.19.2 | Stacked PR review, source line numbers in feedback, diff type dialog re-show, ghost dot removal, docs cleanup
v0.19.1 | Hook-native annotation, custom base branch, OpenCode workflow modes, quieter plan diffs, anchor navigation
v0.19.0 | Code Tour agent, GitHub-flavored Markdown, copy table as Markdown/CSV, flexible Pi planning mode, session-log ancestor-PID walk
v0.18.0 | Annotate focus & wide modes, OpenCode origin detection, word-level inline plan diff, Markdown content negotiation, color swatches
v0.17.10 | HTML and URL annotation, loopback binding by default, Safari scroll fix, triple-click fix, release pipeline smoke tests
v0.17.9 | Hotfix: pin Bun to 1.3.11 for macOS binary codesign regression
v0.17.8 | Configurable default diff type, close button for sessions, annotate data loss fix, markdown rendering polish
v0.17.7 | Fix "fetch did not return a Response" error in OpenCode web/serve modes
v0.17.6 | Bun.serve error handlers for diagnostic 500 responses, install.cmd cache fix
v0.17.5 | Fix VCS detection crash when p4 not installed, install script cache path fix
v0.17.4 | Vault browser merged into Files tab, Kanagawa themes, Pi idle session tool fix
What's New in v0.19.4
v0.19.4 is a review editor release. Six PRs bring a new diff type for reviewing entire repos, a code file viewer with syntax highlighting and annotation support, per-file quick-settings for diff display, and a hide-whitespace toggle that matches GitHub's `?w=1` behavior.

"All Files" Diff Type
Sometimes you want to review an entire repository, not just uncommitted changes. The new "All files" diff type diffs the empty tree against HEAD, showing every tracked file as an addition. This is useful when a repo has no working tree changes but you still want to launch a review, or when you want to browse the full codebase through the review UI.
The option appears in the diff type dropdown alongside the existing choices and can be set as the default preference in the setup dialog or config.
- #629 by @backnotprop
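The "diff the empty tree against HEAD" trick can be reproduced with plain git. This is an illustrative sketch (not plannotator's actual implementation) that builds a throwaway repo and lists every tracked file as an addition:

```python
import os
import subprocess
import tempfile

repo = tempfile.mkdtemp()

def git(*args: str) -> str:
    """Run a git command inside the throwaway repo and return stdout."""
    result = subprocess.run(
        ["git", "-C", repo, *args], capture_output=True, text=True, check=True
    )
    return result.stdout

git("init", "-q")
with open(os.path.join(repo, "file.txt"), "w") as f:
    f.write("hello\n")
git("add", "file.txt")
git("-c", "user.email=you@example.com", "-c", "user.name=you",
    "commit", "-q", "-m", "add file")

# git can derive the empty tree's object id portably:
empty_tree = git("hash-object", "-t", "tree", os.devnull).strip()

# Every tracked file appears with status "A" (added):
print(git("diff", "--name-status", empty_tree, "HEAD"))
```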
Code File Viewer with Syntax Highlighting and Annotations
Clicking a code file link (`.ts`, `.py`, `.go`, etc.) in a plan or annotated document now opens a read-only dialog with full syntax highlighting. The server runs `preloadFile` from `@pierre/diffs/ssr` to return pre-rendered HTML, so there's zero client-side Shiki cost and the file renders instantly.

The viewer supports line-level annotations: click the gutter button on any line, select a line range, or drag across text to add a comment. Annotations appear in the sidebar panel alongside prose annotations and are included in exported feedback. Draft recovery covers code file annotations as well, so nothing is lost if the server restarts.

Under the hood, the dialog uses a new reusable `PopoutDialog` component extracted from the table popout, which also fixes the table popout's missing backdrop blur.

- #634 by @backnotprop
Hide Whitespace
A new toggle suppresses whitespace-only changes in diffs, matching GitHub's `?w=1` / `git diff -w` behavior. When enabled, re-indentation, alignment padding, and interior whitespace changes are removed from the diff, leaving only substantive code changes visible. The implementation normalizes all whitespace runs to a single space before diffing, so interior whitespace changes like alignment shifts and extra spaces between tokens are correctly suppressed.

The toggle is available in the per-file quick-settings popover and in the global settings dialog. Default: off.
- #631 by @backnotprop
- #635 by @backnotprop
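The normalize-then-diff idea behind the toggle can be sketched in a few lines. This is an illustrative approximation, not plannotator's actual code:

```python
import re

def normalize_ws(line: str) -> str:
    """Collapse every whitespace run to one space and trim the ends, so
    whitespace-only edits compare equal (the same spirit as git diff -w)."""
    return re.sub(r"\s+", " ", line).strip()

# Re-indentation and alignment padding are whitespace-only changes:
assert normalize_ws("        x =   1") == normalize_ws("x = 1")
# Substantive changes still show up:
assert normalize_ws("x = 1") != normalize_ws("x = 2")
```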
Quick-Settings Popover
Each file header in the review editor now has a gear icon that opens a compact popover with all diff display options: style (split/unified), overflow mode, change indicators, inline diff granularity, line numbers, diff background colors, and hide whitespace. This provides quick per-file access to display options without opening the full settings dialog.
- #631 by @backnotprop
Additional Changes
- Gutter hover button fix: The `+` button for adding line comments wasn't rendering because Pierre updates hover state imperatively (no React re-render). The button now always renders and checks hover state at click time. Button styling was also updated to match Pierre's theme spec. — #630
- @pierre/diffs 1.1.20: Picks up split-view scroll fix, WorkerPool race condition fix, hydration fixes, CSS refactoring, and empty-file-as-deleted fix. — #630
- File tree expand toggle: The separate expand-all and collapse-all buttons are replaced with a single toggle. Shows the collapse action when all folders are expanded, otherwise shows expand. Disabled when the tree has no folders. — #633
Install / Update
macOS / Linux:

```
curl -fsSL https://plannotator.ai/install.sh | bash
```

Windows:

```
irm https://plannotator.ai/install.ps1 | iex
```

Claude Code Plugin: Run `/plugin` in Claude Code, find plannotator, and click "Update now".

OpenCode: Clear cache and restart:

```
rm -rf ~/.bun/install/cache/@plannotator
```

Then in `opencode.json`:

```
{ "plugin": ["@plannotator/opencode@latest"] }
```

Pi: Install or update the extension:

```
pi install npm:@plannotator/pi-extension
```
What's Changed
- feat(review): add "All files" diff type by @backnotprop in #629
- fix(review): gutter hover button not rendering + update @pierre/diffs to 1.1.20 by @backnotprop in #630
- feat(review): diff display options — hide whitespace + quick-settings popover by @backnotprop in #631
- feat(review): single file tree expand/collapse toggle by @backnotprop in #633
- feat(ui): code file viewer with syntax highlighting and annotations by @backnotprop in #634
- fix(review): hide-whitespace matches GitHub's git diff -w by @backnotprop in #635
Full Changelog :
v0.19.3...v0.19.4 -
🔗 r/reverseengineering Revealing NVIDIA Closed-Source Driver Command Streams for CPU-GPU Runtime Behavior Insight rss
submitted by /u/mttd
[link] [comments] -
🔗 Kagi release notes April 30th, 2025 - Kagi API preview and ecosystem updates rss
Kagi APIs: the same search technology that powers Kagi is opening up to
developers
Starting next week, we’ll begin onboarding developers to the Kagi API dashboard. Access will roll out first to people who joined the API waitlist or contacted Kagi support.
With the new Search API developers can bring Kagi Search into their own apps, tools, and AI systems. Here's an early look:

If you'd like to join this early preview of the Kagi API, please fill out this form. We'll reach out next week!
Kagi Search
New landing
We updated our landing page to bring awareness to Kagi's wider ecosystem beyond search. Check it out!
This is the first of many steps toward helping more people discover everything Kagi has to offer.
- IP address and subnet search to bring up the Wolfram Alpha answer #10147 @dronics
- Wrong Kagi Knowledge result for Mother's Day search #7086 @dreifach
- "1 lakh crore" returns confusing results #9050 @holdenr
- Custom assistant without internet access results in error #9876 @Thibaultmol
- "Sign up for free" link on Pricing page not working #10314 @Hanbyeol
- Disable Search Grouping in News Tab #10254 @dvdnet89
- Auto suggest gives results which trigger bangs improperly #5346 @LadyStrawberries
- Reverse image search returns primarily Russian and Russian-translated results #9111 @Jake-Moss
- Runway (the AI video generation company) got erased from search result #10369 @yanda
- Quick, direct access to "Set Kagi as default Search" instructions on your landing page (or close by). #6646 @ragnar
- Web search image preview does not match the actual image searches. Also the image results are not relevant at all. #10367 @StealthGirl
- Better UX for date calculator widget. #10282 @leftium
- Redirect to first result bang no longer working if preceded by a space #10385 @znmto
- Imgdata leaking into search results #10355 @Keli
- Free search quota never expires #10403 @afestein
- Ranking adjustment doesn't do anything when JavaScript is disabled. #10425 @SkyDotBit
- Advanced Search modal and scrollbar behavior #4509 @dix
Kagi Assistant
- We increased the Assistant's file upload size limit to 30 MB #8872 @mrzv
- Degradation of file analysis functionality in Kagi Assistant #10290 @v3max
- Umlauts are sometimes not displayed in the Quick Assistant #9289 @Kel
- Universal summarizer "Continue in Assistant" button fails: "We are sorry, this input is not supported. (Invalid Input)" #10368 @Self-Perfection
Kagi News
- Kagi News -> timeline ambiguous #8525 @yeri
- Story corrections, both from user reports and our own continuous fact-checking. When something turns out to be wrong, we fix it and show a small correction notice on the story, with the changed sentence highlighted on your next visit.
- Stories can pull in related coverage from other categories, so a single big story can span Science, World, and Tech when it makes sense.
- Cleaner prose in hard-news categories: fewer filler phrases, less editorializing, more neutral writing.
- Snappier all around: faster initial load, much faster story search, and browser back/forward now restores the page instead of reloading it.
- Custom category order syncs reliably across devices now. Fixed several cases where reorders were lost or overwritten.
- Category tabs use proper ARIA semantics for assistive tech.
Kagi Translate
- Keyboard shortcuts in Kagi Translate #10306 @mb
- Poor text formatting of image translations on Kagi Translate app #10016 @San
- Pinyin absent for alternative translations #10340 @phuertay
- Add Seto and Võro to Kagi Translate #10324 @mb
- Correct file extensions when saving translations #10311 @mb
- Add Montenegrin as an option in Translate #10230 @mb
- Pasting text in Translate app is hard #10047 @marty
- Pasted text from books or PDFs is auto-formatted: broken mid-sentence line breaks, hyphenation across lines, and stray whitespace get cleaned up. An undo toast lets you revert if you wanted the original.
- Auto-language switch now shows a toast with undo, and skips ambiguous cases like uncertain, mixed, or mid-typing input.
- Pin any language to the top of your list, including custom or non-standard ones.
- Romanization shown beneath alternative translations into Japanese, Chinese, Korean, Arabic, Russian, and other non-Latin scripts.
- Link previews (Open Graph) for translated text now show the actual translation when shared on social media, instead of a generic logo. The /extension page also got its own dedicated preview.
- New languages: Seto, Võro, Montenegrin, and Badini Kurdish (with both Arabic and Latin Hawar scripts).
- Formal Ukrainian now correctly capitalizes Ви and Ваш.
- Downloaded translations get the right file extension based on the detected content format.
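The pasted-text clean-up described above (re-joining hyphenated words, removing mid-sentence line breaks, stripping stray whitespace) can be approximated with a few regexes. This is an illustrative sketch, not Kagi's implementation:

```python
import re

def clean_pasted_text(text: str) -> str:
    # Re-join words hyphenated across a line break: "hyphen-\nated" -> "hyphenated"
    text = re.sub(r"(\w)-\n(\w)", r"\1\2", text)
    # Merge mid-sentence line breaks (no sentence-ending punctuation before them)
    text = re.sub(r"(?<=[^.!?:\n])\n(?=\S)", " ", text)
    # Collapse stray runs of spaces and tabs
    text = re.sub(r"[ \t]{2,}", " ", text)
    return text

src = "transla-\ntion tools clean\nup pasted  text."
print(clean_pasted_text(src))
```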
Post of the week

Follow us and tag us in your comments, we love hearing from you.
Kagi is growing
The team is expanding, and we're looking for talented people who want to help build a better web alongside us. We're hiring for multiple roles, including:
-
Product Designer (UI/UX) : Take strategic ownership of end-to-end design across Kagi's product ecosystem. Apply here.
-
An Education Partnerships Lead : If you believe the most important thing technology can do for students is teach them how to think for themselves, we'd like to talk. Apply here.
-
A Senior Platform Engineer : If you have strong opinions about API contracts, auth correctness, and migrating user data without losing anyone's trust, we'd like to talk. Apply here.
We also have openings for a Senior Search Engineer, Senior Platform Engineer, Senior Full-Stack Developer (Kagi Labs), and an AI Specialist. See the full list of openings here.
Kagi tip of the week 💡
Between AI-image filters, clickbait controls, reverse lookup, and source filters, there's a lot of power hiding behind the Images and Videos tabs. Here's how to get the most out of them.
Kagi art
Less scrolling, more living.

-
🔗 r/york Stork rss
Seen in Taddy by a gardener friend this week. Going to need a bigger bird box!
submitted by /u/yorangey
[link] [comments] -
🔗 roboflow/supervision supervision-0.28.0: CompactMask & SAM3 release
🔦 Spotlight
Memory-efficient masks with `sv.CompactMask`

Segmentation models produce one full-resolution bitmap per instance. On a 1920×1080 image with 28 detections that is ~55 MB of mask data, and most pixels are background. `sv.CompactMask` stores only the tight bounding-box crop, RLE-encoded — the same 28 masks drop to ~237 KB of crops, a 240× reduction before RLE kicks in.

It's a drop-in replacement: annotators, filters, and `area` all work unchanged.

```python
import supervision as sv

# any segmentation model — RF-DETR Seg, YOLO-Seg, SAM3
detections = model.predict(image)  # sv.Detections with dense masks
dense_mb = detections.mask.nbytes / 1024 / 1024

compact = sv.CompactMask.from_dense(
    masks=detections.mask,
    xyxy=detections.xyxy,
    image_shape=image.shape[:2],
)
detections.mask = compact  # swap in — API unchanged

# filter by pixel area without materialising dense masks
large = detections[compact.area > 1000]

# annotators call .to_dense() internally
annotated = sv.MaskAnnotator().annotate(image.copy(), detections)
```
SAM3 text-prompted segmentation
SAM3 segments objects by free-text prompt — no class list, no bounding boxes. `sv.Detections.from_sam3()` parses both PCS (multi-prompt) and PVS (video) response formats into a standard `sv.Detections`, with `class_id` set to the prompt index.

```python
import base64

import cv2
import requests
import supervision as sv

PROMPTS = ["person", "bag"]

with open("image.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

response = requests.post(
    f"https://api.roboflow.com/inferenceproxy/seg-preview?api_key={API_KEY}",
    json={
        "image": {"type": "base64", "value": img_b64},
        "prompts": [{"type": "text", "text": p} for p in PROMPTS],
    },
    headers={"Content-Type": "application/json"},
)
sam3_result = response.json()

h, w = cv2.imread("image.jpg").shape[:2]
detections = sv.Detections.from_sam3(sam3_result=sam3_result, resolution_wh=(w, h))
# class_id == 0 → "person", class_id == 1 → "bag"
```
🔄 Migration
`VideoInfo.fps` is now `float`

NTSC frame rates (23.976, 29.97, 59.94) were silently truncated. `fps` is now the true float — cast at call sites that need an integer.

Before:

```python
info = sv.VideoInfo.from_video_path("clip.mp4")
buf = collections.deque(maxlen=info.fps)
trace = sv.TraceAnnotator(trace_length=info.fps)
```

After:

```python
info = sv.VideoInfo.from_video_path("clip.mp4")
buf = collections.deque(maxlen=int(info.fps))
trace = sv.TraceAnnotator(trace_length=int(info.fps))
```

`sv.ByteTrack` deprecated — use `ByteTrackTracker`

Tracker implementations now live in the dedicated `trackers` package. `sv.ByteTrack` remains available in 0.28–0.29 with a `DeprecationWarning`; removal in 0.30.0.

Before:

```python
tracker = sv.ByteTrack()
detections = tracker.update_with_detections(detections)
```

After:

```python
# pip install trackers
from trackers import ByteTrackTracker

tracker = ByteTrackTracker()
detections = tracker.update(detections)
```
🚀 Added
- Memory-efficient masks with `sv.CompactMask`. Sparse segmentation masks are now stored as a crop region plus RLE-encoded data instead of full-resolution bitmaps, cutting memory use by 10–100× for typical instance-segmentation outputs. It's a drop-in change — `sv.Detections.mask`, filtering, merging, and `area` all keep working without materialising the full array. (#2159)
- SAM3 detection and PVS support in `from_inference`. `sv.Detections.from_inference` now parses SAM3 detection and point-video-segmentation outputs, both from the local `inference` package and from Roboflow-hosted server responses. (#2103, #2152)
- Compressed COCO RLE masks in `from_inference`. Inference responses with `rle` or `rle_mask` fields containing a compressed counts string (as produced by `pycocotools`) are decoded directly into binary masks, skipping the lossy polygon round-trip. (#2178)
- Standard `logging` module instead of `print`. Diagnostic output is now emitted under the `supervision` logger, so applications can capture, filter, or silence it through standard `logging` configuration. (#2154)
- RGBA hex codes in `sv.Color`. `sv.Color.from_hex` accepts 8-digit hex (`#ff00ff80`), and `Color.as_hex()` round-trips alpha when not fully opaque. New top-level helpers: `sv.hex_to_rgba`, `sv.rgba_to_hex`, and `sv.is_valid_hex`. (#2004)
- Dynamic kernel sizing in blur and pixelate annotators. `BlurAnnotator(kernel_size=None)` and `PixelateAnnotator(pixel_size=None)` (the new default) compute the kernel per detection as a fraction of the shorter bounding-box side, giving visually consistent results across object scales. (#709)
- `sv.ImageAssets` for sample images. A counterpart to the existing video assets — downloads sample images for examples and tutorials. (#932)
- Boundary warnings in `InferenceSlicer`. Emits a warning when callback detections fall outside tile boundaries, helping you spot coordinate-system bugs in custom callbacks early. (#2186)
⚠️ Breaking Changes
- `sv.VideoInfo.fps` is now `float`, not `int`. Frame rates like 23.976, 29.97, and 59.94 are no longer truncated. If you pass `fps` to APIs that require an integer (`deque(maxlen=...)`, `TraceAnnotator(trace_length=...)`), wrap with `int(...)`. (#2210)
- `sv.rle_to_mask` returns `bool`, not `uint8`. This matches the long-declared signature. Code that does `mask * 255` still works via NumPy type promotion, but explicit casts like `mask.view(np.uint8)` will break. Add `.astype(np.uint8)` if you relied on the undocumented integer output. (#2178)
See the migration section above for before/after snippets.
🌱 Changed
- Metric arrays use `float32` instead of `float64`. `sv.MeanAveragePrecisionResult` and related arrays (`mAP_scores`, `ap_per_class`, `iou_thresholds`, precision/recall) drop to `float32`, reducing memory and speeding up computation. Numerical results may differ in the last few digits. (#2169)
- `rle_to_mask` and `mask_to_rle` moved. New canonical path: `supervision.detection.utils.converters`. The old `supervision.dataset.utils` import still works but is deprecated. (#2178)
🗑️ Deprecated
- `normalized_xyxy` argument renamed to `xyxy` in `denormalize_boxes`. `sv.denormalize_boxes(normalized_xyxy=...)` still works but emits a `FutureWarning`; switch to `xyxy=`. Scheduled for removal in 0.30.0.
- `sv.ByteTrack` → `ByteTrackTracker` (external `trackers` package). Install with `pip install trackers`; the method renames from `update_with_detections()` to `update()`. Scheduled for removal in 0.30.0. (#2215)
- `supervision.keypoint` → `supervision.key_points`. Also deprecated: the `LMM` enum (use `VLM`), `from_lmm` (use `from_vlm`), `create_tiles` in `supervision.utils.image`, `ensure_cv2_image_for_processing` in `supervision.utils.conversion`, and the keypoint validators in `supervision.validators`. (#2214)
🔧 Fixed
- `PolygonZone` no longer double-counts overlapping zones. When two polygons contain the same anchor, each zone now reflects its own containment instead of every zone claiming the detection. (#1991)
- `LineZone` respects class identity across reused tracker IDs. Trackers that recycle `tracker_id` across classes no longer leak crossing state from one object to another. (#1868)
- `process_video` raises immediately on callback errors. Previously the exception was swallowed and the process hung until the writer was flushed. (#2022)
- `DetectionDataset` populates `class_name`. Loaded annotations now carry `data["class_name"]`, matching what model connectors produce. (#2156)
- `ByteTrack` preserves externally assigned `tracker_id`. No longer overwrites caller-assigned IDs on the first update. (#1364)
- Confusion matrix double-counting fixed. `evaluate_detection_batch` now correctly matches multiple predictions to the same target, so FP/FN counts match expectations. (#1853)
- `MeanAverageRecall` mAR@K is now COCO-compliant. Computed using top-K detections per image; previous values were inflated relative to `pycocotools`. (#2136)
- `Detections.is_empty()` handles empty `tracker_id`. Returns `True` for zero-row detections regardless of whether `tracker_id` is `None` or an empty array. (#2209)
- `CSVSink` and `JSONSink` slice `custom_data` per row. NumPy arrays, lists, and tuples whose length matches the detection count are now indexed per row, instead of being written whole for every detection. (#2199, #2216)
- `TraceAnnotator` smooth mode handles stationary tracks. Deduplicates anchor points and falls back to a raw polyline when there are fewer than 4 unique points for `splprep` to fit. (#2217)
- `load_coco_annotations` rejects path-traversal annotations. Refuses `file_name` entries that escape the images directory via `../` or absolute paths. (#2218)
- OBB datasets no longer blow up memory. Loading oriented-bounding-box datasets stopped allocating full-image masks per box. (#2187)
- `KeyPoints` boolean mask indexing fixed. Uniform-count selection now works correctly when all instances share the same keypoint count. (#2188)
- `DetectionDataset.as_coco()` preserves `area` and `iscrowd`. No longer dropped silently in the round-trip. (#2185)
- `force_mask=True` precision and COCO empty-polygon export. Annotation conversion no longer loses precision, and COCO export tolerates empty polygons across formats. (#1746, #1086, #265)
🏆 Contributors
A huge thank you to everyone who shipped this release:
- @Erol444 — SAM3 detection and PVS parsing
- @leeclemnet (LinkedIn) — compressed COCO RLE masks and `rle_to_mask` correctness
- @abritton2002 — `VideoInfo.fps` as float and `Detections.is_empty()` fix
- @shaun0927 (LinkedIn) — sink slicing, trace annotator, COCO path-traversal hardening
- @happyhj (LinkedIn) — `class_name` in `DetectionDataset`
- @farukalamai (LinkedIn) — `CSVSink` NumPy slicing
- @stop1one (LinkedIn) — COCO-compliant `MeanAverageRecall`
- @Adithi-Sreenath (LinkedIn) — `PolygonZone` overlap fix
- @JESUSROYETH — `LineZone` class-aware tracker IDs
- @realh4m — `process_video` error propagation
- @rolson24 (LinkedIn) — `ByteTrack` preserves external tracker IDs
- @panagiotamoraiti (LinkedIn) — confusion matrix correctness
- @Youho99, @kirilllzaitsev — COCO empty polygons and `force_masks` consistency
- @aza-ali — RGBA hex support in `sv.Color`
- @Clemens-E — dynamic kernel sizing for blur and pixelate annotators
- @NickHerrig (LinkedIn) — `sv.ImageAssets`
- @0xD4rky — `force_mask=True` precision fix
- @Borda (LinkedIn) — `CompactMask`, metrics float32, deprecations
Full changelog :
0.27.0...0.28.0 -
-
🔗 The Pragmatic Engineer The Pulse: token spend breaks budgets – what next? rss
Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover Big Tech and startups through the lens of senior engineers and engineering leaders. Today, we cover one out of three topics from last week's The Pulse issue. Full subscribers received the article below seven days ago. If you've been forwarded this email, you can subscribe here.
Last week, we covered the slightly perverse trend of "tokenmaxxing" across the industry, where devs run agents with the sole aim of boosting their personal "token stats" in an effort to rank higher on internal token leaderboards, and not be seen as a Luddite who doesn't use AI tools enough compared to peers.
This week, I spoke with a software engineer at a large company and another at a seed-stage startup. Both shared almost identical stories: at their latest all-hands, company leadership expressed concern about the fast-rising cost of tokens. At both places, token spend has increased by ~10x in the last six months, with no signs of slowing down.
I wanted to find out about this trend, so I talked to devs at 15 businesses. Below is what I learned about what's happening in workplaces of all sizes. Names are anonymized.
Large companies
Setting the default model to a cheaper one: 10,000+ person SaaS company, offices on all continents
Inside a large SaaS company, most devs use an internal background coding tool. It defaults to Claude Sonnet, the cheaper Claude model. Model selection is not persisted, so devs who prefer working with Opus, for instance, must reselect it on every startup.
This tool supports all major frontier models such as Sonnet, Opus, GPT, and Gemini. Devs at the company whom I talked to are very heavy users of the tool and have not encountered usage limitations.
Fintech company, US, Series D, ~8,000 people. Staff engineer:
"The cost in token spend is off the charts - and leadership has shared this trend with us. They have not said anything beyond showing growth in spend, and mentioning that this won't be sustainable. So, nothing specific yet, but my sense is that something will have to change. Limits or prioritizing cheaper models, cutting back on hiring? Who knows."
Infra company, US, publicly traded, ~5,000 people. Engineering Director:
" We're monitoring but not restricting. We are spot checking the heaviest users, but we are seeing the business cases working out.
We are offering some guidance on model selection - e.g., turn off the new high-effort setting in Claude. Some users are trying open source models - but open source model usage is a bottom-up initiative, not a top-down one."
Information technology, US, 10,000+ people. Director of Engineering:
"We have already had to raise our API budget limits multiple times in April. We recently switched to a much higher-effort level for Claude, which significantly increased the cost per PR.
One reason for the cost spike is using state-of-the-art models for demanding tasks. We are using that high-effort setting even for fairly trivial tasks that could have been handled by much cheaper models, or even by lower-effort Claude loops. Despite a few of us pointing this out, leadership has basically said budget is not the concern right now.
I sense that the budget increase has not been forecasted, and we're in for a reckoning. I suspect the attitude changes once finance and other cost-conscious parts of the org realize we are spending hundreds of dollars per day, per highly-engaged developer. For now, fear of missing out and not wanting to fall behind seems to be outweighing cost discipline."
Games studio, US+Europe, ~5,000 people. Senior developer:
"What budget increase? It's very hard to get a budget for AI here! Claude Code is still not rolled out because $200/month/dev is seen as too high a cost. I talk with people at startups where $1,000/month in spending is totally normal, and it's night and day here."
Fintech company, US+Europe, late stage, ~5,000 people. Staff engineer:
" Some developers are now spending $500 a day (!!) on Claude Code. Practically speaking, this means that employee costs have doubled. Productivity has increased, in my view, but now the bottleneck is code reviews. AI can spit out code quite quickly, but we still have human reviews in place. Leadership encourages using AI for code review, but my team will not blindly trust AI.
The push from AI is coming from the top. This year's performance review had a section on AI, rating devs by how well they used AI, so this is another reason everyone just uses it as much as they can."
Mid-sized companies
SaaS industry, US, ~2,000 people. Dev Productivity Lead:
"Model routing helped keep our costs growing less dramatically. For example, changing the default model reduced cost by 30%. This is our strategy with AI spend, summarized:Short term: spend, spend, spend! Experiment and use whatever models make sense.Measure the impact. Measure key outcomes and report on spend, monthly.When spend vs results diverge: adjust. When our spend increases dramatically, but outcomes don't follow: see what we can do to adjust the delta. More spend should mean better outcomes. If not, we are doing something wrong."
Finance industry, US, ~2,000 people. VP of AI:
"We have Cursor and Claude Desktop, both of which have around 800-1,200 total users. Token usage is growing somewhat unexpectedly. Estimates are being adjusted on the fly; the initial plan to have strict limits (say, $100 per user) is breaking when reality hits, and people exhaust them in 3-5 working days.
Using expensive models is a problem. With Cursor, many devs are defaulting to the most expensive models without realizing that going with Opus gives single-digit percentage gains in intelligence compared to Sonnet, for example, while exhausting their budgets almost immediately.
We are working on blocking or managing out the most expensive models [with Cursor], as going into thousands of dollars per user, per month is not sustainable at our scale. Cursor is a good partner and we're working with them to switch to a "pooled spend" model where heavy users can tap into a pool of extra spend.
Claude is a similar story. We were at $100 of Claude Desktop limit for everyone, but as we are moving forward, I can see that we would need to go much higher, especially for business-critical use cases."
Infra company, US, late-stage, ~700 people. Founder:
"We haven't had much of an issue. Most folks police themselves for runaway costs; for example, we had someone hit like $10K in a week because they messed up caching, but it was caught and they corrected their harness.
For the most part, we don't see our high-end folks spending more than ~$1K/week. Now, to be clear, this is not a small amount! BUT it's already a small subset of the population.
We're just factoring it into engineering costs at this point: if it's, say, $2K/month per employee, that's $24K per year.
Who cares, then, when engineers already cost $200-400K/year in cash comp? Okay, so what if it's $5K/month. That's $60K/year.
Our bet is that token costs will stabilize and we'll eventually end up with local-ish models.
Now, it could be five years before they stabilize, but overall, spend today isn't that insane to me.
There's a lot of people who are just dumb about it, but most legit execs push back on this. Take the Ralph loops or other insanity where someone spends $1K/day, $5K/week or stuff like this. That's all just people being fools thinking they're doing "R&D," or somehow that they're smarter than everyone else, but they're just producing junk that never ships or is not useful.
We saw a bit of "stupid overspend" in the first couple months, but that's all gone now. Costs could go up even more if we would "crack the whip" in wanting to see even more output, but we're not doing that."
Healthcare industry, US, ~500 people. Senior engineering manager:
"We are not holding back on spend, and have a monthly spend leaderboard. And we WANT devs to spend more on tokens! For example, one of my engineers spent $1,400 on a long Claude Code session in a single day.
We are seeing massive leverage, and we do more with the same number of people. This is why we are okay with our spending spiking. Our traffic is growing more than 10x, year-on-year, and we have managed to keep things running with the same team, and these AI tools.
Engineering is now blocked on Product and Design - which never happened before! This is how fast execution has become. We now have Staff+ engineers writing Product PRDs so we can move faster.
I've been in tech for close to 15 years and I never saw dramatic change like this. I just came back after a 3-month break, and every single thing is different in my day! I feel these AI agents are the biggest change in the industry since high-level languages became widespread."
E-commerce company, US & Europe, ~2,000 devs. Head of Engineering:
"The increase in spend is INSANE. It's about usage going up, with no signs of stopping. Usage is off the charts.
We currently do not have limits in place, and are not pausing now. Our CEO is AI-pilled and won't let us slow down.
We do buy tokens at a discount. They start from 5% and go up with usage with the vendors we use (the usual suspects).
We don't let devs use anything lower than Opus 4.7 for coding. Cheaper models might work better, but a slight error pushed to prod would result in hours of toil."
Small companies
Series A, US, ~50 people. Principal Engineer:
"About 15 devs are heavy users of AI and costs are rising very fast. Almost everyone uses Claude and Claude Code. We are considering four potential options:Increase AI budget, and start measuring more. Continue doing what we are, but allow devs to use more tokens instead of hiring limits. The precise ROI is hard to quantify, but we'll start to measure and track both AI adoption and impact.Optimize token consumption. Use cheaper models for simpler tasks, review token usage, and see where we can cut usage. Downside: this approach could become one with diminishing returns, fast.Integrate more AI providers in the company. Find wrappers to abstract LLMs. The problem is: how do you replace Claude Code, for instance?Pivot to local models: such as Kimi, Qwen, and so on. The problem is it's a big investment in high-end hardware or cloud GPUs. Upside: it offers better long-term cost control, once done.
We are likely to go with option #1: increase spend BUT maintain momentum and put the right measurements in place. We can do #2, #3 and #4 later. But if we kill AI usage momentum inside the company, the outcome will probably be worse."
AI infra, US, seed stage, ~15 people. Founder:
"We saw a 15x increase in 6 months: Six months ago our spend per developer was ~$200/monthToday, it's around $3,000/developer/month, for our seven devs
We're not slowing usage, especially as we are building an AI infra product. The increase was much faster than expected, though."

Small, bootstrapped company, Europe. Founding engineer:
"Our current strategy in dealing with the increase in costs is to switch to a cheaper model; unfortunately, from Opus to Sonnet in our case. That said, Sonnet is quite decent."
How businesses manage token spend
Regardless of company size, there seem to be two strategies for how companies deal with increased spending. A summary:
Strategy #1: "let it rip and start measuring." Around half of respondents say AI spend is rising dramatically, and they have decided to do nothing about it. They want devs to use AI as much as makes sense, to help with the work as much as possible.
However, because the cost is rising dramatically, these companies are now starting to measure usage and attempting to measure the impact of their AI tools.
There are a few companies where the impact already seems to be very positive. Smaller startups whose businesses are exploding in numbers of customers, load, and revenue find that they don't need to hire more staff, because existing engineers can keep supporting the growth with AI tools.
Strategy #2: curb spending. Commonly mentioned cost-saving approaches:
- Use cheaper models for simpler tasks
- Set default models to less capable ones
- Set a spending cap and make it hard for engineers to exceed it, or require consent for doing so
Most companies using strategy #1 briefly considered this approach but discarded it, because they see it as optimizing for the wrong thing: cutting costs before the productivity impact of using state-of-the-art tools is even known!
Discounts exist when the spend is in the millions of dollars. I asked several people if they are getting discounts from vendors when buying tokens at scale. There were no exact numbers, but this is what I gathered in aggregate about possible custom agreements:
- Cursor: open to discounts above a few million dollars in spend. Companies have negotiated discounts with Cursor after crossing $1M of spending. Some companies negotiated tiered discounts from this level, starting at 5% and going higher as their spend goes up.
- Anthropic: no discounts. I talked with companies spending $5M+ per year on Claude which have received no discounts. If Anthropic offers discounts, it will likely be at a much higher tier.
- All discounts are custom, so try to negotiate - it's free! Pricing discounts are on a per-customer basis, and highly custom. The easiest way to see if a discount is available is to ask the vendors!
---
Read the full issue of last week's The Pulse, or check out this week's The Pulse. This week's issue covers:
- Load from AI breaks GitHub - but why not other vendors? GitHub's reliability is less than one nine, and getting worse. Prolific open source contributor Mitchell Hashimoto is quitting GitHub because he thinks it's not suited for professional work. GitHub's leadership blames the 3.5x increase in service load for the degradation - or it might be self-inflicted.
- Anthropic's speedrun to destroy trust. Anthropic could do no wrong until recently, but in the past month, that's all changed. Silently nerfing Claude Code, banning companies from Claude, and baffling price rises all add to a sense that Anthropic is in its "extraction" era of generating more revenue for the same or worse service.
- Industry pulse. Dramatic price increases at GitHub Copilot, explosive growth at Codex, Google scrambling to build a good coding model, Cursor might be bought by SpaceX, AI agent deletes car business, and more.
- Mitchell Hashimoto & the "building block economy." Ghostty's creator finds that open source "building blocks" are the best way to win massive adoption by software components - but it's got harder to build a business on top of open building blocks.
-
🔗 r/reverseengineering Free CAN bus reverse engineering workstation - offline ML, dual AI engines, UDS security access, MitM gateway, 15 tabs rss
submitted by /u/Repulsive_Factor5654
[link] [comments] -
🔗 r/Leeds Growing Well Study - Participants Needed rss
Hi there!
My name is Chloe Thackray and I am reaching out from the University of Leeds.
We are currently conducting a large-scale, national research project called the Growing Well Study and are looking for families with young children (6 months to 4 years) to take part. You will receive up to £50 in vouchers as a thank you!
The project is focused on preschool diet, growth, and dental health, and it will help inform national policy recommendations.
What will be involved:
- Short online survey
- Local measurement appointment (height, weight, tummy)
- 3 daily online food diaries
- Repeat in 1 year + free dental check
We will be hosting measurement appointments soon at Sunny Bank Mills and the Merrion Centre in Leeds.
If you have a child within this age range we would love you to take part. You can sign up and complete our online survey here: https://survey.natcen.ac.uk/GWS
Thank you so much!
submitted by /u/GrowingWellStudy_UoL
[link] [comments] -
🔗 r/york It’s amazing how the Cathedral seems to change personality depending on where you're standing. I could spend hours here rss
| submitted by /u/Wallabydoll
[link] [comments]
---|--- -
🔗 Cryptography & Security Newsletter ECH Is Done, But Can We Make It Work? rss
Some technologies are easier to deploy than others. Take TLS, for example. Once enough time passes and we upgrade the servers and clients, we’re done. Encrypted Client Hello (ECH) is not one of those technologies. To get it to be effective, we first need to go through the usual upgrade cycle, iron out the last kinks, and then also get enough of the ecosystem to opt in to achieve safety in numbers.
-
🔗 r/Yorkshire Homeowners in Yorkshire turn to solar panels as oil prices rise rss
| submitted by /u/Kagedeah
[link] [comments]
---|--- -
🔗 r/Yorkshire Now and Then rss
submitted by /u/Still_Function_5428
[link] [comments] -
🔗 r/Leeds Varsity night - A grumble rss
It was varsity night in Headingley last night. We live around the Trelawns, and the whole damn street is littered with glass: shattered pint glasses, beer bottles.
We've been out this morning sweeping it up. I honestly cannot fathom the lack of basic respect.
submitted by /u/Swivials
[link] [comments] -
🔗 tomasz-tomczyk/crit v0.10.2 release
What's Changed
- feat: send + cache verified author identity on share by @tomasz-tomczyk in #371
- fix: persist verified user_id on auth login by @tomasz-tomczyk in #393
Note: You might need to run `crit auth login` again to link your profile properly for the future.
- feat: distinct "Approved" state for review-finish modal by @tomasz-tomczyk in #381
- feat: improve agent integrations with global install + aider automation by @tomasz-tomczyk in #373
- fix: patch hljs markdown grammar and re-enable for diff view by @tomasz-tomczyk in #388 (Thanks @hbogaeus for reporting!)
- fix: keep SSE alive past idle timeout (Safari "Connection lost") by @tomasz-tomczyk in #376 (thanks Jared for reporting!)
- fix: expand hljs language coverage via alias resolution by @tomasz-tomczyk in #378
- fix: Ctrl+Enter to save when editing replies (#382) by @tomasz-tomczyk in #386 (Thanks @hbogaeus for reporting!)
- fix: align light theme with modern GitHub for visible diff highlights by @tomasz-tomczyk in #387 (Thanks @hbogaeus for reporting!)
- fix: Change comment submit button text to 'Add comment' by @TalAmuyal in #385 - Thank you!
- fix: strip GIT_* env from test process to prevent worktree corruption by @tomasz-tomczyk in #383
- docs: Docker recipe for sandboxed agents by @tomasz-tomczyk in #372 (Thanks Jared for the suggestion!)
- chore: wait for unit + e2e uploads before codecov status by @tomasz-tomczyk
- chore: pre-release audit fixes (Go backend) by @tomasz-tomczyk in #389
- chore: pre-release audit fixes (frontend) by @tomasz-tomczyk in #390
- refactor: return errors from installAider; unify integration list by @tomasz-tomczyk in #394
- chore: wire markdown-patch smoke test into CI by @tomasz-tomczyk in #395
- chore: move mise-trust to pre-start so worktree shell can load mise by @tomasz-tomczyk
New Contributors
- @TalAmuyal made their first contribution in #385
Full Changelog :
v0.10.1...v0.10.2 -
🔗 r/reverseengineering HexDig 1.0.0 a lightweight binwalk alternative working both on Windows and Linux, written in C++, give it a try! rss
submitted by /u/gcarmix1
[link] [comments] -
🔗 r/reverseengineering GitHub - iss4cf0ng/CVE-2026-31431-Linux-Copy-Fail: Rust implementation Exploit/PoC of CVE-2026-31431-Linux-Copy-Fail, allow executing customized shellcode (such as Meterpreter). rss
submitted by /u/AcrobaticMonitor9992
[link] [comments] -
🔗 r/Yorkshire Culloden tower rising above the Swale. Can you spot the Mallard duck? rss
| submitted by /u/Still_Function_5428
[link] [comments]
---|--- -
🔗 r/Yorkshire Yorkshire Water Seeks Views On Multimillion-Pound Scarborough Investment rss
| submitted by /u/willfiresoon
[link] [comments]
---|--- -
🔗 r/Yorkshire New jobs as East Yorkshire company announces round-the-clock production move rss
| submitted by /u/willfiresoon
[link] [comments]
---|--- -
🔗 Evan Schwartz Your Clippy Config Should Be Stricter rss
“If it compiles, it works.” This feeling is one of the main things Rust engineers love most about Rust, and a reason why using it with coding agents is especially nice. After debugging some code that compiled but mysteriously stopped in production, I realized that it’s useful to enable more Clippy lints to catch bugs that the compiler won't prevent by itself. It's especially useful as guardrails for coding agents, but stricter linting can make your code safer, whether or not you’re coding with LLMs.
Motivating Bug: UTF-8-Oblivious String Slicing
Scour is the personalized content feed that I work on. Every Friday, Scour sends an email digest to each user with the top posts that matched their interests. On a recent Friday, the email sending job mysteriously stopped. This was puzzling because I had already put in place multiple type-system-level safeguards and tests to ensure that it would log all types of errors and continue.
After digging into the logs, I found the culprit to be `thread 'tokio-runtime-worker' panicked... byte index 200 is not a char boundary`. A function naively truncated article summaries without checking for UTF-8 character boundaries, which caused a panic and stopped the Tokio worker thread running the email sending loop.

The solution for this particular bug was a safer method for truncating article summaries that respects UTF-8 character boundaries. However, this problem was reminiscent enough of the 2025 Cloudflare `unwrap` bug that "broke the internet" that I wanted a more general solution.

Rust's compiler prevents many types of bugs, but there are still production problems it can't catch. Panics will either crash your program or quietly kill Tokio worker threads. Deadlocks and dropped futures can make work silently stop. And plenty of numeric operations can silently cause incorrect behavior.
We can stave off many of these types of bugs by making Clippy even stricter than it already is.
This is especially relevant in the age of coding agents. A seasoned Rust engineer might naturally avoid patterns that could cause problems. An agent or a junior colleague might not. Stricter Clippy rules make it easier to rely on code you didn't personally write. Also, enabling new lints on an existing codebase is tedious, and exactly the kind of task that is good to hand to a coding agent.
Enabling More Clippy Lints
Clippy ships with hundreds of lints that are disabled by default. Some are disabled because they might have false positives and some are style choices which you might reasonably not want.
Which lints should we enable to help us get back the "if it compiles [and passes Clippy], it works" feeling?
Why Not Enable Lint Categories?
Clippy's lints are grouped into categories: Correctness, Suspicious, Complexity, Perf, Style, Pedantic, Restriction, Cargo, Nursery, and Deprecated.
Unfortunately, none of these categories cleanly map onto "don't let this panic or do the wrong thing in production".
In fact, the Clippy docs say that "The `restriction` category should, emphatically, not be enabled as a whole." Clippy even includes a dedicated lint, `blanket_clippy_restriction_lints`, to discourage you from enabling this category. While the `restriction` category includes many useful lints, it also includes some that directly contradict one another. For example, it contains lints to enforce both `big_endian_bytes` and `little_endian_bytes`.

The docs say "Lints should be considered on a case-by-case basis before enabling". Of course, you can enable whole categories like `pedantic` and `restriction` and then `allow` the specific ones you want to disable, but I'm outlining a selective opt-in here.

Lints That Don't Fire Are Still Useful
Even if you don't use a certain pattern in your code base today, it's not bad to enable the lint anyway. Inapplicable lints serve as cheap tripwires in case the given pattern is ever added later, whether by you, a colleague, or a coding agent.
My Lints
Every project is different and you should look through the available lints to see which ones make sense for your project.
Also, check when lints landed in stable if your Minimum Supported Rust Version predates 1.95, as some of these may have been added after your MSRV.
With those caveats out of the way, here are the lints I enabled, roughly categorized by what kind of behavior they prevent. You can skip to the bottom if you just want to copy my config.
Don't Panic
This group prevents panics from unwraps and unsafe slicing or indexing into arrays and strings.
Note that some of these, like `string_slice` and `indexing_slicing`, may produce many warnings throughout your code base. That may be annoying to fix. However, using safe methods like `.get()` and iterators instead of slicing prevents pretty severe footguns, so I would argue that it's worth it.

- `string_slice` - `&s[a..b]` on `&str` (UTF-8 boundary panic). This would have caught my initial bug.
- `indexing_slicing` - `arr[i]` / `&arr[a..b]`
- `unwrap_used` - `Option::unwrap` / `Result::unwrap`
- `panic` - `panic!()` calls
- `todo` / `unimplemented` / `unreachable` - placeholder-panic macros
- `get_unwrap` - `vec.get(i).unwrap()`
- `unwrap_in_result` - `.unwrap()` inside functions that return a `Result`
- `unchecked_time_subtraction` - `Instant - Instant` panics if the second is larger
- `panic_in_result_fn` - `panic!` / `assert!` inside a function that returns a `Result`
You might or might not want to enable `expect_used`. Calling `.expect` on an `Option` or `Result` can result in a panic. However, the message you pass to `expect` should already document why that thing shouldn't happen. Enabling the lint and then selectively disabling it throughout your code with `#[expect(expect_used, reason = "...")]` may end up duplicating the same rationale for using it in the first place.

Another lint that is a real judgement call is `arithmetic_side_effects`. This can prevent overflows and division by zero. However, it will cause Clippy to warn you about every place you use math operators: `+`, `-`, `*`, `<<`, `/`, and `%`. I tried enabling it in my code base and would estimate that around 15% of the warnings caught real issues and 85% were just noise.

Don't Fail Silently
- `let_underscore_future` - `let _ = future` drops without awaiting
- `let_underscore_must_use` - `let _ = result_returning()` swallows errors
- `unused_result_ok` - `result.ok();` silently drops `Err`
- `map_err_ignore` - `.map_err(|_| MyErr)` loses the source error
- `assertions_on_result_states` - `assert!(r.is_ok())` discards the error message
Don't Do Bad Async Stuff
These prevent various concurrency bugs and deadlocks:
- `await_holding_lock` - `MutexGuard` held across `.await`
- `await_holding_refcell_ref` - `RefCell::borrow_mut` held across `.await`
- `if_let_mutex` (only relevant if you're using an earlier edition than 2024) - the `if let _ = mutex.lock() { other_lock() }` deadlock pattern. The scoping was fixed in the 2024 edition, so this is no longer an issue.
- `large_futures` - a `Future` that is too large can cause a stack overflow
Don't Do Unsafe Things with Memory
- `mem_forget` - `mem::forget` leaks
- `undocumented_unsafe_blocks` - every `unsafe {}` needs a `// SAFETY:` comment
- `multiple_unsafe_ops_per_block` - one unsafe op per block (one comment per op)
- `unnecessary_safety_doc` / `unnecessary_safety_comment` - only document safety where it belongs
Don't Do Potentially Incorrect Things with Numbers
- `float_cmp` - `a == b` on floats
- `float_cmp_const` - stricter, also flags comparisons against constants
- `lossy_float_literal` - silently-rounded float literals (`16_777_217.0_f32`)
- `cast_sign_loss` - `(-1_i8) as u64` wraps to `u64::MAX`
- `invalid_upcast_comparisons` - `(x: i32 as i64) > i32::MAX as i64` is always false
The lints `cast_possible_wrap`, `cast_precision_loss`, and `cast_possible_truncation` effectively force you to document invariants when doing lossy casts between numeric types. You might or might not find that useful.

Don't Do Bad Things That are Easy to Avoid
- `rc_mutex` - `Rc<Mutex<_>>` (`Rc` is single-threaded)
- `debug_assert_with_mut_call` - `debug_assert!(stack.pop().is_some())` differs in debug vs release
- `iter_not_returning_iterator` - a method named `iter` returning a non-`Iterator`
- `expl_impl_clone_on_copy` - manual `Clone` impl that disagrees with `Copy`
- `infallible_try_from` - a `TryFrom` impl whose error is `Infallible` should be `From`
- `dbg_macro` - `dbg!` calls should be removed after debugging
Don't `allow` Your Way Around These Lints

These two are especially useful if you're using a coding agent. Instead of letting the agent write `#[allow(lint_we_wanted_to_enable)]`, it should provide a reason wherever it's disabling a lint.

- `allow_attributes` - every `#[allow]` becomes `#[expect(..., reason = "…")]`
- `allow_attributes_without_reason` - every `#[expect]` requires a reason
Workaround for Workspace Inheritance
If you're using a Cargo workspace, you'll want to enable these lints in the workspace Cargo.toml. Unfortunately, each workspace crate needs to opt in to inheriting lints with `lints.workspace = true`, rather than inheriting the lints by default. On nightly, there's a `missing_lints_inheritance` lint that specifically checks for this.

If you're using stable Rust, you can use `cargo-workspace-lints` or a simple shell script run on CI to make sure you don't forget to make a workspace crate inherit the lints.

Warn or Deny?
When enabling lints, you can either set Clippy to `warn` or `deny` them. Either works, but I personally prefer setting these to `warn` and running Clippy with `-D warnings` before committing and on CI. This makes local iteration marginally easier because you can compile your code initially without fixing all the lints right away.

Note: if you set Clippy on CI to deny warnings, you should make sure to specify a specific Rust version. Otherwise, lints added in newer versions will cause your build to fail. (Thanks to u/scook0 for pointing this out!)
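A CI invocation along these lines might look like the following sketch; the toolchain version is a placeholder you would pin to your own MSRV, not a recommendation:

```shell
# Pin a specific toolchain so lints added in newer Clippy releases
# can't suddenly break the build, then promote all warnings to errors.
rustup toolchain install 1.85.0 --component clippy
cargo +1.85.0 clippy --all-targets --all-features -- -D warnings
```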
My Configs
```toml
# Workspace Cargo.toml
[workspace.lints.clippy]

# Don't Panic - prevent panics from unwraps and unsafe slicing or indexing
string_slice = "warn"
indexing_slicing = "warn"
unwrap_used = "warn"
panic = "warn"
todo = "warn"
unimplemented = "warn"
unreachable = "warn"
get_unwrap = "warn"
unwrap_in_result = "warn"
unchecked_time_subtraction = "warn"
panic_in_result_fn = "warn"
# Optional - see post for caveats
# expect_used = "warn"
# arithmetic_side_effects = "warn"

# Don't Fail Silently - prevent dropped futures and swallowed errors
let_underscore_future = "warn"
let_underscore_must_use = "warn"
unused_result_ok = "warn"
map_err_ignore = "warn"
assertions_on_result_states = "warn"

# Don't Do Bad Async Stuff - prevent deadlocks and concurrency bugs
await_holding_lock = "warn"
await_holding_refcell_ref = "warn"
if_let_mutex = "warn" # only relevant on editions before 2024
large_futures = "warn"

# Don't Do Unsafe Things with Memory
mem_forget = "warn"
undocumented_unsafe_blocks = "warn"
multiple_unsafe_ops_per_block = "warn"
unnecessary_safety_doc = "warn"
unnecessary_safety_comment = "warn"

# Don't Do Potentially Incorrect Things with Numbers
float_cmp = "warn"
float_cmp_const = "warn"
lossy_float_literal = "warn"
cast_sign_loss = "warn"
invalid_upcast_comparisons = "warn"
# Optional - these effectively force you to document numeric invariants
# cast_possible_wrap = "warn"
# cast_precision_loss = "warn"
# cast_possible_truncation = "warn"

# Don't Do Bad Things That are Easy to Avoid
rc_mutex = "warn"
debug_assert_with_mut_call = "warn"
iter_not_returning_iterator = "warn"
expl_impl_clone_on_copy = "warn"
infallible_try_from = "warn"
dbg_macro = "warn"

# Don't `allow` Your Way Around These Lints - every suppression must be
# a deliberate #[expect(..., reason = "…")] rather than a silent #[allow]
allow_attributes = "warn"
allow_attributes_without_reason = "warn"
```
```toml
# Workspace clippy.toml
allow-indexing-slicing-in-tests = true
allow-panic-in-tests = true
allow-unwrap-in-tests = true
allow-expect-in-tests = true
allow-dbg-in-tests = true
```
Conclusion
Ultimately, as Clippy's docs say, "You can choose how much Clippy is supposed to ~~annoy~~ help you." But especially in the age of coding agents, I think it's worth tightening the guardrails so you end up with even fewer mysterious bugs in production and more code where you can say "if it compiles and lints, it should work."
Discuss on r/rust, Lobsters, or Hacker News.
In response to this post, Billy Levin wrote up a case for enabling whole lint categories and disabling the specific lints you don't want: Your Clippy Config Should Be Stricter-er. If you found this post interesting, that one's worth a read before you decide which approach is best for you.
-
🔗 Rust Blog Announcing Google Summer of Code 2026 selected projects rss
As previously announced, the Rust Project is participating in Google Summer of Code (GSoC) 2026. GSoC is a global program organized by Google that is designed to bring new contributors to the world of open source.
A few months ago, we published a list of GSoC project ideas, and started discussing these projects with potential GSoC applicants on our Zulip. We had many interesting discussions with the potential contributors, and even saw some of them making non-trivial contributions to various Rust Project repositories before GSoC officially started!
The applicants prepared and submitted their project proposals by the end of March. This year, we received 96 proposals, which is a 50% increase from last year. We are glad that there was again a lot of interest in our projects! Like many other GSoC organizations this year, we somewhat struggled with some AI-generated proposals and low-quality contributions generated using AI agents, but it stayed manageable.
GSoC requires us to produce an ordered list of the best proposals, which is always challenging, as Rust is a big project with many priorities. Our mentors examined the submitted proposals and evaluated them based on their prior interactions with the given applicant, their contributions so far, the quality of the proposal itself, and the importance of the proposed project for the Rust Project and its wider community. We also had to take mentor bandwidth and availability into account. Unfortunately, we had to cancel some projects due to several mentors losing their funding for Rust work in the past few weeks.
As is usual in GSoC, even though some project topics received multiple proposals1, we had to pick only one proposal per project topic. We also had to choose between proposals targeting different work to avoid overloading a single mentor with multiple projects. In the end, we narrowed the list down to the best proposals that we could still realistically support with our available mentor pool. We submitted this list and eagerly awaited how many of them would be accepted into GSoC.
Selected projects
On the 30th of April, Google announced the accepted projects. We are happy to share that 13 Rust Project proposals were accepted by Google for Google Summer of Code 2026. That is a lot of projects! We are really happy and excited about GSoC 2026!
Below you can find the list of accepted proposals (in alphabetical order), along with the names of their authors and the assigned mentor(s):
- A Frontend for Safe GPU Offloading in Rust by Marcelo Domínguez, mentored by Manuel Drehwald
- Adding WebAssembly Linking Support to Wild by Kei Akiyama, mentored by David Lattimore
- Bringing autodiff and offload into Rust CI by Shota Sugano, mentored by Manuel Drehwald
- Debugger for Miri by Mohamed Ali Mohamed, mentored by Oli Scherer
- Implementing impl and mut restrictions by Ryosuke Yamano, mentored by Jacob Pratt and Urgau
- Improving Ergonomics and Safety of serialport-rs by Tanmay, mentored by Christian Meusel
- libc: transition differing bit-width time and offset variants and deprecate bug-prone constants by Adam Martinez, mentored by Trevor Gross
- Link Linux kernel and its Modules with Wild by Vishruth Thimmaiah, mentored by David Lattimore
- Migrating rust-analyzer assists to SyntaxEditor by Shourya Sharma, mentored by Chayim Refael Friedman and Lukas Wirth
- Port std::arch test suite to rust-lang/rust by Sumit Kumas, mentored by Jakub Beránek and Folkert de Vries
- Reorganizing tests/ui/issues by Matthew, mentored by Teapot and Kivooeo
- Utilize debugger APIs to improve debug info test accuracy and error reporting by Anthony Bolden, mentored by Jakub Beránek and Jieyou Xu
- XDG path support for rustup by Guicheng Liu, mentored by rami3l
Congratulations to all applicants whose project was selected! Our mentors are looking forward to working with you on these exciting projects to improve the Rust ecosystem. You can expect to hear from us soon, so that we can start coordinating the work on your GSoC projects.
We are excited to mentor three contributors who already experienced GSoC with us in the previous year. Welcome back, Kei, Marcelo and Shourya!
We would like to thank all the applicants whose proposal was sadly not accepted, for their interactions with the Rust community and contributions to various Rust projects. There were some great proposals that did not make the cut, in large part because of limited mentorship capacity. However, even if your proposal was not accepted, we would be happy if you would consider contributing to the projects that got you interested, even outside GSoC! Our project idea list is still current and could serve as a general entry point for contributors that would like to work on projects that would help the Rust Project and the Rust ecosystem. Some of the Rust Project Goals are also looking for help.
There is a good chance we'll participate in GSoC next year as well (though we can't promise anything at this moment), so we hope to receive your proposals again in the future!
The accepted GSoC projects will run for several months. After GSoC 2026 finishes (in autumn of 2026), we will publish a blog post in which we will summarize the outcome of the accepted projects.
- The most popular project topic received fourteen different proposals! ↩
-
🔗 Console.dev newsletter goshs rss
Description: Simple web server.
What we like: Supports multiple protocols besides HTTP, including SMB, DNS, WebDAV, and SMTP. Includes file-based ACLs so you can use it to set up file sharing. SSL is handled through Let’s Encrypt or by providing your own keys. Can embed static files. Written in Go, so it can be shipped as a single binary.
What we dislike: The non-HTTP servers are mainly designed for pentesting and CTFs rather than fully functional server replacements. This includes a reverse shell generator. This is an odd digression for a web server, but you’ll probably just use Caddy if you want a pure Go web server.
-
🔗 Console.dev newsletter Quarkdown rss
Description: Markdown meets LaTeX.
What we like: Use Markdown to write typeset reports, docs, static websites, slides. Includes live preview with fast compilation so you can avoid LaTeX dependencies. Has enhancements like figures, formulae, code, bibliography. Include data from files and manipulate it with variables and scripting.
What we dislike: Academic writing in LaTeX (or equivalent) is the dream, but most work really just happens in Word or Google Docs, especially if you’re collaborating with multiple authors!
-
🔗 Servo Blog March in Servo: keyboard navigation, better debugging, FreeBSD support, and more! rss
Servo 0.1.0 represents Servo’s biggest month ever, with a record 530 commits and our first ever release on crates.io! For security fixes, see § Security.
With this release Servo becomes more accessible, thanks to tab navigation (@mrobinson, @Loirooriol, #42952, #43019, #43058, #43246, #43267, #43067), keyboard navigation with Alt+Shift and the accesskey attribute (@mrobinson, #43031, #43144, #43434), and keyboard scrolling with Space and Shift+Space (@mrobinson, #43322).
We’ve shipped several new web platform features:
- <input type=range> (@BudiArb, @rayguo17, @mrobinson, #41562)
- <script blocking=render> (@TimvdLippe, #43150)
- <svg width> and <svg height> (@Loirooriol, #43583)
- ‘X-Frame-Options’ (@TimvdLippe, #43539, #43708)
- ‘Content-Security-Policy: frame-ancestors’ (@TimvdLippe, #43630)
- ‘::first-letter’ styling (@minghuaw, @xiaochengh, @Loirooriol, #43027)
- ‘::placeholder’ styling (@stevennovaryo, #43053)
- ‘::file-selector-button’ styling (@lukewarlow, @AlexVasiluta, #43498)
- ‘background-blend-mode’ (@mrobinson, #43666)
- ‘content’ on ‘::marker’ (@niyabits, @Loirooriol, #43515)
- ‘list-style-type: …’ (@Loirooriol, #43111)
- ‘attr(namespace|local)’ and ‘clamp(none)’ (@Loirooriol, #43045)
- <system-color> (@longvatrong111, @mrobinson, #42529, #43105, #43107)
- <step-position> values ‘jump-start’, ‘jump-end’, ‘jump-none’, and ‘jump-both’ (@yezhizhen, #43061)
Plus a bunch of new DOM APIs:
- CommandEvent (@lukewarlow, #43190)
- moveBefore() on Node (@lukewarlow, #41238)
- relatedTarget on MouseEvent and PointerEvent (@simonwuelker, #42989)
- command on HTMLButtonElement (@lukewarlow, #43190)
- selectedOptions on HTMLSelectElement (@jakubadamw, #43017)
- url on LargestContentfulPaint (@shubhamg13, #42901, #42949)
- crypto.subtle.digest() for TurboSHAKE (@kkoyung, #43551)
- crypto.subtle.getPublicKey() for ECDH, ECDSA, Ed25519, RSASSA-PKCS1-v1_5, RSA-PSS, RSA-OAEP, and X25519 (@kkoyung, @Taym95, #43073, #43093, #43106, #43115)
servoshell is now installed as `servoshell` or `servoshell.exe`, rather than `servo` or `servo.exe` (@jschwe, @mrobinson, #42958). `--userscripts` has been removed for now, but anyone who uses it is welcome to reinstate it as a wrapper around `UserContentManager::add_script` (@jschwe, #43573). We’ve fixed a bug where link hover status lines are sometimes not legible (@simartin, #43320), and we’re working on getting servoshell signed for macOS to avoid getting blocked by Gatekeeper (@jschwe, #42912).

After a long effort by @valpackett, @dlrobertson, and more recently @nortti0 and @sagudev (#43116, #43134), we can now build Servo for FreeBSD! Note that Servo 0.1.0 still has some issues that need to be worked around, but you can get all the details in #44601.

A great deal of work went into making the crates.io release possible, including renaming `libservo` to just `servo` (@jschwe, #43141), making each package self-contained (@jschwe, #43180, #43165), fixing build issues (@delan, @jschwe, #43170, #43458, #43463) and crates.io compliance issues (@jschwe, #43459), configuring package metadata (@jschwe, @StaySafe020, #43078, #43264, #43451, #43457, #43654), and organising our dependency tree (@jschwe, @yezhizhen, @webbeef, @mrobinson, #42916, #43243, #43263, #43516, #43526, #43552, #43615, #43622, #43273, #43092). As a result, you can now take your first step towards embedding Servo in a Rust app with:

```shell
$ cargo add servo
```

This is another big update, so here’s an outline:
Security

crypto.subtle.deriveBits() for X25519 checking for all-zero secrets, and verify() for HMAC comparing signatures, are now done in constant time (@kkoyung, #43775, #43773).

‘Content-Security-Policy’ now handles redirects correctly (@TimvdLippe, #43438), and sends violation reports with the correct blockedURI and referrer (@TimvdLippe, #43367, #43645, #43483). The policy in <meta> now combines with the policy sent in HTTP headers, rather than overriding it (@TimvdLippe, @elomscansio, #43063). When checking nonces, we now reject elements with duplicate attributes (@dyegoaurelio, #43216). The document containing an <iframe> can no longer access the contents of error pages (@TimvdLippe, #43539), and CSP violations inside an <iframe> are now correctly reported (@TimvdLippe, #43652).

Work in progress

We’ve landed more work towards supporting IndexedDB, under --pref dom_indexeddb_enabled (@arihant2math, @gterzian, @Taym95, @jerensl, #42139, #42727, #43096, #43041, #42451, #43721, #43754, #42786), and towards supporting IntersectionObserver, under --pref dom_intersection_observer_enabled (@stevennovaryo, @mrobinson, #42251).

We’re continuing to implement document.execCommand() for rich text editing (@TimvdLippe, #43177), under --pref dom_exec_command_enabled. ‘beforeinput’ and ‘input’ events are now fired when executing supported and enabled commands (@TimvdLippe, #43087), the ‘defaultParagraphSeparator’ and ‘styleWithCSS’ commands are now supported (@TimvdLippe, #43028), and the ‘delete’ command is partially supported (@TimvdLippe, #43016, #43082).

We’re also working on the Font Loading API (@simonwuelker, #43286), under --pref dom_fontface_enabled. new FontFace() now accepts ArrayBuffer in its source argument (@simonwuelker, #43281). All of the features above are enabled in servoshell’s experimental mode.

Work on accessibility support for web contents continues under --pref accessibility_enabled.
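The constant-time fix above matters because a naive `==` on signatures returns as soon as two bytes differ, leaking timing information an attacker can use to forge a valid signature byte by byte. Servo's fix is in Rust, but the same idea can be illustrated in a few lines of Python using the standard library's `hmac.compare_digest` (this is a conceptual sketch, not Servo's code):

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 signature for the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, signature: bytes) -> bool:
    """Verify a signature in constant time.

    hmac.compare_digest always examines the full length of both
    inputs, unlike `==`, which bails out at the first mismatch.
    """
    expected = sign(key, message)
    return hmac.compare_digest(expected, signature)

key = b"secret-key"
good = sign(key, b"hello")
assert verify(key, b"hello", good)
assert not verify(key, b"hello", b"\x00" * len(good))
```

The same principle applies to the X25519 all-zero-secret check: the comparison against zero must not short-circuit either.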
There was a breaking change in the embedding API (@delan, @alice, #43029), and we’ve landed support for “grafting” the accessibility tree of a document into that of its containing webview (@delan, @alice, #43012, #43013, #43556). As a result, when you navigate, separate documents can have separate accessibility trees without complicating the embedder.

<link rel=modulepreload> is now partially supported (@Gae24, #42964), though recursive fetching of descendants is gated by --pref dom_allow_preloading_module_descendants (@Gae24, #43353).

For a long time, Servo has had some support for the Web Bluetooth API under --pref dom_bluetooth_enabled. We’ve recently reworked our implementation to adopt btleplug, the cross-platform Rust-native Bluetooth LE library (@webbeef, #43529, #43581).

We’re now implementing the Web Animations API, starting with AnimationTimeline and DocumentTimeline (@mrobinson, #43711).

We’ve landed more fixes to Servo’s async parser (@simonwuelker, #42930, #42959), under --pref dom_servoparser_async_html_tokenizer_enabled. If we can get the feature working more reliably (#37418), it could halve the energy Servo spends on parsing, lower latency for pages that don’t use document.write(), and even improve the html5ever API for the ecosystem.

For developers
Servo’s DevTools feature now has partial support for inspecting service workers (@CynthiaOketch, #43659), as well as using the navigation controls along the top of the UI (@brentschroeter, @eerii, #43026).
In the Inspector tab, we’ve fixed a bug where the UI stops updating when navigating to a new page (@brentschroeter, #43153).
In the Console tab, you can now evaluate JavaScript in web workers and service workers (@SharanRP, #43361, #43492).
In the Debugger tab, you can now Step In, Step Out, and Step Over (@eerii, @atbrakhi, #42907, #43040, #43042, #43135). We’ve landed partial support for the Scopes panel (@eerii, @atbrakhi, #43166, #43167, #43232), the Call stack panel (@atbrakhi, @eerii, #43015, #43039), and showing you information when hovering over objects, arrays, functions, and other values (@atbrakhi, @eerii, #43319, #43356, #43456, #42996, #42936, #42994).
We’ve fixed some long-outstanding bugs where the DevTools UI may stop responding due to protocol desyncs (@brentschroeter, @eerii, #43230, #43236), or due to messages from multiple Servo threads being interleaved (@brentschroeter, @eerii, #43472).
For developers of Servo itself, mach can be a bit opaque at times. To make mach more transparent and composable, we’ve added `mach print-env` and `mach exec` commands (@jschwe, #42888).

We’re also working on a new dev container, which will provide an alternative to our usual procedures for setting up a Servo build environment (@jschwe, @sagudev, #43127, #43131, #43139).
Embedding and automation

Breaking changes: Servo::set_accessibility_active() is now WebView::set_accessibility_active() (@delan, @alice, #43029), to make the API harder to misuse (see the docs for more details). What was previously named WebView::pinch_zoom() has been renamed to adjust_pinch_zoom(), and we’ve added a pinch_zoom() method that lets you read the current pinch zoom level (@chrisduerr, #43228). WebView::set_delegate(), set_clipboard_delegate(), and set_gamepad_provider() are now WebViewBuilder::delegate(), clipboard_delegate(), and gamepad_delegate() (@mrobinson, #43205, #43233). Note that set_gamepad_provider() is now gamepad_delegate(), consistent with the GamepadProvider rename below. WebViewDelegate::show_bluetooth_device_dialog() has been reworked to use the same “request object” pattern as the request_*() methods, giving you a BluetoothDeviceSelectionRequest with clear methods (@webbeef, #43580). GamepadProvider has been renamed to GamepadDelegate, and gamepad_provider() on WebView has been renamed to gamepad_delegate() (@mrobinson, #43233). The empty default implementation of EventLoopWaker::wake has been removed, because it almost never makes sense for a new custom impl to leave the method empty (@chrisduerr, @mrobinson, #43250). Opts::print_pwm is now DiagnosticsLogging::progressive_web_metrics (@mrobinson, #43209).

Removed from our API: Opts::nonincremental_layout (@mrobinson, #43207) – no replacement; this only really worked in legacy layout. Opts::user_stylesheets (@mrobinson, #43206) – use UserContentManager::add_stylesheet() instead; this is how servoshell’s --user-stylesheet option works.

You can now read and write cookies with SiteDataManager::cookies_for_url() and set_cookie_for_url() (@longvatrong111, #43600). ClipboardDelegate and StringRequest are now exposed to the public API, allowing you to implement custom clipboard delegates (@jdm, @chrisduerr, #43203, #43261). You can pass your custom delegate to WebViewBuilder::clipboard_delegate(). You can now get the EmbedderControlId associated with an InputMethodControl by calling InputMethodControl::id() (@chrisduerr, #43248). PixelFormat now implements Debug (@chrisduerr, @mrobinson, #43249).

We’ve improved the docs for Servo, ServoBuilder, WebViewBuilder, RenderingContext (@chrisduerr, #43229), EmbedderControlId, EmbedderControlRequest, EmbedderControlResponse, SimpleDialogRequest, AlertResponse, ConfirmResponse, PromptResponse, EmbedderMsg (@mukilan, #43564), ResourceReaderMethods (@jschwe, @mrobinson, #43769), servo::input_events (@mukilan, #43681), and WheelDelta (@yezhizhen, @mrobinson, #43210).

We fixed a deadlock in WebDriver that occurs under heavy use of actions from multiple input sources (@yezhizhen, #43202, #43169, #43262, #43275, #43301), and ‘pointerMove’ actions with a ‘duration’ are now smoothly interpolated (@yezhizhen, #42946, #43076). Add Cookie is now more conformant (@yezhizhen, #43690), which led to Servo developers landing a spec patch. ‘pause’ actions are now slightly more efficient (@yezhizhen, #43014), and we’ve fixed a bug where ‘wheel’ actions fail to interleave with other actions (@yezhizhen, #43126).

More on the web platform
Carets now blink in text fields (@mrobinson, #43128). You can configure or disable blinking carets with `--pref editing_caret_blink_time=0` or a duration in milliseconds. Clicking to move the caret is more forgiving now (@mrobinson, #43238), and moving the caret by a word at a time is more conventional on Windows and Linux, with Ctrl instead of Alt (@mrobinson, #43436). We’ve also fixed a bug where pressing the arrow keys in text fields both moves the caret (good) and scrolls the page (bad), and fixed a bug where the caret fails to render on empty lines (@mrobinson, @freyacodes, #43247, #42218).

Input has improved, with more responsive touchpad scrolling on Linux (@mrobinson, @chrisduerr, #43350). Pointer events and mouse events can now be captured across shadow DOM boundaries (@simonwuelker, #42987), and we’ve now started working towards shadow-DOM-compatible focus (@mrobinson, #43811). Pressing Space or Enter inside text fields no longer causes them to be clicked (@mrobinson, #43343).
The lang attribute is now taken into account when shaping, which is important for the correct rendering of Chinese and Japanese text (@RichardTjokroutomo, @mrobinson, #43447). ‘font-weight’ is now matched more accurately when no available font is an exact match (@shubhamg13, #43125).
Navigation is one of the most complicated parts of HTML: navigating can run some JavaScript that replaces the page, just run some JavaScript, or depending on the response, do nothing at all. < iframe> makes navigation doubly complicated: the document containing an <iframe> can observe and interact with the document inside the <iframe> in various ways, often synchronously. This has been the source of many bugs over the years, but we’ve recently fixed one of those major issues (@jdm, #43496).


`javascript:` URLs are a massive special case with many quirks, and <iframe> has its own big edge cases.

new Worker() now supports JS modules (@pylbrecht, @Gae24, #40365), and CanvasRenderingContext2D now supports drawing text with Variation Selectors, allowing you to control things like emoji presentation and CJK shaping (@yezhizhen, #43449).
Servo now fires ‘pointerover’, ‘pointerout’, ‘pointerenter’, and ‘pointerleave’ events on web content (@webbeef, #42736), ‘scroll’ events on VisualViewport (@stevennovaryo, #42771), and ‘scrollend’ events on Document, Element, and VisualViewport (@abdelrahman1234567, @mrobinson, #38773). We also fire ‘error’ events when event handler attributes contain syntax errors (@simonwuelker, #43178).
We’ve improved the default appearance of <summary> (@Loirooriol, #43111), <select> (@lukewarlow, #43175), <input type=file> (@lukewarlow, @AlexVasiluta, #43498, #43186), and <textarea> and <input type=text> and friends (@mrobinson, #43132), plus ‘::marker’ in mixed LTR/RTL content (@Loirooriol, #43201). <select> also now requires user interaction to open the picker (@SharanRP, #43485).
<form action>, <iframe src>, open(url) on XMLHttpRequest, new EventSource(url), and new Worker(url) now correctly resolve the URL with the page encoding (@SharanRP, @jdm, @jayant911, @Veercodeprog, @sabbCodes, #43521, #43554, #43572, #43537, #43634, #43588).
‘direction’ now works on grid containers (@nicoburns, #42118), SVG images can now be used in ‘border-image’ (@shubhamg13, #42566), ‘linear-gradient()’ now dithers to reduce banding (@Messi002, #43603), ‘letter-spacing’ no longer applies to invisible zero-width formatting characters (@simonwuelker, #42961), and ‘:active’ now matches disabled or non-focusable elements too, as long as they are being clicked (@webbeef, #42935).
DOMContentLoaded timings in PerformanceNavigationTiming are more accurate (@simonwuelker, #43151). PerformancePaintTiming and LargestContentfulPaint are more accurate too, taking <iframe> into account (@shubhamg13, #42149), and checking for and ignoring things like broken images and transparent backgrounds (@shubhamg13, #42833, #42975, #43475).
We’ve improved the conformance of JS modules (@Gae24, #43585), <button command> (@lukewarlow, #42883), <font size> (@shubhamg13, #43103), <link media> and <link type> (@TimvdLippe, #43043), <option selected> (@SharanRP, #43582), <script integrity> and <style integrity> (@Gae24, #42931), EventSource (@mishop-15, #42179), SubtleCrypto (@kkoyung, #42984, #43315, #43533, #43519), Worker (@simonwuelker, #43329), HTMLVideoElement (@shubhamg13, #43341), dataset on Element (@TimvdLippe, #43046), and querySelector() and querySelectorAll() (@simonwuelker, #42991).
We’ve fixed bugs related to error reporting (@simonwuelker, @xZaisk, @yezhizhen, @eyupcanakman, #43191, #43323, #43101, #43560), event loops (@jayant911, #43523), focus (@jakubadamw, #43431), quirks mode (@mrobinson, @Loirooriol, @lukewarlow, #42960, #43368), <iframe> (@TimvdLippe, @jdm, #43539, #43732), the ‘animationstart’ and ‘animationend’ events (@simonwuelker, #43454), the ‘touchmove’ event (@yezhizhen, #42926), CanvasRenderingContext2D (@simonwuelker, #43218), Worker (@bruno-j-nicoletti, #43213), ‘:active’ on <input> (@mrobinson, #43722), ‘overflow: scroll’ on ‘::before’ and ‘::after’ (@stevennovaryo, #43231), ‘position: absolute’ (@yoursanonymous, @Loirooriol, #43084), and <img> and <svg> without width or height attributes (@Loirooriol, #42666). Fixing that last bug led to Servo developers finding two spec issues!
We’ve landed partial support for using CSS counters in ‘list-style-type’ on ‘display: list-item’ and ‘content’ on ‘::marker’, but the counter values themselves are not calculated yet, so all list items still read as `0.` or similar. In any case, you can use a named counter style or ‘symbols()’ in ‘list-style-type’, and ‘counter()’ and ‘counters()’ in ‘content’ (@Loirooriol, #43111). We’ve also landed partial support for <marquee> and the HTMLMarqueeElement interface, including basic layout, but the contents are not animated yet (@mrobinson, @lukewarlow, #43520, #43610).
Servo now exposes several attributes that have no direct effect, but are needed for web compatibility (@lukewarlow, #43500, #43499, #43502, #43518):
- noHref on HTMLAreaElement
- hreflang , type , charset on HTMLAnchorElement
- useMap on HTMLInputElement and HTMLObjectElement
- longDesc on HTMLIFrameElement and HTMLFrameElement
Performance and stability

We’ve fixed sluggish scrolling on long documents like this page on docs.rs (@webbeef, @yezhizhen, #43074, #43138), and reduced the memory usage of BoxFragment by 10% (@stevennovaryo, #43056). about:memory now has a Force GC button (@webbeef, #42798), and no longer reports all processes as content processes in multiprocess mode (@webbeef, #42923).

Web fonts are no longer fetched more than once, and they no longer cause reflow when they fail to load (@minghuaw, #43382, #43595). We’re also working towards better caching for shaping results (@mrobinson, @lukewarlow, @Loirooriol, #43653). Event handler attribute lookup is more efficient now (@Narfinger, #43337), and we’ve made DOM tree walking more efficient in many cases (@Narfinger, #42781, #42978, #43476). crypto.subtle.encrypt(), decrypt(), sign(), verify(), digest(), importKey(), unwrapKey(), decapsulateKey(), and decapsulateBits() are more efficient now (@kkoyung, #42927), thanks to a recent spec update.

More of Servo now uses cheaper crossbeam channels instead of IPC channels, unless Servo is running in multiprocess mode, or avoids IPC altogether (@Narfinger, @jschwe, @Taym95, #42077, #43309, #42966). We’ve also reduced clones, allocations, conversions, comparisons, and borrow checks in many parts of Servo (@simonwuelker, @kkoyung, @mrobinson, @Narfinger, @yezhizhen, @TG199, #43212, #43055, #43066, #43304, #43452, #43717, #43780, #43088, #43226).

DOM data structures (#[dom_struct]) can refer to one another, with the help of garbage collection. But when DOM objects are being destroyed, those references can become invalid for a brief moment, depending on the order the GC finalizers run in. This can be unsound if those references are accessed, which is a very easy mistake to make if the type has an impl Drop. To help prevent that class of bug, we’re reworking our DOM types so that none of them have #[dom_struct] and impl Drop at the same time (@willypuzzle, #42937, #42982, #43018, #43071, #43222, #43288, #43544, #43563, #43631).

We’ve fixed a crash caused by an IPC resource leak when making many requests over time (@yezhizhen, #43381), and some bugs found by ThreadSanitizer and --debug-mozjs (@jdm, @Loirooriol, #42976, #42963, #43487). We’ve also fixed crashes in CanvasRenderingContext2D (@yezhizhen, #43449), Crypto (@rogerkorantenng, #43501), devtools (@simonwuelker, #43133), event handler attributes (@simonwuelker, #43178), Promise (@Narfinger, @jdm, #43470), and WebDriver (@Tarmil, @yezhizhen, #42739, #43381).

We’ve continued our long-running effort to use the Rust type system to make certain kinds of dynamic borrow failures impossible (@Narfinger, @Gae24, @Uiniel, @TimvdLippe, @yezhizhen, @sagudev, @PuercoPop, @pylbrecht, @arabson99, @jayant911, #42957, #43108, #43130, #43215, #43183, #43219, #43245, #43220, #43252, #43268, #43184, #43277, #43278, #43284, #43302, #43312, #43348, #43327, #43362, #43365, #43383, #43432, #43259, #43439, #43473, #43481, #43480, #43479, #43525, #43535, #43543, #43549, #43570, #43571, #43569, #43579, #43584, #43657, #43713).
Thanks to a wide range of people, many of whom were contributing to Servo for their first time, we’ve also landed a bunch of architectural improvements (@elomscansio, @mukilan, #43646), cleanups (@simartin, @SharanRP, @TG199, @sabbCodes, @niyabits, @eerii, @atbrakhi, #43276, #43285, #43532, #43778, #43771, #43566, #43567, #43587, #43140, #43316), and refactors (@sabbCodes, @arabson99, @jayant911, @StaySafe020, @saydmateen, @eerii, @TimvdLippe, @elomscansio, @CynthiaOketch, #43614, #43641, #43619, #43642, #43623, #43656, #43644, #43672, #43664, #43676, #43684, #43679, #43678, #43655, #43675, #43731, #43729, #43728, #43740, #43751, #43748, #43747, #43752, #43745, #43724, #43723, #43765, #43767, #43181, #43269, #43270, #43279, #43437, #43597, #43607, #43602, #43616, #43609, #43612, #43647, #43651, #43662, #43714, #43774).

Donations
Thanks again for your generous support! We are now receiving 7167 USD/month (+2.6% from February) in recurring donations. This helps us cover the cost of our speedy CI and benchmarking servers and one of our latest Outreachy interns, and fund maintainer work that helps more people contribute to Servo.
Servo is also on thanks.dev, and already 37 GitHub users (+5 from February) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.
We now have sponsorship tiers that allow you or your organisation to donate to the Servo project with public acknowledgement of your support. If you’re interested in this kind of sponsorship, please contact us at join@servo.org.
Use of donations is decided transparently via the Technical Steering Committee’s public funding request process , and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.
-
- April 29, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-29 rss
IDA Plugin Updates on 2026-04-29
Activity:
- ida-mcp-server
- 13f82c62: fix: function-size pre-filter (16 KB threshold) restores MAX_FUNCSIZE…
- eb63c538: fix: extend pathological-func pre-filter for Rust deep generics
- 86e2d687: feat: lazy-init C++ class recovery on first decompile
- f384fe25: feat: add Itanium C++ ABI class recovery tool (recover_cpp_classes)
- e8112416: feat: tier-4 raw disassembly fallback - guarantees 100% coverage
- ed57dab4: feat: handle extern symbols + bump MAX_FUNCSIZE for "too big function"
- dbf27026: fix: handle thunks/trampolines + null-JSON in decompile_function
- 6ac2f0e4: fix: tighten Go-symbol regex - require trailing '.' to avoid C++ fals…
- python-elpida_core.py
- 5a88e62b: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T23:37Z
- 15a038c2: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T23:15Z
- 6698cd11: HERMES correction note: clear stale items before daily-13
- 7628fb89: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T22:53Z
- 2d9c8c09: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T22:30Z
- d24ffc93: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T22:05Z
- f184eab8: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T21:40Z
- 9f5d78c2: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T21:12Z
- c7e897f3: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T20:44Z
- bd98077e: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T20:20Z
- quokka
- 43316396: Merge pull request #110 from quarkslab/dependabot/github_actions/acti…
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release

## New releases

- [clang-include](https://github.com/oxikkk/ida-clang-include): 1.1.0
-
🔗 r/Yorkshire Flamborough Cliffs rss
The amazing cliffs today at flamborough submitted by /u/J_1989_EDI
[link] [comments]
-
🔗 r/Yorkshire How driving Yorkshire Dales B road in the evening is like rss
submitted by /u/alanas4201
[link] [comments]
-
🔗 Simon Willison LLM 0.32a0 is a major backwards-compatible refactor rss
I just released LLM 0.32a0, an alpha release of my LLM Python library and CLI tool for accessing LLMs, with some consequential changes that I've been working towards for quite a while.
Previous versions of LLM modeled the world in terms of prompts and responses. Send the model a text prompt, get back a text response.
```python
import llm

model = llm.get_model("gpt-5.5")
response = model.prompt("Capital of France?")
print(response.text())
```
This made sense when I started working on the library back in April 2023. A lot has changed since then!
LLM provides an abstraction over thousands of different models via its plugin system. The original abstraction - of text input that returns text output - was no longer able to represent everything I needed it to.
Over time LLM itself has grown attachments to handle image, audio, and video input, then schemas for outputting structured JSON, then tools for executing tool calls. Meanwhile LLMs kept evolving, adding reasoning support and the ability to return images and all kinds of other interesting capabilities.
LLM needs to evolve to better handle the diversity of input and output types that can be processed by today's frontier models.
The 0.32a0 alpha has two key changes: model inputs can be represented as a sequence of messages, and model responses can be composed of a stream of differently typed parts.
Prompts as a sequence of messages
LLMs accept input as text, but ever since ChatGPT demonstrated the value of a two-way conversational interface, the most common way to prompt them has been to treat that input as a sequence of conversational turns.
The first turn might look like this:
```
user: Capital of France?
assistant:
```

(The model then gets to fill out the reply from the assistant.)
But each subsequent turn needs to replay the entire conversation up to that point, as a sort of screenplay:
```
user: Capital of France?
assistant: Paris
user: Germany?
assistant:
```

Most of the JSON APIs from the major vendors follow this pattern. Here's what the above looks like using the OpenAI chat completions API, which has been widely imitated by other providers:
```shell
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.5",
    "messages": [
      {"role": "user", "content": "Capital of France?"},
      {"role": "assistant", "content": "Paris"},
      {"role": "user", "content": "Germany?"}
    ]
  }'
```
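The replay bookkeeping itself is just list manipulation, which is worth seeing without any network calls in the way. A minimal sketch in plain Python, where `fake_complete` is a stub standing in for the real API (the capitals lookup is invented for illustration):

```python
def user(content: str) -> dict:
    """Build a user message in the chat-completions shape."""
    return {"role": "user", "content": content}

def assistant(content: str) -> dict:
    """Build an assistant message in the chat-completions shape."""
    return {"role": "assistant", "content": content}

def fake_complete(messages: list) -> str:
    """Stub for the API: it receives the FULL message history every
    time, because the server itself keeps no conversation state."""
    capitals = {"Capital of France?": "Paris", "Germany?": "Berlin"}
    return capitals[messages[-1]["content"]]

history = [user("Capital of France?")]
history.append(assistant(fake_complete(history)))

# The follow-up turn replays everything said so far, plus the new question.
history.append(user("Germany?"))
history.append(assistant(fake_complete(history)))

print([m["content"] for m in history])
# → ['Capital of France?', 'Paris', 'Germany?', 'Berlin']
```

The point is that "conversation" is purely a client-side construct: each request is the whole screenplay so far.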
Prior to 0.32, LLM modeled these as conversations:
```python
model = llm.get_model("gpt-5.5")
conversation = model.conversation()
r1 = conversation.prompt("Capital of France?")
print(r1.text())  # Outputs "Paris"
r2 = conversation.prompt("Germany?")
print(r2.text())  # Outputs "Berlin"
```
This worked if you were building a conversation with the model from scratch, but it didn't provide a way to feed in a previous conversation from the start. This made tasks like building an emulation of the OpenAI chat completions API much harder than they should have been.
The `llm` CLI tool worked around this through a custom mechanism for persisting and inflating conversations using SQLite, but that never became a stable part of the LLM API - and there are many places you might want to use the Python library without committing to SQLite as the storage layer.

The new alpha now supports this:
```python
import llm
from llm import user, assistant

model = llm.get_model("gpt-5.5")
response = model.prompt(messages=[
    user("Capital of France?"),
    assistant("Paris"),
    user("Germany?"),
])
print(response.text())
```
The `llm.user()` and `llm.assistant()` functions are new builder functions designed to be used within that `messages=[]` array.

The previous `prompt=` option still works, but LLM upgrades it to a single-item messages array behind the scenes.

You can also now reply to a response, as an alternative to building a conversation:
```python
response2 = response.reply("How about Hungary?")
print(response2)  # Default __str__() calls .text()
```
Streaming parts
The other major new interface in the alpha concerns streaming results back from a prompt.
Previously, LLM supported streaming like this:
```python
response = model.prompt("Generate an SVG of a pelican riding a bicycle")
for chunk in response:
    print(chunk, end="")
```
Or this async variant:
```python
import asyncio
import llm

model = llm.get_async_model("gpt-5.5")
response = model.prompt("Generate an SVG of a pelican riding a bicycle")

async def run():
    async for chunk in response:
        print(chunk, end="", flush=True)

asyncio.run(run())
```
Many of today's models return mixed types of content. A prompt run against Claude might return reasoning output, then text, then a JSON request for a tool call, then more text content.
Some models can even execute tools on the server-side, for example OpenAI's code interpreter tool or Anthropic's web search. This means the results from the model can combine text, tool calls, tool outputs and other formats.
Multi-modal output models are starting to emerge too, which can return images or even snippets of audio intermixed into that streaming response.
The new LLM alpha models these as a stream of typed message parts. Here's what that looks like as a Python API consumer:
```python
import asyncio
import llm

model = llm.get_model("gpt-5.5")
prompt = "invent 3 cool dogs, first talk about your motivations"

def describe_dog(name: str, bio: str) -> str:
    """Record the name and biography of a hypothetical dog."""
    return f"{name}: {bio}"

def sync_example():
    response = model.prompt(
        prompt,
        tools=[describe_dog],
    )
    for event in response.stream_events():
        if event.type == "text":
            print(event.chunk, end="", flush=True)
        elif event.type == "tool_call_name":
            print(f"\nTool call: {event.chunk}(", end="", flush=True)
        elif event.type == "tool_call_args":
            print(event.chunk, end="", flush=True)

async def async_example():
    model = llm.get_async_model("gpt-5.5")
    response = model.prompt(
        prompt,
        tools=[describe_dog],
    )
    async for event in response.astream_events():
        if event.type == "text":
            print(event.chunk, end="", flush=True)
        elif event.type == "tool_call_name":
            print(f"\nTool call: {event.chunk}(", end="", flush=True)
        elif event.type == "tool_call_args":
            print(event.chunk, end="", flush=True)

sync_example()
asyncio.run(async_example())
```
Sample output (from just the first sync example):
My motivation: create three memorable dogs with distinct “cool” styles—one cinematic, one adventurous, and one charmingly chaotic—so each feels like they could star in their own story.
Tool call: describe_dog({"name": "Nova Jetpaw", "bio": "A sleek silver-gray whippet who wears tiny aviator goggles and loves sprinting along moonlit beaches. Nova is fearless, elegant, and rumored to outrun drones just for fun."}
Tool call: describe_dog({"name": "Mochi Thunderbark", "bio": "A fluffy corgi with a dramatic black-and-gold bandana and the confidence of a rock star. Mochi is short, loud, loyal, and leads a neighborhood 'security patrol' made entirely of squirrels."}
Tool call: describe_dog({"name": "Atlas Snowfang", "bio": "A massive white husky with ice-blue eyes and a backpack full of trail snacks. Atlas is calm, heroic, and always knows the way home—even during blizzards, fog, or confusing camping trips."}

At the end of the response you can call `response.execute_tool_calls()` to actually run the functions that were requested, or send a `response.reply()` to have those tools called and their return values sent back to the model:

```python
print(response.reply("Tell me about the dogs"))
```
This new mechanism for streaming different token types means the CLI tool can now display "thinking" text in a different color from the text in the final response. The thinking text goes to stderr so it won't affect results that are piped into other tools.
This example uses Claude Sonnet 4.6 (with an updated streaming event version of the llm-anthropic plugin) as Anthropic's models return their reasoning text as part of the response:
```
llm -m claude-sonnet-4.6 'Think about 3 cool dogs then describe them' \
  -o thinking_display 1
```
You can suppress the output of reasoning tokens using the new `-R`/`--no-reasoning` flag. Surprisingly that ended up being the only CLI-facing change in this release.

A mechanism for serializing and deserializing responses
As mentioned earlier, LLM has quite inflexible code at the moment for persisting conversations to SQLite. I've added a new mechanism in 0.32a0 that should provide Python API users a way to roll their own alternative:
```python
serializable = response.to_dict()
# serializable is a JSON-style dictionary
# store it anywhere you like, then inflate it:
response = Response.from_dict(serializable)
```
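Since the serialized form is a plain JSON-style dictionary, persistence can be as simple as a JSON file. A minimal sketch using a stand-in dictionary (a real one would come from `response.to_dict()`, and `Response.from_dict()` would inflate it):

```python
import json
import tempfile

# Stand-in for response.to_dict() - the real dictionary comes from llm.
serializable = {
    "model": "gpt-5.5",
    "prompt": "Capital of France?",
    "text": "Paris",
}

# Store it anywhere you like - here, a throwaway JSON file...
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(serializable, f)
    path = f.name

# ...then load it later and inflate it (with the real library,
# llm.Response.from_dict(restored) instead of using the dict directly).
with open(path) as f:
    restored = json.load(f)

print(restored["text"])  # Paris
```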
The dictionary this returns is actually a `TypedDict` defined in the new llm/serialization.py module.

What's next?
I'm releasing this as an alpha so I can upgrade various plugins and exercise the new design in real world environments for a few days. I expect the stable 0.32 release will be very similar to this alpha, unless alpha testing reveals some design flaw in the way I've put this all together.
There's one remaining large task: I'd like to redesign the SQLite logging system to better capture the more finely grained details that are returned by this new abstraction.
Ideally I'd like to model this as a graph, to best support situations like an OpenAI-style chat completions API where the same conversations are constantly extended and then repeated with every prompt. I want to be able to store those without duplicating them in the database.
I'm undecided as to whether that should be a feature in 0.32 or I should hold it for 0.33.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 sacha chua :: living an awesome life What's in the Emacs newcomers-presets theme? rss
The development version of Emacs as of Feb 2026 includes a `newcomers-presets` theme that can be enabled from the splash screen or by using `M-x load-theme RET newcomers-presets RET`. (Not sure how to run that command? Start with the guided tour/tutorial or choose "Help - Tutorial" from the Emacs menu.)
Figure 1: Newcomer presets are on the splash screen

If you like it and want to make it automatically enabled in future Emacs sessions:
- Use `M-x customize-themes`
- Select the checkbox next to `newcomers-presets` by either clicking on it or using TAB to navigate to it and then pressing RET.
- Click on or use RET to select Save Theme Settings.
Figure 2: Saving the theme setting

I'm not sure if someone else has made notes on what it does yet, so I thought I'd put this together.
Most Emacs newbies aren't running the development version of Emacs at the moment, but it will eventually make its way into Emacs 31. I wonder if it might be a good idea to extract the theme as a package that people can use `use-package` on if they want. I am not entirely sure about using themes for this, but it's worth an experiment.

Here's a list of what newcomers-presets includes. I'll also include the corresponding Emacs Lisp in case you want to copy just that part, or you can also get it as copy-of-newcomers-presets.el. If you want to load it in your existing Emacs, you can add `(load-file "path/to/copy-of-newcomers-presets.el")` to your InitFile. You can use `C-h f` (describe-function) or `C-h v` (describe-variable) to learn more about the functions or variables it changes. I'm manually making this page, so there might have been some changes to etc/themes/newcomers-presets-theme.el since .

```elisp
;; -*- lexical-binding: t -*-
;; Based on https://github.com/emacs-mirror/emacs/tree/master/etc/themes/newcomers-presets-theme.el
```

Keyboard shortcuts
Some commands allow you to use just the last part of the keyboard shortcut in order to repeat them. Related: Repeat Mode: Stop Repeating Yourself | Emacs Redux
```elisp
(setopt repeat-mode t)
```

Appearance
Scrolling happens more smoothly instead of jumping by character.
```elisp
(setq pixel-scroll-mode t)
```

Line numbers are shown in both text and code buffers.

```elisp
(add-hook 'prog-mode-hook 'display-line-numbers-mode)
(add-hook 'text-mode-hook 'display-line-numbers-mode)
```

Column numbers are shown in the mode line.

```elisp
(setopt column-number-mode t)
```

If you change your system-wide fixed-width font, Emacs will also update to the system-defined font dynamically.

```elisp
(setopt font-use-system-font t)
```

You can resize your frames or windows to any size instead of being limited to whole-character steps.

```elisp
(setopt frame-resize-pixelwise t)
(setopt window-resize-pixelwise t)
```

The frame size will stay the same even if you change the font, menu bar, tool bar, tab bar, internal borders, fringes, or scroll bars.

```elisp
(setopt frame-inhibit-implied-resize t)
```

If a mode line is wider than the currently selected window, it is compressed by replacing repeating spaces with a single space.

```elisp
(setopt mode-line-compact 'long)
```

Saving data between sessions
Minibuffer history is saved between Emacs sessions so you can use `M-x` and then use `M-p` and `M-n` to navigate your history.

```elisp
(setopt savehist-mode t)
```

Your place in a file is saved between Emacs sessions.

```elisp
(setopt save-place-mode t)
```

Your recently-opened files are saved between Emacs sessions, so you can use `M-x find-file` and other commands and then use `M-p` and `M-n` to navigate your history.

Completion
This set of options affects the completion candidates (the suggestions that appear when you press `M-x` and then `TAB`, or when you use `TAB` at other prompts).

You can use the arrow keys to select completion candidates in the minibuffer, and you can use RET to select the highlighted one.

```elisp
(setopt minibuffer-visible-completions t)
```

Additional details for completion suggestions are shown before or after the suggestions. For example, `M-x describe-symbol` (`C-h o`) shows additional information.

```elisp
(setopt completions-detailed t)
```

Completion candidates can be grouped together if the function that sets up the completion specifies it.

```elisp
(setopt completions-group t)
```

When you press TAB to see the completion candidates for a prompt (for example, `M-x` and then `TAB`), the first TAB will display the completion list, and the second TAB will select the buffer.

```elisp
(setopt completion-auto-select 'second-tab)
```

This Completions buffer will update as you type so that you can narrow down the candidates.

```elisp
(setopt completion-eager-update t)
```

The following completion styles are set up:
- basic: You can type the start of a candidate. (ex: `abc` will list `abcde` and `abcxyz`)
- partial-completion: You can specify multiple words and each word will be considered as the prefix for matching candidates. For example, if you type `a-b`, that will match `apple-banana` if it is one of the options.
- emacs22: When you move your point to the middle of some text and then complete, the text before your point is used to filter the completion and the text after your point is added to the end of the result.
More info: Completion styles
```elisp
(setopt completion-styles '(basic emacs22 flex))
```

Automatically show the completion preview based on the text at point. `TAB` accepts the completion suggestion and `M-i` completes the longest common prefix.

```elisp
(setopt global-completion-preview-mode t)
```

`TAB` first tries to indent the current line. If the line was already indented, then Emacs tries to complete the thing at point. Some programming language modes have their own variable to control this, e.g., `c-tab-always-indent`, so it might need additional customization.

```elisp
(setopt tab-always-indent 'complete)
```

Help
If you pause after typing the first part of a keyboard shortcut (ex: `C-c`), Emacs will display the keyboard shortcuts that you can continue with.

```elisp
(setopt which-key-mode t)
```

Tab bar
The tab bar is always shown. Tabs let you save the way you have one or more windows arranged, and which buffers are displayed in those windows. You can click on a tab or use `M-x tab-switch` to switch to that configuration, or click on the `+` sign or use `M-x tab-new` to add another tab. More info: Tab Bars `(info "(emacs) Tab Bars")`

```elisp
(setopt tab-bar-show 0)
```
Figure 4: The tab bar is displayed at the top of a buffer. The tabs are saved between Emacs sessions.
```elisp
(setopt tab-bar-history-mode t)
```

The Dired file manager
Dired buffers are refreshed whenever you revisit a directory.
```elisp
(setopt dired-auto-revert-buffer t)
```

You can use the mouse to drag files in Dired. Ctrl+left-drag copies the file, Shift+left-drag moves it, Meta+left-drag links it. You can also drag them to other applications on X11, Haiku, Mac OS, and GNUstep.
```elisp
(setopt dired-mouse-drag-files t)
```

Show the current directory when prompting for a shell command. This affects `shell-command` and `async-shell-command`.

```elisp
(setopt shell-command-prompt-show-cwd t)
```

Package management
If you open a file for which Emacs has optional packages that provide extra support in GNU ELPA or NonGNU ELPA, Emacs will add [Upgrade?] to the mode line to make it easier to install the appropriate package.
Figure 6: Package autosuggest adds an Upgrade? to the modeline when you open a file for which Emacs has an optional package available

```elisp
(setopt package-autosuggest-mode t)
```

When you're working with `M-x list-packages`, `x` (`M-x package-menu-execute`) now requires you to select something instead of acting on the current package by default. Press `i` (`package-menu-mark-install`) to mark a package for installation, press `d` (`package-menu-mark-delete`) to mark a package for deletion, press `u` (`package-menu-mark-unmark`) to unmark a package, and press `x` (`package-menu-execute`) to execute the operations.

```elisp
(setopt package-menu-use-current-if-no-marks nil)
```

Code
In code buffers, Emacs will display errors and warnings by using `flymake-mode`.

```elisp
(add-hook 'prog-mode-hook 'flymake-mode)
```

If you use `M-x compile`, the `*compilation*` window will scroll as new output appears, but it will stop at the first error so that you can investigate more easily.

```elisp
(setopt compilation-scroll-output 'first-error)
```

You can Ctrl+left-click on a function name to jump to its definition using `xref-find-definitions-at-mouse`.

```elisp
(setopt global-xref-mouse-mode t)
```

Emacs will automatically insert matching parentheses, brackets, and braces.
```elisp
(setopt electric-pair-mode t)
```

Emacs will generally use spaces instead of tabs when indenting code.

```elisp
(setopt indent-tabs-mode nil)
```

If there is a project-specific .editorconfig file, Emacs will follow those settings. (More about EditorConfig)
```elisp
(setopt editorconfig-mode t)
```

Tags tables are automatically regenerated whenever you save files. This uses Etags to make it easier to jump to the definitions of functions or variables.

```elisp
(setopt etags-regen-mode t)
```

Version control

Files are reloaded from disk if they have been updated by your version control system.

```elisp
(setopt vc-auto-revert-mode t)
```

If a directory has changed in version control but you have some modified files, Emacs will ask if you want to save those changed files.

```elisp
(setopt vc-dir-save-some-buffers-on-revert t)
```

If you use `vc-find-revision` to go to a specific version of the file, it is displayed in a temporary buffer and does not replace the copy that you currently have.

```elisp
(setopt vc-find-revision-no-save t)
```

If you open a symbolic link to a file under version control, Emacs will open the real file and display a message. That way, it will still be version-controlled.

```elisp
(setopt vc-follow-symlinks t)
```

`C-x v I` and `C-x v O` now have additional keyboard shortcuts. For example, `C-x v I L` is `vc-root-log-incoming` and `C-x v O L` is `vc-root-log-outgoing`. Use `C-x v I C-h` and `C-x v O C-h` to see other commands.

```elisp
(setopt vc-use-incoming-outgoing-prefixes t)
```

The version control system is automatically determined for all buffers. (Standard Emacs just checks it in dired, shell, eshell, or compilation-mode buffers.)

```elisp
(setopt vc-deduce-backend-nonvc-modes t)
```

Things I haven't been able to figure out yet
On Linux with X11, Haiku, or macOS / GNUstep: When a buffer has an associated filename, you can drag the filename from the modeline and drop it into other programs. (Haven't been able to get this working.)
```elisp
(setopt mouse-drag-mode-line-buffer t)
```

You can e-mail me at sacha@sachachua.com.
- Use
-
🔗 r/york Help me reach £500 donations for York's homeless before tomorrow? rss
Hi all! Some of you might remember my last post and how much amazing support I got from our local Reddit group when I first began fundraising. This will be my last update before the sleep out actually takes place!

Tomorrow evening I will be taking part in York's annual Charity Sleep Out to help raise money for some of the wonderful charities in York who provide food and other essential support to those in our local area who are homeless or otherwise in need. I've had the absolute pleasure of volunteering with Hoping Kitchen on Sundays and I know how well-loved KEYS is, so it's a really worthy cause.

Whilst it won't be even close to what those who sleep rough experience on a daily basis, I am the kind of person who had to borrow a wooly hat from a friend because I would very much usually rather be indoors doing literally anything outside ever. Most importantly, my pet parrots and bunnies will miss me very much and probably give me a few nips upon my return for leaving them without their usual bedtime snuggles for an evening.

Would be really great to get to £500 before the event begins tomorrow! I'll try and remember to post some pictures whilst we're camping out tomorrow to keep you all updated

https://www.givewheel.com/fundraising/14777/kayleighs-york-charity-sleepout-2026/

submitted by /u/kittywenham
[link] [comments]
-
🔗 sacha chua :: living an awesome life Working on the Emacs newbie experience rss
The Emacs Carnival April 2026 theme of newbies/starter kits nudged me to think about how new users can learn what they need in order to get started. In particular, I wanted to think about these questions that newbies might have:
- Is it worth it?
- How do I start?
- Should I use a starter kit? How?
- I'm stuck, how can I get help?
- This is overwhelming. How do I make it more manageable?
I worked on some pages in the EmacsWiki:
- EmacsWiki: Emacs Newbie
- I removed or deemphasized some links that might be confusing for newbies.
- EmacsWiki: Learning Emacs
- I reorganized the items and added some more notes.
- EmacsWiki: Emacs Screencasts
- I tweaked the beginner information section.
- I added a section for starter kits.
- EmacsWiki: Starter Kits
- I added "Things to know before you start" to help newbies who might not have Git installed or who might not know how to get to the command line. I also organized the starter kits by type.
- EmacsWiki: Keybinding Guide
- Replaced the link with Mastering Emacs. The GNU copy of the Emacs FAQ is not responding to me at the moment even though downforeveryoneorjustme says that it's up, boo.
People often recommend Emacs News to people who want to learn more about what's going on in the Emacs community, so I added some notes to that one as well.
- I added an introduction to the Emacs News category page to direct new people to some tips for making the most of Emacs News
- I moved the e-mail subscription above the RSS feed, since people are more familiar with e-mail as a subscription mechanism.
- I added a tutorial for setting up newsticker within Emacs.
- I set up some shorter URLs (sachachua.com/emacs-news, sach.ac/emacs-news, yayemacs.com/news).
Just gotta find some newbies to test these ideas with… Email me! =)
You can e-mail me at sacha@sachachua.com.
-
🔗 sacha chua :: living an awesome life Emacs beginner resources rss
Updated my page from 2014 with more recent resources.
Welcome to Emacs! Thank you for considering this strange and wonderful text editor. Here are some resources that can help you on your journey.
- GNU Emacs: A Guided Tour: This page has screenshots and a short tutorial.
- The EmacsNewbie page on EmacsWiki
- An Emacs Tutorial: Beginner's Guide to Emacs - Mastering Emacs
Many people use Emacs just for Org Mode. Here are some resources for getting started:
- Org mode beginning at the basics
- Top (Org Mode Compact Guide)
- james-stoup/emacs-org-mode-tutorial: A primer for users trying to make sense of Org Mode · GitHub
You can view 1 comment or e-mail me at sacha@sachachua.com.
-
🔗 r/Leeds T&A link - Tuesday 28th - Briggate: "4 teen boys - aged 13 to 16 - arrested following city centre stabbing incident" rss
Reports that a 34-year-old man was taken to hospital after being stabbed during an altercation near the McDonalds on Briggate on Tuesday night.
Also in the YEP:
Of course it was outside the McDonalds :(
I hope those responsible are dealt with robustly to send the right message.
submitted by /u/thetapeworm
[link] [comments] -
🔗 r/Leeds Bike stolen city centre rss
Victoria Pendleton bike stolen today from outside Leeds train station between 12:30-16:30 :(
Please dm if any information thank you
submitted by /u/Few_Health_5530
[link] [comments] -
🔗 r/Yorkshire Whitby steam trains return delayed rss
submitted by /u/CaptainYorkie1
[link] [comments]
-
🔗 Andrew Ayer - Blog FastCGI: 30 Years Old and Still the Better Protocol for Reverse Proxies rss
HTTP reverse proxying is a minefield. Just the other week, a researcher disclosed a desync vulnerability in Discord's media proxy that allowed spying on private attachments. This is not unusual; these vulnerabilities just keep coming.
The problem is the widespread use of HTTP as the protocol between reverse proxies and backends, even though it's unfit for the job. But we don't have to use HTTP here. There's a 30-year-old protocol for proxy-to-backend communication that avoids HTTP's pitfalls. It's called FastCGI, and its specification was released 30 years ago today.
FastCGI is a Wire Protocol, not a Process Model
It's true that some web servers can automatically spawn FastCGI processes to handle requests for files with the
`.fcgi` extension, much like they would for `.cgi` files. But you don't have to use FastCGI this way - you can also use the FastCGI protocol just like HTTP, with requests sent over a TCP or UNIX socket to a long-running daemon that handles them as if they were HTTP requests.

For example, in Go all you have to do is import the net/http/fcgi standard library package and replace `http.Serve` with `fcgi.Serve`:

Go HTTP

```go
l, _ := net.Listen("tcp", "127.0.0.1:8080")
http.Serve(l, handler)
```

Go FastCGI

```go
l, _ := net.Listen("tcp", "127.0.0.1:8080")
fcgi.Serve(l, handler)
```
http.ResponseWriterandhttp.Requesttypes.Popular proxies like Apache, Caddy, nginx, and HAProxy support FastCGI backends, and the configuration is simple:
nginx HTTP
proxy_pass http://localhost:8080;nginx FastCGI
fastcgi_pass localhost:8080; include fastcgi_params;Show more config examples
Apache HTTP
ProxyPass / http://localhost:8080/Apache FastCGI
ProxyPass / fcgi://localhost:8080/Caddy HTTP
reverse_proxy localhost:8080 { transport http { } }Caddy FastCGI
reverse_proxy localhost:8080 { transport fastcgi { } }HAProxy HTTP
backend app_backend server s1 localhost:8080HAProxy FastCGI
fcgi-app fcgi_app docroot / backend app_backend use-fcgi-app fcgi_app server s1 localhost:8080 proto fcgiWhy HTTP Sucks for Reverse Proxies: Desync Attacks / Request Smuggling
HTTP/1.1 has the tragic property of looking simple on the surface (it's just text!) but actually being a nightmare to parse robustly. There are so many different ways to format the same HTTP message, and there are too many edge cases and ambiguities for implementations to handle consistently. As a result, no two HTTP/1.1 implementations are exactly the same, and the same message can be parsed differently by different parsers.
The most serious problem is that there is no explicit framing of HTTP messages - the message itself describes where it ends, and there are multiple ways for a message to do that, all with their own edge cases. Implementations can disagree about where a message ends, and consequently, where the next message begins. This is the foundation of HTTP desync attacks, also known as request smuggling, wherein a reverse proxy and a backend disagree about the boundaries between HTTP messages, causing all sorts of nightmare security issues, such as the Discord vulnerability I linked above.
A lot of people seem to think you can just patch the parser divergences, but this is a losing strategy. James Kettle just keeps finding new ones. After finding another batch last year, he declared "HTTP/1.1 must die".
HTTP/2, when consistently used between the proxy and backend, fixes desync by putting clear boundaries around messages, but FastCGI has been doing that since 1996 with a simpler protocol. For context, nginx has supported FastCGI backends since its first release, but only got support for HTTP/2 backends in late 2025. Apache's support for HTTP/2 backends is still "experimental".
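For a sense of how simple that framing is, here's a sketch of the 8-byte FastCGI record header from the spec (1 byte version, 1 byte type, 2 bytes request ID, 2 bytes content length, 1 byte padding length, 1 reserved byte), written in Python rather than Go to keep it self-contained:

```python
import struct

# The FastCGI record header from the 1996 spec: 8 bytes of explicit
# framing - version, type, request ID, content length, padding length.
FCGI_HEADER = struct.Struct("!BBHHBx")
FCGI_STDIN = 5  # record type carrying request body data

def pack_record(rec_type, request_id, content):
    # Frame `content` as one record; lengths are stated up front.
    pad = (8 - len(content) % 8) % 8  # pad records to an 8-byte boundary
    header = FCGI_HEADER.pack(1, rec_type, request_id, len(content), pad)
    return header + content + b"\x00" * pad

def unpack_record(data):
    # The header says exactly where this record ends - no ambiguity.
    version, rec_type, req_id, clen, plen = FCGI_HEADER.unpack_from(data)
    body = data[8:8 + clen]
    return rec_type, req_id, body, 8 + clen + plen  # next record's offset

# An empty FCGI_STDIN record marks end-of-body, much like a zero chunk,
# but inside fixed, length-prefixed framing.
wire = pack_record(FCGI_STDIN, 1, b"hello") + pack_record(FCGI_STDIN, 1, b"")
rec_type, req_id, body, next_off = unpack_record(wire)
print(body)  # b'hello'
```

Every record declares its own length, so two implementations can never disagree about where a message ends.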
Why HTTP Sucks for Reverse Proxies: Untrusted Headers
If desync attacks were the only problem, you could just use HTTP/2 and call it a day. Unfortunately, there's another problem: HTTP has no robust way for the proxy to convey trusted information about the request, such as the real client IP address, authenticated username (if the proxy handles authentication), or client certificate details (if mTLS is used).
The only option is to stick this information in HTTP headers, alongside the headers proxied from the client, without a clear structural distinction between trusted headers from the proxy and untrusted headers from a potential attacker. For example, the `X-Real-IP` header is often used to convey the client's real IP address. In theory, if your proxy correctly deletes all instances of the `X-Real-IP` header (not just the first, and including case variations like `x-REaL-ip`) before adding its own, you're safe.

In practice, this is a minefield and there are an awful lot of ways your backend can end up trusting attacker-controlled data. Your proxy really needs to delete not just `X-Real-IP`, but any header that's used for this sort of thing, just in case some part of your stack relies on it without your knowledge. For example, the Chi middleware determines the client's real IP address by looking at the `True-Client-IP` header first. Only if `True-Client-IP` doesn't exist does it use `X-Real-IP`. So even if your proxy does the right thing with `X-Real-IP`, you can still be pwned by an attacker sending a `True-Client-IP` header.
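A toy illustration of that failure mode (hypothetical helper names; the backend lookup mirrors the Chi behavior described above):

```python
# Hypothetical proxy and backend, mirroring the Chi lookup order above.
def proxy_forward(client_headers, client_ip):
    # Strip every case variation of X-Real-IP, then add the trusted value.
    headers = [(k, v) for k, v in client_headers if k.lower() != "x-real-ip"]
    headers.append(("X-Real-IP", client_ip))
    return headers

def backend_real_ip(headers):
    # Chi-style: prefer True-Client-IP, fall back to X-Real-IP.
    lookup = {k.lower(): v for k, v in headers}
    return lookup.get("true-client-ip") or lookup.get("x-real-ip")

attacker = [("x-REaL-ip", "6.6.6.6"), ("True-Client-IP", "6.6.6.6")]
forwarded = proxy_forward(attacker, "203.0.113.7")
print(backend_real_ip(forwarded))  # 6.6.6.6 - the spoofed header wins
```

The proxy did everything right for `X-Real-IP`, yet the backend still trusts attacker-controlled data, because nothing structurally separates proxy-set headers from client-set ones.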
FastCGI defines some standard parameters such as
REMOTE_ADDRto convey the real client IP address. Go'snet/http/fcgipackage automatically uses this parameter to populate theRemoteAddrfield ofhttp.Request, rendering middleware unnecessary. It Just Works. Proxies can also use non-standard parameters to report whether HTTPS was used, what TLS ciphersuite was negotiated, and what client certificate was presented, if any. Go automatically sets theRequest'sTLSfield to a non-nil (but empty) value if the request used HTTPS, which is very handy for enforcing the use of HTTPS. Thefcgi.ProcessEnvfunction can be used to access the full set of trusted parameters sent by the proxy.Closing Thoughts
If FastCGI is the better protocol, why isn't it more popular? Maybe it's the name - while capitalizing on CGI's popularity made sense in 1996, CGI feels dated in 2026. There's also an enduring lack of awareness of the security problems with HTTP reverse proxying. Watchfire described desync attacks in 2005, and gave a prescient warning of their intractability, but the attacks were inexplicably ignored for over a decade. In an alternate timeline, Watchfire's research was taken seriously and people went looking for other protocols for reverse proxies.
FastCGI is very usable today, and has been in production use at SSLMate for over 10 years. That said, using a vintage technology has some downsides. It was never updated to support WebSockets. The tooling is not as good. For example, curl has no way to make requests to a FastCGI server. It supports FTP, Gopher, and even SMTP (however that works), but not FastCGI. When I benchmarked Go's FastCGI server behind a variety of reverse proxies, some workloads had worse throughput compared to HTTP/1.1 or HTTP/2. I don't think that's inherent to the protocol, but a reflection that FastCGI code paths have not been optimized as much as HTTP.
Despite these shortcomings, I still think FastCGI is worth using. I don't use WebSockets, and it's fast enough for my use case (and maybe yours too). If it ever became the bottleneck, I'd rather buy more hardware than deal with the nightmare of HTTP reverse proxying.
Happy 30th birthday, FastCGI!
-
🔗 r/LocalLLaMA mistralai/Mistral-Medium-3.5-128B · Hugging Face rss
https://huggingface.co/unsloth/Mistral-Medium-3.5-128B-GGUF

Mistral Medium 3.5 128B
Mistral Medium 3.5 is our first flagship merged model. It is a dense 128B model with a 256k context window, handling instruction-following, reasoning, and coding in a single set of weights. Mistral Medium 3.5 replaces its predecessor Mistral Medium 3.1 and Magistral in Le Chat. It also replaces Devstral 2 in our coding agent Vibe. Concretely, expect better performance for instruct, reasoning and coding tasks in a new unified model in comparison with our previous released models. Reasoning effort is configurable per request, so the same model can answer a quick chat reply or work through a complex agentic run. We trained the vision encoder from scratch to handle variable image sizes and aspect ratios. Find more information on our blog.
Key Features
Mistral Medium 3.5 includes the following architectural choices:
- Dense 128B parameters.
- 256k context length.
- Multimodal input : Accepts both text and image input, with text output.
- Instruct and Reasoning functionalities with function calls (reasoning effort configurable per request).
Mistral Medium 3.5 offers the following capabilities:
- Reasoning Mode : Toggle between fast instant reply mode and reasoning mode, boosting performance with test-time compute when requested.
- Vision : Analyzes images and provides insights based on visual content, in addition to text.
- Multilingual : Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, and Arabic.
- System Prompt : Strong adherence and support for system prompts.
- Agentic : Best-in-class agentic capabilities with native function calling and JSON output.
- Large Context Window : Supports a 256k context window.
We release this model under a Modified MIT License: an open-source license for both commercial and non-commercial use, with exceptions for companies with large revenue.
Recommended Settings
- Reasoning Effort:
  - 'none' → do not use reasoning
  - 'high' → use reasoning (recommended for complex prompts and agentic usage)
  Use reasoning_effort="high" for complex tasks and agentic coding.
- Temperature: 0.7 for reasoning_effort="high". Between 0.0 and 0.7 for reasoning_effort="none", depending on the task. Generally, lower values give answers that are more to the point, while higher values let the model be more creative. It is good practice to try different values to tune the model's performance to your needs.
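For concreteness, here is a minimal sketch of wiring these recommended settings into an OpenAI-compatible chat-completions payload. The model identifier and the exact parameter names your server accepts are assumptions for illustration; check Mistral's own API documentation for the authoritative schema.

```python
# Sketch: choose reasoning effort and temperature per the model card's advice,
# then build a chat-completions request body. Model name is assumed.

def build_request(prompt: str, complex_task: bool) -> dict:
    """Pick settings following the recommendations above."""
    if complex_task:
        effort, temperature = "high", 0.7   # reasoning on for complex/agentic work
    else:
        effort, temperature = "none", 0.3   # fast replies; 0.0-0.7 depending on task
    return {
        "model": "mistral-medium-3.5",      # assumed identifier
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,
        "temperature": temperature,
    }

payload = build_request("Refactor this module", complex_task=True)
print(payload["reasoning_effort"], payload["temperature"])  # high 0.7
```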
submitted by /u/jacek2023
[link] [comments]
-
🔗 Jessitron Span or Attribute? in OpenTelemetry custom instrumentation rss
TL;DR: Attribute. More information on one event gives us more correlation power. It’s also cheaper.
When you want to add some information to your tracing telemetry, you could emit a log, create a span, or add a piece of data to your current span. Adding a piece of data to your current span is the best! Usually.

Attributes are the best, and also the cheapest.
If you have request name, user ID, request properties, feature flags, and notes about what happened in a single event, then you can correlate
- feature flags with error rate
- number of items with latency
- which users hit the same stack trace
The more data on the top-level span, the more answers you can get to “What is different about the requests that failed?”[1]
More information in one place is better! You can say
trace.getCurrentSpan().set_attribute("my_module.items.count", items.length) anywhere in your code, and accumulate data on a single event. This might be my favorite thing about OpenTelemetry tracing. Providers like Honeycomb that charge per event make adding attributes nearly free. (There's still network cost, and long-term storage if you use that.)
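The reason this works from anywhere is that the "current span" lives in ambient context rather than being passed around. A toy stand-in makes the mechanics visible; real code would simply call OpenTelemetry's trace.get_current_span().set_attribute(...), and all names below are illustrative, not the OpenTelemetry API.

```python
import contextvars

# Toy stand-in for a tracer's "current span": attributes held in a context
# variable, so any function deep in the call stack can add to the same event.
_current_span = contextvars.ContextVar("current_span", default=None)

def start_span(name: str) -> dict:
    span = {"name": name, "attributes": {}}
    _current_span.set(span)
    return span

def set_attribute(key: str, value) -> None:
    span = _current_span.get()
    if span is not None:
        span["attributes"][key] = value

# Deep in business logic, no span object is passed around:
def process_items(items):
    set_attribute("my_module.items.count", len(items))

span = start_span("handle_request")
process_items(["a", "b", "c"])
print(span["attributes"])  # {'my_module.items.count': 3}
```

This is the same shape the post recommends: one top-level event that keeps accumulating fields as the request flows through the code.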
Spans are for important units of work.
But sometimes it’s better to create a whole new span!
When to start a new span:
- Incoming request - Gotta create a top-level span to represent the work, so that you can add all those sweet attributes to it! This might be a root span (incoming work from outside, new trace) or a server span (continuing a propagated trace). In services, these come from instrumentation libraries.
- Network boundaries - spans are great for seeing dependencies between components. When you’re calling out to another service or database, it’s normal to make a client span for the outgoing call. These are created by many instrumentation libraries.
- Async boundaries - spans are great for seeing what ran concurrently and what waited.
- Performance concerns - spans are great for seeing what is slow.
Logs are useful sometimes.
If something might happen more than once, then a single-valued attribute can't record them all. If you want to track how long that thing took, use a span. If it's a fixed-time event (like an interrupt or error), then a log is good![2]
For example, if there's only one way an exception could be thrown in the scope of the span, then putting exception.message on the span is great. But if it's possible for another exception to be thrown, that message would be overwritten! This is a good time to emit a log. Make sure the log participates in the trace (it includes the trace and span IDs), and then it will show up on your current span in the trace view. It doesn't hurt to put that message on the span as well.
These are suggestions.
These are guidelines, but the choice is yours. What do you want your trace to look like? What do you want to see called out in the trace waterfall, and what do you want to have together for correlation? Maybe you want both: an attribute on the root span, and a span that shows duration and detail.
Tracing tells the story of your application. Tell it the way that works for you.
Prompt
Get the AI to tell the story to you, and to verify that it works by testing. Here’s some advice to add to give your AI when coding:
## Observability Practices
- Add important data to the current span as attributes. Examples:
  - request parameters, especially internal IDs
  - feature flag values
  - anything that the code branches on
  - counts of how many times a loop was iterated
  - results of downstream calls
- Name attributes like: <application>.<module>.<field>
- Do not create span events; they're expensive.
- Create logs only on exceptions.
- Bring in instrumentation libraries for frameworks and client libraries to create the span structure.
- When kicking off async work, create a new span around each async task so that we can see what happens concurrently and what waits.
- Use the Honeycomb MCP to check that your attributes and spans show up correctly after testing.
[1] The data doesn’t have to be on the same span to correlate it; Honeycomb can query across spans and logs in a trace. But it’s faster and easier when the data is on the same span, and BubbleUp (“what is different?”) works on single events.
[2] You might wonder, why a log instead of a span event? They are the same inside Honeycomb. Logs are sent immediately and are more likely to arrive. This matters in web clients, where people close the tab and the span never ends.
-
🔗 r/LocalLLaMA 16x DGX Sparks - What should I run? rss
Let’s build the biggest ever DGX Spark Cluster at home. This is going into my home lab server rack: 2TB of unified memory. • 16x Sparks • 1x 200Gbps FS 24 x 200Gb QSFP56 switch • 16x QSFP56 DAC cables. Should all be set up by tomorrow afternoon; what should I run? submitted by /u/Kurcide
[link] [comments]
-
🔗 r/reverseengineering I built a free open-source CAN bus reverse engineering workstation in Python — 15 tabs, offline ML, dual AI engines, MitM gateway rss
submitted by /u/Repulsive_Factor5654
[link] [comments] -
🔗 r/york tansy beetle on clifton sands !! rss
submitted by /u/whtmynm
[link] [comments]
-
🔗 r/LocalLLaMA What it feels like to have Qwen 3.6 or Gemma 4 running locally rss
Well, or pretty close to it; they are excellent workhorses. I run them in real work scenarios, doing some of the work I used to do myself as a skilled expert in my field, billing $200 an hour. Of course the key is building a system around their weaknesses, and I already had LLM systems doing expert work years ago when the first ones came out (shout out Nous Hermes 2 Mistral!). But yeah, pretty neat, especially noonghunnas club 3090 and you can have 3.6 27B fly on a single 3090. submitted by /u/GodComplecs
[link] [comments]
-
🔗 r/wiesbaden Making new friends, ages 25–36ish rss
Hello, I'm 34, single, and new to Wiesbaden. Since my friends barely leave the house anymore thanks to their kids, I'm looking for young, active people who'd like to meet up regularly. Not so easy in WI :( Bumble BFF and Gemeinsam Erleben unfortunately didn't work for me at all, and randomly starting a dance class or the like isn't really my thing either.
I love being out and about and just want to get out more often again: partying, street festivals, bars, or simply going for a walk. I'm just as happy chilling at home, having a games night, cooking something tasty, and starting a film/series marathon. I'm sporty and easy to get excited about lots of other things too.
It would be cool to meet like-minded people, preferably around my age, give or take 😁
submitted by /u/M0zep5
[link] [comments] -
🔗 backnotprop/plannotator v0.19.3 release
Follow @plannotator on X for updates
Missed recent releases? Release | Highlights
---|---
v0.19.2 | Stacked PR review, source line numbers in feedback, diff type dialog re-show, ghost dot removal, docs cleanup
v0.19.1 | Hook-native annotation, custom base branch, OpenCode workflow modes, quieter plan diffs, anchor navigation
v0.19.0 | Code Tour agent, GitHub-flavored Markdown, copy table as Markdown/CSV, flexible Pi planning mode, session-log ancestor-PID walk
v0.18.0 | Annotate focus & wide modes, OpenCode origin detection, word-level inline plan diff, Markdown content negotiation, color swatches
v0.17.10 | HTML and URL annotation, loopback binding by default, Safari scroll fix, triple-click fix, release pipeline smoke tests
v0.17.9 | Hotfix: pin Bun to 1.3.11 for macOS binary codesign regression
v0.17.8 | Configurable default diff type, close button for sessions, annotate data loss fix, markdown rendering polish
v0.17.7 | Fix "fetch did not return a Response" error in OpenCode web/serve modes
v0.17.6 | Bun.serve error handlers for diagnostic 500 responses, install.cmd cache fix
v0.17.5 | Fix VCS detection crash when p4 not installed, install script cache path fix
v0.17.4 | Vault browser merged into Files tab, Kanagawa themes, Pi idle session tool fix
v0.17.3 | Sticky lane repo/branch badge overflow fix
What's New in v0.19.3
v0.19.3 makes feedback messages fully configurable and cleans up the stacked PR selector for teams working with long PR chains. Three PRs, one from an external contributor.
Configurable Feedback Messages
Every message Plannotator sends to your agent is now customizable through ~/.plannotator/config.json. Plan approvals, plan denials, review approvals, review feedback suffixes, and annotation feedback all flow through a shared prompt pipeline with {{variable}} template interpolation. The config supports generic overrides that apply to all runtimes, plus per-runtime overrides for cases where Claude Code, OpenCode, and Pi need different phrasing. A four-level resolution order (runtime-specific, generic, runtime built-in default, global default) means you can be as granular or as broad as you want. Users who don't touch the config get identical behavior to previous versions.
This started with @oorestisime's PR adding configurable review approval prompts (#561), which was then expanded to cover all 17 hardcoded feedback strings across the hook, OpenCode, and Pi integrations (#627). The full pipeline includes 72 tests (55 unit, 17 integration) covering template resolution, config merging, backward compatibility, and end-to-end disk-to-output flow.
A new documentation page at Custom Feedback walks through the config format, available template variables, and a context-anchoring pattern contributed by @aviadshiber.
- #561 by @oorestisime, closing #558
- #627 by @backnotprop, closing #624
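The four-level resolution order and {{variable}} interpolation described above can be sketched in a few lines. The config shape, key names, and runtime names here are hypothetical illustrations, not Plannotator's actual schema.

```python
import re

def resolve_template(key, runtime, user_config, builtin_defaults, global_default):
    """Four-level lookup: runtime-specific override, generic override,
    runtime built-in default, then global default."""
    for candidate in (
        user_config.get(runtime, {}).get(key),        # 1. runtime-specific override
        user_config.get("generic", {}).get(key),      # 2. generic override
        builtin_defaults.get(runtime, {}).get(key),   # 3. runtime built-in default
    ):
        if candidate is not None:
            return candidate
    return global_default                             # 4. global default

def interpolate(template, variables):
    """Replace {{name}} placeholders with values; leave unknown names intact."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

cfg = {"generic": {"plan_approved": "Plan approved: {{title}}"}}
msg = resolve_template("plan_approved", "claude-code", cfg, {}, "Approved.")
print(interpolate(msg, {"title": "refactor"}))  # Plan approved: refactor
```

The design point is that a user can override one message for one runtime without touching anything else, and an empty config falls all the way through to the defaults.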
Hide Merged PRs in Stacked PR Selector
When reviewing a long chain of stacked PRs, merged PRs would show up alongside open ones in the stack tree and PR selector. For teams that iterate through a stack over several sessions, this made it harder to see which PRs still needed review.
A "Hide merged" toggle now appears in both the stack tree popover and the PR selector dropdown. When enabled, merged PRs are removed from the list and a summary count shows how many are hidden. When visible, merged PRs appear dimmed with a strikethrough title and a "merged" badge, and they're not clickable. The toggle state persists via cookie across sessions. Tree indentation was also tightened to 2px per level to prevent horizontal overflow on deep stacks (10+ nodes).
- #626 by @backnotprop, closing #625
Install / Update
macOS / Linux: curl -fsSL https://plannotator.ai/install.sh | bash
Windows: irm https://plannotator.ai/install.ps1 | iex
Claude Code Plugin: run /plugin in Claude Code, find plannotator, and click "Update now".
OpenCode: clear the cache with rm -rf ~/.bun/install/cache/@plannotator, restart, then set in opencode.json: { "plugin": ["@plannotator/opencode@latest"] }
Pi: install or update the extension: pi install npm:@plannotator/pi-extension
What's Changed
- feat(review): add configurable approval prompts by @oorestisime in #561
- feat(review): hide/de-emphasize merged PRs in stacked PR selector by @backnotprop in #626
- feat(feedback): configurable plan, annotation, and review feedback by @backnotprop in #627
Contributors
@oorestisime filed #558 requesting commit-on-approve for code review sessions, then contributed #561 adding configurable review approval prompts. That PR seeded the broader feedback customization pipeline shipped in this release.
Community members whose issues shaped this release:
- @JohannesKlauss filed #624 requesting customizable feedback prompts for the build agent handoff
- @leoreisdias filed #625 requesting that merged PRs be hidden from the stacked PR selector, with a detailed description of the 10+ PR workflow that motivated the change
- @aviadshiber contributed a context-anchoring prompt pattern featured in the custom feedback documentation
Full Changelog :
v0.19.2...v0.19.3 -
🔗 r/Yorkshire Collapsing Labour vote in Barnsley sees some choosing between Greens and Reform rss
| submitted by /u/johnsmithoncemore
[link] [comments]
-
🔗 r/LocalLLaMA AMD has invented something that lets you use AI at home! They call it a "computer" rss
submitted by /u/9gxa05s8fa8sh
[link] [comments]
-
🔗 r/wiesbaden Bernd Zehner deletes a third of the reviews of his restaurant (opened in February) rss
submitted by /u/Traditional_Face_984
[link] [comments]
-
- April 28, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-28 rss
IDA Plugin Updates on 2026-04-28
New Releases:
Activity:
- capa
- claude-of-alexandria
- fe1d2580: chore(deps-dev): bump the minor-and-patch group (#11)
- ida-domain
- ida-structor
- 141a4d46: feat: Add early stopping and ordered xref scanning for type validation
- mips_call_analyzer
- aeaecb84: init
- python-elpida_core.py
- 2f09280a: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T23:41Z
- 0466d82c: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T23:21Z
- 2216d956: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T22:57Z
- 57c73e44: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T22:33Z
- 295cf3f4: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T22:08Z
- 5cc39a47: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T21:43Z
- 80b56fe0: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T21:18Z
- 55613c14: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T20:52Z
- b45ffb00: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T20:25Z
- a4772cd4: Constitutional event: strip-fix restored PROCEED, A3 voice, P055 norm…
- scripts
- 9e0ee439: added script for c2 extraction from EchoGather
-
🔗 r/york My bike was stolen on campus west near courtyard on 26/4 between 7pm and 11pm rss
Any information would be greatly appreciated, as I require my bike for work. submitted by /u/MidnightFar3298
[link] [comments]
-
🔗 r/Leeds Wheelchair accessible taxi services rss
Hey everyone, I’m a full time wheelchair user from London. I have quadriplegic cerebral palsy so can’t walk at all. I’m looking to study electronic music production at Leeds Conservatoire in September of this year and have to travel up to Leeds for accommodation viewings on Thursday. I was wondering if anyone could give me some taxi companies that do/may provide wheelchair accessible taxi services with full ramp access?
Uber, at least in London is a bit hit and miss so that’s why I’m asking for taxi services rather than just using Uber. I also wanted to ask, is there a taxi rank at Leeds station and do they have wheelchair accessible vehicles there?
Thanks in advance and feel free to add any tips or experiences of travelling in Leeds as a wheelchair user. Even if you are able bodied, please let me know if there’s anything you think I should bear in mind while navigating the city in general.
Thanks again everyone!
submitted by /u/LORDLUK3
[link] [comments] -
🔗 @binaryninja@infosec.exchange To help us track down bugs faster, 5.3 introduces opt-in crash reporting. This mastodon
To help us track down bugs faster, 5.3 introduces opt-in crash reporting. This feature is disabled by default in paid versions and enabled by default in our free version. Either way, you can change the setting whenever you want. Details in our latest blog post: https://binary.ninja/2026/04/13/binary-ninja-5.3-jotunheim.html#crash-reporting
-
🔗 r/york Bees on Gillygate rss
Hi!
I don’t suppose anyone saw the swarm of bees all over Gillygate around the Tesco today?
Just wondered if anyone knows if it’s cleared up or what caused it?
This was about 13:45, and apparently they weren’t there in the morning.
submitted by /u/SadAndGloomy
[link] [comments] -
🔗 badlogic/pi-mono v0.70.6 release
New Features
- Cloudflare Workers AI provider support with CLOUDFLARE_API_KEY / CLOUDFLARE_ACCOUNT_ID setup. See docs/providers.md#api-keys. (#3851 by @mchenco)
- Pi update checks now use pi.dev and identify Pi with a pi/<version> user agent. See docs/packages.md. (#3877 by @mitsuhiko)
Added
- Added Cloudflare Workers AI as a built-in provider with CLOUDFLARE_API_KEY / CLOUDFLARE_ACCOUNT_ID setup, default model resolution, /login support, and provider documentation (#3851 by @mchenco).
Changed
- Changed Pi version checks to identify Pi with a pi/<version> user agent (#3877 by @mitsuhiko).
Fixed
- Fixed config selector scroll indicators to show item counts instead of line counts (#3820 by @aliou).
- Fixed exported HTML to escape embedded image data and session metadata, preventing crafted session content from injecting markup (#3819 by @justinpbarnett, #3883 by @justinpbarnett).
- Fixed Bun-based package manager startup by locating global node_modules relative to Bun's install layout (#3861 by @thirtythreeforty).
- Fixed Bedrock inference profile capability checks by normalizing profile ARNs to the underlying model name.
- Fixed file discovery to fall back to fdfind when fd is unavailable.
- Fixed pi update to skip self-update reinstalls when the installed version is already current (#3853).
- Fixed Cloudflare Workers AI attribution headers to honor the install telemetry setting.
- Fixed pi update --self detection and execution for Windows package-manager shim installs, including symlinked global package roots, and print the manual fallback command when self-update fails (#3857).
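The fd fallback is a common pattern: Debian and Ubuntu package the fd binary as fdfind because of a name clash, so tools probe for both. A minimal sketch of that lookup with shutil.which; the function name is illustrative, not pi's actual code.

```python
import shutil

def pick_binary(candidates=("fd", "fdfind")):
    """Return the first candidate binary found on PATH, or None.

    Mirrors the fd -> fdfind fallback: prefer the upstream name, then the
    Debian/Ubuntu rename, and signal "not installed" with None so the
    caller can use a different discovery strategy.
    """
    for name in candidates:
        if shutil.which(name):
            return name
    return None

print(pick_binary())
```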
-
🔗 r/reverseengineering Building a perfect clone of 1993 game SimTower (via RE) rss
submitted by /u/scatematica
[link] [comments] -
🔗 r/LocalLLaMA Something from Mistral (Vibe) tomorrow rss
Model(s) or tool upgrade / new tool? Source tweet: https://xcancel.com/mistralvibe/status/2049147645894021147#m submitted by /u/pmttyji
[link] [comments]
-
🔗 r/Yorkshire Looking for a Lost Super Street Fighter 2 Arcade Cabinet (Sheffield/Yorkshire – early 2000s) rss
I’m trying to track down an arcade cabinet I used to play in the early 2000s, and I’m hoping someone in Yorkshire might know its current location.
Between 2002–2004, I regularly played a Super Street Fighter 2 machine in a takeaway called Pizza Metro on London Road in Sheffield.
Details I remember:
- Small black cabinet
- Dragon symbol on the side (green or possibly yellow)
- Standard 6-button layout (Street Fighter style, diagonal)
- One joystick was slightly larger than the other (not sure which side)
- It was Super Street Fighter 2 (not Super Turbo, and not the version with Akuma)
I used to play it a lot during a brief period living in Sheffield about 23 years ago, so it's quite nostalgic for me.
Around 2005, the shop returned the cabinet to the arcade vendor they rented it from, and the vendor later sold it to someone else. I managed to contact the vendor at the time, but they couldn't remember who it was sold to.
Ideally, I’d be interested in buying the cabinet if it still exists. However, if it’s not for sale, I’d really just like to confirm the exact joystick and button setup.
If someone believes they’ve found the right machine, I’m happy to:
Confirm from clear photos/videos and arrange to see it in person to verify details.
I’m offering £100 for a solid, verifiable lead (e.g. correct cabinet identification, owner info, or confirmed hardware details).
If anyone remembers this cabinet, knows the vendor, or has any leads at all, I’d really appreciate it. I know it's a long shot but I've decided to try anyway.
submitted by /u/goldstand
[link] [comments] -
-
🔗 Locklin on science Bouncing droplet “quantum mechanics” rss
I was always a fan of de Broglie and Bohm’s “pilot wave” idea. This is a fully deterministic theory of quantum mechanics which physicists don’t like because “le hidden variables” (also it isn’t yet relativistic I guess). The original pilot wave idea didn’t work out because de Broglie couldn’t calculate scattering cross sections, though Bohm […]
-
🔗 r/Leeds nightclub interview?? rss
Hey guys! I have an interview for a bartender position at Backrooms nightclub tomorrow and I’ve never had an interview in a club but I really wanna work there bc I love the whole vibe of clubs and want to get into bartending. What kind of things do they ask you for these roles?? If anyone has any personal experience too it would be massively appreciated
submitted by /u/WhereasFar9745
[link] [comments] -
🔗 r/reverseengineering How I reverse-engineered a SQLite WAL database inside a VS Code extension - custom merge engine, header byte patching, and protobuf decoding without a schema rss
submitted by /u/PangolinConfident163
[link] [comments] -
🔗 r/york Does anyone know if there is an update regarding foss islands chimney? rss
I noticed the temporary fencing looks to now be permanent, which is a shame; it was a handy shortcut to Halfords and vice versa! submitted by /u/UnhingedSerialKiller
[link] [comments]
-
🔗 r/reverseengineering AI solved our CTF in 6min rss
submitted by /u/eshard-cybersec
[link] [comments] -
🔗 r/LocalLLaMA meantime on r/vibecoding rss
words of wisdom submitted by /u/jacek2023
[link] [comments]
-
🔗 r/LocalLLaMA Qwen 3.6 27B BF16 vs Q4_K_M vs Q8_0 GGUF evaluation rss
Evaluated Qwen 3.6 27B across BF16, Q4_K_M, and Q8_0 GGUF quant variants with llama-cpp-python using Neo AI Engineer. Benchmarks used:
- HumanEval: code generation
- HellaSwag: commonsense reasoning
- BFCL: function calling
Total samples:
- HumanEval: 164
- HellaSwag: 100
- BFCL: 400
Results: BF16
- HumanEval: 56.10% 92/164
- HellaSwag: 90.00% 90/100
- BFCL: 63.25% 253/400
- Avg accuracy: 69.78%
- Throughput: 15.5 tok/s
- Peak RAM: 54 GB
- Model size: 53.8 GB
Q4_K_M
- HumanEval: 50.61% 83/164
- HellaSwag: 86.00% 86/100
- BFCL: 63.00% 252/400
- Avg accuracy: 66.54%
- Throughput: 22.5 tok/s
- Peak RAM: 28 GB
- Model size: 16.8 GB
Q8_0
- HumanEval: 52.44% 86/164
- HellaSwag: 83.00% 83/100
- BFCL: 63.00% 252/400
- Avg accuracy: 66.15%
- Throughput: 18.0 tok/s
- Peak RAM: 42 GB
- Model size: 28.6 GB
What stood out: Q4_K_M looks like the best practical variant here. It keeps BFCL almost identical to BF16, drops about 5.5 points on HumanEval, and is still only 4 points behind BF16 on HellaSwag. The tradeoff is pretty good:
- 1.45x faster than BF16
- 48% less peak RAM
- 68.8% smaller model file
- nearly identical function calling score
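The headline Q4_K_M-vs-BF16 tradeoffs can be recomputed directly from the reported numbers, which is a quick sanity check on the summary bullets above (the average is a plain mean of the three benchmark accuracies).

```python
# Recompute the Q4_K_M vs BF16 tradeoffs from the reported figures.
bf16 = {"humaneval": 56.10, "hellaswag": 90.00, "bfcl": 63.25,
        "tok_s": 15.5, "ram_gb": 54, "size_gb": 53.8}
q4   = {"humaneval": 50.61, "hellaswag": 86.00, "bfcl": 63.00,
        "tok_s": 22.5, "ram_gb": 28, "size_gb": 16.8}

def avg(m):
    """Unweighted mean of the three benchmark accuracies, as in the post."""
    return round((m["humaneval"] + m["hellaswag"] + m["bfcl"]) / 3, 2)

print(avg(bf16), avg(q4))                                     # 69.78 66.54
print(round(q4["tok_s"] / bf16["tok_s"], 2))                  # 1.45 (x faster)
print(round(100 * (1 - q4["ram_gb"] / bf16["ram_gb"])))       # 48 (% less peak RAM)
print(round(100 * (1 - q4["size_gb"] / bf16["size_gb"]), 1))  # 68.8 (% smaller file)
```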
Q8_0 was a bit underwhelming in this run. It improved HumanEval over Q4_K_M by ~1.8 points, but used 42 GB RAM vs 28 GB and was slower. It also scored lower than Q4_K_M on HellaSwag in this eval. For local/CPU deployment, I would probably pick Q4_K_M unless the workload is heavily code-generation focused. For maximum quality, BF16 still wins. Evaluation setup:
- GGUF via llama-cpp-python
- n_ctx: 32768
- checkpointed evaluation
- HumanEval, HellaSwag, and BFCL all completed
- BFCL had 400 function calling samples
This evaluation was done using Neo AI Engineer, which built the GGUF eval setup, handled checkpointed runs, and consolidated the benchmark results. I manually reviewed the outcome as well. A complete case study with benchmarking results, approach, and code snippets is in the comments below 👇 submitted by /u/gvij
[link] [comments]
-
🔗 backnotprop/plannotator v0.19.2 release
Follow @plannotator on X for updates
Missed recent releases? Release | Highlights
---|---
v0.19.1 | Hook-native annotation, custom base branch, OpenCode workflow modes, quieter plan diffs, anchor navigation
v0.19.0 | Code Tour agent, GitHub-flavored Markdown, copy table as Markdown/CSV, flexible Pi planning mode, session-log ancestor-PID walk
v0.18.0 | Annotate focus & wide modes, OpenCode origin detection, word-level inline plan diff, Markdown content negotiation, color swatches
v0.17.10 | HTML and URL annotation, loopback binding by default, Safari scroll fix, triple-click fix, release pipeline smoke tests
v0.17.9 | Hotfix: pin Bun to 1.3.11 for macOS binary codesign regression
v0.17.8 | Configurable default diff type, close button for sessions, annotate data loss fix, markdown rendering polish
v0.17.7 | Fix "fetch did not return a Response" error in OpenCode web/serve modes
v0.17.6 | Bun.serve error handlers for diagnostic 500 responses, install.cmd cache fix
v0.17.5 | Fix VCS detection crash when p4 not installed, install script cache path fix
v0.17.4 | Vault browser merged into Files tab, Kanagawa themes, Pi idle session tool fix
v0.17.3 | Sticky lane repo/branch badge overflow fix
v0.17.2 | Supply-chain hardening, sticky toolstrip and badges, overlay scrollbars, external annotation highlighting, Conventional Comments
What's New in v0.19.2
v0.19.2 adds stacked PR review, source line numbers in exported feedback, and several UX fixes. Five PRs, one from a first-time contributor.
Code Review
Stacked PR Review
Reviewing a PR that belongs to a stack used to mean reviewing it in isolation. You could see the diff for that one branch, but not how it fit into the larger chain. Switching to a different PR in the stack meant closing the review and starting a new session.
Stacked PR review keeps you in a single session across every PR in the stack. A stack tree popover shows the full chain with clickable navigation. Each PR gets its own worktree checkout, so switching PRs recomputes the diff against the correct base without mixing changes between layers. Two scope modes let you toggle between viewing a single PR's changes (layer) and all accumulated changes from the default branch (full-stack).
Multi-PR posting lets you submit review feedback to multiple PRs at once. A confirmation dialog shows exactly where comments will go before posting to GitHub or GitLab, with parallel submission and partial-failure retry. Annotations from full-stack diffs can't be mapped to a single PR's line numbers, so they're surfaced as copyable markdown rather than silently dropped.
A new "Branch" option in the default diff type setting (and first-run dialog) gives users who work primarily with committed changes a one-click default.
- #620 by @backnotprop
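The two scope modes boil down to choosing which ref to diff each PR's head against. A minimal sketch of that choice, under assumed field names that are not Plannotator's real data model:

```python
def diff_base(pr: dict, scope: str, default_branch: str = "main") -> str:
    """Pick the diff base for a PR in a stack.

    'layer' shows only this PR's own changes (diff against its parent
    branch in the stack); 'full-stack' shows everything accumulated
    since the default branch.
    """
    if scope == "layer":
        return pr["base_branch"]   # parent PR's branch; the bottom PR's base is the default branch
    if scope == "full-stack":
        return default_branch
    raise ValueError(f"unknown scope: {scope}")

stack = [
    {"head": "feat-1", "base_branch": "main"},
    {"head": "feat-2", "base_branch": "feat-1"},
]
print(diff_base(stack[1], "layer"))       # feat-1
print(diff_base(stack[1], "full-stack"))  # main
```

Recomputing the base per PR is what keeps switching within a session from mixing changes between layers.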
Source Line Numbers in Exported Feedback
When Claude receives annotation feedback, it got the block content and the highlighted text but had no way to locate the annotation in the source file. For large documents with repeated headings or similar paragraphs, this ambiguity forced extra round-trips.
Exported annotations now include source line numbers. Single-line blocks show (line 42), multi-line blocks show (lines 10–14). Code blocks account for fence lines when computing ranges. Files with YAML frontmatter are offset-corrected so line numbers match the original file, not the parsed output.
For converted content (HTML files rendered through Turndown, URLs fetched via Jina Reader), the feedback includes a caveat that line numbers refer to the converted markdown rather than the original source. When viewing a linked HTML document within a plan, the conversion flag is derived per-document, so mixed collections of markdown and HTML files each get the correct label.
- #623 by @backnotprop
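The label formatting plus the frontmatter offset correction described above fits in a few lines. This is a sketch of the described behavior, not Plannotator's actual code.

```python
def line_label(start: int, end: int, frontmatter_lines: int = 0) -> str:
    """Format an exported annotation's line label.

    Shifts by the number of YAML frontmatter lines that were stripped
    before parsing, so the label matches the original file.
    """
    start += frontmatter_lines
    end += frontmatter_lines
    if start == end:
        return f"(line {start})"
    return f"(lines {start}-{end})"

print(line_label(42, 42))                      # (line 42)
print(line_label(10, 14))                      # (lines 10-14)
print(line_label(7, 11, frontmatter_lines=3))  # (lines 10-14)
```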
UX
Diff Type Dialog Re-Presented
Many users who set up Plannotator before v0.17.8 never saw the "Committed" option (branch diff vs. the default branch) because the first-run dialog only showed at install time. Users were asking how to set committed changes as their default without realizing the option existed.
The dialog is now re-presented to existing users with clearer descriptions, a wider layout with a 60/40 split, and a hover-to-zoom preview of the toolbar dropdown. The dialog reminds users they can switch views anytime during a review. Existing preferences are preserved — this only re-shows the picker, it doesn't reset anyone's choice.
Options Menu Ghost Dot Removed
The pulsing notification dot on the Options menu was meant to flag new settings after an update. In practice, the dot appeared on every session and users couldn't figure out how to dismiss it. The entire new-settings-hint system has been removed. Settings changes are communicated through release notes instead.
Additional Changes
- Docs: toolbar inventory updated. Documentation references to "Insert" and "Replace" annotation types have been scrubbed to match the shipped UI, which uses Delete, Comment, Quick Label, Looks Good, Global Comment, and Copy. — #618 by @vxio, closing #617
- Docs: OpenCode plugin configuration. Clarified plugin setup instructions for OpenCode users. — commit 33f409a
Install / Update
macOS / Linux: curl -fsSL https://plannotator.ai/install.sh | bash
Windows: irm https://plannotator.ai/install.ps1 | iex
Claude Code Plugin: run /plugin in Claude Code, find plannotator, and click "Update now".
OpenCode: clear the cache with rm -rf ~/.bun/install/cache/@plannotator, restart, then set in opencode.json: { "plugin": ["@plannotator/opencode@latest"] }
Pi: install or update the extension: pi install npm:@plannotator/pi-extension
What's Changed
- feat: stacked PR review — PR switching, scope toggling, multi-PR posting by @backnotprop in #620
- feat(plan,annotate): include source line numbers in exported feedback by @backnotprop in #623
- docs: scrub Insert/Replace from docs to match shipped UI by @vxio in #618
- fix: remove ghost dot on Options menu (new-settings-hint system) by @backnotprop in commit 7ab2d8f
- fix: re-show diff type setup dialog with clearer options and toolbar hint by @backnotprop in commits aaad89e, 03d4e8b
New Contributors
Community
@vxio noticed the docs still referenced Insert and Replace annotation types that were removed from the UI, filed #617, and contributed the fix in #618. First contribution to the project.
Full Changelog :
v0.19.1...v0.19.2 -
🔗 r/Leeds Firstbus app update shenanigans rss
If you use the Firstbus app for tickets, be warned, they are rolling out an update. The update has gone so well that they have a banner on the website pointing to a separate FAQ specifically for the update with a big list of reasons why you will probably have to call them to get access to your tickets...
https://www.firstbus.co.uk/help-support/help-and-support/first-bus-app-update
submitted by /u/awesomeweles
[link] [comments] -
🔗 r/reverseengineering Example structure for evidence-based vulnerability reports rss
submitted by /u/RoutineWeary6823
[link] [comments] -
🔗 r/LocalLLaMA Duality of r/LocalLLaMA rss
submitted by /u/HornyGooner4402
[link] [comments]
-
🔗 r/LocalLLaMA I'm done with using local LLMs for coding rss
I think I gave it a fair shot over the past few weeks, forcing myself to use local models for non-work tech tasks. I use Claude Code at my job, so that's what I'm comparing to.
I used Qwen 27B and Gemma 4 31B; these are considered the best local models below the multi-hundred-billion-parameter tier. I also tried multiple agentic apps. My verdict is that the loss of productivity is not worth the advantages.
I'll give a brief overview of my main issues.
Shitty decision-making and tool-calls
This is a big one. Claude seems to read my mind in most cases, but Qwen 27B makes me give it the Carlo Ancelotti eyebrow more often than not. The LLM just isn't proceeding how I would proceed.
I was mainly using local LLMs for OS/Docker tasks. Is this considered much harder than coding or something?
To give an example, tasks like "Here's a GitHub repo, I want you to Dockerize it." I'd expect any dummy to follow the README's instructions and execute them. (EDIT: full prompt here: https://reddit.com/r/LocalLLaMA/comments/1sxqa2c/im_done_with_using_local_llms_for_coding/oiowcxe/ )
Issues like having a 'docker build' that takes longer than the default timeout, which sends them on unrelated follow-ups (as if the task failed), instead of checking if it's still running. I had Qwen try to repeat the installation commands on the host (also Ubuntu) to see what happens. It started assuming "it must have failed because of torchcodec" just like that, pulling this entirely out of its ass, instead of checking output.
I tried to meet the models half-way. Having this in AGENTS.md: " If you run a Docker build command, or any other command that you think will have a lot of debug output, then do the following: 1. run it in a subagent, so we don't pollute the main context, 2. pipe the output to a temporary file, so we can refer to it later using tail and grep." And yet twice in a row I came back to a broken session with 250k input tokens because the LLM is reading all the output of 'docker build' or 'docker compose up'.
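The protocol in that AGENTS.md snippet can be sketched in shell. This is a minimal illustration, not the poster's actual setup; a printf loop stands in for the real 'docker build' so the sketch runs anywhere:

```shell
# Send noisy build output to a temp log instead of the agent's context,
# then inspect it with tail/grep afterwards.
log="$(mktemp)"

# In a real session this would be: docker build -t myimage . > "$log" 2>&1 &
{ printf 'Step %d/9 : RUN pip install\n' 1 2 3; echo "Successfully built"; } > "$log" 2>&1 &
build_pid=$!

# Wait for the process to finish instead of treating a long build as a failure.
wait "$build_pid"

tail -n 2 "$log"        # peek at the last few lines only
grep -c '^Step' "$log"  # search the log without loading all of it into context
```

The point is that the main context only ever sees the `tail` and `grep` results, never the full build output.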
I know there are huge AGENTS.md files that treat the LLM like a programmable robot, giving it long elaborate protocols because their authors don't expect decent self-guidance; I didn't try those, tbh. And none of them go into details like not reading the output of 'docker build'. I stuck to the default prompts of the agentic apps I used, plus a few guidelines in my AGENTS.md.
Performance
Not only are the LLMs slow, but no matter which app I'm using, the prompt cache frequently seems to break. Translation: long pauses where nothing seems to happen.
For Claude Code specifically, this is made worse by the fact that it doesn't print the LLM's output to the user. It's one of the reasons I often preferred Qwen Code. It's very frustrating when not only does the outcome look bad, but I'm also not getting rapid feedback.
I'm not learning anything
Other than changing the URL of the Chat Completions server, there's no difference between using a local LLM and a cloud one, just more grief.
There's definitely experience to be gained in learning how to prompt an LLM. But I think coding tasks are just too hard for the small ones; it's like playing a game on Hardcore. I'm looking for a sweet spot on the learning curve, and this is just not worth it.
What now
For my coding and OS stuff, I'm gonna put some money on OpenRouter and exclusively use big boys like Kimi. If one model pisses me off, move on to the next one. If I find a favorite, I'll sign up to its yearly plan to save money.
I'll still use small local models for automation, basic research, and language tasks. I've had fun writing basic automation skills/bots that run stuff on my PC, and these will always be useful.
I also love using local LLMs for writing or text games. Speed isn't an issue there, the prompt cache's always being hit. Technically you could also use a cloud model for this too, but you'd be paying out the ass because after a while each new turn is sending like 100k tokens.
Thanks for reading my blog.
submitted by /u/dtdisapointingresult
[link] [comments] -
🔗 Jessitron Communication is hard, but sometimes I can fix it. rss
We used to type code to tell the computer what to do. When that got tedious, we made libraries and functions until the code was more communicative.
Now I type English words to tell the agent what to tell the computer what to do. Sometimes that gets tedious, and then I need to find new ways to make it easier.
Here’s an example.
Iterating could be easier. The work: I’m getting Claude to build a program that turns Claude conversation logs into a vertical HTML comic. As we iterate on this, I ask it a lot of questions about the output. This way, I learn something about the problem domain (how Claude Code records conversations), and then I get it to tweak the output to my liking. In the example above, I wondered where the Background command "Start dev server on alternate ports" notification came from, so I asked Claude how I could know. To ask it, I had to cut and paste the text from the HTML, and then Claude had to grep the HTML to see what I was talking about, and also grep the JSONL to find the input. What if, later, a very similar message appeared? It couldn't tell exactly what I was talking about. I can’t just point to the UI.
This wasn't the first time I struggled to refer to a panel in the comic. This time, my frustration served as an alarm: do something about it, Jess. There has to be a better way to tell it which panel I'm talking about.
When communication gets difficult, that’s a signal. I can change this.
So I made it make a way to point to the UI.
In this case, I asked Claude to add a reference tag to each panel. The reference tag for each panel contains the line number (that was its idea) and filename (that was my idea) of the JSONL line represented by this panel. I push ‘r’ to toggle whether these reference tags show (my idea). When I click one, the value is copied (its idea).

Now I can ask the same question more succinctly: How can I find out where episode-8-before:L63 came from?
Claude understood and added a hover effect that highlights the originating bash tool call.

That hover effect is OK; I used it a few times. Those reference tags are gold! I've used them a dozen times already, and development is smoother for it. Claude can find the panel I’m talking about quickly both in the input JSONL and the output HTML. Our communication is streamlined.
This was a great idea. Iterating is much easier now!
I am in the loop and on the loop.
There are (at least) two feedback loops running here. One is the development loop, with Claude doing what I ask and then me checking whether that is indeed what I want. Here, I’m a human in the loop with the AI. This works well since we’re prototyping, learning the domain and discovering what output I want.
Then there’s a meta-level feedback loop, the “is this working?” check when I feel resistance. Frustration, tedium, annoyance: these feelings are a signal to me that maybe this work could be easier. I step back and think about how the AI could work more accurately and smoothly. Annie Vella called this the “middle loop,” and Kief Morris renamed it "human on the loop."
Here, I’m both in the development loop with the AI, and I’m “on the loop” as a thoughtful collaborator, smoothing the development loop when it gets rough.
Resistance will be assimilated.
As developers using software to build software, we have potential to mold our own work environment. With AI making software change superfast, changing our program to make debugging easier pays off immediately. Also, this is fun!
-
🔗 r/wiesbaden Eiserne Hand mit der Vespa rss
A short and simple question for the moped/scooter riders.
My girlfriend has to commute to Taunusstein and is considering switching to a scooter.
Hence my question:
Does a small 50cc Vespa/moped make it up the Eiserne Hand? At a reasonable speed, that is?
Has any of you done that before?
Thanks in advance for the answers :)
submitted by /u/metaldog
[link] [comments] -
🔗 r/Leeds best tuna melt paninis? rss
i’m craving a tuna melt really badly right now and i’m in the city centre for lunch tomorrow and want to get something good. does anyone have any recommendations? cheese, tuna, and toasted panini bread is all i need right now 🙏
submitted by /u/Shoddy_Day
[link] [comments] -
🔗 Mitchell Hashimoto Ghostty Is Leaving GitHub rss
(empty) -
🔗 Armin Ronacher Before GitHub rss
GitHub was not the first home of my Open Source software. SourceForge was.
Before GitHub, I had my own Trac installation. I had Subversion repositories, tickets, tarballs, and documentation on infrastructure I controlled. Later I moved projects to Bitbucket, back when Bitbucket still felt like a serious alternative place for Open Source projects, especially for people who were not all-in on Git yet.
And then, eventually, GitHub became the place, and I moved all of it there.
It is hard for me to overstate how important GitHub became in my life. A large part of my Open Source identity formed there. Projects I worked on found users there. People found me there, and I found other people there. Many professional relationships and many friendships started because some repository, issue, pull request, or comment thread made two people aware of each other.
That is why I find what is happening to GitHub today so sad and so disappointing. I do not look at it as just the folks at Microsoft making product decisions I dislike. GitHub was part of the social infrastructure of Open Source for a very long time. For many of us, it was not merely where the code lived; it was where a large part of the community lived.
So when I think about GitHub's decline, I also think about what came before it, and what might come after it. I have written a few times over the years about dependencies, and in particular about the problem of micro dependencies. In my mind, GitHub gave life to that phenomenon. It was something I definitely did not completely support, but it also made Open Source more inclusive. GitHub changed how Open Source feels, and later npm and other systems changed how dependencies feel. Put them together and you get a world in which publishing code is almost frictionless, consuming code is almost frictionless, and the number of projects in the world explodes.
That has many upsides. But it is worth remembering that Open Source did not always work this way.
A Smaller World
Before GitHub, Open Source was a much smaller world. Not necessarily in the number of people who cared about it, but in the number of projects most of us could realistically depend on.
There were well-known projects, maintained over long periods of time by a comparatively small number of people. You knew the names. You knew the mailing lists. You knew who had been around for years and who had earned trust. That trust was not perfect, and the old world had plenty of gatekeeping, but reputation mattered in a very direct way. We took pride (and got frustrated) when the Debian folks came and told us our licensing stuff was murky or the copyright headers were not up to snuff, because they packaged things up.
A dependency was not just a package name. It was a project with a history, a website, a maintainer, a release process, a lot of friction, and often a place in a larger community. You did not add dependencies casually, because the act of depending on something usually meant you had to understand where it came from.
Not all of this was necessarily intentional, but because these projects were comparatively large, they also needed to bring their own infrastructure. Small projects might run on a university server, and many of them were on SourceForge, but the larger ones ran their own show. They grouped together into larger collectives to make it work.
We Ran Our Own Infrastructure
My first Open Source projects lived on infrastructure I ran myself. There was a Trac installation, Subversion repositories, tarballs, documentation, and release files served from my own machines or from servers under my control. That was normal. If you wanted to publish software, you often also became a small-time system administrator. Georg and I ran our own collective for our Open Source projects: Pocoo. We shared server costs and the burden of maintaining Subversion and Trac, mailing lists and more.
Subversion in particular made this "running your own forge" natural. It was centralized: you needed a server, and somebody had to operate it. The project had a home, and that home was usually quite literal: a hostname, a directory, a Trac instance, a mailing list archive.
When Mercurial and Git arrived, they were philosophically the opposite. Both were distributed. Everybody could have the full repository. Everybody could have their own copy, their own branches, their own history. In principle, those distributed version control systems should have reduced the need for a single center. But despite all of this, GitHub became the center.
That is one of the great ironies of modern Open Source. The distributed version control system won, and then the world standardized on one enormous centralized service for hosting it.
What GitHub Gave Us
It is easy now to talk only about GitHub's failures, of which there are currently many, but that would be unfair: GitHub was, and continues to be, a tremendous gift to Open Source.
It made creating a project easy and it made discovering projects easy. It made contributing understandable to people who had never subscribed to a development mailing list in their life. It gave projects issue trackers, pull requests, release pages, wikis, organization pages, API access, webhooks, and later CI. It normalized the idea that Open Source happens in the open, with visible history and visible collaboration. And it was an excellent and reasonable default choice for a decade.
But maybe the most underappreciated thing GitHub did was archival work: GitHub became a library. It became an index of a huge part of the software commons because even abandoned projects remained findable. You could find forks, and old issues and discussions all stayed online. For all the complaints one can make about centralization, that centralization also created discoverable memory. The leaders there once cared a lot about keeping GitHub available even in countries that were sanctioned by the US.
I know what the alternative looks like, because I was living it. Some of my earliest Open Source projects are technically still on PyPI, but the actual packages are gone. The metadata points to my old server, and that server has long stopped serving those files.
That was normal before the large platforms. A personal domain expired, a VPS was shut down, a developer passed away, and with them went the services they paid for. The web was once full of little software homes, and many of them are gone 1.
npm and the Dependency Explosion
The micro-dependency problem was not just that people published very small packages. The hosted infrastructure of GitHub and npm made it feel as if there was no cost to create, publish, discover, install, and depend on them.
In the pre-GitHub world, reputation and longevity were part of the dependency selection process almost by necessity, and it often required vendoring. Plenty of our early dependencies were just vendored into our own Subversion trees by default, in part because we could not even rely on other services being up when we needed them and because maintaining scripts that fetched them, in the pre-API days, was painful. The implied friction forced some reflection, and it resulted in different developer behavior. With npm-style ecosystems, the package graph can grow faster than anybody's ability to reason about it.
The problem that this type of thinking created also meant that solutions had to be found along the way. GitHub helped compensate for the accountability problem and it helped with licensing. At one point, the newfound influx of developers and merged pull requests left a lot of open questions about what the state of licenses actually was. GitHub even attempted to rectify this with their terms of service.
The thinking for many years was that if I am going to depend on some tiny package, I at least want to see its repository. I want to see whether the maintainer exists, whether there are issues, whether there were recent changes, whether other projects use it, whether the code is what the package claims it is. GitHub became part of the system that provides trust, and more recently it has even become one of the few systems that can publish packages to npm and other registries with trusted publishing.
That means when trust in GitHub erodes, the problem is not isolated to source hosting. It affects the whole supply chain culture that formed around it.
GitHub Is Slowly Dying
GitHub is currently losing some of what made it feel inevitable. Maybe that's just the life and death of large centralized platforms: they always disappoint eventually. Right now people are tired of the instability, the product churn, the Copilot AI noise, the unclear leadership, and the feeling that the platform is no longer primarily designed for the community that made it valuable.
Obviously, GitHub also finds itself in the midst of the agentic coding revolution and that causes enormous pressure on the folks over there. But the site has no leadership! It's a miracle that things are going as well as they are.
For a while, leaving GitHub felt like a symbolic move mostly made by smaller projects or by people with strong views about software freedom. I definitely cringed when Zig moved to Codeberg! But I now see people with real weight and signal talking about leaving GitHub. The most obvious one is Mitchell Hashimoto, who announced that Ghostty will move. Where it will move is not clear, but it's a strong signal. But there are others, too. Strudel moved to Codeberg and so did Tenacity. Will they cause enough of a shift? Probably not, but I find myself on non-GitHub properties more frequently again compared to just a year ago.
One can argue that this is good: it is healthy for Open Source to stop pretending that one company should be the default home of everything. Git itself was designed for a world with many homes.
Dispersion Has a Cost
Going back to many forges, many servers, many small homes, and many independent communities will increase decentralization, and in many ways it will force systems to adapt. This can restore autonomy and make projects less dependent on the whims of Microsoft leadership. It can also allow different communities to choose different workflows. What's happening in Pi's issue tracker currently is largely a result of GitHub's product choices not working in the present-day world of Open Source. It was built for engagement, not for maintainer sanity.
It can also make the web forget again. I quite like software that forgets because it has a cleansing element. Maybe the real risk of loss will make us reflect more on actually taking advantage of a distributed version control system.
But if projects move to something more akin to self-hosted forges, to their own self-hosted Mercurial or cgit servers, we run the risk of losing things that we don't want to lose. The code might be distributed in theory, but the social context often is not. Issues, reviews, design discussions, release notes, security advisories, and old tarballs are fragile. They disappear much more easily than we like to admit. Mailing lists, which carried a lot of this in earlier years, have not kept up with the needs of today, and are largely a user experience disaster.
We Need an Archive
As much as I like the idea of things fading out of existence, we absolutely need libraries and archives.
Regardless of whether GitHub is here to stay or projects find new homes, what I would like to see is some public, boring, well-funded archive for Open Source software. Something with the power of an endowment or public funding to keep it afloat. Something whose job is not to win the developer productivity market but just to make sure that the most important things we create do not disappear.
The bells and whistles can be someone else's problem, but source archives, release artifacts, metadata, and enough project context to understand what happened should be preserved somewhere that is not tied to the business model or leadership mood of a single company.
GitHub accidentally became that archive because it became the center of Open Source activity. Once that no longer holds, we should not assume some magic archival function will emerge or that GitHub will continue to function as such. We have already seen what happens when project homes are just personal servers and good intentions, and we have seen what happened to Google Code and Bitbucket.
I hope GitHub recovers, I really do, in part because a lot of history lives there and because the people still working on it inherited something genuinely important. But I no longer think it is responsible to let the continued memory of Open Source depend on GitHub remaining a healthy product.
The world before GitHub had more autonomy and more loss, and in some ways, we're probably going to move back there, at least for a while. Whatever people want to start building next should try to keep the memory and lose the dependence. It should be easier to move projects, easier to mirror their social context, easier to preserve releases, and harder for one company's drift to become a cultural crisis for everyone else.
I do not want to go back to the old web of broken tarball links and abandoned Trac instances. I also do not want Open Source to pretend that the last twenty years were normal or permanent. GitHub wrote a remarkable chapter of Open Source, and if that chapter is ending, the next one should learn from it and also from what came before.
- This is also a good reminder that we rely so very much on the Internet Archive for many projects of the time.↩
-
- April 27, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-27 rss
IDA Plugin Updates on 2026-04-27
Activity:
- binsync
- 7ccbd7cc: Fix documentation links (#520)
- capa
- 87f0970d: Update README with dynamic capa heading (#3060)
- ida-hcli
- python-elpida_core.py
- 3829ddf5: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T23:54Z
- 12409dce: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T23:32Z
- 4a279d04: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T23:09Z
- 00d970a9: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T22:45Z
- af252e8e: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T22:25Z
- 75dca59f: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T21:59Z
- 4d2465d4: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T21:36Z
- 06a6c379: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T21:11Z
- 31158572: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T20:43Z
- 811516f3: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-27T20:19Z
- binsync
-
🔗 r/Leeds Scam companies to avoid rss
I will attach pictures showing what to look out for. Additionally, be careful of any company promising high pay. These people compliment you and essentially groom you into an extremely low-wage, door-to-door sales job, whilst promising greater things, e.g. quick career progression.
submitted by /u/Fit-Librarian5590
[link] [comments] -
🔗 r/LocalLLaMA Microsoft Presents "TRELLIS.2": An Open-Source, 4b-Parameter, Image-To-3D Model Producing Up To 1536³ PBR Textured Assets, Built On Native 3D VAES With 16× Spatial Compression, Delivering Efficient, Scalable, High-Fidelity Asset Generation. rss
TRELLIS.2 is a state-of-the-art large 3D generative model (4B parameters) designed for high-fidelity image-to-3D generation. It leverages a novel "field-free" sparse voxel structure termed O-Voxel to reconstruct and generate arbitrary 3D assets with complex topologies, sharp features, and full PBR materials.
Link to the Paper:
Link to the Code:
Link to Try Out A Live Demo:
submitted by /u/44th--Hokage
[link] [comments]
---|--- -
🔗 badlogic/pi-mono v0.70.5 release
Fixed
- Fixed HTML export preserving ANSI-renderer trailing padding as extra blank wrapped lines.
-
🔗 badlogic/pi-mono v0.70.4 release
Fixed
- Fixed packaged `pi` startup failing because the session selector imported a source-only utility path.
- Fixed packaged
-
🔗 r/york Where do parents buy baby/child car seats now that Paul Stride has closed? rss
Where is there nearby that is good for buying car seats? You don’t know what you’ve got until it’s gone; Paul Stride was amazing, and we now need a replacement for our 3-year-old.
submitted by /u/amusedfridaygoat
[link] [comments] -
🔗 MetaBrainz MusicBrainz Server update, 2026-04-27 rss
This release mostly consists of a very substantial rewrite of the external links editor code, to make that section of our editors more efficient. While doing that, we also fixed a few long-standing links editor bugs. We kept this code in beta for quite a while so the community could help us catch most new bugs, but do not hesitate to report any issues you might find.
A new release of MusicBrainz Docker is also available that matches this update of MusicBrainz Server. See the release notes for update instructions.
Thanks to rinsuki for having contributed to the code. Thanks to fabe56, HibiscusKazeneko and Lioncat6 for having reported bugs and suggested improvements. Thanks to Besnik, DenilsonSama, Khaled Salama, Marc Riera, ShimiDoki, Vaclovas Intas, cerberuzzz, coldified_, dddrnzv, dulijuong_artist, imgradeone, karpuzikov, mfmeulenbelt, salo.rock, smreo1590, syntariavoxmortem, wileyfoxyx and yyb987 for updating the translations. And thanks to all others who tested the beta version!
The git tag is v-2026-04-27.0.
Fixed Bug
- [MBS-8570] - "This relationship already exists" error message does not go away when one duplicate URL is removed
- [MBS-12032] - Adding a duplicate URL rel moves link to new section
- [MBS-14307] - Wikipedia extracts are not displaying
- [MBS-14309] - Can't click documentation/help links
Improvement
- [MBS-14279] - Support Amazon Belgium links
- [MBS-14280] - Block archive.today, archive.is, archive.ph, archive.li, archive.fo, archive.md and archive.vn links
Task
-
🔗 badlogic/pi-mono v0.70.3 release
New Features
- `pi update` can now update pi itself in addition to installed pi packages. See docs/packages.md. (#3680 by @mitsuhiko)
- Azure Cognitive Services endpoint support for Azure OpenAI Responses deployments. See docs/providers.md#api-keys. (#3799 by @marcbloech)
- Suppressible Anthropic extra-usage billing warning via `warnings.anthropicExtraUsage` in `/settings`. See docs/settings.md. (#3808)
- Extension-controlled working row visibility via `ctx.ui.setWorkingVisible()`, allowing extensions to hide the built-in loader row and render custom working state. See docs/extensions.md and examples/extensions/border-status-editor.ts. (#3674)
Added
- Added `pi update` support for updating pi itself in addition to installed pi packages (#3680 by @mitsuhiko).
- Added Azure Cognitive Services endpoint support for Azure OpenAI Responses base URLs (#3799 by @marcbloech).
- Added `warnings.anthropicExtraUsage` and a `/settings` warnings submenu to suppress the Anthropic extra usage billing warning (#3808).
- Added `ctx.ui.setWorkingVisible()` so extensions can hide the built-in interactive working loader row without reserving layout space, plus a border-status editor example that moves working state into a custom editor border (#3674).
Fixed
- Fixed duplicate printable characters from Kitty keyboard protocol CSI-u plus raw character input on layouts such as Italian (#3780).
- Fixed API-key environment discovery and Bun startup to fall back to `/proc/self/environ` when Bun's sandbox leaves `process.env` empty (#3801 by @mdsjip).
- Fixed Bun sandboxed package-manager commands when `process.env` is empty (#3807 by @mdsjip).
- Fixed symlinked packages, resources, skills, and sessions being duplicated in selectors and loaders (#3818 by @aliou).
- Fixed Bedrock prompt-caching and adaptive-thinking capability checks for inference profile ARNs (#3527 by @anirudhmarc).
- Fixed OpenAI Codex Responses default verbosity to `low` when no verbosity is specified.
- Stopped sending empty `tools` arrays to providers that reject them when tools are disabled (#3650 by @HQidea).
- Fixed Anthropic SSE parsing to ignore unknown proxy events such as OpenAI-style `done` terminators (#3708).
models.jsonentries to preserve built-in model lists (#3651). - Fixed
/loginto show auth supplied bymodels.jsonprovider definitions. - Fixed HTML export whitespace around extension-rendered tool output and expandable output hints.
- Fixed bash executor temp output streams leaking file descriptors when output was truncated by line count (#3786)
- Fixed extension
pi.setSessionName()updates to refresh the interactive terminal title immediately (#3686) - Fixed
/treecancellation viasession_before_treeleaving the session stuck in compaction state (#3688) - Fixed Escape interrupt handling when extensions hide the built-in working loader row (#3674)
- Fixed coding-agent test expectations for current default models and missing-auth guidance.
- Fixed long local-LLM SSE streams aborting at 5 minutes with
`UND_ERR_BODY_TIMEOUT` by disabling undici `bodyTimeout`/`headersTimeout` on the global dispatcher; provider SDKs continue to enforce their own deadlines via `retry.provider.timeoutMs` (#3715)
-
🔗 Simon Willison Tracking the history of the now-deceased OpenAI Microsoft AGI clause rss
For many years, Microsoft and OpenAI's relationship has included a weird clause saying that, should AGI be achieved, Microsoft's commercial IP rights to OpenAI's technology would be null and void. That clause appeared to end today. I decided to try and track its expression over time on openai.com.
OpenAI, July 22nd 2019 in Microsoft invests in and partners with OpenAI to support us building beneficial AGI (emphasis mine):
OpenAI is producing a sequence of increasingly powerful AI technologies, which requires a lot of capital for computational power. The most obvious way to cover costs is to build a product, but that would mean changing our focus. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.
But what is AGI? The OpenAI Charter was first published in April 2018 and has remained unchanged at least since this March 11th 2019 archive.org capture:
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.
Here's the problem: if you're going to sign an agreement with Microsoft that is dependent on knowing when "AGI" has been achieved, you need something a little more concrete.
In December 2024 The Information reported the details (summarized here outside of their paywall by TechCrunch):
Last year’s agreement between Microsoft and OpenAI, which hasn’t been disclosed, said AGI would be achieved only when OpenAI has developed systems that have the ability to generate the maximum total profits to which its earliest investors, including Microsoft, are entitled, according to documents OpenAI distributed to investors. Those profits total about $100 billion, the documents showed.
So AGI is now whenever OpenAI's systems are capable of generating $100 billion in profit?
In October 2025 the process changed to being judged by an "independent expert panel". In The next chapter of the Microsoft–OpenAI partnership:
The agreement preserves key elements that have fueled this successful partnership—meaning OpenAI remains Microsoft’s frontier model partner and Microsoft continues to have exclusive IP rights and Azure API exclusivity until Artificial General Intelligence (AGI). [...]
Once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel. [...]
Microsoft’s IP rights to research, defined as the confidential methods used in the development of models and systems, will remain until either the expert panel verifies AGI or through 2030, whichever is first.
OpenAI on February 27th, 2026 in Joint Statement from OpenAI and Microsoft:
AGI definition and processes are unchanged. The contractual definition of AGI and the process for determining if it has been achieved remains the same.
OpenAI today, April 27th 2026 in The next phase of the Microsoft OpenAI partnership (emphasis mine):
- Microsoft will continue to have a license to OpenAI IP for models and products through 2032. Microsoft’s license will now be non-exclusive.
- Microsoft will no longer pay a revenue share to OpenAI.
- Revenue share payments from OpenAI to Microsoft continue through 2030, independent of OpenAI’s technology progress, at the same percentage but subject to a total cap.
As far as I can tell "independent of OpenAI’s technology progress" is a declaration that the AGI clause is now dead. Here's The Verge coming to the same conclusion: The AGI clause is dead.
My all-time favorite commentary on OpenAI's approach to AGI remains this 2023 hypothetical by Matt Levine:
And the investors wailed and gnashed their teeth but it’s true, that is what they agreed to, and they had no legal recourse. And OpenAI’s new CEO, and its nonprofit board, cut them a check for their capped return and said “bye” and went back to running OpenAI for the benefit of humanity. It turned out that a benign, carefully governed artificial superintelligence is really good for humanity, and OpenAI quickly solved all of humanity’s problems and ushered in an age of peace and abundance in which nobody wanted for anything or needed any Microsoft products. And capitalism came to an end.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 r/york Askham Tesco recycling rss
Does anyone know when the big cardboard recycling skip gets emptied? It's been full for weeks now and is in a state
submitted by /u/Isla_Nooblar
[link] [comments] -
🔗 @binaryninja@infosec.exchange The debugger got some real love in our latest update. Hardware breakpoints and mastodon
The debugger got some real love in our latest update. Hardware breakpoints and conditional breakpoints have both landed, and the new debug adapters make things faster and more reliable across a range of workflows. Read more from the latest blog: https://binary.ninja/2026/04/13/binary-ninja-5.3-jotunheim.html#debugger
-
🔗 r/LocalLLaMA MIMO V2.5 PRO rss
submitted by /u/Namra_7
[link] [comments] -
🔗 r/reverseengineering rfcat-py3 rss
submitted by /u/qucrypt
[link] [comments] -
🔗 r/wiesbaden Does anyone fancy driving to a concert in Cologne (Aries) with me this Wednesday? I'll pay for the ticket rss
I (21M) live near Wiesbaden and am going to a concert in Cologne this Wednesday. The artist is called Aries, somewhere in the Indie/Pop/Rock/Hip-Hop direction (here's a sample). I'm really looking forward to it. My only problem: I don't have a car, and with public transport I'd only get back home around 6 a.m.
If someone takes me along (there and back), I'd pay for the ticket plus €20 for fuel. So if you're up for something like this, get in touch with me within the next 24 hours.
Edit: if you have other ideas for what I could do if this doesn't work out, I'm all ears. My current backup plan is to take BlaBlaCar there and to walk through the crowd at the concert with a cardboard sign saying:
Köln -> Frankfurt
Anybody?
submitted by /u/BullfrogMiserable554
[link] [comments] -
🔗 r/york Thinking of buying a Persimmon new build home in Selby. There’s so many mixed reviews about this company. Was wondering on people’s experiences with this company. rss
submitted by /u/Stumbling_Gecko_473
[link] [comments] -
🔗 r/LocalLLaMA Luce DFlash: Qwen3.6-27B at up to 2x throughput on a single RTX 3090 rss
Hey fellow Llamas, your time is precious, so I'll keep it short. We built a GGUF port of DFlash speculative decoding: a standalone C++/CUDA stack on top of ggml that runs on a single 24 GB RTX 3090 and hosts the new Qwen3.6-27B. We call it Luce DFlash (https://github.com/Luce-Org/lucebox-hub; MIT): ~1.98x mean speedup over autoregressive on Qwen3.6 across HumanEval / GSM8K / Math500, with zero retraining (z-lab published a matched Qwen3.6-DFlash draft on 2026-04-26, still under training, so AL should keep climbing). If you have CUDA 12+ and an NVIDIA GPU (RTX 3090 / 4090 / 5090, DGX Spark, other Blackwell, or Jetson AGX Thor with CUDA 13+), all you need is:

```
# After cloning the repo (link in the first comment):
cd lucebox-hub/dflash
cmake -B build -S . -DCMAKE_BUILD_TYPE=Release
cmake --build build --target test_dflash -j

# Fetch target (~16 GB)
huggingface-cli download unsloth/Qwen3.6-27B-GGUF Qwen3.6-27B-Q4_K_M.gguf --local-dir models/

# Matched 3.6 draft is gated: accept terms + set HF_TOKEN first
huggingface-cli download z-lab/Qwen3.6-27B-DFlash --local-dir models/draft/

# Run
DFLASH_TARGET=models/Qwen3.6-27B-Q4_K_M.gguf python3 scripts/run.py --prompt "def fibonacci(n):"
```

That's it. No Python runtime in the engine, no llama.cpp install, no vLLM, no SGLang. The binary links libggml*.a and never libllama. Luce DFlash will:

- Load Qwen3.6-27B Q4_K_M target weights (~16 GB) plus the matched DFlash bf16 draft (~3.46 GB) and run DDTree tree-verify speculative decoding (block size 16, default budget 22, greedy verify).
- Compress the KV cache to TQ3_0 (3.5 bpv, ~9.7x vs F16) and roll a 4096-slot target_feat ring so 256K context fits in 24 GB. Q4_0 is the legacy path and tops out near 128K.
- Auto-bump the prefill ubatch from 16 to 192 for prompts past 2048 tokens (~913 tok/s prefill on 13K prompts).
- Apply sliding-window flash attention at decode (default 2048-token window, 100% speculative acceptance retained) so 60K context still decodes at 89.7 tok/s instead of 25.8 tok/s.
- Serve over an OpenAI-compatible HTTP endpoint or a local chat REPL.
Running on RTX 3090, Qwen3.6-27B UD-Q4_K_XL (unsloth Dynamic 2.0) target, 10 prompts/dataset, n_gen=256:
| Bench | AR tok/s | DFlash tok/s | AL | Speedup |
| --- | ---: | ---: | ---: | ---: |
| HumanEval | 34.90 | 78.16 | 5.94 | 2.24x |
| Math500 | 35.13 | 69.77 | 5.15 | 1.99x |
| GSM8K | 34.89 | 59.65 | 4.43 | 1.71x |
| Mean | 34.97 | 69.19 | 5.17 | 1.98x |

As you can see, the speedup is real on consumer hardware, not a paper number. The target graph produces bit-identical output to autoregressive in AR mode; the draft graph matches the z-lab PyTorch reference at cos sim 0.999812. Q4_0 KV costs ~3% AL at short context (8.56 to 8.33) and wins at long context, where F16 won't fit anyway. Constraints: CUDA only, greedy verify only (temperature/top_p on the OpenAI server are accepted and ignored), no Metal / ROCm / multi-GPU. The repo started single-3090; recent community PRs added support for RTX 5090, DGX Spark / GB10, other Blackwell cards, and Jetson AGX Thor (sm_110 + CUDA 13). Feedback more than welcome!

submitted by /u/sandropuppo
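The Speedup column in the table above is just the ratio of the two throughput columns; a quick check, using only the numbers quoted in the post:

```python
# Recompute per-benchmark speedup from the table: DFlash tok/s over AR tok/s.
bench = {
    "HumanEval": (34.90, 78.16),
    "Math500":   (35.13, 69.77),
    "GSM8K":     (34.89, 59.65),
}
for name, (ar, dflash) in bench.items():
    print(f"{name}: {dflash / ar:.2f}x")  # matches the 2.24x / 1.99x / 1.71x column
```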
[link] [comments] -
🔗 r/Leeds Problem neighbours rss
We have a house of multiple occupancy next door to our house which has adjoining garages. One of the garages is rented out by someone who does not live in any of the nearby houses and just rents the garage. This garage is in very frequent use by the guy renting it, who is habitually working on his car or multiple cars with groups of noisy people, dragging equipment around and using power tools weekend after weekend whenever the weather is good. We have a lovely quiet area apart from when this guy and his cohort show up, and they don't even live here.
Is there any department in LCC we can contact to get help with this? It is starting to really affect our quality of life and put us off spending time in our own garden, and I imagine it is affecting other neighbours too. Or does anyone know how I can find out who owns the property next door?
Imagine every Sunday it was like having a mechanics / building site going full tilt all afternoon. It's amazing how thoughtless people can be.
Thanks
submitted by /u/sanchez599
[link] [comments] -
🔗 r/Leeds Best pub chips in Leeds rss
Looking for the best pub chips in Leeds. Must be CHUNKY chips, strictly NO fries. Include pics if poss. Countryside areas preferred (to pair with a walk)
TIA 🥔🥔🥔🥔
submitted by /u/Educational_Clue7522
[link] [comments] -
🔗 r/reverseengineering Using Google's Gemma 4 E4B local AI model to Reverse Engineer a simple Crackme rss
submitted by /u/CatAffectionate6618
[link] [comments] -
🔗 r/Leeds Gym friend rss
Hey everyone,
I’m looking for a gym partner to train with regularly. Ideally someone who can spot me on certain lifts and help with general accountability.
I’m 26M and work in the city centre. I’m planning to join either The Edge or PureGym at the Merrion Centre. My main focus is building overall strength and improving general health, so it would be great to find someone with similar goals.
My preferred training times are:
Weekdays: after 6pm (or possibly before 8am)
Weekends: flexible
I’m relatively new—trained consistently for about 6 months last year but fell out of the routine, so I’m keen to get back into it properly. If you already have a workout plan you’re following, I’d be happy to tag along.
My main goal right now is improving my bench press, along with bodyweight exercises like pull-ups.
submitted by /u/CraftyBrie
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync plugin-repository.json rss
sync plugin-repository.json No plugin changes detected -
🔗 r/Yorkshire Fuel costs soar 65% for Yorkshire Air Ambulance rss
submitted by /u/Kagedeah
[link] [comments] -
🔗 r/Harrogate Has the gentrification of Bilton began? rss
Lots of new movers, young and from Leeds. Will this lead to businesses popping up supporting their tastes? The Knox is pricier than some town center spots already!
submitted by /u/MechanicAggressive16
[link] [comments] -
🔗 sacha chua :: living an awesome life 2026-04-27 Emacs news rss
There was a big discussion on lobste.rs about people's favourite Emacs packages and that sparked similar conversations on Reddit and HN. Discussions like that are a great source of inspiration. I added a couple of small improvements to my config based on this week's Emacs news, like diff-hl.
Also, lots of people expressed their appreciation for Chris Wellons, who is moving on to other editors for now. Me, I've enjoyed using simple-httpd, impatient, and skewer, and I'm glad Chris made and shared them. Many of his packages already have new maintainers, and the rest are up for adoption. Perhaps we'll see him around again someday!
- Help wanted:
- Upcoming events (iCal file, Org):
- Emacs Berlin: Emacs-Berlin Hybrid Meetup https://emacs-berlin.org/ Wed Apr 29 1000 America/Vancouver - 1200 America/Chicago - 1300 America/Toronto - 1700 Etc/GMT - 1900 Europe/Berlin - 2230 Asia/Kolkata – Thu Apr 30 0100 Asia/Singapore
- M-x Research: TBA https://m-x-research.github.io/ Fri May 1 0800 America/Vancouver - 1000 America/Chicago - 1100 America/Toronto - 1500 Etc/GMT - 1700 Europe/Berlin - 2030 Asia/Kolkata - 2300 Asia/Singapore
- Emacs.si (in person): Emacs.si meetup #5 2026 (v #živo) https://dogodki.kompot.si/events/b4192df7-3da4-41b8-95a3-532b93923656 Mon May 4 1900 CET
- EmacsATX: Emacs Social https://www.meetup.com/emacsatx/events/314341747/ Thu May 7 1600 America/Vancouver - 1800 America/Chicago - 1900 America/Toronto - 2300 Etc/GMT – Fri May 8 0100 Europe/Berlin - 0430 Asia/Kolkata - 0700 Asia/Singapore
- Atelier Emacs Montpellier (in person) https://lebib.org/date/atelier-emacs Fri May 8 1800 Europe/Paris
- Other stuff:
- Sacha Chua: April 30 Yay Emacs: Sacha and Prot Talk Emacs - Newbies/Starter Kits (Prot)
- Battle of the Editors - Satellite Event - Tue Jun 30 4:30 PM Aachen, Seffenterweg 23 / Kopernikusstr. 6 (IT Center) for hackathon participants and guests
- Sacha Chua: May 4: Emacs Chat with Amin Bandali
- Emacs configuration:
- Emacs Lisp:
- What are some common code smells that inexperienced Elispers make?
- Dave Pearson: expando.el v1.6 - expand macro in a different window; fix keybinding
- Protesilaos: Emacs livestream: Maintaining Denote, TMR, and more (YouTube 3:06:05)
- Ideas for things to bind to C-z (@oantolin@mathstodon.xyz)
- Appearance:
- Navigation:
- Dave Pearson: itch.el v1.3.0 - switch to the scratch buffer
- Tip: repeat-map and expreg-expand (@plantarum@ottawa.place)
- The Definitive Guide to Code Folding in Emacs (Reddit, Irreal)
- Writing:
- Dave Pearson: blogmore.el v4.2 - cycle image extensions
- Dave Pearson: kbdify.el v1.0.0 - marking up keys in Markdown
- Denote:
- Org Mode:
- (emacs) org mode - your life in plain text (09:49)
- Spacemacs | Org-contacts Agenda Anniversaires | Productivité (02:22)
- How I use org-roam - The Universe of Joshua Blais
- Spacemacs | Org-roam Notes avec tags | Productivité (00:59)
- Import, export, and integration:
- Quick tutorial to get a blog online from Org mode thanks to Org Social | Andros Fenollosa (@andros@activity.andros.dev, in Spanish, @hispaemacs@fosstodon.org)
- Como colorear los bloques de código en Org-mode | Andros Fenollosa (2016, @hispaemacs@fosstodon.org)
- Code for org-edit-special, eglot, and Python (@anoncheg@mastodontech.de)
- Get ready for Orgy in 15 minutes — Bastien Guerry (Irreal, JC Helary) - static site generator
- Tony Zorman: Writing Literate Blog Posts
- Completion:
- Coding:
- Math:
- Shells:
- Web:
- Multimedia:
- AI:
- Community:
- Fortnightly Tips, Tricks, and Questions — 2026-04-21 / week 16
- Your sources for inspiration
- Sacha Chua: YE20 braindump: Emacs Carnival: Newbies/starter kits (YouTube, 1:03:50)
- Randy Ridenour: Emacs and the Sunk Cost Fallacy
- Emacs Philosophy and Infinite Depth with Protesilaos - The Universe of Joshua Blais (YouTube, 1:40:55)
- A month of Elisp · Perpetually Curious Blog
- Other:
- I made a TaskJuggler major mode for Emacs (Reddit)
- Charles Choi: Some nice to know keybindings when using the mouse in Emacs (Irreal)
- Marcin Borkowski: How I use my numeric keypad with Emacs Ledger mode
- anju v1.2: center and fill menus, edit - duplicate, look up; improve mouse interactions in Emacs (@kickingvegas@sfba.social)
- Rahul Juliato: Getting Emacs proced.el to Show CPU and Memory on macOS (Reddit)
- Emacs development:
- Re: About "prefixed-core" - Philip Kaludercic
- Add treesit-query-with-fallback
- New user option compilation-search-extra-path
- ; * etc/NEWS: Announce "setrgbf" and "setrgbb" terminfo capabilities
- Add language-environment and input methods for Syriac
- Rebind 'tab-bar-mouse-close-tab' from <down-mouse-2> to <mouse-2>
- Show executed tests from erts files via the ERT results buffer
- New packages:
- denote-wordcloud: Generate a word cloud (MELPA)
- dmsg: Timestamped debug messages with backtrace support (GNU ELPA)
- evil-ghostel: Evil-mode integration for ghostel (MELPA)
- mozc-modeless: Modeless Japanese input with Mozc (MELPA)
- org-lark: Export Lark docs to Org (MELPA)
- verdict: Generic test runner with treemacs results UI (MELPA)
- with-command-redo: Repeat commands with automatic undo (MELPA)
Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!
You can comment on Mastodon or e-mail me at sacha@sachachua.com.
-
🔗 r/Leeds Anyone looking for more Alt/Rock Friends? like going Key Club, Spoons, NQ64, Pixel Bar etc?.. Join our Alt/Rock/Emo Whatsapp Social Group! xo rss
Love Keyclub (Slamdunk, FUEL, GARAGE Clubnights), NQ64, Pixel Bar, Wetherspoons, Pubs etc but have a lack of alternative friends to go with? Just want to make more alternative friends, have fun chats & get involved in social events?
A few of us from Reddit, Facebook etc have banded together from previous appeals and have a new fun Whatsapp Alt/Rock/Emo Social Group chat now, 100+ members and counting!
We had a successful recruitment on here a few months ago which blew up & got overwhelming so had to trickle people in but there are too many to go through, so starting a new fresh post to add more people
The group is roughly 18-35 age range & currently around 50/50 gender mix so plenty of people of different age/genders etc, very inclusive and everyone is getting on great together.
We have regular nights out especially on Weekends (Keyclub Club Nights, Spoons, Bars, NQ64, Pixel Bar, Flight Club, Cinema trips.. anything fun really!) which can get anywhere from 10-15 people attending. Spoons & Key Club on Saturdays is a particular fave. but we are always planning social events, mid week chill things etc
We also have a discord for chill voice chats & casual gaming etc.
If you'd like to join then leave a comment with your age/gender & I'll DM you an invite! all welcome
I will invite people in slowly to keep the ratio of ages, sexes etc. balanced, so there's always people of a similar age
Leave a comment & I'll DM an invite when available! x
PLEASE CHECK DMS FOR INVITES
submitted by /u/rmonkey100
[link] [comments] -
🔗 r/york Flowers make this city even better somehow🥹💐🪻 rss
submitted by /u/Wedding-Beauty
[link] [comments] -
🔗 r/LocalLLaMA To 16GB VRAM users, plug in your old GPU rss
For those who want to run the latest dense ~30B models but only have 16GB of VRAM: if you have an old card with 6GB of VRAM or more, plug it in.
What matters is that everything fits in VRAM, even across two cards, and even if one of them is quite weak.
I have a 5070 Ti 16GB and an old 2060 6GB. The common wisdom is that you need two identical GPUs to maximize performance. But one day I was struck by the idea: why not give it a try?
If you didn't buy a motherboard just for LLMs, it's very possible you have one true PCIe x16 slot and a couple that look like x16 but are actually wired as x4, just like me. That's a perfect slot for an old card.
16GB + 6GB = 22GB, which gets close to a 24GB-class card. If you have a better old card, lucky you!
Then you run llama-server with a config like this:

```
[*]
jinja = true
cache-prompt = true
n-gpu-layers = 999
no-mmap = true
mlock = false
np = 1
t = 0

[qwen/qwen3.6-27b]
model = ./Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf
mmproj = ./Qwen3.6-27B-GGUF/mmproj-Qwen3.6-27B-BF16.gguf
reasoning = on
dev = Vulkan1,Vulkan2
c = 128000
no-mmproj-offload = true
cache-type-k = q8_0
cache-type-v = q8_0
```

A couple of specific points:
- dev = Vulkan1,Vulkan2 enables the two GPUs; run `llama-server.exe --list-devices` to see what you should set.
- no-mmap = true and mlock = false keep the model out of your RAM.
- np = 1, no-mmproj-offload (or just don't supply an mmproj model), and the q8_0 cache-type-k / cache-type-v minimize the VRAM needed.
- n-gpu-layers = 999 prefers GPU offloading; this may be unnecessary, but I'd keep it.
- split-mode = layer splits the layers asymmetrically across the devices; "layer" is the default, so you don't see it above.
- c = 128000 could be a bit of a stretch, but it works well enough for me.

BTW, I also have an Intel integrated GPU that the monitors are plugged into, which is Vulkan0.
Some numbers: at 128k max context and 71k actual context usage, pp = 186 t/s and tg = 19 t/s, quite usable speed compared to the 4 t/s on a single card.
```
[56288] prompt eval time =  5761.53 ms /  1076 tokens (  5.35 ms per token, 186.76 tokens per second)
[56288]        eval time = 58000.15 ms /  1114 tokens ( 52.06 ms per token,  19.21 tokens per second)
[56288]       total time = 63761.69 ms /  2190 tokens
[56288] slot release: id 0 | task 654 | stop processing: n_tokens = 71703, truncated = 0
```

Edit:
Some folks wanted numbers, so here is llama-bench, this time with CUDA. Runs with --device CUDA0 use only the single GPU; runs without it use all GPUs. It's fairly clear that fitting on GPU, even partly on a second weak one, matters a lot for tg speed, especially at long context.
```
llama-b8948-bin-win-cuda-12.4-x64/llama-bench.exe \
  --model ./lmstudio-community/Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf \
  --device CUDA0 --fit-target 64 -d 8192,16384
```

| model | size | params | backend | ngl | dev | fitt | test | t/s |
| --- | ---: | ---: | --- | --: | --- | ---: | ---: | ---: |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | pp512 @ d8192 | 903.13 ± 26.25 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | tg128 @ d8192 | 16.54 ± 0.14 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | pp512 @ d16384 | 663.60 ± 9.22 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | tg128 @ d16384 | 12.03 ± 0.08 |

```
llama-b8948-bin-win-cuda-12.4-x64/llama-bench.exe \
  --model ./lmstudio-community/Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf \
  --fit-target 64 -d 8192,16384
```

| model | size | params | backend | ngl | fitt | test | t/s |
| --- | ---: | ---: | --- | --: | ---: | ---: | ---: |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | pp512 @ d8192 | 769.00 ± 4.50 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | tg128 @ d8192 | 25.40 ± 0.30 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | pp512 @ d16384 | 668.83 ± 2.83 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | tg128 @ d16384 | 24.31 ± 0.09 |

```
llama-b8948-bin-win-cuda-13.1-x64/llama-bench.exe \
  --model ./lmstudio-community/Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf \
  --device CUDA0 --fit-target 64 -d 8192,16384
```

| model | size | params | backend | ngl | dev | fitt | test | t/s |
| --- | ---: | ---: | --- | --: | --- | ---: | ---: | ---: |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | pp512 @ d8192 | 981.43 ± 27.91 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | tg128 @ d8192 | 16.87 ± 0.17 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | pp512 @ d16384 | 751.15 ± 16.03 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | CUDA0 | 64 | tg128 @ d16384 | 12.08 ± 0.12 |

```
llama-b8948-bin-win-cuda-13.1-x64/llama-bench.exe \
  --model ./lmstudio-community/Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf \
  --fit-target 64 -d 8192,16384
```

| model | size | params | backend | ngl | fitt | test | t/s |
| --- | ---: | ---: | --- | --: | ---: | ---: | ---: |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | pp512 @ d8192 | 807.61 ± 7.40 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | tg128 @ d8192 | 24.85 ± 1.57 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | pp512 @ d16384 | 732.96 ± 3.86 |
| qwen35 27B Q4_K - Medium | 15.40 GiB | 26.90 B | CUDA | 99 | 64 | tg128 @ d16384 | 24.40 ± 0.07 |

submitted by /u/akira3weet
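As a sanity check, the tok/s figures in the server log earlier in the post are just token counts divided by wall-clock time; a minimal sketch using the logged numbers:

```python
# Throughput from the llama-server log: tokens / elapsed seconds.
prompt_tokens, prompt_ms = 1076, 5761.53
gen_tokens, gen_ms = 1114, 58000.15

pp = prompt_tokens / (prompt_ms / 1000)  # prompt processing rate
tg = gen_tokens / (gen_ms / 1000)        # token generation rate

print(f"pp = {pp:.2f} tok/s, tg = {tg:.2f} tok/s")  # pp = 186.76 tok/s, tg = 19.21 tok/s
```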
[link] [comments] -
🔗 r/Yorkshire Cherry trees colouring the world. rss
submitted by /u/Still_Function_5428
[link] [comments] -
🔗 r/Leeds Does anyone have spare beer bottles? rss
I am brewing my own beer and I need bottles preferably brown. If you work in a pub and have empties I can come and collect? My local only does alc free bottles and doesn’t sell many. Thanks
submitted by /u/DiligentPotential960
[link] [comments] -
🔗 tomasz-tomczyk/crit v0.10.1 release
What's Changed
Comments panel redesign
The comments panel has been rebuilt with a segmented filter (All / Open / Resolved) and collapsible groups. Pair it with the new "hide resolved comments" setting (`h` shortcut) to focus on what's still open during a review.

- feat: redesign comments panel with segmented filter and collapsible groups by @tomasz-tomczyk in #354 - thanks @omervk for suggestions in this area!
General
- feat: redesign disconnected state as a sticky banner by @tomasz-tomczyk in #347 - Thanks @vereis for inspiration!
- feat: add hide-resolved setting for inline comments by @tomasz-tomczyk in #353 - Thanks @vereis for the suggestion!
- feat: store CLI args in review file and include in share payload by @tomasz-tomczyk in #349
- feat: replace custom LCS word-diff with @sanity/diff-match-patch by @tomasz-tomczyk in #348
- fix: remove blur/scrim overlay from disconnected state by @tomasz-tomczyk in #352
- fix: fetch comment replies from crit-web during share sync by @tomasz-tomczyk in #350
- fix: hide TOC panel when buildToc is called with no headings by @tomasz-tomczyk in #360
- fix: clarify Hide resolved comments label in settings by @tomasz-tomczyk in #364
- fix: hide comment-line highlight when 'h' hides resolved comments by @tomasz-tomczyk in #365
- fix: collapse reply form after submit; auto-close empty comment forms by @tomasz-tomczyk in #366
- fix: preserve replies on fingerprint-matched comments + cleanup by @tomasz-tomczyk in #367
Internal refactors
- docs: update plugin install instructions to claude CLI syntax by @tomasz-tomczyk in #351
- docs: rule on cookies vs localStorage for persisted settings by @tomasz-tomczyk
- chore: remove releasing section from AGENTS.md by @tomasz-tomczyk
- chore: add Codecov integration for unit and e2e coverage by @tomasz-tomczyk in #359
- chore: Exclude vendored Go packages from coverage profile by @tomasz-tomczyk in #361
- test: add unit tests for high-value uncovered functions by @tomasz-tomczyk in #362
- test: add comprehensive tests for server handlers, session, auth, and daemon by @tomasz-tomczyk in #363
- chore: update GitHub Actions to latest versions, add dependabot by @tomasz-tomczyk in #355
- chore(deps-dev): bump stylelint from 17.7.0 to 17.9.0 by @dependabot in #356
- chore(deps): bump mermaid from 11.13.0 to 11.14.0 by @dependabot in #357
- chore(deps-dev): bump eslint from 10.2.0 to 10.2.1 by @dependabot in #358
- chore: copy mermaid 11.14.0 to frontend/ by @tomasz-tomczyk
- chore: add mise trust to wt.toml post-start and fix e2e-share rate limiting by @tomasz-tomczyk
- refactor: hide-resolved state, persist filter, restore switch CSS by @tomasz-tomczyk in #368
- refactor: port hook lifecycle and a11y fixes from crit-web for parity by @tomasz-tomczyk in #369
Full Changelog :
v0.10.0...v0.10.1 -
🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.
submitted by /u/AutoModerator
[link] [comments] -
🔗 r/wiesbaden Need help with moving rss
Hey guys!
My girlfriend and I just moved to Wiesbaden for university (Daimlerstraße, 65197). I rented a van myself and drove all our stuff here. But now we have a problem: we can't get our washing machine from the van up to our apartment on the 4th floor.
Does anyone have tips, or maybe even time on short notice to help carry it up? We'd happily give something in return!
Thanks a lot in advance!
submitted by /u/Orph3us_151
[link] [comments]
-