to read (pdf)
- I don't want your PRs anymore
- JitterDropper | OALABS Research
- DomainTools Investigations | DPRK Malware Modularity: Diversity and Functional Specialization
- EXHIB: A Benchmark for Realistic and Diverse Evaluation of Function Similarity in the Wild
- Neobrutalism components - Start making neobrutalism layouts today
- April 26, 2026
-
🔗 r/Harrogate Cheapest option to London rss
I need to travel to London once a week for a few months for a job, what’s the best and cheapest way to book this? I’ve found booking via uber gets 10% credits and Avios points. Are there any others??
submitted by /u/Odd_Bookkeeper_6027
[link] [comments] -
🔗 Register Spill Joy & Curiosity #83 rss
This is a time of great technological change. You could even wring a "once in a lifetime" out of me. Many times per week now I say to either myself or someone who just shared some news: this is crazy, man.
The numbers, the pace, the demand, the bottlenecks shifting, the new capabilities emerging, and, man, the predictions. The predictions. AI will do that, AI will do this, in the future we'll do all of this and none of that, but surely this will still be that and that thing will be the most important thing.
I've done it too, of course. I've predicted quite a few things in past issues of this newsletter and, hey, yes, I was right a few times. And so were others.
But we're talking about technological progress here and that is very hard to predict, especially its second-order effects. So, as you read through the things I shared below, I want you to keep the following quote in mind, because it's been stuck in mine for many weeks now and I found it helpful to carry around with me:
He did not create a world that went as he wanted, but he created a world that went well. We have many examples of that. Trains and bicycles come in, and we get feminism because it's easier for people, especially women, to move freely and independently. They can organize. They can mobilize. We get suffragettes. Did the inventor of the train intend for there to be women's liberation? No. Did it go the way he imagined? No. Did it go well? Yes.
Or consider this:
After the Great War, the Haber-Bosch process was used throughout the world to fix nitrogen on a grand scale. […] It was synthetic fertilizer that enabled Europe, the Americas, China and India to escape mass starvation and consign famine largely to the history books: the annual death rate from famine in the 1960s was 100 times greater than in the 2010s. […] If Haber and Bosch had not achieved their near-impossible innovation, the world would have ploughed every possible acre, felled every forest and drained every wetland, yet would be teetering on the brink of starvation, just as William Crookes had forecast.
That was after the war. Here's what Bosch and Haber did with their process during the war:
Then in September 1914 Bosch made the famous 'saltpetre promise' that he could convert the Oppau plant so that it turned ammonia into nitrate, using a newly discovered iron-bismuth catalyst. He built an even bigger plant at Leuna, producing huge quantities of nitrate and thus probably prolonging the war. Haber, in the meantime, had invented gas warfare, personally presiding over the first chlorine attack at Ypres in March 1915.
Now, who would've predicted going from that to that?
-
Amp's smart mode now uses Opus 4.7. I think it's a great model. I now often switch between smart and deep mode. One plans, the other reviews, and vice versa.
-
Last week I re-read Mike Acton's Expectations of Professional Software Engineers and, man, is it good. So, so good. If you haven't, you need to read this right now. This is software engineering in a team, in a company, in a business. Hacking isn't programming isn't engineering, but what he describes here, that's the real thing. And -- of course you have to say this, Thorsten -- yes: this all still applies when using AI. Maybe even more so. Just like The Basics.
-
For many, many years I've come across strong recommendations to watch this talk by Richard Hamming: You and Your Research. Not considering myself a scientist, I shrugged off those recommendations and never saw it. I can tell you now: that was a huge mistake. This morning, right after waking up, still in bed, I read this transcript, start to end, and let me tell you this: watch the talk or read the transcript! If you're here, reading this newsletter, I'm certain you will get something out of it. It's fantastic.
-
Highly, highly recommend you watch this interview with Dylan Patel on the current state of tokenomics. Really: if you only have a vague idea of what "compute constrained" means, you have to watch this. (Also, the last ten minutes, in which Dylan talks about the optics of the model companies, are kinda separate from tokenomics, but worth it alone.)
-
Talking of which: "Cursor has also given SpaceX the right to acquire Cursor later this year for $60 billion or pay $10 billion for our work together." $60 billion (!) now sounds like $60 million did in 2012.
-
Kevin Kwok's thoughts on Cursor's and SpaceX's partnership are interesting, but I disagree with him on the premise that model and harness have to go hand in hand. I don't think the causality of the loop is there: Claude 3.5's ability and eagerness for tool calls was the Urknall of agents. That's what led us to build Amp and Anthropic to build Claude Code.
-
Bonkers numbers: Google wants to invest up to $60B in Anthropic. The Hacker News comments are interesting.
-
Justin Jackson is asking: what has technology done to us? I very much don't agree with the quoted statement of "technology will always do its worst thing" (and neither does Justin, it sounds like.)
-
It's cool to care: "Whenever somebody asks why, I don't have a good answer. Because it's fun? Because it's moving? Because I enjoy it? I feel the need to justify it, as if there's some logical reason that will make all of this okay. But maybe I don't have to. Maybe joy doesn't need justification. […] So much of our culture tells us that it's not cool to care. It's better to be detached, dismissive, disinterested. Enthusiasm is cringe. Sincerity is weakness. I've certainly felt that pressure - the urge to play it cool, to pretend I'm above it all. To act as if I only enjoy something a 'normal' amount. Well, fuck that."
-
Take some time to play around with ChatGPT Images 2.0. It's mind-blowing. If they can accurately reconstruct screenshots like this, regardless of whether that's the "image" model part or the "thinking" model part, I think something just shifted. Also, what a sick landing page.
-
This was great: What will be scarce? The question that leads to the one in the title is this: "If advanced AI brings material abundance--if machines can produce many if not all forms of human production at very low marginal cost--does economics become irrelevant?" The whole piece explains the possible mechanisms at play and also answers the question of whether economics will become irrelevant, but even more interesting is the prediction on the future of work: "The economics of structural change tells us that when technology makes one type of production cheap, the economy doesn't collapse. It transforms. It shifts toward the things that technology can't make cheap. For AI, those things are exactly the ones where human involvement carries inherent, irreplaceable value." And that means the "durable jobs will be in the relational sector, where the human element is the product itself." Or, in other words: "You don't need to be Picasso. You need to be the person whose involvement makes the product feel like it was made for someone, by someone."
-
"A parasite that has been eating people for 3,500 years is about to be wiped off the planet. It infected 3.5 million people in 1986. Last year, it infected 10. And I have not seen it make a single front page." Believe it or not, but in seventh grade I gave a presentation in biology class on the Guinea worm. Use Google Image search if you're as brave as I was in seventh grade. Yeah, thought so.
-
This is from December last year, so the numbers are even crazier now, which makes this even more interesting: Liar's Valuation. I knew about "take last month's revenue and multiply by twelve," but the tiered investment rounds were new to me, and so was the "give heavy discount in year one, but then report year three bookings as ARR."
-
The annotated Unicode map. More of this!
-
Yes, it's Sky Sports News of all places: "Pressure is a privilege. And if you're feeling any pressure or the weight of any expectation, you are breathing rare air, that very few of us get to live inside." Good frame.
-
Or, as Josh Kushner said: "Every experience is training you for the next one… In order to become king, God didn't give David a crown, he gave him Goliath."
-
Tim Cook is stepping down as Apple's CEO. This Stratechery reflection was very interesting: Tim Cook's Impeccable Timing. For example, I had no clue that Apple in China (as in: moving its manufacturing to China) was the work of Cook. For me, Cook will always be the CEO who was at the helm when the M1 shipped, one of the most remarkable engineering achievements I've witnessed.
-
Apple's incoming CEO John Ternus in 2024 in a commencement speech: "At some point in my first year, I found myself at a supplier facility. I was far away from home, it was well past midnight. I was using a magnifying glass to count the number of grooves on the head of this screw, which, remember, lives on the back of the display. And I was arguing with the supplier because these parts had 35 grooves, they were supposed to have 25. I distinctly remember stepping back for a minute and thinking to myself, 'What the hell am I doing? Is this normal?' And I thought about it, and I realized it might not be normal, but it's right. It's right because I'd already spent months working on that product, and if you're going to spend that much time on something, you should put in your very best effort. Maybe a customer notices, maybe they don't, but either way, whenever I saw one of those displays on someone's desk, it mattered to me to know that my teammates and I had considered everything about it and done the very best job we could." There's a lot more good stuff in there. I'm excited.
-
After probably ten years of using Alfred I switched over to Raycast two years ago and one thing that I've sporadically but consistently missed was Alfred's "Large Type" feature: you type in a bit of text, hit a shortcut, and boom, the text is now as big as your display. Very helpful when you want to show someone in the room the wifi password, for example. So, this week I thought: surely there's a Raycast plugin for that? And there is, but the text isn't that large. But guess what, there's also this: large-type.com. How good is that?
-
Adam Mastroianni again with some very good writing on capital-S science: Nothing ever dies. It merely becomes embarrassing. I didn't know that ego depletion doesn't reproduce! While reading I had to think of Brandolini's Law: "The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it." (In 2015 Brandolini and I both gave a talk at a Ruby conference in Wrocław, Poland, and we chatted for half an hour at the airport and, not sure exactly why, but I'm oddly proud of that.)
-
Orson Scott Card, author of Ender's Game: "Those changes made, I sent it to Ben again. I did not remind him of what he had advised me to do. I merely told him I liked my title, and said, 'I have addressed your other concerns,' which was true. I figured he wouldn't remember what his exact words had been. My answer was a check. [...] Did Ben's feedback help? Yes -- but his specific advice was not right, and I knew it. [...] Editors don't know more than you about your story. They especially don't know why they decide to accept or reject stories. YOU have to know what your story needs to be, and take only advice that you believe in."
-
Reminded me a lot of Bill Hader on feedback: "When people give you notes on something, when they tell you it's wrong, they're usually right. When they tell you how to fix it, they're wrong."
-
exe.dev raised a Series A: "We are building a cloud that makes sense for the current and future state of software development. One that includes the features needed for fast, secure development out of the box. A cloud developers actually enjoy using. We want to revitalize the spirit of projects like early Heroku (though our technology is very different) and ship features that bring you joy." (Not to take away from this announcement, hence the parenthetical: the impact Heroku had on a certain generation of programmers working on developer tooling is hard to overstate. I bring it up a lot, and so do my teammates who are close to my age and worked with web technologies in the early 2010s.) I'm very excited to see what they'll do! I like using exe.dev a lot.
-
I also really like David's personal statement that goes along with the funding announcement: I am building a cloud.
-
Just a reminder: chat jimmy exists. Try it. You have to. Try it and then imagine what we could do if one of today's frontier models ran at even half that speed. Send me a letter if you know whether that's physically impossible.
-
New Larry David biography is coming out this year. Pretty, pretty, pretty good.
-
Elad Gil's Random thoughts while gazing at the misty AI Frontier. Lots of interesting things in there. AI researchers' distributed IPO, compute constraints, hidden layoffs, and also this bit: "It is not just the model you use, but the environment, prompting, etc you build around it that helps impact your choice. Brand also matters more then many people think. At some point, either one coding model breaks very far ahead, or they stay neck in neck."
-
Maggie Appleton: One Developer, Two Dozen Agents, Zero Alignment. I think I see the same future that Maggie sees. And we're building it at Amp.
-
That's a title worthy of a book, not a post, but the content is still fascinating: Fabric is harder than steel. As someone who's been chasing the perfect t-shirt for years and who has a very deep fascination with "tech shirts" (not company logos, but high-quality shirts made of "functional" textiles), this was very cool. I often wondered: how can car seats be this good for so long? Well, turns out it's engineering.
-
Jeff Geerling: New 10 GbE USB adapters are cooler, smaller, cheaper. I could read blog posts like this one five times every day.
-
I Found It: The Best Free Restaurant Bread in America. This was fantastic. Go read it if you have an hour and want to smile and enjoy some great writing. There are many quote-worthy sentences in there, but I'll let you read them yourself. Instead, here's a free bread anecdote. Once upon a time, I was working on a farm in Australia, along with around ten other backpackers. Handful of Germans, handful of French people, two Brits. One day we were sitting around the big table in this "shed" (actually a big house, with a shed-like quality, if you will) we were living in, chit-chatting about stuff. What do you miss the most from home? came up as a question and after someone said that they miss a proper shower and feeling clean for once many of us nodded. Yes, that'd be something. Then someone said: I really, really miss the bread. And everybody, because we've all seen and tasted what the Australians call bread, let out a big sigh and said, oh yes, the bread, I miss the bread. And precisely one second later, the room split into two factions and the Germans stared at the French and the French stared at the Germans and both factions, at the same time, said something to the effect of: wait, what the fuck, why do you miss bread, your bread fucking sucks, our bread is good bread, your bread is garbage, shut up. But, sadly, the French wouldn't see how wrong they were, thinking their long, dumb, comic book bread is any good. And I'm pretty sure that created a rift in our little community of grape pickers. Anyway, hopefully I pissed off all the Australians and French people reading this -- your bread sucks. So, go read the article and have some fun.
Know which bread's the best? You should subscribe:
-
-
🔗 r/reverseengineering Importing GTA IV texture dictionary natively in Unreal rss
submitted by /u/Needatax
[link] [comments] -
🔗 WerWolv/ImHex Nightly Builds release
-
- April 25, 2026
-
🔗 r/Leeds Leeds Dungeons and Dragons - The East Ridings Group rss
Hi everyone!
I am one of the DMs of the East Ridings of Leeds and just trying to promote the group.
We run a sandbox style game. There is a continent and a town and DMs in the community run games in this world and players jump in on games in this world when they want. All the DMs and players collaborate together. You are welcome to come and join! We take anyone from experienced players to brand new people. We mostly play in Chance and Counters at the moment and we try to host regular games, at least once a week if we can and we are looking for new DMs and players. So if you have been looking for a way into DnD as a beginner, want to play regular sessions to grow your character idea or simply want to jump in now and again for a laugh, we have it all! Message me and I'll invite you. We have over 30 people in our community and it is growing every day.
We mostly communicate on Discord so I can give you an invite for that to get started and we have a wiki page with our rules and guidance. We don't charge for our sessions and the only thing you'll ever have to pay for is potentially whatever the venue wants and your own stuff.
So get in touch!
submitted by /u/Lit-Rature
[link] [comments] -
🔗 r/york Kickabout Community rss
Enjoy a friendly football game to break up the week. Kickabout Community supports independent 5-a-side and 7-a-side adult football games across York. We’re a volunteer-run group of organisers, making football accessible for players of all ability, gender, age, and fitness levels. 👉 Join Kickabout Community here: https://chat.whatsapp.com/CSt29p06AGLL1E91uu5Eze 📍 Pitches used: • York Sports Village • University of York Sports Centre • PlayFootball Clifton Moor • Energise Acomb 💷 Subs: £3-4 per session (covering pitch hire, balls, and bibs) We are not a business and not profit-making. Any surplus funds are for player socials or charitable donations. submitted by /u/Chance_Board_5424
[link] [comments]
-
🔗 r/Harrogate Looking for new friends 39 (F) Harrogate based , rss
Hi.
I’m looking for some new friends in the area for evening drinks, meals, walks xx
submitted by /u/Firm_Guess306
[link] [comments] -
🔗 niri-wm/niri v26.04 release
Niri is a scrollable-tiling Wayland compositor. Windows are arranged in columns on an infinite strip going to the right. Opening a new window never causes existing windows to resize.
As you may have noticed, niri now lives in a GitHub org rather than my (@YaLTeR) personal account.
The primary reason was the ability to give out issue triage permissions: I'd like to give a massive thanks to @Sempyos for triaging all of our issues and pull requests, answering many, many questions, and helping people diagnose their problems with niri.
We've also moved a few niri-adjacent projects to the GitHub org, like the awesome-niri list of related projects maintained by @Vortriz and a new artwork repo by @bluelinden and @HumpityDumpityDumber—two of the creators of our project logo. In the artwork repo, you can find a badge and several wallpapers, including two stunning 3D works created by @Duncan-Rose in Blender:
The main niri repo also flew past 20,000 stars in February! 🌟 Thanks everyone for support.
Note
Packagers:
- our minimum supported Rust version is now 1.85.
- `niri.service` no longer hardcodes `/usr/bin/` in the niri binary path (thanks @Axlefublr).
- @markK24 restructured the dinit service files: 3bfa4a7
Now with introductions out of the way, here are the improvements from the last release.
Blur
It's here. The most requested niri feature by far. Our highest upvoted issue on GitHub. After tireless fork maintenance by @visualglitch91 and @Naxdy, blur is in mainline niri for everyone to use!
Windows and layer-shell components can request blur through the `ext-background-effect` Wayland protocol with no extra niri configuration. Many already do:

- Dank Material Shell v1.4.5: enable background blur in settings
- Noctalia shell: enable in settings and see docs
- Vicinae launcher
- foot terminal v1.26: set `blur=true` in colors config
- kitty terminal v0.46.2: set `background_blur 1`
- Ghostty terminal: will have support in v1.4
Toolkits:
- Quickshell: will have support in v0.3
- winit: will have support in v0.31
For apps that don't support `ext-background-effect` yet, you can enable blur through the niri config:

```kdl
// Enable blur behind the Alacritty terminal.
window-rule {
    match app-id="^Alacritty$"
    background-effect {
        blur true
    }
}

// Enable blur behind the fuzzel launcher.
layer-rule {
    match namespace="^launcher$"
    background-effect {
        blur true
    }
}
```

Keep in mind that niri-configured blur needs the right `geometry-corner-radius`, and it won't work with complex surface shapes. See the Window Effects wiki page for details.
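To make the pairing concrete, here is a sketch of a rule that sets both together. The 12px radius and the Alacritty app-id are purely illustrative assumptions; the radius has to match whatever corners your client actually draws:

```kdl
// Sketch: niri-configured blur plus a matching corner radius.
window-rule {
    match app-id="^Alacritty$"
    // Must match the corner radius the client itself renders,
    // otherwise the blur region will stick out past the corners.
    geometry-corner-radius 12
    clip-to-geometry true
    background-effect {
        blur true
    }
}
```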
Have I seen this screenshot before?..

We have both normal blur and xray blur that always shows the wallpaper. Xray blur is the default because it's much more efficient: niri computes the blurred wallpaper once, and then reuses it as a static image, which is extremely cheap. The blur is only recomputed if the wallpaper changes (so an animated background will shrink the efficiency gains).
If you prefer non-xray (normal) blur, you can enable it with a window/layer rule. For example, you can set it on top and overlay layers (that usually overlap other content), via the new `layer` matcher:

```kdl
// Make top and overlay layers use the regular blur (if enabled),
// while bottom and background layers keep using the efficient xray blur.
layer-rule {
    match layer="top"
    match layer="overlay"
    background-effect {
        xray false
    }
}
```

So, if blur is so good, where's blur 2? Err, I mean, why did it take so long to add?
In short, background blur turned out to be a massive undertaking. Not because of the blur algorithm itself (by the way, if you want to learn about different blurs, including the widely used Dual Kawase, I highly recommend this blog post), but because window background effects in general required a lot of thinking and additions to the code, especially to make them as efficient as possible. This is one of the most complex niri features thus far.
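If you're curious what blurring at lower resolutions buys you, here is a toy, pure-Python sketch of the general downsample/upsample idea behind multi-pass blurs like Dual Kawase. It is only an illustration (a plain box filter stands in for the Kawase tap pattern) and not niri's actual GPU implementation:

```python
def box_blur(img):
    """3x3 box blur with clamped edges (a stand-in for the Kawase tap pattern)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    total += img[yy][xx]
            out[y][x] = total / 9.0
    return out

def downsample(img):
    """Halve the resolution by averaging 2x2 blocks."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [
        [(img[2*y][2*x] + img[2*y][2*x+1] + img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
         for x in range(w)]
        for y in range(h)
    ]

def upsample(img):
    """Double the resolution with nearest-neighbour sampling."""
    out = []
    for row in img:
        doubled = [v for v in row for _ in (0, 1)]
        out.append(doubled)
        out.append(list(doubled))
    return out

def dual_pass_blur(img, passes=2):
    """Blur cheaply: go down the pyramid blurring, then come back up blurring again."""
    levels = [img]
    for _ in range(passes):
        levels.append(downsample(box_blur(levels[-1])))
    out = levels[-1]
    for _ in range(passes):
        out = box_blur(upsample(out))
    return out
```

Each downsample level quarters the pixel count, so most of the filtering work happens on tiny images; that is the same budget trick a GPU implementation plays with intermediate render targets.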
Xray and non-xray effects are also pretty much two entirely separate and very different beasts, code-wise. Non-xray reads back the just-rendered pixels in the middle of a frame, blurs them, then continues drawing the frame. This required extensive refactors of Smithay's rendering architecture (big thanks to @Drakulix!). Xray on the other hand requires threading the window positions all throughout the rendering code to draw the right cut-out of the background.
But it gets worse: we have our Overview. It was quite a challenge figuring out how to support xray blur in the overview, while maintaining the property that it is never re-rendered.
niri-xray-blur-offscreens.mp4
I also had to get both of them working with all other niri features, like blocking out from screencasts. When the window itself is blocked out that's easy, but what if something in the background layer, inside the blur, is blocked out? An unusual case for sure, but hardly a good excuse if your sensitive data gets accidentally leaked.
niri-xray-blur-blocked-out.mp4
By the way, I made it so xray can be used on its own, without the blur. As well as the noise and saturation effects (normally for reducing blur color banding and bumping the vividness). For example:
```kdl
window-rule {
    match app-id="Alacritty"
    // Xray without the blur!
    background-effect {
        xray true
    }
}
```

One more thing you can do starting from this release is to configure niri to apply transparency and background effects to pop-up menus, using the new `popups` block in window or layer rules.

```kdl
// Blur the background behind pop-up menus in Loupe.
window-rule {
    match app-id="Loupe"
    popups {
        // Matches the default libadwaita pop-up corner radius.
        geometry-corner-radius 15
        // Note: it'll look better to set background opacity
        // through your GTK theme CSS and not here.
        // This is just an example that makes it look obvious.
        opacity 0.5
        background-effect {
            blur true
        }
    }
}
```

Keep in mind that pop-up rules tend to bump even more into problems with application behavior and surface shapes. For example, web apps or Electron don't use Wayland pop-ups at all; they're entirely emulated inside the client—niri cannot do anything with them.
Shape-wise, in GTK 4, pop-ups with `has-arrow=true` won't look right because they aren't rounded rectangles. Thankfully, clients implementing `ext-background-effect` can shape their blur in any sort of elaborate pattern.

Well, enough about blur, we've got more interesting things to cover!
Optional includes
Pretty much right after I added config includes last release (before I merged them even), people started requesting optional includes—that can be absent without failing config loading. Some use-cases are being able to change parts of an immutable niri config on NixOS, or having local/private overrides for parts of the config.
I pushed back for a time because I think some of those problems should be solved elsewhere, rather than requiring every program with includes to support optional. However, the added code complexity was rather low, so I eventually went ahead and accepted @johnrichardrinehart's implementation.
Starting from this release, you can make an include optional by setting `optional=true`:

```kdl
// Won't fail if this file doesn't exist.
include optional=true "optional-config.kdl"

// Regular include, will fail if the file doesn't exist.
include "required-config.kdl"
```

When an optional include file is missing, niri will emit a warning in the logs on every config reload. This reminds you that the file is missing while still loading the config successfully.
The optional file is still watched for changes, so if you create it later, the config will automatically reload and apply the new settings. Finally, `optional=true` only affects whether a missing file causes an error, so if the file exists but contains invalid syntax or other errors, those errors will still cause a parsing failure.

While we're talking about includes: they now expand paths starting with `~` to the home directory, so `~/file.kdl` will expand to `/home/user/file.kdl`. Thanks to @HigherOrderLogic and @BennyDeeDev for prototype implementations of this feature.

Pointer warping while scrolling
Last release, I made dragging windows horizontally by their titlebars scroll the view left and right. This made mouse-only navigation much more convenient, but I still felt that something was missing.
This release makes the pointer warp from one side of the screen to the other during view scrolling gestures, similarly to Blender. It makes scrolling through several windows natural and convenient, even when you start right next to the monitor edge.
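The edge-warp itself is a tiny piece of logic. Here is a hedged, toy Python sketch of the idea; the coordinates and margin are made up, and niri's real implementation additionally has to deal with multi-monitor layouts and gesture state:

```python
def warp_pointer(x, screen_width, margin=1):
    """During a horizontal view-scroll gesture, wrap the pointer from one
    screen edge to the other (Blender-style) so the drag can continue
    indefinitely.  Returns (new_x, warped)."""
    if x <= 0:
        # Hit the left edge: reappear just inside the right edge.
        return screen_width - margin - 1, True
    if x >= screen_width - 1:
        # Hit the right edge: reappear just inside the left edge.
        return margin, True
    return x, False
```

The caller would apply this on every pointer motion event while a scroll gesture is active, carrying the accumulated drag delta across the warp so the view keeps moving smoothly.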
niri-pointer-warp.mp4
Screencasting features
Earlier in the release cycle, I spent some time improving various aspects of our screencasting support. In niri, you can screencast through xdg-desktop-portal-gnome via PipeWire (the recommended approach), or through wlr-screencopy (mainly intended for tools such as wf-recorder). Both of these have seen improvements.
Pointer in window screencasts
When sharing the screen, you generally want to include the cursor in the video stream. In PipeWire, you can either do the simple thing and just draw the cursor directly inside the video frames, or you can attach it as separate frame metadata. In this mode, the video stream itself doesn't contain the cursor; instead, the compositor sends a separate buffer with the cursor icon and coordinates. The consuming application itself (such as OBS, or your browser in a video meeting) then has to draw the cursor on top.
This allows the consuming application to control the cursor visibility. You might have seen this toggle in OBS; it can work thanks to the metadata cursor mode:
Ever since I implemented PipeWire screencasting in niri about a month into development, it's been using the embedded cursor mode for simplicity. I rendered the cursor for monitor streams and hid it for window streams, and this mostly did what you wanted (and the cursor toggle in OBS didn't work).
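To make the two modes concrete, here is a toy sketch of what a consumer does in the metadata mode: the frame arrives without the cursor, and the cursor bitmap plus position arrive separately, so drawing it (or honoring a "hide cursor" toggle) is entirely the consumer's call. This is an illustration only, not OBS's or libwebrtc's actual code:

```python
def composite_cursor(frame, cursor, pos, show_cursor=True):
    """Consumer-side compositing: `frame` is the cursor-free video frame,
    `cursor` a small bitmap and `pos` its (x, y) position, both delivered
    as metadata alongside the frame.  Skipping the compositing step is what
    makes an OBS-style "show cursor" toggle possible."""
    out = [row[:] for row in frame]
    if not show_cursor:
        return out
    cx, cy = pos
    for dy, crow in enumerate(cursor):
        for dx, pixel in enumerate(crow):
            y, x = cy + dy, cx + dx
            # Clip the cursor at the frame edges; treat 0 as transparent.
            if pixel and 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = pixel
    return out
```

In the embedded mode there is nothing for the consumer to decide: the cursor pixels are already baked into `frame`, which is why the OBS toggle couldn't work.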
Doing it properly has always been in the back of my mind though. I was most missing cursor in window streams because I pretty much always use the window target when screensharing in meetings.
Well, in the summer of last year, @abmantis took up the task. The road was quite bumpy though: they hit and managed to debug a memory corruption issue in PipeWire (that other compositors haven't hit due to more eagerly overriding unchanged data every frame). The bug was thankfully promptly fixed by a PipeWire developer.
It took me several more months to get to the PR (busy with uni and other things as usual), then some heavy refactoring to make it work correctly and iron out all the edge cases, and now niri does screencasting with cursor metadata!
The implementation is quite comprehensive. It works in both monitor and window capture modes and draws the cursor along with its drag-and-drop icon (if any). In window capture mode, the cursor is shown only when it's targeting the window or any of its pop-ups. So, for example, if you cast a window fully covered by another window, and move your mouse on top of that, the screencast of the window below will not show the cursor.
niri-cursor-cast-metadata.mp4
Metadata cursor is also intended to support an optimization where if you move your mouse over an otherwise unchanging screen, the compositor can skip sending these stationary video frames and only update the cursor position. Unfortunately, the OBS PipeWire code doesn't quite allow for this code path yet, so I couldn't do this for the time being.
While working on this, I also found several disagreements between the intended meaning in PipeWire (indicated through comments in header files) and what the code was doing in consumers such as libwebrtc (used in all browsers). This is unfortunate since compositors have to not only work around these problems, but also keep the workarounds forever, as they can't tell how old the library on the other side of the PipeWire stream is. It would be good to have high-quality PipeWire producer and consumer examples for these more complex scenarios that get all the small details right.
Anyhow! Metadata cursor is here and works with everything I tested. As a bonus, I added pointer capture to window screenshots with a new flag on the action: `screenshot-window show-pointer=true`.

Delayed start for dynamic cast target
Dynamic cast target is a niri feature that lets you instantly switch what you're screencasting with a keybind. I personally use it all the time because it's very convenient to toggle screencasting between different windows without going through some video conferencing screen sharing UI or having to cast the entire monitor.
You start a dynamic cast by selecting a special "niri Dynamic Cast Target" in the window picker. The dynamic cast always starts as a blank video stream to avoid sharing something sensitive by mistake.
Before this release, the dynamic cast literally started as a 1×1 black pixel video stream. This worked just fine in every app... except, apparently, everyone's favorite Microsoft Teams. So, in this release I changed the dynamic cast to delay starting the video stream until the first dynamic target is selected. As far as the screen sharing programs are concerned, you're just taking a bit longer to pick what to screen share. No more slightly odd brief 1×1 video.
Cast IPC
It can be useful to know if there's an ongoing screencast. For example, desktop bars may want to show a screen recording indicator to alert you of any unintended screen capture.
For PipeWire, a bar could enumerate all ongoing video streams and try to figure out which ones are screencasts, but that is error-prone, and for wlr-screencopy there's no way at all to tell from outside the compositor.
So in this release, I added screencast IPC to niri. You can see currently active screencasts with `niri msg casts`. Desktop components can subscribe to the niri event stream and listen for the new cast events.

The `Cast` object provides various bits of information: kind (PipeWire or wlr-screencopy), current target (output, window), and whether the cast is active. PipeWire screencasts provide their node ID which you can use to find out the consumer, while wlr-screencopy screencasts provide the client process ID for the same purpose.

DankMaterialShell already shows a screencasting indicator using the niri IPC:
And if you need more, it's easy to make a plugin that shows all exposed information.
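As a sketch of what such a plugin might do with this data, here is a minimal decision function a bar could run over the list of casts. The types are hypothetical stand-ins for the fields described above (niri's real IPC speaks JSON), not niri's actual schema:

```rust
// Hypothetical stand-ins for the information each `Cast` exposes:
// its kind, the consumer-identifying ID, and whether it's active.
#[derive(Debug, PartialEq)]
enum CastKind {
    PipeWire { node_id: u32 },
    WlrScreencopy { client_pid: u32 },
}

struct Cast {
    kind: CastKind,
    active: bool,
}

/// A bar would light up its recording indicator whenever any cast is active.
fn show_recording_indicator(casts: &[Cast]) -> bool {
    casts.iter().any(|c| c.active)
}

fn main() {
    let casts = vec![
        Cast { kind: CastKind::PipeWire { node_id: 42 }, active: false },
        Cast { kind: CastKind::WlrScreencopy { client_pid: 1234 }, active: true },
    ];
    assert!(show_recording_indicator(&casts));
}
```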

While working on this, I found that I had a bunch of duplicate screencast sources in OBS. Keep in mind that for wlr-screencopy, there's no robust way to tell apart different screencasts and screenshots, so I had to come up with some heuristics. Notably, xdg-desktop-portal-wlr always uses and keeps alive a single wlr-screencopy manager object, so there's no way to tell when a screencast has stopped short of a timeout since the last frame was requested.
All of these wlr-screencopy problems are fixed in the new ext-image-copy-capture protocol, but we don't have it in niri just yet (and some clients will remain legacy anyway).
Also, with cast IPC providing IDs for screencasts, we can add actions to manipulate them. The new `niri msg action stop-cast --session-id <ID>` will force-stop a PipeWire screencast (wlr-screencopy ones cannot currently be stopped through IPC).

niri-stop-cast.mp4
Miscellaneous fixes
Some more random things I fixed in this release:
- Copying with damage would always include the cursor even if the wlr-screencopy client said not to; now this is honored.
- Fixed behavior when a wlr-screencopy client requests multiple frame copies with damage at once. I don't know of any client that does this but now it should work.
- Fixed the niri wlr-screencopy data never getting freed in some cases, like when the client was killed.
- Reduced the default PipeWire screencast buffer count from 16 to 8.
- @kriive worked around a use-after-free bug in pipewire-rs by reordering some struct fields in niri.
- Fixed wrong rendering z-order that could appear for one frame when switching the dynamic cast target to a window.
Animation improvements
In winter, I felt like doing a bit of an "animation detox" and spent several weeks with some niri animations disabled. (If you're curious, I turn off window open, close, resize and movement animations, and leave horizontal view movement since it helps with spatial awareness.) While doing this, I noticed some jank in the unfullscreen/unmaximize animation.
You can configure individual animations differently in niri. However, in several cases, two animations are meant to run together and match exactly. In those cases, niri will synchronize the animations—for example, it can run some animation that is otherwise disabled.
In particular, a window resize animation is synchronized with the horizontal view movement animation that it causes. This way, resizing a window next to the right edge of the monitor will "grow" it to the left instead of an awkward combination of growing to the right and moving back in-view.
The problem that I found is that while fullscreen/maximize correctly synchronized the view movement, unfullscreen/unmaximize didn't. So the window would unmaximize instantly (window-resize is off) but slowly scroll back into position.
This is now fixed:
synchronized.unmaximize.anim.mp4
Another animation issue that's been bugging me for a while was fairly specific. When you "drag out" a maximized window, it will automatically unmaximize. If it was floating before you maximized it, it will also automatically return to floating. And when you did this specific action—drag a window to unmaximize into floating—it would skip the horizontal view movement animation of other tiled windows on the same workspace.
unmaximize.view.scroll.anim.mp4
This was a tricky issue to find because it was at an intersection with another feature: left-right workspace scrolling if you drag-and-drop near a monitor edge. If this drag-and-drop scrolled the view, the view needs to resnap to a window edge, but if it never scrolled the view, then the view should remain exactly as it was, no resnapping. It turned out that this drag-and-drop finalization code ran right after the horizontal view animation was started, and since no scrolling had occurred, it immediately skipped the animation.
The fix was to take a possibly running animation into account explicitly (and add a test of course).
Finally, there was always some weirdness when "dragging out" the leftmost column on a workspace, specifically when it wasn't focused. (You can easily hit this when moving windows from the overview.)
"Dragging out" in this case preserves the view position, which is intended: the focused window (not the one we're dragging out) always remains fixed in the view, regardless of what's happening around it. But dropping the window back would awkwardly put it on the right side instead of where it previously was.
drag.leftmost.column.mp4
After carefully reading through the relevant code (which is among the earliest code I wrote in niri since this is a fundamental windowing operation, but also changed shape many times over development), I noticed that some operation ordering wasn't quite logical when inserting the leftmost column into a workspace, and was able to refactor things a bit to make it work right.
And the last small animation fix was to prevent the slowdown/speedup setting from affecting how long the config error notification stays visible on screen.
IME in pop-ups
We fixed (or rather worked around) one long-standing annoying problem: GTK 4 pop-ups with input fields didn't work if you were running an IME like Fcitx5. Effectively, you couldn't open any pop-up with a text entry.
The underlying issue is that Smithay's abstractions don't allow for multiple input grabs at once. Pop-ups generally take a pointer and keyboard grab (notice how when a pop-up is open, moving the mouse over other windows doesn't trigger any hover effects in them), but an IME also works through a keyboard grab in order to handle key events. These two conflicted with each other in niri, so it dropped the pop-up grab, which closed the pop-up immediately upon opening it.
A proper fix would be rearchitecting this part of Smithay, but until then, I loosened some checks, allowing this grab sequence to work. Finally, IME users are able to rename files in Nautilus.
Escape to cancel drag-and-drop
Pressing `Escape` during drag-and-drop will now cancel the operation. I wanted to add this for a while since it's a common gesture, so I did as soon as Smithay recently merged the necessary code.

Input device improvements
We've had an assortment of improvements to input devices:
- Fixed compounding slowdown over time when using a high-Hz mouse with cursor `hide-after-inactive-ms` or an idle monitoring daemon.
- If you have libwayland-server v1.23 or later, niri will increase its Wayland buffer size, so moving a high-Hz mouse over non-responsive windows will no longer quickly crash them.
- @qqwa added the `map-to-focused-output` tablet option that makes the tablet target the currently focused output rather than some single configured output.
- @skrmc fixed an issue where putting the cursor at the topmost pixels on a workspace wouldn't always target a maximized window under the cursor.
- Fixed Alt-Tab reacting to mouse input before it's visible.
- @ArijanJ made trackball (`on-button-down`) scrolling work in the overview.
- @mgabor3141 made the Num Lock state persist across loading a custom .xkb file keymap.
- @Atan-D-RP4 fixed niri being unable to use any input devices when starting from a different TTY via tmux.
- Enabled the loading of libinput plugins.
GPU profiling
One of my main blockers for blur in niri has always been the lack of GPU profiling integration in Smithay. Blur is a heavy operation, and I wanted to see its performance behavior to make good decisions about the code architecture.
In Smithay and niri we use Tracy, a highly capable frame profiler. It supports showing GPU zones; however, collecting timestamps from the GPU requires a fair bit of integration work: you need to submit timestamp queries along with your GPU work, then keep a queue of in-flight queries, and periodically collect the values of completed queries and upload them to Tracy. At the end of 2025, I sat down and did the necessary work in Smithay, which enables both profiling GPU operations done by Smithay itself and compositors annotating their own GPU operations.
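The in-flight bookkeeping is the fiddly part. Stripped of any real GPU API, the shape is roughly as follows; this is a hypothetical sketch of the technique, not Smithay's actual code:

```rust
use std::collections::VecDeque;

// A timestamp query submitted alongside GPU work; the result arrives later.
struct Query {
    id: u64,
    result: Option<u64>, // filled in once the GPU has written the timestamp
}

struct GpuProfiler {
    in_flight: VecDeque<Query>,
}

impl GpuProfiler {
    fn new() -> Self {
        Self { in_flight: VecDeque::new() }
    }

    // Called when submitting GPU work: remember the query.
    fn submit(&mut self, id: u64) {
        self.in_flight.push_back(Query { id, result: None });
    }

    // Called when the GPU reports a query result.
    fn complete(&mut self, id: u64, timestamp: u64) {
        if let Some(q) = self.in_flight.iter_mut().find(|q| q.id == id) {
            q.result = Some(timestamp);
        }
    }

    // Called periodically: drain completed queries from the front in
    // submission order and hand their timestamps to the profiler
    // (here, simply returned).
    fn collect(&mut self) -> Vec<(u64, u64)> {
        let mut done = Vec::new();
        while matches!(self.in_flight.front(), Some(q) if q.result.is_some()) {
            let q = self.in_flight.pop_front().unwrap();
            done.push((q.id, q.result.unwrap()));
        }
        done
    }
}

fn main() {
    let mut p = GpuProfiler::new();
    p.submit(1);
    p.submit(2);
    p.complete(1, 100);
    assert_eq!(p.collect(), vec![(1, 100)]); // query 2 still in flight
    p.complete(2, 200);
    assert_eq!(p.collect(), vec![(2, 200)]);
}
```

Stopping at the first incomplete query keeps timestamps in submission order, which is what a frame profiler needs.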
Here's an example Tracy recording with GPU zones shown in red at the top, and CPU zones in teal below.
On this recording niri draws a single frame, first to the DRM buffer (goes to the monitor), then, separately, to a buffer for an ongoing PipeWire screencast. You can see the screencast rendering on the GPU in parallel with some CPU work, then as soon as it's done, the CPU is notified, and sends the finished frame over PipeWire to the screencast consumer.
On multi-GPU systems (common on laptops if you have an integrated + discrete GPU), Tracy will show multiple GPU tracks:
On this frame profile, I have a laptop with the main screen (connected to the iGPU) and an external screen (connected to the dGPU). You can see the main GPU rendering both screens (niri renders everything on the main GPU), then the external screen contents are copied over to the dGPU where they are rendered in a single texture draw.
This profiling integration allowed me to verify that blur isn't slower than expected (actually it turned out to run faster than I thought it would). Also, it's now much easier to diagnose dropped frames caused by GPU rendering stalls.
Rendering optimizations
In Smithay and, by extension, niri, rendering works by first constructing a render list, a `Vec` of render elements that describe exactly how the final scene is laid out on screen. This render list is then processed by the damage tracker to cut out all invisible and unchanged regions, and then, only if anything needs to be redrawn, the damage tracker hands the elements over to the GPU—just the ones that changed. Compositors try hard to minimize unnecessary redrawing and waking up the GPU to conserve battery.

When designing the rendering architecture in niri, I implemented everything through iterators. Functions like `Workspace::render()` would return a type like `-> impl Iterator<Item = SomeRenderElement>`, aggregating and processing render elements from their constituent parts (like individual windows on a workspace). At the top level, `Niri::render()` would collect from such an iterator into a `Vec` of render elements.

Generally, this code structure avoids intermediate allocations (returning an iterator like this compiles down to a state machine that creates all items on-demand as they are pulled by the caller). It also avoids doing unnecessary work since the caller can cut the iterator short at any time if it doesn't need some of the items.
However, as you may know if you have dealt with complex iterators in Rust, there's a whole range of annoyances that come with this kind of structure. For a start, it's hard to write any logic, like conditionals, around returning iterators. Since this `-> impl Iterator` must be a single type, you cannot just write:

```rust
if condition {
    return one_iterator;
} else {
    return another_iterator;
}
```

It won't compile as these are two different types. You have to come up with workarounds.
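The usual stdlib-only workaround is to box the iterator behind `dyn Iterator`, unifying the two branches into a single type at the cost of a heap allocation and dynamic dispatch. A minimal illustration (hypothetical function, not niri code):

```rust
// Both branches now return the same concrete type:
// Box<dyn Iterator<Item = u32>>.
fn evens_or_all(v: Vec<u32>, only_evens: bool) -> Box<dyn Iterator<Item = u32>> {
    if only_evens {
        Box::new(v.into_iter().filter(|n| n % 2 == 0))
    } else {
        Box::new(v.into_iter())
    }
}

fn main() {
    let evens: Vec<u32> = evens_or_all(vec![1, 2, 3, 4], true).collect();
    assert_eq!(evens, vec![2, 4]);
}
```

Crates like `either` offer the same unification without the allocation, but every workaround adds noise compared to just returning values.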
Then, in many cases in niri, the returned iterator would borrow from `&self`, leading to complex lifetimes. I actually designed for this from the start, with `render()` functions intentionally borrowing a shared `&self`, preceded by a separate `update(&mut self)` step. However, rendering also needs an exclusive `&mut Renderer`, and this did cause annoying borrowing issues every now and then.

In several cases, the borrowing is not practical to work around, so I had to fall back to returning a `Vec` from intermediate functions, which is a short-lived temporary allocation that's immediately freed. Especially unfortunate is that the iterator approach doesn't really work across crate boundaries, so Smithay's surface rendering function returns a `Vec`—and niri calls it per Wayland window and pop-up, causing many temporary allocations during rendering.

For a few months, thoughts brewed in my head on how to rearchitect this. Finally, in December, I felt like I had a solid, working idea, and attempted the refactor.
The idea was to replace pull-based functions with push-based ones. Instead of returning `-> impl Iterator<Element>`, all rendering functions would accept a `push: &mut dyn FnMut(Element)` closure and call it to push their render elements to the list. At the top level, push would simply do `final_vec.push(element)`, and intermediate rendering functions would forward this push function down. (This design is not unlike how render tree construction works in GTK 4.)

The refactor honestly succeeded beyond my expectations. It solved pretty much all the problems I'd had. Conditionals become trivial and just work. No complex iterator chains. Functions can still do their logic by wrapping the parent's `push` in their own closure. There's no borrowing. All temporary `Vec`s gone. As for cutting iterators short, we didn't actually need it in niri.

It also sped up the render list construction by 2-3× on my main machines (I didn't expect that):
And, wildly enough, by 8× on my ancient Eee PC! Render list construction does not include the rendering time, which still dominates the frame duration. But it happens much more frequently, even when no actual rendering is needed afterwards, so the improvement is very welcome.
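The push-based shape described above can be sketched as follows, with hypothetical names and a dummy element type rather than niri's actual code:

```rust
// Stand-in for a real render element.
struct Element(u32);

// Leaf level: pushes its own elements into the caller-provided closure.
fn render_window(id: u32, push: &mut dyn FnMut(Element)) {
    push(Element(id));
}

// Intermediate level: no return value and no borrowed result;
// it simply forwards the parent's `push` down to its children.
fn render_workspace(windows: &[u32], push: &mut dyn FnMut(Element)) {
    for &id in windows {
        render_window(id, push);
    }
}

fn main() {
    // Top level: the only place that owns the final Vec.
    let mut list: Vec<Element> = Vec::new();
    render_workspace(&[1, 2, 3], &mut |e| list.push(e));
    assert_eq!(list.iter().map(|e| e.0).collect::<Vec<u32>>(), vec![1, 2, 3]);
}
```

Because nothing is returned, there are no iterator types to unify and no lifetimes tying the elements to `&self`; every level just writes into the one output vector.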
I measured performance and memory use with Tracy. Here's an example profile showing old and new render list construction side-by-side:
The orange line at the bottom tracks the allocated memory. You can see that the previous rendering allocates and drops many times, while the only allocations in the new rendering are pretty much growing the output vector (the steps are the vector capacity increasing as more elements are added—it should be possible to reuse the same vector to get rid of even those, but I haven't got around to it yet).
If you're curious for a more detailed motivation and want to see the diff, which somehow turned out to be negative, see the pull request.
Old laptop support
There's been a long-standing niri issue where screenshots (both built-in and through wlr-screencopy tools) didn't work on old Intel laptops with a weird error. Last week, @xdagiz finally dug in and figured it out: a wrong OpenGL enum value in Smithay.
Also, I did some small optimizations to the niri shaders and managed to fit our resize and clip shaders into the (extremely limited) GPU of an ancient ASUS Eee PC that I have lying around, meaning that window resize animations and compositor-rounded corners now work there (can't say they are particularly smooth, though).
Both things combine to show you the following image:
Other improvements in this release
- @cmeissl fixed a VRAM leak that occurred on some systems after closing certain apps.
- @sodiboo and @HigherOrderLogic implemented the `ext-foreign-toplevel-list` protocol which will help Quickshell and other shells associate Wayland window objects with niri IPC window IDs.
- @Ind-E made it so the error message for a duplicate bind in the config also shows the first definition of the same bind.
- Mod+LMB window dragging is now indicated with a grabbing cursor (thanks @kchibisov).
- @zimward added the `--path` argument to `niri msg action load-config-file` which lets you switch to a different niri config at runtime.
- @Fingel added DMA-BUF support to nested niri, which makes hardware acceleration work there again, now that Mesa's wl_drm is deprecated and phased out.
- Removed padding that niri added to layer-shell pop-ups near monitor edges, as it was more confusing than helpful.
- The default config now binds Mod+M to `maximize-window-to-edges` and Mod+Shift+R to `switch-preset-column-width-back`.
- Added the `force-disable-connectors-on-resume` debug flag to force a screen blank on TTY switch into niri or waking up from suspend, which can help on some rare hardware configurations.
- Putting a window into windowed fullscreen now correctly squares the corners.
- Fixed constant screen repainting while the overview is open.
- Slightly corrected the `relative-to=workspace-view` gradient border rendering for interactively dragged windows.
- @jakobhellermann prettified diaeresis shortcut rendering in the Important Hotkeys dialog.
- Fixed the description of `expel-window-from-column` in `niri msg action` to say that it expels the bottom window (this has been the behavior for a few niri releases already).
- Fixed several panics possible if a client tries to use a recently removed output.
- Fixed broken rendering when `clip-to-geometry` is applied to a client that attaches `y_invert` buffers.
- @tobhe fixed building on OpenBSD.
- @titaniumtraveler made nested niri set its window `app-id`.
- @DuskyElf changed niri to re-evaluate the `ignore-drm-device` debug setting when a new GPU is plugged in, allowing the use of `/dev/dri/by-path/` symlinks there.
- Updated Smithay:
  - Improved automatic GPU selection on some devices such as ARM Macs. Asahi and Pinephone should now run niri out of the box with no manual `render-drm-device` configuration necessary.
  - Improved the behavior of some layer-shell clients like wl_shimeji by not considering subsurfaces for layer surface positioning.
- Improved support for docks that cause monitor EDID to be loaded late.
- Made screenshots and screencasts work on older Intel systems.
- Fixed stale outputs being left behind when some USB-C docks are disconnected while the computer is suspended.
- Fixed a `zxdg_exporter_v2` panic with some clients.
- Fixed a memory leak when clients using the clipboard protocols don't destroy them explicitly.
- Fixed a panic when a client tries to set an unrecognized text-input content hint or purpose (this started happening in the GTK 4.23 development release).
- Various fixes to drag-and-drop, IME text input and multi-GPU, as well as various performance improvements.
Funding
I work on niri in the spare time that I have from my university studies. If you like what I do, you can support my work on GitHub Sponsors. Big thanks to all current and past sponsors!
-
🔗 r/Leeds Has anyone seen this cat? (East Leeds area) rss
FOUND!!!! Thanks for the help everyone!!!!
My cat Bliss went missing last night at about 8pm. He is fully white, with blue eyes, a male, neutered and microchipped, and profoundly deaf. I have posted missing posters in the area with my telephone number on them but there has been no clues as to where he is yet. If anyone has any information on him or has seen him, I would be eternally grateful!!!
submitted by /u/Dazailover101
[link] [comments] -
🔗 r/Harrogate Classic cars at ASDA rss
Has anyone seen those classic cars in the ASDA carpark? they’ve been sat there for weeks now, anyone have any info? Think two of them are Triumphs one of em had a Riley badge, seems like a big risk to leave em there if they’re even real!
submitted by /u/farfrombornagain
[link] [comments] -
🔗 r/wiesbaden Can you take a leashed dog into the Alter Friedhof, even though it's officially not allowed? Is that tolerated, or does really nobody do it? I'm getting a visitor with a dog who will be staying nearby… rss
Thanks for any tips
submitted by /u/Haunting-Ad2182
[link] [comments] -
🔗 r/reverseengineering [CrackMe] PyVMP v5 : The Wall. I dare you to break it (again). rss
submitted by /u/PynaBola
[link] [comments] -
🔗 r/Yorkshire Out and about rss
a few grand days out……. submitted by /u/scottishdarkhorse
[link] [comments] -
🔗 r/LocalLLaMA "Weights are coming". Xiaomi’s MiMo V2.5 Pro has landed at 54 in the Artificial Analysis Intelligence Index. rss
From:
- Xiaomi MiMo on 𝕏: https://x.com/XiaomiMiMo/status/2047840164777726076
- Artificial Analysis 𝕏: https://x.com/ArtificialAnlys/status/2047799218828665093
submitted by /u/Nunki08
[link] [comments] -
🔗 r/Yorkshire Upsall rss
Found the perfect spot for lunch this week! I'm a field-based telecoms engineer and earlier this week I was working on expanding the fibre network in the beautiful village of Knayton, the exchange is a little shed on the hillside but the view from the rear is just stunning! submitted by /u/Trancer79
[link] [comments] -
🔗 r/reverseengineering Built a tool for reverse-engineering code line-by-line (30+ languages) with vibe code AI Instead of summarizing functions, it explains *each line in context* — useful for: rss
submitted by /u/keoperz0
[link] [comments] -
🔗 r/Yorkshire A nice view of Scarborough rss
submitted by /u/aeeriel98
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +4 releases rss
sync repo: +4 releases

## New releases

- [DeepExtract](https://github.com/marcosd4h/deepextractida): 0.9.13
- [augur](https://github.com/0xdea/augur): 0.9.1
- [haruspex](https://github.com/0xdea/haruspex): 0.9.1
- [rhabdomancer](https://github.com/0xdea/rhabdomancer): 0.9.1 -
🔗 r/york York game today rss
Hey all, the York Dale game is on DAZN today, wondered if anyone knew what pubs would be showing it as google and social media aren’t being particularly helpful!
submitted by /u/leo_smith08
[link] [comments] -
🔗 r/reverseengineering Claude APK reverse engineering rss
submitted by /u/Present-Reception119
[link] [comments] -
🔗 r/LocalLLaMA I'm glad we have deepseek rss
other companies are slowly going away from open weight, not releasing base models, delaying open weight distribution, not releasing top models (this one I think is fair, but still), and I also noticed they stopped publishing research (old Gemma and qwen had detailed papers about the models training and characteristics, now it's replaced by blog posts and model cards)
Kimi (no base model for Kimi k2.5), GLM (no base model for glm 5 and 5.1), minimax (delayed open weights and problematic license for m2.7) and qwen (qwen 3.5 397B was open weight, 3.6 is not)
Meanwhile, deepseek keeps publishing mind-blowing research every month, release their base models, release the open weight as soon as the model is officially launched and explain model training and architecture in detail with a launch paper
They are extremely important in the field and are the ones pushing the technology and efficiency forward
Unfortunately they don't release small models, but we can't have everything can we?
submitted by /u/guiopen
[link] [comments] -
🔗 Ampcode News Opus 4.7 rss
Opus 4.7 now powers Amp's `smart` mode.

In our internal evals, Opus 4.7 scored ~72%, up from Opus 4.6's ~65% - the first model since GPT 5.4 to clear 70%.
It takes some getting used to
Compared to Opus 4.7, Opus 4.6 was forgiving.
You could give it a vague task and it would often infer the missing pieces, make a plan, and start working. Sometimes that was useful. But it also could lead to the model confidently solving a nearby problem instead of the one you actually had. Or rushing to the first, but not the best, solution.
Opus 4.7 is less like that.
It follows prompts more closely. It fills in fewer gaps. It researches more. It is less likely to silently generalize from "fix this case" to "fix every related case." If the task is underspecified, you are more likely to get a narrow answer, a pause, or a request for the missing constraint.
At first, that can feel worse. But then you realize that a good prompt can make it go further.
Opus 4.7 is better at harder coding work, especially tasks that span multiple files, tools, and verification steps. It is better at keeping the shape of a change in its head and carrying it through the codebase. It's better at refactoring too. Its explanations are more thorough.
Fewer Built-in Tools
We removed `grep`, `glob`, and `mermaid` from `smart`.

Opus 4.7 is good enough at using the shell directly. When it needs to search, it can run `rg` or use the codebase search agent.

Its ASCII diagrams are also equal to or better than what Opus 4.6 achieved with Mermaid diagrams.
Token Usage
Our internal assessment matches Anthropic's (see last section and graph): "token usage across all effort levels is improved." Opus 4.7 might use more tokens in some cases, but those tokens are smarter and lead to better results. And better results lead to fewer tokens wasted.
How to Use It
The main change is simple: tell it what success looks like.
A few patterns have worked well for us:
- Give it success criteria, not steps. Tell it what done means, not every move to make. Example: "Clean up the billing settings. Done means no public API changes, no database changes, `pnpm test billing` passes, and `pnpm typecheck` passes."
- Give it a way to check itself. A model with a test, CLI, Storybook, preview URL, or screenshot diff is much better than a model guessing from code. Example: "Fix the import flow. Reproduce it with `pnpm cli import ./fixtures/bad.csv`. It is fixed when that command succeeds and `pnpm test import` passes."
- Brainstorm, pick, implement. Use one pass to explore options, then implement the chosen approach. Example: "Compare two ways to remove this duplicate state. Recommend one. Do not edit files yet." Then: "Implement option B. Keep the API unchanged and verify with `pnpm test settings`."

Update Amp to the latest version by running `amp update` and you're ready to go: `smart` mode is now powered by Opus 4.7.
-
- April 24, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-24 rss
IDA Plugin Updates on 2026-04-24
New Releases:
- augur v0.9.1
- DeepExtractIDA v0.9.13
- haruspex v0.9.1
- ida-domain v0.5.0
- ida-hcli v0.17.5
- ida-mcp-in-vm v0.1.0 - Initial public release
- ida-structor v2.1.0
- ida-structor v2.0.0
- plugin-ida v3.5.2
- rhabdomancer v0.9.1
Activity:
- Artifact-for-replication_gpt4o
- augur
- DeepExtractIDA
- d80c2ffe: Add configurable loop_analysis_max_depth and max_xrefs limits across …
- EMC
- 16594960: Add TensorBoard event file for run on 2026-04-24
- b4934a5d: Fix non-deterministic opcode extraction by sorting successors in DFS …
- b96f092a: Update .gitignore and enhance compare_results.py functionality; add s…
- 283a2b0c: .
- 49a481af: Refactor code structure for improved readability and maintainability
- 60aa23eb: .
- function-string-associate-extra
- haruspex
- 1bfa5487: feat: compatibility release for ida 9.3sp2
- ida-clang-include
- ida-domain
- ida-hcli
- ida-mcp-in-vm
- ida-pro-mcp
- 40e94f36: Merge pull request #383 from Evian-Zhang/download_base_url_with_env
- 44eff798: Derive download URLs from request base across proxies
- a4c7bc5b: Merge branch 'feature/sigmaker-support'
- ecfee9e4: Improve tests to be semantically meaningful per review feedback
- 544b4fb7: Add MIT license header to vendored _sigmaker.py for proper attribution
- bee45e3c: Address PR review: remove redundant scan_signature, revert pyproject.…
- e5af52a2: Add tests for api_sigmaker tools
- ca357739: Make sigmaker self-contained: vendor core engine, remove pip dependency
- bf59293d: Add signature making support via sigmaker integration
- ida-structor
- 5ee14a15: feat: Add interactive matching and merging of existing structures
- 233efe07: ci: Enable ccache and Ninja for Windows builds
- 58448c67: feat: Enable in-place updates of existing generated structures
- 9836ad4e: feat: Support tail calls in call graph and cross-reference analysis
- c20998ce: fix: Prevent collecting pointee accesses as parent struct fields
- 8b307a7e: ci: Cache Unix builds with ccache
- d51071ee: fix: Link Z3 statically into plugin builds
- 30914809: fix: Link Windows plugin against IDA SDK libs
- 3c283bd6: fix: Restore cross-platform workflow builds
- 447ec7fa: perf: Parallelize additional synthesis passes
- bfecd1fe: perf: Parallelize O(n^2) candidate pruning and coverage mapping
- plugin-ida
- python-elpida_core.py
- b25ddf33: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-24T23:57Z
- 935c9cab: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-24T23:39Z
- 2ab01a5f: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-24T23:22Z
- c8378211: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-24T23:01Z
- 9de2576d: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-24T22:43Z
- e9ccea8a: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-24T22:24Z
- dfe18d3c: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-24T22:04Z
- 37f8cc04: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-24T21:44Z
- c34aae01: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-24T21:24Z
- rhabdomancer
- a613045f: feat: compatibility release for ida 9.3sp2
- tix-seven
- 6cec6776: feat: add arduino sketches
- 2b9e1f88: docs(gate-server): document /tickets/issue endpoint behavior
- 64fd5e44: chore(gate-server): track credentials directory with .gitkeep
- 4cf47168: chore(web/supabase): grant public schema API access for app roles
- 1d57ab70: feat(web): add ticket issuance UI with gate-server client
- 322b017a: fix(web): align denial_reason types and log rendering with schema
- d1e59c2f: feat(web/gates): rewrite gate CRUD to use gate_assignment for event l…
- f301ee17: feat(web): replace Geist Sans with Inter + TT Norms local fonts
- 3085e9cf: test(gate-server): add ticket issuance tests
- f492ce00: feat(gate-server): add POST /tickets/issue endpoint
- 0c67fb97: refactor(gate-server): extract require_api_key into shared dependenci…
- 91658879: feat(gate-server/mosip): add fail-fast validation for missing MOSIP c…
- cbb64fc7: fix(gate-server/config): resolve .env and alembic paths relative to a…
- 32ee5668: chore: add authenticator.log to .gitignore
-
🔗 r/reverseengineering NEC V810 and V830 (V800 family) CPU Definition module for Ghidra rss
submitted by /u/Inevitable-Spring-17
[link] [comments] -
🔗 r/york York Minster rss
It’s always nice to be back here. I left York when I was 9 years old……fifty years on the streets are filled with a lot more tourists. It's my little happy place. submitted by /u/scottishdarkhorse
[link] [comments] -
🔗 r/reverseengineering Detect Shulfar Malware Encrypted TCP C&C Traffic Using PacketSmith Yara-X Detection Module rss
submitted by /u/MFMokbel
[link] [comments] -
🔗 @binaryninja@infosec.exchange Our latest release makes it much easier to move analysis between tools. With mastodon
Our latest release makes it much easier to move analysis between tools. With new Ghidra Export support and a major overhaul to IDB import, more of your work carries over cleanly and more IDA databases work better in Binary Ninja. https://binary.ninja/2026/04/13/binary- ninja-5.3-jotunheim.html#interoperability
-
🔗 r/LocalLLaMA This is where we are right now, LocalLLaMA rss
the future is now submitted by /u/jacek2023
[link] [comments] -
🔗 r/Yorkshire a few pics from today rss
submitted by /u/buster1bbb
[link] [comments] -
🔗 r/york Practicing speaking Arabic rss
Hi! Where is a good place to casually practice Arabic in York?
submitted by /u/Livid-Trade-3907
[link] [comments] -
🔗 r/LocalLLaMA Deepseek V4 AGI comfirmed rss
submitted by /u/Swimming-Sky-7025
[link] [comments] -
🔗 r/york River cruises rss
Does anyone know why river cruises in York are so short? Any other mainland European city on a large navigable river such as the Ouse would normally have extended cruising. Half day or at least several hours. Isn't the ouse navigable by big boats for quite a way north and south?
Why don't they capitalise on this?
Am I missing something?
submitted by /u/Educational-Ground83
[link] [comments] -
🔗 r/reverseengineering SentinelLABS just cracked a 20-year-old mystery: Fast16, a state-grade sabotage tool that predates Stuxnet by five years rss
submitted by /u/bscottrosen21
[link] [comments] -
🔗 r/Leeds Change to the64 for 25/04 only. rss
submitted by /u/CaptainYorkie1
[link] [comments] -
🔗 sacha chua :: living an awesome life The week of April 13 to 19 rss
Monday 13
My daughter skipped class all day. She said she was tired. She stayed home instead of going to her gymnastics class.
I set up obs-websocket to start and stop the livestream from Emacs.
The weather was lovely, so I sat outside and read tecosaur's Emacs configuration. Not only was his configuration very detailed, it was also beautifully laid out.
I prepared my Emacs newsletter while livestreaming.
The ice cream shop was still closed, so we bought ice cream at the supermarket instead.
At bedtime, my daughter said she wished she could stay a kid. She said she really liked KidSpark, which is only for children up to age 10.
Tuesday 14
My daughter attended her class. After school, we biked to the park to play with her friends, who were biking too.
I kept improving obs-websocket so I can manage my livestream from Emacs. I also rewrote my patch for the sentence-at-point operation in Org Mode.
I was tired and had a bit of a headache.
Wednesday 15
My daughter woke up late, but she joined her class on her own.
I updated my OBS to add socialstream.ninja via a browser source. Now I can display comments, and I can send a message to YouTube from Emacs.
I did a bit of consulting work. The profile design needed a small fix.
My daughter and I played Stardew Valley.
My husband had an errand near the Art Gallery of Ontario. My daughter was happy to skip class in the afternoon because the school had a substitute teacher. I took her there and we spent some time trying the activities at the gallery and drawing on our tablets.
After dinner, we practised painting eyes with watercolours.
Thursday 16
I had a meeting with Protesilaos to update him on my progress since our previous conversation and to ask him my new questions. I got my code working to start my video from a timestamp, and I wrote a function to calculate the conversion between real time and elapsed time.
My daughter and I played with Play-Doh, sungka (a traditional Filipino game), and charades.
Friday 17
I revised the captions from my conversation with Prot yesterday. I added two functions to handle the speaker label when splitting or merging captions. I also scheduled three Emacs conversations and published the events on YouTube and on my site with some other functions. I also modified my site-publishing library so that it doesn't include private files.
I worked on our taxes.
My daughter woke up on her own this morning, in time for breakfast, our morning routine, and her math quiz at school. But she skipped class in the afternoon and sat against her door all afternoon. Instead of relaxing, she dug in against me even more. I don't know what to do in this situation.
Saturday 18
For breakfast, I made pancakes with the leftover whipped cream. There was only a little cream left, so I couldn't whip it in the blender; I whipped it by hand. I also used the frozen whipped cream I had made several months ago. I ate them with peaches and mango. It was perfect.
Reading tecosaur's literate Emacs configuration made me jealous of his layout, so I spent some time improving the export of my configuration. It's very long: the PDF is 736 pages, and the table of contents alone is 15 pages. I want to add more comments and implement more LaTeX exports for my link types.
My daughter was grumpy with me in the morning, but in the afternoon she reappeared and wanted to spend time with me.
We played Minecraft to try out the new sulphur blocks. We spawned a Warden and gave it a block that gave us a mushroom block. The Warden had fun with the block.
We played with Play-Doh. I rolled it out very thin and we cut it into lots of pieces. She braided them. She wanted to try a crown braid, so I braided her hair.
For dinner, we made sushi.
We played Stardew Valley Expanded again. We made good progress on the community centre bundles, even though I forgot to get the community centre fertilizer after the Egg Festival to speed up the strawberries. Oh well.
My daughter practised her French vocabulary by telling the story of Eevee's family.
Sunday 19
My daughter woke up at 8:00 today. She finds it easier to wake up when there's no school. It's a good thing I hadn't started a livestream.
My daughter and I biked to the Stockyards to buy fabric for sewing a summer hat. She had gone window-shopping but hadn't found one she liked, so we have to make it ourselves. She chose yellow Pokémon fabric. She also wanted some yarn to crochet a blanket.
We had Panda Express for lunch. The kids' meal was enough for me.
I dropped her off at home and took donations to Goodwill as part of decluttering. I also did the grocery shopping. When I got home, my daughter proudly showed me that she had made the beds like a hotel.
We played Stardew Valley Expanded after dinner. Summer has started. I think I need to plant more butternut squash for the quality crops bundle, which calls for 5 gold-quality crops.
You can e-mail me at sacha@sachachua.com.
-
🔗 sacha chua :: living an awesome life April 30 Yay Emacs: Sacha and Prot Talk Emacs - Newbies/Starter Kits rss
I will livestream it and update this post with notes.
(America/Toronto, UTC-4) = Thu Apr 30 1030H EDT / 0930H CDT / 0830H MDT / 0730H PDT / 1430H UTC / 1630H CEST / 1730H EEST / 2000H IST / 2230H +08 / 2330H JST
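Those conversions follow directly from each zone's UTC offset on that date, and they're easy to sanity-check with Python's zoneinfo module:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 10:30 on 2026-04-30 in Toronto, which is on EDT (UTC-4) at that time of year
start = datetime(2026, 4, 30, 10, 30, tzinfo=ZoneInfo("America/Toronto"))

# Print the same instant in a few of the zones from the announcement line
for tz in ("UTC", "Europe/Berlin", "Asia/Tokyo"):
    local = start.astimezone(ZoneInfo(tz))
    print(tz, local.strftime("%H%M"))
```

Running this reproduces the 1430H UTC / 1630H CEST / 2330H JST figures from the line above.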
The Emacs Carnival theme for April 2026 is newbies/starter kits. I'd like to chat with Prot about not only helping people get into Emacs but also supporting lifelong learning.
Prot had some notes on how he started with Emacs in 2019 in All about switching to Emacs (video blog) | Protesilaos. These notes were just a few months after he started, so his experience was pretty fresh.
In Computing in freedom with GNU Emacs | Protesilaos (2026), he said:
Remember that I started using Emacs without a background in programming. … I learnt the basics within a few days. I started writing my own Emacs Lisp within weeks. And within a year I had my modus-themes moved into core Emacs.
Prot has several projects that might be of interest to many newcomers to Emacs:
- modus-themes, which are part of Emacs core and are therefore just a M-x load-theme away
- Emacs Lisp Elements, a book that helps people learn Emacs Lisp
- Where does this fit into people's learning journeys? How can they come across it and use it?
- perhaps Denote
- What would it take for people to learn enough to be able to use this?
He also offers Emacs coaching. I wonder if any newbies have taken advantage of that. There are a few other coaches listed on the EmacsWiki. (Ooh, Emacs buddy, that was neat.)
Other possible topics: Philip suggested the following general themes for the Emacs Carnival:
- What are your memories of starting with Emacs?
- What experiences do you have with teaching Emacs to new users?
- Do you think starter kits are more of a hindrance in the long term, or necessary for many users to even try Emacs?
- What defaults do you think should be changed for everyone (new and old users)?
- What defaults do you think should be changed for new users (see NewcomersTheme)?
- What is the sweet-spot between starter-kit minimalism and maximalism?
You can e-mail me at sacha@sachachua.com.
-
🔗 r/reverseengineering Built a forensic tool that detects and extracts payloads hidden in ELF/PE slack space — with visual diff heatmaps showing exactly what changed rss
submitted by /u/NoBreadfruit7323
[link] [comments] -
🔗 r/reverseengineering Learn Something Old Every Day: 8087 Emulation on 8086 Systems rss
submitted by /u/alberto-m-dev
[link] [comments] -
🔗 r/Harrogate Quiet venue for online job-interview? rss
I will be in Harrogate for a short break soon, and have unexpectedly got a job interview (online) on the same day, so I need to find somewhere quiet to do the interview as it is before my hotel check-in time. Any recommendations? It's a Monday afternoon if that makes a difference. Willing to pay if necessary.
submitted by /u/LibrarySpooks
[link] [comments] -
🔗 r/LocalLLaMA Anthropic admits to have made hosted models more stupid, proving the importance of open weight, local models rss
On March 4, we changed Claude Code's default reasoning effort from `high` to `medium` to reduce the very long latency—enough to make the UI appear frozen—some users were seeing in `high` mode. This was the wrong tradeoff. We reverted this change on April 7 after users told us they'd prefer to default to higher intelligence and opt into lower effort for simple tasks. This impacted Sonnet 4.6 and Opus 4.6.
On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10. This affected Sonnet 4.6 and Opus 4.6.
On April 16, we added a system prompt instruction to reduce verbosity. In combination with other prompt changes, it hurt coding quality and was reverted on April 20. This impacted Sonnet 4.6, Opus 4.6, and Opus 4.7.
In each of these cases they made conscious choices to lower server load at the cost of quality, completely outside the end user's control and without informing their paying customers of the changes. For me, this proves that if you depend on an AI model for your service or to do your job, the only sane choice is to pick an open-weight model that you can host yourself, or that you can pay someone to host for you.
submitted by /u/spaceman_
[link] [comments] -
🔗 r/wiesbaden Great bakery, but I don't like the name - one star! rss
submitted by /u/Tisiphoni1
[link] [comments] -
🔗 r/reverseengineering rbinmcp: a Rust MCP server for binary analysis, reverse engineering, and malware triage. rss
submitted by /u/ectkirk
[link] [comments] -
🔗 r/Yorkshire Man jailed for raping Leeds University fresher in 1977, following DNA breakthrough rss
submitted by /u/Legitimate-Break-143
[link] [comments] -
---|--- -
🔗 r/york Scarborough Bridge looking toward town today rss
Always one of my favourite views, especially in the sunshine!
submitted by /u/York_shireman
[link] [comments] -
🔗 r/wiesbaden Tattoo Artists rss
Looking for tattoo artist recommendations.
submitted by /u/Full-Comparison9574
[link] [comments] -
🔗 r/Leeds Any garden centers sell tomatillos and more interesting fruit/veg plants? rss
Looking for stuff beyond the usual tomatoes and chilli plants.
White strawbs?
Etc
submitted by /u/Calm-Passenger7334
[link] [comments] -
🔗 r/reverseengineering We built an RF-Neural TRNG – try to break it rss
submitted by /u/Sea-Dragonfruit-1881
[link] [comments] -
🔗 r/Leeds PSA: Postal voting envelopes rss
A note for anyone who is postal voting for the upcoming elections - I noticed that envelope A (brown envelope for the ballot paper) had a very poor seal.
In case this is more than just a one-off bad envelope, I wanted to highlight that it's apparently ok to seal with tape.
You do not need to request a replacement if you have:
Information from Postal voting - List of possible mistakes (leeds.gov.uk)
submitted by /u/The_Deacon
[link] [comments] -
🔗 r/Leeds Is Leeds actually affordable right now or is that outdated advice? rss
Hi everyone, I’ve been seeing a lot of mixed opinions online about the cost of living in Leeds. Some people still say it’s one of the more affordable cities in the UK, while others mention rising rent, bills and overall expenses. For those currently living there, what’s the reality in 2026? Are certain areas still budget-friendly and what kind of monthly costs should someone realistically expect (rent, transport, groceries, etc.)? Just trying to get a clear and honest picture before making any plans.
submitted by /u/Independent_Grab_977
[link] [comments] -
🔗 r/york Model trains in York - can you help? rss
I’m planning a York-based visitor attraction concept around the joy of model railways, miniature worlds and hands-on experiences.
The big challenge is not the train bit. I know that world well enough.
The challenge is finding the right people around it - people with experience in attractions, hospitality, property, operations, partnerships, fundraising or building something from scratch.
So this is a straightforward ask.
If you’ve helped launch or grow a visitor attraction, family experience, museum, leisure venue or similar, I’d love to hear from you.
If you know someone who has, I’d be grateful for an introduction. I’m especially interested in speaking to people who are practical, commercially minded and excited by making something distinctive happen in York.
Comment below or send me a message.
submitted by /u/TrainTraxUK
[link] [comments] -
🔗 r/Harrogate Recommendations for massage rss
Other than the Turkish Baths (which is lovely but quite expensive so I can’t justify going there all the time), can anyone recommend a good place to book a massage please? Has anyone been to Thai Siam Relax Therapy on Station Bridge, any good? Any others you’d recommend? Thanks
submitted by /u/purte
[link] [comments] -
🔗 r/LocalLLaMA Deepseek v4 people rss
submitted by /u/markeus101
[link] [comments] -
🔗 Simon Willison DeepSeek V4 - almost on the frontier, a fraction of the price rss
Chinese AI lab DeepSeek's last model release was V3.2 (and V3.2 Speciale) last December. They just dropped the first of their hotly anticipated V4 series in the shape of two preview models, DeepSeek-V4-Pro and DeepSeek-V4-Flash.
Both are Mixture of Experts models with a 1 million token context. Pro is 1.6T total parameters, 49B active. Flash is 284B total, 13B active. They're using the standard MIT license.
I think this makes DeepSeek-V4-Pro the new largest open weights model. It's larger than Kimi K2.6 (1.1T) and GLM-5.1 (754B) and more than twice the size of DeepSeek V3.2 (685B).
Pro is 865GB on Hugging Face, Flash is 160GB. I'm hoping that a lightly quantized Flash will run on my 128GB M5 MacBook Pro. It's possible the Pro model may run on it if I can stream just the necessary active experts from disk.
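Whether a quantized model fits in a given amount of RAM is mostly weight arithmetic. Here's a rough back-of-envelope sketch; it counts only the weights and ignores KV cache, activations, and runtime overhead, so real requirements will be higher:

```python
def quantized_weight_gb(total_params: float, bits_per_param: float) -> float:
    """Approximate size of just the model weights, in decimal GB."""
    return total_params * bits_per_param / 8 / 1e9

# DeepSeek-V4-Flash: 284B total parameters, at a few quantization levels
for bits in (16, 8, 4):
    print(f"{bits}-bit: {quantized_weight_gb(284e9, bits):.0f} GB")
```

At 4 bits per parameter the Flash weights alone come to roughly 142 GB, which is why fitting it on a 128GB machine depends on quantization level and how the runtime handles the experts.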
For the moment I tried the models out via OpenRouter, using llm-openrouter:
llm install llm-openrouter
llm openrouter refresh
llm -m openrouter/deepseek/deepseek-v4-pro 'Generate an SVG of a pelican riding a bicycle'
Here's the pelican for DeepSeek-V4-Flash:

And for DeepSeek-V4-Pro:

For comparison, take a look at the pelicans I got from DeepSeek V3.2 in December, V3.1 in August, and V3-0324 in March 2025.
So the pelicans are pretty good, but what's really notable here is the cost. DeepSeek V4 is a very, very inexpensive model.
This is DeepSeek's pricing page. They're charging $0.14/million tokens input and $0.28/million tokens output for Flash, and $1.74/million input and $3.48/million output for Pro.
Here's a comparison table with the frontier models from Gemini, OpenAI and Anthropic:
| Model | Input ($/M) | Output ($/M) |
|---|---|---|
| DeepSeek V4 Flash | $0.14 | $0.28 |
| GPT-5.4 Nano | $0.20 | $1.25 |
| Gemini 3.1 Flash-Lite | $0.25 | $1.50 |
| Gemini 3 Flash Preview | $0.50 | $3 |
| GPT-5.4 Mini | $0.75 | $4.50 |
| Claude Haiku 4.5 | $1 | $5 |
| DeepSeek V4 Pro | $1.74 | $3.48 |
| Gemini 3.1 Pro | $2 | $12 |
| GPT-5.4 | $2.50 | $15 |
| Claude Sonnet 4.6 | $3 | $15 |
| Claude Opus 4.7 | $5 | $25 |
| GPT-5.5 | $5 | $30 |
DeepSeek-V4-Flash is the cheapest of the small models, beating even OpenAI's GPT-5.4 Nano. DeepSeek-V4-Pro is the cheapest of the larger frontier models.
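To turn those per-million-token rates into per-call dollar figures, the arithmetic is just a weighted sum. A quick sketch using a few of the quoted prices; the dictionary keys and the token counts are my own illustrative choices, not API identifiers:

```python
# $/M-token (input, output) prices, taken from the comparison table above
PRICES = {
    "deepseek-v4-flash": (0.14, 0.28),
    "deepseek-v4-pro": (1.74, 3.48),
    "claude-opus-4.7": (5.00, 25.00),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one call at the quoted per-million-token rates."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: a 50k-token prompt with a 2k-token response on each model
for model in PRICES:
    print(model, round(cost_usd(model, 50_000, 2_000), 5))
```

Even a fairly large prompt on V4 Flash works out to well under a cent per call.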
This note from the DeepSeek paper helps explain why they can price these models so low - they've focused a great deal on efficiency with this release, especially for longer context prompts:
In the scenario of 1M-token context, even DeepSeek-V4-Pro, which has a larger number of activated parameters, attains only 27% of the single-token FLOPs (measured in equivalent FP8 FLOPs) and 10% of the KV cache size relative to DeepSeek-V3.2. Furthermore, DeepSeek-V4-Flash, with its smaller number of activated parameters, pushes efficiency even further: in the 1M-token context setting, it achieves only 10% of the single-token FLOPs and 7% of the KV cache size compared with DeepSeek-V3.2.
DeepSeek's self-reported benchmarks in their paper show their Pro model competitive with those other frontier models, albeit with this note:
Through the expansion of reasoning tokens, DeepSeek-V4-Pro-Max demonstrates superior performance relative to GPT-5.2 and Gemini-3.0-Pro on standard reasoning benchmarks. Nevertheless, its performance falls marginally short of GPT-5.4 and Gemini-3.1-Pro, suggesting a developmental trajectory that trails state-of-the-art frontier models by approximately 3 to 6 months.
I'm keeping an eye on huggingface.co/unsloth/models as I expect the Unsloth team will have a set of quantized versions out pretty soon. It's going to be very interesting to see how well that Flash model runs on my own machine.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +3 releases rss
sync repo: +1 plugin, +3 releases
New plugins:
- [clang-include](https://github.com/oxikkk/ida-clang-include) (1.0.0)
New releases:
- [BinSync](https://github.com/binsync/binsync): 5.14.1
- [unicorn-tracer-arm64](https://github.com/chenxvb/unicorn-trace): 0.3.1 -
🔗 r/LocalLLaMA Deepseek V4 Flash and Non-Flash Out on HuggingFace rss
-
🔗 r/LocalLLaMA This isn’t X this is Y needs to die rss
All models spam this exact phrase liberally. Time to train it out.
That is all.
submitted by /u/twnznz
[link] [comments]
-
- April 23, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-04-23 rss
IDA Plugin Updates on 2026-04-23
New Releases:
Activity:
- augur
- binsync
- 3a25a55b: Bump pycparser (#496)
- capa
- b9f83061: update submodules
- e745fa6a: style: ruff format changed files
- a834c4c0: fix: clean up CHANGELOG bug fixes formatting
- 7484d3fc: fix: loader.py reads entire file for magic byte check
- 9954d994: fix: freeze/init.py: logically impossible condition
- aa9f09db: fix: render_default always returns empty string
- a5082bee: fix: remove unused gzip import in test_helpers.py
- f6f3380f: fix: EXTENSIONS_DYNAMIC has inconsistent leading dots
- 6431be2c: fix: rules/init.py: duplicate bytes_features line
- 62e6af31: fix: dotnetfile.py: missing import for capa.features.extractors.common
- f17629a4: fix: freeze/init.py: NO_ADDRESS < NO_ADDRESS returns True
- c7d3de8b: fix: base_extractor.py: metaclass is Python 2 syntax, ignored in Py3
- 58b7a9fc: fix: elffile.py: get_base_address returns None instead of NO_ADDRESS
- 8bea7c70: fix: DNTokenOffsetAddress.eq lacks type guard
- 3c61d995: fix: ProcessAddress.eq and ThreadAddress.eq assert on type
- a8fafe0d: fix: optimizer doesn't recurse into And/Or/Some children
- 53158b47: fix: find_dynamic_limitations_from_cli overwrites instead of OR-ing
- 9289f09f: fix: load_one_jsonl_from_path: finally block runs on unrelated except…
- 8f946778: fix: extract_os yields duplicate/contradictory OS values
- 527fb397: fix: vverbose.py: render_call variable assigned but never used
- haruspex
- ida-mcp-server
- IDA-NO-MCP
- 3a519c4f: Merge pull request #17 from xiaozhu1337/main
- IDAPluginList
- b492a450: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
- idasql
- 23192544: Enable MCP by default; fix vtable UPDATE error reporting
- inertia_decompiler
- rhabdomancer
-
🔗 Simon Willison Extract PDF text in your browser with LiteParse for the web rss
LlamaIndex have a most excellent open source project called LiteParse, which provides a Node.js CLI tool for extracting text from PDFs. I got a version of LiteParse working entirely in the browser, using most of the same libraries that LiteParse uses to run in Node.js.
Spatial text parsing
Refreshingly, LiteParse doesn't use AI models to do what it does: it's good old-fashioned PDF parsing, falling back to Tesseract OCR (or other pluggable OCR engines) for PDFs that contain images of text rather than the text itself.
The hard problem that LiteParse solves is extracting text in a sensible order despite the infuriating vagaries of PDF layouts. They describe this as "spatial text parsing" - they use some very clever heuristics to detect things like multi-column layouts and group and return the text in a sensible linear flow.
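As a toy illustration of what a spatial heuristic like this involves (this is not LiteParse's actual algorithm), here's a sketch that buckets text spans into columns by x-position and then reads each column top to bottom:

```python
# Each span: (x, y, text). A real parser works with full bounding boxes and
# much smarter clustering; this just splits columns on a fixed x-gap.
def linearize(spans, column_gap=100):
    """Group spans into columns by x, then read each column top-to-bottom."""
    columns = {}  # representative x -> list of (y, text)
    for x, y, text in sorted(spans, key=lambda s: s[0]):
        for cx in columns:
            if abs(cx - x) < column_gap:
                columns[cx].append((y, text))
                break
        else:
            columns[x] = [(y, text)]
    out = []
    for cx in sorted(columns):  # left column first
        out.extend(text for _, text in sorted(columns[cx]))
    return out

# A two-column page: naive y-order would interleave left and right
two_col = [(50, 10, "Left A"), (400, 10, "Right A"),
           (50, 30, "Left B"), (400, 30, "Right B")]
print(linearize(two_col))
```

Sorting the same spans purely by y-coordinate would interleave the two columns; grouping by x first is what recovers a sensible linear flow.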
The LiteParse documentation describes a pattern for implementing Visual Citations with Bounding Boxes. I really like this idea: being able to answer questions from a PDF and accompany those answers with cropped, highlighted images feels like a great way of increasing the credibility of answers from RAG-style Q&A.
LiteParse is provided as a pure CLI tool, designed to be used by agents. You run it like this:
npm i -g @llamaindex/liteparse
lit parse document.pdf
I explored its capabilities with Claude and quickly determined that there was no real reason it had to stay a CLI app: it's built on top of PDF.js and Tesseract.js, two libraries I've used for something similar in a browser in the past.
The only reason LiteParse didn't have a pure browser-based version is that nobody had built one yet...
Introducing LiteParse for the web
Visit https://simonw.github.io/liteparse/ to try out LiteParse against any PDF file, running entirely in your browser. Here's what that looks like:

The tool can work with or without running OCR, and can optionally display images for every page in the PDF further down the page.
Building it with Claude Code and Opus 4.7
The process of building this started in the regular Claude app on my iPhone. I wanted to try out LiteParse myself, so I started by uploading a random PDF I happened to have on my phone along with this prompt:
Clone https://github.com/run-llama/liteparse and try it against this file
Regular Claude chat can clone directly from GitHub these days, and while by default it can't access most of the internet from its container it can also install packages from PyPI and npm.
I often use this to try out new pieces of open source software on my phone - it's a quick way to exercise something without having to sit down with my laptop.
You can follow my full conversation in this shared Claude transcript. I asked a few follow-up questions about how it worked, and then asked:
Does this library run in a browser? Could it?
This gave me a thorough enough answer that I was convinced it was worth trying getting that to work for real. I opened up my laptop and switched to Claude Code.
I forked the original repo on GitHub, cloned a local copy, started a new web branch and pasted that last reply from Claude into a new file called notes.md. Then I told Claude Code:
Get this working as a web app. index.html, when loaded, should render an app that lets users open a PDF in their browser and select OCR or non-OCR mode and have this run. Read notes.md for initial research on this problem, then write out plan.md with your detailed implementation plan
I always like to start with a plan for this kind of project. Sometimes I'll use Claude's "planning mode", but in this case I knew I'd want the plan as an artifact in the repository so I told it to write plan.md directly.
This also means I can iterate on the plan with Claude. I noticed that Claude had decided to punt on generating screenshots of images in the PDF, and suggested we defer a "canvas-encode swap" to v2. I fixed that by prompting:
Update the plan to say we WILL do the canvas-encode swap so the screenshots thing works
After a few short follow-up prompts, here's the plan.md I thought was strong enough to implement.
I prompted:
build it.
And then I mostly left Claude Code to its own devices, tinkered with some other projects, caught up on Duolingo and occasionally checked in to see how it was doing.
I added a few prompts to the queue as I was working. Those don't yet show up in my exported transcript, but it turns out running rg queue-operation --no-filename | grep enqueue | jq -r '.content' in the relevant ~/.claude/projects/ folder extracts them.
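The same extraction is easy to do in Python; this sketch assumes one JSON object per line with `type` and `content` fields, which is a guess at the transcript schema mirroring the rg/grep pipeline rather than a documented format:

```python
import json

def queued_prompts(lines):
    """Yield the `content` of enqueue records from JSONL-style lines."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines, like grep would
        # Field names here are guesses; adjust to the real schema.
        if record.get("type") == "queue-operation" and "enqueue" in line:
            yield record.get("content", "")

sample = [
    '{"type": "queue-operation", "op": "enqueue", "content": "small commits"}',
    '{"type": "other", "content": "ignored"}',
]
print(list(queued_prompts(sample)))
```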
- When you implement this use playwright and red/green TDD, plan that too (I've written more about red/green TDD here)
- let's use PDF.js's own renderer (it was messing around with pdfium)
- The final UI should include both the text and the pretty-printed JSON output, both of those in textareas and both with copy-to-clipboard buttons - it should also be mobile friendly (I had a new idea for how the UI should work)
- small commits along the way (see below)
- Make sure the index.html page includes a link back to https://github.com/run-llama/liteparse near the top of the page (it's important to credit your dependencies in a project like this!)
- View on GitHub → is bad copy because that's not the repo with this web app in, it's the web app for the underlying LiteParse library
- Run OCR should be unchecked by default
- When I try to parse a PDF in my browser I see 'Parse failed: undefined is not a function (near '...value of readableStream...')' (it was testing with Playwright in Chrome; it turned out there was a bug in Safari)
- oh that is in safari but it works in chrome
- When "Copy" is clicked the text should change to "Copied!" for 1.5s
- [Image #1] Style the file input so that long filenames don't break things on Firefox like this - in fact add one of those drag-drop zone UIs which you can also click to select a file (dropping screenshots in of small UI glitches works surprisingly well)
- Tweak the drop zone such that the text is vertically centered, right now it is a bit closer to the top
- it breaks in Safari on macOS, works in both Chrome and Firefox. On Safari I see "Parse failed: undefined is not a function (near '...value of readableStream...')" after I click the Parse button, when OCR is not checked (it still wasn't working in Safari...)
- works in safari now (but it fixed it pretty quickly once I pointed that out and it got Playwright working with that browser)
I've started habitually asking for "small commits along the way" because it makes for code that's easier to understand or review later on, and I have an unproven hunch that it helps the agent work more effectively too - it's yet another encouragement towards planning and taking on one problem at a time.
While it was working I decided it would be nice to be able to interact with an in-progress version. I asked a separate Claude Code session against the same directory for tips on how to run it, and it told me to use npx vite. Running that started a development server with live-reloading, which meant I could instantly see the effect of each change it made on disk - and prompt with further requests for tweaks and fixes.
Towards the end I decided it was going to be good enough to publish. I started a fresh Claude Code instance and told it:
Look at the web/ folder - set up GitHub actions for this repo such that any push runs the tests, and if the tests pass it then does a GitHub Pages deploy of the built vite app such that the web/index.html page is the index.html page for the thing that is deployed and it works on GitHub Pages
After a bit more iteration here's the GitHub Actions workflow that builds the app using Vite and deploys the result to https://simonw.github.io/liteparse/.
I love GitHub Pages for this kind of thing because it can be quickly configured (by Claude, in this case) to turn any repository into a deployed web-app, at zero cost and with whatever build step is necessary. It even works against private repos, if you don't mind your only security being a secret URL.
With this kind of project there's always a major risk that the model might "cheat" - mark key features as "TODO" and fake them, or take shortcuts that ignore the initial requirements.
The responsible way to prevent this is to review all of the code... but this wasn't intended as that kind of project, so instead I fired up OpenAI Codex with GPT-5.5 (I had preview access) and told it:
Describe the difference between how the node.js CLI tool runs and how the web/ version runs
The answer I got back was enough to give me confidence that Claude hadn't taken any project-threatening shortcuts.
... and that was about it. Total time in Claude Code for that "build it" step was 59 minutes. I used my claude-code-transcripts tool to export a readable version of the full transcript which you can view here, albeit without those additional queued prompts (here's my issue to fix that).
Is this even vibe coding any more?
I'm a pedantic stickler when it comes to the original definition of vibe coding - vibe coding does not mean any time you use AI to help you write code, it's when you use AI without reviewing or caring about the code that's written at all.
By my own definition, this LiteParse for the web project is about as pure vibe coding as you can get! I have not looked at a single line of the HTML and TypeScript written for this project - in fact while writing this sentence I had to go and check if it had used JavaScript or TypeScript.
Yet somehow this one doesn't feel as vibe coded to me as many of my other vibe coded projects:
- As a static in-browser web application hosted on GitHub Pages the blast radius for any bugs is almost non-existent: it either works for your PDF or doesn't.
- No private data is transferred anywhere - all processing happens in your browser - so a security audit is unnecessary. I've glanced once at the network panel while it's running and no additional requests are made when a PDF is being parsed.
- There was still a whole lot of engineering experience and knowledge required to use the models in this way: identifying that porting LiteParse to run directly in a browser was feasible was critical to the rest of the project.
Most importantly, I'm happy to attach my reputation to this project and recommend that other people try it out. Unlike most of my vibe coded tools I'm not convinced that spending significant additional engineering time on this would have resulted in a meaningfully better initial release. It's fine as it is!
I haven't opened a PR against the origin repository because I've not discussed it with the LiteParse team. I've opened an issue, and if they want my vibe coded implementation as a starting point for something more official they're welcome to take it.
-
-
🔗 r/Yorkshire Opinion: Nigel Farage's legacy is Brexit. Brexit has not delivered even 10% of what was promised. I have no reason to believe Reform's next flashy promise will come true rss
I remember Brexit promises. "We will be Singapore-on-Thames", "Turkey will join the EU", "Other countries will leave the EU too", "We'll leave the EU to save more money for the NHS", "Migration rates will lower", "We will hold all the cards in EU-UK negotiations", among other things.
All lies.
We definitely don't hold all the cards in negotiations. And why would we? We are a little island of 67 million people, reliant on imports. The EU is a free trade bloc of 450 million people, with territory the size of a continent.
Lies, enabled by Nigel Farage and his friends. So, frankly - and maybe I'm the oddball - I don't understand why he is still relevant.
Reform talks about cutting red tape to support British business. However, they don't tell people that Brexit was the biggest Red Tape, anti-business act of the century.
Brexit added admin costs and paperwork for businesses around Britain.
Brexit made imports more expensive, which drives up our cost-of-living.
Brexit took away EU development money from deprived areas of Britain.
Brexit was a profound failure. Therefore, I have no reason to believe Nigel's next flashy promise will materialise.
And that's not even getting into Reform's shambolic views on the Iran war and Net Zero. That war, which was unnecessary and drove up our fossil fuel energy bills? Seriously? Is Trump really that important to Reform?
That is all. It is frustrating to see this party stay relevant across the north, considering these factors.
submitted by /u/coffeewalnut08
[link] [comments] -
🔗 Simon Willison A pelican for GPT-5.5 via the semi-official Codex backdoor API rss
GPT-5.5 is out. It's available in OpenAI Codex and is rolling out to paid ChatGPT subscribers. I've had some preview access and found it to be a fast, effective and highly capable model. As is usually the case these days, it's hard to put into words what's good about it - I ask it to build things and it builds exactly what I ask for!
There's one notable omission from today's release - the API:
API deployments require different safeguards and we are working closely with partners and customers on the safety and security requirements for serving it at scale. We'll bring GPT‑5.5 and GPT‑5.5 Pro to the API very soon.
When I run my pelican benchmark I always prefer to use an API, to avoid hidden system prompts in ChatGPT or other agent harnesses impacting the results.
The OpenClaw backdoor
One of the ongoing tension points in the AI world over the past few months has concerned how agent harnesses like OpenClaw and Pi interact with the APIs provided by the big providers.
Both OpenAI and Anthropic offer popular monthly subscriptions which provide access to their models at a significant discount to their raw API.
OpenClaw integrated directly with this mechanism, and was then blocked from doing so by Anthropic. This kicked off a whole thing. OpenAI - who recently hired OpenClaw creator Peter Steinberger - saw an opportunity for an easy karma win and announced that OpenClaw was welcome to continue integrating with OpenAI's subscriptions via the same mechanism used by their (open source) Codex CLI tool.
Does this mean anyone can write code that integrates with OpenAI's Codex-specific APIs to hook into those existing subscriptions?
The other day Jeremy Howard asked:
Anyone know whether OpenAI officially supports the use of the `/backend-api/codex/responses` endpoint that Pi and Opencode (IIUC) uses?

It turned out that on March 30th OpenAI's Romain Huet had tweeted:
We want people to be able to use Codex, and their ChatGPT subscription, wherever they like! That means in the app, in the terminal, but also in JetBrains, Xcode, OpenCode, Pi, and now Claude Code.
That’s why Codex CLI and Codex app server are open source too! 🙂
And Peter Steinberger replied to Jeremy that:
OpenAI sub is officially supported.
llm-openai-via-codex
So... I had Claude Code reverse-engineer the openai/codex repo, figure out how authentication tokens were stored and build me llm-openai-via-codex, a new plugin for LLM which picks up your existing Codex subscription and uses it to run prompts!
(With hindsight I wish I'd used GPT-5.4 or the GPT-5.5 preview, it would have been funnier. I genuinely considered rewriting the project from scratch using Codex and GPT-5.5 for the sake of the joke, but decided not to spend any more time on this!)
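As I understand it, the plugin's core trick is simply reading the credentials that Codex CLI already stored during login. Here's a minimal sketch of that idea - the `~/.codex/auth.json` location and the key names are assumptions for illustration, not a documented API:

```python
import json
from pathlib import Path

def load_codex_access_token(auth_path: Path = Path.home() / ".codex" / "auth.json") -> str:
    """Read the access token that Codex CLI saved during login."""
    data = json.loads(auth_path.read_text())
    # The "tokens" / "access_token" key names are assumptions - inspect
    # your own auth.json to confirm how your Codex CLI version stores them.
    return data["tokens"]["access_token"]
```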
Here's how to use it:
- Install Codex CLI, buy an OpenAI plan, login to Codex
- Install LLM: `uv tool install llm`
- Install the new plugin: `llm install llm-openai-via-codex`
- Start prompting: `llm -m openai-codex/gpt-5.5 'Your prompt goes here'`
All existing LLM features should also work - use `-a filepath.jpg/URL` to attach an image, `llm chat -m openai-codex/gpt-5.5` to start an ongoing chat, `llm logs` to view logged conversations and `llm --tool ...` to try it out with tool support.

And some pelicans
Let's generate a pelican!

```
llm install llm-openai-via-codex
llm -m openai-codex/gpt-5.5 'Generate an SVG of a pelican riding a bicycle'
```

Here's what I got back:

I've seen better from GPT-5.4, so I tagged on `-o reasoning_effort xhigh` and tried again:

That one took almost four minutes to generate, but I think it's a much better effort.

If you compare the SVG code (default, xhigh) the `xhigh` one took a very different approach, which is much more CSS-heavy - as demonstrated by those gradients. `xhigh` used 9,322 reasoning tokens where the default used just 39.

A few more notes on GPT-5.5
One of the most notable things about GPT-5.5 is the pricing. Once it goes live in the API it's going to be priced at twice the cost of GPT-5.4 - $5 per 1M input tokens and $30 per 1M output tokens, where 5.4 is $2.50 and $15.
GPT-5.5 Pro will be even more: $30 per 1M input tokens and $180 per 1M output tokens.
GPT-5.4 will remain available at half the price of 5.5, which makes the relationship of 5.4 to 5.5 feel like that of Claude Sonnet to Claude Opus.
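To make the price gap concrete, here's a quick sketch comparing what one session would cost across the three tiers, using the per-token prices quoted above (the 50k-input / 10k-output workload is made up for illustration):

```python
# Prices in $ per 1M tokens, from the pricing announcement above.
prices = {
    "gpt-5.4": {"input": 2.5, "output": 15.0},
    "gpt-5.5": {"input": 5.0, "output": 30.0},
    "gpt-5.5-pro": {"input": 30.0, "output": 180.0},
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the listed per-1M-token rates."""
    p = prices[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1e6

# A hypothetical session: 50k tokens in, 10k tokens out.
for model in prices:
    print(f"{model}: ${cost(model, 50_000, 10_000):.2f}")
```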
Ethan Mollick has a detailed review of GPT-5.5 where he put it (and GPT-5.5 Pro) through an array of interesting challenges. His verdict: the jagged frontier continues to hold, with GPT-5.5 excellent at some things and challenged by others in a way that remains difficult to predict.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 r/LocalLLaMA Qwen 3.6 27B Makes Huge Gains in Agency on Artificial Analysis - Ties with Sonnet 4.6 rss
It is crazy that Qwen3.6 27B now matches Sonnet 4.6 on AA's Agentic Index, overtaking Gemini 3.1 Pro Preview, GPT 5.2 and 5.3 as well as MiniMax 2.7. It made gains across all three indices but the way the Coding Index works, I don't think the gains are as apparent as they should be. The Coding Index only uses Terminal Bench Hard and SciCode, which are both strange choices. Clearly the training on the 3.6 models out now has focused on agentic use for OpenClaw/Hermes but it's interesting how close to frontier models such a small model can get. Qwen3.6 122B might be epic. . . submitted by /u/dionysio211
[link] [comments]
-
🔗 The Pragmatic Engineer The Pulse: ‘Tokenmaxxing’ as a weird new trend rss
Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover Big Tech and startups through the lens of senior engineers and engineering leaders. Today, we cover one out of four topics from last week's The Pulse issue. Full subscribers received the article below seven days ago. If you've been forwarded this email, you can subscribe here.
Inside Meta, an engineer created a "token leaderboard" that ranks employees by token usage. Last week, The Information reported:
"Employees at Meta Platforms who want to show off their AI superuser chops are competing on an internal leaderboard for status as a "Session Immortal"-- or, even better, "Token Legend."
The rankings, set up by a Meta employee on its intranet using company data, measure how many tokens -- the units of data processed by AI models -- employees are burning through. Dubbed "Claudeonomics" after the flagship product of AI startup Anthropic, the leaderboard aggregates AI usage from more than 85,000 Meta employees, listing the top 250 power users.
The practice is emblematic of Silicon Valley's newest form of conspicuous consumption, known as "tokenmaxxing," which has turned token usage into a benchmark for productivity and a competitive measure of who is most AI native. Workers are maximizing their prompts, coding sessions and the number of agents working in parallel to climb internal rankings at Meta and other companies and demonstrate their value as AI automates functions such as coding."
I spoke with a few engineers at Meta about what's happening, and this is what they said:
- Massive waste. Plenty of devs are running an OpenClaw-like internal agent that burns massive amounts of tokens for little to no outcome.
- Outages caused by AI overuse. A dev mentioned that some SEVs were caused by what looked like careless AI code generation; almost like a dev behind the SEV was more concerned with churning out massive amounts of code with AI than with product quality.
- Gamified leaderboard. Those at the top of the leaderboard produce throwaway, wasteful work. This is painfully clear to anyone who checks Trajectories (AI prompts), which can be viewed.
As per The Information, Meta employees used a total of 60.2 trillion AI tokens (!!) in 30 days. If this was charged at Anthropic's API prices, it would cost $900M. Of course, Meta is likely purchasing tokens at a discount, but that could still come in at $100M+ - in large part from senseless "tokenmaxxing".
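That $900M figure is easy to sanity-check with back-of-the-envelope math (the flat $15 per 1M tokens is an assumed blended rate, roughly the output pricing mentioned above):

```python
# Back-of-the-envelope check of The Information's numbers.
tokens = 60.2e12           # 60.2 trillion tokens in 30 days
price_per_million = 15.0   # assumed blended $/1M tokens
cost = tokens / 1e6 * price_per_million
print(f"${cost / 1e9:.2f}B")  # roughly $0.9B, in line with the ~$900M figure
```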
After backlash on social media, Meta abolished the internal leaderboard last week. One day after The Information revealed details about the incredible tokenmaxxing numbers, I confirmed that Meta has taken down its leaderboard; perhaps they realized that the incentive created enormous and unnecessary waste. If so, it's a bit surprising that it took media coverage for the social media giant to reach that conclusion.
One long-tenured engineer at Meta told me they suspect the token leaderboard had a different goal: that increasing AI usage was the real aim all along. They said:
"Putting a leaderboard in place was always going to incentivize much more AI usage. And more AI usage means producing a lot more real-world traces. These traces can then be used to train Meta's next-generation coding model better.
I believe this was the goal, even if no one said it out loud.
It's an expensive way to generate data for training, but if any company has the means to do so, it's Meta."
Microsoft: full-force tokenmaxxing
Similarly, Microsoft has had an internal token leaderboard like Meta's since January, and it started out well, as I reported back at the time: an internal token dashboard displays the individuals who use the most tokens, in order to promote token usage and experimentation with LLMs. At the Windows maker, this leaderboard is interesting:
- Very senior engineers - distinguished-level folks - are in the top 5 across the whole company, despite the fact that this group generally wrote little code in the past.
- VP-level folks make the top 10 and top 20, despite often being in meetings for most of the day and rarely writing code.
However, what starts as a metric for performance reviews or promotions can quickly become a target for devs. I talked with a software engineer at the Windows maker who admitted they're full-on "tokenmaxxing" - not to get on the leaderboard, but rather because they don't want to be seen as using too few tokens:
"We have internal dashboards and metrics tracking AI usage, token usage, percentage of code written by AI vs hand-written code.
I am conscious of not wanting to be seen as "uses too little AI," and I'm not ashamed to say I need to do tokenmaxxing to do this. Things I do to inflate my token usage metrics:

- Ask AI questions about the code already in the documentation. The AI pulls up the documentation, processes it, and gives me results 10x slower, but while burning lots of tokens. I could use "readthedocs" [an internal product], but then my token numbers would be lower
- Ask the AI to prototype a feature that I have no intention of working on. Prompt it a few more times, then throw the whole thing away
- Default to always using the agent, even when I know I could do the work by hand much faster. Then watch it fail"
This engineer is relatively new at the company, so is concerned about job security, and is playing this game to avoid being tagged as insufficiently "AI-native" by burning far more tokens than necessary.
Salesforce: burning tokens to hit "minimum" & "ideal" targets
Elsewhere, Salesforce has created "tokenmaxxing" incentives as well. Talking with an engineer there, I learned that the company built tooling that effectively incentivizes excessive spending on tokens:
- "Minimum" incentives with a tracking tool. There's a Mac widget that shows your own spend, updated every 15 minutes. It also displays minimum expected spend. Last week, the target was $100 on Claude Code, and $70 on Cursor.
- Showing everyone's spend. A web-based tool to see the token spend of any colleague. It's used to check where teammates' usage is at.
- "Maximum" spend limits that can be exceeded. Up to a week ago, there was also a maximum monthly limit of $250 for Claude Code and $170 for Cursor. However, this can be exceeded with the simple press of a button if the limit is reached. I've learned that last week, some engineering organisations at Salesforce had their "maximum" limit removed in order to "remove any friction from the development process."
The message Salesforce sends to staff is clear: "use a minimum of $170/month tokens or be flagged." Who wants to get flagged for using too few tokens? The outcome is somewhat wasteful token spend:
- Burning tokens for nothing. Devs ask Claude or Cursor: "build me X," where X is a project or product with nothing to do with their work, and not something they'd ever ship. It's just a way to burn tokens
- Calibrating token spend to be above average. Plenty of devs browse peers' token spend to figure out the slightly-above average point, then use the tokens needed to hit that mark
Shopify: an example of how to avoid tokenmaxxing
The first-ever token leaderboard that I'm aware of was built by Shopify in 2025. And it worked well! Last June, the Head of Engineering at Shopify, Farhan Thawar, told me on The Pragmatic Engineer Podcast:
"We have a leaderboard where we actively celebrate the people who use the most tokens because we want to make sure they are [celebrated] if they're doing great work with AI.
[And for the top people on the leaderboard,] I want to see why they spent say $1,000 a month in credits for Cursor. Maybe that's because they're building something great and they have an agent workforce underneath them!"
I asked Farhan for details on how it's gone since. Here's what he told me:
"We have since renamed the token leaderboard to usage dashboard: for obvious reasons, as we don't want to encourage "competing" to make it to the top of this board. We have token spend on our internal wiki profile as well as on the usage dashboard.
We also have circuit breakers to catch "runaway agents." So if personal spend spikes within a day, we can cut off access immediately, and you can renew if the usage spike was deliberate, or if it was a runaway agent. The circuit breaker worked well for us: we've not only caught runaway agents, but found bugs in our infra this way!"
Shopify's approach seems to have worked for a few reasons:
- The usage dashboard served as a "push" for devs to use AI tools early on. Last year, devs were mostly experimenting with AI tools because the tools were not as performant as they are today. The usage dashboard encouraged developers to try new tools, and highlighted power users.
- Circuit breakers helped. Cutting off spend when usage spikes helped catch "runaway agents."
- High usage is looked at. Farhan checks in with top-spending individuals to understand the use cases. Any tokenmaxxing would likely have been spotted at this stage, which would have been a bit embarrassing for the user!
One more interesting learning Farhan shared with me: rather than asking "who spent the most in overall token cost?", it's more interesting to ask "whose tokens cost the most?" Devs whose tokens come out as expensive have turned out to do in-depth work that was interesting to learn about!
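The circuit-breaker idea Farhan describes is straightforward to sketch. This is a toy illustration of the pattern, not Shopify's implementation - the threshold and renewal flow are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class SpendCircuitBreaker:
    """Cuts off a user's agent access when daily spend spikes."""
    daily_limit_usd: float = 200.0                    # hypothetical threshold
    spend: dict = field(default_factory=dict)         # user -> $ spent today
    tripped: set = field(default_factory=set)         # users currently cut off

    def record(self, user: str, cost_usd: float) -> bool:
        """Record spend for a user; return True if they may continue."""
        if user in self.tripped:
            return False
        self.spend[user] = self.spend.get(user, 0.0) + cost_usd
        if self.spend[user] > self.daily_limit_usd:
            self.tripped.add(user)  # cut off immediately; renewable by a human
            return False
        return True

    def renew(self, user: str) -> None:
        """Re-enable a user after confirming the spike was deliberate."""
        self.tripped.discard(user)
        self.spend[user] = 0.0
```

A runaway agent that loops on expensive prompts trips the breaker mid-day, while deliberate heavy usage can simply be renewed.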
Tokenmaxxing: great for AI vendors, bad for everyone else
I see very few rational reasons why incentivizing tokenmaxxing makes sense for any company. It results in increasing AI spend - by a lot! - in return for little to no value. Heck, in some cases it actually incentivizes slower work - as shown by devs using the AI to answer questions when documentation is readily available - and encourages 'busywork' where devs prompt projects that they don't even want to ship. Tokenmaxxing seems to push devs to focus on stuff that makes no difference to a business.
It feels to me that a good part of the industry is using token count numbers similarly to how the lines-of-code-produced metric was used years ago. There was a time when the number of lines written daily or monthly was an important metric in programmer productivity, until it became clear that it's a terrible thing to focus on. A lines-of-code metric can easily be gamed by writing boilerplate or throwaway code. Also, the best developers are not necessarily those who write the most code; they're the ones who solve hard problems for the business quickly and reliably with - or without - code!
Similarly, the number of tokens a dev generates can easily be gamed, and if this metric is measured then devs will indeed game it. But doing so generates a massive accompanying AI bill!
-- -
Read the full issue of last week's The Pulse, or check out this week's The Pulse. This week's issue covers:
- New trend: token spend breaks budgets - what next? In the past 2-3 months, spending on AI agents has exploded at many tech companies, and the ramifications of this are starting to dawn on engineering leaders. We've sourced details from 15 companies, including the different ways they are coping with this realization.
- New trend: more AI vendors can't keep up with demand. Related to massively increased spending, GitHub Copilot and Anthropic are starting to limit less-profitable individual users, so they can serve business users whose spend has easily 10x'd in the last few months. The exception is OpenAI and Codex.
- Morale at Meta hits all-time low? Business is booming but devs at Meta are furious and worried due to looming layoffs, and an invasive tracking program rolled out to all US employees.
-
🔗 @binaryninja@infosec.exchange This marks the first stable release of our v2 Enterprise server bringing major mastodon
This marks the first stable release of our v2 Enterprise server bringing major improvements for Enterprise customers, and while we will continue supporting v1 for a period of time, v2 is where we recommend heading next. More on the v2 server and the rest of 5.3 here: https://binary.ninja/2026/01/26/enterprise-2.0.html
-
🔗 r/Leeds Scam Gardeners rss
I appreciate this is a long shot, and I know people should be more careful with what is a well-known scam at this point. However...
Does anyone recognise the flatbed Transit in this blurry picture?
They approached an elderly man on the street and offered to cut his hedges. They took £150 up front and then left without completing the job. The next day they came round to the house again and said he had not paid and that he needed to pay up. They have damaged his property and keep threatening him. The police have been absolutely useless and say it's a civil matter.
They are extorting him and he is scared to death.
They are using the phone number: 07986992278
Edit: updated phone number. Thanks for the feedback.
Update: thanks for those that solved it so quickly, incredible work 😀.
submitted by /u/Kindly_Hand4472
[link] [comments] -
🔗 r/wiesbaden Hebebühne PKW rss
Hi everyone,
I'd like to rent a car lift to do a few things on my car. Where can you rent something like that in Wiesbaden?
I haven't found anything directly online.
Thanks a lot!
submitted by /u/Lebenskuenstlerinho
[link] [comments] -
🔗 MetaBrainz Picard 3 beta 1 released rss
Today, we're making available another pre-release version for the upcoming MusicBrainz Picard 3. Beta 1 focuses on fixing issues that were found in the previous releases as well as some minor improvements and updated translations.
Download links and a list of changes since Picard 3 alpha 4 are available below. For a more detailed overview of what is new in Picard 3 please see the previous blog post Picard 3 Alpha Release.
While we have all the major features implemented and with the latest bug fixes we are confident in the current code, this is still a pre-release and there might be bugs. If you use this, do so with care, backup your files and please report any issues you encounter.
Some of the changes are also backward incompatible, hence we recommend you make a backup of your Picard.ini config file before trying the beta version. You can do so in Picard’s Options under Advanced > Maintenance.
What’s new?
Bug fixes
- [PICARD-3236] - PyJWT~=2.12 requirement too strict and impacts distro packaging
- [PICARD-3237] - AppStream metadata validation fails due to changed FAQ URL
- [PICARD-3238] - No longer able to paste text value into multiple tracks
- [PICARD-3239] - Picard can't remove plugin data on Windows
- [PICARD-3246] - Genre tag changes on every reload when multiple genres have equal vote counts
- [PICARD-3249] - Tags not suggested in Edit Tag dialog
New Features
- [PICARD-2982] - Submit Listens to ListenBrainz using Picard
- [PICARD-3250] - Support Simplified Chinese to Traditional Chinese plugin in official builds
Improvements
- [PICARD-3254] - Plugins v3 MANIFEST: add support for `report_bugs_to` field
- [PICARD-292] - Wizard/configuration tutorial on first run
- [PICARD-3199] - Detect FLAC `unsyncedlyrics` tag
- [PICARD-3240] - Map `syncedlyrics` to `WM/Lyrics_Synchronised` for ASF
- [PICARD-3244] - Fix word-wrap issue regarding the network cache size option setting
Tasks
- [PICARD-3243] - Documentation: Add note about unnecessary spaces in script functions
- [PICARD-3247] - Update snap build for Picard 3 with Qt6
Download
We appreciate your interest in trying this new version. Use with care, backup your files and please use the MetaBrainz community forums and the ticket system to give feedback and report bugs.
For Windows and macOS you can download the beta version from the Picard download page. Linux users can run from source or try the beta channel of the Picard snap package.
Picard is free software and the source code is available on GitHub.
Acknowledgements
Code contributions by Bob Swift, Deepak Kumar, Laurent Monin and Philipp Wolfer.
Translations were updated by bababasti (German), coldified_ (Korean), cristian_emanuel (Portuguese (Brazil)) and Marc Riera (Catalan). -
🔗 r/Yorkshire Actually obsessed with the atmosphere in this shot! It's god's own country for a reason🥺💫 rss
📷Dave Z Photography submitted by /u/RedDevilPlay
[link] [comments]
-
🔗 r/Harrogate About to move to Harrogate rss
Hi all,
My wife (29F) and I (29M) are about to move into our first house in Bilton, Harrogate.
We’re not from the area so don’t know it particularly well and would love some local recommendations.
Things we’re looking for:
Gyms (good value vs higher-end, open to both)
Golf course, I am keen to join a club (24 handicap)
Tennis / padel clubs nearby - possibly with a gym to tie in above
Pubs for watching sport (football/rugby)
Pubs for a few casual drinks
Good beer gardens for summer
Running/cycling routes
Also any advice on the more boring stuff:
Reliable broadband providers in the area?
Energy suppliers (we’ll compare, but keen to hear real experiences)
And more generally any hidden gems, things to avoid, or “wish you knew when you moved” tips would be massively appreciated.
Thanks in advance
submitted by /u/BroadwayEssentials
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +3 releases rss
sync repo: +3 releases

## New releases
- [FeelingLucky](https://github.com/terrynini/feelinglucky): 1.0.2, 1.0.1, 1.0.0

## Changes
- [ApplyCalleeTypeEx](https://github.com/dump-guy/applycalleetypeex): host changed: Dump-GUY/ApplyCalleeTypeEx → dump-guy/applycalleetypeex
- [CrystalRE](https://github.com/nico-posada/crystalre): host changed: Nico-Posada/CrystalRE → nico-posada/crystalre
- [DBImporter](https://github.com/hexrayssa/ida-dbimporter): host changed: HexRaysSA/ida-dbimporter → hexrayssa/ida-dbimporter
- [DeepExtract](https://github.com/marcosd4h/deepextractida): host changed: marcosd4h/DeepExtractIDA → marcosd4h/deepextractida
- [EmuIt](https://github.com/azzonfire/emuit): host changed: AzzOnFire/emuit → azzonfire/emuit
- [GoResolver](https://github.com/volexity/goresolver): host changed: volexity/GoResolver → volexity/goresolver
- [HappyIDA](https://github.com/happyida/happyida): host changed: HappyIDA/HappyIDA → happyida/happyida
- [HashDB](https://github.com/oalabs/hashdb-ida): host changed: OALabs/hashdb-ida → oalabs/hashdb-ida
- [IDASignsrch](https://github.com/l4ys/idasignsrch): host changed: L4ys/IDASignsrch → l4ys/idasignsrch
- [IDAssist](https://github.com/symgraph/idassist): host changed: symgraph/IDAssist → symgraph/idassist
- [IDAssistMCP](https://github.com/symgraph/idassistmcp): host changed: symgraph/IDAssistMCP → symgraph/idassistmcp
- [LazyCross](https://github.com/l4ys/lazycross): host changed: L4ys/LazyCross → l4ys/lazycross
- [LazyIDA](https://github.com/l4ys/lazyida): host changed: L4ys/LazyIDA → l4ys/lazyida
- [ReCopilot](https://github.com/xingtulab/recopilot): host changed: XingTuLab/recopilot → xingtulab/recopilot
- [SuperHint](https://github.com/p05wn/superhint): host changed: p05wn/SuperHint → p05wn/superhint
- [ZoomAllViews](https://github.com/dump-guy/zoomallviews): host changed: Dump-GUY/ZoomAllViews → dump-guy/zoomallviews
- [bindiff](https://github.com/hexrays-plugin-contributions/bindiff): host changed: HexRays-plugin-contributions/bindiff → hexrays-plugin-contributions/bindiff
- [binexport](https://github.com/hexrays-plugin-contributions/binexport): host changed: HexRays-plugin-contributions/binexport → hexrays-plugin-contributions/binexport
- [deREferencing](https://github.com/danigargu/dereferencing): host changed: danigargu/deREferencing → danigargu/dereferencing
- [edit-function-prototype](https://github.com/oxikkk/ida-edit-function-prototype): host changed: oxiKKK/ida-edit-function-prototype → oxikkk/ida-edit-function-prototype
- [function-string-associate](https://github.com/oxikkk/ida-function-string-associate): host changed: oxiKKK/ida-function-string-associate → oxikkk/ida-function-string-associate
- [gepetto](https://github.com/justicerage/gepetto): host changed: JusticeRage/Gepetto → justicerage/gepetto
- [hrtng](https://github.com/kasperskylab/hrtng): host changed: KasperskyLab/hrtng → kasperskylab/hrtng
- [ida-cyberchef](https://github.com/hexrayssa/ida-cyberchef): host changed: HexRaysSA/ida-cyberchef → hexrayssa/ida-cyberchef
- [ida-security-scanner](https://github.com/symbioticsec/ida-security-scanner): host changed: SymbioticSec/ida-security-scanner → symbioticsec/ida-security-scanner
- [ida-terminal-plugin](https://github.com/hexrayssa/ida-terminal-plugin): host changed: HexRaysSA/ida-terminal-plugin → hexrayssa/ida-terminal-plugin
- [unicorn-tracer-arm64](https://github.com/chenxvb/unicorn-trace): host changed: chenxvb/Unicorn-Trace → chenxvb/unicorn-trace
- [vt-ida-plugin](https://github.com/virustotal/vt-ida-plugin): host changed: VirusTotal/vt-ida-plugin → virustotal/vt-ida-plugin
- [vtable-context-tools](https://github.com/oxikkk/ida-vtable-tools): host changed: oxiKKK/ida-vtable-tools → oxikkk/ida-vtable-tools
- [xray](https://github.com/hexrays-plugin-contributions/xray): host changed: HexRays-plugin-contributions/xray → hexrays-plugin-contributions/xray
- [yarka](https://github.com/azzonfire/yarka): host changed: AzzOnFire/yarka → azzonfire/yarka -
🔗 r/reverseengineering Claude Code - What do you think? What do you feel is missing? rss
submitted by /u/Outrageous-Pea9611
[link] [comments] -
🔗 r/reverseengineering I spent 4 years building a static unpacker for Nuitka-compiled Python binaries including Commercial encrypted builds. Finally open-sourcing it. rss
submitted by /u/Dima_Reverse
[link] [comments] -
🔗 r/york Thursday (today) 15:19 York-Edinburgh First Class upgrade going free rss
Hi, I’m a fool and have booked a SeatFrog upgrade for the wrong day. If anybody is travelling on this service, happy to give you the upgrade for free. You must already have a ticket for this service - this is just an upgrade. submitted by /u/APieceOfLalique
[link] [comments]
-
🔗 r/york View of the Minster from top of St. Olave’s rss
I ascended the bell tower of St. Olave church this morning, and thought I’d snap quick pic while there. Not the best photography in the world, but hopefully worth sharing 😊 submitted by /u/Simple_Joys
[link] [comments]
-
🔗 r/LocalLLaMA Qwen 3.6 27B is a BEAST rss
I have a 5090 Laptop from work, 24GB VRAM.
I have been testing every model that comes out, and I can confidently say I’ll be cancelling my cloud subscriptions.
All my tool call and data science benchmarks that prove a model is reliably good for my use case, passed.
It might not be the case for other professions, but for pyspark/python and data transformation debugging it’s basically perfect.
Using llama.cpp, q4_k_m at q4_0, still looking at options for optimising.
Edit - I chose to go with IQ4_XS at 200k q8_0,
I have not used speculative decoding yet, will get there when I get there.
Specs:
ASUS ROG Strix SCAR 18
RTX 5090 24GB
64GB DDR5 RAM
submitted by /u/AverageFormal9076
[link] [comments] -
🔗 r/wiesbaden Beste Ramen rss
Hello Wiesbaden, I'm in town for work this weekend and I'm a ramen fanatic. Unfortunately there are no good restaurants in my area, so I have to take every chance to slurp a bowl elsewhere. In your opinion, what's the best restaurant? Ideally only ones that are genuinely run by Japanese people and are very authentic. No fusion or anything like that :)
submitted by /u/djaevuI
[link] [comments] -
🔗 r/york I’m properly obsessed with how this place looks at sunset 🌇🤩 rss
submitted by /u/Coffee000Oopss
[link] [comments]
-
🔗 Hex-Rays Blog Product Update: IDA 9.3sp2 Release rss
-
🔗 r/Leeds Yeah that about sums it up rss
submitted by /u/scottawesome
[link] [comments] -
🔗 r/york Is the Barbican always as warm as it was last night? rss
I went to see Jalen Ngonda at the Barbican and it was absolutely roasting inside the main arena / gig space. It was only 2/3s full (if that) and the security staff were handing out waters to people at the front, Jalen was complaining about the heat, it was ridiculous.
submitted by /u/WishfulStinking2
[link] [comments] -
🔗 r/Leeds Good massage/masseurs Leeds rss
Hello everyone, I am looking for any recommendations for a good massage therapist/massage shop in Leeds!
(And I'm not looking for a massage parlour before some funny onion suggests Winston's)
submitted by /u/InevitableSingle9652
[link] [comments] -
🔗 r/reverseengineering Fibratus 3.0.0 | Ad-hoc direct/indirect syscall evasion detection and 50+ new rules rss
submitted by /u/rabbitstack
[link] [comments] -
🔗 r/york Need help finding a wedding DJ, any recommendations? rss
Our original plan for music fell through, so now we’re scrambling a bit trying to find a wedding DJ. I’m hoping to find someone dependable who can handle both the formal parts of the reception and the party side of things without a lot of stress. If you hired someone you loved, I’d really appreciate any recommendations.
submitted by /u/goldy_bra_r
[link] [comments] -
🔗 r/Yorkshire Marsden to Slaithwaite along the Canal is beautiful. rss
submitted by /u/CartoonistCalm9801
[link] [comments]
-
🔗 Console.dev newsletter HyperFrames rss
Description: Write HTML. Render video.
What we like: Compositions are HTML using data attributes rather than React, so no build step or bundler is required. Supports seekable, frame-accurate animations. Easy to preview in the browser. Exports to MP4. Includes AI agent skills, or you can start manually.
What we dislike: The preview renders in real time, so large compositions can stutter in a way that doesn’t happen once rendered.
-
🔗 Console.dev newsletter Pijul rss
Description: Distributed version control.
What we like: Independent changes can be applied in any order without changing the result - much simpler than rebase. Conflicts are expected and considered a first-class state that can be resolved, then never come back. Changes are stored as patches which model an atomic unit of work rather than a snapshot or version.
What we dislike: Development seems sporadic, although the project is still active.
-
🔗 Armin Ronacher Equity for Europeans rss
If you spend enough time in US business or finance conversations, one word keeps showing up: equity.
Coming from a German-speaking, central European background, I found it surprisingly hard to fully internalize what that word means. More than that, I find it very hard to talk with other Europeans about it. Worst of all, it is almost impossible to explain in German without either sounding overly technical or losing an important part of the meaning.
This post is in English, but it is written mostly for readers in Germany, Austria, and Switzerland, and more broadly for people from continental Europe. I move between “German-speaking” and “continental European” a bit. They are not the same thing, of course, but many continental European countries share a civil-law background that differs sharply from the English common-law and equity tradition. The words differ by language and jurisdiction, but the conceptual gap I am interested in shows up in similar ways.
In US usage, the word "equity" appears everywhere:
- real estate: "build equity in your home"
- startups: "employees get equity"
- public markets: "equity investors"
- private deals: "take an equity stake"
- personal finance: "negative equity in a car"
- social policy: "diversity, equity, and inclusion"
If you try to translate this into German, you have to choose among several words. Of course we can say Eigenkapital, Beteiligung, Anteil, Vermögen, Nettovermögen, or sometimes Substanzwert. In narrow contexts, each can be correct, but none of them carries the full concept. I find that gap interesting, because language affects default behavior and how we think about things.
One Word, Shared Meanings
In English, "equity" often carries multiple things at once. I believe the following are the most important:
- A legal-fairness dimension: historically tied to equity in law
- A financial-accounting dimension: residual value after debt
- A cultural dimension: ownership as a path to wealth and agency
If you open Wikipedia, you will find many more distinct meanings of equity, but they all relate to much the same concept, just from different angles.
German, on the other hand, can express each of these layers precisely, including the subtleties within each, but it uses different words and there is no common, everyday umbrella word that naturally bundles all three.
When a concept has one short, reusable, positive word, people can move it across contexts very easily. When the concept is split into technical fragments, it tends to stay technical, and people do not necessarily think of these things as related at all in a continental European context.
How Equity Got Here
What is hard for Europeans to understand is how the financial meaning of equity came about, because it did not appear out of nowhere. The word's original meaning is fairness or impartiality, and it reached modern English via Old French and Latin (équité / aequitas).
Historically, English law had separate traditions: common law courts and courts of equity (especially the Court of Chancery). Equity in law was about fairness, conscience, and remedies where strict common law rules were too rigid. Take mortgages for instance: in older English practice, a mortgage could transfer title as security. Under strict common law, missing a deadline could mean losing the property entirely. Courts of equity developed the "equity of redemption": a borrower could still redeem by paying what was owed.
That equitable interest became foundational for how ownership and claims were understood. In finance, equity came to mean not just a number, but a claim: the residual owner's stake after prior claims are satisfied.
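The financial layer of the word reduces to one line of arithmetic: the residual stake is assets minus prior claims. A minimal sketch, with made-up numbers purely for illustration:

```python
def equity(assets: float, liabilities: float) -> float:
    """The residual claim: what is left for the owner
    after all prior claims (debt) are satisfied."""
    return assets - liabilities

# "Build equity in your home": market value minus outstanding mortgage.
print(equity(assets=400_000, liabilities=250_000))  # 150000

# "Negative equity in a car": the loan exceeds the resale value.
print(equity(assets=12_000, liabilities=15_000))    # -3000
```

The same subtraction sits behind Eigenkapital on a balance sheet; what the English word adds is the cultural framing wrapped around that residual.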
The European Split
German and continental European legal development took a different path. Civil law systems did not build the same separate institutional track of "equity courts" versus common law courts. Fairness principles absolutely exist, but inside the codified system, not as a parallel jurisdiction with its own language and mythology.
As a result, German vocabulary has many different words, and they are highly domain-specific. There are equivalents in other languages, and to some degree they exist in English too:
- company balance sheet: Eigenkapital
- ownership share: Beteiligung, Anteil
- unrealized asset value: stille Reserven
- household wealth: Vermögen, Nettovermögen
- investment action: Anlage , Investition
- residual net assets: Reinvermögen
This precision is useful for legal drafting and accounting. But it also means we have less of the shared mental package that many Americans get from "equity": own a piece, carry risk, participate in upside, build wealth.
Schuld Is Not Just Debt
There is another linguistic oddity worth noting: in German, "Schuld" can mean both debt/liability and guilt, and I think that too has shaped how we think about equity.
"Schuld" in everyday language makes debt feel more morally charged than it does in the US. Indebtedness is often framed as a burden, and it is not thought of as a tool at all.
US financial language, by contrast, often frames debt more instrumentally and pairs it with an explicit positive counterpart: equity. Equity is what is yours after debt, what can appreciate, what can be transferred, and what can give you control.
In American financial language, debt is not as morally burdened, and equity is more than the absence of debt: it is the positive claim on the balance sheet — ownership, optionality, control, and upside.
Practical Matters
If you grew up with a German-speaking framing, many US statements around equity can sound ideological or naive. From a continental European lens, they can sound like imported jargon, or simply hollow. But if we ignore the concept, we lose something practical:
- We discuss salaries in cash terms but under-discuss ownership.
- We treat employee participation as exotic instead of normal.
- We under-explain compounding and intergenerational transfer.
- We miss a language for talking about agency through ownership.
I am not saying German-speaking Europeans are incapable of this mindset. Obviously we are not. But we clearly tend to think about these things differently.
Normalize Equity
When you hear “equity,” it helps to think of it as a rightful stake. Historically, it is connected to fairness and the recognition of a claim where strict rules would be too rigid. Financially, it is the part that remains after prior obligations. Culturally, it is something that can grow into control, agency, and upside.
That is not a perfect definition, but it captures why the term is so sticky in American discourse. It combines a present claim with a future possibility. It is not just what remains after debt; it is the part that can grow, compound, and give you agency.
If Europeans want to talk more seriously about entrepreneurship, retirement, housing, and wealth building, we would benefit from a stronger everyday vocabulary for exactly this idea. We need a longing for equity so that ownership does not remain something for founders, lawyers, accountants, and wealthy families, but becomes a normal part of how people think about work, risk, and their future.
Not because we should imitate America, but because this mental model helps people make clearer decisions about ownership, incentives, and long-term agency. For Europe, that shift feels long overdue.
-