to read (pdf)
- Letting AI Actively Manage Its Own Context | 明天的乌云
- Garden Offices for Sale UK - Portable Space
- Cord: Coordinating Trees of AI Agents | June Kim
- Style tips for less experienced developers coding with AI · honnibal.dev
- Haskell for all: Beyond agentic coding
- March 01, 2026
-
🔗 Textualize/textual The Juicy Release release
Small update
[8.0.1] - 2026-03-01
Fixed
- DirectoryTree runs more operations in a thread to avoid micro-freezes
Changes
- Some tweaks to garbage collection to reduce gc time #6402
-
🔗 r/Yorkshire Ukrainian refugee describes moving journey to her new life in the North East rss
Oksana Halchenko is one of 90 Ukrainians living in Redcar and Cleveland under the Homes for Ukraine scheme, which has supported nearly 200 arrivals since the Russian invasion began in February 2022. She was helped by Andrew Parker, an officer in the council’s Housing Team who has taken a leading role in the Homes for Ukraine Scheme. She said: "I left hospital in my city, Mariupol, on crutches. By luck, I made it to another small town but which was still under the Russian occupation. submitted by /u/coffeewalnut08
[link] [comments]
-
🔗 Probably Dance Word Map – A Game About Hill Climbing and Stepping Stones rss
I really like Semantle because I noticed that progress is similar to the description in Why Greatness Cannot Be Planned: The Myth of the Objective by Kenneth Stanley. The objective function is very clear and you can hill-climb your way to the top, but hill-climbing is actually very difficult in high-dimensional spaces, so you need to explore to find stepping stones and then exploit based on those stepping stones. I have long looked for a game where I can practice that behavior and Semantle almost was that game, so I decided to evolve it to make it easier to practice doing the right behaviors.
The result is Word Map. Give it a play.
At a high level it's a daily word game similar to Wordle. You start off by guessing randomly and trying to find patterns to what the target word should be. When you get close to the goal the titular word map pops up. This map addresses the biggest flaw of Semantle: The protracted grind to get to the end. With the help of the map, Word Map is closer in difficulty to Wordle. You won't guess right in six tries obviously, but here I solve the puzzle on most days, where in Semantle I don't.
Development - Trying out Vibe Coding
This was my first non-trivial project that's mostly vibe-coded. Probably 98% of the code is written by Claude Code. It was overall very pleasant. I especially like how easy it is to polish things. E.g. while doing the final edits on this blog post I thought "man the text should really increase and decrease in size as you zoom in and out, but it also shouldn't be too big when you keep on zooming out" so I just asked Claude to do it and it was done in seconds. I could have also done it myself in minutes, but there's so much less friction when you can just tell an AI to do things. You end up doing more iterations.
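The zoom-dependent text sizing described above boils down to a clamp. A hypothetical Python sketch (the project is actually TypeScript, and all names and numbers here are made up):

```python
def label_font_size(zoom, base=14.0, min_size=10.0, max_size=28.0):
    # Scale label text with the zoom level, but clamp it so labels
    # neither balloon when you zoom in nor dominate the view when
    # you keep on zooming out.
    return max(min_size, min(max_size, base * zoom))

print(label_font_size(1.0))   # base size at default zoom
print(label_font_size(10.0))  # clamped to max_size
print(label_font_size(0.1))   # clamped to min_size
```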
The biggest win is that I don't know CSS and don't know React and don't know Typescript and don't know what to use to draw a 3D map in a browser, but I am still able to ship a web app using those technologies. And I learned a bit about React and Typescript while I was at it, which are things I had been meaning to do for a while. (I'll never learn CSS though. I've tried often enough and have concluded that I'll just leave that one to the AIs)
It is still flawed and I like to think my programming skills are still relevant, at least for a few more months as this thing gets better. I intervened in a few places:
- Claude wanted to use cosine similarity as the distance metric (meaning normalize, then do the dot product). This is the most obvious thing to do, but the size of the vectors contains meaning, so you really want a distance metric that takes that into account. I previously had good experiences with the Tanimoto coefficient, so I used that instead.
- It stumbled over the map selection logic a few times. First it wrote a raycast, which makes sense. There are two ways of selecting with raycasts, "cast with a radius and select the first hit point" or "select the point closest to the ray", both of which have annoying edge cases. So I asked it to implement "first snap to the ground, then select the nearest," which is better but also has a few edge cases. So I started writing a very detailed description of how the approaches should be combined, until I realized that this was silly and it would be faster for me to make the last little changes myself. Another problem was that this evolved over several sessions, so I'd point out a selection bug to Claude and it would have forgotten why the code was the way it was and say "oh, this is complicated, let me just delete all this and make it a simple raycast". Maybe it just needed better comments, a thing that Claude does not yet do on its own. But overall this actually went fairly well, and I still got it done faster and to a higher quality than if I had written it all myself.
- It couldn't figure out a bug where the input field kept on losing keyboard focus. Turns out it was disabling the input field while the guess is being submitted. This makes sense in case the server is laggy, but also kills keyboard focus. I had to step in and debug this one because Claude couldn't figure it out after I asked it three or four times.
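The similarity-metric tradeoff in the first bullet can be sketched like this (a hypothetical Python sketch, not the project's actual TypeScript code): the Tanimoto coefficient, generalized to real-valued vectors, keeps the magnitude information that cosine similarity throws away.

```python
import math

def cosine_similarity(a, b):
    # Normalize, then dot product: magnitude information is thrown away.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def tanimoto(a, b):
    # Generalized Tanimoto/Jaccard for real-valued vectors:
    # T(a, b) = a.b / (|a|^2 + |b|^2 - a.b)
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sum(x * x for x in a) + sum(y * y for y in b) - dot)

a = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]  # same direction as a, twice the length
print(cosine_similarity(a, b))  # ~1.0: cosine ignores the scaling
print(tanimoto(a, b))           # ~0.667: the magnitude difference is penalized
```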
But those are details, and Claude figured out many more tricky details in this than I did. The mistakes it makes now still follow clear patterns but the error-area is much smaller than it was a year ago. A year ago it would get itself confused halfway through a long piece of code. Now it gets itself confused when there are complex interactions of things.
This project is big enough that you still have to come up with a plan and make Claude do the work step by step. It was able to create a website that looked and worked like Semantle in one go (with no map). But then I still needed backend functionality to batch-generate puzzles, to deploy this to a server, to enable caching, and to test performance (I still expect the server to go down if this becomes popular, but it's fast enough that I'm not worried about hundreds of users). It helped with all of this, but you have to ask it. I also had to ask it to clean up messes that it made by copy+pasting the same code five times. I noticed this because I asked it to fix a bug (the "I give up" button would still show even though you had already solved a puzzle), but I was still able to trigger the bug by doing something slightly different. Then I finally looked at the code and noticed that there were five places doing the same thing, three of which still had the bug. It can clean that up much quicker than I can, but you have to ask it to do so. I have seen people post prompts online that tell Claude to clean up duplication on its own, and I'll have to figure out how to set that up.
This was not a quick project even though I only wrote like 2% of the code. It took three months before it was good. Since I started with an existing game, this was mostly an exercise in design and taste.
- I had a sense early on that I wanted to somehow visualize which directions you need to go. I thought of drawing arrows next to guesses but then you immediately want the landscape in which the arrows make sense.
- I wanted a rugged mountain landscape made out of words where you can see when nearby words climb up walls or lead down into valleys. The target word would be the peak of the mountain. But UMAP just wouldn't give me layouts that made sense, and I had a hard time trying to nudge it to arrange everything around one central point. I would have had to design every single mountain by hand. (or come up with a way to get an AI to design every single mountain by hand)
- In the end I had to settle for a 1D UMAP projection and arrange all the words in a circle, with the distance to the center given by the similarity metric. It means you lose most of the semantic meaning of the words, but it's much easier to understand and play. You still get some neat effects out of it where as you explore around the mountain you discover new aspects of the target word.
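That circular layout can be sketched roughly as follows (hypothetical Python; function and parameter names are mine, not the game's): the 1D UMAP coordinate picks the angle around the circle, and similarity to the target sets the distance to the center.

```python
import math

def place_word(umap_1d, umap_min, umap_max, similarity, max_radius=100.0):
    """Place a word on the circular map: the 1D UMAP coordinate picks the
    angle around the circle, and similarity to the target word sets the
    distance to the center (the target itself sits at the center)."""
    angle = 2 * math.pi * (umap_1d - umap_min) / (umap_max - umap_min)
    radius = (1.0 - similarity) * max_radius  # similarity assumed in [0, 1]
    return radius * math.cos(angle), radius * math.sin(angle)

# A word halfway around the circle, halfway similar to the target:
x, y = place_word(5.0, 0.0, 10.0, similarity=0.5)
```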
But once again, overall this was a very pleasant experience. I can definitely tell that I'm running into some limits of vibe-coding already, but not in a hard way. There is always a workaround where you break down the problem a bit more. I think if I didn't have Claude Code, I would never have gotten this project done. I often just have an hour in the evening or thirty minutes in the morning, and there is no way that I'd attempt something as intimidating as "figure out how to draw a map of word-embeddings" in that little time. But Claude just goes for it and gets it surprisingly right on the first try, and then you can always iterate on the details in the following days.
Mission Accomplished?
Did I actually accomplish my goal of making a game about hill climbing and stepping stones? Not fully. But there are a few things you can learn from playing this game:
- Hill climbing works really well in high-dimensional spaces.
- But hill climbing is also really difficult in high-dimensional spaces. At least when they're visualized in low dimensions. It's really hard to figure out the gradient that would allow you to take the next step, and to come up with a word from that.
- Stepping stones help enormously. I really like how exploring the map worked out. At some point you get a bunch of words that have an aspect of the target word that you haven't incorporated at all, and that gets the mental gears turning.
- Near the end of the game it's still hard to find stepping stones, mostly because I projected the high-dimensional word vector down to 1D. The "Smart Hint" feature is a bit of a clumsy way out of that, because it tells you the missing parts of the high-dimensional vector. It usually makes it pretty easy to come up with the target word. I wish I had come up with a less-big-hammer to solve that. (my best idea for that is to somehow use higher-dimensional UMAP and to draw the other dimensions as other colors or other symbols or something)
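The hill-climbing dynamic in the bullets above can be illustrated with a toy sketch (my own example, not from the game): greedy hill climbing only ever moves to an improving neighbor, and in high-dimensional spaces most random directions barely help, which is exactly what makes stepping stones valuable.

```python
import random

def hill_climb(score, start, neighbors, steps=1000):
    # Greedy hill climbing: move to the best-scoring neighbor,
    # stop as soon as no neighbor improves (a local optimum).
    current = start
    for _ in range(steps):
        best = max(neighbors(current), key=score)
        if score(best) <= score(current):
            break
        current = best
    return current

# Toy 10-dimensional space: score is negative squared distance to a target.
random.seed(0)
target = [random.uniform(-1, 1) for _ in range(10)]
score = lambda v: -sum((a - b) ** 2 for a, b in zip(v, target))

def neighbors(v, k=8, step=0.1):
    # k small random perturbations of the current point.
    return [[x + random.uniform(-step, step) for x in v] for _ in range(k)]

result = hill_climb(score, [0.0] * 10, neighbors)
# The score never decreases along the climb, but progress per step is
# small because most random perturbations point in unhelpful directions.
```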
I still think this is a good game for practicing the approach for solving hard problems. To work as a game it had to be difficult in some aspects, but not too difficult, and easy in other aspects, but not too easy. I wanted a game where a human would have to use Novelty Search (the algorithm by Kenneth Stanley, who wrote the "Why Greatness Cannot Be Planned" book I referred to at the beginning), and I think I got that. And maybe it can serve as a stepping stone for someone else to make an interesting game that further explores gameplay that requires this kind of thinking.
-
🔗 r/Yorkshire The strictest village in England 😮❤️ rss
submitted by /u/JustYouTryItLad
[link] [comments]
-
🔗 r/LocalLLaMA Breaking : Today Qwen 3.5 small rss
submitted by /u/Illustrious-Swim9663
[link] [comments]
-
🔗 r/reverseengineering Reverse engineering “Hello World” in QuickBasic 3.0: bloat & bytecode from 1987 AD rss
submitted by /u/alberto-m-dev
[link] [comments] -
🔗 r/Yorkshire Yorkshire Cat Rescue sees rise in abandoned cats as costs increase rss
submitted by /u/Kagedeah
[link] [comments]
-
🔗 r/wiesbaden Opel House with Coffeeshop 1921-1986 in Wiesbaden rss
submitted by /u/real_human_being78
[link] [comments] -
🔗 sacha chua :: living an awesome life Emacs Carnival Feb 2026 wrap-up: Completion rss
Check out all the wonderful entries people sent in for the Emacs Carnival Feb 2026 theme of Completion:
- Completion (in Emacs hledger) — Arjen Wiersma
- Emacs Carnival: Completion — Where Are The Wise Men? - Mike Hostetler
- File name completion in Emacs - Dmitry Dolzhenko
- Emacs Carnival: "Completion" - Christian Cleberg
- An Alternate Completing Read - Howard Abrams
- Guide to Modern Emacs Completion: vertico, corfu & friends - jneidel
- Sorting completion candidates, such as sorting Org headings by level - Sacha Chua
- Emacs Carnival: Completion – Eric MacAdie
- CHAT emacs completions - George Jones
- Emacs completion and handling accented characters with orderless - Sacha Chua
- Using speech recognition for on-the-fly translations in Emacs and faking in-buffer completion for the results - Sacha Chua
- Exploring large amounts of data with completion - Omar Antolin
- Emacs Carnival: Completion - Neil
- Emacs Carnival: Org Mode Completions - Elsa Gonsiorowski
- Completion of hugo links in Emacs - jneidel
- Emacs Carnival: Completion in Beancount Plain Text Accounting - John Rakestraw
Also, this one about completing the loop:
Sometimes I miss things, so if you wrote something and you don't see it here, please let me know! Please e-mail me at sacha@sachachua.com or DM me via Mastodon with a link to your post(s). If you like the idea but didn't get something together in time for February, it's never too late. Even if you come across this years later, feel free to write about the topic if it inspires you. I'd love to include a link to your notes in Emacs News.
I added a ton of links from the Emacs News archives to the Resources and Ideas section, so check those out too.
I had a lot of fun learning together with everyone. I already have a couple of ideas for March's Emacs Carnival theme of Mistakes and Misconceptions (thanks to Philip Kaludercic for hosting!), and I can't wait to see what people will come up with next!
You can e-mail me at sacha@sachachua.com.
-
🔗 r/york Grr… rss
submitted by /u/LookOverall
[link] [comments]
-
🔗 r/Leeds Japanese lessons rss
I’m hoping to go to Japan in a few years and thought it would be good to move away from Duolingo as a way to learn a language. Does anyone know where I could do some night classes or maybe a part time course in conversational Japanese?
submitted by /u/fatgirlseatmorev20
[link] [comments] -
🔗 r/york Best veggie roast rss
Have been searching for a good vegetarian roast in York for a while without much luck. Anybody got any recs please?
submitted by /u/Yorkywelsh
[link] [comments] -
🔗 r/Leeds Short hair on women hairdressers rss
I recently moved to Leeds (near city centre) and was wondering what are good hairdressers that could cut and shave a short hair hairstyle on a woman? What I've noticed is that many hairdressers can only do long hair and can't shave the sides well and if i go to barbers most of them can shave the sides well but can't cut the top nicely. I'll add a picture of more or less what hairstyle i have. Ideally the suggestions dont cost an arm and a leg 🫣 thank you!
submitted by /u/No_Persimmon51
[link] [comments] -
🔗 Register Spill Joy & Curiosity #76 rss
This week I found myself writing code by hand again.
Not a lot, maybe ten, twenty lines in total, which is far less than what I had Amp produce, but still: actual typing out of code. Miracle I didn't get any blisters.
At our Amp meetup in Singapore I mentioned this on stage and someone in the audience cheekily asked: "You just told us that these agents can now work well when you give them a longer leash and yet you wrote code by hand, how come?"
The answer can probably be boiled down to something that sounds very trite: to build software means to learn.
When you build a new piece of software, you learn what the software is actually supposed to do, how it should do it, and why your pre-building ideas now seem naive. (If you're thinking "well, can't we figure out all of that before we build" go ahead and type "waterfall software" into Google.)
Right now, at Amp, we're building something new. We don't yet know everything about this thing we're building. We don't know how it should behave in this case, or in that case, how the runtime behaves here, or over there.
Writing code by hand is one way (!) to answer these questions, because you truly bump into what you don't know when you have to type something out. You find yourself picking an array and writing down that the type for `clients` is `Client[]`, and then you wonder: wait a second, do we even need to allow for multiple clients to be connected at the same time? Why? When? No, we actually don't, it should be `client: Client`. An agent is happy to pick an answer for you -- without telling you. It will just write the code.
That might not be a problem. If you're not building something new or if you don't even need to learn how the software works (which is probably more often the case than you might think) or if you already have a good mental model, let the agent rip. In fact, I'd even say that in the majority of cases it's not a problem, because most software development is not building something new.
But if you need to learn, so you can make better engineering tradeoffs and product decisions, it seems to me that one of the most practical ways to do that might just still be to get your hands dirty. Let's see how long that lasts.
-
Ladybird adopts Rust, with help from AI. Now that's engineering: "Our first target was LibJS, Ladybird's JavaScript engine. […] This was human-directed, not autonomous code generation. I decided what to port, in what order, and what the Rust code should look like. It was hundreds of small prompts, steering the agents where things needed to go. After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns." And also this: "If you look at the code, you'll notice it has a strong 'translated from C++' vibe. That's because it is translated from C++. The top priority for this first pass is compatibility with our C++ pipeline." That's how you build software: step by step, and choosing tradeoffs carefully. And that, I'm rather sure, won't go away.
-
Talking about ports: Cloudflare used "one engineer and an AI model" and "$1,100 in tokens" to create a drop-in Next.js replacement built on top of Vite. The sections on why this was a good fit for AI and the approach they took are very interesting. So is this point at the end: "It's not clear yet which abstractions are truly foundational and which ones were just crutches for human cognition. That line is going to shift a lot over the next few years. But vinext is a data point. We took an API contract, a build tool, and an AI model, and the AI wrote everything in between. No intermediate framework needed. We think this pattern will repeat across a lot of software. The layers we've built up over the years aren't all going to make it." Let's see whether frameworks like Next.js or vinext will still be useful in a few years. Oh and of course there's drama between Cloudflare and Vercel so Vercel shot back.
-
Man, I had this link here, to Anthropic's Statement from Dario Amodei on our discussions with the Department of War, saved so I can write about it in this edition, but good lord, there's now fifteen other things to link to. Just type "Anthropic" or "OpenAI" into Google News. Or don't, there's a lot of noise and dust in the air and if you aren't on the inside it seems hard to get an accurate impression of what happened (or is happening). What I did find very interesting, regardless of surrounding context, was this post by Palmer Luckey.
-
This really was as good as everyone said it is: The Very Hungry Caterpillar, an examination of Eric Carle's famous book on the Looking At Picture Books substack. I highly recommend you read this. What a wonderful way to look at books, at design, at the world. It's also funny.
-
This one too: How to Make a Living as an Artist. There are many things you can get out of this post if you've ever built and shipped something, regardless of whether that was a painting, some words, code, or something else.
-
Justin Duke's scattered thoughts on LLM tools: "it seems like the logical endpoint is infinite and perfectly abstracted sandboxes with previewing, isolation, and very tight feedback loops. But right now the largest gap between where we and most other organizations are and that brilliant future is not on the AI side but on all the calls coming from inside the house that make it difficult to sandbox a mature application." Question is: does "mature application" mean the same thing it did a year ago?
-
This Eileen Gu clip made the rounds recently and I find it incredibly fascinating. Over the last ten, fifteen years I made several attempts to get into meditation, read quite a lot about it, including some books, and now know that (1) I am not the thoughts that pop up in my head (2) my brain is a seemingly random thought-generator (3) you can influence what thoughts it generates by practicing (4) I am the thoughts I repeatedly think. The ability to modify what you think is incredible (as I wrote in admiration here) and I wish I could do it as effortlessly as Eileen Gu describes here.
-
Logan Kilpatrick: "The compute bottleneck is massively under appreciated. I would guess the gap between supply and demand is growing single digit % every day." If you've never really dug into this topic, I recommend this podcast with Dylan Patel. He's a smart guy and if I had listened to him all the way back in fall of 2024, when I first heard of him, I would've bought SK Hynix and Sandisk stock and made a lot of money.
-
Lovely and well-made: An interactive intro to quadtrees. Makes me want to build something with quadtrees. Notable: how it explains use cases for quadtrees, besides the very obvious one of, well, a map.
-
What Claude Code Actually Chooses. Interesting: "We pointed Claude Code at real repos 2,430 times and watched what it chose. No tool names in any prompt. Open-ended questions only. […] The big finding: Claude Code builds, not buys. Custom/DIY is the most common single label extracted, appearing in 12 of 20 categories (though it spans categories while individual tools are category-specific)." Make sure to click through to the full report to see how they came up with these numbers. And while it's interesting, I'm also not sure whether it matters that much outside of an experiment.
-
The left is missing out on AI. I'm not sure whether I'd say "the left", but when I read this I couldn't help but say "oh boy" out loud when it reminded me that people still talk about "stochastic parrots" and "spicy autocomplete" and "these models can't think".
-
The Hardest Lessons For Startups To Learn, a vintage Paul Graham essay from 2006 that I somehow came across this week. I'm not sure whether I've read it before, but I must've because I nodded to everything he's saying here. Or maybe it's the last fifteen years, give or take, of working in startups. Really good.
-
Times are changing, there's a lot of things to adapt, including interviewing: How We Hire Engineers When AI Writes Our Code. "I'll hand you a small problem - one that we've solved ourselves - usually from a bare-bones Figma file or a short spec. This might be a simple flow or a lightweight feature that would ordinarily take a day or two to build and ship. But for this exercise, you'll have just a few hours--and that's not enough time to make a polished product. I want to see how you work within constraints. You're encouraged to use AI to solve the problem. Whatever tools you would want to use as an employee, use them during the interview. We'll give you a Claude, Codex, Cursor, or Gemini license if you need one. I want to see you balance LLM-generated code against your own judgment.
But make no mistake--even if you aren't writing the code, you own the output." I haven't formally interviewed engineers in over a year but I think this is how I'd do it too.
-
Really, really, really good and thought-provoking: Nobody knows how the whole system works.
-
Phil Eaton started a company: "I quit my job at EnterpriseDB hacking on PostgreSQL products last month to start a company researching and writing about software infrastructure. […] This company, The Consensus, will talk about databases and programming languages and web servers and everything else that is important for experienced developers to understand and think about. It is independent of any software vendor and independent of any particular technology."
-
"Cognitive debt, a term gaining traction recently, instead communicates the notion that the debt compounded from going fast lives in the brains of the developers and affects their lived experiences and abilities to 'go fast' or to make changes. Even if AI agents produce code that could be easy to understand, the humans involved may have simply lost the plot and may not understand what the program is supposed to do, how their intentions were implemented, or how to possibly change it."
-
Ben Wallace: The happiest I've ever been. I've had quite a few conversations with programmer friends over the last year that ended with someone wondering: do I still enjoy this? Is this the programming I want to do? Some answer with yes, others with no. I understand both answers and the "code was never important" comments are not helpful to those who really, really enjoyed writing code. If you're in sales, that might be because you love negotiation, or the product you're selling, or making money, or, hey, because you love talking to people, love finding out what their problems are, love to visit them. If your job suddenly changed from that to never talking to a human again, I bet you'll find it hard to take solace in "it was never about the people, it was always about closing the deal."
-
747s and Coding Agents. Thoughts on learning and getting better and what coding agents might take away from us. Very good.
-
Interesting: Building An Elite AI Engineering Culture In 2026. This isn't a guide for how to achieve an "elite" culture, I'd say, but more an examination. Interesting to read through and compare. For example, these two points: "The most consequential organizational change in 2025-2026 is the dissolution of the design-engineering boundary at top companies" and "No design-to-dev handoff. No PM-to-engineering handoff. No QA as a separate gate. Everyone ships." -- that describes what we do at Amp pretty well. Tim and Brett, our "designers" at Amp, do design, but they also ship what they design and ship other code and debug distributed systems stuff. I don't think I ever saw a classic "design Figma" at Amp. We also don't have PMs. I'm probably the closest thing we have to a PM, but I have a very different title and am the #2 contributor in code (Quinn is #1). Last year, when we started Amp, we started working this way because it was natural with just two senior people in a repository (Quinn and myself). Sure, push to main, we're all grown-ups. But then over the year, we added more and more people and kept this way of working, and now I'm pretty certain that it's because of AI that we work this way. I need to write more about that.
-
Murat Demirbas on the End of Productivity Theater. This is something I've also wondered about a lot over the past few years, even, say, pre-AI: "I remember the early 2010s as the golden age of productivity hacking. Lifehacker, 37signals, and their ilk were everywhere, and it felt like everyone was working on jury-rigging color-coded Moleskine task-trackers and web apps into the perfect Getting Things Done system. So recently I found myself wondering: what happened to all that excitement? Did I just outgrow the productivity movement, or did the movement itself lose steam?" His analysis seems spot-on.
-
Now this is a great thought experiment: "There's a well-known phenomenon in the facial aesthetics literature whereby 'average faces' (that is, faces formed by superimposing many faces atop one another) tend to be more attractive than the average person. […] Recently, I have begun to wonder if LLM-writing faces a similar challenge."
Wrote code by hand and wondered how the hell that happened? My friend, you need to subscribe:
-
🔗 r/LocalLLaMA The U.S. used Anthropic AI tools during airstrikes on Iran rss
Hours after announcing that the federal government would cease using artificial intelligence tools developed by the tech company Anthropic, U.S. President Trump utilized those very tools to launch a massive airstrike against Iran. Sources familiar with the matter confirmed that command centers in various locations, including U.S. Central Command (CENTCOM), have been using Anthropic’s Claude AI tool. Despite escalating tensions between the company and the Pentagon, the command continued to employ the tool for intelligence assessments, target identification, and combat simulations, highlighting the deep level of involvement of AI tools in military operations. The U.S. government and Anthropic have been in a dispute for months over how the Pentagon utilizes its AI models. On Friday, President Trump ordered all agencies to stop cooperating with the company, and the Department of Defense also determined that the firm poses a security threat and a risk to its supply chain.
submitted by /u/External_Mood4719
[link] [comments]
-
- February 28, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-02-28 rss
IDA Plugin Updates on 2026-02-28
Activity:
- DeepExtractIDA
- dotfiles
- 5a851624: update
- IDA-pro-mcp-Optimize
- bc9e0ec8: feat(core): comprehensive MCP plugin optimization
- ida_domain_mcp
- a0e2b85f: upload binary
- IDAPluginList
- d718cb90: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
-
🔗 r/reverseengineering InnoExtractor 2026 v11.5.1.172 released rss
submitted by /u/Gomedas
[link] [comments] -
🔗 r/Harrogate Petrol Station Vacuum rss
Does anybody know of any petrol stations in Harrogate which have a vacuum cleaner which I can use to clean out the car? Any with a self cleaning place in general would be perfect!
submitted by /u/SignificantCaptain95
[link] [comments] -
🔗 r/Leeds Filming Locations in 'Fat Friends' rss
Ey up!
Watching 'Fat Friends' on Netflix for the first time, loving spotting the different filming locations.
Would love to hear your Fat Friends stories or facts. Did you ever see it being filmed? Was your mate an extra?
Trying to work out where:
- Norma's house is - thought it was one of the Normans in Kirkstall but the street is too short.
- S4 Ep3 - which building Norma is standing on. Must be LGI/uni as it's right next to civic hall and town hall.
submitted by /u/CloudOrigami
[link] [comments] -
🔗 r/Yorkshire Where's best for a day out. 2 adults and a dog. rss
Hi all, Wanting to go out tomorrow so looking for suggestions on places to go. It will be myself, my boyfriend and our dog. We like food and unusual things. Nowhere too busy as the dog doesn't like it.
submitted by /u/Altruistic-Cattle631
[link] [comments] -
🔗 r/york New earswick rss
What’s happening with all the flowers in new earswick opposite the fish and chip shop ??
submitted by /u/Radiant-Barracuda-21
[link] [comments] -
🔗 r/Leeds Horsforth Station Road rss
Station Road blocked off by the police earlier. Any insights anyone?
submitted by /u/01130161
[link] [comments] -
🔗 r/Yorkshire What’s the most only in Yorkshire experience you’ve ever had? rss
submitted by /u/CloudBookmark
[link] [comments] -
🔗 r/Yorkshire After a couple of hours of rain, the sun came out over our glorious county rss
As a warm up before tackling Pen-y-ghent tomorrow, we decided to have a wander round the Ingleton waterfall trail. Absolutely gorgeous scenery all along, marred only by the worst cup of tea ever from the little van halfway round! submitted by /u/r3tromonkey
[link] [comments]
-
🔗 r/reverseengineering Reverse Engineering LockHunter and Adding Visibility for Memory Mapped Files rss
submitted by /u/ahm3dgg
[link] [comments] -
🔗 r/reverseengineering Reverse engineering Zomato's Android app: Bypassing SSL pinning to find plain-JSON MQTT credentials rss
submitted by /u/Ok_Reveal_4284
[link] [comments] -
🔗 r/reverseengineering From Wi‑Fi Access to Root: Reverse Engineering a $50 CarPlay Dongle rss
submitted by /u/louisss-e
[link] [comments] -
🔗 r/york Question - York’s medieval character rss
How much of York’s medieval character is genuinely preserved versus restored or reconstructed?
submitted by /u/Less-Pair6695
[link] [comments] -
🔗 r/LocalLLaMA Qwen 3.5-35B-A3B is beyond expectations. It's replaced GPT-OSS-120B as my daily driver and it's 1/3 the size. rss
I know everyone has their own subjective take on what models are the best, at which types of tasks, at which sizes, at which quants, at which context lengths and so on and so forth.
But Qwen 3.5-35B-A3B has completely shocked me.
My use-case is pretty broad, but generally focuses around development tasks.
- I have an N8N server set up that collects all of my messages, emails, and alerts and aggregates them into priority-based batches via the LLM.
- I have multiple systems I've created which dynamically generate other systems, based on internal tooling I've built, in response to user requests.
- Timed task systems which use custom MCPs I've created; think things like "Get me the current mortgage rate in the USA", then having it run once a day with access to a custom browser MCP. (The only reason "custom" matters here is that it's self-documenting; this isn't published anywhere, so it can't be part of the training data.)
- Multiple different systems that require vision and interpretation of said visual understanding.
- I run it on opencode as well to analyze large code bases
This model is... amazing. It yaps a lot in thinking, but it's amazing. I don't know what kind of black magic the Qwen team pumped into this model, but it worked.
It's not the smartest model in the world, and it doesn't have all the knowledge crammed into its data set... But it's very often smart enough to know when it doesn't know something, and when you give it the ability to use a browser it will find the data it needs to fill in the gaps.
Anyone else having a similar experience? (I'm using Unsloth's Q4-K-XL, running on a 5090 and 3090 @ 100k context)
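The priority-batching step in a pipeline like the poster's could be sketched in a few lines; the LLM scoring itself is assumed and out of scope, and the function and field names here are illustrative, not the poster's actual setup:

```python
from collections import defaultdict

def batch_by_priority(items):
    """Group (priority, message) pairs into batches, highest priority first.

    A minimal sketch of priority batching; in the described setup an LLM
    would assign the priority scores before this step.
    """
    batches = defaultdict(list)
    for priority, message in items:
        batches[priority].append(message)
    # Highest numeric priority first
    return [batches[p] for p in sorted(batches, reverse=True)]

# Example: four items with numeric priorities 1 (low) to 3 (high)
inbox = [(1, "newsletter"), (3, "server down"), (2, "meeting moved"), (3, "disk full")]
# batch_by_priority(inbox) → [["server down", "disk full"], ["meeting moved"], ["newsletter"]]
```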
submitted by /u/valdev
[link] [comments] -
🔗 r/reverseengineering I built a headless UE5 client in pure Python rss
submitted by /u/MineRoutine2059
[link] [comments] -
🔗 r/LocalLLaMA OpenAI pivot investors love rss
submitted by /u/PaceImaginary8610
[link] [comments] -
🔗 r/Yorkshire Anyone know where these are from? rss
submitted by /u/Yorkshire_Pudding_2
[link] [comments] -
🔗 r/LocalLLaMA DeepSeek V4 will be released next week and will have image and video generation capabilities, according to the Financial Times rss
Financial Times: DeepSeek to release long-awaited AI model in new challenge to US rivals (paywall): https://www.ft.com/content/e3366881-0622-40a7-9c34-a0d82e3d573e submitted by /u/Nunki08
[link] [comments] -
🔗 hyprwm/Hyprland v0.54.0 release
A big (large), actually huge update for y'all!!
Special thanks to our HIs (Human Intelligences) for powering Hyprland development.
Breaking changes
- togglesplit and swapsplit have been removed after being long deprecated. Use layoutmsg with the same params instead.
- single_window_aspect_ratio and single_window_aspect_ratio_tolerance have been migrated from dwindle to layout, and are layout-agnostic
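For configs still using the removed dispatchers, the migration is mechanical; a sketch of a hyprland.conf keybind before and after (the key choices here are illustrative):

```ini
# Before (removed dispatchers):
# bind = SUPER, J, togglesplit,
# bind = SUPER, K, swapsplit,

# After: route through layoutmsg with the same params
bind = SUPER, J, layoutmsg, togglesplit
bind = SUPER, K, layoutmsg, swapsplit
```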
New features:
- cmakelists: add fno-omit-frame-pointer for tracy builds
- desktop/window: add stable id and use it for foreign
- gestures: add cursor zoom (#13033)
- groupbar: added group:groupbar:text_padding (#12818)
- hyprctl: add error messages to hyprctl hyprpaper wallpaper (#13234)
- hyprctl: add overFullscreen field in hyprctl window debug (#13066)
- hyprpm: add full nix integration (#13189)
- keybinds: add inhibiting gestures under shortcut inhibitors (#12692)
- main: add watchdog-fd and safe-mode options to help message (#12922)
- opengl: add debug:gl_debugging (#13183)
- start: add --force-nixgl and check /run/opengl-driver (#13385)
- start: add parent-death handling for BSDs (#12863)
Fixes:
- algo/dwindle: fix focal point not being properly used in movedTarget (#13373)
- algo/master: fix master:orientation being a noop
- algo/master: fix orientation cycling (#13372)
- algo/scrolling: fix crashes on destroying ws
- core/compositor: immediately do readable if adding waiter fails for scheduling state
- compositor: fix calculating x11 work area (#13347)
- config/descriptions: fix use_cpu_buffer (#13285)
- core/xwaylandmgr: fix min/max clamp potentially crashing
- decorations/border: fix damage scheduling after #12665
- desktop/layerRuleApplicator: fix an epic c+p fail
- desktop/ls: fix invalid clamp
- desktop/popup: fix use after free in Popup (#13335)
- desktop/reserved: fix a possible reserved crash (#13207)
- desktop/ruleApplicator: fix typo in border color rule parsing (#12995)
- desktop/rules: fix border colors not resetting. (#13382)
- desktop/workspaceHistory: fix tracking for multiple monitors (#12979)
- desktopAnimationMgr: fix slide direction
- dynamicPermManager: fix c+p fail
- eventLoop: various eventloopmgr fixes (#13091)
- example: fixup config for togglesplit
- fifo: miscellaneous fifo fixes (#13136)
- fix: handle fullscreen windows on special workspaces (#12851)
- hyprctl: fix layerrules not being applied dynamically with hyprctl (#13080)
- hyprerror: add padding & adjust for scale when reserving area (#13158)
- hyprerror: fix horizontal overflow and damage box (#12719)
- hyprpm: fix build step execution
- hyprpm: fix clang-format
- input: fix edge grab resize logic for gaps_out > 0 (#13144)
- input: fix kinetic scroll (#13233)
- keybinds: fix unguarded member access in moveWindowOrGroup (#13337)
- mainLoopExecutor: fix incorrect pipe check
- monitor: fix DS deactivation (#13188)
- multigpu: fix multi gpu checking (#13277)
- nix: add hyprland-uwsm to passthru.providedSessions
- nix: fix evaluation warnings, the xorg package set has been deprecated (#13231)
- pluginsystem: fix crash when unloading plugin hyprctl commands (#12821)
- protocols/cm: Fix image description info events (#12781)
- protocols/contentType: fix missing destroy
- protocols/contentType: fix typo in already constructed check
- protocols/dmabuf: fix DMA-BUF checks and events (#12965)
- protocols/syncobj: fix DRM sync obj support logging (#12946)
- renderer/pass: fix surface opaque region bounds used in occluding (#13124)
- renderer: add surface shader variants with less branching and uniforms (#13030)
- renderer: optimise shader usage further, split shaders and add more caching (#12992)
- renderer: fix dgpu directscanout explicit sync (#13229)
- renderer: fix frame sync (#13061)
- renderer: fix mouse motion in VRR (#12665)
- renderer: fix non shader cm reset (#13027)
- renderer: fix screen export back to srgb (#13148)
- systemd/sdDaemon: fix incorrect strnlen
- target: fix geometry for x11 floats
- tester: fix sleeps waiting for too long (#12774)
- xwayland/xwm: fix _NET_WM_STATE_MAXIMIZED_VERT type (#13151)
- xwayland/xwm: fix window closing when props race
- xwayland: fix size mismatch for no scaling (#13263)
Other:
- Nix: apply glaze patch
- Nix: re-enable hyprpm
- Reapply "hyprpm: bump glaze version"
- Revert "hyprpm: bump glaze version"
- algo/scrolling: adjust focus callbacks to be more intuitive
- animation: reset tick state on session activation (#13024)
- animationMgr: avoid uaf in ::tick() if handleUpdate destroys AV
- anr: open anr dialog on parent's workspace (#12509)
- anr: remove window on closewindow (#13007)
- buffer: add move constructor and operator to CHLBufferReference (#13157)
- cm: block DS for scRGB in HDR mode (#13262)
- cmake: bump wayland-server version to 1.22.91 (#13242)
- cmake: use OpenGL::GLES3 when OpenGL::GL does not exist (#13260)
- cmakelists: don't require debug for tracy
- compositor: guard null view() in getWindowFromSurface (#13255)
- config: don't crash on permission with a config check
- config: return windowrulev2 layerrulev2 error messages (#12847)
- config: support no_vrr rule on vrr 1 (#13250)
- core: optimize some common branches
- decoration: take desiredExtents on all sides into account (#12935)
- dekstop/window: read static rules before guessing initial size if possible (#12783)
- desktop/LS: avoid creating an invalid LS if no monitor could be found (#12787)
- desktop/ls: clamp layer from protocol
- desktop/popup: avoid crash on null popup child in rechecking
- desktop/popup: only remove reserved for window popups
- desktop/reservedArea: clamp dynamic types to 0
- desktop/reservedArea: clamp to 0
- desktop/rules: use pid for exec rules (#13374)
- desktop/window: avoid uaf on instant removal of a window
- desktop/window: catch bad any cast tokens
- desktop/window: go back to the previously focused window in a group (#12763)
- desktop/window: remove old fn defs
- desktop/window: track explicit workspace assignments to prevent X11 configure overwrites (#12850)
- desktop/window: use workArea for idealBB (#12802)
- desktop/windowRule: allow expression in min_size/max_size (#12977)
- desktop/windowRule: use content rule as enum directly (#13275)
- desktop: restore invisible floating window alpha/opacity when focused over fullscreen (#12994)
- event: refactor HookSystem into a typed event bus (#13333)
- eventLoop: remove failed readable waiters
- framebuffer: revert viewport (#12842)
- gestures/fs: remove unneeded floating state switch (#13127)
- hyprctl: adjust json case
- hyprctl: bump hyprpaper protocol to rev 2 (#12838)
- hyprctl: remove trailing comma from json object (#13042)
- hyprerror: clear reserved area on destroy (#13046)
- hyprpm,Makefile: drop cmake ninja build
- hyprpm: bump glaze version
- hyprpm: drop meson dep
- hyprpm: exclude glaze from all targets during fetch
- hyprpm: use provided pkgconf env if available
- i18n: add Romanian translations (#13075)
- i18n: add Traditional Chinese (zh_TW) translations (#13210)
- i18n: add Vietnamese translation (#13163)
- i18n: add bengali translations (#13185)
- i18n: update russian translation (#13247)
- input/TI: avoid UAF in destroy
- input/ti: avoid sending events to inactive TIs
- input: guard null view() when processing mouse down (#12772)
- input: use fresh cursor pos when sending motion events (#13366)
- internal: removed Herobrine
- layershell: restore focus to layer shell surface after popup is destroyed (#13225)
- layout: rethonk layouts from the ground up (#12890)
- monitor: revert "remove disconnected monitor before unsafe state #12544" (#13154)
- nix: remove glaze patch
- opengl/fb: use GL_DEPTH24_STENCIL8 instead of GL_STENCIL_INDEX8 (#13067)
- opengl: allow texture filter to be changed (#13078)
- opengl: set EGL_CONTEXT_RELEASE_BEHAVIOR_KHR if supported (#13114)
- pointermgr: damage only the surface size (#13284)
- pointermgr: remove onRenderBufferDestroy (#13008)
- pointermgr: revert "damage only the surface size (#13284)"
- popup: check for expired weak ptr (#13352)
- popup: reposition with reserved taken into account
- proto/shm: update wl_shm to v2 (#13187)
- protocolMgr: remove IME / virtual input protocols from sandbox whitelist
- protocols/toplevelExport: Support transparency in toplevel export (#12824)
- protocols: implement image-capture-source-v1 and image-copy-capture-v1 (#11709)
- renderer/fb: dont forget to set m_drmFormat (#12833)
- renderer/gl: add internal gl formats and reduce internal driver format conversions (#12879)
- renderer/opengl: invalidate intermediate FBs post render, avoid stencil if possible (#12848)
- renderer: allow tearing with DS with invisible cursors (#13155)
- renderer: better sdr eotf settings (#12812)
- renderer: minor framebuffer and renderbuffer changes (#12831)
- renderer: shader code refactor (#12926)
- shm: ensure we use right gl unpack alignment (#12975)
- start: use nixGL if Hyprland is nix but not NixOS (#12845)
- systemd/sdDaemon: initialize sockaddr_un
- testers: add missing #include (#12862)
- tests: Test the no_focus_on_activate window rule (#13015)
- time: ensure type correctness and calculate nsec correctly (#13167)
- versionKeeper: ignore minor rev version
- view: send wl_surface.enter to subsurfaces of popups (#13353)
- wayland/output: return all bound wl_output instances in outputResourceFrom (#13315)
- welcome: skip in safe mode
- xwayland/xwm: get supported props on constructing surface (#13156)
- xwayland/xwm: handle INCR clipboard transfer chunks correctly (#13125)
- xwayland/xwm: prevent onWrite infinite loop and clean orphan transfers (#13122)
- xwayland: ensure NO_XWAYLAND builds (#13160)
- xwayland: normalize OR geometry to logical coords with force_zero_scaling (#13359)
- xwayland: validate size hints before floating (#13361)
Special thanks
As always, massive thanks to our wonderful donators and sponsors:
Sponsors
Diamond
37Signals
Gold
Framework
Donators
Top Supporters:
Seishin, Kay, johndoe42, d, vmfunc, Theory_Lukas, --, MasterHowToLearn, iain, ari-cake, TyrHeimdal, alexmanman5, MadCatX, Xoores, inittux111, RaymondLC92, Insprill, John Shelburne, Illyan, Jas Singh, Joshua Weaver, miget.com, Tonao Paneguini, Brandon Wang, Arkevius, Semtex, Snorezor, ExBhal, alukortti, lzieniew, taigrr, 3RM, DHH, Hunter Wesson, Sierra Layla Vithica, soy_3l.beantser, Anon2033, Tom94
New Monthly Supporters:
monkeypost, lorenzhawkes, Adam Saudagar, Donovan Young, SpoderMouse, prafesa, b3st1m0s, CaptainShwah, Mozart409, bernd, dingo, Marc Galbraith, Mongoss, .tweep, x-wilk, Yngviwarr, moonshiner113, Dani Moreira, Nathan LeSueur, Chimal, edgarsilva, NachoAz, mo, McRealz, wrkshpstudio, crutonjohn
One-time Donators:
macsek, kxwm, Bex Jonathan, Alex, Tomas Kirkegaard, Viacheslav Demushkin, Clive, phil, luxxa, peterjs, tetamusha, pallavk, michaelsx, LichHunter, fratervital, Marpin, SxK, mglvsky, Pembo, Priyav Shah, ChazBeaver, Kim, JonGoogle, matt p, tim, ybaroj, Mr. Monet Baches, NoX, knurreleif, bosnaufal, Alex Vera, fathulk, nh3, Peter, Charles Silva, Tyvren, BI0L0G0S, fonte-della- bonitate, Alex Paterson, Ar, sK0pe, criss, Dnehring, Justin, hylk, 邱國玉KoryChiu, KSzykula, Loutci, jgarzadi, vladzapp, TonyDuan, Brian Starke, Jacobrale, Arvet, Jim C, frank2108, Bat-fox, M.Bergsprekken, sh-r0, Emmerich, davzucky, 3speed, 7KiLL, nu11p7r, Douglas Thomas, Ross, Dave Dashefsky, gignom, Androlax, Dakota, soup, Mac, Quiaro, bittersweet, earthian, Benedict Sonntag, Plockn, Palmen, SD, CyanideData, Spencer Flagg, davide, ashirsc, ddubs, dahol, C. Willard A.K.A Skubaaa, ddollar, Kelvin, Gwynspring, Richard, Zoltán, FirstKix, Zeux, CodeTex, shoedler, brk, Ben Damman, Nils Melchert, Ekoban, D., istoleyurballs , gaKz, ComputerPone, Cell the Führer, defaltastra, Vex, Bulletcharm, cosmincartas, Eccomi, vsa, YvesCB, mmsaf, JonathanHart, Sean Hogge, leat bear, Arizon, JohannesChristel, Darmock, Olivier, Mehran, Anon, Trevvvvvvvvvvvvvvvvvvvv, C8H10N4O2, BeNe, Ko-fi Supporter :3, brad, rzsombor, Faustian, Jemmer, Antonio Sanguigni, woozee, Bluudek, chonaldo, LP, Spanching, Armin, BarbaPeru, Rockey, soba, FalconOne, eizengan, むらびと, zanneth, 0xk1f0, Luccz, Shailesh Kanojia, ForgeWork , Richard Nunez, keith@groupdigital.com, pinklizzy, win_cat_define, Bill, johhnry, Matysek, anonymus, github.com/wh1le, Iiro Ullin, Filinto Delgado, badoken, Simon Brundin, Ethan, Theo Puranen Åhfeldt, PoorProgrammer, lukas0008, Paweł S, Vandroiy, Mathias Brännström, Happyelkk, zerocool823, Bryan, ralph_wiggums, DNA, skatos24, Darogirn , Hidde, phlay, lindolo25, Siege, Gus, Max, John Chukwuma, Loopy, Ben, PJ, mick, herakles, mikeU-1F45F, Ammanas, SeanGriffin, Artsiom, Erick, Marko, Ricky, Vincent mouline
Full Changelog :
v0.53.0...v0.54.0 -
🔗 benji.dog rss

Sick day meant we got to finish our Lego Sherlock Holmes book nook
-
🔗 r/wiesbaden This device annoyed me all night rss
It probably has something to do with geomapping / surveying. But why right here? Does anyone know more?
I've seen these around lately too; I hope it actually makes sense and isn't too expensive.
https://www.wiesbaden.de/rathaus/smart-city/kurzinformation_kamerafahrzeuge
submitted by /u/Kind_Ad_5086
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: +2 plugins, +4 releases, -1 release, ~1 changed rss
New plugins:
- [Patching](https://github.com/starsunyzl/idapatching) (0.3.0)
- [function-string-associate](https://github.com/oxiKKK/ida-function-string-associate) (1.0.0)
New releases:
- [IFL](https://github.com/hasherezade/ida_ifl): 1.5.3
- [bitopt](https://github.com/teflate/bitopt): 1.0.1
Changes:
- [IFL](https://github.com/hasherezade/ida_ifl): removed version(s): 1.5.2
- [bitopt](https://github.com/teflate/bitopt): 1.0.0: archive contents changed, download URL changed -
🔗 Servo Blog January in Servo: preloads, better forms, details styling, and more! rss
Servo 0.0.5 is here, bringing with it lots of improvements in web platform features. Some highlights:
- <link rel=preload> (@TimvdLippe, @jdm, #40059)
- <style blocking> and <link blocking> (@TimvdLippe, #42096)
- <img align> (@mrobinson, #42220)
- <select disabled> (@simonwuelker, #42036)
- OGG files can now be played in <audio> (@jdm, #41789)
- ‘cursor-color’ (@mrobinson, #41976)
- ‘content: ’ works on all elements (@andreubotella, #41480)
- ‘::details-content’ on <details> (@lukewarlow, #42107)
- ‘:open’ on <details> (@lukewarlow, #42195)
- ‘:active’ on <input type=button> (@mrobinson, #42095)
- Origin API (@WaterWhisperer, #41712)
- MouseEvent.detail (@mrobinson, #41833)
- Request.keepalive (@TimvdLippe, @WaterWhisperer, #41457, #41811)
- Cyclic imports, import attributes, and JSON modules (@Gae24, #41779)
- navigator.sendBeacon() is enabled by default (@TimvdLippe, #41694)
- https_proxy, HTTPS_PROXY, and NO_PROXY (@Narfinger, #41689)
- ML-KEM, ML-DSA, and AES-OCB in Crypto (@kkoyung, #41604, #41617, #41615, #41627, #41628, #41647, #41659, #41676, #41791, #41822, #41813, #41829)

Web APIs
Servo now plays OGG media inside <audio> elements (@jdm, #41789)! We disabled this feature many years ago due to bugs in GStreamer, our media playback engine, but those bugs have since been fixed.
We now support non-px sizes for width and height attributes in <svg> elements (@rodio, #40761). Inactive documents will now correctly reject fullscreen mode changes (@stevennovaryo, #42068).
We've enabled support for navigator.sendBeacon() by default (@TimvdLippe, #41694); the dom_navigator_sendbeacon_enabled preference has been removed. As part of this work, we implemented the keepalive feature of the Request API (@TimvdLippe, @WaterWhisperer, #41457, #41811).
That's not all for network-related improvements! Quota errors from the fetchLater() API provide more details (@TimvdLippe, #41665), and fetch response body promises now reject when invalid gzip content is encountered (@arayaryoma, #39438). Meanwhile, EventSource connections will no longer endlessly reconnect for permanent failures (@WaterWhisperer, #41651, #42137), and now use the correct ‘Last-Event-Id’ header when reconnecting (@WaterWhisperer, #42103). Finally, Servo will create PerformanceResourceTiming entries for requests that returned unsuccessful responses (@bellau, #41804).
There has been lots of work related to navigating pages and loading iframes. We process URL fragments more consistently when navigating via window.location (@TimvdLippe, #41805, #41834), and allow evaluating javascript: URLs when a document's domain has been modified (@jdm, #41969). XML documents loaded in an <iframe> no longer inherit their encoding from the parent document (@simonwuelker, #41637). We've also made it possible to use blob: URLs from inside ‘about:blank’ and ‘about:srcdoc’ documents (@jdm, #41966, #42104). Finally, constructed documents (e.g. new Document()) now inherit the origin and domain of the document that created them (@TimvdLippe, #41780), and we implemented the new Origin API (@WaterWhisperer, #41712).
Servo's mixed content protections are steadily increasing. Insecure requests (e.g. HTTP) originating from <iframe> elements can now be upgraded to secure protocols (@WaterWhisperer, #41661), and redirected requests now check the most recent URL when determining if the protocol is secure (@WaterWhisperer, #41832).
<style blocking> and <link blocking> can now be used to block rendering while loading stylesheets that are added dynamically (@TimvdLippe, #42096), and stylesheets loaded when parsing the document will block the document ‘load’ event more consistently (@TimvdLippe, @mrobinson, #41986, #41987, #41988, #41973). We also fire the ‘error’ event if a fetched stylesheet response is invalid (@TimvdLippe, @mrobinson, #42037).
Servo now leads other browsers in support for new Web Cryptography algorithms! This includes full support for ML-KEM (@kkoyung, #41604, #41617, #41615, #41627), ML-DSA (@kkoyung, #41628, #41647, #41659, #41676), and AES-OCB (@kkoyung, #41791, #41822, #41813, #41829), plus improvements to AES-GCM (@kkoyung, #41950). Additionally, the error messages returned by many Crypto APIs are now more detailed (@PaulTreitel, @danilopedraza, #41964, #41468, #41902).
JS module loading received a lot of attention – we've improved support for cyclic imports (@Gae24, #41779), import attributes (@Gae24, #42185), and JSON modules (@Gae24, @jdm, #42138). Additionally, <link rel=preload> now triggers preload fetch operations that can improve page load speeds (@TimvdLippe, @jdm, #40059).
IndexedDB support continues to make progress, though for now the feature is disabled by default (--pref dom_indexeddb_enabled). This month we gained improvements to connection queues (@gterzian, #41500, #42053) and request granularity (@gterzian, #41933). We were accidentally persisting SessionStorage data beyond the current session, but this has been corrected (@arihant2math, #41326).
Text input fields have received a lot of love this month. Clicking in an input field will position the cursor accordingly (@mrobinson, @jdm, @Loirooriol, #41906, #41974, #41931), as will clicking past the end of a multiline input (@mrobinson, @Loirooriol, #41909). Selecting text with the mouse in input fields works (@mrobinson, #42049), and double and triple clicks now toggle selections (@mrobinson, #41926). Finally, we fixed a bug causing the input caret to be hidden in <input> elements inside of Shadow DOM content (@stevennovaryo, #42233). ‘cursor-color’ is respected when rendering the input cursor (@mrobinson, #41976), and newlines can no longer be pasted into single line inputs (@mrobinson, #41934). Finally, we fixed a panic when focusing a text field that is disabled (@mrobinson, #42078), as well as panics in APIs like HTMLInputElement.setRangeText() that confused bytes and UTF-8 character indices (@mrobinson, #41588).
We also made time to improve form controls! The default styling of many controls received some care (@mrobinson, #42085), while <input type=button> can now be styled with the ‘:active’ pseudo-class (@mrobinson, #42095). Conversely, disabled <select> elements can no longer be activated (@simonwuelker, #42036).
Mouse events triggered by the embedder are more complete; MouseEvent.detail correctly reports the click count for ‘mouseup’ and ‘mousedown’ events (@mrobinson, #41833), and many other members are now consistent with other mouse events (@mrobinson, #42013). Performing a pinch zoom on mobile is now reflected in the VisualViewport API (@stevennovaryo, #41754), though for now the feature is disabled by default (--pref dom_visual_viewport_enabled).
We've changed the behaviour of Web APIs that use the [Clamp] annotation (such as Blob.slice()). The previous implementation would cast floating point values to their integer equivalents, but the standard requires more specific rounding logic (@Taym95, #41640). The RGBA8 constant is now available in WebGL 1 rendering contexts; it was previously only available in WebGL 2 contexts (@simonwuelker, #42048).
Fonts were another area of focus this month. Loading web fonts from file: URLs works as expected (@TimvdLippe, #41714), as does using web fonts within Shadow DOM content (@minghuaw, #42151). Each web font request now creates a PerformanceResourceTiming entry (@lumi-me-not, #41784). Servo supports font variations as of November 2025, so as of this month, the FontFace constructor no longer ignores the ‘font-variation-settings’ property (@muse254, #41968). Cursive scripts now ignore the ‘letter-spacing’ CSS property (@mrobinson, #42165), and we significantly reduced the time and memory required when rendering non-ASCII text (@mrobinson, @Loirooriol, #42105, #42162) and when text nodes share the same font (@mrobinson, #41876).
CSS
There were lots of improvements to block layout algorithms (@Loirooriol, #41492, #41624, #41632, #41655, #41652, #41683). These often affect pages where a block element (such as a <div>) exists within some other layout mode (such as an inline <span>, or a flexbox context), and fixes like these ensure Servo matches the output of other browsers. Elements with scrollable overflow can be scrolled more consistently, even with CSS transforms applied to them (@stevennovaryo, #41707, #42005).
You can now use ‘content: ’ on any element (@andreubotella, #41480). Generated image content used to only work with pseudo-elements, but that restriction no longer applies. <details> elements can now be styled with the ‘::details-content’ pseudo-element (@lukewarlow, #42107), as well as the ‘:open’ pseudo-class (@lukewarlow, #42195).
CSS styles now inherit correctly through ‘display: contents’ as well as <slot> elements in Shadow DOM content (@longvatrong111, @Loirooriol, @mrobinson, #41855). ‘overflow-clip-margin’ now works correctly when ‘border-radius’ is present (@Loirooriol, #41967). We fixed bugs involving text inside flexbox elements: they now use consistent baselines for alignment (@lukewarlow, @mrobinson, #42038), and style updates are propagated to the text correctly (@mrobinson, #41951). <img align> now aligns the image as expected (@mrobinson, #42220). ‘word-break: keep-all’ now prevents line breaks in CJK text (@RichardTjokroutomo, #42088). We also fixed some bugs involving floats, collapsing margins, and phantom line boxes (@Loirooriol, #41812), which sound much cooler than they actually are.
Finally, we upgraded our Stylo dependency to the latest changes as of January 1 2026 (@Loirooriol, #41916, #41696). Stylo powers our CSS parsing and style resolution engine, and this upgrade improves support for parsing color functions like ‘color-mix()’, and improves our CSS animations and transitions for borders and overflow clipping.
Automation and introspection
Last month Servo gained support for HTTP proxies. We now support HTTPS proxies as well (@Narfinger, #41689), which can be configured with the https_proxy or HTTPS_PROXY environment variables, or the network_https_proxy_uri preference. In addition, the NO_PROXY environment variable or the network_http_no_proxy preference can disable any proxy for particular domains.
Our developer tools integration continues to improve. Worker globals are now categorized correctly in the UI (@atbrakhi, #41929), and the Sources panel is populated for very short documents (@atbrakhi, #41983). Servo will report console messages that were logged before the developer tools are opened (@eerii, @mrobinson, #41895). Finally, we fixed a panic when selecting nodes in the layout inspector that have no style information (@eerii, #41800).
We're working towards supporting pausing in the JS debugger (@eerii, @atbrakhi, @jdm, #42007), and breakpoints can be toggled through the UI (@eerii, @atbrakhi, #41925, #42154). While the debugger is paused, hovering over JS objects will report the object's properties for builtin JS classes (@eerii, @atbrakhi, #42186). Stay tuned for more JS debugging updates in next month's blog post!
Servo's WebDriver server is also maturing. Evaluating a synchronous script that returns a Promise will wait until that promise settles (@yezhizhen, #41823). ‘touchmove’ events are fired for pointer actions when a button is pressed (@yezhizhen, #41801), and ‘touchcancel’ events are fired for canceled pointer action items (@yezhizhen, #41937). Finally, any pointer actions that would trigger duplicate ‘mousemove’ events are silently discarded (@mrobinson, #42034). Element Clear commands now test whether the element is interactable (@yezhizhen, #42124). A null script execution timeout value will never trigger a timeout (@yezhizhen, #42184), and synthesized ‘pointermove’ events have a consistent pointerId value (@yezhizhen, #41726).
Embedding
You can now cross-compile Servo using Windows as the host (@yezhizhen, #41748).
We’ve pinned all git dependencies to specific revisions, to reduce the risk of build failures (@Narfinger, #42029). We intend to eventually forbid git dependencies in Servo libraries, which will help unblock releasing Servo on crates.io.
SiteDataManager now has a new clear_site_data() method to clear all stored data for a particular host (@janvarga, #41618, #41709, #41852).
Our nightly testing UI, servoshell , now respects any customized installation path on Windows (@yezhizhen, #41653). We fixed a crash in the Android app when pausing the application (@NiklasMerz, #41827). Additionally, clicking inside a webview in the desktop app will remove focus from any browser UI (@mrobinson, #42080).
We’ve laid more groundwork towards exposing accessibility tree information from webviews (@delan, @lukewarlow, @alice, #41924). There’s nothing to test yet, but keep an eye on our tracking issue if you want to be notified when nightly builds are ready for testing!
Stability & performance
We've converted many uses of IPC channels in the engine to channels that are more efficient when multiprocess mode is disabled (@Narfinger, @jdm, @sagudev, @mrobinson, #41178, #41071, #41733, #41806, #41380, #41809, #41774, #42032, #42033, #41412). Since multiprocess mode is not yet enabled by default (--multiprocess), this is a significant boost to Servo's everyday performance.
Servo now sets a socket timeout for HTTP connections (@Narfinger, @mrobinson, #41710). This is controlled by the network_connection_timeout preference, and defaults to 15 seconds. Each instance of Servo now starts four fewer threads (@Narfinger, #41740). Any network operations that trigger a synchronous UI operation (such as an HTTP authentication prompt) no longer block other network tasks from completing (@Narfinger, @jdm, #41965, #41857).
It's said that one of the hardest problems in computer science is cache invalidation. We improved the memory usage of dynamic inline SVG content by evicting stale SVG tree data from a cache (@TomRCummings, #41675). Meanwhile, we added a new cache to reduce memory usage and improve rendering performance for pages with animating images (@Narfinger, #41956). Servo's JS engine now accounts for 2D and 3D canvas-related memory usage when deciding how often to perform garbage collection (@sagudev, #42180). This can reduce the risk of out-of-memory (OOM) errors on pages that create large numbers of short-lived WebGL or WebGPU objects.
To reduce the risk of panics involving the JS engine integration, we're continuing to use the Rust type system to make certain kinds of dynamic borrow failures impossible (@sagudev, #41692, #41782, #41756, #41808, #41879, #41878, #41955, #41971, #42123). We also continue to identify and forbid code patterns that can trigger rare crashes when garbage collection happens while destroying webviews (@willypuzzle, #41717, #41783, #41911, #41977, #41984, #42243).
This month also brought fixes for panics in parallel layout (@mrobinson, #42026), WebGPU (@WaterWhisperer, #42050), <link> fetching (@jdm, #42208), Element.attachShadow() (@mrobinson, #42237), text input methods (@mrobinson, #42240), Web Workers when the developer tools are active (@mrobinson, #42159), IndexedDB (@gterzian, #41960), and asynchronous session history updates (@mrobinson, #42238). Node.compareDocumentPosition() is now more efficient (@webbeef, #42260), and selections in text inputs no longer require a full page layout (@mrobinson, @Loirooriol, #41963).
Donations
Thanks again for your generous support! We are now receiving 7007 USD/month (−1.4% over December) in recurring donations. This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and maintainer work that helps more people contribute to Servo.
Servo is also on thanks.dev, and already 33 GitHub users (+3 over December) who depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.
We now have sponsorship tiers that allow you or your organisation to donate to the Servo project with public acknowledgement of your support. A big thanks from Servo to our newest Bronze Sponsor: str4d! If you’re interested in this kind of sponsorship, please contact us at join@servo.org.
Use of donations is decided transparently via the Technical Steering Committee’s public funding request process, and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.
Conference talks and blogs
There were two talks about Servo at FOSDEM 2026 (videos and slides here):
- Implementing Streams Spec in Servo – Taym Haddadi (@taym95) described the challenges of implementing the Streams Standard.
- The Servo project and its impact on the web platform – Manuel Rego (@rego) highlighted the ways that Servo has shaped the web platform and contributed to web standards since it started in 2012.
-
- February 27, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-02-27 rss
IDA Plugin Updates on 2026-02-27
New Releases:
Activity:
- binlex
- f2d1ee92: @pwnslinger has signed the CLA in c3rb3ru5d3d53c/binlex#169
- capa
- da1abed3: ci: pin pip-audit action SHAs and update to v1.1.0 (#2884)
- DeepExtractIDA
- ida-codex-mcp
- 4e9a5b11: Support for coordinating multiple linked IDA instances; very useful for tracing a function call flow that spans multiple modules…
- ida-dbimporter
- ida-domain
- ida-hcli
- 526f84ce: 0.16.4
- ida-ios-helper
- 1460e20e: :art: Remove cf release
- IDA-NO-MCP
- 37bded17: Merge pull request #5 from awaxiaoyu/fix-large-file-crash
- ida-terminal-plugin
- ida_ifl
- dadfbbb7: [NOBIN] Updated plugin version in JSON
- IDAPluginList
- 2e0ed2e0: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
- msc-thesis-LLMs-to-rank-decompilers
- 445e722f: removed [H]
- quokka
-
🔗 r/LocalLLaMA President Trump orders ALL Federal agencies in the US Government to immediately stop using Anthropic's technology. rss
https://truthsocial.com/@realDonaldTrump/posts/116144552969293195

Reports have been circulating that the U.S. Department of Defense issued an ultimatum to AI giant Anthropic to remove two "guardrails" by Friday. U.S. President Trump announced that every federal agency in the U.S. government must immediately stop using all of Anthropic's technology. For agencies like the War Department that use Anthropic products at all levels, there will be a six-month phase-out period. Anthropic had better cooperate, or the full power of the presidency will be used to force their compliance, including civil and criminal consequences.

Writing on the social platform Truth Social, he stated that Anthropic had made a catastrophic mistake by daring to coerce the War Department and forcing them to abide by its terms of service rather than the National Constitution. "Their selfishness is putting American lives at risk, placing our military in danger, and jeopardizing our national security." Trump noted, "It is we who will decide the fate of the nation, not some out-of-control radical-left AI company run by a group of people who know nothing about the real world."

U.S. Secretary of Defense Pete Hegseth immediately instructed the War Department to list Anthropic as a "supply chain risk" to national security, effective immediately. Any contractor, supplier, or partner doing business with the U.S. military is prohibited from engaging in any commercial activities with Anthropic. Anthropic will continue to provide services to the War Department for no more than six months to allow for a seamless transition to another better, more patriotic service. Hegseth wrote on the X platform, stating that Anthropic’s attempt to seize veto power over the U.S. military’s operational decisions is unacceptable.
"As Trump stated, only the Commander-in-Chief and the American people can decide the fate of our armed forces, not unelected tech executives." Anthropic's stance is fundamentally at odds with American principles, and its relationship with the U.S. Armed Forces and the federal government has been permanently altered. OpenAI CEO Sam Altman told employees that he hopes the company can try to help de-escalate the tensions between Anthropic and the Department of Defense. Altman stated, "AI should not be used for mass surveillance or autonomous lethal weapons, and humans must remain involved in high-risk automated decision-making; these are our primary red lines." OpenAI employees have already begun speaking out on social media in support of Anthropic. According to their website, approximately 70 current employees have signed an open letter titled "We Will Not Be Divided," aimed at "building consensus and solidarity in the face of pressure from the Department of Defense." Altman said, "Despite my many disagreements with Anthropic, I fundamentally trust them as a company. I believe they truly care about safety, and I am also glad they have consistently supported our warriors. I am not sure how things will unfold from here." Update: https://www.anthropic.com/news/statement-comments-secretary-war I know this company doesn't develop open-source models, but it's still quite interesting. submitted by /u/External_Mood4719
[link] [comments]
-
🔗 r/reverseengineering Security Research Blog Review rss
submitted by /u/Outrageous_Egg7579
[link] [comments] -
🔗 r/york historical and/or spooky recommendations? rss
i’ve been to york a few times over the last few years and i absolutely love it. the only thing is, my trips always seem to be last-minute, so i never end up planning much else outside of the same things i’ve already enjoyed.
off the top of my head, i’ve done the jorvik viking centre, walked around clifford’s tower (but never actually gone inside), done the deathly dark tours three times (they’re amazing! i’d ask for another recommendation to switch it up but i know i’ll probably end up using them again), walked the city walls, went to the dungeons, went on river cruises, visited the railway museum. this isn’t a lot considering the amount of times i’ve been — i always end up just wandering around the city and the shops, reading bits of history as i go.
i’m really into history (anything spooky is a plus) so any recommendations based on that would be great. my last visit was last weekend and as always i really enjoyed it, but i did the same activities as usual so i would love some suggestions for next time!
submitted by /u/ilexaquiifolium
[link] [comments] -
🔗 r/LocalLLaMA Back in my day, LocalLLaMa were the pioneers! rss
submitted by /u/ForsookComparison
[link] [comments] -
🔗 r/Leeds What do you think about the air quality in Leeds? rss
Hi everyone! I’m a student at University of Leeds and I’m conducting a study on how we receive information about air quality in Leeds and what we think of the Council’s current strategies.
Whether you’re worried about pollution or think the current measures are a bit much, I want to hear your honest views. It only takes 5 minutes and is completely anonymous. Plus, there is an Amazon gift-card sweepstakes you can enter at the end…
Survey link: https://app.onlinesurveys.jisc.ac.uk/s/leeds/leeds-air-quality-perceptions-survey
Many thanks if you can take part 😊
submitted by /u/Subject-Donkey-5873
[link] [comments] -
🔗 r/york Oot an' aboot. rss
submitted by /u/OpportunityNearby827
[link] [comments] -
🔗 @HexRaysSA@infosec.exchange ❤️ LAST CHANCE for savings you'll love... mastodon
❤️ LAST CHANCE for savings you'll love...
IDA Pro’s 40% OFF* this Valentine's season. Use code LOVE40 by February 28 to get your license to the industry's best, for 40% less: http://hex-rays.com/license-love
*Terms apply.
-
🔗 sacha chua :: living an awesome life Using speech recognition for on-the-fly translations in Emacs and faking in-buffer completion for the results rss
When I'm writing a journal entry in French, I sometimes want to translate a phrase that I can't look up word by word using a dictionary. Instead of switching to a browser, I can use an Emacs function to prompt me for text and either insert or display the translation. The plz library makes HTTP requests slightly neater.
(defun my-french-en-to-fr (text &optional display-only)
  (interactive (list (read-string "Text: ") current-prefix-arg))
  (let* ((url "https://translation.googleapis.com/language/translate/v2")
         (params `(("key" . ,(getenv "GOOGLE_API_KEY"))
                   ("q" . ,text)
                   ("source" . "en")
                   ("target" . "fr")
                   ("format" . "text")))
         (query-string (mapconcat (lambda (pair)
                                    (format "%s=%s"
                                            (url-hexify-string (car pair))
                                            (url-hexify-string (cdr pair))))
                                  params
                                  "&"))
         (full-url (concat url "?" query-string)))
    (let* ((response (plz 'get full-url :as #'json-read))
           (data (alist-get 'data response))
           (translations (alist-get 'translations data))
           (first-translation (car translations))
           (translated-text (alist-get 'translatedText first-translation)))
      (when (called-interactively-p 'any)
        (if display-only
            (message "%s" translated-text)
          (insert translated-text)))
      translated-text)))

I think it would be even nicer if I could use speech recognition, so I can keep it a little more separate from my typing thoughts. I want to be able to say "Okay, translate …" or "Okay, … in French" to get a translation. I've been using my fork of natrys/whisper.el for speech recognition in English, and I like it a lot. By adding a function to
whisper-after-transcription-hook, I can modify the intermediate results before they're inserted into the buffer.

(defun my-whisper-translate ()
  (goto-char (point-min))
  (let ((case-fold-search t))
    (when (re-search-forward "okay[,\\.]? translate[,\\.]? \\(.+\\)\\|okay[,\\.]? \\(.+?\\) in French" nil t)
      (let* ((s (or (match-string 1) (match-string 2)))
             (translation (save-match-data (my-french-en-to-fr s))))
        (replace-match (propertize translation 'type-hint translation 'help-echo s))))))

(with-eval-after-load 'whisper
  (add-hook 'whisper-after-transcription-hook 'my-whisper-translate 70))

But that's too easy. I want to actually type things myself so that I get more practice. Something like an autocomplete suggestion would be handy as a way of showing me a hint at the cursor. The usual completion-at-point functions are too eager to insert things if there's only one candidate, so we'll just fake it with an overlay. This code works only with my whisper.el fork because it supports using a list of functions for
whisper-insert-text-at-point.

(defun my-whisper-maybe-type-with-hints (text)
  "Add this function to `whisper-insert-text-at-point'."
  (let ((hint (and text (org-find-text-property-in-string 'type-hint text))))
    (if hint
        (progn (my-type-with-hint hint) nil)
      text)))

(defvar-local my-practice-overlay nil)
(defvar-local my-practice-target nil)
(defvar-local my-practice-start nil)

(defun my-practice-cleanup ()
  "Remove the overlay and stop monitoring."
  (when (overlayp my-practice-overlay)
    (delete-overlay my-practice-overlay))
  (setq my-practice-overlay nil
        my-practice-target nil
        my-practice-start nil)
  (remove-hook 'post-command-hook #'my-practice-monitor t))

(defun my-practice-monitor ()
  "Updates hint or cancels."
  (let* ((pos (point))
         (input (buffer-substring-no-properties my-practice-start pos))
         (input-len (length input))
         (target-len (length my-practice-target)))
    (cond
     ((or (< pos my-practice-start)
          (> pos (+ my-practice-start target-len))
          (string-match "[\n\t]" input)
          (string= input my-practice-target))
      (my-practice-cleanup))
     ((string-prefix-p (downcase input) (downcase my-practice-target))
      (let ((remaining (substring my-practice-target input-len)))
        (move-overlay my-practice-overlay pos pos)
        (overlay-put my-practice-overlay 'after-string
                     (propertize remaining 'face 'shadow))))
     (t ; typo
      (move-overlay my-practice-overlay pos pos)
      (overlay-put my-practice-overlay 'after-string
                   (propertize (substring my-practice-target input-len) 'face 'error))))))

(defun my-type-with-hint (string)
  "Show hints for STRING."
  (interactive "sString to practice: ")
  (my-practice-cleanup)
  (setq-local my-practice-target string)
  (setq-local my-practice-start (point))
  (setq-local my-practice-overlay (make-overlay (point) (point) nil t t))
  (overlay-put my-practice-overlay 'after-string
               (propertize string 'face 'shadow))
  (add-hook 'post-command-hook #'my-practice-monitor nil t))

Here's a demonstration of me saying "Okay, this is a test, in French.":
Screencast of using speech recognition to translate into French and provide a hint when typing

Since we're faking in-buffer completion here, maybe we can still get away with considering this as an entry for Emacs Carnival February 2026: Completion? =)
This is part of my Emacs configuration. You can e-mail me at sacha@sachachua.com.
-
🔗 r/Harrogate Looking for date location in Harrogate rss
Hi looking for date location like bar, im mid 20s dating slightly younger or my age.
belgrave in Leeds is my sort of place but gets busy and loud, went with flatmate and was so hard to hear them and talk even pretty early at 6pm, the walk there and from was actually better!!
So looking for somewhere quieter kinda cool vibe , not formal or
The ideal place would be something like Monty’s Bar that’s in Shoreditch which has sofas without it functioning as a main function room that belgrave does, more just like a edgy living room vibe and bar without the party
Thanks
submitted by /u/Apprehensive_Ring666
[link] [comments] -
🔗 r/Leeds Looking for date location in Leeds rss
Hi looking for date location like bar, belgrave is my sort of place but gets busy and loud, went with flatmate and was so hard to hear them and talk even pretty early at 6pm, the walk there and from was actually better!!
So looking for somewhere quieter kinda cool vibe
The ideal place would be something like Monty’s Bar that’s in Shoreditch which has sofas without it functioning as a main function room that belgrave does, more just like a edgy living room vibe and bar without the party
Thanks
submitted by /u/Apprehensive_Ring666
[link] [comments] -
🔗 r/wiesbaden Clearing the traffic jam 2.0 rss
The Party is always right.
submitted by /u/Affisaurus
[link] [comments] -
🔗 r/wiesbaden Record store pop-up in Mainz tomorrow ✨🤝 rss
submitted by /u/EmploymentUnique2066
[link] [comments] -
🔗 r/york Need food ideas?! rss
Hi, I’m wanting to take my partner for a meal in York and wanting some recommendations. Happy to eat most cuisines but there is a seafood/fish allergy. Thank you in advance. If the place has online menus even better.
submitted by /u/Mormagon108
[link] [comments] -
🔗 r/LocalLLaMA New Qwen3.5-35B-A3B Unsloth Dynamic GGUFs + Benchmarks rss
Hey r/LocalLlama! We just updated the Qwen3.5-35B Unsloth Dynamic quants, which are SOTA on nearly all bits. We did over 150 KL Divergence benchmarks, totalling 9TB of GGUFs. We uploaded all research artifacts. We also fixed a tool calling chat template bug (affects all quant uploaders). We tested Bartowski, Ubergram, AesSedai, Noctrex and our new Dynamic GGUFs.
- 99.9% KL Divergence shows SOTA on Pareto Frontier for UD-Q4_K_XL, IQ3_XXS & more.
- Retiring MXFP4 from all GGUF quants: Q2_K_XL, Q3_K_XL and Q4_K_XL, except for a select few layers.
- Qwen3.5-35B-A3B GGUFs are updated to use new fixes (112B, 27B still converting, re-download once they are updated)
- Imatrix definitely helps reduce KLD & PPL.
- I quants (iq3_xxs, iq2_s etc.) make inference 5-10% slower.
- Quantizing ssm_out (Mamba layers) is not a good idea, nor is ffn_down_exps. Some tensors are very sensitive to quantization.
- We made over 9TB of research artifacts available for the community to investigate further on our Experiments page. It includes KLD metrics and all 121 configs we tested.
- We varied bit widths across each tensor type, and generated a best and worst Pareto Frontier plot below vs 99.9% KLD.
- For the best items to quantize, ffn_up_exps and ffn_gate_exps are generally ok to quantize to 3bit. ffn_down_exps is slightly more sensitive.
- For the worst items, ssm_out dramatically increases KLD and the disk space savings is minuscule. For example, ssm_out at q2_k does dramatically worse. Quantizing any attn_* is especially sensitive for hybrid architectures, and so leaving them in higher precision works well.
Tensor type vs bits on 99.9% KL Divergence (figure)
- We plot all quant levels vs 99.9% KLD, and sort from worst KLD to best. Quantizing ffn_* layers down too heavily is not a good idea.
- However, some bit widths are good, especially 3-bit: for example, leaving ffn_* (down, up, gate) at around iq3_xxs seems to be the best compromise between disk space and 99.9% KLD change. 2 bits causes more degradation.
MXFP4 is much worse on many tensors. Using MXFP4 for attn_gate, attn_q, ssm_beta, or ssm_alpha is not a good idea; Q4_K is better. MXFP4 also uses 4.25 bits per weight, whilst Q4_K uses 4.5 bits per weight. It's better to use Q4_K than MXFP4 when choosing between them.

Imatrix works remarkably well
- Imatrix definitely helps weight the quantization process in the right way. For example previously ssm_out at 2bits was really bad, however imatrix reduces the 99.9% KLD by a lot.
- Imatrix generally helps on lower bits, and works on all quants and bit widths.
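As a side note on the bits-per-weight comparison above (MXFP4 at 4.25 BPW vs Q4_K at 4.5 BPW): file size scales linearly with BPW, which makes the disk-space stakes easy to estimate. A quick sketch, using a hypothetical round 35B parameter count:

```python
def gguf_size_gb(n_params, bits_per_weight):
    """Approximate quantized model file size; ignores metadata and mixed-precision layers."""
    return n_params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB

# Hypothetical 35B-parameter model at the two 4-bit formats discussed above
mxfp4 = gguf_size_gb(35e9, 4.25)  # MXFP4: 4.25 bits per weight
q4_k = gguf_size_gb(35e9, 4.5)    # Q4_K: 4.5 bits per weight
print(round(mxfp4, 2), round(q4_k, 2))
```

So the quarter-bit difference is only about 1 GB at this scale, which is why trading it for Q4_K's better quality is an easy call.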
I quants (iq3_xxs, iq2_s etc.) make inference 5-10% slower. They're definitely better in terms of efficiency, but there is a tradeoff.

Benjamin’s recent MiniMax‑M2.5 analysis shows a case of how perplexity and KLD can still be very misleading. Unsloth Dynamic IQ2_XXS performs better than AesSedai’s IQ3_S on real-world evals (LiveCodeBench v6, MMLU Pro) despite being 11GB smaller. Yet, AesSedai’s perplexity and KLD benchmarks suggest the opposite (PPL: 0.3552 vs 0.2441; KLD: 9.0338 vs 8.2849 - lower is better).

Perplexity and KLD can be misleading, but as a precaution we replaced any MXFP4 layer. Real-world evals (LiveCodeBench v6 etc.) are much better benchmarks, but can take many days. This mismatch shows how lower perplexity or KLD doesn’t necessarily translate to better real-world performance. The graph also shows UD‑Q4_K‑XL outperforming other Q4 quants, while being ~8GB smaller. This doesn’t mean perplexity or KLD is useless, as they provide a rough signal. So, going forward, we’ll publish perplexity and KLD for every quant so the community has some reference.

Updated GGUFs here: https://huggingface.co/collections/unsloth/qwen35
For more investigation details and benchmarks you can read: https://unsloth.ai/docs/models/qwen3.5

Thank you for reading and once again for the feedback and incredible support. Huge thanks to the Qwen team as well for releasing Qwen3.5. If there’s any suggestions please let us know and have a great Friday / weekend guys!

Benchmarking Details & Appreciation:
- We utilized bartowski's wonderful imatrix file to make the comparisons more fair - our Dynamic 2.0 method uses a conversational format, but we found benchmarking to be fairer if we used a more general imatrix
- We appreciated some friendly guidance from Ubergram and the community!
- For perplexity we used the command below. We also use the BF16 model as the base KLD file.
LLAMA_SET_ROWS=1 ./llama.cpp/llama-perplexity --flash-attn on --fit off --batch-size 16384 --ubatch-size 16384 --device {device} --model {model} --ctx-size 512
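For readers new to the metric: perplexity is just the exponential of the average negative log-likelihood the model assigns to each token. A minimal sketch with made-up per-token probabilities (not output from the command above):

```python
import math

def perplexity(token_logprobs):
    """PPL = exp(-mean log p(token)); lower means the model finds the text less surprising."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Toy per-token probabilities; their geometric mean is 1/4, so PPL comes out to 4
probs = (0.25, 0.5, 0.125, 0.25)
ppl = perplexity([math.log(p) for p in probs])
print(ppl)
```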
submitted by /u/danielhanchen
[link] [comments]
-
🔗 r/york A wander around the streets.. rss
submitted by /u/OpportunityNearby827
[link] [comments] -
🔗 News Minimalist 🐢 Pakistan and Afghanistan at war + 10 more stories rss
In the last 3 days ChatGPT read 94301 top news stories. After removing previously covered events, there are 11 articles with a significance score over 5.5.

[6.2] Afghanistan and Pakistan engage in open war following cross-border attacks —abcnews.com(+215)
Pakistan’s defense minister declared an “open war” with Afghanistan on Friday following a significant escalation in cross-border military strikes, marking the most serious confrontation between the neighbors in years.
Following Afghan cross-border attacks Thursday, Pakistan launched retaliatory airstrikes on Kabul and Kandahar. Tensions have peaked over Pakistan’s claims that Afghanistan harbors TTP militants, an allegation Kabul denies while citing civilian casualties from recent Pakistani military operations along the porous frontier.
[6.3] Pentagon demands unrestricted military access to Anthropic's AI or risks contract loss —apnews.com(+100)
Defense Secretary Pete Hegseth gave Anthropic a Friday deadline to allow unrestricted military use of its AI or risk losing government contracts and facing Defense Production Act intervention.
Defense officials suggested invoking the Cold War-era Defense Production Act to bypass Anthropic's ethical restrictions. This unprecedented move aims to integrate Claude AI into military networks despite leadership concerns regarding autonomous weaponry, mass surveillance, and safety limits for artificial intelligence.
While the DPA historically boosts production during emergencies like pandemics, experts warn that using it to dictate service terms is legally questionable and could trigger significant litigation between Anthropic and the government.
[5.5] EU expands funding for abortion access within the bloc —apnews.com(+24)
The European Commission has authorized using the 147 billion euro European Social Fund Plus to support citizens traveling to access safe abortions from EU nations with restrictive health laws.
This decision follows the My Voice, My Choice campaign, which gathered over one million signatures via the European Citizens’ Initiative. While no new fund was created, the Commission confirmed that existing resources can defray costs for women seeking legal healthcare across borders.
Although abortion is legal in most of Europe, it remains highly restricted in countries like Poland and Malta. Proponents call the move a victory for social justice, while some critics oppose the intervention.
Highly covered news with significance over 5.5
[6.0] Chip giant Nvidia defies AI concerns with record $215bn revenue — bbc.com (+138)
[6.5] World-first stem cell therapy trial shows promise for treating spina bifida in the womb — nature.com (+6)
[5.9] France's National Assembly approves assisted dying bill — lalibre.be (French) (+7)
[5.9] Chinese scientists transform desert sand into fertile soil using microbes — en.tempo.co (+2)
[5.7] Novartis settles with Henrietta Lacks' estate over use of her cells — independent.co.uk (+10)
[5.6] Germany's ruling parties reverse heat pump mandate, allowing homeowner choice — nzz.ch (German) (+9)
[5.5] UK's first geothermal power plant generates electricity for 10,000 homes and produces lithium — bbc.com (+2)
[5.5] Chilean telescope captures detailed view of Milky Way's star-forming core — apnews.com (+33)
Thanks for reading!
— Vadim
You can customize this newsletter with premium.
-
🔗 r/wiesbaden Edible flowers rss
Does anyone know where in Wiesbaden I can buy edible flowers to decorate a cake? :)
submitted by /u/Turbulent_Life_5826
[link] [comments] -
🔗 r/Yorkshire Jake Lambert will be headlining a comedy evening at The Glee Club Leeds on 10th June, in aid of Epilepsy Action! rss
submitted by /u/NationalDoodleDay
[link] [comments] -
🔗 3Blue1Brown (YouTube) The most beautiful formula not enough people understand rss
On the volumes of higher-dimensional spheres
Explore the 3b1b virtual career fair: https://3b1b.co/talent
Become a supporter for early views of new videos: https://3b1b.co/support
An equally valuable form of support is to simply share the videos.
Home page: https://www.3blue1brown.com
Thanks to UC Santa Cruz for letting me film there, and special thanks to Pedro Morales-Almazan for arranging everything.
My video on Numberphile with a fun application of this problem: https://youtu.be/6_yU9eJ0NxA
Timestamps:
0:00 - Introduction
1:01 - Random puzzle
6:16 - Outside the box
14:35 - Setting up the volume grid
21:14 - Why 4πr^2
25:21 - Archimedes in higher dimensions
36:17 - The general formula
40:40 - 1/2 factorial
44:58 - Why 5D spheres are the biggest
50:16 - Concentration at the surface
54:27 - A unit-free interpretation
57:50 - 3b1b Talent
59:13 - Explaining the intro animation
These animations are largely made using a custom Python library, manim. See the FAQ comments here: https://3b1b.co/faq#manim
Music by Vincent Rubinetti. https://vincerubinetti.bandcamp.com/album/the-music-of-3blue1brown https://open.spotify.com/album/1dVyjwS8FBqXhRunaG5W5u
3blue1brown is a channel about animating math, in all senses of the word animate. If you're reading the bottom of a video description, I'm guessing you're more interested than the average viewer in lessons here. It would mean a lot to me if you chose to stay up to date on new ones, either by subscribing here on YouTube or otherwise following on whichever platform below you check most regularly.
Mailing list: https://3blue1brown.substack.com Twitter: https://twitter.com/3blue1brown Bluesky: https://bsky.app/profile/3blue1brown.com Instagram: https://www.instagram.com/3blue1brown Reddit: https://www.reddit.com/r/3blue1brown Facebook: https://www.facebook.com/3blue1brown Patreon: https://patreon.com/3blue1brown Website: https://www.3blue1brown.com
-
🔗 r/Leeds How to spend 3 hours in leeds rss
I came to leeds for an interview which I'll be done by 4 and my bus leaves by 7:10pm. I don't know what to do for the next 3hrs. I don't want to just sit inside the bus depot. is there any activities or events thats happening in leeds today? or some sightseeing suggestions please
Edit: Thank you strangers for all your suggestions. I went to trinity and did a bit of shopping and went to a nearby bakery for lunch/dinner. My bus was delayed a bit but I'm back in my city now
submitted by /u/No-Repeat7457
[link] [comments] -
🔗 r/LocalLLaMA PewDiePie fine-tuned Qwen2.5-Coder-32B to beat ChatGPT 4o on coding benchmarks. rss
submitted by /u/hedgehog0
[link] [comments] -
🔗 r/Harrogate New Royal Hunting Forest exhibition in Knaresborough rss
submitted by /u/No_Nose_3849
[link] [comments] -
🔗 r/Yorkshire Whitby against racism rss
submitted by /u/johnsmithoncemore
[link] [comments] -
🔗 r/york Hidden gems to explore in York rss
Reddit tricked me into reading an AI-generated page about hidden gems in York. I'm pleased to report that everyone's favourite cafe has made the list! See the screenshot below. Reddit cites u/WhapXI's comment and u/WhatWeHavingForTea's comment as evidence. I would like to compliment the AI on its refined taste in cafes.
https://preview.redd.it/v4yf71es11mg1.png?width=865&format=png&auto=webp&s=0df8d86bbac10bb61f12d53de2e64b9921681f8e
See for yourself here: https://www.reddit.com/answers/7d474f44-7294-4c6f-ad77-e40716ed14f8/?q=Hidden+gems+to+explore+in+York&source=PDP&tl=en . submitted by /u/sbernard
[link] [comments] -
🔗 r/LocalLLaMA Follow-up: Qwen3.5-35B-A3B — 7 community-requested experiments on RTX 5080 16GB rss
TL;DR : Community asked great questions on my original benchmarks post. I ran every experiment you requested. The headline: KV q8_0 is confirmed free lunch, Q4_K_M remains king, --fit on without batch flags hits 74.7 tok/s (+7% over my original config), and KL divergence confirms UD-Q4_K_XL is even worse than PPL suggested. Full results and updated launch command below.

Context
After posting Qwen3.5-35B-A3B quantization quality + speed benchmarks on RTX 5080 16GB, you folks raised a bunch of great questions. Rather than hand-waving, I ran every experiment I could. Here's what I found.
Hardware : RTX 5080 16GB + 128GB DDR5 + Ryzen 9 9950X (32 threads) Software : llama.cpp (built from source, CUDA 12.8, sm_120) Base model : Qwen3.5-35B-A3B (MoE: 256 experts/layer, top-8 + 1 shared, ~3B active params/token)
Experiment 1: KV Cache Quality — Is q8_0 really "free"?
Requested by : u/PhilippeEiffel, u/MrMisterShin, u/llama-impersonator, u/WittyAmbassador7340, u/kreigiron, u/bartskol
Fair concern — I claimed KV q8_0 was free but didn't have PPL data to back it up. Here's the full matrix:
Model Quant | KV f16 | KV q8_0 | KV q4_0
---|---|---|---
Q8_0 | 5.8831 | 5.8822 (-0.02%) | 5.8694 (-0.23%)
Q4_K_M | 6.0184 | 5.9997 (-0.31%) | 6.0422 (+0.40%)

Verdict : KV q8_0 is genuinely free. PPL differences are within noise (< 0.4%). Even KV q4_0 is acceptable for most use cases. The "instant accuracy drops" some of you reported aren't reflected in PPL metrics — though I acknowledge PPL may not capture all degradation modes (more on that below).

Recommendation unchanged : Use -ctk q8_0 -ctv q8_0 for +12-38% throughput at zero measurable quality cost.

Caveat: These PPL tests used 512 token context. Some users report KV q8_0 degrading at very long contexts (40-100k tokens) where quantization errors may accumulate. If you're regularly running huge contexts, test carefully.
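A rough back-of-envelope for why KV quantization matters memory-wise, using hypothetical dense-attention shapes (not Qwen3.5's actual hybrid Mamba/attention layout, and ignoring q8_0's small per-block scale overhead):

```python
def kv_cache_mib(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elt):
    # K and V each store n_kv_heads * head_dim values per layer per position (hence 2x)
    total_bytes = 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elt
    return total_bytes / (1024 ** 2)

# Hypothetical model: 40 layers, 8 KV heads of dim 128, 32k context
f16_cache = kv_cache_mib(40, 8, 128, 32768, 2)  # f16 = 2 bytes per value
q8_cache = kv_cache_mib(40, 8, 128, 32768, 1)   # q8_0 ~ 1 byte per value
print(f16_cache, q8_cache)
```

At these shapes q8_0 roughly halves the KV cache, which is VRAM that --fit can spend on keeping expert layers on the GPU instead.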
Experiment 2: KL Divergence — Does PPL tell the whole story?
Requested by : u/JermMX5, u/Embarrassed_Ad3189
u/JermMX5 cited the Accuracy is Not All You Need paper showing PPL can stay flat while token accuracy collapses. Great point. So I ran KLD against Q8_0 base logits (512 ctx, 80 chunks):
Quant | Mean KLD | Max KLD | Same Top-1 Token %
---|---|---|---
Q4_K_M | 0.0282 | 4.2146 | 92.4%
UD-Q4_K_XL | 0.1087 | 7.7947 | 86.2%

Verdict : KLD confirms and amplifies the PPL findings. UD-Q4_K_XL is 3.9x worse than Q4_K_M by mean KLD and only preserves the top-1 token 86.2% of the time (vs 92.4%). PPL was not misleading here — it correctly ranked the quants, but KLD shows the gap is even larger than PPL suggested.
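For anyone unfamiliar with the metric: per-token KLD compares the full-precision and quantized models' next-token distributions, so unlike PPL it penalizes any probability drift, not just drift on the observed token. A toy sketch with made-up distributions:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) between two discrete next-token distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions over a 4-token vocabulary:
# base = full-precision model, quant = quantized model
base = [0.70, 0.20, 0.08, 0.02]
quant = [0.60, 0.25, 0.10, 0.05]

drift = kl_divergence(base, quant)
print(round(drift, 4))  # small positive number; 0 would mean the quant changed nothing
```

The "Same Top-1 Token %" column is the complementary view: how often the quantized model's argmax token matches the base model's.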
Practical note : Qwen3.5's 248K vocab makes full KLD evaluation produce enormous logit files (~19 GiB for 80 chunks). I used --chunks 80 with uint16 storage, which is feasible with 128GB RAM. If you have a smaller system, --chunks 20-30 should give stable relative rankings.

Experiment 3: Bartowski Q4_K_L — Is the imatrix quant worth it?
Requested by : u/bettertoknow
bartowski's Q4_K_L uses Q8_0 for embed/output tensors plus more q5_K and q6_K layers than Q4_K_M. Quality-wise, it's measurably better:
Metric | Q4_K_M (Unsloth) | Q4_K_L (bartowski) | Q8_0 (reference)
---|---|---|---
PPL (WikiText-2) | 6.6688 | 6.6125 (-0.8%) | 6.5342
Mean KLD | 0.0282 | 0.0181 (-36%) | —
Same top-1 % | 92.4% | 94.2% | —
File size | 20 GB (4.74 BPW) | 20.1 GB (4.98 BPW) | 36.9 GB

But here's the problem — speed:
Config | Short | Medium | Long | Multi-turn | VRAM
---|---|---|---|---|---
Q4_K_M fit-nobatch | 74.7 tok/s | 72.9 | 73.7 | 76.1 | 14559 MB
Q4_K_L fit-nobatch | 41.4 tok/s | 41.4 | 40.8 | 41.8 | 14489 MB

Q4_K_L is 44% slower. The larger q5_K/q6_K tensors (4.98 BPW vs 4.74) mean the model buffer is 8984 MiB vs Q4_K_M's 8556 MiB, causing --fit to overflow more expert layers to CPU (19/41 vs ~16/41). Manual --n-cpu-moe 24 OOMs entirely because the model buffer alone exceeds what's available after compute buffer allocation.

Verdict : Q4_K_L has genuinely better quality (especially visible in KLD: -36%), but the speed penalty is massive on single-GPU setups where VRAM is the constraint. If your model fits fully in VRAM (5090 32GB), Q4_K_L is a strict upgrade. On 16GB cards, Q4_K_M wins decisively.
Experiment 4: --fit Tuning — Can we close the gap with manual offload?
Requested by : u/Chromix_, u/guiopen, u/wisepal_app, u/DonkeyBonked
In my original post, `--fit on` was ~7% slower than manual `--n-cpu-moe 24`. u/Chromix_ suggested the issue might be that the `-b 4096 -ub 4096` batch flags consume VRAM that `--fit` can't then use for expert layers. Nailed it.

Config | Short | Medium | Long | Multi-turn | VRAM
---|---|---|---|---|---
C7 baseline (--n-cpu-moe 24, -b 4096) | 69.6 tok/s | 67.0 | 65.7 | 69.2 | 14874 MB
fit-default (--fit on, -b 4096) | 64.3 | 62.8 | 57.4 | 54.2 | 14595 MB
fit-256 (--fit-target 256, -b 4096) | 66.0 | 64.7 | 63.7 | 66.0 | 15321 MB
fit-nobatch (--fit on, no -b/-ub) | 74.7 | 72.9 | 73.7 | 76.1 | 14559 MB

*high variance with outliers
Verdict : u/Chromix_ was right. Removing `-b 4096 -ub 4096` lets `--fit` allocate VRAM optimally for expert layers. fit-nobatch is the new winner at ~74 tok/s — simpler config AND faster than manual tuning. `--fit-target 256` alone doesn't close the gap; removing the batch flags is the key insight.

Experiment 5: Speculative Decoding — Can we go faster?
Requested by : u/BreizhNode, plus our own optimization roadmap
Bad news first : No compatible draft model exists. Qwen3.5 has a 248K vocabulary, Qwen3 has 151K. The smallest Qwen3.5 model is 27B — there's no small Qwen3.5 that could serve as a draft. Draft-model speculation is a dead end for now.
So I tried self-speculative methods (no draft model needed):
Config | Short | Medium | Long | Multi-turn | Status
---|---|---|---|---|---
fit-nobatch baseline | 74.7 tok/s | 72.9 | 73.7 | 76.1 | —
ngram-simple | 44.9 | 43.4 | 42.9 | 49.1 | works
ngram-mod (m=64) | 44.6 | FAIL | FAIL | FAIL | crashes
ngram-simple-short (n=8, m=64) | 45.0 | 43.1 | 43.1 | FAIL | partial

Note : ngram tests ran on a different llama.cpp build (`latest` vs `latest-fit`) that had a ~40% regression for unrelated reasons, so the absolute numbers aren't directly comparable. But even accounting for that, there's no speedup from ngram speculation on conversational workloads.

Verdict : Self-speculative ngram methods provide zero benefit for diverse conversational workloads. ngram-mod is unstable (crashes after the first request). Not recommended. If Qwen releases a small Qwen3.5 model (1-3B), draft-model speculation could be huge — but that doesn't exist yet.
Experiment 6: Qwen3.5-27B Dense — MoE vs Dense on single GPU
Requested by : u/moahmo88, u/Agreeable_Effect938
Some of you asked whether the dense 27B model might be a better fit for single-GPU setups. After all, it's simpler (no expert routing) and smaller (15.6 GB Q4_K_M).
Metric | 35B-A3B Q4_K_M (MoE) | 27B Q4_K_M (dense)
---|---|---
PPL (WikiText-2) | 6.6688 | 6.8573 (+2.8%)
Active params/token | ~3B | 27B
File size | 20 GB | 15.6 GB
Config | Short | Medium | Long | Multi-turn | VRAM
---|---|---|---|---|---
35B-A3B Q4_K_M fit-nobatch | 74.7 tok/s | 72.9 | 73.7 | 76.1 | 14559 MB
27B dense fit | 7.4 tok/s | 7.4 | 7.2 | 7.1 | 14075 MB

Yes, that's 10x slower. And it has worse quality.
The dense model needs all 27B parameters computed per token vs only ~3B active for MoE. Even with `--fit` putting 54/65 layers on GPU, the remaining 11 layers on CPU create a massive bottleneck. Theoretical max even fully on GPU: ~61 tok/s (960 GB/s ÷ 15.6 GB model).

Verdict : The MoE architecture is the entire advantage on consumer hardware. Only ~3B active params per token means ~10x less memory bandwidth per token. The 35B-A3B MoE is vastly faster on single-GPU setups with limited VRAM. The 27B dense is the stronger model on capability benchmarks and instruction following — if you can fit it fully in VRAM (24GB+ cards), it's a great choice. On 16GB cards where it runs at 7 tok/s, it's not practical for interactive use.
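The 960 GB/s ÷ 15.6 GB arithmetic generalizes into a handy rule of thumb; a sketch (weights-only, ignoring KV cache traffic and compute, so these are optimistic ceilings; the ~1.7 GB active-weight figure for the MoE is my own rough estimate from ~3B params at ~4.7 bits/weight, not a number from the post):

```python
def decode_ceiling_tok_s(mem_bandwidth_gb_s: float, active_weights_gb: float) -> float:
    # Decode is memory-bound: each generated token must stream the active
    # weights through the memory system at least once, so throughput is
    # capped at bandwidth / bytes-touched-per-token.
    return mem_bandwidth_gb_s / active_weights_gb

# RTX 5080-class bandwidth (~960 GB/s):
dense_27b = decode_ceiling_tok_s(960, 15.6)  # whole 15.6 GB file per token
moe_a3b   = decode_ceiling_tok_s(960, 1.7)   # only the active experts

print(f"dense 27B ceiling: {dense_27b:.0f} tok/s")
print(f"MoE ~3B-active ceiling: {moe_a3b:.0f} tok/s")
```

The MoE never reaches its ceiling in practice (experts partially live on CPU), but the ~10x gap in bytes-per-token is exactly why the architectures diverge so sharply on 16GB cards.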
Experiment 7: MXFP4_MOE — The Unsloth-recommended alternative
Requested by : u/ayylmaonade, u/jumpingcross, u/danielhanchen (Unsloth creator)
After u/danielhanchen confirmed UD-Q4_K_XL has issues and specifically recommended MXFP4 as the alternative, I ran both quality and speed benchmarks.
Quality (partial — MXFP4 dequant path has a memory leak that OOMs after ~40-50 chunks):
Metric | Q4_K_M | MXFP4_MOE | UD-Q4_K_XL
---|---|---|---
PPL (~40 chunks) | ~6.00 | ~5.9-6.2* (the PPL runs all crashed due to memory leak, 5.96 is unverifiable) | ~7.17
Mean KLD (31 chunks) | 0.028 | 0.050 | 0.109
Same top-1 % | 92.4% | 91.0% | 86.2%
File size | 21.2 GB | 18.4 GB | 19.8 GB

Speed :
Config | Short | Medium | Long | Multi-turn | VRAM
---|---|---|---|---|---
Q4_K_M fit-nobatch | 74.7 tok/s | 72.9 | 73.7 | 76.1 | 14559 MB
MXFP4_MOE fit-nobatch | 49.5 tok/s | 47.8 | 46.9 | 43.0 | 14531 MB

Verdict : MXFP4_MOE has comparable PPL to Q4_K_M (~5.9-6.2 vs 6.00, though partial evaluation due to memory leak) but is 34-42% slower (~47 tok/s vs ~74 tok/s). Despite the smaller file size (18.4 vs 21.2 GB), it doesn't translate to more expert layers on GPU — VRAM usage is nearly identical. There's also a memory leak bug in the MXFP4 dequant path that prevents full perplexity evaluation. Not recommended over Q4_K_M — the quality gain is marginal while the speed loss is massive.
u/danielhanchen — if the Unsloth team has different results on MXFP4 speed, I'd love to compare notes. My build is llama.cpp b8149 with CUDA 12.8 on sm_120.
Research Findings
A few questions didn't need experiments, just digging:
Why is Ollama 3x slower? (u/InternationalNebula7)
Ollama has no MoE expert offloading. When a MoE model doesn't fit in VRAM, Ollama splits at the layer level — entire transformer blocks go to CPU or GPU. This means the GPU sits completely idle waiting for CPU layers. With expert-only offloading, attention/norms stay on GPU while only routed expert FFNs go to CPU — the GPU stays busy.
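The contrast can be made concrete with a toy cost model (all numbers below are invented for illustration; only the structure matters): under a layer-level split, every component of an offloaded layer pays the CPU penalty, while expert-only offloading keeps the attention work on the GPU.

```python
# Invented per-component decode times (ms); only their ratios matter here.
N_LAYERS = 48
ATTN_GPU, FFN_GPU = 0.05, 0.10
CPU_SLOWDOWN = 20  # assumed CPU-vs-GPU penalty per component

def tok_per_s(full_cpu_layers: int, expert_cpu_layers: int) -> float:
    ms = 0.0
    for i in range(N_LAYERS):
        if i < full_cpu_layers:
            # Layer-level split: attention AND FFN both run on CPU
            ms += (ATTN_GPU + FFN_GPU) * CPU_SLOWDOWN
        elif i < full_cpu_layers + expert_cpu_layers:
            # Expert-only offload: attention stays on GPU, experts go to CPU
            ms += ATTN_GPU + FFN_GPU * CPU_SLOWDOWN
        else:
            ms += ATTN_GPU + FFN_GPU  # fully GPU-resident layer
    return 1000 / ms

print(f"layer-level split  : {tok_per_s(16, 0):.1f} tok/s")
print(f"expert-only offload: {tok_per_s(0, 16):.1f} tok/s")
```

The real gap is larger still, because a CPU-resident MoE FFN only has to read its few active experts per token, not the whole tensor.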
There's an open PR (ollama/ollama#12333) to add `num_moe_offload`, but it hasn't merged yet. On top of that, Ollama defaults to KV cache f16 (we use q8_0, +20% throughput) and doesn't expose batch size or flash attention controls.

Pre-built binaries vs source for Blackwell (u/wisepal_app)
For RTX 50-series : building from source matters. Release binaries use CUDA 12.4 which doesn't include sm_120 (Blackwell). You need CUDA 12.8+ for native support. Without it, PTX from sm_89 (Ada) gets JIT-compiled — slower first launch and you miss Blackwell-specific kernels.
For RTX 30/40-series : pre-built is fine (0-5% difference). Those architectures are already in the release builds.
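For anyone building from source for Blackwell, the relevant knobs look roughly like this (`GGML_CUDA` and `CMAKE_CUDA_ARCHITECTURES` are the standard llama.cpp/CMake options; verify the exact values against the repo's build docs for your checkout):

```shell
# Native sm_120 build; requires CUDA toolkit 12.8+ on the build machine.
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="120"
cmake --build build --config Release -j
```

Omitting the architecture list falls back to whatever the toolkit defaults to, which is how you end up with JIT-compiled PTX instead of native kernels.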
8 GB VRAM recommendations (u/Qxz3)
Use Q4_K_M with full expert offload (`-ot "exps=CPU"`): ~7.2 GB VRAM, ~50 tok/s in our tests (on RTX 5080 — your results will vary depending on GPU memory bandwidth). Key flags: `-ctk q8_0 -ctv q8_0` (free lunch), `-fa on`, `--no-mmap`, and tune your thread count (try `physical_cores / 1.5` as a starting point, sweep from there).

Updated Launch Command
Based on everything above, here's the new recommended config. Simpler AND faster than my original post:
    ./llama-server \
      -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf \
      -c 65536 \
      --fit on \
      -fa on \
      -t 20 \
      --no-mmap \
      --jinja \
      -ctk q8_0 \
      -ctv q8_0

What changed from the original post :
- Removed `-ngl 999 --n-cpu-moe 24` → replaced with `--fit on` (auto VRAM management)
- Removed `-b 4096 -ub 4096` → this was the key insight from u/Chromix_ — batch flags eat VRAM that `--fit` needs for expert layers
- Result: 74.7 tok/s (up from 69.6), simpler config, and `--fit` adapts automatically to your available VRAM
Summary Table
What | Result | Verdict
---|---|---
KV q8_0 quality | < 0.4% PPL difference | Free lunch. Use it.
KLD: Q4_K_M vs UD-Q4_K_XL | 0.028 vs 0.109 (3.9x worse) | UD-Q4_K_XL is bad for MoE
Bartowski Q4_K_L | -0.8% PPL, -36% KLD, but 44% slower | Not worth it on 16GB
`--fit` without batch flags | 74.7 tok/s (+7% over manual) | New best config
ngram self-speculation | No speedup, unstable | Don't bother
27B dense vs 35B-A3B MoE | 10x slower, worse quality | MoE wins completely
MXFP4_MOE | Marginal quality gain, 34-42% slower | Q4_K_M still best

Acknowledgments
Thanks to everyone who pushed for better data:
- u/PhilippeEiffel, u/MrMisterShin, u/llama-impersonator, u/WittyAmbassador7340, u/kreigiron, u/bartskol — KV cache quality concerns led to the full PPL matrix (E1)
- u/JermMX5, u/Embarrassed_Ad3189 — pushed for KLD over PPL, which revealed the UD-Q4_K_XL gap is worse than PPL showed (E2)
- u/bettertoknow — Bartowski Q4_K_L benchmark, good call even though it turned out too slow for our setup (E3)
- u/Chromix_, u/guiopen, u/wisepal_app, u/DonkeyBonked — `--fit` tuning, especially Chromix_'s insight about batch flags eating VRAM, which gave us the new fastest config (E4)
- u/BreizhNode — speculative decoding investigation, saved others the trouble (E5)
- u/moahmo88, u/Agreeable_Effect938 — 27B dense comparison, definitively answered "is MoE worth the complexity?" (E6)
- u/ayylmaonade, u/jumpingcross, u/danielhanchen — MXFP4_MOE testing, important to validate the Unsloth creator's recommendation (E7)
- u/InternationalNebula7 — Ollama performance gap explanation
- u/Qxz3 — 8GB VRAM config guidance
- u/JoNike — original RTX 5080 partial offload data that informed our testing
- u/3spky5u-oss — comprehensive RTX 5090 head-to-head benchmarks
- u/catplusplusok, u/SlimeQ, u/guiopen — chat template and tool calling tips
- u/chickN00dle, u/Odd-Ordinary-5922 — KV cache sensitivity reports at long context
- u/TheRealMasonMac — `--fit on` documentation and RTX 4070 results
- u/pmttyji, u/Subject-Tea-5253 — batch/ubatch tuning data
- u/Pristine-Woodpecker — independent confirmation of UD-Q4_K_XL quality issues
- u/jslominski, u/jiegec, u/Corosus, u/DeedleDumbDee, u/Monad_Maya, u/l33t-Mt, u/kkb294, u/zmanning, u/Additional-Action566 — speed reports across different GPUs
All raw data (benchmark JSONs, PPL logs, KLD logs, config files) is in my llm-server repo for anyone who wants to reproduce or verify.
Edit : Previous post here. This is a follow-up with all the experiments you requested.
Edit 2: Corrected some numbers that had errors in the original post. None of the conclusions change:
- E2 (KLD): Max KLD values were wrong — Q4_K_M is 4.21 (not 0.19), UD-Q4_K_XL is 7.79 (not 1.22). This actually makes UD-Q4_K_XL look worse than originally stated.
- E5 (Speculative): ngram-simple multi-turn was 49.1 tok/s (not 51.3). Still no benefit.
- E7 (MXFP4): Mean KLD is 0.050 (not 0.037), PPL is ~5.9-6.2 (partial, memory leak crashed all full runs), multi-turn speed is 43.0 tok/s (not 44.1). Still not recommended over Q4_K_M.
Edit 3: THANK YOU FOR THE AWARD, RANDOM CITIZEN!
Edit 4: Updated E6 (27B dense) wording — several commenters correctly pointed out that calling 27B "worse quality" based on PPL alone is misleading. The 27B dominates on capability benchmarks and instruction following; my results only show it's 10x slower on 16GB VRAM where it can't fit fully on GPU. If you have a 24GB+ card and can load it entirely in VRAM, 27B is a great model.
Added caveat to E1 (KV q8_0) that my PPL tests used 512 token context — some users report degradation at very long contexts (40-100k+).
Clarified that the ~50 tok/s 8GB VRAM number (E5 C5 full offload config) was on RTX 5080, not a separate 8GB card — a 3060 12GB will see lower numbers due to lower memory bandwidth.
Thanks u/_-_David, u/ArckToons, u/Front_Eagle739, and u/cookieGaboo24.
Edit 5: u/Corosus found --fit on performs poorly on Vulkan backend (13 tok/s vs 33 tok/s with manual --n-cpu-moe 24 on a 5070 Ti). My --fit results are CUDA-specific — Vulkan users should stick with manual offloading. Thanks man!
Edit 6: THANK YOU ANOTHER CITIZEN OF SUPER EARTH FOR THE AWARD!
Edit 7: Thanks to the community's overwhelming reactions and suggestions. I will definitely conduct another round of experiments to gather more data. Also...
OMG GUYS THANKS FOR THE AWARDS!
submitted by /u/gaztrab
[link] [comments]
-
🔗 r/reverseengineering magisk-renef — Auto-run renef dynamic instrumentation server on Android via Magisk/KernelSU rss
submitted by /u/ResponsiblePlant8874
[link] [comments] -
🔗 r/york Some photos I took of your beautiful city last week! rss
| Went to York on holiday last week to see prima facie and took some photos. I had the best time and everyone was super nice, definitely coming back sometime:) submitted by /u/Organic_Repair8717
[link] [comments]
---|--- -
🔗 r/Leeds Bra fitting rss
I’ve been debating getting a proper bra fitting before I buy new ones, but I don't know where's best in Leeds city centre. The bra shops I know of there are Pour Moi, Ann Summers and Bravissimo - is there anywhere else, and where's best to get a decent fitting?
submitted by /u/Trashbandit_seal
[link] [comments] -
🔗 r/Yorkshire Fishlake, St Cuthbert, South Yorkshire The 12thC south doorway, with four orders of sculpture, is one of Yorkshire’s finest examples of a Romanesque door. rss
| @simonsmith submitted by /u/Mundane-Temporary426
[link] [comments]
---|--- -
🔗 r/reverseengineering Building a map tool for Cataclismo rss
submitted by /u/Bobby_Bonsaimind
[link] [comments] -
🔗 Stavros' Stuff Latest Posts I made a voice note taker rss
It's small and tiny and so cute

Have you ever always wanted a very very small voice note recorder that would fit in your pocket? Something that would always work, and always be available to take a note at the touch of a button, with no fuss? Me neither.
Until, that is, I saw the Pebble Index 01, then I absolutely needed it right away and had to have it in my life immediately, but alas, it is not available, plus it’s disposable, and I don’t like creating e-waste. What was a poor maker like me supposed to do when struck down so cruelly by the vicissitudes of fate?
There was only one thing I could do:
I could build my own, shitty version of it for $8, and that’s exactly what I did.
The problem
Like everyone else, I have some sort of undiagnosed ADHD, which manifests itself as my brain itching for a specific task, and the itch becoming unbearable unless I scratch it. This usually results in me getting my
-
🔗 Baby Steps How Dada enables internal references rss
In my previous Dada blog post, I talked about how Dada enables composable sharing. Today I'm going to start diving into Dada's permission system; permissions are Dada's equivalent to Rust's borrow checker.
Goal: richer, place-based permissions
Dada aims to exceed Rust's capabilities by using place-based permissions. Dada lets you write functions and types that capture both a value and things borrowed from that value.
As a fun example, imagine you are writing some Rust code to process a comma-separated list, just looking for entries of length 5 or more:
    let list: String = format!("...something big, with commas...");
    let items: Vec<&str> = list
        .split(",")
        .map(|s| s.trim())        // strip whitespace
        .filter(|s| s.len() > 5)
        .collect();

One of the cool things about Rust is how this code looks a lot like some high-level language like Python or JavaScript, but in those languages the `split` call is going to be doing a lot of work, since it will have to allocate tons of small strings, copying out the data. But in Rust the `&str` values are just pointers into the original string and so `split` is very cheap. I love this.

On the other hand, suppose you want to package up some of those values, along with the backing string, and send them to another thread to be processed. You might think you can just make a struct like so…
    struct Message {
        list: String,
        items: Vec<&str>,
        //        ----
        // goal is to hold a reference
        // to strings from `list`
    }

…and then create the list and items and store them into it:
    let list: String = format!("...something big, with commas...");
    let items: Vec<&str> = /* as before */;
    let message = Message { list, items };
    //                      ----
    //                      |
    // This *moves* `list` into the struct.
    // That in turn invalidates `items`, which
    // is borrowed from `list`, so there is no
    // way to construct `Message`.

But as experienced Rustaceans know, this will not work. When you have borrowed data like an
`&str`, that data cannot be moved. If you want to handle a case like this, you need to convert from `&str` into sending indices, owned strings, or some other solution. Argh!

Dada's permissions use places, not lifetimes
Dada does things a bit differently. The first thing is that, when you create a reference, the resulting type names the place that the data was borrowed from, not the lifetime of the reference. So the type annotation for
`items` would say `ref[list] String`1 (at least, if you wanted to write out the full details rather than leaving it to the type inferencer):

    let list: given String = "...something big, with commas..."
    let items: given Vec[ref[list] String] = list
        .split(",")
        .map(_.trim())         // strip whitespace
        .filter(_.len() > 5)
        //      -------
        // I *think* this is the syntax I want for closures?
        // I forget what I had in mind, it's not implemented.
        .collect()

I've blogged before about how I would like to redefine lifetimes in Rust to be places as I feel that a type like
`ref[list] String` is much easier to teach and explain: instead of having to explain that a lifetime references some part of the code, or what have you, you can say that "this is a `String` that references the variable `list`".

But what's also cool is that named places open the door to more flexible borrows. In Dada, if you wanted to package up the list and the items, you could build a
`Message` type like so:

    class Message(
        list: String
        items: Vec[ref[self.list] String]
        //         ---------
        // Borrowed from another field!
    )

    // As before:
    let list: String = "...something big, with commas..."
    let items: Vec[ref[list] String] = list
        .split(",")
        .map(_.strip())        // strip whitespace
        .filter(_.len() > 5)
        .collect()

    // Create the message, this is the fun part!
    let message = Message(list.give, items.give)

Note that last line -
`Message(list.give, items.give)`. We can create a new class and move `list` into it along with `items`, which borrows from `list`. Neat, right?

OK, so let's back up and talk about how this all works.
References in Dada are the default
Let's start with syntax. Before we tackle the
`Message` example, I want to go back to the `Character` example from previous posts, because it's a bit easier for explanatory purposes. Here is some Rust code that declares a struct `Character`, creates an owned copy of it, and then gets a few references into it:

    struct Character {
        name: String,
        class: String,
        hp: u32,
    }

    let ch: Character = Character {
        name: format!("Ferris"),
        class: format!("Rustacean"),
        hp: 22
    };
    let p: &Character = &ch;
    let q: &String = &p.name;

The Dada equivalent to this code is as follows:
    class Character(
        name: String,
        klass: String,
        hp: u32,
    )

    let ch: Character = Character("Tzara", "Dadaist", 22)
    let p: ref[ch] Character = ch
    let q: ref[p] String = p.name

The first thing to note is that, in Dada, the default when you name a variable or a place is to create a reference. So
`let p = ch` doesn't move `ch`, as it would in Rust; it creates a reference to the `Character` stored in `ch`. You could also explicitly write `let p = ch.ref`, but that is not preferred. Similarly, `let q = p.name` creates a reference to the value in the field `name`. (If you wanted to move the character, you would write `let ch2 = ch.give`, not `let ch2 = ch` as in Rust.)

Notice that I said
`let p = ch` "creates a reference to the `Character` stored in `ch`". In particular, I did not say "creates a reference to `ch`". That's a subtle choice of wording, but it has big implications.
The reason I wrote that
let p = ch"creates a reference to theCharacterstored inch" and not "creates a reference toch" is because, in Dada, references are not pointers. Rather, they are shallow copies of the value, very much like how we saw in the previous post that ashared Characteracts like anArc<Character>but is represented as a shallow copy.So where in Rust the following code…
let ch = Character { ... }; let p = &ch; let q = &ch.name;…looks like this in memory…
    # Rust memory representation

    Stack                                Heap
    ─────                                ────
    ┌───► ch: Character {
    │ ┌───►  name: String {
    │ │        buffer: ───────────► "Ferris"
    │ │        length: 6
    │ │        capacity: 12
    │ │      },
    │ │      ...
    │ │    }
    │ │
    └─│─── p
      └─── q

in Dada, code like this
    let ch = Character(...)
    let p = ch
    let q = ch.name

would look like so
    # Dada memory representation

    Stack                                Heap
    ─────                                ────
    ch: Character {
      name: String {
        buffer: ───────┬───► "Ferris"
        length: 6      │
        capacity: 12   │
      },               │
      ..               │
    }                  │
                       │
    p: Character {     │
      name: String {   │
        buffer: ───────┤
        length: 6      │
        capacity: 12   │
        ...            │
      }                │
    }                  │
                       │
    q: String {        │
      buffer: ─────────┘
      length: 6
      capacity: 12
    }

Clearly, the Dada representation takes up more memory on the stack. But note that it doesn't duplicate the memory in the heap, which tends to be where the vast majority of the data is found.
Dada talks about values not references
This gets at something important. Rust, like C, makes pointers first-class. So given
`x: &String`, `x` refers to the pointer and `*x` refers to its referent, the `String`. Dada, like Java, goes another way.
`x: ref String` is a `String` value - including in memory representation! The difference between a `given String`, `shared String`, and `ref String` is not in their memory layout (all of them are the same) but in whether they own their contents.2

So in Dada, there is no
`*x` operation to go from "pointer" to "referent". That doesn't make sense. Your variable always contains a string, but the permissions you have to use that string will change.

In fact, the goal is that people don't have to learn the memory representation as they learn Dada; you are supposed to be able to think of Dada variables as if they were all objects on the heap, just like in Java or Python, even though in fact they are stored on the stack.3
Rust does not permit moves of borrowed data
In Rust, you cannot move values while they are borrowed. So if you have code like this that moves
`ch` into `ch1`…

    let ch = Character { ... };
    let name = &ch.name;   // create reference
    let ch1 = ch;          // moves `ch`

…then this code only compiles if
`name` is not used again:

    let ch = Character { ... };
    let name = &ch.name;   // create reference
    let ch1 = ch;          // ERROR: cannot move while borrowed
    let name1 = name;      // use reference again

…but Dada can
There are two reasons that Rust forbids moves of borrowed data:
- References are pointers, so those pointers may become invalidated. In the example above, `name` points to the stack slot for `ch`, so if `ch` were to be moved into `ch1`, that makes the reference invalid.
- The type system would lose track of things. Internally, the Rust borrow checker has a kind of "indirection". It knows that `ch` is borrowed for some span of the code (a "lifetime"), and it knows that the lifetime in the type of `name` is related to that lifetime, but it doesn't really know that `name` is borrowed from `ch` in particular.4
Neither of these apply to Dada:
- Because references are not pointers into the stack, but rather shallow copies, moving the borrowed value doesn't invalidate their contents. They remain valid.
- Because Dada's types reference actual variable names, we can modify them to reflect moves.
Dada tracks moves in its types
OK, let's revisit that Rust example that was giving us an error. When we convert it to Dada, we find that it type checks just fine:
    class Character(...)  // as before

    let ch: given Character = Character(...)
    let name: ref[ch.name] String = ch.name
    //                              -------
    // originally it was borrowed from `ch`

    let ch1 = ch.give
    //        -------
    // but `ch` was moved to `ch1`

    let name1: ref[ch1.name] = name
    //                         ----
    // now it is borrowed from `ch1`

Woah, neat! We can see that when we move from
`ch` into `ch1`, the compiler updates the types of the variables around it. So actually the type of `name` changes to `ref[ch1.name] String`. And then when we move from `name` to `name1`, that's totally valid.

In PL land, updating the type of a variable from one thing to another is called a "strong update". Obviously things can get a bit complicated when control-flow is involved, e.g., in a situation like this:
    let ch = Character(...)
    let ch1 = Character(...)
    let name = ch.name

    if some_condition_is_true() {
        // On this path, the type of `name` changes
        // to `ref[ch1.name] String`, and so `ch`
        // is no longer considered borrowed.
        ch1 = ch.give
        ch = Character(...)   // not borrowed, we can mutate
    } else {
        // On this path, the type of `name`
        // remains unchanged, and `ch` is borrowed.
    }

    // Here, the types are merged, so the
    // type of `name` is `ref[ch.name, ch1.name] String`.
    // Therefore, `ch` is considered borrowed here.

Renaming lets us call functions with borrowed values
OK, let's take the next step. Let's define a Dada function that takes an owned value and another value borrowed from it, like the name, and then call it:
    fn character_and_name(
        ch1: given Character,
        name1: ref[ch1] String,
    ) {
        // ... does something ...
    }

We could call this function like so, as you might expect:
    let ch = Character(...)
    let name = ch.name
    character_and_name(ch.give, name)

So…how does this work? Internally, the type checker type-checks a function call by creating a simpler snippet of code, essentially, and then type-checking that. It's like desugaring, but only at type-check time. In this simpler snippet, there are a series of
`let` statements to create temporary variables for each argument. These temporaries always have an explicit type taken from the method signature, and they are initialized with the values of each argument:

    // type checker "desugars" `character_and_name(ch.give, name)`
    // into more primitive operations:
    let tmp1: given Character = ch.give
    //        ---------------   -------
    //        |                 taken from the call
    //        taken from fn sig

    let tmp2: ref[tmp1.name] String = name
    //        ---------------------   ----
    //        |                       taken from the call
    //        taken from fn sig,
    //        but rewritten to use the new
    //        temporaries

If this type checks, then the type checker knows you have supplied values of the required types, and so this is a valid call. Of course there are a few more steps, but that's the basic idea.
Notice what happens if you supply data borrowed from the wrong place:
    let ch = Character(...)
    let ch1 = Character(...)
    character_and_name(ch, ch1.name)
    //                     ---
    // wrong place!

This will fail to type check because you get:
    let tmp1: given Character = ch.give
    let tmp2: ref[tmp1.name] String = ch1.name
    //                                --------
    // has type `ref[ch1.name] String`,
    // not `ref[tmp1.name] String`

Class constructors are "just" special functions
So now, if we go all the way back to our original example, we can see how the
`Message` example worked:

    class Message(
        list: String
        items: Vec[ref[self.list] String]
    )

Basically, when you construct a
`Message(list, items)`, that's "just another function call" from the type system's perspective, except that `self` in the signature is handled carefully.
I should be clear: this system is modeled in the dada-model repository, which implements a kind of "mini Dada" that captures what I believe to be the most interesting bits. I'm working on fleshing out that model a bit more, but it's got most of what I showed you here.5 For example, here is a test that you get an error when you give a reference to the wrong value.
The "real implementation" is lagging quite a bit, and doesn't really handle the interesting bits yet. Scaling it up from model to real implementation involves solving type inference and some other thorny challenges, and I haven't gotten there yet - though I have some pretty interesting experiments going on there too, in terms of the compiler architecture.6
This could apply to Rust
I believe we could apply most of this system to Rust. Obviously we'd have to rework the borrow checker to be based on places, but that's the straightforward part. The harder bit is the fact that
`&T` is a pointer in Rust, and that we cannot readily change. However, for many use cases of self-references, this isn't as important as it sounds. Often, the data you wish to reference is living in the heap, and so the pointer isn't actually invalidated when the original value is moved.

Consider our opening example. You might imagine Rust allowing something like this:
    struct Message {
        list: String,
        items: Vec<&{self.list} str>,
    }

In this case, the
`str` data is heap-allocated, so moving the string doesn't actually invalidate the `&str` value (it would invalidate an `&String` value, interestingly).

In Rust today, the compiler doesn't know all the details of what's going on.
`String` has a `Deref` impl and so it's quite opaque whether `str` is heap-allocated or not. But we are working on various changes to this system in the Beyond the `&` goal, most notably the Field Projections work. There is likely some opportunity to address this in that context, though to be honest I'm behind in catching up on the details.
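Until something like that lands, the "sending indices" workaround mentioned near the top is what gets the opening example past today's borrow checker: store byte ranges into the owned `String` instead of `&str`s. A minimal sketch (this `Message` variant is my own illustration, not an API from the post):

```rust
// Instead of Vec<&str> borrowing from `list`, keep byte ranges into the
// owned String; only plain usizes are stored, so the struct can be moved
// (or sent to another thread) freely.
use std::ops::Range;

struct Message {
    list: String,
    items: Vec<Range<usize>>, // byte ranges into `list`
}

impl Message {
    fn new(list: String) -> Self {
        let items = list
            .split(',')
            .map(str::trim)
            .filter(|s| s.len() > 5)
            .map(|s| {
                // Offset of the &str inside `list`, computed while the
                // borrow is still alive.
                let start = s.as_ptr() as usize - list.as_ptr() as usize;
                start..start + s.len()
            })
            .collect();
        // All borrows ended above, so moving `list` into the struct is fine.
        Message { list, items }
    }

    fn items<'a>(&'a self) -> impl Iterator<Item = &'a str> + 'a {
        self.items.iter().map(move |r| &self.list[r.clone()])
    }
}

fn main() {
    let m = Message::new("apple, banana, fig, dragonfruit".to_string());
    for item in m.items() {
        println!("{item}");
    }
}
```

The cost is re-slicing on access and losing the "it's just a `&str`" ergonomics — which is exactly the friction the place-based design is trying to remove.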
-
I'll note in passing that Dada unifies
`str` and `String` into one type as well. I'll talk in detail about how that works in a future blog post. ↩︎
This is kind of like C++ references (e.g.,
`String&`), which also act "as if" they were a value (i.e., you write `s.foo()`, not `s->foo()`), but a C++ reference is truly a pointer, unlike a Dada ref. ↩︎
This goal was in part inspired by a conversation I had early on within Amazon, where a (quite experienced) developer told me, "It took me months to understand what variables are in Rust". ↩︎
-
I explained this some years back in a talk on Polonius at Rust Belt Rust, if you'd like more detail. ↩︎
-
No closures or iterator chains! ↩︎
-
As a teaser, I'm building it in async Rust, where each inference variable is a "future" and use "await" to find out when other parts of the code might have added constraints. ↩︎
- References are pointers, so those pointers may become invalidated. In the example above,
-
- February 26, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-02-26 rss
IDA Plugin Updates on 2026-02-26
New Releases:
Activity:
- bitopt
- b25b2adc: chore: bump plugin version in metadata
- capa
- DeepExtractIDA
- ida-domain
- e32528a6: Enable tests also for ida 9.3 (#46)
- ida-function-string-associate
- ida-hcli
- msc-thesis-LLMs-to-rank-decompilers
- 352daad9: update
- pdb
- playlist
- e6a3a01c: skill differential
- python-elpida_core.py
- 25f37bd5: fix: add crystallization_hub + diplomatic_handshake to Dockerfile COPY
- 3a790123: fix: D0 frozen core becomes witness/observer of D11 synthesis
- df96dfb9: fix: Gemini model strings — no new key needed, model names changed
- 77f4c456: Section 23: Oracle cross-framework coordinator + 27-axiom federation …
- 2f0db308: Checkpoint Section 22: Qwen lost-code recovery
- aa8a4bbc: Update CHECKPOINT_MARCH1: Wave 3 results + battery test synthesis + 7…
- 923ad3e3: Wave 3 complete: syntactic-intent evasion validated + EEE + living ax…
- 6280c63d: Wave 3: automated minimal-rephrasing execution script
- 21fb382d: fix: WorldEmitter — filter test entries + S3-backed watermark
- tenrec
- dadc7dde: Merge pull request #25 from nonetype/feat/main-thread-executor
- vdump
- 9c589dad: Minor changes
- bitopt
-
🔗 @binaryninja@infosec.exchange Do you even lift? Because Glenn does -- and he'll walk you through the steps mastodon
Do you even lift? Because Glenn does -- and he'll walk you through the steps so you can get decompilation by implementing lifting for your custom architectures in Binary Ninja in part 2/3 of our architecture guide:
-
🔗 r/york Micklegate filming rss
Not a huge inconvenience, but not being able to walk up just now - with no signage in advance - has been a pain. I've just finished a really long shift, so the added detour wasn't hugely appreciated - I'd have planned a different route home!
submitted by /u/Sir-Snickolas
[link] [comments] -
🔗 Kagi release notes Feb 26th, 2026 - Smoothing the edges rss
Kagi Search
Wolfram|Alpha widget supercharged
We're introducing a new and improved Wolfram|Alpha widget with support for rich equations, plots, better region-dependent queries, and more!

Other improvements and bug fixes
- Kagi Privacy Pass extension conflicts with Kagi Search extension in Firefox, breaking login token recognition in private browsing windows. #6432 @stone. This was a bug in Firefox - thank you to Mozilla for the fix!
- The after-login redirect doesn't work for maps or assistant #8407 @Boomkop3
- !hn bang does not use correct URL #8534 @davej
- Emoji search: Japanese symbols should come up when searching for themselves #9823 @karol
- Not possible to block TLD in personalised results #7104 @MrMoment
- Search box submits incomplete text #9836 @ssg
- Translation won't trigger in search until I reload the page #8409 @Gamesnic
- Content appears behind Dynamic Island on landscape iOS #9772 @ohnojono
- Can’t add a team member from iOS Safari #9003 @pbronez
- Kagi Wolfram answer doesn't match direct Wolfram query #9866 @RonanCJ
- Reverse image search comes up with empty spots #8666 @Boomkop3
- Opening search results in new tab with Vimium shortcuts doesn't work when authenticating via Privacy Pass #9894 @kbkle
- Image search directly from regular search bar #9889 @Boomkop3
- "Weather Saturday" returns for location Saturdaygua instead of weather on saturday #2388 @kevin51jiang
- Searching `Pseudo Code` gives `(data not available)` from Kagi Knowledge #9922 @xjc
- "Interesting Finds" does not respect filter rules #8578 @dabluecaboose
- Search snap: Reddit - @r returns little to no results on iOS due to `old.reddit.com` #9582 @owl
- Allow setting open_snap_domain for custom bangs #9901 @shorden
- Low quality translated Reddit results #5212 @bram
- Click on "More Results" loses the focus #5736 @expurple
PS, we've started publishing results for your SlopStop reports -- see them here. More details in the upcoming changelog.
Kagi Assistant
- Kagi Assistant - Ki Model - Toggle detailed search results broken with a lot of searches #7880 @Elias
- Assistant turns email address like text into mailto in codeblocks #9843 @Numerlor
- Slash-commands being sent to model along with system prompt #9852 @igakagi
- Ki can't access uploaded images from Python #7376 @fxgn
Kagi Maps
- Kagi maps "No POI found matching the query" #9799 @Jobby
- Maps sends double-URL encoded string to images #9698 @gdfgfasf
- Inconsistent Display of Postal Codes #9711 @iamjameswalters
- Maps Search Not Finding Some Places On First Try #9637 @Gredharm
- Searching "Chagos Islands" (or other locations w/o POI data) fails; doesn't fall-back to entry w/ valid POI data #9888 @Cajunvoodoo
Kagi Translate
- Translate Document - "Upgrade to premium" / inconsistent limits #9811 @widow5131
- Kagi translate mixup: Swiss High German vs. Swiss German #9827 @kagiiskey
- I'm interested in integrating Kagi Translate with Anki Flashcards #9750 @johnsturgeon
- Ability to set the default translation quality #9802 @PetrIako
- Translating input to same language: English->English #9690 @Cyb3rKo
- Translate document - premium not working #9879 @widow5131
- Fix needed for Korean word order of "total" count #9718 @Hanbyeol
- Wrong interface language in Kagi Translate #9880 @jstolarek
- Kagi Translate Firefox extension: incomplete translation on some sites #9862 @exzombie
- When I type a long sentence, it freezes and I can't scroll. #9940 @ZK
Kagi Translate - iOS and Android apps
- Fix needed for Korean word order of "total" count #9718 @Hanbyeol
- Make “Translate with Kagi” appear directly in Android text selection menu #9801 @Matou
- Added 'email' writing style for proofreading
- Added setting to toggle haptics ON/OFF
- Fixed UI issue on Android where certain elements were being drawn under system bars
Post of the week
Here is this week's featured social media mention:

Don't forget to follow us and tag us in your comments, we love hearing from you!
Kagi Specials

Kagi is happy to be part of the privacy alliance with Windscribe, a feature-rich VPN with built-in ad and malware blocking and audited no-logs policy.
Through this partnership via Kagi Specials, Kagi members receive a 3-month Windscribe Pro trial, then lock in the Pro plan at just $49/yr for life. In turn, Windscribe members get 3 months of Kagi's Professional plan.
Community creations
If you're using Scribbles to run your blog, you can now add Small Web badges directly to your blog footer, just head to the new "Small Web" section in your blog settings:

Kagi on TV!
Kagi was prominently featured as a private alternative to Google on KTLA 5 News, including an interview with Kagi's very own John Bardinelli, who recently joined the team as our Growth Manager.

-
🔗 r/reverseengineering From DDS Packets to Robot Shells: Two RCEs in Unitree Robots (CVE-2026-27509 & CVE-2026-27510) rss
submitted by /u/WiseTuna
[link] [comments] -
🔗 sacha chua :: living an awesome life Emacs completion and handling accented characters with orderless rss
I like using the orderless completion package for Emacs because it allows me to specify different parts of a completion candidate in any order I want. Because I'm learning French, I want commands like `consult-line` (which uses minibuffer completion) and `completion-at-point` (which uses in-buffer completion) to also match candidates where the words might have accented characters. For example, instead of having to type "utilisé" with the accented é, I want to type "utilise" and have it match both "utilise" and "utilisé".

```elisp
(defvar my-orderless-accent-replacements
  '(("a" . "[aàáâãäå]")
    ("e" . "[eèéêë]")
    ("i" . "[iìíîï]")
    ("o" . "[oòóôõöœ]")
    ("u" . "[uùúûü]")
    ("c" . "[cç]")
    ("n" . "[nñ]"))) ; in case anyone needs ñ for Spanish

(defun my-orderless-accent-dispatch (pattern &rest _)
  (seq-reduce (lambda (prev val)
                (replace-regexp-in-string (car val) (cdr val) prev))
              my-orderless-accent-replacements
              pattern))

(use-package orderless
  :custom
  (completion-styles '(orderless basic))
  (completion-category-overrides '((file (styles basic partial-completion))))
  (orderless-style-dispatchers '(my-orderless-accent-dispatch
                                 orderless-affix-dispatch)))
```
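The dispatcher works by rewriting each plain letter in the search pattern into a regexp character class that also matches its accented forms. As a language-agnostic illustration of that idea (a sketch in Python, not part of the Emacs config above):

```python
import re

# Plain letter -> character class of accented forms, mirroring
# the my-orderless-accent-replacements table above.
ACCENT_CLASSES = {
    "a": "[aàáâãäå]", "e": "[eèéêë]", "i": "[iìíîï]",
    "o": "[oòóôõöœ]", "u": "[uùúûü]", "c": "[cç]", "n": "[nñ]",
}

def accent_pattern(pattern):
    """Rewrite each plain letter into an accent-insensitive class."""
    return "".join(ACCENT_CLASSES.get(ch, ch) for ch in pattern)

def matches(pattern, candidate):
    return re.search(accent_pattern(pattern), candidate) is not None

assert matches("utilise", "utilisé")    # plain pattern matches accented form
assert matches("francais", "français")
```

The key design point, as in the elisp version, is that the rewriting happens on the *pattern*, so the completion candidates themselves never need to be normalized.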
Figure 1: Screenshot of consult-line showing matching against accented characters
Figure 2: Screenshot of completion-at-point matching "fev" with "février"

This is an entry for Emacs Carnival February 2026: Completion.
This is part of my Emacs configuration. You can comment on Mastodon or e-mail me at sacha@sachachua.com.
-
🔗 sacha chua :: living an awesome life IndieWeb Carnival February 2026: Intersecting interests rss
In English

This month, the theme for the IndieWeb Carnival is "Converging Interests." It might actually be easier to list which of my interests don't converge. My interests often overlap. I'll start with a description of my main interests and how they're linked.
Programming is generally useful. I'm particularly interested in automation and cognitive and physical aids like voice interfaces. I love Emacs. It's ostensibly a text editor, but I've tinkered with it to such an extent that I use it for almost everything: managing my notes and tasks, of course, but even recording and editing audio files and organizing my drawings.
Writing helps me think, remember, and share. Org Mode in Emacs allows me to use the technique of literate programming, which combines explaining and coding. Some ideas are easier to think about and express through drawing, which allows me to explore them non-linearly. My drawings apply to all my interests, such as parenting, technology, learning, and planning. Sketchnoting is a great way to learn many things, share my notes, and remember specific moments. For example, my daughter is eager to finish a visual summary we developed together, which was possible because I had written many notes in the web journal I developed and in my French journal.
I've been learning French for the past 4 months, and that also touches various aspects of my daily life. I help my daughter with school, I try to use AI, I tinker with my tools, I watch shows, and I look up words related to my interests. For instance, I updated my handwriting font to include accented letters. This combined drawing, programming, and naturally, learning French. I also modified my writing environment in Emacs to look up words in the dictionary and display AI feedback. I particularly enjoy exploring learning techniques with my daughter, such as flashcards and stories following the principle of comprehensible input. Which methods are effective against which challenges, and how can we make the most of available technology? What we learn will help us across all subjects.
Similarly, learning the piano helps me appreciate the challenge and pleasure of making progress. It's also a good way to help my daughter learn it as well.
Since my life is filled with intertwining interests, it is important to manage my attention despite many distracting temptations, such as programming new tools. I might start a task and then find myself doing something completely different after a series of small, totally logical steps. You know how it goes—one thing leads to another. So I have to write my notes as I go. There is no rush and few of my tasks are urgent, so when I lose my train of thought, I can laugh and look for it again. If I write and share these notes, someone might find them even years later and remind me of them. It is very difficult to choose a moment to stop exploring and to publish my notes. The temptation is always to keep following a new idea.
Fortunately, the cumulative effect of hobbies that complement each other encourages me to grow, and when I am blocked in one direction, one or two other paths usually open up. Speaking of directions, I find it difficult to write when I want to introduce two or more simultaneous streams of ideas because writing is so linear. Still, it's better to write even if it's a bit disjointed.
I think speech recognition helps me capture more ideas, and I'm looking forward to how advances in technology can help me make them happen. I can also get better by learning and linking new curiosities to my other curiosities. I look forward to seeing what kinds of things are possible.
Although I have several hours of freedom now that my daughter can do many things herself, there's always more that I want to learn. Intertwined hobbies thrive, while isolated hobbies are forgotten. For example, I no longer play Stardew Valley since my daughter doesn't play it anymore. It’s a fun game, but if I'm choosing what to spend my time on, I prefer activities that serve multiple goals simultaneously. The garden of my interests is not formal and orderly, but rather natural and tangled.
My daughter also has many interests. One year she was interested in Rubik's Cubes and other puzzles; this year she's learning everything about Pokémon. The transience of her interests doesn't bother me. It all combines in unexpected ways. It will be interesting to see how she grows, and to see how I'll grow too.
Thanks to Zachary Kai for hosting the IndieWeb Carnival this month!
En français

Ce mois-ci, le thème du Carnaval IndieWeb est « Intérêts convergents. » C'est peut-être plus facile de lister lesquels de mes centres d'intérêt ne sont pas convergents. Mes centres d'intérêt se recoupent souvent. Je vais commencer par une description de mes premiers intérêts et des façons dont ils sont liés.
La programmation est généralement utile. Je suis particulièrement intéressée par l'automatisation et les aides cognitives et physiques comme l'interface vocale. J'adore Emacs, qui est un éditeur de texte, mais je le bricole à tel point que je l'utilise pour presque tout : gérer mes notes et mes tâches, bien sûr, mais même enregistrer et éditer des fichiers audio et organiser mes dessins.
L'écriture m'aide à penser, à me remémorer et à partager. Org Mode sous Emacs me permet d'utiliser la technique de « programmation lettrée », qui est la combinaison de l'explication et de la programmation. Quelques idées sont plus faciles à penser et à exprimer par le dessin, lequel me permet de les explorer non linéairement. Mes dessins s'appliquent aussi à tous mes centres d'intérêt, comme la parentalité, la technologie, l'apprentissage et la planification. Le sketchnoting est une bonne manière d'apprendre beaucoup de choses, de partager mes notes et de me souvenir de certains moments. Par exemple, ma fille a hâte de finir une synthèse visuelle que nous avons élaborée ensemble, et qui est possible parce que j'avais écrit beaucoup de notes dans le journal web que j'avais développé et dans mon journal en français.
L'apprentissage du français depuis 4 mois touche aussi divers aspects de ma vie quotidienne. J'aide ma fille à l'école, j'essaie d'utiliser l'IA, je bricole mes outils, je regarde des émissions, je cherche des mots pour mes centres d'intérêt. Par exemple, j'ai mis à jour la police de caractères de mon écriture pour inclure les lettres accentuées. Cela a associé le dessin, la programmation, et naturellement l'apprentissage du français. J'ai aussi modifié mon environnement d'écriture sous Emacs pour rechercher les mots dans le dictionnaire et pour afficher les commentaires de l'IA. J'aime particulièrement explorer des techniques d'apprentissage avec ma fille comme les cartes mémoire et les histoires qui suivent le principe de l'apport compréhensible. Quelles méthodes sont efficaces contre quels défis, et comment nous pouvons tirer le meilleur parti des technologies disponibles ? Ce que nous apprenons nous servira bien dans tous les sujets.
De la même manière, l'apprentissage du piano m'aide à apprécier le défi et le plaisir de progresser. Une autre raison de le faire est qu'il aide ma fille à l'apprendre aussi.
Comme ma vie est remplie d'intérêts qui s'entrelacent, c'est important de gérer mon attention face à plusieurs tentations de s'éparpiller, comme la programmation de nouvelles automatisations. Je commence peut-être une tâche et je me retrouve ensuite à faire une tâche complètement différente après une suite d'étapes logiques. On sait ce que c'est, de fil en aiguille. Donc je dois écrire mes notes au fur et à mesure. Rien ne me presse et peu de mes tâches sont urgentes, donc quand je perds le fil de mes pensées, je peux rire et le retrouver. Si j'écris et que je partage ces notes, quelqu'un peut les trouver même après plusieurs années et me les rappeler. C'est très difficile de choisir un moment où j'arrête d'explorer et où je publie mes notes. La tentation est toujours de continuer à suivre une nouvelle idée.
Heureusement, l'effet cumulatif de loisirs qui se complètent m'encourage à grandir, et quand je suis bloquée dans une direction, une ou deux autres pistes se sont ouvertes. En parlant de directions, je trouve que c'est difficile d'écrire quand je veux introduire deux ou plusieurs suites d'idées simultanées, à cause de la linéarité de l'écriture. De toute façon, c'est mieux d'écrire même si c'est un peu décousu.
Je pense que la reconnaissance vocale m'aide à saisir plus d'idées et les progrès technologiques m'aident à les exécuter. Je vais aussi m'améliorer en apprenant et en reliant de nouvelles curiosités à mes autres curiosités. J'ai hâte de voir quelles sortes de choses sont possibles.
Bien que j'aie plusieurs heures de liberté maintenant que ma fille est capable de faire beaucoup de choses elle-même, il y a toujours plus de choses que je veux apprendre. Les loisirs entrelacés se développent, tandis que les loisirs isolés sont oubliés. Par exemple, je ne joue plus à Stardew Valley maintenant que ma fille n'y joue plus. C'est un jeu amusant, mais si je peux choisir un passe-temps, j'en préfère un qui serve des objectifs multiples simultanés. Le jardin de mes intérêts n'est pas formel et ordonné, mais plutôt naturel et entremêlé.
Ma fille a aussi beaucoup de centres d'intérêt. Une année elle s'est intéressée au Cube de Rubik et aux autres casse-têtes, une autre année elle apprenait tout sur Pokémon. Ça ne me dérange pas, tout se combine de façons inattendues. Ce sera intéressant de voir comment elle grandira, et moi aussi.
Merci à Zachary Kai d'accueillir le Carnaval IndieWeb ce mois-ci !
You can e-mail me at sacha@sachachua.com.
-
🔗 r/Harrogate Crime and safety in this area of Harrogate rss
| I’m looking at buying a house in this area of Harrogate. It is not a cheap place at all, but it’s a nice house and the road itself seems nice. However, I am now concerned about the safety and levels of crime in the area - not on the road I am looking at in particular, but on the approach and adjacent roads that we’d have to walk through to get to the town. Please can people let me know your experiences of this area - good and bad? I’d be most interested in hearing from residents. I have previously lived in London (Golders Green, although the Hampstead Garden Suburb), Newcastle City Centre as a student and for work, and I’m used to Middlesbrough - somewhere close to where I grew up, so I spent a lot of time there. I’m looking at this as a place where I could potentially start a family and spend a long period of time. I like being close to amenities, but at the end of the day safety and feeling comfortable in your area has to be the number one priority. submitted by /u/DoughnutHairy9943
[link] [comments]
---|--- -
🔗 sacha chua :: living an awesome life Sorting completion candidates, such as sorting Org headings by level rss
Update: Made the code even neater with `:key`; included the old code as well.

At this week's Emacs Berlin meetup, someone wanted to know how to change the order of completion candidates. Specifically, they wanted to list the top-level Org Mode headings before the second-level headings and so on. They were using org-ql to navigate Org headings, but since org-ql sorts its candidates by the number of matches according to the code in the `org-ql-completing-read` function, I wasn't quite sure how to get it to do what they wanted. (And I realized my org-ql setup was broken, so I couldn't fiddle with it live. Edit: Turns out I needed to update the peg package.) Instead, I showed folks `consult-org-heading`, which is part of the Consult package and which I like to use to jump around the headings in a single Org file. It's a short function that's easy to use as a starting point for something custom.

Here's some code that allows you to use `consult-org-heading` to jump to an Org heading in the current file with completions sorted by level.

```elisp
(with-eval-after-load 'consult-org
  (advice-add #'consult-org--headings :filter-return
              (lambda (candidates)
                (sort candidates
                      :key (lambda (o)
                             (car (get-text-property 0 'consult-org--heading o)))))))
```
Figure 1: Screenshot showing where the candidates transition from top-level headings to second-level headings

My previous approach defined a different function based on `consult-org-heading`, but using the advice feels a little cleaner because it will also work for any other function that uses `consult-org--headings`. I've included the old code in case you're curious. Here, we don't modify the function's behaviour using advice; we just make a new function (`my-consult-org-heading`) that calls another function that processes the results a little (`my-consult-org--headings`).

Old code, if you're curious:

```elisp
(defun my-consult-org--headings (prefix match scope &rest skip)
  (let ((candidates (consult-org--headings prefix match scope)))
    (sort candidates
          :lessp (lambda (a b)
                   (let ((level-a (car (get-text-property 0 'consult-org--heading a)))
                         (level-b (car (get-text-property 0 'consult-org--heading b))))
                     (cond
                      ((< level-a level-b) t)
                      ((< level-b level-a) nil)
                      ((string< a b) t)
                      ((string< b a) nil)))))))

(defun my-consult-org-heading (&optional match scope)
  "Jump to an Org heading.
MATCH and SCOPE are as in `org-map-entries' and determine which
entries are offered.  By default, all entries of the current buffer
are offered."
  (interactive (unless (derived-mode-p #'org-mode)
                 (user-error "Must be called from an Org buffer")))
  (let ((prefix (not (memq scope '(nil tree region region-start-level file)))))
    (consult--read
     (consult--slow-operation "Collecting headings..."
       (or (my-consult-org--headings prefix match scope)
           (user-error "No headings")))
     :prompt "Go to heading: "
     :category 'org-heading
     :sort nil
     :require-match t
     :history '(:input consult-org--history)
     :narrow (consult-org--narrow)
     :state (consult--jump-state)
     :annotate #'consult-org--annotate
     :group (and prefix #'consult-org--group)
     :lookup (apply-partially #'consult--lookup-prop 'org-marker))))
```

I also wanted to get this to work for `C-u org-refile`, which uses `org-refile-get-location`. This is a little trickier because the table of completion candidates is a list of cons cells that don't store the level, and it doesn't pass the metadata to `completing-read` to tell it not to re-sort the results. We'll just fake it by counting the number of "/", which is the path separator used if `org-outline-path-complete-in-steps` is set to `nil`.

```elisp
(with-eval-after-load 'org
  (advice-add 'org-refile-get-location :around
              (lambda (fn &rest args)
                (let ((completion-extra-properties
                       '(:display-sort-function
                         (lambda (candidates)
                           (sort candidates
                                 :key (lambda (s)
                                        (length (split-string s "/"))))))))
                  (apply fn args)))))
```
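The depth-sorting trick is just a stable sort with a "count the separators" key. A minimal model of the same idea in Python (the outline paths are illustrative, not from the Org code):

```python
# Sort outline paths by depth, the way the refile advice counts path
# separators. Python's sort is stable, so entries at the same depth
# keep their original relative order, just as display-sort-function
# candidates do.
paths = ["Projects/Emacs/Completion", "Inbox", "Projects/Writing", "Journal"]
by_depth = sorted(paths, key=lambda s: s.count("/"))
assert by_depth == ["Inbox", "Journal", "Projects/Writing",
                    "Projects/Emacs/Completion"]
```

Counting separators gives the same ordering as `(length (split-string s "/"))` in the elisp version, since the split-string length is always one more than the separator count.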
Figure 2: Screenshot of sorted refile entries

In general, if you would like completion candidates to be in a certain order, you can specify `display-sort-function` either by calling `completing-read` with a collection that's a lambda function instead of a table of completion candidates, or by overriding it with `completion-category-overrides` if there's a category you can use, or `completion-extra-properties` if not.

Here's a short example of passing a lambda to a completion function (thanks to Manuel Uberti):

```elisp
(defun mu-date-at-point (date)
  "Insert current DATE at point via `completing-read'."
  (interactive
   (let* ((formats '("%Y%m%d" "%F" "%Y%m%d%H%M" "%Y-%m-%dT%T"))
          (vals (mapcar #'format-time-string formats))
          (opts (lambda (string pred action)
                  (if (eq action 'metadata)
                      '(metadata (display-sort-function . identity))
                    (complete-with-action action vals string pred)))))
     (list (completing-read "Insert date: " opts nil t))))
  (insert date))
```

If you use `consult--read` from the Consult completion framework, there is a `:sort` property that you can set to either nil or your own function.

This entry is part of the Emacs Carnival for Feb 2026: Completion.
This is part of my Emacs configuration. You can comment on Mastodon or e-mail me at sacha@sachachua.com.
-
🔗 r/Leeds Update on my ridiculous connection rss
I’m the idiot who booked this connection between the coach and train station. With a 3-minute delay from Preston, I am delighted to say I made it onto the coach, which I am currently writing this from. Thank you to whoever gave advice on the best way to execute this.
submitted by /u/Glittering_Yam_5613
[link] [comments] -
🔗 r/york Latest engineering improvement works for TRU between Leeds & York (via Crossgates), plus affecting trains between Leeds and Selby/Hull rss
| submitted by /u/CaptainYorkie1
[link] [comments]
---|--- -
🔗 r/Yorkshire Latest engineering improvement works for TRU between Leeds & York (via Crossgates), plus affecting trains between Leeds and Selby/Hull rss
| submitted by /u/CaptainYorkie1
[link] [comments]
---|--- -
🔗 r/Leeds Latest engineering improvement works for TRU between Leeds & York (via Crossgates), plus affecting trains between Leeds and Selby/Hull rss
submitted by /u/CaptainYorkie1
[link] [comments] -
🔗 vercel-labs/agent-browser v0.15.1 release
Patch Changes
7bd8ce9: Added support for chrome:// and chrome-extension:// URLs in navigation and recording commands. These special browser URLs are now preserved as-is instead of having https:// incorrectly prepended.
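The behavior described amounts to checking for a known scheme before normalizing. A rough sketch of that logic (illustrative only, not the actual agent-browser code; the scheme list and function name are assumptions):

```python
# Sketch of the navigation-URL normalization described above: leave
# URLs with a recognized scheme untouched, and only prepend https://
# to bare hostnames. Not the real agent-browser implementation.
KNOWN_SCHEMES = ("http://", "https://", "chrome://", "chrome-extension://")

def normalize_url(url):
    if url.startswith(KNOWN_SCHEMES):
        return url  # preserve special browser URLs as-is
    return "https://" + url

assert normalize_url("chrome://extensions") == "chrome://extensions"
assert normalize_url("example.com") == "https://example.com"
```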
-
🔗 r/LocalLLaMA American closed models vs Chinese open models is becoming a problem. rss
The work I do involves customers that are sensitive to nation state politics. We cannot and do not use cloud API services for AI because the data must not leak. Ever. As a result we use open models in closed environments.
The problem is that my customers don’t want Chinese models. “National security risk”.
But the only recent semi-capable model we have from the US is gpt-oss-120b, which is far behind modern LLMs like GLM, MiniMax, etc.
So we are in a bind: use an older, less capable model and slowly fall further and further behind the curve, or… what?
I suspect this is why Hegseth is pressuring Anthropic: the DoD needs offline AI for awful purposes and wants Anthropic to give it to them.
But what do we do? Tell the customers we’re switching to Chinese models because the American models are locked away behind paywalls, logging, and training data repositories? Lobby for OpenAI to do us another favor and release another open weights model? We certainly cannot just secretly use Chinese models, but the American ones are soon going to be irrelevant. We’re in a bind.
~~Our one glimmer of hope is StepFun-AI out of South Korea. Maybe they’ll save Americans from themselves.~~ I stand corrected: they’re in Shanghai.
Cohere are in Canada and may be a solid option. Or maybe someone can just torrent Opus once the Pentagon force Anthropic to hand it over…
submitted by /u/JockY
[link] [comments] -
🔗 r/wiesbaden Dobermann mit Biss-vergangenheit sucht dringend ein Erfahrenes Zuhause rss
submitted by /u/_thatkitten
[link] [comments] -
🔗 r/reverseengineering Reverse Engineering Garmin Watch Applications with Ghidra rss
submitted by /u/anvilventures
[link] [comments] -
🔗 r/Leeds Council tax rising again rss
BBC News - Leeds council tax to rise by 4.99% in April https://www.bbc.co.uk/news/articles/cjwz7x3jyllo
submitted by /u/RichieRichard12
[link] [comments] -
🔗 r/york Deed poll solicitor rss
can anyone recommend a solicitor or commissioner of oath to sign the forms for my sons name change?
submitted by /u/Tall_Reaction8859
[link] [comments] -
🔗 r/Leeds Grants for studio recording rss
This may sound like an odd question but does anyone know of any schemes for free or reduced recording time for musicians in Leeds or surrounding areas. First time recording a song on my own and costs are high especially as I'd be recording most of the parts myself so would maybe take upwards of 4 hours. I'm not a youth so student schemes wouldn't be appropriate. If anyone has any leads I would be very grateful. Thank you.
submitted by /u/Intelligent-Deer5667
[link] [comments] -
🔗 r/Yorkshire What part of Yorkshire feels the most authentic? rss
If someone wanted to visit Yorkshire, not just the tourist spots, where would you send them? Places that genuinely reflect the culture, history, and everyday life of the area.
submitted by /u/goxper
[link] [comments] -
🔗 r/Leeds Tennis meet up rss
Hi All,
I want to start playing tennis more but I have no one to play with other than my cousin who isn’t always available. Just wondering if anyone would fancy meeting up throughout the year?
I’m 25f, can hold a rally and have a spare racket and balls.
I live in south Leeds and would potentially be open to travelling further to meet.
If anyone is up for it, let me know :)
submitted by /u/BlendedStuff_NThangs
[link] [comments] -
🔗 r/Leeds Doorstep Scam LS8 rss
Just be aware, a doorstep scammer is doing the rounds in LS8 preying on the elderly and intimidating them for cash payment. A 95 year old relative was presented with this £400 fake “invoice” for alleged chimney removal. No work was done whatsoever, we had the chimney inspected by a professional this morning and they have confirmed that the conman appears to have used his bare hands to smear some sort of cement on the chimney, which had no purpose.
The police have been informed and visited fairly quickly, and fortunately no money was exchanged. The conman did send his sidekick round to try for payment again. We’ve now installed a camera and contacted the conman via his mobile number to have a few words. I don’t think he will be back but I doubt it will stop him trying it with others. My relative has an outdoor key safe installed, which was probably how the conman knew a vulnerable/elderly person lives in the property, so just to be aware, don’t entertain these people or agree to let them have a look at the roof.
submitted by /u/PollyAnais
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: -2 plugins, +3 releases, -3 releases rss
sync repo: -2 plugins, +3 releases, -3 releases
## New releases
- [sharingan](https://github.com/n0pex3/sharingan): 1.0.0
- [showcomments](https://github.com/merces/showcomments): 0.5.1, 0.5.0
## Removed plugins
- Sharingan
- ShowComments -
🔗 r/Yorkshire Rank the eight cities of Yorkshire (or top 3 will do) rss
If not mistaken (which I often am)
The eight cities of Yorkshire include Ripon, which I did not know was a city.
What order would you rank them (best to worst) from your experience on visiting, living, socialising etc
You can even explain your reasoning :)
submitted by /u/WearingMarcus
[link] [comments] -
🔗 r/Yorkshire Be honest, which Yorkshire stereotype is actually true? rss
We all joke about being tight with money, fiercely proud, and stubborn. But some stereotypes exist for a reason. I’ll start: the "won’t travel more than 30 minutes for anything" one feels painfully accurate. Which one do you secretly think is spot on?
submitted by /u/PubLogic
[link] [comments] -
🔗 r/Harrogate best dentist in harrogate? rss
Hi all,
Bit of a random one, but can anyone recommend a good dentist in Harrogate? I’ve just moved back near Cold Bath Road and realised I haven’t had a proper check-up in… well, longer than I should admit. I might need Invisalign too, but I’m still undecided.
I’m not terrified of the dentist, just get a bit tense, so I’m looking for somewhere that actually explains things and doesn’t rush you in and out in 10 minutes.
There seem to be loads of practices locally, and it’s hard to tell which ones are genuinely good versus just having nice websites.
Any real experiences would be massively appreciated!
submitted by /u/Latter_Ordinary_9466
[link] [comments] -
🔗 r/Harrogate Knaresborough Tourist Guide rss
If you're visiting Knaresborough, I wrote this short tourist guide that might help. Feel free to add to it or ask any questions!
submitted by /u/No_Nose_3849
[link] [comments] -
🔗 Cryptography & Security Newsletter Messaging Encryption Has Come a Long Way, but Falls Short rss
We’ve had a pretty good couple of years when it comes to messaging security. Initially, adopting encryption stopped passive surveillance. Later, adoption of end-to-end encryption by the dominant platforms gave us much needed privacy. Some platforms, such as Apple and Signal, even led the way when it comes to resilience against cryptographically relevant quantum computers. Compare this situation to the poor state of email encryption, and the difference is like night and day. Despite this, some structural problems remain, and we’re even in danger of regressing.
-
🔗 r/Leeds Best roast in Leeds? rss
Heading to Leeds soon with a few mates and looking to book a Sunday roast, but unsure of where is best. Any recommendations?
submitted by /u/No-Living-6949
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits ci: add actions/cache to persist caches between runs rss
ci: add actions/cache to persist caches between runs -
🔗 r/york York Minster: W rss
| submitted by /u/Julija82
[link] [comments]
---|--- -
🔗 Project Zero A Deep Dive into the GetProcessHandleFromHwnd API rss
In my previous blog post I mentioned the `GetProcessHandleFromHwnd` API. This was an API I didn’t know existed until I found a publicly disclosed UAC bypass using the Quick Assist UI Access application. This API looked interesting, so I thought I should take a closer look.

I typically start by reading the documentation for an API I don’t know about, assuming it’s documented at all. It can give you an idea of how long the API has existed as well as its security properties. The documentation’s remarks contain the following three statements that I thought were interesting:
If the caller has UIAccess, however, they can use a windows hook to inject code into the target process, and from within the target process, send a handle back to the caller.
GetProcessHandleFromHwnd is a convenience function that uses this technique to obtain the handle of the process that owns the specified HWND.
Note that it only succeeds in cases where the caller and target process are running as the same user.
The interesting thing about these statements is that none of them are completely true. Firstly, as the previous blog post outlined, it’s not sufficient to have UI Access enabled to use windows hooks; you need to have the same or greater integrity level as the target process. Secondly, if you go and look at how `GetProcessHandleFromHwnd` is implemented in Windows 11, it’s a Win32k kernel function which opens the process directly, not using windows hooks. And finally, the fact that the Quick Assist bypass which uses the API still works with Administrator Protection means the processes can be running as different users.
The First Version
The first version of the API exists in Vista, implemented in the `oleacc.dll` library. The documentation claims it was supported back in Windows XP, but that makes little sense for what the API was designed for. Checking a copy of the library from XP SP3 doesn’t show the API, so we can assume the documentation is incorrect. The API first tries to open the process directly, but if that fails it’ll use a windows hook exactly as the documentation described.

The `oleacc.dll` library with the hook will be loaded into the process associated with the window using the `SetWindowsHookEx` API and specifying the thread ID parameter. However it still won’t do anything until a custom window message, `WM_OLEACC_HOOK`, is sent to the window. The hook function is roughly as follows (I’ve removed error checking):

```c
void HandleHookMessage(CWPSTRUCT *cwp) {
    UINT msg = RegisterWindowMessage(L"WM_OLEACC_HOOK");
    if (cwp->message != msg)
        return;

    // Build the name of the shared section from the caller's PID and a counter.
    WCHAR name[64];
    StringCchPrintf(name, _countof(name), L"OLEACC_HOOK_SHMEM_%d_%d",
                    cwp->wParam, cwp->lParam);
    HANDLE mapping = OpenFileMapping(FILE_MAP_READ | FILE_MAP_WRITE, FALSE, name);
    DWORD* buffer = (DWORD*)MapViewOfFile(mapping, FILE_MAP_READ | FILE_MAP_WRITE,
                                          0, 0, sizeof(DWORD));

    // Open the caller to duplicate into, then a limited-access handle to ourselves.
    HANDLE caller = OpenProcess(PROCESS_DUP_HANDLE, FALSE, cwp->wParam);
    HANDLE current = OpenProcess(PROCESS_DUP_HANDLE | PROCESS_VM_OPERATION |
                                 PROCESS_VM_READ | PROCESS_VM_WRITE | SYNCHRONIZE,
                                 FALSE, GetCurrentProcessId());
    HANDLE dup;
    DuplicateHandle(GetCurrentProcess(), current, caller, &dup,
                    0, 0, DUPLICATE_SAME_ACCESS);

    // Publish the duplicated handle value for the caller to pick up.
    InterlockedExchange(buffer, (DWORD)dup);
    // Cleanup handles etc.
}
```

The message parameters are the process ID of the caller, who wants to open the process handle, and an incrementing counter. These parameters are used to open a named memory section to transfer the duplicated handle value back to the caller. A copy of the current process handle is then opened with a limited set of access rights and duplicated to the caller. Finally the handle value is copied into the shared memory and the message handler returns. The caller of the API can now pick up the duplicated handle and use it as desired.
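To make the rendezvous clearer, the shape of the exchange can be modeled in portable C. This is only a sketch: the named section is reduced to a global slot, the handle value `0x1c8` is made up, and `caller_request`/`hook_side` are my own names, not anything exported by oleacc.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the section name both sides agree on:
   OLEACC_HOOK_SHMEM_<caller pid>_<counter>. */
static void build_section_name(char *out, size_t len,
                               unsigned pid, unsigned counter) {
    snprintf(out, len, "OLEACC_HOOK_SHMEM_%u_%u", pid, counter);
}

/* Stand-ins for the named section: its name and one 32-bit slot. */
static char g_name[64];
static unsigned g_slot;

/* Caller side: create the section, "send" WM_OLEACC_HOOK, read the result. */
static unsigned caller_request(unsigned my_pid, unsigned counter,
                               void (*hook)(unsigned, unsigned)) {
    build_section_name(g_name, sizeof(g_name), my_pid, counter);
    hook(my_pid, counter);  /* models SendMessage with the WM_OLEACC_HOOK message */
    return g_slot;          /* the duplicated handle value */
}

/* Hook side: derive the same section name from the message parameters and
   publish a handle value into it. */
static void hook_side(unsigned wparam, unsigned lparam) {
    char name[64];
    build_section_name(name, sizeof(name), wparam, lparam);
    assert(strcmp(name, g_name) == 0);  /* both sides derived the same name */
    g_slot = 0x1c8;                     /* models InterlockedExchange(buffer, dup) */
}
```

The key design point is that nothing but the section name is shared up front; the PID and counter in the message parameters are enough for the hook to find the caller’s section.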
This code might explain a few additional things about the API documentation. If the two processes are running as different users it’s possible that the target process won’t be able to open the caller for `PROCESS_DUP_HANDLE` access and the transfer will fail. While the API does set the integrity level of the shared memory, it doesn’t set the DACL, so that will also prevent it being opened by a different user. Of course if the target process was running as an administrator, like in the UAC case, it almost certainly will have access to both the caller process and the shared memory, making this a moot point.

One minor change was made in Windows 7: the hook function was moved out of the main `oleacc.dll` library into its own binary, `oleacchooks.dll`. The hook function is exposed as ordinal 1 in the export table with no name. This DLL still exists on the latest version of Windows 11 even though the API has since moved into the kernel and there are no longer any users.

The Second Version
The second version of the API doesn’t appear until well into Windows 10’s lifetime, in version 1803. This version is where the API was moved into a Win32k kernel function. The kernel API is exposed as `NtUserGetWindowProcessHandle` from `win32kfull.sys`. It’s roughly implemented as follows:

```c
HANDLE NtUserGetWindowProcessHandle(HWND hWnd, ACCESS_MASK DesiredAccess) {
    WND* wnd = ValidateHwnd(hWnd);
    if (!wnd) {
        return NULL;
    }

    THREADINFO* curr_thread = W32GetThreadWin32Thread(KeGetCurrentThread());
    THREADINFO* win_thread = wnd->Thread;
    if (curr_thread->Desktop != win_thread->Desktop) {
        goto access_denied;
    }

    PROCESSINFO* win_process = win_thread->ppi;
    PROCESSINFO* curr_process = curr_thread->ppi;
    if (gbEnforceUIPI) {
        if (!CheckAccess(curr_process->UIPIInfo, win_process->UIPIInfo)) {
            if (!curr_process->HasUiAccessFlag) {
                goto access_denied;
            }
        }
    } else if (win_thread->AuthId != curr_thread->AuthId) {
        goto access_denied;
    }

    if (win_thread->TIF_flags & (TIF_SYSTEMTHREAD | TIF_CSRSSTHREAD)) {
        goto access_denied;
    }

    KPROCESS* process = NULL;
    DWORD process_id = PsGetThreadProcessId(win_thread->KThread);
    PsLookupProcessByProcessId(process_id, &process);
    HANDLE handle = NULL;
    // Note: KernelMode means no access check is performed on the open.
    ObOpenObjectByPointer(process, 0, NULL, DesiredAccess,
                          PsProcessType, KernelMode, &handle);
    return handle;

access_denied:
    UserSetLastError(ERROR_ACCESS_DENIED);
    return NULL;
}
```

One thing to note with the new API is it takes an
`ACCESS_MASK` to specify what access the caller wants on the process handle. This is different from the old implementation where the access desired was a fixed value. The window handle is validated and used to look up the Win32k `THREADINFO` structure for the associated thread, and a check is made to ensure both the caller’s thread and the target window are on the same desktop.

We then get to the UIPI enforcement checks. First it checks the `gbEnforceUIPI` global variable. If UIPI is enabled it’ll call a `CheckAccess` method to see if the caller is permitted to access the process for the target window. If the check fails it’ll test if the caller has the UI Access flag enabled; if not the function will deny access, otherwise it’ll be allowed to continue. The access check is quite simple:

```c
BOOLEAN CheckAccess(UIPI_INFO *Current, UIPI_INFO *Target) {
    if (Current->IntegrityLevel > Target->IntegrityLevel) {
        return TRUE;
    }
    if (Current->IntegrityLevel != Target->IntegrityLevel) {
        return FALSE;
    }
    // Same integrity level: two AppContainers must be the same container.
    if (Current->AppContainerNo != Target->AppContainerNo &&
        Current->AppContainerNo != -1 && Target->AppContainerNo != -1) {
        return FALSE;
    }
    return TRUE;
}
```

If the caller’s integrity level is greater than the target’s, the check is passed immediately. If it’s less than the target’s then it fails immediately. However, if the integrity level is the same it checks whether the processes are in an AppContainer sandbox and that they’re in the same one. If a process is not in an AppContainer sandbox the `AppContainerNo` value is set to -1. The check also ensures that this doesn’t allow a low integrity process access to an AppContainer process, as there’s an existing check to prevent this happening via `OpenProcess`. If everything passes, the check returns TRUE.

If UIPI is not enforced then the authentication IDs are compared. The function will only permit access if the caller is in the same logon session, which means that even with UIPI disabled this wouldn’t permit accessing elevated UAC processes. The final check is whether the target thread is in the system (i.e. kernel) process or a CSRSS process. If it is, then access is denied.
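The decision logic of `CheckAccess` can be modeled in portable C. This is a sketch: plain ints stand in for the kernel’s `UIPI_INFO`, and the integrity values are the usual RID-style constants, not anything read from a real token.

```c
#include <assert.h>

#define NO_APP_CONTAINER -1

typedef struct {
    int IntegrityLevel;   /* e.g. low = 0x1000, medium = 0x2000, high = 0x3000 */
    int AppContainerNo;   /* container number, or NO_APP_CONTAINER */
} uipi_info;

/* Mirrors the CheckAccess logic above: higher integrity always wins, lower
   always loses, and at equal integrity two AppContainers must be the same. */
static int uipi_check_access(const uipi_info *current, const uipi_info *target) {
    if (current->IntegrityLevel > target->IntegrityLevel)
        return 1;
    if (current->IntegrityLevel != target->IntegrityLevel)
        return 0;
    if (current->AppContainerNo != target->AppContainerNo &&
        current->AppContainerNo != NO_APP_CONTAINER &&
        target->AppContainerNo != NO_APP_CONTAINER)
        return 0;
    return 1;
}
```

A medium-to-medium open passes, a low-integrity caller against a medium-integrity window fails (the case where the UI Access fallback in `NtUserGetWindowProcessHandle` kicks in), and two different AppContainers at the same integrity fail.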
Finally, the target process is opened by its process ID: the `KPROCESS` pointer is looked up, then `ObOpenObjectByPointer` is used to open a handle with the desired access. Crucially, the access mode is set to `KernelMode`. This means that no access checks are performed on the process object.

One glaring security issue with this function is that the target process is opened without access checking for any access rights the caller wants. This is a problem as it allows any process with the same or higher integrity level to open any other process, as long as it has at least one window.
This is a special problem for two process types. The first is restricted token sandbox processes. While you might assume it wouldn’t be a big deal if two restricted token sandboxed processes running at the same integrity could access each other, that isn’t always the case. For example, Chromium doesn’t allow renderers to open each other, and some renderers have more privilege than others, for example if they’re rendering WebUI content. Fortunately, at least in this case, renderers run under win32k lockdown, meaning they can’t create a window even if they wanted to.
The second is protected processes. If you open a handle to a protected process with the access mode set to `KernelMode` then it’ll be permitted, completely bypassing the protection. You might not think a protected process would create a window, but it could be a message-only window, such as to support COM, which the code might not even realize it created.

However, even if the caller doesn’t have a suitable integrity level, it’s sufficient to just have the UI Access flag enabled. This means that tricks such as my token stealing attack would be sufficient to open any other process on the same desktop which created a window. This issue was reported to MSRC and fixed as CVE-2023-41772. The reporter was the same researcher, Sascha Mayer, who found the Quick Assist UI Access bypass that I mentioned earlier.
The Third Version
This version’s goal was to fix CVE-2023-41772 and there are two major changes. First and most importantly, if the UIPI check fails, the function will still check for the UI Access flag being enabled. However, rather than permitting it to continue, it’ll force the call to `ObOpenObjectByPointer` to open a handle with the access mode set to `UserMode` rather than `KernelMode`.

Passing `UserMode` ensures that access checking is enabled. The end result is that having the UI Access flag enabled doesn’t grant any additional privileges over calling the `NtOpenProcess` system call directly. Presumably it was left this way for compatibility reasons. However, this didn’t change the behavior when the caller’s integrity level is greater than or equal to the target’s; the process object will still be opened with the access mode set to `KernelMode`. This means that when it comes to restricted token sandboxes or protected processes nothing has changed.

The second, less important change is that the desired access is now restricted to a limited set of access rights matching the original hook-based implementation. The caller can only pass the following access to the function,
`PROCESS_DUP_HANDLE`, `PROCESS_VM_OPERATION`, `PROCESS_VM_READ` and `PROCESS_VM_WRITE`; otherwise access is denied. However, this amount of access is more than sufficient to completely compromise the target process.

The Latest Version
Windows 11 24H2 introduced two major changes to the behavior of `NtUserGetWindowProcessHandle`. First there is a change to the UIPI access check; let’s look at a code snippet:

```c
BOOLEAN UIPrivilegeIsolation::CheckAccess(UIPI_INFO *Current, UIPI_INFO *Target) {
    if (!Feature_UIPIAlwaysOn_IsEnabled() &&
        !UIPrivilegeIsolation::fEnforceUIPI) {
        return TRUE;
    }
    // New: the target must be unprotected or match the caller's protection.
    if (Target->ProcessProtection != 0 &&
        (Target->ProcessProtection != Current->Protection)) {
        return FALSE;
    }
    if (Current->IntegrityLevel > Target->IntegrityLevel) {
        return TRUE;
    }
    ...
}
```

The change introduces a Windows feature flag to force UIPI on all the time; previously it was possible to disable UIPI using a system configuration change. A feature flag allows Microsoft to run A/B testing on Windows systems; it likely means that they want to enable UIPI permanently in the future.
The kernel driver also captures the process protection as part of the UIPI information and checks that either the target is unprotected or the caller has a matching protection level. This stops the previous attack that used `NtUserGetWindowProcessHandle` to open a protected process.

One weakness in this check is that it doesn’t use the comparison the kernel uses to determine whether one protection level supersedes another. While that’s good in a way, there is a slight mistake. There’s a PPL App level that’s designed so that other processes at the same level can’t open one another. This behavior is presumably because the PPL App level was designed to be used by third-party applications from the Windows Store. The implemented check would allow one PPL App process to open another. Of course, you’d still need to get code execution in a PPL App process to begin with, so this doesn’t seem a major issue.
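That quirk is easy to see if you model the new protection comparison in portable C. This is a sketch; the protection values are illustrative and not the kernel’s real protection-level encoding.

```c
#include <assert.h>

/* Illustrative protection levels; 0 means unprotected. */
enum { UNPROTECTED = 0, PPL_APP = 1, PPL_WINDOWS = 2, PP_WINTCB = 3 };

/* Mirrors the 24H2 check: an unprotected target is always fine, otherwise the
   caller's protection must match the target's exactly. Note this is a simple
   equality test rather than the kernel's dominance ordering, so two PPL App
   processes pass even though PPL App is meant to block same-level opens. */
static int protection_check(int caller, int target) {
    if (target != UNPROTECTED && target != caller)
        return 0;
    return 1;
}
```

Under this model `protection_check(PPL_APP, PPL_APP)` passes, which is exactly the same-level open the PPL App level is supposed to forbid.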
It’s important to note that the protection check is ignored if UIPI is disabled at a system level. Therefore, if you’re willing to reboot the system and have administrator access, you can disable UIPI by setting an `EnforceUIPI` DWORD registry value to 0 inside the key `HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System`. You might also need to disable the `UIPIAlwaysOn` feature flag; you can do that using a tool like ViVe, running the command `ViveTool.exe /disable /id:56625134` as an administrator and rebooting the machine.

The second major change is in `NtUserGetWindowProcessHandle`. The function now has two paths controlled by a feature flag, `ResponsiblePid`. If the feature flag is disabled it takes the old path, but if it’s enabled it calls a new function, `GetWindowProcessHandleUnsafe`. Ironically, contrary to the name, this seems to be a safer version of the API.

The big change here is that to open a process the caller must have the UI Access flag enabled. Calling the API without the UI Access flag will give an access denied error. Also, if you disable UIPI at the system level the API will return access denied; it won’t fall back to an insecure mode of operation. At least on my 25H2 VM the `ResponsiblePid` feature flag is always enabled, but I could just be subject to A/B testing.

To open the process with `KernelMode` access you’ll still need to pass the UIPI check, and as you can’t short-circuit the check by disabling enforcement, this blocks opening protected processes. Therefore on the latest versions of Windows 11, to access a protected process you not only need to disable UIPI and the `UIPIAlwaysOn` feature flag but also the `ResponsiblePid` feature flag to get back to the old implementation. The `ResponsiblePid` feature flag ID is `56032228` if you want to disable it with ViVe. This of course requires administrator access and rebooting the machine; it might just be easier to load a kernel driver.

Hijacking a TCB level Protected Process
Assuming you’re still running Windows 10 (where this will likely be a forever bug), a pre-24H2 Windows 11 (23H2 Enterprise/Education is still supported until November 2026), or have fully disabled UIPI, we can now use `GetProcessHandleFromHwnd` to compromise a protected process.

Ideally we want to get the highest level, `Protected TCB`, to allow us to then open any other user process on the system regardless of its protection state. How do we get a process running at `Protected TCB` level to create a window we can use to open the process handle? I’ve already described how to do this in a previous blog post back in 2018 on hijacking a protected process through the use of the COM `IRundown` interface.

Specifically, it was possible to force `WerFaultSecure.exe` running at `Protected TCB` level to initialize a COM single-threaded apartment (STA). This allowed access to the `IRundown` interface but, more importantly for our purposes, a STA also sets up a message-only window with the `OleMainThreadWndClass` class, which is used for posting calls back to the apartment thread.

However, it turns out to be even easier, as we no longer need to force COM to initialize. `WerFaultSecure.exe` will create a number of windows automatically during normal operation. First you need to run the process at the protected level in “upload” mode, using the following command line:

```
WerFaultSecure.exe -u -p {PID} -ip {PARENT_PID} -s {SECTION_HANDLE}
```

Replace
`PID` with the process ID of a dummy process to debug, `PARENT_PID` with your current process ID, and `SECTION_HANDLE` with a handle to a shared memory section containing the following 32-bit integers: `0xF8`, `PID` and `TID`, where PID and TID are the process ID and thread ID of the dummy debug process. This section handle must be inherited into the new process at creation time.

Next you need to find the created window, but that’s easy. Just enumerate windows using the `FindWindowEx` API. For each window you can look up the PID using `GetWindowThreadProcessId` and match it against the created protected process. You might need to use something like an opportunistic lock to suspend the `WerFaultSecure.exe` process after it has created the window, to give you time to enumerate them.

The final step is to call `GetProcessHandleFromHwnd` with the found window handle, and you should get back a process handle with `PROCESS_DUP_HANDLE`, `PROCESS_VM_OPERATION`, `PROCESS_VM_READ`, `PROCESS_VM_WRITE` and `PROCESS_QUERY_LIMITED_INFORMATION` access. Typically with this access I’d duplicate a copy of the current process pseudo handle to get a full access handle. However, due to the way protected processes work this will fail, as the protection checks cover both opening the process directly and duplicating the handle.

Therefore, this is all the access you’re going to get. While you can’t just create a new thread in the process, it gives you sufficient access to allocate and modify executable memory, so a simple attack would be to write some shellcode into the process and modify an existing jump to execute the code. I’ll leave the final exploitation as an exercise for the reader. Alternatively, Sascha Mayer has published a PoC, after I had posted a screenshot of my version’s console output, that you can play with instead.
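The layout of the shared section handed to WerFaultSecure.exe (the three 32-bit integers 0xF8, PID, TID described above) can be sketched in plain C. The struct and function names here are my own, purely for illustration:

```c
#include <stdint.h>
#include <string.h>

/* My own naming for the three 32-bit values the section must contain. */
typedef struct {
    uint32_t magic;      /* 0xF8 for "upload" mode */
    uint32_t process_id; /* PID of the dummy debug process */
    uint32_t thread_id;  /* TID of the dummy debug process */
} upload_section;

/* Fill a 12-byte buffer the way the section would be populated before the
   handle is inherited into the WerFaultSecure.exe process. */
static void fill_upload_section(uint8_t *buf, uint32_t pid, uint32_t tid) {
    upload_section s = { 0xF8, pid, tid };
    memcpy(buf, &s, sizeof(s));
}
```

Three consecutive `uint32_t` fields have no padding, so the section contents are exactly 12 bytes.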
Conclusions
In conclusion, the `GetProcessHandleFromHwnd` function is quite interesting in how it’s evolved over the years. The first version, using windows hooks, was actually secure against accessing protected processes, as you can’t duplicate a process handle with access rights such as `PROCESS_VM_READ` from a protected process to a non-protected process. However, it was decided it’d be better to do it all in kernel mode, and the check for protected processes was forgotten.

Finally, in Windows 11 24H2, along with a general shake-up of UIPI, this seems to be fixed, and the function is also no longer quite so dangerous. Time will tell if at least some of the changes, like making UIPI permanent, come to pass.
-
🔗 r/LocalLLaMA Qwen3.5 122B in 72GB VRAM (3x3090) is the best model available at this time — also it nails the “car wash test” rss
I am absolutely loving Qwen3.5 122B! It’s the best model I can run on my 72GB VRAM setup, fully loaded on GPU including context. Very good speed at 25 tok/s. Fiddled a bit with the settings to get it to work properly. If you are experiencing endless “but wait” loops, this is what worked for me:
- Thinking mode on
- Temperature 0.6
- K Sampling 20
- Top P sampling 0.8
- Min P sampling 0
- Repeat penalty 1.3
Running it in Q3_K it’s a bit slower than GLM Air (30 t/s in IQ4_NL) and GPT- OSS-120B (30-38 t/s in MXFP4), but because it has a smaller footprint in Q3 I am able to push the context to 120k which is great! I tried both MXFP4 and IQ4_XS, but they are too close to 70GB when loaded, forcing me to offload 2-3 layers to RAM or context in RAM — dropping to only 6-8 tok/s. Saw on unsloth website that Q3_K_XL might actually perform on par with the 4bit ones, and I can confirm so far it’s been amazing! submitted by /u/liviuberechet
[link] [comments]
-
🔗 r/york Micklegate Bar, York’s historic western gateway✨ rss
@ york.england submitted by /u/WonderfulShape1081
[link] [comments]
-
🔗 r/Yorkshire LiveScience: "Babies weren't supposed to be mourned in the Roman Empire; These rare liquid-gypsum burials prove otherwise" rss
submitted by /u/JapKumintang1991
[link] [comments]
-
🔗 HexRaysSA/plugin-repository commits sync repo: ~1 changed rss
sync repo: ~1 changed

## Changes
- [bitopt](https://github.com/teflate/bitopt):
  - 1.0.0: archive contents changed, download URL changed -
🔗 Console.dev newsletter Dozzle rss
Description: Container monitoring & logging.
What we like: Captures logs from Docker, k8s, Podman. Self-hosted. Set up alerts (Slack, Discord, webhooks) for search terms. Use SQL queries to analyze logs.
What we dislike: Good for self-hosted personal projects - their cloud service may be more suitable for important production environments.
-
🔗 Console.dev newsletter Rari rss
Description: React framework with Rust runtime.
What we like: Standard React, but with the HTTP server, RSC renderer, and routing handled by a v8 Rust runtime. True server-side rendering means much faster response times. Standard npm package resolution.
What we dislike: Still very early and experimental, but the performance benefits are interesting.
-