to read (pdf)
- Letting AI Actively Manage Its Own Context
- Garden Offices for Sale UK - Portable Space
- Cord: Coordinating Trees of AI Agents | June Kim
- Style tips for less experienced developers coding with AI · honnibal.dev
- Haskell for all: Beyond agentic coding
- March 06, 2026
-
r/Yorkshire Join me on a hike through a hidden pocket of beauty in West Yorkshire. From Ferrybridge to Brotherton, Fairburn, Ledsham and Ledston. Let me know your thoughts! rss
-
r/Leeds Has anyone else had the pleasure of sharing a bus with the singing man?? rss
submitted by /u/throwawayjinkie
-
r/Yorkshire Barn venues rss
Hi all, looking for venues, preferably barn or with a barn look, that you can hire for the day? Doesn't need to be anything fancy and it doesn't need to include too much, just need somewhere to host a celebration. Thanks!
submitted by /u/AngusWtf
-
r/reverseengineering Reverse-engineered the WiFi transfer protocol for HeyCyan smart glasses (BLE + USR-W630 WiFi module) — first iOS implementation rss
submitted by /u/CockroachLow3274
-
r/wiesbaden Getting black-and-white photos developed rss
Does anyone know where I can get a black-and-white film (Kentmere Pan 400) from an analog film camera developed? I already know about Rossmann & Co. I'm looking for somewhere that isn't too expensive but still produces good photos. Someone once recommended Foto Express in FFM to me; I'm looking for something comparable here. Thanks
submitted by /u/DocterSkinny
-
@binaryninja@infosec.exchange If you are at RE//verse, you can find the Binary Ninja Booth in the RE//fresh mastodon
If you are at RE//verse, you can find the Binary Ninja Booth in the RE//fresh lounge! We will be running live demos and handing out Binja swag. Come say hey and sign our banner! Not in Orlando this week? We will be streaming at 3 PM ET live from RE//verse: https://youtube.com/live/bW-oz1UVkCM?feature=share
-
facebookresearch/faiss v1.14.1 release
[1.14.0] - 2026-03-06
Added
- 6cf2c63 Update SVS to v0.2.0 and install from conda forge (#4860)
- 1cb3d46 feat(svs): LeanVec OOD support (#4773)
- 5cf2c42 Expose IndexBinaryFlat to the C API. (#4834)
- db9ba35 add hadamard transformation as an index for IVF (#4856)
Changed
- c8579de Try to force relative import statement (#4878)
- 471ddad Increment to next release, v1.14.1 (#4861)
- c90c9dc Update python to include 3.13 and 3.14 (#4859)
- 8af77fe SIMD-optimize multi-bit RaBitQ inner product (#4850)
- ccc934f ScalarQuantizer: split SIMD specializations into per-SIMD TUs + DD dispatch (#4839)
Fixed
- 5622e93 (HEAD -> main, origin/main, origin/HEAD) v1.14.1 Fix build-release (#4876)
- 8431e04 Replace squared-distance IP with direct dot-product in multi-bit RaBitQ (#4877)
- 28f79bd Fix SWIG 4.4 multi-phase init: replace import_array() with import_array1(-1) (#4846)
-
r/Yorkshire Transit Options in Yorkshire Dales Park? rss
Hi friends, I'll be in Hawes this summer and depend mostly on Google maps to get me around via public transit. I'd like to go eastward for a few days (e.g., Ripon, Whitby), but Google shows only rail options that connect through York (e.g., Hawes -> York -> Whitby). Curious if there are additional transit options that offer more direct routes westward or eastward. Appreciate it!
submitted by /u/minpaul
-
News Minimalist — Weight loss drugs fight multiple addictions + 12 more stories rss
In the last 4 days ChatGPT read 122,438 top news stories. After removing previously covered events, there are 13 articles with a significance score over 5.5.
[6.5] GLP-1 drugs may fight addiction across every major substance, according to a study of 600,000 people — theconversation.com (+30)
A study of 600,000 people found that GLP-1 drugs significantly reduce cravings, overdoses, and deaths across multiple addictions, including opioids and alcohol, marking a potential breakthrough in addiction medicine.
Researchers observed a 50% reduction in substance-related deaths among users already struggling with addiction. The drugs also lowered the risk of developing new dependencies on nicotine and cocaine by roughly 20%, likely by dampening dopamine signaling in the brain's reward centers.
While not yet approved specifically for addiction, GLP-1 medications are already widely prescribed for diabetes and obesity. Ongoing clinical trials aim to confirm these findings and address questions regarding long-term effectiveness.
[5.8] Iran grants China exclusive passage through the Strait of Hormuz — ndtv.com (+110)
Iran will now permit only Chinese vessels to navigate the Strait of Hormuz, rewarding Beijing's support during the regional conflict and further threatening critical global energy supply chains.
The Islamic Revolutionary Guard Corps claims full control of the chokepoint, warning that non-Chinese ships face missile or drone strikes. This blockade impacts regional neighbors like Qatar and the UAE while disrupting twenty percent of the world's total oil supply transit.
Beijing previously condemned Western military actions against Iran as unacceptable. Meanwhile, the United States government maintains that military escorts may be deployed to prevent domestic inflation and protect the international flow of commerce.
Highly covered news with significance over 5.5
[6.6] Evo 2: An AI model for genome prediction and design across all life — nature.com (+6)
[6.1] France expands nuclear arsenal and strengthens European defense cooperation — bostonglobe.com (+29)
[5.9] AI blood test detects silent liver disease years before symptoms — sciencedaily.com (+3)
[5.8] Indonesia bans social media for children under 16 — abcnews.com (+45)
[5.7] US forces support Ecuador's fight against drug trafficking organizations — bostonglobe.com (+29)
[5.7] China sets slowest growth target since 1991, focusing on tech and domestic demand — abcnews.com (+49)
[5.5] New study reveals underestimated sea level rise threatens millions more people — abcnews.com (+14)
[5.5] Lawsuit claims Google Gemini AI gave dangerous instructions leading to a man's suicide — time.com (+34)
[5.5] New treatment is reducing seizure frequency in children by 91% — ndtv.com (+11)
[5.8] Japan approves world's first stem cell treatment for Parkinson's and heart failure — nippon.com (+6)
[5.8] BYD introduces new battery technology with over 600 miles of range and rapid charging — fastcompany.com (+3)
Thanks for reading!
— Vadim
You can set up and personalize your own newsletter like this with premium.
-
r/york York shot on my cheap little point and shoot film camera :) rss
Some photos I shot a little while back in your beautiful city!
submitted by /u/Organic_Repair8717
-
r/york Live music? rss
Looking for a few recommendations for live bands/music in the city centre. My girlfriend and I are here for the weekend. Any recommendations are greatly appreciated, thanks!
submitted by /u/stayant1
-
r/Harrogate Best way to travel to London rss
submitted by /u/Apprehensive_Ring666
-
r/Leeds Ex Starbucks, Chapel Allerton, What Next rss
Hello
I see the Ex Starbucks, Chapel Allerton, is under offer. Anybody know who's moving in? Big building to fill.
submitted by /u/renlauo
-
r/reverseengineering VirusTotal but free rss
submitted by /u/el_mulon
-
r/york Loft conversion recommendations rss
Hiya lovely people of York - happy Friday!
Looking to get our mid terrace house loft converted - we got very stung by a plumber we found through checkatrade and have had problems finding roofers in the past, so the main thing stopping me is worry about getting the wrong people in!
Anyone got recommendations? (Also rough cost if you don't mind sharing) - we're looking to go as simple as possible, no dormer or bathroom!
submitted by /u/AutumnDream1ng
-
r/reverseengineering My journey through Reverse Engineering SynthID rss
submitted by /u/Available-Deer1723
-
r/reverseengineering My journey through Reverse Engineering SynthID rss
submitted by /u/MissAppleby
-
r/Yorkshire Fountains Abbey, Ripon, Yorkshire rss
submitted by /u/mdbeckwith
-
jank blog jank is off to a great start in 2026 rss
Hey folks! We're two months into the year and I'd like to cover all of the progress that's been made on jank so far. Before I do that, I want to say thank you to all of my GitHub sponsors, as well as Clojurists Together, for sponsoring this whole year of jank's development!
-
- March 05, 2026
-
IDA Plugin Updates IDA Plugin Updates on 2026-03-05 rss
New Releases:
Activity:
- capa
- ghidra
- a7a795b3: Merge remote-tracking branch 'origin/Ghidra_12.1'
- 4e4674be: Merge branch 'GP-6537_ryanmkurtz_PR-1905_mduggan_phar_lap_ne_support'
- 0351dc99: GP-6537: Certify
- 6fa0ddbc: Support large (>2^16) offset to exe file NE header
- f466bb00: Merge remote-tracking branch 'origin/Ghidra_12.1'
- d374989a: Merge remote-tracking branch 'origin/GP-6536_ghidragon_null_ptr_excep…
- 5e46aa4e: Merge remote-tracking branch 'origin/GP-0-dragonmachre-enum-test-fix'…
- 9d55f0d8: Test fix
- ida-edit-function-prototype
- ida-pro-mcp
- b160449c: Merge pull request #252 from baoan7090/baoan7090-patch-1
- 4d613d0a: Merge pull request #262 from haosenwang1018/fix/bare-except
- b0844afa: Merge pull request #257 from withzombies/many-to-many-session-management
- 5aae5542: fix: prevent SIGPIPE crash and port collision with multiple IDA insta…
- 5fb925bd: fix: auto-increment port for multiple IDA instances
- 20f28764: fix: ignore SIGPIPE to prevent IDA crash on client disconnect
- idamagicstrings
- idasql
- msc-thesis-LLMs-to-rank-decompilers
- Rikugan
- 13c59b49: security: add prompt injection mitigation and harden approval gates
- 80bebbb4: update readme
- 4b244a89: update readme
- b80a9f73: docs(webpage): sync docs with current codebase and add llms.txt
- adfc54df: refactor: rewrite MCP client using official mcp SDK
- 4c9f2eb8: fix readme
- 2573664e: adds gif
- 3f7d8725: feat: expanded test suite and misc fixes
- 9ce5f0a0: Merge branch 'desloppify/code-health'
- 629f7298: refactor: code health improvements (desloppify 37→81)
- f3dbff97: Merge branch 'main' of github.com:buzzer-re/Rikugan
- 474bbff9: adds cff example gif
- 25072ded: feat: IL analysis/transform tools, deobfuscation skill, and fixes
- 39f51312: docs: fix platform paths, add llms.txt, Architecture button, il_problem
- sighthouse
- 6fa33f86: First release \o/
- zenyard-ida-public
-
r/york No Three data signal in Goodramgate rss
Is anyone else finding that Three mobile data drops out in Goodramgate/Kings Sq? There's a full house of signal but no data connection. E.g. streaming music stops playing as I walk through the area. It's been happening for a few weeks now.
submitted by /u/dawnriser
-
r/york recently moved to york and looking to make new mates (23m) rss
moved to Foss Islands 6 months ago but I'm struggling to meet new people here. I work for the NHS doing shifts so I struggle to attend anything regularly. I enjoy swimming, running and good old pints!
any local casual social groups/sports groups? I've tried meetup.com and others but struggling to find many groups - any recommendations welcome :)
submitted by /u/Internal-Bet4689
-
r/wiesbaden Best ice cream or cake for Saturday rss
Hi :) I'll be in Wiesbaden on Saturday and the weather is supposed to be quite good. What would be the best ice cream shop in Wiesbaden? Alternatively, a café with a good cake selection, ideally somewhere traditional :) Thanks!!
submitted by /u/LastCauliflower3842
-
r/Leeds Anywhere that serves pints in ice cold glasses? rss
One day of sun and I'm craving it
submitted by /u/augustbecchio
-
r/wiesbaden US Car people in Wiesbaden rss
Hi guys,
since the weather is great and the (early) season is on, I am looking for likeminded people with (old) US Cars - Muscle Cars, Trucks, Jeeps etc. There are some meetings in the area but after visiting some, I feel like a lot of those people are organized in clubs etc and drive like 200km just to show off their new paintjob ;) I am more into tech talks, wrenching, learning and having fun with the cars. And I am not 60 years old.
I own a '93 Corvette myself, have some knowledge of 350 Chevy V8s and cars in general, and would love to meet new interesting people who share this hobby. I'm German but my English is fine. There are a lot of US-spec cars in Wiesbaden, so I thought I'd just write in English here. I live next to Hainerberg and was greeted by a black Challenger today, so this was my sign to give this post a go. Just hit me up and connect.
Cheers!
submitted by /u/randomsubi
-
r/york Rowntree Park Tennis Partner rss
Hello. I am looking for a tennis buddy to hit balls with in the evenings / sometimes during the week with the lighter nights coming. Just joined Rowntree Park tennis yesterday so I'd like to play there. I'm intermediate/advanced but I don't really like to play competitively tbh.
submitted by /u/BlueSky86010
-
The Pragmatic Engineer The Pulse: Cloudflare rewrites Next.js as AI rewrites commercial open source rss
Hi, this is Gergely with a bonus, free issue of The Pragmatic Engineer. This issue is the entire The Pulse issue from the past week, which paying subscribers received seven days ago. The piece generated quite a few comments among subscribers, so I'm sharing it more broadly, especially as it raises questions about what is defensible and what is not in open source.
If you've been forwarded this email, you can subscribe here to get issues like this in your inbox.
Today's issue of The Pulse focuses on a single event because it's a significant one with major potential ripple effects. On Tuesday, Cloudflare shocked the dev world by announcing that they have rewritten Next.js in just one week, with a single developer who used only $1,100 in tokens:
Cloudflare CTO Dane Knecht on X

There are several layers to dig into here:
- The Next.js ecosystem: a recap. Close to half of React devs use Next.js, and the best place to deploy Next.js is on Vercel - partly thanks to its proprietary build output.
- What Cloudflare did with Next.js. Replacing the build engine in Next.js with the more standard Vite one, allowing Next.js apps to be easily deployed on Cloudflare.
- AI brings the impossible within reach. What would take years in engineering terms was executed in one week with some tokens.
- "AI slop" still an issue. Contrary to Cloudflare's claims, vinext is not production-ready, and will need plenty of cleanup and auditing to be on par with Next.js.
1. The Next.js ecosystem: a recap
First, some background. Next.js is the most popular fullstack React framework: around half of all React devs use it, per recent research such as the 2025 Stack Overflow developer survey. Next.js is an open source project, built and mostly maintained by Vercel, which is the preferred deployment target for Next.js applications for several reasons. One of them is that Next.js applications are built with Vercel's Turbopack build tool, and the output of a build is a proprietary format. As Netlify engineer Eduardo Bouças writes:
"The output of a Next.js build has a proprietary and undocumented format that is used in Vercel deployments to provision the infrastructure needed to power the application.
This means that any hosting providers other than Vercel must build on top of undocumented APIs that can introduce unannounced breaking changes in minor or patch releases. (And they have)".
Next.js is an interestingly built project: everything is open source, yet the best place to deploy a Next.js application is on Vercel, which is optimized to run the undocumented build artifacts most efficiently. This is a smart strategy from Vercel, and one competitors dislike: any other hosting provider would prefer Next.js to produce a standard build format. For that to happen, the build engine, Turbopack, would need to be replaced with something more standard.
Let's talk about build tools for web development. According to the State of JS 2025 survey, the most popular in the web ecosystem are:
- Vite: the most popular choice for new projects due to its speed and developer experience. Uses projects like esbuild and Rollup under the hood
- Webpack: a legacy tool that's not very performant, but still widely deployed in older projects
- Turbopack: Created by Vercel and optimized for larger Next.js applications. Built in Rust and intended to be more performant
- Bun: a relatively new, all-in-one runtime and bundler. Anthropic acquired the team in December, and some Bun folks are now focused on improving Claude Code's performance.
So, most of the web ecosystem uses Vite as a build tool; Next.js uses Turbopack; and the majority of React applications built with a full-stack React framework use Next.js. Basically, most devs not using Next.js are likely to use Vite as their build tool.
2. What Cloudflare did with Next.js
Here's a naive idea: what if Next.js used Vite to generate build outputs? In that case, build outputs would be standardized and would run equally well on any cloud provider, as there would be nothing proprietary or undocumented to Vercel.
And this is what Cloudflare did: replace Turbopack with Vite and call the new package 'vinext':
Cloudflare replaced the Turbopack build dependency with Vite to create vinext

Buried midway in the announcement is the admission that this project is experimental and not at all guaranteed to work: it's a 'use-at-your-own-risk' project. Still, the mere fact of this development feels like an earthquake in the tech world, because of how it was pulled off.
3. AI brings the impossible within reach
In a blog post announcing the project, Cloudflare claims that a single engineer "rebuilt" the whole thing in a way that's trivial to deploy to Cloudflare's own infrastructure, and that it cost only $1,100 in tokens. From Cloudflare's statement:
"Last week, one engineer and an AI model rebuilt the most popular front-end framework from scratch. The result, vinext (pronounced "vee-next"), is a drop-in replacement for Next.js, built on Vite, that deploys to Cloudflare Workers with a single command. In early benchmarks, it builds production apps up to 4x faster and produces client bundles up to 57% smaller. And we already have customers running it in production.
The whole thing cost about $1,100 in tokens".
What Cloudflare did:
- Took the Next.js public API
- Reimplemented behaviour using Vite
- Created build output whose behaviour matches the "original" Next.js implementation
After 10 years, the core of Next.js is around 194,000 lines of code (LOC).** Meanwhile, vinext is about 67,000 lines of code, which suggests a much leaner implementation: for example, vinext does not need to support legacy Next.js APIs, and it currently supports 94% of the Next.js API (it's safe to assume the complex edge cases are in the remaining 6%).
** The Next.js repository is closer to 2M lines of code: 1M is bundled dependencies (e.g. React bundles, CSS build, etc.), tests are 308,000 LOC, and Turbopack is 311,000 LOC.
Pre-AI, this reimplementation would have taken years of engineering time to complete. Doing what Cloudflare did was always possible _in theory_, but never seemed practical. I mean, why have a team of engineers spend potentially years on generating a standardized build output for Next.js apps? Even if they did, the dev community would have doubts about whether Cloudflare would maintain the project.
This is the thing with forking or rewriting open source projects: a major value proposition for commercial open source is to know that they will be maintained. Vercel has proved it's a reliable custodian of Next.js for the past 10 years. Without AI, it could be assumed that any new reimplementation would eventually run out of steam.
Separately but relatedly, Cloudflare has now proved that rewriting existing software has become ~100x cheaper thanks to AI, and the same economics likely apply to maintenance, too. Considering how trivial it was to rebuild one of the more complex open source projects, this augurs well for maintenance being trivial and much cheaper in the future. Potentially, Cloudflare no longer needs to budget an engineering team solely for maintenance, if a single engineer can maintain the project part-time!
Cloudflare had a project measured in engineering years, and completed it in one engineering week! It took a single engineer using OpenCode (an open source coding agent), Opus 4.5, and a bunch of tokens, and then: 'boom', vinext was born.
4. "AI slop" still an issue
There are questions about the quality of vinext, though. Vercel, naturally, is unhappy, and hit out at the obvious weakness: that vinext is unfit for production usage because it's insecure. Vercel CEO Guillermo Rauch did not miss a beat in tying Cloudflare's effort to the "vibe coding" stereotype of sloppy work executed with a lack of understanding:
Guillermo Rauch on X

Guillermo has a point: anyone who stopped reading Cloudflare's launch announcement after the first few sentences would assume it's production-ready, with the first paragraph of the announcement closing with:
"And we already have customers running it in production."
However, it takes more than 1,000 words (around 2-3 pages) of the announcement before Cloudflare shares the rather crucial detail that "running it in production" means vinext has been deployed onto a beta site:
"We want to be clear: vinext is experimental. It's not even one week old, and it has not yet been battle-tested with any meaningful traffic at scale. (...)
We've been working with National Design Studio, a team that's aiming to modernize every government interface, on one of their beta sites, CIO.gov.
Oh. So "customers running it in production" at Cloudflare apparently means "a customer running a beta site in production without meaningful traffic". This is a first from the infrastructure giant, which usually prides itself on accurate statements!
This detail was also absent when Cloudflare's CEO and CTO were boosting vinext like it was a mature, battle-tested product. In that context, Vercel's raising of the issue of security vulnerabilities is more than fair game, in my view.
Still, all that doesn't alter the core learning from this project: that AI has the power to drastically reduce engineering time by up to ~100x and deliver usable-enough output, for relatively negligible financial cost. Just keep in mind that security and reliability issues will probably take plenty of extra time and effort to address.
5. New attack vector on commercial open source?
If arch-rivalries exist in tech, then Cloudflare and Vercel are a prime example. Both are gunning to become the most popular platform for developers to deploy their code, and the CEOs are regularly seen in public taking shots at the other side. One such spat happened in March, as covered at the time:
"Things kicked off on social media, with developers confused about the severity of the incident, and about why Next.js seemed silent, and also why Cloudflare sites were breaking due to its fix for the CVE causing its own issues. It was at that point that Cloudflare's CEO, Matthew Prince, entered the chat to accuse Vercel of not caring about security:
Given the security incident was ongoing, this felt a bit "below the belt" by the Cloudflare chief. Criticizing rivals is fair game, but why not wait until the incident is over? The punch landed, and Vercel's CEO Guillermo Rauch is not someone to take it lying down, so he hit back.
Cloudflare's CEO then responded with a cartoon implying that although Vercel is much larger than its competitor Netlify, Cloudflare is 100x bigger than both, and could stomp them into the ground at will."
Serving the public interest wasn't why Cloudflare rewrote Next.js: they did it because they want Next.js sites to be deployed onto Cloudflare, and doing so made little sense until now because Next.js produced bespoke build output optimized for Vercel's infrastructure. With this change, Cloudflare claims it provides _superior_ performance when hosting Next.js apps, according to its own measurements.
I'd just add that performance is important for developers, but other things matter, too. Cost, reliability, developer experience, and how much devs like a company are all factors in choosing between vendors. Also, performance measurements from a vendor about its own service must be taken with a large pinch of salt.
Zooming out from this episode, it seems that AI is bringing the value of existing commercial open source moats into question. Vercel carved out a clever open source strategy that helped turn its open source investment into business revenue:
- Build and maintain Next.js, delivering the best developer experience (DX).
- Optimize Vercel to serve the specific (and undocumented) build output of Next.js.
- Most developers onboarding to Next.js will decide to deploy on Vercel to get the most benefit, in terms of DX and performance.
- ⊠repeat for years while the business becomes worth billions! (Vercel was valued at $9B last October).
Underpinning this success are some assumptions:
- Next.js will remain the #1 choice for developers to build React applications, thanks to ongoing investment.
- It is expensive to rewrite Next.js to be deployable and performant on another cloud vendor.
- Even if someone did #2, developers would be skeptical and not switch over.
Vercel can invest in #1 to keep Next.js best-in-class, while knowing that the risk of #2 occurring is minor. However, Cloudflare has now "cloned" Next.js, and can easily keep up with all future changes and port them back to vinext.
But AI makes it trivial to "piggyback" off any commercial open source project, which is a massive problem for commercial open source startups. Vercel puts all the effort and investment into building and maintaining Next.js, while Cloudflare enjoys the benefit of this hard work (the Next.js public API), which is now easily deployable to Cloudflare, and can undercut Vercel on price. For all future Next.js changes, Cloudflare can just sync them to vinext, using AI!
WordPress had a similar problem, with WP Engine "piggybacking" off its work and undercutting their pricing in 2024. As I analyzed at the time:
"Free-riding on permissive open source is too tempting to pass on for other vendors. WP Engine uses a common loophole of contributing almost nothing in R&D to WordPress, while selling it as a managed service. This means that they could either easily undercut the pricing of larger players like Automattic which do spend on WordPress's R&D. Alternatively, a company like WP Engine could charge as much, or more, as Automattic, but be able to spend a lot more on marketing, while being similarly profitable. "Saving" on R&D gives the "free-riders" plenty of options to grow their businesses: options not necessarily open to Automattic while they invest as much into R&D as they do.
Commercial open source vendors pressure to end "freeriding". Automattic is likely facing lower revenue growth, with customers choosing vendors like WP Engine which offer a similar service -- getting these customers either via a cheaper price or thanks to more marketing spend. This legal fight could be an effort to force WP Engine to stop eating Automattic's lunch, or perhaps get WP Engine to sell to Automattic, which would cement its leading status in managed Wordpress, while also boosting revenue by $400M a year - according to its own figures".
Vercel managed to avoid the "free-riding" problem with Next.js, but that's no longer possible now that AI makes it trivial to rewrite.
6. Defense or offense?
How should commercial open source companies respond to the threat that a competitor can easily rewrite the software behind the managed solutions which they sell as services?
One obvious response is to make tests private, so that replication is harder for AI. One thing that made it so easy for Cloudflare to rewrite Next.js was the project's comprehensive test suite. From their announcement (emphasis mine):
"We also want to acknowledge the Next.js team. They've spent years building a framework that raised the bar for what React development could look like. The fact that their API surface is so well-documented and their test suite so comprehensive is a big part of what made this project possible."
Database solution SQLite is famous for its incredible test suite. What some people don't know is that while the core SQLite tests are open source, its most comprehensive test suite - TH3 - is closed source. SQLite monetizes this advanced testing infrastructure as a paid service. This is a fair tradeoff: for most contributors, the basic open source tests work well enough, while enterprise users or customers who really care about correctness can purchase advanced testing services from the project's creator.
The open source canvas project tldraw announced it would relocate its test suite to a closed source repository, a move which makes plenty of sense. Here's commentary from Simon Willison:
"It's become very apparent over the past few months that a comprehensive test suite is enough to build a completely fresh implementation of any open source library from scratch, potentially in a different language."
In the event, tldraw's announcement turned out to be a joke, but who's laughing now? An open source project with excellent tests is an easy target for an AI agent to completely rewrite.
Could new licenses be created for the AI era? Existing open source licenses were created on the assumption that humans read open source code, and humans modify it. Agents break that assumption.
Could we see new license types emerge to ban AI agents from modifying projects' source code? It seems pretty far-fetched and hard to implement, but not beyond the realms of possibility.
AI agents are still very new, and going mainstream in tech. Once they break into other industries, I wouldn't be surprised if legal frameworks are reworded to also apply to AI agents. If and when this happens, it would open the path for open source licenses to distinguish between agents and humans.
What is a moat, if code can be trivially ported? A team operating a popular open source project can no longer assume it's expensive to fork or completely rewrite, meaning it makes sense to focus on other moats, such as:
- Outstanding (paid) support. AI could make this much easier at a higher quality, if done right.
- Smaller open core, larger closed source part. "Open core" as a business model has been dominant for commercial open source: keep the core of the software open source, while advanced enterprise features are source available or closed source. I would expect more companies to move their additional services to closed source, not source available.
- In-person connection and community. Projects with a real-world community will form a sense of connection that goes beyond code. For example, it's hard to imagine vinext meetups popping up - whereas there are many Next.js communities.
- Infrastructure and hardware remain a massive moat. In a world where software is trivial to copy, infrastructure remains a moat. Commercial open source might make the most sense for players that own and operate superior infrastructure layers to their rivals': being able to offer lower cost, higher reliability, lower latency, higher performance, or a combination of these.
7. AI-world reality
One of the single best AI use cases is full-on rewrites of well-tested products. I estimate that AI sped up the creation of vinext by at least 100x, which is massive. But we don't really see efficiency boosts of anything like that with AI tools, in general. As Laura Tacho shared at The Pragmatic Summit in San Francisco, the average self-reported efficiency 'AI gain' seems to be circa 10%.
I suspect this vast chasm in efficiency boosts is because AI is many times more efficient at "no-brainer tasks" where correctness can be verified with tests, versus those which are more open ended or involve more creativity.
In general, tests are incredibly important for efficient AI usage. On The Pragmatic Engineer Podcast, Peter Steinberger stressed how important it is to "close the loop" in his developer flow: instructing the AI to test itself, and ensuring the AI has tests it can run to verify correctness.
Automated tests were always considered a best practice for creating maintainable code. Now, having a codebase with extensive tests is the baseline to make AI agents work productively for refactors, rewrites - or even adding new features and verifying that things did not break!
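A minimal sketch of what "closing the loop" can look like mechanically: run a verification command programmatically and hand the pass/fail signal plus output back to whatever is doing the editing. The helper name and the stand-in "test suite" below are illustrative, not from any specific tool.

```python
import subprocess
import sys

def close_the_loop(test_cmd):
    """Run a verification command and return (passed, output).

    This is the feedback an agent iterates on: it keeps editing code
    and re-running the suite until `passed` is True. Illustrative
    sketch only; real agent harnesses wire this into their edit loop.
    """
    proc = subprocess.run(test_cmd, capture_output=True, text=True)
    return proc.returncode == 0, (proc.stdout + proc.stderr).strip()

# Example with a stand-in "test suite" (a one-liner that passes):
ok, output = close_the_loop(
    [sys.executable, "-c", "print('1 passed'); raise SystemExit(0)"])
```

In a real setup, `test_cmd` would be the project's actual suite (for example `["python", "-m", "pytest", "-q"]`), and the captured output is what gets fed back to the agent on failure.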
Vendors will start to deploy "migration AI agents" to move customers over to their own stacks. This got lost in Cloudflare's announcement, but it's important:
vinext includes an Agent Skill that handles migration for you. It works with Claude Code, OpenCode, Cursor, Codex, and dozens of other AI coding tools. Install it, open your Next.js project, and tell the AI to migrate:
> npx skills add cloudflare/vinext
Then open your Next.js project in any supported tool and say:
> migrate this project to vinext
The skill handles compatibility checking, dependency installation, config generation, and dev server startup. It knows what vinext supports and will flag anything that needs manual attention.
This is very clever from Cloudflare, and a true "AI-native" move. They have not only used AI to migrate Next.js, but also built an "AI plugin" (a skill) to help customers migrate their existing codebases over to vinext - and deploy on Cloudflare!
This move will surely be copied by other vendors, since migrations which are tedious for humans are much less effort with agents.
AI is making the tech industry more ruthless when it comes to business practices. Laura Tacho said something interesting at The Pragmatic Summit:
"AI is an accelerator, it's a multiplier, and it is moving organizations in different directions."
AI seems to be accelerating the ruthlessness of competition for customers and the speed at which this happens. In one week, Cloudflare rebuilt Next.js, and it's attacking Vercel full-on: claiming their "vibe coded" alternative is more performant and production-ready, and burying at the foot of the launch announcement the crucial information that vinext is very much experimental.
I sense vendors are realizing that there's a limited amount of time in which to use AI to their advantage, and some will decide to use it like Cloudflare has.
On the other hand, AI could be great news for non-commercial open source. AI threatens commercial open source because it removes the moats that made code hard to fully rewrite. Beyond that, though, AI could help non-commercial open source thrive:
- With AI, it's easy to fork an open source project and keep the fork in-sync with the original.
- It's trivial to instruct AI to rewrite an open source project to another language or framework.
- âŠand it's equally trivial for AI to add features to a fork.
For these reasons, I believe there could be a lot more forks and rewrites to come, and more open source projects and code, in general.
Takeaways
Personally, I could not have imagined things changing this quickly in software. Rewriting Next.js in a single week, even to a version that is not quite there - but mostly works? This was out of the question as recently as a few months ago.
Things changed around last December, when Opus 4.5 and GPT-5.2 came out and proved capable of writing most of the code. What used to be expensive is now cheap - like rewriting complete projects - and we still need to learn what the "new" expensive parts of software engineering are.
All this is new territory for everyone. To succeed in the tech industry, you need to be able to capitalize upon change, as Cloudflare has clearly done in this case by making the most of an opportunity created by new technology. It's unclear how popular vinext will become, and how much of a moat Vercel has around the broader Next.js ecosystem, but I suspect that it'd take more than a Next rewrite to make Cloudflare into a viable Next.js platform-as-a-service provider.
-
đ r/Leeds Lunch in Leeds - Best value for money? rss
Thank you.
submitted by /u/Bright_Fill_4770
[link] [comments] -
đ r/Yorkshire Sunrise over Langsett Res nr Barnsley this Morning rss
Didn't see one person the whole walk round. Spring is on its way. submitted by /u/Del_213
[link] [comments]
-
đ r/Leeds Leeds is betting big on new bike lanes. Will people use them? rss
submitted by /u/djstimms
[link] [comments] -
đ r/Leeds Metal fans in Leeds rss
So I (31/m) am considering reviving an idea I had about a year ago for a meetup style group for metal fans in Leeds. I love black metal personally but don't really know anyone locally with similar music tastes. Idea is for gig meets and just general hangouts. Every 3/4 weeks, give or take.
I'm aware of the Leeds rock + metal fans meetup group although that seems dead, I joined their WhatsApp and nothing but silence. If there is anything else similar already existing I'd be keen to find out about it. I don't plan on using the meetup platform as I am limited financially, and they charge subscription fees so if anyone has advice on alternative platforms I'd be very interested.
So, who's interested? Open to all fans of heavy music, 25+ preferred only as I'd feel awkward if it's just students or a generally younger crowd.
I'll create something and update this post, depending on feedback.
EDIT: I've made a WhatsApp group and will try arrange something for next week probably. I'll DM everyone who's commented so far, link available DM me for it. Don't want to post it to avoid it being flooded with bots.
submitted by /u/GhengisChasm
[link] [comments] -
đ Simon Willison Can coding agents relicense open source through a "clean room" implementation of code? rss
Over the past few months it's become clear that coding agents are extraordinarily good at building a weird version of a "clean room" implementation of code.
The most famous version of this pattern is when Compaq created a clean-room clone of the IBM BIOS back in 1982. They had one team of engineers reverse engineer the BIOS to create a specification, then handed that specification to another team to build a new ground-up version.
This process used to take multiple teams of engineers weeks or months to complete. Coding agents can do a version of this in hours - I experimented with a variant of this pattern against JustHTML back in December.
There are a lot of open questions about this, both ethically and legally. These appear to be coming to a head in the venerable chardet Python library.
chardet was created by Mark Pilgrim back in 2006 and released under the LGPL. Mark retired from public internet life in 2011 and chardet's maintenance was taken over by others, most notably Dan Blanchard, who has been responsible for every release since 1.1 in July 2012. Two days ago Dan released chardet 7.0.0 with the following note in the release notes:
Ground-up, MIT-licensed rewrite of chardet. Same package name, same public API - drop-in replacement for chardet 5.x/6.x. Just way faster and more accurate!
Yesterday Mark Pilgrim opened #327: No right to relicense this project:
[...] First off, I would like to thank the current maintainers and everyone who has contributed to and improved this project over the years. Truly a Free Software success story.
However, it has been brought to my attention that, in the release 7.0.0, the maintainers claim to have the right to "relicense" the project. They have no such right; doing so is an explicit violation of the LGPL. Licensed code, when modified, must be released under the same LGPL license. Their claim that it is a "complete rewrite" is irrelevant, since they had ample exposure to the originally licensed code (i.e. this is not a "clean room" implementation). Adding a fancy code generator into the mix does not somehow grant them any additional rights.
Dan's lengthy reply included:
You're right that I have had extensive exposure to the original codebase: I've been maintaining it for over a decade. A traditional clean-room approach involves a strict separation between people with knowledge of the original and people writing the new implementation, and that separation did not exist here.
However, the purpose of clean-room methodology is to ensure the resulting code is not a derivative work of the original. It is a means to an end, not the end itself. In this case, I can demonstrate that the end result is the same - the new code is structurally independent of the old code - through direct measurement rather than process guarantees alone.
Dan goes on to present results from the JPlag tool - which describes itself as "State-of-the-Art Source Code Plagiarism & Collusion Detection" - showing that the new 7.0.0 release has a max similarity of 1.29% with the previous release and 0.64% with the 1.1 version. Other release versions had similarities more in the 80-93% range.
He then shares critical details about his process, highlights mine:
For full transparency, here's how the rewrite was conducted. I used the superpowers brainstorming skill to create a design document specifying the architecture and approach I wanted based on the following requirements I had for the rewrite [...]
I then started in an empty repository with no access to the old source tree, and explicitly instructed Claude not to base anything on LGPL/GPL-licensed code. I then reviewed, tested, and iterated on every piece of the result using Claude. [...]
I understand this is a new and uncomfortable area, and that using AI tools in the rewrite of a long-standing open source project raises legitimate questions. But the evidence here is clear: 7.0 is an independent work, not a derivative of the LGPL-licensed codebase. The MIT license applies to it legitimately.
Since the rewrite was conducted using Claude Code there are a whole lot of interesting artifacts available in the repo. 2026-02-25-chardet-rewrite-plan.md is particularly detailed, stepping through each stage of the rewrite process in turn - starting with the tests, then fleshing out the planned replacement code.
There are several twists that make this case particularly hard to confidently resolve:
- Dan has been immersed in chardet for over a decade, and has clearly been strongly influenced by the original codebase.
- There is one example where Claude Code referenced parts of the codebase while it worked, as shown in the plan - it looked at metadata/charsets.py, a file that lists charsets and their properties expressed as a dictionary of dataclasses.
- More complicated: Claude itself was very likely trained on chardet as part of its enormous quantity of training data - though we have no way of confirming this for sure. Can a model trained on a codebase produce a morally or legally defensible clean-room implementation?
- As discussed in this issue from 2014 (where Dan first openly contemplated a license change) Mark Pilgrim's original code was a manual port from C to Python of Mozilla's MPL-licensed character detection library.
- How significant is the fact that the new release of chardet used the same PyPI package name as the old one? Would a fresh release under a new name have been more defensible?
I have no idea how this one is going to play out. I'm personally leaning towards the idea that the rewrite is legitimate, but the arguments on both sides of this are entirely credible.
I see this as a microcosm of the larger question around coding agents for fresh implementations of existing, mature code. This question is hitting the open source world first, but I expect it will soon start showing up in Compaq-like scenarios in the commercial world.
Once commercial companies see that their closely held IP is under threat I expect we'll see some well-funded litigation.
Update 6th March 2026: A detail that's worth emphasizing is that Dan does not claim that the new implementation is a pure "clean room" rewrite. Quoting his comment again:
A traditional clean-room approach involves a strict separation between people with knowledge of the original and people writing the new implementation, and that separation did not exist here.
I can't find it now, but I saw a comment somewhere that pointed out the absurdity of Dan being blocked from working on a new implementation of character detection as a result of the volunteer effort he put into helping to maintain an existing open source library in that domain.
I enjoyed Armin's take on this situation in AI And The Ship of Theseus, in particular:
There are huge consequences to this. When the cost of generating code goes down that much, and we can re-implement it from test suites alone, what does that mean for the future of software? Will we see a lot of software re-emerging under more permissive licenses? Will we see a lot of proprietary software re-emerging as open source? Will we see a lot of software re-emerging as proprietary?
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
đ r/Harrogate Channel 4's 'The Dog House' is Looking for Loving Homes in Harrogate rss
Hi everyone! I'm part of the team behind Channel 4's The Dog House and I'm wondering whether this might be of interest to anyone here? We're looking for dog lovers in Harrogate who could offer a loving home and a fresh new start to a rescue dog in need for our next series filming in spring. Filmed in partnership with Woodgreen Pets Charity, the series shines a light on how life-changing the bond between humans and dogs can be for both sides. If you're interested to apply you can do so at: https://c4thedoghousetakepart.co.uk/ I've also included our flyer, in case anyone would like to share it with others they might know. submitted by /u/Fallevo
[link] [comments]
-
đ r/wiesbaden Car painter rss
Hello dear Wiesbadeners,
can anyone recommend a car painter here in the area?
submitted by /u/nikitsolo
[link] [comments] -
đ r/LocalLLaMA Ran Qwen 3.5 9B on M1 Pro (16GB) as an actual agent, not just a chat demo. Honest results. rss
Quick context: I run a personal automation system built on Claude Code. It's model-agnostic, so switching to Ollama was a one-line config change, nothing else needed to change. I pointed it at Qwen 3.5 9B and ran real tasks from my actual queue.

Hardware: M1 Pro MacBook, 16 GB unified memory. Not a Mac Studio, just a regular laptop.

Setup:

brew install ollama
ollama pull qwen3.5:9b
ollama run qwen3.5:9b

Ollama exposes an OpenAI-compatible API at localhost:11434. Anything targeting the OpenAI format just points there. No code changes.

What actually happened:

Memory recall: worked well. My agent reads structured memory files and surfaces relevant context. Qwen handled this correctly. For "read this file, find the relevant part, report it" type tasks, 9B is genuinely fine.

Tool calling: reasonable on straightforward requests. It invoked the right tools most of the time on simple agentic tasks. This matters more than text quality when you're running automation.

Creative and complex reasoning: noticeable gap. Not a surprise. The point isn't comparing it to Opus. It's whether it can handle a real subset of agent work without touching a cloud API. It can. The slowness was within acceptable range. Aware of it, not punished by it.

Bonus: iPhone

Ran Qwen 0.8B and 2B on iPhone 17 Pro via PocketPal AI (free, open source, on the App Store). Download the model once over Wi-Fi, then enable airplane mode. It still responds. Nothing left the device. The tiny models have obvious limits. But the fact that this is even possible on hardware you already own in 2026 feels like a threshold has been crossed.

The actual framing: This isn't "local AI competes with Claude." It's "not every agent task needs a frontier model." A lot of what agent systems do is genuinely simple: read a file, format output, summarize a short note, route a request. That runs locally without paying per token or sending anything anywhere. The privacy angle is also real if you're building on personal data.

I'm curious what hardware others are running 9B models on, and whether anyone has integrated them into actual agent pipelines vs. just using them for chat. Full write-up with more detail on the specific tasks and the cost routing angle: https://thoughts.jock.pl/p/local-llm-macbook-iphone-qwen-experiment

submitted by /u/Joozio
[link] [comments]
-
đ r/LocalLLaMA Final Qwen3.5 Unsloth GGUF Update! rss
Hey r/LocalLLaMA, this week we worked on further improving the best size/KLD tradeoff for Qwen3.5, and we're excited to share new GGUF benchmarks for Qwen3.5-122B-A10B and Qwen3.5-35B-A3B (99.9% KL divergence). This will likely be our final GGUF update. We're also deeply saddened by the news around the Qwen team, and incredibly grateful for everything they've done for the open source community! For a lot of model releases, they had to stay up all night and not sleep.

- All GGUFs now use our new imatrix calibration dataset so you might see small improvements in chat, coding, long context, and tool-calling use-cases. We are always manually improving this dataset and it will change often.
- This is a follow up to https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/
- We further enhanced our quantization method for Qwen3.5 MoEs to reduce Maximum KLD directly. 99.9% is what is generally used, but for massive outliers, Maximum KLD can be useful. Our new method generally pushes the Maximum KLD down substantially vs the pre-March 5th update. UD-Q4_K_XL is 8% bigger, but reduces maximum KLD by 51%!
| Quant | Old GB | New GB | Max KLD Old | Max KLD New |
|---|---|---|---|---|
| UD-Q2_K_XL | 12.0 | 11.3 (-6%) | 8.237 | 8.155 (-1%) |
| UD-Q3_K_XL | 16.1 | 15.5 (-4%) | 5.505 | 5.146 (-6.5%) |
| UD-Q4_K_XL | 19.2 | 20.7 (+8%) | 5.894 | 2.877 (-51%) |
| UD-Q5_K_XL | 23.2 | 24.6 (+6%) | 5.536 | 3.210 (-42%) |

- Re-download Qwen3.5-35B-A3B, 27B, and 122B-A10B as they're now all updated. Re-download 397B-A17B after today's update (still uploading!)
- Qwen3.5-27B and 122B-A10B include the earlier chat template fixes for better tool-calling/coding output. 397B-A17B will also be updated today to include this.
- LM Studio now supports toggling "thinking" for our GGUFs. Read our guide or run `lms get unsloth/qwen3.5-4b`. This process will be easier very soon.
- Benchmarks were conducted using the latest versions for every GGUF provider.
- Replaced BF16 layers with F16 for faster inference on unsupported devices.
- Qwen3.5-35B-A3B now has all variants (Q4_K_M, Q8_0, BF16, etc.) uploaded.
- A reminder that KLD and perplexity benchmarks do not exactly reflect real-world use-cases.
- Links to new GGUFs: Qwen3.5-35B-A3B-GGUF, Qwen3.5-122B-A10B-GGUF, Qwen3.5-397B-A17B-GGUF (397B still uploading!)
You can also now Fine-tune Qwen3.5 in Unsloth via our free notebooks! Thanks a lot everyone!
submitted by /u/danielhanchen
[link] [comments] -
đ r/york misty walk this morning :) rss
it was like a lovely dream. submitted by /u/whtmynm
[link] [comments]
-
đ r/wiesbaden Smoke Together 2.0 rss
Hello everyone, since only one person showed up to the last meeting, which was probably due to the weather, I thought we could try again tomorrow around 6 pm, at the same spot on Kirchenpfad: 50.083057, 8.216951
Who's interested?
submitted by /u/Wide-Distribution-78
[link] [comments] -
đ r/Leeds Kirkstall Abbey rss
Song is âDepartureâ by IHF
submitted by /u/mr_errington
[link] [comments] -
đ r/Leeds Did Mr sunshine call in sick today... rss
Weather forecasted alll day sun!
submitted by /u/newtobitcoin111
[link] [comments] -
đ r/wiesbaden AfD event was cancelled rss
submitted by /u/Chris0607
[link] [comments] -
đ r/LocalLLaMA Qwen3 vs Qwen3.5 performance rss
Note that dense models use their listed parameter size (e.g., 27B), while Mixture-of-Experts models (e.g., 397B A17B) are converted to an effective size using sqrt(total × active) to approximate their compute-equivalent scale. Data source: https://artificialanalysis.ai/leaderboards/models submitted by /u/Balance-
[link] [comments]
-
đ r/Leeds Anyone know what they're filming at Browns? rss
Saw some riggers working on Browns yesterday blocking out all the windows. Anyone know what's being filmed?
submitted by /u/Itsalladeepend
[link] [comments] -
đ r/Leeds WFH / Remote Work Advice rss
Hello! Hoping for some advice on co-working in Leeds.
I'm M 32, I work in Consumer Goods/Tech and I am 100% WFH and while the convenience is incredible, the isolation can be a challenge. I would like to establish a solid rhythm of working from town a few days a week and even better, find other people wanting to do the same and have a bit of craic. Grab lunch, beer afterwards etc.
I've tried a few co-working spaces in Leeds but haven't found something that feels sustainable, yet.
What i've tried so far:
Santander work cafe is great.
It's free, but it doesn't open til 9 and they often host events, making it hard to establish a solid routine. I have a lot of calls, so I found that difficult to manage while working in there.
2Work is great but is expensive.
At £20 a go, that becomes £30 after train/parking. It's great if I'm meeting friends after work but not something I could make a regular routine of.
Waterlane Boathouse is good too, but I couldn't do a full 9-5 there and it's hard to take calls. Obvs, it's a pub not a corporate office.
What I'm looking for:
- A space where I can reliably get a desk and would be able to take calls throughout the day
- The cheaper the better
- Parking would be ideal
If you're in a similar position with WFH/Remote and want to find community during the week please drop me a line!
submitted by /u/Longjumping-Stop-662
[link] [comments] -
đ r/reverseengineering DLLHijackHunter v1.2.0 - Now with automated UAC Bypass & COM AutoElevation discovery rss
submitted by /u/Jayendra_J
[link] [comments] -
đ r/Leeds 38/39 Meanwood buses rss
Why have these been changed to single deckers? The amount of full buses that drive past because they're not a double decker! Trying to drive less to save money on petrol but my god first bus are making it hard
submitted by /u/sm9981
[link] [comments] -
đ Project Zero On the Effectiveness of Mutational Grammar Fuzzing rss
Mutational grammar fuzzing is a fuzzing technique in which the fuzzer uses a predefined grammar that describes the structure of the samples. When a sample gets mutated, the mutations happen in such a way that any resulting samples still adhere to the grammar rules, thus the structure of the samples gets maintained by the mutation process. In case of coverage-guided grammar fuzzing, if the resulting sample (after the mutation) triggers previously unseen code coverage, this sample is saved to the sample corpus and used as a basis for future mutations.
This technique has proven capable of finding complex issues and I have used it successfully in the past, including to find issues in XSLT implementations in web browsers and even JIT engine bugs.
However, despite the approach being effective, it is not without its flaws which, for a casual fuzzer user, might not be obvious. In this blogpost I will introduce what I perceive to be the flaws of the mutational coverage-guided grammar fuzzing approach. I will also describe a very simple but effective technique I use in my fuzzing runs to counter these flaws.
Please note that while this blogpost focuses on grammar fuzzing, the issues discussed here are not limited to grammar fuzzing as they also affect other structure-aware fuzzing techniques to various degrees. This research is based on the grammar fuzzing implementation in my Jackalope fuzzer, but the issues are not implementation specific.
Issue #1: More coverage does not mean more bugs
The fact that coverage is not a great measure for finding bugs is well known and affects coverage-guided fuzzing in general, not just grammar fuzzing. However, this tends to be more problematic for the types of targets where structure-aware fuzzing (including grammar fuzzing) is typically used, such as language fuzzing. Let's demonstrate this with an example:
In language fuzzing, bugs often require functions to be called in a certain order or that a result of one function is used as an input to another function. To trigger a recent bug in libxslt, two XPath functions need to be called, the document() function and the generate-id() function, where the result of the document() function is used as an input to the generate-id() function. There are other requirements to trigger the bug, but for now let's focus on this one.
Here's a somewhat minimal sample required to trigger the bug:
<?xml version="1.0"?>
<xsl:stylesheet xml:base="#" version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <xsl:value-of select="generate-id(document('')/xsl:stylesheet/xsl:template/xsl:message)" />
    <xsl:message terminate="no"></xsl:message>
  </xsl:template>
</xsl:stylesheet>

With the most relevant part for this discussion being the following element and the XPath expression in the select attribute:
<xsl:value-of select="generate-id(document('')/xsl:stylesheet/xsl:template/xsl:message)" />

If you run a mutational, coverage-guided fuzzer capable of generating XSLT stylesheets, what it might do is generate two separate samples containing the following snippets:
Sample 1:
<xsl:value-of select="document('')/xsl:stylesheet/xsl:template/xsl:message" />

Sample 2:

<xsl:value-of select="generate-id(/a)" />

The union of these two samples' coverage is going to be the same as the coverage of the buggy sample; however, having document() and generate-id() in two different samples in the corpus isn't really helpful for triggering the bug.
It is also possible for the fuzzer to generate a single sample with both of these functions that again results in the same coverage as the buggy sample, but with both functions operating on independent data:
<xsl:template match="/">
  ...
  <xsl:value-of select="document('')/xsl:stylesheet/xsl:template/xsl:message" />
  <xsl:value-of select="generate-id(/a)" />
  ...
</xsl:template>

This issue also demonstrates how crucial it is for any fuzzer to be able to combine multiple samples in the corpus in order to produce new samples. However, in this case, note that combining the two samples wouldn't trigger any previously unseen coverage and thus the resulting sample wouldn't be saved, despite climbing closer to triggering the bug.
In this case, because triggering the bug requires chaining only two function calls, a fuzzer would eventually find it by randomly combining samples. But when three or more function calls need to be chained to trigger a bug, doing so becomes increasingly expensive, and coverage feedback, as demonstrated, does not really help.
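The coverage argument can be modelled with plain sets. Treating each sample as the set of edges it covers (the edge names below are illustrative stand-ins, not real libxslt coverage points), the two simpler samples jointly reach everything the buggy sample does, so by a pure coverage metric there is no reason to save their combination:

```python
# Toy model: each sample maps to the set of coverage edges it triggers.
# Edge names are hypothetical stand-ins for real instrumentation edges.
cov_buggy_sample = {"xpath_eval", "fn_document", "fn_generate_id"}
cov_sample_1 = {"xpath_eval", "fn_document"}       # uses document() only
cov_sample_2 = {"xpath_eval", "fn_generate_id"}    # uses generate-id() only

# The buggy sample covers no edge beyond the union of the two simple
# samples, so a coverage-guided fuzzer sees nothing new in it.
no_new_edges = not (cov_buggy_sample - (cov_sample_1 | cov_sample_2))
```

This is exactly why edge coverage alone cannot reward the fuzzer for getting closer to the document()-into-generate-id() chain.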
In fact, triggering this bug might be easier (or equally easy) with a generative fuzzer (that will generate a new sample from scratch every time) without coverage feedback. But even though coverage feedback is not ideal, it still helps in a lot of cases.
As previously stated, this issue does not only affect grammar fuzzing, but also other fuzzing approaches, in particular those focused on language fuzzing. For example, Fuzzilli documentation describes a similar version of this problem.
A possible solution for this problem would be having some kind of dataflow coverage that could identify that data flowing from document() into generate-id() is something previously unseen and worth saving; however, I am not aware of any practical implementation of such an approach.
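One hypothetical shape such a signal could take: treat each (source function, sink function) dataflow pair as a coverage point, so a sample that first pipes document() into generate-id() registers as new even when its edge coverage is already known. This is a sketch of the idea, not an existing implementation:

```python
# Hypothetical dataflow-coverage feedback: an unseen (source, sink) pair
# counts as new coverage, unlike plain edge coverage.
seen_flows = set()

def record_flow(source_fn: str, sink_fn: str) -> bool:
    """Return True if this dataflow pair was never observed before,
    i.e. the sample exercising it is worth saving to the corpus."""
    flow = (source_fn, sink_fn)
    if flow in seen_flows:
        return False
    seen_flows.add(flow)
    return True
```

With this signal, the sample feeding document() into generate-id() would be saved the first time it appears, which plain edge coverage cannot do.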
Issue #2: Mutational grammar fuzzing tends to produce samples that are very similar
To demonstrate this issue, let's take a look at some samples from one of my XSLT fuzzing sessions:
Part of sample 1128 in the corpus:
<?xml version="1.0" encoding="UTF-8"?><xsl:fallback namespace="http://www.w3.org/url2" ><aaa ></aaa><ddd xml:id="{lxl:node-set($name2)}:" att3="{[$name4document('')att4.|document('')$name4namespace::]document('')}{ns2}" ></ns3:aaa></xsl:fallback>

Part of sample 603 in the corpus:
<?xml version="1.0" encoding="UTF-8"?><xsl:fallback namespace="http://www.w3.org/url2" ><aaa ></aaa><ddd xml:id="{lxl:node-set($name2)}:" att3="{[$name4document('')att4.|document('')$name4namespace::]document('')}{ns2}" xmlns:xsl="http://www.w3.org/url3" ><xsl:output ></xsl:output>eHhDC?^5=<xsl:choose elements="eee" ><xsl:copy stylesheet-prefix="ns3" priority="3" ></xsl:copy></xsl:choose></ddd>t</xsl:fallback>

As you can see, even though these two samples are different and come from different points in time during the fuzzing session, a large part of each sample is identical.
This follows from the greedy nature of mutational coverage-guided fuzzing: when a sample is mutated to produce new coverage, it gets immediately saved to the corpus. Likely a large part of the original sample wasn't mutated, but it is still part of the new sample, so it gets saved. This new sample can get mutated again, and if the resulting (third) sample triggers new coverage it will also get saved, despite large similarities with the starting sample. This results in a general lack of diversity in a corpus produced by mutational fuzzing.
While Jackalope's grammar mutator can also ignore the base sample and generate an entire sample from scratch, it is rare for this to trigger new coverage compared to the more localized mutations, especially later in the fuzzing session.
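The greedy dynamic described above can be reproduced in a few lines. In this toy model, "coverage" is simply the set of distinct characters in a sample; any one-character mutation that introduces a new character gets the whole child saved, so the untouched remainder of the parent rides along into the corpus:

```python
import random

random.seed(0)  # deterministic toy run

corpus = ["ABCDEFGH"]        # seed sample
global_cov = set(corpus[0])  # toy "coverage" = distinct characters seen

def mutate(s: str) -> str:
    # Small localized mutation, like most grammar mutations in practice.
    i = random.randrange(len(s))
    return s[:i] + random.choice("XYZ") + s[i + 1:]

for _ in range(200):
    child = mutate(random.choice(corpus))
    if set(child) - global_cov:   # any previously unseen "coverage"?
        global_cov |= set(child)
        corpus.append(child)      # saved verbatim: parent mostly intact
```

Every saved sample differs from some earlier corpus entry by a single character, mirroring the near-duplicate XSLT samples above.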
One approach to combating this issue could be to minimize each new sample so that only the part that triggers new coverage gets saved, but I observed that this isn't an optimal strategy either and it's beneficial to leave (some of) the original sample. Jackalope implements this by minimizing each grammar sample, but stopping the minimization once a certain number of grammar tokens has been reached.
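A sketch of that bounded minimization (an illustration of the strategy, not Jackalope's actual code): greedily drop tokens while the sample stays "interesting", but stop once a floor of tokens remains so some surrounding context from the original sample survives.

```python
def minimize(tokens, still_interesting, min_tokens=4):
    """Greedy sample minimization with an early-stop floor.

    `still_interesting` stands in for "re-run the target and check that
    the new coverage is still triggered". Illustrative sketch only.
    """
    i = 0
    while len(tokens) > min_tokens and i < len(tokens):
        candidate = tokens[:i] + tokens[i + 1:]
        if still_interesting(candidate):
            tokens = candidate   # token was removable, drop it
        else:
            i += 1               # token is required, keep it
    return tokens

# Toy target: the interesting behavior needs both "A" and "B" present.
trimmed = minimize(list("xAyzBw"), lambda t: "A" in t and "B" in t)
```

With `min_tokens=0` this would shrink the sample to just the essential tokens; the floor deliberately stops earlier, keeping some of the surrounding material.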
Even though this blogpost focuses on grammar fuzzing, I observed this issue with other structure aware fuzzers as well.
A simple solution?
Both of these issues hint that there might be benefits to combining generative fuzzing with mutational fuzzing in some way. Generative fuzzing produces more diverse samples than mutational fuzzing, but suffers from other issues, such as typically generating lots of samples that trigger errors in the target. Additionally, as stated previously, although coverage is not an ideal criterion for finding bugs, it is still helpful in a lot of cases.
In the past, when I was doing grammar fuzzing on a large number of machines, an approach I used was to delay syncing individual fuzz workers. That way, each worker would initially work with its own (fully independent) corpus. Only after some time had passed would the fuzzers exchange sample sets, and each worker would get the samples corresponding to the coverage that worker was missing.
But what to do when fuzzing on a single machine? During my XSLT fuzzing project, I used the following approach:
1. Start a fuzzing worker with an empty corpus. Run for T seconds.
2. After T seconds, sync the worker with the fuzzing server. Get the missing coverage and corresponding samples from the server. Upload any coverage the server doesn't have (and the corresponding samples) to the server.
3. Run with the combined corpus (generated by the worker + obtained from the server) for another T seconds.
4. Sync with the server again (to upload any new samples) and shut down the worker.
5. Go back to step 1.
The result is that the fuzzing worker spends half of its time creating a fully independent corpus generated from scratch, and half of its time working on a larger corpus that also incorporates interesting samples (as measured by coverage) from the previous workers. This results in more sample diversity, as each new generation is independent of the previous one. However, the worker eventually still ends up with a sample set corresponding to the full coverage seen so far during any worker's lifetime. Ideally, new coverage and, more importantly, new bugs can be found by combining the fresh samples from the current generation with samples from the previous generations.
In Jackalope, this can be implemented by first running the server, e.g.
```shell
/path/to/fuzzer -start_server 127.0.0.1:8337 -out serverout
```

And then running the workers sequentially with the following Python script:
```python
import subprocess
import time

T = 3600

while True:
    subprocess.run(["rm", "-rf", "workerout"])
    p = subprocess.Popen(["/path/to/fuzzer", "-grammar", "grammar.txt",
                          "-instrumentation", "sancov", "-in", "empty",
                          "-out", "workerout", "-t", "1000",
                          "-delivery", "shmem", "-iterations", "10000",
                          "-mute_child", "-nthreads", "6",
                          "-server", "127.0.0.1:8337",
                          "-server_update_interval", str(T),
                          "--", "./harness", "-m", "@@"])
    time.sleep(T * 2)
    p.kill()
```

Note that the Jackalope parameters in the script above are from my libxslt fuzzing run and should be adjusted according to the target.
Additionally, Jackalope implements the `-skip_initial_server_sync` flag to avoid syncing a worker with the server as soon as it starts; this behavior is now the default in grammar fuzzing mode, so the flag does not need to be specified explicitly.
Does this trick work better than running a single uninterrupted fuzzing session? Let's do some experiments. I used an older version of libxslt as the target (libxslt commit 2ee18b3517ca7144949858e40caf0bbf9ab274e5, libxml2 commit 5737466a31830c017867e3831a329c8f605c877b) and measured the number of unique crashes over time. Note that while the number of unique crashes does not directly correspond to the number of unique bugs, being able to trigger the same bug in different ways still gives a good indication of bug-finding capability. I ran each session for one week on a single machine.
I ran two default experiments (with a single long-lived worker), as well as two experiments with the proposed solution using different values of T: T=3600 (one hour) and T=600 (10 minutes).

As demonstrated in the chart, restarting the worker periodically (but keeping the server running), as proposed in this blog post, helped uncover more unique crashes than either of the default sessions, and the crashes were also found more quickly. The default sessions also proved sensitive to starting conditions: one run discovered 5 unique crashes during the experiment time, while the other discovered only 2.
The value of T dictates how soon a worker switches from working only on its own samples to working on its own plus the server's samples. The best value in the libxslt experiment (3600) is roughly the point at which the worker has already found most of the "easy" coverage and discovered the corresponding samples. As can be seen from the experiment, different values of T can produce different results, and the optimal value is likely target-dependent.
Conclusion
Although the trick described in this blog post is very simple, it nevertheless worked surprisingly well and helped discover issues in libxslt more quickly than I likely would have with the default settings. It also underlines the benefit of experimenting with different fuzzing setups according to the target's specifics, rather than relying on out-of-the-box tooling.
Future work might include researching fuzzing strategies that favor novelty and would, for example, replace samples with newer ones even when doing so does not change the overall fuzzer coverage.
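As a toy illustration of such a strategy (the class and its interface are mine, not taken from any existing fuzzer), a corpus keyed by coverage signature could simply evict the older sample whenever a newer one reaches the same coverage:

```python
# Sketch: a novelty-favoring corpus. Rather than keeping the first
# sample that reached a given coverage signature, always replace it
# with the most recent one, even though overall coverage is unchanged.
class NoveltyCorpus:
    def __init__(self):
        self.samples = {}  # coverage signature -> newest sample

    def add(self, sample, coverage):
        """Store the sample; returns True if the coverage was new."""
        key = frozenset(coverage)
        is_new = key not in self.samples
        self.samples[key] = sample  # the newer sample always wins
        return is_new
```

The overall coverage map never shrinks, but the sample backing each coverage signature is always the freshest one seen.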
-
-
đ r/Yorkshire What's in Mytholmroyd? rss
I'm in Mytholmroyd for work for a few hours today. What's actually here? Anything I should see before I go?
Cheers!
submitted by /u/ANuggetEnthusiast
[link] [comments] -
đ MetaBrainz Remembering mayhem rss
Rob Kaye (also known to the community and his peers as ruaok and mayhem) was many things. Friend, partner, colleague, 'that guy with the crazy hair', hacker, burner, visionary and much more. And always a source of creative mayhem!
Millions more have used, contributed to, or benefited from his open-source vision and projects. There's no doubt that Rob was one of the spearheads of open-source. He championed open music data and showed the world that a non- profit open-source organisation could be financially viable, competing with (and far outliving most) similar corporate projects.
Below we will share some of Rob's history with MetaBrainz and staff. Thank you to everyone who left memories on the announcement post and elsewhere on the world wide web. His spirit lives on in our hearts and in 1's and 0's.
Rob and MetaBrainz
In the year 2000 a young Rob created MusicBrainz. He had just witnessed the corporatization of CDDB and embarked on the creation of a collaborative music database that could never be snatched from its contributors.
Young Rob ('the one with the hair') in the ballpit at the old London Last.fm offices

For over 25 years Rob guided MusicBrainz along its path, always focussed on his vision of openness and independence. He nestled his projects safely in the non-profit arms of the MetaBrainz Foundation, to further safeguard them for the future. Since the year 2000 many of MusicBrainz' sister projects have bloomed under the MetaBrainz umbrella, such as MusicBrainz Picard, BookBrainz and ListenBrainz, with Rob either supporting community efforts or identifying a need and kickstarting them himself.
26 years after founding MusicBrainz, with 143,901,298 and growing MusicBrainz IDs serving billions of global requests and (relatively) young ListenBrainz already at 1 billion+ listens, there is no doubt that Rob's open-source efforts have changed the landscape of music data and, by extension, human culture (which relies on open and accessible histories) and the lives of musicians. It's changed not just for us die-hards who live 'in' the MetaBrainz ecosystem, but also for the millions of people using the thousands of services that interact with MetaBrainz' data. It's probably no exaggeration to say that most people have interacted with MetaBrainz data at some point in their lives.
Fearless, peerless

None of this could have happened without Rob's fierce and immovable guard against corporate influence and the enshittification that has taken down so many of MetaBrainz' contemporaries over the decades. He would gleefully share stories of offers to "purchase" MetaBrainz and the ignorance of trying to spend money on something that has effectively been made utterly un-purchasable. He did not bend the knee to power - exemplified by his famous 'Amazon cake' endeavour.
Rob was a hacker at heart, which made it all the more admirable that he spent much of his time dealing with the humdrum of what has become a substantial operation, with a respectable row of servers and employees all clamouring to be kept warm, dry, fed and paid, not to mention guiding hundreds of students and new contributors through their first forays into open-source.

Robert Kaye and some of the MetaBrainz team in 2024
Rob was also an excellent delegator. Once you had Rob's trust he would let you cook, resulting in a wide range of incredible talent being incorporated into the MetaBrainz team. Rob was still coding whenever he could, but his excellent team allowed him to spend the free time that MetaBrainz' admin left him hacking on collaborations, experiments and anything else that caught his interest - for instance, recently he was spending some evenings working on MBID Mapper 2.0, looking forward to GSoC, and was excited about upcoming collaborations.
Rob will be outlived by what he built, just as he intended. Nothing will be able to replace the presence of that cheeky smile, but Rob's influence will still be felt when the monument to many a king would have crumbled.
The Captain and My Friend
zas has written the following piece about his experience working with Rob, an experience everyone on the MetaBrainz staff, board, and many many volunteer contributors were lucky enough to share.
Rob and I were both born in 1970. Being children of that same year meant we shared more than just a birth year; we shared a digital soul. We grew up hacking hardware when it still felt like magic, watching the world connect through the screech of modems, and finding our first real homes in the scrolling text of IRC and newsgroups.
Rob was a man of many origins: German, American, Catalan, and a constant traveller. But he didn't just move through the world; he transformed it. He was impossible to miss: a man of flashy colours, vibrant hair, and weird clothes. Even in the crowded ancient streets of the Old Delhi Market, Rob stood out. He occupied space with a joyous, colourful defiance that invited everyone else to be themselves, too.
I first came to the project through a specific challenge. I had 2k+ CDs from my collection converted into FLAC files and a question: how to properly tag them with decent metadata? I met MusicBrainz, then Picard, and eventually, I met Rob and a life-changing friendship of 12 years. One day, he messaged me with a simple question: would I be interested in some sysadmin tasks?
I jumped on a train to Barcelona just to see him. We sat in a bar, drank a beer, andâdespite my "very bad" spoken Englishâwe understood each other perfectly. We spent that afternoon dreaming up ways to migrate the entire MusicBrainz infrastructure.
Rob had a rare duality. He was the flamboyant traveller and maker who could command a room in a custom-made skirt of his own design, yet he was also the close friend who would happily retreat into a quiet corner to lose himself in the details of a PCB design or a complex server migration. He was as comfortable under the spotlight as he was behind a terminal. He loved machines AND humans.
He built a "glass house" of data so that the fruits of our labor could never be sold or stolen. He was a leader who never lost the soul of a hacker, a visionary who lived and dressed in technicolor.
Rob was the Captain of MetaBrainz, but to me, he was a fellow traveller who started his journey exactly when I did. He has moved on to the next adventure, leaving us a world that is a little more open, a lot more honest, and infinitely more colourful.
The servers are up, the mission continues, and the music is playing for you, Rob.
Rest easy, my friend. Ruhe in Frieden. Reposa en pau. Bon voyage.
Gallery of mayhem

-
đ HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release

New releases:
- [IDASQL](https://github.com/allthingsida/idasql): 0.0.10 -
đ r/Yorkshire Is Whitby doable as a day trip from York? rss
Hi everyone, I'll be in York for a couple of days with a friend and we were thinking about heading over to Whitby for the day while we're there.
Not sure if it works well as just a day trip or if it ends up feeling a bit rushed. We mostly just want to walk around the harbour, grab some fish and chips and maybe head up to Whitby Abbey if we've got time. Has anyone done it as a day trip from York before? Just wondering if it's worth it or if it's better staying a night.
submitted by /u/FeistyPrice29
[link] [comments] -
đ r/LocalLLaMA Alibaba CEO: Qwen will remain open-source rss
submitted by /u/Bestlife73
[link] [comments]
-
đ r/LocalLLaMA Google invites ex-qwen ;) rss
to make Gemma great again? ;) submitted by /u/jacek2023
[link] [comments]
-
đ r/reverseengineering Your Duolingo Is Talking to ByteDance: Cracking the Pangle SDK's Encryption rss
submitted by /u/igor_sk
[link] [comments] -
đ Rust Blog Announcing Rust 1.94.0 rss
The Rust team is happy to announce a new version of Rust, 1.94.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via `rustup`, you can get 1.94.0 with:

```shell
$ rustup update stable
```

If you don't have it already, you can get `rustup` from the appropriate page on our website, and check out the detailed release notes for 1.94.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (`rustup default beta`) or the nightly channel (`rustup default nightly`). Please report any bugs you might come across!

What's in 1.94.0 stable
Array windows
Rust 1.94 adds `array_windows`, an iterating method for slices. It works just like `windows` but with a constant length, so the iterator items are `&[T; N]` rather than dynamically-sized `&[T]`. In many cases, the window length may even be inferred by how the iterator is used!

For example, part of one 2016 Advent of Code puzzle is looking for ABBA patterns: "two different characters followed by the reverse of that pair, such as `xyyx` or `abba`." If we assume only ASCII characters, that could be written by sweeping windows of the byte slice like this:

```rust
fn has_abba(s: &str) -> bool {
    s.as_bytes()
        .array_windows()
        .any(|[a1, b1, b2, a2]| (a1 != b1) && (a1 == a2) && (b1 == b2))
}
```

The destructuring argument pattern in that closure lets the compiler infer that we want windows of 4 here. If we had used the older `.windows(4)` iterator, then that argument would be a slice which we would have to index manually, hoping that runtime bounds-checking will be optimized away.

Cargo config inclusion
Cargo now supports the `include` key in configuration files (`.cargo/config.toml`), enabling better organization, sharing, and management of Cargo configurations across projects and environments. These include paths may also be marked `optional` if they might not be present in some circumstances, e.g. depending on local developer choices.

```toml
# array of paths
include = [
    "frodo.toml",
    "samwise.toml",
]

# inline tables for more control
include = [
    { path = "required.toml" },
    { path = "optional.toml", optional = true },
]
```

See the full `include` documentation for more details.

TOML 1.1 support in Cargo
Cargo now parses TOML v1.1 for manifests and configuration files. See the TOML release notes for detailed changes, including:
- Inline tables across multiple lines and with trailing commas
- `\xHH` and `\e` string escape characters
- Optional seconds in times (sets to 0)
For example, a dependency like this:
```toml
serde = { version = "1.0", features = ["derive"] }
```

... can now be written like this:

```toml
serde = { version = "1.0", features = ["derive"], }
```

Note that using these features in `Cargo.toml` will raise your development MSRV (minimum supported Rust version) to require this new Cargo parser, and third-party tools that read the manifest may also need to update their parsers. However, Cargo automatically rewrites manifests on publish to remain compatible with older parsers, so it is still possible to support an earlier MSRV for your crate's users.

Stabilized APIs
- `<[T]>::array_windows`
- `<[T]>::element_offset`
- `LazyCell::get`
- `LazyCell::get_mut`
- `LazyCell::force_mut`
- `LazyLock::get`
- `LazyLock::get_mut`
- `LazyLock::force_mut`
- `impl TryFrom<char> for usize`
- `std::iter::Peekable::next_if_map`
- `std::iter::Peekable::next_if_map_mut`
- x86 `avx512fp16` intrinsics (excluding those that depend directly on the unstable `f16` type)
- AArch64 NEON fp16 intrinsics (excluding those that depend directly on the unstable `f16` type)
- `f32::consts::EULER_GAMMA`
- `f64::consts::EULER_GAMMA`
- `f32::consts::GOLDEN_RATIO`
- `f64::consts::GOLDEN_RATIO`
These previously stable APIs are now stable in const contexts:
Other changes
Check out everything that changed in Rust, Cargo, and Clippy.
Contributors to 1.94.0
Many people came together to create Rust 1.94.0. We couldn't have done it without all of you. Thanks!
-
đ matklad JJ LSP Follow Up rss
JJ LSP Follow Up
Mar 5, 2026
In Majjit LSP, I described an idea of implementing Magit-style UX for jj once and for all, leveraging the LSP protocol. I've learned today that the upcoming 3.18 version of LSP has a feature that makes this massively less hacky: Text Document Content Request
LSP can now provide virtual documents, which aren't actually materialized on disk. So this:
can now be such a virtual document, where highlighting is provided by semantic tokens, things like âcheck out this commitâ are code actions, and âgoto definitionâ jumps from the diff in the virtual file to a real file in the working tree.
Exciting!
-
đ Console.dev newsletter Bubble Tea v2 rss
Description: Terminal UI framework.
What we like: Combined set of framework tools for TUI development: Bubble Tea (interactions), Lip Gloss (layouts), Bubbles (UI components). Uses a new rendering engine for performance. Improved support for inline images, clipboards, rendering sync. Declarative views.
What we dislike: Note that it's Go-only. A good choice for terminal utilities though.
-
đ Console.dev newsletter numpy-ts rss
Description: NumPy implementation in TypeScript/JS.
What we like: Avoid splitting your stack if youâre already building in TS/JS. Tree-shakable library with universal runtime support across server and client. No additional dependencies. Validated against NumPy tests.
What we dislike: Not quite at 100% NumPy API coverage yet (94%). Slower than NumPy (average 11x slower, median 3x slower) for many of the benchmarks, as you'd expect.
-
đ Llogiq on stuff Write small Rust scripts rss
Recently I was working on a Rust PR to reduce `unreachable_code` lint churn after `todo!()` calls: it basically removes `unreachable_code` lint messages after `todo!()` and instead adds a `todo_macro_uses` lint, which can be turned off while the code is still being worked on. However, once that change was done, I ran into a number of failing tests, because while they had a `#![allow(unused)]` or some such, this didn't cover the `todo_macro_uses` lint.

Brief digression: rustc itself is tested by a tool called compiletest. That tool runs the compiler on code snippets, captures the output and compares it with known-good golden master output it stores alongside the snippets. In this case, there were a good number of tests that had `todo!()` but didn't `#![allow(todo_macro_uses)]`. More tests than I'd care to change manually.
todo!()but didnât#![allow(todo_macro_uses)]. More tests than Iâd care to change manually.In this year of the lord, many of us would ask some agent to do it for them, but I didnât like the fact that I would have to review the output (I have seen too many needless formatting changes to be comfortable with investing time and tokens into that). Also I had a code snippet to find all rust files lying around that only used standard library functions and could easily be pasted into a throwaway project.
```rust
use std::io;
use std::path::Path;

fn check_files(path: &Path) -> io::Result<()> {
    for e in std::fs::read_dir(path)? {
        let Ok(d) = e else { continue; };
        if d.file_type().is_ok_and(|ft| ft.is_dir()) {
            check_files(&d.path())?;
        } else {
            let path = d.path();
            if path.extension().is_some_and(|ext| ext == "rs") {
                check_file(&path)?;
            }
        }
    }
    Ok(())
}
```

This can be called on a `Path` and walks it recursively, calling `check_file` on all Rust files. I also had done a few read-modify-write functions in Rust (notably in my twirer tool I use for my weekly This Week in Rust contributions). They look like this:

```rust
fn check_file(path: &Path) -> io::Result<()> {
    let orig_text = std::fs::read_to_string(path)?;
    let text = todo!(); // put the changed `orig_text` into `text`
    std::fs::write(path, text)
}
```

There was some slight complication in that a) I wanted to amend any `#![allow(..)]` annotation I would find instead of adding another, and b) to add one, I would have to find the first position after the initial comments (which are interpreted by compiletest, which would be foiled by putting them below a non-comment line). Also I didn't want to needlessly add empty lines, so I had to check whether to insert a newline. All in all this came out to less than 50 lines of Rust code, which I'm reproducing here; perhaps someone can use them to copy into their own code to have their own one-off Rust scripts.

```rust
use std::fs::{read_dir, read_to_string, write};
use std::io;
use std::path::Path;

fn check_file(path: &Path) -> io::Result<()> {
    let orig_text = read_to_string(path)?;
    if !orig_text.contains("todo!(") || orig_text.contains("todo_macro_uses") {
        return Ok(());
    }
    let text = if let Some(pos) = orig_text.find("#![allow(") {
        // we have an `#[allow(..)]` we can extend
        let Some(insert_pos) = orig_text[pos..].find(")]") else {
            panic!("unclosed #![allow()]");
        };
        let (before, after) = orig_text.split_at(pos + insert_pos);
        format!("{before}, todo_macro_uses{after}")
    } else {
        // find the first line after all // comments
        let mut pos = 0;
        while orig_text[pos..].starts_with("//") {
            let Some(nl) = orig_text[pos..].find("\n") else {
                pos = orig_text.len();
                break;
            };
            pos += nl + 1;
        }
        let (before, after) = orig_text.split_at(pos);
        // insert a newline unless at beginning or we already have one
        let nl = if pos == 0 || before.ends_with('\n') { "" } else { "\n" };
        format!("{before}{nl}#![allow(todo_macro_uses)]\n{after}")
    };
    write(path, text)
}

fn check_files(path: &Path) -> io::Result<()> {
    for e in read_dir(path)? {
        let Ok(d) = e else { continue; };
        if d.file_type().is_ok_and(|ft| ft.is_dir()) {
            check_files(&d.path())?;
        } else {
            let path = d.path();
            if path.extension().is_some_and(|ext| ext == "rs") {
                check_file(&path)?;
            }
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    check_files(&Path::new("../rust/tests/ui"))
}
```

The script ran flawlessly, I didn't need to check the output for errors, and I can reuse parts of it whenever I feel like it.
Conclusion: It's easy and quick to write small Rust scripts to transform code. And since you know what the code does, you don't need any time to review the output. And Rust's standard library, while missing pieces that might simplify some tasks, is certainly serviceable for work like this. Even if I had the need for, say, regexes, those would've been a mere `cargo add regex` away. So next time you need to mechanically transform some code, don't reach for AI, simply rust it. -
đ exe.dev APIs for the RESTless rss
Exe.dev's API to create a new machine is:
```shell
ssh exe.dev new --name=restless --json
```

That assumes your SSH key is already registered to your account.
If you want to do it over HTTPS, it's:
```shell
curl -X POST https://exe.dev/exec \
  -H "Authorization: Bearer $TOKEN" \
  -d 'new --name=restless --json'
```

Our CLI and our API are one and the same. The conventions are unix-y (how to parse command-line flags) rather than web-by, but they're familiar to our end users, and you don't have to learn two different conventions.
Minting Your Own Tokens
The only tricky bit is giving our users bearer tokens, and here we did something new: you can use your SSH key to mint your own tokens, and you can give those self-minted tokens restrictions (when they're valid, what they can do) without chatting with us. If the signature checks out, we know that the token was generated by the SSH private key.
We walk through building a token step by step in our documentation, but this shell function does the trick:
```shell
exetoken() {
    # Generate an exe.dev API token.
    #   exetoken [permissions_json] [ssh_key_path]
    # permissions_json defaults to '{}' (no restrictions)
    # ssh_key_path defaults to the first IdentityFile from ssh config
    local perms
    if [ -n "$1" ]; then
        perms="$1"
    else
        perms='{}'
    fi
    local key
    if [ -n "$2" ]; then
        key="$2"
    else
        local default_key=$(ssh -G exe.dev | grep -i identityfile | head -n1 | awk '{print $2}')
        key="${default_key/#\~/$HOME}"
    fi
    b() { tr -d '\n=' | tr '+/' '-_'; }
    local p=$(printf '%s' "$perms" | base64 | b)
    local s=$(printf '%s' "$perms" | ssh-keygen -Y sign -f "$key" -n v0@exe.dev 2>/dev/null | sed '1d;$d' | b)
    echo "exe0.$p.$s"
}
```

The key aspects here are the inputs:
- A permissions JSON, e.g. `{"cmds":["whoami"]}` says "this key can execute the `whoami` command."
- The SSH key is the secret that signs the token.
The output is the permissions and the signature of the permissions, encoded with URL-safe base64 to prevent any troubles.
```shell
$ curl -s -X POST https://exe.dev/exec \
    -H "Authorization: Bearer $(exetoken '{"cmds":["whoami"]}')" \
    -d whoami | jq -r '.email'
philip.zeyliger@bloggy.exe.xyz
```

Gadzooks, it works!
Scopes, Expiry, and Revocation
You can associate multiple SSH keys with an exe.dev account. Removing an SSH key from your exe.dev account revokes all tokens signed with that SSH key.
This, dare we say unusual, scheme gives you scopes, expiry, offline token creation, and revocation. We admit it's a little weird.
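To make the scheme concrete, here is a sketch of what a verifier has to undo before it can check the signature. This is hypothetical code, not exe.dev's implementation; it only reverses the encoding performed by the `exetoken` function above, re-wrapping the signature body into the armor that `ssh-keygen -Y verify` expects:

```python
import base64

def decode_exe_token(token):
    """Split an 'exe0.<perms>.<sig>' token back into its parts.

    Reverses the encoding done by the exetoken shell function above:
    re-pads the URL-safe base64 (minting strips '=' padding) and
    re-wraps the signature body into SSH signature armor lines.
    """
    prefix, perms_b64, sig_b64 = token.split(".")
    assert prefix == "exe0", "unknown token version"

    def pad(s):
        return s + "=" * (-len(s) % 4)

    perms = base64.urlsafe_b64decode(pad(perms_b64)).decode()
    # The signature part is the armor body with newlines/padding stripped
    # and '+/' mapped to '-_'; undo that and re-wrap at 70 columns.
    body = pad(sig_b64.translate(str.maketrans("-_", "+/")))
    lines = [body[i:i + 70] for i in range(0, len(body), 70)]
    armored = "\n".join(["-----BEGIN SSH SIGNATURE-----",
                         *lines,
                         "-----END SSH SIGNATURE-----"]) + "\n"
    return perms, armored
```

With the permissions JSON and the re-armored signature recovered, the server can verify the signature against the public keys registered to the account.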
Extending to the SSH Auth Proxy
Exe.dev VMs come with a built-in auth proxy. If you wanted to script talking to a web server on your VM, you could log in manually and steal the cookie. Stealing cookies is naughty, so you could instead mark the VM publicly accessible and implement your own authentication. Our API keys give you a third way: mint a bearer token scoped to just that VM, and access it directly.
For VM tokens, the signing namespace changes from `v0@exe.dev` to `v0@myvm.exe.xyz`:

```shell
# Without a token, the proxy redirects you to log in:
$ curl -s -o /dev/null -w "%{http_code}" https://myvm.exe.xyz/api/data
307

# With a bearer token, you're in:
$ curl -s -H "Authorization: Bearer $VM_TOKEN" https://myvm.exe.xyz/api/data
{"status": "ok"}
```

References
See https://exe.dev/docs/https-api for the full details, including how to mint short-lived tokens.
-
-
đ Ampcode News GPT-5.4, The New Oracle rss
Habemus oraculum! We have a new oracle in Amp and it's GPT-5.4.
It's a great model. In our internal evals response quality went from 60.8% (GPT-5.2) to 68.2% (GPT-5.4). Mean latency is down from ~6.7min to ~4.9min.
In Amp's
`smart` mode GPT-5.4 works really well with Opus 4.6, which is `smart` mode's current main model. They complement each other, with the oracle bringing sage advice on architecture, code reviews, and tricky bugs to the context window, just as we're used to from previous incantations.

On top of that, we also decided to add the oracle subagent to
`deep` mode. Now you might wonder: since `deep` mode currently uses GPT-5.3-Codex as the main model, why add another GPT model in the same mode? Does that even make sense?

We think it does. GPT-5.3-Codex is fantastic at coding (as Codex models tend to be), which is exactly why it is the main model in
`deep`, but the oracle is plain GPT-5.4, a non-Codex model. Less a code specialist, more an all-rounder.
We're still learning what GPT-5.4 can do in practice. There are very likely hidden smarts and treasures we haven't found yet. Let us know once you do.
-
đ Armin Ronacher AI And The Ship of Theseus rss
Because code gets cheaper and cheaper to write, this includes re-implementations. I mentioned recently that I had an AI port one of my libraries to another language and it ended up choosing a different design for that implementation. In many ways, the functionality was the same, but the path it took to get there was different. The way that port worked was by going via the test suite.
Something related, but different, happened with chardet. The current maintainer reimplemented it from scratch by only pointing it to the API and the test suite. The motivation: enabling relicensing from LGPL to MIT. I personally have a horse in the race here because I too wanted chardet to be under a non-GPL license for many years. So consider me a very biased person in that regard.
Unsurprisingly, that new implementation caused a stir. In particular, Mark Pilgrim, the original author of the library, objects to the new implementation and considers it a derived work. The new maintainer, who has maintained it for the last 12 years, considers it a new work and instructed his coding agent to produce precisely that. According to the author, validation with JPlag shows the new implementation is distinct. If you actually consider how it works, that's not too surprising: it's significantly faster than the original implementation, supports multiple cores and uses a fundamentally different design.
What I think is more interesting about this question is the consequences of where we are. Copyleft code like the GPL heavily depends on copyrights and friction to enforce it. But because it's fundamentally in the open, with or without tests, you can trivially rewrite it these days. I myself have been intending to do this for a little while now with some other GPL libraries. In particular I started a re-implementation of readline a while ago for similar reasons, because of its GPL license. There is an obvious moral question here, but that isn't necessarily what I'm interested in. For all the GPL software that might re-emerge as MIT software, so might be proprietary abandonware.
For me personally, what is more interesting is that we might not even be able to copyright these creations at all. A court might still rule that all AI-generated code is in the public domain, because there was not enough human input in it. That's quite possible, though probably not very likely.
But this all causes some interesting new developments we are not necessarily ready for. Vercel, for instance, happily re-implemented bash with Clankers but got visibly upset when someone re-implemented Next.js in the same way.
There are huge consequences to this. When the cost of generating code goes down that much, and we can re-implement it from test suites alone, what does that mean for the future of software? Will we see a lot of software re-emerging under more permissive licenses? Will we see a lot of proprietary software re-emerging as open source? Will we see a lot of software re-emerging as proprietary?
It's a new world and we have very little idea of how to navigate it. In the interim we will have some fights about copyrights but I have the feeling very few of those will go to court, because everyone involved will actually be somewhat scared of setting a precedent.
In the GPL case, though, I think it warms up some old fights about copyleft vs permissive licenses that we have not seen in a long time. It probably does not feel great to have one's work rewritten with a Clanker and one's authorship eradicated. Unlike the Ship of Theseus, though, this seems more clear-cut: if you throw away all code and start from scratch, even if the end result behaves the same, it's a new ship. It only continues to carry the name. Which may be another argument for why authors should hold on to trademarks rather than rely on licenses and contract law.
I personally think all of this is exciting. I'm a strong supporter of putting things in the open with as little license enforcement as possible. I think society is better off when we share, and I consider the GPL to run against that spirit by restricting what can be done with it. This development plays into my worldview. I understand, though, that not everyone shares that view, and I expect more fights over the emergence of slopforks as a result. After all, it combines two very heated topics, licensing and AI, in the worst possible way.
-
- March 04, 2026
-
đ IDA Plugin Updates IDA Plugin Updates on 2026-03-04 rss
IDA Plugin Updates on 2026-03-04
New Releases:
Activity:
- capa
- 56307134: Sync capa rules submodule
- DeepExtractIDA
- 40c3320a: Remove .cursor symlink from generated AGENTS.md bootstrap and blackli…
- ghidra
- ida-dbimporter
- ida-free-mcp
- 9f5ef9a0: Clarify IDA Free decompiler compatibility in tool descriptions and RE…
- Ida-Plugins-Kit
- df5c9f01: Update BinjaDumpToolkit.py
- ida-sdk
- acef1e39: docs: Document SDK versioning scheme (#37)
- idasql
- 4de439e6: feat: v0.0.10 â lib refactor, SQL helpers, UI context, IDAPython (#18)
- msc-thesis-LLMs-to-rank-decompilers
- unflat
- zenyard-ida-public
- capa
-
đ r/Leeds Proposed green space in the city centre rss
Quebec street is already blocked to traffic for half its length, so why not add some greenery?
submitted by /u/PigletConfident6425
[link] [comments] -
đ r/reverseengineering Aura frame brightness API writes succeed but panel luminance never changes (looking for protocol-level insights) rss
submitted by /u/Safe_Beginning5958
[link] [comments] -
đ r/Harrogate Property valuation rss
I want to get a valuer to check that the price I am paying for a house is solid and that I am not overpaying. Does anyone have a recommendation for someone who could do this (estate agent or other)? Rough indication of fee would also be good. Cheers!
submitted by /u/DoughnutHairy9943
[link] [comments] -
đ r/Yorkshire Yorkshire water smart meter estimated readings rss
Has anyone else with Yorkshire Water had their meter send estimated readings the odd day?
Annoying, as I'm 100% sure it's over-estimated; however, you can't send a reading in due to it being a day behind, so it'll be well over that estimated reading now anyway.
submitted by /u/Plastic_Fan_4861
[link] [comments] -
đ r/Leeds Following on from the guy asking about green spaces - I wish this was a park rss
Anyone know its future?
submitted by /u/DaBossLaa
[link] [comments] -
đ Evan Schwartz Scour - February Update rss
Hi friends,
In February, Scour scoured 647,139 posts from 17,766 feeds (1,211 were newly added). Also, 917 new users signed up, so welcome everyone who just joined!
Here's what's new in the product:
đź Inferring Interests from RSS Feeds
If you subscribe to specific feeds (as opposed to scouring all of them), Scour can now infer topics you might be interested in from them. You can click the link that says "Suggest from my feeds" on the Interests page. Thank you to the anonymous user who requested this!
đ€ Improved Onboarding
The onboarding experience is simpler. Instead of typing out three interests, you can now describe yourself and your interests in free-form text. Scour extracts a set of interests from what you write. Thank you to everyone who let me know that they were a little confused by the onboarding process.
đ Ranking Improvements
I made two subtle changes to the ranking algorithm. First, the scoring algorithm ranks posts by how well they match your closest interest and gives a slight boost if the post matches multiple interests. That was the intended design from earlier, but I realized that multiple weaker matches were pulling down the scores rather than boosting them.
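A minimal sketch of that scoring shape (illustrative Python only; Scour's actual implementation and boost weight are not shown in this post, so the function name and the 0.05 boost here are invented):

```python
def score_post(similarities: list[float], boost: float = 0.05) -> float:
    """Score a post against a user's interests.

    The best-matching interest dominates the score; every additional
    matching interest adds a small bonus instead of dragging the
    score down the way an average over all matches would.
    """
    matches = [s for s in similarities if s > 0.0]
    if not matches:
        return 0.0
    return max(matches) + boost * (len(matches) - 1)
```

Compare this with taking a mean over all interest similarities, where a second, weaker match lowers the score rather than boosting it.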
The second change was that I finally retired the machine learning text quality classifier model that Scour had been using. The final straw was when a blog post I had written (and worked hard on!) wasn't showing up on Scour. The model had classified it as low quality đ€. I knew for a while that what the model was optimizing for was somewhat orthogonal to my idea of text quality, but that was it. For the moment, Scour relies on a large domain blocklist (of just under 1 million domains) to prevent low-quality content and spam from getting into your feed. I'm also investigating other ways of assessing quality without relying on social signals, but more on that to come in the future.
⥠Massive Performance Improvements
I've always been striving to make Scour fast and it got much faster this past month. My feed, which compares about 35,000 posts against 575 interests, now loads in around 50 milliseconds. Even comparing all the 600,000+ posts from the last month across all feeds takes only 180 milliseconds.
This graph shows the 99th percentile latency (the slowest requests) dropping from the occasional 10 seconds down to under 400 milliseconds (lower is better):

For those interested in the technical details, this speedup came from two changes:
First, I switched from scanning through post embeddings streamed from SQLite, which was already quite fast because the data is local, to keeping all the relevant details in memory. The in-memory snapshot is rebuilt every 15 minutes when the scraper finishes polling all of the feeds for new content. This change resulted in the very nice combination of much higher performance and lower memory usage, because SQLite connections have independent caches.
The second change came from another round of optimization on the library I use to compute the Hamming Distance between each post's embedding and the embeddings of each of your interests. You can read more about this in the upcoming blog post, but I was able to speed up the comparisons by around another 40x, making it so Scour can now do around 1.6 billion comparisons per second.
Together, these changes make loading the feed feel instantaneous, even though your whole feed is ranked on the fly when you load the page.
đ Some of My Favorite Posts
Here were some of my favorite posts that I found on Scour in February:
- Scour is built on vector embeddings, so I'm especially excited when someone releases a new and promising-sounding embedding model. I get particularly excited by those that are explicitly trained to support binary quantization like this one from Perplexity: pplx-embed: State-of-the-Art Embedding Models for Web-Scale Retrieval.
- I also spend a fair amount of time thinking about optimizing Rust code, especially using SIMD, so this was an interesting write up from TurboPuffer: Rust zero-cost abstractions vs. SIMD.
- This was an interesting write up comparing what different coding agents do under the hood: I Intercepted 3,177 API Calls Across 4 AI Coding Tools. Here's What's Actually Filling Your Context Window..
- And finally, this one is on a very different topic but has some nice animations that demonstrate why boarding airplanes is slow and shows The Fastest Way to Board an Airplane.
Happy Scouring!
- Evan
-
đ r/Leeds Does anyone know what this newly built building is? rss
submitted by /u/AshCucumber
[link] [comments] -
đ jj-vcs/jj v0.39.0 release
About
jj is a Git-compatible version control system that is both simple and powerful. See the installation instructions to get started.
Release highlights
- `jj arrange` command brings up a TUI where you can reorder and abandon revisions. #1531
- `jj bookmark advance` automatically moves bookmarks forward to a target revision (defaults to `@`) using customization points `revsets.bookmark-advance-from` and `revsets.bookmark-advance-to`. It is heavily inspired by the longstanding community alias `jj tug`.
Breaking changes
- Dropped support for legacy index files written by jj < 0.33. New index files will be created as needed.
- The following deprecated config options have been removed: `core.fsmonitor`, `core.watchman.register-snapshot-trigger`
- The deprecated command `jj op undo` has been removed. Use `jj op revert` or `jj undo`/`jj redo` instead.
Deprecations
`jj debug snapshot` is deprecated in favor of `jj util snapshot`. Although this was an undocumented command in the first place, it will be removed after 6 months (v0.45.0) to give people time to migrate away.
New features
- Add support for push options in `jj git push` with the `--option` flag. This allows users to pass options to the remote server when pushing commits. The short alias `-o` is also supported.
- `jj new` now evaluates the `new_description` template to populate the initial commit description when no `-m` message is provided.
- Templates now support `first()`, `last()`, `get(index)`, `reverse()`, `skip(count)`, and `take(count)` methods on list types for more flexible list manipulation.
- New `builtin_draft_commit_description_with_diff` template that includes the diff in the commit description editor, making it easier to review changes while writing commit messages.
- Revsets and templates now support `name:x` pattern aliases such as `'grep:x' = 'description(regex:x)'`.
- Filesets now support user aliases.
- `jj workspace add` now links with relative paths. This enables workspaces to work inside containers or when moved together. Existing workspaces with absolute paths will continue to work as before.
- `jj undo` now also outputs what operation was undone, in addition to the operation restored to.
- Bookmarks with two or more consecutive `-` characters no longer need to be quoted in revsets. For example, `jj diff -r '"foo--bar"'` can now be written as `jj diff -r foo--bar`.
- New flag `--simplify-parents` on `jj rebase` to apply the same transformation as `jj simplify-parents` on the rebased commits. #7711
- `jj rebase --branch` and `jj rebase --source` will no longer return an error if the given argument resolves to an empty revision set (`jj rebase --revisions` already behaved this way). Instead, a message will be printed to inform the user why nothing has changed.
- Changed Git representation of conflicted commits to include files from the first side of the conflict. This should prevent unchanged files from being highlighted as "added" in editors when checking out a conflicted commit in a colocated workspace.
- New template function `Timestamp::since(ts)` that returns the `TimestampRange` between two timestamps. It can be used in conjunction with `.duration()` in order to obtain a human-friendly duration between two `Timestamp`s.
- Added new `jj util snapshot` command to manually or programmatically trigger a snapshot. This introduces an official alternative to the previously-undocumented `jj debug snapshot` command. The Watchman integration has also been updated to use this command instead.
- Changed background snapshotting to suppress stdout and stderr to avoid long hangs.
- `jj gerrit upload` now supports a variety of new flags documented in gerrit's documentation. This includes, for example, `--reviewer=foo@example.com` and `--label=Auto-Submit`.
- `jj gerrit upload` now recognizes a Change-Id explicitly set via the alternative trailer `Link`, and will generate a `Link: <review-url>/id/<change-id>` trailer if the `gerrit.review-url` option is set.
- `jj gerrit upload` no longer requires the `-r` flag, and will default to uploading what you're currently working on.
- Templates now support `Serialize` operations on the result of `map()` and `if()`, when supported by the underlying type.
- `jj bookmark rename` now supports `--overwrite-existing` to allow renaming a bookmark even if the new name already exists, effectively replacing the existing bookmark.
- Conditional configuration based on environment variables with `--when.environments`. #8779
Fixed bugs
- Windows: use native file locks (`LockFileEx`) instead of polling with file creation, fixing issues with "pending delete" semantics leaving lock files stuck.
- `jj` now safely detaches the `HEAD` of alternate Git worktrees if their checked-out branch is moved or deleted during Git export.
- `jj file track --include-ignored` now works when `fsmonitor.backend = "watchman"`. #8427
Contributors
Thanks to the people who made this release happen!
- Aaron Christiansen (@AaronC81)
- Andy Brenneke (@abrenneke)
- Anton Älgmyr (@algmyr)
- Austin Seipp (@thoughtpolice)
- Benjamin Tan (@bnjmnt4n)
- Bram Geron (@bgeron)
- Bryce Berger (@bryceberger)
- Caleb White (@calebdw)
- countskm (@countdigi)
- David Higgs (@higgsd)
- Evan Simmons (@estk)
- Fedor Sheremetyev (@sheremetyev)
- Gaëtan Lehmann (@glehmann)
- George Christou (@gechr)
- Hubert Lefevre (@Paluche)
- Ian (@chronologos)
- Ilya Grigoriev (@ilyagr)
- Jaen (@jaens)
- Joseph Lou (@josephlou5)
- Josh Steadmon (@steadmon)
- Martin von Zweigbergk (@martinvonz)
- Matt Kulukundis (@fowles)
- Matt Stark (@matts1)
- max (@pr2502)
- Nika Layzell (@mystor)
- Philip Metzger (@PhilipMetzger)
- Richard Smith (@zygoloid)
- Scott Taylor (@scott2000)
- Steve Klabnik (@steveklabnik)
- Theodore Dubois (@tbodt)
- William Phetsinorath (@shikanime)
- xtqqczze (@xtqqczze)
- Yuya Nishihara (@yuja)
-
-
đ r/LocalLLaMA PSA: Humans are scary stupid rss
Apologies for the harsh post title but wanted to be evocative & sensationalist as I think everyone needs to see this.
This is in response to this submission made yesterday: Qwen3.5 4b is scary smart
Making this post as a dutiful mod here - don't want this sub to spread noise/misinformation.
The submission claimed that Qwen3.5 4b was able to identify what was in an image accurately - except it was COMPLETELY wrong and hallucinated a building that does not exist. The poster clearly had no idea. And it got over 300 upvotes (85% upvote ratio). The top comment on the post points this out, but the upvotes suggest that not only were most people blindly believing the claim, they did not even open the thread to read/participate in the discussion.
This is a stark example of something I think is deeply troubling - stuff is readily accepted without any validation/thought. AI/LLMs are exacerbating this as they are not fully reliable sources of information. It's like that old saying "do you think people would just go on the internet and lie?", but now on steroids.
The irony is that AI IS the tool to counter this problem - when used correctly (grounding in valid sources, cross referencing multiple sources, using validated models with good prompts, parameters, reasoning enabled etc.)
So requesting: a) Posters please validate before posting b) People critically evaluate posts/comments before upvoting c) Use LLMs correctly (here using websearch tool would have likely given the correct result) and expect others on this sub to do so as well
submitted by /u/rm-rf-rm
[link] [comments] -
đ r/reverseengineering Open-source sharing | Self-developed ARM64 virtual machine protection engine (VMP) from scratch. Version 2.0 theoretically covers all A64 fundamental instructions. rss
submitted by /u/TurbulentTrouble9582
[link] [comments] -
đ r/wiesbaden Where can I find a job as quickly as possible? rss
submitted by /u/Zealousideal-Try2904
[link] [comments] -
đ sacha chua :: living an awesome life Expanding yasnippets by voice in Emacs and other applications rss
Yasnippet is a template system for Emacs. I want to use it by voice. I'd like to be able to say things like "Okay, define interactive function" and have that expand to a matching snippet in Emacs or other applications. Here's a quick demonstration of expanding simple snippets:
Screencast of expanding snippets by voice in Emacs and in other applicationsTranscript- 00:00 So I've defined some yasnippets with names that I can say. Here, for example, in this menu, you can see I've got "define interactive function" and "with a buffer that I'll display." And in fundamental mode, I have some other things too. Let's give it a try.
- 00:19 I press my shortcut. "Okay, define an interactive function." You can see that this is a yasnippet. Tab navigation still works.
- 00:33 I can say, "OK, with a buffer that I'll display," and it expands that also.
- 00:45 I can expand snippets in other applications as well, thanks to a global keyboard shortcut.
- 00:50 Here, for example, I can say, "OK, my email." It inserts my email address.
- 01:02 Yasnippet definitions can also execute Emacs Lisp. So I can say, "OK, date today," and have that evaluated to the actual date.
- 01:21 So that's an example of using voice to expand snippets.
This is handled by the following code:
(defun my-whisper-maybe-expand-snippet (text)
  "Add to `whisper-insert-text-at-point'."
  (if (and text (string-match "^ok\\(?:ay\\)?[,\\.]? \\(.+\\)" text))
      (let* ((name (downcase
                    (string-trim
                     (replace-regexp-in-string "[,\\.]" "" (match-string 1 text)))))
             (matching (seq-find
                        (lambda (o)
                          (subed-word-data-compare-normalized-string-distance
                           name (downcase (yas--template-name o))))
                        (yas--all-templates (yas--get-snippet-tables)))))
        (if matching
            (progn
              (if (frame-focus-state)
                  (progn (yas-expand-snippet matching) nil)
                ;; In another application
                (with-temp-buffer
                  (yas-minor-mode)
                  (yas-expand-snippet matching)
                  (buffer-string))))
          text))
    text))
This code relies on my fork of whisper.el, which lets me specify a list of functions for `whisper-insert-text-at-point`. (I haven't asked for upstream review yet because I'm still testing things, and I don't know if it actually works for anyone else yet.) It does approximate matching on the snippet name using a function from subed-word-data.el which just uses `string-distance`. I could probably duplicate the function in my config, but then I'd have to update it in two places if I come up with more ideas.
The code for inserting into other applications is defined in `my-whisper-maybe-type`, which is very simple:
(defun my-whisper-maybe-type (text)
  "If Emacs is not the focused app, simulate typing TEXT.
Add this function to `whisper-insert-text-at-point'."
  (when text
    (if (frame-focus-state)
        text
      (make-process :name "xdotool"
                    :command (list "xdotool" "type" text))
      nil)))
Someday I'd like to provide alternative names for snippets. I also want to make it easy to fill in snippet fields by voice. I'd love to be able to answer minibuffer questions from `yas-choose-value`, `yas-completing-read`, and other functions by voice too. Could be fun!
Related: This is part of my Emacs configuration.
You can e-mail me at sacha@sachachua.com.
-
đ Simon Willison Something is afoot in the land of Qwen rss
I'm behind on writing about Qwen 3.5, a truly remarkable family of open weight models released by Alibaba's Qwen team over the past few weeks. I'm hoping that the 3.5 family doesn't turn out to be Qwen's swan song, seeing as that team has had some very high profile departures in the past 24 hours.
It all started with this tweet from Junyang Lin (@JustinLin610):
me stepping down. bye my beloved qwen.
Junyang Lin was the lead researcher building Qwen, and was key to releasing their open weight models from 2024 onwards.
As far as I can tell a trigger for this resignation was a re-org within Alibaba where a new researcher hired from Google's Gemini team was put in charge of Qwen, but I've not confirmed that detail.
More information is available in this article from 36kr.com. Here's Wikipedia on 36Kr confirming that it's a credible media source established in 2010 with a good track record reporting on the Chinese technology industry.
The article is in Chinese - here are some quotes translated via Google Translate:
At approximately 1:00 PM Beijing time on March 4th, Tongyi Lab held an emergency All Hands meeting, where Alibaba Group CEO Wu Yongming frankly told Qianwen employees.
Twelve hours ago (at 0:11 AM Beijing time on March 4th), Lin Junyang, the technical lead for Alibaba's Qwen Big Data Model, suddenly announced his resignation on X. Lin Junyang was a key figure in promoting Alibaba's open-source AI models and one of Alibaba's youngest P10 employees. Amidst the industry uproar, many members of Qwen were also unable to accept the sudden departure of their team's key figure.
"Given far fewer resources than competitors, Junyang's leadership is one of the core factors in achieving today's results," multiple Qianwen members told 36Kr. [...]
Regarding Lin Junyang's whereabouts, no new conclusions were reached at the meeting. However, around 2 PM, Lin Junyang posted again on his WeChat Moments, stating, "Brothers of Qwen, continue as originally planned, no problem," without explicitly confirming whether he would return. [...]
That piece also lists several other key members who have apparently resigned:
With Lin Junyang's departure, several other Qwen members also announced their departure, including core leaders responsible for various sub-areas of Qwen models, such as:
Binyuan Hui: Lead Qwen code development, principal of the Qwen-Coder series models, responsible for the entire agent training process from pre-training to post-training, and recently involved in robotics research.
Bowen Yu: Lead Qwen post-training research, graduated from the University of Chinese Academy of Sciences, leading the development of the Qwen-Instruct series models.
Kaixin Li: Core contributor to Qwen 3.5/VL/Coder, PhD from the National University of Singapore.
Besides the aforementioned individuals, many young researchers also resigned on the same day.
Based on the above it looks to me like everything is still very much up in the air. The presence of Alibaba's CEO at the "emergency All Hands meeting" suggests that the company understands the significance of these resignations and may yet retain some of the departing talent.
Qwen 3.5 is exceptional
This story hits particularly hard right now because the Qwen 3.5 models appear to be exceptionally good.
I've not spent enough time with them yet but the scale of the new model family is impressive. They started with Qwen3.5-397B-A17B on February 17th - an 807GB model - and then followed with a flurry of smaller siblings in 122B, 35B, 27B, 9B, 4B, 2B, 0.8B sizes.
I'm hearing positive noises about the 27B and 35B models for coding tasks that still fit on a 32GB/64GB Mac, and I've tried the 9B, 4B and 2B models and found them to be notably effective considering their tiny sizes. That 2B model is just 4.57GB - or as small as 1.27GB quantized - and is a full reasoning and multi-modal (vision) model.
It would be a real tragedy if the Qwen team were to disband now, given their proven track record in continuing to find new ways to get high quality results out of smaller and smaller models.
If those core Qwen team members either start something new or join another research lab I'm excited to see what they do next.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
đ r/Yorkshire May - Kringle Workshop - rope mat making in the Peak District rss
submitted by /u/walkinglantern
[link] [comments] -
đ r/reverseengineering FakeGit: LuaJIT malware distributed via GitHub at scale rss
submitted by /u/ectkirk
[link] [comments] -
đ r/LocalLLaMA Qwen3.5-0.8B - Who needs GPUs? rss
I am genuinely surprised at how good the model is and that it can run on a 14-year-old device: 2nd gen i5 + 4GB DDR3 RAM. submitted by /u/theeler222
[link] [comments] -
đ r/Harrogate Visiting Harrogate (Next Week) rss
I'll be in Harrogate and the surrounding area next week to visit my family.
I need to find a used sewing machine for my niece. Can anyone recommend some shops?
submitted by /u/coffeebugtravels
[link] [comments] -
đ r/Harrogate Crooked teeth getting worse, looking for braces in Harrogate rss
Has anyone in Harrogate had braces as an adult?
My bottom teeth have always been a bit crooked, but I feel like they've shifted more over the last couple of years. It's not dramatic, but I notice it every time I catch my reflection.
If anyone's had braces recently, I'd love to hear how it went.
submitted by /u/Wide-Huckleberry-151
[link] [comments] -
đ r/reverseengineering Ghidra 12.0.4 has been released! rss
submitted by /u/ryanmkurtz
[link] [comments] -
đ r/Yorkshire Exploring Jervaulx Abbey rss
đ Jervaulx Abbey, Jervaulx, Ripon, HG4 4PH đïž Suggested donation is £5 for adults & £3 for children | £1 for car parking, pay at the honesty box submitted by /u/Yorkshire-List
[link] [comments] -
đ r/wiesbaden Mainz takes a stand rss
submitted by /u/Chris0607
[link] [comments] -
đ r/Leeds In the 1890s, Woodhouse Ridge was Leeds's most romantic spot rss
I found a really special article written by a YEP reporter after he visited Woodhouse Ridge in the summer of 1894. It was rumoured to be the unofficial courting ground of the city, and he was looking for signs of love. The guy was an absolute poet.
"Over there, men are whispering the same old tale and girls are listening and believing and blushing, as they have done ever since the sun set behind Meanwood. There are the future parents of future generations of Leeds citizens. Those hills are potential with the happiness, misery, love and murder of the coming years".
I returned, 130 years on, to see if it retained any of the old magic.
You can have a read on my history newsletter, Bury the Leeds. It's free to subscribe with your email.
https://burytheleeds.substack.com/p/the-lovers-at-woodhouse-ridge
Ta, r/Leeds !
submitted by /u/bluetrainlinesss
[link] [comments] -
đ r/wiesbaden Any english speakers interested in a group chat to make friends? rss
Hey there, Wiesbaden!
My partner and I are English speakers with a little sausage dog, and we are looking for a group of people to hang out and do stuff with in the city! We are both in our late 20s / early 30s, are into games & board games, movies, music and of course letting our dog run our life.
We have met some friends through here before and were thinking it would be cool to create a group chat for people to organise some activities & meetups in the city.
Whether it's a board game night, a casual park walk, a concert, or a trip to the pub. Would anyone be interested? :D
submitted by /u/LankyRaspberry8110
[link] [comments] -
đ vercel-labs/agent-browser v0.16.3 release
Patch Changes
7d2c895: Fixed an issue where the --native flag was being passed to child processes even when not explicitly specified on the command line. The flag is now only forwarded when the user explicitly provides it, consistent with how other CLI flags like --allow-file-access and --download-path are handled.
-
đ r/Yorkshire Briton diagnosed with rabies after psychiatrist raised fears, inquest told | Yorkshire | The Guardian rss
submitted by /u/prisongovernor
[link] [comments] -
đ vercel-labs/agent-browser v0.16.2 release
Patch Changes
01ac557: Added AGENT_BROWSER_HEADED environment variable support for running the browser in headed mode, and improved temporary profile cleanup when launching Chrome directly. Also includes documentation clarification that browser extensions work in both headed and headless modes.
-
đ eric-tramel/slop-guard v0.3.1 release
Release v0.3.1
v0.3.1 adds stronger AI-slop detection coverage and credits the community contribution from SinatrasC (PR #20).
Highlights
- Added 14 words and 48 phrase patterns to the slop phrase/word inventories, across adjectives, verbs, nouns, and hedges.
- Introduced five passage-level rules, each with fit support and packaged examples: CopulaChainRule, ExtremeSentenceRule, ClosingAphorismRule, ParagraphBalanceRule, ParagraphCVRule
- Updated defaults so the new rule set is available in standard runs of both the CLI and MCP entry point.
New Rules (SinatrasC's PR)
CopulaChainRule
Flags passages where too many sentences open with a copula verb (is/are/was/were) and create an encyclopedia-like chain.
Example that should trigger:
Python is a high-level language. Lists are ordered sequences. Dicts are key-value mappings. Sets are unordered collections. Tuples are immutable sequences. Generators are lazy iterators.
ExtremeSentenceRule
Flags any sentence that is too long (default threshold: 80+ words).
Example that should trigger:
A single sentence that keeps chaining clause after clause after clause with coordinating conjunctions and subordinate phrases while continuing to add qualifications and parenthetical elaborations and extra framing language until it eventually crosses the configured word-count threshold and is treated as a run-on structural signal.
ClosingAphorismRule
Flags moralizing or abstract final sentences that match multiple generalizing wrap-up patterns.
Example that should trigger:
We explored several design patterns here. Each has trade-offs worth understanding. The codebase grows more complex over time. Ultimately, it is our choices that define the system we build.
ParagraphBalanceRule
Flags suspiciously equal body-paragraph sizes using a min/max ratio test.
Example that should trigger:
(Paragraph 1) [intro] (Paragraph 2) ~40 words ... (Paragraph 3) ~40 words ... (Paragraph 4) ~40 words ... (Paragraph 5) ~40 words ...
ParagraphCVRule
Flags low paragraph-length variance across the whole passage via coefficient of variation (CV).
Example that should trigger:
A six-paragraph post where each paragraph is nearly the same length (e.g., ~35 words each), creating highly uniform paragraph rhythm.
Why it matters
- Better vocabulary coverage helps the analyzer flag AI-styled phrasing that was under-detected by older defaults.
- New structural passage checks catch formulaic patterning around copula chains, sentence length, ending aphorisms, and paragraph rhythm.
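For a flavor of what such passage-level checks involve, here is a minimal Python sketch of a copula-chain ratio and a paragraph-length coefficient of variation (illustrative only, not slop-guard's actual code; the function names and thresholds are invented):

```python
import re
import statistics

COPULAS = {"is", "are", "was", "were"}

def copula_chain_score(passage: str) -> float:
    """Fraction of sentences shaped like 'X is/are Y' (a bare copula in
    the first few words) - the encyclopedia-chain pattern."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", passage.strip()) if s]
    if not sentences:
        return 0.0
    hits = sum(1 for s in sentences
               if any(w.lower() in COPULAS for w in s.split()[:3]))
    return hits / len(sentences)

def paragraph_cv(passage: str) -> float:
    """Coefficient of variation of paragraph word counts; values near
    zero indicate suspiciously uniform paragraph rhythm."""
    lengths = [len(p.split()) for p in passage.split("\n\n") if p.strip()]
    if len(lengths) < 2:
        return float("inf")
    mean = statistics.mean(lengths)
    return statistics.pstdev(lengths) / mean if mean else float("inf")
```

A real rule would compare these scores against tuned thresholds before flagging a passage.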
Attribution
This release is based on the community PR #20 by @SinatrasC.
-
đ r/LocalLLaMA Qwen 3.5 4b is so good, that it can vibe code a fully working OS web app in one go. rss
The OS can be used here: WebOS 1.0
Prompt used was "Hello Please can you Create an os in a web page? The OS must have:
2 games
1 text editor
1 audio player
a file browser
wallpaper that can be changed
and one special feature you decide. Please also double check to see if everything works as it should." Prompt idea thanks to /u/Warm-Attempt7773. All I did was ask it to add the piano keyboard. It even chose its own song to use in the player. I messed up on the first chat and it thought I wanted to add a computer keyboard, so I had to paste the HTML code into a new chat and ask for a piano keyboard, but apart from that, perfect! :D Edit: Whoever gave my post an award: Wow, thank you very much, anonymous Redditor!! submitted by /u/c64z86
[link] [comments] -
đ HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release, ~1 changed rss
sync repo: +1 plugin, +1 release, ~1 changed ## New plugins - [IDAGuides](https://github.com/libtero/idaguides) (1.1.1) ## Changes - [idassist](https://github.com/jtang613/IDAssist): - 1.0.2: archive contents changed, download URL changed -
đ vercel-labs/agent-browser v0.16.1 release
Patch Changes
c4180c8: Improved Chrome launch reliability by automatically detecting containerized environments (Docker, Podman, Kubernetes) and enabling --no-sandbox when needed. Added support for discovering Playwright-installed Chromium browsers and enhanced error messages with helpful diagnostics when Chrome fails to launch.
-
- March 03, 2026
-
đ IDA Plugin Updates IDA Plugin Updates on 2026-03-03 rss
IDA Plugin Updates on 2026-03-03
Activity:
- augur
- 9abd3665: chore: update dependencies
- capa
- 517dfe15: Sync capa rules submodule
- HappyIDA
- 18d60010: fix: bounds check argidx before accessing func_data in argument_labeler
- haruspex
- 0913a1c2: chore: update dependencies
- hrtng
- 47fb33e7: IDA 9.3 related fixes: crash on change var type; auto-renamer looping
- ida-ios-helper
- ida-mcp-server
- 7d9d33fb: fix: prevent crashes on large binaries
- 3d6828a2: fix: prevent hang on shutdown by waking blocked accept()
- 807f196f: fix: prevent UI freeze and fix segment iteration bug
- c4e59055: fix: resolve build errors in segments.cpp and snippets.cpp
- 8154e9b7: fix: address cubic-dev-ai review on commit a9ec8d7
- a9ec8d7d: fix: address PR review issues for MCP protocol compliance
- 061b1424: feat: implement full MCP 2025-11-25 protocol conformance
- 7d06bd23: fix: address path escape bypass and Windows install issues from PR #5
- 363f2da9: fix: address security and maintainability issues from PR #5 review
- 055383f7: chore: remove .claude local settings
- 883b415b: chore: add IDA SDK 9.3 headers and stubs for build context
- a185c3e7: fix: address 40 AI code review issues from PR #4
- 84d2f48a: feat: add StreamableHTTP notification support (202 Accepted for no-idâŠ
- IDAPluginList
- e0f4636a: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
- python-elpida_core.py
- 4d2a4a52: Add LemonSqueezy webhook + admin key provisioning + S3 key persistence
- f6f2110d: Fix SyntaxError: backslash in f-string expression (Python 3.11 compat)
- 1c5ab08a: Fix API Space build: remove grpcio-dependent pkgs, add -prefer-binary
- d94b5d45: Fix API Space: Python 3.11, lighter requirements, healthcheck
- b9dd070b: Architecture checkpoint: single source of truth for the full system
- 244b043a: Launch infrastructure: API Space, pricing UI, deploy workflow, checklist
- f9a4f9b6: Fix: /v1/audit no longer hangs; depth=quick skips LLM escalation
- b9009c00: Product split: admin-gated System tab, /v1/audit API, BSL license, raâŠ
- abc1c8ce: P2.1 refined: D13 research-analyst framing (Perplexity refusal fix)
- 985d8e54: P4+P6+P7+Gemini: prescription enforcement, domain/provider diversity,âŠ
- e185f031: Fix CRITIAS monoculture + WorldFeed noise + friction TTL
- 6afabedc: P2.1 + hunger actuator + D0 breath + Gemini safety
- 4427d18c: P4+P5: wire PSO actuator + audit prescription actuator
- 946017dc: P0+P2: feedback dedup + D13 prompt reframing
- 9640cf8e: P1 resolved: cross-invocation memory bridge is constitutional archite…
- 56cf88dd: fix(governance): resolve LOGOS ghost veto + document D15 independence
- c5e34389: fix(parliament): remap CHAOS A9→A10 - eliminate structural double-rep…
- rhabdomancer
- f3a2b00d: chore: update dependencies
- augur
-
r/york Dating in York 50+ rss
Hello,
How do people feel about dating in York over the age of 50?
I'm not talking about apps. I'm talking about York itself and meeting people there.
Any feedback would be appreciated on York and the dating scene for older people.
Thank you Yorkers.
submitted by /u/Less-Pair6695
-
r/Leeds Best sourdough in Leeds? rss
Hi everyone, I'm looking for any small business type bakeries in Leeds that sell sourdough bread by the loaf? I don't know where to look!
A friend gave me a homemade loaf and I've fallen in love with it. I would make it myself but don't have an oven :/
Thank you!
submitted by /u/Working-Barracuda492
-
vercel-labs/agent-browser v0.16.0 release
Minor Changes
05018b3: Added experimental native Rust daemon (--native flag, AGENT_BROWSER_NATIVE=1 env, or "native": true in config). The native daemon communicates with Chrome directly via CDP, eliminating Node.js and Playwright dependencies. Supports 150+ commands with full parity to the default Node.js daemon. Includes WebDriver backend for Safari/iOS, CDP protocol codegen, request tracking, frame context management, and comprehensive e2e and parity tests.
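The three opt-in paths from the note can be summarized with a small decision helper. This is an illustrative sketch only: the function name and the precedence order (flag, then environment, then config) are assumptions, not documented behavior.

```python
def native_mode_enabled(argv, env, config):
    """Return True if the native Rust daemon should be used.

    Mirrors the three opt-in paths listed in the release note:
    the --native flag, the AGENT_BROWSER_NATIVE=1 environment
    variable, or "native": true in the config file.
    Precedence shown here is an assumption.
    """
    if "--native" in argv:
        return True
    if env.get("AGENT_BROWSER_NATIVE") == "1":
        return True
    return bool(config.get("native", False))

# One example per opt-in path:
print(native_mode_enabled(["--native"], {}, {}))                   # True (flag)
print(native_mode_enabled([], {"AGENT_BROWSER_NATIVE": "1"}, {}))  # True (env)
print(native_mode_enabled([], {}, {"native": True}))               # True (config)
```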
-
hyprwm/Hyprland v0.54.1 release
This is a standard patch release backporting some fixes from main onto 0.54.0
Fixes backported
- algo/dwindle: add back splitratio (#13498)
- hyprpm: fix url sanitization in add
- algo/master: fix crash after dpms (#13522)
- algo/scrolling: fix offset on removeTarget (#13515)
- algo/scrolling: fix rare crash
- build: fix build on gcc 16.x after #6b2c08d (#13429)
- compositor: fix focus edge detection (#13425)
- deco/border: fix damageEntire
- desktop/group: fix movegroupwindow not following focus (#13426)
- desktop/rule: fix matching for content type by str
- desktop/window: fix floating windows being auto-grouped (#13475)
- desktop/window: fix idealBB reserved (#13421)
- hyprctl: fix buffer overflowing writes to the socket
- hyprctl: fix workspace dynamic effect reloading (#13537)
- keybinds: fixup changegroupactive
- layout/algo: fix swar on removing a target (#13427)
- layout/scroll: fix configuredWidths not setting properly on new workspaces (#13476)
- layout/scrolling: fix size_t underflow in idxForHeight (#13465)
- layout/windowTarget: fix size_limits_tiled (#13445)
- layouts: fix crash on missed relayout updates (#13444)
- renderer: fix crash on mirrored outputs needing recalc (#13534)
- screencopy: fix nullptr deref if shm format is weird
- tests/workspace: fix one test case failing
- algo/dwindle: don't crash on empty swapsplit (#13533)
- algo/dwindle: use focal point correctly for x-ws moves (#13514)
- algo/scroll: improve directional moves (#13423)
- build: remove auto-generated hyprctl/hw-protocols/ files during make clear (#13399)
- compositor: damage monitors on workspace attachment updates
- desktop/group: respect direction when moving window out of group (#13490)
- desktop/window: don't group modals
- format: safeguard drmGetFormat functions (#13416)
- layout/algos: use binds:window_direction_monitor_fallback for moves (#13508)
- layout/windowTarget: damage before and after moves (#13496)
- layout/windowTarget: don't use swar on maximized (#13501)
- layout/windowTarget: override maximized box status in updateGeom (#13535)
- layout: store and preserve size and pos after fullscreen (#13500)
- monitor: damage old special monitor on change
- monitor: keep workspace monitor bindings on full reconnect (#13384)
- monitor: update pinned window states properly on changeWorkspace (#13441)
- pointer: damage entire buffer in begin of rendering hw
- screencopy: scale window region for toplevel export (#13442)
- scroll: clamp column widths properly
Special thanks
As always, massive thanks to our wonderful donators and sponsors:
Sponsors
Diamond
37Signals
Gold
Framework
Donators
Top Supporters:
Seishin, Kay, johndoe42, d, vmfunc, Theory_Lukas, --, MasterHowToLearn, iain, ari-cake, TyrHeimdal, alexmanman5, MadCatX, Xoores, inittux111, RaymondLC92, Insprill, John Shelburne, Illyan, Jas Singh, Joshua Weaver, miget.com, Tonao Paneguini, Brandon Wang, Arkevius, Semtex, Snorezor, ExBhal, alukortti, lzieniew, taigrr, 3RM, DHH, Hunter Wesson, Sierra Layla Vithica, soy_3l.beantser, Anon2033, Tom94
New Monthly Supporters:
monkeypost, lorenzhawkes, Adam Saudagar, Donovan Young, SpoderMouse, prafesa, b3st1m0s, CaptainShwah, Mozart409, bernd, dingo, Marc Galbraith, Mongoss, .tweep, x-wilk, Yngviwarr, moonshiner113, Dani Moreira, Nathan LeSueur, Chimal, edgarsilva, NachoAz, mo, McRealz, wrkshpstudio, crutonjohn
One-time Donators:
macsek, kxwm, Bex Jonathan, Alex, Tomas Kirkegaard, Viacheslav Demushkin, Clive, phil, luxxa, peterjs, tetamusha, pallavk, michaelsx, LichHunter, fratervital, Marpin, SxK, mglvsky, Pembo, Priyav Shah, ChazBeaver, Kim, JonGoogle, matt p, tim, ybaroj, Mr. Monet Baches, NoX, knurreleif, bosnaufal, Alex Vera, fathulk, nh3, Peter, Charles Silva, Tyvren, BI0L0G0S, fonte-della-bonitate, Alex Paterson, Ar, sK0pe, criss, Dnehring, Justin, hylk, é±ćçKoryChiu, KSzykula, Loutci, jgarzadi, vladzapp, TonyDuan, Brian Starke, Jacobrale, Arvet, Jim C, frank2108, Bat-fox, M.Bergsprekken, sh-r0, Emmerich, davzucky, 3speed, 7KiLL, nu11p7r, Douglas Thomas, Ross, Dave Dashefsky, gignom, Androlax, Dakota, soup, Mac, Quiaro, bittersweet, earthian, Benedict Sonntag, Plockn, Palmen, SD, CyanideData, Spencer Flagg, davide, ashirsc, ddubs, dahol, C. Willard A.K.A Skubaaa, ddollar, Kelvin, Gwynspring, Richard, Zoltán, FirstKix, Zeux, CodeTex, shoedler, brk, Ben Damman, Nils Melchert, Ekoban, D., istoleyurballs, gaKz, ComputerPone, Cell the Führer, defaltastra, Vex, Bulletcharm, cosmincartas, Eccomi, vsa, YvesCB, mmsaf, JonathanHart, Sean Hogge, leat bear, Arizon, JohannesChristel, Darmock, Olivier, Mehran, Anon, Trevvvvvvvvvvvvvvvvvvvv, C8H10N4O2, BeNe, Ko-fi Supporter :3, brad, rzsombor, Faustian, Jemmer, Antonio Sanguigni, woozee, Bluudek, chonaldo, LP, Spanching, Armin, BarbaPeru, Rockey, soba, FalconOne, eizengan, ăăăłăš, zanneth, 0xk1f0, Luccz, Shailesh Kanojia, ForgeWork, Richard Nunez, keith groupdigital.com, pinklizzy, win_cat_define, Bill, johhnry, Matysek, anonymus, github.com/wh1le, Iiro Ullin, Filinto Delgado, badoken, Simon Brundin, Ethan, Theo Puranen Åhfeldt, PoorProgrammer, lukas0008, Paweł S, Vandroiy, Mathias Brännström, Happyelkk, zerocool823, Bryan, ralph_wiggums, DNA, skatos24, Darogirn, Hidde, phlay, lindolo25, Siege, Gus, Max, John Chukwuma, Loopy, Ben, PJ, mick, herakles, mikeU-1F45F, Ammanas, SeanGriffin, Artsiom, Erick, Marko, Ricky, Vincent mouline
Full Changelog :
v0.54.0...v0.54.1 -
r/york Informal LGBT+ Meet-Up @ Cityscreen Cafe - 06/03/2026 (This Friday) & 11/03/2026 (Next Wednesday) rss
Hi there! This is just a follow-up to two very informal meet-ups me and my friend hosted at Cityscreen cafe in January and February for LGBT+ folks in the York area. As a refresher for anyone who didn't see the first two posts, here's the last thread with some background: https://www.reddit.com/r/york/comments/1r0b2up/redux_two_friends_looking_to_make_connections_in/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button Here's the tentative plan for the next two meet-ups: Where: Cityscreen Cafe When: 19:00 Friday 6th of March (i.e. this Friday) & 19:00 Wednesday 11th of March (i.e. next Wednesday) You'll know it's us because we'll have a fluffy rabbit toy on our table like the one in the attached photo. That's the sign! We'll try to snag the sofa at the back of Cityscreen cafe, but depending on how many people turn up, we might have to take our posse elsewhere - upstairs at The Habit and The Exhibition Hotel struck me as two possible places we could try on the night. If this sounds like your cup of tea (or coffee!), please drop a comment below so we can get an idea of numbers. If you're interested but feeling shy, feel free to shoot me a DM instead. I'll also send out messages individually to the specific people who responded to the last posts! We've also created an event in the York Discord server meet-ups channel, so if anyone here is a member of the York Discord and wants to confirm their interest/attendance that way, they're welcome to do so. The name of the event is: "06/03/2026 & 11/03/2026 - Informal LGBT+ Meet-up @ Cityscreen Cafe". All the best, and maybe see you there! :-) submitted by /u/WaxyMelt
-
r/york Laptop repairs rss
Anyone know a place in York for laptop repair? The bloke I used a few years ago has gone, I think.
submitted by /u/WalnutOfTheNorth
-
r/reverseengineering Reverse Engineering Crazy Taxi, Part 1 rss
submitted by /u/ifnspifn
-
@HexRaysSA@infosec.exchange ⚡ Check out what we have in store for IDA in 2026... mastodon
⚡ Check out what we have in store for IDA in 2026...
We're expanding technical depth, improving performance, strengthening collaboration, and introducing a new generation of scalable RE tools.
Read our 2026 Product Vision: https://hex-rays.com/blog/2026-product-direction-priorities
-
sacha chua :: living an awesome life The week of February 23 to March 1 rss
Monday, February 23
I asked whether her friends could come to her party tomorrow. We learned that they had been sick for a few weeks and couldn't come.
I took my daughter to her gymnastics class by bike because the streets were passable. She practiced doing cartwheels. Afterwards, we delivered goodie bags and cupcakes for her friends, who are unfortunately sick.
My daughter wanted to make meringue cookies. She separated the eggs and beat them herself until she got tired. The first batch didn't work out, but the second was acceptable. We left them in the toaster oven overnight.
We discovered that the microwavable plush axolotl is a perfect way to warm our toes under the covers. My daughter can't get used to the smell (it's probably the flaxseed with the lavender), but if it's under the covers, it doesn't bother her.
Tuesday, February 24
The meringue cookies were still too sticky this morning. It turns out I forgot to bake them for an hour last night. I threw out half the batch before looking up a way to fix the rest. Fortunately, after baking them for an hour at a low temperature, the cookies were acceptable.
I worked on pronunciation with my tutor. I retried the tongue twisters from the previous session, along with some new ones:
- 00:00 Maman peint un grand lapin blanc.
- 00:04 Un enfant intelligent mange lentement.
- 00:08 Le roi croit voir trois noix.
- 00:12 Le témoin voit le chemin loin.
- 00:16 Moins de foin au loin ce matin.
- 00:21 La laine beige sèche près du collège.
- 00:25 La croquette sèche dans l'assiette.
- 00:28 Elle mène son frère à l'hôtel.
- 00:31 Le verre vert est très clair.
- 00:35 Elle aimait manger et rêver.
- 00:38 Le jeu bleu me plaît peu.
- 00:41 Ce neveu veut un jeu.
- 00:44 Le feu bleu est dangereux.
- 00:47 Le beurre fond dans le cœur chaud.
- 00:50 Les fleurs de ma sœur sentent bon.
- 00:54 Le hibou sait où il va.
- 00:56 L'homme fort mord la pomme.
- 01:00 Le sombre col tombe.
- 01:02 L'auto saute au trottoir chaud.
- 01:07 Le château d'en haut est beau.
- 01:09 Le cœur seul pleure doucement.
- 01:14 Tu es sûr du futur.
- 01:17 Trois très grands trains traversent trois trop grandes rues.
{tʁwˈa tʁɛ ɡʁˈɑ̃ tʁˈɛ̃ tʁavˈɛʁs tʁwˈa tʁo ɡʁˈɑ̃d ʁˈy.} - 01:29 Je veux deux feux bleus, mais la reine préfère la laine beige.
{ʒə vˈø dˈø fˈø blˈø, mɛ la ʁˈɛn pʁefˈɛʁ la lˈɛn bˈɛʒ.} - 01:37 Vincent prend un bain en chantant lentement.
{vɛ̃sˈɑ̃ pʁˈɑ̃t œ̃ bˈɛ̃ ɑ̃ ʃɑ̃tˈɑ̃ lɑ̃tmˈɑ̃.} - 01:44 La mule sûre court plus vite que le loup fou.
{la mˈyl sˈyːʁ kˈuʁ ply vˈit kə lə lˈu fˈu.} - 01:50 Luc a bu du jus sous le pont où coule la boue.
{lˈyk a bˈy dy ʒˈy su lə pˈɔ̃t u kˈul la bˈu.}
I didn't record a good attempt for:
- Le frère de Robert prépare un rare rôti rouge.
{lə fʁˈɛʁ də ʁobˈɛʁ pʁepˈaʁ œ̃ ʁˈaʁ ʁotˈi ʁˈuʒ.} - La mule court autour du mur où hurle le loup.
{la mˈyl kˈuʁ otˈuʁ dy mˈyʁ u ˈyʁl lə lˈu.}
If I understood correctly, my tutor told me that the sounds in « Maman peint un grand lapin blanc. » are closer to each other than in the text-to-speech version. He also pronounced « doucement » with three syllables instead of two. I wonder if that's the accent of the south of France. That's perfectly acceptable. For now, my pronunciation goal is just to be understandable, not to achieve a metropolitan or Canadian accent. If I learn the pronunciation of the nasal vowels and the « r », and learn the liaisons and the silent letters, I think it will be easy for me to pick up an acceptable accent even if it isn't perfect.
Listening to my recordings wasn't very useful. It was better to read the tongue twisters aloud during the session. Maybe I should tweak my interface for listening to short parts of recordings. But I think preparing the recordings before the session was useful.
I added a feature to my compile-media library for cutting out a section from the middle of a recording.
We postponed my daughter's birthday party because of her friends' illnesses. She invited two families, but all the children were sick. According to wastewater pathogen surveillance, a few illnesses are very common at the moment. We gave them a goodie bag and some cakes.
Even though we didn't have a party, we bought the pizza we had planned.
She has cramps, poor dear. The warmed-up axolotl was a source of comfort.
Wednesday, February 25
For once, my daughter woke up in time for breakfast. But virtual school has a substitute teacher today, so my daughter didn't want to join the class. C'est la vie. I let her decide because it's her responsibility.
I submitted my company's annual report. It was simple.
I bought LEGO flowers for the birthday of my sister, who lives in the Netherlands. We have the same flowers and my daughter loves them.
I took part in the Emacs Berlin virtual meetup. Someone asked us how to sort completion candidates, so I explained the mechanism and put together an example that sorts the candidates differently. I also demonstrated consult-org-headings and edebug, and discussed Embark and Consult.
I took my daughter to the skating rink to play with her friend and her friend's scout troop. I brought 2 litres of hot chocolate, which was more than enough for all the kids. Her friend's dad taught them how to turn faster. They also played tag. Even though four girls chased the dad, they didn't catch him.
Thursday, February 26
Once again, my daughter woke up in time for breakfast. She joined the class. Everything went well. After school, she wanted to run some errands by herself. She borrowed two books from the library and bought a few snacks at the supermarket. I followed her from a little way back to share my Internet. She craves independence, but she also wanted to play Pokémon Go.
I modified the Orderless completion mechanism to handle accented letters. I also improved my functions that sort completion candidates by level instead of by position. Then I wrote three posts for my blog: two about completion for the Emacs Carnival and one about converging interests for the IndieWeb Carnival. I'm delighted to write up the functions and the notes.
In preparation for another post, I gathered more than 300 links about completion drawn from my newsletter over the past few years. I updated my link-checking function and used it to identify dead links. I also started categorizing some of them.
I wrote functions for my subed-record library to listen to audio references like the ones I had extracted from the session with my tutor.
I had to renew my SSL certificates, which required updating my software to stop and restart the web server.
Friday, February 27
I wrote a function that uses text-to-speech to generate an audio reference file with subtitles. Combining it with the functions I wrote yesterday, I can probably track my progress over several attempts. I need to think of a good interface for comparison in Emacs and in Google Chrome to make sharing easier.
During the session with my tutor, I worked on all the tongue twisters again. He said I was improving. Progress! Of course, I need more work to make it more fluid, particularly the « r ». But I'm building a good workflow for recording my attempts and listening to them again, and I'm looking forward to improving it.
The sun was shining in the afternoon. I sat on the wooden deck and enjoyed the sun while I wrote my journal. It was wonderful to be able to relax on a Friday afternoon. When the weather is nice, I want to be outside. All I did was type on my smartphone, but I can also read on my tablet. Watching shows is a bit difficult because of the bright light. I think it will be better once I finally set up text-to-speech and Emacspeak on my smartphone.
Christian Tietze's article made me think about how the Emacs editor lets me do so many things because it handles all text well. He used tmux to capture output and pipe it to the AI to close the feedback loop. That's promising.
After my sessions with my tutor, I use speech recognition to transcribe the recording. Now that it's text, I can use subed.el to listen to specific moments. Then I can use subed-record.el to extract passages into an audio file with corrected subtitles. So I can listen to them, record new attempts, and compare them.
I changed my speech recognition configuration. Now, once I say « okay, … in French », it translates it and displays the result as a completion suggestion instead of inserting it directly. That helps me remember.
My daughter was tired after school, so we just walked to the park instead of skating.
Saturday, February 28
The results of my daughter's medical exam arrived. Her ECG was normal. She said her palpitations are a bit better. According to her blood test, her iron level was a bit low, like all of us. We need to adjust our food. She asked me whether red bean buns contain iron. What a surprise, they have a respectable amount. We all biked to the Chinese bakery to buy some. On the way, we took part in Pokémon raids and caught a few mega Pokémon with the help of other trainers.
We went window-shopping at IKEA to think about furniture that would suit our daughter. She wants the loft bed that creates a play space underneath. She also wants a drop-leaf table with shelves. Before buying them, we need to declutter her room and measure the space.
I added contributions to the Emacs Carnival on completion. I also added about 300 links from the Emacs News archives. It was a good opportunity to learn together.
I started watching Pokémon shows in French on YouTube. My daughter loves Pokémon at the moment, so if I watch them too, we can chat. In the first episode, our protagonist Sacha slept in and received the last Pokémon, Pikachu, who didn't want to become his friend. But once Pikachu saw how Sacha wanted to protect him against a lot of Piafabecs (Spearow), Pikachu helped him.
I tried the Claude CLI to generate a few MCP servers for querying my journal in English and French, my blog posts, and my sketches.
Sunday, March 1
I decluttered the set of drawers in my bedroom and the dresser in my daughter's room. I filled a bag with things to donate and threw away things that were broken or too worn.
I reread my journal to work on my daily sketches. I want to summarize my monthly reviews, which I've gotten out of the habit of doing since I started learning French.
My daughter cried because of a toothache, so I need to make an appointment with the dentist soon. She said her teeth are too crowded. Maybe she needs braces. It's also possible that I didn't brush her teeth well enough. I'll try to do better, and she also has to learn to take care of herself.
In the afternoon, my daughter and I went to the park to play Pokémon Go. We missed the event with the gifts, but we managed to catch two legendary Pokémon with the help of several other trainers. It was cold, so we went home after an hour.
My husband tried the micro:bit electronics kits I had bought for learning electronics with our daughter. He was a bit frustrated by Bluetooth, but he finally succeeded with a direct cable. I still want to tinker with the kit, but I also want to learn lots of other things. We'll see.
Pronunciation
- 00:00 … les rues étaient praticables
- 00:03 Elle s'est entraînée à faire la roue.
- 00:07 Ma fille a voulu faire des biscuits en meringues.
You can e-mail me at sacha@sachachua.com.
-
r/york People Watching rss
hellooo! I'm visiting York for the first time at the end of this month. Can you recommend me any cafes/coffee spots/restaurants that are cosy, and great for people watching. Ideally somewhere I can just relax with my book!
submitted by /u/hbshuzo
-
r/LocalLLaMA Junyang Lin has left Qwen :( rss
Thank him for his contributions to local LLM.
submitted by /u/InternationalAsk1490
-
r/Yorkshire Should I wait for the Hebden Bridge Market? rss
Hiya, I was planning on visiting Hebden Bridge soon on a Wednesday but heard there are different markets running from Thursday to Sunday. Is it worth postponing to make sure I go on a market day or is it not a big deal? (Also if so, which day has the best stands?)
Thanks!!
submitted by /u/ElvenDeer
-
r/Yorkshire Sadly, the 1920s film has been lost to time, but we know the Hall featured in the film thanks to a mention in a local newspaper dated January 1921. Located near to Haworth, home of the Brontës, the Hall is very atmospheric with dark oak beams and quiet nooks. rss
submitted by /u/NationalTrustAdmin
-
r/reverseengineering Downland Unearthed Final: Porting The Game To Over A Dozen Platforms rss
submitted by /u/r_retrohacking_mod2
-
Hex-Rays Blog 2026 Product Direction & Priorities rss
-
r/LocalLLaMA Apple unveils M5 Pro and M5 Max, citing up to 4× faster LLM prompt processing than M4 Pro and M4 Max rss
submitted by /u/themixtergames
-
vercel-labs/agent-browser v0.15.3 release
Patch Changes
62241b5: Fixed Windows compatibility issues including proper handling of extended-length path prefixes from canonicalize(), prevention of MSYS/Git Bash path translation that could mangle arguments, and improved daemon startup reliability. Also added ARM64 Windows support in postinstall shims and expanded CI testing with a full daemon lifecycle test on Windows.
-
r/Leeds Where can I go in Leeds to see/smell flowers? rss
I know this sounds like a bit of a weird request but seeing flowers makes me so happy, and I was thinking the other day I know what lilacs and violets smell like in perfume, but I've never smelled real ones.
It doesn't have to be those ones specifically, just looking for places I can go this spring/summer to enjoy flowers. It can be parks/garden centres that don't mind strange women loitering/woods with wildflowers - any recommendations welcome!
I did go wildflower picking at kemps farm a couple of years ago which was beautiful and I will go again but thats a few months away yet.
I live near-ish Hyde Park (the crocuses are out there!). I don't have a car, accessible by public transport would be ideal but I could uber.
submitted by /u/Scared_Platypus8286
-
r/Harrogate New to UK - looking for friendship groups and social events rss
Hi, I am 30 years old and I moved from Austria to Leeds after getting married 1.5 years ago, and it has been very difficult for me to get adjusted. Despite being here for so long, I still have no friend groups or social groups I can regularly go to and create nice meaningful friendships. I feel very lonely here and I would like to find like-minded people to hang out with and just have a nice social circle like I used to in Austria. Can someone give me some advice on how to do that, because apps like Bumble BFF and Meetup kind of don't work for me. I am into hiking, mountain biking, and sports like badminton and padel.
submitted by /u/Content_Accident_481
-
r/reverseengineering [Tool Release] DLLHijackHunter - Automated DLL hijacking detection with canary confirmation rss
submitted by /u/Jayendra_J
-
r/Leeds Waste collection complaint - am I being a Karen? rss
I live in LS9 and the binmen have this habit of filling up 2/3 bins on the street with the rest of the street's rubbish, instead of putting each bin individually on the truck.
I know it sounds uptight, bigger problems in the world etc, but I personally don't appreciate my bin being filled with other people's cr@p (literally, I now have litter tray remnants in my bin from the neighbours who have pets). They also often leave behind smaller bags or split bags due to their manhandling, leaving litter everywhere.
I asked a bin man to stop once and just put my bin on the truck so it's emptied properly and he got aggressive. I sent the ring doorbell footage to the council who actioned it and asked if I wanted him to be sacked. I said no but that I didn't want him on the street again. That lasted 3mos and he's now back, pulling the same cr@p.
I'm not sure why a simple task is being overcomplicated. I get people don't sort their rubbish properly but I'm not one of them. I probably put my bin out once a month (live alone, diligent with recycling) so it's annoying that the council can't even get that right.
submitted by /u/Lubz3
-
r/york Looking for Community rss
I moved in with my boyfriend almost 2 years ago and he moved here 3-4. We were talking the other day that we need to find a community and some friends.
We both grew up in small villages and his used to have lots on regularly for people of all ages (think scarecrow trails, board game nights, etc). I think he misses it. I know there's loads on in the centre but we'd like to get together with people in our area.
Any advice or info? Holgate based. Thank you!
submitted by /u/readyforthemagic
-
r/york York station commuter car park rss
How do you actually enter it with the works on Leeman road? Do you have to enter from the staff car park end? I'm pretty sure it is open? Can anyone help?
submitted by /u/Icy-Commercial-1518
-
r/wiesbaden Where can you get a nice dinner in the evening and also sit for a while? rss
Hi everyone,
A colleague and I will be in Wiesbaden on business for two days in mid-March. Unfortunately we didn't get the same hotel, so we're now looking for a place where we can get something to eat in the evening after the trade fair and then play a few rounds of Magic: The Gathering undisturbed. Small tables would be a poor fit.
Do you have any recommendations near the city centre?
submitted by /u/fDiKmoro
-
r/reverseengineering Dealing with a modified UPX variant in DvdShrink - Quick Unpacking Walkthrough rss
submitted by /u/AcizBirKulKadir
-
facebookresearch/faiss v1.14.0 release
[1.14.0] - 2026-03-02
Added
- Add PEP 561 Python type stubs for the faiss package (#4840)
- Add conda-forge channel to INSTALL.md install commands (#4819)
- Add post_init_hook call to Python init (#4795)
- Add ARM SVE support for distance functions (#4798)
- Add Dynamic Dispatch OSS CI workflow (#4779)
- Add IndexFlatIPPanorama (#4787)
- Add benchmark to measure the ResultHandler overhead (#4778)
- Demo for a diversity filter (#4765)
- Add SVS binary size comparison demo and documentation (#4777)
- Add InvertedListScanner support for IndexIVFRaBitQFastScan (#4760)
- Add comprehensive ScalarQuantizer correctness tests (#4766)
- add IDSelector for knn_extra_metrics() (#4753)
- Add early stopping to k-means clustering (#4741)
- Add k-means++ and AFK-MC² centroid initialization methods (#4740)
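Among the additions, k-means++ (#4740) is a standard seeding scheme: each new centroid is sampled with probability proportional to its squared distance from the nearest centroid already chosen. Here is a numpy-only sketch of that classic algorithm, not Faiss's actual implementation:

```python
import numpy as np

def kmeans_pp_init(x, k, rng=None):
    """Pick k initial centroids from data x of shape (n, d) using
    k-means++: the first centroid is chosen uniformly; each later
    one is sampled with probability proportional to its squared
    distance to the nearest centroid chosen so far."""
    rng = rng or np.random.default_rng(0)
    n = x.shape[0]
    centroids = [x[rng.integers(n)]]
    for _ in range(k - 1):
        # Squared distance from each point to its closest centroid so far.
        diffs = x[:, None, :] - np.array(centroids)[None, :, :]
        d2 = np.min((diffs ** 2).sum(-1), axis=1)
        probs = d2 / d2.sum()
        centroids.append(x[rng.choice(n, p=probs)])
    return np.stack(centroids)

x = np.random.default_rng(1).normal(size=(200, 8)).astype("float32")
c = kmeans_pp_init(x, 4)
print(c.shape)  # (4, 8)
```

The spread-out seeds typically reduce the number of Lloyd iterations needed afterwards, which pairs naturally with the early-stopping change in #4741.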
Changed
- ScalarQuantizer: refactor SIMDWIDTH int → SIMDLevel enum (#4838)
- Fold IndexIVFPQ scanner helpers into templatized lambdas (#4836)
- Temporarily disable RaBitQ FastScan from backward compatibility test (#4841)
- Eliminate flat_storage by embedding auxiliary data in SIMD blocks (#4816)
- Rework PQ code distance for Dynamic Dispatch (#4808)
- fbcode/faiss/impl (#4832)
- fbcode/faiss/utils/simd_impl (#4833)
- fbcode/faiss/IndexFlat.cpp (#4831)
- fbcode/faiss (#4829)
- Implement distance_to_code for IVFRaBitQFastScanScanner (#4822)
- distance_to_code for IVFPQFastScan invertedlistscanner (#4821)
- Make dispatch_VectorDistance more compact (#4820)
- Update callers to use read_index_up API (#4818)
- fbcode/faiss/utils/simd_impl/distances_avx2.cpp (#4813)
- fbcode/faiss/impl/PolysemousTraining.cpp (#4814)
- fbcode/faiss/utils/sorting.cpp (#4815)
- VisitedTable -> unordered_set if ntotal is large (#4735)
- resulthandlers with AVX512 (#4806)
- put dispatch one level above (#4802)
- dynamic dispatch distances_simd (#4781)
- Introduce Dynamic Dispatch infrastructure with SIMDConfig (#4780)
- make runtime template selection more compact (#4793)
- support SearchParameters for IndexBinary (#4761)
- Support sharding of RaBitQ indices (#4790)
- Refactor ScalarQuantizer headers to use SIMD wrapper types (#4772)
- Split ScalarQuantizer.cpp into modular headers (NOOP) (#4786)
- Move factory_tools to main library and fix unaligned SIMD store (#4782)
- inline scanning code for fast distance computations (#4785)
- Enable Faiss for internal use (#4737)
- Address review comments on SQ correctness tests (#4771)
- Enable use of svs runtime conda package instead of tarball (#4747)
- generic result handlers for most indexes (#4762)
- Use nth_element for median computation in IndexLSH (#4653)
- Change default qb from 0 to 4 in RaBitQ indexes (#4757)
- Move reorder_2_heaps() into Heap.h (#4752)
- Improve naming due to codemod. simd_result_handlers (#4351)
- Dot Product Support Similarity Metric for IndexIVFFlatPanorama (#4732)
- Panorama Refactor and Code Cleanup (#4728)
- Update serialization backwards compatibility test with panorama and rabitq (#4736)
Fixed
- Additional index deserialization validation (#4844)
- Validate HNSW levels array entries during deserialization (#4827)
- Additional memory exception handling fixes for index_read.cpp (#4837)
- Catch attempts to deserialize undefined MetricTypes (#4823)
- BlockInvertedListsIOHook::read(): Don't leak on exception. (#4824)
- Harden ZnSphere lattice codec against invalid parameters (#4826)
- Validate n_levels > 0 in Panorama (#4825)
- Additional hardening of index load path (#4817)
- Deploy std::unique_ptr<> in index_read.cpp for exception safety (#4809)
- Fix to graph deserialization (#4812)
- Harden deserialization against integer overflow and buffer overflows (#4811)
- Fix CMake/Buck build discrepancies (#4807)
- Fix NSG off-by-one neighbor ID check (#4804)
- Fix CMake static targets missing SIMD sources and definitions (#4800)
- Enable -Wstring-conversion in faiss/PACKAGE +1
- Fix backward compat CI: use isolated conda environments (#4799)
- Fix string-conversion issue in faiss/impl/lattice_Zn.cpp +1 (#4794)
- Fix build pr 4761 (#4792)
- Fix: Remove -Wignored-attributes warning in mapped_io.cpp (#4775)
- Fix string-conversion issue in faiss/IndexHNSW.cpp
- Fix: Remove -Wswitch-unreachable warning in generic-inl.h (#4776)
- Fix string-conversion issue in faiss/invlists/OnDiskInvertedLists.cpp +5 (#4791)
- Fix OSX arm64 nightly by disabling hidden visibility on macOS (#4789)
- Fix FindMKL.cmake to detect Intel oneAPI MKL (2021+) (#4769)
- Fix lint errors in SVS integration code (#4774)
- Fix typos in demos, benchs, and other directories (#4743)
- Fix weak external symbol leakage (#4758)
- Fix compilation on macOS ARM64: Use faiss::idx_t instead of long test_hamming (#4755)
- Fix multi-bit RaBitQ IP metric filtering and f_add_ex computation (#4754)
- Fix IP metric distance computation in multi-bit RaBitQ (#4751)
- Reduce memory usage in timeout callback tests (#4745)
- Fix c++20 compilation in OSS Faiss for OSX ARM64 (#4733)
Deprecated
- Remove deprecated RAFT headers (#4731)
-
r/LocalLLaMA Qwen 2.5 -> 3 -> 3.5, smallest models. Incredible improvement over the generations. rss
You might argue Qwen 3.5 is the best because it's 0.8B, but I'm pretty sure a significant part of that is the vision encoder and the language model itself is smaller.
submitted by /u/airbus_a360_when
-
tonsky.me Claude is an Electron App because we've lost native rss
In "Why is Claude an Electron App?" Drew Breunig wonders:
Claude spent $20k on an agent swarm implementing (kinda) a C-compiler in Rust, but desktop Claude is an Electron app.
If code is free, why aren't all apps native?
And then argues that the answer is that LLMs are not good enough yet. They can do 90% of the work, so there's still a substantial amount of manual polish, and thus, increased costs.
But I think that's not the real reason. The real reason is: native has nothing to offer.
API-wise, native apps lost to web apps a long time ago. Native APIs are terrible to use, and OS vendors use everything in their power to make you not want to develop native apps for their platform. That explains the rise of Electron before LLM times, but it's also a problem that LLMs solve now: if that was a real barrier to developing native apps, it doesn't exist anymore.
Then there are looks and consistency. Some time ago, maybe in the late 90s and 2000s, native was ahead. It used to look good, it was consistent, and it all actually worked: the more apps used the native look and feel, the better the user experience was across apps (which we used to call programs).
These days, though, native is as bad as the web, if not worse. Consistency is basically out the window. Anything can look like anything, buttons have no borders, contrast doesn't exist, and neither do conventions. Apple, for example, seems to place traffic lights and corner radius by vibes rather than by any measurable guidelines.
[Image caption: Maybe the server should round the corners?]
Looks could be good, but they can also be bad, and then you are stuck with platform-consistent but generally bad UI (Liquid Glass, ahem). It changes too often, too: the app you made today will look out of place next year, when Apple decides to change the look and feel yet again. There's no native look anymore.
[Image caption: Computer UIs also degrade over time]
Theoretically, native apps can integrate with the OS on a deeper level. This sounds nice, but what does that mean in practice? There are almost no good interoperable file formats; everything is locked inside individual apps, most services moved to the web, and OSes dropped the ball on providing a good shared baseline. You can integrate with an OS-provided calendar, but you can't with a web calendar. Well, you can, of course, but it's easier on the web; native doesn't help with it at all.
[Image caption: Web pages only lead to more web pages]
Finally, the last hope of people longing for native is performance. They feel that native apps will be faster. Well, they can be, but it doesn't mean they will. Web apps can be faster, too, but in practice, nobody cares. There's no technical reason why Slack needs to load 80 MiB just to show 10 channel names and 3 messages on a screen. The web is not the problem here! It's a choice to be bad. What makes you think it'll be different once the company decides to move to native?
Don't get me wrong: writing this brings me no joy. I don't think the web is a solution either. I just remember the good times when native did a better-than-average job, and we were all better for using it, and it saddens me that those times have passed.
I just don't think it's productive to kid ourselves that the only problem with software is Electron, and that it will all be butterflies and unicorns once we rewrite Slack in SwiftUI. The real problem is a lack of care. And the slop; you can build that with any stack.
-
exe.dev Why exe.dev VMs are persistent rss
When we were designing exe.dev, we settled on VMs being persistent, with persistent disks. VMs are not "quiesced" when there's no network traffic or SSH connections. Disks aren't wiped clean on reboot.
This flies in the face of modern "stateless," "immutable," or "serverless" infrastructure. Surely, we're nuts. Why did we do this to our users and to ourselves?
We want the environment to be familiar; more like a laptop than a remote container-what's-it that you have to jump through hoops to even get a shell on. (Who amongst us hasn't run `tmate` as part of their CI workflow to pop a shell in the darn ephemeral machine to figure out what's going on...)
We don't want to force you into a distributed system, which is what you have the moment you have a remote SQL database to store your data. (As a wise person once told me, "You have a problem. You add a distributed system. Now you have two problems.") (And those problems can't even reliably talk to each other.)
We want cron jobs to just work. Systemd timers too.
Should you want to use a coding agent, we want it to be able to both write AND operate whatever you're building. (See also Software as a Wiki.)
We want to spare you the need to fuss with container registries. We want to make the easy things obvious.
We don't want to force you into a heavy-handed tool ecosystem. Heck, we don't even want to force you into git when all you want is an internal tool or a prototype.
We don't want you to have to plug in API keys for another service, and then another, and then another to do basic things like receive e-mail or host a web page or store a file or write to a database.
Every time we talked about quiescing, we realized it would break cron jobs. Every time we talked about GitHub integration, we groaned about git and GitHub's complexity.
So, we settled on VMs that keep running, and we're doing the interesting work of scaling our infrastructure to keep those VMs running happily and effectively. We want our VMs big enough so that you can work on the big projects you already work on. (Please reach out if they're not big enough!)
And it's working. As they say in the industry, we're drinking our own champagne. We develop exe using exe VMs with the Shelley coding agent. We serve this blog on, you guessed it, an exe VM. Our silly link shortener uses exe VMs. Our log search/analysis database and agent: also an exe.dev VM.
Go get yourself an exe.dev VM. Get a few. Get twenty.