to read (pdf)
- As Rocks May Think | Eric Jang
- Doing the thing is doing the thing
- Reframing Agents
- How to Choose Colors for Your CLI Applications · Luna’s Blog
- A Protocol for Package Management | Andrew Nesbitt
- February 11, 2026
-
🔗 r/wiesbaden Visiting: Flesh & Blood TCG rss
submitted by /u/TGM_E-sport_Mainz
[link] [comments] -
🔗 r/york Viking Festival rss
I'm thinking of driving up for the Coppergate march on the 21st. Is it worth a 3-hour drive to go to the festival if you haven't booked into any of the events? Most are sold out now, including the battle. Do you really have to pay just to see that?
submitted by /u/SaturnSplitJon
[link] [comments] -
🔗 gchq/CyberChef v10.22.0 release
See the CHANGELOG and commit messages for details.
-
🔗 r/Leeds Any dentists taking on new patients? (literally anywhere in Leeds) rss
I go on the NHS website to look at them, contact them and get told the waiting list is open but it's 12 years long. I have a dentist but they aren't taking new patients, and my partner's teeth haven't been looked at in a decade (and they need some serious looking at).
So does anyone know, or has anyone joined, a dentist recently that didn't take several years of waiting? Or does anyone have suggestions for something a step below the emergency dentist? I'm personally out of ideas.
submitted by /u/onydee
[link] [comments] -
🔗 r/york I've seen this bakery mentioned a lot online! Has anyone been recently? I visit York often and have yet to try it rss
submitted by /u/Terrible_Passion6178
[link] [comments]
-
🔗 r/york Lost or stolen bike Malton rss
Hey folks, just found a blue single-speed bike with red bar tape and lots of easily recognisable stickers. Gonna give it to the coppers if I don't hear back by the weekend, but am aware that most of the time victims don't get their stuff back.
Drop us a message if you think it's yours.
submitted by /u/Puzzleheaded-Rice-13
[link] [comments] -
🔗 r/Yorkshire Whitby - Scarborough Cinder Track, current condition? rss
I’m just wondering if anyone can give some details on the current condition of the cinder track between Whitby and Scarborough. I was thinking about running it this weekend and just wanted to check it isn’t a complete muddy swamp after all this rain!
submitted by /u/Capital_Range4936
[link] [comments] -
🔗 r/LocalLLaMA GLM 5 Released rss
-
🔗 r/Leeds Moved to Harehills and hearing nothing but horrified reactions from friends rss
I lived in Wortley for 12 years before and never had any trouble with crime apart from rebellious kids, since I lived near a school. I've now decided to move to Harehills, which apparently has a super negative reputation. I walk back from work there at evening hours and haven't really run into much trouble yet. I should note that I am a white male. Should I be more concerned and prepare myself for anyone hostile?
submitted by /u/TapatioHotHands
[link] [comments] -
🔗 r/Leeds Parents - is Royal Armouries Museum worth it? rss
My nephew is 6 and we’re planning to visit but I want to hear from parents / carers that have already been. Thank you in advance
submitted by /u/Lost_Garlic1657
[link] [comments] -
🔗 r/Yorkshire What an amazing achievement, has anyone been to this restaurant in North Yorkshire? I've always wanted to go to somewhere with a Michelin Star rss
submitted by /u/Terrible_Passion6178
[link] [comments]
-
🔗 r/reverseengineering Ghidra 12.0.3 has been released! rss
submitted by /u/ryanmkurtz
[link] [comments] -
🔗 r/york What’s the going day rate for a painter and decorator these days in York? rss
If I were to book in a job that's around a day's work, would I be looking at somewhere around £300?
submitted by /u/OneItchy396
[link] [comments] -
🔗 r/Leeds Any MF DOOM fans? rss
Was reminiscing about his death as Leeds is a 2nd home to me.
Did you know the legend used to chill in Brudenell Social Club??
submitted by /u/itsfuckingume
[link] [comments] -
🔗 r/Yorkshire Yorkshire nazi had 'library of terrorist publications' rss
submitted by /u/johnsmithoncemore
[link] [comments]
-
🔗 r/Yorkshire Got this for my Bdayy 🤍🩵 rss
submitted by /u/Maximillian9207111
[link] [comments]
-
🔗 r/Leeds No full size fridges in apartments? rss
Currently hunting for my second apartment in Leeds. I'm currently at Uncle (way overpriced for what you get, and you end up living in a business park), and have started looking for a new 1-bed up to £1100 in the city centre (no car, and I like walking to work).
I've noticed this weird thing though: literally 80% of the apartments don't have a full-size fridge. Is this normal?
Is this just a 1-bed thing? I'm a young professional so I haven't looked for my own place much, besides at uni when sharing with lots of people. Why don't apartments have full-size fridges? I can't stand it! I need a freezer!
Also any apartment recommendations gladly taken as this process is so exhausting!
submitted by /u/chickengyoza
[link] [comments] -
🔗 Probably Dance How Programmers Spend Their Time rss
I submitted a tiny patch to flash attention. The necessary typing for the change takes less than ten seconds, but the overall change took more than ten hours. So where does the time go?
It started when a coworker had a bug where cudnn attention would crash randomly. We looked at his unreleased changes and concluded that they couldn't possibly cause this, so we suspected that we had a lingering bug that was exposed by making harmless changes to related code.
Step 1, a few hours: My coworker tried to figure this out just by running the code repeatedly, trying out various theories. The bug was hard to reproduce so this took hours without much progress.
Step 2, 1 hour: I thought this is a good reason to try out compute sanitizer. It would be easiest to just run it on our existing tests to see if it finds any issues without my coworker's changes. But the tests run on another box because they require certain GPUs, which means you have to run the tests through some layers. Unfortunately compute sanitizer really wants to be in charge of the program, so we have to convince those layers to let compute sanitizer run the whole thing. It keeps on failing and we can't figure out why, until eventually I suspect that the issue is that the tests run in a sandbox, and the sandbox is strict enough that it breaks compute sanitizer somehow. This turned out to be true and we probably wasted an hour together.
Step 3, 10 minutes: Run the tests outside of the testing framework. This is surprisingly easy, taking just five minutes. Compute sanitizer immediately finds a problem. Well, almost immediately. You have to know to turn off the pytorch caching allocator because it hides memory issues. If I hadn't known that, I could have wasted hours more.
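To spell out that gotcha: a minimal sketch of how one might wire this up (the helper function is my illustration; `compute-sanitizer` and the `PYTORCH_NO_CUDA_MEMORY_CACHING` environment variable are the real knobs, but check the exact flags against your CUDA version):

```python
import os

def sanitizer_cmd(test_script):
    """Build a command line and environment for running a PyTorch test under
    NVIDIA's compute-sanitizer (illustrative helper, not real project code)."""
    env = dict(os.environ)
    # Disable the caching allocator so PyTorch calls cudaMalloc/cudaFree
    # directly; otherwise recycled blocks hide out-of-bounds and
    # use-after-free reports from the sanitizer.
    env["PYTORCH_NO_CUDA_MEMORY_CACHING"] = "1"
    cmd = ["compute-sanitizer", "--tool", "memcheck", "python", test_script]
    return cmd, env
```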
Step 4, 10 minutes: Investigate a theory that we had: We were padding one tensor, but not a related tensor that really feels like it should be padded, too. I try to use torch.nn.functional.pad but it doesn't work for padding the batch-dimension. So we just use torch.expand and torch.cat together. This takes like ten minutes and the bug is still there. Then I notice another tensor that should also be padded, which takes seconds to try out now and finally our cudnn invocation runs cleanly through compute sanitizer. But a nearby test for flash-attention is failing in compute sanitizer.
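The workaround can be sketched like this (NumPy stands in for torch purely for illustration; the real fix combined torch.expand and torch.cat on the actual GPU tensors):

```python
import numpy as np

def pad_batch(x, target_batch):
    """Pad the leading (batch) dimension of x up to target_batch by
    concatenating a zero-filled block, mirroring the expand-plus-cat
    workaround when a pad helper won't touch the batch dimension."""
    pad_rows = target_batch - x.shape[0]
    assert pad_rows >= 0, "target batch must not shrink the tensor"
    filler = np.zeros((pad_rows,) + x.shape[1:], dtype=x.dtype)
    return np.concatenate([x, filler], axis=0)
```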
Step 5, 20 minutes: The padding fix didn't fix the original issue, so my coworker decides to look more into it on his own and I look more into why flash-attention is having issues. First check if we're doing something obviously wrong. This takes 10 minutes and I find nothing. Then check the flash-attention code. Compute sanitizer gives me a line number and it fails on an interesting line related to running in deterministic mode. That's not used often, so maybe that's why the test is buggy. I tried to understand the index math in that line but that led nowhere, so instead I just grepped for where that variable even comes from, and there is a glaringly obvious use-after-free bug:
The dk_semaphore and dv_semaphore will go away at the end of the scope, but the data_ptrs will still be used and will point into memory that's no longer valid.
Fixing this would take seconds (just default-construct the tensors outside the "if") but we're just using flash-attention from pip, so I would have to build a new wheel to confirm the fix.
Step 6, 2 hours: I decide to build this on my home computer because experience shows that it's easier to get random source code to build on personal computers where I can freely install anything from apt-get or download random things from the Internet. I download the flash attention source but don't actually know how to build it. "make" doesn't do anything even though there is a Makefile. The readme says to use "python setup.py install", which immediately prints a message telling me not to run this command and to use some other tool instead, which I hadn't heard of before. But then it does the work anyway despite that message, so I stick with it. It fails with "unsupported architecture compute_120". I grep for where that comes from; somehow this thinks my PC supports newer things than it actually does. I try disabling it in setup.py, but pytorch does the same thing and I can't modify that. So instead I try to figure out why it thinks compute_120 is supported when it actually isn't. Oops, turns out I'm running ancient CUDA 12.0. I decide to upgrade to version 12.9 instead (I avoid 13.0 because that might have unknown compatibility issues). Now the build works, but it's super slow. After 20 minutes I kill it and rerun it with more parallelism. This OOMs. So I try again with the original setting, which now OOMs as well. So instead I run with even lower parallelism, which makes the build even slower. I decide to call it a night. Unfortunately I can't run the build overnight because the PC is in our bedroom and the build makes the fans spin loudly.
Step 7, 45 minutes: Everything is broken. For some reason the build doesn't work the next morning. It says no Nvidia runtime detected. Nvidia-smi is also broken. Turns out I have conflicting packages after upgrading to CUDA 12.9 and for some reason that's only a problem after a reboot. I spend like 30 minutes getting the packages to be consistent. First I try upgrading to the latest drivers, which makes my display run in a low resolution for some reason. Then I try downgrading back to the original driver, except I stay on CUDA 12.9. Then I finally rerun the build with less parallelism, which takes about an hour while I do other things.
Step 8, 30 minutes: I got things working. I write a small reproducer and… I can't reproduce the bug. I realize I'm an idiot because the GPU in my PC only has compute capability 8.9 which means it'll use flash-attention 2, but I already knew that the bug only happens with flash-attention 3. I had found this out in the earlier step.
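That capability check is trivial to write down. The cutoff below reflects my understanding that flash-attention 3 targets Hopper (compute capability 9.0); it is illustrative, not the library's actual dispatch logic:

```python
def flash_attention_major_version(cc_major, cc_minor):
    """Which flash-attention generation a GPU can run, keyed on CUDA
    compute capability. Illustrative only: FA3 targets Hopper (9.0),
    while Ampere/Ada cards (8.x) fall back to FA2."""
    if (cc_major, cc_minor) >= (9, 0):
        return 3
    return 2
```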
Step 9, 1 hour: Overall that wasn't so bad. I only had to install three python dependencies: torch, packaging, and wheel. So I try again to get this to compile on a work computer. But as expected everything goes wrong. It wants to use the wrong version of CUDA and the wrong version of GCC and the wrong version of Python for some reason, and then when I finally get it to start compiling, the compiler segfaults. I try switching compiler versions but it still happens. I decide that I should at least have a look at the crash to see what's causing it, but the crash doesn't happen when I run the compiler in gdb. So I try compiling with less parallelism and then the compiler doesn't segfault and everything finally builds.
Step 10, 10 minutes plus waiting: I try to reproduce the bug and get a weird error when importing flash-attention. Claude tells me it's because I compiled this with an incompatible compiler version. Right. I did switch compilers to get to the bottom of the compiler crash. I switch back and compile again.
Step 11, 10 minutes plus waiting: Finally I write a reproducer. I realize I'm an idiot again because this whole time I have been compiling flash-attention 2. The code for flash-attention 3 is actually in the "hopper" subdir. Luckily I can get that building easily now.
Step 12, 1 hour: I have a reproducer running but it doesn't reproduce the bug. I try to make it exactly the same but find that some flags were deleted in a change that's just described as "Big refactor and updates". Still those flags shouldn't matter. I realize I just pulled latest, which might have random changes in it. So let me first confirm I can reproduce the bug in the last release tag, v2.8.3. But it still doesn't happen. So I add some print statements and verify that I'm definitely calling the right function. So why doesn't the bug happen? I replace the use-after-free with a nullptr access, but still no crash. This is very weird. I verify that the code is the same. We still do all the buggy logic in the deterministic mode. So I check if the "deterministic" flag is set. Oh, it's hardcoded to "false". When was that change made? Oh right in "Big refactor and update".
So what do I do now? The bug doesn't happen because the "deterministic" mode doesn't work any more. Do I just stop now? After all the pytorch caching allocator actually guarantees in this particular case that the memory isn't freed before the kernel runs. So the only impact of this bug is that I now can't run our tests with compute sanitizer enabled. So maybe just leave it alone and spend my time on other more important things.
Step 13, 30 minutes plus two hours waiting: While writing this blog post I notice that the "deterministic" mode should work again in the latest version on GitHub. So I pull latest again, build (takes an hour) and try a little harder to reproduce the bug. This time I succeed. Finally. I make the trivial ten-second fix, but the bug doesn't go away. I don't understand why. I add a print statement to print the pointer address, but nothing prints. Are incremental builds broken? I do a clean build (another hour) and the print statement still doesn't print. What's going on? Then I notice that the file was copied to "flash_api_stable.cpp". I make the change in that file and finally the bug is fixed.
Step 14, 30 minutes: Figure out the necessary GitHub clicks and git commands to get a pull request up for the fix. I wait to see if there are any comments, but my change gets merged quickly the next day. I'm done.
Oh and the initial bug was actually because the changes that my coworker made could change behavior in a way that was pretty obvious in hindsight, but that's for another time.
So overall a fix that takes about 10 seconds to write took over 10 hours of my time (I didn't fully count the time spent waiting), spread out over days. Is this typical? No, I do often have days where I actually get to write many lines of code. Is this unusual? No, I also have many days where I produce very few lines of code for many hours of work. When maintaining complicated code, these days are more common. Where did the time go?
- Trying to get around layers or punching through layers
- Fighting with build systems and compilers
- Fighting with dependencies or packages
- Running the wrong version of the code or a wrong copy of the code
I actually wish that LLM coding tools could help with this stuff. Instead of me spending hours going down a wrong path, I'd rather the LLM went down the wrong path for me. But so far they're not good at this, and in fact they're likely to waste hours of your time by suggesting wrong paths. I'm also terrified of letting an LLM try to upgrade my installed CUDA version. Not because I'm worried it'll take over my computer as a first step towards taking over the world, but because I'm worried it'll mess things up so badly that I can't recover. So while I appreciate that LLMs can be a big help when writing code, I wish they would help with all the programming tasks where I'm barely producing any code.
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release ## New releases - [CrystalRE](https://github.com/Nico-Posada/CrystalRE): 1.3.0 -
🔗 matklad Programming Aphorisms rss
Programming Aphorisms
Feb 11, 2026
A meta programming post — looking at my thought process when coding and trying to pin down what is programming “knowledge”. Turns out, a significant fraction of that is just reducing new problems to a vocabulary of known tricks. This is a personal, descriptive post, not a prescriptive post for you.
It starts with a question posted on Ziggit. The background here is that Zig is in the process of removing ambient IO capabilities. Currently, you can access the program environment from anywhere via
std.process.getEnvVarOwned. In the next Zig version, you’ll have to thread std.process.Environ.Map from main down to every routine that needs access to the environment. In this user’s case, they have a readHistory function which used to look up the path to the history file in the environment, and they are wondering how best to model that in the new Zig. The options on the table are:

```zig
pub fn readHistory(
    io: std.Io,
    alloc: Allocator,
    file: std.Io.File,
) ReadHistoryError!void;

pub fn readHistory(
    io: std.Io,
    alloc: Allocator,
    maybe_environ_map: ?*std.process.Environ.Map,
) ReadHistoryError!void;

pub fn readHistory(
    io: std.Io,
    alloc: Allocator,
    maybe_absolute_path: ?[]const u8,
    maybe_environ_map: ?*std.process.Environ.Map,
) ReadHistoryError!void;
```

My starting point would instead be this:
```zig
pub const HistoryOptions = struct {
    file: []const u8,

    pub fn from_environment(
        environment: *const std.process.Environ.Map,
    ) HistoryOptions;
};

pub fn readHistory(
    io: std.Io,
    gpa: Allocator,
    options: HistoryOptions,
) ReadHistoryError!void;
```

In terms of meta programming, what I find fascinating is that this, for me, is both immediate (I don’t have to think about it), but also clearly decomposable into multiple factoids I’ve accumulated before. Here’s a deconstruction of what I did here, the verbal “labels” I use to think about what I did, and where I had learned to do that:
First, I “raised the abstraction level” by giving it a name and a type (`HistoryOptions`). This is a rare transformation which I learned and named myself. Naming is important for my thinking and communicating process. “Let’s raise the abstraction level” is a staple code review comment of mine.

Second, I avoided the “midlayer mistake” by making sure that every aspect of the options is user-configurable. Easy to do in Zig, where all fields are public. I learned about the midlayer mistake from a GitHub comment by Josh Triplett.

Third, I provided a “shortcut”, the `from_environment` convenience function that cuts across abstraction layers. I learned the “shortcut” aphorism from Django Views — The Right Way. Germane to the present article, I read that post a decade after I had last touched Django. It was useless to me on the object level. On the meta level, reading the article solidified and named several programming tricks for me. See reverberations in How to Make a 💡?.

Fourth, I instinctively renamed `alloc` to `gpa` (in opposition to “arena”), the naming I spotted in the Zig compiler.

Fifth, I named the configuration parameter `options`, not `config`, `props` or `params`, a naming scheme I learned at TigerBeetle.

Sixth, I made sure that the signature follows the “positional DI” scheme. Arguments that are dependencies, resources with unique types, are injected positionally (and have canonical names like `io` or `gpa`). Arguments that directly vary the behavior of the function (as opposed to affecting transitive callees) are passed by name, in the `Options` struct.

To be specific, I don’t claim that my snippet is the right way to do this! I have no idea, as I don’t have access to the full context. Rather, if I were actually solving the problem, the snippet above would be my initial starting point for further iteration.
Note that I also don’t explain why I am doing the above six things, I only name them and point at the origin. Actually explaining the why would take a blog post of its own for every one of them.
And this is, I think, the key property of my thought process — I have a bag of tricks, where the tricks are named. Inside my mind, this label points both to the actual trick (the code to type) and to a justification for it (in what context it would be a good trick to use).
And I use these tricks all the time, literally! Just answering a forum comment in passing makes me grab a handful! A lot of my knowledge is structured like a book of coding aphorisms.
Meta meta — how come I have acquired all those tricks? I read voraciously: random commits, issues, jumping enthusiastically into rabbit holes and going on wiki trips. The key skill here is recognizing an aphorism once you see it. Reading Ziggit is part of the trick-acquisition routine for me.

Having learned a trick, I remember it, where “remembering” is an act of active recall at the opportune moment. This recall powers “horizontal gene transfer” across domains, stealing shortcuts from Django and the midlayer mistake from the kernel. Did you notice that applying “horizontal gene transfer” to the domain of software engineering tacit knowledge is itself horizontal gene transfer?

When entering a new domain, I actively seek out the missing tricks. I am relatively new to Zig, but all the above tricks are either Zig-native or at least Zig-adapted. Every once in a while, I “invent” a trick of my own. For example, “positional DI” is something I only verbalized last year. This doesn’t mean I hadn’t been doing it before, just that the activity wasn’t mentally labeled as a separate thing you can deliberately do. I had the idea; now I also have an aphorism.
-
🔗 Will McGugan AI_POLICY.md rss
If you maintain Open Source software, you will likely have encountered AI slop PRs.
Not all AI authored code can be considered slop, which is why a blanket ban on AI would be counterproductive. My definition of “slop” is work that is AI generated, with very little involvement by the human operator. It may seem like a good deal if somebody is spending their tokens to help your project, but without a passing understanding of the project or issue in question, the author can’t always prompt their way to a good solution.
As far as I can tell, most AI slop PRs are generated by a relatively small number of individuals. They tend to arrive in batches, and I can see the author has submitted dozens or even 100s of PRs to other projects. The work is typically a poor solution, not required, or simply broken. And the author will never follow up on comments.
It is in effect a DDoS for FOSS maintainers; it takes much longer to review the PR than it did to create it.
Nonetheless, I still feel bad about closing a PR without comment, but at the same time resentful at having to spend time formulating a response that will likely be ignored.
So I plan to include a text file in my project, to clarify my stance on AI PRs. I’m calling this
AI_POLICY.md. I did consider adding an AI policy to CONTRIBUTING.md, but that file tends to be used to explain how to contribute, which seems a different purpose entirely. I'm hoping this could become a standard file, and that AI agents would refer to it when generating a PR. Until then, I can link to it when I close slop PRs.
If something like this exists already, or there is a more agent-friendly way of doing this, then let me know.
Here’s the text I’m going with. I don’t think this is particularly challenging to meet, and only a slightly higher bar than what I’d expect from a mammalian brain.
AI_POLICY.md
This project accepts AI generated Pull Requests, as long as the following guidelines are met.
- The Pull Request must fill in the repository’s pull request template.
- The Pull Request must identify itself as AI generated, including the name of the agent used.
- The Pull Request must link to an issue or discussion where a solution has been approved by a maintainer (@willmcgugan).
The maintainer reserves the right to close PRs without comment if the above are not met.
-
- February 10, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-02-10 rss
IDA Plugin Updates on 2026-02-10
New Releases:
- CrystalRE v1.3.0
- snlite SNLite v6.0.2
- snlite SNLite v6.0.1
- snlite SNLite v6.0.0
- snlite SNLite v0.5.3.1
- snlite SNLite v0.5.2.1
- snlite SNLite v0.5.2
- snlite SNLite v0.5.1
- snlite SNLite v0.5.0
Activity:
- chernobog
- 8e93257b: fix: Prevent IDA crashes by validating calculated EAs in indirect cal…
- CrystalRE
- DeepExtractIDA
- DriverBuddy-7.4-plus
- ghidra
- ghidra-chinese
- hrtng
- 2988c1d1: indirect branch/call deobfuscation polishing
- ida-cyberchef
- 3bfb93f1: Update README.md
- msc-thesis-LLMs-to-rank-decompilers
- b96f594e: fix and update
- cba38ad4: Merge branch 'main' of https://github.com/Lurpigi/msc-thesis-LLMs-to-…
- da5024cd: new format
- 72c95760: update prompt
- ebb95c41: Merge branch 'main' of https://github.com/Lurpigi/msc-thesis-LLMs-to-…
- bd07e234: new method
- a34fc2d7: update
- 1df095e7: added viewer, edited prompts and made the full analysis scripts
- f077b081: fix function declaration
- bfb220e3: upd
- 9f2ffcb2: latex update
- 269bfa4e: trying dogbolt with only ast
- b0779d87: dogbolt with results with winner calculated by code
- 35ae2e3d: updd
- d08bfe70: Merge branch 'main' of https://github.com/Lurpigi/msc-thesis-LLMs-to-…
- 3e2df624: upd
- python-elpida_core.py
- 82c350a6: S3 cloud sync recovery: restore 75,594 patterns from cloud autonomous…
- snlite
- d6d5e56d: 6.0.2 retry modes and stream finish states
- 580d335f: implement 6.0.1 UX and export enhancements (v6.0.1)
- 26e495f0: UI rework (v6.0.0)
- 58bf50ec: fix: ensure ollama server is fully terminated on exit (v0.5.3.1)
- e8b87367: fix: ensure ollama server is fully terminated on exit (v0.5.3.1)
- d8e6c696: feat: add copy & regenerate (v0.5.2.1)
- 57bba4cd: feat: add copy & regenerate (v0.5.2)
- 54315056: Update readme.md
- 7f828008: Update readme.md
- a349d8d8: feat: add copy & regenerate (v0.5.1)
- a3af29dd: Update readme.md
- 468dac57: Update readme.md
- b92db01e: Update readme.md
- e412a39e: Update readme.md
- tnt_deobfuscator
- 1c629fa1: feat: refine dynamic deobfuscation flow, IDA repair behavior, and docs
-
🔗 r/york 🪩 Don’t remember this (the Minster starting to plunge into the depths or the subsequent dancing)🕺 rss
submitted by /u/amusedfridaygoat
[link] [comments]
-
🔗 anthropics/claude-code v2.1.39 release
What's changed
- Improved terminal rendering performance
- Fixed fatal errors being swallowed instead of displayed
- Fixed process hanging after session close
- Fixed character loss at terminal screen boundary
- Fixed blank lines in verbose transcript view
-
🔗 r/york How to reduce the queues outside the hospital? rss
Following on from u/dawnriser's post yesterday about the Guildhall councillors' petition on traffic around the hospital, it got me thinking about what can actually be done.
In summary, some of the suggestions included:
Traffic Flow & Junction Improvements
Extend turn lanes & change junctions: the mini-roundabout near the hospital and nearby junctions need a redesign. For example replacing them with traffic lights that prioritise clearing traffic leaving the hospital, extending turning lanes and simplifying the junctions at the Union Terrace/Lowther Street/Townend Street end to improve flow.
Create a filter lane: Have a dedicated turning lane from Wigginton Road into the hospital entrance so general traffic can pass the queues. I have noticed that this tends to happen anyway, especially when travelling northbound, as it is then very unlikely any traffic is heading southbound since they're stuck at the mini-roundabout!
These suggestions are probably the most realistic
Traffic Demand Management
Alternative routes and closures: Some people suggested opening or reconfiguring other nearby roads to act as relief routes, though others disagreed about their effectiveness, especially in regards to the Groves.
A few people mentioned a link road from Crichton Ave bridge to Nestlé, but this has been ruled out. Also, it is very likely that the 'fundamental law of highway congestion', where the expansion of roads in cities actually causes an increase in vehicle traffic and in turn does not solve urban congestion (Garcia-Lopez, Pasidis & Viladecans-Marsal, 2022), would come into effect even if it were approved.
Public transport and alternatives: Previous suggestions have argued that better public transport, park-and-ride and cycle routes could reduce the number of cars trying to access the hospital; however, barriers like limited infrastructure, personal preference and the fact that the location serves predominantly unwell people obviously exist.
Larger-Scale Changes
Move the hospital or build a transport hub: One suggestion was that the only real fix would be to relocate the hospital outside the city centre or build a nearby train station to reduce car dependency. Fanciful, but in terms of legacy planning I curse the 1970s selectors of the site. Why not have kept the old midwifery hospital estate and built on there? We would now have an out-of-town hospital with room to spare, and not a Designer Outlet!
What do you think needs to be done? I often joke that the whole place needs knocking down so we can start again, which will obviously not happen! But with no budget for either the local authority or the hospital, what fix is feasible and inexpensive enough to make our lives in this ultimately great city a bit better?
(AI was used to generate the themes from the previous post, writing is my own)
submitted by /u/amusedfridaygoat
[link] [comments] -
🔗 r/Yorkshire Yorkshire's largest windfarm downsized rss
The largest windfarm in England was planned on Walshaw moor. Bitter opposition by NIMBYs. The plan has been scaled back again... why can't we have green energy? submitted by /u/Useless_or_inept
[link] [comments]
-
🔗 r/wiesbaden Wo kann man in Wiesbaden lecker essen? rss
Hi dear Wiesbaden, or is it Wiesbadeners?
We'll be in Wiesbaden on Saturday and want to try out a cool food spot.
We don't want anything fancy, more something unusual or rustic. The main thing is that it's tasty :D
Looking forward to your suggestions.
Edit: Many thanks for your suggestions, you're great!
submitted by /u/Key-Accountant-3801
[link] [comments] -
🔗 Simon Willison Introducing Showboat and Rodney, so agents can demo what they’ve built rss
A key challenge working with coding agents is having them both test what they’ve built and demonstrate that software to you, their overseer. This goes beyond automated tests - we need artifacts that show their progress and help us see exactly what the agent-produced software is able to do. I’ve just released two new tools aimed at this problem: Showboat and Rodney.
- Proving code actually works
- Showboat: Agents build documents to demo their work
- Rodney: CLI browser automation designed to work with Showboat
- Test-driven development helps, but we still need manual testing
- I built both of these tools on my phone
Proving code actually works
I recently wrote about how the job of a software engineer isn't to write code, it's to deliver code that works. A big part of that is proving to ourselves and to other people that the code we are responsible for behaves as expected.
This becomes even more important - and challenging - as we embrace coding agents as a core part of our software development process.
The more code we churn out with agents, the more valuable it becomes to have tools that reduce the amount of manual QA time we need to spend.
One of the most interesting things about the StrongDM software factory model is how they ensure that their software is well tested and delivers value despite their policy that "code must not be reviewed by humans". Part of their solution involves expensive swarms of QA agents running through "scenarios" to exercise their software. It's fascinating, but I don't want to spend thousands of dollars on QA robots if I can avoid it!
I need tools that allow agents to clearly demonstrate their work to me, while minimizing the opportunities for them to cheat about what they've done.
Showboat: Agents build documents to demo their work
Showboat is the tool I built to help agents demonstrate their work to me.
It's a CLI tool (a Go binary, optionally wrapped in Python to make it easier to install) that helps an agent construct a Markdown document demonstrating exactly what their newly developed code can do.
It's not designed for humans to run, but here's how you would run it anyway:
```shell
showboat init demo.md 'How to use curl and jq'
showboat note demo.md "Here's how to use curl and jq together."
showboat exec demo.md bash 'curl -s https://api.github.com/repos/simonw/rodney | jq .description'
showboat note demo.md 'And the curl logo, to demonstrate the image command:'
showboat image demo.md 'curl -o curl-logo.png https://curl.se/logo/curl-logo.png && echo curl-logo.png'
```
Here's what the result looks like if you open it up in VS Code and preview the Markdown:

Here's that demo.md file in a Gist.
So a sequence of `showboat init`, `showboat note`, `showboat exec` and `showboat image` commands constructs a Markdown document one section at a time, with the output of those `exec` commands automatically added to the document directly following the commands that were run.

The `image` command is a little special - it looks for a file path to an image in the output of the command and copies that image to the current folder and references it in the file.

That's basically the whole thing! There's a `pop` command to remove the most recently added section if something goes wrong, a `verify` command to re-run the document and check nothing has changed (I'm not entirely convinced by the design of that one) and an `extract` command that reverse-engineers the CLI commands that were used to create the document.

It's pretty simple - just 172 lines of Go.
I packaged it up with my go-to-wheel tool which means you can run it without even installing it first like this:
uvx showboat --help
That `--help` command is really important: it's designed to provide a coding agent with everything it needs to know in order to use the tool. Here's that help text in full.

This means you can pop open Claude Code and tell it:

> Run "uvx showboat --help" and then use showboat to create a demo.md document describing the feature you just built

And that's it! The `--help` text acts a bit like a Skill. Your agent can read the help text and use every feature of Showboat to create a document that demonstrates whatever it is you need demonstrated.

Here's a fun trick: if you set Claude off to build a Showboat document you can pop that open in VS Code and watch the preview pane update in real time as the agent runs through the demo. It's a bit like having your coworker talk you through their latest work in a screensharing session.
And finally, some examples. Here are documents I had Claude create using Showboat to help demonstrate features I was working on in other projects:
- shot-scraper: A Comprehensive Demo runs through the full suite of features of my shot-scraper browser automation tool, mainly to exercise the `showboat image` command.
- sqlite-history-json CLI demo demonstrates the CLI feature I added to my new sqlite-history-json Python library.
- row-state-sql CLI Demo shows a new `row-state-sql` command I added to that same project.
- Change grouping with Notes demonstrates another feature where groups of changes within the same transaction can have a note attached to them.
- krunsh: Pipe Shell Commands to an Ephemeral libkrun MicroVM is a particularly convoluted example where I managed to get Claude Code for web to run a libkrun microVM inside a QEMU emulated Linux environment inside the Claude gVisor sandbox.
I've now used Showboat often enough that I've convinced myself of its utility.
(I've also seen agents cheat! Since the demo file is Markdown the agent will sometimes edit that file directly rather than using Showboat, which could result in command outputs that don't reflect what actually happened. Here's an issue about that.)
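The verify command's job can be sketched in a few lines: parse each command block out of the document, re-run it, and compare outputs; mismatches reveal either drift or an agent that hand-edited the file. This is a hypothetical Python illustration, not Showboat's actual verify logic, and it assumes a simple command-then-output fenced-block layout.

```python
import re
import subprocess

def verify(markdown: str) -> list[str]:
    """Re-run each ```bash command block and compare against the plain
    output block that follows it. Returns commands whose output changed.
    (Hypothetical sketch, not Showboat's real verify implementation.)"""
    pattern = re.compile(r"```bash\n(.*?)\n```\n\n```\n(.*?)\n```", re.S)
    stale = []
    for command, recorded in pattern.findall(markdown):
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        if result.stdout.rstrip() != recorded.rstrip():
            stale.append(command)
    return stale

doc = """```bash
echo stable output
```

```
stable output
```"""
print(verify(doc))  # → [] when outputs still match
```

An agent that edits the Markdown by hand leaves recorded output that re-running the command cannot reproduce, which is exactly what this check flags.
<imports>
</imports>
<test>
assert verify(doc) == []
bad = doc.replace("stable output\n```", "something else\n```", 1).replace("echo something else", "echo stable output")
assert verify("""```bash
echo stable output
```

```
something else
```""") == ["echo stable output"]
</test>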
Rodney: CLI browser automation designed to work with Showboat
Many of the projects I work on involve web interfaces. Agents often build entirely new pages for these, and I want to see those represented in the demos.
Showboat's image feature was designed to allow agents to capture screenshots as part of their demos, originally using my shot-scraper tool or Playwright.
The Showboat format benefits from CLI utilities. I went looking for good options for managing a multi-turn browser session from a CLI and came up short, so I decided to try building something new.
Claude Opus 4.6 pointed me to the Rod Go library for interacting with the Chrome DevTools protocol. It's fantastic - it provides a comprehensive wrapper across basically everything you can do with automated Chrome, all in a self-contained library that compiles to a few MBs.
All Rod was missing was a CLI.
I built the first version as an asynchronous report prototype, which convinced me it was worth spinning out into its own project.
I called it Rodney as a nod to the Rod library it builds on and a reference to Only Fools and Horses - and because the package name was available on PyPI.
You can run Rodney using `uvx rodney` or install it like this:

```shell
uv tool install rodney
```
(Or grab a Go binary from the releases page.)
Here's a simple example session:
```shell
rodney start   # starts Chrome in the background
rodney open https://datasette.io/
rodney js 'Array.from(document.links).map(el => el.href).slice(0, 5)'
rodney click 'a[href="../for"]'
rodney js location.href
rodney js document.title
rodney screenshot datasette-for-page.png
rodney stop
```
Here's what that looks like in the terminal:
![Terminal session: rodney start launches Chrome, then rodney open, js, click, screenshot and stop run in sequence with their output shown](https://static.simonwillison.net/static/2026/rodney-demo.jpg)
As with Showboat, this tool is not designed to be used by humans! The goal is for coding agents to be able to run `rodney --help` and see everything they need to know to start using the tool. You can see that help output in the GitHub repo.

Here are three demonstrations of Rodney that I created using Showboat:
- Rodney's original feature set, including screenshots of pages and executing JavaScript.
- Rodney's new accessibility testing features, built during development of those features to show what they could do.
- Using those features to run a basic accessibility audit of a page. I was impressed at how well Claude Opus 4.6 responded to the prompt "Use showboat and rodney to perform an accessibility audit of https://latest.datasette.io/fixtures" - transcript here.
Test-driven development helps, but we still need manual testing
After being a career-long skeptic of the test-first, maximum test coverage school of software development (I like tests included development instead) I've recently come around to test-first processes as a way to force agents to write only the code that's necessary to solve the problem at hand.
Many of my Python coding agent sessions start the same way:
> Run the existing tests with "uv run pytest". Build using red/green TDD.

Telling the agents how to run the tests doubles as an indicator that tests on this project exist and matter. Agents will read existing tests before writing their own, so having a clean test suite with good patterns makes it more likely they'll write good tests of their own.
The frontier models all understand that "red/green TDD" means they should write the test first, run it and watch it fail and then write the code to make it pass - it's a convenient shortcut.
I find this greatly increases the quality of the code and the likelihood that the agent will produce the right thing with the fewest prompts needed to guide it.
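As a concrete illustration of the red/green loop (a toy example with a hypothetical `slugify` function, not from any of my projects):

```python
# Step 1 (red): write the test first and run it, watching it fail
# because slugify does not exist yet.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Step 2 (green): write just enough code to make the test pass.
import re

def slugify(text: str) -> str:
    """Lowercase, strip punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

test_slugify()  # now passes
```

The discipline matters more than the example: the failing run proves the test actually exercises the new code, which is exactly the guarantee you want from an agent's tests.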
But anyone who's worked with tests will know that just because the automated tests pass doesn't mean the software actually works! That's the motivation behind Showboat and Rodney - I never trust any feature until I've seen it running with my own eyes.
Before building Showboat I'd often add a “manual” testing step to my agent sessions, something like:
> Once the tests pass, start a development server and exercise the new feature using curl

I built both of these tools on my phone
Both Showboat and Rodney started life as Claude Code for web projects created via the Claude iPhone app. Most of the ongoing feature work for them happened in the same way.
I'm still a little startled at how much of my coding work I get done on my phone now, but I'd estimate that the majority of code I ship to GitHub these days was written for me by coding agents driven via that iPhone app.
I initially designed these two tools for use in asynchronous coding agent environments like Claude Code for the web. So far that's working out really well.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 News Minimalist 🐢 Great apes can imagine and pretend rss
In the last 4 days ChatGPT read 114775 top news stories. After removing previously covered events, there are 11 articles with a significance score over 5.5.

[5.5] Great apes can imagine and pretend, challenging human uniqueness —laopiniondemalaga.es(Spanish) (+10)
A landmark study in Science confirms great apes possess the ability to imagine and pretend, debunking the long-held belief that these complex cognitive skills are exclusive to the human species.
Through controlled experiments, a bonobo named Kanzi successfully tracked imaginary juice and grapes. He correctly identified the location of pretend items in a shared fiction while demonstrating he could distinguish between these mental representations and physical reality without needing any external rewards.
This discovery suggests that the evolutionary roots of symbolic thought date back millions of years. Kanzi’s extensive training in symbolic communication likely helped researchers identify this sophisticated, previously hidden cognitive trait.
[5.5] EU orders TikTok to change addictive design or face fines —bbc.com(+40)
The European Union has warned TikTok to redesign features deemed addictive or face fines reaching 6% of its global turnover following findings that the platform violated online safety regulations.
Preliminary findings from a two-year investigation highlight failures to mitigate risks from autoplay and infinite scroll on children's wellbeing. The Commission suggests implementing screen time breaks and altering algorithms to comply with the Digital Services Act's strict user protection requirements.
TikTok maintains the allegations are meritless and plans a legal challenge. While the platform has been invited to respond, non-compliance could lead to multi-billion dollar penalties under European Union legislation.
Highly covered news with significance over 5.5
[6.2] US transfers command of two NATO headquarters to European officers — nos.nl (Dutch) (+10)
[5.8] Palestinians say new Israeli measures in West Bank amount to de facto annexation — bbc.com (+36)
[5.8] Sanae Takaichi secures supermajority in Japan's parliament — newyorker.com (+278)
[5.7] Hong Kong media mogul Jimmy Lai receives 20-year prison sentence — abcnews.go.com (+95)
[5.7] UK regulator secures app store changes from Apple and Google — bbc.com (+7)
[5.7] Dow crosses 50,000 for the first time ever — abcnews.go.com (+35)
[5.6] Defense Department ends academic ties with Harvard University — nytimes.com (+22)
[6.0] Sixteen AI agents build a C compiler that compiles a Linux kernel — arstechnica.com (+3)
[5.7] NASA allows astronauts to use personal smartphones on missions — techradar.com (+3)
Thanks for reading!
— Vadim
You can create your own significance-based RSS feed with premium.
-
🔗 r/LocalLLaMA Train MoE models 12x faster with 30% less memory! (<15GB VRAM) rss
Hey r/LocalLLaMA! We're excited to introduce ~12x faster Mixture of Experts (MoE) training with >35% less VRAM and ~6x longer context via our new custom Triton kernels and math optimizations (no accuracy loss). Unsloth repo: https://github.com/unslothai/unsloth

- Unsloth now supports fast training for MoE architectures including gpt-oss, Qwen3 (30B, 235B, VL, Coder), DeepSeek R1/V3 and GLM (4.5-Air, 4.7, Flash).
- gpt-oss-20b fine-tunes in 12.8GB VRAM. Qwen3-30B-A3B (16-bit LoRA) uses 63GB.
- Our kernels work on both data-center (B200, H100), consumer and older GPUs (e.g., RTX 3090), and FFT, LoRA and QLoRA.
- The larger the model and more context you use, the more pronounced the memory savings from our Unsloth kernels will be (efficiency will scale exponentially).
- We previously introduced Unsloth Flex Attention for gpt-oss, and these optimizations should make it even more efficient.
In collaboration with Hugging Face, we made all MoE training runs standardized with PyTorch's new `torch._grouped_mm` function. Transformers v5 was recently optimized with ~6x faster MoE than v4, and Unsloth pushes this even further with custom Triton grouped-GEMM + LoRA kernels for an additional ~2x speedup, >35% VRAM reduction and >6x longer context (12-30x overall speedup vs v4). You can read our educational blogpost for detailed analysis, benchmarks and more: https://unsloth.ai/docs/new/faster-moe

We also released support for embedding model fine-tuning recently. You can use our free MoE fine-tuning notebooks: gpt-oss (20b) (free), gpt-oss (500K context), GLM-4.7-Flash (A100), gpt-oss-120b (A100), Qwen3-30B-A3B (A100) and TinyQwen3 MoE T4 (free).

To update Unsloth so training is automatically faster, update our Docker image or run:

```shell
pip install --upgrade --force-reinstall --no-cache-dir --no-deps unsloth unsloth_zoo
```

Thanks for reading and hope y'all have a lovely week. We hear it'll be a busy week! :)
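For readers unfamiliar with grouped GEMM: the access pattern that MoE kernels accelerate can be illustrated in plain NumPy. This is a conceptual sketch only; Unsloth's actual speedups come from fused Triton kernels and `torch._grouped_mm`, not from a Python loop.

```python
import numpy as np

# Toy MoE forward pass: each token is routed to one expert, and each
# expert has its own weight matrix. Grouping tokens by expert turns
# many tiny per-token matmuls into one dense GEMM per expert group.
rng = np.random.default_rng(0)

num_experts, d_model, d_ff = 4, 8, 16
tokens = rng.normal(size=(32, d_model))
expert_ids = rng.integers(0, num_experts, size=32)  # router output
W = rng.normal(size=(num_experts, d_model, d_ff))   # one matrix per expert

# Sort tokens so each expert's batch is contiguous.
order = np.argsort(expert_ids)
sorted_tokens, sorted_ids = tokens[order], expert_ids[order]

out = np.empty((32, d_ff))
for e in range(num_experts):
    mask = sorted_ids == e
    out[mask] = sorted_tokens[mask] @ W[e]  # one GEMM per expert group

# Scatter results back to the original token order.
result = np.empty_like(out)
result[order] = out
```

The grouped result is identical to computing `tokens[i] @ W[expert_ids[i]]` token by token; the win is purely in how the work is batched for the hardware.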
submitted by /u/danielhanchen
[link] [comments] -
🔗 r/wiesbaden Looking for a doc holiday. rss
Does anyone know of a practice that issues sick notes smoothly and frequently? Asking for a friend.
submitted by /u/stup1dfukk
[link] [comments] -
🔗 r/LocalLLaMA Kimi is so smart rss
(Three screenshot previews comparing model answers.) Kimi > ChatGPT = Claude

submitted by /u/Bernice_working_girl

[link] [comments] -
🔗 r/Harrogate Moving into the area - new letting rules 2026 rss
Hi all, we're looking to move to Harrogate in spring and want to rent for a year or two before buying.
We are selling our house and giving up our jobs to move here, and will have proof of funds after the sale of the house (over £200k). I don't want to ask anyone to be a guarantor as I wouldn't want to be asked myself. But we can't prove we have the funds until the day the house is sold, which will leave a gap between leaving our sold house and moving into rented. We may not be able to secure work before that time.
Anyone else know how it could work? Apparently you're no longer allowed to secure a place by paying 6 months rent up front, which I was hoping to do.
submitted by /u/Charliesheart23
[link] [comments] -
🔗 r/reverseengineering We hid backdoors in binaries — Opus 4.6 found 49% of them rss
submitted by /u/jakozaur
[link] [comments] -
🔗 r/Yorkshire Watch out for the reds rss
submitted by /u/Bumblebee937

[link] [comments] -
🔗 r/Leeds David Lloyd gym rss
Hi! I’m looking to change gyms as I really want to go somewhere with a pool and a sauna. I would also love to get back into playing tennis. For anyone who is a member of David Lloyd, is it worth it? It looks like the membership would be around £164 a month which feels like a lot!
submitted by /u/Dry-Start-1498
[link] [comments] -
🔗 r/Harrogate asian in harrogate rss
I have been looking into Harrogate lately and I am having a bit of a wobble regarding the social scene. I am mixed heritage, specifically half white and half Asian, and I really do look like a 50/50 split. I visited a grammar school in the area recently and did not see a single person who looked like me. It felt a bit grim to be honest.
I am quite worried about being an outcast or dealing with some ghastly narrow- mindedness. Is Harrogate actually a welcoming spot for mixed people or is the vibe a bit too stale? To the Asians and mixed-heritage folk in town, is it properly lovely or do you feel like a constant curiosity?
submitted by /u/Substantial-Fee-1114
[link] [comments] -
🔗 MetaBrainz Picard 3 alpha 2 released rss
A second alpha version for the upcoming MusicBrainz Picard 3 is now available. This focuses on fixing issues that were found in the previous alpha 1 as well as some minor improvements and updated translations.
Download links and a list of changes since Picard 3 alpha 1 are available below. For a more detailed overview of what is new in Picard 3 please see the previous blog post Picard 3 Alpha Release.
As before this is still an early pre-release. While we have all the major features implemented and we are rather confident in the current code, it is still a development release and it is expected there will be bugs. If you use this, do so with care, backup your files and please report any issues you encounter.
Some of the changes are also backward incompatible, hence we recommend you make a backup of your Picard.ini config file before trying the alpha version. You can do so in Picard’s Options under Advanced > Maintenance.
Thanks a lot to everyone who gave feedback, reported issues and provided translations.
What’s new?
Bugfixes
- PICARD-2833 - macOS: "New user" dialog breaks application menu
- PICARD-3116 - Sorting columns does not work on Apple M2
- PICARD-3173 - Exception if custom columns list is empty
- PICARD-3174 - Collection menu does not show labels and checked state
- PICARD-3176 - Image processing is changing JPEG quality even without any processors running
- PICARD-3178 - Cover processing setting enabled when it should be disabled
- PICARD-3181 - File sizes not shown if "clear existing tags" is active
- PICARD-3182 - Lookup in Browser not working for album cluster
- PICARD-3184 - Using "keep original cover art" on an album does reset the cover for children, but not the album itself
- PICARD-3185 - Built-in server must not serve CORS request with invalid origin
- PICARD-3186 - Without pygit2 the plugin page shows an error message
- PICARD-3190 - Crash on network errors when searching from the search bar
Improvements
- PICARD-3187 - Add config upgrade hook to update usage of `$matchedtracks()`
- PICARD-3177 - Make JPEG quality configurable in image processing settings
- PICARD-3179 - Add the cover processing settings to the profile manager
Download
As this is a pre-release and early alpha version, it is not available on all the channels where Picard’s current stable version is available.
We appreciate your interest in trying this new version. Use with care, backup your files and please use the MetaBrainz community forums and the ticket system to give feedback and report bugs.
- MusicBrainz Picard for Windows (installer)
- MusicBrainz Picard for Windows (portable)
- MusicBrainz Picard for macOS (Intel)
- MusicBrainz Picard for macOS (ARM64)
- Source code
Picard is free software and the source code is available on GitHub.
-
🔗 r/LocalLLaMA Hugging Face Is Teasing Something Anthropic Related rss
Anthropic are the guys that make the Claude models. I highly doubt this will be an open-weights LLM release; Anthropic is probably the organization most opposed to the open source community, so more likely it will be a dataset for safety alignment.

submitted by /u/Few_Painter_5588

[link] [comments] -
🔗 r/wiesbaden How do you speak in everyday life? Short survey rss
Hello everyone 😊
I'm currently writing my master's thesis in linguistics/translation and am investigating how regional language varieties are perceived and used in everyday life.
I've also posted the survey in r/mainz, since I'm interested in a direct comparison between Mainz and Wiesbaden.
It's a short, anonymous survey. There are no right or wrong answers; even if you don't speak a dialect or only know a few expressions, your input is very helpful.
Link: https://forms.gle/XzhwDvdAendQipBH8 Thank you for taking part! 🙏
submitted by /u/francescocam
[link] [comments] -
🔗 r/Leeds For those who live in or near the City Centre rss
How much is your service charge and ground rent?
Has it gone up in price year on year?
I'm thinking of moving into a flat near the City Centre but the extortionate charges are putting me off.
West Point for example have a few properties for sale but each has a different service charge price. How can this be?
Others are getting cladding work done in the near future and I'm concerned the Building Management will increase their charges to recoup their losses.
Any advice would be greatly appreciated 👍
submitted by /u/pudderf
[link] [comments] -
🔗 r/Yorkshire Beverley Minster in Yorkshire [OC] rss
submitted by /u/mdbeckwith

[link] [comments] -
🔗 r/Leeds Unmasking US rap iconoclast MF Doom’s final years in West Yorkshire rss
submitted by /u/MasterpieceAlone8552
[link] [comments] -
🔗 r/LocalLLaMA Qwen-Image-2.0 is out - 7B unified gen+edit model with native 2K and actual text rendering rss
Qwen team just released Qwen-Image-2.0. Before anyone asks - no open weights yet, it's API-only on Alibaba Cloud (invite beta) and free demo on Qwen Chat. But given their track record with Qwen-Image v1 (weights dropped like a month after launch, Apache 2.0), I'd be surprised if this stays closed for long.
So what's the deal:
- 7B model, down from 20B in v1, which is great news for local runners
- Unified generation + editing in one pipeline, no need for separate models
- Native 2K (2048×2048), realistic textures that actually look good
- Text rendering from prompts up to 1K tokens. Infographics, posters, slides, even Chinese calligraphy. Probably the best text-in-image I've seen from an open lab
- Multi-panel comic generation (4×6) with consistent characters
The 7B size is the exciting part here. If/when weights drop, this should be very runnable on consumer hardware. V1 at 20B was already popular in ComfyUI; a 7B version doing more with less is exactly what the local community needs.
Demo is up on Qwen Chat if you want to test before committing any hopium to weights release.
submitted by /u/RIPT1D3_Z
[link] [comments] -
🔗 r/Yorkshire Every spot here is so scenic 😍 rss
@secretyorkshire

submitted by /u/AnfieldAnchor

[link] [comments] -
🔗 r/Yorkshire The North Yorkshire Moors Railway is to temporarily prop Bridge 42 to allow its 2026 season to begin. rss
Bridge 42 allows trains to travel over the River Murk Esk and requires propping to allow for trains to run over it without disruption. A full repair programme will take place in 2026/27 and this will restore the bridge for the long term. The NYMR season starts on the 28th March 2026. A major appeal has been launched to cover the costs of propping the bridge as well as completing the repairs needed. More information can be found on the NYMR website.

"Propping Bridge 42 is a carefully considered solution that keeps the bridge fully operational for the upcoming season. It also gives us the time needed to develop a detailed repair programme, which will be implemented over the winter months to secure the long-term safety and performance of this important structure. That said, propping does come with challenges due to the bridge's location and access. The final cost of the project with propping and full repair works will be confirmed once we have received all final surveys and quotations—we are currently awaiting responses from five contractors." Phil Sash, Director of Civils at NYMR

submitted by /u/CaptainYorkie1

[link] [comments] -
🔗 sacha chua :: living an awesome life 2026-02-09 Emacs news rss
- Upcoming events (iCal file, Org):
- OrgMeetup (virtual) https://orgmode.org/worg/orgmeetup.html Wed Feb 11 0800 America/Vancouver - 1000 America/Chicago - 1100 America/Toronto - 1600 Etc/GMT - 1700 Europe/Berlin - 2130 Asia/Kolkata – Thu Feb 12 0000 Asia/Singapore
- Atelier Emacs Montpellier (in person) https://lebib.org/date/atelier-emacs Fri Feb 13 1800 Europe/Paris
- EmacsSF (in person): coffee.el in SF https://www.meetup.com/emacs-sf/events/313232290/ Sat Feb 14 1100 America/Los_Angeles
- M-x Research: TBA https://m-x-research.github.io/ Wed Feb 18 0800 America/Vancouver - 1000 America/Chicago - 1100 America/Toronto - 1600 Etc/GMT - 1700 Europe/Berlin - 2130 Asia/Kolkata – Thu Feb 19 0000 Asia/Singapore
- Emacs configuration:
- Emacs Lisp:
- Toggle between let and let* (Irreal)
- meedstrom/system-idle (Reddit) get the number of seconds since last user activity on the computer
- vui.el: Building a File Browser - Boris Buliga
- Lightning Talk: Emacs Lisp Rabbit Holes (04:03)
- Lightning Talk: Common Lisp packages in Emacs Lisp (04:23)
- Appearance:
- Navigation:
- TRAMP:
- Dired:
- Writing:
- Denote:
- Org Mode:
- Org Mode requests: [RFC] Rename :colnames header argument to :strip-colnames
- Executive Function as Code: using (Doom) Emacs to script my brain (Reddit)
- How I kickstart a new sprint in emacs (using org capture template) - short video in Reddit post
- tzc/tzc-org.el at main · md-arif-shaikh/tzc · GitHub (Reddit)
- (Experimental) Added custom view functionality to org-supertag
- Jack Baty: Global org-capture shortcut in KDE (Irreal)
- yingyu5658/niwa: My digital garden. (@Verdant@c7.io)
- Aimé Bertrand: org-to-cal - Syncing Org Mode to macOS Calendar
- [BLOG] #26 bbb:OrgMeetup on Wed, January 14, 19:00 UTC+3 - Ihor Radchenko (@yantar92@fosstodon.org)
- Org development:
- Completion: (the topic for this month's Emacs Carnival!)
- Coding:
- Release CIDER 1.21 ("Gràcia") · clojure-emacs/cider · GitHub - use buttons, drop support for Emacs 27
- Thanos Apollo: (Video) Contributing to Git Projects with Magit: PRs, Patches & Agit workflow (YouTube 8:01)
- Use prettierd as a formatter in apheleia
- Web:
- Mail, news, and chat:
- The Emacs RSS Reader I Wanted (Github, Reddit, YouTube 04:57)
- Doom Emacs:
- Fun:
- AI:
- Community:
- Other:
- Two Neat Emacs Packages: Bufferfile and Stripspace #rename #whitespace (02:54)
- Plain text agenda (Reddit)
- era-emacs-tools/ERA: Emacs Remote Editing (Reddit)
- Comprehending MELPA's size
- Resilient Technologies. Why Decades-Old Tools Define the ROOT of Modern Research Data Management (@lukascbossert@mastodon.social)
- Emacs development:
- emacs-devel:
- Help wanted for the widget library
- Re: Supporting stylistic sets - Eli Zaretskii - more notes on stylistic sets
- Re: Is it possible to suppress 'after-string overlay property by placing other overlays? - Eli Zaretskii - considering before- and after-string
- Do cache and timed invalidation in "VC-aware" project backend
- Fix selected group sort with topics (bug#80341)
- Add missing symbolic prefix keybinding
- Change the type of 'python-eldoc-function-timeout' to number
- Support D-Bus file descriptor manipulation
- Separate input histories for 'C-x v !' and Git pulling & pushing
- Allow using xref-find-references without visiting a tags table
- New minibuffer history for vc-user-edit-command (bug#80169)
- Fix [More] buttons in tutorial and other buttons in Semantic
- emacs-devel:
- New packages:
- auto-space-mode: Auto add space between CJK and ASCII (MELPA)
- consult-spotlight: Consult interface to macOS mdfind (Spotlight) (MELPA)
- ddgr: DuckDuckGo search (MELPA)
- dialog-mode: Major mode for editing Dialog files (MELPA)
- doing: Frictionless activity log and time tracking (MELPA)
- duckdb-query: DuckDB query results as native Elisp data structures (MELPA)
- edna-theme: A dark, Edna-inspired theme (MELPA)
- eldc: Emacs Lisp Dictionary Converter (MELPA)
- elsqlite: SQLite browser (MELPA)
- hanfix-mode: Korean grammar checker (MELPA)
- lazy: Lazy evaluation library (MELPA)
- magit-gh: GitHub CLI integration for Magit (MELPA)
- magit-pre-commit: Magit integration for pre-commit (MELPA)
- org-window-habit: Time window based habits (MELPA)
- project-rails: Rails support for project.el (MELPA)
- system-idle: Poll the system-wide idle time (MELPA)
- warm-mode: Warm colors for nighttime coding (MELPA)
Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!
You can comment on Mastodon or e-mail me at sacha@sachachua.com.
- Upcoming events (iCal file, Org):
-
🔗 anthropics/claude-code v2.1.38 release
What's changed
- Fixed VS Code terminal scroll-to-top regression introduced in 2.1.37
- Fixed Tab key queueing slash commands instead of autocompleting
- Fixed bash permission matching for commands using environment variable wrappers
- Fixed text between tool uses disappearing when not using streaming
- Fixed duplicate sessions when resuming in VS Code extension
- Improved heredoc delimiter parsing to prevent command smuggling
- Blocked writes to the `.claude/skills` directory in sandbox mode
-
🔗 Ampcode News Amp Free Is Full (For Now) rss
We're closing admission to Amp Free, for now.
If you already have Amp Free, you’re still in. You'll keep getting $10/day in free usage, supported by ads. If not, you'll need to keep paying for all of your Amp usage.
Why? Amp is growing very fast, but we need to slow down the growth because we're very busy building the next version of Amp. We want it to feel like the future of how you build software with agents.
We'll be sprinting on this frontier with our early-adopter customers and community. To do that well, we need to spend less time on new-user support and keeping usage fair (fraud and abuse prevention), and more time on building the next Amp.
When the next version of Amp is ready, you'll know, and we'll reopen Amp Free.
-
🔗 Baby Steps Dada: moves and mutation rss
Let's continue with working through Dada. In my previous post, I introduced some string manipulation. Let's start talking about permissions. This is where Dada will start to resemble Rust a bit more.
Class struggle
Classes in Dada are one of the basic ways that we declare new types (there are also enums, we'll get to that later).
The most convenient way to declare a class is to put the fields in parentheses. This implicitly declares a constructor at the same time:
```
class Point(x: u32, y: u32) {}
```

This is in fact sugar for a more Rust-like form:

```
class Point {
    x: u32
    y: u32

    fn new() -> Point {
        Point { x, y }
    }
}
```

And you can create an instance of a class by calling the constructor:

```
let p = Point(22, 44) // sugar for Point.new(22, 44)
```

Mutating fields
I can mutate the fields of `p` as you would expect:

```
p.x += 1
p.x = p.y
```

Read by default
In Dada, the default when you declare a parameter is that you are getting read-only access:
    fn print_point(p: Point) {
        print("The point is {p.x}, {p.y}")
    }

    let p = Point(22, 44)
    print_point(p)

If you attempt to mutate the fields of a parameter, that would get you an error:

    fn print_point(p: Point) {
        p.x += 1  # <-- ERROR!
    }

Use ! to mutate

If you declare a parameter with !, then it becomes a mutable reference to a class instance from your caller:

    fn translate_point(point!: Point, x: u32, y: u32) {
        point.x += x
        point.y += y
    }

In Rust, this would be like point: &mut Point. When you call translate_point, you also put a ! to indicate that you are passing a mutable reference:

    let p = Point(22, 44)        # Create point
    print_point(p)               # Prints 22, 44
    translate_point(p!, 2, 2)    # Mutate point
    print_point(p)               # Prints 24, 46

As you can see, when translate_point modifies p.x, that changes p in place.

Moves are explicit
If you're familiar with Rust, that last example may be a bit surprising. In Rust, a call like print_point(p) would move p, giving ownership away. Trying to use it later would give an error. That's because the default in Dada is to give a read-only reference, like &x in Rust (this gives the right intuition but is also misleading; we'll see in a future post that references in Dada are different from Rust in one very important way).

If you have a function that needs ownership of its parameter, you declare that with given:

    fn take_point(p: given Point) {
        // ...
    }

And on the caller's side, you call such a function with .give:

    let p = Point(22, 44)
    take_point(p.give)
    take_point(p.give)  # <-- Error! Can't give twice.

Comparing with Rust
It's interesting to compare some Rust and Dada code side-by-side:
Rust | Dada
---|---
`vec.len()` | `vec.len()`
`map.get(&key)` | `map.get(key)`
`vec.push(element)` | `vec!.push(element.give)`
`vec.append(&mut other)` | `vec!.append(other!)`
`message.send_to(&channel)` | `message.give.send_to(channel)`

Design rationale and objectives
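The Rust column of the table is ordinary borrow-checked code; a self-contained check of those rows (my illustration, not the post's code):

```rust
use std::collections::HashMap;

fn main() {
    let mut vec = vec![1, 2];
    let mut other = vec![3, 4];

    vec.push(5);              // Dada: vec!.push(element.give)
    vec.append(&mut other);   // Dada: vec!.append(other!) -- drains `other`
    assert_eq!(vec, [1, 2, 5, 3, 4]);
    assert!(other.is_empty());

    let mut map = HashMap::new();
    map.insert("key", 10);
    assert_eq!(map.get("key"), Some(&10)); // Dada: map.get(key), no `&`
    println!("{}", vec.len());             // len() reads; same in both columns
}
```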
Convenient is the default
The most convenient things are the shortest and most common. So we make reads the default.
Everything is explicit but unobtrusive
The . operator in Rust can do a wide variety of things depending on the method being called. It might mutate, move, create a temporary, etc. In Dada, these things are all visible at the call site, but they are unobtrusive.

This actually dates from Dada's "gradual programming" days: after all, if you don't have type annotations on the method, then you can't decide whether foo.bar() should take a shared or mutable borrow of foo. So we needed a notation where everything is visible at the call site and explicit.
Dada tries hard to avoid prefix operators like &mut, since they don't compose well with . notation.
-
- February 09, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-02-09 rss
Activity:
- augur
- cf16c24f: chore: update dependencies
- capa
- cbe005ae: bump ruff from 0.14.7 to 0.15.0 (#2853)
- CrystalRE
- DelphiHelper
- distro
- ea1a29f1: Add ModuleNotFoundError specific hint and fix indented traceback parsing
- haruspex
- be23ccd3: chore: update dependencies
- IDA-MCP
- 9f8c0b40: Update README.md
- IDAPluginList
- c590ee19: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
- msc-thesis-LLMs-to-rank-decompilers
- rhabdomancer
- 50405573: chore: update dependencies
-
🔗 r/reverseengineering A Reverse Engineer’s Map of Standard Math Definitions rss
submitted by /u/zboralski
[link] [comments] -
🔗 r/york Demand action to reduce queues at the hospital - Sign the Petition rss
submitted by /u/dawnriser
[link] [comments] -
🔗 r/LocalLLaMA MechaEpstein-8000 rss
I know it has already been done, but this is my AI trained on the Epstein emails. Surprisingly hard to do, as most LLMs will refuse to generate the dataset for Epstein, lol. Everything about this is local: the dataset generation, training, etc. Done on a 16GB RTX 5000 Ada. Anyway, it's based on Qwen3-8B and it's quite funny. GGUF available at link.
Also I have it online here if you dare: https://www.neuroengine.ai/Neuroengine-MechaEpstein submitted by /u/ortegaalfredo
[link] [comments] -
🔗 r/Yorkshire ‘Agate’ rss
Has anybody else heard of the term ‘agate’? My fiancée has never heard of it and I’m gobsmacked but then asking others I know, they haven’t a clue either.
Eg. I was talking to him the other day and he was agate ‘I don’t even care what agate means’
So it’s sort of used as ‘he/she was saying’
How many of you have heard of this, and where did it even originate? 🤣
submitted by /u/Fun-Swordfish-2359
[link] [comments] -
🔗 r/Yorkshire Halifax Panthers RLFC been given a winding up order rss
submitted by /u/CaptainYorkie1
[link] [comments] -
🔗 r/york REDUX: Two friends looking to make connections in York's LGBT+ Community rss
| Hi there! This is a follow up to a post that I made in this subreddit back in January. For anyone who missed it, here's the link: https://www.reddit.com/r/york/comments/1qdd8z1/two_friends_looking_to_make_connections_in_yorks/ As a refresher, here's the summary: "My friend and I are both gay men living in York. We’ve been trying to branch out and make new connections in the city, but the few existing local groups didn't quite fit what we were looking for. We’re just wondering if there are other LGBT+ folks, especially people in their 20s/30s, who are a bit nerdy, alternative, and/or introverted and who also haven't found their spot yet who maybe wanted to meet up with us to make some new friendly connections! We were thinking just a chilled coffee meetup with no pressure or formal club rules. If we all get along, maybe we do it again? About us: mid-30s sci-fi nerd (Star Trek, Murderbot, Adrian Tchaikovsky) + late-20s cinephile/music lover/gamer (psychological horror, Disco Elysium, Radiohead, Nick Drake, Kate Bush, Talking Heads)." After making the above post, my friend and I met up at Cityscreen on the Friday we mentioned, and it went really well - a couple of people came along to meet us, which was lovely. :-) We did get a fair number of comments and messages from people who said they'd be interested but who couldn't make the date and time we'd set though, so with that in mind, we thought we'd arrange another similar meet-up - this time with slightly more advance warning! Here's the tentative plan: Where: Cityscreen Cafe When: 19:00 Thursday 12th of February (i.e. this Thursday) You'll know it's us because we'll have a fluffy rabbit toy on our table like the one in the attached photo. That's the sign! We'll try to snag the sofa at the back of Cityscreen cafe, but depending on how many people turn up, we might have to take our posse elsewhere – upstairs at The Habit and The Exhibition Hotel struck me as two possible places we could try on the night. 
If this sounds like your cup of tea (or coffee!), please drop a comment below so we can get an idea of numbers. If you're interested but feeling shy, feel free to shoot me a DM instead. I'll also send out messages individually to the specific people who responded to the last post! This time, we've also created an event in the York Discord server meet-ups channel, so if anyone here is a member of the York Discord and wants to confirm their interest/attendance that way, they're welcome to do so. The name of the event is: “12/02/2026 - Informal LGBT+ Meet-up @ Cityscreen Cafe” All the best, and maybe see you there! :-) submitted by /u/WaxyMelt
[link] [comments]
-
🔗 r/LocalLLaMA Do not Let the "Coder" in Qwen3-Coder-Next Fool You! It's the Smartest, General Purpose Model of its Size rss
Like many of you, I use LLMs as tools to help improve my daily life, from editing my emails to online search.
However, I also like to use them as an "inner voice" to discuss general thoughts and get constructive criticism. For instance, when I face life-related problems that might take me hours or days to figure out, a short session with an LLM can significantly speed up that process.
Since the original Llama was leaked, I've been using LLMs locally, but I always felt they lagged behind OpenAI or Google models. Thus, I would always go back to ChatGPT or Gemini when I needed serious output. If I needed a long chat session or help with long documents, I had no choice but to use the SOTA models, and that meant willingly leaking personal or work-related data.
For me, Gemini 3 is the best model I've ever tried. I don't know about you, but I sometimes struggle to follow ChatGPT's logic, while I find it easy to follow Gemini's. It's like that best friend who just gets you and speaks your language.
Well, that was the case until I tried Qwen3-Coder-Next. For the first time, I could have stimulating and enlightening conversations with a local model. Previously, I half-seriously used Qwen3-Next-80B-A3B-Thinking as my local daily driver, but that model always felt a bit inconsistent; sometimes I got good output, and sometimes I got dumb output.
Qwen3-Coder-Next, however, is more consistent, and you can feel that it's a pragmatic model trained to be a problem-solver rather than a sycophant. Unprompted, it will suggest an author, a book, or an existing theory that might help. I genuinely feel I am conversing with a fellow thinker rather than an echo chamber constantly paraphrasing my prompts in a more polished way. It's the closest model to Gemini 2.5/3 that I can run locally in terms of quality of experience.
For non-coders, my point is: do not sleep on Qwen3-Coder-Next simply because it has the "coder" tag attached.
I can't wait for the Qwen 3.5 models. If Qwen3-Coder-Next is an early preview, we are in for a real treat.
submitted by /u/Iory1998
[link] [comments] -
🔗 r/Leeds Looking for scrap metal collection recommendations rss
Hi,
I've got a large selection of scrap metal to get rid of that came out of my grandad's workshop.
Any recommendations for services that offer collection? Also open to offers on collection; if you're reading this and are registered etc., feel free to reply/PM with estimates/costs.
Hoping to sort something this week/early next week.
Some pictures attached, however this isn't everything that's to go. Looking at a transit van/tipper-size vehicle for collection.
Thanks in advance.
Approximate weights below, these are pretty much minimum weights as I've added to these recently as I've been finding stuff.
Mild steel (200kg+):
- Big mixed tub of mild steel
- Old oven
- Filing cabinet
- Some round bar, flat strips, etc.

Brass:
- Clean bar - 37kg
- Slightly oxidised bar - 27kg
- 16kg bucket of mixed scrap
- 43kg of brass plate (clean but with protective coating)
- 20kg big bar (presumed to be brass but possibly phosphor bronze or similar)

Copper:
- 15.5kg sheet
- 12kg copper bar
- 3kg of mixed scrap
- 2kg of wire (unstripped)
- Nominal amount of small-gauge stripped wire

Aluminium:
- 8kg of clean sheet (need to confirm it's ally)
- 25kg fairly clean mixed bucket (bar, angle iron, etc.)
- 12kg big clean bar
- 5kg of bicycle/motorcycle rims, my old mountain bike, etc.
A chunk of lead
A car battery or two etc
I'll also have a good amount of HSS to probably scrap, which is all old engineering tools (drills, reamers, lathe tools, etc.)
Thanks
submitted by /u/Jakksz
[link] [comments] -
🔗 sacha chua :: living an awesome life La semaine du 2 février au 8 février rss
Monday, February 2
00:00 I tried to change my software so that, when it sends my prompt through the AI's web interface, it replaces the previous prompt instead of adding a new one, to cut down on token use, but the interface doesn't make that easy to drive from Javascript. I decided to keep adding new prompts, but not to repeat the instructions if they've already been sent.
00:30 I prefer Gemini's feedback to Mistral's or OpenAI's. I should try Claude for commenting on my French journal. My daughter prefers Claude to OpenAI or Gemini for generating stories, and lots of people prefer it for programming, so it might be worth it.
00:52 I think that to fit within the free API limits, I can process the whole week in one request instead of day by day. I can also try revising my drafts every day instead of once a week. That's better for learning anyway.
01:13 I took my daughter to her gymnastics class. I rode my bike. My husband offered to come with us, which I appreciated. It was my first bike ride since the snowstorm. None of the bike lanes were passable, so we had to ride on the street. It turned out there was an aerial gymnastics show at 5 PM, so we waited to watch it. We played Pokémon Go while we waited. The show was good.
01:47 For dinner, we made fried chicken wings, fries, and broccoli. We enjoyed it. My daughter was proud because everything was cheaper and better than eating at a restaurant. Cooking is a very useful life skill. If she learns it, it will serve her well.
02:10 After eating, I worked on the Emacs newsletter. I finally found the time to write a function to rewrite Reddit links. I used Spookfox to find the links on the page.
Tuesday, February 3
00:00 I got back to my morning routine. First, I made thick pancakes for my husband and me. After that, I practiced jazz piano and followed a short exercise video. I went for a walk along a path that starts near our house and ends at the park. I hit the Pokéstops. I even beat two gyms and left my Pokémon in them, but they were defeated quickly, so I didn't get any coins.
00:35 This morning, I changed my function that sends my questions to Gemini so it can also send them to Claude AI and fetch the answer. I don't yet know how to automate waiting for the response, but for now the manual way works fine.
00:55 After my husband got home, I went out and cleared the snow from the sidewalk in front of several houses at the end of the street. It was minus three degrees, so the packed snow broke up easily.
01:10 I had my last appointment with my therapist. She recapped the tools she taught me during therapy, and I identified a few skills I want to practice and a few warning signs that tell me I need to pay attention.
01:29 After eating leftovers, I wrote a function to automatically adjust the recording volume after recording. Now I can use text-to-speech to say the current subtitle and then listen to my recording, which makes comparison easier. My next step is to make re-recording simpler.
01:58 I wanted to publish my pronunciation attempts on my blog for accountability and to track my progress. There are sometimes long sentences where my tutor highlighted only one or two words, so I want to include just the phrases instead of the whole sentence. To avoid disorienting people, I wanted to specify a recording (for example, a simple chime) to alternate with the recordings of my voice. At first, I copied the segment several times, like this:
Subtitles with chimes

    WEBVTT

    NOTE
    #+OUTPUT: 2026-02-02-prononciation.opus
    #+INTERLEAVE: /home/sacha/proj/french/chime.opus
    #+AUDIO: /home/sacha/proj/french/recordings/Points de prononciation-2026-02-03-143719.wav

    00:00:02.747 --> 00:00:11.722
    Ça veut dire que je peux écouter quelques podcasts de niveau A2 même sans sous-titres.

    NOTE #+AUDIO: /home/sacha/proj/french/chime.opus

    00:00:00.000 --> 00:00:01.000
    [chime]

    NOTE #+AUDIO: /home/sacha/proj/french/recordings/Points de prononciation-2026-02-03-143719.wav

    00:00:17.224 --> 00:00:22.201
    Après avoir mangé des nouilles udon au souper,

    NOTE #+AUDIO: /home/sacha/proj/french/chime.opus

    00:00:00.000 --> 00:00:01.000
    [chime]

    NOTE #+AUDIO: /home/sacha/proj/french/recordings/Points de prononciation-2026-02-03-144002.wav

    00:00:03.064 --> 00:00:06.440
    C'est amusant que nous passions ces moments-là.

    NOTE #+AUDIO: /home/sacha/proj/french/chime.opus

    00:00:00.000 --> 00:00:01.000
    [chime]

    NOTE #+AUDIO: /home/sacha/proj/french/recordings/Points de prononciation-2026-02-03-140019.wav

    00:01:58.720 --> 00:02:04.043
    D'autres personnes ont discuté de la navigation par onglets

02:41 But that was awkward and it cluttered my script. So I simplified my process into a directive called #+INTERLEAVE:, like this:

Simpler

    WEBVTT

    NOTE
    #+OUTPUT: 2026-02-02-prononciation.opus
    #+INTERLEAVE: /home/sacha/proj/french/chime.opus
    #+AUDIO: /home/sacha/proj/french/recordings/Points de prononciation-2026-02-03-143719.wav

    00:00:02.747 --> 00:00:11.722
    Ça veut dire que je peux écouter quelques podcasts de niveau A2 même sans sous-titres.

    NOTE #+AUDIO: /home/sacha/proj/french/recordings/Points de prononciation-2026-02-03-143719.wav

    00:00:17.224 --> 00:00:22.201
    Après avoir mangé des nouilles udon au souper,

    NOTE #+AUDIO: /home/sacha/proj/french/recordings/Points de prononciation-2026-02-03-144002.wav

    00:00:03.064 --> 00:00:06.440
    C'est amusant que nous passions ces moments-là.

    NOTE #+AUDIO: /home/sacha/proj/french/recordings/Points de prononciation-2026-02-03-140019.wav

    00:01:58.720 --> 00:02:04.043
    D'autres personnes ont discuté de la navigation par onglets

02:54 I also fixed a few bugs in my audio-link exporter so that I can display the current subtitle below the audio player. If you like, you can listen to my audio recording in last week's entry.
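The effect of that #+INTERLEAVE: directive is simply to alternate the voice clips with a chime file: clip, chime, clip, chime, ..., clip. A small Rust sketch of that interleaving step (hypothetical; the author's actual tooling is the subed-record Emacs Lisp library):

```rust
/// Alternate recordings with a chime, matching the `#+INTERLEAVE:`
/// behaviour: a chime is inserted between consecutive clips,
/// but not before the first or after the last.
fn interleave(clips: &[&str], chime: &str) -> Vec<String> {
    let mut out = Vec::new();
    for (i, clip) in clips.iter().enumerate() {
        if i > 0 {
            out.push(chime.to_string()); // separator between clips
        }
        out.push(clip.to_string());
    }
    out
}

fn main() {
    let playlist = interleave(&["a.wav", "b.wav", "c.wav"], "chime.opus");
    assert_eq!(playlist, ["a.wav", "chime.opus", "b.wav", "chime.opus", "c.wav"]);
    println!("{playlist:?}");
}
```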
03:14 My daughter followed the Pokémon drawing tutorial from the book. She's very proud of her work. She also sketched several ideas.
03:25 Since my daughter likes to snuggle up against me and keep me from typing on my computer, I tried connecting my phone to my computer via Termux, SSH, and emacsclient. It worked surprisingly well. Writing on my phone with Orgzly is easier than writing in Emacs over Termux and SSH because I can swipe-type. But if I want to see feedback and explanations while writing, Emacs is more customizable. With (xterm-mouse-mode 1), I can even use the touchscreen to scroll or open menus. Maybe I can configure Termux to simplify the keyboard shortcuts, and then set up simple shortcuts for looking up words in the dictionary and completing words as I type.

Wednesday, February 4
I made thick pancakes again to use up the sourdough starter in the fridge. However, my husband kept baking more bread, so our starter supply keeps growing. Now we have a few days' respite before the starter threatens to take over our kitchen once more.
For exercise, I cleared the packed snow from the sidewalk to the south. It was satisfying to see the sidewalk clear. It's easier to walk on than packed snow. A few neighbours thanked me. Honestly, shoveling snow outside beats following an exercise video indoors.
I added a few details to yesterday's entry about my process for listening to the text-to-speech and my recordings. I used my new process to record my pronunciation attempts from the previous three weeks. I included timestamps for navigating, using my new function for inserting the data.
We walked to the Stockyards together and did some shopping. At the pharmacy, my daughter picked out shampoo, lotion, and toothpaste for kids. At Canadian Tire, she picked out Pokémon party supplies: napkins, plates, and decorations. At Bulk Barn, she picked out chocolate coins, muffin cups, paper bags, and Minecraft Pez dispensers. She's very excited about her party. She only invited four friends and we're planning to hold the party outdoors in winter, so we're happy to spend on the experience.
My husband's pant pocket had torn. My daughter patched it with two squares of red nylon. I helped her thread the needle and hold the fabric. She likes helping with these things. It's a practical skill.
Thursday, February 5
I made oatmeal for breakfast. After a bit more jazz piano practice, I plowed my way through the snowbank at the corner of another street.
I finally replied to a comment about my French. I also pushed my updates to the subed-record and compile-media libraries.
I updated my client's data and produced this month's analytics report, which I do every month. I also looked into which pages mention an obsolete system so that I can update them.
A Bike Brigade volunteer asked me to change the newsletter template to add a sponsor's logo and link. She also asked to make the article the sponsor wrote more prominent. I said that she, the sponsor, or the new volunteer in charge of the newsletter this week could do that. I'm happy to let other people do things like that. I tried to modify my software that converts the newsletter from Google Docs to MailChimp, but nested lists were hard to handle because Google Docs uses styles to simulate nested lists instead of proper semantic markup. It was easier to change the format in Google Docs to a flat list.
After school, I took my daughter to a get-together with her friends because she wanted to hand them her party invitations in person. Afterwards, she practiced cartwheels. Once her friends had left, we went outside to play Pokémon. Together we won three Max Battles, levels one through three, and caught lots of Pokémon. I earned enough coins to expand my bag, which was useful for storing Pokéballs, but my Pokémon storage kept hitting its limit.
On the way home, I tried the bike path near Lansdowne because it looked passable. The first stretch was clear, but the end was snowed in. Even though I was riding carefully, I was startled when my bike veered left and hit a car. Fortunately, we were all going slowly and no one was hurt. The driver was kind and asked how we were doing. After catching my breath, I continued home. My daughter said her legs hurt a little, so she settled onto the couch.
It was a good day for comfort food, so we ate udon noodles with fish cakes and char siu pork. If I make saucy char siu pork tomorrow, my husband can make char siu pork buns for dinner while my daughter plays with her friend (if her friend is free and the weather allows).
Friday, February 6
I did a livestream about my process for learning French, which I mentioned yesterday. I wonder if I could extract screenshots to include in a post.
In the afternoon, my daughter was in a bad mood because of a classmate. She left school and barricaded herself in her room. I tried not to worry. It's her own experience to have.
As for me, I wrote a function to translate the current sentence via the Google Translate API. That way, I can check whether my intent survived after applying the AI's corrections. I also tried Azure's text-to-speech. My next step is to modify my text-to-speech function so that I can choose which engine to use.
My husband said that while he was at No Frills, someone asked him to pass along their thanks to me for clearing the snow from the sidewalk.
I took my daughter to the skating rink to play with her friend. They wanted to play for a long time, so her friend's father went home to make dinner and left her with us. After another hour of playing, we walked her friend home and then headed home ourselves. My daughter was very tired.
We made burritos for dinner. After my evening routine, I sat with my daughter on the couch. I wrote in my journal while she watched Pokémon episodes.
I had to brush my daughter's hair. I spent a long time on it last night, but today her hair was tangled again. She also brushed her hair herself.
I don't know exactly why, but once I had finished my daughter's evening routine, she got grumpy. She barricaded herself in her room again and refused to brush her teeth. Well, maybe she suddenly felt tired after finishing her TV time. She said she can handle things herself and doesn't need my help. She said, "Stop momming me." So I left her alone.
Saturday, February 7
It was very cold today, so we stayed home instead of going to nature club.
My daughter had a bad night and took two naps on the couch. She watched the Pokémon series. She also helped us make Chinese char siu pork buns. We ate them for dinner and they were delicious, even though I used the recipe for baked buns instead of steamed ones. No matter; I'll try it again tomorrow.
My tutor told me she has to stop our lessons. I'm sad about that. She was a good tutor, so I was glad to have spent that time together.
I finished recording all the pronunciation exercises from the past twelve weeks of journal entries with my tutor. That only comes to a total of one hour of audio. I tried both text-to-speech engines, and for now I prefer the Google Translate engine because it sounds clearer, even if it's less natural than the Azure engine.
- La semaine du 10 au 16 novembre
- La semaine du 17 novembre au 23 novembre
- La semaine du 24 novembre au 30 novembre
- La semaine du 1 décembre au 7 décembre
- La semaine du 7 décembre au 14 décembre
- La semaine du 15 décembre au 21 décembre
- La semaine du 22 au 28 décembre
- La semaine du 29 décembre
- La semaine du 5 janvier au 11 janvier
- La semaine du 12 janvier au 18 janvier
- La semaine du 19 janvier au 25 janvier 2026
- La semaine du 26 janvier au 1er février 2026
Sunday, February 8
I started watching shows on Netflix with French audio and subtitles. It's too fast to study from; it's just for fun.
It was still very cold out: minus twenty degrees. Instead of going to the skating lesson, which was probably cancelled, we stayed home. My daughter and I practiced a bit of French and she worked on her homework.
We made Chinese char siu pork buns again. My husband and daughter also chopped pork to make Chinese pork-and-cabbage buns.
Now I need to think about how I want to proceed with my French learning:
- I started with the goal of helping my daughter with her French, and I've succeeded. From time to time, she practices French with me. She said she almost prefers me to her teacher because the class is too slow and her classmates often clown around. I can offer her a few mnemonics before her quizzes. I can show her how to use a spaced-repetition system and the value of learning you choose for yourself.
- My long-term goal is mental stimulation. I'm choosing to learn French purely for fun, not for work or travel.
- I enjoy journaling in French. It's a good way to look up and use lots of words related to my interests. The entries sketch our lives, which I'll appreciate later. It's also a good excuse to tinker with Emacs and share my tweaks. Even if I don't have a weekly appointment with a tutor, I want to keep the habit. AI seems to be a good way to avoid big grammar mistakes, since my tutor disagreed with its suggestions only in a few cases. Maybe my kind readers will give me feedback if the AI leads me astray. Even if another hobby ends up pulling my attention away, the entries will remain. (And maybe they'll survive coding mistakes better than my Japanese entries, which got mangled during a migration between blog systems…)
- I'll keep using Anki cards to review my vocabulary and practice speaking.
- For comprehension, I want to be able to read the emacsfr mailing list, the Emacs Fr forum, and the documentation_emacs project. I can also find lots of resources on Pokémon and Dungeons & Dragons in French. I can read books on Libby and on Epic Books. I want to find more blogs about Emacs and other technologies. I also want to improve my environment to make reading easier.
- For grammar, I think the organic growth of my journal will leave gaps in my knowledge. I've borrowed lots of books on French, but I haven't read them much because they weren't practical during spare moments. I think I'll try short articles and quizzes on Kwiziq to identify things I don't know that I don't know.
- I can now follow simple A2-level podcasts and videos, and I can follow more complex ones if I read the subtitles at the same time. So I can watch shows on Netflix that are dubbed in French to practice listening comprehension. One day I'd like to understand courses like those on fun-mooc.fr. I can practice with the free Mauril app, which is available in Canada.
AI can't offer much help with pronunciation. I think most apps use speech recognition and process the resulting text, because that keeps costs down, and also because there isn't much high-quality data or many models for correcting beginners' mispronunciations. So if I want to improve my pronunciation, I need a tutor who can give me feedback in real time.
I'm not ready to converse beyond simple exchanges. For now, I'm not very interested in introductions and small talk, at least not enough to devote time to them. However, internalizing the rules of pronunciation (and the many exceptions) takes time. Practicing regularly builds confidence. Paying a person to teach me is also an experience in itself, one I want to model for my daughter. I think I'll try different tutors to find the one who suits me best. I can combine that with reading my journal aloud along with the text-to-speech, so that a person can point out the mistakes I can't always hear.
- I can test paid AI to see whether it can help me more with every aspect of learning French. We're flooded with different apps that use AI and demand a subscription, but if I build my own tools, I can customize them to my needs.
Pronunciation (practice phrases kept in French)
- J'ai essayé de modifier mon logiciel pour traiter mon prompt via l'interface web de l'IA {in French for the A as well}
- je ne répète pas les instructions (in struuk seons) si celles-ci ont déjà été transmises.
- je peux traiter la semaine tout entière dans une requête au lieu de jour par jour.
- Je peux aussi essayer de réécrire mes brouillons chaque jour au lieu d'une fois par semaine.
- Il se trouve qu'il y avait un spectacle de gymnastique aérienne à 17 h, donc nous avons attendu pour le voir.
- Ce matin, j'ai modifié ma fonction pour envoyer mes questions à Gemini pour qu'elle puisse aussi les envoyer à Claude IA et chercher la réponse.
- j'ai identifié quelques compétences que je veux pratiquer et quelques signes qui me montrent que je dois être attentive.
- Maintenant je peux utiliser la synthèse vocale pour dire le sous-titre actuel et écouter l'enregistrement pour faciliter la comparaison.
- Il y a parfois de longues phrases où ma tutrice n'a surligné qu'un ou deux mots, donc je veux inclure juste les expressions au lieu de la phrase entière.
- Pour éviter de désorienter les gens, je voulais spécifier un enregistrement (par exemple, un simple carillon) que j'alternerais (jal tehr ne ray) avec les enregistrements de ma voix.
- Mais si je veux voir des bilans et des explications en écrivant, Emacs est plus personnalisable.
- J'ai ajouté quelques détails (day tie) à mon entrée d'hier sur mon processus pour écouter la synthèse vocale et mes enregistrements.
- J'ai inclus les horodatages pour naviguer (nah vee gay) en utilisant ma nouvelle fonction pour insérer les données.
- Dans la pharmacie, ma fille a choisi du shampooing, de la lotion et du dentifrice (den tee freese) pour enfants.
- Chez Canadian Tire, elle a choisi des articles de fête Pokémon : des serviettes, des assiettes et des décorations. (day koh rah seon)
- Elle a beaucoup d'enthousiasme (den tou see assmeuh) pour sa fête.
- Je l'ai aidée à enfiler l'aiguille (lay gwee) et à tenir le tissu.
- Après un peu plus d'apprentissage du jazz au piano, je me suis frayé (fraie yay) un chemin dans la congère au coin d'une autre rue.
- J'ai aussi cherché à savoir quelles pages mentionnent (men seonne) un système obsolète pour que je puisse les modifier.
- Une bénévole de Bike Brigade m'a demandé de modifier le modèle de l'infolettre pour ajouter le logo et le lien (lee ehn) vers un commanditaire.
- les listes emboîtées étaient difficiles à traiter parce que Google Docs utilise les styles pour simuler des listes emboîtées (em bwah tay) au lieu de balisage sémantique correct.
- Une fois que ses amies sont parties, nous sommes allées dehors (deuh orh) pour jouer à Pokémon.
- Nous avons gagné trois Combats Max de niveau un (uhn) à trois ensemble et nous avons attrapé beaucoup de Pokémon.
- ma réserve de Pokémon atteignait (ah teng yay) souvent la limite.
- Le chauffeur était gentil (jen tee) et il nous a demandé comment nous allions.
- Après avoir repris mon souffle (soufl), j'ai continué à rentrer à la maison.
- Ma prochaine étape consiste à modifier ma fonction pour utiliser la synthèse vocale pour que je puisse choisir quel moteur je veux (veuh) utiliser.
- mais aujourd'hui ses cheveux (peux, que, le, veux, “euh”) étaient encore emmêlés.
- Ma fille a passé une mauvaise nuit et a fait deux (deuh- veux, peux, que, le, cheveux) siestes sur le canapé.
- Je peux lui démontrer (day mon tray) l'utilisation du système de répétition espacée et la valeur de l'apprentissage qu'on choisit pour soi-même.
- Les entrées esquissent nos vies, ce que j'apprécierai (jah prey see ehr ray) plus tard.
- Même si je n'ai pas de rendez-vous avec un tuteur chaque semaine, je veux (veuh) garder cette habitude.
- Pour la compréhension, (comp prey ehn seon) je veux pouvoir lire la liste de diffusion emacsfr, le forum Emacs Fr et le projet documentation_emacs.
You can e-mail me at sacha@sachachua.com.
- La semaine du 10 au 16 novembre
-
🔗 r/Harrogate Harrogate voted the friendliest place in the UK rss
This is based on the booking.com traveller awards 2026, found here https://news.booking.com/traveller-review-awards-2026-celebrate-181-million-partners-worldwide-and-reveal-the-most-welcoming-destinations-for-the-year-ahead/ submitted by /u/kromesky
[link] [comments] -
🔗 r/Leeds reputable computer repair shops in Leeds? rss
unfortunately, i recently got scammed by gadgetsfix. they told me my motherboard was broken and needed replacing. finally got it back after 2 months and besides the fact that the trackpad now doesn't work and they wiped all my data, they also replaced my i7 processor with an i5, stripped my old "broken" motherboard for parts without my consent and then tried to argue that they wanted to keep the old motherboard for other repairs when i asked for it back... even though it was "broken."
after talking with Citizens Advice i know that i have the right to request a refund so i can afford to have the thing fixed, but i'm pretty wary of any repair shop now. i'm really skint as is and this has really set me back, and i still don't have a laptop that works properly.
is there a place in leeds that has a particularly good reputation? one that would give me a quote on the repair?
submitted by /u/corpuscalos
[link] [comments] -
🔗 r/york Free pizza offer Pizza Express today! rss
Can confirm it works. First 150 customers so be quick! submitted by /u/TheNorthernJevans
[link] [comments] -
🔗 r/york Drummer Looking for Practice Space rss
Hi everyone! I’ve recently moved back to York, and I’m trying to find a long-term space where I can store my drum kit and practise regularly, ideally somewhere I can set it up and have reliable access (not just pay-as-you-go rehearsal rooms).
I know there are some good rehearsal rooms around, but I’m hoping to avoid booking by the hour and would love something dedicated and permanent (monthly rent or similar).
If you’ve got any inside info on:
- Studios or spaces that offer long-term / monthly rental in York
- Rehearsal studios happy for you to keep your kit there
- Garage/workshop/warehouse spaces for musician use
- Shared band spaces or studio mates
- Anyone looking to share a larger rental room
Please let me know, thanks!
submitted by /u/JLCarlton97
[link] [comments] -
🔗 r/LocalLLaMA Bad news for local bros rss
submitted by /u/FireGuy324
[link] [comments] -
🔗 r/york Is Betty’s Cafe Tea room a tourist trap? rss
The wife and I are looking for a spot in York for afternoon tea that’s not going to be overrun with tourists. Will Betty’s qualify and, if not, are there less touristy options farther out from the historic downtown?
submitted by /u/BK_Mason
[link] [comments] -
🔗 r/reverseengineering Windows containers network isolation RE rss
submitted by /u/safesws
[link] [comments] -
🔗 r/york Charming market town voted friendliest in the UK and visitors are 'obsessed' rss
Happy for Harrogate! submitted by /u/your_right11
[link] [comments] -
🔗 r/Yorkshire Yorkshire Dales, Britain is spectacularly beautiful and green 🌿 rss
@britainbloom submitted by /u/AnfieldAnchor
[link] [comments] -
🔗 r/wiesbaden Obermayr Europa Schule rss
Hi everyone,
Do you have experience working as a teacher at the Obermayr Europa School? The Kununu reviews look pretty worrying, and I'd like to hear what it's really like there, both day to day and in terms of the atmosphere, school management, and reputation in the region.
Honest opinions and experiences very welcome! Thanks
submitted by /u/scoobeeroo
[link] [comments] -
🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.
submitted by /u/AutoModerator
[link] [comments] -
🔗 r/reverseengineering Reverse Engineering the PROM for the SGI O2 rss
submitted by /u/tnavda
[link] [comments] -
🔗 r/reverseengineering Searching for offline version or private server for PES 2018 mobile rss
submitted by /u/Fresh-Lavishness449
[link] [comments] -
🔗 r/LocalLLaMA Qwen3.5 Support Merged in llama.cpp rss
submitted by /u/TKGaming_11
[link] [comments] -
🔗 Armin Ronacher A Language For Agents rss
Last year I first started thinking about what the future of programming languages might look like now that agentic engineering is a growing thing. Initially I felt that the enormous corpus of pre-existing code would cement existing languages in place but now I'm starting to think the opposite is true. Here I want to outline my thinking on why we are going to see more new programming languages and why there is quite a bit of space for interesting innovation. And just in case someone wants to start building one, here are some of my thoughts on what we should aim for!
Why New Languages Work
Does an agent perform dramatically better on a language that it has in its weights? Obviously yes. But there are less obvious factors that affect how good an agent is at programming in a language: how good the tooling around it is and how much churn there is.
Zig seems underrepresented in the weights (at least in the models I've used) and also changing quickly. That combination is not optimal, but it's still passable: you can program even in the upcoming Zig version if you point the agent at the right documentation. But it's not great.
On the other hand, some languages are well represented in the weights but agents still don't succeed as much because of tooling choices. Swift is a good example: in my experience the tooling around building a Mac or iOS application can be so painful that agents struggle to navigate it. Also not great.
So, just because it exists doesn't mean the agent succeeds and just because it's new also doesn't mean that the agent is going to struggle. I'm convinced that you can build yourself up to a new language if you don't depart from the familiar everywhere all at once.
The biggest reason new languages might work is that the cost of coding is going down dramatically. The result is that the breadth of an ecosystem matters less. I'm now routinely reaching for JavaScript in places where I would have used Python. Not because I love it or the ecosystem is better, but because the agent does much better with TypeScript.
The way to think about this: if important functionality is missing in my language of choice, I just point the agent at a library from a different language and have it build a port. As a concrete example, I recently built an Ethernet driver in JavaScript to implement the host controller for our sandbox. Implementations exist in Rust, C, and Go, but I wanted something pluggable and customizable in JavaScript. It was easier to have the agent reimplement it than to make the build system and distribution work against a native binding.
New languages will work if their value proposition is strong enough and they evolve with knowledge of how LLMs train. People will adopt them despite being underrepresented in the weights. And if they are designed to work well with agents, then they might be designed around familiar syntax that is already known to work well.
Why A New Language?
So why would we want a new language at all? The reason this is interesting to think about is that many of today's languages were designed with the assumption that punching keys is laborious, so we traded certain things for brevity. As an example, many languages — particularly modern ones — lean heavily on type inference so that you don't have to write out types. The downside is that you now need an LSP or the resulting compiler error messages to figure out what the type of an expression is. Agents struggle with this too, and it's also frustrating in pull request review where complex operations can make it very hard to figure out what the types actually are. Fully dynamic languages are even worse in that regard.
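A small Go sketch of that trade-off (my illustration, not the author's; buildIndex is a hypothetical helper): the inferred form reads shorter, but only the annotated form tells a reviewer the type without an LSP.

```go
package main

import "fmt"

// buildIndex is a hypothetical helper used only for illustration:
// it maps each name to its position in the input slice.
func buildIndex(names []string) map[string]int {
	idx := make(map[string]int)
	for i, n := range names {
		idx[n] = i
	}
	return idx
}

func main() {
	// Inferred: a reviewer of this diff cannot tell what type byName has
	// without an LSP or jumping to the definition.
	byName := buildIndex([]string{"alpha", "beta"})

	// Explicit: the type is local information, readable in a plain file
	// view, a documentation snippet, or a pull request diff.
	var byName2 map[string]int = buildIndex([]string{"alpha", "beta"})

	fmt.Println(len(byName), len(byName2))
}
```

The second form costs a few more keystrokes, which matters less when an agent is doing the typing.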
The cost of writing code is going down, but because we are also producing more of it, understanding what the code does is becoming more important. We might actually want more code to be written if it means there is less ambiguity when we perform a review.
I also want to point out that we are heading towards a world where some code is never seen by a human and is only consumed by machines. Even in that case, we still want to give an indication to a user, who is potentially a non-programmer, about what is going on. We want to be able to explain to a user what the code will do without going into the details of how.
So the case for a new language comes down to: given the fundamental changes in who is programming and what the cost of code is, we should at least consider one.
What Agents Want
It's tricky to say what an agent wants because agents will lie to you and they are influenced by all the code they've seen. But one way to estimate how they are doing is to look at how many changes they have to perform on files and how many iterations they need for common tasks.
There are some things I've found that I think will be true for a while.
Context Without LSP
The language server protocol lets an IDE infer information about what's under the cursor or what should be autocompleted based on semantic knowledge of the codebase. It's a great system, but it comes at one specific cost that is tricky for agents: the LSP has to be running.
There are situations when an agent just won't run the LSP — not because of technical limitations, but because it's also lazy and will skip that step if it doesn't have to. If you give it an example from documentation, there is no easy way to run the LSP because it's a snippet that might not even be complete. If you point it at a GitHub repository and it pulls down individual files, it will just look at the code. It won't set up an LSP for type information.
A language that doesn't split into two separate experiences (with-LSP and without-LSP) will be beneficial to agents because it gives them one unified way of working across many more situations.
Braces, Brackets, and Parentheses
It pains me as a Python developer to say this, but whitespace-based indentation is a problem. The underlying token efficiency of getting whitespace right is tricky, and a language with significant whitespace is harder for an LLM to work with. This is particularly noticeable if you try to make an LLM do surgical changes without an assisted tool. Quite often they will intentionally disregard whitespace, add markers to enable or disable code and then rely on a code formatter to clean up indentation later.
On the other hand, braces that are not separated by whitespace can cause issues too. Depending on the tokenizer, runs of closing parentheses can end up split into tokens in surprising ways (a bit like the "strawberry" counting problem), and it's easy for an LLM to get Lisp or Scheme wrong because it loses track of how many closing parentheses it has already emitted or is looking at. Fixable with future LLMs? Sure, but also something that was hard for humans to get right too without tooling.
Flow Context But Explicit
Readers of this blog might know that I'm a huge believer in async locals and flow execution context — basically the ability to carry data through every invocation that might only be needed many layers down the call chain. Working at an observability company has really driven home the importance of this for me.
The challenge is that anything that flows implicitly might not be configured. Take for instance the current time. You might want to implicitly pass a timer to all functions. But what if a timer is not configured and all of a sudden a new dependency appears? Passing all of it explicitly is tedious for both humans and agents and bad shortcuts will be made.
One thing I've experimented with is having effect markers on functions that are added through a code formatting step. A function can declare that it needs the current time or the database, but if it doesn't mark this explicitly, it's essentially a linting warning that auto-formatting fixes. The LLM can start using something like the current time in a function and any existing caller gets the warning; formatting propagates the annotation.
This is nice because when the LLM builds a test, it can precisely mock out these side effects — it understands from the error messages what it has to supply.
For instance:
```
fn issue(sub: UserId, scopes: []Scope) -> Token needs { time, rng } {
    return Token{
        sub,
        exp: time.now().add(24h),
        scopes,
    }
}

test "issue creates exp in the future" {
    using time = time.fixed("2026-02-06T23:00:00Z");
    using rng = rng.deterministic(seed: 1);
    let t = issue(user("u1"), ["read"]);
    assert(t.exp > time.now());
}
```

Results over Exceptions
Agents struggle with exceptions; they are afraid of them. I'm not sure to what degree this is solvable with RL (Reinforcement Learning), but right now agents will try to catch everything they can, log it, and do a pretty poor recovery. Given how little information is actually available about error paths, that makes sense. Checked exceptions are one approach, but they propagate all the way up the call chain and don't dramatically improve things. Even if they end up as hints where a linter tracks which errors can fly by, there are still many call sites that need adjusting. And like the auto-propagation proposed for context data, it might not be the right solution.
Maybe the right approach is to go more in on typed results, but that's still tricky for composability without a type and object system that supports it.
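Go's value-plus-error returns are one existing form of typed results (my illustration, not the author's proposal): the failure path is part of every signature, so an agent reading a single file sees locally which calls can fail and what handling looks like.

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parsePort returns a typed result: either a valid port or an error.
// Nothing is thrown; every caller must handle both cases in place.
func parsePort(raw string) (int, error) {
	n, err := strconv.Atoi(raw)
	if err != nil {
		return 0, fmt.Errorf("not a number: %q", raw)
	}
	if n < 1 || n > 65535 {
		return 0, errors.New("port out of range")
	}
	return n, nil
}

func main() {
	if p, err := parsePort("8080"); err == nil {
		fmt.Println("listening on", p)
	}
	if _, err := parsePort("99999"); err != nil {
		fmt.Println("rejected:", err)
	}
}
```

The composability problem the text mentions shows up here too: combining several such results still takes boilerplate without generics or a richer Result type.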
Minimal Diffs and Line Reading
The general approach agents use today to read files into memory is line-based, which means they often pick chunks that span multi-line strings. One easy way to see this fall apart: have an agent work on a 2000-line file that also contains long embedded code strings — basically a code generator. The agent will sometimes edit within a multi-line string assuming it's the real code when it's actually just embedded code in a multi-line string. For multi-line strings, the only language I'm aware of with a good solution is Zig, but its prefix-based syntax is pretty foreign to most people.
Reformatting also often causes constructs to move to different lines. In many languages, trailing commas in lists are either not supported (JSON) or not customary. If you want diff stability, you'd aim for a syntax that requires less reformatting and mostly avoids multi-line constructs.
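Go already demonstrates one diff-stable convention (my illustration, not from the article): when the closing brace of a multi-line composite literal sits on its own line, a trailing comma is required, so appending an element later is a single added line and the formatter never reflows the list.

```go
package main

import "fmt"

// supportedLangs returns a list written one element per line with a
// trailing comma, so appending "rust" later would show up as exactly
// one added line in a diff.
func supportedLangs() []string {
	return []string{
		"go",
		"zig",
		"typescript",
	}
}

func main() {
	fmt.Println(supportedLangs())
}
```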
Make It Greppable
What's really nice about Go is that you mostly cannot import symbols from another package into scope without every use being prefixed with the package name, e.g. context.Context instead of Context. There are escape hatches (import aliases and dot-imports), but they're relatively rare and usually frowned upon.
That dramatically helps an agent understand what it's looking at. In general, making code findable through the most basic tools is great — it works with external files that aren't indexed, and it means fewer false positives for large-scale automation driven by code generated on the fly (e.g. sed or perl invocations).
Local Reasoning
Much of what I've said boils down to: agents really like local reasoning. They want it to work in parts because they often work with just a few loaded files in context and don't have much spatial awareness of the codebase. They rely on external tooling like grep to find things, and anything that's hard to grep or that hides information elsewhere is tricky.
Dependency Aware Builds
What makes agents fail or succeed in many languages is just how good the build tools are. Many languages make it very hard to determine what actually needs to rebuild or be retested because there are too many cross-references. Go is really good here: it forbids circular dependencies between packages (import cycles), packages have a clear layout, and test results are cached.
What Agents Hate
Macros
Agents often struggle with macros. It was already pretty clear that humans struggle with macros too, but the argument for them was mostly that code generation was a good way to have less code to write. Since that is less of a concern now, we should aim for languages with less dependence on macros.
There's a separate question about generics and comptime. I think they fare somewhat better because they mostly generate the same structure with different placeholders and it's much easier for an agent to understand that.
Re-Exports and Barrel Files
Related to greppability: agents often struggle to understand barrel files and they don't like them. Not being able to quickly figure out where a class or function comes from leads to imports from the wrong place, or missing things entirely and wasting context by reading too many files. A one-to-one mapping from where something is declared to where it's imported from is great.
And it does not have to be overly strict either. Go kind of goes this way, but not too extreme. Any file within a directory can define a function, which isn't optimal, but it's quick enough to find and you don't need to search too far. It works because packages are forced to be small enough to find everything with grep.
The worst case is free re-exports all over the place that completely decouple the implementation from any trivially reconstructable location on disk. Or worse: aliasing.
Aliasing
Agents often hate it when aliases are involved. In fact, you can even get them to complain about it in thinking blocks if you let them refactor something that uses lots of aliases. Ideally a language encourages good naming and discourages aliasing at import time as a result.
Flaky Tests and Dev Env Divergence
Nobody likes flaky tests, but agents even less so. Ironic given how particularly good agents are at creating flaky tests in the first place. That's because agents currently love to mock and most languages do not support mocking well. So many tests end up accidentally not being concurrency safe or depend on development environment state that then diverges in CI or production.
Most programming languages and frameworks make it much easier to write flaky tests than non-flaky ones. That's because they encourage indeterminism everywhere.
Multiple Failure Conditions
In an ideal world the agent has one command that lints and compiles and tells the agent whether everything worked out fine. Maybe another command to run all tests that need running. In practice most environments don't work like this. For instance, in TypeScript you can often run the code even though it fails type checks. That can gaslight the agent. Likewise, different bundler setups can cause one thing to succeed only for a slightly different setup in CI to fail later. The more uniform the tooling, the better.
Ideally it either runs or doesn't and there is mechanical fixing for as many linting failures as possible so that the agent does not have to do it by hand.
Will We See New Languages?
I think we will. We are writing more software now than we ever have — more websites, more open source projects, more of everything. Even if the ratio of new languages stays the same, the absolute number will go up. But I also truly believe that many more people will be willing to rethink the foundations of software engineering and the languages we work with. That's because while for some years it has felt like you need to build a lot of infrastructure for a language to take off, now you can target a rather narrow use case: make sure the agent is happy and extend from there to the human.
I just hope we see two things. First, some outsider art: people who haven't built languages before trying their hand at it and showing us new things. Second, a much more deliberate effort to document what works and what doesn't from first principles. We have actually learned a lot about what makes good languages and how to scale software engineering to large teams. Yet, finding it written down, as a consumable overview of good and bad language design, is very hard to come by. Too much of it has been shaped by opinion on rather pointless things instead of hard facts.
Now though, we are slowly getting to the point where facts matter more, because you can actually measure what works by seeing how well agents perform with it. No human wants to be subject to surveys, but agents don't care. We can see how successful they are and where they are struggling.
-
🔗 Baby Steps Hello, Dada! rss
Following on my Fun with Dada post, this post is going to start teaching Dada. I'm going to keep each post short - basically just what I can write while having my morning coffee.[1]
You have the right to write code
Here is a very first Dada program
```
println("Hello, Dada!")
```

I think all of you will be able to guess what it does. Still, there is something worth noting even in this simple program:
"You have the right to write code. If you don't write a main function explicitly, one will be provided for you."

Early on I made the change to let users omit the main function and I was surprised by what a difference it made in how light the language felt. Easy change, easy win.
Convenient is the default
Here is another Dada program
```
let name = "Dada"
println("Hello, {name}!")
```

Unsurprisingly, this program does the same thing as the last one.
"Convenient is the default." Strings support interpolation (i.e., {name}) by default. In fact, that's not all they support, you can also break them across lines very conveniently. This program does the same thing as the others we've seen:

```
let name = "Dada"
println("
    Hello, {name}!
")
```

When you have a " immediately followed by a newline, the leading and trailing newline are stripped, along with the "whitespace prefix" from the subsequent lines. Internal newlines are kept, so something like this:

```
let name = "Dada"
println("
    Hello, {name}!
    How are you doing?
")
```

would print

```
Hello, Dada!
How are you doing?
```

Just one familiar String
Of course you could also annotate the type of the name variable explicitly:

```
let name: String = "Dada"
println("Hello, {name}!")
```

You will find that it is String. This in and of itself is not notable, unless you are accustomed to Rust, where the type would be &'static str. This is of course a perennial stumbling block for new Rust users, but more than that, I find it to be a big annoyance - I hate that I have to write "Foo".to_string() or format!("Foo") everywhere that I mix constant strings with strings that are constructed.

Similar to most modern languages, strings in Dada are immutable. So you can create them and copy them around:

```
let name: String = "Dada"
let greeting: String = "Hello, {name}"
let name2: String = name
```

Next up: mutation, permissions
OK, we really just scratched the surface here! This is just the "friendly veneer" of Dada, which looks and feels like a million other languages. Next time I'll start getting into the permission system and mutation, where things get a bit more interesting.
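As an aside, the multi-line string stripping rule described above can be approximated in a few lines. Here is a rough Go reconstruction (mine, not Dada's actual implementation; it only handles space indentation, and the real behavior may differ in details such as tabs):

```go
package main

import (
	"fmt"
	"strings"
)

// dedent approximates Dada's multi-line string rule: drop the leading and
// trailing newline, then strip the common whitespace prefix of the
// remaining lines while keeping internal newlines.
func dedent(s string) string {
	s = strings.TrimPrefix(s, "\n")
	s = strings.TrimSuffix(s, "\n")
	lines := strings.Split(s, "\n")

	// Find the shortest run of leading spaces over non-blank lines.
	prefix := -1
	for _, l := range lines {
		if strings.TrimSpace(l) == "" {
			continue
		}
		n := len(l) - len(strings.TrimLeft(l, " "))
		if prefix == -1 || n < prefix {
			prefix = n
		}
	}
	if prefix <= 0 {
		return s
	}
	for i, l := range lines {
		if len(l) >= prefix {
			lines[i] = l[prefix:]
		}
	}
	return strings.Join(lines, "\n")
}

func main() {
	fmt.Println(dedent("\n    Hello, Dada!\n    How are you doing?\n"))
}
```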
[1] My habit is to wake around 5am and spend the first hour of the day doing "fun side projects". But for the last N months I've actually been doing Rust stuff, like symposium.dev and preparing the 2026 Rust Project Goals. Both of these are super engaging, but all Rust and no play makes Niko a dull boy. Also a grouchy boy. ↩︎
-
- February 08, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-02-08 rss
IDA Plugin Updates on 2026-02-08
New Releases:
Activity:
- cybersecurity
- DeepExtractIDA
- DriverBuddy-7.4-plus
- 9538eb40: Sync workflows-sync.yml from .github repo
- ghidra-chinese
- aa25094a: Merge pull request #91 from TC999/sync
- IDA-NO-MCP
- c044d118: refactor: Simplify output filename generation in export_decompiled_fu…
- msc-thesis-LLMs-to-rank-decompilers
- PyAgent
- b19b8549: Ingest final Batch 12: Completing ingested queue - 5,670 total modules
- a7ced950: Ingest Batch 11: GraphR1, FactorioEnv, and Multi-modal assets - 5,366…
- 3478be4a: Phase 36-41: Reached 4,597 modules by processing multi-modal and soci…
- 726de53a: Ingest Batch 15: Skills (Oura, Fitbit, Tesla, Trading) - 4,291 total
- 00477bbe: Ingest Batch 14: Surfline, Garmin, and automation skills - 3,969 total
- 20a6f590: Ingest Batch 13: Specialized skills and LLM utils - 3,628 total
- cbda252e: Ingest Batch 12: evoagentx and decodingai materials - 3,209 total
- 277e43bc: Ingest Batch 11: code_puppy and coqui_ai TTS utilities - 2,819 total
- adb8aec9: Ingest Batch 10: Multi-agent and cloud utilities - 2,475 total module…
- b45e19d6: Ingest Batch 9: 500 sanitized modules and tests from exhaustive scan
- 9d2b8c44: Ingest Batch 6, 7 & Final: 794 sanitized modules with updated scanner…
- ce5e2c51: Refactor: Comprehensive linting fix and stub implementation
- aabd2e42: fix(core): support dict config in AutoMemCore and mock it in ChangeMo…
- e53294c0: fix(core): resolve circular imports, missing dependencies and pass Ru…
- 0a06e0b6: chore: add werkzeug dependency
- 274b495f: chore: release update to version 4.0.0 (Core & SDK)
- 48ad94f4: feat: update README to v4.4.0, fix docstring auditor, and add swarm i…
- 676c07d0: fix: repair external_candidates imports and consolidate agent data dirs
-
🔗 r/reverseengineering Reverse Engineering Venetica to fix a crash present since release rss
submitted by /u/RazerOG
[link] [comments] -
🔗 r/york Roman baths at York, UK rss
submitted by /u/tyw7
[link] [comments] -
🔗 r/york Remains of the Romans at York rss
submitted by /u/tyw7
[link] [comments] -
🔗 r/reverseengineering I made an obfuscator. Full source available for analysis. rss
submitted by /u/Temporary-Future-718
[link] [comments] -
🔗 r/LocalLLaMA I built a rough .gguf LLM visualizer rss
I hacked together a small tool that lets you upload a .gguf file and visualize its internals in a 3D-ish way (layers / neurons / connections). The original goal was just to see what’s inside these models instead of treating them like a black box. That said, my version is pretty rough, and I’m very aware that someone who actually knows what they’re doing could’ve built something way better :p So I figured I’d ask here: Does something like this already exist, but done properly? If yes, I’d much rather use that. For reference, this is really good: https://bbycroft.net/llm …but you can’t upload new LLMs. Thanks! submitted by /u/sultan_papagani
[link] [comments] -
🔗 r/york York uni golf umbrella? rss
V random, but does anyone know where I could get a York uni golf umbrella this week? obvs tried the official website and eBay but no luck! needed for Valentine’s Day.
submitted by /u/eastyorkshirepudding
[link] [comments] -
🔗 remorses/critique critique@0.1.87 release
hunks:
- critique hunks add <id> now appends dirty submodule diffs before hunk lookup
- Fixes mismatch where IDs listed by critique hunks list for submodule changes could fail with "Hunk not found"
- No external contributors in this release.
-
🔗 pranshuparmar/witr v0.2.7 release
What's Changed
- Bump the actions group with 3 updates by @dependabot[bot] in #161
- perf: Change to the standard append to the end of the slice and perform a single reverse (Reverse) at the end in ancestry.go by @ArNine in #163
- perf: Merge commands for obtaining Listening information in darwin by @ArNine in #164
- Main PR by @pranshuparmar in #167
New Contributors
- @dependabot[bot] made their first contribution in #161
Full Changelog: v0.2.6...v0.2.7 -
🔗 r/Leeds Attending Leeds Crown Court - Advice please rss
I'm interested in visiting the Leeds Crown Court as a citizen and wondering if there is anyone on this sub that could give detailed advice on how it works, what my rights are to view public hearings and what to expect. Is it interesting, is it worth visiting, etc. Thankyou so much for your time.
submitted by /u/BenjiD123
[link] [comments] -
🔗 r/reverseengineering joshuanwalker/Raiders2600: Reverse Engineering Raiders of the Lost Ark for the Atari 2600 rss
submitted by /u/tnavda
[link] [comments] -
🔗 r/york Best parmo in York? rss
I’ve lived here 7 years and have not yet had a good parmo. I miss them. Most of the time I’ve had one they’ve either been cold, cut into weird chunks or had underdone cheese/goopy bechamel/soggy chicken.
Help a former Teessider out!
submitted by /u/Autoembourgeoisement
[link] [comments] -
🔗 r/Yorkshire Victorian child chimney sweep guard spotted rss
submitted by /u/snakeoildriller
[link] [comments]
-
🔗 r/LocalLLaMA Qwen3 Coder Next as first "usable" coding model < 60 GB for me rss
I've tried lots of "small" models < 60 GB in the past. GLM 4.5 Air, GLM 4.7 Flash, GPT OSS 20B and 120B, Magistral, Devstral, Apriel Thinker, previous Qwen coders, Seed OSS, QwQ, DeepCoder, DeepSeekCoder, etc. So what's different with Qwen3 Coder Next in OpenCode or in Roo Code with VSCodium?
- Speed : The reasoning models would often, yet not always, produce rather good results. However, now and then they'd enter reasoning loops despite correct sampling settings, leading to no results at all in a large overnight run. Aside from that, the sometimes extensive reasoning takes quite some time for the multiple steps that OpenCode or Roo would induce, slowing down interactive work a lot. Q3CN on the other hand is an instruct MoE model, doesn't have internal thinking loops and is relatively quick at generating tokens.
- Quality : Other models occasionally botched the tool calls of the harness. This one seems to work reliably. Also I finally have the impression that this can handle a moderately complex codebase with a custom client & server, different programming languages, protobuf, and some quirks. It provided good answers to extreme multi-hop questions and made reliable full-stack changes. Well, almost. On Roo Code it was sometimes a bit lazy and needed a reminder to really go deep to achieve correct results. Other models often got lost.
- Context size : Coding on larger projects needs context. Most models with standard attention eat all your VRAM for breakfast. With Q3CN having 100k+ context is easy. A few other models also supported that already, yet there were drawbacks in the first two mentioned points.
I run the model this way:
set GGML_CUDA_GRAPH_OPT=1
llama-server -m Qwen3-Coder-Next-UD-Q4_K_XL.gguf -ngl 99 -fa on -c 120000 --n-cpu-moe 29 --temp 0 --cache-ram 0

This works well with 24 GB VRAM and 64 GB system RAM when there's (almost) nothing else on the GPU. Yields about 180 TPS prompt processing and 30 TPS generation speed for me.
- `--temp 0`? Yes, works well for instruct for me, no higher-temp "creativity" needed. Prevents the very occasional issue that it outputs an unlikely (and incorrect) token when coding.
- `--cache-ram 0`? The cache was supposed to be fast (30 ms), but I saw 3 second query/update times after each request. So I didn't investigate further and disabled it, as it's only one long conversation history in a single slot anyway.
- `GGML_CUDA_GRAPH_OPT`? Experimental option to get more TPS. Usually works, yet breaks processing with some models.
OpenCode vs. Roo Code :
Both solved things with the model, yet with OpenCode I've seen slightly more correct answers and solutions. But: Roo asks by default about every single thing, even harmless things like running a syntax check via command line. This can be configured with an easy permission list to not stop the automated flow that often. OpenCode on the other hand just permits everything by default in code mode. One time it encountered an issue, uninstalled and reinstalled packages in an attempt to solve it, removed files and drove itself into a corner by breaking the dev environment. Too autonomous in trying to "get things done", which doesn't work well on bleeding edge stuff that's not in the training set. Permissions can of course also be configured, but the default is "YOLO".
Aside from that: Despite running with only a locally hosted model, and having disabled update checks and news downloads, OpenCode (Desktop version) tries to contact a whole lot of IPs on start-up.
submitted by /u/Chromix_
[link] [comments] -
🔗 r/Yorkshire Had to stop and brew up this morning rss
submitted by /u/Acceptable-Truth-912
[link] [comments]
-
🔗 r/LocalLLaMA PR opened for Qwen3.5!! rss
https://github.com/huggingface/transformers/pull/43830/ Looking at the code at src/transformers/models/qwen3_5/modeling_qwen3_5.py, it looks like Qwen3.5 series will have VLMs right off the bat! submitted by /u/Mysterious_Finish543
[link] [comments]
-
🔗 Register Spill Joy & Curiosity #73 rss
A year ago, on this very newsletter, I wondered: how might AI change programming?
Here are some of the questions I asked in that post:
" Will we write docstrings at the top of files that aren't meant to be read by humans, but by LLMs when they ingest the file into their context window?"
" Will we see a melting of language servers and LLMs?"
" What will change once we start to optimize code and processes around code purely for the reader, because the writer's a machine?"
" Will we change how we modularize code and switch to writing many smaller programs because they're easier for LLMs to digest than large codebases?"
It's been a year and now most of these questions sound naive to me. Of course we'll write documentation for agents, language servers seem dead, and absolutely one hundred percent are we optimizing code for readability over writability, except that now the reader is also an agent. And small programs? Yes, we're all optimizing codebases for the agents now.
Here's a little anecdote for you, to show what happened in a year.
On Tuesday, I was on a call with Tim and Camden to discuss something about our new architecture, and they suggested that we use UUIDs everywhere. Hmm, I don't know, UUIDs aren't a silver bullet you know, they do come with downsides, I said. But we don't have those downsides, they said, because our tables are literally a few hundred rows in this setup. Right, right, I said, but UUIDs are kinda ugly and when you look at them they don't give you any insights.
On Thursday, Tim then said: hey, didn't you just say on Raising An Agent that you need to optimize for agents, not for humans, even at the cost of human developer experience? And I don't remember what exactly I said in response, but it boiled down to: you'll see, and then I will say that I told you so, UUIDs are ugly.
Then yesterday, on Saturday, I realized Tim's right. Who am I kidding. Agents will read far more UUIDs than I ever will in the future. I had an aesthetic objection to something I'll barely see. The agents, though, they will deal with the UUIDs and they love them.
-
We recorded another episode of Raising An Agent. Quinn and I talk about where the frontier of these coding agents is moving to, why we are going to kill the Amp editor extension and why we think neither the sidebar nor the text editor is the future, and, finally, we talk about how wild it is to build in AI land and how every playbook software companies had in the last twenty, thirty years is now outdated. The only winning move now is to accept that the board will be flipped at random intervals. It's 55 minutes long and a condensed version of what I'd tell you this evening if you and I went out for beers.
-
Recorded another short video: "Is this the bet you want to take? While everything around us is changing?"
-
My colleague Lewis wrote a wonderful post about giving agents feedback: Feedback Loopable. There are so many good ideas in there: the arrow, the URL updating, the logs, the debug/REPL/CLI thing. Highly recommend it.
-
Hey, seriously, watch this talk: Rich Hickey - Simple Made Easy. I've linked to it before, I've tweeted about it many times, but this week I had to find out (and then digest and recover) that some of my colleagues hadn't seen it. So now I'm here and I'm telling you that this might very well be the greatest talk about programming ever given. I'm not kidding. I'm not exaggerating. I mean it. Not a week goes by in which I don't think of it. I'm rearchitecting a system now and when I close my eyes I can see Rich standing there, one hand on the podium, the other in the air, hanging down, and him saying "…and you end up with this knot." Go and watch the talk. Don't complect.
-
Martin Alderson: "Two kinds of AI users are emerging. The gap between them is astonishing." There's a lot of great stuff in there. The first point about people being stuck in Copilot is very interesting, isn't it? If your product is a text box, then it looks like all the other text boxes. But some text boxes have actual genies behind them and others don't. You, as a user, can't tell in advance. The other points he makes about enterprises shooting themselves in the foot with their security restrictions are very interesting too.
-
Monday was my birthday and I got a fantastic gift: the Xteink X4! Yes, it's a tiny, tiny e-reader. My mini-review, after having not read at all on it this week yet: very light, very small, very fun -- the software seems unfinished, it feels a bit hacky, it's a bit of a pain in the ass to transfer files to it, but there are a lot of articles and browser extensions on how to get the most out of it, there are also custom wallpapers, and an open-source firmware you can flash on it, and people are using their agents to write scripts for it, and I had Amp clone and extend the Send to X4 browser extension for me so that it fixes some broken epub formatting. Fun!
-
Talking about text boxes, here's Julian Lehr, Creative Director at Linear, with his case against conversational interfaces.
-
Mitchell: My AI Adoption Journey. "Through this journey, I've personally reached a point where I'm having success with modern AI tooling and I believe I'm approaching it with the proper measured view that is grounded in reality. I really don't care one way or the other if AI is here to stay, I'm a software craftsman that just wants to build stuff for the love of the game. The whole landscape is moving so rapidly that I'm sure I'll look back at this post very quickly and laugh at my naivete." Great post.
-
And here's DHH, roughly 6 weeks after I interviewed him and couldn't get a word in when he said that he doesn't believe in the hype and that agents can't write code he likes, telling his employees how to use agents.
-
Fantastic blog post: A Broken Heart. Read it, I swear you won't regret it. Great writing, great bug, great debugging. And -- you might not even notice, because of how calmly it's woven into the rest -- great use of agents.
-
Brendan Gregg is joining OpenAI. What a gig for him! There are very few places in the world right now where the relationship between performance and business value is as strong as it is there.
-
Also: Yehuda Katz is joining Vercel to work on v0. The next big framework programmer going to build developer tooling with AI. Because that's where the leverage is.
-
But then here's Jose Valim, another big framework guy but one who turned into language guy, explaining why he thinks Elixir is the best language for AI. I respect Valim immensely, he's one of this generation's greatest programmers, but I couldn't help reading this and thinking: does it matter? doc strings? As if GPT-5.2 wasn't a thing. The point with the tooling stands though. Remember when some languages flipped how they print stack traces so that the most important line is printed last, so that the developer reading them in the terminal can immediately see it without scrolling up? What's the equivalent for agents going to be?
-
And here's someone arguing that the age of frameworks is over, but that software engineering ("the true one") is back: "Automation and boilerplating have never been so cheap to overcome. I've been basically never writing twice the same line of code. I'm instantly building small tools I need, purpose built, exactly shaped around the problem at hand. I don't need any fancy monorepo manager. A simple Makefile covers 100% of my needs for 99% of my use cases. When things will get very complicated, and if they get very complicated, I'll think about it. But only then. Not a second before. This is engineering. You solve the problem you have, not the problem someone on a conference stage told you that you'll eventually have." I agree that agents solve many of the same problems that frameworks are solving, but the overlap isn't 100%. Frameworks will continue to be around but look vastly different in a few years.
-
Related: Start all of your commands with a comma. This seems very smart and while I don't have that much in my ~/bin, I'm intrigued. But I'm also wondering: won't the agents think it's a typo? Won't they get it wrong at least once every time they try to run a command? You know, as if they were trying to plug a USB-A thing in.
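For reference, the comma trick is just a naming prefix on your personal scripts, so that typing `,` and hitting Tab completes only your own commands and never collides with system binaries. A minimal sketch (the `,hello` script name is a made-up example):

```shell
# Convention: prefix personal scripts in ~/bin with a comma.
mkdir -p "$HOME/bin"    # assumed to be on your PATH

cat > "$HOME/bin/,hello" <<'EOF'
#!/bin/sh
echo "hello from a comma-command"
EOF
chmod +x "$HOME/bin/,hello"

# Run it; interactively you'd type `,hel<Tab>` to complete it.
"$HOME/bin/,hello"
```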
-
So, John Collison and Dwarkesh Patel interviewed Elon Musk and two of them drank Guinness. Now, I'm aware that by linking to this episode I risk receiving angry letters telling me that I shall not promote Musk and by linking to a conversation with him I endorse this and that. I'm aware, but I do think it's possible to listen to someone talk and find them interesting and providing food for thought without agreeing with them. That's what happened when I listened to this episode. I kept thinking about how crazy this is: data centers in space to generate tokens. Maybe it will actually happen? Wow. I also kept thinking about how Musk views problems and engineering challenges, and how he always wants to remove the next bottleneck, and how everything is a manufacturing question to him. Everything, as if he's in a game of Factorio. Building one thing isn't enough, to solve the problem you need to build the factory that builds the things. I do think that listening to this episode and reading the commentary around it is interesting, because energy and GPUs are at the heart of the transformation we're going through. It's also interesting because xAI is joining SpaceX and SpaceX is about to IPO and you have to wonder how much of this podcast is part of the IPO pitch.
-
This tweet by Rasmus is worth reading. And so too is the reply by Protty (that's the Zig contributor, ex-TigerBeetle, hardcore hacker Protty). My personal, very boring take that's actually so boring that it often makes me wonder whether I might just not be smart enough to see what others apparently see: I don't think today's software is buggier than the software I used in 1998 or in 2002 or in 2010. I also don't think the software back then was better. What I do think is that the Lindy effect exists in software too and that's why Vim is something we should put in a shrine but not that all software from 1992 is great.
-
cdixon in 2013: what the smartest people do on the weekend is what everyone else will do during the week in ten years.
-
2013, again, this time Jason Cohen: The Code is your Enemy. Prescient, right? I mean: "The weakness is the same as your strength as they often are: Your love of creation. You love to write clean, tested, scalable, extensible, beautiful code. You love converting 'JTBDs' into 960-wide artwork. You love developing an entire app in the browser against a scalable back-end. And because you love it, you do it. You wake up in the morning thinking about what you can make, not how you can sell. You open Visual Studio before you consult your to-do list because there's something you just need to tweak. You launch xterm before your CRM (if you even have one, which you don't) because the server was running just a tad slower than you'd expect and you want to paw through log files."
-
"Clawdbot is a boutique, nerdy project right now, but consider it as an underlying trend going forward: when the major consumer LLMs become smart and intuitive enough to adapt to you on-demand for any given functionality - when you'll eventually be able to ask Claude or ChatGPT to do or create anything on your computer with no Terminal UI - what will become of 'apps' created by professional developers? I especially worry about standalone utility apps: if Clawdbot can create a virtual remote for my LG television (something I did) or give me a personalized report with voice every morning (another cron job I set up) that work exactly the way I want, why should I even bother going to the App Store to look for pre-built solutions made by someone else? What happens to Shortcuts when any 'automation' I may want to carefully create is actually just a text message to a digital assistant away?" That's by Federico Viticci. I think he has programming chops, but I don't think he's worked as a software engineer and, well, now he's also seeing it: a lot of software is going to die in the next few years. Don't make the mistake and think that there'll be announcements or funerals.
-
Here's stevey with a very stevey but calm-and-reflective-stevey post about Anthropic, and the idea of a Golden Age that companies go through, and about a hundred other things too: The Anthropic Hive Mind. This is stevey at his best. And, coming back to what Viticci wrote, the closing paragraphs are very good: "If you have a strictly online or SaaS software presence, with no atoms in your product whatsoever, just electrons, then you are, candidly, pretty screwed if you don't pivot. I don't think there are any recipes for pivoting yet; this is all new, and it's all happening very fast. But there is a yellow brick road: spending tokens. This golden shimmering trail will lead your company gradually in the right direction. Your organization is going to have to learn a bunch of new lessons, as new bottlenecks emerge when coding is no longer the bottleneck. You need to start learning those bespoke organizational lessons early. The only way to know for sure that you're learning those lessons is if people are out there trying and making mistakes. And you can tell how much practice they're getting from their token spend." Here's my recipe for how to walk the yellow brick road, from December 2025. I'd update it to say: use deep mode in Amp. GPT-5.2 and GPT-5.3 -- that's the frontier now.
-
Wirth's Revenge. I really enjoyed this one. I don't agree with quite a few things in there but that's what made it stick with me and maybe I'll change my opinions because of it. Good stuff.
-
An invitation by Nolan Lawson to mourn our craft. "Someday years from now we will look back on the era when we were the last generation to code by hand. We'll laugh and explain to our grandkids how silly it was that we typed out JavaScript syntax with our fingers. But secretly we'll miss it."
-
Domenic Denicola: "But they haven't solved the need to plan and prioritize and project-manage. And by making even low-priority work addictive and engaging, there's a real possibility that programmers will be burning through their backlog of bugs and refactors, instead of just executing on top priorities faster. Put another way, while AI agents might make it possible for a disciplined team to ship in half the time, a less-disciplined team might ship following the original schedule, with beautifully-extensible internal architecture, all P3 bugs fixed, and several side projects and supporting tools spun up as part of the effort."
-
Nicholas Carlini at Anthropic "tasked Opus 4.6 using agent teams to build a C Compiler, and then (mostly) walked away." That's a milestone we'll think back to even next year, I'd say. But, of course, people have moved the goalposts out of the stadium already and are saying that the code the compiler produced is slower than GCC's at -O0. See you in the parking lot! But there's another interesting bit here, at the end: "So, while this experiment excites me, it also leaves me feeling uneasy. Building this compiler has been some of the most fun I've had recently, but I did not expect this to be anywhere near possible so early in 2026. The rapid progress in both language models and the scaffolds we use to interact with them opens the door to writing an enormous amount of new code. I expect the positive applications to outweigh the negative, but we're entering a new world which will require new strategies to navigate safely." Why do statements like these always sound so hollow when they come from people working at Anthropic?
-
Steven Sinofsky, who's seen quite a few platform and paradigm shifts from up close: "Death of Software. Nah." He's saying that "there will be more software than ever before. This is not just because of AI coding or agents building products or whatever. It is because we are nowhere near meeting the demand for what software can do." And "new tools will be created with AI that do new things." And also: "Finally, it is absolutely true that some companies will not make it. It is even true that in some very long time, longer than a career or generation, every company will be completely different or their product line and organization will have dramatically changed. This will not broadly happen on any investing timeline."
-
Jo Kristian Bergum with some very good thoughts on the future: "few things are worth building." The value of 10k lines of code is approaching $0, he says, and a lot of things will disappear along with the value these lines once held. "What survives? Systems that compress hard-won insights agents would have to rediscover at enormous token cost. Systems that operate on a cheaper substrate than inference. Systems that solve hard universal problems agents can't route around easily. Systems built for how agents actually work, not how we wish they worked." The point about the "cheaper substrate" is something I flip back and forth on. Let's see how it plays out.
-
David Crawshaw after "eight more months of agents": "I am having more fun programming than I ever have, because so many more of the programs I wish I could find the time to write actually exist. I wish I could share this joy with the people who are fearful about the changes agents are bringing. The fear itself I understand, I have fear more broadly about what the end-game is for intelligence on tap in our society. But in the limited domain of writing computer programs these tools have brought so much exploration and joy to my work."
-
Yesterday evening, to my great delight, I found out that there's a documentary on Netflix about The New Yorker's 100th anniversary. Why did no one tell me about this? Next time, please do. That's why I write this newsletter. But anyway: delightful and very good. Also, if you've never listened to it, I very often think of David Remnick's voice in this 2016 episode of the Longform podcast.
-
Now that's a headline: Notepad++ Hijacked by State-Sponsored Hackers. And here's a very interesting, very screenshot-heavy deep dive into how the attack works. But I want to read the New Yorker version of this. Who targets Notepad++? There has to be an amazing story behind this.
If you also think programming in five years will look completely different than from what it is now, you should subscribe:
-
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release
## New releases
- [DeepExtract](https://github.com/marcosd4h/DeepExtractIDA): 0.0.9 -
🔗 Baby Steps Fun With Dada rss
Waaaaaay back in 2021, I started experimenting with a new programming language I call "Dada". I've been tinkering with it ever since and I just realized that (oh my gosh!) I've never written even a single blog post about it! I figured I should fix that. This post will introduce some of the basic concepts of Dada as it is now.
Before you get any ideas, Dada isn't fit for use. In fact the compiler doesn't even really work because I keep changing the language before I get it all the way working. Honestly, Dada is more of a "stress relief" valve for me than anything else1 - it's fun to tinker with a programming language where I don't have to worry about backwards compatibility, or RFCs, or anything else.
That said, Dada has been a very fertile source of ideas that I think could be applicable to Rust. And not just for language design: playing with the compiler is also what led to the new `salsa` design2, which is now used by both rust-analyzer and Astral's ty. So I really want to get those ideas out there!

I took a break, but I'm back baby!
I stopped hacking on Dada about a year ago3, but over the last few days I've started working on it again. And I realized, hey, this is a perfect time to start blogging! After all, I have to rediscover what I was doing anyway, and writing about things is always the best way to work out the details.
Dada started as a gradual programming experiment, but no longer
Dada has gone through many phases. Early on, the goal was to build a gradually typed programming language that I thought would be easier for people to learn.
The idea was that you could start writing without any types at all and just execute the program. There was an interactive playground that would let you step through and visualize the "borrow checker" state (what Dada calls permissions) as you go. My hope was that people would find that easier to learn than working with a type checker.
I got this working and it was actually pretty cool. I gave a talk about it at the Programming Language Mentoring Workshop in 2022, though skimming that video it doesn't seem like I really demo'd the permission modeling. Too bad.
At the same time, I found myself unconvinced that the gradually typed approach made sense. What I wanted was that when you executed the program without type annotations, you would get errors at the point where you violated a borrow. And that meant that the program had to track a lot of extra data, kind of like miri does, and it was really only practical as a teaching tool. I still would like to explore that, but it also felt like it was adding a lot of complexity to the language design for something that would only be of interest very early in a developer's journey4.
Therefore, I decided to start over, this time, to just focus on the static type checking part of Dada.
Dada is like a streamlined Rust
Dada today is like Rust but streamlined. The goal is that Dada has the same basic "ownership-oriented" feel of Rust, but with a lot fewer choices and nitty-gritty details you have to deal with.5
Rust often has types that are semantically equivalent, but different in representation. Consider `&Option<String>` vs `Option<&String>`: both of them are equivalent in terms of what you can do with them, but of course Rust makes you carefully distinguish between them. In Dada, they are the same type. Dada also makes `&Vec<String>`, `&Vec<&String>`, `&[String]`, `&[&str]`, and many other variations all the same type too. And before you ask, it does it without heap allocating everything or using a garbage collector.

To put it pithily, Dada aims to be "Rust where you never have to call `as_ref()`".
Dada also has a fancier borrow checker, one which already demonstrates much of the borrow checker within, although it doesn't have view types. Dada's borrow checker supports internal borrows (e.g., you can make a struct that has fields that borrow from other fields) and it supports borrow checking without lifetimes. Much of this stuff can be brought to Rust, although I did tweak a few things in Dada that made some aspects easier.
Dada targets WebAssembly natively
Somewhere along the line in refocusing Dada, I decided to focus exclusively on building WebAssembly components. Initially I felt like targeting WebAssembly would be really convenient:
- WebAssembly is like a really simple and clean assembly language, so writing the compiler backend is easy.
- WebAssembly components are explicitly designed to bridge between languages, so they solve the FFI problem for you.
- With WASI, you even get a full featured standard library that includes high-level things like "fetch a web page". So you can build useful things right off the bat.
WebAssembly and on-demand compilation = compile-time reflection almost for free
But I came to realize that targeting WebAssembly has another advantage: it makes compile-time reflection almost trivial. The Dada compiler is structured in a purely on-demand fashion. This means we can compile one function all the way to WebAssembly bytecode and leave the rest of the crate untouched.
And once we have the WebAssembly bytecode, we can run that from inside the compiler! With wasmtime, we have a high quality JIT that runs very fast. The code is even sandboxed!
So we can have a function that we compile and run during execution and use to produce other code that will be used by other parts of the compilation step. In other words, we get something like miri or Zig's comptime for free, essentially. Woah.
Wish you could try it? Me too!
Man, writing this blog post made ME excited to play with Dada. Too bad it doesn't actually work. Ha! But I plan to keep plugging away on the compiler and get it to the point of a live demo as soon as I can. Hard to say exactly how long that will take.
In the meantime, to help me rediscover how things work, I'm going to try to write up a series of blog posts about the type system, borrow checker, and the compiler architecture, all of which I think are pretty interesting.
-
Yes, I relax by designing new programming languages. Doesn't everyone? ↩︎
-
Designing a new version of `salsa` so that I could write the Dada compiler in the way I wanted really was an epic yak shave, now that I think about it. ↩︎ -
I lost motivation as I got interested in LLMs. To be frank, I felt like I had to learn enough about them to understand if designing a programming language was "fighting the last war". Having messed a bunch with LLMs, I definitely feel that they make the choice of programming language less relevant. But I also think they really benefit from higher-level abstractions, even more than humans do, and so I like to think that Dada could still be useful. Besides, it's fun. ↩︎
-
And, with LLMs, that period of learning is shorter than ever. ↩︎
-
Of course this also makes Dada less flexible. I doubt a project like Rust for Linux would work with Dada. ↩︎
-