to read (pdf)
- I don't want your PRs anymore
- JitterDropper | OALABS Research
- DomainTools Investigations | DPRK Malware Modularity: Diversity and Functional Specialization
- EXHIB: A Benchmark for Realistic and Diverse Evaluation of Function Similarity in the Wild
- Neobrutalism components - Start making neobrutalism layouts today
- May 07, 2026
-
🔗 r/Leeds What should I do about the stressed koi at a restaurant? rss
I'm at a restaurant in Leeds, I'm sure you could figure out which one, which has a koi pond in the middle of the restaurant. It's covered by a large bridge and a thick mesh, and the fish are showing classic signs of stress (not moving, sitting near the bottom, jumping out of the water, and gasping at the surface). Is there a way for me to advocate for better health for them, or is it a lost cause since they are the restaurant's property and technically taken care of? Sorry if this is silly; it just makes me sad to see them in a bad state.
submitted by /u/moonstone7152
[link] [comments] -
🔗 r/york Goose on Dame Judi Dench Walk rss
Honk submitted by /u/NervousEnergy
[link] [comments] -
🔗 crosspoint-reader/crosspoint-reader SD Card Fonts (m1-b4) release
Pre-built .cpfont font files for CrossPoint Reader. Download individual files or use Settings > System > Download Fonts on the device.
See SD Card Fonts documentation for details.
-
🔗 r/Leeds I love this spot. rss
Sidenote : anyone going warehouse this coming Tuesday ?
submitted by /u/Auriv3x
[link] [comments] -
🔗 r/york York City Parade rss
View from the bus! submitted by /u/York_shireman
[link] [comments] -
🔗 r/reverseengineering The first FREE online WebAssembly Reverse Engineering workbench (and how we built it) rss
submitted by /u/TrustSig
[link] [comments] -
🔗 earendil-works/pi v0.74.0 release
Changed
- Updated repository links and package references for the move to earendil-works/pi-mono and @earendil-works/* package scopes.
-
🔗 The Pragmatic Engineer The Pulse: AI load breaks GitHub – why not other vendors? rss
Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover Big Tech and startups through the lens of senior engineers and engineering leaders. Today, we cover one out of four topics from last week's The Pulse issue. Full subscribers received the article below seven days ago. If you've been forwarded this email, you can subscribe here.
GitHub's reliability has been beyond unacceptable recently: last month, third-party measurements pinned it at one nine (right at 90%). This month, reliability has been down to zero nines (86%), per a third-party tracker, and last week things got even worse: a frankly embarrassing data integrity incident, more outages, and, eventually, a partial explanation from GitHub.
Data integrity incident
Last Thursday (23 April), this happened: PRs merged via the merge queue using the squash merge method produced incorrect merge commits when the merge group contained more than one PR. Commits were reverted from subsequent merges: basically, commits were "lost" from the code that was merged!
Thanks to a bug GitHub introduced, the service broke its integrity promise that pull requests would be merged as expected when using squash merge, a technique typically used to combine multiple small commits into a single, meaningful commit. This is a big deal: data integrity promises are among the most important ones for a service like GitHub.
A total of 2,092 pull requests were impacted, and companies hit by the outage included Modal and Zipline. Effectively, GitHub pushed a bunch of work on affected customers who had to manually untangle and recover lost commits, which GitHub could offer zero assistance with.
Customers had to manually go through their git history and restore missing code. After following manual recovery steps (reverting the squash commit and re-applying commits one by one), all commits should have been recovered.
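The manual recovery steps described above (revert the bad squash commit, then re-apply the original commits one by one) can be sketched with plain git in a throwaway repository. This is an illustrative reproduction, not GitHub's actual procedure: the repository, branch names, file contents, and commit messages below are all made up.

```shell
# Reproduce the failure mode and recovery in a throwaway repo:
# a squash merge that "lost" a commit, then the manual fix.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main          # assumes git >= 2.28 for -b
git config user.email dev@example.com
git config user.name Dev

echo base > file.txt
git add file.txt
git commit -qm "base"

# Feature branch with two small commits
git checkout -q -b feature
echo one >> file.txt
git commit -qam "feat: one"
echo two >> file.txt
git commit -qam "feat: two"

# Simulate the bad squash merge: main ends up with only part of the work
git checkout -q main
echo one >> file.txt
git commit -qam "squash: feature (second commit lost)"

# Recovery step 1: revert the incorrect squash commit
git revert --no-edit HEAD
# Recovery step 2: re-apply the original commits one by one
git cherry-pick main..feature

cat file.txt   # base, one, two - all work recovered
```

Reverting rather than resetting keeps the shared history intact, which matters once the bad squash commit has already been pushed to a remote others are pulling from.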
GitHub later emailed the list of affected commits to customers, but it's odd that GitHub executives seemed to downplay the nature of this outage. After all, an outage that messes with data integrity is a much bigger deal than something like a fall in availability where no data is corrupted.
Can Duruk, software engineer at Modal, was unhappy about GitHub's muted response to the outage:
"The COO going out of their way to find a huge denominator to make the impact appear small feels very dishonest; versus a sincere apology about how this invalidates their entire promise to their customers. We had to dig into their status page about this to even realize they just casually f***ed up our repo."
Outages don't stop
On Monday (27 April), pull requests and issues disappeared from GitHub's web UI:
Pull requests go missing. Source: Mario Zechner
Issues also not to be found. Source: David Cramer
This had to do with an Elasticsearch outage on GitHub's backend: the cluster became overloaded and went down. So, while pull requests, issues, and projects didn't vanish altogether, they also didn't show up during the 6-hour-long outage.
There were other outages this week:
- Some pull requests not showing up (Tuesday, 28 April)
- Problems with some GitHub Actions (the same day)
- Incomplete pull requests in repositories (Wednesday, 29 April)
Also on Tuesday (28 April), security firm Wiz disclosed a critical security issue whereby a bad actor could get access to all repositories on GitHub and GitHub Enterprise Server using only a git push command. GitHub fixed the issue on GitHub.com within six hours, but GitHub Enterprise Server instances that have not been updated remain vulnerable.
Famous open source contributor quits GitHub in frustration
On Tuesday, Mitchell Hashimoto, founder of HashiCorp and creator of Ghostty, announced that GitHub was unfit for professional work and that he was moving Ghostty, the open source terminal that's his main focus, off the platform. Mitchell's reasoning was dead simple: being on GitHub makes him unproductive (emphasis mine):
"The past month I've kept a journal where I put an "X" next to every date where a GitHub outage has negatively impacted my ability to work. Almost every day has an X. On the day I am writing this post, I've been unable to do any PR review for ~2 hours because there is a GitHub Actions outage. This is no longer a place for serious work if it just blocks you out for hours per day, every day.
It's not a fun place for me to be anymore. I want to be there, but it doesn't want me to be there. I want to get work done and it doesn't want me to get work done. I want to ship software and it doesn't want me to ship software.
I want it to be better, but I also want to code. And I can't code with GitHub anymore. I'm sorry. After 18 years, I've got to go. I'd love to come back one day, but this will have to be predicated on real results and improvements, not words and promises."
Mitchell's experience suggests that GitHub's official status page is inaccurate from the point of view of a heavy user like himself. The third-party "missing GitHub status page" is likely a better estimate: it puts GitHub's reliability at zero nines, with 85.51% uptime. That means a part of GitHub was down for 2-3 hours per day, on average, for the last 90 days (!!)
Reliability woes: GitHub "not a place for serious work." Source: The Missing GitHub Status Page
Mitchell's complaint sounds straightforward:
- As a professional software engineer, it's important to have tools that help you get work done
- For months, GitHub has got in the way of his work on open source projects via a flood of outages
- It makes no sense to use a product unfit for professional work.
- As GitHub shows no signs of improvement, it's worthwhile to move to a different solution which just works
CTO blames AI agent-fuelled load spike
GitHub CTO Vlad Fedorov shared an update on why reliability has been terrible for months at GitHub. He identified much-bigger-than-expected load from AI agents as the culprit. GitHub shared charts illustrating this:

This chart looks eye-catching - but there's just one tiny issue: no Y axis! So, while it tells the story of the load going up slowly and then very fast, we're not told by how much. However, I managed to get data from GitHub, and below is the chart showing the actual load increase over two years:

A load increase of ~3.5x, spread across two years, doesn't seem so brutal at first glance. It is nothing like a load increase of 10x in a month, and a good chunk of it occurred in recent months. So, why can't GitHub handle it? In a blog post, Fedorov said:
"A pull request can touch Git storage, mergeability checks, branch protection, GitHub Actions, search, notifications, permissions, webhooks, APIs, background jobs, caches, and databases. At large scale, small inefficiencies compound: queues deepen, cache misses become database load, indexes fall behind, retries amplify traffic, and one slow dependency can affect several product experiences."
Here's how the per-second load numbers from January 2023 and today compare:

GitHub took 15 years to achieve the 2023 numbers, and maybe it expected to continue growing in a comparable way in the future. If so, some engineering decisions about long-term infrastructure improvements would have been made obsolete by the arrival of AI agents.
To add to GitHub's challenges, the company is in the midst of a migration from its own data centers to Azure. GitHub started the move in October last year, a project expected to take 12 months, because it was already hitting capacity constraints in its own data centers.
Such large-scale infrastructure migrations are hard enough when the load on a service is relatively stable; just making sure nothing breaks takes a lot of effort. But moving at a time when load is spiking means that bugs can cause more visible outages. Of course, GitHub can secure a lot more compute capacity on Azure, now they know what to expect.
But other major companies prepared for a 10x increase in infra load, so why not Microsoft / GitHub? A year ago, I did research on how Big Tech was preparing to respond to the impact of AI on their business. Google was improving its internal systems to accommodate for a 10x increase in load. As we covered in The Pragmatic Engineer, in July last year:
"Google is preparing for 10x more code to be shipped. A former Google Site Reliability Engineer (SRE) told me:
"What I'm hearing from SRE friends is that they are preparing for 10x the lines of code making their way into production."
If any company has data on the likely impact of AI tools, it's Google. 10x as much code generated will likely also mean 10x more: code review, deployments, feature flags, source control footprint and, perhaps, even bugs and outages, if not handled with care."
Predicted enormous load increases were not secret knowledge within the industry, yet it seems GitHub was blissfully ignorant of their potential size. According to Fedorov, GitHub did eventually plan to increase capacity by 10x, but only in October 2025, months later. In February 2026, the company adjusted that expectation to 30x. He wrote:
"We started executing our plan to increase GitHub's capacity by 10X in October 2025 with a goal of substantially improving reliability and failover. By February 2026, it was clear that we needed to design for a future that requires 30X today's scale."
There's also the question of whether GitHub miscalculated how much time it had to prepare for explosive load growth, and whether it was caught off guard when that growth materialized at the start of this year, months sooner than expected.
Given GitHub only started to prepare for a major load increase in October, its current problems are unsurprising. At the scale of GitHub, it's common enough for each team owning a service to plan a year ahead on how much load their service will have, and hardware resources like storage, VMs, and networking are allocated accordingly. Load planning can account for up to half of the preparations, and when reality doesn't conform to plans, some systems can struggle to scale up.
So, on one hand, dealing with a 3.5x increase in load over 2 years should not be such a big deal for most services; especially not ones which can be horizontally scaled (when there's not much state, and scaling is achieved simply by adding new nodes.) But GitHub probably stores a lot more state with pull requests, workflows, projects, etc. This probably makes scaling more tricky when it comes to databases and systems running workflows.
GitHub also has 18 years of tech debt on its hands, and thousands of staff to align as "organizational overhead." As its service load grows faster than before, responding is harder due to all that accumulated "debt":
- Tech debt: many systems at the company are 10+ years old and are likely patched up, making them more difficult and risky to change
- Organizational debt: around 4,000 people work at GitHub, of whom 1,000 are engineers. Teams have dependencies with each other, and even seemingly simple work can require dozens of engineers to work together
- Customer expectations: GitHub cannot break customer workflows, even if doing so would mean changes to systems happen faster
GitHub finds itself in the 'innovator's dilemma': the company became successful because it built developer workflows that made sense, pre-AI, and it used to be able to accurately forecast service load changes. But now that engineering teams' workflows include AI agents, GitHub's own workflows are not necessarily the best fit, and the company failed to forecast service-level changes.
Other vendors floored by AI load? Not really
One thing that doesn't add up about the situation is that other vendors presumably experiencing similar load spikes don't appear to be suffering from reliability issues as much. Vercel, Linear, Resend, Railway, Sentry, and other infra providers see record-level growth thanks to AI, but keep up with the load.
Yes, it's true that AI vendors like Anthropic, OpenAI, and Cursor have some reliability issues, but it's not at the scale of GitHub's. GitHub's direct competitors, GitLab and Bitbucket, presumably see load going up similarly, but they're not going down as much.
An obvious question is how much of GitHub's pain is self-inflicted. With Microsoft as owner, it has more resources at its disposal than any competitor or startup, yet it failed to predict load increases and is too big to respond with the nimbleness of a startup.
It's undeniable that solving for a major load increase is a hard challenge; it's where the difference between average and standout engineering teams becomes apparent. GitHub hasn't been responding like a world-class engineering org.
GitHub alternatives?
Every regular user of GitHub feels the pain of ongoing outages. As a dev, you can either hope Microsoft will eventually improve reliability, or seek alternatives. As covered above, Mitchell has chosen to quit and is currently deciding where to take Ghostty.
The obvious alternatives are GitHub's biggest competitors, GitLab and Bitbucket. Each offers Git hosting, and neither suffers the uptime woes that GitHub is experiencing.
Self-hosted solutions are also an option, like self-hosting your git repo, or going with a self-hosted forge like Forgejo, which is an open source, local-first GitHub alternative.
I also suspect that, soon enough, we'll see startups offering GitHub-like code hosting capabilities, while offering more robust uptime and being architected to handle the 30x-or-more scale which GitHub hopes one day to support.
Read the full issue of last week's The Pulse, or check out this week's The Pulse. This week's issue covers:
- Did Anthropic turn hostile on devs because capacity was running low?
- Amazon finally allows Claude Code and Codex usage
- Meta forcefully assigns engineers to data labelling ahead of job cuts
- New trend: small "AI-forward" teams
- Industry Pulse: why Meta tracks employees' computer activity, OpenAI starts to move off Datadog, Apple lets slip it uses Claude Code, GitHub -> Xbox transfers at Microsoft, VS Code inserted "co-authored by Copilot" even when Copilot did nothing, analysis of the Coinbase layoffs
-
🔗 r/wiesbaden Dinner for two on a Friday? rss
Hi, I'd like to go out for dinner with a friend on a Friday in Wiesbaden. It should be cozy and not too loud, with an atmosphere where you can have a good conversation. There should be vegan/vegetarian options. I'd be very grateful for your tips, as I don't know the area that well.
submitted by /u/JohnTheMonkey2
[link] [comments] -
🔗 r/Leeds why is everyone in fancy dress? rss
I'm in the city centre right now and just wondering why everyone is dressed up? I thought it was the Otley Run, but now I'm unsure because the people in fancy dress are everywhere. This is just me being nosy, but I can't find any info about it online, so I was wondering if anyone knows.
submitted by /u/MeowTS13
[link] [comments] -
🔗 Simon Willison Notes on the xAI/Anthropic data center deal rss
There weren't a lot of big new announcements from Anthropic at yesterday's Code w/ Claude event, but the biggest by far was the deal they've struck with SpaceX/xAI to use "all of the capacity of their Colossus data center".
As I mentioned in my live blog of the keynote, that's the one with the particularly bad environmental record. The gas turbines installed to power the facility initially ran without Clean Air Act permits or pollution control devices, which they got away with by classifying them as "temporary". Credible reports link it to increases in hospital admissions relating to low air quality.
Andy Masley, one of the most prolific voices pushing back against misleading rhetoric about data centers (see The AI water issue is fake and Data center land issues are fake), had this to say about Colossus:
I would simply not run my computing out of this specific data center
I get that Anthropic are severely compute-constrained, but in a world where the very existence of "AI data centers" is a red-hot political issue (see recent news out of Utah for a fresh example), signing up with this particular data center is a really bad look.
There was a lot of initial chatter about how this meant xAI were clearly giving up on their own Grok models, since all of their capacity would be sold to Anthropic instead. That was a misconception - Anthropic are getting Colossus 1, but xAI are keeping their larger Colossus 2 data center for their own work.
As an interesting side note, the night before the Anthropic announcement, xAI sent out a deprecation notice for Grok 4.1 Fast and several other models providing just two weeks' notice before shutdown, reported here by @xlr8harder from SpeechMap:

This is terrible @xai. I just spent time and money to migrate to grok 4.1 fast, and you're disabling it with less than two weeks notice, after releasing it in November, with no migration path to a fast/cheap alternative.
I will never depend on one of your products again.
Here's SpeechMap's detailed explanation of how they selected Grok 4.1 Fast for their project in March.
Were xAI serving those models out of Colossus 1?
xAI owner Elon Musk (who previously delighted in calling Anthropic "Misanthropic") tweeted the following:
By way of background for those who care, I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed. [...]
After that, I was ok leasing Colossus 1 to Anthropic, as SpaceXAI had already moved training to Colossus 2.
And then shortly afterwards:
Just as SpaceX launches hundreds of satellites for competitors with fair terms and pricing, we will provide compute to AI companies that are taking the right steps to ensure it is good for humanity.
We reserve the right to reclaim the compute if their AI engages in actions that harm humanity.
Presumably the criteria for "harm humanity" are decided by Elon himself. Sounds like a new form of supply chain risk for Anthropic to me!
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 r/wiesbaden Experiences with Autohaus Can on Wiesbadener Str.? rss
Who has experience with the above-mentioned dealer? Reputable or not?
submitted by /u/HagebuddneLard
[link] [comments] -
🔗 r/LocalLLaMA WARNING: Open-OSS/privacy-filter MALWARE rss
There's this new "model" on Hugging Face titled Open-OSS/privacy-filter, which is actually a customized infostealer virus. It's a fake version of the OpenAI privacy filter. It uses a Python-based dropper (loader.py) that downloads a malicious PowerShell command from the internet, which spawns another PowerShell command, downloads a shady EXE file, and runs it via Task Scheduler.
Here's a behavior analysis of what the EXE does: https://tria.ge/260507-tnftrsfx5x/behavioral1
I also reported both the dropper and the EXE to Microsoft.
I also reported the repo to HF.
If you use Linux (which is easier to use for AI/ML) you are unaffected as this is a Windows virus.
submitted by /u/charles25565
[link] [comments] -
🔗 tomasz-tomczyk/crit v0.11.0 release
What's Changed
Big milestone! Crit crossed more than 500 commits and 250 stars. You can now install it directly from homebrew and we released a Windows version!
Thank you to everyone who contributed to get us here! I'd appreciate if you would share it with your colleagues or on Twitter! It helps a lot!
crit is now in homebrew-core, so no tap is needed. If you installed from the tap, upgrade once with:
brew uninstall crit && brew install crit
Future updates will arrive via brew upgrade like any other formula.
Windows + WSL support
feat: add Windows + WSL support replaces Unix-only syscalls with cross-platform abstractions, adds rundll32 browser launch on native Windows, and keeps the existing WSL fallback chain. crit now works end-to-end on Windows natively.
- feat: add Windows + WSL support by @tomasz-tomczyk in #459
General
- feat: add --file flag and better errors to crit comment --json by @tomasz-tomczyk in #480
- fix: deny rather than silently auto-approve on daemon shutdown by @tomasz-tomczyk in #483 - Thank you @TalAmuyal for raising!
- fix: remove daemon 1h idle timeout by @tomasz-tomczyk in #477 - Thank you @TalAmuyal for reporting!
- fix: audit fixes — path safety, shared reads, dir pruning by @tomasz-tomczyk in #485
- fix: chain reloadForScope when scope/commit changes mid-flight by @tomasz-tomczyk in #482
- fix: scope unified diff comment highlight to commented side by @tomasz-tomczyk in #479
- fix: header context chip colors and hidden unresolved count by @tomasz-tomczyk in #486
- fix: preserve CLI argument order for files by @tomasz-tomczyk in #474
- docs: switch primary brew install to homebrew-core by @tomasz-tomczyk in #481 - thanks @omervk for contributing to homebrew on our behalf!
- docs: cleanup stale spec by @tomasz-tomczyk
- refactor: drop auto-detection of stacked PRs / local stacks by @tomasz-tomczyk in #478
Full Changelog :
v0.10.5...v0.11.0 -
🔗 earendil-works/pi v0.73.1 release
New Features
- Self-update support for the npm scope migration: pi update --self now supports the upcoming package rename from @mariozechner/pi-coding-agent to @earendil-works/pi-coding-agent. After the new package is published, existing global installs can update through the normal self-update flow; pi will uninstall the old global package and install the package name returned by the version check endpoint.
- Interactive OAuth login selection: OAuth providers can now present multiple login choices in /login, enabling provider-specific interactive authentication flows. See Providers.
- JSONC-style models.json parsing: models.json now allows comments and trailing commas, making custom provider and model configuration easier to maintain. See Providers and Custom Providers.
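With the JSONC-style parser, a models.json can now carry comments and trailing commas. The keys below are placeholders chosen only to show the syntax, not pi's real schema; see the Providers and Custom Providers docs for the actual fields.

```jsonc
{
  // Comments are now legal in models.json
  "someProvider": {
    "note": "placeholder keys, not the real schema",
    "models": [
      "model-a",
      "model-b", // trailing commas are legal too
    ],
  },
}
```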
Added
- Added interactive login selection support so OAuth providers can present multiple login choices (#4190 by @mitsuhiko).
Changed
- Changed pi update --self to honor the active package name returned by the Pi version check endpoint, defaulting to the current package when omitted and uninstalling the old global package before installing a renamed package.
- Changed extension loading to use upstream jiti 2.7 instead of the @mariozechner/jiti fork (#4244 by @pi0).
- Changed models.json parsing to allow comments and trailing commas (#4162 by @julien-c).
Fixed
- Fixed pi -p treating prompts that start with YAML frontmatter as extension flags instead of user messages (#4163).
- Fixed pending tool results not updating in the live TUI after toggling thinking block visibility while the tool is running (#4167).
- Fixed /copy reporting success on Linux without writing the clipboard on Wayland-only compositors (Hyprland, Niri, ...) by skipping the X11-only native addon on Linux and routing through wl-copy/xclip/xsel instead (#4177).
- Fixed HTML session exports to strip skill wrapper XML from rendered user messages (#4234 by @aliou).
- Fixed OpenAI-compatible chat completion streams that interleave content and tool-call deltas in the same choice.
- Fixed OpenAI Codex OAuth refresh failures writing directly to stderr while the TUI is active (#4141).
- Fixed OpenAI Codex Responses requests to send a non-empty system prompt (#4184).
- Fixed Kimi For Coding model resolution for the Kimi K2 P6 alias (#4218).
- Fixed Kitty inline image redraws to stay within TUI-owned terminal regions and avoid writing below the active viewport.
- Fixed Kitty inline image rendering by letting the terminal allocate image ids and bounding parsed image ids to valid values.
- Fixed inline image capability detection to disable inline images in cmux terminals.
-
🔗 r/Leeds Leeds cycle lane network is a 'step in the right direction', say campaigners rss
Just wanted to add a bit of positivity around the new cycle lanes in Leeds, as there seems to be a lot of negativity whenever the topic comes up.
Speaking from personal experience, they’ve genuinely changed my life for the better. Up until last year, I hadn’t really ridden a bike since I was a teenager. But after seeing more segregated cycle lanes appear around my area, I realised I could get from my house into the city centre in under 30 minutes almost entirely on protected infrastructure.
I've started cycling regularly, and eventually I sold my car altogether. I now use my bike every other day for commuting, trips into town, canal rides etc etc. I’m healthier, happier, saving loads of money, and honestly enjoy getting around Leeds far more now. It's hilly in parts but stick to a low gear and it's perfectly manageable, ebikes are great alternatives too and can be purchased through the cycle to work schemes (I saved hundreds on my bike).
I also cycle year-round, and I think people massively overestimate how “hardcore” cycling is in the UK. Our weather really isn’t that different from places like the Netherlands. Most of the time you’re completely fine with a decent jacket.
I know the network still has gaps and improvements to make, but for me it’s been a massive step in the right direction and has made cycling feel accessible to normal people again, not just super confident road cyclists.
Just wondering if anyone else has had a similar experience or enjoys using the bike lanes too?
submitted by /u/_testingdude
[link] [comments] -
🔗 jj-vcs/jj v0.41.0 release
About
jj is a Git-compatible version control system that is both simple and powerful. See the installation instructions to get started.
Release highlights
- jj fix now supports formatting specific line ranges (allowing you to format only modified lines); see the configuration manual and notes below for more.
- The new global flag --no-integrate-operation will let you run a command without impacting the repo state or the working copy, which is useful when automated tools may create snapshots in the background.
Breaking changes
- The --pattern flag for file search now defaults to regex: instead of glob:.
- jj git push --all/--tracked/-r REVSETS no longer fails when revisions to push are private or have conflicts. Bookmarks which aren't eligible to push will be skipped.
- Branch/bookmark patterns passed to jj git clone are now saved to jj's repo settings file instead of .git/config. Git fetch refspecs are set to the default value.
Deprecations
- In the templating language, the Operation type's .tags() function has been deprecated in favor of .attributes().
New features
- The --pattern flag for file search now accepts various pattern kinds through kind:pattern syntax.
- A new global flag --no-integrate-operation lets you run a command without impacting the repo state or the working copy.
- A new config option diff.git.show-path-prefix can be used to suppress the a/ and b/ path prefixes in the diff --git output.
- jj fix now supports line range-limited formatting via the fix.tools.<name>.line-range-arg and run-tool-if-zero-line-ranges configs. This allows running tools only on modified lines and fine-grained control over when the tool is run. If you have set the line-range-arg config, use --all-lines to match the previous behavior of formatting the entire file.
- A new replace(pattern, content, replacement) template function is added which supports replacement of content in templates, using a lambda to format replacement text. It supports all string patterns, including regexes with capture groups (e.g. replace(regex:'(\w+) (\w+)', "hello world", |c| c.get(1) ++ " " ++ c.get(2))).
- New ByteString template type for things like file content.
- jj gerrit upload now supports the new options --message (-m), --edit and --merged. You can now also pass multiple hashtags by repeating the --hashtag option.
- New remotes.<name>.fetch-bookmarks / fetch-tags options to configure default fetch targets.
- JJ_PAGER can now override the ui.pager config, matching JJ_EDITOR for callers that need a jj-specific environment override.
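The line-range feature above is configured per tool. Only the key names fix.tools.<name>.line-range-arg and run-tool-if-zero-line-ranges come from these release notes; the tool name, command, patterns, and the argument template below are illustrative assumptions, so check the configuration manual for the real placeholder syntax.

```toml
# Sketch of a jj config entry; values are hypothetical.
[fix.tools.my-formatter]
command = ["my-formatter", "--stdin"]
patterns = ["glob:'src/**/*.rs'"]
# Pass each modified line range to the tool instead of formatting whole files
# (the exact argument template is an assumption)
line-range-arg = "--lines=$ranges"
# Skip running the tool entirely when a file has no modified lines
run-tool-if-zero-line-ranges = false
```

With line-range-arg set, jj fix touches only modified lines; per the notes above, pass --all-lines to restore the previous whole-file behavior.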
Fixed bugs
- Improving consistency with git handling of .gitignore, including / after entries and \r\r\n for MacOS files.
- jj status filters untracked paths by fileset. #9287
- Improved performance for snapshotting, visibly improving jj status speed for large repositories.
- Pre-existing Git submodule directories are no longer considered conflicts in checkouts. #8065
- Fixed a panic in jj gerrit upload when run without -r and the inferred revision was immutable. #9398
- jj status respects path filters in working copy summaries.
- jj git remote rename/remove now updates the trunk() alias.
- Commands would sometimes incorrectly diagnose a stale working copy and suggest running jj op integrate when it would have no effect. This should now be much less likely to happen in practice. #9314
Contributors
Thanks to the people who made this release happen!
- Adrian Freund (@freundTech)
- ase (@adamse)
- Austin Seipp (@thoughtpolice)
- Benjamin Tan (@bnjmnt4n)
- Björn Kautler (@Vampire)
- David Higgs (@higgsd)
- David Rieber (@drieber)
- dzaima (@dzaima)
- Federico G. Schwindt (@fgsch)
- Gaëtan Lehmann (@glehmann)
- hewigovens (@hewigovens)
- Ilya Grigoriev (@ilyagr)
- jonmeow (@jonmeow)
- Joseph Lou (@josephlou5)
- Josh McKinney (@joshka)
- Jun Mukai (@jmuk)
- Lucas Garron (@lgarron)
- Martin von Zweigbergk (@martinvonz)
- Matt Stark (@matts1)
- Maximilian Gaß (@mxey)
- OlshaMB (@OlshaMB)
- Philip Metzger (@PhilipMetzger)
- rayaq
- Remo Senekowitsch (@senekor)
- rishiad (@rishiad)
- Ryan Patterson (@CGamesPlay)
- Sebastian Barfurth (@sbarfurth)
- Thomas Axelsson (@thomasa88)
- xtqqczze (@xtqqczze)
- Yuya Nishihara (@yuja)
-
🔗 r/reverseengineering VLC Media Player MKV Exploit Analysis rss
submitted by /u/eshard-cybersec
[link] [comments] -
🔗 r/york Different angles on one perfect subject 💫 rss
submitted by /u/Coffee000Oopss
[link] [comments] -
🔗 r/Yorkshire 'We're all human': Reform response to Sheffield candidate accused of Nazi praise rss
submitted by /u/johnsmithoncemore
[link] [comments] -
🔗 r/Leeds Does anyone else remember when you could buy cats at kirkgate market ? rss
And pirated DVDs, before 2010, and other crazy stuff, or am I confusing it with the wrong place? I'm pretty sure we got a cat from there some time in the 2000s, but I could be wrong.
submitted by /u/TipAdditional4625
[link] [comments] -
🔗 Console.dev newsletter honker rss
Description: Durable queues for SQLite.
What we like: Adds pub/sub, task queue, and event streams to SQLite. No need for client polling or a broker. Shipped as a SQLite extension with bindings for Python, Node, Rust, Go, Ruby, etc. Allows an INSERT and enqueue as part of the same transaction (with rollback). Also supports cron.
What we dislike: Polling is via a SELECT per millisecond per database, which should be lightweight, but is an extra high-frequency query. Still experimental.
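The transactional enqueue is the interesting part: because the queue lives in the same SQLite file as the application data, a row insert and its corresponding job commit or roll back together. honker's actual API will differ; the sketch below only illustrates that outbox-style pattern with plain tables and Python's stdlib `sqlite3` (table and function names are made up):

```python
import sqlite3

# In-memory database standing in for the application's SQLite file.
db = sqlite3.connect(":memory:")
db.executescript("""
    create table orders (id integer primary key, item text);
    create table queue  (id integer primary key, payload text);
""")

def place_order(item: str) -> None:
    # One transaction: the order row and its queue entry
    # either both commit or both roll back.
    with db:
        db.execute("insert into orders (item) values (?)", (item,))
        db.execute("insert into queue (payload) values (?)", (item,))

place_order("coffee")
# A consumer would poll the queue table and delete rows it has processed.
jobs = [row[0] for row in db.execute("select payload from queue")]
print(jobs)  # ['coffee']
```

If `place_order` raised between the two inserts, neither row would land, which is exactly the guarantee a broker-plus-database setup struggles to give without two-phase commit.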
-
🔗 Console.dev newsletter Plow rss
Description: HTTP benchmarking.
What we like: Runs HTTP requests and benchmarks latency and response codes. Configurable concurrency, duration, request count, and ramp up time. Outputs stats to the terminal in real time. Supports JSON output and provides a web UI.
What we dislike: Pretty straightforward HTTP request support, including different methods e.g. POST (with body). For more complex benchmarks, k6 is a good, scriptable alternative.
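Under the hood, tools like this reduce to a worker pool issuing timed requests and aggregating latencies and status codes. A minimal sketch of that loop in stdlib Python (not Plow's code; the request counts and URL are placeholders):

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def bench(url: str, requests: int = 20, concurrency: int = 4) -> dict:
    """Fire `requests` GETs with `concurrency` workers; report latency and codes."""
    def one(_: int) -> tuple:
        start = time.perf_counter()
        with urlopen(url) as resp:
            resp.read()
            code = resp.status
        return time.perf_counter() - start, code

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        samples = list(pool.map(one, range(requests)))

    latencies = sorted(lat for lat, _ in samples)
    codes: dict = {}
    for _, code in samples:
        codes[code] = codes.get(code, 0) + 1
    return {
        "p50": statistics.median(latencies),
        "p99": latencies[int(0.99 * (len(latencies) - 1))],
        "codes": codes,
    }
```

A scriptable tool like k6 effectively lets you replace the `one` function with arbitrary request logic; Plow keeps it fixed but adds ramp-up and live terminal output.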
-
- May 06, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-05-06 rss
-
🔗 r/Harrogate Harrogate Traffic Relief rss
The traffic in and around Harrogate is a joke, and has been commented on for as long as I can remember.
but I’m curious, I’ve no idea how to solve it, so what are people’s suggestions? It seems to me there’s just nowhere to speed up flow or reroute around bottlenecks.
Better buses? Bypasses? How do we fix it?
submitted by /u/CyclePrevious9043
[link] [comments] -
🔗 Jeremy Fielding (YouTube) Wall-E Is Getting Complicated. rss
If you want to join my community of makers and tinkerers, consider getting a YouTube membership 👉 https://www.youtube.com/@JeremyFieldingSr/join
If you want to chip in a few bucks to support these projects and teaching videos, please visit my Patreon page or Buy Me a Coffee. 👉 https://www.patreon.com/jeremyfieldingsr 👉 https://www.buymeacoffee.com/jeremyfielding
Social media, websites, and other channel
Instagram https://www.instagram.com/jeremy_fielding/?hl=en Twitter 👉https://twitter.com/jeremy_fielding TikTok 👉https://www.tiktok.com/@jeremy_fielding0 LinkedIn 👉https://www.linkedin.com/in/jeremy-fielding-749b55250/ My websites 👉 https://www.jeremyfielding.com 👉https://www.fatherhoodengineered.com My other channel Fatherhood engineered channel 👉 https://www.youtube.com/channel/UC_jX1r7deAcCJ_fTtM9x8ZA
Notes:
Technical corrections
Nothing yet
-
🔗 @HexRaysSA@infosec.exchange New training updates, plus Spring discounts: mastodon
New training updates, plus Spring discounts:
• On-demand Starter → 20% off with code STR20
• AI-powered Intermediate → 40% off (May 12) with code AI-INTER40
• Malware, Decompiler & Programming → 30% off with code SPRING30
Details + course breakdown: https://hex-rays.com/blog/spring-training-sale-2026
*Limited time offer, check blog for expiration dates! -
🔗 r/LocalLLaMA ZAYA1-8B: Frontier intelligence density, trained on AMD rss
submitted by /u/carbocation
[link] [comments] -
🔗 r/york Moving back - flat hunting rss
I'm coming home! So excited to be moving back but slightly worried about finding a flat after a few years abroad. I know the drill since the last time I lived there, but wanted to see if anything has changed - do things still move at the speed of light - by the time something hits Rightmove, it's already full of viewings and likely to be gone tomorrow - is that still the case?
I can't remember what month most student lets turn over / when the most availability is...? (I know the new system may impact this)
Should I just book a hotel and wait till I'm in town to sort out viewings? (and trust I'll find somewhere within a week?)
Budget is 1.1-1.5k, would like to be relatively near the uni. I know the dust is still settling from the new Renters' rights and I've read so many posts on here about where to look/ agents to avoid etc, but curious how things feel locally lately.
Last but not least - any anecdotes for getting pets approved since the rule changes? Any differences between getting a cat approved (vs dogs)?
Thanks!
submitted by /u/fruitloopfitness
[link] [comments] -
🔗 r/Leeds Does anyone remember Toyworld Megastore? rss
As a kid I loved this toy shop. It was on the Headrow, attached to the Headrow Shopping Centre (later turned into The Core, now demolished), to the right of the entrance; the same unit later became GAME. It seems to have had a very short lifespan, opening and closing in the mid 2000s, though there was another store on the top floor of the Headrow Shopping Centre in the 90s.
Some of the only info I can find online, is my own reddit post from 3 years ago, https://www.reddit.com/r/Leeds/comments/z57afp/does_anyone_remember_toyworld_megastore/
I'd love to find a photo of the store, or literally any info/memories - it's basically all gone and I'm so annoyed at myself for not having saved the one photo that existed 3 years ago.
Thank you in advance!
submitted by /u/Same_Ability3423
[link] [comments] -
🔗 r/Yorkshire Silktone Waggonway rss
I create short forgotten-history videos around Yorkshire, and Barnsley specifically; here's my latest short, Silkstone Waggonway. submitted by /u/9arke1
[link] [comments] -
🔗 Hex-Rays Blog New Training Formats, New Workflows, New Skills rss
-
🔗 Simon Willison Live blog: Code w/ Claude 2026 rss
I'm at Anthropic's Code w/ Claude event today. Here's my live blog of the morning keynote sessions.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 r/Leeds An afternoon in Leeds. rss
Today I got a lovely change of pace, a hot tap in Leeds just 2 miles from where I live which is great because I've been perpetually up near Consett and Seal Sands sorting out P11's and staying in impersonal hotels and pubs.
So I had a wander into Leeds City Centre on a weekday after sorting out the permits; the change of pace compared to the weekend is huge. It's been years since I've been into Leeds during the week for leisure.
Found it nice to just wander. I'm just having a coffee in the indoor market. My wife's coming through after she finishes work and I'm treating us to Blue Sakura.
Just some aimless musing. Leeds is a good place and it deserves some aimless musing over a nice coffee.
submitted by /u/EdwardJSuperman
[link] [comments] -
🔗 r/LocalLLaMA None of this will ever get stolen rss
It's crazy that they're thinking of doing this. There are problems with people stealing catalytic converters off people's cars, and now they want to put a rack outside your house!? submitted by /u/martin_xs6
[link] [comments] -
🔗 r/york Lost keys rss
I lost a set of keys with a black carabiner on them, two old style keys and one modern one, within the nunnery lane area.
Any leads?
I'm really worried 😓
submitted by /u/soupygirls
[link] [comments] -
🔗 Simon Willison Vibe coding and agentic engineering are getting closer than I'd like rss
I recently talked with Joseph Ruscio about AI coding tools for Heavybit's High Leverage podcast: Ep. #9, The AI Coding Paradigm Shift with Simon Willison. Here are some of my highlights, including my disturbing realization that vibe coding and agentic engineering have started to converge in my own work.
One thing I really enjoy about podcasts is that they sometimes push me to think out loud in a way that exposes an idea I've not previously been able to put into words.
Vibe coding and agentic engineering are starting to overlap
A few weeks after vibe coding was first coined I published Not all AI-assisted programming is vibe coding (but vibe coding rocks), where I firmly staked out my belief that "vibe coding" is a very different beast from responsible use of AI to write code, which I've since started to call agentic engineering.
When Joseph brought up the distinction between the two I had a sudden realization that they're not nearly as distinct for me as they used to be:
Weirdly though, those things have started to blur for me already, which is quite upsetting.
I thought we had a very clear delineation where vibe coding is the thing where you're not looking at the code at all. You might not even know how to program. You might be a non-programmer who asks for a thing, and gets a thing, and if the thing works, then great! And if it doesn't, you tell it that it doesn't work and cross your fingers.
But at no point are you really caring about the code quality or any of those additional constraints. And my take on vibe coding was that it's fantastic, provided you understand when it can be used and when it can't.
A personal tool for you, where if there's a bug it hurts only you, go ahead!
If you're building software for other people, vibe coding is grossly irresponsible because it's other people's information. Other people get hurt by your stupid bugs. You need to have a higher level than that.
This contrasts with agentic engineering where you are a professional software engineer. You understand security and maintainability and operations and performance and so forth. You're using these tools to the highest of your own ability. I'm finding the scope of challenges I can take on has gone up by a significant amount because I've got the support of these tools.
But I'm still leaning on my 25 years of experience as a software engineer.
The goal is to build high quality production systems: if you're building lower quality stuff faster, I think that's bad. I want to build higher quality stuff faster. I want everything I'm building to be better in every way than it was before.
The problem is that as the coding agents get more reliable, I'm not reviewing every line of code that they write anymore, even for my production level stuff.
I know full well that if you ask Claude Code to build a JSON API endpoint that runs a SQL query and outputs the results as JSON, it's just going to do it right. It's not going to mess that up. You have it add automated tests, you have it add documentation, you know it's going to be good.
But I'm not reviewing that code. And now I've got that feeling of guilt: if I haven't reviewed the code, is it really responsible for me to use this in production?
The thing that really helps me is thinking back to when I've worked at larger organizations where I've been an engineering manager. Other teams are building software that my team depends on.
If another team hands over something and says, "hey, this is the image resize service, here's how to use it to resize your images"... I'm not going to go and read every line of code that they wrote.
I'm going to look at their documentation and I'm going to use it to resize some images. And then I'm going to start shipping my own features. And if I start running into problems where the image resizer thing appears to have bugs or the performance isn't good, that's when I might dig into their Git repositories and see what's going on. But for the most part I treat that as a semi-black box that I don't look at until I need to.
I'm starting to treat the agents in the same way. And it still feels uncomfortable, because human beings are accountable for what they do. A team can build a reputation. I can say "I trust that team over there. They built good software in the past. They're not going to build something rubbish because that affects their professional reputations."
Claude Code does not have a professional reputation! It can't take accountability for what it's done. But it's been proving itself anyway - time and time again it's churning out straightforward things and doing them right in the style that I like.
There's an element of the normalization of deviance here - every time a model turns out to have written the right code without me monitoring it closely there's a risk that I'll trust it at the wrong moment in the future and get burned.
The new challenge of evaluating software
It used to be if you found a GitHub repository with a hundred commits and a good readme and automated tests and stuff, you could be pretty sure that the person writing that had put a lot of care and attention into that project.
And now I can knock out a git repository with a hundred commits and a beautiful readme and comprehensive tests of every line of code in half an hour! It looks identical to those projects that have had a great deal of care and attention. Maybe it is as good as them. I don't know. I can't tell from looking at it. Even for my own projects, I can't tell.
So I realized what I value more than the quality of the tests and documentation is that I want somebody to have used the thing. If you've got a vibe coded thing which you have used every day for the past two weeks, that's much more valuable to me than something that you've just spat out and hardly even exercised.
The bottlenecks have shifted
If you can go from producing 200 lines of code a day to 2,000 lines of code a day, what else breaks? The entire software development lifecycle was, it turns out, designed around the idea that it takes a day to produce a few hundred lines of code. And now it doesn't.
It's not just the downstream stuff, it's the upstream stuff as well. I saw a great talk by Jenny Wen, who's the design leader at Anthropic, where she said we have all of these design processes that are based around the idea that you need to get the design right - because if you hand it off to the engineers and they spend three months building the wrong thing, that's catastrophic.
There's this whole very extensive design process that you put in place because that design results in expensive work. But if it doesn't take three months to build, maybe the design process can be a whole lot riskier because cost, if you get something wrong, has been reduced so much.
Why I'm still not afraid for my career
When I look at my conversations with the agents, it's very clear to me that this is moon language for the vast majority of human beings.
There are a whole bunch of reasons I'm not scared that my career as a software engineer is over now that computers can write their own code, partly because these things are amplifiers of existing experience. If you know what you're doing, you can run so much faster with them. [...]
I'm constantly reminded as I work with these tools how hard the thing that we do is. Producing software is a ferociously difficult thing to do. And you could give me all of the AI tools in the world and what we're trying to achieve here is still really difficult. [...]
Matthew Yglesias, who's a political commentator, yesterday tweeted, "Five months in, I think I've decided that I don't want to vibecode — I want professionally managed software companies to use AI coding assistance to make more/better/cheaper software products that they sell to me for money." And that feels about right to me. I can plumb my house if I watch enough YouTube videos on plumbing. I would rather hire a plumber.
On the threat to SaaS providers of companies rolling their own solutions instead:
I just realized it's the thing I said earlier about how I only want to use your side project if you've used it for a few weeks. The enterprise version of that is I don't want a CRM unless at least two other giant enterprises have successfully used that CRM for six months. [...] You want solutions that are proven to work before you take a risk on them.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 r/reverseengineering pyghidra-mcp Meets Ghidra GUI: Drive Project-Wide RE with Local AI rss
submitted by /u/onlinereadme
[link] [comments] -
🔗 r/york York station gateway what do you think? rss
submitted by /u/Coffee000Oopss
[link] [comments] -
🔗 r/Leeds I bought a job lot of antique postcards from Leeds off eBay rss
When I saw 50 antique postcards of Leeds on eBay for £20, it was a no-brainer of a buy!
Most date to the first decade of the 20th century and they include lovely, stylised images of streets that look so familiar but also very different. Some also have messages on the back, frankly irresistible to a nosy person such as myself.
I've posted a gallery of some of the best ones on my Leeds history newsletter, Bury the Leeds, which is free to read and to subscribe to.
https://burytheleeds.substack.com/p/looking-back-at-leeds-through-antique
My favourite is the image of Headingley from 1909 which includes the beast of a stump of the Shire Oak, an ancient tree that was said to have stood on Otley Road for 1,000 years. By the 20th century, only a hulking stump remained before that was destroyed during a storm in 1941. The Original Oak pub is named after it and so is the Skyrack, which is an old timey derivation of 'Shire Oak'.
I also love the one of the fashionable ladies promenading down Woodhouse Moor in 1904 and the very evocative shots of Briggate and Boar Lane, when trams ruled. You can really imagine how these busy streets must have sounded back then.
I'm giving the postcards away with a book I've made featuring some of my most interesting and unusual stories about the city. I know several r/Leeds redditors have ordered copies. I'm celebrating one year of this project now so thanks for the support and to the mods!
submitted by /u/bluetrainlinesss
[link] [comments] -
🔗 r/LocalLLaMA Bad news: Apple drops high-memory Mac Studio configs rss
Looks like Apple has quietly killed off the higher-memory Mac Studio options. The M3 Ultra Mac Studio is now only available with 96GB RAM. The 512GB option was already removed back in March, and now the 256GB config is gone too. Apple has said both the Mac Studio and Mac mini will stay supply-constrained for the next few months. The Mac mini is also stuck at 48GB RAM max for now. Probably their high-memory chip stock got too expensive to keep producing. This is a real bummer for us! Big unified memory configs were one of the few (relatively) affordable ways to run large models locally. I am glad I own the M3 Ultra 512, will definitely keep this on (my favorite local model is Qwen 397b atm). submitted by /u/jzn21
[link] [comments] -
🔗 r/Yorkshire Please get out there and vote May 7th (tomorrow.) rss
The North is often neglected by the government, so the best chance that YOU have to get the work done in your area is by voting in the local election tomorrow.
If you don’t know who to vote for, do your research and see who aligns more with your community. Vote based on who you believe will help your local area the most.
This isn’t a political soapbox post, I won’t tell you who to vote for. Just please, use your voice. There are a lot of cunts who just wanna use your seat and sit on it, and nothing will ever change. This is an important election with a lot of new voices who could genuinely help your local ward. I wish the best for your local area in the next 4 years and that’s why i’m making this post!
We don’t get a lot of chances to enact change, so it’s best to use it when we can.
submitted by /u/coolfunkDJ
[link] [comments] -
🔗 tomasz-tomczyk/crit Spotify popup-relay preview (bb4d9fb) release
WIP build of `crit` with `share_flow: "popup"` config support for SSO-protected crit-web instances.
Setup instructions: SPOTIFY-PREVIEW.md
Pair with crit-web branch `share-receiver-elixir` (commit `ed01b25`).
Built from commit `bb4d9fb` of branch `share-receiver`.
Feedback / issues: tomasz-tomczyk/crit-web#50
-
🔗 Anton Zhiyanov Solod v0.1: Go ergonomics, practical stdlib, native C interop rss
Solod (So) is a system-level language with Go syntax and zero runtime. It's designed for two main audiences:
- Go developers who want low-level control and zero-cost C interop, without having to learn a new language or standard library.
- C developers who like Go's style.
The initial version (let's call it v0) was focused on picking a subset of Go and translating it to C. The next logical step was to port Go's standard library and make it easier to interop with C. That's what the v0.1 release I'm presenting today is all about.
Standard library • SQLite bindings • Persistent map • Store and retrieve • Command-line interface • Performance • Wrapping up
Standard library
Solod v0.1 ships with the following stdlib packages ported from Go:
- `io`, `bufio`, and `fmt` — Abstractions and types for general-purpose I/O.
- `bytes`, `strings`, `strconv`, and `unicode/utf8` — Common byte and text operations.
- `slices` and `maps` — Generic heap-allocated data structures.
- `crypto/rand` and `math/rand` — Generating random data.
- `flag`, `os`, and `path` — Working with the command line and files.
- `log/slog` — Structured logging.
- `time` — Measuring and displaying time.
And a couple of its own packages:
- `mem` — Memory allocation with a pluggable allocator interface.
- `c` — Low-level C interop helpers.
In the following sections, I'll demonstrate some of the v0.1 features using a simple example: a persistent key-value store backed by SQLite.
SQLite bindings
Since So doesn't provide `database/sql` yet, we'll call SQLite directly through its C API. To do this, let's import the necessary headers with the `so:include` directive and generate extern declarations using the sobind tool:

```go
package main

import "solod.dev/so/c"

//so:include <sqlite3.h>

// SQLite constants.
//
//so:extern SQLITE_OK
const sqliteOK = 0

//so:extern SQLITE_ROW
const sqliteRow = 100

//so:extern SQLITE_DONE
const sqliteDone = 101

// SQLite types.
//
//so:extern
type sqlite3 struct{}

//so:extern
type sqlite3_stmt struct{}

//so:extern
type sqlite3_value struct{}

//so:extern
type sqlite3_callback func(any, int32, **c.Char, **c.Char) int32

// SQLite functions.
func sqlite3_open(filename string, ppDb **sqlite3) int32
func sqlite3_prepare_v2(db *sqlite3, zSql string, nByte int32, ppStmt **sqlite3_stmt, pzTail **c.ConstChar) int32
func sqlite3_step(arg0 *sqlite3_stmt) int32
func sqlite3_finalize(pStmt *sqlite3_stmt) int32
func sqlite3_close(arg0 *sqlite3) int32
func sqlite3_exec(arg0 *sqlite3, sql string, callback sqlite3_callback, arg3 any, errmsg **c.Char) int32

// more declarations...
```

The `so:extern` directive is required for constants (`sqliteOK`) and types (`sqlite3_stmt`). As for functions (`sqlite3_prepare_v2`), we can just declare them without a body — the transpiler will treat them as extern declarations even without `so:extern`.
Persistent map
With the SQLite API in place, let's implement a key-value type that wraps the database connection:
```go
// SQLMap is a simple key-value store backed by an SQLite database.
type SQLMap struct {
	db *sqlite3
}
```

Add a constructor that connects to an SQLite database and creates a table to store the items:

```go
var ErrCreate = errors.New("sqlmap: create schema failed")

const sqlCreate = "create table if not exists kv (key text primary key, val)"

// NewSQLMap creates a new SQLMap using the provided connection string.
// It opens a connection to the SQLite database and creates the underlying
// key-value table if it does not already exist.
//
// The caller is responsible for calling Close on the returned SQLMap
// when it is no longer needed.
func NewSQLMap(connStr string) (SQLMap, error) {
	var db *sqlite3
	rc := sqlite3_open(connStr, &db)
	if rc != sqliteOK {
		return SQLMap{}, ErrCreate
	}
	rc = sqlite3_exec(db, sqlCreate, nil, nil, nil)
	if rc != sqliteOK {
		sqlite3_close(db)
		return SQLMap{}, ErrCreate
	}
	return SQLMap{db}, nil
}

// Close releases resources associated with the SQLMap.
func (m *SQLMap) Close() {
	sqlite3_close(m.db)
}
```

As you can see, this So code looks a lot like regular Go code. However, there are some key differences:
- When compiled, the code is first translated to plain C, then compiled into a native binary using GCC or Clang.
- Unlike Go, there is no runtime (no automatic heap memory allocation, no garbage collection, no goroutine scheduler).
- There is no overhead when calling C functions, unlike Go's Cgo.
- The interop syntax is a bit cleaner. For example, Go's `string` (`sqlCreate` in the `sqlite3_exec` call) automatically decays to C's `const char*`.
Store and retrieve
First, let's implement the `Set` method:

```go
var (
	ErrPrepare = errors.New("sqlmap: prepare failed")
	ErrExec    = errors.New("sqlmap: exec failed")
)

const sqlSet = "insert or replace into kv (key, val) values (?, ?)"

// Set stores a string value for the specified key.
func (m *SQLMap) Set(key string, val string) error {
	var stmt *sqlite3_stmt
	rc := sqlite3_prepare_v2(m.db, sqlSet, -1, &stmt, nil)
	if rc != sqliteOK {
		return ErrPrepare
	}
	defer sqlite3_finalize(stmt)
	sqlite3_bind_text(stmt, 1, key, int32(len(key)), nil)
	sqlite3_bind_text(stmt, 2, val, int32(len(val)), nil)
	rc = sqlite3_step(stmt)
	if rc != sqliteDone {
		return ErrExec
	}
	return nil
}
```

No surprises here, just a bunch of SQLite API calls.
The `Get` method is more interesting:

```go
var ErrNotFound = errors.New("sqlmap: not found")

const sqlGet = "select val from kv where key = ?"

// Get returns the value associated with the specified key.
// The caller owns the returned string and must free it with mem.FreeString.
func (m *SQLMap) Get(a mem.Allocator, key string) (string, error) {
	var stmt *sqlite3_stmt
	rc := sqlite3_prepare_v2(m.db, sqlGet, -1, &stmt, nil)
	if rc != sqliteOK {
		return "", ErrPrepare
	}
	defer sqlite3_finalize(stmt)
	sqlite3_bind_text(stmt, 1, key, int32(len(key)), nil)
	rc = sqlite3_step(stmt)
	if rc == sqliteDone {
		return "", ErrNotFound
	}
	if rc != sqliteRow {
		return "", ErrExec
	}
	text := sqlite3_column_text(stmt, 0)
	tmp := c.String(text)
	result := strings.Clone(a, tmp)
	return result, nil
}
```

The pointer returned by `sqlite3_column_text` is managed by SQLite. It becomes invalid after calling `sqlite3_finalize` (which `Get` does before returning). Because of this, we need to allocate a copy of the returned value, using `strings.Clone` in this case.

So's approach to memory allocation is similar to Zig's — all heap allocations must be done explicitly by providing a specific instance of the `mem.Allocator` interface.

The caller, of course, must free the allocated string:

```go
func main() {
	m, err := NewSQLMap(":memory:")
	if err != nil {
		panic(err)
	}
	defer m.Close()

	m.Set("name", "Alice")
	name, err := m.Get(mem.System, "name")
	if err != nil {
		panic(err)
	}
	println("name =", name)
	mem.FreeString(mem.System, name)
}
```

```
name = Alice
```

Here, `mem.System` is a specific allocator that uses libc's `malloc` and `free`. Alternatively, we could use `mem.Arena` or any other implementation of the `mem.Allocator` interface:

```go
var buf [1024]byte // stack-allocated
arena := mem.NewArena(buf[:])
name, _ := m.Get(&arena, "name")
mem.FreeString(&arena, name) // no-op for arena; can be omitted
```

Command-line interface
With the `SQLMap` type in place, let's create a simple CLI using the `flag` package:

```go
var (
	opFlag  string
	keyFlag string
	valFlag string
)

func parseFlags() {
	flag.StringVar(&opFlag, "op", "", "operation: get, set, or del")
	flag.StringVar(&keyFlag, "key", "", "key name")
	flag.StringVar(&valFlag, "val", "", "value (for set operation)")
	flag.Parse()
}

func main() {
	parseFlags()
	// ...
}
```

Then add command routing:

```go
m, err := NewSQLMap("sqlmap.db")
check(err)
defer m.Close()

switch opFlag {
case "set":
	err = m.Set(keyFlag, valFlag)
	check(err)
case "get":
	val, err := m.Get(mem.System, keyFlag)
	check(err)
	println(val)
	mem.FreeString(mem.System, val)
case "del":
	err = m.Delete(keyFlag)
	check(err)
default:
	flag.Usage()
	os.Exit(1)
}
```

```
sqlmap -op=set -key=name -val=alice
sqlmap -op=get -key=name
alice
```

Again, no surprises here — the `flag` package works just as it does in Go.
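For contrast, here is the same key-value store expressed with Python's stdlib `sqlite3` — identical SQL (the `insert or replace` upsert and the keyed `select`), but with the runtime handling all the memory management that So pushes to the caller. This is my own sketch for comparison, not part of Solod:

```python
import sqlite3


class SQLMap:
    """Key-value store backed by SQLite, mirroring the So example's schema."""

    def __init__(self, conn_str: str = ":memory:") -> None:
        self.db = sqlite3.connect(conn_str)
        self.db.execute("create table if not exists kv (key text primary key, val)")

    def set(self, key: str, val: str) -> None:
        # Same upsert statement as the So version's sqlSet.
        with self.db:
            self.db.execute(
                "insert or replace into kv (key, val) values (?, ?)", (key, val)
            )

    def get(self, key: str) -> str:
        row = self.db.execute("select val from kv where key = ?", (key,)).fetchone()
        if row is None:
            raise KeyError(key)  # analogous to ErrNotFound
        return row[0]

    def delete(self, key: str) -> None:
        with self.db:
            self.db.execute("delete from kv where key = ?", (key,))

    def close(self) -> None:
        self.db.close()


m = SQLMap()
m.set("name", "Alice")
print(m.get("name"))  # Alice
```

The Python version is shorter mostly because garbage collection and the driver's string copying replace the explicit `mem.Allocator` plumbing.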
Solod isn't trying to outperform hand-tuned C. Still, performance matters: the code is benchmarked and optimized to run reasonably fast. Since So compiles to plain C and then to native code with full optimizations, the results are sometimes better than Go's.
Here are some highlights from the benchmarks:
- Buffered I/O is 3x faster than Go.
- String and byte operations are up to 2.5x faster.
- Maps are 1.5x faster for modifications.
- Integer formatting is 2x faster.
There are no GC pauses and no Cgo bridge cost when calling C libraries. The tradeoff is that you have to handle memory yourself, but as the SQLite example above shows, So's allocator interface makes that pretty manageable.
Wrapping up
Solod is still in its early days, but with the v0.1 release, it's ready for hobby projects. The already-ported parts of the Go standard library make it easy to write command-line tools (check out the `cat`, `head`, `sort`, and `wc` examples). Plus, with native C interop, you can build just about anything else you need.
If you're interested, take a look at So's readme — it has all the information you need to get started. Or try So online without installing anything.
-
🔗 r/york York Dungeon investigates 'poltergeist' after tumblers fall from shelves rss
submitted by /u/Unlikely-Tension-616
[link] [comments] -
🔗 sacha chua :: living an awesome life La semaine du 27 avril au 3 mai rss
Monday, April 27
I added real-time navigation to my subed.el package. It was already very handy for adding chapters to the transcript of my conversation with John Wiegley and Karthik Chikmagalur. It needs a small modification to convert the notes I took during the conversation.
I took my daughter to her gymnastics class. There was a substitute coach. I was delighted to see that the substitute wore a KN-95 mask without being asked.
I coordinated with my mother to install the BDO Pay app on my phone.
I prepared the pieces to sew my hat, like the one I had sewn for my daughter.
Tuesday the 28th
I took my daughter to Adventure Alley to play with her friends. It was a bit expensive, but my daughter had fun, so it's fine if we go there from time to time.
Wednesday the 29th
The replacement screen arrived at the Apple Store, so I'll go there tomorrow.
I rewrote part of the EmacsNewbie page on the EmacsWiki.
My daughter sewed my hat.
In Stardew Valley, we bought a pig and a sheep. We upgraded the coop to a big coop and added a kitchen to our house.
Thursday the 30th
I had a delightful chat with Prot about the Emacs editor experience for beginners.
My husband, my daughter, and I went cycling with her friend and her friend's father.
In Stardew, my daughter noticed that I had accidentally bought a cow, which I named Goat, instead of the goat I had planned to buy for the community center. Oops! She found it very funny and asked whether, when I finally buy a goat, I could name it Cow. The animals will be very confused, and so will I. I did it anyway.
Friday, May 1st
School had a substitute teacher and she didn't want to attend, so I told the school she would be absent and we compromised between homework and games.
We went to the Stockyards to buy fabric for her swimsuit. She found the two colours she wanted, but only one yard of one colour was left. We'll have to plan carefully. We bought thread at Michaels. She also bought a box of mochi puffs at Marry Me Mochi.
She sewed seams on my hat.
Saturday the 2nd
For breakfast, my daughter made a big omelette using six eggs. We feasted.
My daughter was grumpy because I drew attention to her fidgeting and she felt I was on her case.
The Apple Store couldn't repair my tablet's screen, so they replaced it with a new tablet for a small fee. The Apple Pencil turned out to be covered by my AppleCare+ warranty after all, but unfortunately it was out of stock everywhere in town, so I had to wait about a week.
Once home, I found that my daughter had calmed down. She and I played with Duplo, which is also a LEGO product, but bigger than normal. I used the bricks to show my daughter math concepts like permutations and combinations.
Sunday the 3rd
My husband and I biked downtown with my daughter in my cargo bike. My daughter and I tried the mochi at Kibo (it was delicious) before continuing on to MEC to look for a new water bottle to replace the one I lost. She didn't see anything she liked. We also bought a wooden mannequin to make sewing prototypes easier, and some watercolour pencils to explore.
Once home, my husband baked a sourdough loaf to give to our daughter's friend's father, following their conversation on Friday. My daughter and I worked on the plan for her swimsuit. She wanted a dress with a wrap bodice and a tulip-hem skirt. For the back, she wanted crossed straps with a small drop back.
I was tired, so I took a nap. My daughter came to wake me. I noticed my eyes were very dry, so she negotiated to bring me eye drops and administered them for 25 cents.
You can e-mail me at sacha@sachachua.com.
-
🔗 tomasz-tomczyk/crit v0.10.5 release
What's Changed
A maintenance release with broad fixes across the GitHub PR roundtrip, the comment-sync push/pull pipeline, and the local review UI — plus accessibility polish on the sidebar resize handles and a distinct "Approved" state on the review-finish modal.
General
- feat: distinct "Approved" state for review-finish modal by @tomasz-tomczyk in #427
- feat: keyboard-accessible sidebar resize handles by @tomasz-tomczyk in #469
- feat: per-round timeline backend (Stage 1) by @tomasz-tomczyk in #460
- style: bump comment input font-size to 14px by @tomasz-tomczyk in #441
- style: align textarea line-height with rendered comment bodies by @tomasz-tomczyk in #444
Fixes
- fix: tie agent goroutine to daemon shutdown ctx + add runGit helper by @tomasz-tomczyk in #433
- fix: small correctness nits (bulk parser err, scheduleWrite doc, dup mkdir) by @tomasz-tomczyk in #432
- fix: use 127.0.0.1 in internal HTTP clients to match daemon bind by @tomasz-tomczyk in #436 (Thanks @perbu for reporting)
- fix: clean message when running crit on a repo with no changes by @tomasz-tomczyk in #439 (Thanks @perbu for reporting)
- fix: hide TOC toggle for single-heading documents by @tomasz-tomczyk in #443
- fix: propagate local comment deletes to GitHub on push by @tomasz-tomczyk in #461
- fix: import GitHub thread resolved state on crit pull by @tomasz-tomczyk in #462
- fix: detect mid-push auth rotation and abort cleanly by @tomasz-tomczyk in #463
- fix: prefers-reduced-motion spinner gap; rename waitingHasComments; annotate reflow line by @tomasz-tomczyk in #465
- fix: relax comment drift detection for in-place edits by @tomasz-tomczyk in #466
- fix: atomically rewrite auth_token + identity on login by @tomasz-tomczyk in #468
- fix: close finish-review modal on backdrop click by @tomasz-tomczyk in #470
- fix: allow --range/--pr on clean working tree by @tomasz-tomczyk in #472 (Thanks @ewgdg for reporting!)
- fix: backward selection across blank-line boundary by @tomasz-tomczyk in #473 (Thanks Matt for reporting!)
Documentation
- docs: rewrite AGENTS.md with blocks by @tomasz-tomczyk in #431
Internal refactors
- chore: post-v0.10.4 audit cleanup by @tomasz-tomczyk in #426
- fix: post-v0.10.4 release audit cleanup by @tomasz-tomczyk in #475
- refactor: bundled cleanup — wrappers, mustGetwd, browser.go, error surfacing by @tomasz-tomczyk in #428
- refactor: extract review-file CLI logic out of github.go by @tomasz-tomczyk in #429
- refactor: consolidate atomic-file-write helpers by @tomasz-tomczyk in #430
- refactor: split main.go and session.go into focused files by @tomasz-tomczyk in #434
- refactor: release audit cleanup — atomic writes, flag parsing, dead code by @tomasz-tomczyk in #464
- test: cover gaps in atomic write, auth, watch, sapling, parsers by @tomasz-tomczyk in #440
- test: GitHub PR roundtrip integration harness by @tomasz-tomczyk in #445
- test: wait for PR head sha after force-push in roundtrip harness by @tomasz-tomczyk in #457
- test: integration coverage for resolved_round mapping by @tomasz-tomczyk in #467
Full Changelog :
v0.10.4...v0.10.5 -
🔗 r/york First-Time DM looking for DnD players in York! rss
Hey everyone! I've been wanting to DM something for a while now and I've been planning a campaign that I'm pretty excited about.
I've got one player on board so far, so I just need three more players to be able to start playing! The two of us are 26/27, so ideally we're looking for people around the same age.
If you're interested, just let me know and I'll DM you with more details 😄
submitted by /u/WeirdoWolfBoy
[link] [comments] -
🔗 r/LocalLLaMA 2.5x faster inference with Qwen 3.6 27B using MTP - Finally a viable option for local agentic coding - 262k context on 48GB - Fixed chat template - Drop-in OpenAI and Anthropic API endpoints rss
2026-05-07 edit: I have updated the hardware-based recommendations with more focus on quality. I no longer recommend q4_0 KV cache beyond 64k context. After multiple rounds of testing with the different size quants, it appears 3 is the optimal number for draft speculative decoding. The fastest and best quality quant is q8_0-mtp. F16, which I have also uploaded, is actually better but ultra slow (6x slower than q8_0). Many keep saying 8-bit is virtually lossless compared to 16-bit, and 6-bit almost as good as 8-bit, but this is simply not true: time and time again I have noticed huge differences in quality and correctness between 8-bit and 16-bit versions of various models.
The recent PR to llama.cpp brings MTP support to Qwen 3.6 27B. It uses the model's built-in tensor layers for speculative decoding. None of the existing GGUFs have it, as they need to be converted with this PR.
I have tested it locally on my mac M2 Max 96GB, and the results are amazing: 2.5x speed increase, bringing it to 28 tok/s!
I have converted the most useful quants and uploaded them to HF. Even if you are using Apple Silicon, you should use these instead of MLX. You can download them here:
https://huggingface.co/froggeric/Qwen3.6-27B-MTP-GGUF
This also includes 7 fixes I made to the original Jinja chat template, needed because of vLLM-specific behaviour that broke other tools:
https://huggingface.co/froggeric/Qwen-Fixed-Chat-Templates
For now, you will need to compile your own version of llama.cpp to use them. It is fairly simple to do:
```bash
git clone --depth 1 https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
git fetch origin pull/22673/head:mtp-pr && git checkout mtp-pr

cmake -B build -DGGML_METAL=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build --target llama-cli llama-server
```
Then to start serving with the API endpoint, use a command similar to:
```bash
llama-server -m Qwen3.6-27B-Q5_K_M-mtp.gguf \
  --spec-type mtp --spec-draft-n-max 3 \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  -np 1 -c 262144 --temp 0.7 --top-k 20 -ngl 99 --port 8081
```

Vision currently crashes llama.cpp when used alongside MTP. Reported 2026-05-06 in the current PR.
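Once the server is up, anything that speaks the OpenAI API can talk to it, since llama-server exposes an OpenAI-compatible `/v1/chat/completions` route. A minimal stdlib-only sketch; the model name and prompt are illustrative, and the port matches the command above:

```python
import json
from urllib import request

# Assumed local endpoint, matching --port 8081 in the llama-server command above.
URL = "http://127.0.0.1:8081/v1/chat/completions"

def build_payload(prompt: str) -> dict:
    """Build a chat-completions request body using the same sampling
    settings as the server command (temperature 0.7)."""
    return {
        "model": "Qwen3.6-27B-Q5_K_M-mtp",  # informational; llama-server serves one model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
        "max_tokens": 512,
    }

def ask(prompt: str) -> str:
    """POST the request to the local server and return the reply text.
    Requires the server started with the command above to be running."""
    req = request.Request(
        URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(json.dumps(build_payload("Summarize MTP in one sentence."), indent=2))
```

The official OpenAI and Anthropic client libraries should also work by pointing their base URL at the server, which is what makes this a drop-in endpoint for agentic tools.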
That's it. Three optimizations in one command:

Flag | What it does | Impact
---|---|---
`--spec-type mtp --spec-draft-n-max 3` | Multi-Token Prediction (built into the model) | 2.5x faster generation
`--cache-type-k q8_0 --cache-type-v q8_0` | 8-bit KV cache (instead of 16-bit) | Half the KV memory, negligible quality loss
`-c 262144` | 262K context window | Full native context on 48 GB Mac with q8_0 KV

Adjust `-m`, `-c`, and `--cache-type-k/v` for your hardware, according to the tables below. Here are my recommendations based on your hardware:
Apple Silicon
Qwen3.6-27B is a hybrid model — only 16 of 65 layers use KV cache (verified). The other 48 are linear attention (fixed 898 MiB recurrent state). KV memory is ~4× less than a standard dense model. Runtimes that don't handle this (e.g. vllm) allocate KV for all 65 layers and show much higher memory usage.
Numbers below are total memory used (model + KV cache + 0.9 GB recurrent state). Must leave ≥ 8 GB for macOS (16 GB Macs excepted).
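As a sanity check on those numbers, here is a back-of-the-envelope KV-cache estimate. Only the 16-attention-layer count comes from the post; the KV head count and head dimension below are illustrative assumptions, not Qwen3.6-27B's actual config:

```python
def kv_cache_bytes(ctx_len: int, kv_layers: int = 16, kv_heads: int = 8,
                   head_dim: int = 128, bytes_per_elt: float = 2.0) -> float:
    """Estimate KV-cache size: one K and one V vector per token,
    per KV head, per attention layer.

    kv_layers=16 follows the post (16 of 65 layers do full attention);
    kv_heads and head_dim are ASSUMED values for illustration only.
    bytes_per_elt: 2.0 for f16, roughly 1.07 for q8_0 (8 bits plus scales).
    """
    return ctx_len * kv_layers * 2 * kv_heads * head_dim * bytes_per_elt

GIB = 1024 ** 3
for label, bpe in [("f16", 2.0), ("q8_0", 1.07)]:
    size = kv_cache_bytes(262_144, bytes_per_elt=bpe) / GIB
    print(f"262K ctx, {label} KV: {size:.1f} GiB")
```

With these assumed head sizes, f16 KV at 262K lands around 16 GiB and q8_0 at roughly half that, the same order as the gap between the f16 and q8_0 rows in the tables.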
RAM | Quant | KV cache | Max context | Total used | Vision
---|---|---|---|---|---
16 GB | `IQ2_M` | `q8_0` | 42K | 12.0 GB | ✗
24 GB | `IQ3_M` | | 46K | 16.0 GB | ✗
24 GB | `IQ3_M` | `q8_0` | 91K | 16.0 GB | ✗
32 GB | `Q5_K_M` | | 74K | 24.0 GB | ✗
32 GB | `Q5_K_M` | `q8_0` | 147K | 24.0 GB | ✗
32 GB | `Q4_K_M` | | 99K | 24.0 GB | ✓
48 GB | `Q6_K` | | 262K | 39.7 GB | ✓
48 GB | `Q8_0` | | 173K | 40.0 GB | ✓
48 GB | `Q8_0` | `q8_0` | 262K | 37.3 GB | ✓
64 GB | `Q8_0` | | 262K | 45.8 GB | ✓
96 GB | `Q8_0` | | 262K | 45.8 GB | ✓

NVIDIA GPU
Same model memory as Apple Silicon, plus ~1 GB CUDA overhead.
VRAM | Quant | KV cache | Max context | Total VRAM used | Vision
---|---|---|---|---|---
12 GB | `IQ2_M` | `q8_0` | 11K | 12.0 GB | ✗
16 GB | `IQ3_M` | | 30K | 16.0 GB | ✗
16 GB | `IQ3_M` | `q8_0` | 60K | 16.0 GB | ✗
24 GB | `Q4_K_M` | | 83K | 24.0 GB | ✓
24 GB | `Q4_K_M` | `q8_0` | 167K | 24.0 GB | ✓
24 GB | `Q5_K_M` | | 58K | 24.0 GB | ✗
48 GB | `Q6_K` | | 262K | 40.7 GB | ✓
48 GB | `Q8_0` | | 262K | 46.8 GB | ✓
80 GB | `Q8_0` | | 262K | 46.8 GB | ✓

- 16 GB Mac: `IQ2_M` / q8_0 KV — 42K text-only. No vision.
- 24 GB Mac: `IQ3_M` — 46K (f16 KV) or 91K (q8_0). Vision at 32–65K.
- 32 GB Mac: `Q5_K_M` — 74K text-only (f16 KV), 147K (q8_0). `Q4_K_M` for vision at 99K.
- 48 GB Mac: `Q6_K` / f16 KV — 262K with vision. `Q8_0` / q8_0 KV for 262K at higher model quality.
- 64 GB+ Mac: `Q8_0` / f16 KV — 262K with vision. Maximum quality at practical speed.
- 12 GB GPU: `IQ2_M` / q8_0 — 11K. Very limited, no vision.
- 16 GB GPU: `IQ3_M` — 30K (f16 KV) or 60K (q8_0). No vision.
- 24 GB GPU: `Q4_K_M` — 83K with vision (f16 KV). `Q5_K_M` — 58K text-only (f16 KV), 116K (q8_0).
- 48 GB+ GPU: `Q6_K` / f16 KV — 262K with vision. `Q8_0` for max quality.

Leave KV cache at f16 (blank column) for best quality. Use `q8_0` KV only when f16 doesn't give enough context. `q4_0` KV should not exceed 64K context.

Vision adds ~0.9 GB for mmproj. macOS needs ≥ 8 GB for itself (16 GB Macs excepted — use ~4 GB). You can increase available memory by raising the wired memory limit, e.g. for a 96 GB Mac: `sudo sysctl iogpu.wired_limit_mb=90112` (88 GB). NVIDIA reserves ~1 GB for CUDA.

submitted by /u/ex-arman68
[link] [comments] -
🔗 r/wiesbaden Fine Line Tattoo Artist rss
Hey,
Does anyone know a good tattoo studio or a good tattoo artist for abstract fine-line tattoos in Wiesbaden or the surrounding area? Otherwise anywhere else works too :)
submitted by /u/heyheyheyoooooo
[link] [comments] -
🔗 tomasz-tomczyk/crit Windows pre-release 1 (PR #459) release
Pre-release Windows binaries for testing the windows-wsl-support branch (PR #459).
This release is not published to Homebrew and is not a stable release. It exists so reviewers can test Windows + WSL support without merging the PR.
Install
- Download the matching binary below (`crit-windows-amd64.exe` for most machines, `crit-windows-arm64.exe` for ARM64).
- Rename it to `crit.exe`.
- Drop it on your `PATH`.
- Run `crit` in a git repo with changed files.

Linux/macOS binaries are included for convenience, but the supported install path on those platforms remains Homebrew (`brew install tomasz-tomczyk/tap/crit`).
-
🔗 r/Leeds When is Uniqlo going to open? rss
I was so excited when this opening was announced last year. At Christmas it said "opening soon", then it changed to fall/winter 2026.
It's a long time to fit out a shop.
submitted by /u/used2bfat69
[link] [comments] -
🔗 r/LocalLLaMA Quality comparison between Qwen 3.6 27B quantizations (BF16, Q8_0, Q6_K, Q5_K_XL, Q4_K_XL, IQ4_XS, IQ3_XXS,...) rss
The following is a non-comprehensive test I came up with to test the quality difference (a.k.a. degradation) between different quantizations of Qwen 3.6 27B. I want to figure out what's the best quant to run on my 16 GB VRAM setup.

WHAT WE ARE TESTING

First, the prompt:

```
Given this PGN string of a chess game:
1. b3 e5 2. Nf3 h5 3. d4 exd4 4. Nxd4 Nf6 5. f4 Ke7 6. Qd3 d5 7. h4 *
Figure out the current state of the chessboard, create an image in SVG code, also highlight the last move.
```

I want to see if the models can:
- Track the state of the board after each move, to reach the final state (the first half of move 7)
- Generate the right SVG image of the board, correctly placing the pieces and highlighting the last move
And yes, in case you are wondering: it's possible the model was trained on this same task with existing chess games, so I came up with some random moves, the kind no player above 300 Elo would ever play. For those who aren't chess players, this is how the board is supposed to look after move 7. h4. By the way, judge the piece positions and the board orientation, not image quality, because this is just a screenshot from Lichess.

https://preview.redd.it/6lsfvzy8wfzg1.png?width=1586&format=png&auto=webp&s=94634b461528a6ecc6728eefd23072ab28c3769d

CAN OTHER MODELS SOLVE IT?

Before we get to the main part, let me show results from some other models. I find it interesting that not many models were able to figure out the board state, let alone render it correctly.

Qwen 3.5 27B

It mostly figured out the final position of the pieces, but still rendered the original board state on top. It highlighted the wrong squares, and the board orientation is wrong.

https://preview.redd.it/oanbebp9xfzg1.png?width=1078&format=png&auto=webp&s=b72af75a10f4a9f4d897699b404580370bd29d9e

Gemma 4 31B

Nice chess.com flagship board style. I would say it can figure out the board state, but it failed to render it correctly. The square pattern is also messed up.

https://preview.redd.it/w5jwi05nxfzg1.png?width=1640&format=png&auto=webp&s=33e6f21f56c4e98df92c828103ac10714e578973

Qwen3 Coder Next

I don't know what to say, quite disappointed.

https://preview.redd.it/knltp8h1yfzg1.png?width=1348&format=png&auto=webp&s=1e9207cd1dfd08b049eaa13727703be732d2cb96

Qwen3.6 35B A3B

As expected, 35B is always the fastest Qwen model, but at the same time it managed to fail the task in many different ways. This is why I decided to find a way to squeeze 27B into my 16 GB card. The speed alone is just not worth it.

https://preview.redd.it/orti5kdhyfzg1.png?width=3360&format=png&auto=webp&s=c29a3aae9683e5ceaa15c59ae32adecabdd1b6b6

HOW DOES QWEN3.6 27B SOLVE IT?
All the models here are tested with the same set of llama.cpp parameters:
- temp 0.6
- top-p 0.95
- top-k 20
- min-p 0.0
- presence_penalty 1.0
- context window 65536
The BF16 version was tested via OpenRouter, Q8 to Q4_K_XL on an L40S server, and the rest on my RTX 5060 Ti. The SVG code was generated directly in the llama.cpp Web UI without any tools or MCP enabled (I originally ran this test in the Pi agent, only to find out that the model peeked into the parent folders, found the existing SVG diagrams from higher quants, and copied most of it).

BF16 - Full precision

This is the baseline of this test. It has everything I needed: right position, right board orientation, right piece colors, right highlight. The dotted blue line was unexpected, but also interesting, because as you will see later, not many of the high quants generate it.

https://preview.redd.it/lgizkjklzfzg1.png?width=1424&format=png&auto=webp&s=d7867b55735d3d875e0e36aecbaf3c3f0d1dbd58

Q8_0

As expected, Q8 retains pretty much everything from full precision except the line.

https://preview.redd.it/6wjnq6ff0gzg1.png?width=1610&format=png&auto=webp&s=f0d20ff4717b972efffced49ac8d43075fa97eb5

Q6_K

We start to see some quality loss here, namely the placement of the rank 5 pawns. The look of the pieces differs mostly because Q6 decided to use a different font; none of the models in this test tried to draw their own pieces.

https://preview.redd.it/kcqj81vl0gzg1.png?width=1608&format=png&auto=webp&s=66c7a219e79a8f6ecf44e27489f337b4016185b5

Q5_K_XL

Looks very similar to Q8, but it is worth noting that the SVG code of the Q5 version is 7.1 KB, while Q8's is 4.7 KB.

https://preview.redd.it/6wshu7g01gzg1.png?width=1506&format=png&auto=webp&s=289db354fea59c456d8bd2dc7abdbcc1e4282ffd

Q4_K_XL and IQ4_XS

If you ignore the font choice, you will see Q4_K_XL is the more complete solution, because it has the board coordinates.
https://preview.redd.it/pzdghdtm1gzg1.png?width=3326&format=png&auto=webp&s=10c3d7758459f223d195107353f1ec76565cd31d

Q3_K_XL and Q3_K_M

https://preview.redd.it/56gttur62gzg1.png?width=3330&format=png&auto=webp&s=4af27d8a652e2deef6c14485d0fff4bd3651097f

IQ3_XXS

Now here's the interesting part: everything was mostly correct, the piece placements and the highlight, and there's the line on the last move! But IQ3_XXS got the board orientation wrong; see the light square on the bottom left?

https://preview.redd.it/7jnzxy324gzg1.png?width=1608&format=png&auto=webp&s=178f72f51e65866497f16e861b04c0c448fce774

Q2_K_XL

This is just a waste of time. But hey, it got all the piece positions right. The board is just not aligned at all.

https://preview.redd.it/3z63d7bv4gzg1.png?width=1604&format=png&auto=webp&s=f6723b28248327c55bede4e42a4a0cfbe962fb74

SO, WHAT DO I USE?

I know a single test is not enough to draw any conclusion here. But personally, I will never go for anything below IQ4_XS after this test (I had bad experiences with Q3_K_XL and below in other tries). On my RTX 5060 Ti, I got about pp 100 tps and tg 8 tps for IQ4_XS with vanilla llama.cpp (q8 for both ctk and ctv, fit on). But with TheTom's TurboQuant fork, I managed to get up to pp 760 tps and tg 22 tps by forcing GPU offload for all layers (`-ngl 99`), quite usable.

```bash
llama-cpp-turboquant/build/bin/llama-server -fa 1 -c 75000 -np 1 --no-mmap \
  --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.0 --presence_penalty 1.0 \
  -ctk turbo4 -ctv turbo2 -ub 128 -b 256 -m Qwen3.6-27B-IQ4_XS.gguf -ngl 99
```

The only downside is that I have to keep the context window below 75k and use turbo4/turbo2 for the KV cache quant. Below are some examples of different KV cache quants.

https://preview.redd.it/y0y7o6h09gzg1.png?width=3320&format=png&auto=webp&s=bd7c855100ff63c9bb666a4f4a61b966ad6eebca

https://preview.redd.it/dyrru7z19gzg1.png?width=3314&format=png&auto=webp&s=d54238d7a31c6cd8858f84df67ff588dc22d726b

You can see all the results directly here: https://qwen3-6-27b-benchmark.vercel.app/

submitted by /u/bobaburger
[link] [comments]
-
🔗 r/reverseengineering ant4g0nist/pyre: Ghidra decompiler in your browser rss
submitted by /u/Nightlark192
[link] [comments] -
🔗 HexRaysSA/plugin-repository commits sync repo: ~1 changed rss
sync repo: ~1 changed ## Changes - [HashDB](https://github.com/oalabs/hashdb-ida): - 1.10.0: archive contents changed, download URL changed -
🔗 Ampcode News Amp, Rebuilt rss
Today we're starting to roll out the new Amp.
Not all of it, not yet. But the first piece: a rebuilt Amp CLI. Codename: Neo.
In The Coding Agent is Dead we wrote about where this is going: agents with longer leashes, less handholding, and many more places to run. Not just one agent in one terminal. Agents prompted from anywhere, running everywhere.
That's the new Amp we're building.
But the terminal still matters and will matter. There will be moments where you want the agent right next to you.
So we rebuilt the CLI first. It is still Amp in your terminal. But it's running on a completely new architecture: remote-controllable, compaction-first, plugin-powered, and much faster. Built for what's coming.
Let's walk through it.
Remote Control
When you start a thread in the new Amp CLI, you can now remote control it from ampcode.com.
You'll not only get live updates but you can also send messages, queue and dequeue them, or cancel what the agent is currently doing:
The architecture that enables this is the reason we rewrote Amp. And remote control is just the start.
No More Manual Context Management
A core principle behind the rebuild: build for what the frontier models can do now, in 2026, and what they will be able to do in the future. Do not build for what once was.
Today's leading frontier models are great at handling compaction.
So Amp now manages context for you.
You don't have to watch context percentages anymore, or decide when to handoff, or extract information from a thread in a panic.
When the context window fills up, Amp now compacts the thread: it summarizes the current context, starts a fresh window with that summary, and keeps going.
Compaction now runs automatically when the context window is 90% full.
It was also the first thing we added to the new architecture. During one migration, we had to shut it off for a day and everyone complained. One beta-user reported: "I love having auto-compaction. NOT missing handoff..."
So handoff is out. Compaction is in.
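The loop behind this is easy to picture. The sketch below is not Amp's implementation, just a toy model of threshold-triggered compaction with a stand-in tokenizer and summarizer:

```python
CONTEXT_LIMIT = 100   # toy window size in tokens; the real window is far larger
COMPACT_AT = 0.9      # compact when the window reaches 90% full

def count_tokens(messages):
    # crude stand-in tokenizer: count whitespace-separated words
    return sum(len(m.split()) for m in messages)

def summarize(messages):
    # stand-in for the model-written summary that seeds the fresh window
    return f"[summary of {len(messages)} messages]"

def append(thread, message):
    """Add a message; once the window is 90% full, replace the thread
    with a summary and keep going in the fresh window."""
    thread.append(message)
    if count_tokens(thread) >= COMPACT_AT * CONTEXT_LIMIT:
        thread[:] = [summarize(thread)]
    return thread

thread = []
for i in range(40):
    append(thread, f"message {i} with a few extra words")
print(thread)  # a recent summary plus the messages since the last compaction
```

The user-visible point is the same as in the post: nothing in this loop asks you to watch percentages or trigger a handoff yourself.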
Plugins
With this release we're officially releasing the Amp Plugin API.
Amp plugins can:
- Handle events — `amp.on(...)` for tool calls, tool results, and agent lifecycle events
- Add tools — `amp.registerTool(...)` for custom tools the agent can call
- Add commands — `amp.registerCommand(...)` for command palette actions
- Show UI elements — `ctx.ui.notify(...)`, `ctx.ui.confirm(...)`, `ctx.ui.input(...)`, and `ctx.ui.select(...)`
- Ask AI questions — `amp.ai.ask(...)` for yes/no classification with confidence and reasoning
Here, for example, is a plugin that registers a tool called
`ask_user_choice`. The agent can use it to present the user with options:

```typescript
// .amp/plugins/ask-user-choice.ts
import type { PluginAPI } from '@ampcode/plugin'

export default function (amp: PluginAPI) {
  amp.registerTool({
    name: 'ask_user_choice',
    description:
      'Present the user with a multiple choice question when there are several possible approaches and you need them to pick one. Use when you have 2-5 concrete options to choose from.',
    inputSchema: {
      type: 'object',
      properties: {
        question: { type: 'string', description: 'The question to ask the user' },
        options: {
          type: 'array',
          items: { type: 'string' },
          description: 'The options to choose from (2-5 items)',
        },
      },
      required: ['question', 'options'],
    },
    async execute(input, ctx) {
      const question = input.question as string
      const options = input.options as string[]
      const optionsList = options.map((opt, i) => `${i + 1}. ${opt}`).join('\n')
      const answer = await ctx.ui.input({
        title: question,
        helpText: `${optionsList}\n\nType the number of your choice`,
        submitButtonText: 'Select',
      })
      if (!answer) return 'User dismissed the question without choosing.'
      const index = parseInt(answer.trim(), 10) - 1
      if (index >= 0 && index < options.length) {
        return `User selected option ${index + 1}: ${options[index]}`
      }
      return `User responded with: ${answer}`
    },
  })
}
```

That's it: a single file in `.amp/plugins` and Amp gets a new tool. It looks like this:
The Amp Plugin API documentation has more examples, including a full permissions plugin.
Queuing & Steering
Queuing messages is now the default. When you send a message while the agent is busy, it'll get added to the queue instead of stopping and interrupting the agent.
This, too, we think fits the models of today and tomorrow better. They work for longer and need fewer mid-flight yanks.
If you want to fast-track a queued message, you can steer.
Steering lets you send a queued message as soon as possible, not just when the agent becomes idle. The next time a tool result is sent up to the agent, for example.
Use ↑ to select a queued message, then steer it with ⏎:
You can also hit Esc Esc to interrupt the agent and send immediately.
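Queue-then-steer semantics fit in a few lines. This is a toy sketch of the behavior described above, not Amp's code; `steer` promotes an already-queued message to the next delivery slot:

```python
from collections import deque

class MessageQueue:
    """Toy model: sends queue by default instead of interrupting the agent;
    steering moves a queued message to the front of the line."""

    def __init__(self):
        self.pending = deque()

    def send(self, text):
        self.pending.append(text)      # default: queue, don't interrupt

    def steer(self, text):
        self.pending.remove(text)      # fast-track a queued message
        self.pending.appendleft(text)

    def next_delivery(self):
        """Called when the agent can accept input, e.g. after a tool result."""
        return self.pending.popleft() if self.pending else None

q = MessageQueue()
q.send("also update the tests")
q.send("stop touching config.ts")
q.steer("stop touching config.ts")     # jump the queue
print(q.next_delivery())               # → stop touching config.ts
```

Interrupting (Esc Esc) would bypass the queue entirely, which is why it stays a separate, deliberate gesture.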
Permissions
Amp will no longer ask for permission before running tools.
What was once the
`--dangerously-allow-all` flag is now the default behavior for users who have not configured permissions.
amp.permissions,amp.dangerouslyAllowAll: false, oramp.guardedFiles.allowlist— Amp loads that plugin and works as before. (When the plugin is active, it applies in bothampandamp --execute.)Why change the default?
A year ago tool calls were simpler to check: inspect the name, inspect the arguments, do string-based matching, allow or deny. Now, frontier models write throwaway scripts to get stuff done. They chain shell commands.
It's near-impossible to determine statically whether a tool invocation will be destructive or not.
When a model writes five 20-line Python scripts in parallel to do something, checking whether a tool call contains
`rm -rf` gives you a false sense of security.
So permissions now live in the Plugin API.
If you need a policy, build the one that matches your setup. Point Amp at the Amp Plugin API and ask it to help you.
Performance & Efficiency
The old Amp CLI got slow with huge threads. Neo doesn't. Here's a comparison, using a thread with around 5000 messages:
Metric | Old | New | Improvement
---|---|---|---
CPU% (mean ± sd) | 84.1% ± 1.6% | 17.4% ± 8.8% | 79% less CPU
CPU% (peak) | 86.3% | 25.8% | —
Memory (idle) | 1814 MB | 540 MB | 70% less memory
Before:
After:
What's Gone
We also removed features. Of course we did, otherwise it wouldn't be an Amp release, would it?
Our goal is to keep you on the frontier. Amp should not make you work like it's still 2025.
Some features made sense when models needed more babysitting, more manual context management, more careful steering. They don't anymore. When a feature starts tying you to the old way to use agents, it goes.
Handoff is gone. As described above, compaction made it obsolete. There are some valid use cases for Handoff even when there's enough space left in the context, but we don't think it warrants the complexity introduced by many small, connected threads.
You can also still reference other threads and Amp will read them and extract the relevant information.
For example, you can use Ctrl+O and
`thread: new` to create a new thread, then hit Enter to quickly insert a reference to the previous thread. Amp will use that reference along with the rest of your prompt to read the previous thread.
Skill management: Amp still supports Agent Skills but we no longer offer commands or subcommands to add, remove, or update skills. That's better done by separate tools, such as
`skills`.
Themes: Custom themes made it harder to keep the CLI legible, polished, and recognizably Amp. We’d rather ship one good interface than support many broken-looking ones.
Manual bash invocation: in the old Amp CLI you could invoke bash commands by using
`$` and `$$` in the prompt editor. An interesting idea a year ago, but now, with models ever more capable of running commands on their own without blowing up their context window (and that context window being practically unlimited), it's no longer useful.
We’re rolling Neo out over the next few days. If you want to skip the line, send us an email. We'll flip the switch for you.
This is the first piece of the new Amp.
More soon.
-
- May 05, 2026
-
🔗 r/reverseengineering Resident Evil: Code Veronica X is able to play the opening FMV from the decompiled PS2 source! rss
submitted by /u/MrFroz1995
[link] [comments] -
🔗 imfing/hextra v0.12.3 release
What's Changed
This version focuses on bug fixes and small maintenance updates since v0.12.2.
For the full release notes and the upgrade guide for v0.12, please visit:
https://imfing.github.io/hextra/blog/v0.12/- fix: remove inline TOC click handler so default Hextra can avoid unsafe-inline (CSP) by @jecc1982 in #981
- fix: add Hugo compatibility helpers for deprecated multilingual APIs by @imfing in #983
- fix: add hx:mx-auto to the footer by @luigimorel in #982
- fix: avoid publishing demo cast in theme builds by @imfing in #985
- fix(test): accessibility test for YouTube iframe internals by @imfing in #986
- fix(sidebar): fall back to content tree when mobile menu has no entries by @imfing in #991
- fix: resolve page-relative URLs in details shortcode by @muit in #989
New Contributors
- @jecc1982 made their first contribution in #981
- @luigimorel made their first contribution in #982
- @muit made their first contribution in #989
Full Changelog :
v0.12.2...v0.12.3 -
🔗 r/york TOMORROW (WEDNESDAY). Rising post punk band The 113 headlines the Fulford Arms. Not to be missed! £9 advance tickets available from SeeTickets and Fulford Arms website. rss
| https://www.seetickets.com/event/the-113/the-fulford-arms/3598090 submitted by /u/RLTpresents
[link] [comments]
-
🔗 r/LocalLLaMA DeepSeek V4 being 17x cheaper got me to actually measure what I send to cloud vs what I could run locally. the results are stupid. rss
That foodtruck bench post showing deepseek v4 matching gpt-5.2 at 17x cheaper got me thinking. if frontier cloud models are that overpriced for equivalent quality, how much of my daily work even needs cloud at all?
Ran my normal coding workflow for 10 days. every task got logged: what it was, tokens in/out, whether local qwen 3.6 27b (on a 3090) could have done it. didn't use benchmarks, just re-ran a random sample of 150 tasks on both.
results:
- file reads, project scanning, "explain this code": local matched cloud 97% of the time. this was 35% of my workload. paying for cloud here is genuinely throwing money away.
- test writing, boilerplate, single file edits: local matched 88%. another 30% of tasks. the 12% misses were edge cases i could catch in review.
- debugging with multi-file context: local dropped to 61%. cloud still better but not 17x-the-price better. about 20% of my work.
- architecture decisions, complex refactors across 5+ files: local at 29%. cloud genuinely needed here. only 15% of my tasks.
So 65% of my daily coding work runs identically on a model that costs me electricity. another 20% is close enough that I accept the occasional miss. only 15% actually justifies cloud pricing.
Started routing by task type. local for the first two buckets, cloud for the last two. my api bill went from $85/month to about $22 and the 3090 was already sitting there mining nothing.
The deepseek post is right that the price gap is insane but the bigger insight is that most of us don't even need cloud for most of what we do. we're just too lazy to measure it.
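The routing itself doesn't need to be clever. Here's a sketch using the post's bucket shares; the bucket names and the proportional cost model are my own simplification (real spend is token-weighted, which is why this lands near, not exactly at, the reported figures):

```python
# (share of workload, local match rate, destination) per the post's buckets
BUCKETS = {
    "read_explain":   (0.35, 0.97, "local"),
    "tests_boiler":   (0.30, 0.88, "local"),
    "multi_file_dbg": (0.20, 0.61, "cloud"),
    "architecture":   (0.15, 0.29, "cloud"),
}

def route(task_type: str) -> str:
    """Pick local or cloud for a task by its bucket."""
    return BUCKETS[task_type][2]

local_share = sum(share for share, _, dest in BUCKETS.values() if dest == "local")
cloud_share = 1.0 - local_share

old_bill = 85.0
# naive estimate: cloud spend scales with the share still routed to cloud
new_bill = old_bill * cloud_share
print(f"{local_share:.0%} routed local; estimated bill ${new_bill:.0f}/mo (was ${old_bill:.0f})")
```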
submitted by /u/spencer_kw
[link] [comments] -
🔗 r/Leeds Help me find the same sandwich please - Bánh Mì Cô Út rss
Went to New York last week and had a sandwich that was so good it genuinely brought tears to my eyes (that may however, have been jet lag). I’m absolutely desperate to find as close to the one I had as possible, the one I tried was the
No. 1 - Pork belly, boiled Vietnamese ham, fried Vietnamese ham, jambon, pate, mayo, cucumber, “cilantro”, carrot and daikon.
Happy to pay more than that one cost; at $8 I suspected I might have to. I can travel a bit, but preferably in Leeds.
submitted by /u/Thieves-like-us
[link] [comments] -
🔗 r/LocalLLaMA I know this isn’t technically an LLM but OmniVoice is FUCKING AMAZING. rss
Literally one shot voice cloning and it’s literally so easy. What the FUCK. It’s everything I’ve ever dreamed of.
submitted by /u/Borkato
[link] [comments] -
🔗 r/reverseengineering Reverse-engineering the 1998 Ultima Online demo server rss
submitted by /u/draxinar
[link] [comments] -
🔗 r/york Looking for a tenant to take over my old house rss
| Hello all, hoping this doesn’t count as commercial spam and can stay up as I know people sometimes come on here looking for housing. I’ve just bought a house (hooray, adulthood!) and in an effort to not have to pay the rest of my tenancy on my old house, the landlord’s agreed we can be released from the tenancy early if they can find a new tenant. (Renters right act has come in just too late to help us out, unfortunately) It’s a 2 bed mid-terrace in Heworth about a 20 min walk and even shorter cycle to the city centre. It’s the nicest rental property I’ve ever had, and I’ve had a few. It’s a good size, in generally decent condition, has a garage and a parking space and a little courtyard. I can promise I didn’t leave any disastrous messes or unpaid bills there! Shoot me a message if you have the kind of questions letting agents won’t answer!! If you’re interested give them a call via the details on Rightmove, I don’t think they’ve had much interest which surprises me submitted by /u/hollyviolet96
[link] [comments] -
🔗 sacha chua :: living an awesome life The week of April 13 to 19 rss
Monday 13
My daughter skipped school all day. She said she was tired. She stayed home instead of going to her gymnastics class.
I set up obs-websocket to start and stop the livestream from Emacs.
The weather was lovely, so I sat outside and read tecosaur's Emacs configuration. Not only was his configuration very detailed, it was also beautifully laid out.
I prepared my Emacs newsletter while livestreaming.
The ice cream shop was still closed, so we bought ice cream at the supermarket instead.
At bedtime, my daughter said she wished she could stay a kid. She said she liked KidSpark, which is only for kids up to age 10.
Tuesday 14
My daughter attended her class. After school, we biked to the park to play with her friends, who were biking too.
I kept improving obs-websocket to manage my livestream from Emacs. I also rewrote my patch for the sentence-at-point operation in Org Mode.
I was tired and had a bit of a headache.
Wednesday 15
I updated my OBS to add socialstream.ninja via a browser source. Now I can display comments, and I can send messages to YouTube from Emacs.
My daughter woke up late, but she attended her class on her own.
I did a little consulting work. The profile design needed a small fix.
My daughter and I played Stardew Valley.
My husband had an errand near the Art Gallery of Ontario. My daughter was happy to skip school in the afternoon because the school had a substitute teacher. I took my daughter there and we spent some time trying the activities at the gallery and drawing on our tablets.
After dinner, we practiced painting eyes with watercolours.
Thursday 16
I met with Protesilaos to bring him up to date on my progress since our previous conversation and ask him my new questions. I got my code working to start my video from a timestamp, and I wrote a function to convert between wall-clock time and elapsed time.
My daughter and I played with Play-Doh, sungka (a traditional Filipino game), and charades.
Friday 17
I revised the subtitles from yesterday's conversation with Prot. I added two functions to handle the speaker label when splitting or merging subtitles. I also scheduled three Emacs conversations and published the events to YouTube and my website using other functions. I also changed my site-publishing library so that it doesn't include private files.
I worked on our taxes.
My daughter woke up on her own this morning, in time for breakfast, our morning routine, and her math quiz at school. But she skipped school in the afternoon and sat against her door all afternoon. Instead of relaxing, she dug in against me even more. I don't know what to do in this situation.
Saturday 18
For breakfast, I made crepes with the leftover whipped cream. There was only a little cream left, so I couldn't whip it in the blender; I whipped it by hand. I also used the frozen whipped cream I had made several months ago. I ate them with peaches and mango. It was perfect.
Reading tecosaur's literate Emacs configuration made me jealous of its layout, so I spent some time improving the export of my own configuration. It's very long: the PDF is 736 pages, and the table of contents alone is 15 pages. I want to add more comments and implement more LaTeX exports for my link types.
My daughter was grumpy with me in the morning, but in the afternoon she reappeared and wanted to spend time with me.
We played Minecraft to try out the new sulphur blocks. We spawned a Warden and gave it a block that gave us a mushroom block. The Warden had fun with the block.
We played with Play-Doh. I rolled it out very thin and we cut it into lots of pieces. She braided them. She wanted to try a crown braid, so I braided her hair.
For dinner, we made sushi.
We played Stardew Valley Expanded again. We made good progress on the community center bundles, even though I forgot to get the community center fertilizer after the Egg Festival to speed up the strawberries. Oh well.
My daughter practiced her French vocabulary by telling the story of Eevee's family.
Sunday 19
My daughter woke up at 8:00 today. She finds it easier to wake up when there's no school. It's a good thing I hadn't started a livestream.
My daughter and I biked to the Stockyards to buy fabric for sewing a summer hat. She had window-shopped but hadn't found one she liked, so we have to make it ourselves. She chose yellow Pokémon fabric. She also wanted yarn to crochet a blanket.
We had Panda Express for lunch. The kids' meal was enough for me.
I dropped her off at home and brought donations to Goodwill while decluttering. I also did the grocery shopping. Once I got home, my daughter proudly showed me that she had made the beds like a hotel.
We played Stardew Valley Expanded after dinner. Summer has started. I think I need to plant more butternut squash for the Quality Crops bundle, which needs 5 gold-quality crops.
You can e-mail me at sacha@sachachua.com.
-
🔗 sacha chua :: living an awesome life The week of April 20 to 26 rss
Monday, April 20
My daughter woke up early on her own, so we finished our morning routine. But she was thrown when her password didn't work for logging in to school. I helped her and she attended her classes. I thought she was doing fine, but when I went to see her during recess, I found she was grumpy. She skipped school again.
To my great surprise, after the lunch break and a little playtime, she was participating in school.
A few points:
- Like everyone, she has good days and bad days. When her body hurts, everything is hard.
- We know that group classes don't suit her for now. This is an experiment to gather data.
- It's not the end of the world. Maybe the school is more lenient than I think. I can let them tell me when there's a real problem. It's possible this isn't a problem at all.
- It's very hard (maybe impossible) to help someone who doesn't want to be helped, especially since part of her resistance comes from her desire for autonomy.
- Nagging is useless and ineffective. If I try to use punishment, I make it harder for her to choose a good way forward herself.
- If she wants something different, we can find something different.
- So I need to manage my own emotions and be supportive. I need to trust that she wants a good outcome for herself. She can handle it, or she can ask for help. If I stay calm, it's easier for her to ask for help.
Tuesday 21
I think I've found a way to protect myself against accidents during a livestream. If I stream with a delay to another OBS instance, I can cut the feed as soon as I notice something I'm sharing by accident.
I also wrote a function to format events in Org Mode format for export to iCalendar.
I replied to some e-mails, including one in French. I updated the entries in my Planet Emacslife aggregator. I changed it to always use IPv4 and to parse article bodies correctly.
To relieve her boredom, I helped my daughter work through math worksheets for sixth graders, which she managed to complete with a few little hints. She was very proud because it was more interesting than her homework.
After school, I took my daughter to the park to play with all her best friends. They were having so much fun that other kids wanted to join in, which made the place too noisy for my daughter, who moved to the sandbox to play quietly. Once the other kids left, my daughter rejoined her friends.
My daughter rediscovered suncatchers and painted a few with acrylic paints. She wanted green paint, but we didn't have any, so she mixed blue and yellow paint to make some.
She also talked about her idea for a small mannequin to display dress prototypes. We looked for options online, but all the products were too expensive or didn't suit her. We might buy a small mannequin from Ikea.
I was a bit tired.
Wednesday 22
I wrote a few posts to announce my livestreams.
I offered to work on more complex math together with my daughter, but she didn't need my help today.
After school, my daughter and I biked to the park. We were early for our meetup with her friends, so we played in the playground near the street, the one with the big sandbox. I brought the sand toys, which let my daughter pretend to run a bakery. After playing, we went to the other playground on the slope. Our friends were late, but that wasn't a problem. There were other friends there, and once they had to leave, we played on the swings until our other friends arrived. It was sunny and a bit warm. My daughter ate two of the yogurt, strawberry, and honey ice pops she had made the night before, and she shared them with her friends.
Her friends came on foot. My daughter wanted to walk with them on the way back, so we all went on foot. I hitched her bike to mine with the Bakkie bag and pushed my bike while they walked.
One of her friends fell and hurt her knee. She screamed. My daughter offered a Pokémon bandage. She screamed again, which was too loud for my daughter, who started crying too. They needed a few moments to calm down.
I was surprised that my daughter wanted to walk her friends almost all the way home. Well, the sun was shining, and I can always carry my daughter home if she gets too tired.
For dinner, my husband made chicken cutlets.
Thursday 23
I did some consulting work.
I took my daughter to Dufferin Grove Park to play. Once there, she saw that her best friends were busy playing with a girl she doesn't get along with, so she decided to play with me or with her dad instead, who joined us by bike. She played on the swing and the slide. She also played in the sand with other kids.
At home, we made giant bubbles.
Friday 24
I had a wonderful conversation with John Wiegley and Karthik Chikmagalur about John's workflow for managing his tasks in Emacs and Org Mode.
My daughter was a bit grumpy because I was busy with my conversation and her dad was busy making dinner. Once I was free, she wanted to play a dominoes game that we had given away more than a year ago. She was disappointed, then decided to make a similar game out of LEGO. She had fun.
I accidentally dropped my Apple Pencil and it broke.
Saturday 25
I went to the Apple Store to try to replace my Apple Pencil and get my tablet's screen repaired under AppleCare+. I got nothing done. They didn't have the parts in stock for the screen repair, so the technician ordered them and will notify me once they arrive. He found that my Apple Pencil wasn't automatically included in the AppleCare+ coverage even though I had bought it at the same time as my tablet. The technician told me I needed to call Apple Support to link my Apple Pencil to AppleCare+, which took 35 minutes to sort out. By the time I was done, the technician had already moved on to another customer. The store was very busy, and I couldn't resume my appointment. If I had wanted to book another one, I would have had to wait more than an hour and a half. I was overstimulated, so I chose to go home.
My daughter wanted to play Stardew Valley with me. It was the last few days before fall. She started destroying her blueberry bushes. When I asked her what she was doing, she stormed off because she felt I was on her back. I apologized, and I also let her know that blueberries have one more harvest right at the end of the season. She hadn't known that.
Sunday 26
I wrote a small function to save a screenshot at the current position in the video and add it with a timestamp to the current subtitle, which makes it easier to include images in the post. Karthik and I discussed video processing.
The weather was lovely, so my daughter and I biked to Corktown Commons for the first time. She had a great time on the slides. We also made several sand cakes in the sandbox, thanks to the few containers I brought along.
After dinner, my daughter wanted to play Stardew Valley with me. She asked me whether it was okay for her to sell some gold ore. I asked her what she wanted to do, what her goal was… She got grumpy and walked off. I realized she might have wanted to free up space in her inventory, which can also be solved with a chest, something I had actually been planning to make. Well, she needs to develop her own self-regulation. She eventually came back from her room and asked me for a hug because her nose hurt, poor dear. We did the bedtime routine with some tears.
You can e-mail me at sacha@sachachua.com.
-
🔗 r/Leeds Childfree people of Leeds? rss
Heya! Random one, but are there many childfree people in Leeds on here?
I’ve been thinking about setting up a Discord or something just to chat, maybe find people for games or last-minute plans, but not sure if there’d actually be much interest. I'd probably make it for people around my age, like 25+ year olds or something
For me personally it feels like a lot of social stuff ends up revolving around kids/schedules and it’d be nice to have a space that’s a bit more flexible, and to also have conversations that don't involve how Timmy shat his pants in Morrisons cafe
Would anyone be up for something like that? I'm up for making one and sending some invites out - or if this space already exists please do let me know so I can get involved!
EDIT - I’m gonna make a server - if you want an invite leave a comment/send me a dm :)
submitted by /u/amzlrr
[link] [comments] -
🔗 r/york Let's talk about York's hidden past! rss
Hey r/york!
We're Uncomfortable York, an academic-led tour organisation focusing on the underrepresented stories and people that make up the UK's favourite cities.
On our tour we talk about the lived experience of diverse individuals living and working in York across its 2000 years of history. We also examine York's connections to the world as a seat of power from the Roman Empire to a manufacturing hub for the chocolate industry.
We've taken to Reddit to ask some important questions:
- Do you feel represented in York's heritage landscape?
- What topics, themes, people, periods, etc. would you like to see examined with a more critical eye?
If you're interested in checking out our work feel free to head over to our website!
submitted by /u/Uncomfortable_Tours
[link] [comments] -
🔗 r/LocalLLaMA Gemma 4 MTP released rss
Blog post:
https://blog.google/innovation-and-ai/technology/developers-tools/multi-token-prediction-gemma-4/
MTP draft models:
https://huggingface.co/google/gemma-4-31B-it-assistant
https://huggingface.co/google/gemma-4-26B-A4B-it-assistant
https://huggingface.co/google/gemma-4-E4B-it-assistant
https://huggingface.co/google/gemma-4-E2B-it-assistant
This model card is for the Multi-Token Prediction (MTP) drafters for the Gemma 4 models. MTP is implemented by extending the base model with a smaller, faster draft model. When used in a Speculative Decoding pipeline, the draft model predicts several tokens ahead, which the target model then verifies in parallel. This results in significant decoding speedups (up to 2x) while guaranteeing the exact same quality as standard generation, making these checkpoints perfect for low-latency and on-device applications.
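The draft-and-verify loop described above can be illustrated with a toy, greedy sketch (plain Python; the callables `draft` and `target` stand in for the real networks, and real pipelines verify all proposals in one batched forward pass and use rejection sampling over distributions rather than exact token matching):

```python
def speculative_step(draft, target, context, k=4):
    """One greedy speculative-decoding step.

    `draft` proposes k tokens ahead; `target` (the authoritative model)
    checks each proposal. Accepted tokens are kept; at the first
    mismatch we substitute the target's own token, which preserves
    exactly the target model's greedy output.
    """
    # Draft phase: propose k tokens autoregressively.
    proposed = []
    ctx = list(context)
    for _ in range(k):
        t = draft(ctx)
        proposed.append(t)
        ctx.append(t)

    # Verify phase (done in parallel in a real pipeline).
    accepted = []
    ctx = list(context)
    for t in proposed:
        correct = target(ctx)
        if t == correct:
            accepted.append(t)
            ctx.append(t)
        else:
            accepted.append(correct)   # replace the first wrong guess, stop
            break
    else:
        accepted.append(target(ctx))   # bonus token when all k are accepted

    return accepted

# Toy "models": target emits last token + 1; draft agrees except every 3rd call.
target = lambda ctx: ctx[-1] + 1
draft = lambda ctx: ctx[-1] + 1 if len(ctx) % 3 else ctx[-1] + 2

print(speculative_step(draft, target, [0], k=4))  # → [1, 2, 3]
```

Even in this toy, the speedup intuition is visible: the target only needs one (batched) verification pass per k drafted tokens, while the output remains exactly what it would have generated alone.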
submitted by /u/rerri
[link] [comments] -
🔗 r/reverseengineering Inside Faxanadu series — deep dive into how this NES title works rss
submitted by /u/r_retrohacking_mod2
[link] [comments] -
🔗 r/reverseengineering EMBA v2.0.1 with interactive firmware dependency map available - Check it out and let us know what you are missing rss
submitted by /u/m-1-k-3
[link] [comments] -
🔗 r/LocalLLaMA Heretic 1.3 released: Reproducible models, integrated benchmarking system, reduced peak VRAM usage, broader model support, and more rss
Dear fellow Llamas, it is my distinct pleasure to announce the immediate availability of version 1.3 of Heretic (https://github.com/p-e-w/heretic), the leading software for removing censorship from language models.
This was a long and eventful release cycle, during which Heretic became a high-profile open source project with 20,000 GitHub stars and more than 13 million total model downloads (not counting the models from a certain "competitor" who was recently found to have been using a plagiarized fork of Heretic under the hood). The topic of model decensoring has exploded in popularity, with many clones and forks popping up, some of them clouding their techniques in mystique, technical jargon, or tens of thousands of lines of LLM-written junk code.
I am happy to say that Heretic is moving in the exact opposite direction. Instead of making it more difficult to understand what is going on, the new release makes it easier and more transparent. The headline feature in Heretic 1.3 is reproducible runs. This was a much more difficult problem to solve than it might appear to be at first glance, because the results of tensor operations can depend on the PyTorch version, the GPU, the driver, the accelerator library, and whether Saturn is Ascendant or not. This means that in order to ensure reproducibility, all of that information must be collected and preserved. This mammoth task was taken up by long-time contributor Vinay-Umrethe, who wrote the majority of the code in the course of an intense multi-week collaboration in which over 250 comments were exchanged.
As a result, when publishing an abliterated model to Hugging Face, you now have the option to have Heretic generate a `reproduce` directory in the repository, which contains everything another person needs to know in order to generate a byte-for-byte identical model themselves (example of such a directory). Gone are the days of "I can't seem to get such low numbers on my own machine"; you now can! While the reproducibility system is already immensely helpful and educational by itself, in the future it will form the backbone of something even more ambitious and exciting, which I will announce soon. Please note that publishing reproducibility information is completely optional, and Heretic always prompts before doing so. You are in control of what is uploaded at all times.

There's more! You know how it can be difficult to tell with certainty whether an abliterated model has incurred significant damage to its capabilities? Heretic now includes the world's simplest benchmarking system, allowing you to run standard benchmarks like MMLU, EQ-Bench, GSM8K, and HellaSwag directly from Heretic, without having to fumble with any configuration and without even having to export the model first. This makes it much easier to decide whether a model is worth publishing, or whether you should look at another trial instead. The system is based on lm-evaluation-harness, the academic gold standard for running LLM benchmarks, allowing the resulting metrics to be directly compared against numbers published online.
In the course of a typical run, Heretic computes various functions on tensors. This can involve intermediate tensors being manifested in GPU memory that take up large amounts of VRAM. magiccodingman analyzed this in detail and implemented optimizations that substantially reduce peak VRAM usage, allowing larger models to be processed.
Model architectures continue to evolve and become more complex, and Heretic is keeping up! farolone and MoonRide303 improved Heretic's layer and module handling logic, making it far more generic and allowing it to process latest-generation models like Qwen3.5 and Gemma 4, among others.
Please see the release notes for the full list of improvements and fixes. More exciting stuff is coming in future versions!
Cheers :)
submitted by /u/-p-e-w-
[link] [comments] -
🔗 r/Yorkshire Glorious day along the Wall rss
A bit rainy & windy, but still a brilliant day out. submitted by /u/TitanicDays
[link] [comments] -
🔗 r/Leeds Favorite spot to read books? rss
Im new in the city and looking for any recommendations where I can just chill out, have tea or coffee and read a book. I really enjoyed Sonder and Sociable Folk. Any other similar spots?
submitted by /u/nimblebaroness
[link] [comments] -
🔗 r/Leeds Is this “d” an upside down “P” on the Leeds sign? rss
submitted by /u/Tight_Mammoth4602
[link] [comments] -
🔗 r/york Why didn't they take this rss
York recycling bin men left this? York council are a bloody joke (or they would be, if it weren't for the fact they provide such a shitty service and waste OUR money). submitted by /u/DarkBytes
[link] [comments] -
🔗 r/york The Doom Stone in the Crypt at York Minster rss
⚔️ Beneath the floor of York Minster lies one of the most chilling reminders of medieval England’s belief in death and judgement: The Doom Stone. Carved over 800 years ago, this fragment was once part of a great tympanum above a church doorway. Its original paint and detailed imagery warned every visitor of the Last Judgement – heaven or hell, salvation or damnation. In this film, we explore the stone, the medieval mindset that created it, and how faith shaped the lives and deaths of all who passed beneath it. Featuring rare imagery of medieval Doom paintings, manuscripts, and iconography, this short documentary brings the forgotten stone and its message back into the light. There is NO AI Imagery in this Film, and all Motion Graphics were created by hand. Step into the shadows of England’s past. 00:00 The Doom Stone Beneath York Minster
00:50 What is the Doom Stone?
02:30 Medieval Last Judgement Explained
03:55 Heaven & Hell
04:50 Fear of Death and Judgement
06:00 Conclusion – A Warning in Stone submitted by /u/The_Black_Banner_UK
[link] [comments] -
🔗 r/Yorkshire Mornings like this are all I need❤️🩹 rss
submitted by /u/Coffee000Oopss
[link] [comments] -
🔗 r/york Struggling to find a place for 3 sharers/2 households rss
Me and my partner and a friend of ours are looking for a place to live within the next month or so. I keep telling letting agents that me and my partner are long-term dating and we count as two households, but for some reason they still consider it 3 sharers, and any advice I find online just says "say two of you are dating so it counts as two households", which we don't need to lie about because we are actually dating. Does anyone know which areas with a decent commute to the city centre would be more OK with that? Two of us are students, but one of us is graduating in the next month, so student accommodation isn't possible. Really not sure what to do.
submitted by /u/Rainecats
[link] [comments] -
🔗 r/reverseengineering Copy.fail: Why Internal LLMs Are Non-Negotiable for Security rss
submitted by /u/eshard-cybersec
[link] [comments] -
🔗 r/Harrogate Bilton Triangle Development rss
Hi all, I remember a while ago there being so chat regarding developing the Bilton triangle farmers field for housing. Is anyone aware of any updates? Thanks!
submitted by /u/Leading_Roof407
[link] [comments]
-
- May 04, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-05-04 rss
IDA Plugin Updates on 2026-05-04
Activity:
- Deobfuscator
- ida-pro-mcp
- c05d7405: Merge pull request #399 from momo5502/main
- 9a6b024e: Merge pull request #402 from momo5502/fix/dbg-start-ip-grace
- 4d1ada32: Fix failing tests
- 766e6a38: Merge pull request #401 from momo5502/fix/dbg-start-ip-grace
- e88bb69b: Refine dbg_start IP grace handling
- 4da9ff46: Merge pull request #400 from mrexodia/fix/dbg-start-batch-mode
- f0cd8778: Restore caller's pre-call batch state, not hard-coded 0
- c6f491d9: Fix dbg_start reporting failure on successful starts
- dd4e730b: Fix entry-point APIs for 8.3
- 7cca988b: Fix compatiblity with IDA 8.3
- IDAPluginList
- 6d3a4180: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
- inertia_decompiler
- cbae1422: latest
- plugin-ida
- b4bc0ff8: fix: Use markdown image syntax for hex-rays store rendering (#9)
-
🔗 r/Leeds Coffee / Drinks meet up… join us! rss
After the previous thread of someone looking to meet people, I thought what a perfect reason to create a meet up.
I think these days it can be hard to meet new people. even when you have friends it’s always nice to meet new people and find people who want to do things, try things, go places.
I’ve created a WhatsApp group, so can banter / chat in there and then arrange the date and time for the first meet up. It’ll be cool as well for that group to be used to organise other meet ups, walks, activities.
I’m happy to host the first one, welcome and chat to everyone as they walk in. These things can be awkward, but thankfully I’ve been involved with events and language exchanges so know to make them run smoothly.
I’m fortunate to have a cafe (also sells pints and alcohol too…) I’d be more than happy to use the space, and I’ll even throw in a free drink for everyone and some nibbles.
If you want to join, send me a DM and I’ll send the WhatsApp group chat!
submitted by /u/No_Kitchen1337
[link] [comments] -
🔗 r/york Any gaming communities? rss
Hi everyone, I’m Chloe (M24), recently moved to York from Cornwall. Beautiful city and the people are extremely friendly. I was just wondering if anyone knows of any gaming communities in the area for people around my age – I’m struggling to make friends and would love to make friends with some fellow gamers, or if any individual would have room for one more in their circle I’d be happy to join
I usually play a wide variety of games so anything works for me
submitted by /u/Chloeallloving
[link] [comments] -
🔗 r/wiesbaden A man from Wiesbaden built the first guillotine rss
submitted by /u/Happycosinus
[link] [comments] -
🔗 r/Yorkshire Spotted in Guernica, Spain rss
submitted by /u/hillboy286
[link] [comments] -
🔗 r/Leeds Cars constantly parked in cycle lanes / Cycle Superhighway – anything actually being done? rss
Just wanted to see if anyone else has had this issue or knows what can actually be done about it.
I’ve started using the cycle lanes and Cycle Superhighway a lot more recently and honestly really like them – I’ve ended up replacing most of my car journeys around Leeds with cycling.
The problem is there are constantly cars parked in the cycle lanes, especially along the superhighway, which kind of defeats the point and often forces you out into traffic or onto the pavement.
I’ve already emailed the council and CityConnect about it a few times but never seem to get a response.
Is there a better way to report this? Or anyone specific that actually deals with enforcement? Just feels a bit pointless having the infrastructure if it’s not kept clear.
submitted by /u/_testingdude
[link] [comments] -
🔗 r/reverseengineering Reverse-engineering Final Fantasy X (PS3) trophy system with Ghidra rss
submitted by /u/JoshLeaves
[link] [comments] -
🔗 r/Yorkshire Yorkshire, Yorkshire! Spotted in Downtown Toronto rss
submitted by /u/Del_213
[link] [comments] -
🔗 r/york Pointless Anti terrorism barriers being pointless rss
Now I don't mind there being barriers into the city. I really don't. But what I do mind is that they seem to be open at 5pm on a Saturday – when the city is still rammed with people, and you've got taxis flying down Parliament Street, which is full of kids on bikes and tourists. Coney Street is packed, and cars are coming down every 30 seconds. What the hell was the point? They are literally open at the worst possible time. submitted by /u/FoxyStoat444
[link] [comments] -
🔗 r/LocalLLaMA White House Considers Vetting A.I. Models Before They Are Released rss
submitted by /u/fallingdowndizzyvr
[link] [comments] -
🔗 earendil-works/pi v0.73.0 release
New Features
- Xiaomi MiMo API billing and regional Token Plan providers - `xiaomi` now uses API billing, with separate `xiaomi-token-plan-{cn,ams,sgp}` providers. See docs/providers.md#api-keys and README.md#providers--models. (#4112 by @Phoen1xCode)
- Incremental bash output streaming - Bash tool output now appears while commands run instead of only after completion. (#4145)
- Compact read rendering - Interactive `read` output for Pi docs, context files, and skills is collapsed by default and shows selected line ranges.
Breaking Changes
- Switched the built-in `xiaomi` provider from Token Plan AMS to Xiaomi's API billing endpoint, and renamed its `/login` display from "Xiaomi MiMo Token Plan" to "Xiaomi MiMo". `XIAOMI_API_KEY` now refers to the API billing key from platform.xiaomimimo.com. Users on Token Plan should switch to the appropriate `xiaomi-token-plan-*` provider and set the corresponding env var (#4112 by @Phoen1xCode).
Added
- Added three Xiaomi MiMo Token Plan regional providers visible in `/login`: `xiaomi-token-plan-cn` (`XIAOMI_TOKEN_PLAN_CN_API_KEY`), `xiaomi-token-plan-ams` (`XIAOMI_TOKEN_PLAN_AMS_API_KEY`), `xiaomi-token-plan-sgp` (`XIAOMI_TOKEN_PLAN_SGP_API_KEY`). Each defaults to `mimo-v2.5-pro` (#4112 by @Phoen1xCode).
Changed
- Changed `read` tool rendering to collapse Pi documentation, AGENTS/CLAUDE context files, and `SKILL.md` contents by default in interactive output.
Fixed
- Fixed generated OpenAI-compatible model metadata for Qwen 3.5/3.6 and MiniMax M2.7, so those models work through the built-in provider catalog (#4110 by @jsynowiec).
- Fixed Bedrock Claude Opus 4.7 `xhigh` thinking requests by preserving the provider's native effort value.
- Fixed OpenAI Codex WebSocket transport to fall back to SSE when setup fails before streaming starts, and surface transport diagnostics in the assistant message (#4133).
- Fixed OpenAI Codex WebSocket transport keeping `--print` and JSON mode processes alive after the response by closing cached WebSocket sessions during session shutdown (#4103).
- Fixed compact `read` tool calls to render directly and include selected line ranges in interactive output.
- Fixed interactive sessions to exit when terminal input is lost instead of continuing in a broken state.
- Fixed bash tool output to stream incrementally while commands run instead of waiting for command completion (#4145).
- Fixed selector and autocomplete fuzzy ranking to prioritize exact matches.
- Xiaomi MiMo API billing and regional Token Plan providers -
-
🔗 sacha chua :: living an awesome life Emacs Chat 21: Amin Bandali rss
Update: Added a file enclosure so that it can load as a proper podcast.
I chatted with Amin Bandali about Emacs and life.
View it via the Internet Archive, watch/comment on YouTube, read the transcript online, download the transcript, or e-mail me your thoughts!
Links:
- Amin Bandali: a computing scientist, archivist, and activist for user freedom
- bandali's GNU Emacs configuration
- .emacs.d - configs - My configuration for GNU Emacs and other programs
- The People of Emacs - bandali
Chapters
- 0:11 Introduction: Amin Bandali, software developer and free software activist
- 1:06 Aspects of life: notetaking, editing, multiple
- 3:03 Configuration: keeping things simple
- 5:03 user-lisp-directory, site-lisp if you're using an older Emacs
- 6:35 Organizing configuration into modules
- 7:49 early-init
- 9:09 ring-bell-function
- 9:41 performance optimizations
- 10:27 user-lisp
- 11:16 ignoring byte compilation warnings
- 11:58 init-file-debug = --debug-init
- 12:56 Core
- 13:57 no longer using bandali-configure; scoping errors, timing execution
- 17:06 Why not use use-package
- 18:39 Defining multiple keybindings
- 19:48 doric-oak uses emphasis instead of colours
- 20:52 global font scaling instead of the local ones
- 21:39 display-fill-column-indicator
- 22:57 emacsclient for EDITOR and VISUAL
- 23:38 fundamental-mode-hook
- 24:25 indicate-buffer-boundaries
- 26:38 enabling and disabling commands
- 27:42 package-review-policy
- 28:58 getting the Info files from the Emacs source directory
- 29:46 recentf, adding directories
- 31:41 Scrolling
- 32:36 auto revert
- 33:16 Repeat mode
- 34:53 EXWM
- 38:05 Audio setup
- 39:15 keymaps for launching different applications
- 39:55 bandali-call-interactively-insert
- 42:29 workspaces
- 43:50 ZSA Voyager split keyboard, super x as a single key
- 46:28 Keybindings
- 48:08 Media buttons
- 49:45 exwm-input-simulation-keys!
- 51:43 exwm: managing floating windows
- 53:13 exwm: application-specific local simulation keys
- 54:09 binding C-q to exwm-input-send-next-key
- 54:31 Renaming buffers
- 55:38 dunst for notifications
- 56:55 exwm xsettings and responding to screen configuration changes
- 59:03 Slowly getting back into Org mode
- 1:00:01 chat notes
- 1:00:54 Mode line
- 1:01:50 display-buffer-alist
- 1:02:24 TRAMP slowness, maybe disabling VC detection?
- 1:03:42 eat
- 1:05:09 TRAMP completion
- 1:06:55 ffs: form feed slides, ^L
- 1:09:36 Speaker notes
Transcript
0:00 Introduction: Amin Bandali, software developer and free software activist
Sacha: Let me do the thing. Go live. Let's check in. Alright, hello. This is Emacs Chat 21 coming back after a decade of not doing it, so... And today I've got Amin Bandali who's a... Is it seven years now that we've been doing EmacsConf together? Amin: I think so. Since fall 2019. Yeah. Sacha: Yeah, yeah, yeah, yeah. But of course you also do a whole lot of other things. I was looking through your Emacs configuration and there's like translation and other stuff in there. So would you like to start off with a brief introduction of who you are and how and why you use Emacs? Amin: Yeah, sure. Yeah, first of all, hello, everyone. Sorry if I'm looking to the side. This is a new setup. My laptop, which has my webcam, is there, but my main display is here. So I might be looking to the side from time to time. But yeah, that aside, hello.
1:05 Aspects of life: notetaking, editing, multiple
Amin: Yeah, I'm Amin Bandali. I've been, I think, using Emacs since 2014 or 15, so I guess more than a decade now. I'm a software engineer by day, or software developer, slash programmer, slash computing scientist. I'm also a free software activist. I volunteer on a lot of free software projects as well, which Sacha mentioned. I do things around GNU. I volunteer with FSF. I'm a Debian developer, so I try to maintain some packages in Debian. I try to help run EmacsConf from time to time. Hopefully this year I will be much more present. But yeah, that's that. So I first got into using Emacs, I guess, as a programmer tool, like as a text editor. But I've since then kind of integrated it into a lot of other aspects of my life. And I do much more with it, as I'm sure a lot of us do. Yeah, so I use it for kind of note-taking, just any writing, editing purposes, in multiple natural and programming languages. Reading and sending email, chatting via IRC. All of that good stuff.
Sacha: This is the sort of thing that isn't immediately obvious from your configuration. I know you've got your Gnus setup in there and you've got your ERC setup in there, but sometimes when newcomers are trying to figure out, okay, there are all these packages, but how do I use them to get stuff done? That's one of the reasons why we want to do this Emacs chat, so that maybe you can show us some of the cool stuff. We are live, but if you accidentally show something personal, let me know and I can kill the stream within 10 seconds and I think then we can be like, okay, we'll just flush that out and then come back once we've hidden the top secret plans for taking over the world, that sort of thing. Sounds good. Where do we want to start?
3:00 Configuration: keeping things simple
Amin: I'm happy to do it however you like. I can either share my screen, pull up my configuration. Yeah, okay, so let's do that. Sacha: Yeah. If you share your screen sometimes, I think what we did ages ago was we just started walking through the configuration and then sometimes people say, oh yeah, that's really interesting. Let's go and demonstrate that so that people can get a sense of how this actually works. And there were some things in your configuration that I had no idea, like what is FFS? There's like no package. I couldn't find any information about it. But yeah, so your config, if you want to go ahead and share your screen while I fill the air with hand-waving. Amin's config tends to be more on the minimalist side. I think you mostly rely on built-in things with a couple of external packages. You don't even use use-package at all. It's all run-at-idle-time to delay the startup of various things, and then it's all vanilla Emacs as you can get for loading and configuring things. Amin: Yeah, pretty much, yeah. Yeah, so before I continue, quick note, Sacha, if you can make me presenter because I don't have access to share my screen. Sacha: Oh, that would be important, yes.
Hang on a second. Let me see. Okay, here we go. Make presenter. I might as well promote you to moderator while we're at it. There you go. You should now have magic powers. Amin: Thanks. Let's see. Sacha: It's a good thing we're practicing this before EmacsConf so I remember how all this stuff works. Amin: Yep, for sure. Okay, let's see. I think I got it now. Can you see my screen? Sacha: Yes, I can see your screen. Amin: Okay, excellent. Let's see. Okay.
4:58 user-lisp-directory, site-lisp if you're using an older Emacs
Amin: Yeah, so as Sacha mentioned at the moment, my config is kind of very minimalist and kind of conservative by design, in part because I tend to work on a lot of different machines, whether it's for work or volunteering or whatever, and I prefer to use Emacs if I can. So I want my config to be fairly self-contained so I can easily either git clone or rsync it over. Yeah. To keep it simple, I was using package.el for a while for installing and managing my packages, which I don't keep in my configs repository. But then I decided to switch over to very manual package management with the awesome new feature user-lisp-directory of the next upcoming Emacs release, which basically you can give it a subdirectory in your .emacs.d or .config/emacs. And then it'll go through all the Emacs Lisp files recursively, byte compile them, native compile them, all that good stuff, and add them to the load path. And for people who are using existing or older releases of Emacs, there's also site-lisp by Philip Kaludercic, which is kind of the... I guess first implementation of what later became user-lisp and built into Emacs. So you can make it conditional and fall back to site-lisp if you want to be able to use user-lisp on older Emacs but still have your configuration be usable.
Yeah, anyway.
6:32 Organizing configuration into modules
Amin: So I've experimented with like a couple different ways of managing my configurations, like a single giant init file of like four or five thousand lines, which I know is actually not very large by comparison to I think like someone like Sacha's configuration, and also like, you know, split into multiple different files, which has its own benefits. And I've kind of actually converged to the approach that Prot uses. If you actually take a look at my configuration file, you see I've drawn a lot of inspiration from Prot. Having a literate single file configuration, which then all of the Emacs Lisp source blocks get tangled to individual files. So I can maintain a single source of truth and edit it all in one place, but then also easily be able to share individual pieces to people if they want. So yeah, that's kind of the general approach. And I can dive right in. Sacha: Yeah, that's definitely the structure that I've also stolen from Prot. And I like the way that your heading names are all long and descriptive, and you've got everything broken down in detail. So yeah, go ahead and walk us through it, please. Amin: Yeah, sure. Let's see.
7:45 early-init
Amin: So that's a brief introduction, and then I have an early init section for doing the early init file. There's a couple of subheadings here. Actually, let me enlarge the font size a little bit to make it more legible. OK, great. I do a couple of things here like disabling package at startup because I don't use package as I mentioned. I manually install and update my packages as git submodules in my configurations repository.
Amin: I set load-prefer-newer to t to make sure that I never load any stale code. For example, I might edit some Emacs Lisp file by hand and forget to byte compile or native compile it. And this tells Emacs to basically just use the version of these three variants that's the most recent. Yeah. Nothing super fancy here.
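The two early-init settings just described boil down to a couple of lines. This is a sketch of the pattern, not Amin's actual file; `package-enable-at-startup` is the standard variable behind "disabling package at startup":

```elisp
;; Early-init settings described above: packages are managed manually
;; (as git submodules), and edited .el files are never shadowed by
;; stale byte-compiled or native-compiled versions.
(setq package-enable-at-startup nil)
(setq load-prefer-newer t)
```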
Amin: I turn off a couple of things that I find a little bit distracting, like the menu bar or toolbar. Although I do say here that for people who are new to Emacs, they're actually super helpful. Sure, it's a little bit of visual clutter, but in the beginning, it's really, really helpful to help you orient yourself of what mode you're in, what tools you have available at your disposal. And even someone who's been using Emacs for more than 10 years, I also use it sometimes when I'm like... just starting to use a new mode. So yeah, good stuff.
9:06 ring-bell-function
Sacha: I was very amused by the comment on the... "I don't like getting jumpscared out of my chair." You turned off the bell. Amin: Yeah, because that actually used to happen when I first started using Emacs. Like when I would, I don't know, I don't even remember when it bells or rings, but maybe if you like quit like with C-g or like try to backspace into like delete where there's no more characters to delete, so it rings a bell, and it's very like can be jarring, so yeah, I turn that off.
9:40 performance optimizations
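The startup garbage-collection tweak discussed here follows a common pattern; this sketch uses the 30 MiB figure Amin mentions and restores whatever the default was, rather than hard-coding a value:

```elisp
;; Raise the GC threshold during startup, then restore the original
;; value once init has finished (the pattern described in the chat).
(defvar my/default-gc-cons-threshold gc-cons-threshold)
(setq gc-cons-threshold (* 30 1024 1024))
(add-hook 'after-init-hook
          (lambda ()
            (setq gc-cons-threshold my/default-gc-cons-threshold)))
```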
Sacha: Yeah, and then you've got a whole bunch of things where you set some variables to nil temporarily to make it faster, so that's in your startup in garbage collection. Amin: Exactly. Empirically, there is no hard and fast science behind this. I experimented over the years. I'm pretty sure I believe the default, for example, the garbage collection cons threshold is about eight megabytes. I tried increasing that a little bit to see how much, if I increase it, to what point will it make my startup faster? And I found this 30 megabytes or mebibytes to be kind of a sweet spot. So I bumped that up. And then after Emacs has finished initializing, in the after-init-hook, I just restored the defaults.
10:25 user-lisp
Amin: And then, yeah, this is the bit with the user-lisp-directory that I was talking about. Awesome stuff. So you can basically designate a directory. For example, in my configuration, it's just a lisp directory. And then on startup, Emacs will go through and byte compile, native-compile if necessary, and then add all of that stuff to the load path automatically. So you get that. Yeah, and then this is the bit about site-lisp that I was talking about. So if you want to use user-lisp, but you're still using older Emacs versions that you maintain, you need to maintain backward compatibility in your config. This is how you do it, for example. So you just, yeah, add it to load-path, require it and then call prepare-user-lisp. That's about it.
11:14 ignoring byte compilation warnings
Sacha: I'm picking up that tip about using the ignore directories. I'm getting by with just ignoring all of the byte compilation output, but it would be nice to just say, you know, that stuff is test. I don't need to worry about it. Amin: Right, right. Thanks. Yeah, I was also doing that. I actually have it as a comment to suppress warning types, like byte compilation, but I was... I plan on working on some packages, whether my own or others, and it would still be helpful to get those warnings, so I keep them enabled. It's still a bit annoying. I still get some of them when I launch Emacs, but I don't restart or launch Emacs as frequently, so it's pretty bearable.
11:55 init-file-debug = --debug-init
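The init-file-debug trick described in this section is a two-variable setting, per the discussion:

```elisp
;; `init-file-debug' is non-nil when Emacs was launched with
;; --debug-init, so this turns on the debugger exactly in that case.
(setq debug-on-error init-file-debug
      debug-on-quit init-file-debug)
```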
Amin: Yeah, and then I have the main init file. And there's not much in it. It's just the debug-on-error and debug-on-quit. So the debug-on-error thing, I set it to the value of init-file-debug. And if you look at that, the help for this variable, basically if you pass or launch Emacs with --debug-init, this variable will be true. So yeah. Sacha: I did not know that. Cool. Amin: Yeah, it's pretty helpful. I think, if I'm not mistaken, I took this from John Wiegley's .emacs, but I can't remember for sure. It's been years. Yeah, it's pretty nice. And then here, I just set my name and email address. And very early I set a custom file to keep all of that stuff separate from my .emacs. I don't want it mixing in.
12:53 Core
Amin: And then pretty much the only other thing that's in my main init file is just to require and load these different modules or packages of my configuration. I have these as actual packages or as actual features. They provide themselves. And that's just something that I've found straightforward enough to do. I know, for example, Prot uses a dual approach. He has some of his configuration that's more readily usable, available as actual packages. And then the other ones, it's just Emacs Lisp code. It's not actual packages. But for me, I just keep it simple. Everything has packages and that's about that. Sacha: Fantastic. Let's dive into some of those configuration modules. Amin: Sure, let's see. Yeah, so there's this core thing which gets included in all of my other files.
13:53 no longer using bandali-configure; scoping errors, timing execution
Amin: I wrote a bandali-configure macro shamelessly based on prot-emacs-configure, which is what Prot uses, and it basically is a way of, kind of similar to use-package, for like wrapping a bunch of relevant like Emacs Lisp code all together. It has the benefit, if you use it, if there is an error in that block or in the body basically, then it won't crash everything. That body will just get ignored and we display an error. And that's also the main reason that Prot uses it. The one thing that I added extra to mine, which I took with inspiration from Eshel Yaron's esy/init-step, is to wrap it up in basically time the execution of each of these blocks, which can be pretty helpful to help you see, okay, which part of my configuration is particularly slow. Usage examples. I just have it here. You can either basically pass it like a symbol like thing or you can also pass in a string as the first argument. And this is what will be displayed when you display a list of the evaluation times for all of these blocks in your configuration.
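A hypothetical sketch of such a configure macro, combining the two properties described (error isolation plus timing); the real bandali-configure and prot-emacs-configure differ in their details:

```elisp
;; Wrap a block of config so an error inside it is reported rather
;; than aborting the rest of init, and report how long it took.
(defmacro my-configure (label &rest body)
  `(let ((start (current-time)))
     (condition-case err
         (progn ,@body)
       (error (message "Error in %s: %S" ,label err)))
     (message "%s took %.3fs" ,label
              (float-time (time-since start)))))

;; Usage: (my-configure "gnus" (setq gnus-select-method '(nnnil)))
```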
Amin: Yeah, and then I have a neat little function here like configure-report-times that will report these times, whether in the order that it's encountered them, or you can have it sort by fastest to slowest, slowest to fastest, blah blah blah. Sacha: You mentioned you're no longer using this. Is it because you wanted it to be easier to copy and paste your code? What got you to shift back to the regular vanilla type of configuration? Amin: Right, as neat as it is, I didn't find it super useful. For one thing, because I don't add or remove a ton of stuff to my Emacs configuration regularly, so if there is an error, it wouldn't cause an issue for the rest of my configuration. I didn't really find that very useful. And then my other potential concern is that the way I was structuring things, I would put all of the configuration, let's say for GNU, in one of these blocks. But I wanted to be able to break that down into, for example, Org Mode sections more easily. So far, I just decided to not use it. I know I could technically break those down into smaller blocks, but I haven't done that yet. Sacha: Ihor says, this configure macro looks a lot like good old use package, which you're not even using in the rest of your config. And I hear you about wanting to be able to split things into smaller blocks with more explanations in between them. So in my config, yeah, sure, I've got the use-package there to do the ensure and all that stuff. But I also have with-eval-after-load because I still want, you know, the links and the screenshots in between.
17:02 Why not use use-package
Amin: Right. Yeah, exactly. use-package is awesome. I have used that in the past, especially when I was using the straight.el package manager. It pairs nicely with it.
But yeah, since then, I found it a little bit like too magical for my tastes, kind of along the lines of declaring an init file bankruptcy at some point. I really wanted to understand every single line that I have in my Emacs configuration. And at the time, I didn't know a whole lot about macros or wasn't very well-versed with them. So I just ditched it in favor of simply using, as you mentioned, with-eval-after-load. And then that causes all that code to be basically delayed, not evaluated immediately, but when that package is loaded. And then as to when to pull that package in, depending on if I want it right from the get-go of my Emacs starts, then I would require it. Otherwise, I add this, as you also mentioned earlier, this kind of timer thing where if Emacs is idle for, I don't know, 0.2 seconds or 0.4 seconds, then go ahead and require this package. Sacha: Ihor has a tip in the chat. Of course, Ihor has an Org way to do this. He uses use-package whatever config and then he has a noweb reference to the Babel blocks. Then he just says :tangle no on the source blocks so that they don't actually get repeated. Anyway, you can look at it later when you go through. I'll send you the comments or whatever. But show us how you're actually configuring things since you're not using this.
18:37 Defining multiple keybindings
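A hypothetical sketch of a bandali-define-keys-style helper in the spirit of what this section describes (the real macro may differ); unlike kbd-string-only helpers, it accepts raw key vectors too:

```elisp
(require 'cl-lib)

;; Bind successive KEY/DEF pairs from PAIRS in KEYMAP, expanding to
;; plain `define-key' calls so any key representation works.
(defmacro my-define-keys (keymap &rest pairs)
  `(progn
     ,@(cl-loop for (key def) on pairs by #'cddr
                collect `(define-key ,keymap ,key ,def))))

;; Both representations work (bindings here are illustrative):
(my-define-keys global-map
  (kbd "C-c f r") #'recentf-open-files
  [f5] #'compile)
```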
Amin: Then I just have another quick macro thingy here, bandali-define-keys, which wraps around Emacs's define-key. It affords me the convenience of defining multiple key bindings, and Prot's version of this (I think it's prot-emacs-keybind, or something like that) imposes the limitation that the keys should be valid strings that can be passed to the kbd function, which is very fair and valid, but I wanted to not impose that, to keep the flexibility of using define-key directly. The consequences of that, as we can see, is we can pass in the old representation of key bindings, like the vector or whatever syntax, which Prot's doesn't support by choice, whereas mine does. Let's see. For example, let's look at the Bandali theme, which is all about... The appearance, I guess, of Emacs.
19:45 doric-oak uses emphasis instead of colours
Amin: Yeah, so I just have a conditional block where, you know, if you're in a graphical environment, I'll just go ahead and load Prot's Doric themes, specifically Doric Oak, which is what we're seeing right now. I'm using, it's very beautiful, it's very subtle, and it uses emphasis, bolding and stuff to draw your eye to something instead of using a million different colors, which I find pretty nice. Yeah, and then for example here I set up some fonts. I use this Sahel font for Persian and Arabic text. I set a color emoji font here and this is like we get a kind of preview of what I do. It's like with-eval-after-load faces and then blah blah blah. Sacha: Ihor would like to point out that with-eval-after-load is also a macro that calls another macro. So I'm just going to mention it because it's there. These are your fonts. This is your theme. This is great because everyone always asks, what theme is this? What font is this? All right.
20:49 global font scaling instead of the local ones
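In Emacs 29 and later the global variant exists as a command, so the swap discussed in this section might look roughly like this (the key choices are illustrative, not Amin's actual bindings):

```elisp
;; Make the usual text-scale keys act on all frames, mode line
;; included; `global-text-scale-adjust' is the Emacs 29 command.
(keymap-global-set "C-x C-+" #'global-text-scale-adjust)
(keymap-global-set "C-x C--" #'global-text-scale-adjust)
(keymap-global-set "C-x C-0" #'global-text-scale-adjust)
```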
Sacha: I like your text scaling tweaks that you're just about to go into. You've changed the global mappings. Amin: Yeah, yeah, yeah. And I actually took this from Prot as well. And it makes a lot more sense. So by default, this, C-x C-+, -, blah, blah, blah, it only scales the text for the current buffer only. But in newer version of Emacs, in Emacs 29, they also added commands to adjust this globally, including the mode line and all that stuff, which is usually what I want, for example, in this presentation or when I'm sharing my screen right now. It scales everything up globally. So yeah, I just swapped these to be the default, and then I add keybinds for the just local variants in case I need to use that. Yep.
21:37 display-fill-column-indicator
Amin: And then here I have display-fill-column-indicator. I don't know, maybe this is just me, but sometimes I'm kind of OCD about keeping my text lined up at exactly, for example, the 70 characters column. I care a lot about that, especially if I'm writing code or text that I want to also visually look nice. And I enable this. And let's see, I enable it for prog-mode. So yeah, I guess if I, for example, do this... This little thin line that we see here, that's the display fill column indicator. I used to have it globally enabled, but then I found that a bit too much, so I just enable it with a hook in the modes that I want. Sacha: Yeah, and the theme makes it very subtle. It's just there as a reminder, don't go beyond this line. You can if you really want to, but just try not to. Amin: Yeah, exactly. And then my essentials... This is where I configure a lot of key behaviors of Emacs, all built-in stuff for the most part, or things that are key to my workflows. For example, I always want to start with a scratch buffer.
22:53 emacsclient for EDITOR and VISUAL
Amin: Start the Emacs server if it's not running. And this is very useful, very helpful so that then you can call into an existing Emacs process with emacsclient and have it edit a file. I don't use it for anything fancy just yet. I believe Prot also mentioned in his video with you, Sacha, that he uses it for things like org-capture to spawn a new buffer in his existing Emacs session and things like that. You can do pretty cool things with it. But yeah, I just use it for being able to easily use my Emacs as the EDITOR and VISUAL text editors. So yeah, this sets that up.
23:37 fundamental-mode-hook
Amin: Adding a fundamental mode hook. Again, I took this from Prot. Sacha: I was surprised by that because I was like, oh, there isn't a fundamental-mode-hook? Okay, that makes sense now. Amin: Right, right. Yeah, there isn't a fundamental-mode-hook by design. But I still, in the past, have found that I wanted that. For example, for this display-fill-column-indicator, when I had it enabled everywhere, I was like, it would be nice if I could at least disable it for fundamental mode. And at the time, I didn't have this. I added this just recently. So if I decide to go back to using something globally, but I don't want it in fundamental-mode, then I can disable it using this. Yeah, and then some standard stuff like I prefer spaces and a tab width of four characters.
24:23 indicate-buffer-boundaries
Amin: Visually indicate buffer boundaries. This is a little bit hard to see right now, but here at the bottom left
Amin: you see a little down arrow
Amin: and then the little top arrow. And... Let's see if I can. Sacha: Oh!
Amin: And also here, for example, when it all fits in the view. Sacha: Huh, that is cool. I was looking at that. What does it do? And so that tells you, you can still scroll up or you can still scroll down, and you don't have to look at the scroll bar to see where you are. It just says there's more there. Amin: Yeah, exactly. Yeah. And it also helps distinguish when there's a newline character at the end of the file or not. So here in this buffer, there is.
Amin: But if I delete that, you see this indicator here changed shape. But if I go back and add the new line again. So yeah, that's also been very helpful for me because I edit configuration files and some of these pieces of software are sensitive to having a newline at the end of the file. So yeah, it's very helpful and useful for that. Sacha: I would not have guessed that from the very short line in your config that turns that on. It's one line, setq-default indicate-buffer-boundaries 'left, and yet it adds this nice little nuance to the way that fringe looks. Amin: Right. Yeah, absolutely. Perhaps I should expand more on it at some point later to explain these things. But yeah, just this one line. Sacha: May I recommend screenshots? Amin: Yes, you may, for sure. Yeah, I will definitely do that as well, because I'm also a bit of a visual person. I like seeing screenshots and videos, so yeah I'll take that to heart and do that for my own configuration as well. Sacha: When I post this, I'll probably... I figured out how to have the transcripts and then screenshots embedded into my transcript. I'll generate it automatically from the subtitle file. Our EmacsConf transcripts are going to get so fancy next year. But you can pull those screenshots and drop them into your config. It'll be great. Amin: Nice. Yeah, for sure. Sounds good.
26:36 enabling and disabling commands
Amin: And then here, I just enabled some of these commands that are disabled by default. So yeah, it's useful, especially narrow-to-page, for example, or narrow-to-region. These are commands where Emacs disables them by default so that newcomers don't accidentally hit them and get very confused by what just happened. It doesn't disable them for good. It just basically prompts you for confirmation. Are you sure you want to run this command? I'm sure, at least about these commands. So I just enabled them. And then something like, for example, overwrite-mode, which I never use and I don't want to accidentally enable. I just put it disabled so that if I do accidentally hit the keys, which might be, I don't know, something insert or whatever, then it will prompt me to make sure that I meant to do that. Sacha: That reminds me, I should probably turn that off for myself and then you get a whole new keyboard shortcut you can use too. Amin: Right, yeah. Let's see.
27:37 package-review-policy
Amin: Yeah, I have just one line setting for package.el. In Emacs 31, we will be getting a package review policy which is very helpful. So if you do use package.el for installing packages from GNU ELPA, NonGNU ELPA, MELPA or whatever else, you can enable this, and then whenever you update your packages, you'll get a diff of what changed in this new revision of the package that you're downloading and you're about to enable. And you can presumably say yes or at least see what's going on, which I find helpful. Sacha: But you're not using packages, you mentioned, so you're just checking everything out and then you're just git pulling whenever you feel like it. Amin: Yeah, so right now I'm using git pulls and git submodules, very manual. I put this here because I think it's generally a very welcome change and awesome new feature that I want to spread the word about. So maybe someone who's looking at my config, they use package and that's perfectly fine. So this is just here to spread the word about it mainly, I guess. And if I start using package at some point myself in the future, then I will have this enabled. Let's see.
28:52 getting the Info files from the Emacs source directory
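The Info setup described in this section can be sketched like this. The source path is illustrative; Amin extends `Info-directory-list`, while this sketch adds to the default list so Info's own initialization is preserved:

```elisp
;; Make manuals from an in-tree Emacs build (make, without make
;; install) findable; replace the path with your own checkout.
(add-to-list 'Info-default-directory-list
             (expand-file-name "~/src/emacs/info"))
```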
Amin: Very quickly, here I extend Info-directory-list. I like to, at least on some of my machines, use Emacs that I built from source directly in the source repository of Emacs. Just after doing make, I don't run make install, even though it's very easy to do that. You can install to a custom location by providing --prefix when you're configuring Emacs. Sometimes I just find it more convenient for me to not do that and just run make and then exit and reopen Emacs. And for that kind of a setup, I just extend the info directory list to include the info subdirectory of the Emacs source repository so that the built-in Emacs info manuals will be available to me.
29:45 recentf, adding directories
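The recentf behaviour described in this section (track directories too, but never the home directory) might be sketched as follows; the hook choice and function name are illustrative, not Amin's actual code:

```elisp
(require 'recentf)
(recentf-mode 1)

;; Record visited directories in recentf, skipping the home directory.
(defun my/recentf-add-directory ()
  (let ((dir (expand-file-name default-directory)))
    (unless (string= dir (expand-file-name "~/"))
      (recentf-add-file dir))))
(add-hook 'dired-mode-hook #'my/recentf-add-directory)
```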
Amin: And then I use recentf for tracking recently visited files. I bind it to C-c f r e for me to get a pop-up completion for visiting a recent file, it has completion. So if I hit TAB here, for example, we can see some of these files or directories that I visited recently. Sacha: I see. And then you're adding the directory to it. So what does that let you do? Because I'm assuming you're already in there in the directory. But how does that change your recentf? Amin: Right. So I need to think to remember this, but I think the point of this was that if I open a project in VC or in Dired, then I would like that directory to also get added to my recentf files list, because I think by default, recentf only includes files, not directories. Sacha: You're in it, you start up Magit or whatever, and then you move on to something else, but you want to be able to easily go back to it. Amin: Yeah, for example, I like to keep my recently visited directories in recentf as well. Because that's one of the main ways I jump between projects and stuff, even though there is literally a built-in Emacs project mode, which I still use. The only thing that I have here is... I don't want to add my home directory to the recently visited list, so the only thing that this function does is to skip that if I'm opening the home directory. That's about it.
31:38 Scrolling
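The gentle scrolling described in this section comes down to the two variables Amin names:

```elisp
;; Scroll one line at a time instead of jumping and recentering;
;; a value above 100 disables the recentering jumps entirely.
(setq scroll-conservatively 101
      scroll-preserve-screen-position t)
```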
Amin: And then here I configure mouse and scrolling behavior. So I want Emacs to scroll very gently, one line at a time. I think the default is that when you reach the end of the page, it'll jump half a page down and then recenter. I don't remember the default behavior because I don't use it very much, but yeah, this basically makes it very predictable. For example, when I reach the edge of the page here and I press C-n, it'll only scroll one line at a time, instead of jumping and then doing something like this. Sacha: Oh yeah, mine does! Mine doesn't do that, so it does that jumping thing. I see what you mean here. Interesting. Amin: Yeah, so you can tweak that with scroll-conservatively and then scroll-preserve-screen-position, I believe.
32:28 auto revert
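And the auto-revert setup discussed here is a single global mode:

```elisp
;; Refresh buffers automatically when their files change on disk.
(global-auto-revert-mode 1)
```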
Amin: Yeah, and then I use autorevert, which is pretty helpful. So this will have Emacs watch, for example, files that are open in your buffers. And if they change on disk, Emacs will automatically refresh the buffer so that you get the latest version. The cool thing is you can press undo in one of these files that's been autoreverted so that you get the revision that was there right before the change. So I've used that sometimes as well. Sacha: Yeah, and sometimes autofollow also is nice for log files and things like that. But yeah, autoreverting is great. Amin: Yeah, for sure.
33:14 Repeat mode
Amin: Repeat mode is something that I've only recently started using, especially with my Emacs EXWM setup, using Emacs as my window manager. For example, if I hit C-x o, we see here in the echo area where it says repeat with o or capital O. So I can now just press o instead of typing C-x o, C-x o to do that multiple times. Keymaps that support this can basically declare that they want to be repeatable. And then once you invoke one of the keys in those keymaps, you can repeat it with just that single character. For example, for my setup, I have that with my EXWM workspace switching keys. So I can easily go to the next and previous workspaces, many of them at a time, by just pressing P and N instead of doing the shortcut multiple times. Sacha: And actually, if you don't mind jumping ahead, the EXWM part of your config is fairly complex, and I think not a lot of people have a lot of experience seeing EXWM in action. And I don't know whether you're comfortable sharing you switching around to different workspaces, but if that is something that you can do, how are you doing all this awesomeness? I'm still too scared to use EXWM myself. Stability. But that's a me problem, not an EXWM problem. 34:51 EXWM
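The repeat-mode mechanism works by attaching a `repeat-map` symbol property to commands (Emacs 28 and later). A minimal sketch of the mechanism; note that recent Emacs versions already ship such a map for `other-window`, so the names here are purely illustrative:

```elisp
(repeat-mode 1)

;; After C-x o, a bare "o" keeps cycling windows until another key is hit.
(defvar my/other-window-repeat-map
  (let ((map (make-sparse-keymap)))
    (define-key map (kbd "o") #'other-window)
    map)
  "Keymap used while repeating `other-window'.")

(put 'other-window 'repeat-map 'my/other-window-repeat-map)
```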
Amin: Yeah, EXWM was pretty awesome. I used it back in 2018, '19 for a while, and then I kind of moved on to Sway and Wayland. But I don't know. It's something that I feel like once you try it, you want to keep going back to it. So recently, this past month or so, I decided to give it an earnest try and try to actually address any pain points that I've noticed. So it's much more usable for me now, and I'm sticking with it for now. I'm not a Wayland hater, but I'm just saying, at least for now, I'm using EXWM. And I'm happy to talk about it. Sacha: OK, what do you love about your setup for that one? Amin: EXWM? Sacha: Yeah, yeah. Like, you're doing a lot of rename buffers. Yeah, yeah, yeah. Amin: Right. Yeah, let me think. There's a couple of things. So, for the longest time, my Emacs EXWM configuration used the super key as a prefix, which is the Windows [key] or the one with the logo, basically, to switch workspaces, launch applications and such. And at least the way that EXWM is right now... like, the way you have to add those global key bindings kind of slows down the EXWM startup. And I had many such key bindings.
Amin: So one thing that I did kind of recently is to define a prefix map here, like bandali-prefix-exwm-map. So I bind all of the keys and commands that I want here, and then this helps me really minimize what I'm telling EXWM, which is here. For example, this is how you set global keys with EXWM, and I just point it to my prefix map. C-c x and then any of those letters and functions that we saw. That's kind of annoying. I still use the super key here, but I have it s-x and s-,. On the left-hand side of my keyboard, X is right next to super, so I can hit it in one go with one motion, almost as a single key, with these two fingers. On the right side of my keyboard, I don't have a super key, but I have a control key that I remapped to super. On the right side, I do s-, with these two fingers. It's still very convenient for me to invoke those commands. And pairing this up with repeat mode, as we can see just here, actually, I can hit s-, and then P, N, or H, J, K, L many times to switch workspaces or shift focus to different windows and stuff without having to hit that kind of annoying s-x or s-, repeatedly. Yeah. Sacha: That sounds really cool. I should look into that. Sorry, quick aside. 38:03 Audio setup Sacha: @blaiseutube would like to compliment you on your awesome audio setup. It sounds like you're in the room with him. Apparently, I sound like I'm on speakerphone, but your audio setup is top-notch, apparently. But that looks like a Blue Yeti, so I have to find out what's going on. What microphone are you using? Amin: It is indeed a Blue Yeti. Sacha: Yeah, yeah. So I just have to ask him: okay, what kind of boom mic? Anyway, we'll do that all offline because it's not Emacs related. Amin: Yeah, it's just the Blue Yeti. Yeah, I turned down the gain. I used to have the gain higher, but then it picks up more noise from around the room or around the house. So I turned down the gain a lot and then I get close to the mic so that it only captures my voice. Okay.
Sacha: I'm gonna need the boom. Otherwise, I'm squished into that corner. All right. So you were doing repeat-map before I said oh, let's talk about EXWM because you've got cool stuff there. Amin: Yeah, and I can continue talking about the EXWM. There's a lot here. 39:10 keymaps for launching different applications Amin: I have, let's see, s-, SPC. I bind it to async-shell-command to use as my simple, little, dmenu-thing for launching applications.
Amin: Some of these things, like browsers, I still do them frequently enough, and I use different browser profiles. So I just define a new keymap so I can basically one-shot launch Chromium or Firefox in a specific browser or an incognito window and such. So yeah, I just do s-x b and then, for example, c to launch Chromium and all that stuff. So I found this pretty convenient. 39:49 bandali-call-interactively-insert
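A prefix keymap for one-shot application launching, in the spirit of what's described above (the key choices, map name, and process invocations are illustrative, not Amin's):

```elisp
;; C-c x b c starts Chromium, C-c x b f starts Firefox, and
;; C-c x b p starts a Firefox private window.
(defvar my/browser-map (make-sparse-keymap)
  "Prefix map for launching browsers.")

(define-key my/browser-map (kbd "c")
  (lambda () (interactive) (start-process "chromium" nil "chromium")))
(define-key my/browser-map (kbd "f")
  (lambda () (interactive) (start-process "firefox" nil "firefox")))
(define-key my/browser-map (kbd "p")
  (lambda () (interactive)
    (start-process "firefox" nil "firefox" "--private-window")))

(global-set-key (kbd "C-c x b") my/browser-map)
```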
Amin: Speaking of key bindings, before I get down this, let's see if I can find... C-c h. I think this is just before my EXWM setup. I'm pretty proud of this. I love this. It really goes to show how awesome and extensible Emacs is. Let's see. So as we know, these various help commands and describe commands are under the C-h prefix. But some of them are not bound, for example, find-library or describe-face. Some of these I use pretty frequently. I was really having trouble coming up with descriptive-enough or short-enough keybindings for all of them. I put some of them here, for example, like C-c f l for find-library. But I can't do that for all of them. What I did was just do C-c h a or C-c h d. What this will do, if I show that, is basically open up M-x, fill in describe-, and then I can just type, for example, face, and that's it. So it basically opens up the minibuffer for me, pre-fills it with the string that I want, and I can type what it is that I'm looking for. And I found this to be better than trying to bind a million different keyboard things for describe this and that, apropos this and that, find this and that. So yeah, and the way that we do that is to just use minibuffer-with-setup-hook, and you just have a little lambda to insert the string that you give it, and then you invoke it. Sacha: Yeah, this is pretty cool. When I saw that in your config, I was like, I'm going to steal that. Pre-filling the minibuffer but still letting you do stuff with it, it's such a powerful thing, not just for completing the command itself, but even for when you're using the command and you want to do something with the input first. You don't want to send it in and submit right away. You want to actually do something with it after you insert it. So great tip. Amin: Yeah. Thanks. Yeah, it's pretty useful. It's pretty nice. Yeah. And then back to the Emacs or EXWM stuff. So before I had... I used to... yeah, sorry, go ahead.
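The pre-filled M-x trick is a small wrapper around `minibuffer-with-setup-hook`; a sketch of the idea (the `my/` function name is hypothetical, not Amin's):

```elisp
(defun my/execute-extended-command-with (string)
  "Open M-x with STRING already inserted in the minibuffer."
  (minibuffer-with-setup-hook
      (lambda () (insert string))
    (call-interactively #'execute-extended-command)))

;; C-c h d prompts with "describe-" filled in; type "face" RET for
;; describe-face, and likewise for apropos-, find-, and friends.
(global-set-key (kbd "C-c h d")
                (lambda () (interactive)
                  (my/execute-extended-command-with "describe-")))
```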
Sacha: Sorry. I forgot whether I was muted or unmuted. Amin: Okay, no worries. 42:26 workspaces
Amin: For the longest time, I had 10 default EXWM workspaces on startup, and that can slow things down a little bit. So I found that, okay, I don't really use all 10 workspaces always. So I set it to five. So I get five workspaces initially. But I still bind keys here. Like if we go down. Let's see. Here. So here, I define those keys all the way from, let's say, 0 to 9, for all 10. And then if I try to switch to a workspace that doesn't exist, then EXWM will just go ahead and create it for me. Yeah, so I found that pretty cool. You can create workspaces on the fly. Yeah. Sacha: Yeah, and I saw that it moves your current window there, too. So that's just like, OK, let's move it to workspace number two or whatever. Very cool. Amin: Yeah, yeah, yeah. I have keys, or convenience keys, for moving some window to some workspace. Yeah, it's nice. Let's see. Let's see. Yeah. So these are just my key bindings. I use hjkl here for switching windows. 43:46 ZSA Voyager split keyboard, super x as a single key
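The create-workspaces-on-the-fly setup can be sketched like this; `exwm-workspace-switch-create` and `exwm-input-set-key` are real EXWM functions, while the loop and key choices are my illustration:

```elisp
;; Start with five workspaces; s-0 through s-9 switch to (and create,
;; if needed) workspaces 0-9.
(setq exwm-workspace-number 5)

(dotimes (i 10)
  (let ((n i))  ; fresh binding so each closure keeps its own index
    (exwm-input-set-key (kbd (format "s-%d" n))
                        (lambda () (interactive)
                          (exwm-workspace-switch-create n)))))
```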
Amin: I also have a ZSA Voyager split ergonomic keyboard. I can basically customize it infinitely. For example, I don't really have a super key on the first layer. What I have is a key that will do the s-x thingy, basically my prefix. So the last missing piece is that if I'm at home and if I have this keyboard with me, then I just hit one key and then that's it. I'm in my prefix. But even if not, on the laptop, the s-x or the super comma are still easy enough for me to hit with one hand. Sacha: Now I'm jealous and I definitely want to assign my prefixes to their own keys. Very tempting. I've started using the numpad because my laptop has one. I only use the numpad rarely, but we all need more keys. Amin: Yeah, ergonomic keyboards are pretty nice, especially these ones. For example, the ZSA ones where you can put QMK on it, the QMK firmware. You can define keys in a C file. I can actually show that. Let's see... QMK Firmware, Keyboards, ZSA, Voyager, Bandali, and then keymap.c. Sacha: Is this in your repository somewhere? Amin: Right. It's in a different repository, but it's still on git.kelar.org next to my configs repository. You can find this as well, but if I go smaller... Yeah, you can define keys here and have different layers, like the base layer. And then you can define a key to switch between different layers and put some of the keys there anyway. So yeah, it's a whole rabbit hole in and of itself. Prot also uses a split ergonomic keyboard. It really does help if you're typing for long periods of time. I actually had these for a while, and I wasn't using them too much, but I started slowly getting some pain in my wrists and here. So I was like, okay, I have the keyboard. Might as well put it to good use, and I've started using it. 46:26 Keybindings
Sacha: Okay, so most of your keyboard shortcuts come off that kind of s-x or C-c something, and then you have a long prefix sequence, and you just remember everything, or you pre-fill some of it and then fill in the rest of the command. Amin: Pretty much all my window management related keys are on this s-x prefix that I'm showing here. And then I have a few other ones, which I think I showed earlier. Is it this one? Anyway, I bind a few general keys outside of the s-x thing, like C-c e i. For example, I have C-c e e for eval-last-sexp. I do that a lot, so it's easy to hit that. Making frames or deleting frames. Sacha: I love how Emacs uptime is something you use frequently enough that you have a keyboard shortcut for it. Amin: Yeah, of course. I mean, I'm sometimes curious to see how long my Emacs session has been running. To continue with the EXWM stuff, let's see. This is just some keybindings I define here. It's all Emacs Lisp, right? It's amazing. You can mapc over whatever sequence and create keybindings like that. Only with Emacs can we do things like that. I just love it. Let's see. 48:05 Media buttons
Amin: I still keep these three other keys for raising and lowering the volume and toggling mute off of that prefix and just directly on my keyboard, hitting it directly in the exwm-input-global-keys because I do that very, very frequently. But I also have scripts that I can invoke. I should do keycast. So yeah, I can invoke the prefix with semicolon. I can set my volume here, adjust it here, type in what volume I want, or with the single quote, I can enter a value for the screen brightness. I like these things to be exact depending on the lighting in the room. I have preferred brightness values of 50 or 12 or 10 that I manually adjust. I guess it's a poor man's version of having something with a light sensor that can pick up and adjust automatically. I do it manually. Yeah. Sorry, you just muted yourself again. Sacha: You're just probably this close to writing the Emacs Lisp that takes your webcam image and then adjusts your light. But I think Prot was also saying he likes to do the lighting changes manually as well because warmer colors versus cooler colors and all of that stuff. Anyway, so you have all these buttons that EXWM listens to and it can launch various things for. That's a lot of things. Amin: Yeah, those are pretty cool. 49:43 exwm-input-simulation-keys!
Amin: EXWM has this lovely feature called input simulation keys, where you can basically use it to bring Emacs key bindings to other applications like Firefox or whatever. And yeah, it's mind-blowing when you try it for the first time. For example, I bind C-b to just hit the left arrow on the keyboard. And it does that. So I can define all of these commands that I'm using or used to using in Emacs, so I can get them in Firefox or other applications as well. Realistically, it's mostly Firefox. It's the only other program that I spend any reasonable amount of time in outside of Emacs. Sacha: Let me point out this very important one that you have there. Under selection, cut, copy, paste, I see a C-w. Input simulation keys. So this is for all the people who have accidentally closed their browser tab while trying to copy text. This is how you solve that problem. Use EXWM and use EXWM input simulation keys, and you don't have to accidentally close your browser tabs again. @blaiseutube asks, hey, what about time since last save? Or do you have some kind of autosave magic? You know, in reference to the uptime thing, right? You have this thing that shows you... Amin: I don't think I have anything for autosave, but I have this habit of... I save everything pretty regularly. Yeah, so I've never really needed that feature, but I'm sure Emacs has something where you can, at the very least, do a very dumb, simple implementation: has it been idle for one minute, then just do a save-buffer. You can roll your own. But I don't have anything. Sacha: All right. I'm getting really tempted now to try out EXWM, even if it's just for those global keyboard remapping things. 51:39 exwm: managing floating windows
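A representative `exwm-input-simulation-keys` setting, including the C-w cut binding Sacha highlights (an illustrative subset, not Amin's exact list):

```elisp
;; Translate Emacs-style keys into what X applications expect.
(setq exwm-input-simulation-keys
      '(([?\C-b] . [left])      ; move by character
        ([?\C-f] . [right])
        ([?\C-p] . [up])
        ([?\C-n] . [down])
        ([?\C-a] . [home])
        ([?\C-e] . [end])
        ([?\C-w] . [?\C-x])     ; cut -- C-w no longer closes the browser tab
        ([?\M-w] . [?\C-c])     ; copy
        ([?\C-y] . [?\C-v])))   ; paste
```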
Sacha: How is it for windows that you've got to have floating? I feel like it's very good at handling tiling things, but how is it for apps that really want a floating window? Amin: Right, yeah, so you can toggle any window to be floating or not, and you can also... actually, we're just looking at it here: exwm-manage-configurations, to match on the instance name or the class name of a window, which you can get from `xprop`, to automatically manage that window. For example, if I do my prefix and then capital T, it launches a floating terminal for me here. And if I go back to where I set it up, I just launch Xterm with the -name argument. This is what sets the instance name, and I just put in any string I want, like floating, for example. And then here in my configuration, I just check that if the instance name is floating, then I'll go ahead and float the window. Simple as that. Sacha: All right. This is starting to look exceedingly tempting. Lol, I save everything regularly, so he's one of those people who compulsively hit C-x C-s. Amin: Yeah, I do that a lot. I don't know. It's just me. But, yeah. Yeah. And then, I don't know. EXWM is awesome. 53:11 exwm: application-specific local simulation keys
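The floating-terminal trick can be sketched with `exwm-manage-configurations`; the variable and xterm's `-name` flag are real, while the "floating" string is an arbitrary marker:

```elisp
;; Launch a terminal whose X instance name is "floating"...
(start-process "floating-term" nil "xterm" "-name" "floating")

;; ...and tell EXWM to float any window with that instance name.
;; Each entry is (CONDITION . PLIST); the condition is evaluated with
;; `exwm-instance-name', `exwm-class-name', etc. in scope.
(setq exwm-manage-configurations
      '(((string= exwm-instance-name "floating") floating t)))
```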
Amin: You can also put local simulation keys, application-specific simulation keys, depending on the application: terminals, for example, or Zathura, which is a PDF viewer. To have application-specific custom key bindings, how cool is that? For example, if I'm in Xterm or something like the Mate terminal, hitting C-c twice, C-c C-c, will just send the C-c key to the terminal. Because one thing with EXWM is that you can set it to capture a couple of Emacs prefixes, like C-x or C-c. So the application by default doesn't see it because Emacs captures it. But this is one of those mechanisms by which you can send a key through. Let's see. 54:04 binding C-q to exwm-input-send-next-key Amin: The other thing is, you can use exwm-input-send-next-key. The default binding is C-c C-q, but I just bind it to C-q, and I, for example, can do C-q C-t to send C-t to the underlying application. So that's the other thing. Yeah, and then let's see. 54:28 Renaming buffers
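The C-q rebinding mentioned above is one line; a binding like this appears in EXWM's own example configurations:

```elisp
;; C-q sends the literal next key to the X application,
;; e.g. C-q C-t passes C-t through to the browser.
(define-key exwm-mode-map (kbd "C-q") #'exwm-input-send-next-key)
```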
Amin: So this thingy here, I enable EXWM and I add this rename hook, and all it does is basically use the window title as the buffer name that I can see on the mode line, as long as it's within a certain reasonable length; for example, I have 25 characters. If it's longer than that, it will just put dot dot dot. So yeah, that's all the purpose of that. Let's see, for example, if I launch Xterm, it appears there. The perfect example is actually here on the right-hand side. On the mode line, we see Firefox ESR, Emacs, Comp Chat. It's a bit long, so it just puts the dot dot dot there. So that's all that does. Sacha: Yeah, now being able to use Emacs to manage the tiling of these things instead of my having to fiddle with alt-dragging things to snap nicely into place. Yes, very cool stuff. EXWM. Gotta try it. Amin: Yeah, for sure. Yeah, let's see. 55:36 dunst for notifications
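The title-based buffer renaming can be sketched with EXWM's real `exwm-update-title-hook`; the truncation helper is standard Emacs, and the `my/` function name is my own:

```elisp
(defun my/exwm-rename-buffer ()
  "Use the X window title as the buffer name, truncated to 25 characters."
  (exwm-workspace-rename-buffer
   (truncate-string-to-width exwm-title 25 nil nil "...")))

(add-hook 'exwm-update-title-hook #'my/exwm-rename-buffer)
```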
Amin: Here I launch Dunst, if the executable is installed, for getting notifications in EXWM. I think there's at least one or two Emacs-specific packages that implement a simple notification daemon or backend so that Emacs itself can handle that, but I found Dunst good enough for my use cases; coming from an i3/Sway tiling-window-manager background, I just reuse that. So yeah, I just start a process and keep a handle on it in this dunst process variable here. And this thing I discovered recently is cool: using set-process-query-on-exit-flag, you can basically have Emacs not ask you if you want to exit Emacs if that process is still running. It'll just kill it without confirming with you. So just a little convenience. Sacha: That is also cool. Just a heads up, I have about 15 minutes before the kiddo runs out because she'll be done with school then. Even just the EXWM part and other things that you've shown us in the config have been super awesome. But are there other things in the next 15 minutes that you would love to show people so that they can see how it works in practice? 56:54 exwm xsettings and responding to screen configuration changes
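Launching dunst from Emacs and suppressing the exit prompt might look like this (the variable name is illustrative):

```elisp
(defvar my/dunst-process nil
  "Handle on the dunst notification daemon, if we started it.")

(when (executable-find "dunst")
  (setq my/dunst-process (start-process "dunst" nil "dunst"))
  ;; Don't ask "Active processes exist; kill them?" for this process
  ;; when exiting Emacs; just kill it.
  (set-process-query-on-exit-flag my/dunst-process nil))
```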
Amin: One thing I'll just mention, EXWM, one more thing, and then I'll go check. I think this is kind of recent: EXWM xsettings. This allows you to dynamically, at runtime, change some of these things that you would normally set in an X resources file, like fonts. These kinds of settings were especially commonplace back when Wayland wasn't a thing or wasn't very popular; you would set some of these font settings there. With EXWM xsettings, you can do this dynamically, and what's awesome about that is it also lets you hook into, for example, your screen configuration changing: if you plug in a monitor or unplug it, then you can run whatever `xrandr` command to set it up and also adjust those settings. The main thing I use it for is to change the DPI setting. The thing with X11 or Xorg is, unfortunately, there's no per-monitor DPI. There's one global DPI. But I found that on my high-DPI laptop screen, if I set the resolution to 1920x1080 instead of the full resolution, then the default DPI of 96 works just fine with my external monitor as well. All this little hook does, by calling into this function, is: if I'm plugging in my external monitor, lower the resolution and lower the DPI, and if I unplug it, go back to the high setting. I just love this. Sacha: That's great. We're definitely not going to demonstrate that, because plugging in and unplugging monitors is not a good thing for screen sharing, but that sounds really cool. When things change, you can actually get your system to adapt to the changes for you. Amin: Yeah, it's lovely. Let's see. There's so much more to talk about. 58:59 Slowly getting back into Org mode Amin: I'm slowly getting back into Org Mode again. For the longest time, I didn't use it, and I just used Markdown for my website as well. But I found that it's kind of limited. For example, I was using a Markdown implementation that was written in C, and I can't easily customize it.
Whereas with Org, I can hook into or create my own custom HTML back-end that's a derivative of ox-html, even if I don't necessarily like the defaults or the settings for ox-html. I just recently started writing a new back-end called bhtml, for Bandali HTML. It's just boilerplate; I don't have much there yet, but that's the idea. Sacha: I love how you can hook into all of these different aspects of Emacs and get it to do exactly what you want. Amin: Yeah, so that's cool. Let's see. 59:58 chat notes
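A derived HTML back-end skeleton of the kind described above; the bhtml name comes from the conversation, but the transcoder here is a hypothetical placeholder, not Amin's code:

```elisp
(require 'ox-html)

(defun my/bhtml-template (contents _info)
  "Wrap CONTENTS in a custom page skeleton instead of ox-html's default."
  (concat "<!doctype html>\n<html><body>\n" contents "\n</body></html>"))

;; Inherit everything from the html back-end, overriding only the
;; top-level template; more transcoders can be swapped in the same way.
(org-export-define-derived-backend 'bhtml 'html
  :translate-alist '((template . my/bhtml-template)))
```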
Amin: I have written some things about the prompt for this meeting. Yeah, so I talked about that stuff briefly. Minibuffer setup. Things that I love about my setup: it's kind of portable, simple. People can easily copy things from it if they want. It's kind of self-contained. And that was kind of a big thing a while back when I wanted to use my configurations on a couple of work machines. These don't have direct outbound internet access, so I couldn't do things like installing packages from ELPA, because that's done over HTTP. So yeah, I use submodules now. I recently began documenting my setup, very much inspired by Prot and Sacha and others. 1:00:52 Mode line Amin: The thing that I'm looking forward to tweaking next is the mode line. This is basically the default mode line of Emacs. A couple of versions ago, they added a setting for compacting the mode line, which removes a lot of the extraneous whitespace in it, which is great. It's still... There's too much information. If you use multiple windows, and especially if you use EXWM, all of those things like the date or the battery get repeated in all of the windows, so I'm looking forward to doing my mode line in such a way that, for example, it shows those things only once. And Prot actually has an excellent video about that where he shows how you can create your own custom mode line. Sacha: I've also been tempted to start using the header line too, because that's another thing that you can put information in. Amin: Right, yep, header-line is awesome. 1:01:49 display-buffer-alist Sacha: Yeah, the display-buffer-alist is particularly powerful because you're combining it with EXWM, so it'd be interesting to see how you can manage windows and applications and stuff. Amin: Especially, just like how we saw in today's video call, and also a call that I had with Prot recently. For example, if I open a describe-variable or something, it'll by default use the right area of the screen, right now where our video is.
So it reuses that. So I'm also looking forward to reading more about and configuring display-buffer-alist. 1:02:23 TRAMP slowness, maybe disabling VC detection? Amin: I'd like to figure out some TRAMP slowness. I recently tried using it again. It's awesome. You can seamlessly open files, SSH into other machines, and edit files there. But I don't know. It's kind of slow. So I want to see, aside from, you know, the physical limit of the latency because of the distance, is there anything slowing it down? I think I read in the TRAMP FAQ that maybe trying to disable VC mode or VC detection for remote connections might help speed it up, or at least having it do only Git, for example, because by default, Emacs' VC has support for Mercurial, CVS, SVN, Git, even RCS. Sacha: Anything anyone has ever wanted to use in the last 40 years. Here we go. I saw in your config, actually, that you were doing something with the SSH configs, and I'd never come across that. So I was like, oh, that's something I should look into later. Amin: I don't remember the specifics, but it's all out there. Feel free to look into it. 1:03:39 eat Amin: Especially with this EXWM setup, I still use Xterm sometimes, and I have the Emacs eat terminal, which is a terminal emulator written in Emacs Lisp. If I launch it right now, it's awesome. It actually is very powerful, and it's a properly capable terminal emulator. It can just be a little bit slow. It is slower than xterm, but it's still a lot faster than whatever Emacs has built in. So this is pretty cool. But yeah, I don't want to use it a lot. And I kind of started delegating more things, or using async-shell-command more, to just basically open this prompt and then do whatever I want. Anyway. Sacha: I've also heard things about Ghostty. Anyway, so that's another thing to look into. Yes, so @Paniash47 says, "With Emacs 31, there's a new variable where you can hide the minor modes in the mode line."
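Going back to the TRAMP slowness mentioned above: the trick Amin recalls from the TRAMP FAQ, disabling VC detection on remote files, looks roughly like this (both forms are from the TRAMP FAQ; pick one):

```elisp
(require 'tramp)

;; Make VC skip any file that matches TRAMP's remote-name syntax.
(setq vc-ignore-dir-regexp
      (format "\\(%s\\)\\|\\(%s\\)"
              vc-ignore-dir-regexp
              tramp-file-name-regexp))

;; Alternatively, limit backend detection to Git only, everywhere.
(setq vc-handled-backends '(Git))
```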
@pkal says it's mode-line-collapse-minor-modes. And @Paniash47 also says, "I personally use the Minions package by Tarsius, and it has some nice features in addition to the built-in features." So other people are tinkering around with their mode lines as well. Amin: Yeah, it's pretty cool. And then, I don't know, I think maybe you touched on something a couple of minutes ago that I was going to go back to, but I forget. 1:05:07 TRAMP completion Sacha: The TRAMP SSH completion, out of your configs. I was like, there's a tramp-parse-sconfig in here that I've never used. And that sounded interesting. Yeah, tramp-parse-sconfig. Amin: Ah, right, right, right. Yeah. Sacha: Which, of course, we're not going to look at, because it's private stuff, but yeah. Amin: Right. Yeah, you're welcome to try this. I'm pretty sure, actually, I took this from the TRAMP manual itself. And it's one of those things where it's set and forget; I don't remember. But yeah, it's here. There was something else that I also wanted to show, but I forget. Let me see if looking at the outline will remind me, or if I will see it. Sacha: And that's one of the things I love about literate configuration: you can just kind of look at the structure and skim it and try to find something with keywords and ordered lists and all that stuff. Amin: Right. Yup. Exactly. Sacha: Oh, and you know, people will have access to your full configuration, because it is in your repository, and you have that lovely HTML export for it as well. So if people want to follow up, they can go through that at length. At some point, you're going to add some more screenshots and possibly even video clips to it. So that's there for you at git.kelar.org
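The tramp-parse-sconfig completion they refer to is documented in the TRAMP manual; the idea is to feed TRAMP's host-name completion from your SSH configuration:

```elisp
(require 'tramp)

;; Complete /ssh: host names from the hosts declared in ~/.ssh/config.
(tramp-set-completion-function
 "ssh" '((tramp-parse-sconfig "~/.ssh/config")))
```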
Amin: This is my configurations repository. If you go here to the tree view, .emacs.d, this is the Org file. I also export all of those individual components into this lisp subdirectory. All that stuff is here. The QMK thingy that was mentioned. 1:06:54 ffs: form feed slides, ^L
Amin: Oh, I wanted to mention FFS. Okay, I'll do that as well. Sacha: Yeah, what's up with that? I was trying to find information. It was like, there's no package. What is this thing? Amin: It's Form Feed Slides, and it's going to be a package soon. I was actually talking to Prot about it, and I'm hoping to submit it for inclusion in GNU ELPA within, I don't know, the next couple of weeks. It's basically very similar to Prot's Logos package. Turns out we both had the same kind of idea at the exact same time in 2022, and we both used it for our LibrePlanet 2022 presentations. Of course, Prot being the diligent person that he is, he polished his work, documented it, put it on GNU ELPA. I still haven't gotten around to doing it yet, but better late than never. Yeah, let's see. I can maybe show a quick demonstration of that. So let's see. Let's see. Anyway, so if I go to my website sources and net-beyond-web. So I had the LibrePlanet talk a couple years ago. What FFS does, basically, is look for a particular character; in this case, or the default case, it's the page delimiter, ^L, which you can insert by hitting C-q C-l. It then designates each of these areas as one slide. So, a very, very simple slideshow where you don't even have to use Org or outline or any other major or minor mode. If I launch ffs, by default, it's in a mode where it binds a couple of convenience keys, like p and n, to go to the next and previous slide. You can hit e to edit a slide, similar to an Org source block, and then make your changes and all of that. And then you can start a presentation by hitting s.
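ffs itself isn't published yet, but the core mechanism, treating ^L-delimited pages as slides, can be approximated with Emacs's built-in page commands (a rough sketch of the idea, not ffs):

```elisp
(defun my/slide-next ()
  "Narrow to the next ^L-delimited page, as a poor man's slide."
  (interactive)
  (widen)
  (forward-page)
  (narrow-to-page))

(defun my/slide-previous ()
  "Narrow to the previous ^L-delimited page."
  (interactive)
  (widen)
  (backward-page 2)  ; back over the current page, then the previous one
  (narrow-to-page))
```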
Amin: It has hooks for, for example, bumping up the font size or whatever, hiding the mode line. I can toggle the mode line by hitting M here. Let's see. I can also toggle the cursor, to make the cursor visible or not. So, yeah. And then I'm just hitting P and N. Sacha: Very simple, very minimalist. You have a file, you've got page markers, and that's all you've got. Amin: Yeah, pretty much. And then... 1:09:34 Speaker notes Amin: The neat thing that it has, that I also liked implementing at the time, is a speaker notes feature.
Amin: So you can designate a file as being the speaker notes, where it has the same structure, separated with ^L. But you can type your notes over here, whatever. And you can basically open these in two different windows or two different frames on separate displays. And then in whichever one of those you advance the slides, with p or n, it also advances the other one. Sacha: That's brilliant. I was looking for a way to do that so I can pretend to know what I'm talking about when I have something on screen, but I can just read my notes or even just remember what points I wanted to make. So this is great. You have speaker notes. You've got the main screen. They can be in two different frames. You can have your frame that you're sharing and your frame that you're not sharing that has all of your cheat sheets. Excellent. And on that note, in about one minute, the kid is going to come running out and want to have a snack and all that stuff. Thank you so much for walking through parts of your config. There is more, and so everyone who wants to find out more can go check out your setup. I have a great many things that I want to try out, starting from EXWM to little things like figuring out a boom mic setup, because apparently your audio setup is making me very jealous. Yes, thank you for doing this. I'm going to post the transcript and the chapters. I have a chapter every minute. It's going to be a long time. But it was good. Lots of cool stuff. Thank you again. Amin: Sounds great. And yeah, you're very welcome. And thank you so much for having me as well, Sacha. I'm very delighted to be here, especially, I think, just by chance. I think I'm the first person you're doing this with after the long hiatus. So that's an extra honor for me. But yeah, it's been fun. I could go on for hours. I'm sure we both could. This has been fun.
Sacha: If we wanted to go on for hours, Prot has more flexible scheduling, so he can chat with people for two hours and stuff, and you already have conversations going on with him. But I unfortunately have a small mammal who's 10 years old and loves me very much, and likes to not let me concentrate for very long. But thank you everyone for joining. Thank you for the chat. And thank you also, stream, for all the interesting questions. I will send you all the information and update the post. And we'll see you all on Thursday. I've got another chat. All of a sudden, all these Emacs chats are going to happen. Thanks. Oh, and you said you're happy to be on the hook for doing another EmacsConf this year, right? Amin: Yes. You can hold me to that. There will be another EmacsConf and I will be active in it. Sacha: Alright then, I'm going to end that broadcast. Thanks everyone, bye! Amin: Thank you, bye bye!Chat
- sachactube: This is a test message
- sachactube: Getting ready for Emacs Chat 21 with Amin Bandali, https://sachachua.com/blog/2026/05/emacs-chat-with-amin-bandali/
- JacksonScholberg: Yo
- sachactube: Yo yo yo, we are live!
- IhorRadchenkoyantar92: … and the list can continue until the end of the stream? :)
- IhorRadchenkoyantar92: do you compile those packages?
- sachactube: Automatically compiled by prepare-user-lisp because of user-lisp-directory, I think
- IhorRadchenkoyantar92: makes sense
- IhorRadchenkoyantar92: this configure macro looks a lot like good old use-package
- IhorRadchenkoyantar92: I just do (use-package foo :config ) and then :tangle no in actual src block
- IhorRadchenkoyantar92: what is funny, with-eval-after-load is itself a macro
- sachactube: hahaha, it's much smaller though
- IhorRadchenkoyantar92: not smaller at all! Because there is recursion with-eval-after-load (macro) -> eval-after-load (also macro!)
- IhorRadchenkoyantar92: hmm. wrong
- IhorRadchenkoyantar92: ok. let me not do two things at the same time
- blaiseutube: yay, I made it!
- blaiseutube: screenshots and also asciicinema
- blaiseutube: asciinema ?
- blaiseutube: whatever
- sachactube: and gif-screencast
- blaiseutube: nice
- blaiseutube: Sacha, your mic volume is just a bit lower than his so it's a bit harder (for me) to hear you.
- sachactube: Hmm, let me try turning my dial, let's see if this next one is better
- blaiseutube: better, I think
- blaiseutube: it's also that Amin has an awesome microphone. The result sounds like Amin is in the room with me and we are both listening to you on speakerphone. it's not terrible
- blaiseutube: we're all friends her
- blaiseutube: here
- sachactube: I think we have the same mic, but he has an awesome setup, so I'm going to bug him for tips =D
- paniash47: Hello there! Nice to see this chat. :)
- blaiseutube: yes, low gain and close mic is good. Sacha if prefer to avoid a boom, you can use a microphone with a tight pattern and increase gain. LMK if you want to unleash my inner audio engineer.
- sachactube: oooh. my mic is right next to my laptop though, so I'm not sure I can get away from the typing noises
- sachactube: I'll just have to get cozy with y'all
- blaiseutube: minibuffer is an underrated superpower. I think Kakoune adopted that also
- blaiseutube: helpful for a11y and users with sequential processing/ ADHD issues
- blaiseutube: (I noticed that the comments are recorded so I'm trying to add value 🥴)
- paniash47: Split keyboards make sense with vanilla keybindings. I'd like to switch but moving from evil is difficult :(
- sachactube: much appreciated!
- blaiseutube: what about "time since last save" or do you have some auto save magic?
- blaiseutube: 🤯
- blaiseutube: emacs all the things
- blaiseutube: LOL, "I save everything regularly" …so he's one of those people.
- paniash47: I think with emacs 31, there's a new variable where you can hide the minor modes in the modeline
- pkal_: mode-line-collapse-minor-modes
- paniash47: I personally use the minions package by tarsius (Magit author) and it has some nice features in addition to the built-in feature.
- paniash47: ghostel is the package :)
- blaiseutube: BRB
Find more Emacs Chats or join the fun: https://sachachua.com/emacs-chat
You can comment on Mastodon or e-mail me at sacha@sachachua.com.
-
🔗 r/reverseengineering [CrackMe] PyVMP v6 : The Fortress. I dare you to break it (again x2). rss
submitted by /u/PynaBola
[link] [comments] -
🔗 sacha chua :: living an awesome life From David Dimagid: What we talk about when we talk about recommending Emacs packages rss
David Dimagid wrote this post for Emacs Carnival May 2026: "May I recommend…". Here it is!
Someone recently said on emacs-devel that they'd like to talk about recommending ELPA packages. Someone else said we should first ask what "recommending" actually means. RMS opened a thread asking that very question. It's still open, and you can follow it there (ELPA: to curate or not to curate).
I think we could apply Rich Hickey's technique here and start by looking up the definition of "recommend" in the dictionary. I invite everyone to do so with whatever dictionary you have at hand and to trust your definitions.
Now, we could evaluate ELPA packages for recommendation based on whether they complement or improve functionality already present in the core. For example, diff-hl by Dmitry Gutov. Its description says:
diff-hl-mode highlights uncommitted changes on the side of the window, allows you to jump between and revert them selectively. In buffers controlled by Git, you can stage and unstage the changes.
That last feature —staging partial hunks— is missing from VC, and diff-hl adds it seamlessly. We could say diff-hl complements the core.
Then there are major mode packages, like csv-mode, markdown-mode, cobol-mode, and so on. They add functionality that doesn't exist in the core. They have no direct equivalent. We could call them standalone packages.
Now consider another excellent package that, like diff-hl, depends only on the core: expreg, by Yuan Fu, the region expansion package. With a single key, it expands the region based on context. The core already offers this through sexp movement commands, but not with a single keybinding — you need several. Some will prefer the native core way; others will prefer the package. We could say expreg improves or, depending on how you look at it, duplicates the core's functionality.
So, in my opinion, package recommendations should be structured around their relationship with the Emacs core. I believe the best-regarded ELPA packages should be those that encourage users to use what the core already offers, first and foremost, and then try those packages because they extend a feature the core lacks or complement it. This would also help more people discover lesser-known core features, increase bug reports, and, over time, bring more contributors to Emacs. That way, the Emacs community could have a package repository it can trust for as long as Emacs exists. Perhaps the person who wrote Elfeed would have known about Newsticker and would have contributed to that package instead. Perhaps if we recommended what Emacs already offers, the Elisp we write would be Elisp of and for Emacs.
If you e-mail me your comments, I can forward them to David!
You can e-mail me at sacha@sachachua.com.
-
🔗 sacha chua :: living an awesome life Emacs Carnival May 2026: "May I recommend..." rss
It's May and I like puns, so I'm going to suggest "May I recommend…" as our Emacs Carnival theme this month, building on lively conversations about people's favourite packages on lobste.rs, Reddit, and Hacker News. Let's go beyond packages and talk workflows, tips, practices, perspectives… whatever you'd recommend!
It was pretty nice having a wiki page that people could edit without needing to wait for me, so if you write about this topic, feel free to add your link. If you run into problems doing that, please e-mail me and I can add the link for you.
People have already started sharing their recommendations:
- May Emacs Carnival
- May I Recommend EWM | Dilip's Log
- From David Dimagid: What we talk about when we talk about recommending Emacs packages
I'll also do a round-up post at the end of the month so that it shows up in people's RSS feeds.
Looking forward to seeing what y'all recommend!
You can e-mail me at sacha@sachachua.com.
-
🔗 r/Leeds Loneliness rss
Damn the loneliness, after 9-5 all I can do is get some beers. There is nothing much to do, no one to talk to. Anyone who has been in my shoes - any advice on how you got better? I migrated here in March.
submitted by /u/FarziiHu
[link] [comments] -
🔗 r/Yorkshire River Nidd rss
The river at Little Ribston; unexpectedly beautiful. But then, it’s Yorkshire. submitted by /u/Inevitable-Debt4312
[link] [comments] -
🔗 r/wiesbaden Moving house -> internet provider? rss
Since I'm moving to Wiesbaden in 2 months, and yes, even though this often has no real connection to the city: which provider causes the fewest headaches and is thoroughly recommendable in terms of both price and service?
The address check has already been done. All the usual providers are available.
Thanks, everyone! :-)
submitted by /u/allroundurso
[link] [comments] -
🔗 r/Yorkshire More Whitby! rss
submitted by /u/SectorSensitive116
[link] [comments] -
🔗 r/Leeds Both Queens Court and The Bridge have closed down in absolutely devastating news for the LGBTQ+ Scene in leeds rss
submitted by /u/28peteslater
[link] [comments] -
🔗 r/reverseengineering [WIP] Resolve indirect calls in Binary Ninja with DynamoRIO instrumentation rss
submitted by /u/Weird_Field_8518
[link] [comments] -
🔗 sacha chua :: living an awesome life 2026-05-04 Emacs news rss
Thanks to everyone who shared their thoughts on the April 2026 Emacs Carnival theme of Newbies and Starter Kits. Check out that post to see all the entries people have shared so far. I enjoyed chatting with Prot about the topic, and he shared some defaults that even experienced users have been trying out. The carnival theme for May 2026 is "May I recommend…". Looking forward to reading your posts!
- Upcoming events (iCal file, Org):
- Emacs.si (in person): Emacs.si meetup #5 2026 (v #živo) https://dogodki.kompot.si/events/b4192df7-3da4-41b8-95a3-532b93923656 Mon May 4 1900 CET
- EmacsATX: Emacs Social https://www.meetup.com/emacsatx/events/314341747/ Thu May 7 1600 America/Vancouver - 1800 America/Chicago - 1900 America/Toronto - 2300 Etc/GMT – Fri May 8 0100 Europe/Berlin - 0430 Asia/Kolkata - 0700 Asia/Singapore
- Atelier Emacs Montpellier (in person) https://lebib.org/date/atelier-emacs Fri May 8 1800 Europe/Paris
- London Emacs (in person): Emacs London meetup https://www.meetup.com/london-emacs-hacking/events/314540885/ Tue May 12 1800 Europe/London
- Emacs Berlin: In-Person-Only Emacs-Berlin Stammtisch https://emacs-berlin.org/ Tue May 12 1900 Europe/Berlin
- OrgMeetup (virtual) https://orgmode.org/worg/orgmeetup.html Wed May 13 0900 America/Vancouver - 1100 America/Chicago - 1200 America/Toronto - 1600 Etc/GMT - 1800 Europe/Berlin - 2130 Asia/Kolkata – Thu May 14 0000 Asia/Singapore
- Sacha Chua: May 14: Sacha, Prot, and Philip Kaludercic Talk Emacs: Newcomer Experience (Protesilaos)
- Beginner:
- Emacs configuration:
- Must-have Emacs packages you should know about [Updated] (Reddit)
- Jiewawa: Overriding keybindings with Meow
- Magnus: Follow-up on switching to eglot - more about use-package
- Emacs config (15:08)
- badele/idem: Doom Emacs configuration for DevOps workflows (bash, go, json, python, terraform, typescript, etc…) (@jesuislibre.org on Bluesky)
- Sharing my emacs.d while cleaning up my folder a bit. (Reddit)
- My Emacs Config (Reddit)
- Been working on my emacs config lately (Reddit)
- My configuration and workflow for game development in emacs with Godot
- Emacs Lisp:
- Contributing to ELPA (@pkal@social.sdfeu.org, Reddit)
- compat 31.0.0.0 released, stabilization in progress (@minad@mastodon.world)
- Dave's blog: Writing an automated test to try to find an Emacs bug
- NeLisp v1.0 — Emacs Lisp implemented in Elisp, plus a small Rust runtime that runs it without Emacs (Reddit)
- Appearance:
- Navigation:
- Writing:
- How I use quick-sdcv to get the Oxford English Dictionary in my Emacs
- Dave Pearson: blogmore.el v4.3.0 - blogmore-toggle-invite-comments, blogmore-invite-comments-to
- Denote:
- Org Mode:
- Stupidly Simple Notes Taking With Emacs - Linux Renaissance (@darth@watch.linuxrenaissance.com)
- I built an org-mode weekday repeater, .+wd
- Jonathan Chu: Introducing grove.el - note-taking workflow for Org
- Experimental/personal PDF-viewing/notetaking minor mode I (sort of) vibe-coded. (Reddit) dired + pdfview + org
- Import, export, and integration:
- Implementing a minimal evergreen blog in HTML and Emacs Lisp (Reddit, HN)
- Randy Ridenour: Managing Multiple-Choice Questions With Org Mode
- jamesendreshowell/org-teach-worksheet: Emacs lisp and Org macros for authoring classroom worksheets - Codeberg.org (@jameshowell@fediscience.org)
- schue/org-canvas: upload Org mode files directly into an instance of the Canvas LMS. (@schuemaa@ecoevo.social)
- canvas.el/canvas.org - interact with the Canvas learning management system (@locallytrivial@mathstodon.xyz)
- From Org-mode to Trilium Notes, via Obsidian · El blog de Lázaro (@elblogdelazaro)
- tykayn/orgmode-to-gemini-blog - Source Bliss: As Manon would say, sources are important. (@tykayn@mastodon.cipherbliss.com)
- Completion:
- History: delete old duplicates, but still rank by frecency (@minad@mastodon.world)
- vertico-posframe-preview: a preview sidecar for vertico-posframe (Reddit)
- VOMPECCC from Scratch: Picking Fruits and Veggies with ICR (YouTube 51:06, Reddit, HN) - incremental completing read with vertico, consult, marginalia, etc.
- Coding:
- Code to run magit-status on a project (@robjperez@fosstodon.org)
- Wireframe.el Keyboard-first wireframe prototyping inside GNU Emacs.
- Auto-mark rules, snooze, marking and filters for GitHub notifications in Emacs (Reddit)
- eglot, emscripten, and clangd (@robjperez@fosstodon.org)
- Einar Mostad: Fix Emacs python-mode REPL and org code block with python evaluation problems
- uv.el – a declarative Emacs interface for the uv Python package manager (experimental) (Reddit)
- Magnus: Secrets when connecting to DBs
- Using our new Lua debugger, LuaProbe, we made an Emacs package for it (Reddit)
- Package announcement: go-prettify-mode.el (Reddit)
- Emacs is a fantastic SQL editor - see the comments for more recommendations
- Mail, news, and chat:
- Evil mode:
- Multimedia:
- Fun:
- Server play support in nethack-el: Help lobby for support on popular Nethack servers
- AI:
- macher-agent: Similar to gptel-agent but within the macher context (Reddit)
- adds $ completion for Codex skills in agent-shell buffers (Reddit)
- Agent's major mode kit (Reddit)
- Emacs manager for OpenAI Codex conversations (Reddit)
- anvil.el v1.0.0 — first stable, anvil-ide split, anvil-pkg sister, and a no-Emacs path via NeLisp (Reddit)- let AI agents use Emacs as a workbench via MCP
- Community:
- Emacs Carnival April 2026:
- Emacs Carnival in May (and in general) (Reddit)
- The gravitational pull of Emacs — baty.net (@jbaty@social.lol)
- Kent Pitman and Ramin Honary join on #commonLisp #lisp #IDE #emacs #schemacs #UX #lispyGopherClimate - toobnix (@screwtape@toobnix.org)
- SimHacker/NeMACS: UniPress Emacs 2.20 for NeWS · GitHub (released 1989) (@kickingvegas@sfba.social)
- Kent Pitman #demo 1977-1984 #MIT #ITS #DDT #TECO #EMACS #LISP #MACLISP - toobnix (@screwtape@toobnix.org)
- A Report on Burnout in Open Source Software Communities (2025, PDF) (@yantar92@fosstodon.org) - not Emacs-specific, but good to think about long-term
- Other:
- Emacs development:
- The emacs-31 branch will be cut in one week (Reddit, Irreal)
- Demote 'completion-preview-is-calling'
- Project prompters always default to current project, if any
- New variable 'completion-preview-is-calling'
- Always compile w32image.c on MinGW (Bug#80924)
- New VC commands for remote unintegrated changes
- New commands to report diffs of all local changes
- New packages:
- emcp: Lets your agent talk to Emacs (MELPA)
- forgejo: Emacs Forgejo Front-end (GNU ELPA)
- grove: Obsidian-like note-taking for org files (MELPA)
- keymap-popup: Described keymaps with popup help (GNU ELPA)
- mysql: Pure Elisp MySQL wire protocol client (MELPA)
- outline-stars: Outshine-style star headings for outline-minor-mode (MELPA)
- simulacrum: Inject custom event types into the event stream (MELPA)
- sql-bigquery: Adds BigQuery support to SQLi mode (MELPA)
- tmux-csi-u: Tmux CSI-u decoder (MELPA)
- ttx-mode: TrueType/OpenType font viewer using ttx (MELPA)
Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!
You can e-mail me at sacha@sachachua.com.
- Upcoming events (iCal file, Org):
-
🔗 r/LocalLLaMA Llama.cpp MTP support now in beta! rss
Happy to report that llama.cpp MTP support is now in beta, thanks to Aman (and all the others that have pushed the various issues in the meantime). This has the potential to actually get merged soon-ish. Currently contains support for Qwen3.5 MTP, but other models are likely to follow suit. Between this and the maturing tensor-parallel support, expect most performance gaps between llama.cpp and vLLM, at least when it comes to token generation speeds, to be erased. submitted by /u/ilintar
[link] [comments] -
🔗 r/Yorkshire Taking the long way round. I could wander these dry stone wall paths forever and still find a new view to admire rss
submitted by /u/HammersAndPints
[link] [comments] -
🔗 r/reverseengineering IDA-MCP Is Now RE-MCP With Ghidra Support rss
submitted by /u/jtsylve
[link] [comments] -
🔗 @malcat@infosec.exchange [#Malcat](https://infosec.exchange/tags/Malcat) 0.9.14 is out! mastodon
#Malcat 0.9.14 is out!
This is a maintenance build, with some bonuses:
● AccessDB parsing
● RAR unpacking
● UPX (static) unpacking
● Improved __noreturn detection
● ... and as usual, up-to-date signature, constants and Kesakode DBs.Happy reversing!
-
🔗 r/reverseengineering Reverse-engineered the BLE protocol of the LuckPrinter-SDK family of thermal pocket printers (DP-L1S) — Python CLI + Web Bluetooth client + full command reference rss
submitted by /u/ChiaraCannolee
[link] [comments] -
🔗 r/york My favourite therapeutic loop💫 I could walk this a thousand times and never get bored🥹 rss
submitted by /u/Coffee000Oopss
[link] [comments] -
🔗 r/Harrogate Recommendations for someone to lay a shed base in Harrogate? rss
Hi all,
Looking for a bit of help/recommendations.
I need to get a shed base put in at the bottom of my garden and I’m weighing up either concrete or paving slabs. It’s not a massive job, but I want it done properly so it’s solid and lasts.
Does anyone know someone reliable in the Harrogate area who could take this on? Ideally someone you’ve used yourself and would recommend.
Thanks in advance
submitted by /u/Logical_Yogurt_520
[link] [comments] -
🔗 r/LocalLLaMA it's time to update your Gemma 4 GGUFs rss
Chat Template was fixed a few days ago
choose your fav dealer:
https://huggingface.co/bartowski/google_gemma-4-31B-it-GGUF
https://huggingface.co/bartowski/google_gemma-4-26B-A4B-it-GGUF
https://huggingface.co/bartowski/google_gemma-4-E4B-it-GGUF
https://huggingface.co/bartowski/google_gemma-4-E2B-it-GGUF
https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF
https://huggingface.co/unsloth/gemma-4-31B-it-GGUF
https://huggingface.co/unsloth/gemma-4-E4B-it-GGUF
https://huggingface.co/unsloth/gemma-4-E2B-it-GGUF
submitted by /u/jacek2023
[link] [comments] -
🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.
submitted by /u/AutoModerator
[link] [comments] -
🔗 r/Yorkshire Whitby rss
submitted by /u/Phil-pot
[link] [comments] -
🔗 Stavros' Stuff Latest Posts Adding a feature to a closed-source app rss
Who needs source code? I use Audiobookshelf (abbreviated ABS) for all my legal audiobooks that I bought legally, and I really like it. I also use the Smart Audiobook Player (abbreviated SABP) Android app, which I also bought (leg
-
🔗 Rust Blog Rust is participating in Outreachy rss
The Rust Project has been building up a good history of participating in various open-source mentorship programs, including Google Summer of Code for three years (including this year) and previously OSPP. We're happy to announce that this year we are also participating in Outreachy starting in the May 2026 cohort.
Each of these mentorship programs has different criteria for eligibility depending on who they target and the motivations of the program. Outreachy provides internships in open source, to people from any background who face underrepresentation, systemic bias, or discrimination in the technical industry where they are living. You can learn more about the Outreachy program on their website.
What is Outreachy and how is it different than Google Summer of Code
Outreachy is similar to Google Summer of Code (GSoC) in some aspects, but different in others. First off, unlike GSoC, Outreachy interns first apply to the overall program and only then can apply to specific communities. Second, while oftentimes GSoC applicants submit various contributions prior to their application, Outreachy has a dedicated period where contributions are not just optional, but required. Finally, Outreachy applicants submit an application similar to GSoC applications and communities pick interns based on those applications and the interns' contributions. Outreachy has two internship periods per year, one running from May to August (in which we are currently participating) and one from December to March.
The other major difference between Google Summer of Code and Outreachy is the source of intern stipends. For GSoC, Google graciously covers contributor stipends and overhead. For Outreachy, communities instead cover the interns' stipends and overhead.
We are mentoring 4 interns for the May 2026 cohort
Because of limited funding availability and mentoring capacity, the Rust Project decided to select four interns for mentorship. We'll briefly share these projects below.
Calling overloaded C++ functions from Rust
Ajay Singh has been selected, mentored by teor, Taylor Cramer, and Ethan Smith.
This project aims to implement an experimental feature for calling overloaded C++ functions from Rust, and to begin testing that feature in a few representative use cases.
Code coverage of the Rust compiler at scale
Akintewe Oluwasola has been selected, mentored by Jack Huey.
This project aims to develop the workflows to run and analyze code coverage of the compiler at the scale of the entire compiler test suite and on ecosystem crates detected by crater. The hope is to be able to detect when the compiler is inadequately tested, both within the compiler and in the ecosystem, and to build tools to do continuous analysis on this.
Fuzzing the a-mir-formality type system implementation
Tunde-Ajayi Olamiposi has been selected, mentored by Niko Matsakis, Rémy Rakic, and tiif.
This project aims to implement fuzzing for a-mir-formality, an in-progress model for Rust's type and trait system. The goal is to generate programs in order to identify rules with underspecified semantics in a-mir-formality.
Improve the security of GitHub Actions of the Rust Project
oghenerukevwe Sandra Idjighere has been selected, mentored by Marco Ieni and Ubiratan Soares.
This project aims to improve the security of GitHub Actions workflows of the repositories owned by the Rust Project. It will develop tools and workflows, integrating with existing software, to analyze Github repositories and detect if they follow the best security practices, fix existing issues, and ensure that good security practices are followed in the future.
What's next
Over the next 3 months, the interns will work closely with their mentors to make progress on their projects. When the internship period is over, we'll write another blog post to share the results! See you then!
We also want to thank all the people who submitted applications and made contributions. It was quite tough to decide which applicants to select. Hopefully we will participate in Outreachy again in the future and there are other opportunities to participate. We also very much welcome you to stick around and continue being involved - there are plenty of places in the Rust Project with opportunities to get involved.
-
🔗 Julia Evans Links to CSS colour palettes rss
A while back I decided to stop using Tailwind for new projects and to just write vanilla CSS instead.
But one thing I missed about Tailwind was the colour palette (here as CSS). If I wanted a light blue I could just use blue-100 and if I didn't like it maybe try blue-200 or blue-50. I'm not very good with colours so it makes a big difference to me to have a reasonable colour palette that somebody who is better at colour than me has thought about.
But I'm also a little tired of those Tailwind colours, so I asked on Mastodon today what other colour palettes were out there. And then a friend said they wanted links to those colour palettes, so here's a blog post so my friend can see them, and all the rest of you too :)
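The shade-number convention is easy to reproduce in vanilla CSS with custom properties. A minimal sketch (the oklch values below are made up for illustration, not Tailwind's actual colours):

```css
/* Hypothetical palette: numbered shades as custom properties,
   lower number = lighter shade, in the blue-50/blue-100 spirit. */
:root {
  --blue-50:  oklch(97% 0.02 250);
  --blue-100: oklch(93% 0.05 250);
  --blue-200: oklch(88% 0.08 250);
}

.callout {
  /* swap in --blue-50 or --blue-200 until it looks right */
  background: var(--blue-100);
}
```

Keeping the shades in oklch makes it easy to vary lightness while holding chroma and hue steady, which is most of what a palette like this is doing anyway.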
my favourites
The ones I liked the most were:
- uchū (css file, FAQ)
- flexoki (css file)
- reasonable colours, which seems to have a focus on accessibility (css file)
more colour palettes
colourscheme generators
Folks also linked to a bunch of colour palette generators
I've always found these types of generators too hard to use but maybe one day I will get better enough at colour that I'm able to use a colour palette generator successfully so I'll leave those links there anyway.
and more colour tools:
- colorhexa has some info about colorblindness
- oklch
Generative colors with CSS gives an example of how to use the oklch CSS function to dynamically generate colors. -
🔗 Ampcode News GPT-5.5 In Deep rss
GPT-5.5 now powers Amp's deep mode. It is a better coding agent than GPT-5.4: more steerable, more interactive, and better at staying inside constraints.
More Agent-Shaped
GPT-5.5 is better at the actual agent loop: read enough code, make the change, verify it, explain what happened. Whereas with GPT-5.4, prompts often had to spell out the process.
With GPT-5.5 we found it's best to clearly describe the outcome and put the rules and repeatable steps into the guidance files and tools.
If the task is vague, it can still solve the wrong problem cleanly. Good prompts matter more, not less.
Reasoning Effort
With GPT-5.5 we lowered
deep's default effort from high to medium (deep²). Do not assume higher reasoning is always better: in our eval, GPT-5.5
high cost more than medium and performed worse. xhigh (deep³) is for cases where maximum quality matters more than cost. As before, you can toggle the thinking effort directly in the CLI with
Opt+D (Alt+D), cycling through low (deep), medium, and xhigh.
How To Use It
The most important guideline to follow: tell GPT-5.5 what success looks like.
A few patterns have worked well for us:
- Give it the outcome and the constraints. Example: “Refactor transcript caching into a separate module. Keep the public API unchanged. Perf logging should only run behind this env var. Cache growth should be capped. Run the focused tests and typecheck.”
- Give it a way to prove the fix. Example: “This CLI focus bug should be verified in the actual CLI, not just by inspection. Reproduce it interactively, check focus state, then run the focused test.”
- Use it for planning when the shape of the fix is unclear. Example: “Analyze this protocol deadlock. Is it an infrastructure bug, a protocol bug, or something the client must recover from? Propose 2–3 options with tradeoffs and pseudo-code. Do not implement yet.”
Update Amp to the latest version by running
amp update and you're ready to go.
We wrote up the full GPT-5.5 model card with evals, reasoning guidance, prompt changes, and caching/ZDR caveats.
-
🔗 Armin Ronacher Content for Content’s Sake rss
Language is constantly evolving, particularly in some communities. Not everybody is ready for it at all times. I, for instance, cannot stand that my community is now constantly "cooking" or "cooked", that people in it are "locked in" or "cracked." I don't like it, because the use of the words primarily signals membership of a group rather than one's individuality.
But some of the changes to that language might now be coming from … machines? Or maybe not. I don't know. I, like many others, noticed that some words keep showing up more than before, and the obvious assumption is that LLMs are at fault. What I did was take 90 days' worth of my local coding sessions and look for medium-frequency words where their use is inflated compared to what wordfreq would assume their frequency should be. Then I looked for the more common of these words and did a Google Trends search (filtered to the US). Note that some words like "capability" are more likely going to show up in coding sessions just because of the nature of the problem, so the actual increase is much more pronounced than you would expect.
You can click through it; this is what the change over time looks like. Note that these are all words from agent output in my coding sessions that are inflated compared to historical norms:
Something is going on for sure. Google Trends, in theory, reflects words that people search for. In theory, maybe agents are doing some of the Googling, but it might just be humans Googling for stuff that is LLM-generated; I don't know. This data set might be a complete fabrication, but for all the words I checked and selected, I also saw an increase on Google Trends.
So how did I select the words to check in the first place? First, I looked for the highest-frequency words. They were, as you would expect, things like "add", "commit", "patch", etc. Then I had an LLM generate a word list of words that it thought were engineering-related, and I excluded them entirely from the list. Then I also removed the most common words to begin with. In the end, I ended up with the list above, plus some other ones that are internal project names. For instance, habitat and absurd, as well as some other internal code names, were heavily over-represented, and I had to remove those. As you can see, not entirely scientific. But of the resulting list of words with a high divergence compared to wordfreq, they all also showed spikes on Google Trends.
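The comparison step described above can be sketched in a few lines of Python. This is a toy reconstruction, not the author's script: the real analysis compared 90 days of coding-session logs against the wordfreq package's English frequencies, while `BASELINE` here is a tiny hand-made stand-in with made-up numbers, purely to illustrate the ratio test.

```python
from collections import Counter

# Hypothetical baseline frequencies (the real analysis used wordfreq).
BASELINE = {
    "the": 5e-2,        # very common everywhere, should not be flagged
    "delve": 2e-6,      # rare in ordinary English
    "robust": 1e-5,
    "substrate": 1e-6,
}

def inflated_words(text, baseline, min_count=2, threshold=10.0):
    """Return {word: ratio} for words whose observed frequency in text
    is at least threshold times their baseline frequency."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    flagged = {}
    for word, count in counts.items():
        base = baseline.get(word)
        if base is None or count < min_count:
            continue  # skip words with no baseline, and one-offs
        ratio = (count / total) / base
        if ratio >= threshold:
            flagged[word] = ratio
    return flagged

corpus = ("we delve into the robust substrate of the agent "
          "loop and delve further into the robust substrate")
for word, ratio in sorted(inflated_words(corpus, BASELINE).items()):
    print(f"{word}: {ratio:.0f}x baseline")
```

On this toy corpus, "delve", "robust", and "substrate" come out heavily over-represented while "the", despite being the most frequent token, stays near its baseline and is not flagged; that is the shape of the medium-frequency-word filter described above.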
There might also be explanations other than LLM generation for what is going on, but I at least found it interesting that my coding session spikes also show up as spikes on Google Trends.
The Rise of LLM Slop
The choice of words is one thing; the way in which LLMs form sentences is another. It's not hard to spot LLM-generated text, but I'm increasingly worried that I'm starting to write like an LLM because I just read so much more LLM text. The first time I became aware of this was that I used the word "substrate" in a talk I gave earlier this year. I am not sure where I picked it up, but I really liked it for what I wanted to express and I did not want to use the word "foundation". Since then, however, I am reading this word everywhere. This, in itself, might be a case of the Baader–Meinhof phenomenon, but you can also see from the selection above that my coding agent loves substrate more than it should, and that Google Trends shows an increase.
We have all been exposed to LLM-generated text now, but I feel like this is getting worse recently. A lot of the tweet replies I get and some of the Hacker News comments I see read like they are LLM-generated, and that includes people I know are real humans. It's really messing with my brain because, on the one hand, I really want to tell people off for talking and writing like LLMs; on the other hand, maybe we all are increasingly actually writing and speaking like LLMs?
I was listening to a talk recording recently (which I intentionally will not link) where the speaker used the same sentence structure that is over- represented in LLM-generated text. Yes, the speaker might have used an LLM to help him generate the talk, but at the same time, the talk sounded natural. So either it was super well-rehearsed, or it was natural.
Engage and Farm
At least on Twitter, LinkedIn, and elsewhere, there is a huge desire among people to write content and be read. Shutting up is no longer an option and, as a result, people try to get reach and build their profile by engaging with anything that is popular or trending. In the same way that everybody has gazillions of Open Source projects all of a sudden, everybody has takes on everything.
My inbox is a disaster of companies sending me AI-generated nonsense and I now routinely see AI-generated blog posts (or at least ones that look like they are AI-generated) being discussed in earnest on Hacker News and elsewhere.
Genuine human discourse was already struggling because of social media algorithms; now it has become incredibly toxic. As more and more people discover that they can use LLMs to optimize their following, they are entering an arms race with the algorithms, and genuine human signal is losing out quickly. There are entire companies now that just exist to automate sending LLM-generated shit, and people evidently pay money for it.
Speed Should Kill
If we take into account the idea that the highest-quality content should win out, then the speed element would not matter. If a human-written comment that arrives 15 minutes after a clanker-generated one outperforms it on quality, this whole LLM nonsense would show up less. But I think that LLM-generated noise actually performs really well. We see this plenty with Open Source now. Someone builds an interesting project, puts it on GitHub, and within hours there are "remixes" and "reimplementations" of that codebase. Not only that, many of those forks come with sloppy marketing websites, paid-for domains, and a whole story on socials about why this is the path to take.
I have complained before that Open Source is quickly deteriorating because people now see the opportunity to build products on top of useful Open Source projects, but the underlying mechanics are the same as why we see so much LLM slop. Someone has a formed opinion (hopefully) at lunch, and then has a clanker-made post 3 minutes later. It just does not take that much time to build it. For the tweets, I think it's worse because I suspect that some people have scripts running to mostly automate the engagement.
And surely, we should hate all of this. These low-effort posts, tweets, and Open Source projects should not make it anywhere. But they do! Whatever they play into, whether in the algorithms or with human engagement, they are not punished enough for how little effort goes into them.
Friction and Rate Limiting
That increases in speed and ease of access can turn into problems is a long-understood issue. ID cards are a very unpopular thing in the UK because the British are suspicious of misuse of a central database after what happened in Nazi Germany. Likewise, the US has the Firearm Owners Protection Act from 1986, which bans the US government from creating a central database of gun owners. The gun-tracing methodologies that result from not having such a database look like something out of a Wes Anderson movie. We have known for a long time that certain things should not be easy, because of the misuse that happens.
We know it in engineering; we know it when it comes to governmental overreach. Now we are probably going to learn the same lesson in many more situations because LLMs make almost anything that involves human text much easier. This is hitting existing text-based systems quickly. Take, for instance, the EU complaints system, which is now buckling under the pressure of AI. Or take any AI-adjacent project's issue tracker. Pi is routinely getting AI-generated issue requests, sometimes even without the knowledge of the author.
Trust Erosion and Gaslighting
I know that's a lot of complaining for "I am getting too many emails, shitty Twitter mentions, and GitHub issues." I really think, though, that now that we know that it's happening, we have to change how we interact with people who are increasingly automating themselves. Not only do they produce a lot of shitty slop that we all have to sit through; they are also influencing the world in much more insidious ways, in that they are influencing our interactions with each other. The moment I start distrusting people I otherwise trust, because they have started picking up LLM phrasing, it erodes trust all over society.
You also can't completely ban people for bad behavior, because some of this increasingly happens accidentally. You sending Polsia spam to me? You're dead to me. You sending me an AI-generated issue request and following up with an apology five minutes later? Well, I guess mistakes happen. Yet, in many ways, what is going on and will continue to go on is unsettling.
I recently talked with my friend Ben who said he forced someone to call him to continue a conversation because he was no longer convinced he was talking to a human.
Not all of us have been exposed to the extreme cases of this yet, but I had a handful of interactions in which I questioned reality due to the behavior of the person on the other side. I struggle with this, and I consider myself to be pretty open to new technologies and AI in particular. But how will my children react to stuff like this? My mother? I have strong doubts that technology is going to solve this for us.
Suggestions for Change
The reason I don't think technology is going to solve this for us is that while it can hide some spam and label some generated text, it won't fix us humans. What is being damaged here are social interactions across the board: the assumption that when someone writes to you, there is a person on the other side who has put some care into the interaction. I would rather have someone ghost me or reject me than send me back some AI-generated slop.
Change has to start with awareness, and an unfortunate development is that LLMs don't just influence the text we read; they also influence the text we write, even when we don't use them. Given the resulting ambiguity, we need to become more aware of how easily we can turn into energy vampires when we use agents to back us up in interactions with others. Consider that every time someone reads text coming from you, they will increasingly have to make a judgment call: was it you, an LLM, or you and an LLM together that produced the interaction? Transparency in either direction, when there is ambiguity, can go a long way.
When someone sends us undeclared slop, we need to change how we engage with them. If we care about them, we should tell them. If we don't care about them, we should not give them visibility and not engage.
When it comes to creating platforms and interfaces where text can be submitted, we need to throw more wrenches in. The fact that it was cheap for you to produce does not make it cheap for someone else to receive, and we need to find more creative ways to increase the backpressure. GitHub, or whatever wants to replace it, will have a lot to improve here, some of which might go against its core KPIs. More engagement is increasingly the wrong thing to optimize for if you want a long-term healthy platform.
Whatever we can do to rate-limit social interactions is something we should try: more in-person meetings, more platforms where trust has to be earned, and maybe more acceptance that sometimes the right response is no response at all.
And as for AI assistance on this blog: I have had an AI transparency disclaimer in place for a while. For this particular blog post I used Pi as an agent to help me generate the dynamic visualization and to write the code to analyze and scrape Google Trends.
-
🔗 exe.dev Dev, Test, Prod: Choose One, Two, or Three rss
Industry-wide, we often develop our software in three distinct environments. Perhaps your laptop is a Mac, your CI system is hosted GitHub Actions, and your prod is k8s.
Three-in-One
For some use cases, you need not bother with the complexity; use one exe.dev VM for all three. A blog, a dashboard, a link shortener, a bot, and so on: these work well with the environments collapsed. Add features by asking Shelley to do so. Set up continuous deployment by asking Shelley to poll every hour. Use git for backup if the project calls for it. Voila!
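For the curious, the hourly poll Shelley sets up amounts to something like the following crontab entry (a hypothetical sketch: the path, branch name, and restart script are made-up names for illustration):

```shell
# Hypothetical crontab entry: once an hour, redeploy only if upstream moved.
# /srv/myapp, origin/main, and ./restart.sh are illustrative names only.
0 * * * * cd /srv/myapp && git fetch -q origin && [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/main)" ] && git merge --ff-only origin/main && ./restart.sh
```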
Our internal tools sport an "Edit with Shelley" ribbon. They either point straight to the "vm.shelley.exe.xyz" domain, or link to exe.dev/new with a pre-filled prompt and pre-filled tags, just like the link here.
Just Dev
Use an exe.dev VM (or many) to work on your software. Set up the GitHub integration (docs) to make cloning easy. Some people work serially. Some people work using multiple worktrees on one VM. Some people have one VM per task or project. Clone your VMs using ‘cp’ or configure them using setup scripts.
Using remote VMs opens up the convenience of mobile, opportunities for sharing, not to mention isolation from your other projects.
Why now? Many, many companies have tried remote development before. There is an entire graveyard of failed startups in this space. The big difference is agents. If your development is increasingly chat-based, the old arguments about getting your environment and dot-rc files just right fade away. The convenience of starting a task from your phone overwhelms the decades-old bashrc file and finely crafted PS1. As a bonus, you get the ability to share with your co-workers. Pull requests are so yesterday; send them a link to a working demo instead.
Just Test
Exe.dev VMs are a great place to riff on an idea. Perhaps you want to explore a particular open source project. Or you want to do some data analysis and share it with your co-workers? Or prototype your next idea? Or find your flakes by running your tests over and over again. Or let loose Shelley, our agent, on your app with its built-in browser? Or send off a security review. Or even just run a GitHub Actions runner.
Because you pick what access you want to give your VMs, and because they’re persistent, exe.dev VMs are great places to test stuff out.
Just Prod
You can host real, production software in exe. We support custom domains with a bit of DNS configuration (docs).
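As a sketch, that DNS configuration typically comes down to a single record pointing your domain at the VM (the hostnames below are made up; the exe.dev docs have the actual target to use):

```
; Hypothetical zone-file entry mapping a custom domain to a VM hostname.
app.example.com.   300   IN   CNAME   myvm.exe.xyz.
```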
If you’re incredulous that this is a good idea, the entirety of Stack Overflow ran on just a few machines. Reach out to us if you want to enlarge your VM as far as modern hardware can go.
Private, Internal, or Public
Once you build it, you'll want to share it. You can keep it to yourself, and that's the default. Or you can share it with your team or with share links. Or you can share it publicly. Sharing a VM's website is as easy as sharing any other online doc.
-
