- About KeePassXC's Code Quality Control - KeePassXC
- How to build a remarkable command palette
- Leaderboard - compar:IA, the AI chatbot arena
- Who needs Graphviz when you can build it yourself? | SpiderMonkey JavaScript/WebAssembly Engine
- Automerge
- November 11, 2025
-
r/reverseengineering A New Man In The Middle (MITM) HTTP Proxy capture tool, would love to get some community feedback, and see if I can add some good capabilities that I haven't yet thought about :) rss
submitted by /u/Jbsouthe
[link] [comments] -
organicmaps/organicmaps 2025.11.11-5-android release
• Highlight downloaded regions, nature reserves, national parks, protected areas, Aboriginal lands, danger areas, lakes on the World map
• Routing supports road closure times
• Fresh OSM maps as of November 9
• Fixed crashes and routing, including Android Auto
• Added watchmakers, student accommodations, travel agencies
• Added support for Slovenian language in search
• Voice directions in Estonian, Galician, Hebrew, and Lithuanian
• Respect auto-zoom setting in all cases… more at omaps.org/news
See a detailed announcement on our website when app updates are published in all stores.
You can get automatic app updates from GitHub using Obtainium.
sha256sum:
d115067a641c42842fc0e578699c6680d91dcc2d36398730d98b0dc467f0b6cf OrganicMaps-25111105-web-release.apk -
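To check a downloaded APK against a published digest like the one above, you can recompute the file's SHA-256 locally and compare. A minimal Python sketch (the expected digest is the one published for the 2025.11.11-5 build above):

```python
import hashlib

# Published digest from the release notes above
EXPECTED = "d115067a641c42842fc0e578699c6680d91dcc2d36398730d98b0dc467f0b6cf"

def sha256_of(path: str) -> str:
    """Hash the file in chunks so a large APK is never fully loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_apk(path: str, expected: str = EXPECTED) -> bool:
    """True only when the locally computed digest matches the published one."""
    return sha256_of(path) == expected
```

This is equivalent to running `sha256sum OrganicMaps-25111105-web-release.apk` and comparing the output by eye, or `sha256sum -c` with a checksum file.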
organicmaps/organicmaps 2025.11.11-4-android release
• Highlight downloaded regions, nature reserves, national parks, protected areas, Aboriginal lands, danger areas, lakes on the World map
• Routing supports road closure times
• Fresh OSM maps as of November 9
• Fixed crashes and routing, including Android Auto
• Added watchmakers, student accommodations, travel agencies
• Added support for Slovenian language in search
• Voice directions in Estonian, Galician, Hebrew, and Lithuanian… more at omaps.org/news
See a detailed announcement on our website when app updates are published in all stores.
You can get automatic app updates from GitHub using Obtainium.
sha256sum:
f52a71676194e6388e5a894d1af39f47c29fe6e4347332a26ad60b79503c7cde OrganicMaps-25111104-web-release.apk -
r/wiesbaden 2 tickets for Perkele and Los Fastidios available rss
Hello, due to illness I unfortunately have 2 tickets to give away for this Saturday's concert at the Schlachthof in Wiesbaden. I'll be there myself and would hand them over in person (scam-proof). The price would be face value (€40 per ticket).
submitted by /u/Gloomy-Demand-5821
[link] [comments] -
r/reverseengineering Reversing a Cinema Camera's Peripherals Port (WCH32V003) rss
submitted by /u/3nt3_
[link] [comments] -
News Minimalist - Doctors perform brain surgery 6,400 km away + 9 more stories rss
In the last 4 days, ChatGPT read 126,822 top news stories. After removing previously covered events, there are 10 articles with a significance score over 5.5.

[5.6] Surgeons remotely control robot to remove brain blood clot - bbc.com (+6)
For the first time, surgeons performed a trans-Atlantic robotic thrombectomy, removing a brain blood clot from a cadaver 6,400 kilometers away in a major medical breakthrough.
A neurosurgeon in Florida remotely controlled the robot in Scotland, experiencing minimal time lag. The technology aims to overcome specialist shortages and provide urgent stroke care to patients in remote locations, where treatment time is critical.
[5.9] China exempts Nexperia chips from export controls, easing fears for European car production - bbc.com (+17)
China has lifted export controls on Nexperia computer chips, averting feared production shutdowns at European car plants that rely on the components.
The decision reverses a block China imposed after the Dutch government took control of the Netherlands-based, Chinese-owned company in October. Automakers had warned their chip supplies were running out.
Though Nexperia is Dutch-based, many of its chips are finished in China before re-export. Officials are now working on a stable framework to ensure the full restoration of semiconductor flows.
Highly covered news with significance over 5.5
[6.4] Syrian President visits White House, discusses joining US-led coalition - ici.radio-canada.ca (French) (+116)
[5.5] US Senate passes deal to end government shutdown - bbc.com (+351)
[5.7] US takes over Gaza aid management from Israel - ilmessaggero.it (Italian) (+2)
[5.9] OpenAI faces seven lawsuits alleging ChatGPT caused user suicides and delusions - capitalgazette.com (+19)
[5.8] Munich court rules ChatGPT cannot use song lyrics without a license - tagesschau.de (German) (+9)
[6.1] Hong Kong scientists develop high-speed imaging for live brain cell activity in awake mice - medicalxpress.com (+2)
[5.5] Trinity College researchers develop whooping cough vaccine that stops transmission - irishexaminer.com (+3)
[6.0] CRISPR gene-editing therapy safely reduces cholesterol and triglycerides in first-in-human trial - newsroom.heart.org (+26)
Thanks for reading!
- Vadim
You can customize this newsletter with premium.
-
r/reverseengineering "Cracked" a TUI to download challenges easily from crackmes.one rss
submitted by /u/Latter-Change-9228
[link] [comments] -
The Pragmatic Engineer Four years on writing a tech book: pitching to a publisher rss
In 2019, I decided to write a book about software engineering. As an experienced software engineer and manager, I had the topic clear in my head, and assumed the whole project would take six to 12 months to write and publish.
[Image: the first proof copy of The Software Engineer's Guidebook, hence the "not for resale" markup]

In the end, this process took several times longer: four years, in fact! Happily, it was worth it: readers' feedback about The Software Engineer's Guidebook has been overwhelmingly positive, and on launch the book became a #1 bestseller among all titles in two Amazon markets (the Netherlands and Poland), as well as a top-100-selling book in most Amazon markets. In 24 months it sold around 40,000 copies, and was translated into German, Korean, Mongolian, and Traditional Chinese, with the Japanese and Simplified Chinese versions releasing later this month.
A lot of people ask why I chose to self-publish, and it would be nice to say this was always the goal, but it wasn't! Originally, I wanted to work with a top tech publisher who would get the book to market fast and give it a higher profile. This didn't happen, but during the process I learned a lot about how publishing works, how to pitch a book, and how to choose the publishing route that might be right for you.
This article shares what I learned from writing and publishing a book that has done pretty well with readers, including my experience of working with an established publishing house:
- Tech book publishing landscape
- Financials of publishing
- Publishing process and the publisher's role
- My book pitch
- Working with a publisher
- Breaking up with a publisher
1. Tech book publishing landscape
Today, there are reputable book publishers whose titles are good and authoritative, and there are other publishers to whom this doesn't apply. Each publisher also has a subject area: some are mainstream and publish titles on every software engineering area, from languages to engineering management. Meanwhile, others stick to a topic of expertise they focus on.
Here's my mental model of the book publishing industry in 2025:
[Image: biggest players in the tech book publishing industry - a subjective mental model, of course!]

Highly reputable mainstream publishers
In tech book publishing, three publishing houses really stand out, in my opinion, and form a 'big three' among all players in this sector:
- **O'Reilly**: if I had to pick a #1 tech book publisher, it would be O'Reilly. They publish some of the most referenced books - like Designing Data Intensive Applications by Martin Kleppmann, Tidy First? by Kent Beck, The Staff Engineer's Path by Tanya Reilly, and more. The book covers are distinctive, using images of animals.
- Manning: a broad range of titles on both specific and general tech topics, which employ historical figures on the covers.
- The Pragmatic Bookshelf: also referred to as the "Prags." Founded by Andy Hunt and Dave Thomas, the authors of what might be the best-selling tech book ever; The Pragmatic Programmer. Since its founding, The Prags has refused digital rights management (DRM) on their ebooks.
Highly reputable "mainstream" publishers that are tough to pitch to
The publishers in this section have strong reputations, like those above. However, they are harder to pitch to, usually because they publish fewer tech books. I couldn't find an author pitch template or clear pitching instructions, which contributes to a sense of "don't find us, we'll find you" among the following publishing houses:
- **Addison-Wesley:** one of the best-known brands in tech. It has been an imprint (a trade name within a publishing house) of Pearson since 1988, and is the publisher of many "classic" titles like Clean Code by Robert C. Martin and The Pragmatic Programmer by Andy Hunt and Dave Thomas, plus some recent ones like Modern Software Engineering by Dave Farley. I couldn't find any way to pitch to this publisher, and the new books they publish seem to be by established authors.
- Pearson: this business owns the Addison-Wesley imprint. Recently, it has started publishing tech books under the "Pearson" name instead, as author Martin Fowler shared.
- Wiley: formerly a well-known tech book publisher behind the "X for Dummies" series. It publishes lots of computer science textbooks, but I can't find recently-published, well-known tech books for software engineers.
- Springer: another massive publisher for whom tech books are a small part of the business. I couldn't find how to pitch tech books to them.
- Morgan Kaufmann: a well-known tech book publisher founded in 1984 and acquired in 2001 by Elsevier. As I understand it, these days it prints far fewer technology books and focuses on academic topics. No clear way to pitch to them.
Highly reputable "niche" publishers
The following publishers stand out for quality, while covering fewer topics than those above.
- No Starch Press: "The finest in geek entertainment" is the tagline, featuring fun visuals, and high-quality content on specific technologies like machine learning, Python, JavaScript, etc.
- IT Revolution: titles for technology leaders: DevOps, technology delivery, workplace culture, and similar. Publisher of The Phoenix Project, Team Topologies, and Accelerate.
- Artima: focuses on Scala.
- CRC Press: publishes on technology, engineering, math, and medicine.
- Stripe Press: "works about technological, economic, and scientific advancement."
- MIT Press: "a distinctive collection of influential books curated for scholars and libraries worldwide."
Other mainstream book publishers
**Apress** is a reputable publisher with a lower profile, which publishes on a wide range of topics, from specific technologies and frameworks to more generic computing topics. Because they publish many books on many topics, they are usually open to pitches.
Packt. A tech book publisher with a focus on quantity over quality, it feels to me. There is limited support and feedback for authors, and titles could often use more editing. But also, Packt is likely to say "yes" to a serious proposal.
2. Financials of publishing
Financial matters really come into play when your proposal is accepted by a publisher and you receive a contract offer.
Advance: $2,000 - $5,000. An advance payment to the writer is a tried and tested way to make them deliver a completed manuscript. It's often paid in chunks: 50% when a milestone is hit, and 50% when a full draft appears.
The "big three" publishers typically offer $5,000, usually as a flat, non-negotiable rate; at least, that's what I was offered. Smaller publishers offer closer to $2,000 for more niche books. The advance is non-refundable; even if your book sells zero copies, you keep it. The publisher is making an investment in you, and taking a risk.
As an aside, if you are thinking of writing a book: for guest authors in The Pragmatic Engineer Newsletter, I offer a $4,000 per-article payment - and you can later publish your guest article in a book. Several authors working on their books have written guest articles, such as Lou Franco on Paying down tech debt or Apurva Chitnis on Thriving as a founding engineer. Writing a guest post can help refine ideas, broaden your reach, and prove helpful come publication time.
Paperback royalty: 7-15%
Royalties are earned on book sales, and taken from the net price of the book. Net price is what a publisher gets after the retailer (e.g. Amazon, or a bookshop) takes their cut. Let's see how it works for a $40 book:
[Image: the royalty from a $40 book with a 10% royalty rate can be anywhere from $4 to around $1.80, depending on the channel it was sold through; it all depends on how much revenue the publisher received after the sale.]

It matters financially where your title is purchased, be it an online shop, a physical bookstore, or directly from the publisher. Many tech books are sold on Amazon and in online stores. Amazon's 40% cut seems high, but it's actually the lowest among book retailers; up to 60% is a common cut for a physical bookshop.
My understanding is that most publishers offer 10-12.5% royalties, and Packt around 15-20%. Keep in mind that brand reputation plays a role; for example, Packt's reputation is less elevated than Manning's, which can make a difference to sales.
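The per-channel arithmetic above can be sketched in a few lines. This is a minimal sketch; the retailer cuts (0% direct, ~40% Amazon, up to 60% for a physical bookshop) are the illustrative figures from this section, not universal rates:

```python
def author_royalty(list_price: float, retailer_cut: float, royalty_rate: float) -> float:
    """Royalties are paid on net price: what's left after the retailer's cut."""
    net_price = list_price * (1 - retailer_cut)
    return net_price * royalty_rate

# A $40 paperback with a 10% royalty on net, sold through three channels:
direct = author_royalty(40.00, 0.00, 0.10)  # publisher's own site: ~$4.00 per copy
amazon = author_royalty(40.00, 0.40, 0.10)  # Amazon's ~40% cut:    ~$2.40 per copy
shop   = author_royalty(40.00, 0.60, 0.10)  # bookshop's ~60% cut:  ~$1.60 per copy
```

The roughly $4-to-$1.80 spread quoted above falls out of the same formula with slightly different assumed cuts.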
Ebook royalties: 10-25%
For ebooks, several publishers pay 25% royalties, but not all. Even with a higher royalty rate, an author might end up making less per sale. For example, on the Kindle platform, Amazon's cut is a high 65%. Let's look at a $30 ebook with a 20% royalty rate:
[Image: ebooks are cheaper, but authors can earn more per copy with this royalty structure; selling the Kindle version is the least profitable because Amazon takes 65% of any sale above $10.]

Ebooks are almost always priced lower than physical books, and when sold on Kindle they generate much less revenue for the author, while still earning more per copy than the paperback version. I was offered 10% royalties on ebook sales, which is at the low end.
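Under these figures, the Kindle cut matters more than the royalty rate itself. A small sketch, assuming the 65%-above-$10 cut quoted in this section and a 20% royalty on net:

```python
def kindle_author_take(price: float, royalty_rate: float, amazon_cut: float = 0.65) -> float:
    """Author's earnings per Kindle sale. Per the section above, Amazon
    takes 65% of any ebook sale above $10; the below-$10 tier isn't covered here."""
    if price <= 10:
        raise ValueError("the 65% cut quoted above applies to sales above $10")
    net = price * (1 - amazon_cut)
    return net * royalty_rate

kindle = kindle_author_take(30.00, 0.20)  # ~$2.10 per $30 Kindle sale
direct = 30.00 * 0.20                     # $6.00 when the publisher sells it directly
```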
"Earning out"
When an author needs to pay back an advance before being paid anything, this is called "earning out". If you get a $5,000 advance for a title costing $40 per hard copy and $25 for the ebook version, and most sales happen on Amazon, it means:
- ~2,080 paperback sales on Amazon
- Or ~2,850 Kindle book sales
- Or ~1,250 paperback sales on the publisher website
The author needs to sell well over 1,000 copies across various platforms to "earn out." The good news is that a publisher sends quarterly or annual royalty payments for as long as a book keeps generating revenue, which is effectively passive income.
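The break-even copy counts above can be reproduced from the per-copy royalties this section implies (10% on the $40 paperback, an assumed 20% on the $25 ebook, and the retailer cuts quoted earlier):

```python
import math

def copies_to_earn_out(per_copy_royalty: float, advance: float = 5000.00) -> int:
    """Smallest whole number of copies whose royalties cover the advance."""
    return math.ceil(advance / per_copy_royalty)

# Per-copy royalties implied by the section above
amazon_paperback = 40.00 * (1 - 0.40) * 0.10  # ~$2.40 per copy
kindle_ebook     = 25.00 * (1 - 0.65) * 0.20  # ~$1.75 per copy
direct_paperback = 40.00 * 0.10               # $4.00 per copy

print(copies_to_earn_out(amazon_paperback))   # ~2,080 copies
print(copies_to_earn_out(kindle_ebook))       # ~2,850 copies
print(copies_to_earn_out(direct_paperback))   # 1,250 copies
```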
The Prags' unique approach
One publisher that calculates rates differently is The Pragmatic Bookshelf. Instead of offering a small percentage of revenue, they offer a 50% split on profit.
50% on profit sounds much higher than 10% on revenue, right? However, the devil is in the details, because paying on profit means that the upfront publisher costs - editors, cover design, printing, distribution, marketing - all are deducted before any profit split.
Authors who have used this approach tell me the numbers end up pretty similar to the revenue model.
Real-world case studies with actual earnings
Designing Data Intensive Applications author, Martin Kleppmann, shared the cumulative royalties he made in 6 years. The breakdown is interesting; ebook and Safari Online sales generated more revenue for the writer than the print version.
[Image: cumulative royalties for Designing Data Intensive Applications, published by O'Reilly. Image source: Martin Kleppmann's site]

Cloud Native Infrastructure earnings: author Justin Garrison published with O'Reilly, and was offered 10% for print and 25% for ebooks (each split in half, thanks to working with a coauthor). His book sold 1,337 copies in 4 months and made about $22,000 for the two authors (around $11,000 for Justin). Justin concluded:
"Going into this project I had a rough estimate in my head to make about $2000-3000 so this is much better than I expected. Set your expectations accordingly."
Don't forget that publishers are also in this to make a positive return. This means it is unlikely that a highly reputable publisher will invest in a book they don't believe will sell at least a few thousand copies. I don't have the data here, but if I were a publisher, I would reject any book that didn't look like it could hit 1,000 copies sold in the first year of publishing.
3. The publishing process, and publisher roles
Why does a publisher take so much of the revenue? Part of this is because they do a lot of the work around publishing, and need to hire (and pay!) people for those roles. Here is my understanding of how the publishing process works, based on four months of pitching to publishers; two months of working with one of them; and researching how the rest of the process works:
[Image: my understanding of the publishing process when working with a publisher. You probably get to work with quite a few specialized folks!]

Here are the people I worked with, and my experience with them:
The acquisitions editor. If you write a technical blog, you might get a reachout from someone called an acquisitions editor, who will ask if you would consider publishing a book. Also, when you submit a pitch to a publisher, chances are that you will first communicate with an acquisitions editor.
A publisher's goal is to publish books that will be profitable for them. They find authors who could write these books two ways:
- Inbound pitches coming from authors - reviewed by editors or acquisitions editors
- External reachouts done by acquisitions editors
These people need to have a good understanding of what kinds of books sell well at the publisher (and why); what their current catalogue is; what the gaps are; and what competitor publishers are commissioning.
When I pitched my book to 3 respected publishers, in two cases I talked with (and worked with) the acquisitions editor to improve my pitch. The acquisitions editors were my "champions" at the publisher. Their goal was to get a pitch that the company would say yes to.
The development editor works on the structure of the book. They ask the author to come up with a detailed table of contents - in my case, they asked me to estimate even the length of the chapters. They also help develop - and maintain - the narrative of the book.
Had I not worked with a publisher, I would have had no appreciation of this "high-level editing" - which, it turns out, is key to writing a well-structured tech book!
The project manager checks in on timelines, organizes reviews (like editorial reviews), and helps keep you accountable. One of the best things about working with a publisher is that you are on a tight deadline, without which it would take several times longer to publish the book!
The publisher owns a lot of rights for your book! One thing that I realized only after signing with a publisher is that while publishers help a lot with writing the book - and taking a higher cut is sensible because of this - they also hold on to a lot of rights that impact your book! These are all things that you give up on, versus when self-publishing. These are:
- Global publishing rights. Although you are the author of the book - and usually hold the copyright to it - the publisher owns worldwide publishing rights. This means they are the only ones who can publish the book, or longer excerpts of it. In practice, this means you need to get permission if you'd like to publish parts of your book on e.g. your blog or social media. They'll usually grant this, as it's good marketing - but you, the author, still need to ask.
- Foreign rights. The publisher owns the publishing rights, and will usually also be the one selling foreign rights. In theory, this sounds like you are losing out. In practice, publishers are much better positioned to sell and administer these rights. Most publishers offer a 50% cut on these rights - it's what my publisher offered. Also, the majority of tech books are not translated into other languages: a book that "only" sells 2,000 copies in English is unlikely to sell a significant number in a non-English market!
- The cover. The publisher decides what cover they will design, though they tend to ask the author for feedback.
- The title. One of the surprises for me was how the publisher ultimately decides on the title and subtitle.
In short: this book is owned by the publisher. You are the author, but they are the only ones who can distribute it. In practice, many authors would prefer to have it this way - because all the work related to distributing the book is taken on by the publisher. However, it's good to know that you need to give up all the above when working with a publisher.
4. My book pitch
My secret hope, back in 2019, was to get a contract with one of the "Big 3" tech book publishers: O'Reilly, Manning or The Prags. I pitched my book to all three: got a "no" from two, but a "yes" (and a contract) from one. Here's how I went about my pitch.
Write a "one-pager" about your book
What will this book be about? Who is it for? What will readers take away when reading it? Answer these in a short pitch, before even seeking out publishers. Here's what I put together as my "one-pager:"
Do some market research
What are similar books in the market that would be competing with this book, directly or indirectly? How is this book different from them?
What is the demographic of people who would be interested in buying this book? Can you estimate how large this crowd is? Realistically, what percentage of this group could be interested in buying the book, assuming they know about it? Don't forget that publishers invest in books that can generate decent sales: it's good to do a little research to help confirm your title could be one of them!
Shortlist publishers you would be interested in working with
There are quite a few publishers out there: which are your top preferences? And which ones are you willing to consider, even if your "top" choices turn you away?
Self-publishing is always an option (I'll cover more on how I went about this in later parts). However, going with a good publisher can significantly speed up your book production, while also improving the quality.
Write a draft table of contents and a draft chapter
Some publishers will want to look at what a draft chapter will look like - but not all of them. Still, I found it helpful to do some writing before submitting to a publisher; if for no other reason, to confirm that I'd enjoy longform writing!
I spent about a week putting together a table of contents, and around four months writing draft chapters. These chapters turned out to be helpful later on.
Submit a tailored pitch to your publisher(s)
Once you've identified your top publisher choices, submit a pitch. Most book publishers have a pitch document they want you to follow. Here are some common templates:
- Description
- About the topic
- Audience
- Keywords
- Competing titles
- Related O'Reilly titles
- Book outline
- Writing schedule
- About the author
- About the book topic
- The book plan
- Q&A
- Reader overview
- Book competition
- Book length and illustrations
- Writing schedule
- Table of contents
The Pragmatic Bookshelf template:
- Overview
- Outline
- Bio
- Competing books
- PragProg books
- Market size
- Promotional ideas
- Writing samples
Most of these templates ask for similar content, so once you've completed one pitch, the others are much easier. Here are some tips I'd offer for building a pitch.
Put yourself in the shoes of the publisher. This book is a huge deal to you: but it's just one of the dozens that the publisher will publish just this year. You want to write an amazing book: but the publisher wants to publish one that will sell.
And these are major differences! The publisher will care very much about the competition for the book, and how their existing titles relate to it. Like a VC firm, a publisher will not want to fund two investments competing in the exact same market: if the publisher recently published a book that is a deep dive on Go, they will almost certainly pass on the next one, no matter how good your pitch is.
Pitching to several publishers in parallel is totally fine, and you should do it! This is one thing I wish I'd done differently. In my mind, I was 100% certain that my first publisher-of-choice would jump on the opportunity to publish this book. I thus felt it would be "unfair" to pitch to other publishers before hearing back.
In hindsight, as a first-time author, this strategy was a waste of time on my end. Most publishers are unlikely to take a risk on a first-time author with no books published in the past - like I was in 2019. And so the likely outcome is rejection in most cases.
In my case, I spent about two and a half months waiting on a response from this first publisher. My acquisitions editor was championing the book - making the case for the publisher to offer a contract - but in the end, the publisher chose another book with a similar topic that was in their pipeline. This made perfect business sense for them - but I had spent months waiting, instead of pitching the book to other publishers!
My book pitch ended up being a helpful resource on my self-publishing journey. Even though I did not release with a publisher: pitching to publishers helped the book become an eventual success. It was for these reasons:
- Defining the structure. I had my table of contents well thought-out by the time I submitted the pitch. This structure changed later, but it was a solid start.
- Positioning the book. I had a good idea of the "competitive" landscape, and what books my title would "go up against." It also helped me focus on how my book is different to what is already out there.
- Forcing me to think about marketing. The Pragmatic Bookshelf asked for a section on promotional ideas. This forced me to think about where (and how) I would promote the book - even before getting into the thick of writing. When going with a publisher, it's safe to assume that the publisher's brand will do some marketing. However, authors will still do the lion's share of marketing - and it's good to think about this ahead of time.
5. Working with a publisher
I got lucky with one of the three publishers, in the end. This publisher was looking for a book just like mine, right at that time! What happened was that one of their best sellers had to be pulled from publication, for reasons outside the publisher's control. Apparently, when my pitch arrived, they had just started searching for a book that could plug the hole - and they saw mine as a perfect fit for a "software career advice" book.
At the time, this felt like great luck. In hindsight, my relationship with the publisher might have soured exactly because they were looking for me to write a specific kind of book that would be similar enough to this old book - but I had no intention of doing so. More on how things went sour in the section after this one.
From signing the contract, I worked with the publisher for about a month - so I'm not exactly the most experienced on this front. However, a couple of things stood out as strong positives - things I "lost" when deciding to self-publish, in the end.
Strong pressure to write - thanks to the contract. My contract had pretty strict deadlines included. We signed it on 11 January 2020, and these deadlines were part of the contract:
"The Author shall prepare and deliver to the Publisher a machine-readable electronic copy of the manuscript for the Work, including all its illustrations, code listings, and exercises, as mutually agreed upon by the Publisher and the Author as follows:
- Not later than March 15, 2020, a partial manuscript for the Work totaling not less than one third of the planned finished Work.
- Not later than June 1, 2020, a partial manuscript for the Work totaling not less than two thirds of the planned finished Work.
- Not later than August 15, 2020, a draft of the complete manuscript for the Work suitable for review.
- Not later than September 1, 2020, the final, revised and complete manuscript for the Work acceptable to the Publisher for publication."
Talk about pressure! Also, my first payout was tied to reaching the first milestone: delivering at least a third of the finished work. My publisher also set up regular check-ins to help me stay accountable. And this kind of pressure was good - because without it, I would have put off writing, or gotten stuck on relatively trivial parts!
6. Breaking up with the publisher
While I greatly appreciated that a publisher took a chance on me, lots of things felt wrong from the start. A month into working together, I felt that things were getting worse, and not better.
The small things that I dismissed, in the beginning:
- A (very) opinionated structure. This publisher had strongly opinionated templates I was told to use for all chapters. They required each chapter to start by stating what the reader will learn, and to summarize this at the end of the chapter. It wasn't how I imagined my book - but it didn't seem I had a choice. I figured I'd give it a go. The publisher knows better after all, as they've done this hundreds of times. Right?
- Needing to ask for permission to share drafts on social media. I originally planned to share screenshots of some of the parts I was writing, to get feedback as I went - and to increase the visibility of the book. I thought this was a no-brainer. Not only does this kind of "early sharing" make the book better, it also makes more people excited about the book, leading to more eventual customers. To my surprise, my contact at the publisher said I would need to ask for permission to do this. Permission? For something that would market the book? Yes: because the publisher owns all publishing rights, including to the draft!
- I won't decide what the title will be. I had strong opinions about what I'd like the book's title to be. My publishing contact also had ideas on what would be good to add to it - like introducing the term "mentoring" in the title or subtitle, an idea I disliked. As I talked with them, it became clear that the publisher would set the final title: not me. Hmm - odd, no? It's another reminder that, although it's my book, it's really the publisher's book, and they have the final say on all important decisions.
- Nudges to "dumb down" the book. My editor kept suggesting edits to make the content more "beginner-friendly," and proposed I introduce e.g. "Alice and Bob" examples to make the contents easier to digest. One of the publisher's recent best-selling books heavily used Alice and Bob, and it seems the publisher thought this helped its sales.
The first major editorial review was when I decided to part ways with the publisher. About a month and a half in, the publisher pulled together several experienced editors, who offered suggestions on how I could improve the book. The suggestions were these:
- Focus on reader engagement. Tell stories and develop them with emotion, mystery, aha moments, and unexpected conclusions. Tell the stories from the "we" or "they" perspective -- make stories team-oriented.
- Exercises. Develop exercises for use within the chapters (not just end) or a story about what happened when one person did the exercise.
- Mini-projects. Guide readers to discover and come to conclusions on their own (see Donald Saari story in What the Best College Teachers Do). Mini project topics: testing, architectures.
- Word of the day feature. Example: Dependency injection (what is it)? Scatter these across the book.
- Quotes. Include quotes from luminaries such as [Well-known-person 1] and [Well-known-person 2] that relate to advice given. Ask other [Publisher] authors to relate experience about how they followed similar advice and were successful.
- Tech map. Create a diagram of the current technology landscape. Example big-picture topics: architecture demystified, distributed systems demystified.
While I appreciated the suggestions, I hated all of them. I saw what implementing them would do: they would turn this book - whose "forced" style I already had reservations about - into something I would not want to read. Much less write!
I envisioned writing a more matter-of-fact book without exercises, "mini projects", or "word of the day" gimmicks.
I sat down to reflect on why I had chosen to work with a publisher in the first place. As an author, I was giving up a lot: editorial control, the bulk of revenue, all publishing rights… and for what? For the publisher to make the process easier, and for the end result to be a better book than if I were working alone.
But I felt that this book would be far worse if I continued with my publisher, and the only way to get it back to what I envisioned was to spend a lot of time and energy pushing back on them.
It would cost me less energy to self-publish. So I decided to terminate my agreement, because it didn't feel like my publisher was helping me write the book I wanted to write.
My publisher was understanding and professional in terminating the contract. I explained to them that all the feedback suggested they wanted a very different book from the one I wanted to write. And that, frankly, I am not the author to write that kind of book.
Truth be told, I was embarrassed that I had wasted their resources - working with their development editor and the editing team - for those two months. At the same time, I had been vocal with my editor about my hesitation over the mandated style, and I made the decision that there was no point in continuing at the first formal feedback session. I'm not sure I could have come to this conclusion any sooner, as I was still learning how this book publisher worked up until that point.
To show how professional this team was, this is the termination letter they sent as a signed PDF:
"This letter is in reference to our Publishing Agreement with you for [what would become The Software Engineer's Guidebook] dated January 11, 2020. By mutual agreement, we are terminating the publishing contract.
Since no advance was paid to you under the terms of this contract, all rights in the content you originally submitted will hereby return to you and we will consider this matter concluded.
The decision to cancel a project is never an easy one to make. We thank you for all the efforts on this project that you made and wish you the best in your future endeavors."
At this point, I had learned enough about publishers and about myself to decide: I'm doing it by myself. Having my book accepted by a major publisher gave external validation that there's a strong business case for The Software Engineer's Guidebook. And working with an opinionated publisher - and continuously pushing back on its styling suggestions - made me realize that I already have my own opinionated style that I like using.
I did lose a very important thing by deciding to self-publish: the accountability of meeting a publishing deadline. Working with the publisher, this book would have been out in fall 2020 or spring 2021. Self-publishing, I launched it in November 2023.
One reason the book came out two years later than it would have with a publisher was that I now knew I could no longer rely on a well-known publisher lending my book their brand. For my book to have even a slim chance of being successful, I would have to compensate for not being associated with a publisher, and fill the gap in marketing and awareness leading up to the book launch.
Not having a publisher was one reason I started writing The Pragmatic Engineer Newsletter in August 2021 (a year and a half after breaking up with this publisher) - and the sudden success of the newsletter left me less time to wrap up the book. At the same time, by the time the book was ready, there were plenty of people looking forward to reading it - and many of them were already readers of the newsletter!
In a follow-up article, I'll cover how I went about the actual self-publishing process, how the book ended up selling, and other learnings. Subscribe to The Pragmatic Engineer to get notified when it is out.
-
đ r/reverseengineering Reversing the bootloader of a Numworks calculator :D rss
submitted by /u/Next_Material_293
[link] [comments] -
đ r/reverseengineering Which technique is better: Heavens-Gate or Hell-Gate? rss
submitted by /u/SwagNoLimit
[link] [comments] -
đ r/LocalLLaMA Seems like the new K2 benchmarks are not too representative of real-world performance rss
submitted by /u/cobalt1137
[link] [comments]
-
đ r/LocalLLaMA We put a lot of work into a 1.5B reasoning model - now it beats bigger ones on math & coding benchmarks rss
- We put a lot of care into making sure the training data is fully decontaminated - every stage (SFT and RL) went through strict filtering to avoid any overlap with evaluation benchmarks.
- It achieves state-of-the-art performance among small (<4B) models, both in competitive math and competitive coding tasks. It even surpasses DeepSeek R1 0120 in competitive math benchmarks.
- It's not designed as a general chatbot (though it can handle basic conversation and factual QA). Our main goal was to prove that small models can achieve strong reasoning ability, and we've put a lot of work and iteration into achieving that, starting from a base like Qwen2.5-Math-1.5B (which originally had weak math and almost no coding ability) to reach this point.
- We'd love for the community to test it on your own competitive math/coding benchmarks and share results or feedback here. Any insights will help us keep improving.
HuggingFace Paper: paper
X Post: X
Model: Download Model (set resp_len=40k, temp=0.6 / 1.0, top_p=0.95, top_k=-1 for better performance.) submitted by /u/innocent2powerful
[link] [comments]
-
đ HexRaysSA/plugin-repository commits sync repo: +1 release rss
sync repo: +1 release ## New releases - [oplog](https://github.com/williballenthin/idawilli): 0.2.0 -
đ Stavros' Stuff Latest Posts I converted a rotary phone into a meeting handset rss
The meeting stakes are high when you can get hung up on.
As you may remember, or may completely not know, I have a bit of a fascination with old rotary phones. Occasionally, when people learn about this fascination, they donate their old rotary phones to me, so I have ended up with a small collection.
The other thing I have a fascination with is meetings. Well, I say “fascination”, but it’s more of a burning hatred, really. One day, a few months ago, I was in one such meeting, as I have been every day since, and I jokingly pretended to get irate about something.
One of my coworkers laughed and said “I bet if this were a phone call, you’d slam the phone down right now”, and a dread spread over me. Why didn’t I have a phone handset I could slam down? Had I really become a corporate husk of my former, carefree self, puppeteered by
-
đ sacha chua :: living an awesome life 2025-11-10 Emacs news rss
- Upcoming events (iCal file, Org):
- OrgMeetup (virtual) https://orgmode.org/worg/orgmeetup.html Wed Nov 12 0800 America/Vancouver - 1000 America/Chicago - 1100 America/Toronto - 1600 Etc/GMT - 1700 Europe/Berlin - 2130 Asia/Kolkata – Thu Nov 13 0000 Asia/Singapore
- Atelier Emacs Montpellier (in person) https://lebib.org/date/atelier-emacs Fri Nov 14 1800 Europe/Paris
- EmacsSF (in person): coffee.el in SF https://www.meetup.com/emacs-sf/events/311801687/ Sat Nov 15 1100 America/Los_Angeles
- London Emacs (in person): Emacs London meetup https://www.meetup.com/london-emacs-hacking/events/311781816/ Tue Nov 18 1800 Europe/London
- M-x Research: TBA https://m-x-research.github.io/ Wed Nov 19 0800 America/Vancouver - 1000 America/Chicago - 1100 America/Toronto - 1600 Etc/GMT - 1700 Europe/Berlin - 2130 Asia/Kolkata – Thu Nov 20 0000 Asia/Singapore
- Emacs APAC: Emacs APAC meetup (virtual) https://emacs-apac.gitlab.io/announcements/ Sat Nov 22 0030 America/Vancouver - 0230 America/Chicago - 0330 America/Toronto - 0830 Etc/GMT - 0930 Europe/Berlin - 1400 Asia/Kolkata - 1630 Asia/Singapore
- Formation fondamentale (2 jours) - Paris, in-person https://emacsboost.com/ - lundi 24 au mardi 25 novembre 2025
- Emacs configuration:
- Emacs For Writers Unit 9: Installing Packages (06:43)
- Hacking on Emacs #4 (38:06) - Making sure window configuration stays the same after finishing Magit commit (when using `current-window-only`)
- Paul Jorgensen: OS Abstraction with Emacs
- emacs literate org mode config for your init.el (59:54)
- Emacs Lisp:
- Appearance:
- no-distraction.el - my attempt to reduce visual noise in code using tree-sitter (Reddit)
- Impress other writers with productive transparency (slightly hacky but works for me) (Reddit)
- Protesilaos Stavrou: Emacs: modus-themes version 5.1.0
- Protesilaos Stavrou: Emacs: ef-themes version 2.0.0
- Protesilaos Stavrou: Emacs: "standard-themes" version 3.0.0
- Protesilaos Stavrou: Emacs: complete examples for Modus themes derivatives
- Magnus: Making a theme based on modus
- tusharhero: ANN: Modus Gotham is here
- Navigation:
- Dired:
- Org Mode:
- Emacs For Writers Unit 7: Advanced Text Markup in Org Mode (08:19)
- Sidebar for Emacs Org Mode (Reddit)
- Brainiac v1.1 released (Reddit) - minimal GTD workflows, notes, and task management using Org Mode
- Arch Install for Doom Emacs in org mode…
- tel: custom link type
- name: custom link type for defining HTML anchors
- Org Mode requests: [RFC] How do you use org-export-with-entities? (was: question: protect latex macros for export)
- Listful Andrew: Org tables in comments and docstrings: in Emacs Lisp and Bash (Irreal)
- Markdown to Org | Zenie's Qis (@Zenie@piaille.fr)
- alvarmaciel/org-screenshot-grim: Take screenshots in org-mode using grim (Wayland)
- org-social.el 2.5: poll improvements, infinite scroll pagination, random order, etc. (@andros@activity.andros.dev)
- Completion:
- Coding:
- Release CIDER 1.20 ("Lanzarote") - small improvements (@bbatsov@hachyderm.io) - Clojure
- Eglot, Ruby LSP and StandardRB
- Emacs eglot and language servers for auto completion (25:08)
- snakemacs: an emacs30 setup for Python and Jupyter with pixi (Reddit)
- Emacs Indigo: bindings for the Indigo cheminformatics library (Reddit)
- Effective Golang in Emacs (34:18, Reddit, @skybert@hachyderm.io)
- home/.config/emacs/recipes/go-rcp.el at master ¡ Crandel/home ¡ GitHub (@crandel@fosstodon.org)
- Swift development - a complete package for building iOS/macOS apps using Emacs (Reddit, Irreal)
- George Huebner: disaster.el + zig cc = Budget godbolt
- gfm-alerts.el: Syntax highlighting for quote blocks that become alerts on GitHub (Reddit)
- Tip: Customize magit-margin-settings to show more info in the magit margins
- Web:
- Multimedia:
- Fun:
- AI:
- Community:
- Other:
- Emacs development:
- emacs-devel:
- Re: emacs-30 a71ba898db8: ; Update the MinGW URLs in w32 FAQ and nt/INSTALL - Eli Zaretskii challenges of mingw environment setup
- Inhibiting frame parameter changes - martin rudalics
- Re: An idea for modifying the strategy for determining positions in byte compile warning/error messages - Stefan Monnier
- Re: Extending seq.el with splice functionality - Eli Zaretskii - expanding/splitting seq.el?
- Zone multi-window and -frame support
- ; * etc/NEWS (hs-hide-block-behavior): Explain the replacements.
- (diff-refine-threshold): New custom var (bug#79546)
- Don't discard empty string arguments from emacsclient
- Unify constants that are equal-including-properties in compiler
- Add option to auto-refresh the lossage buffer. (Bug#79732)
- hideshow: Rewrite 'hs-special-modes-alist'
- emacs-devel:
- New packages:
- dag-draw: Draw directed graphs using the GKNV algorithm (MELPA)
- fuzzy-clock: Display time in a human-friendly, approximate way (MELPA)
- maccalfw: Calendar view for Mac Calendars (MELPA)
- org-mcp: MCP server for Org-mode (MELPA)
- typst-preview: Live preview of typst (MELPA)
Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!
You can e-mail me at sacha@sachachua.com.
- Upcoming events (iCal file, Org):
-
đ r/LocalLLaMA A startup Olares is attempting to launch a small 3.5L MiniPC dedicated to local AI, with RTX 5090 Mobile (24GB VRAM) and 96GB of DDR5 RAM for $3K rss
submitted by /u/FullOf_Bad_Ideas
[link] [comments]
-
- November 10, 2025
-
đ IDA Plugin Updates IDA Plugin Updates on 2025-11-10 rss
IDA Plugin Updates on 2025-11-10
New Releases:
Activity:
- BinAIVulHunter
- 1b0b9908: Update and rename VulChatGPT.py to BinAIVulHunter.py
- 32285b5e: Rename project from VulChatGPT to BinAIVulHunter
- 2c573b05: Update VulChatGPT.py
- 5f9bc06d: Update VulChatGPT.py
- 1973bcba: Revise README for VulChatGPT plugin enhancements
- e292eed7: Update README.md
- 440caca0: Implement Google Gemini integration and enhance Control Panel
- capa
- dotfiles
- goomba
- 8302f5bf: Merge pull request #18 from HexRaysSA/ci-gha-2
- ida-codedump
- 3eae56b5: feat: add PTN provenance tracking and UI actions
- ida-pro-mcp
- 1bc4fb56: Update stack_frame retrieval in disassemble_function to include a newâŚ
- idawilli
- LUDA
- cd010a72: Update README.md
- 71cdb747: Create README.md
- 060cdafd: Merge branch 'master' of https://github.com/stolevchristian/LUDA
- 7f3561c2: fixed silly mistake
- 2d1228e8: Create LICENSE
- d1165df3: Add files via upload
- 9f360df1: fixed disassembling
- 1799e137: Fixed some stuff
- a02f379f: idek
- 662964d4: Sorted and added new features
- panda
- 548c5d5e: Implement Ubuntu 24 Container/Debian Packages
- qscripts
- a54285ae: Merge pull request #14 from Ylarod/main
- SuperPseudo
- workflow.sh
- BinAIVulHunter
-
đ r/reverseengineering Spider-Man: The Movie Game dissection project Checkpoint - November 2025 rss
submitted by /u/krystalgamer
[link] [comments] -
đ livestorejs/livestore "v0.4.0-dev.17" release
"Release
0.4.0-dev.17including Chrome Extension" -
đ r/wiesbaden Visiting Christmas markets together in the Rhine-Main area rss
Hi! Are there any women here who'd like to visit Christmas markets together during Advent?
Unfortunately, my friends aren't fans of Christmas markets and punch :(
Happy to go to Mainz, Wiesbaden, FFM, Schloss Vollrads, ...
submitted by /u/yellowschmetterling
[link] [comments] -
đ r/LocalLLaMA AMA With Moonshot AI, The Open-source Frontier Lab Behind Kimi K2 Thinking Model rss
Hi r/LocalLLaMA! Today we are hosting Moonshot AI, the research lab behind the Kimi models. We're excited to have them open up and answer your questions directly. The AMA will run from 8 AM - 11 AM PST, with the Kimi team continuing to follow up on questions over the next 24 hours.
Thanks everyone for joining our AMA. The live part has ended and the Kimi team will be following up with more answers sporadically over the next 24 hours.
submitted by /u/nekofneko
[link] [comments]
-
đ r/LocalLLaMA Qwen3-VL's perceptiveness is incredible. rss
I took a 4k image and scattered around 6 medium-length words.
With `Qwen3-VL-8B-Instruct-GGUF`, a temperature of `0`, an image token count of `2300` (seems to be the sweet spot), and the prompt: "Provide transcriptions and bounding boxes for the words in the image. Use JSON format."
This is the output:
[ {"bbox_2d": [160, 867, 181, 879], "text_content": "steam"}, {"bbox_2d": [146, 515, 168, 527], "text_content": "queen"}, {"bbox_2d": [565, 731, 589, 743], "text_content": "satisfied"}, {"bbox_2d": [760, 615, 784, 627], "text_content": "feather"}, {"bbox_2d": [335, 368, 364, 379], "text_content": "mention"}, {"bbox_2d": [515, 381, 538, 392], "text_content": "cabinet"} ]
Flawless. No notes. It even got the bounding boxes correct.
How do other models compare?
- Gemini 2.5 pro: Hallucinates an answer.
- Claude Opus 4: Correctly identifies 3/6 words.
- ChatGPT 5: After 5 minutes (!!) of thinking, it finds all 6 words. The bounding boxes are wrong.
- DeepSeekOCR: Produces garbage (possible PEBCAK)
- PaddleOCR-VL-0.9B: Finds 3 words, hallucinates 2. Doesn't output bounding boxes.
- GLM-4.5V: Also perfect results.
Very impressive that such a small model can get such good results, especially considering it's not tuned for OCR.
edit:
Here's the script I used to run it.
submitted by /u/Trypocopris
[link] [comments] -
đ r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.
submitted by /u/AutoModerator
[link] [comments] -
đ r/LocalLLaMA I tested Strix Halo clustering w/ ~50Gig IB to see if networking is really the bottleneck rss
TLDR: While InfiniBand is cool, 10 Gbps Thunderbolt is sufficient for llama.cpp.

Recently I got really fascinated by clustering with Strix Halo to get a potential 200 GB of VRAM without significant costs. I'm currently using a 4x4090 solution for research, but it's very loud and power-hungry (plus it doesn't make much sense for normal 1-2 user inference - this machine is primarily used for batch generation for research purposes). I wanted to look for a low-power but efficient way to inference ~230B models at Q4. And here we go.

I always had this question of how exactly networking would affect the performance. So I got two modded Mellanox ConnectX-5 Ex 100 Gig NICs which I had some experience with on NCCL. These cards are very cool with reasonable prices and are quite capable. However, due to the Strix Halo platform limitation, I only got a PCIe 4.0 x4 link. But I was still able to get around 6700 MB/s or roughly 55 Gbps networking between the nodes, which is far better than using IP over Thunderbolt (10 Gbps).

I tried using vLLM first and quickly found out that RCCL is not supported on Strix Halo. :( Then I tried using llama.cpp RPC mode with the `-c` flag to enable caching, and here are the results I got:

| Test Type (ROCm) | Single Machine w/o RPC | 2.5 Gbps | 10 Gbps (TB) | 50 Gbps | 50 Gbps + libvma |
|---|---|---|---|---|---|
| pp512 | 653.74 | 603.00 | 654.03 | 663.70 | 697.84 |
| tg128 | 49.73 | 30.98 | 36.44 | 35.73 | 39.08 |
| tg512 | 47.54 | 29.13 | 35.07 | 34.30 | 37.41 |
| pp512 @ d512 | 601.75 | 554.17 | 599.76 | 611.11 | 634.16 |
| tg128 @ d512 | 45.81 | 27.78 | 33.88 | 32.67 | 36.16 |
| tg512 @ d512 | 44.90 | 27.14 | 31.33 | 32.34 | 35.77 |
| pp512 @ d2048 | 519.40 | 485.93 | 528.52 | 537.03 | 566.44 |
| tg128 @ d2048 | 41.84 | 25.34 | 31.22 | 30.34 | 33.70 |
| tg512 @ d2048 | 41.33 | 25.01 | 30.66 | 30.11 | 33.44 |

As you can see, the Thunderbolt connection almost matches the 50 Gbps MLX5 on token generation. Compared to the non-RPC single node inference, the performance difference is still quite substantial - about a 15 token/s difference - but as the context lengthens, the text generation difference somehow gets smaller and smaller. Another strange thing is that somehow the prompt processing is better on RPC over 50 Gbps, even better than the single machine. That's very interesting to see.
During inference, I observed that the network was never used at more than maybe ~100 Mbps or 10 MB/s most of the time, suggesting the gain might not come from bandwidth - maybe latency? But I don't have a way to prove what exactly is affecting the performance gain from 2.5 Gbps to 10 Gbps IP over Thunderbolt.
Here is the llama-bench command I'm using:
    ./llama-bench -m ./gpt-oss-120b-mxfp4-00001-of-00003.gguf -d 0,512,2048 -n 128,512 -o md --rpc <IP:PORT>

So the result is pretty clear: you don't need a fancy IB card to gain usable results on llama.cpp with Strix Halo. At least until RCCL supports Strix Halo, I think.
EDIT: Updated the results with libvma as u/gnomebodieshome suggested - there is quite a big improvement! But I will need to rerun the tests at some point, since the build I am using now is no longer the version the old data was gathered with. So don't fully trust the performance numbers here yet.
submitted by /u/Hungry_Elk_3276
[link] [comments] -
đ Rust Blog Announcing Rust 1.91.1 rss
The Rust team has published a new point release of Rust, 1.91.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.91.1 is as easy as:
    rustup update stable

If you don't have it already, you can get `rustup` from the appropriate page on our website.

What's in 1.91.1

Rust 1.91.1 includes fixes for two regressions introduced in the 1.91.0 release.

Linker and runtime errors on Wasm

Most targets supported by Rust identify symbols by their name, but Wasm identifies them with a symbol name and a Wasm module name. The #[link(wasm_import_module)] attribute allows customizing the Wasm module name an extern block refers to:

    #[link(wasm_import_module = "hello")]
    extern "C" {
        pub fn world();
    }

Rust 1.91.0 introduced a regression in the attribute, which could cause linker failures during compilation ("import module mismatch" errors) or the wrong function being used at runtime (leading to undefined behavior, including crashes and silent data corruption). This happened when the same symbol name was imported from two different Wasm modules across multiple Rust crates. Rust 1.91.1 fixes the regression. More details are available in .
Cargo target directory locking broken on illumos
Cargo relies on locking the `target/` directory during a build to prevent concurrent invocations of Cargo from interfering with each other. Not all filesystems support locking (most notably some networked ones): if the OS returns the `Unsupported` error when attempting to lock, Cargo assumes locking is not supported and proceeds without it.

Cargo 1.91.0 switched from custom code interacting with the OS APIs to the `File::lock` standard library method (recently stabilized in Rust 1.89.0). Due to an oversight, that method always returned `Unsupported` on the illumos target, causing Cargo to never lock the build directory on illumos regardless of whether the filesystem supported it.

Rust 1.91.1 fixes the oversight in the standard library by enabling the `File::lock` family of functions on illumos, indirectly fixing the Cargo regression.

Contributors to 1.91.1
Many people came together to create Rust 1.91.1. We couldn't have done it without all of you. Thanks!
-
đ matklad Readonly Characters Are a Big Deal rss
Readonly Characters Are a Big Deal
Nov 10, 2025
I like the Emacs UX as exemplified by Magit. I consider it a user interface paradigm on the same footing as UNIX pipes (see "An Engine for an Editor"):
Pipes give you 1D read-only streams of characters which excel at batch processing. Emacs is all about interactive mutable 2D buffers of attributed text.
Today I realized that an important feature of Emacs text buffers is read-only characters (manual).
Like in any editor, you can mark an entire Emacs buffer as read-only. But you can also mark individual substrings read-only, so that you can edit anywhere except specific ranges.
This is a useful feature for bidirectional interaction. Consider the in-editor terminal I am currently using, which looks like this:
    ./zig/zig build fuzz -- message_bus
    = time: 3s =
    info(fuzz): Fuzz seed = 2355780251053186744
    info(message_bus_fuzz): command weight: reserved = 0
    info(message_bus_fuzz): command weight: ping = 0
    info(message_bus_fuzz): command weight: pong = 0
    info(message_bus_fuzz): command weight: ping_client = 0
    info(message_bus_fuzz): command weight: pong_client = 0
    info(message_bus_fuzz): command weight: request = 0
    info(message_bus_fuzz): command weight: prepare = 37
    info(message_bus_fuzz): command weight: prepare_ok = 0

The first line is the command to execute; this is typed by me manually, and then I hit a "submit" shortcut to actually run the command. Then goes the status line, which shows how long the command has been running so far and the exit code (when the command terminates). The status line is determined by the "terminal" itself. Finally, there's the output of the command itself, updated live.
In this sort of interface, the command is modifiable by the user, but read-only for the editor. The status is the opposite - the editor updates it every second, but the user should be prevented from touching it. And the output can be CRDT-style edited by both parties (I often find it useful to edit the output in place before pasting it elsewhere).
Sadly, in VS Code I canât prevent the user from editing the status, so my implementation is a bit janky, and this, I think, goes to the core of why I donât see VS Code as a great platform for the kind of interactive tools I want to write.
Read-only ranges are hard to implement! Text editing hates you as is, but this feature requires tracking text attributes in an intelligent way under modifications (see sticky properties), and also feeds back into modifications themselves! No wonder Monaco, the editor engine underlying VS Code, lost this ability at some point.
Still, I feel like âdoes it support sticky read-only attribute?â is a good litmus test to check if an editor can support interactive applications a-la Magit seamlessly.
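To make the mechanics concrete, here is a toy sketch (my own illustration, not how Emacs or any real editor implements it) of a buffer that rejects edits overlapping protected spans - the essence of the read-only-range check, minus all the hard parts like sticky attribute propagation under edits:

```rust
// A buffer with byte ranges [start, end) that edits must not touch.
struct Buffer {
    text: String,
    readonly: Vec<(usize, usize)>,
}

impl Buffer {
    // Replace text[start..end] with `s`, unless the edit overlaps a
    // protected span; returns whether the edit was applied.
    fn try_replace(&mut self, start: usize, end: usize, s: &str) -> bool {
        if self.readonly.iter().any(|&(a, b)| start < b && a < end) {
            return false; // edit touches a protected span
        }
        self.text.replace_range(start..end, s);
        true
    }
}

fn main() {
    let mut buf = Buffer {
        text: String::from("cmd> run\n[status: ok]\noutput"),
        readonly: vec![(9, 21)], // protect the status line
    };
    assert!(!buf.try_replace(10, 12, "xx")); // inside status: rejected
    assert!(buf.try_replace(22, 22, "!")); // insert into output: allowed
    println!("{}", buf.text);
}
```

The real difficulty, as the post notes, is not this check but keeping the ranges correct as surrounding text changes.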
-
đ Baby Steps Just call clone (or alias) rss
Continuing my series on ergonomic ref-counting, I want to explore another idea, one that I'm calling "just call clone (or alias)". This proposal specializes the `clone` and `alias` methods so that, in a new edition, the compiler will (1) remove redundant or unnecessary calls (with a lint); and (2) automatically capture clones or aliases in `move` closures where needed.

The goal of this proposal is to simplify the user's mental model: whenever you see an error like "use of moved value", the fix is always the same: just call `clone` (or `alias`, if applicable). This model is aiming for the balance of "low-level enough for a Kernel, usable enough for a GUI" that I described earlier. It's also making a statement: the key property we want to preserve is that you can always find where new aliases might be created, but it's OK if the fine-grained details around exactly when the alias is created are a bit subtle.

The proposal in a nutshell

Part 1: Closure desugaring that is aware of clones and aliases

Consider this `move` future:

    fn spawn_services(cx: &Context) {
        tokio::task::spawn(async move { // ---- move future
            manage_io(cx.io_system.alias(), cx.request_name.clone());
            //        --------------------  -----------------------
        });
        ...
    }

Because this is a `move` future, it takes ownership of `cx.io_system` and `cx.request_name`. Because `cx` is a borrowed reference, this will be an error unless those values are `Copy` (which they presumably are not). Under this proposal, capturing aliases or clones in a `move` closure/future would result in capturing an alias or clone of the place. So this future would be desugared like so (using explicit capture clause strawman notation):

    fn spawn_services(cx: &Context) {
        tokio::task::spawn(
            async move(cx.io_system.alias(), cx.request_name.clone()) {
                //     --------------------  -----------------------
                //     capture alias/clone respectively
                manage_io(cx.io_system.alias(), cx.request_name.clone());
            }
        );
        ...
    }

Part 2: Last-use transformation

Now, this result is inefficient: there are now two aliases/clones. So the next part of the proposal is that the compiler would, in newer Rust editions, apply a new transformation called the last-use transformation. This transformation would identify calls to `alias` or `clone` that are not needed to satisfy the borrow checker and remove them. This code would therefore become:

    fn spawn_services(cx: &Context) {
        tokio::task::spawn(
            async move(cx.io_system.alias(), cx.request_name.clone()) {
                manage_io(cx.io_system, cx.request_name);
                //        ------------  ---------------
                //        converted to moves
            }
        );
        ...
    }

The last-use transformation would apply beyond closures. Given an example like this one, which clones `id` even though `id` is never used later:

    fn send_process_identifier_request(id: String) {
        let request = Request::ProcessIdentifier(id.clone());
        //                                       ----------
        //                                       unnecessary
        send_request(request)
    }

the user would get a warning like so:

    warning: unnecessary `clone` call will be converted to a move
     --> src/main.rs:7:40
      |
    8 |     let request = Request::ProcessIdentifier(id.clone());
      |                                              ^^^^^^^^^^ unnecessary call to `clone`
      |
      = help: the compiler automatically removes calls to `clone` and `alias` when not required to satisfy the borrow checker
    help: change `id.clone()` to `id` for greater clarity
      |
    8 -     let request = Request::ProcessIdentifier(id.clone());
    8 +     let request = Request::ProcessIdentifier(id);
      |

and the code would be transformed so that it simply does a move:

    fn send_process_identifier_request(id: String) {
        let request = Request::ProcessIdentifier(id);
        //                                       --
        //                                       transformed
        send_request(request)
    }
The goal of this proposal is that, when you get an error about a use of moved value, or moving borrowed content, the fix is always the same: you just call
clone(oralias). It doesn't matter whether that error occurs in the regular function body or in a closure or in a future, the compiler will insert the clones/aliases needed to ensure future users of that same place have access to it (and no more than that).I believe this will be helpful for new users. Early in their Rust journey new users are often sprinkling calls to clone as well as sigils like
&in more- or-less at random as they try to develop a firm mental model - this is where the "keep calm and call clone" joke comes from. This approach breaks down around closures and futures today. Under this proposal, it will work, but users will also benefit from warnings indicating unnecessary clones, which I think will help them to understand where clone is really needed.Experienced users can trust the compiler to get it right
But the real question is how this works for experienced users. I've been thinking about this a lot! I think this approach fits pretty squarely in the classic Bjarne Stroustrup definition of a zero-cost abstraction:
"What you don't use, you don't pay for. And further: What you do use, you couldn't hand code any better."
The first half is clearly satisfied. If you don't call
cloneoralias, this proposal has no impact on your life.The key point is the second half: earlier versions of this proposal were more simplistic, and would sometimes result in redundant or unnecessary clones and aliases. Upon reflection, I decided that this was a non-starter. The only way this proposal works is if experienced users know there is no performance advantage to using the more explicit form.This is precisely what we have with, say, iterators, and I think it works out very well. I believe this proposal hits that mark, but I'd like to hear if there are things I'm overlooking.
The last-use transformation codifies a widespread intuition: that `clone` is never necessary
message.clone()to justmessageis fine, as long as the code keeps compiling. But in fact nothing requires that to be the case. Under this proposal, APIs that makeclonesignificant in unusual ways would be more annoying to use in the new Rust edition and I expect ultimately wind up getting changed so that "significant clones" have another name. I think this is a good thing.Frequently asked questions
I think I've covered the key points. Let me dive into some of the details here with a FAQ.
Can you summarize all of these posts you've been writing? It's a lot to digest!
I get it, I've been throwing a lot of things out there. Let me begin by recapping the motivation as I see it:
- I believe our goal should be to focus first on a design that is "low-level enough for a Kernel, usable enough for a GUI".
- The key part here is the word enough. We need to make sure that low-level details are exposed, but only those that truly matter. And we need to make sure that it's ergonomic to use, but it doesn't have to be as nice as TypeScript (though that would be great).
- Rust's current approach to `Clone` fails both groups of users:
  - Calls to `clone` are not explicit enough for kernels and low-level software: when you see `something.clone()`, you don't know whether it is creating a new alias or an entirely distinct value, and you don't have any clue what it will cost at runtime. There's a reason much of the community recommends writing `Arc::clone(&something)` instead.
  - Calls to `clone`, particularly in closures, are a major ergonomic pain point - this has been a clear consensus since we first started talking about this issue.
I then proposed a set of three changes to address these issues, authored in individual blog posts:
- First, we introduce the `Alias` trait (originally called `Handle`). The `Alias` trait introduces a new method `alias` that is equivalent to `clone` but indicates that this will be creating a second alias of the same underlying value.
- Second, we introduce explicit capture clauses, which lighten the syntactic load of capturing a clone or alias, make it possible to declare up-front the full set of values captured by a closure/future, and will support other kinds of handy transformations (e.g., capturing the result of `as_ref` or `to_string`).
- Finally, we introduce the just call clone proposal described in this post. This modifies closure desugaring to recognize clones/aliases and also applies the last-use transformation to replace calls to clone/alias with moves where possible.
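To make the first item concrete, here is a minimal sketch of what an `Alias` trait along these lines might look like. This is my own illustration: the `Clone` supertrait, the default method body, and the `Rc` impl are assumptions, not part of the proposal text.

```rust
use std::rc::Rc;

// Hypothetical sketch of the proposed `Alias` trait.
pub trait Alias: Clone {
    /// Like `clone`, but signals "create another alias of the same
    /// underlying value" rather than "create a distinct value".
    fn alias(&self) -> Self {
        self.clone()
    }
}

// Ref-counted pointers are the canonical aliases.
impl<T> Alias for Rc<T> {}

fn main() {
    let a = Rc::new(42);
    let b = a.alias(); // same allocation; strong count goes 1 -> 2
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(*b, 42);
}
```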
What would it feel like if we did all those things?
Let's look at the impact of each set of changes by walking through the "Cloudflare example", which originated in this excellent blog post by the Dioxus folks:
```rust
let some_value = Arc::new(something);

// task 1
let _some_value = some_value.clone();
tokio::task::spawn(async move {
    do_something_with(_some_value);
});

// task 2: listen for dns connections
let _some_a = self.some_a.clone();
let _some_b = self.some_b.clone();
let _some_c = self.some_c.clone();
tokio::task::spawn(async move {
    do_something_else_with(_some_a, _some_b, _some_c)
});
```

As the original blog post put it:
Working on this codebase was demoralizing. We could think of no better way to architect things - we needed listeners for basically everything that filtered their updates based on the state of the app. You could say "lol get gud," but the engineers on this team were the sharpest people I've ever worked with. Cloudflare is all-in on Rust. They're willing to throw money at codebases like this. Nuclear fusion won't be solved with Rust if this is how sharing state works.
Applying the `Alias` trait and explicit capture clauses makes for a modest improvement. You can now clearly see that the calls to `clone` are `alias` calls, and you don't have the awkward `_some_value` and `_some_a` variables. However, the code is still pretty verbose:

```rust
let some_value = Arc::new(something);

// task 1
tokio::task::spawn(async move(some_value.alias()) {
    do_something_with(some_value);
});

// task 2: listen for dns connections
tokio::task::spawn(async move(
    self.some_a.alias(),
    self.some_b.alias(),
    self.some_c.alias(),
) {
    do_something_else_with(self.some_a, self.some_b, self.some_c)
});
```

Applying the Just Call Clone proposal removes a lot of boilerplate and, I think, captures the intent of the code very well. It also retains quite a bit of explicitness, in that searching for calls to `alias` reveals all the places that aliases will be created. However, it does introduce a bit of subtlety, since (e.g.) the call to `self.some_a.alias()` will actually occur when the future is created and not when it is awaited:

```rust
let some_value = Arc::new(something);

// task 1
tokio::task::spawn(async move {
    do_something_with(some_value.alias());
});

// task 2: listen for dns connections
tokio::task::spawn(async move {
    do_something_else_with(
        self.some_a.alias(),
        self.some_b.alias(),
        self.some_c.alias(),
    )
});
```

I'm worried that the execution order of calls to alias will be too subtle.
How is this "explicit enough for low-level code"?
There is no question that Just Call Clone makes closure/future desugaring more subtle. Looking at task 1:
```rust
tokio::task::spawn(async move {
    do_something_with(some_value.alias());
});
```

this gets desugared to a call to `alias` when the future is created (not when it is awaited). Using the explicit form:

```rust
tokio::task::spawn(async move(some_value.alias()) {
    do_something_with(some_value)
});
```
aliaslooks like its inside the future (or closure), how come it's occuring earlier?"Yet, the code really seems to preserve what is most important: when I search the codebase for calls to
alias, I will find that an alias is creating for this task. And for the vast majority of real-world examples, the distinction of whether an alias is creating when the task is spawned versus when it executes doesn't matter. Look at this code: the important thing is thatdo_something_withis called with an alias ofsome_value, sosome_valuewill stay alive as long asdo_something_elseis executing. It doesn't really matter how the "plumbing" worked.What about futures that conditionally alias a value?
Yeah, good point, those kinds of examples have more room for confusion. Like look at this:

```rust
tokio::task::spawn(async move {
    if false {
        do_something_with(some_value.alias());
    }
});
```

In this example, there is code that uses `some_value` with an alias, but only under `if false`. So what happens? I would assume that indeed the future will capture an alias of `some_value`, in just the same way that this future will move `some_value`, even though the relevant code is dead:

```rust
tokio::task::spawn(async move {
    if false {
        do_something_with(some_value);
    }
});
```

Can you give more details about the closure desugaring you imagine?
Yep! I am thinking of something like this:
- If there is an explicit capture clause, use that.
- Else:
  - For non-`move` closures/futures, no changes: categorize usage of each place and pick the "weakest option" that is available:
    - by ref
    - by mut ref
    - by move
  - For `move` closures/futures, we would change the desugaring to categorize usage of each place `P` and decide whether to capture that place…
    - by clone - there is at least one call `P.clone()` or `P.alias()` and all other usage of `P` requires only a shared ref (reads);
    - by move - if there are no calls to `P.clone()` or `P.alias()`, or if there are usages of `P` that require ownership or a mutable reference.
  - In other words, capture by clone/alias when a place `a.b.c` is only used via shared references, and at least one of those uses is a clone or alias.
    - For the purposes of this, accessing a "prefix place" `a` or a "suffix place" `a.b.c.d` is also considered an access to `a.b.c`.
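For comparison, the "capture by clone" case has to be spelled out manually in today's Rust. This sketch (my own illustration) shows the workaround that the proposed desugaring would subsume:

```rust
use std::rc::Rc;

fn main() {
    let data = Rc::new(vec![1, 2, 3]);

    // Today: to keep using `data` after a `move` closure, you clone into
    // a dedicated binding first, and the closure captures that binding.
    let data2 = Rc::clone(&data);
    let f = move || data2.len();

    // Under the proposal, `move || data.clone().len()` would desugar to
    // the same capture-by-clone, since every use of `data` inside the
    // closure is a shared read or a clone.
    assert_eq!(f(), 3);
    assert_eq!(data.len(), 3); // the original is still usable
    assert_eq!(Rc::strong_count(&data), 2); // the closure holds the alias
}
```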
Examples that show some edge cases:

```rust
if consume { x.foo() }
```

Why not do something similar for non-move closures?
In the relevant cases, non-move closures will already just capture by shared reference. This means that later attempts to use that variable will generally succeed:
```rust
let f = async { // ----- NOT async move
    self.some_a.alias()
};
do_something_else(self.some_a.alias()); // later use succeeds
f.await;
```

This future does not need to take ownership of `self.some_a` to create an alias, so it will just capture a reference to `self.some_a`. That means that later uses of `self.some_a` can still compile, no problem. If this had been a `move` closure, however, the code above would currently not compile.

There is an edge case where you might get an error, which is when you are moving:

```rust
let f = async {
    self.some_a.alias()
};
do_something_else(self.some_a); // move!
f.await;
```

In that case, you can make this an `async move` closure and/or use an explicit capture clause.

Can you give more details about the last-use transformation you imagine?
Yep! During codegen, we would identify candidate calls to `Clone::clone` or `Alias::alias`. After borrow check has executed, we would examine each of the callsites and check the borrow check information to decide:

- Will this place be accessed later?
- Will some reference potentially referencing this place be accessed later?

If the answer to both questions is no, then we will replace the call with a move of the original place.
Here are some examples:
```rust
fn borrow(message: Message) -> String {
    let method = message.method.to_string();
    send_message(message.clone());
    //           ---------------
    //           would be transformed to
    //           just `message`
    method
}

fn borrow(message: Message) -> String {
    send_message(message.clone());
    //           ---------------
    //           cannot be transformed
    //           since `message.method` is
    //           referenced later
    message.method.to_string()
}

fn borrow(message: Message) -> String {
    let r = &message;
    send_message(message.clone());
    //           ---------------
    //           cannot be transformed
    //           since `r` may reference
    //           `message` and is used later.
    r.method.to_string()
}
```

Why are you calling it the last-use transformation and not optimization?
In the past, I've talked about the last-use transformation as an optimization - but I'm changing terminology here. This is because, typically, an optimization is supposed to be unobservable to users except through measurements of execution time (or through UB), and that is clearly not the case here. The transformation would be a mechanical transformation performed by the compiler in a deterministic fashion.
Would the transformation "see through" references?
I think yes, but in a limited way. In other words, I would expect

```rust
Clone::clone(&foo)
```

and

```rust
let p = &foo;
Clone::clone(p)
```

to be transformed in the same way (replaced with `foo`), and the same would apply to more levels of intermediate usage. This would kind of "fall out" from the MIR-based optimization technique I imagine. It doesn't have to be this way, we could be more particular about the syntax that people wrote, but I think that would be surprising.

On the other hand, you could still fool it, e.g., like so:
```rust
fn identity<T>(x: &T) -> &T { x }

identity(&foo).clone()
```

Would the transformation apply across function boundaries?
The way I imagine it, no. The transformation would be local to a function body. This means that one could write a
`force_clone` method like so, which "hides" the clone in a way that it will never be transformed away (this is an important capability for edition transformations!):

```rust
fn pipe<Msg: Clone>(message: Msg) -> Msg {
    log(message.clone()); // <-- keep this one
    force_clone(&message)
}

fn force_clone<Msg: Clone>(message: &Msg) -> Msg {
    // Here, the input is `&Msg`, so the clone is necessary
    // to produce a `Msg`.
    message.clone()
}
```

Won't the last-use transformation change behavior by making destructors run earlier?
Potentially, yes! Consider this example, written using explicit capture clause notation and written assuming we add an `Alias` trait:

```rust
async fn process_and_stuff(tx: mpsc::Sender<Message>) {
    tokio::spawn({
        async move(tx.alias()) { // ---------- alias here
            process(tx).await
        }
    });

    do_something_unrelated().await;
}
```

The precise timing when
`Sender` values are dropped can be important - when all senders have dropped, the `Receiver` will start returning `None` when you call `recv`. Before that, it will block waiting for more messages, since those `tx` handles could still be used.

So, in `process_and_stuff`, when will the sender aliases be fully dropped? The answer depends on whether we do the last-use transformation or not:

- Without the transformation, there are two aliases: the original `tx` and the one being held by the future. So the receiver will only start returning `None` when `do_something_unrelated` has finished and the task has completed.
- With the transformation, the call to `tx.alias()` is removed, and so there is only one alias - `tx`, which is moved into the future and dropped once the spawned task completes. This could well be earlier than in the previous code, which had to wait until both `process_and_stuff` and the new task completed.
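The `Sender`/`Receiver` behavior this relies on can be observed with the std `mpsc` channel; this sketch uses `std::thread` instead of tokio so it runs without dependencies (in std, a `recv` after all senders drop returns an error rather than `None`):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<&str>();

    // An extra alias of the sender, as the untransformed code would hold.
    let tx2 = tx.clone();
    let handle = thread::spawn(move || {
        tx2.send("from task").unwrap();
        // `tx2` is dropped here, when the task completes.
    });
    drop(tx); // drop the original alias

    // `recv` yields messages while any sender is alive...
    assert_eq!(rx.recv().unwrap(), "from task");
    // ...and signals "disconnected" only once *all* senders have dropped.
    assert!(rx.recv().is_err());
    handle.join().unwrap();
}
```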
Most of the time, running destructors earlier is a good thing. That means lower peak memory usage, faster responsiveness. But in extreme cases it could lead to bugs - a typical example is a
`Mutex<()>` where the guard is being used to protect some external resource.

How can we change when code runs? Doesn't that break stability?
This is what editions are for! We have in fact done a very similar transformation before, in Rust 2021. RFC 2229 changed destructor timing around closures and it was, by and large, a non-event.
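The RFC 2229 change is observable today: in the 2021 edition, a `move` closure captures only the fields it actually uses, which changed when the remaining fields are dropped. A small illustration (example mine):

```rust
struct Point {
    x: Vec<i32>,
    y: Vec<i32>,
}

fn main() {
    let p = Point { x: vec![1], y: vec![2] };

    // Rust 2021 (RFC 2229): only `p.x` is captured and moved here.
    let f = move || p.x.len();

    assert_eq!(f(), 1);
    assert_eq!(p.y.len(), 1); // `p.y` was not captured and is still usable
}
```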
The desire for edition compatibility is in fact one of the reasons I want to make this a last-use transformation and not some kind of optimization. There is no UB in any of these examples; it's just that understanding what Rust code does around clones/aliases is a bit more complex than it used to be, because the compiler will do an automatic transformation on those calls. The fact that this transformation is local to a function means we can decide on a call-by-call basis whether it should follow the older edition rules (where the clone will always occur) or the newer rules (where it may be transformed into a move).
Does that mean that the last-use transformation would change with Polonius or other borrow checker improvements?
In theory, yes, improvements to borrow-checker precision like Polonius could mean that we identify more opportunities to apply the last-use transformation. This is something we can phase in over an edition. It's a bit of a pain, but I think we can live with it - and I'm unconvinced it will be important in practice. For example, when thinking about the improvements I expect under Polonius, I was not able to come up with a realistic example that would be impacted.
Isn't it weird to do this after borrow check?
This last-use transformation is guaranteed not to produce code that would fail the borrow check. However, it can affect the correctness of unsafe code:
```rust
let p: *const T = &*some_place;
let q: T = some_place.clone();
//         ----------
//         assuming `some_place` is
//         not used later, becomes a move

unsafe {
    do_something(p);
    //           -
    // This now refers to a stack slot
    // whose value is uninitialized.
}
```

Note though that, in this case, there would be a lint identifying that the call to
`some_place.clone()` will be transformed to just `some_place`. We could also detect simple examples like this one and report a stronger deny-by-default lint, as we often do when we see guaranteed UB.

Shouldn't we use a keyword for this?
When I originally had this idea, I called it "use-use-everywhere" and, instead of writing
`x.clone()` or `x.alias()`, I imagined writing `x.use`. This made sense to me because a keyword seemed like a stronger signal that this was impacting closure desugaring. However, I've changed my mind for a few reasons.

First, Santiago Pastorino gave strong pushback that `x.use` was going to be a stumbling block for new learners. They now have to see this keyword and try to understand what it means - in contrast, if they see method calls, they will likely not even notice something strange is going on.

The second reason was TC, who argued in the lang-team meeting that all the arguments for why it should be ergonomic to clone a ref-counted value in a closure applied equally well to `clone`, depending on the needs of your application. I completely agree. As I mentioned earlier, this also addresses the concern I've heard with the `Alias` trait, which is that there are things you want to ergonomically clone but which don't correspond to "aliases". True.

In general I think that `clone` (and `alias`) are fundamental enough to how Rust is used that it's ok to special-case them. Perhaps we'll identify other similar methods in the future, or generalize this mechanism, but for now I think we can focus on these two cases.

What about "deferred ref-counting"?
One point that I've raised from time-to-time is that I would like a solution that gives the compiler more room to optimize ref-counting to avoid incrementing ref-counts in cases where it is obvious that those ref-counts are not needed. An example might be a function like this:
```rust
fn use_data(rc: Rc<Data>) {
    for datum in rc.iter() {
        println!("{datum:?}");
    }
}
```

This function requires ownership of an alias to a ref-counted value, but it doesn't actually do anything but read from it. A caller like this one…
```rust
use_data(source.alias())
```

…doesn't really need to increment the reference count, since the caller will be holding a reference the entire time. I often write code like this using a
`&`:

```rust
fn use_data(rc: &Rc<Data>) {
    for datum in rc.iter() {
        println!("{datum:?}");
    }
}
```
use_data(&source)- this then allows the callee to writerc.alias()in the case that it wants to take ownership.I've basically decided to punt on adressing this problem. I think folks that are very performance sensitive can use
&Arcand the rest of us can sometimes have an extra ref-count increment, but either way, the semantics for users are clear enough and (frankly) good enough.
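A runnable version of that `&Rc` pattern (with `Vec<i32>` standing in for `Data`, and a sum instead of printing) shows there is no ref-count traffic at the call site:

```rust
use std::rc::Rc;

// Borrow the handle: the callee reads through it, and could still take
// ownership with `Rc::clone(rc)` if it needed to.
fn use_data(rc: &Rc<Vec<i32>>) -> i32 {
    rc.iter().sum()
}

fn main() {
    let source = Rc::new(vec![1, 2, 3]);
    assert_eq!(use_data(&source), 6);
    // No ref-count increment happened at the call site.
    assert_eq!(Rc::strong_count(&source), 1);
}
```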
- Surprisingly to me, `clippy::pedantic` doesn't have a dedicated lint for unnecessary clones. This particular example does get a lint, but it's a lint about taking an argument by value and then not consuming it. If you rewrite the example to create `id` locally, clippy does not complain. ↩︎
- I believe our goal should be to focus first on a design that is "low-level enough for a Kernel, usable enough for a GUI".
-
- November 09, 2025
-
đ IDA Plugin Updates IDA Plugin Updates on 2025-11-09 rss
IDA Plugin Updates on 2025-11-09
New Releases:
Activity:
- CTFStuff
- 0774af1b: ayayaye
- dotfiles
- GTA2_RE
- b586d9ff: fixed the file
- iOS-Study
- 56c8e4e9: [Doc][add]add get go plan
- recover
- sig-importer
- SuperPseudo
- tools4mane
- CTFStuff
-
đ r/LocalLLaMA BERTs that chat: turn any BERT into a chatbot with dLLM rss
Code: https://github.com/ZHZisZZ/dllm
Report: https://api.wandb.ai/links/asap-zzhou/101h5xvg
Checkpoints: https://huggingface.co/collections/dllm-collection/bert-chat
Twitter: https://x.com/asapzzhou/status/1988287135376699451

Motivation: I couldn't find a good "Hello World" tutorial for training diffusion language models, a class of bidirectional language models capable of parallel token generation in arbitrary order, instead of left-to-right autoregression. So I tried finetuning a tiny BERT to make it talk with discrete diffusion - and it turned out more fun than I expected.

TLDR: With a small amount of open-source instruction data, a standard BERT can gain conversational ability. Specifically, a finetuned ModernBERT-large, with a similar number of parameters, performs close to Qwen1.5-0.5B. All training and evaluation code, along with detailed results and comparisons, is available in our W&B report and our documentation.

dLLM: The BERT chat series is trained, evaluated and visualized with dLLM, a unified library for training and evaluating diffusion language models. It brings transparency, reproducibility, and simplicity to the entire pipeline, serving as an all-in-one, tutorial-style resource.

submitted by /u/Individual-Ninja-141
[link] [comments] -
đ hyprwm/Hyprland v0.52.1 release
A patch release backporting some fixes from main to 0.52.0.
Fixes backported
- CI/release: populate git info (#12247)
- protocols/layershell: do not raise protocol error if layer surface is not anchored (#12241)
- protocols/outputMgmt: fix wlr-randr by defering success event until monitor reloads (#12236)
- meson: fix version.h install location
Special thanks
Special thanks as always to:
Our sponsors
Diamond
37Signals
Gold
Framework
Donators
Top Supporters:
--, mukaro, Semtex, Tom94, soy_3l.beantser, SaltyIcetea, Freya Elizabeth Goins, lzieniew, Kay, ExBhal, MasterHowToLearn, 3RM, Tonao Paneguini, Sierra Layla Vithica, Anon2033, Brandon Wang, DHH, alexmanman5, Theory_Lukas, Blake- sama, Seishin, Hunter Wesson, Illyan, TyrHeimdal, elafarge, Arkevius, d, RaymondLC92, MadCatX, johndoe42, alukortti, Jas Singh, taigrr, Xoores, ari- cake, EncryptedEnigma
New Monthly Supporters:
KongrooParadox, Jason Zimdars, grateful anon, Rafael Martins, Lu, Jan, Yves, Luiz Aquino, navik, EvgenyRachlenko, GENARO LOYA DOUR, trustable0370, Jorge Y. C. Rodriguez, Bobby Rivera, steven_s, Pavel DuĹĄek, Toshitaka Agata, mandrav
One-time Donators:
ryorichie, shikaji, tskulbru, szczot3k, Vincent F, myname0101, MirasM, Daniel Doherty, giri, rasa, potato, Jams Mendez, collin, koss054, LouisW, Mattisba, visooo, Razorflak, ProPatte, sgt, Bouni, EarthsonLu, W, Faab, Kenan Sharifli, ArchXceed, benvonh, J.P. Wing, 0xVoodoo, ayhan, Miray Gohan, quiron, August Lilleaas, ~hommel, Ethan Webb, fraccy, Kevin, Carlos SolĂłrzano Cerdas, kastr, jmota, pch, darksun, JoseConseco, Maxime Gagne, joegas, Guido V, RedShed, Shane, philweber, romulus, nuelle, Nick M, Mustapha Mond, bfester, Alvin Lin, 4everN00b, riad33m, astraccato, spirossi, drxm1, anon, conig, Jonas Thern, Keli, Martin, gianu, Kevin K, @TealRaya, Benji, Borissimo, Ebbo, John, zoth, pampampampampamponponponponponponpampampampa, Himayat, Alican, curu, stelman, Q, frigidplatypus, Dan Page, Buzzard, mknpcz, bbutkovic, neonvoid, Pim Polderman, Marsimplodation, cloudscripting, StevenWalter, i_am_terence, mester, Jacob Delarosa, hl, alex, zusemat, LRVR, MichelDucartier, Jon Fredeen, Chris, maxx, Selim, Victor Rosenthal, Luis Gonzalez, say10, mcmoodoo, Grmume, Nilpointer, Lad, Pathief, Larguma, benniheiss, cannikin, NoeL, hyprcroc, Sven Krause, Matej DrobniÄ, vjg73_Gandhi2, SotoEstevez, jeroenvlek, SymphonySimper, simplectic, tricked, Kacper, nehalandrew, Jan Ihnen, Blub, Jonwin, tucker87, outi, chrisxmtls, pseudo, NotAriaN, ckoblue, xff, hellofriendo, Arto Olli, Jett Thedell, Momo On Code, MrFry, stjernstrom, nastymatt, iDie, IgorJ, andresfdz7, Joshua, Koko, joenu, HakierGrzonzo, codestothestars, Jrballesteros05, hanjoe, Quantumplation, mentalAdventurer, Sebastian Grant, Reptak, kiocone, dfsdfs, cdevroe, nemalex, Somebody, Nates, Luan Pinheiro, drm, Misha Andreev, Cedric
And all hyprperks members!
Full Changelog :
v0.52.0...v0.52.1 -
đ r/reverseengineering GitHub - Karib0u/kernagent: AI-powered reverse-engineering copilot rss
submitted by /u/bzh_Karib0u
[link] [comments] -
đ r/wiesbaden Canadian Visiting for 1 week looking for people to hang out rss
Hi, I'm 29M coming to visit from Montreal for about a week. I speak English, French, Italian, and Mandarin, but unfortunately not German, as it was a last-minute trip and I didn't have a chance to really begin learning any German yet.

I'll be landing tomorrow; I'm visiting a friend of mine who works in Wiesbaden. I'm renting a car, so travelling around isn't an issue.

Just simply looking for someone or some people to go do fun things with and keep me company this week, as my friend works full time! I'm planning on visiting a few small towns near Wiesbaden and I'm open for anything!
submitted by /u/AffectionateButthole
[link] [comments] -
đ Jessitron What is special about MCP? rss
three things MCP can do, and an infinite number of things it can't do (all of which make it great)
AI agents can interact with the world using tools. Those tools can be generic or specific.
Generic
Run a bash command
Operate a web browser
Execute a SQL query
Specific
See my Google Calendar events
List my tasks in Asana
Send an email
The most general ones, like "run a bash command" and "read and write files", are built into the agent. More specific ones are provided through Model Context Protocol (MCP) servers.
Every tool provided to the agent comes with instructions sent as part of the context. Each MCP server the user configures clogs up the context with instructions and tool definitions, whether the agent needs them for this conversation or not.
If the agent can run a bash command, it can write a curl command or a script to call an API. Why use an MCP server instead?
For remote MCP servers operated by SaaS providers, there are some great reasons.
Remote MCPs provide three unique abilities.
- Authentication. Authorize an MCP server once to act as you, and then take many actions, each properly attributed. OAuth is hard and you can't do it with curl. (OK, it's more than once, it's "every time it loses the connection". This feels like every day, but maybe the agents will get better at renewing auth.)
- Specialized interface. A software API is optimized to talk to code. If it responds with JSON, that is verbose and uses a ton of tokens. An MCP response can summarize the results in text. It can intersperse that with CSV and even ASCII art! It's more efficient and effective in communicating with an LLM.
- Change. MCPs don't have to be consistent from day to day, since every conversation is new. The creators of an MCP server can work on that response and make it more effective, changing its format at need. They can add tools, change tools, and even remove tools that aren't used enough. Try doing that in a software API! It'd break every program that uses it. MCPs can iterate, and rapid iteration is a superpower that AI gives us.
If you want to teach your agent to do something that doesn't require authentication (like reading a web site), then by all means, let it use the tools it already knows. It can get a long way with `curl` and `jq`. Why dilute its world with more instructions when it already knows so much?

It can already
Call known APIs with simple auth
Dig around in a SQL database
Operate a web page with a playwright script
MCPs let it
Read Figma designs and get just what it needs
Read and update your Google Calendar
Look at graphs and traces in ASCII
MCPs don't let the agent do anything else.

While "run a bash command" covers most things you want it to do, it also covers everything you don't want an agent to do. The agent can screw with your configuration, write private data out to a public repository, and use your credentials to publish infected versions of your libraries. There is (relative) safety in specific tools. For instance, the agent's filesystem tools reject writes to files outside of the current project. (The agent then asks my permission to do that update in bash. I say no.)

Well-designed MCPs offer the operations that make sense. They're limited by your authorization as a user, and you can further limit their authorization when you connect or in your agent's configuration. We can be smarter about it.
Local MCP servers, which run on your computer, let you give blanket approval to specific operations. By doing less, they're better than bash.
Someday we will have nice things.
Currently, if I configure an MCP, it's available all the time to all agent threads. Most of the time, that's a waste of my context. I want to configure which subagents know about which MCP, so my "look at production" agent can see my observability platform, my UI-updating agent can see Figma, and my status update agent can see Asana. I also want agents to load MCP context incrementally, so that it doesn't get every tool definition until it asks to see them.

When MCPs don't hog context, they still won't often beat using the innate knowledge of the model. But when you are ready to curate the access that agents have to your SaaS or data, MCPs are fantastic.
-
đ navidrome/navidrome v0.58.5 release
This release focuses on stability improvements and bug fixes, with several important fixes for UI themes, translations, database operations, and scanner functionality. Notable improvements include fixes for ARM64 crashes, playlist sorting, and new Bosnian translation.
Added
-
UI Features:
- Add Genre column as optional field in playlist table view. (aff9c7120 by @deluan)
- Add new Bosnian translation. (#4399 by @MuxBH28)
-
Subsonic API:
-
Implement indexBasedQueue extension for better queue management. (#4244 by @kgarner7)
- Populate Folder field with user's accessible library IDs. (94d2696c8 by @deluan)
-
Insights:
Changed
- Scanner:
Fixed
-
UI:
- Resolve transparent dropdown background in Ligera theme. (#4665 by @deluan)
- Fix Ligera theme's RaPaginationActions contrast. (0bdd3e6f8 by @deluan)
- Fix color of MuiIconButton in Gruvbox Dark theme. (#4585 by @konstantin-morenko)
- Correct track ordering when sorting playlists by album. (#4657 by @deluan)
- Allow scrolling in play queue by adding delay. (#4562 by @pca006132)
- Fix Playlist Italian translation. (#4642 by @nagiqui)
- Update Galician, Dutch, Thai translations from POEditor. (#4416 by @deluan)
- Update Korean translation. (#4443 by @DDinghoya)
- Update Traditional Chinese translation. (#4454 by @york9675)
- Update Chinese simplified translation. (#4403 by @yanggqi)
- Update Deutsch, Galego, Italiano translations. (#4394 by @fuxii)
-
Scanner:
-
Restore basic tag extraction fallback mechanism for improved metadata parsing. (#4401 by @deluan)
-
Server:
-
Album statistics not updating after deleting missing files. (#4668 by @deluan)
- Qualify user id filter to avoid ambiguous column. (#4511 by @deluan)
- Enable multi-valued releasetype in smart playlists. (#4621 by @deluan)
- Handle UTF BOM in lyrics and playlist files. (#4637 by @deluan)
- Slice share content label by UTF-8 runes. (#4634 by @beer-psi)
- Update wazero dependency to resolve ARM64 SIGILL crash. (#4655 by @deluan)
-
Database:
-
Make playqueue position field an integer. (#4481 by @kgarner7)
-
Docker:
-
Use standalone wget instead of the busybox one. (#4473 by @daniele-athome)
New Contributors
- @konstantin-morenko made their first contribution in #4585
- @nagiqui made their first contribution in #4642
- @beer-psi made their first contribution in #4634
- @fuxii made their first contribution in #4394
- @daniele-athome made their first contribution in #4473
- @pca006132 made their first contribution in #4562
- @MuxBH28 made their first contribution in #4399
Full Changelog :
v0.58.0...v0.58.5Helping out
This release is only possible thanks to the support of some awesome people!
Want to be one of them?
You can sponsor, pay me a Ko- fi, or contribute with code.Where to go next?
-
-
đ sacha chua :: living an awesome life Drawing lunch notes rss
A+ goes to virtual school. She still wants me to pack a lunch for her every weekday, complete with a lunch note, so that she can "get the schoolkid experience." I started by drawing food and making food-related puns. Lately, she's been really into KPop Demon Hunters, so I've been drawing scenes from those:
Figure 1: Art class this afternoon
Figure 2: Ramyeon time, because she wanted ramen for lunch
Figure 3: Adding A+ to the hoodie scene because it was getting cold
Figure 4: Zoey with shrimp crackers, because I packed shrimp crackers for her lunch Drawing from a reference image is good practice anyhow. Doing it on a lunch note means I get the payoff of giggles during her lunch break. Not bad.
Following up on Doodling icons in a grid, it's a lot easier to make confident lines when I'm just darkening something I've lightly sketched in. The resistance provided by the pencil going over the card stock helps, too.
Looking forward to more practice next week!
You can e-mail me at sacha@sachachua.com.
-
đ r/LocalLLaMA How to build an AI computer (version 2.0) rss
submitted by /u/jacek2023
[link] [comments]
-
đ Register Spill Joy & Curiosity #61 rss
Here's a puzzle I'm wrestling with this week: I do my best work when it doesn't feel like work, when, instead, it feels like play. And yet my mind tells me, strains to tell me, that I must do work that feels like work in order to be productive. How do you solve a puzzle when you have all the pieces in your hand but you won't let yourself put them together?
-
This is the best thing I read this week: The Tinkerings of Robert Noyce, by Tom Wolfe. It's from 1983, published in Esquire; it's about, you guessed it, a guy named Robert Noyce, who was part of the Traitorous Eight, who co-invented the integrated circuit, who co-founded Fairchild Semiconductor and Intel (the other founder is Gordon Moore), who quite literally put the silicon into Silicon Valley, who came from the Midwest and became richer than a small god; it's about Silicon Valley and the West and the Midwest and how the West isn't the East. If you work in tech, if you work in a startup, or if you're even remotely interested in this industry we're in, you should read it. And in order to convince you to read it, let me tell you about a roommate I once had. He and I had been, separately, with a different cadence, watching Battlestar Galactica, the 2004 TV show. The premise of the show can be condensed as follows: humans create androids called Cylons, humans and Cylons go to war, Cylons launch a surprise attack on the humans with the help of a human traitor called Gaius Baltar, the surprise attack kills most humans and makes the human colonies uninhabitable, the surviving humans flee on the battleship Galactica, in search of a new Earth. Now, all of that -- the background explanation on the war, the attack, the battles following the attack, the reveal of the traitor, the escape of all remaining humans on a single ship -- is shown in the first two episodes of the show. But these two very crucial first episodes aren't part of the first season. They are, technically, a separate miniseries that was aired before the first season. They're not S01, but S00. Which is exactly why my roommate, who watched the show in the order the files appeared on disk, watched all 53 episodes of Battlestar Galactica without having watched the miniseries first, without knowing why they're even on the god damn battleship. Yes, he was very surprised when he got to "the end" and figured out who the traitor is.
Now, here's my point: The Tinkerings of Robert Noyce -- that's the two episodes I hadn't watched. And now that I have read it, all of it -- Silicon Valley, startups, tech, everything I have ever read about it -- makes a lot more sense.
-
Mary Rose Cook: "Dozens of new tests and four new techniques to carry into the future. Or, rather, to carry until they're superseded next week."
-
From the always fantastic James Somers, in The New Yorker: The Case That A.I. Is Thinking. Marvelous writing and it's all in here: the stochastic parrot argument, Hofstadter, Ted Chiang, Geoffrey Hinton, neuroscientists. I highly recommend you read it. (I actually listened to it, which is rare for me, and I was surprised by how good the production quality is and thought that the setting in which I listened -- a dark, very cold, November evening walk through an empty-seeming town -- was great for this.)
-
On the surface, this is about a homelab and infrastructure and, if we stretch it, it's about developer tools too, but I'd argue there's even more: a prison of my own making. I actually read this a few hours after I had woken up and found out that my UniFi controller no longer works and that I can't access my network's admin area anymore.
-
So while researching UniFi stuff, I came across this: UniFi Network Comparison Charts. Not particularly interesting, if you're not into UniFi gear, but doesn't this feel like a page out of a different era of the Internet? I think it's great.
-
"Terminal emulators face a fundamental challenge: mapping the vast breadth of Unicode scripts into a fixed-width grid while maintaining legibility. A terminal must predict whether each character occupies one cell or two, whether combining marks overlay previous characters, and how emoji sequences collapse into single glyphs. These predictions fail routinely." This is from State of Terminal Emulators in 2025, which is very interesting, especially the section on performance. If you want to get a glimpse and have never thought about width of Unicode characters, look through the author's wcwidth. And if you want to understand what's going on there, this is a good intro.
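The one-cell-or-two prediction can be sketched with Python's stdlib alone. This is only a rough proxy for a real wcwidth implementation (it ignores emoji sequences and plenty of edge cases), but it shows the shape of the problem:

```python
import unicodedata

def cells(ch: str) -> int:
    """Guess how many terminal cells a single character occupies."""
    if unicodedata.combining(ch):
        return 0  # combining marks overlay the previous character's cell
    if unicodedata.east_asian_width(ch) in ("W", "F"):
        return 2  # wide/fullwidth characters (CJK ideographs etc.) take two cells
    return 1

assert cells("a") == 1       # narrow ASCII
assert cells("漢") == 2      # wide CJK ideograph
assert cells("\u0301") == 0  # combining acute accent
```

Every terminal (and every program drawing into one) has to agree on these answers, or the grid falls apart; that disagreement is exactly where the routine failures come from.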
-
Thomas Ptacek: You Should Write An Agent. I mean: yeah.
-
Amazing: a "jelly slider" built with TypeGPU. As far as I understand it, the "jelly slider" was a joke someone made somewhere, but then someone else, of course, thought: I should build this. And here we are.
-
Your URL is your state. I love a clean URL and I love when I can copy a URL and it reconstructs the complete state.
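The pattern, sketched with Python's stdlib (the state fields here are made up for illustration):

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# Serialize the relevant UI state into the query string...
state = {"q": "pelican", "page": "3", "sort": "date"}
url = "https://example.com/search?" + urlencode(state)

# ...so that pasting the URL elsewhere reconstructs the exact same view.
restored = {k: v[0] for k, v in parse_qs(urlsplit(url).query).items()}
assert restored == state
```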
-
So, apparently there's a thermal printer hooked up to a Raspberry Pi that's connected to the Internet, and everybody can submit print jobs to it; a camera records and streams whenever the printer prints, and you can see all the wonderful little drawings it has printed in this gallery. Lovely.
-
Did you know that there's a framework for building TUI applications in Rust and it's called Ratatui? Isn't that the most amazing name? So if you did know, why didn't you tell me? What a name!
-
Talking about names: ever heard of Bending Spoons? That's the Italian startup that's acquired Evernote, Meetup, and, very recently, AOL. Bending Spoons itself is now valued at $11 billion. $11 billion! But the thing I couldn't believe while reading this company profile was the CEO's name: Luca Ferrari! What a name. I'm jealous.
-
This is a bit left field, but since I am fascinated by what's colloquially and often without the proper respect described as tech wear, this website by Nike about the jersey they made for Eliud Kipchoge was interesting. It's more than a pat on the back that Nike is giving themselves here -- more like two big hands massaging the shoulders, saying "well done, you, well done" -- but interesting, still.
-
A YouTube Education. This made me feel pretty dumb about how I use YouTube. Good stuff.
-
In case you've never used git bisect: you need to read this! Then go and try it. The first time you experience the power of binary search is magical. git bisect is magical. I use it a ton and I'll jump on any chance to use it. When others go "let me check out these 4 commits that could be the cause of the bug", I'll get out git bisect, even if it might take longer. It's so good, and it gave me a hope-to-do-this-before-my-time-here-ends wish I threw on the pile: I really, really want the chance to run git bisect in the automated way, where you give it a script and it goes and finds the commit on its own. That'd be something.
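The automated mode is git bisect run: you hand git a script whose exit code classifies each commit (0 = good, non-zero = bad), and it binary-searches to the first bad one. A self-contained toy demo (the repo, file, and script names are made up):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# Five commits; the "bug" (value >= 4) arrives in the fourth one.
for i in 1 2 3 4 5; do
  echo "$i" > value.txt
  git add value.txt
  git commit -qm "commit $i"
done

# The script git runs at each step: exit 0 means good, non-zero means bad.
cat > check.sh <<'EOF'
#!/bin/sh
test "$(cat value.txt)" -lt 4
EOF
chmod +x check.sh

# bad = HEAD, good = root commit; git does the binary search for us.
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"
git bisect run ./check.sh | tee bisect.log
git bisect reset
```

git checks out commits on its own, runs the script at each one, and ends by printing which commit is the first bad one.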
-
rands: Become the Consequence. "Welcome to Senior Leadership! You made it! There's no delegating this task, but it's not a task. It's a strategy, and you don't delegate strategy; you explain it loudly, repeatedly, and then you become The Consequence." It's possibly a bit too abstract to be useful if you haven't lived through the exact problems described here, but I found the example of how to increase the quality (reduce bugs) interesting and the whole thing is a good lens to look through.
-
A bit clickbaity, a bit shallow, but still thought-provoking, at least for me: Notes after listening to CEO of Vercel for 2.5 hours straight. What I got stuck on was #16: "Reveal complexity gradually: simple first, power later." It sounds right, doesn't it? And I think I agree, but then, arguably, two of the most successful and beloved developer tools of all time, Vim and Emacs, do the exact opposite, don't they? Or at least those are the first two I thought of. And it got me thinking: wait a second, both editors are great pieces of software, yes, but are they great products? I honestly don't know.
-
Talking about Emacs: How I am deeply integrating emacs. I read this, thinking: yeah, I had the same dream once too, but now it's dead and I can't believe I ever dreamt it. I do love reading about it though.
-
The 512KB Club, "a collection of performance-focused web pages from across the Internet". There's some real gems in that club. Lots of lovely, little, personal websites.
-
This made me want to create my own book wishlist.
-
Was reminded of Hillel Wayne's Are We Really Engineers? this week. In my head, this article is always in the background, always hovering somewhere when the word engineering is used. "Most people don't consider a website 'engineered'. However, and this is a big however, there's a much smaller gap between 'software development' and 'software engineering' than there is between 'electrician' and 'electrical engineer', or between 'trade' and 'engineering' in all other fields. Most people can go between 'software craft' and 'software engineering' without significant retraining. We are separated from engineering by circumstance, not by essence, and we can choose to bridge that gap at will."
-
Jean Yang, founder of Akita (acquired by Postman): Angel Investors, A Field Guide. I barely know anything about angel investing, except that if "angel investor" shows up on someone's bio it's likely they've recently made some money. So this was very interesting. And this bit has to be highlighted: "I was at dinner with my a16z investor Martin Casado when I told him I wanted investment from Kevin Durant. It was fall 2018, KD was playing for the Warriors, and he had won Finals MVP earlier that year. I was a KD fan and had heard he did tech investing. Martin said, 'How sure are you that you want him?' He sent one text to someone who happened to be walking into KD's house at that very moment. KD said congratulations and the following week I had Thirty-Five Ventures on my cap table."
-
21 Facts About Throwing Good Parties. I don't throw a lot of parties and neither do I wonder about how to throw a good party, but this was good.
-
The new Siri will use Gemini models, it seems.
-
"I have AiDHD. It has never been easier to build an MVP and in turn, it has never been harder to keep focus. When new features always feel like they're just a prompt away, feature creep feels like a never ending battle. Being disciplined is more important than ever."
-
The Best Way to Use AI for Learning. I started reading this without knowing that it's essentially a sales pitch for the app the author built, but still walked away with some ideas. I wish I was this structured when learning new things.
-
-
đ Simon Willison Reverse engineering Codex CLI to get GPT-5-Codex-Mini to draw me a pelican rss
OpenAI partially released a new model yesterday called GPT-5-Codex-Mini, which they describe as "a more compact and cost-efficient version of GPT-5-Codex". It's currently only available via their Codex CLI tool and VS Code extension, with proper API access "coming soon". I decided to use Codex to reverse engineer the Codex CLI tool and give me the ability to prompt the new model directly.
I made a video talking through my progress and demonstrating the final results.
- This is a little bit cheeky
- Codex CLI is written in Rust
- Iterating on the code
- Let's draw some pelicans
- Bonus: the --debug option
This is a little bit cheeky
OpenAI clearly don't intend for people to access this model directly just yet. It's available exclusively through Codex CLI which is a privileged application - it gets to access a special backend API endpoint that's not publicly documented, and it uses a special authentication mechanism that bills usage directly to the user's existing ChatGPT account.
I figured reverse-engineering that API directly would be somewhat impolite. But... Codex CLI is an open source project released under an Apache 2.0 license. How about upgrading that to let me run my own prompts through its existing API mechanisms instead?
This felt like a somewhat absurd loophole, and I couldn't resist trying it out and seeing what happened.
Codex CLI is written in Rust
The openai/codex repository contains the source code for the Codex CLI tool, which OpenAI rewrote in Rust just a few months ago.
I don't know much Rust at all.
I made my own clone on GitHub and checked it out locally:
git clone git@github.com:simonw/codex
cd codex
Then I fired up Codex itself (in dangerous mode, because I like living dangerously):
codex --dangerously-bypass-approvals-and-sandbox
And ran this prompt:
Figure out how to build the rust version of this tool and then build it
This worked. It churned away for a bit and figured out how to build itself. This is a useful starting point for a project like this - in figuring out the compile step the coding agent gets seeded with a little bit of relevant information about the project, and if it can compile that means it can later partially test the code it is writing while it works.
Once the compile had succeeded I fed it the design for the new feature I wanted:
Add a new sub-command to the Rust tool called "codex prompt"
codex prompt "prompt goes here" - this runs the given prompt directly against the OpenAI API that Codex uses, with the same code path and authentication credentials as the rest of Codex.
codex prompt -m <model_name> "prompt goes here" - same again but lets you set the model
codex prompt "prompt goes here" -s/--system "system prompt goes here" - runs with a custom system/developer message
codex prompt --models - this lists all available models that can be used with the "codex prompt" command
My design for this new sub-command is lifted directly from my own llm command.
Iterating on the code
Codex got to work. I've shared the full transcript here (using the tool I described here), but this TODO list it made itself is a useful summary of what it decided to do:
Updated Plan
✔ Inspect CLI structure and relevant core utilities for sending prompts and listing models
✔ Implement new `codex prompt` subcommand (CLI plumbing + command handler)
✔ Format, lint, and test (`just fmt`, `just fix -p codex-cli`, `cargo test -p codex-cli`)
I like that it figured out the justfile in the repo and decided to use it to run formatting and linting commands without me needing to tell it to. (Update: it turns out that was dictated by the AGENTS.md file.)
I tried running the first version of the code it wrote like this:
./target/debug/codex prompt 'Generate an SVG of a pelican riding a bicycle' -m gpt-5-codex-mini
... and it didn't quite work. I got this:
(reasoning summary) **Seeking
(reasoning summary) instructions
(reasoning summary) and
(reasoning summary) sandbox
(reasoning summary) info
(reasoning summary) **
(reasoning summary) **Dec
(reasoning summary) iding
(reasoning summary) on
(reasoning summary) SVG
(reasoning summary) creation
(reasoning summary) approach
(reasoning summary) **
(reasoning summary) **Checking
(reasoning summary) current
(reasoning summary) directory
(reasoning summary) **
(reasoning summary) **Preparing
(reasoning summary) to
(reasoning summary) check
(reasoning summary) current
(reasoning summary) directory
(reasoning summary) **
I'm ready to help - what would you like me to do next?
I'm ready to help - what would you like me to do next?
Token usage: total=2459 input=2374 cached_input=0 output=85 reasoning_output=64
Note that it DID think about SVG creation, but then decided it should look at the current directory. This isn't what I want - it appeared to be running in Codex's normal mode with a system prompt telling it to edit files on disk. I wanted it to respond to the prompt without acting as if it had a full workspace available to it.
I prompted it again, pasting in the broken output:
this did not seem to work - here is what happened - note that the reasoning summary comes out on a lot of lines when I would rather it streams out to stderr on the same line (flushing constantly) - and then despite the reasoning summary thinking about pelican on SVG it did not actually act on that and display the result, which is very surprising. [pasted content]
It churned for a bit more, but from the output it looked like it was still trying to include Codex-specific tools and workspace information. So I said:
Note that this new 'codex prompt' command should NOT be able to execute any tools at all, and should not have to think about the workspace or sandbox - it should send a single prompt with no tools and then stream and display a response and then stop.
This got me a new build, but when I ran it:
Error: unexpected status 400 Bad Request: {"detail":"Instructions are not valid"}My guess is that OpenAI's custom backend API for Codex doesn't like it if you attempt to call it without passing in the expected instructions. Fair enough!
Rather than paste that error in I got Codex to try the command itself, so it could see the error:
You try running this and see what happens: ./target/debug/codex prompt 'Generate an SVG of a pelican riding a bicycle'
Codex switched back to the default instructions. From its reply:
cli/src/prompt_cmd.rs#L1-L320 now forces a tool-free request: we keep the workspace defaults, inject a lightweight system/developer message ("don't run tools"), disable apply_patch/web search via config overrides, and build the outbound prompt as just that system span plus the user text.
I guess that will have to do. I asked for one more feature:
Add a --debug option to the codex prompt command which causes the full JSON request and response to be printed to stderr, plus the URL that is being accessed and the HTTP verb
... and we're ready to try this thing out!
Notably, I haven't written a single line of Rust myself here, and I paid almost no attention to what it was actually doing. My main contribution was to run the binary every now and then to see if it was doing what I needed yet.
I've pushed the working code to a prompt-subcommand branch in my repo if you want to take a look and see how it all works.
Let's draw some pelicans
With the final version of the code built, I drew some pelicans. Here's the full terminal transcript, but here are some highlights.
This is with the default GPT-5-Codex model:
./target/debug/codex prompt "Generate an SVG of a pelican riding a bicycle"
I pasted it into my tools.simonwillison.net/svg-render tool and got the following:

I ran it again for GPT-5:
./target/debug/codex prompt "Generate an SVG of a pelican riding a bicycle" -m gpt-5
And now the moment of truth... GPT-5 Codex Mini!
./target/debug/codex prompt "Generate an SVG of a pelican riding a bicycle" -m gpt-5-codex-mini
I don't think I'll be adding that one to my SVG drawing toolkit any time soon.
Bonus: the --debug option
I had Codex add a --debug option to help me see exactly what was going on.
./target/debug/codex prompt -m gpt-5-codex-mini "Generate an SVG of a pelican riding a bicycle" --debug
The output starts like this:
[codex prompt debug] POST https://chatgpt.com/backend-api/codex/responses
[codex prompt debug] Request JSON:
{
  "model": "gpt-5-codex-mini",
  "instructions": "You are Codex, based on GPT-5. You are running as a coding agent ...",
  "input": [
    {
      "type": "message",
      "role": "developer",
      "content": [
        {
          "type": "input_text",
          "text": "You are a helpful assistant. Respond directly to the user request without running tools or shell commands."
        }
      ]
    },
    {
      "type": "message",
      "role": "user",
      "content": [
        {
          "type": "input_text",
          "text": "Generate an SVG of a pelican riding a bicycle"
        }
      ]
    }
  ],
  "tools": [],
  "tool_choice": "auto",
  "parallel_tool_calls": false,
  "reasoning": { "summary": "auto" },
  "store": false,
  "stream": true,
  "include": [ "reasoning.encrypted_content" ],
  "prompt_cache_key": "019a66bf-3e2c-7412-b05e-db9b90bbad6e"
}
This reveals that OpenAI's private API endpoint for Codex CLI is
https://chatgpt.com/backend-api/codex/responses.
Also interesting is how the
"instructions" key (truncated above, full copy here) contains the default instructions, without which the API appears not to work - but it also shows that you can send a message with role="developer" in advance of your user prompt.
-
đ matklad Error ABI rss
Error ABI
Nov 9, 2025
A follow-up on the âstrongly typed error codesâ article.
One common argument about using algebraic data types for errors is that:
- Error information is only filled in when an error occurs,
- And errors happen rarely, on the cold path,
- Therefore, filling in the diagnostic information is essentially free, a zero cost abstraction.
This argument is not entirely correct. Naively composing errors out of ADTs does pessimize the happy path. Error objects recursively composed out of enums tend to be big, which inflates
size_of<Result<T, E>>, which pushes functions throughout the call stack to the "return large structs through memory" ABI. Error virality is key here: just a single large error on a however-rare code path leads to worse code everywhere.
That is the reason why mature error handling libraries hide the error behind a thin pointer, an approach pioneered in Rust by failure and deployed across the ecosystem in anyhow. But this requires a global allocator, which is also not entirely zero cost.
Choices
How would you even return a result? The default option is to treat
-> Result<T, E> as any other user-defined data type: it goes in registers if small, and in stack memory if large. As described above, this is suboptimal, as it spills small hot values to memory because of large cold errors.
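A quick way to see the inflation, with a made-up error enum (the exact byte counts are layout-dependent, but the inequality is the point):

```rust
use std::mem::size_of;

// A naively composed error type carrying inline diagnostic payloads.
#[allow(dead_code)]
enum ParseError {
    UnexpectedToken { line: u32, col: u32, context: [u8; 64] },
    Io(std::io::Error),
}

fn main() {
    // The large cold error inflates the whole Result, even on the happy path...
    assert!(size_of::<Result<u32, ParseError>>() > 64);
    // ...while hiding it behind a thin pointer (the failure/anyhow trick)
    // keeps the Result small.
    assert!(size_of::<Result<u32, Box<ParseError>>>() <= 2 * size_of::<usize>());
    println!(
        "inline: {} bytes, boxed: {} bytes",
        size_of::<Result<u32, ParseError>>(),
        size_of::<Result<u32, Box<ParseError>>>()
    );
}
```

Every function returning the inline variant pays for those 64 bytes of cold context on every call, error or not.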
-> Result<T, E> is exactly the same as that of T, except that a single register is reserved for E (this requires errors to be register-sized). On architectures with status flags, one can even signal the presence of an error via, e.g., the overflow flag.
-> Result<T, E> behaves exactly as -> T ABI-wise, with no error affordances whatsoever. Instead, when returning an error, rather than jumping to the return address, we look it up in a side table to find the corresponding error-recovery address, and jump to that. Stack unwinding!
The bold claim is that unwinding is the optimal thing to do! I don't know of a good set of reproducible benchmarks, but I find these two sources believable:
- https://joeduffyblog.com/2015/12/19/safe-native-code/#error-model
- https://youtu.be/LorcxyJ9zr4?si=HESn1LfHek5Qlfi0
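The "same model, different ABI" point can be sketched in user space: the visible API below returns an ordinary Result, but the error actually travels by unwinding (panic/catch_unwind standing in for a compiler-generated unwind path; all names here are made up):

```rust
use std::panic::{self, AssertUnwindSafe};

#[derive(Debug, PartialEq)]
struct ParseError(&'static str);

// Happy path: a plain return value, no Result in sight at this level.
// Cold path: leaves via unwinding instead of an error return.
fn parse_inner(s: &str) -> u32 {
    s.parse()
        .unwrap_or_else(|_| panic::panic_any(ParseError("not a number")))
}

// The visible API: a perfectly ordinary Result, reconstructed at the boundary.
fn parse(s: &str) -> Result<u32, ParseError> {
    panic::catch_unwind(AssertUnwindSafe(|| parse_inner(s)))
        .map_err(|e| *e.downcast::<ParseError>().expect("unexpected panic"))
}

fn main() {
    // Silence the default panic hook so the cold path is quiet.
    panic::set_hook(Box::new(|_| {}));
    assert_eq!(parse("42"), Ok(42));
    assert_eq!(parse("nope"), Err(ParseError("not a number")));
}
```

A compiler doing this for real would skip the panic machinery and emit the unwind tables directly, keeping the happy path's ABI identical to a plain -> u32.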
As with async, keep the visible programming model and the internal implementation details separate!
Result<T, E> can be implemented via stack unwinding, and exceptions can be implemented via checking the return value.
Conclusion
Your error ABI probably wants to be special, so the compiler needs to know about errors. If your language is exceptional in supporting flexible user-defined types and control flow, you probably want to special-case only in the backend, and otherwise use a plain user-defined type. If your language is at most medium in abstraction capabilities, it probably makes sense to make errors first-class in the surface semantics as well.
-