šŸ”


to read (pdf)

  1. I don't want your PRs anymore
  2. JitterDropper | OALABS Research
  3. DomainTools Investigations | DPRK Malware Modularity: Diversity and Functional Specialization
  4. EXHIB: A Benchmark for Realistic and Diverse Evaluation of Function Similarity in the Wild
  5. Neobrutalism components - Start making neobrutalism layouts today

  1. May 01, 2026
    1. šŸ”— r/reverseengineering Hello! Here is my Oura Ring 4 pure Python driver! Let me know what you think :) rss
    2. šŸ”— tomasz-tomczyk/crit v0.10.3 release

      What's Changed

      PR-scoped and commit-range review

      crit pr 1234 opens the PR diff scoped to the layer you're reviewing, with a stack popover for hopping between branches in the chain. crit --range A..B does the same for any commit range. The popover surfaces the default branch as a non-interactive root marker and lists every reachable tip with full-stack vs layer scope toggles, so reviewing a stacked diff feels like reviewing a single PR.

      Review-level comments

      General feedback that doesn't belong on a specific line now lives in a dedicated "Review conversation" section. Add, reply, edit, resolve - same threading model as line comments, but for the review as a whole. Drafts you're typing survive sibling state changes (adding a reply elsewhere no longer wipes your in-progress comment).

      Fixes

      Documentation

      • docs(integrations): mention markdown in replies; print URL on daemon start by @tomasz-tomczyk in #401 (Thanks @hbogaeus for the suggestion!)
      • docs: surface crit pr subcommand and --scope/--remote in printHelp by @tomasz-tomczyk in #412

      Internal refactors

      Full Changelog: v0.10.2...v0.10.3

    3. šŸ”— r/Yorkshire Yorkshire Dales✨I finally visited this place🄹 rss

      submitted by /u/No_Donut1433

    4. šŸ”— r/Leeds Has anybody had anything delivered by the Uber Eats robots? rss

      How was the experience?

      I see them all the time on my walk home, and I think if I got something delivered to my door by one of those, I'd never order from that place again. They're slow and get stuck regularly.

      submitted by /u/Empty-Sleep-9770

    5. šŸ”— r/wiesbaden Retro game fair in Mainz tomorrow rss

      Hi folks,

      a few of us arranged under another post to meet up tomorrow for the retro game fair in Mainz.

      Info:

      https://www.messen.de/de/22886/mainz/retrogamescon-mainz/info

      https://retrogamescon.de

      02.05 (Saturday)

      Doors open 11:00 – ends 16:00

      old Postlager building right by the main station

      We're meeting at 13:00 at the entrance.

      Anyone who feels like it is welcome to come along.

      submitted by /u/JohnTheMonkey2

    6. šŸ”— r/york The Shambles, York is well preserved! rss
    7. šŸ”— r/Leeds Alternative Real Ale Trail rss

      I did an alternative Real Ale Trail yesterday from Leeds, with four old Rugby mates to celebrate retirement. Cracking afternoon if you are looking for a different rail route.

      Train from Leeds to Saltaire (get a return to Gargrave) then:

      Saltaire - Salt Beer Factory

      Bingley - Engine Room

      Keighley - Boltmakers Arms

      Skipton - The Boat House Bar

      Gargrave - Masons Arms, then a curry at Bollywood Cottage before train back to Leeds.

      I've done the original https://realaletrail.uk/ before but this was a great low key alternative.

      submitted by /u/andrewbyrom

    8. šŸ”— r/wiesbaden 2€/L petrol is the new bargain price rss

      Sad, really, how suddenly everyone is queuing up to fill the tank at 2€/L. I hope it keeps going down a lot further…

      submitted by /u/Loko21784

    9. šŸ”— r/Harrogate Any plans for the long weekend? rss

      I am looking forward to visiting the reopened Shepherd and Dog at some point.

      Keen to get into Weetons for some quality meat to chuck on the bbq.

      Will also take a trip out to Leeds either Saturday or Sunday for a bite to eat and a bit of shopping.

      submitted by /u/Similar-Actuator-338

    10. šŸ”— r/wiesbaden Massive backups on the Schiersteiner Brücke rss

      Can anyone explain the frequently massive backup on the Schiersteiner Brücke? Yesterday around 18:30 the queue toward Mainz already stretched back to the town-limit sign leaving Wiesbaden.

      Is it just commuters, or is there construction work there?

      submitted by /u/Kaninchen2468

    11. šŸ”— r/york Vintage chess. rss

      Hi all,

      Vintage chess set collector here. I was wondering if you could recommend any shops or stalls in York that may stock vintage chess sets from time to time.

      Thanks in advance.

      submitted by /u/anthonychess82

    12. šŸ”— Bits About Money Notes on a non-profit indicted for bank fraud rss

      The financial industry understands itself to be an arm of the government. We were inducted into this service other-than-willingly through the ordinary operation of law and regulation.

      This is uncontroversial and unsurprising to insiders.

      A claim which will be more surprising: some regulated financial institutions have delegated authority for account- and transaction-level decisioning to a non-profit.

      Another: that non-profit includes a private intelligence agency, which runs covert assets, publishes intelligence estimates, develops target lists, and communicates them to decisionmakers.

      Still another: the non-profit organized a coalition of the willing as an outgrowth of its intelligence agency. The willing non-profits, that is. The coalition engaged in a years-long campaign to coerce financial infrastructure and other firms to give them the ability to direct accounts to be closed. The infrastructure built to do this against domestic terrorists was applied to an American politician's fundraising efforts, and no one seemed to think that was odd.

      Last week, the DOJ unsealed an indictment against the organizing non-profit for bank fraud. This was based, in part, on how it paid the intelligence agency's covert assets.

      They likely developed evidence for that indictment using the Bank Secrecy Act (BSA) mandatory reporting regime.

      We begin, as always, with the bank fraud.

      The strategic logic of bank fraud charges in white collar indictments

      White collar prosecutions are structurally difficult because they frequently depend on intent. It is difficult to prove intent beyond a reasonable doubt, as it frequently depends on subjective mental states which we cannot directly observe. This problem is discussed in the literature including in book-length treatments.

      There exist ways to overcome this difficulty as a prosecutor.

      The classic one is waiting for the criminal to violate Stringer Bell's dictum on the wisdom of taking notes on a criminal conspiracy. You then introduce their notes into evidence. They will frequently contain explicit statements demonstrating mens rea (a legal concept of a "guilty mind"). The register of those statements will be less guilty and more gleeful. Crime is awesome! Wow I sure hope the government never reads this! Because we are committing so much crime right now!

      The prosecutorial toolbox has other tricks, too. Rely less on charges which require demonstrating intent. Rely more on what the economics of law field calls bright-line rules. For those crimes, you do not need to demonstrate what emotional valence someone experienced while committing a criminal act. You only need to demonstrate the fact of the act.

      Interdicting crime is an iterated game. Responding to our noted inability to manage some forms of crime, legislators have intentionally added some items to the prosecutorial toolbox. Whether one describes them as tools or weapons depends mostly on whether one touches them with the hand or the face.

      As Bits about Money has covered frequently previously, the anti-money laundering (AML), Know Your Customer (KYC), and related regulatory edifices function in a subtle manner. They do not simply proscribe conduct and rely on perfect enforcement by the financial industry. To achieve the overall objective of stochastically interdicting crime, the regs are designed to force criminals into repeated unpalatable tradeoffs. One is "You can choose making money, or you can choose never interacting with banks, but it is very difficult to choose both."

      We then follow the criminal into the bank. "By the way, lying to a bank is a crime. It doesn't matter what you think while you're doing it. It doesn't matter why you did it. It doesn't matter if you're a sinner or a saint. It doesn't matter if it is a big lie or a little lie. It doesn't matter if the bank believes you. Lying to a bank is a crime. And everything you say to a bank will be recorded for decades. It will be routinely forwarded directly to law enforcement if the forward-deployed intelligence analysts we force the bank to hire believe there is even a tiny chance law enforcement will find it useful."

      Al Capone infamously went down for the tax evasion because it was easier to prove than the murders. Drug smuggling is sometimes difficult to prove, but the smugglers will want their money in the regulated financial system. The mandatory questionnaire at account opening will ask "Why are you requesting this account?" They will probably not write down "Drug smuggling!", because a wag who tries doing so will quickly realize this does not successfully result in a bank account. So they will write any other answer. Now they have lied to a bank.

      And then, in the ordinary practice of U.S. prosecutors, you will charge them with any crimes you can prove, including the lie itself. And if you are able to demonstrate that it was in fact a lie, which is easier to prove than e.g. rolling up the entire drug smuggling network, you will then make a simple legal request: give us all the money you lied about. That request will be more directed at the banks than the criminal, and the banks will comply, with alacrity.

      Some worked examples of this in white-collar prosecutions

      Sam Bankman-Fried: SBF continues to believe he is innocent. His argument is, effectively, that being the best investor of his generation excuses stealing the principal to invest. That is not a defense in U.S. law, and the indictment charges fundamentally the same conduct (misappropriating money and crypto that investors had on deposit at FTX) under a variety of statutory pathways, including 18 USC §1343 (wire fraud), §1344 (bank fraud), and §1956 (money laundering conspiracy).

      Let's focus on the bank fraud. One part of SBF's criminal empire needed banking in the United States. They could not convince a U.S. bank to let them handle FTX customer funds flows. But they wanted to do that. They incorporated North Dimension, a shell entity. Some shells have legitimate business purposes; this one existed only to deceive. North Dimension filled out a due diligence questionnaire. SBF signed it. It said North Dimension traded on its own account and did not handle customer funds.

      You need two bits of evidence to convict SBF of bank fraud. The first fits on one page of paper held by one bank. The second is the answer to a single question: "Did at least one dollar of customer funds flow into the North Dimension account?" Thousands of people at dozens of companies, and hundreds of thousands of electronic documents, know the answer to that question and you only need to find one. A single word convicts.

      Other charges can stack on the bank fraud. SBF deployed rationalizations about the core fraud counts like a squid deploys ink. Defeating all of them is unnecessary. A money laundering conspiracy requires showing a) agreement to move money that is the proceeds of at least one of a set of "specified unlawful activities" and b) any act to corrupt the integrity of the paper trail about that money movement.

      Bank fraud is a specified unlawful activity.

      And so you only need one more sentence to get the second charge. Many many sentences will do. Here's a question to elicit one: "Caroline Ellison, you are a cooperating co-conspirator. During the duration of your conspiracy, while you were the CEO of Alameda Research, did you at any time direct the North Dimension account to move money on behalf of Alameda Research, knowing that by this direction banking records would reflect the money movement to be directed by North Dimension and not by Alameda Research?" A single word convicts.

      SBF was, properly and justly, convicted of all of these crimes and more.

      He was not the first and he will not be the last. A brief survey for fellow aficionados of this genre:

      Dennis Hastert was accused of horrific acts against children but indicted for other crimes. He could not have been successfully prosecuted on the abuse even with a confession, because it was time-barred long before it came to light.

      Hastert paid one of his victims to keep quiet. He was smart enough to make the payment in cash, but the system was smarter sooner, and had a trivial tripwire for attempts to move large amounts of cash out of the regulated banking system: your bank files a currency transaction report (CTR). Apprised of this, he changed his banking patterns to avoid the filing of a CTR. That is called "structuring" and it is a crime under 31 USC §5324(a)(3).

      When the FBI asked him about it they used another frequent prosecutorial tool to pick up a freebie. They asked why he had changed his banking practices. He did not say "To structure transactions to avoid the bank filing a CTR when I pick up hush-money." Instead, he agreed with the agents suggesting that perhaps he distrusted banks and simply wanted to keep hundreds of thousands of dollars of cash on hand. This gave the prosecutors a second charge of lying to federal officials.

      In a routine practice, they agreed to dismiss it if he would simply confess to the structuring. The implicit Or Else: "We go to trial, convict you on both, and you get a longer sentence both because the charges stack and because the sentencing formula will mechanically penalize you for this choice."

      Reggie Fowler was the U.S.-based partner of a payment processor which ripped off Tether. It will still be years before we unravel that ouroboros of crime. But that didn't need to delay the indictment, because he lied to banks that he was doing real estate development. Trivially a lie; his hundreds of millions had bought neither land nor buildings. Our old friend §1344 brought his new bestie §1343 (wire fraud), not because he defrauded Bitfinex/Tether but because he had caused his ill-gotten bank accounts to move money. Every movement is another crime.

      George Santos was indicted for a variety of abuses of the public trust. Prosecutions of elected officials are inevitably tricky business, particularly as finding the precise line between crime and politics-as-usual is contentious. (Many people seem to think that line should move radically every, oh, four years.)

      Happily, we have bright lines which don't move, like "don't steal credit cards" (§1028A). You can convict on that without needing to explain whether it was done with a gun or a political donation portal.

      Then you stack on wire fraud again, because we intentionally make it hard to spend directly on oneself from politics-adjacent pools of money, and so you need to move money at least once to enjoy it.

      Why do these indictments, and hundreds more, rhyme so much? Why can we employ these charges to such devastating effect against the rich and powerful, even those in positions of public trust, even those with allies who still love them? Because we maintain textbooks of how to make these cases and make them stick.

      Criminal law textbooks published on the Internet

      White collar criminal cases are like any other high-end bespoke services work. One could imagine that the production function is fully artisanal, like something out of a traditional French restaurant.

      You, an aspiring lawyer, labor for years under the eye of a terrifying supervisor. He periodically steps behind you, rips a brief from your hands, screams at your incompetence, and you have learned one new thing. After twenty years, you now write your own indictments. They bear his distinctive stamp but your signature. You have added your own spin, which you will pass on to the rising generation, via the traditional mix of hazing and hands-on instruction.

      This happens. But in law, as in restaurants, we are allowed to write down recipes and tell people to just follow the steps.

      For example, we in the financial industry are obliged to file Suspicious Activity Reports (SARs). These are basically three-ish page memos. Combined with statutory tools such as those discussed above, these memos will giftwrap charges and convictions. They get saved by the Financial Crimes Enforcement Network (FinCEN) for decades and some small fraction of the four million filed every year will eventually be read by a public servant.

      The bank pays the screening vendor which fires the alert, the bank pays the intelligence officer who reviews it, the bank pays the senior compliance analyst to spend a few hours collecting data from various employees and web applications into a single coherent narrative. And then the public pays the prosecutor to copy/paste the SAR into an indictment. (Accept this as a slight exaggeration, but if you can't name a paragraph lifted from a SAR into a federal criminal indictment, you will be able to in about five minutes.)

      FFIEC BSA/AML Examination Manual:

      One purpose of filing SARs is to identify violations or potential violations of law to the appropriate law enforcement authorities for criminal investigation.

      …

      Examples of agencies to which a SAR or the information contained therein could be provided include:

      • the criminal investigative services of the armed forces
      • the Bureau of Alcohol, Tobacco, and Firearms
      • an attorney general, district attorney, or state's attorney at the state or local level
      • the Drug Enforcement Administration
      • the Federal Bureau of Investigation
      • the Internal Revenue Service or tax enforcement agencies at the state level
      • the Office of Foreign Assets Control
      • a state or local police department
      • a United States Attorney's Office
      • Immigration and Customs Enforcement
      • the U.S. Postal Inspection Service
      • the U.S. Secret Service

      A diligent public servant is welcome to use FinCEN's database to get additional information on the subject of an existing investigation. But FinCEN will happily tell you the other sequencing works fine: trawl the database "proactively." Most SARs are not evidence of crime! But if you have the choice between doomscrolling Twitter and doomscrolling the SAR database, one is much more efficient at converting into prosecutions. Their phrase for this is proactive SAR review and they have twenty volumes more if you are interested.

      Very few Americans not professionally implicated in this surveillance regime understand it exists. FinCEN employees and bank compliance officers depend on it for their continued employment, and so it might be understandable why they are such effusive fans. But the regime does have informed critics, including occasionally this author.

      One critique is that this regime is functionally an end-run around the Fourth Amendment. Civil libertarians have made this point for decades, but never with such economy of phrase as the U.S. Immigration and Customs Enforcement (ICE) internal magazine Cornerstone's article The Currency Transaction Report: Controversial To Some--Essential To All.

      Why is the CTR so useful to law enforcement, ICE?

      ICE: ICE special agents utilize CTRs to establish links between individuals and businesses, and to identify co-conspirators and potential witnesses. This information is often utilized to meet the 'probable cause' requirement necessary to obtain search, arrest and seizure warrants.

      Is this surveillance regime narrowly tailored?

      ICE: ICE conducts approximately 1 million record checks of BSA data each year.

      If a libertarian were scripting you right now they'd ask you to say that innocents have nothing to hide.

      ICE: Individuals and businesses conducting legitimate transactions have no reason to avoid the filing of CTRs.

      Yikes. Say, did you ever articulate the intentional double-bind twenty years before Bits about Money did?

      ICE: However, criminals are forced to make a choice between appearing to be a legitimate customer, thereby exposing their assets and money movements through BSA reporting requirements, or engaging in risky, illegal actions to conceal the movement of their funds.

      Wow, it seems like this field is filled with carefully laid traps that function exactly as designed. Did you by chance happen to publish the Hastert prosecutorial strategy ten years early?

      ICE: Suspicious attempts to avoid the filing of a CTR by structuring cash deposits (making a series of deposits just under the $10,000 reporting threshold over a number of days) is a significant red-flag indicator of criminal activity and one of the most frequent triggers for the filing of a SAR.
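
      The pattern ICE describes is concrete enough to sketch in code. Here is a minimal illustration in Python; the band, window, and hit count are invented for the example and are not any institution's actual policy:

      ```python
      from datetime import timedelta

      CTR_THRESHOLD = 10_000      # CTR reporting threshold ($10,000)
      NEAR_BAND_FLOOR = 9_000     # "just under": illustrative, not policy
      WINDOW = timedelta(days=7)  # "over a number of days": illustrative
      MIN_HITS = 3

      def flag_possible_structuring(deposits):
          """deposits: time-sorted list of (datetime, amount) cash deposits
          for one account. Returns clusters fitting the red-flag pattern."""
          near_misses = [(t, a) for t, a in deposits
                         if NEAR_BAND_FLOOR <= a < CTR_THRESHOLD]
          alerts = []
          for i, (start, _) in enumerate(near_misses):
              cluster = [d for d in near_misses[i:] if d[0] - start <= WINDOW]
              if len(cluster) >= MIN_HITS:
                  alerts.append(cluster)
          return alerts
      ```

      A hit from a rule like this is a red flag feeding a SAR decision, not proof of a crime; most such alerts are false positives, which is why the analysts exist.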

      Which brings us to the Southern Poverty Law Center (SPLC).

      A textbook prosecution of bank fraud in many respects

      On April 21st, 2026, the Department of Justice unsealed an indictment of the SPLC for bank fraud.

      The SPLC is a storied civil rights organization. Like many non-profits, it runs a portfolio of what are sometimes called "programs" under a single roof. One of those programs is producing a data product listing individuals and entities that it considers to be involved in hate and anti-government activities.

      That data product is important financial infrastructure, and we will return to it in a moment.

      The SPLC runs a private intelligence service to produce it. The SPLC has in the past paid informants, who it describes as "field sources." Those informants are generally members of what it describes as domestic terror organizations. The existence of this program has been public knowledge for decades.

      It is unlikely that any magistrate in the United States would approve a warrant to search the bluest-of-blue-chip civil rights organization's papers on the suspicion that they have created a fictitious CIA to launder money to the wife of an Exalted Cyclops of the Ku Klux Klan. Are you not aware, officer, that the reason this organization is in high school history texts is they developed a novel civil litigation strategy to bankrupt the Ku Klux Klan? You will not get your warrant. You would be lucky to escape court without a citation for contempt or an order for psychiatric commitment.

      Well, good thing nobody ever had to ask for that warrant.

      Banks don't need warrants to become quite alarmed when they discover that they have created an account for the Center Investigative Agency and several other sole proprietorships for the same person… and those businesses don't receive revenue, run payroll, buy office supplies on their debit card, or rent office space. No, the only thing they do is take large deposits then transfer out hundreds of thousands of dollars directly to, Great Scott, the worst people imaginable.

      Substantially every employee of the financial industry, CEO or teller or product marketing manager that they may be, is obligated to attend a yearly training on their BSA compliance responsibilities. That training customarily requires you to pass a test. If that test stipulated this scenario and then asked what the financial institution must do next, there is only one correct answer: Conduct an investigation, close the accounts at issue with very high probability, and file a Suspicious Activity Report.

      We return from this flight of fancy to the indictment. Excerpting verbatim:

      Starting in the 1980s, the SPLC began operating a covert network of informants who were either associated with violent extremist groups, such as the Ku Klux Klan, or who had infiltrated violent extremist groups at the SPLC's direction.

      If one does not closely follow this community of practice, one could be forgiven for doubting whether prosecutors are being candid here. This claim does sound farfetched. The indictment, in this paragraph, is neutrally recounting the truth. The SPLC is proud of that program, which it ran for decades. NPR's gloss:

      The indictment came shortly after the SPLC revealed the existence of a criminal investigation into its disbanded informant program to gather intelligence on extremist group activities.

      Well, OK, they ran an intelligence agency. One can construct a narrative by which that makes some tactical sense. Sure.

      How did they get a bank to go along with making payments to people the SPLC has spent decades attempting to make it impossible to pay? Did they perhaps… lie to a bank?

      Indictment:

      To secretly funnel donated money to the Fs, individuals at the SPLC, including a person who would become the Chief Financial Officer ("Employee-1") and a person who would become the Director of the Intelligence Project ("Employee-2"), among others, opened a series of bank accounts at Bank-1 and Bank-2 in the name of various fictitious entities, including, but not limited to, the following: Center Investigative Agency ("CIA"), Fox Photography, North West Technologies ("North West Tech"), Tech Writers Group ("Tech Writers"), and Rare Books Warehouse ("Rare Books").

      Oh dear, SPLC! It would be extremely bad for you if you had in fact opened accounts for businesses which do not actually exist, then used them to move funds! Perhaps you can just pray that the feds never find out? … The bank is quite likely going to find out, though. Some bank accounts have red flags. These red flags have bank accounts.

      Indictment:

      In 2020, Bank-1 conducted an internal investigation into these accounts.

      Oh that's… unsurprising given the asserted facts. Well, your options are diminishing rapidly at this point.

      Hey quick intermission: want a surprisingly reliable way to combat credit card fraudsters, drug dealers, and the like? First, you identify one of their accounts which is definitely committing crime. Usually they have lots of these and cycle through them quickly. They are often opened with synthetic or stolen identities. Burning the identity doesn't get you all that much; they have thousands to cycle through. So just freeze the money in the account. Then, rely on human nature: nobody likes giving up "their" money. So compassionately offer to help them out, by offering to transfer the money from the frozen account to another account they control. We just need your quick written instruction to send your money to your other account, sir.

      Indictment:

      Thereafter, an SPLC employee requested that Bank-1 close the accounts associated with the CIA, Fox Photography, North West Tech, and Tech Writers and transfer the remaining balances in these accounts to a Bank-1 account ending in 6050 held in the name of the SPLC.

      Industry practice varies on whether you give the user their money back before filing the SAR.

      There are some grey areas in practice. You can't return the money if you understand the user to be e.g. Hamas. You might be able to return the money if you understand the user to be e.g. engaged in unsupportable but debatably legal behavior.

      "Unsupportable" here is a term of art: the institution, in its considered judgement, cannot allow it to happen on systems it controls. Many legal acts are unsupportable, and a determination of supportability is not and cannot be coextensive with a criminal conviction. Compliance officers are not federal judges and are happy to defer to them.

      Please, we beg you, do not ask Compliance to run a parallel criminal justice system. We will do it if you force us to, but you will not like the outcome.

      One of the functions of getting an explanation in writing from the SPLC (we will get to it; it is a doozy) is that the financial institution seeks to absolve itself. Did we open accounts for cutouts to a domestic terrorism organization? If we did, *#%(, our regulators need to hear about that today. But this admission can be shared with a later regulator to say "We were unaware of the actual ownership of the accounts when opened and for ten years of use, which we agree is bad. We then executed our responsibilities with urgency. On the strength of this communication from the SPLC, which is not a terror organization, we decided to not immediately call you, and instead relied on the ordinary processes of our Compliance function. We will listen attentively if you feel we were ever derelict in our duties."

      One of those duties: a financial institution must, as a matter of black letter law (31 CFR § 1020.320), file a SAR if its investigation discovers a transaction designed to obscure the provenance of money. Transactions, by their nature, reference the account title (ownership, which could be by, e.g., a company or trust) and beneficial ownership information (the ultimate people who have economic interest in the account). Any transaction conducted by an intentionally mistitled account is immediately and mandatorily reportable as soon as the financial institution has knowledge of this fact.

      Alright, options for the SPLC are narrowing precipitously, but perhaps it can argue that those two employees, senior though they might be, were acting rogue? Or perhaps they could argue that the SPLC was institutionally unaware of the specific financial infrastructure its employees had created to support the SPLC's intelligence program?

      Indictment, quoting the President and Chief Executive Officer of the SPLC, to the bank:

      Pursuant to the discussion we had earlier this week, please let this correspondence serve as confirmation that the accounts listed below were opened for the benefit of Southern Poverty Law Center operations and operated under the Center's authority. The following accounts are listed below:

      ...6700 Center Investigative Agency -- opened 1/31/2008, closed 8/5/2020

      ...9674 Fox Photography -- opened 1/31/2008, closed 8/5/2020

      ...6743 North West Technologies -- opened 1/31/2008, closed 8/5/2020

      ...6751 Tech Writers Group -- opened 1/31/2008, closed 8/5/2020

      ...6719 Imagery Ink -- opened 1/31/2008, closed 3/15/2013

      ...6727 J&J Electronics -- opened 1/31/2008, closed 3/15/2013

      ...6735 Kelly's Marine -- opened 1/31/2008, closed 3/15/2013

      There are a variety of ways for the DOJ to get the CEO's email. It may have been attached to a SAR, and therefore filed automatically with FinCEN. The other way, of course, is to pivot from a SAR (or any other reason to open an investigation) to a request that the bank produce records. Subpoenas are not strictly required; that document exists to exonerate the bank. A financial institution, concerned it is falling under negative government attention, might proactively offer to share what they know.

      In any event, the feds got what they needed.

      This written communication is a succinct confession to bank fraud.

      There are multiple different ways to charge it, as we have seen. The indictment went with §1014. And if the SPLC admitted to bank fraud, then the transfers are wire fraud. And if the transfers were wire fraud, then the… you've seen this movie before and it ends predictably.

      I do not expect this conclusion to be a happy one for all readers. I believe it to be correct.

      There exist lawyers who say that the legal analysis in the indictment is sloppy. That statute is a weapon. Weapons wielded sloppily hit the target all the time. A weapon that only works when wielded perfectly is poorly designed.

      Some commentators have implied, for example, that §1014 only applies to applications for loans. Excerpting the statute:

      Whoever knowingly makes any false statement to … any [FDIC-insured institution]... upon any application… shall be fined not more than $1,000,000 or imprisoned not more than 30 years, or both.

      This is extraordinarily broadly drafted, by design. The long list of alternatives to "application" includes "loan", and as a basic principle of statutory construction, this means that Congress considered limiting the list to only loan applications and then intentionally did not do that.

      In Wells, the defendant sold something of value to a bank, rather than borrowing money from them. (Turns out copiers print money, in a sense; you can sell the future revenue stream.) The lie was a relatively tiny detail relevant to the pricing discussion. Wells was prosecuted under §1014. The controversy was not "Does §1014 allow prosecution outside of loans?" Yes, read the plain text of the statute. But the holding is as interesting as the data point. Can you be convicted of fraud over a tiny lie? The Court held there is no materiality requirement under §1014. You can be convicted of fraud for a lie that doesn't matter if you tried to influence any decision of a bank.

      Some have advanced the notion that the account application is misleading but not false. This matters due to Thompson, decided last year by the Supreme Court, which holds that §1014 doesn't cover misleading but true statements. Consider what happens when the prosecutor summons a senior SPLC executive to the stand and says: "So, Fox Photography, which you ran as a sole proprietorship. Did you buy a camera? Did you advertise? Did you file for a DBA? Did you make a website? Does Fox Photography have any activities other than this bank account application? You had three other businesses. Which of them did anything other than obtaining banking services?"

      Some believe, plausibly, that the prosecution is politically motivated. Others might counter that the SPLC is the nation's leading expert in lawfare and has just discovered sauce for the gander. Still others might believe both claims. Or they might believe "This looks like retaliation for the SPLC coordinating a coalition to interfere with Trump political fundraising," which is not the way coalition participants say the SPLC gained his enmity [archive]. We will return, at length, to the activities of a coalition the SPLC co-founded.

      Many commentators have argued that this program has been discontinued. Yes, bank fraud will frequently cease after its discovery. That is definitely a goal of this apparatus, and is almost definitionally true. Almost all white-collar prosecutions will happen after the conduct giving rise to them has ceased. The financial industry would certainly be chagrined to learn about a live fraud happening on its rails from the indictment. (That does happen; we have processes to detect it happening and then immediately investigate accounts associated with entities that were just indicted. We will discuss how data products and screening infrastructure function in substantial detail below.)

      The industry as an institution expects its supervisors in government to bring these cases, all the time, against targets that have many friends, positions of authority, extremely competent defense lawyers, and sincere belief that they are innocent of any real crime. The government expects, as an institution, to be overwhelmingly advantaged in these cases.

      Many commentators, including the government itself, have made this indictment mostly about the fraud against donors. Many believe that argument to be a stretch. I agree, unreservedly. As a connoisseur of this genre, I have read few documents which are simultaneously so far from the conventions while adding so little new to the canon.

      It is a stretch that the government routinely makes and wins in other contexts. Matt Levine has collected several hundred examples of the genre, which he calls Everything Is Securities Fraud. That genre is, succinctly, "If you run a for-profit corporation, and have raised money from outside investors, and anything at all goes badly, and you did not describe exactly that thing to the investors as a risk, you have arguably defrauded your investors." The government is comfortable making that argument and wins it routinely.

      Perhaps Everything Is Donor Fraud. Perhaps not.

      But, again, the design of this system is so you don't have to prove the hard crime, the one where you're being creative and taking some risks and pushing the envelope. You only have to prove the easy one, in exactly the same way hundreds of cases have won before. You will then use the spectre of conviction for (minimally) that as procedural leverage, and your target will likely settle.

      Absolutely textbook.

      Data products and mechanistic decisioning

      A brief break from the SPLC's situation. We'll return in a moment. I had promised you a discussion about how the financial industry uses certain data products, including one published by the SPLC. We will begin with the canonical example of a data product.

      As BAM has noted in discussing so-called debanking, the United States does not maintain a secret blacklist of people who can't gain bank accounts. It maintains a public one.

      Regulated financial institutions must deterministically reach the correct conclusion about accounts or transactions where they benefit certain people and organizations. That blacklist is called the Office of Foreign Assets Control (OFAC) list of Specially Designated Nationals (SDN). In broad strokes, this is a blacklist for foreign terrorists and narcotraffickers. (Banks aren't the only people who can't transact with the OFAC list. Reader, if you are an American, you can't either, under penalty of law. But enforcement action is concentrated against banks et al because they are a, how might one phrase this, choke point for money movement.)

      In theory, every time a bank opens a student checking account, it can have a bank employee mosey on over to the OFAC website, search the list in real time, and then determine that the prospective customer, yep, isn't on the OFAC list. This is quite impractical and unlikely to be considered an acceptable set of controls by a regulator unless it is the smallest-of-small-town community banks.

      You could write your own software to periodically download the list (yep, we just publish the files) from OFAC, and then compare new accounts and in-progress transactions against your recently-synced copy in your database. Most financial institutions do not choose to do this. It is fiddly, extremely high downside if you get it wrong, and has zero financial upside if you do a better job than "minimally adequate." Also, you have to do it many times redundantly across hundreds of functions of your financial institution. Checks, wire transfers, accounts payable, even your employee giving program! This set of considerations spells "outsource this function."
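
      For concreteness, the DIY version looks roughly like this. A minimal sketch in Python; the URL and column layout reflect OFAC's published flat files as I understand them, but verify both against the current official file spec, and note that real screening uses fuzzy and alias matching rather than the exact match shown:

      ```python
      import csv, io, urllib.request

      # Illustrative location of OFAC's SDN flat file; confirm the current one.
      SDN_CSV_URL = "https://www.treasury.gov/ofac/downloads/sdn.csv"

      def fetch_sdn_names():
          raw = urllib.request.urlopen(SDN_CSV_URL).read().decode("latin-1")
          # In SDN.CSV the second column is the listed party's name.
          return {row[1].strip().upper()
                  for row in csv.reader(io.StringIO(raw)) if len(row) > 1}

      def screen(customer_name, sdn_names):
          # Exact match only, to keep the sketch honest about its limits:
          # it misses aliases, transliterations, and near-miss spellings.
          return customer_name.strip().upper() in sdn_names
      ```

      Now multiply that by every product surface that moves money, keep it synced forever, and prove to an examiner that it never silently broke. Hence the vendors.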

      The jargon for the function is "OFAC screening" and the company or companies which the financial institution engages to handle this are selling "data products." You will work with your vendors and your internal IT teams to integrate those data products (which might be APIs, or platforms, or similar) with your other IT systems.

      Then you turn everything on. Presto! You get alerts sent to Compliance if someone appears to be OFAC-listed. One of the large team of intelligence analysts you are forced to hire will be instructed to click through alerts as they stream by. The interface frequently resembles a Twitter feed from the most boring possible circle of hell.

      Your analyst will tell the system to ignore the false positives (extremely common relative to true positives, but you basically have to look at every one) and action the true positives. "Action" here means close the account or block the transaction. A close synonym is "decision" the account.

      If you're technically sophisticated, you can probably configure your screening vendor to pass alerts off to some combination of heuristics, machine learning models, and other AI techniques before they are sent to a human. Alert fatigue is real and dangerous. You can decrease it by automatically decisioning low-risk accounts/transactions, based on criteria acceptable to your regulator, which you will write in your policies. Perhaps you have recorded that you have a U.S. passport or other evidence of citizenship on file for the account holder and therefore it is vanishingly unlikely they are the SDN whose citizenship is Farawayistan even if the names look similar. You might reasonably argue that, for retail accounts that are incapable of moving large amounts of money, that is good enough.
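
      A minimal sketch of what that pre-human triage can look like, with hypothetical field names and thresholds; the real criteria live in your written, regulator-reviewed policies:

      ```python
      def auto_decision(alert: dict) -> str:
          """Return 'auto_clear' or 'human_review' for a screening alert.
          Every field consulted here is an assumption for illustration."""
          if (alert["customer_citizenship_evidence"] == "US_PASSPORT"
                  and alert["listed_party_citizenship"] != "US"
                  and alert["name_match_score"] < 0.95   # similar name only
                  and alert["account_type"] == "RETAIL"
                  and alert["account_limit_usd"] <= 10_000):
              return "auto_clear"   # logged and auditable, per policy
          return "human_review"
      ```

      The important property is not cleverness; it is that every auto-cleared alert maps to a criterion your regulator has already read and blessed.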

      A bit of engineering and Compliance jargon: this architecture is a pipeline. An alert enters the pipeline from your screening vendor. It goes through some automated decisioning and routing to end up in a particular queue for a particular team of analysts. They decision the alerts which will, in some cases, be the end of it. In other cases, this will result in a new type of entry in a new pipeline, perhaps to effect the series of actions one must take to offboard an account. Pipelines are serviced by a mix of technical and human systems, and governed by both computer code and process documents written in English. Both the code and the documents are subject to review from your regulators. (They are far more likely to read the documents than the code, but they have essentially carte blanche to ask Compliance for anything they want that describes e.g. the OFAC pipeline, and Compliance will probably not push back very hard. Keeping a positive relationship with regulators is a very large portion of their job.)
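
      In code terms, the routing step of such a pipeline can be as plain as a lookup table. This sketch is illustrative, with invented queue names:

      ```python
      from dataclasses import dataclass

      @dataclass
      class Alert:
          source: str     # e.g. "ofac_screening", "adverse_news"
          severity: str   # assigned upstream by vendor rules or models

      ROUTES = {
          ("ofac_screening", "high"): "sanctions_team",
          ("ofac_screening", "low"):  "tier1_review",
          ("adverse_news", "high"):   "enhanced_due_diligence",
      }

      def route(alert: Alert) -> str:
          # The table, like the process doc it mirrors, is a reviewable artifact.
          return ROUTES.get((alert.source, alert.severity), "tier1_review")
      ```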

      The OFAC list is the canonical data product, but your screening vendor really wants to sell you several. Can you charge for a list the government makes freely available? Absolutely! Because the screening vendor isn't simply charging for the list. They are charging for a complex technical and human system around the list.

      One factor among many: you expect the list to change and you want them to be in charge of making sure you always have a very recent version. You also want them primarily in charge of understanding the e.g. regulatory environment around their data products. If one of the products goes from advisory to mandatory, you want to learn that from your vendor before you learn that from a pissed-off examiner wondering why you didn't read the bulletin two years ago.

      Some data products have very different characteristics than the OFAC list.

      One example, alluded to above, is repackaging criminal indictments into a screening list. You might bank Bob's Autos. If Beneficial Owner Bob is indicted as a money launderer for the mob, you want to know that very quickly so that no one e.g. drives off with the balances in Bob's Autos' accounts.

      But you don't have to close an account if someone is indicted, not like you have to close the account if they're added to the OFAC list. It's a judgement call, and you'll have described your decisionmaking process for it in internal procedures documents, and your regulator will have blessed them. So one of your intelligence analysts gets the tweet-sized version of the indictment from the pipeline, reads "Misdemeanor assault", and probably decides "Bob's in trouble, certainly, but still supportable." Or they read "Felony bank fraud" and that analyst very likely kicks off an internal investigation. Or, and again this is the dominant case, Close As False Positive. Turns out there are a lot of Bobs in the world who own car dealerships; that Bob was not our Bob.

      Another data product is so-called "adverse news" screening. This one is not an extension of state power like the OFAC or prosecutorial lists, not directly. You have much more discretion on whether you buy it than you have on OFAC screening. But your screening provider might have gone to the trouble of licensing wire service articles or newspaper feeds or the Twitter firehose or similar. They repackage it and match e.g. mentions of colorful local businessmen (a classic newsroom euphemism for "mob, but we can't prove it and he has lawyers ready to sue for defamation") to your accountholders. If a colorful local businessman is reportedly on the lam and feared to have left the country, and then he asks for an international wire transfer, you probably don't want to simply process it.

      And now the data product you've been waiting for: the SPLC Extremist Files. Like the OFAC list, it's available for free on their website, but there do exist screening providers which will happily charge you for it. Part of that work is scraping; part is e.g. matching names to charity EINs. Your screening vendor will happily tell you, though, that the data product they're selling you is really SPLC's considered judgement, packaged in a way that makes it easy to include into your pipelines.
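
      The matching half of that work looks, in caricature, like this; the record fields are hypothetical, and real entity resolution is far more involved:

      ```python
      import re

      def normalize(name: str) -> str:
          # Crude canonicalization; vendors do much more than this.
          return re.sub(r"[^a-z0-9 ]", "", name.lower()).strip()

      def match_listed_entity(account: dict, listed: dict):
          """Strong match on EIN; bare name equality is only a candidate
          for analyst review, since most name hits are false positives."""
          if account.get("ein") and account["ein"] == listed.get("ein"):
              return "strong_match"
          if normalize(account["name"]) == normalize(listed["name"]):
              return "candidate_match"
          return None
      ```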

      Why would you buy this data product? In part, it is because the financial industry broadly considers the SPLC an extraordinarily trustworthy non-profit. It is widely believed that if they say you're a Nazi, you're a Nazi, and we don't want to do business with Nazis. Financial institutions, like other firms in capitalism, have broad discretion (with some specifically enumerated exceptions) in choosing who they do business with.

      An aside to conservative and progressive critics of the SPLC: yes, I know, they are not as selective, restrained, and expert as their reputation suggests. But please accept for the moment that the financial industry understands this less well than you do.

      One citation for the industry broadly considering the SPLC reliable and being aligned with their views on the good: JPMorgan Chase, the largest bank in the U.S., practically a metonym for conservative-as-a-banker, gave them $500,000 specifically to "work in tracking, exposing and fighting hate groups and other extremist organizations."

      If you were to have a thousand conversations in the financial industry about non-criminal clients you don't want to do business with, you would hear the SPLC cited more than any other group or data product.

      Some of the most established screening providers do not carry the SPLC data product, though they have data products which compete with it. The SPLC has in the past criticized those providers by name: "World-Check is often criticized by civil rights organizations, advocates, and experts on international terrorism for bias and misinformation that can result in the blacklisting and de-platforming of legitimate charitable groups. The commercial nature of World-Check, its lack of coordination with civil society organizations, its use of unsubstantiated data, and its lack of transparency make it a highly problematic tool to screen out hate."

      In substance, the SPLC's complaint is that our competitors list people we wouldn't, don't list people we would, and we're just better at this. Which, fair enough, everyone is allowed to have an opinion.

      How did we arrive at the position where financial institutions clamored for their data providers to offer SPLC screening? Marketing and sales are skills and the SPLC is very, very good at them. Also, again, read a history book of your choice; they picked a fight with the KKK and won. If you get a reputation for doing that for decades and also have an aligned product many customers feel the need for, sure, they will want to get it from you specifically.

      That is not the only reason why many people in tech companies, financial infrastructure companies, and banks are intimately acquainted with the work of the SPLC. We will return to the other reason in a moment.

      But, what does your SPLC pipeline do? Depends! Perhaps alerts go to an analyst, who checks it for false positives (yep, hits will frequently be false positives), and in the case of true inclusion you have a spirited debate within your firm. Perhaps some people argue that even Nazis need to eat, and to eat you need money, and that on balance the marginal harm of giving this particular Nazi a checking account is outweighed by the social utility of their children not starving to death. You are consuming the SPLC's data product on an advisory basis; your firm retains full control of decisioning.

      Or you could configure your pipeline to automatically deny services to anyone the SPLC lists, either by operation of computer code or by the programmatic-but-in-the-sense-of-directing-humans way that many processes still work in the financial services industry.
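
      The difference between those two postures is, mechanically, a configuration value. A hypothetical sketch; the names are invented, but the fork is real:

      ```python
      # "advisory": a hit opens a case and your firm decides.
      # "blocking": a hit denies service; the list publisher has decided.
      PIPELINE_CONFIG = {
          "splc_extremist_files": {
              "mode": "advisory",                      # or "blocking"
              "on_hit_advisory": "open_case_for_analyst",
              "on_hit_blocking": "deny_and_offboard",
          },
      }

      def on_hit(list_name: str) -> str:
          cfg = PIPELINE_CONFIG[list_name]
          key = "on_hit_blocking" if cfg["mode"] == "blocking" else "on_hit_advisory"
          return cfg[key]
      ```

      In advisory mode the list is an input to your judgement; in blocking mode your judgement has been delegated to the list.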

      Jeff Bezos, in Congressional testimony, describing Amazon's reliance on the SPLC data product for AmazonSmile, a now-discontinued charitable product they offered:

      _" We use the Southern Poverty Law Center data to say which charities are extremist organizations. We also use the U.S. Foreign Asset Office [sic] to do the same thing." _

      Bezos was interrupted before he could finish his next thought; you're welcome to read the testimony for full context. He is clearly referring to the OFAC SDN list.

      Bezos went on to elaborate that the Fortune 2 company could not operate AmazonSmile without some way to kick out the extremist organizations and that SPLC was, effectively, the only reasonable option. He asked Congress for other suggested data providers. None were offered. (No, really, he did that.)

      Let us pause to acknowledge that Bezos, one of the richest men in the world, considers these two four-letter organizations as peers. One of them is created by statute, operates within constitutional and administrative-law constraints, and answers to Congress, the courts, and ultimately the people of the United States of America. It could jail Bezos, personally, for willful non-compliance. And the other is… some people in Montgomery with a very specific interest, whose decisions are subject to review by no court, and whose only power appears to be moral suasion.

      Bezos was equally and entirely committed to satisfying both.

      Why? We'll return to it in a minute.

      As a longstanding financial infrastructure enthusiast and practitioner, I am confident that SPLC screening is used on an advisory basis in very many sectors of the financial industry. It is also used in a delegated authority fashion for some products at some firms, in the fashion that Amazon used to. In the delegated-authority cases, an SPLC hit kills an account application or transaction as cleanly and automatically as an OFAC hit does.

      Perhaps that strikes readers as implausible, even after you just heard it in sworn testimony to Congress. I offer to you publicly documented examples, frustrated that they all cluster in a small set of the vast panoply of financial products. There is a reason for that clustering, related to the SPLC's marketing and sales motion, and we will discuss it in a moment. A warning: if you assume the public examples are fully representative of the SPLC's delegated authority you will materially underestimate how much actual power the SPLC has over financial infrastructure.

      Many employers in the United States offer a perk: if you donate to charity, we'll match what you donate, up to some dollar amount and subject to some restriction. This is, morally, compensation, just like the salary is compensation, just like the 401(k) match is compensation, just like the healthcare benefits are compensation. Firms use specialist providers of financial services to run payroll, administer 401(k) plans, and deliver health insurance.

      Deed offers a workplace giving program as a service (WGPaaS? We'll workshop it.) Some Deed customers are banks, and so they have a ready answer [archive] for your Compliance people on the work Deed already does on your behalf: "Continuous Monitoring: Stay protected with up-to-date screening against sanctions and regulatory watchlists, including IRS, OFAC, SPLC, PEP, and adverse media."

      One of these acronyms is not like the others.

      This perk is quite popular in banks, who have been trying to shake the heartless image since the Medicis, and who want people to feel good that they teleport value through time and space but also really and truly care. So some financial institutions in the United States, possibly without knowing it, may have, in delegating authority to Deed to decision requests for compensation, indirectly delegated it to the SPLC.

      I assume that, as enterprise-grade software, many workplace giving programs have many levers available to customers if they want HR to review every match of a $20 donation to someone's parish. As a self-evident statement of prioritization: no, HR does not want to do this, at all, ever, please stop wasting my time, configure it the way you do for every other bank. Do you expect Customer Success to press on and say "Nope, sorry, not enough to proceed. Is being able to donate to Nazis important to your employees?"

      An observation from someone who worked in the marketing department of a financial services company: if it is on the industry-specific solutions page, it is because potential customers routinely ask for it by name and not having it is a dealbreaker. So you must offer the SPLC screening to customers. But it is socially impossible to ask whether they want it. Product decision time: what is the default value of the Allow Gifts to Nazis checkbox.

      Now, a quiz: do you think Compliance at a bank is neutral on "Can the bank delegate transaction-level decisioning authority, in any part of the business, however small, to an entity under federal indictment for bank fraud? Does the answer change if they are convicted of bank fraud?"

      No! Compliance will not let you do that! Not because they are worried about the integrity of the blacklist, but because an accused bank fraudster would have the final say to approve money movement out of a regulated financial institution. That is very likely intolerable to Compliance.

      What happens next? Well, remember, when you bought the data product, you were also buying someone anticipating your concerns before you even voice them and preparing options before you ask. Jeff Bezos' words echo in San Francisco today: Does anyone know another option?

      Deed is not an outlier in workplace giving programs.

      Groundswell? The FAQ recently read "Groundswell does not process donations to organizations denoted as hate groups by the Southern Poverty Law Center." but changed to "Groundswell conducts due diligence to confirm they meet applicable IRS standards. Clients can also configure their own charitable restrictions within Groundswell, including allowing or blocking specific organizations or categories of organizations, in accordance with their internal policies."

      No prizes for guessing the default.

      Millie? Blog: "Vetting nonprofits can be a time and labor sensitive task… That is why vetting is typically left up to the experts at SPLC. All vetting for the Millie database is even through the SPLC!" [sic throughout] [archive]

      And, again, you are reading the tip of the iceberg. There is much more use of the SPLC list in the financial industry, in much more important products than workplace giving.

      Why is it so easy to find public evidence in giving programs but not of SPLC blacklisting in life insurance or wire transfers or options trading?

      The SPLC and its allies bootstrapped a consensus in their core community of practice, non-profits and the supply chain that funds them. I will describe the shape of that consensus without making specific claims about truths. It is: either you're screening charitable donations for hate funding, or you are a monster. You will not attend our parties. You will not get our retweets. You will be iced out of the flow of money, because we have friends at Ford. One phone call and Open Society is closed to you. And then good luck paying your staff. We have spent our professional careers getting very good at delivering social consequences through tightly coordinated coalitions. Get with the program, or get consequenced.

If you want to understand why the charitable giving world moves in lockstep here, start with the Amalgamated Foundation's "Hate Is Not Charitable." Reconstructing what it did is a project in itself, but the SPLC has a whitepaper with most of the important story beats.

      It's a long story, and I would rather tell you a different story, about how the SPLC formed a coalition to gain account- and transaction-level decisionmaking capability at tech companies, financial infrastructure firms, and banks through a coordinated pressure campaign.

      Parts of this story are abundantly reported in public. Parts are extremely well understood in the organizations that the SPLC's coalition repeatedly persuaded, cajoled, or threatened (pick your favorite verb for the moment).

      Some parts of the story are original public interest reporting. What is the public interest in candidly recounting the exercise of power over Nazis? Because they did not stop once they achieved power over the Nazis.

      The coordinated pressure campaign, as experienced by industry

      One coalition of non-profit organizations ran an organized pressure campaign against industry, for years. It started in 2017, with the SPLC and another non-profit informally coordinating. It intensified and formalized in 2018, under SPLC co-leadership. It escalated sharply in 2020 and 2021.

      The campaign had two main components. The first was public advocacy and communications work. The second, less visible but more consequential, was a series of meetings with industry. Hundreds of meetings. With a specific target set of companies.

      The campaign's declared aims were three. To convince those companies to censor more communications the coalition characterized as hate. To blacklist organizations and individuals the coalition characterized as promulgators of hate or violence. And to interdict the flow of funds to those blacklisted parties.

      The coalition claimed to be non-partisan. Be on the lookout for mentions of "non-partisan," because it is a word the coalition understands differently than I do.

The coalition calls its targets "Internet companies" and relies on government, media, and the public not to read the fine print. In that fine print, they mendaciously define Internet company to include banks, credit card processors, and any other financial infrastructure their enemies could touch. The coalition was going after posts, but it was also and primarily going after money. I will use the language "industry participants" going forward to identify who they met with.

      Industry participants included Facebook, Twitter, JPMorgan Chase, Visa, Mastercard, and many other firms. Some were among the largest companies in the world. Others had fewer than 10 employees. (I estimate headcount based on published reporting and industry experience.)

      Stripe was an industry participant. I was employed at Stripe continuously from late 2016 through early 2023, covering the entire period under discussion. I remain an active advisor to Stripe. Stripe does not necessarily endorse what I write in my personal spaces.

This series of hundreds of meetings involved hundreds of employees from industry participants. Those employees included C-suite executives and managers and individual contributors across a host of functions. Those functions included communications, legal, government affairs, Trust and Safety, and compliance.

      Meeting notes were frequently kept, and sometimes widely circulated, as is the routine practice in industry. The meetings were documented on calendar invites (often with full participant lists), shared docs, attachments, emails, and other contemporaneous records. In the ordinary practice of industry these primary documents distribute themselves promiscuously into secondary documents; think of an email being screenshot to paste into a PowerPoint to discuss the response in a meeting. Records exist on conservatively hundreds of systems and can be accessed by many more than 10,000 people.

      No employee of an industry participant I have spoken to, familiar with the contents of the meetings, was willing to provide quotes for publication with their name and corporate affiliation attached.

      Their reasoning included not being authorized to disclose private information, fear for their personal and corporate reputation, future career consequences for leaking, personal consequences for being identified adjacent to national political controversies, in some cases fear for their physical safety, and in some cases unwillingness to betray a cause they personally support.

Industry participants recount the tone of the meetings differently, and describe it as varying from meeting to meeting. Some meetings were strained-but-professional. Sometimes the coalition participants were described as demanding and "hectoring." Industry participants report abusive remarks directed at their companies and at the people in the meeting.

      Industry participants were repeatedly told that if they did not accede to demands they would be profiting from evil, complicit in the death of innocents, or benefitting from white supremacy. The innocents claimed to be at risk were often specifically identified as black, including during a period of intense societal concern for the lives of black Americans specifically. Industry participants were told that they wanted this. That they were taking "blood money". Industry participants repeatedly felt personally attacked, in ways and using language not normative in their professional experience.

      On the account of multiple industry participants, coalition participants explicitly held individuals in the meeting personally responsible for the actions of their employers. This was aimed at individuals with substantial influence and authority in companies, and also at junior employees.

      Industry participants describe the coalition participants as threatening their employers, openly and by implication.

The most commonly described threat was coordinated negative public messaging with the goal of causing reputational harm to the industry participants. Feared comms outcomes ran the gamut from heavy mainstream media coverage to a Twitter pile on. Twitter is real life, particularly when a large and vocal contingent of your employees use it and Slack simultaneously. Ever been pulled into a meeting over a single customer tweet, then burned weeks managing the fallout? Count yourself lucky.

      Less commonly, the industry participants perceived they were being threatened with adverse legislative, executive, or regulatory action indirectly by coalition participants who are reasonably read as exercising substantial political influence. Industry participants sometimes report that coalition participants flaunted their political influence.

      Industry participants were repeatedly told that if they did not accede to specific demands, they would share the blame for future deaths. Bits about Money has reviewed contemporaneous records which unequivocally make this claim, authored by coalition participants. We note that this echoes language the coalition routinely puts in press releases, Medium posts, and similar artifacts after presumptively careful review of the phrasing. The coalition was inconsistently disciplined in phrasing in documents we have reviewed, and we decline to quote their phrasing, in part, out of charity.

      You will share the blame. We will hold you responsible.

      The coordinated pressure campaign, as narrated by its authors

      The coalition has publicly and voluminously described their own understanding of what was said in those meetings.

      Where employees of industry participants dispute their characterizations, I will characterize broadly what some employees of industry participants have said, to preserve their anonymity. You should not view this as a claim on behalf of all industry participants. Patterns emerge frequently, but I am making no claims about unanimity.

mid-2017: Color of Change dialogue with PayPal begins

      Many left-of-center voices felt that white supremacists had been emboldened by the 2016 election of Donald Trump. Beginning in mid-2017, Color of Change communicates with and meets with PayPal, with the objective of cutting off financial services to hate groups. Color of Change is a civil rights organization which specializes in online organizing.

      The Center for Media and Democracy, an aligned non-profit, quotes a senior executive as saying "Let's be clear: public speech promoting ideologies of hate always complements and correlates with violent actions."

      Industry participants characterize the coalition participants as asserting that speech was inseparable from conduct. Free speech concerns were dismissed and, industry participants report, mocked, including with the dismissive rendering "freeze peach."

August 11th–12th, 2017: Charlottesville Unite the Right Rally

      As has been abundantly reported elsewhere, a coalition of white nationalist, neo-Nazi, and alt-right organizations (per voluminous public reporting tracking self-identification) organized a rally in Charlottesville, Virginia. This sparked counter-demonstrations. A rally attendee struck and killed a counter-demonstrator with his car.

Color of Change intensified its existing engagement with PayPal and other industry participants. Rashad Robinson, then executive director, would later describe these engagements in detail on an iHeartRadio podcast. Fast Company [archive] approvingly cites that this came after "Robinson used similar tactics to move companies to withdraw sponsorship from the 2016 Republican National Convention." The Republican National Convention is a get-together sometimes described as a grand old party.

      Robinson articulated the coalition's theory of change: "Power is the ability to change the rules." The coalition perceived the industry participants as having power, desired power for itself, and took steps to achieve it.

      Color of Change swiftly organized what it describes as a social media campaign using the hashtag #NoBloodMoney.

In the wake of Charlottesville, which was shocking in the broader U.S. political environment and perceived as a watershed moment within tech companies, many industry participants made decisions to end services to a variety of groups they felt had violated their policies against promoting violence or extremism. This was sometimes done proactively. It was sometimes done after receiving communication from activists, either in their personal capacity or identified as coalition participants.

      Meetings were, prior to this point, relatively ad hoc. This would soon change.

      August 21, 2017: JPMorgan Chase Foundation donates $500k to SPLC

      As mentioned above, the SPLC enjoyed broad trust within the financial industry dating to long before these events. Chase's donation to SPLC immediately after a galvanizing tragedy could, if one were immensely cynical, be read as a tiny communications expenditure.

      Industry participants routinely claimed shock and a sense of urgency after Charlottesville. A grown man once wept in my presence recounting that event. While there is substantial diversity of views among industry participants, many have, in their private spaces, when the cameras are not rolling, when there is nothing to gain, repeatedly described the SPLC to me as being on the side of the angels.

      Keep this in mind as the coalition describes industry as being standoffish and foot-dragging.

2018: SPLC organizes Change the Terms, which becomes the coalition's nucleus

      The SPLC co-led an effort to unify, coordinate, and intensify previously ad hoc organizing actions. Change the Terms (CTT) was a coalition, to its friends, a conspiracy, to its enemies, and an unincorporated association, to a geek with an unhealthy interest in LLC formation. (The only fact I've ever retained about unincorporated associations is that they are jointly and severally liable for acts of the members.)

      The individuals identified contemporaneously as co-chairing CTT were Heidi Beirich (then-head of the SPLC Intelligence Project) and Henry Fernandez, of the Center for American Progress (CAP).

      SPLC's Intelligence Project ran the private intelligence service and produced its data products. It also produces an annual intelligence estimate, such as the (2024) Year in Hate and Extremism.

      (Beirich left SPLC in 2019 to co-found Global Project Against Hate and Extremism (GPAHE) with a fellow SPLC alumna.)

      The SPLC characterized the CTT coalition as its own initiative under the Intelligence Project, and not simply Beirich's initiative, in charitable governance and fundraising documents in the possession of Bits about Money. We cite one such document below, contrasted against later Congressional testimony.

      According to documents reviewed by Bits about Money produced by coalition participants, the SPLC participated in cost-sharing arrangements to fund expenses of other coalition participants incurred in carrying out the joint purpose of the coalition. We are unaware of the extent of this practice.

CTT presently describes its most senior members as CAP, Color of Change, Common Cause, Free Press, GPAHE, Muslim Advocates, the National Hispanic Media Coalition, and the SPLC. There is some ambiguity around who claims founding member status and whether that list has evolved over time. Startup life, I get it.

      I will refer to CTT's primary artifact as the Terms. This document, announced at the coalition's debut, was foundational to CTT's positioning (they are Change the Terms). The Terms were sometimes described as recommendations, sometimes as a model Terms of Service (ToS). They were consistently positioned as being for Internet companies.

      This is sleight-of-hand. A primary purpose, perhaps the primary purpose, of the Terms is to interdict money movement.

      The Terms define "Internet Companies" in a non-standard fashion to include banks, credit card brands, any business of any character which facilitates a transfer of money with a web or mobile interface, and also more central examples of Internet companies. This is in keeping with the coalition's by-now demonstrated target selection of PayPal (an Internet company) and Mastercard (which predates the commercial Internet by decades).

      The SPLC co-drafted the Terms.

      The SPLC referenced the Terms in Congressional testimony as being an extension of the SPLC's long-running campaign to interdict money movement to targeted organizations.

For decades, the SPLC has been fighting hate and exposing how hate groups use the internet. We have lobbied internet companies, one by one, to comply with their own rules to prohibit their services from being used to foster hate or discrimination. A key part of this strategy has been to target these organizations' funding.

      The Change the Terms coalition existed to coordinate and parallelize execution on this tactic. In addition to nominating targets for existing policies, it extracted concessions from industry in the form of policy changes. The coalition, when minimizing its own power served its purposes, sometimes described all actual decisions as made by industry. The coalition was very candid when speaking with itself, with allies, and with industry participants. The coalition understood itself to have some degree of coercive power, and factually had some degree of coercive power, as we will discuss. It also secured delegated authority, routinely but not universally, as we have discussed.

      Industry participants do not consider the Terms to be reasonably characterized as a ToS.

I would say the Terms are an advocacy artifact which adopts the stylization of a ToS without making any effort to be one. A ToS is a binding contract that industry customarily pays professionals to produce, or to adapt from firm-maintained templates appropriate to young startups. The English-language U.S. ToS of a major tech company has, as a rule, consumed more than seven figures in bespoke services work. The idea that a filesharing service and a regulated U.S. depository institution could adopt the same ToS is fatuous on its face.

      The purpose of the Terms was to get the meeting and, oh boy, did the coalition get them. I estimate they successfully achieved hundreds of meetings.

      March 2021: Color of Change describes the meetings on a podcast

      Color of Change's Robinson was interviewed by Hillary Clinton on her podcast You and Me Both in March 2021. You and Me Both is available on major podcast platforms through iHeartRadio. Readers may recognize Clinton from other work.

      Bits about Money has archived the podcast MP3 file, to make specified quotations findable via timestamps. Many professional podcasts use dynamic insertion of ads, which is good for advertising revenue but bad for reproducibility of timestamps across listeners. Please do not use the archive unless you need these specific timestamps.

      Robinson confirms that CoC works with SPLC and that its relevant work began after the 2016 election (29:15).

      Episode at 29:30:

We started calling the credit card companies. We started calling these payment processing companies. And you know what they told us? They said, oh, we're with you, but, you know, you have to talk to the banks. And then the bank said, you know, you have to talk to the credit card companies. So we start building the #NoBloodMoney campaign and we start building this platform. And, you know, we're not quite done with it all when Charlottesville happens.

      Robinson describes a central tactic of the coalition: identifying particular accounts it wants deactivated, with a consequence if demands are not met. He claims this to have been demonstrably effective.

      Multiple industry participants describe the same sequence of events across several invocations of the tactic. I feel it necessary to caveat causality, as described below.

      Episode at 30:00:

We have been talking with you [companies] for months. We've given you these lists of white nationalist groups. And then within about twenty four hours [of launching the #NoBloodMoney campaign], they start sending us a list of white nationalist organizations that they are cutting off from processing. No law had changed.

      Clinton interjects: Exactly.

Robinson locates this within his broader non-partisan political project.

      Episode at 28:00:

We really built what I feel is a new strategy. It was focused not simply on resistance, but on opposition. What would it mean to not just resist but to build power, to oppose, so that we could get back to governing, focusing on winning real victories at the local level, while also recognizing that the game was not fair, that the rules were rigged, and that we couldn't simply say that what happened in 2016 was democracy. It was what happened.

Clinton later comments, at 31:20:

      Moving [your advocacy] to the private sector, and corporate power, was an incredibly smart approach.

      A brief interlude about causality and communications strategy

      Industry participants have, compared to anyone else in the world, broadly better information about account status, account history, position in pipelines, and similar. (This is not to say they have total awareness of all information in their possession, or that all employees of an industry participant have equivalent access to information and capacity to understand it. Some organizations tightly silo information internally by role.)

      It is easy to infer causality from timelines without that being warranted. One mechanism for this: accounts may be in pipeline at the time of target nomination. An external observer will perceive "account active, nomination communicated, account closed shortly thereafter" and make the obvious inference.
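A toy simulation makes the trap visible. Every number below is invented for illustration, not drawn from any industry participant's data: nominations land against a platform whose review pipeline was already going to close some accounts regardless, and an observer credits any closure within a week of a nomination.

```python
import random

random.seed(0)

# Toy model; every number is hypothetical. An activist nominates accounts
# without visibility into the platform's independent review pipeline. Some
# nominated accounts were already queued and will close soon regardless.
# The observer credits any closure within 7 days of a nomination.
NOMINATIONS = 1_000
ALREADY_IN_PIPELINE = 0.05  # fraction queued for review before nomination
DAYS_TO_CLOSE = 14          # queued accounts close uniformly within 2 weeks

credited = 0
for _ in range(NOMINATIONS):
    if random.random() < ALREADY_IN_PIPELINE:
        days_until_closure = random.uniform(0, DAYS_TO_CLOSE)
        if days_until_closure <= 7:
            credited += 1

print(f"{credited} of {NOMINATIONS:,} nominations appear causally effective "
      "despite having zero influence in this model")
```

In expectation that is about twenty-five apparent "wins" per thousand nominations, in a world where the nominations moved nothing at all.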

      If one understands one's counterparty to have misunderstood something, one can correct them. Or not.

Industry participants describe a variety of tactics for extending olive branches to the coalition participants, including but not limited to acceding to demands. One such tactic was giving more visibility into pipelines than the broader public had, with or without influence on operation of those pipelines. "Thanks so much for bringing that to our attention. They are absolutely on our radar now." can mean many things, including "Message received.", "I confirm they are in pipeline.", or "I confirm they are in pipeline thanks to you."

      The coalition targets politicians in non-partisan fashion

      The CTT Terms include the following recommendation.

      Many Internet Companies have granted special exemptions to official accounts, government actors and powerful people, allowing them to promote hateful activities, disinformation and other divisive behavior. Instead, these actors should be held to the same standards (if not higher standards) as regular users. There should be no special exemptions that allow the powerful to spread hate with impunity. Many official accounts at various social-media companies have circumvented platform policies despite promoting hateful activities, disinformation and other divisive behavior. Policies should apply equally to all users and must be enforced.

      Industry participants perceived themselves as being in an impossible situation with regards to a handful of accounts which were both extremely vexatious to coalition members and obviously newsworthy. Consider how manifestly unwise it would be to intentionally deplatform the sitting, duly elected President of the United States. While nominally about a large class of users, industry participants describe the motivating examples brought up in meetings as consistently circling back to Trump, Tucker Carlson, and a very short list of other names.

      The ADL, a coalition-aligned non-profit, co-authored a press release with some coalition members titled Deplatform Tucker Carlson.

      The coalition benefits from the mistaken impression that it only asks platforms to remove accounts controlled by terrorist organizations. No. The first, unobjectionable list is the ante. After you're in the hand, they raise you Tucker.

Once the coalition has achieved agreement in principle that it defines the bounds of polite society, it soon broadens the ask, framing the new concession as something you have already committed to publicly.

      The coalition often communicates the ask privately but the retaliation for non-compliance publicly. The public, mainstream media sources, and similar interpret the sudden coordinated pressure intensification as evidence that the targeted company has failed at the original commitment, the one about terrorist organizations.

      In public communications, some coalition participants exhibit message discipline in locating the agency within the industry participants: the coalition "recommends" policies, the industry participant agrees to a policy, then the industry participant is responsible for enforcing what is now their own policy.

      Coalition participants were, in the recollection of many industry participants, frequently undisciplined in meetings. They specifically nominated accounts for adverse actions, up to account closure, in no uncertain terms, and it was not a request.

Color of Change, at a minimum, was quite disciplined: they consistently adopted coercive conditional escalation as their default engagement model. Get the meeting, communicate the demand, show a marketing brief of the words and images that would be activated if you did not swiftly accede to the demand. This account is given both by industry participants and by executive director Robinson to Fast Company, where he describes employing it "95% of the time."

      Coalition participants were inconsistently disciplined in their contemporaneous written records, some of which Bits about Money has reviewed. Authenticating these as true copies is tricky; authenticating public statements is not.

      The Leadership Conference on Civil and Human Rights in October 2019 wrote Facebook a public letter, which the SPLC and many coalition members co-signed.

And yet, sabotaging your own efforts, Facebook recently announced that it would automatically deem speech from politicians to be newsworthy, even when it violated the company's Community Standards; exempt politician-created content from its fact-checking program - permitting anyone running for office to post or purchase ads with falsehoods; and exempt content deemed to be "opinion" from its misinformation rules. Politicians should not get a blank check to lie, incite, spread hate, or oppress groups of people. Politicians are historically responsible for perpetuating discrimination and erecting barriers to voter participation, while autocrats throughout history have relied on mass media to rise to power and subjugate minority communities.

      Note the conflation here of committing incitement (illegal), spreading hate/oppression (probably bad), and lying while being a politician (Tuesday). This sort of conflation, of attempting to box someone into a proposition they had never actually agreed to, was routine, in the view of some industry participants.

I contemporaneously viewed the brouhaha about politicians lying as being battlespace preparation for the 2020 election. First, establish the general principle that social media platforms had a duty to censor lies told in campaigning. (This was sometimes described as "misinformation," to imply that an American politician lying was doing so in a Russian accent.) Then, seize on every lie in one very specific political campaign, and use the platforms to interdict that political campaign's storytelling. I didn't expect campaign financing shenanigans, because I have a strong prior that responsible professionals might fly close to the sun but do not attempt to fly through it. More on that later.

      Industry participants have their own compliance issues to worry about and frequently perceived this two-step as being too cute by half. The aim was obvious to them. Industry participants describe coalition participants as stating directly that Trump lies frequently, and helpfully telling people with degrees in logic that it therefore follows that if lies cause decisioning, and Trump lies, Trump should be decisioned.

      Early 2020: The SPLC describes this campaign to Congress

The SPLC has described the coalition's strategy in its own voice, in the most formal venue available to it: sworn testimony before Congress. Lecia Brooks, who self-identifies as senior SPLC leadership, appeared before the House Financial Services Subcommittee on National Security, International Development and Monetary Policy on January 15th, 2020.

Verbatim quotes from prepared testimony:

For decades, the SPLC has been fighting hate and exposing how hate groups use the internet. We have lobbied internet companies, one by one, to comply with their own rules to prohibit their services from being used to foster hate or discrimination. A key part of this strategy has been to target these organizations' funding.

      The coalition was an extension of the SPLC Intelligence Project, identified as such in their 2018 Annual Report, pg 9 [archive]. A charity annual report is a governance and fundraising document exhaustively reviewed by professionals and customarily approved by the board. It would be uncharitable to argue the SPLC misunderstands or is dissimulating about its role in the coalition in that document.

      Brooks, to Congress, chooses to describe the SPLC as a member of the coalition and not the animating force of it:

      On Oct. 25, 2018, the Change the Terms coalition - including the SPLC and other civil rights groups - released a suite of recommended policies for technology companies that would take away the online microphone that hate groups use to recruit members, raise funds and organize violence. In response to Change the Terms' advocacy, several Silicon Valley leaders have made promising changes that align with the coalition's vision for a safer online world.

      Brooks then lists several examples of specific wins the coalition achieved.

      Brooks then claims these accomplishments advanced the SPLC's mission. She implies that the coalition's important work will continue.

      Hate groups have clearly been damaged by the efforts of the SPLC and its allied organizations, including the Change the Terms coalition, to fight them and their funding sources online. But the fight is far from over.

      Brooks had an opportunity to describe industry participants as valued partners. Brooks describes the SPLC's relationship with industry participants in part as follows:

      The public exposure was half the battle. We conducted the other part of the campaign privately. SPLC officials held dozens of meetings with top Silicon Valley executives. Some companies acted. Some took half steps. Others did little or nothing. But eventually, the far-right extremists who depended on Silicon Valley were beginning to feel the pain.

      Brooks characterizes the SPLC's tone in a similar fashion to industry participants quoted above.

      She indirectly confirms one of the campaign's core tactics: get the meeting, get a commitment under threat of coordinated public pressure, then judge progress against the commitment to be inadequate. In the next meeting, offer absolution and de-escalation, contingent on policy concessions. Repeat as desired.

_The SPLC kept up the pressure, cajoling companies and exposing those that dragged their feet._

The coalition, across a wide variety of documents, tends to describe itself as having only influence when having power would require accountability, and to describe itself as having power when addressing audiences presumptively sympathetic to the aims towards which that power was deployed.

June 2020: Widespread protests throughout America. National Guard, Facebook deployed.

      As a reminder, in late May 2020, the death of George Floyd triggered a wave of nationwide protests.

Several of those protests devolved into riots and looting. This continued for months. The usual reckoning of the death toll, based on contemporaneous reporting, is about two dozen. Property damage is generally estimated at between $1 billion and $2 billion, based on insurance industry claims data.

      Trump posted "Any difficulty and we will assume control but, when the looting starts, the shooting starts."

      The U.S., unfortunately, has long historical experience with race riots, and the civil rights movement has strong institutional memory of that phrase being invoked to justify murder as a riot control tactic.

      One can believe people steeped in this tradition, inclusive of many coalition members, sincerely understood the post to be a true threat. One can also believe they understood the situation to be an opportunity.

      The coalition's operating logic has been to use each expansion to prepare for the next. A win here would establish that no one is beyond its reach. It would also establish that industry just isn't qualified to understand what their policies mean, and should defer to the subject matter experts who wrote them.

      Facebook declined to remove the post.

      Some employees at Facebook organized a walkout in protest.

      In an attempt to quell the discontent within the ranks, senior Facebook leadership (Zuckerberg and two lieutenants) had an unusually publicized meeting with coalition members (the heads of Color of Change, the NAACP, and the Leadership Conference on Civil and Human Rights).

      Coalition participants did not achieve what they professed to want in that meeting and, in a tick-tock motion industry participants were very familiar with by this time, released a statement to media then coordinated coverage around it.

Widely quoted language from the statement included "Mark is setting a very dangerous precedent for other voices who would say similar harmful things on Facebook." The specificity and analytical rigor of this sentence is not dissimilar to that recounted by industry participants of statements made in many meetings.

The statement explained its concern was that failing to censor Trump, in a non-partisan manner of course, would result in voter suppression, via a causal pathway that the margin of the statement may have been too small to contain.

      This was transparently designed to activate commitments Facebook had made in the wake of the 2016 election.

Believing the 2016 election had been tainted by Russian interference was a left-coalition signifier, much as believing Trump actually won 2020 later became a right-coalition signifier. Neither view has the evidentiary strength its coalition claims for it. But these are not claims advanced to achieve understanding; they are advanced to achieve alignment and, through it, power.

      If one was concerned about the substantive merits of the claim on election interference, and not willing to simply accede to it on the strength of the speaker's social position, one might wonder whether widespread actual violence might not suppress voting more than words describing hypothetical government violence.

      Industry participants who asked coalition participants (in other circumstances) to explain their reasoning were told that it was not their job to educate them, that there exists literature, and that civil rights organizations had unmatchable expertise. Stick to coding, geeks. This did not always mollify industry participants, who in 2020 and 2021 were becoming deeply skeptical of expertise wielded as a shield for disastrous policy recommendations. For reference, see any history of the early days of the covid pandemic.

When they knew the cameras were rolling, participants were fractionally more disciplined. Color of Change's Robinson delivered a 2019 speech to Facebook leadership [archive], telling executives directly that they had 'profound gaps in their expertise' and that implementing CTT would be 'a step toward seriousness.' We believe we fairly characterize other documents we have seen as extending the logic from a claim about incapacity to understand racism as a societal problem to incapacity to understand the words written on industry's internal policy documents.

      The term of art in industry for the person responsible for the interpretation of a document is the "owner" of that document. Accepting this term of art, many professionals in the industry would agree that if the coalition doesn't understand themselves to own the policies, it's tough to guess where they think they should be on the stakeholder-analysis form. "Consulted" doesn't get to say the owner has blood on their hands after a decision.

July 29th, 2020: Antitrust committee hearing about market power

      The House Judiciary Subcommittee on Antitrust, Commercial and Administrative Law conducted a hearing on Online Platforms and Market Power, Part 6: Examining the Dominance of Amazon, Apple, Facebook, and Google. The CEOs of the four companies attended as witnesses.

      This is the hearing at which Jeff Bezos invited Congress to recommend a substitute data product for the SPLC blacklist.

About a month later, 15 Republican lawmakers wrote Bezos a letter, saying:

Amazon's ongoing reliance on the SPLC, with its documented anti-conservative track record, reinforces allegations that Big Tech is biased against conservatives and censors conservative views.

      The letter did not contain a recommendation for an alternative data product.

Industry participants were extremely aware of the climate regarding potential antitrust actions against their firms at many times during these years. Avoiding that was a central goal of policy teams and company leadership at all levels. Industry participants perceived the coalition members as possessing substantial influence over antitrust policy outcomes.

      You don't get interviewed by Hillary Clinton for being a nobody.

      January 6th, 2021: A riot at the Capitol

      Joe Biden won the 2020 election. Trump disputes this.

A planned demonstration in Washington D.C. for protesters sympathetic to him, timed to coincide with the counting of electoral votes in the Capitol Building, devolved into a riot. Demonstrators gained physical access to the Capitol Building, sometimes by force and sometimes by being let in by overwhelmed police. Capitol Police shot and killed one demonstrator as she attempted to climb through a window. A Capitol Police officer who had responded to the riot died the following day; the medical examiner ruled the cause natural (strokes) but noted the events of the day played a role in his condition.

      Industry participants and coalition participants treated the events of January 6th as a multi-faceted emergency and responded within days.

      Industry participants converged on nearly unanimously terminating or severely restricting services to Trump and affiliated entities. Coalition participants pressed publicly and privately for this outcome.

      Some commentators view these events as over a dozen firms watching the same news and making substantially the same decisions independently of each other. Some commentators, focusing on the near unanimity, believe these decisions to have been strictly coordinated. This commentator believes neither.

      There was a widespread effort to blame the tech industry specifically for the events of January 6th, contemporaneously reported in many places. The WSJ synthesizes, in a straight news story, the view "The Capitol incursion, some of which was planned and discussed in advance on social media, has hardened many Democrats' view that a lack of tech-platform regulation is undermining democracy." The climate in industry contemporaneously was acutely aware of being perceived as a threat to national security.

      Industry participants perceived they were making decisions under conditions of profound risk to their businesses. This perception was contemporaneously noted by many external observers, including then-Senator Rubio, quoted by the WSJ as saying:

      The reason why these guys are doing it is that the Democrats are about to take power, and they view this as a way to get on their good side.

      If "get on their good side" converges with "not get one's license to do business revoked" then there is not much daylight between that model and tech's own. I am making this observation generally, on the basis of years of industry experience, rather than on the specific basis of any conversation that happened that week.

      Financial professionals not directly employed by tech companies themselves shared this model, articulated it, and attempted to profit from it in a way which is entirely permissible under capitalism. Bellwether tech stocks (including those of industry participants) sold off during market highs for non-tech indexes, pricing in regulatory risk to these businesses.

This was noted by many non-political industry observers. The WSJ quoted an equity analyst as saying: "The bottom line is that the odds of legislative action on privacy, antitrust and [liability shield Section] 230 just went up significantly."

      Investment banks get market color on recorded lines. In tech we get it in DMs from people we've worked with before and will again. It flows up to decisionmakers when it needs to. Much color is tweets being pasted into Slack.

      This is not limited to times of national crisis. Speed is edge. As an illustrative example, regulators learned FTX had tried suborning a bank from the NYT, who learned it from an informed source in Tokyo, who developed a package of proof after reading a single document posted to Twitter. Or so this writer speculates in a curiously specific and consistent manner.

      Now, putting these observations together:

      Imagine a coordination game with two sides of a fence. Players have to pick either side of the fence. They may announce their decision at any time, and may change it until all players have announced a decision. Payoffs to this game decline the longer one waits. They are catastrophically negative if the game ends with one player alone on a side. The game has no winners ever and you can't refuse to play.

      This game has a "race to be second" dynamic, where any credible commitment to a move, or observed move, strongly encourages any player contemplating the same move to immediately announce it. Each additional player joining the block is a domino against players who have yet to announce.

      The real-life situation reached rough equilibrium by January 10th.

      Industry participants do not perceive themselves as having highly weighted the opinions of coalition participants during these few days. They were considered unimportant relative to other factors. Nor did industry participants broadly attempt to solicit input from coalition participants, in part because their responses were viewed as being trivially predictable. Further meetings during a crisis were considered a distracting waste of time.

      Coalition members publicly and privately, along with many who had learned by imitation, immediately demanded everyone shut down everything. If he still had Netflix the next day it was not for want of trying.

      Change the Terms issued a joint statement [archive] demanding an absolute Trump ban on January 6th itself using extraordinary language.

If platforms do not take immediate action to permanently remove Trump's accounts, they will further share in the blame for additional white-supremacist violence that may unfold over the evening and in the remaining days before Trump's term as president ends.

February 25th, 2021: The SPLC lobbies Congress to require companies to inform on non-profits, and others, to government

      The House Financial Services Subcommittee on National Security, International Development and Monetary Policy held a hearing titled Dollars Against Democracy: Domestic Terrorist Financing in the Aftermath of Insurrection. SPLC's Brooks again offered prepared testimony. The SPLC appears to ask Congress for new legislation establishing a BSA-style mandatory reporting regime, with penalties for non-compliance, across industry participants.

      Verbatim quotes, bolding in original:

Government should require regular, mandatory reporting by technology service providers to document abuse of their systems including financial support of violence, harassment, and terrorism. This includes implementation of mandatory financial abuse reporting requirements for internet services operating in the United States, including social media services, infrastructure providers, banking institutions, cryptocurrency exchanges, crowdfunding sites, video streaming platforms, and the like.

      and

      [These companies] should be required to investigate and report the details of harms and abuse of their service. There should be … penalties applied to services that refuse these tracking and reporting responsibilities.

      Given that this reporting regime is mandatory, on the face of it, if a respected civil rights organization makes a payment to an individual responsible for violence, harassment, and/or terrorism, facilitators would have an immediate reporting requirement. That seems to carry the risk of reporting on the actions of an NGO to a potentially hostile government. That government could be the current one or a future one, because governments have been known to keep written records and employ personnel who serve across generations.

      Had the SPLC asked me for comment on this novel expansion of BSA-style enforcement mechanisms, I would have told them that the existing BSA enforcement apparatus routinely negatively impacts marginalized individuals the SPLC makes the center of their moral concern. Bits about Money has made this argument across many pieces and in depth for years, continuing on observations I had made during my time as a consumer advocate for individuals with banking and credit problems, dating to the mid-2000s.

June 4th, 2021: Facebook rescinds newsworthiness exception to multiple policies

      Facebook announced that it would end its longstanding "newsworthiness exception" to content moderation rules. This was a concession to years of repeated public and private demands by CTT coalition members. These demands included the October 2019 letter co-signed by 46 organizations including several CTT coalition members.

      This form of exception was called out in the CTT Terms and ending it was an avowed goal of the coalition.

      CTT coalition members then pushed for another concession they desired.

July 2021: The CTT coalition attempts non-partisan interdiction of Trump PAC fundraising

Industry participants have characterized coalition members as being routinely undisciplined, verbally and in writing, in specifically nominating accounts controlled by FEC-registered entities, including fundraising accounts, for termination. They claim this was a pattern of practice for several years. Bits about Money has reviewed multiple records suggestive of this pattern.

      It is not straightforward to authenticate documents obtained through sources. More rigorous authentication often poses additional risk to sources.

      On the other hand, sometimes documentary evidence of the pattern is available from the coalition directly. Common Cause maintains a WordPress site, and occasionally posts their target lists in public. [archive] WordPress is a complex and highly modular open source platform which you could use for a blog or e-discovery delivery service.

Bits about Money's eclectic collection of coalition-authored communications unequivocally demonstrates a) multiple coalition members b) specifically directing account termination and/or continuous restriction c) against Trump-affiliated accounts d) for the express purpose of interdicting political fundraising and other activity e) with them subsequently fundraising in specific reliance upon these acts. We offer the published document in substantiation of claims a-d and the next section of this piece in substantiation of claim e.

      Verbatim quotes from the document:

_As you know, The Team Trump Facebook page is operated by Save America, a political action committee ("PAC") controlled by Trump._

      and

Allowing Team Trump to continue running political ads on Facebook is a significant loophole in Trump's two-year suspension and provides a pathway for the former president to evade the ban. … Further, Team Trump is soliciting donations and inviting supporters to Trump rallies.

      and

[We urge you to s]ubject the Team Trump account and any other account under Trump's control, including any account of a political committee authorized and/or established by Trump pursuant to campaign finance law, to the same two-year ban as his Facebook and Instagram accounts.

      No other accounts are specifically nominated in this document.

      The document makes a token gesture that the principle is broader than the specific PAC whose fundraising activities it desires to be interdicted.

[We urge you to S]ubject any Facebook pages run by a political committee or other political entity authorized, established, financed, maintained or controlled by an individual to the same content moderation decisions as that individual's Facebook account.

      The Common Cause demand letter was co-signed by CTT coalition members Common Cause, CAP, Free Press, GPAHE, Media Justice, NHMC, and many other aligned 501c3 organizations. The published version of the demand letter is not signed by the SPLC.

Consider what level of operational discipline prevailed in the coalition, which employs many communications professionals and lawyers, to publish that document. Now imagine what individual coalition employees wrote with their thumbs. Do you picture excessive emoji, or prose that reads more Blackberry?

      Later in 2021: Coalition members fundraise in reliance upon this conduct

      Coalition participants Free Press and Common Cause rented a mobile billboard to reiterate their demands. The mobile billboard was deployed to follow Facebook executives around Washington D.C. They tie this action to organizing to achieve a government investigation of Facebook.

      Verbatim quotes from their press release [archive], titled Facebook Targeted by Mobile Billboard Circling Capitol Hill Demanding That Company Close the Trump Ad Loophole:

A mobile billboard demanding that Facebook ban Team Trump ads in accordance with its ongoing suspension of Donald's [sic] Trump's accounts will greet Facebook representatives following their Capitol Hill testimony today.

      and

Sponsored by Free Press Action and Common Cause, the mobile billboard began its route this morning and is continuing to circle the Federal Trade Commission, the White House, Facebook headquarters and the U.S. Capitol, and will join the "Rally to Investigate Facebook"

Below we reproduce Chris Cruz 8 Media Group's photo of the mobile billboard, attached to the press release. The mobile billboard reads "Facebook must close Trump's ad loophole" and "Nobody is above the rules." We believe this reproduction is fair use for the purpose of reporting and commentary, but are happy to pay any reasonable fee for an unrestricted non-exclusive perpetual worldwide license across all media types currently existing or to be invented. Invoice to Kalzumeus Software, LLC please.


      Free Press's 2021 end of year communication [archive] to donors, signed by its co-CEOs, attempted to fundraise in part based on their participation in the Change the Terms coalition and in part based on the mobile billboard campaign to interdict PAC fundraising. The communication includes a photo of the billboard. All following quotes are from the document, and bolding is true to the original.

      [W]e co-founded Change the Terms, a coalition that calls on the platforms to adopt model policies we developed to crack down on hateful content.

      …

      Our efforts have yielded numerous concrete changes. After years of pressure from Free Press and our allies, Twitter finally banned Trump[.]

      …

      Facebook initially suspended Trump "indefinitely" and later changed his suspension to a two-year ban. We're now pushing the company to permanently ban Trump and to close a loophole that's allowing a Trump PAC to fundraise and organize on his behalf.

      The funding call to action, immediately above a donate button, was:

      FUND THE FIGHT. Your generosity makes our work possible. Please give what you can today to make sure we have the resources we need to keep fighting for equitable media policies that improve people's lives.

      The communication included the following disclaimer, directly under the donation call-to-action. It was italicized.

      Free Press and Free Press Action are nonpartisan organizations fighting for your rights to connect and communicate. Free Press and Free Press Action do not support or oppose any candidate for public office.

      2022 to present: the Change the Terms coalition sunsets (?)

Meetings between industry participants and coalition participants decline, according to several industry participants who attended past meetings, from a regular practice to occasional and ad hoc. The Change the Terms social media presences, which had posted regularly from 2018 through 2021, substantially cease operations. Their last Medium post was in May 2022.

      CTT coalition member GPAHE released a statement [archive] about Facebook and Trump on January 25th, 2023. The Change the Terms coalition retweeted it, in one of their final Twitter posts, and the final one naming Trump.

      The most striking difference from the CTT coalition's past several years of public and private statements: this is, conspicuously, carefully worded.

There was no urging, calling upon, demanding, etc., in this public statement. It was comparatively disciplined in only describing Facebook's decision and their analysis of it, and letting a rhetorical question hang in the air.

_If that's not enough for Facebook to continue to ban him, then what is?_

      The Change the Terms coalition website remains up, but it is difficult to say whether any members maintained their longstanding non-partisan interest in shaping industry policy via pressure campaigns and then nominating targets for enforcement. Perhaps they achieved final victory over hate.

Or perhaps, since September 2021, they had learned operational discipline. The kind that chuckles at a proposal to chase executives around with mobile billboards demanding the interdiction of PAC fundraising, in a totally non-partisan fashion of course, and then doesn't do that. Donor funds are best spent elsewhere.

      In other news, Trump had filed his candidacy paperwork with the FEC in November 2022. He would go on to win the 2024 election.

      A brief parable about maintaining tax-exempt status

      Wiley Coyote Charities, an IRS-recognized 501c3 non-profit organization in a universe not too far from our own, has chased its hated nemesis for years. The orange road runner is tantalizingly close. Focused and untiring, perceiving himself close to ultimate victory, Wiley Coyote Charities salivates. This time, this time for sure, he will be sated. He will be free.

Wiley Coyote Charities speeds past a sign reading "Danger: Plausible Non-Partisanship Ends." The only danger is to that blasted bird.

      Wiley Coyote Charities is, to the appearance of observers of the race, now running over two miles of clear blue sky. He has not yet looked down. We know what will happen when he does. Blame the road runner all the way down.

      As a former 501c3 CEO myself, I am aware of the requirements to maintain tax-exempt status. This is of paramount importance to charities. You can save yourself some legal bills quickly with the IRS's Restriction of Political Campaign Intervention by Section 501(c)(3) Tax-Exempt Organizations :

      " Under the Internal Revenue Code, all section 501(c)(3) organizations are absolutely prohibited from directly or indirectly participating in, or intervening in, any political campaign on behalf of (or in opposition to) any candidate for elective public office. Contributions to political campaign funds or public statements of position (verbal or written) made on behalf of the organization in favor of or in opposition to any candidate for public office clearly violate the prohibition against political campaign activity. Violating this prohibition may result in denial or revocation of tax-exempt status and the imposition of certain excise taxes."

      501c4 organizations have similar considerations. Consult your lawyer.

      Does Bits about Money have a political agenda?

      BAM mostly explains and analyzes financial infrastructure. The pipes work for everyone in every party, and for that thank God, plus the many people who go to work every day to make it happen.

A reader unfamiliar with years of back issues, picking one at random, will assume that we are sympathetic to the then-current administration because we referenced an indictment. We say very similar things at substantial length every single time. Some pieces you may enjoy: The Bond Villain compliance strategy re: CZ, an extensive discussion in Debanking and Debunking of bank compliance failures enabling the FTX fraud, and our voluminous record on the function and tradeoffs of the BSA regime.

      Bits about Money does not generally recommend particular providers of financial services, including of screening data products. As an editorial decision: we anti-recommend the SPLC blacklist. It is unfit for purpose in financial services and obviously so. We have no position as a publication as to whether it is valuable for other uses.

      To the extent I personally have policy preferences, I prefer the orderly administration of law. Any law we would not be willing to enforce against a sympathetic lawbreaker, a friend, or an ally is a bad law. Until a bad law is changed, it is the law. I reject a legal realism, or legal cynicism, that says that power is the only law.

      The Declaration of Independence and D.C. billboards agree: No one is above the rules. We have no kings in this country.

      On the SPLC specifically, I don't really specialize in charity effectiveness ratings, but so I am not accused of hiding the ball: I think they achieved a meaningful and historic victory in the cause of righteousness many years ago. They have dined well on that reputation for a very long time.

      To those who think their mission remains critical and more intrinsically noble than simply the pursuit of political power for their favored coalition, I will say this. If the coyote has a noble mission on his back, he owes it to the mission to let the damned bird go, before he takes that mission off the cliff with him.

      Postscript to my fellow communications professionals

      Just following up on my emails. Do I have the correct addresses? Emails to the team alias and your personal work accounts, formatted correctly, did not bounce; emails to incorrect guesses for the team alias did.

      SPLC: I had asked you to deny that the email between the SPLC's CEO and the bank exists, or to dispute the accuracy of the excerpt in the indictment, and asked you to comment on whether the Change the Terms coalition you co-founded had specifically nominated accounts for negative actions. I still welcome a denial or comment from you on any matter, like whether it is fair to characterize Change the Terms as the SPLC's concerted coalition to interrupt the fundraising of political opponents.

      Common Cause: I asked you to comment on whether you have ever nominated the account of an FEC-registered entity for negative decisioning, and told you I had written evidence of you doing so on at least one occasion. I welcome your future comment, perhaps on when you started that practice and when or whether you have ceased. We could compare notes.

      Email is my preference, but since the SPLC specifically is well-resourced to pursue the other way to deliver a response if it desires, I'll save everyone 6 billable minutes: tell them "to Kalzumeus Software, LLC's registered agent." The Internet and I will read it attentively.

      To the as-yet uncontacted coalition members, that meeting can be an email: "How about 'We categorically deny ever directing any company to interfere with fundraising of a political opponent'?" "Approved. Next topic?" Unless you doubt that is true, in which case, book the non-partisan conference room for workshopping the language.

      Don't worry, I am a reasonable professional. Most journalists haven't worked in a comms department. I have, and so gave all parties contacted several business days to answer very simple questions.

      Postscript to fellow geeks who need to hear it

      Your employer is profoundly opposed to you sending confidential information to external parties, even a fellow geek. The incremental value of evidence to me is far lower than the risk to you.

      Audit logs exist, including for searches and document accesses.

      Remember the front page test. If you write it down, you could read it in the NYT. Or HN. So don't write down anything you wouldn't want published next to your name forever.

    13. šŸ”— r/LocalLLaMA 16x Spark Cluster (Build Update) rss

      16x Spark Cluster (Build Update) | Build is done. 16 DGX Sparks on the fabric, all hitting line rate. Setup was time consuming but honestly smoother than I expected.

      Each Spark runs Nvidia’s flavor of Ubuntu out of the box with mostly everything pre-installed and ready to go. For setup I had to rack them, power on, create the same user/pass across all nodes, wait about 20 minutes per node for updates, then configure passwordless SSH, jumbo frames, IPs, etc., which I scripted to save time.

      Each Spark connects to the FS N8510 switch with a single QSFP56 cable. The DGX Spark bonds its two NIC interfaces into each port, so you get dual rail over one cable. I'm seeing 100 to 111 Gbps per rail, which aggregates to the advertised 200 Gbps.

      Why this over H100s or a GB300? Unified memory. The whole point is maximizing unified memory capacity within the Nvidia ecosystem. With 8 nodes I was serving GLM-5.1-NVFP4 (434GB) at TP=8. Now going to test with DeepSeek and Kimi.

      The longer term plan is a prefill/decode split. The Spark cluster handles prefill (massive parallel throughput), and once the M5 Ultra Mac Studios drop I'll add 2 to 4 into the rack for decode.

      Full rack, top to bottom:

      • 1U Brush Panel
      • OPNSense Firewall
      • Mikrotik 10Gb switch (internet uplink)
      • Mikrotik 100Gb switch (HPC to NAS)
      • 1U Brush Panel
      • QNAP 374TB all U.2 NAS
      • Management Server
      • Dual 4090 Workstation
      • Backup Dual 4090 Workstation (identical specs)
      • FS 200Gbps QSFP56 Fabric Switch (Spark cluster)
      • 1U Brush Panel
      • 8x DGX Spark Shelf One
      • 8x DGX Spark Shelf Two
      • 2U Spacer Panel
      • SuperMicro 4x H100 NVL Station
      • GH200

      submitted by /u/Kurcide
      [link] [comments]
      ---|---
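
      The scripted per-node setup is the part most readers would want to reproduce, and the post doesn't include the script. As a rough sketch only, here is that kind of SSH fan-out in Rust; the hostnames (spark01 through spark16) and the fabric interface name are invented for illustration:

      ```rust
      use std::process::Command;

      fn main() {
          // Hypothetical hostnames; the post says per-node config (passwordless
          // SSH, jumbo frames, IPs) was scripted but does not show the script.
          let nodes: Vec<String> = (1..=16).map(|i| format!("spark{i:02}")).collect();
          for node in &nodes {
              // Enable jumbo frames on the fabric interface (name assumed).
              let status = Command::new("ssh")
                  .arg(node)
                  .arg("sudo ip link set enp1s0f0 mtu 9000")
                  .status()
                  .expect("failed to spawn ssh");
              println!("{node}: {status}");
          }
      }
      ```

      The same loop shape covers the rest of the checklist (static IPs, key distribution); in practice you would fan the SSH calls out in parallel rather than serially, since the updates reportedly took about 20 minutes per node.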

    14. šŸ”— r/reverseengineering /r/ReverseEngineering's Triannual Hiring Thread rss

      If there are open positions involving reverse engineering at your place of employment, please post them here. The user base is an inquisitive lot, so please only post if you are willing to answer non-trivial questions about the position(s). Failure to provide the details in the following format and/or answer questions will result in the post's removal.

      Please elucidate along the following lines:

      • Describe the position as thoroughly as possible.
      • Where is the position located?
      • Is telecommuting permissible?
      • Does the company provide relocation?
      • Is it mandatory that the applicant be a citizen of the country in which the position is located?
      • If applicable, what is the education / certification requirement? Is a security clearance required? If so, at what level?
      • How should candidates apply for the position?

      Readers are encouraged to ask clarifying questions. However, please keep the signal-to-noise ratio high and do not blather. Please use moderator mail for feedback.

      Contract projects requiring a reverse engineer can also be posted here.

      If you're aware of any academic positions relating to reverse engineering or program analysis in general, feel free to post those here too!

      submitted by /u/AutoModerator
      [link] [comments]

    15. šŸ”— r/york Hobby/Gaming Shops rss

      I’m in York today for work, but I’ve got a two hour gap in the middle of the day. Are there any fun and friendly hobby/Warhammer/gaming shops I can pop my head into in the middle of town?

      Ta!

      submitted by /u/Fletch1396
      [link] [comments]

    16. šŸ”— r/reverseengineering In-circuit NAND acquisition for edge devices (Raspberry Pi GPIO, no chip-off) rss
    17. šŸ”— r/Yorkshire I just want to live in a place like this forever✨ rss

      I just want to live in a place like this forever✨ | Video by @ajmchaletravels submitted by /u/Seabeachlover10
      [link] [comments]
      ---|---

    18. šŸ”— r/LocalLLaMA Qwen 3.6 27B vs Gemma 4 31B - making Packman game! rss

      Qwen 3.6 27B vs Gemma 4 31B - making Packman game! | Gemma just crushed Qwen in a local LLM gamedev contest! Device: MacBook Pro M5 Max, 64GB RAM.

      • Qwen 3.6 27B: 32 tokens/sec Ā· 18m 04s Ā· 33,946 tokens
      • Gemma 4 31B: 27 tokens/sec Ā· 3m 51s Ā· 6,209 tokens

      So what is more important: tokens per second, or the quality of the final answer? Qwen made a very long response and showed more creativity and visual style. But Gemma gave a shorter, clearer, and more logical answer in much less time. In this one-shot Pac-Man gamedev contest, Gemma 4 31B was the clear winner. Its game logic was stronger: click reactions were smoother, and it handled interactions with elements like walls, ghosts, and particle effects better.

      Open Source Local AI Models Server: atomic.chat

      Basic Prompt: Create a single standalone HTML file for a complete playable Pac-Man–style neon arcade game. Use only HTML, CSS, JavaScript, and one full-page canvas. No external libraries or assets—everything must be procedurally drawn and run immediately in the browser. Generate a compact (~21Ɨ21) symmetrical maze programmatically (no ASCII). It must be fully connected, playable, and use tile types (wall, path, pellet, power pellet, ghost spawn, Pac-Man spawn, fruit spawn). Ensure no unreachable pellets or invalid spawns. Canvas must fill the window. Center and scale the maze dynamically using available space (no fixed tile size). Reserve space for a HUD. Game states: title, playing, paused, life lost, level complete, game over. Include controls (keyboard + mobile). Title and game over screens must show instructions. Pac-Man: smooth tile movement, queued turns, no diagonal movement, no clipping, wraps through side tunnels, resets after life loss. Ghosts (4): simple pathfinding with distinct behaviors, spawn in a central house, exit with delays, move only on valid paths, never freeze.

      Gameplay:

      • Pellets (+10), power pellets (+50), fruit (+500), ghost chain scoring (200→1600)
      • Power mode (~8s, min 3s): ghosts become edible and return to spawn when eaten
      • Combo multiplier for quick pellet collection
      • 3 lives, level progression increases difficulty
      • Store high score in localStorage

      Extras:

      • Fruit spawns near center temporarily
      • Visual polish: neon maze, glowing elements, animations, particles, screen effects
      • HUD: score, high score, lives, level, combo, power timer

      Technical:

      • Use requestAnimationFrame with delta time
      • Keep performance stable (limit particles)
      • No bugs: avoid invalid movement, stuck entities, unreachable areas, or crashes

      Final output: only the complete HTML code.

      submitted by /u/gladkos
      [link] [comments]
      ---|---
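
      The prompt's "requestAnimationFrame with delta time" requirement is the detail that keeps movement speed independent of frame rate. Neither model's actual output is shown, so purely as an illustration of the pattern, here it is in Rust rather than browser JavaScript, with made-up numbers:

      ```rust
      use std::time::Instant;

      fn main() {
          let mut last = Instant::now();
          let mut x = 0.0_f64; // position in maze tiles (illustrative)
          let speed = 4.0;     // tiles per second
          for _frame in 0..5 {
              let now = Instant::now();
              let dt = now.duration_since(last).as_secs_f64();
              last = now;
              // Scaling by dt keeps on-screen speed constant whether the loop
              // runs at 30 or 144 iterations per second.
              x += speed * dt;
              println!("x = {x:.3} tiles (dt = {dt:.6}s)");
          }
      }
      ```

      In the browser version the prompt asks for, `dt` would come from the timestamp that requestAnimationFrame passes to its callback instead of a clock read.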

    19. šŸ”— Rust Blog Raising the baseline for the `nvptx64-nvidia-cuda` target rss

      The nvptx64-nvidia-cuda target is a compilation target for NVIDIA GPUs. When using this target, the final output is PTX. Two version choices shape that output:

      • a GPU architecture (for example, sm_70, sm_80, …), which determines which GPUs can run the PTX, and
      • a PTX ISA version, which determines which CUDA driver versions can load (and JIT-compile) the PTX.

      In Rust 1.97 (scheduled for release on July 9, 2026), the baseline PTX ISA version and GPU architecture for nvptx64-nvidia-cuda will be increased. These changes affect both the Rust compiler (rustc) and related host tooling, and they make it impossible to generate PTX artifacts compatible with older GPUs and older CUDA drivers.

      The new minimum supported versions will be:

      • PTX ISA 7.0 (requires a CUDA 11 driver or newer)
      • SM 7.0 (GPUs with compute capability below 7.0 are no longer supported)

      Why are the requirements being changed?

      Until now, Rust has supported emitting PTX for a wide range of GPU architectures and PTX ISA versions. In practice, several defects existed that could cause valid Rust code to trigger compiler crashes or miscompilations. Raising the baseline addresses these issues and enables more complete support for the remaining supported hardware.

      Removing support affects users of the architectures being removed. In this case, the most recent affected GPU architectures date back to 2017 and are no longer actively supported by NVIDIA. We therefore expect the overall impact of this change to be limited.

      Maintaining support for these architectures would require substantial effort. These removals let us focus development efforts on improving correctness and performance for currently supported hardware.

      What happens when I update to Rust 1.97?

      If you need to target a CUDA driver that does not support PTX ISA 7.0 (CUDA 10-era drivers and older), Rust 1.97 will no longer be able to generate PTX compatible with that environment. Similarly, if you need to run on GPUs with compute capability below 7.0 (for example, Maxwell or Pascal), Rust 1.97 will no longer be able to generate compatible PTX for those GPUs.

      Assuming you are targeting a CUDA driver compatible with CUDA 11 or newer and using GPUs with compute capability 7.0 or newer:

      • If you do not specify -C target-cpu, the new default will be sm_70, and your build should continue to work (but will no longer be compatible with pre-Volta GPUs).
      • If you currently specify an older -C target-cpu (for example, sm_60), you will need to either:
        • remove that flag and let it default to sm_70, or
        • update it to sm_70 or a newer architecture.
      • If you already specify -C target-cpu=sm_70 (or newer), there should be no behavioral changes from this update.

      For more details on building and configuring nvptx64-nvidia-cuda, see the platform support documentation.
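
      For readers who have not used this target: the sketch below shows roughly what a kernel crate looks like, assuming a nightly toolchain, the target added via rustup (or core built with -Zbuild-std=core), and a cdylib crate type so the build emits a .ptx artifact. The function name and body are illustrative, not taken from the post; with Rust 1.97, building with -C target-cpu=sm_70 (or newer) is the supported configuration.

      ```rust
      // lib.rs of a hypothetical kernel crate; nightly-only features.
      #![no_std]
      #![feature(abi_ptx)]

      // A no_std crate must supply its own panic behavior.
      #[panic_handler]
      fn panic(_info: &core::panic::PanicInfo) -> ! {
          loop {}
      }

      // Exported as a PTX entry point. A real kernel would index its work
      // items with the thread/block intrinsics in core::arch::nvptx.
      #[no_mangle]
      pub unsafe extern "ptx-kernel" fn scale(data: *mut f32, len: usize, factor: f32) {
          let mut i = 0;
          while i < len {
              *data.add(i) *= factor;
              i += 1;
          }
      }
      ```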

    20. šŸ”— Drew DeVault's blog I can't cancel GitHub Copilot rss

      Back when Copilot first came out, I immediately disliked it. But I decided to give it a fair shake and tried to evaluate it in good faith. I wasn’t interested in paying for it, but they had a form for FOSS community members to apply for a free subscription, so I filled it out and gave it a shot. Once approved I spent 15 minutes (successfully) convincing it to write a Python script that printed out the lyrics to ā€œAll Starā€ verbatim, and haven’t touched it since.

      Since then, like clockwork I get an email every month informing me that my subscription has been automatically renewed.

      Hi there,

      Thank you for renewing your free access to GitHub Copilot. Your access to GitHub Copilot will be reviewed on 2026-05-31. GitHub Copilot checks eligibility monthly per our policy. No steps are needed on your end.

      We hope you enjoy using GitHub Copilot and participating in the developer community.

      I’m not being charged for it, so it’s a matter of principle more than anything: I ought to be able to turn this off. But I cannot find anything in the GitHub settings which would allow me to cancel this free ā€œsubscriptionā€.

      A screenshot of GitHub's Copilot settings. Everything which can be disabled is disabled, but many features cannot be disabled. There is no obvious way to cancel the subscription.

      GitHub support has been less than helpful:

      A screenshot of a support ticket opened on March 26th asking for assistance in cancelling my Copilot subscription. I asked for an update on April 21st. There is no response from GitHub.

      How do I get rid of this thing!

  2. April 30, 2026
    1. šŸ”— IDA Plugin Updates IDA Plugin Updates on 2026-04-30 rss

      IDA Plugin Updates on 2026-04-30

      Activity:

      • claude-of-alexandria
        • 72899136: chore(initiative): mark F2 done — PR #25
        • bed0e32c: feat: create autopilot session initiative-2026-04-30-1009
        • d78a36e8: feat(skills): add version and changed metadata to all SKILL.md frontm…
      • python-elpida_core.py
        • af2d84a4: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-30T23:54Z
        • 5f01be6a: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-30T23:31Z
        • e07da4de: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-30T23:09Z
        • fb83a729: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-30T22:46Z
        • 11eca9e5: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-30T22:25Z
        • b3cf8aa5: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-30T21:58Z
        • 2ac7242f: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-30T21:35Z
        • cccf78ad: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-30T21:10Z
        • 817ff061: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-30T20:43Z
        • b0b51012: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-30T20:17Z
        • 7a1d3cc5: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-30T19:50Z
    2. šŸ”— r/york Thanks again for all the support for the charity sleep out! Over and out 🫔 rss

      Thanks again for all the support for the charity sleep out! Over and out 🫔 | submitted by /u/kittywenham
      [link] [comments]
      ---|---

    3. šŸ”— r/LocalLLaMA AMD Halo Box (Ryzen 395 128GB) photos rss

      AMD Halo Box (Ryzen 395 128GB) photos | This demo unit was running Ubuntu and the light strip is apparently programmable. submitted by /u/1ncehost
      [link] [comments]
      ---|---

    4. šŸ”— Evan Schwartz Scour - April Update rss

      Hi friends,

      In April, Scour scoured 778,059 posts from 25,790 feeds. This month, my focus was on ranking improvements and adding a number of new features:

      šŸ”ƒ Ranking Improvements

      Scour is designed to find hidden gems that interest you, while trying to avoid using popularity signals or pigeonholing you into a narrow slice of content simply because you clicked on one thing (you can read the ranking philosophy here).

      Your Scour feed now subtly adjusts based on which content you click on, like, or dislike. Interests whose related content you like get a small boost, as do posts from domains you tend to like. This effect is intentionally subtle.

      The feed is also much better now at balancing across your different interests. I revamped the way it does the final content selection to have an explicit diversification step that balances the feed based on your interests, the sources, and other criteria.
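
      Scour's actual selection code isn't shown here, so as illustration only: one common shape for an explicit diversification step is a round-robin over per-interest buckets, so a page mixes interests instead of letting the single highest-scoring topic fill it. A toy sketch in Rust, with all names and data made up:

      ```rust
      use std::collections::VecDeque;

      // Round-robin across per-interest buckets (each already ranked best-first)
      // so no single interest dominates the final page.
      fn diversify<'a>(buckets: &mut [VecDeque<&'a str>], page_size: usize) -> Vec<&'a str> {
          let mut feed = Vec::new();
          while feed.len() < page_size {
              let mut progressed = false;
              for bucket in buckets.iter_mut() {
                  if feed.len() == page_size {
                      break;
                  }
                  if let Some(post) = bucket.pop_front() {
                      feed.push(post);
                      progressed = true;
                  }
              }
              if !progressed {
                  break; // every bucket is exhausted
              }
          }
          feed
      }

      fn main() {
          let mut buckets = vec![
              VecDeque::from(vec!["rust post 1", "rust post 2", "rust post 3"]),
              VecDeque::from(vec!["cooking post 1"]),
              VecDeque::from(vec!["science post 1", "science post 2"]),
          ];
          // Takes one item from each interest in turn: rust, cooking, science, rust.
          println!("{:?}", diversify(&mut buckets, 4));
      }
      ```

      A real version would also weight by source and recency, as the post describes, but the interleaving step is what produces the balance across interests.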

      ā†•ļø Tap to Expand

      Scour's interface has undergone a number of iterations this month. Now, you click or tap a post to expand it. The expanded view contains a short snippet from the post with a link to read more, as well as buttons to save, react, report it, etc.

      šŸ“‘ Saved Posts

      Want to save an item to read for later? You can now save items, which is separate from liking them. Saved items are private and don't affect your feed's ranking at all. Also, Scour will occasionally resurface a couple of your saved items while you're browsing your feed so you can revisit things you might not have had time to read before.

      šŸ“– Reading Posts on Scour

      You can read post summaries and some entire posts directly on Scour. Click on Read More, which is shown when you click on a post, to go to the post preview page. That page has better styling now, so it should be nicer to read. Plus, code blocks now get automatic syntax highlighting.

      šŸ± Browse Interests by Category

      You can now browse popular interests by category. Technology is broken out into subcategories, or you can easily skip past it to find other topics like Science & Nature, Food & Cooking, Arts & Design, etc.

      🌐 Post List by Domain

      Clicking on a post's domain now brings you to a chronological list of all the posts from that site and, optionally, all the subdomains. You can easily block domains on that page if you don't want any of their content appearing in your feed, or just browse to see what else was published.

      šŸ”¢ Pagination by Default

      The default feed view switched from infinite scrolling to paginated. You can click the link at the bottom of the page to use infinite scroll, or toggle this in your settings.


      šŸ™ Thanks

      Thanks to Gordon McLean for the Scour mention in Why I Still Like the Internet!

      And thanks to everyone whose feedback shaped the roadmap this month:


      šŸ”– Some of My Favorite Posts

      Here were some of my favorite posts that I found on Scour in April:

      For Rust developers, I also wrote up this blog post: Your Clippy Config Should Be Stricter.


      Have ideas for how to make Scour better? Post them on the feedback board!

      Happy Scouring!

      - Evan

    5. šŸ”— sacha chua :: living an awesome life YE24: Sacha and Prot Talk Emacs - Newbies/Starter Kits rss

      Update: Added chapters, transcript, and Prot's defaults.

      Here are the settings Prot recommended during our chat.

      The Emacs Carnival theme for April 2026 is newbies/starter kits. I chatted with Prot about helping people get into Emacs and also supporting lifelong learning.

      Prot had some notes on how he started with Emacs in 2019 in All about switching to Emacs (video blog) | Protesilaos. These notes were just a few months after he started, so his experience was pretty fresh.

      In Computing in freedom with GNU Emacs | Protesilaos (2026), he said:

      Remember that I started using Emacs without a background in programming. … I learnt the basics within a few days. I started writing my own Emacs Lisp within weeks. And within a year I had my modus-themes moved into core Emacs.

      Prot has several projects that might be of interest to many newcomers to Emacs:

      • modus-themes, which are part of Emacs core and are therefore just a M-x load-theme or M-x customize-themes away
      • Emacs Lisp Elements, a book that helps people learn Emacs Lisp
        • Where does this fit into people's learning journeys? How can they come across it and use it?
      • perhaps Denote
        • What would it take for people to learn enough to be able to use this?

      I'm also curious about his thoughts on the general Emacs newcomer experience and what we can do to make it better.

      He also offers Emacs coaching. I wonder if any newbies have taken advantage of that. There are a few other coaches listed on the EmacsWiki. (Ooh, Emacs buddy, that was neat.)

      Other possible topics: Philip suggested the following general themes for the Emacs Carnival:

      • What are your memories of starting with Emacs?
      • What experiences do you have with teaching Emacs to new users?
      • Do you think starter kits are more of a hindrance in the long term, or necessary for many users to even try Emacs?
      • What defaults do you think should be changed for everyone (new and old users)?
      • What defaults do you think should be changed for new users (see NewcomersTheme)?
      • What is the sweet-spot between starter-kit minimalism and maximalism?

      Chapters

      I'll tweak the notes and timestamps later. Just wanted to put something up quickly!

      • 0:00 Warming up
      • 2:28 C-g is supposed to get you out of everything, but it doesn't work for the minibuffer
      • 2:28 Anything related to display-buffer is hard for people to configure. Many windows do not focus by default. You have to switch to the other window to q.
      • 4:28 Good defaults
      • 4:28 How do I set my fonts? Which is the one I should be using?
      • 4:28 Other common settings and packagings
      • 4:28 ediff is unusable by default for everyone, not just newcomers
      • 5:28 Packages to install
      • 6:28 People muddle through. There isn't a curation of content. 10 different ways = too confusing for newcomers because they can't weigh the pros and cons.
      • 7:28 the wiki might be a good approach for the community. Start here.
      • 9:28 the direction of the newcomers theme is nice. Does it work in practice?
      • 11:28 minor mode?
      • 11:28 people think of themes as styles, not arbitrary customizations. Maybe a package instead?
      • 14:28 Listing changes for newcomers-presets
      • 15:28 Terminology is also a challenge: completion, minibuffer, orderless, etc. vs what a new user might try to say (search box, …).
      • 16:28 Clusters of configuration; maybe aliases in the documentation to find things (ex: aliases in the concept index)
      • 17:28 Blank slate, didn't have to unlearn terms (ex: narrowing, window)
      • 19:28 Emacs Lisp Elements: Prot recommends it to people who have already decided that Emacs is the right tool for them.
      • 20:28 Getting the hang of Emacs
      • 22:28 Getting help when you have a starter kit
      • 24:28 Customize is overwhelming for beginners unless it's just a toggle or a selection list. It's good for discovery. Can't be copied and pasted into the configuration, though.
      • 27:28 debug-init
      • 28:28 Getting help: partially bridged by LLMs?
      • 30:28 Things people don't even know about
      • 32:28 Filling in the blanks; recursive
      • 33:28 .emacs
      • 36:28 Discovery: info manual: g, i, but you have to have completion already set up
      • 38:28 Address your immediate need, small steps. Piecing together your system.
      • 40:28 Let's understand what your needs are.
      • 40:28 :config and setq is nicer than :custom for C-x C-e purposes (eval-last-sexp)
      • 45:28 culture of documentation and sharing
      • 47:28 Link to a search
      • 50:28 Getting through the gap between beginner tutorials and the next step
      • 48:28 Predictability, popper
      • 52:28 Earlier is better than later for Emacs Lisp. Take it as is. show-paren-mode is helpful.
      • 55:28 Before and after comparisons
      • 56:28 user-init-directory
      • 57:28 Emacs core
      • 58:28 Getting past the initial awkward phase; people
      • 59:28 Even reporting an issue is a great contribution
      • 1:00:28 Wrapping up: wiki gardening,
      • 1:02:28 Core longevity

      Transcript

      00:00:08 Warming up

      [Sacha]: All right. Hello, this is Yay Emacs 24, I think. And today I'm going to be talking to Prot, who is going to join eventually. In about five minutes is our scheduled time. And I want to pick his brain about newcomers, the newcomer experience for Emacs, the starter kits, what we can do to make it easier for people to get into Emacs, and how we can support lifelong learning. So let me spend a few minutes here getting all set up so that if you have any questions, you can use the YouTube chat during the live stream so that I can read your questions out loud to Prot. And also so that I can share everything. I think my audio is working. And also in the meantime, I can tell you what I've been doing lately. I have just posted a guide to newcomers presets, which is a new feature in Emacs 31. It's a theme that enables a bunch of defaults. Sorry, that changes a bunch of defaults to make it a little bit nicer for people. And let's see, what was that? I don't know what that sound just meant. Okay, Prot, it says he's in the Google Meet room. So I will now admit him. And I think we should be live. Fantastic. Hello. Hello, hello. All right.

      [Prot]: Hello, Sacha. Good day.

      [Sacha]: Hello, Prot. Good day. Thank you for joining early. I was just doing my pre-session panicking and warming up. But since you're here and since I have a hard stop in about one hour, a little over one hour since I have to make the kid a grilled cheese sandwich, let's dive right into it.

      [Prot]: Yes, yes. The grilled sandwich cannot wait.

      [Sacha]: No, no, no. She'll be hungry. So, the theme for the Emacs Carnival this month was newbies and starter kits. And it gives us a good excuse to start thinking about: how do we make the Emacs experience better for new users? Now I know you have probably run into a lot of new users from the talks that you've been giving, the packages you make, everything, the coaching. So tell me about what you've been thinking about this so far.

      00:02:36 C-g is supposed to get you out of everything, but it doesn't work for the minibuffer

      [Prot]: Yeah, yeah, yeah. So broadly speaking, there are a few pain points that I think every new user experiences. One is the behavior of C-g. The fact that you have the mini buffer open and you do C-g because C-g is supposed to get you out of where you are and the mini buffer will stay open by default. And I have seen people struggle live. It's like, oh, I am, you know, they have the mini buffer open, they click somewhere else, then they type C-g, the mini buffer stays there, and they're like, what is happening? Why is this not working? It stopped working. That's the one thing.

      00:03:11 Anything related to display-buffer is hard for people to configure. Many windows do not focus by default. You have to switch to the other window to q.
      [Prot]: The other thing that many people, not just beginners, struggle with is anything related to display buffers, which can be configured, of course, via the display-buffer-alist. And some of the common pain points with that are the fact that many windows do not focus by default. For example, you open a helper buffer, it doesn't focus the window by default. So if you want to type q to dismiss it, you have to switch to it, then type q. You do a care, it doesn't focus a care by default. You have to go there and then interact with it. These sorts of things. And then there are a few other things. I have written some settings that I can share with you as well. Maybe I can, I don't know, email them to you and then you can... I don't hear you now. One second.

      [Sacha]: Sorry, I turned on mute. Do you want to share your screen? Because that's another thing you can do.

      [Prot]: Yes, of course, of course, of course. But I meant to say that, so I have this here, and I was of course about to write a blog post and all that. Let me increase the font size. Is this font size okay or is it too small?

      [Sacha]: Oh, this is good. Yeah, yeah, yeah.

      [Prot]: Okay, so I have written a few things, so I don't have to go through all of them.

      00:04:28 Good defaults
      [Prot]: These are good defaults based on what I have noticed.

      00:04:35 How do I set my fonts? Which is the one I should be using?
      [Prot]: Another one is, how do I actually set my fonts, right? Because there are like a million ways to do this as well. And people are like, okay, but which is the one that I should be using? And of course, when I pick one option, I don't mean to say that this is the right option, but it's just to not be technical about it. Like, okay, just use this and forget about it. A few other settings and a few common packages. And at the end of this... Oh, sorry. I have to really make this point.

      00:05:13 ediff is unusable by default for everyone, not just newcomers
      [Prot]: Out of the box, Ediff is literally unusable. I cannot excuse that. Everything else I can excuse, this is not excusable. Sorry. This is the minimum viable setup for it.

      [Sacha]: So maybe that's something to suggest for newcomer presets or maybe even the defaults.

      [Prot]: I would say the defaults. This is not a newcomer thing. Basically, if you want to have that default layout, you just have to opt into it. Sorry if I'm offending anyone, but I don't mean to say that. You have to consider the ergonomics of it.

      00:05:52 Packages to install
      [Prot]: Then there are some packages, third-party packages, that I recommend for installation. This is not exhaustive. I try to be minimalist here. So, of course, there are many, many good, excellent, top-notch packages that I don't recommend here. And, for example, I don't recommend any of my packages here. But I just included some for people to get started.

      [Sacha]: So it sounds like we should have a Prot starter kit.

      [Prot]: No, no. I already have too many packages that I maintain.

      00:06:28 People muddle through, but it's confusing

      [Sacha]: It also sounds like you are talking to a lot of newbies and you are hearing about a lot of pain points and frustrations. How are people finding information in the first place? How are people finding this information? Do people tell you about their experience of getting into Emacs? Where are they finding the stuff? How do they find their way to you?

      [Prot]: Generally they muddle through. So they will find a blog post, they will find a video, they will just do some search. Now, of course, there is also LLMs providing feedback. So it's a combination of all those and they try to piece together whatever kind of knowledge those sources provide. The thing with the newcomer experience is that there isn't a curation of content. Like of course you were doing that thing with the wiki, right? So of course you are working towards that. But what I mean is there are like options like, oh, you can do it in these 10 different ways. But for a newcomer, this is just details that don't make sense. Because the newcomer cannot weigh the pros and cons of each option, or even if they have pros and cons, or they are just different ways of expressing the same intent. Such as with the fonts, for example. You can do the frame fonts, or the faces, or whatever.

      [Sacha]: Okay, so if there was something more curated, what would that look like? I know you spend a lot of time thinking about the, you know, the information architecture of your documentation, which is the lovely thing about your pack, one of the many lovely things about your packages. But what could that kind of newcomer experience look like for documentation?

      00:08:20 The wiki might be a good approach for the community. Start here.

      [Prot]: What you were doing with the wiki, I think, is the right approach from a community perspective, meaning like, yeah, here is the single point of entry. Take it from there. Basically, don't look elsewhere. Start with this. No matter what you do, start with this. I think that's a good approach, and basically in the community we should be agreeing on that. I didn't see all of your videos yesterday, I don't have the time to watch all of it. But basically the Emacs subreddit is where a lot of people find information. That's the first thing that should be on the sidebar, or it could even be pinned at the top of the tips and tricks section, the thread there. So that's the one thing. Yes, please.

      [Sacha]: Yes, so the Emacs subreddit does have in its sidebar a link to the Emacs Wiki. Not calling out the Emacs Newbie page specifically, but there is a page. There's a link to the Emacs Newbie page from the Emacs Wiki homepage, I think. But yeah, as long as we can come up with a reasonably coherent starting point for people, then that will inevitably show up in people's recommendations as they respond to all these threads.

      [Prot]: Yes, yes, very well, very well.

      00:09:33 The direction of the newcomers theme is nice
      [Prot]: I also like the direction of the newcomers theme. I don't know exactly now if the newcomers theme works in practice. Like, I don't know what happens if you do M-x disable-theme, or specifically what I mean. (image from video) [Prot]: But what I mean is, if you do this: mapc disable-theme over custom-enabled-themes, maybe you have seen this, right? So you want to disable all the other themes before loading your theme, right? I'm sure somebody has written something like this, maybe I have done it. And then it's like, you know, load your favorite theme now, right? And then you do your favorite theme or whatever. For example, here. So in this case, I don't know what happens to the newcomers theme. I will assume that it will disable it. In which case, I think that has to be prevented.

      [Sacha]: Oh, but then it wouldn't be treated the same as other things.

      [Prot]: Which you can do. Which you can do, for example, if I go to Fontaine. And of course, I got this from use-package. But you can do it with a synthetic theme. So there is a little trick you can do.

      00:10:45 Themes versus minor modes

      [Sacha]: I was looking at newcomers presets recently, and when I was trying to make instructions for people to actually use this stuff, I ended up leaning towards just telling them to use either the splash screen, of course, or M-x customize-themes, from where they can check and uncheck things if they wanted additional themes layered on top of that. But it's not like you can uncheck it and then all of your settings go back to what they were before. Some of the things are still left over.

      [Prot]: That's why I like the direction. I'm not sure if it should be a theme though. I think it should be a minor mode. And the minor mode should be like here is the opinionated settings and here are the default settings.

      [Sacha]: Do we already have like a mechanism for letting minor modes override the variables in a nice way but let you go back to the previous version? Because it's not just restoring the default customized ones either.

      [Prot]: I do something like that in Logos, but I'm not sure, to be honest, right now how I even do it. Set arg and maybe. This was a long time ago, so I cannot even recall what exactly I was doing. But actually, this was contributed by Daniel Mendler, so of course something like this could be added to core Emacs as part of the newcomers theme eventually. If not, somewhere in core anyway.

      00:12:19 People think of themes as styles, not arbitrary customizations
      [Prot]: Basically, I like the idea, I don't think it's the right tool. Because themes are... It's also confusing language, you know? Because theme, when you talk to the average person, they will think of the style. And they won't think about arbitrary customizations. Whereas in Emacs we have this idiosyncratic conception of theme where it's like any kind of a user option as well as faces.

      [Sacha]: So it sounds like it would be better if it were a package that defined a minor mode that people could turn on and off.

      [Prot]: Even better, yes, exactly.

      [Prot]: And there is this user option. I forget, do I even have it here for the built-in packages? I don't remember if I added it here. No, there is something like update the built-in packages. Yeah, so there is an option like that. So, of course, it could be like built into Emacs 31 as well as ELPA, kind of like Eglot. And then users could be like, okay, update this. So going forward, they can also benefit from whatever comes from Emacs 31. Or, you know, the development target of Emacs going forward.

      00:13:55 Listing changes for newcomers-presets

      [Sacha]: One of the challenges that I encountered when I was starting to play around with newcomers presets or other things like that is that it turns on all these options, but there's no easy way for people to say, okay, this is what has changed, this is how to use it. So I've started documenting that. And I think this is a challenge generally for many of the starter kits. It already takes a lot of work to make the configuration and maybe answer people's questions. It's a tricky situation, how best to do it.

      [Prot]: I guess the natural place for that is the manual. And the manual, I believe right now the manual mentions something along the lines of, well, newcomers can just toggle this on kind of thing, but it doesn't really tell them what that will entail. So I think it's worth actually keeping track of all the changes and be like, well, the newcomers theme will change this and that and the other. And it could just be a bullet point of items. Maybe it doesn't have to go into all the technicalities like, hey, we are changing, I don't know, the isearch so that it shows the counter. By default, it doesn't show the counter, right? Like, it doesn't need to be as detailed. It can just say, okay, these are the user options that are affected.

      [Sacha]: or the minor modes that are enabled. You know, the specific commands and variable settings, whatever. It's like, how do I combine these different concepts to do something? Or taking a step back further, something we've talked about in previous conversations, how do I even begin to learn this overwhelming number of concepts? You know, how do I start to memorize all these keyboard shortcuts? And I'm not sure we have a lot of support for that yet.

      00:16:10 Terminology is also a challenge

      [Prot]: No, because I think part of the challenge here is the terminology. For example, if we say completion, like me and you and other users, we kind of know what we are talking about, right? So minibuffer and orderless and all that, right? But if the user wants to express something along those lines, they may say the search box. Or, you know, like the interaction panel or whatever. So they don't have the language of the completion framework or the mini buffer or whatever. So even then it can be tricky for them to kind of narrow down what they are searching for.

      00:16:52 Maybe documentation aliases?
      [Prot]: It might help to also think in terms of clusters of configuration, kind of like what starter kits do with the various modules they define. And you can have aliases for them. Aliases in the manual, I mean. Like in the manual, if you type i, it goes to the index, right? And you can have a concept index. So you can have a concept index for the search panel or whatever. And that means the minibuffer and friends.

      [Sacha]: So it's like we're doing search engine optimization so that people can find things with the words that they use. I'm not sure that will be in the Emacs manual itself, but one of the things I've appreciated about people sharing their notes through blog posts and things like that is because they're using their words to describe a concept, and they're linking it to the code that uses the words that Emacs does. So then people can then say, oh, I'm looking for this. It's actually called this in the Emacs world. But this takes time for people to kind of make those connections.

      00:17:56 Learning Emacs as a nonprogrammer
      [Sacha]: Can you look back to like 2019, when you were learning all of this stuff for the first time? What was it like for you as a non-programmer to come into this world where people are using all these strange terms?

      [Prot]: Yeah, it was a challenge for sure. But I think actually the fact that I started out as a beginner, as a beginner into programming, I mean, benefited me in the sense that I was a blank slate. I don't have to unlearn terms. So I didn't have a concept of, okay, in other, I don't know, programming IDEs, for example, they call this the narrowing framework or whatever. I was like, completion. Okay, let's move on. It was the first time I was introduced to such concepts. So I think in that sense, I was lucky. That granted, there is a lot of reading involved. I was reading the manual and learning from it.

      [Sacha]: And that's something I do too. I mean, I'll still casually flip through the Emacs manual or the Org manual because every time you read it, there's something else that catches your eye and makes you think, how do I use that? How do I do that? And I like that, you know, you and Mickey Petersen and other people have also been organizing these thoughts into a linear arrangement of logical progression. So there are the books. There aren't a lot of books about Emacs that people can read.

      00:19:29 Emacs Lisp Elements
      [Sacha]: What about your Emacs Lisp Elements? How do we support their learning journey from, I have absolutely no idea how to do anything in Emacs, to, okay, I'm ready to read this book and get stuff out of it?

      [Prot]: Yeah, yeah. When I recommend that book, I recommend it to people who have already decided that Emacs is the right tool for them. So I would basically say, look, Elisp is for you if you are already sold on Emacs, because what Elisp gives you is that extra you need to make Emacs do what you want, basically to tap into the potential programmability of Emacs. But to get to that point, you have already been convinced that you already like Emacs. If you don't vibe with it at the outset, you won't learn Elisp, not least because it's a niche language.

      00:20:28 Getting the hang of Emacs

      [Sacha]: Okay, so how do we get people to the point where they can vibe with Emacs? Where they can appreciate it? Because when they start off, it's this clunky text editor that has these weird keyboard shortcuts and strange terms, and all we can do is offer them videos and blog posts from people who say, this is totally awesome. I've been using it for three years or 20 years or whatever, and I love it. That's the light at the end of the tunnel, but there's a lot of tunnel to get through.

      [Prot]: Correct, correct, correct. It's difficult, and I think that's why something like the newcomers theme ultimately is the way forward, where it's like, yeah, opt into this and that's already a good set of defaults. And I think what really matters is to reach a point where you can actually open your files, actually move around, and that happens with the very basics. That happens with the tutorial already. What the tutorial doesn't give you is the basic interface, such as the mini-buffer. The default mini-buffer, I don't think it's good for beginners. Actually, maybe it's not even good for advanced users, but that's another topic. You have to have a few of the basic packages enabled, and then the tutorial, I think, is enough for that initial push. Then, of course, it's also up to the user to do some reading, based on what you will provide them with.

      [Sacha]: I know when I was trying this, I started a fresh Emacs so that I could see what it's like when people don't have their accumulated cruft of 20 years of configuration. And I was like, I need some kind of completion that I don't have to keep pressing tab for. So maybe Fido vertical mode can be part of that, you know, standard, at least in ?? or whatever, that would be nice. But yeah, there are a lot of these niceties that reduce the friction enough that people can then start enjoying things more and more.

      00:22:28 Getting help when you have a starter kit
      [Sacha]: They're great at getting people over that initial hump. But the challenge with starter kits and probably things like the newcomers presets has also been that when people ask for help, it's hard because they don't know the things that have changed under the hood. So they're asking for help and the people who are helping them are like, I don't know what's going on there.

      [Prot]: More so if the starter kit has its own macros and way of doing things, such as Doom Emacs. On the one hand, Doom Emacs does an excellent job at integrating everything, providing a polished experience, comprehensive configuration and so on. On the other hand, they have their own way of doing things like they have their own macros. You have to use Doom sync or whatever to do things from the command line. So somebody who is not using Doom basically has no means of knowing what is happening in that world. So that is definitely a challenge. So for me, a good starter kit is one that at the very least uses what a generic configuration would use, meaning no macros, no weird shell scripts and that sort of thing.

      [Sacha]: And I did spend some time going over the starter kit list in the Emacs wiki to try to sort it from minimalist, stays close to vanilla, all the way to changes a lot of things about Emacs, where you probably should ask the community of that starter kit first if you need help. So Doom Emacs and Spacemacs are at that end of the spectrum, and things like better-defaults would be at the other end, just a little bit of smoothing over of things. But then also, it was interesting to see some of the starter kits focus on saying, okay, you don't have to write any code to extend this further. A lot of the things are available through Customize.

      00:24:25 Customize is overwhelming for beginners
      [Sacha]: Which is overwhelming for a newcomer. So how do we get people to the point where they might feel comfortable going through this Customize interface and saying, oh, I can find what I want to change and I can change it, and I'm not worried about breaking everything?

      [Prot]: Yeah, I actually, when I was trying to use Customize with people, I gave it an honest try. Like, for example, we tried to use Customize for the org-capture templates. And I was seeing it live. Impossible for people to understand what is happening. Like, Customize has this concept of the insert button, right? So if you have a list of things, you can do insert to add the next element to the list. If you have an Elisp understanding of what you are actually interacting with, you kind of know what to do, right? But otherwise, I was seeing it live. It's like... I have no idea what is happening. What is this? So for me, my approach is basically skip Customize altogether. For me, it's a lost cause. Unless it's completely rewritten, I mean in its current form, it's not good for beginners unless it's for toggles, like true or false kind of thing. If it's for anything more involved, it's not good. And what it is good for is for discovery, discovery of user options. But it presents the user options in a human-readable format which you cannot just copy-paste into your configuration. So, for example, it doesn't have the dashes for the names.

      [Sacha]: Yeah, and getting it out of the customized variables if you wanted to keep a nice clean Emacs is hard. Although I would say that's more of an intermediate level concern, when they start caring about having a beautiful Emacs that other people can learn from. A couple of comments in from people who are watching the stream. Hello, folks! Hello! @hajovonta6300 says, "Hi legends." @JacksonScholberg and @petertillemans2231 say, well, @JacksonScholberg says hi. @petertillemans2231 says, "I am not worthy." @takoverflow says, "Thank you for these streams." @ShaeErisson says, "I love Emacs but haven't really learned Elisp." And I know Shae has been using Emacs for a long time. So that's interesting that you have people who enjoy using Emacs. I don't know whether something is getting in their way when it comes to learning Emacs Lisp or whether it's just totally fine already the way it is. So that's different things. Oh, and @hajovonta6300 says, "you are worthy if you are willing to learn." Maybe the resources are there as people start digging into Emacs Lisp. Maybe the combination of looking at other people's source code and trying to ask on Reddit or whatever is enough. @JacksonScholberg says, "I vibe with Emacs after using other text editors that were not minimalist enough for my preferences, plus having experience with other open source software like Linux." @petertillemans2231 says, "Well, Emacs and minimalist in the same sentence. Strange concept, but I know what you mean." There's a whole spectrum of things you can do with Emacs, right? So yeah, people can just use basic Emacs.

      00:27:53 debug-init
      [Sacha]: "I guess learn starters quickly to use emacs --debug-init. Maybe not in the first hour, but close to it. Close to tweaking.

      [Prot]: Yeah. Which of course doesn't help. It's very useful, of course, but it doesn't help beginners because they cannot read the backtrace.

      [Sacha]: Yeah, it is hard to navigate even for people who are experienced like there's a whole bunch of things and what you need to change is like a small thing and you don't know about edebug and all that other stuff.

      [Prot]: But of course, when you are debugging, many times it is a lifesaver, for sure.

      [Sacha]: Yeah, and I think a lot of these things can be stepped around if you have, you know, someone like you, someone more experienced with Emacs, to watch over your shoulder either in person or virtually and say, you know, do it this way instead, or, have you heard about this package? But this is an experience that I think not a lot of people have, because many times they're isolated, right? They're the only Emacs person they know around them. And maybe they'll go to a meetup, but maybe they're intimidated by the idea of asking about their beginner problem with all these other people talking about arcane Emacs Lisp things. So how do we get people to the point where they can get help?

      00:29:06 Getting help: partially bridged by LLMs?

      [Prot]: Yeah, I think this is partially bridged. This gap is partially bridged by LLMs. Like a lot of people will just check with a bot and get something useful out of it and basically continue from there. And that's why I said earlier they muddle through because LLMs of course will give you what you ask. So if you kind of don't know what to ask, you will get something that may be useful, maybe needs a further tweak to it. That's why sometimes it's hit or miss.

      [Sacha]: And I am seeing that in a lot of the discussion threads now. Of course, people are concerned about the environmental impacts and the ethical considerations around large language models, but there are also people who are saying, you know, this is what helped me write my first bit of Emacs Lisp, or this is what helped me figure out how to configure Emacs to do the thing that I wanted to do. So for that, I'm like, okay, then maybe there's something there. Challenge, of course, if it's hallucinating something, you're like, no, that function does not actually exist. You got to do it this other way. But if you can get them over some of the humps, maybe that's useful for them.

      [Prot]: Yes, yes, yes. I think, of course, it's not 100% good, but I think it is, on the balance, I think it is good.

      [Sacha]: So when people are too embarrassed or too intimidated to ask people in person, and when I go to these meetups, everyone's always super friendly. Sometimes we're live debugging someone's configuration or someone's function in real time. But sometimes that is a little difficult for people to get to for schedule or other reasons. There are other ways to understand something and ask questions about it and figure it out.

      00:31:01 Things people don't even know about
      [Sacha]: Some people don't even know what to ask questions about. How do we help people in that situation, where they don't even know that they're doing something inefficiently and that the solution for their problems is just one package away? How do we help?

      [Prot]: That's difficult because it's on a case-by-case basis. I think you cannot optimize for that because each person will have different intuitions or different pain points, let's say. And maybe you can do it by having the most exhaustive kind of documentation with the equivalent of search engine optimization, as you were saying earlier. But I think eventually people will still have questions and even the formulation of the question may be idiosyncratic. So even if the concept is there, the way it is presented, you might not have a perfect match.

      [Sacha]: And the idiosyncrasy of things is definitely a challenge for us when we're working with Emacs, because everyone has their own way of doing things and everyone therefore has their own... how they set it up, or the keyboard shortcuts that they use, or the ways that they want the functions to work. Even trying to write documentation to say, if you're learning this, you might want to check out this stuff next, I have a hard time figuring out how to make that make sense to as many people as possible without overwhelming them with 20 different questions.

      00:32:42 Filling in the blanks

      [Prot]: That's the difficult part. Actually, I think that's the part where you have to assume that people will fill in the blanks. For example, I think yesterday you were doing this thing where, well, somebody needs to use Git, but what is even Git? So you have to even know about Git, right? And that's recursive because, well, how do you install Git? Well, you need a terminal. What is a terminal, right? Well, you need to have this thing called Linux. What is a Linux? So basically at some point you have to just say like I will give you as much as I can but I will limit it to the scope of this like Emacs basically. Because otherwise it has infinite scope.

      [Sacha]: And I find that hyperlinks help a lot with that then because we can say, if you need a more detailed description, you can go over there. So now I'm trying to make it easier for myself whenever I say, oh yeah, put this in your .emacs.

      00:33:37 .emacs
      [Sacha]: I point them to the Emacs wiki page on init files. Because there's this whole discussion that you have to have about what is your .emacs, and sometimes it's actually your .emacs.d/init.el but sometimes it's actually your .config/emacs/init.el, and, like, I can pass that off to a page to explain all that stuff.

      [Prot]: Actually I want to say something about this because now it reminded me. So many people nowadays will use .emacs.d/init.el or .config/emacs/init.el But Emacs defaults to reading the .emacs file from your home directory. And I had this case where a user was writing their init file in one of those specified locations, but they did something with Emacs Customize beforehand and Emacs Customize wrote to the .emacs file. So they were loading Emacs and nothing was showing up and they were like, what is wrong? My init file is there. Why is it not working? I'm loading, you know, this dark thing. Why is it white? or whatever. And eventually it was because of the .emacs file. I'm not sure how best to resolve that given that you want to also be backward compatible.

      [Sacha]: No, no, no. Okay. So when I tell people just, you know, here's the link to the init file page in the Emacs wiki, it also includes a describe-variable user-init-file, which will tell you which one is actually loading. And I have a to-do to suggest on emacs-devel, if they haven't already discussed it endlessly, that maybe there should be kind of like a M-x find-user-init-file that just opens that specific file. Would be nice. But yeah. Going back to the chat because people have been sharing great comments as well. Shae says, "I learned about new Emacs packages by pairing with other users and asking, how did you do that thing?" Which I think is a great thing for screencasts. People sharing videos as well because when people share a video, sometimes they see things that they wouldn't have mentioned because they totally take advantage of it. It's just something they take for granted. For example, in your live stream package maintenance sessions, I'm sure you've had this a couple of times. People are asking, what is that that you just did? Videos are great for this.

      [Prot]: Let me open the door for my puppy. I'll be back.

      [Sacha]: In the meantime, let's see if there's anything here I can address by myself. The puppies cannot wait.

      [Prot]: No, the puppies cannot wait.

      [Sacha]: Small mammals in general are like, they need us, they need us. @hajovonta6300 says, "I used Emacs since 2010 and had become a power user, but in the last year, I feel LLMs took over most of the tasks I usually solved with Emacs." Actually, it's a bit of a tangent here, but we're seeing that also with some long-term users of Emacs moving on to other editors, because whatever they had customized on top of Emacs could be replicated by a custom application written by an LLM. The movement is going both ways: people leaving Emacs for other things, people coming into Emacs because LLMs can help them with stuff. So I just wanted to mention that, because things are happening.

      00:37:04 Discovery and the info manual
      [Sacha]: "Emacs documentation is very extensive, but I feel discovery of the docs is a problem for new users." And I want to dig into that a bit more. How do we help with this discovery thing?

      [Prot]: In the Info manuals, if you know two key bindings, it really helps a lot. One is g, the other is i. But you have to have completion already set up, with vertico-mode, for example.

      [Sacha]: I also like using s for search.

      [Prot]: Or s for search. Those help a lot, because then you can jump to a node or an index. Without those, navigating the manuals can feel cumbersome. That granted, we are back to the point where the user also has to do some research on their own. You cannot compensate for drive, for motivation. No matter how much we write, no matter how many themes or minor modes we define, the user also has to be searching.
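
      For reference, those Info keys map to real commands (a note for the transcript, not part of the conversation):

      ```elisp
      ;; In Info mode:
      ;;   g  runs `Info-goto-node'  (jump to a node by name, with completion)
      ;;   i  runs `Info-index'      (jump via the manual's index entries)
      ;;   s  runs `Info-search'     (regexp search across the whole manual)
      ```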

      [Sacha]: Yeah. And it's going back to the challenge of being overwhelmed. You know, sometimes it's difficult for new users to say, okay, there's so much to learn. How do I scope this so that I don't go crazy? You know, what is the most important thing that I need to learn about first? And then what is the tiniest step after that that I can take? And so forth. Otherwise, it's just like, I want to learn about everything.

      00:38:34 Address your immediate need; small steps

      [Prot]: Based on the discussions I have had, I think the consensus is: address your immediate needs. For example, you want to write a to-do list; all you need to know at this early stage is Org Mode. And not all of Org, because Org has approximately one zillion commands. Just to-do and done. And maybe schedule a date. Just learn that, and by learning that, do that for a week, do it for a month, however long it takes for you to embed it as part of your knowledge. And then once you have done that, move on to the next thing. Like, okay, now that I am solid on my to-dos, how do I do the agenda, for example, and incrementally add to that. And the idea is that by piecing together your system this way, you achieve two things. First, you build on a solid foundation of knowledge where you know what you are doing. And second, you understand how your system is pieced together. So if something breaks, you already have an intuition of what it could be. Even if you don't know Emacs Lisp, you can guess: oh, I added this thing the other day and now my Emacs is broken, so probably the breakage is there.

      [Sacha]: And decomposing it into those tiny steps, so that you can piece them together and build slowly, understanding each step along the way, is something that new people struggle with, because they don't have the experience to know what the small step is. And I think that's where coaching and mentoring come in. If you're lucky enough to be able to sit with somebody who says, okay, your next step is just to do this, that would be super lucky. But most people will have to content themselves with, say, a playlist of videos that they can follow in sequence. Or maybe they'll post on Reddit saying, okay, I know this, what should I learn next? I just wish it were easier for us to say... Let's imagine this from the helper's point of view. How do we make it easier for people to say, all right, this is where you are, here are some things that you can look into next? What do you do when you're coaching someone?

      [Prot]: Yes, I always ask them what their needs are. There are some needs which are common. For example, completion. Vertico, for example, is something I think basically everybody can benefit from, unless you have a really special use case. But other than that, it's like, well, we don't need to fix everything. Let's understand what your needs are. Let's work towards that goal. And one way to break it down conceptually is with use-package blocks. use-package is, of course, an excellent tool in its own right, but it's also an excellent way of saying: you know what? This is one thing. This is one step. And this is the next step. And so people can start thinking in terms of each use-package block being a step.

      00:41:45 :config and setq is nicer than :custom for C-x C-e purposes (eval-last-sexp)

      [Sacha]: I sometimes feel like I'm going back and forth. use-package is nice because it allows us to add the hooks and say this stuff happens after the package is loaded, so I don't have to keep having lots of with-eval-after-load. But on the other hand, it becomes harder for people to copy and paste things, because then they have to know it needs to go inside the use-package. Do I use the :custom keyword, or do I just use setq because it looks more copyable?

      [Prot]: This is why I don't use :custom. It's not that I have anything personal against it. It's that I found it unusable. If you have the equivalent of this inside :custom, you cannot do C-x C-e. If you say use-package is syntactic sugar... I have read this before. To somebody who doesn't speak programming lingo, syntactic sugar doesn't mean anything. To me, it barely means anything after knowing all this stuff. So what does syntactic sugar actually mean? What do I have to do to evaluate this, right? So I am like, okay, the most minimal thing you can do is just have a :config, and then you can do add-hook there, bind-key there, or whatever. Granted, I don't do this here. I don't follow this. But I mean, if you want the combination of what you were saying, of the back and forth, while still retaining use-package, you salvage that by doing the equivalent of this. Just this. And then everything goes under :config.
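
      A sketch of the style Prot describes: plain setq and add-hook forms under :config, so each form can be re-evaluated on its own with C-x C-e. The package and variable names here are only an illustration:

      ```elisp
      (use-package denote
        :ensure t
        :config
        ;; Each of these forms can be evaluated individually with C-x C-e,
        ;; unlike key-value pairs inside a :custom block.
        (setq denote-directory (expand-file-name "~/notes/"))
        (add-hook 'dired-mode-hook #'denote-dired-mode))
      ```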

      [Sacha]: And that's what I end up doing too. Just making it easier for me to change things and re-evaluate them with C-x C-e is definitely one of the major considerations. Okay, I've temporarily misplaced my... Some people are very lucky. They actually have an Emacs channel at work where they can ask for help or come across recommendations. "That's nice for learning," @Rossbaker9079 says. "It's not a full replacement for these other ideas, but it brings together people solving the same problems with Emacs." Some people are lucky enough to work in a large company where other people are using Emacs. You should definitely take advantage of that. I hear there's actually a Discord server as well, and of course there's IRC, where people can also hang out and hear other people talk about Emacs, ask questions, learn from other people's questions. I don't think you hang out in IRC or any of these places.

      [Prot]: No, no. I haven't done it in a very long time. I have an account there on IRC. I think the last time I did, it was in the last EmacsConf I could attend, which is like maybe two or three years ago. I forgot already.

      [Sacha]: It's yet another thing that kind of distracts your attention. I also find Mastodon to be very helpful for this stream of little updates from people sharing their Emacs questions or the things that they've just figured out. That's another useful resource for people. I've started trying to support people in connecting with this community. The Emacs Newbie page has a link to learning Emacs, and one of those links points to the community category. Because if you're learning these things in isolation, you will get really, really stuck, and you will not progress. I think being able to connect with the Emacs community is great for inspiration and figuring things out.

      [Prot]: Yes, yes, I agree, I agree.

      00:45:28 Culture of documentation and sharing
      [Prot]: basically, like the social aspect of it. Like, well, of course, I use it as a tool, but there is a cultural component to it.

      [Sacha]: So tell me, what is your impression of the Emacs culture so far?

      [Prot]: Oh, of course, we are talking about people who stick around, right? Not people who will use Emacs once and then leave. I think fundamentally it's people who care about sharing. The essence of it is really sharing. And then, of course, that is expressed in sharing code, sharing ideas, and then, of course, documenting things. So the documentation culture of Emacs, I think, is really strong. In other free software communities, they are like, okay, we are sharing code, but code is its own documentation kind of thing. Good code speaks for itself kind of thing. Whereas in Emacs land, we are like, okay, good code speaks for itself, but here is this wall of text just in case.

      [Sacha]: And, you know, this is probably something only two other people in the world will ever want to do, but here it is just in case. I love those. I'm like, yeah, that's exactly what I wanted to do, actually. Thank you.

      [Prot]: Yeah, yeah, I agree.

      [Sacha]: It's a wonderful community, and I'm very glad that you're part of it, and I'm very glad that lots of other people have joined in as well. Okay, let me go. Once again, I have misplaced my... Okay, here we go. @ShaeErisson asked, "Is there a way to ask Emacs which file(s) it has read to load the current configuration?" That's the user-init-file variable, Shae, so you can just describe that.

      00:47:11 Link to a search
      [Sacha]: "thinking of the terminology problem, maybe offering search terms for further exploration rather than or in addition to links." Which I guess like instead of just looking to a specific resource which may or may not still exist. I was going through my beginner resources and it's like this page no longer resolves but like saying okay this is this is what it's called and you can go search for your own resources, or this is the link, but also here's some other terms that you might find useful.

      [Prot]: Yeah, yeah. Just to add to what this person was suggesting: we had something like this in Denote, and eventually I implemented it. So there are two kinds of links. One is a direct pointer, where it's like, go there. The other is basically the equivalent of a button that triggers a search. For example, let's imagine it in terms of files and directories: a direct link goes to a file. A query link, you click on it, and it opens a directory listing of all files that match the query. And that is basically evergreen. It will always show you whatever is matching. And maybe we could have something like that for Info buffers, where instead of a link to a node, you do that and it produces a listing of all nodes that match the query.
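
      A conceptual sketch of the two link types, using hypothetical helpers rather than Denote's actual API:

      ```elisp
      ;; Hypothetical helpers for illustration only.
      (defun my/follow-direct-link (file)
        "Open FILE, the single fixed target of a direct link."
        (find-file file))

      (defun my/follow-query-link (regexp)
        "Show a Dired listing of every note whose name matches REGEXP.
      The listing is recomputed each time, so the link stays evergreen."
        (dired (cons "~/notes/" (directory-files "~/notes/" nil regexp))))
      ```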

      [Sacha]: Hmm, that's quite interesting. Or, you know, if we're writing about something, we can say, here's the apropos command to go find all the commands and things that are related to this concept. Even just getting people to learn how to use apropos, I think, would be a great step in helping them. Even before that, just getting them to a completion setup where they can ideally use something like orderless to just find things. Yeah. I think it would definitely help with the discoverability thing.

      [Prot]: Yes. I think like Vertico and Orderless are like... if you have to install two packages, it's those two.
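
      A sketch of that two-package starting point, following each package's own recommended setup:

      ```elisp
      (use-package vertico
        :ensure t
        :init
        (vertico-mode 1))

      (use-package orderless
        :ensure t
        :config
        ;; Match completion candidates by space-separated terms in any order.
        (setq completion-styles '(orderless basic)))
      ```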

      [Sacha]: Yeah. It is great. Okay. Where are we now? I keep... We've talked about the sandwich that has to be made. We've talked about getting people into it, helping them discover concepts, helping them connect with the community. And then there's a thing about how do we support people as they do their lifelong learning.

      00:49:48 Getting through the gap between beginner tutorials and the next step
      [Sacha]: maybe they'll get through the tutorial fine, but then when they start to try to do something more sophisticated, like, oh yeah, I need to do something similar to my IDE. I want to have all these different bits and bobs working the way that they do in my other editor. That's where things break down because the tutorial gets them through the, you know, here are the basics, but then there's this huge gulf before that, okay, this is how I can be more productive with it. How do we fix that?

      [Prot]: Yes, that's very difficult, because part of that requires Emacs Lisp knowledge. For example, an IDE, and of course I haven't used one myself, but from what I understand, there is a sidebar with a tree view of your files. At the bottom, there is a shell. Maybe there is some debugger there, some other sidebar on the side. So to replicate that, you really need to massage display-buffer-alist, which I think requires a lot of knowledge. You need to understand display-buffer, you need to know about window... what's it called? Even I forget. Attributes and all that.

      [Sacha]: I don't even do it myself. If I feel like I need to do anything related to display-buffer-alist, I'm just like, okay, I'm going to look for an example and I'm going to copy it very carefully.
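
      For instance, here is a sketch of the kind of entry one might carefully copy: it sends shell buffers to a side window at the bottom so they always appear in the same place. The regexp and sizes are illustrative:

      ```elisp
      (add-to-list 'display-buffer-alist
                   '("\\*e?shell\\*"
                     (display-buffer-in-side-window)
                     (side . bottom)
                     (window-height . 0.3)))
      ```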

      00:51:08 Predictability

      [image from video]

      [Prot]: Okay, so this is for you. It's, like, too much work, but I must say: this looks like arcane knowledge, but this sort of thing actually is a quality-of-life improvement to your Emacs. One thing that I think is bad about the default Emacs experience is uncertainty about where things will show up. You never know; you cannot predict it. Emacs tries to be sensible about it or whatever, but you cannot predict it. Whereas things that are ancillary should have a more predictable behavior.

      00:51:51 Brief mention of Popper
      [Prot]: by Karthik Chikmagalur called Popper. I didn't mention it, but yeah, it's basically another way to do the display-buffer-alist.

      [Sacha]: Mm-hmm. So there's an interesting thing here where you have the beginners. Okay, they're just getting through the tutorial. If they can get to the point where they can edit the file, click on, even just use the menu bar to say file save, file open and all that stuff, that's great. Then the step beyond that is, okay, how do they start to use packages? And quite...

      00:52:25 Earlier is better than later for Emacs Lisp. Take it as is.
      [Sacha]: to be able to use packages like Popper or all these, they've got to be unafraid to use Emacs Lisp. Because all the packages, you know, tell them, okay, just put this use-package block in your config, but you've got to be comfortable with that.

      [Prot]: And that's why I think you have to basically circumvent Customize. The earlier you are exposed to Emacs Lisp, I think, the better it is for you long term, because there is no way around it: you will have to deal with it. And even if you don't quite know how things work, like this thing here, where there is a line between the blocks, you can start to think in terms of blocks even if you don't understand the code. Maybe with a few comments here and there, that can become a bit more obvious as well. But of course, you go to a package and the first thing it will tell you is, okay, add this to your config, and it's a use-package declaration, for example. And you will be like, what is a config? The better solution is for you to learn that quickly.

      [Sacha]: There's this whole intimidation factor, especially for people who are coming from non-programming backgrounds, and suddenly they're like, there are a lot of parentheses in this. Do I have to be a programmer in order to use this? You just go right into it, but I'm sure you've talked to people who maybe weren't sure about it. How do you get them over that hump?

      [Prot]: Basically the idea is: treat it as something that is inscrutable right now. Just take it as is. Take it at face value, basically. You don't need to understand it. You don't need to be able to debug it. Take it as is, and just make sure, moving your cursor, that the balance of parentheses is preserved, by checking that there is a parenthesis at the beginning and there is a parenthesis at the end. So show-paren-mode helps in that regard, and it is enabled by default. Of course, you cannot really get around it. You cannot have a training-wheels mode for Elisp, unfortunately. You can do something like rainbow-delimiters, you know, the package. It can help, but I'm not sure it helps by a lot.
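
      A sketch of the two aids mentioned here; show-paren-mode is built in (and enabled by default in recent Emacs), while rainbow-delimiters is a third-party package:

      ```elisp
      ;; Highlight the parenthesis matching the one at point.
      (show-paren-mode 1)

      ;; Color nested parentheses by depth in Lisp buffers.
      (use-package rainbow-delimiters
        :ensure t
        :hook (emacs-lisp-mode . rainbow-delimiters-mode))
      ```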

      [Sacha]: Yeah, yeah. And it's like, OK, so you just got to do it. Don't be too scared. But it's OK to just copy and paste and trust that as you do this, you will learn enough that when you go back, you'll be able to understand more and more of it.

      00:55:17 Before and after comparisons

      [Prot]: Yes. What helps, for example, in this block here, of course, I don't have to describe the code. But if you do this iterative approach that we mentioned earlier of step by step, like you can try your Emacs before this and after this. And based of course on some comment or whatever, you can see what the difference is. So even if you don't understand the code, you understand the effects of the code.

      [Sacha]: Yeah, yeah. Before and after comparisons. I'm guilty of not taking advantage of this enough myself. I'm just like, oh yeah, I'm just going to evaluate it in my current Emacs and sometimes the results are obvious and sometimes the results kind of break my Emacs and I'm like, okay, I got to restart Emacs instead. I should have just started a new Emacs and tried it there.

      00:56:04 --init-directory
      [Sacha]: but actually --init-directory has been around since Emacs 29, so it's pretty widely available now. People can actually try, for example, a starter kit without committing to it. Do you see newbies actually use this? Because I tell people, okay, you can do this, but it requires using the command line and command-line arguments. Is that a thing they can do?
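
      For reference, the invocation looks something like this; the directory name is illustrative:

      ```sh
      # Run Emacs with a throwaway configuration directory instead of ~/.emacs.d
      emacs --init-directory ~/try-starter-kit
      ```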

      [Prot]: I have introduced it to some people and they have used it, yes. But I don't know if people use it as part of their workflow or maybe they have just a cheat sheet specifically for this where it's like, oh, I want to try this and I want to try that. But eventually they don't use it day by day, I think. They just settle.

      [Sacha]: if you want to try something big, then you know you can say, try that starter kit, but don't necessarily go to the work of making it your .emacs.d and so forth. Yes, that's a good one. They just say, put this in your init file, so it's a lot easier to back it out and change your mind. I had a thought, but it has disappeared, so I will just read something else from the chat.

      [Prot]: That's fine.

      00:57:20 Emacs core

      [Sacha]: @romsno says, "Do you fear that the Emacs C core will go unmaintained? Deep knowledge is rare, held by few, like Eli. While finding Elisp maintainers is easier, like with elfeed, the core is harder to replace." So I guess if you're thinking about the long term: newbie, to package user, to package developer, to, who knows, Emacs core contributor, and then off to the C core itself. That's a very long and somewhat leaky pathway.

      [Prot]: It is for sure, for sure. But of course, here we are talking about people who have expertise in those specific domains. And yeah, that requires an experienced Emacs user already. We are talking about somebody who not only is an experienced Emacs user, but also has the relevant technical knowledge. Right. I am an experienced user, for example, but I don't know C, so I'm useless in this regard.

      [Sacha]: I guess if we zoom out a little bit, we can think about how we help people connect with that long-term motivation you mentioned earlier: to keep using Emacs, to learn more about it, to enjoy using it and fiddling with it and get deeper into it. For some people, Emacs clicks right away because they already tinker with other things, and it becomes another thing to tinker with. For some people, it's like, I don't know, I've heard I should use this, or I've heard people say good things about Org Mode or about Magit. I just want to see what it's like.

      00:59:02 Getting past the initial awkward phase
      [Prot]: Yeah, yeah, yeah. It's that initial awkward phase. If they can get past that, and by awkward phase here I mean actually understanding Emacs and the key bindings, how to move between windows, that there is a minibuffer, that sort of thing. Once they get past that, I think people stick around. If they have a use for it, such as, okay, I use it for Org, they do stick around.

      00:59:34 Even reporting an issue is a great contribution
      [Prot]: like even non-programmers. And this is something I encourage in my packages, for example, where it's like, write me an issue. You don't need to know any code. You don't have to tell me about how to do it. Just tell me what your idea is. And in all my manuals that I write, I have an acknowledgement section where I have, you know, ideas or suggestions or whatever. And I write the name of everybody who has ever created an issue because it's like you help even by telling me what your use case is. And that already helps. And it gets the people involved as well.

      [Sacha]: They spend time trying it out and describing what the difference was between what happened and what they wanted to happen. And sometimes even just identifying the issue is a big part of it already because you can't test everything. So we can definitely help people feel more included in the community because they don't have to be core developers or package authors to be part of the community. Even using it and writing about it is a big help.

      01:00:44 Next steps: adding to the wiki
      [Sacha]: Since I have to make a grilled cheese sandwich soon, shall we wrap up with some concrete things that you or I or somebody listening can do to help improve the newcomer experience for Emacs?

      [Prot]: You were doing it already. You were doing the wiki. I think that's good. A link, a direct link to the newbie section I think is great. Maybe you can even have a permanent link in your Emacs News, like the topmost line. It would be like, well, new...

      [Sacha]: Don't get overwhelmed by all these people talking about SDL graphics loops and Emacs and whatever; that's very far down the path of the learning journey. So, making one of these starting points where people can find the trail that leads them to different places. I'm looking forward to reviewing the Emacs News items for beginner resources that I've previously identified, and then fitting them into the Emacs Wiki in various places where people might come across them. And then, of course, it would be nice if we could test these with actual people. So in your coaching sessions, we can find out where the other gaps are. There's a lovely conversation in the chat about other things that I don't have the fast speaking rate to cram into the next three minutes. Thank you so much for this conversation. It was great. I always like picking your brain about things. It's a big project, but Emacs is fun to play with, and I hope lots of other people come to have fun with it too.

      01:02:37 Core longevity

      [Prot]: Yes, and maybe I can make a final comment about the C core and the fact that there are a few people, such as Eli Zaretskii, who have expertise in that. I am an optimist. I think things will be ironed out. I think they will work out on their own. There are people who have the expertise. Maybe it's a cultural issue, or, we could say, a bureaucracy issue: they don't want to deal with mailing lists or whatever. Maybe they don't like the current style. I don't know. But I'm sure that when push comes to shove, somebody will step up.

      [Sacha]: I think it's actually very encouraging that, because Emacs has such a long history, we've actually seen this kind of generational transfer of knowledge already, in the sense that the people who are maintaining Emacs now, aside of course from Dr. Stallman himself, are not the originals who started this project. They came into it afterwards, decided they liked it, and dug deep enough into it to learn all these different things, and have continued from there. And we've also seen lots of trends come and go. People leave Emacs for Atom. People come back when Atom gets discontinued. People leave Emacs for VS Code. Who knows what will happen there? But when they come back, they come back bringing even more ideas. Okay, so in about one minute, the kid is going to start barreling down the hallway and asking for a grilled cheese sandwich. I'm going to wrap it up nicely here so I can remember to copy the chat this time.

      [Prot]: Very well, very well.

      [Sacha]: Yeah, yeah. The notes are going to be in, like, you know, if you go to yayemacs.com, they're probably going to be in, like, yayemacs24. And you're going to send me this markdown file or whatever that you showed me, so I can post that as well. Thank you so much, everyone. Thank you, Prot, and thank you to the people who joined in the chat. We'll see where it goes. Okay, bye.

      [Prot]: Take care. Take care. Bye, Sacha. Bye, folks. Take care.

      Chat

      • protesilaos: ​​I am in the Google Meet room
      • protesilaos: ​​And hello, by the way!
      • hajovonta6300: ​​Hi legends!
      • JacksonScholberg: ​Hi
      • petertillemans2231: ​I am not worthy!
      • takoverflow: ​​Hello Sacha and Prot, thanks for these streams!
      • ShaeErisson: ​I love emacs, but haven't really learned elisp.
      • hajovonta6300: ​​@petertillemans2231 you are worthy if you are willing to learn!
      • JacksonScholberg: ​I vibe with Emacs after using other text editors that were not minimalist enough for my preferences, plus having experience with other open source software like Linux.
      • petertillemans2231: ​Well, Emacs and Minimalist in the same sentence… strange concept, but I know what you mean
      • petertillemans2231: ​I guess learn starters quickly to use emacs –debug-init. Maybe not in the first hour but close to tweaking.
      • JacksonScholberg: ​ChatGPT reminding me keyboard shortcuts helps a lot
      • ShaeErisson: ​I learn about new emacs packages by pairing with other users and asking "How did you do that thing?"
      • hajovonta6300: ​​I use Emacs since 2010 and had become a power user; but in the last year I feel LLMs took over most of the tasks I usually solved with Emacs.
      • petertillemans2231: ​Emacs documentation is very extensive but I feel discoverability of the docs is a problem for newer users.
      • 10cadr: ​​wow! ill watch the vod later,, nice buzzcut prot. i am between sessions rn also ill leave a comment on prot latest video later cheers
      • rossbaker9079: ​​We have an Emacs channel at work that's nice for learning. It's not a full replacement for these other ideas, but brings together people solving the same problems with Emacs.
      • ShaeErisson: ​Is there a way to ask emacs which file(s) it has read to load the current configuration?
      • charliemcmackin4859: ​​thinking of the terminology problem: maybe offering search terms for further exploration, rather than (or in addition to) links
      • JacksonScholberg: ​An Emacs channel at work sounds like a nice way to learn from others.
      • siredwardthehalf: ​​whats emacs
      • hajovonta6300: ​​it is an application platform with a great editor app
      • romsno: ​​hello guys do you fear the Emacs C core will go unmaintained? Deep knowledge is rare, held by few like Eli. While finding Elisp maintainers is easier (like with elfeed), the core is harder to replace
      • hajovonta6300: ​​@romsno true that
      • petertillemans2231: ​orderless is awesome
      • takoverflow: ​​Vertico can be replaced by icomplete-vertical-mode but there's no built-in corfu replacement
      • petertillemans2231: ​In the beginning, especially with use-package it is much more like yaml than a real programming language. That can ease people in.
      • satrac75: i'm curious if other users split their init file into separate files. my init file over the years continues to grow and grow.
      • hajovonta6300: ​​@satrac75 I sometimes delete obsolete code I don't use anymore. I found my config became relatively stable after 2-3 years of initial trial-and-error. I heard other people experienced the same
      • petertillemans2231: ​I do … I go back and forth… single file … modularize … refactor/simplify in single file again… Like a dynamic tension field.
      • hajovonta6300: ​​My current config is 3099 lines long (org-babel format)
      • hajovonta6300: ​​the tangled output is 2345 lines.
      • charliemcmackin4859: ​​@satrac75 I did, yes. But this is mainly because I cherry-picked the configs from purcell's emacs config as I found I needed it. Then I converted it (mine) to use-package later

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    6. šŸ”— r/wiesbaden Neue Arbeit rss

      I'm a QA / test automation engineer in the software development field. I also have a legal background. Since I live in Wiesbaden, I'm looking for something that suits me.

      submitted by /u/NikolaBilbil
      [link] [comments]

    7. šŸ”— r/LocalLLaMA Open Models - April 2026 - One of the best months of all time for Local LLMs? rss

      Open Models - April 2026 - One of the best months of all time for Local LLMs? | Any underrated or overlooked models? FYI MiniMax-M2.7 switched their license (from MIT to Non-Commercial) so it's not in the graph. PS: Took me 30 mins to gather these models & generate this graph submitted by /u/pmttyji
      [link] [comments]
      ---|---

    8. šŸ”— r/Leeds Time to get real, Leeds rss
    9. šŸ”— r/Leeds Some snapshots I took last weekend rss
    10. šŸ”— r/york Meeting new people rss

      Where would be a good place to go to try and meet new people and make friends? I've been left in york on my lonesome and I wanted to try and change that but no luck. Something within my age range would be nice (im 23)

      submitted by /u/ChibiXenovia
      [link] [comments]

    11. šŸ”— r/LocalLLaMA AMD in-house ryzen 395 box coming in June rss

      AMD in-house ryzen 395 box coming in June | Don't know if the date was released yet, but this was just said a few moments ago at AMD AI Dev Day. No word on price, but I think it's made by Lenovo based on the plug earlier in the presentation. Edit: They had a unit on a table and I just confirmed with an engineer it is just a 395 128gb with no changes. submitted by /u/1ncehost
      [link] [comments]
      ---|---

    12. šŸ”— r/reverseengineering Revealing NVIDIA Closed-Source Driver Command Streams for CPU-GPU Runtime Behavior Insight rss
    13. šŸ”— Kagi release notes April 30th, 2026 - Kagi API preview and ecosystem updates rss

      Kagi APIs: the same search technology that powers Kagi is opening up to developers

      Starting next week, we’ll begin onboarding developers to the Kagi API dashboard. Access will roll out first to people who joined the API waitlist or contacted Kagi support.

      With the new Search API developers can bring Kagi Search into their own apps, tools, and AI systems. Here's an early look:

      Kagi API developer dashboard Overview page

      If you'd like to join this early preview of the Kagi API, please fill out this form. We'll reach out next week!

      Kagi Search

      New landing page

      We updated our landing page to bring awareness to Kagi's wider ecosystem beyond search. Check it out!

      This is the first of many steps toward helping more people discover everything Kagi has to offer.

      • IP address and subnet search to bring up the Wolfram Alpha answer #10147 @dronics
      • Wrong Kagi Knowledge result for Mother's Day search #7086 @dreifach
      • "1 lakh crore" returns confusing results #9050 @holdenr
      • Custom assistant without internet access results in error #9876 @Thibaultmol
      • "Sign up for free" link on Pricing page not working #10314 @Hanbyeol
      • Disable Search Grouping in News Tab #10254 @dvdnet89
      • Auto suggest gives results which trigger bangs improperly #5346 @LadyStrawberries
      • Reverse image search returns primarily Russian and Russian-translated results #9111 @Jake-Moss
      • Runway (the AI video generation company) got erased from search result #10369 @yanda
      • Quick, direct access to "Set Kagi as default Search" instructions on your landing page (or close by). #6646 @ragnar
      • Web search image preview does not match the actual image searches. Also the image results are not relevant at all. #10367 @StealthGirl
      • Better UX for date calculator widget. #10282 @leftium
      • Redirect to first result bang no longer working if preceded by a space #10385 @znmto
      • Img data leaking into search results #10355 @Keli
      • Free search quota never expires #10403 @afestein
      • Ranking adjustment doesn't do anything when JavaScript is disabled. #10425 @SkyDotBit
      • Advanced Search modal and scrollbar behavior #4509 @dix

      Kagi Assistant

      • We increased the Assistant's file upload size limit to 30 MB #8872 @mrzv
      • Degradation of file analysis functionality in Kagi Assistant #10290 @v3max
      • Umlauts are sometimes not displayed in the Quick Assistant #9289 @Kel
      • Universal summarizer "Continue in Assistant" button fails: "We are sorry, this input is not supported. (Invalid Input)" #10368 @Self-Perfection

      Kagi News

      • Kagi News -> timeline ambigious #8525 @yeri
      • Story corrections, both from user reports and our own continuous fact-checking. When something turns out to be wrong, we fix it and show a small correction notice on the story, with the changed sentence highlighted on your next visit.
      • Stories can pull in related coverage from other categories, so a single big story can span Science, World, and Tech when it makes sense.
      • Cleaner prose in hard-news categories: fewer filler phrases, less editorializing, more neutral writing.
      • Snappier all around: faster initial load, much faster story search, and browser back/forward now restores the page instead of reloading it.
      • Custom category order syncs reliably across devices now. Fixed several cases where reorders were lost or overwritten.
      • Category tabs use proper ARIA semantics for assistive tech.

      Kagi Translate

      • Keyboard shortcuts in Kagi Translate #10306 @mb
      • Poor text formatting of image translations on Kagi Translate app #10016 @San
      • Pinyin absent for alternative translations #10340 @phuertay
      • Add Seto and VƵro to Kagi Translate #10324 @mb
      • Correct file extensions when saving translations #10311 @mb
      • Add Montenegrin as an option in Translate #10230 @mb
      • Pasting text in Translate app is hard #10047 @marty
      • Pasted text from books or PDFs is auto-formatted: broken mid-sentence line breaks, hyphenation across lines, and stray whitespace get cleaned up. An undo toast lets you revert if you wanted the original.
      • Auto-language switch now shows a toast with undo, and skips ambiguous cases like uncertain, mixed, or mid-typing input.
      • Pin any language to the top of your list, including custom or non-standard ones.
      • Romanization shown beneath alternative translations into Japanese, Chinese, Korean, Arabic, Russian, and other non-Latin scripts.
      • Link previews (Open Graph) for translated text now show the actual translation when shared on social media, instead of a generic logo. The /extension page also got its own dedicated preview.
      • New languages: Seto, VƵro, Montenegrin, and Badini Kurdish (with both Arabic and Latin Hawar scripts).
      • Formal Ukrainian now correctly capitalizes Š’Šø and Š’Š°Ńˆ.
      • Downloaded translations get the right file extension based on the detected content format.

      Post of the week


      Follow us and tag us in your comments, we love hearing from you.

      Kagi is growing

      The team is expanding, and we're looking for talented people who want to help build a better web alongside us. We're hiring for multiple roles, including:

      • Product Designer (UI/UX) : Take strategic ownership of end-to-end design across Kagi's product ecosystem. Apply here.

      • An Education Partnerships Lead : If you believe the most important thing technology can do for students is teach them how to think for themselves, we'd like to talk. Apply here.

      • A Senior Platform Engineer : If you have strong opinions about API contracts, auth correctness, and migrating user data without losing anyone's trust, we'd like to talk. Apply here.

      We also have openings for a Senior Search Engineer, Senior Platform Engineer, Senior Full-Stack Developer (Kagi Labs), and an AI Specialist. See the full list of openings here.

      Kagi tip of the week šŸ’”

      Between AI-image filters, clickbait controls, reverse lookup, and source filters, there's a lot of power hiding behind the Images and Videos tabs. Here's how to get the most out of them.

      Kagi art

      Less scrolling, more living.

      Cartoon illustration with the text "Most search engines want to keep you scrolling. Kagi wants to set you free." Below, two stick figures exchange a glowing box labeled "what you were searching for"; one says "Found it! Now go and enjoy your day!" and the other replies "Perfect!"

    14. šŸ”— r/york Stork rss

      Stork | Seen near Taddy this week by a gardener friend. Going to bed a bigger bird box! submitted by /u/yorangey
      [link] [comments]
      ---|---

    15. šŸ”— r/Yorkshire Whitby Goth Weekend Horror - feature film free to view rss

      There’s a serial killer on the loose at Whitby Goth Weekend…

      The film was shot at Whitby Goth Fest in November 2025 - four-day shoot - microbudget - but a lot of fun…

      https://youtu.be/Zg-k2D2BFTI?si=iJZwQkbgREzyK97R

      submitted by /u/matcoop23
      [link] [comments]

    16. šŸ”— The Pragmatic Engineer The Pulse: token spend breaks budgets – what next? rss

      Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover Big Tech and startups through the lens of senior engineers and engineering leaders. Today, we cover one out of three topics from last week's The Pulse issue. Full subscribers received the article below seven days ago. If you've been forwarded this email, you can subscribe here.

      Last week, we covered the slightly perverse trend of "tokenmaxxing" across the industry, where devs run agents with the sole aim of boosting their personal "token stats" in an effort to rank higher on internal token leaderboards, and not be seen as a Luddite who doesn't use AI tools enough compared to peers.

      This week, I spoke with a software engineer at a large company and another at a seed-stage place. Both shared almost identical stories: at their latest all- hands, company leadership expressed concerns about the fast-rising costs of tokens. At both places, token spend has increased by ~10x in the last six months - with no signs of slowing down.

      I wanted to find out about this trend, so I talked to devs at 15 businesses. Below is what I learned about what's happening in workplaces of all sizes. Names are anonymized.

      Large companies

      Setting the default model to a cheaper one: 10,000+ person SaaS company, offices on all continents

      Inside a large SaaS company, most devs use an internal background coding tool. The tool defaults to Claude Sonnet, the cheaper Claude model. Model selection is not persisted, so devs who prefer working with Opus, for instance, must reselect it on every subsequent startup.

      This tool supports all major frontier models such as Sonnet, Opus, GPT, and Gemini. Devs at the company whom I talked to are very heavy users of the tool and have not encountered usage limitations.

      Fintech company, US, Series D, ~8,000 people. Staff engineer:

      "The cost in token spend is off the charts - and leadership has shared this trend with us. They have not said anything beyond showing growth in spend, and mentioning that this won't be sustainable. So, nothing specific yet, but my sense is that something will have to change. Limits or prioritizing cheaper models, cutting back on hiring? Who knows."

      Infra company, US, publicly traded, ~5,000 people. Engineering Director:

      " We're monitoring but not restricting. We are spot checking the heaviest users, but we are seeing the business cases working out.

      We are offering some guidance on model selection - e.g., turn off the new high-effort setting in Claude. Some users are trying open source models - but open source model usage is a bottom-up initiative, not a top-down one."

      Information technology, US, 10,000+ people. Director of Engineering:

      "We have already had to raise our API budget limits multiple times in April. We recently switched to a much higher-effort level for Claude, which significantly increased the cost per PR.

      One reason for the cost spike is using state-of-the-art models for demanding tasks. We are using that high-effort setting even for fairly trivial tasks that could have been handled by much cheaper models, or even by lower-effort Claude loops. Despite a few of us pointing this out, leadership has basically said budget is not the concern right now.

      I sense that the budget increase has not been forecasted, and we're in for a reckoning. I suspect the attitude changes once finance and other cost-conscious parts of the org realize we are spending hundreds of dollars per day, per highly-engaged developer. For now, fear of missing out and not wanting to fall behind seems to be outweighing cost discipline."

      Games studio, US+Europe, ~5,000 people. Senior developer:

      "What budget increase? It's very hard to get a budget for AI here! Claude Code is still not rolled out because $200/month/dev is seen as too high a cost. I talk with people at startups where $1,000/month in spending is totally normal, and it's night and day here."

      Fintech company, US+Europe, late stage, ~5,000 people. Staff engineer:

      " Some developers are now spending $500 a day (!!) on Claude Code. Practically speaking, this means that employee costs have doubled. Productivity has increased, in my view, but now the bottleneck is code reviews. AI can spit out code quite quickly, but we still have human reviews in place. Leadership encourages using AI for code review, but my team will not blindly trust AI.

      The push from AI is coming from the top. This year's performance review had a section on AI, rating devs by how well they used AI, so this is another reason everyone just uses it as much as they can."

      Mid-sized companies

      SaaS industry, US, ~2,000 people. Dev Productivity Lead:

      "Model routing helped keep our costs growing less dramatically. For example, changing the default model reduced cost by 30%. This is our strategy with AI spend, summarized:Short term: spend, spend, spend! Experiment and use whatever models make sense.Measure the impact. Measure key outcomes and report on spend, monthly.When spend vs results diverge: adjust. When our spend increases dramatically, but outcomes don't follow: see what we can do to adjust the delta. More spend should mean better outcomes. If not, we are doing something wrong."

      Finance industry, US, ~2,000 people. VP of AI:

      "We have Cursor and Claude Desktop, both of which have around 800-1,200 total users. Token usage is growing somewhat unexpectedly. Estimates are being adjusted on the fly; the initial plan to have strict limits (say, $100 per user) is breaking when reality hits, and people exhaust them in 3-5 working days.

      Using expensive models is a problem. In regards to Cursor, many devs are defaulting to the most expensive models without realizing that going with Opus gives single-digit percentage gains in intelligence compared to Sonnet, for example, while exhausting their budgets almost immediately.

      We are working on blocking/managing out the most expensive models [with Cursor], as going into thousands of dollars per user, per month is not sustainable at our scale. Cursor is a good partner and we're working with them to switch to a "pooled spend" model where heavy users can tap into a pool of extra spend.

      Claude is a similar story. We were at $100 of Claude Desktop limit for everyone, but as we are moving forward, I can see that we would need to go much higher, especially for business-critical use cases."

      Infra company, US, late-stage, ~700 people. Founder:

      "We haven't had much of an issue. Most folks police themselves for runaway costs; for example, we had someone hit like $10K in a week because they messed up caching, but it was caught and they corrected their harness.

      For the most part, we don't see our high-end folks spending more than ~$1K/week. Now, to be clear, this is not a small amount! BUT it's already a small subset of the population.

      We're just factoring it into engineering costs at this point: if it's, say, $2K/month per employee, that's $24K per year.

      Who cares, then, when engineers already cost $200-400K/year in cash comp? Okay, so what if it's $5K/month. That's $60K/year.

      Our bet is that token costs will stabilize and we'll eventually end up with local-ish models.

      Now, it could be five years before they stabilize, but overall, spend today isn't that insane to me.

      There's a lot of people who are just dumb about it, but most legit execs push back on this. Take the Ralph loops or other insanity where someone spends $1K/day, $5K/week or stuff like this. That's all just people being fools thinking they're doing "R&D," or somehow that they're smarter than everyone else, but they're just producing junk that never ships or is not useful.

      We saw a bit of "stupid overspend" in the first couple months, but that's all gone now. Costs could go up even more if we would "crack the whip" in wanting to see even more output, but we're not doing that."

      Healthcare industry, US, ~500 people. Senior engineering manager:

      "We are not holding back on spend, and have a monthly spend leaderboard. And we WANT devs to spend more on tokens! For example, one of my engineers spent $1,400 on a long Claude Code session in a single day.

      We are seeing massive leverage, and we do more with the same number of people. This is why we are okay with our spending spiking. Our traffic is growing more than 10x, year-on-year, and we have managed to keep things running with the same team, and these AI tools.

      Engineering is now blocked on Product and Design - which never happened before! This is how fast execution has become. We now have Staff+ engineers writing Product PRDs so we can move faster.

      I've been in tech for close to 15 years and I never saw dramatic change like this. I just came back after a 3-month break, and every single thing is different in my day! I feel these AI agents are the biggest change in the industry since high-level languages became widespread."

      E-commerce company, US & Europe, ~2,000 devs. Head of Engineering:

      "The increase in spend is INSANE. It's about usage going up, with no signs of stopping. Usage is off the charts.

      We currently do not have limits in place, and are not pausing now. Our CEO is AI-pilled and won't let us slow down.

      We do buy tokens at a discount. They start from 5% and go up with usage with the vendors we use (the usual suspects.)

      We don't let devs use anything lower than Opus 4.7 for coding. Cheaper models might work better, but a slight error pushed to prod would result in hours of toil."

      Small companies

      Series A, US, ~50 people. Principal Engineer:

      "About 15 devs are heavy users of AI and costs are rising very fast. Almost everyone uses Claude and Claude Code. We are considering four potential options:Increase AI budget, and start measuring more. Continue doing what we are, but allow devs to use more tokens instead of hiring limits. The precise ROI is hard to quantify, but we'll start to measure and track both AI adoption and impact.Optimize token consumption. Use cheaper models for simpler tasks, review token usage, and see where we can cut usage. Downside: this approach could become one with diminishing returns, fast.Integrate more AI providers in the company. Find wrappers to abstract LLMs. The problem is: how do you replace Claude Code, for instance?Pivot to local models: such as Kimi, Qwen, and so on. The problem is it's a big investment in high-end hardware or cloud GPUs. Upside: it offers better long-term cost control, once done.

      We are likely to go with option #1: increase spend BUT maintain momentum and put the right measurements in place. We can do #2, #3 and #4 later. But if we kill AI usage momentum inside the company, the outcome will probably be worse."

      AI infra, US, seed stage, ~15 people. Founder:

      "We saw a 15x increase in 6 months: Six months ago our spend per developer was ~$200/monthToday, it's around $3,000/developer/month, for our seven devs
      We're not slowing usage, especially as we are building an AI infra product. The increase was much faster than expected, though."

      Small, bootstrapped company, Europe. Founding engineer:

      "Our current strategy in dealing with the increase in costs is to switch to a cheaper model; unfortunately, from Opus to Sonnet in our case. That said, Sonnet is quite decent."

      How businesses manage token spend

      Regardless of company size, there seem to be two strategies for how companies deal with increased spending. A summary:

      Strategy #1: "let it rip and start measuring." Around half of respondents say AI spend is rising dramatically, and they have decided to do nothing about it. They want devs to use AI as much as it makes sense to, and to help the work as much as possible.

      However, because the cost is rising dramatically, these companies are now starting to measure usage and attempting to measure the impact of their AI tools.

      There are a few companies where the impact already seems very positive. Smaller startups whose businesses are exploding in numbers of customers, load, and revenue see that they don't need to hire more staff, because existing engineers can keep supporting the growth with AI tools.

      Strategy #2: curb spending. Commonly mentioned cost-saving approaches:

      • Use cheaper models for simpler tasks
      • Set default models to less capable ones
      • Set a spending cap and make it hard for engineers to exceed it, or require consent for doing so

      Most companies using strategy #1 briefly considered this approach, but discarded it, because they see it as optimizing for the wrong thing: cutting costs before the productivity impact of using state-of-the-art tools is even known!

      Discounts exist when the spend is in the millions of dollars. I asked several people if they are getting discounts from vendors when buying tokens at scale. There were no exact numbers, but this is what I gathered in aggregate about possible custom agreements:

      • Cursor: open to discounts above a few million dollars in spend. Companies have negotiated discounts with Cursor after crossing $1M of spending. Some companies negotiated tiered discounts from this level, starting at 5% and going higher as their spend goes up.
      • Anthropic: no discounts. I talked with companies spending $5M+ per year on Claude which have received no discounts. If Anthropic offers discounts, it will likely be at a much higher tier.
      • All discounts are custom, so try to negotiate - it's free! Pricing discounts are on a per-customer basis, and highly custom. The easiest way to see if a discount is available is to ask the vendors!

      ---

      Read the full issue of last week's The Pulse, or check out this week's The Pulse. This week's issue covers:

      1. Load from AI breaks GitHub - but why not other vendors? GitHub's reliability is less than one nine, and getting worse. Prolific open source contributor Mitchell Hashimoto is quitting GitHub because he thinks it's not suited for professional work. GitHub's leadership blames the 3.5x increase in service load for the degradation - or it might be self-inflicted.
      2. Anthropic's speedrun to destroy trust. Anthropic could do no wrong until recently, but in the past month, that's all changed. Silently nerfing Claude Code, banning companies from Claude, and baffling price rises all add to a sense that Anthropic is in its "extraction" era of generating more revenue for the same or worse service.
      3. Industry pulse. Dramatic price increases at GitHub Copilot, explosive growth at Codex, Google scrambling to build a good coding model, Cursor might be bought by SpaceX, AI agent deletes car business, and more.
      4. Mitchell Hashimoto & the "building block economy." Ghostty's creator finds that open source "building blocks" are the best way to win massive adoption by software components - but it's got harder to build a business on top of open building blocks.
    17. šŸ”— r/Leeds Growing Well Study - Participants Needed rss

      Hi there!

      My name is Chloe Thackray and I am reaching out from the University of Leeds.

      We are currently conducting a large-scale, national research project called the Growing Well Study and are looking for families with young children (6 months - 4 years) to take part. You will receive up to £50 in vouchers as a thank you!

      The project is focused on preschool diet, growth, and dental health, and it will help inform national policy recommendations.

      What will be involved:

      • Short online survey
      • Local measurement appointment (height, weight, tummy)
      • 3 daily online food diaries
      • Repeat in 1 year + free dental check

      We will be hosting measurement appointments soon at Sunny Bank Mills and the Merrion Centre in Leeds.

      If you have a child within this age range we would love you to take part. You can sign up and complete our online survey here: https://survey.natcen.ac.uk/GWS

      Thank you so much!

      submitted by /u/GrowingWellStudy_UoL
      [link] [comments]

    18. šŸ”— r/york It’s amazing how the Cathedral seems to change personality depending on where you're standing. I could spend hours here rss

      It’s amazing how the Cathedral seems to change personality depending on where you're standing. I could spend hours here | submitted by /u/Wallabydoll
      [link] [comments]
      ---|---

    19. šŸ”— Cryptography & Security Newsletter ECH Is Done, But Can We Make It Work? rss

      Some technologies are easier to deploy than others. Take TLS, for example. Once enough time passes and we upgrade the servers and clients, we’re done. Encrypted Client Hello (ECH) is not one of those technologies. To get it to be effective, we first need to go through the usual upgrade cycle, iron out the last kinks, and then also get enough of the ecosystem to opt in to achieve safety in numbers.

    20. šŸ”— r/Yorkshire Homeowners in Yorkshire turn to solar panels as oil prices rise rss

      Homeowners in Yorkshire turn to solar panels as oil prices rise | submitted by /u/Kagedeah
      [link] [comments]
      ---|---

    21. šŸ”— r/Yorkshire Now and Then rss
    22. šŸ”— r/Leeds Varsity night - A grumble rss

      It was varsity night in Headingley last night. We live around the Trelawns, and the whole damn street is littered with glass. Shattered pint glasses, beer bottles.

      We've been out this morning sweeping it up. I honestly cannot fathom the lack of basic respect.

      submitted by /u/Swivials
      [link] [comments]

    23. šŸ”— tomasz-tomczyk/crit v0.10.2 release

      What's Changed

      Note: You might need to run crit auth login again to link your profile properly for the future.

      New Contributors

      Full Changelog: v0.10.1...v0.10.2

    24. šŸ”— r/reverseengineering HexDig 1.0.0 a lightweight binwalk alternative working both on Windows and Linux, written in C++, give it a try! rss
    25. šŸ”— r/reverseengineering GitHub - iss4cf0ng/CVE-2026-31431-Linux-Copy-Fail: Rust implementation Exploit/PoC of CVE-2026-31431-Linux-Copy-Fail, allow executing customized shellcode (such as Meterpreter). rss
    26. šŸ”— r/Yorkshire Culloden tower rising above the Swale. Can you spot the Mallard duck? rss
    27. šŸ”— r/Yorkshire Yorkshire Water Seeks Views On Multimillion-Pound Scarborough Investment rss

      Yorkshire Water Seeks Views On Multimillion-Pound Scarborough Investment | submitted by /u/willfiresoon
      [link] [comments]

    28. šŸ”— r/Yorkshire New jobs as East Yorkshire company announces round-the-clock production move rss

      New jobs as East Yorkshire company announces round-the-clock production move | submitted by /u/willfiresoon
      [link] [comments]

    29. šŸ”— keeweb/keeweb 1.19.0 release

      keeweb-1.19.0

    30. šŸ”— keeweb/keeweb v1.18.8 release

      What's Changed

      New Contributors

      Full Changelog: v1.18.7...v1.18.8

    31. šŸ”— Evan Schwartz Your Clippy Config Should Be Stricter rss

      ā€œIf it compiles, it works.ā€ This feeling is one of the things Rust engineers love most about Rust, and a reason why using it with coding agents is especially nice. After debugging some code that compiled but mysteriously stopped in production, I realized that it’s useful to enable more Clippy lints to catch bugs that the compiler won't prevent by itself. It's especially useful as a guardrail for coding agents, but stricter linting can make your code safer whether or not you’re coding with LLMs.

      Motivating Bug: UTF-8-Oblivious String Slicing

      Scour is the personalized content feed that I work on. Every Friday, Scour sends an email digest to each user with the top posts that matched their interests. On a recent Friday, the email sending job mysteriously stopped. This was puzzling because I had already put in place multiple type-system-level safeguards and tests to ensure that it would log all types of errors and continue.

      After digging into the logs, I found the culprit to be thread 'tokio-runtime-worker' panicked... byte index 200 is not a char boundary. A function naively truncated article summaries without checking for UTF-8 character boundaries, which caused a panic and stopped the Tokio worker thread running the email sending loop.

      The solution for this particular bug was a safer method for truncating article summaries that respects UTF-8 character boundaries. However, this problem was reminiscent enough of the 2025 Cloudflare unwrap bug that "broke the internet" that I wanted some more general solution.
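
      To make the failure mode concrete, here is a minimal sketch of the difference (not Scour's actual code; truncate_naive and truncate_safe are hypothetical helpers):

      // Panics when `max` lands inside a multi-byte character.
      fn truncate_naive(s: &str, max: usize) -> &str {
          if s.len() > max { &s[..max] } else { s }
      }

      // Walks back to the nearest char boundary before slicing.
      fn truncate_safe(s: &str, max: usize) -> &str {
          if s.len() <= max {
              return s;
          }
          let mut end = max;
          while !s.is_char_boundary(end) {
              end -= 1;
          }
          &s[..end]
      }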

      Rust's compiler prevents many types of bugs but there are still production problems it can't catch. Panics will either crash your program or quietly kill Tokio worker threads. Deadlocks and dropped futures can make work silently stop. And plenty of numeric operations can silently cause incorrect behavior.

      We can stave off many of these types of bugs by making Clippy even stricter than it already is.

      This is especially relevant in the age of coding agents. A seasoned Rust engineer might naturally avoid patterns that could cause problems. An agent or a junior colleague might not. Stricter Clippy rules make it easier to rely on code you didn't personally write. Also, enabling new lints on an existing codebase is tedious, and exactly the kind of task that is good to hand to a coding agent.

      Enabling More Clippy Lints

      Clippy ships with hundreds of lints that are disabled by default. Some are disabled because they might have false positives and some are style choices which you might reasonably not want.

      Which lints should we enable to help us get back the "if it compiles [and passes Clippy], it works" feeling?

      Why Not Enable Lint Categories?

      Clippy's lints are grouped into categories: Correctness, Suspicious, Complexity, Perf, Style, Pedantic, Restriction, Cargo, Nursery, and Deprecated.

      Unfortunately, none of these categories cleanly map onto "don't let this panic or do the wrong thing in production".

      In fact, the Clippy docs say that "The restriction category should, emphatically, not be enabled as a whole." Clippy even includes a dedicated lint, blanket_clippy_restriction_lints, to discourage you from enabling this category. While the restriction category includes many useful lints, it also includes some that directly contradict one another. For example, it contains lints to enforce both big_endian_bytes and little_endian_bytes.

      The docs say "Lints should be considered on a case-by-case basis before enabling". Of course, you can enable whole categories like pedantic and restriction and then allow specific ones you want to disable, but I'm outlining a selective opt-in here.

      Lints That Don't Fire Are Still Useful

      Even if you don't use a certain pattern in your code base today, it's not bad to enable the lint anyway. Inapplicable lints serve as cheap tripwires in case the given pattern is ever added later, whether by you, a colleague, or a coding agent.

      My Lints

      Every project is different and you should look through the available lints to see which ones make sense for your project.

      Also, check when lints landed in stable if your Minimum Supported Rust Version predates 1.95, as some of these may have been added after your MSRV.

      With those caveats out of the way, here are the lints I enabled, roughly categorized by what kind of behavior they prevent. You can skip to the bottom if you just want to copy my config.

      Don't Panic

      This group prevents panics from unwraps and unsafe slicing or indexing into arrays and strings.

      Note that some of these, like string_slice and indexing_slicing, may produce many warnings throughout your code base. That may be annoying to fix. However, using safe methods like .get() and iterators instead of slicing prevents pretty severe footguns, so I would argue that it's worth it.
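
      For example, here is the kind of rewrite indexing_slicing nudges you towards (a sketch; first_byte is a hypothetical function):

      // `v[0]` panics on an empty slice; `.first()` makes the
      // missing case explicit and lets the caller decide.
      fn first_byte(v: &[u8]) -> Option<u8> {
          v.first().copied()
      }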

      You might or might not want to enable expect_used. Calling .expect on an Option or Result can result in a panic. However, the message you pass to expect should already document why that thing shouldn't happen. Enabling the lint and then selectively disabling it throughout your code with #[expect(expect_used, reason = "...")] may end up duplicating the same rationale for using it in the first place.

      Another lint that is a real judgement call is arithmetic_side_effects. This can prevent overflows and division by zero. However, it will cause Clippy to warn about every place you use math operators: +, -, *, <<, /, and %. I tried enabling it in my code base and would estimate that around 15% of the warnings caught real issues and 85% were just noise.
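
      As a sketch of the kind of change it asks for (assuming u64 counters):

      fn add_counters(a: u64, b: u64) -> u64 {
          // A bare `a + b` would be flagged as a potential overflow;
          // an explicit overflow policy is not:
          a.saturating_add(b)
      }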

      Don't Fail Silently

      These prevent dropped futures and silently swallowed errors; the full list is in the config below.

      Don't Do Bad Async Stuff

      These prevent various concurrency bugs and deadlocks:

      • await_holding_lock - MutexGuard across .await (see the sketch after this list)
      • await_holding_refcell_ref - RefCell::borrow_mut across .await
      • if_let_mutex (only relevant if you're using an earlier edition than 2024) - if let _ = mutex.lock() { other_lock() } deadlock pattern. The scoping was fixed in the 2024 edition so this is no longer an issue.
      • large_futures - a Future that is too large can cause a stack overflow
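
      A minimal sketch of the first pattern (some_io stands in for any async call; the .unwrap() is just to keep the sketch short):

      use std::sync::Mutex;

      async fn some_io() {}

      // await_holding_lock fires here: the std MutexGuard is held
      // across an .await, which can block or deadlock the executor.
      async fn flagged(m: &Mutex<Vec<u8>>) {
          let guard = m.lock().unwrap();
          some_io().await;
          drop(guard);
      }

      // Fix: end the guard's scope before awaiting.
      async fn fixed(m: &Mutex<Vec<u8>>) {
          {
              let _guard = m.lock().unwrap();
              // ...synchronous work under the lock...
          }
          some_io().await;
      }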

      Don't Do Unsafe Things with Memory

      These catch leaks via mem_forget and require unsafe blocks to be documented with safety comments; the full list is in the config below.

      Don't Do Potentially Incorrect Things with Numbers

      The lints cast_possible_wrap, cast_precision_loss, and cast_possible_truncation effectively force you to document invariants when doing lossy casts between numeric types. You might or might not find that useful.
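
      For instance (a sketch with arbitrary values):

      fn main() {
          let len: usize = 300;
          // `len as u8` silently truncates 300 to 44; cast_possible_truncation flags it.
          let lossy = len as u8;
          // An explicit conversion policy satisfies the lint:
          let explicit = u8::try_from(len).unwrap_or(u8::MAX);
          println!("{lossy} {explicit}");
      }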

      Don't Do Bad Things That are Easy to Avoid

      This is a grab bag: Rc<Mutex<T>>, debug assertions with side effects, stray dbg! calls, and similar footguns listed in the config below.

      Don't allow Your Way Around These Lints

      These two are especially useful if you're using a coding agent. Instead of letting the agent write #[allow(lint_we_wanted_to_enable)], it should provide a reason wherever it's disabling a lint.
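
      A sketch of what they enforce (the lint and reason here are made up for illustration):

      // allow_attributes flags a bare `allow`:
      #[allow(clippy::indexing_slicing)]
      fn first(v: &[u8]) -> u8 { v[0] }

      // What it wants instead: a checked suppression with a documented reason.
      #[expect(clippy::indexing_slicing, reason = "caller guarantees non-empty input")]
      fn first_checked(v: &[u8]) -> u8 { v[0] }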

      Workaround for Workspace Inheritance

      If you're using a Cargo workspace, you'll want to enable these lints in the workspace Cargo.toml. Unfortunately, each workspace crate needs to opt in to inheriting lints with lints.workspace = true, rather than inheriting the lints by default. On nightly, there's a missing_lints_inheritance lint that specifically checks for this.
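
      Concretely, the opt-in is a two-line addition to each member crate's Cargo.toml:

      # member crate's Cargo.toml
      [lints]
      workspace = true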

      If you're using stable Rust, you can use cargo-workspace-lints or a simple shell script run on CI to make sure you don't forget to make a workspace crate inherit the lints.

      Warn or Deny?

      When enabling lints, you can either set Clippy to warn or deny them. Either works, but I personally prefer setting these to warn and running Clippy with -D warnings before committing and on CI. This makes local iteration marginally easier because you can compile your code initially without fixing all the lints right away.

      Note: if you set Clippy on CI to deny warnings, you should make sure to specify a specific Rust version. Otherwise, lints added in newer versions will cause your build to fail. (Thanks to u/scook0 for pointing this out!)

      My Configs

      # Workspace Cargo.toml
      
      [workspace.lints.clippy]
      # Don't Panic - prevent panics from unwraps and unsafe slicing or indexing
      string_slice = "warn"
      indexing_slicing = "warn"
      unwrap_used = "warn"
      panic = "warn"
      todo = "warn"
      unimplemented = "warn"
      unreachable = "warn"
      get_unwrap = "warn"
      unwrap_in_result = "warn"
      unchecked_time_subtraction = "warn"
      panic_in_result_fn = "warn"
      # Optional - see post for caveats
      # expect_used = "warn"
      # arithmetic_side_effects = "warn"
      
      # Don't Fail Silently - prevent dropped futures and swallowed errors
      let_underscore_future = "warn"
      let_underscore_must_use = "warn"
      unused_result_ok = "warn"
      map_err_ignore = "warn"
      assertions_on_result_states = "warn"
      
      # Don't Do Bad Async Stuff - prevent deadlocks and concurrency bugs
      await_holding_lock = "warn"
      await_holding_refcell_ref = "warn"
      if_let_mutex = "warn"  # only relevant on editions before 2024
      large_futures = "warn"
      
      # Don't Do Unsafe Things with Memory
      mem_forget = "warn"
      undocumented_unsafe_blocks = "warn"
      multiple_unsafe_ops_per_block = "warn"
      unnecessary_safety_doc = "warn"
      unnecessary_safety_comment = "warn"
      
      # Don't Do Potentially Incorrect Things with Numbers
      float_cmp = "warn"
      float_cmp_const = "warn"
      lossy_float_literal = "warn"
      cast_sign_loss = "warn"
      invalid_upcast_comparisons = "warn"
      # Optional - these effectively force you to document numeric invariants
      # cast_possible_wrap = "warn"
      # cast_precision_loss = "warn"
      # cast_possible_truncation = "warn"
      
      # Don't Do Bad Things That are Easy to Avoid
      rc_mutex = "warn"
      debug_assert_with_mut_call = "warn"
      iter_not_returning_iterator = "warn"
      expl_impl_clone_on_copy = "warn"
      infallible_try_from = "warn"
      dbg_macro = "warn"
      
      # Don't `allow` Your Way Around These Lints - every suppression must be
      # a deliberate #[expect(..., reason = "…")] rather than a silent #[allow]
      allow_attributes = "warn"
      allow_attributes_without_reason = "warn"
      
      # Workspace clippy.toml
      
      allow-indexing-slicing-in-tests = true
      allow-panic-in-tests = true
      allow-unwrap-in-tests = true
      allow-expect-in-tests = true
      allow-dbg-in-tests = true  
      

      Conclusion

      Ultimately, as Clippy's docs say, "You can choose how much Clippy is supposed to ~~annoy~~ help you." But especially in the age of coding agents, I think it's worth tightening the guardrails so you end up with even fewer mysterious bugs in production and more code where you can say "if it compiles and lints, it should work."


      Discuss on r/rust, Lobsters, or Hacker News.

      In response to this post, Billy Levin wrote up a case for enabling whole lint categories and disabling the specific lints you don't want: Your Clippy Config Should Be Stricter-er. If you found this post interesting, that one's worth a read before you decide which approach is best for you.


    32. šŸ”— Rust Blog Announcing Google Summer of Code 2026 selected projects rss

      As previously announced, the Rust Project is participating in Google Summer of Code (GSoC) 2026. GSoC is a global program organized by Google that is designed to bring new contributors to the world of open source.

      A few months ago, we published a list of GSoC project ideas, and started discussing these projects with potential GSoC applicants on our Zulip. We had many interesting discussions with the potential contributors, and even saw some of them making non-trivial contributions to various Rust Project repositories before GSoC officially started!

      The applicants prepared and submitted their project proposals by the end of March. This year, we received 96 proposals, which is a 50% increase from last year. We are glad that there was again a lot of interest in our projects! Like many other GSoC organizations this year, we somewhat struggled with some AI-generated proposals and low-quality contributions generated using AI agents, but it stayed manageable.

      GSoC requires us to produce an ordered list of the best proposals, which is always challenging, as Rust is a big project with many priorities. Our mentors examined the submitted proposals and evaluated them based on their prior interactions with the given applicant, their contributions so far, the quality of the proposal itself, and also the importance of the proposed project for the Rust Project and its wider community. We also had to take mentor bandwidth and availability into account. Unfortunately, we had to cancel some projects due to several mentors losing their funding for Rust work in the past few weeks.

      As is usual in GSoC, even though some project topics received multiple proposals[1], we had to pick only one proposal per project topic. We also had to choose between proposals targeting different work to avoid overloading a single mentor with multiple projects. In the end, we narrowed the list down to the best proposals that we could still realistically support with our available mentor pool. We submitted this list and eagerly awaited how many of them would be accepted into GSoC.

      Selected projects

      On the 30th of April, Google announced the accepted projects. We are happy to share that 13 Rust Project proposals were accepted by Google for Google Summer of Code 2026. That is a lot of projects! We are really happy and excited about GSoC 2026!

      Below you can find the list of accepted proposals (in alphabetical order), along with the names of their authors and the assigned mentor(s):

      Congratulations to all applicants whose project was selected! Our mentors are looking forward to working with you on these exciting projects to improve the Rust ecosystem. You can expect to hear from us soon, so that we can start coordinating the work on your GSoC projects.

      We are excited to mentor three contributors who already experienced GSoC with us in the previous year. Welcome back, Kei, Marcelo and Shourya!

      We would like to thank all the applicants whose proposal was sadly not accepted, for their interactions with the Rust community and contributions to various Rust projects. There were some great proposals that did not make the cut, in large part because of limited mentorship capacity. However, even if your proposal was not accepted, we would be happy if you would consider contributing to the projects that got you interested, even outside GSoC! Our project idea list is still current and could serve as a general entry point for contributors that would like to work on projects that would help the Rust Project and the Rust ecosystem. Some of the Rust Project Goals are also looking for help.

      There is a good chance we'll participate in GSoC next year as well (though we can't promise anything at this moment), so we hope to receive your proposals again in the future!

      The accepted GSoC projects will run for several months. After GSoC 2026 finishes (in autumn of 2026), we will publish a blog post in which we will summarize the outcome of the accepted projects.

      [1] The most popular project topic received fourteen different proposals!
    33. šŸ”— Console.dev newsletter goshs rss

      Description: Simple web server.

      What we like: Supports multiple protocols as well as HTTP, including SMB, DNS, WebDAV, SMTP. Includes file-based ACLs so you can use it to set up file sharing. SSL handled through Let’s Encrypt or by providing your own keys. Can embed static files. Written in Go so can be shipped as a single binary.

      What we dislike: The non-HTTP servers are mainly designed for pentesting and CTFs rather than as fully functional server replacements; it even includes a reverse shell generator, which is an odd digression for a web server. And if you want a pure Go web server, you’ll probably just use Caddy.

    34. šŸ”— Console.dev newsletter Quarkdown rss

      Description: Markdown meets LaTeX.

      What we like: Use Markdown to write typeset reports, docs, static websites, slides. Includes live preview with fast compilation so you can avoid LaTeX dependencies. Has enhancements like figures, formulae, code, bibliography. Include data from files and manipulate it with variables and scripting.

      What we dislike: Academic writing in LaTeX (or equivalent) is the dream, but most work really just happens in Word or Google Docs, especially if you’re collaborating with multiple authors!

    35. šŸ”— Servo Blog March in Servo: keyboard navigation, better debugging, FreeBSD support, and more! rss

      Servo 0.1.0 represents Servo’s biggest month ever, with a record 530 commits and our first ever release on crates.io! For security fixes, see § Security.

      With this release Servo becomes more accessible, thanks to tab navigation (@mrobinson, @Loirooriol, #42952, #43019, #43058, #43246, #43267, #43067), keyboard navigation with Alt+Shift and the accesskey attribute (@mrobinson, #43031, #43144, #43434), and keyboard scrolling with Space and Shift+Space (@mrobinson, #43322).

      We’ve shipped several new web platform features:

      Plus a bunch of new DOM APIs:

      [Screenshot: servoshell 0.1.0 showing several new features: <input type=range>; the character ā€œčæ”ā€ rendered differently depending on whether the ā€˜lang’ is ā€˜zh’, ā€˜ja’, or ā€˜ko’; the emoji ā€œāœˆļøā€ rendered on a 2D canvas in both emoji presentation and text presentation; the DevTools debugger showing live variable values; a text field with label ā€œDiffieā€ that can be focused with Alt+Shift+D; and examples of styling ā€˜::first-letter’, ā€˜::placeholder’, and ā€˜::file-selector-button’]

      servoshell is now installed as servoshell or servoshell.exe, rather than servo or servo.exe (@jschwe, @mrobinson, #42958). --userscripts has been removed for now, but anyone who uses it is welcome to reinstate it as a wrapper around UserContentManager::add_script (@jschwe, #43573). We’ve fixed a bug where link hover status lines are sometimes not legible (@simartin, #43320), and we’re working on getting servoshell signed for macOS to avoid getting blocked by Gatekeeper (@jschwe, #42912).

      After a long effort by @valpackett, @dlrobertson, and more recently @nortti0 and @sagudev (#43116, #43134), we can now build Servo for FreeBSD! Note that Servo 0.1.0 still has some issues that need to be worked around, but you can get all the details in #44601.

      [Screenshot: servoshell 0.1.0 showing the FreeBSD website and the Servo new tab page, alongside a terminal that ran fastfetch, showing that this is FreeBSD 15]

      A great deal of work went into making the crates.io release possible, including renaming libservo to just servo (@jschwe, #43141), making each package self-contained (@jschwe, #43180, #43165), fixing build issues (@delan, @jschwe, #43170, #43458, #43463) and crates.io compliance issues (@jschwe, #43459), configuring package metadata (@jschwe, @StaySafe020, #43078, #43264, #43451, #43457, #43654), and organising our dependency tree (@jschwe, @yezhizhen, @webbeef, @mrobinson, #42916, #43243, #43263, #43516, #43526, #43552, #43615, #43622, #43273, #43092). As a result, you can now take your first step towards embedding Servo in a Rust app with:

      $ cargo add servo
      

      This is another big update, so here’s an outline:

      • Security
      • Work in progress
      • For developers
      • Embedding and automation
      • More on the web platform
      • Performance and stability
      • Donations

      Security

      crypto.subtle.deriveBits() for X25519 checking for all-zero secrets, and verify() for HMAC comparing signatures, are now done in constant time (@kkoyung, #43775, #43773). ā€˜Content-Security-Policy’ now handles redirects correctly (@TimvdLippe, #43438), and sends violation reports with the correct blockedURI and referrer (@TimvdLippe, #43367, #43645, #43483). The policy in <meta> now combines with the policy sent in HTTP headers, rather than overriding it (@TimvdLippe, @elomscansio, #43063). When checking nonces, we now reject elements with duplicate attributes (@dyegoaurelio, #43216). The document containing an <iframe> can no longer access the contents of error pages (@TimvdLippe, #43539), and CSP violations inside an <iframe> are now correctly reported (@TimvdLippe, #43652).

      Work in progress

      We’ve landed more work towards supporting IndexedDB, under --pref dom_indexeddb_enabled (@arihant2math, @gterzian, @Taym95, @jerensl, #42139, #42727, #43096, #43041, #42451, #43721, #43754, #42786), and towards supporting IntersectionObserver, under --pref dom_intersection_observer_enabled (@stevennovaryo, @mrobinson, #42251).

      We’re continuing to implement document.execCommand() for rich text editing (@TimvdLippe, #43177), under --pref dom_exec_command_enabled. ā€˜beforeinput’ and ā€˜input’ events are now fired when executing supported and enabled commands (@TimvdLippe, #43087), the ā€˜defaultParagraphSeparator’ and ā€˜styleWithCSS’ commands are now supported (@TimvdLippe, #43028), and the ā€˜delete’ command is partially supported (@TimvdLippe, #43016, #43082).

      We’re also working on the Font Loading API (@simonwuelker, #43286), under --pref dom_fontface_enabled. new FontFace() now accepts ArrayBuffer in its source argument (@simonwuelker, #43281). All of the features above are enabled in servoshell’s experimental mode.

      Work on accessibility support for web contents continues under --pref accessibility_enabled. There was a breaking change in the embedding API (@delan, @alice, #43029), and we’ve landed support for ā€œgraftingā€ the accessibility tree of a document into that of its containing webview (@delan, @alice, #43012, #43013, #43556). As a result, when you navigate, separate documents can have separate accessibility trees without complicating the embedder.

      <link rel=modulepreload> is now partially supported (@Gae24, #42964), though recursive fetching of descendants is gated by --pref dom_allow_preloading_module_descendants (@Gae24, #43353).

      For a long time, Servo has had some support for the Web Bluetooth API under --pref dom_bluetooth_enabled. We’ve recently reworked our implementation to adopt btleplug, the cross-platform Rust-native Bluetooth LE library (@webbeef, #43529, #43581).

      We’re now implementing the Web Animations API, starting with AnimationTimeline and DocumentTimeline (@mrobinson, #43711).

      We’ve landed more fixes to Servo’s async parser (@simonwuelker, #42930, #42959), under --pref dom_servoparser_async_html_tokenizer_enabled. If we can get the feature working more reliably (#37418), it could halve the energy Servo spends on parsing, lower latency for pages that don’t use document.write(), and even improve the html5ever API for the ecosystem.

      For developers

      Servo’s DevTools feature now has partial support for inspecting service workers (@CynthiaOketch, #43659), as well as using the navigation controls along the top of the UI (@brentschroeter, @eerii, #43026).

      In the Inspector tab, we’ve fixed a bug where the UI stops updating when navigating to a new page (@brentschroeter, #43153).

      In the Console tab, you can now evaluate JavaScript in web workers and service workers (@SharanRP, #43361, #43492).

      In the Debugger tab, you can now Step In, Step Out, and Step Over (@eerii, @atbrakhi, #42907, #43040, #43042, #43135). We’ve landed partial support for the Scopes panel (@eerii, @atbrakhi, #43166, #43167, #43232), the Call stack panel (@atbrakhi, @eerii, #43015, #43039), and showing you information when hovering over objects, arrays, functions, and other values (@atbrakhi, @eerii, #43319, #43356, #43456, #42996, #42936, #42994).

      [Screenshot: the DevTools debugger, showing live variable values]

      We’ve fixed some long-outstanding bugs where the DevTools UI may stop responding due to protocol desyncs (@brentschroeter, @eerii, #43230, #43236), or due to messages from multiple Servo threads being interleaved (@brentschroeter, @eerii, #43472).

      For developers of Servo itself, mach can be a bit opaque at times. To make mach more transparent and composable, we’ve added mach print-env and mach exec commands (@jschwe, #42888).

      We’re also working on a new dev container, which will provide an alternative to our usual procedures for setting up a Servo build environment (@jschwe, @sagudev, #43127, #43131, #43139).

      Embedding and automation

      Breaking changes:

      • Servo::set_accessibility_active() is now WebView::set_accessibility_active() (@delan, @alice, #43029), to make the API harder to misuse (see the docs for more details).
      • What was previously named WebView::pinch_zoom() has been renamed to adjust_pinch_zoom(), and we’ve added a pinch_zoom() method that lets you read the current pinch zoom level (@chrisduerr, #43228).
      • WebView::set_delegate(), set_clipboard_delegate(), and set_gamepad_provider() are now WebViewBuilder::delegate(), clipboard_delegate(), and gamepad_delegate() (@mrobinson, #43205, #43233). Note that set_gamepad_provider() is now gamepad_delegate(), consistent with the GamepadProvider rename below.
      • WebViewDelegate::show_bluetooth_device_dialog() has been reworked to use the same ā€œrequest objectā€ pattern as the request_*() methods, giving you a BluetoothDeviceSelectionRequest with clear methods (@webbeef, #43580).
      • GamepadProvider has been renamed to GamepadDelegate, and gamepad_provider() on WebView has been renamed to gamepad_delegate() (@mrobinson, #43233).
      • The empty default implementation of EventLoopWaker::wake has been removed, because it almost never makes sense for a new custom impl to leave the method empty (@chrisduerr, @mrobinson, #43250).
      • Opts::print_pwm is now DiagnosticsLogging::progressive_web_metrics (@mrobinson, #43209).

      Removed from our API:

      • Opts::nonincremental_layout (@mrobinson, #43207) – no replacement. This only really worked in legacy layout.
      • Opts::user_stylesheets (@mrobinson, #43206) – use UserContentManager::add_stylesheet() instead. This is how servoshell’s --user-stylesheet option works.

      You can now read and write cookies with SiteDataManager::cookies_for_url() and set_cookie_for_url() (@longvatrong111, #43600). ClipboardDelegate and StringRequest are now exposed to the public API, allowing you to implement custom clipboard delegates (@jdm, @chrisduerr, #43203, #43261). You can pass your custom delegate to WebViewBuilder::clipboard_delegate(). You can now get the EmbedderControlId associated with an InputMethodControl by calling InputMethodControl::id() (@chrisduerr, #43248). PixelFormat now implements Debug (@chrisduerr, @mrobinson, #43249).

      We’ve improved the docs for Servo, ServoBuilder, WebViewBuilder, RenderingContext (@chrisduerr, #43229), EmbedderControlId, EmbedderControlRequest, EmbedderControlResponse, SimpleDialogRequest, AlertResponse, ConfirmResponse, PromptResponse, EmbedderMsg (@mukilan, #43564), ResourceReaderMethods (@jschwe, @mrobinson, #43769), servo::input_events (@mukilan, #43681), and WheelDelta (@yezhizhen, @mrobinson, #43210).

      We fixed a deadlock in WebDriver that occurs under heavy use of actions from multiple input sources (@yezhizhen, #43202, #43169, #43262, #43275, #43301), and ā€˜pointerMove’ actions with a ā€˜duration’ are now smoothly interpolated (@yezhizhen, #42946, #43076). Add Cookie is now more conformant (@yezhizhen, #43690), which led to Servo developers landing a spec patch. ā€˜pause’ actions are now slightly more efficient (@yezhizhen, #43014), and we’ve fixed a bug where ā€˜wheel’ actions fail to interleave with other actions (@yezhizhen, #43126).

      More on the web platform

      Carets now blink in text fields (@mrobinson, #43128). You can configure or disable blinking carets with --pref editing_caret_blink_time=0 or a duration in milliseconds. Clicking to move the caret is more forgiving now (@mrobinson, #43238), and moving the caret by a word at a time is more conventional on Windows and Linux, with Ctrl instead of Alt (@mrobinson, #43436). We’ve also fixed a bug where pressing the arrow keys in text fields both moves the caret (good) and scrolls the page (bad), and fixed a bug where the caret fails to render on empty lines (@mrobinson, @freyacodes, #43247, #42218).

      Input has improved, with more responsive touchpad scrolling on Linux (@mrobinson, @chrisduerr, #43350). Pointer events and mouse events can now be captured across shadow DOM boundaries (@simonwuelker, #42987), and we’ve now started working towards shadow-DOM-compatible focus (@mrobinson, #43811). Pressing Space or Enter inside text fields no longer causes them to be clicked (@mrobinson, #43343).

      The lang attribute is now taken into account when shaping, which is important for the correct rendering of Chinese and Japanese text (@RichardTjokroutomo, @mrobinson, #43447). ā€˜font-weight’ is now matched more accurately when no available font is an exact match (@shubhamg13, #43125).

      Navigation is one of the most complicated parts of HTML: navigating can run some JavaScript that replaces the page, just run some JavaScript, or depending on the response, do nothing at all. <iframe> makes navigation doubly complicated: the document containing an <iframe> can observe and interact with the document inside the <iframe> in various ways, often synchronously. This has been the source of many bugs over the years, but we’ve recently fixed one of those major issues (@jdm, #43496).

      [Screenshot: the HTML specification, showing that ā€œthe javascript: URL special caseā€ is referenced in eight other sections]

      [Screenshot: the HTML specification, showing that ā€œis initial about:blankā€ is referenced in eighteen other sections]

      javascript: URLs are a massive special case with many quirks, and <iframe> has its own big edge cases.

      new Worker() now supports JS modules (@pylbrecht, @Gae24, #40365), and CanvasRenderingContext2D now supports drawing text with Variation Selectors, allowing you to control things like emoji presentation and CJK shaping (@yezhizhen, #43449).

      Servo now fires ā€˜pointerover’, ā€˜pointerout’, ā€˜pointerenter’, and ā€˜pointerleave’ events on web content (@webbeef, #42736), ā€˜scroll’ events on VisualViewport (@stevennovaryo, #42771), and ā€˜scrollend’ events on Document, Element, and VisualViewport (@abdelrahman1234567, @mrobinson, #38773). We also fire ā€˜error’ events when event handler attributes contain syntax errors (@simonwuelker, #43178).

      We’ve improved the default appearance of <summary> (@Loirooriol, #43111), <select> (@lukewarlow, #43175), <input type=file> (@lukewarlow, @AlexVasiluta, #43498, #43186), and <textarea> and <input type=text> and friends (@mrobinson, #43132), plus ā€˜::marker’ in mixed LTR/RTL content (@Loirooriol, #43201). <select> also now requires user interaction to open the picker (@SharanRP, #43485).

      <form action>, <iframe src>, open(url) on XMLHttpRequest, new EventSource(url), and new Worker(url) now correctly resolve the URL with the page encoding (@SharanRP, @jdm, @jayant911, @Veercodeprog, @sabbCodes, #43521, #43554, #43572, #43537, #43634, #43588).

      ā€˜direction’ now works on grid containers (@nicoburns, #42118), SVG images can now be used in ā€˜border-image’ (@shubhamg13, #42566), ā€˜linear-gradient()’ now dithers to reduce banding (@Messi002, #43603), ā€˜letter-spacing’ no longer applies to invisible zero-width formatting characters (@simonwuelker, #42961), and ā€˜:active’ now matches disabled or non-focusable elements too, as long as they are being clicked (@webbeef, #42935).

      DOMContentLoaded timings in PerformanceNavigationTiming are more accurate (@simonwuelker, #43151). PerformancePaintTiming and LargestContentfulPaint are more accurate too, taking <iframe> into account (@shubhamg13, #42149), and checking for and ignoring things like broken images and transparent backgrounds (@shubhamg13, #42833, #42975, #43475).

      We’ve improved the conformance of JS modules (@Gae24, #43585), <button command> (@lukewarlow, #42883), <font size> (@shubhamg13, #43103), <link media> and <link type> (@TimvdLippe, #43043), <option selected> (@SharanRP, #43582), <script integrity> and <style integrity> (@Gae24, #42931), EventSource (@mishop-15, #42179), SubtleCrypto (@kkoyung, #42984, #43315, #43533, #43519), Worker (@simonwuelker, #43329), HTMLVideoElement (@shubhamg13, #43341), dataset on Element (@TimvdLippe, #43046), and querySelector() and querySelectorAll() (@simonwuelker, #42991).

      We’ve fixed bugs related to error reporting (@simonwuelker, @xZaisk, @yezhizhen, @eyupcanakman, #43191, #43323, #43101, #43560), event loops (@jayant911, #43523), focus (@jakubadamw, #43431), quirks mode (@mrobinson, @Loirooriol, @lukewarlow, #42960, #43368), <iframe> (@TimvdLippe, @jdm, #43539, #43732), the ā€˜animationstart’ and ā€˜animationend’ events (@simonwuelker, #43454), the ā€˜touchmove’ event (@yezhizhen, #42926), CanvasRenderingContext2D (@simonwuelker, #43218), Worker (@bruno-j-nicoletti, #43213), ā€˜:active’ on <input> (@mrobinson, #43722), ā€˜overflow: scroll’ on ā€˜::before’ and ā€˜::after’ (@stevennovaryo, #43231), ā€˜position: absolute’ (@yoursanonymous, @Loirooriol, #43084), and <img> and <svg> without width or height attributes (@Loirooriol, #42666). Fixing that last bug led to Servo developers finding two spec issues!

      We’ve landed partial support for using CSS counters in ā€˜list-style-type’ on ā€˜display: list-item’ and ā€˜content’ on ā€˜::marker’, but the counter values themselves are not calculated yet, so all list items still read as 0. or similar. In any case, you can use a counter style name or ā€˜symbols()’ in ā€˜list-style-type’, and ā€˜counter()’ and ā€˜counters()’ in ā€˜content’ (@Loirooriol, #43111).

      We’ve also landed partial support for <marquee> and the HTMLMarqueeElement interface, including basic layout, but the contents are not animated yet (@mrobinson, @lukewarlow, #43520, #43610).

      Servo now exposes several attributes that have no direct effect, but are needed for web compatibility (@lukewarlow, #43500, #43499, #43502, #43518):

      • noHref on HTMLAreaElement
      • hreflang, type, charset on HTMLAnchorElement
      • useMap on HTMLInputElement and HTMLObjectElement
      • longDesc on HTMLIFrameElement and HTMLFrameElement

      Performance and stability

      We’ve fixed sluggish scrolling on long documents like this page on docs.rs (@webbeef, @yezhizhen, #43074, #43138), and reduced the memory usage of BoxFragment by 10% (@stevennovaryo, #43056). about:memory now has a Force GC button (@webbeef, #42798), and no longer reports all processes as content processes in multiprocess mode (@webbeef, #42923).

      Web fonts are no longer fetched more than once, and they no longer cause reflow when they fail to load (@minghuaw, #43382, #43595). We’re also working towards better caching for shaping results (@mrobinson, @lukewarlow, @Loirooriol, #43653). Event handler attribute lookup is more efficient now (@Narfinger, #43337), and we’ve made DOM tree walking more efficient in many cases (@Narfinger, #42781, #42978, #43476). crypto.subtle.encrypt(), decrypt(), sign(), verify(), digest(), importKey(), unwrapKey(), decapsulateKey(), and decapsulateBits() are more efficient now (@kkoyung, #42927), thanks to a recent spec update.

      More of Servo now uses cheaper crossbeam channels instead of IPC channels, unless Servo is running in multiprocess mode, or avoids IPC altogether (@Narfinger, @jschwe, @Taym95, #42077, #43309, #42966). We’ve also reduced clones, allocations, conversions, comparisons, and borrow checks in many parts of Servo (@simonwuelker, @kkoyung, @mrobinson, @Narfinger, @yezhizhen, @TG199, #43212, #43055, #43066, #43304, #43452, #43717, #43780, #43088, #43226).

      DOM data structures (#[dom_struct]) can refer to one another, with the help of garbage collection. But when DOM objects are being destroyed, those references can become invalid for a brief moment, depending on the order the GC finalizers run in. This can be unsound if those references are accessed, which is a very easy mistake to make if the type has an impl Drop. To help prevent that class of bug, we’re reworking our DOM types so that none of them have #[dom_struct] and impl Drop at the same time (@willypuzzle, #42937, #42982, #43018, #43071, #43222, #43288, #43544, #43563, #43631).

      We’ve fixed a crash caused by an IPC resource leak when making many requests over time (@yezhizhen, #43381), and some bugs found by ThreadSanitizer and --debug-mozjs (@jdm, @Loirooriol, #42976, #42963, #43487). We’ve also fixed crashes in CanvasRenderingContext2D (@yezhizhen, #43449), Crypto (@rogerkorantenng, #43501), devtools (@simonwuelker, #43133), event handler attributes (@simonwuelker, #43178), Promise (@Narfinger, @jdm, #43470), and WebDriver (@Tarmil, @yezhizhen, #42739, #43381).

      We’ve continued our long-running effort to use the Rust type system to make certain kinds of dynamic borrow failures impossible (@Narfinger, @Gae24, @Uiniel, @TimvdLippe, @yezhizhen, @sagudev, @PuercoPop, @pylbrecht, @arabson99, @jayant911, #42957, #43108, #43130, #43215, #43183, #43219, #43245, #43220, #43252, #43268, #43184, #43277, #43278, #43284, #43302, #43312, #43348, #43327, #43362, #43365, #43383, #43432, #43259, #43439, #43473, #43481, #43480, #43479, #43525, #43535, #43543, #43549, #43570, #43571, #43569, #43579, #43584, #43657, #43713).

      Thanks to a wide range of people, many of whom were contributing to Servo for the first time, we’ve also landed a bunch of architectural improvements (@elomscansio, @mukilan, #43646), cleanups (@simartin, @SharanRP, @TG199, @sabbCodes, @niyabits, @eerii, @atbrakhi, #43276, #43285, #43532, #43778, #43771, #43566, #43567, #43587, #43140, #43316), and refactors (@sabbCodes, @arabson99, @jayant911, @StaySafe020, @saydmateen, @eerii, @TimvdLippe, @elomscansio, @CynthiaOketch, #43614, #43641, #43619, #43642, #43623, #43656, #43644, #43672, #43664, #43676, #43684, #43679, #43678, #43655, #43675, #43731, #43729, #43728, #43740, #43751, #43748, #43747, #43752, #43745, #43724, #43723, #43765, #43767, #43181, #43269, #43270, #43279, #43437, #43597, #43607, #43602, #43616, #43609, #43612, #43647, #43651, #43662, #43714, #43774).

      Donations

      Thanks again for your generous support! We are now receiving 7167 USD/month (+2.6% from February) in recurring donations. This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and maintainer work that helps more people contribute to Servo.

      Servo is also on thanks.dev, and already 37 GitHub users (+5 from February) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.

      We now have sponsorship tiers that allow you or your organisation to donate to the Servo project with public acknowledgement of your support. If you’re interested in this kind of sponsorship, please contact us at join@servo.org.

      [Donation meter: 7167 USD/month towards a 10,000 USD/month goal]

      Use of donations is decided transparently via the Technical Steering Committee’s public funding request process , and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.

  3. April 29, 2026
    1. šŸ”— IDA Plugin Updates IDA Plugin Updates on 2026-04-29 rss

      IDA Plugin Updates on 2026-04-29

      Activity:

      • ida-mcp-server
        • 13f82c62: fix: function-size pre-filter (16 KB threshold) restores MAX_FUNCSIZE…
        • eb63c538: fix: extend pathological-func pre-filter for Rust deep generics
        • 86e2d687: feat: lazy-init C++ class recovery on first decompile
        • f384fe25: feat: add Itanium C++ ABI class recovery tool (recover_cpp_classes)
        • e8112416: feat: tier-4 raw disassembly fallback - guarantees 100% coverage
        • ed57dab4: feat: handle extern symbols + bump MAX_FUNCSIZE for "too big function"
        • dbf27026: fix: handle thunks/trampolines + null-JSON in decompile_function
        • 6ac2f0e4: fix: tighten Go-symbol regex - require trailing '.' to avoid C++ fals…
      • python-elpida_core.py
        • 5a88e62b: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T23:37Z
        • 15a038c2: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T23:15Z
        • 6698cd11: HERMES correction note: clear stale items before daily-13
        • 7628fb89: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T22:53Z
        • 2d9c8c09: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T22:30Z
        • d24ffc93: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T22:05Z
        • f184eab8: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T21:40Z
        • 9f5d78c2: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T21:12Z
        • c7e897f3: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T20:44Z
        • bd98077e: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-29T20:20Z
      • quokka
        • 43316396: Merge pull request #110 from quarkslab/dependabot/github_actions/acti…
    2. šŸ”— r/Leeds Card shops in Leeds rss

      Hiya!

      Currently visiting from Sweden through my uni, training to be a teacher. Anyway, I wondered if there are any good TCG/MTG stores in the area that are welcoming and friendly?

      Preferably somewhere that sells/buys singles too, and that has good prices and assortment as well!

      Cheers!

      submitted by /u/DarkGreenPenguin
      [link] [comments]

    3. šŸ”— HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      ## New releases
      - [clang-include](https://github.com/oxikkk/ida-clang-include): 1.1.0
      
    4. šŸ”— idank/explainshell db-latest release

      No content.

    5. šŸ”— r/Yorkshire Flamborough Cliffs rss

      Flamborough Cliffs | The amazing cliffs today at Flamborough | submitted by /u/J_1989_EDI
      [link] [comments]

    6. šŸ”— r/Yorkshire How driving Yorkshire Dales B road in the evening is like rss

      How driving Yorkshire Dales B road in the evening is like | submitted by /u/alanas4201
      [link] [comments]

    7. šŸ”— Simon Willison LLM 0.32a0 is a major backwards-compatible refactor rss

      I just released LLM 0.32a0, an alpha release of my LLM Python library and CLI tool for accessing LLMs, with some consequential changes that I've been working towards for quite a while.

      Previous versions of LLM modeled the world in terms of prompts and responses. Send the model a text prompt, get back a text response.

      import llm
      
      model = llm.get_model("gpt-5.5")
      response = model.prompt("Capital of France?")
      print(response.text())

      This made sense when I started working on the library back in April 2023. A lot has changed since then!

      LLM provides an abstraction over thousands of different models via its plugin system. The original abstraction - of text input that returns text output - was no longer able to represent everything I needed it to.

      Over time LLM itself has grown attachments to handle image, audio, and video input, then schemas for outputting structured JSON, then tools for executing tool calls. Meanwhile LLMs kept evolving, adding reasoning support and the ability to return images and all kinds of other interesting capabilities.

      LLM needs to evolve to better handle the diversity of input and output types that can be processed by today's frontier models.

      The 0.32a0 alpha has two key changes: model inputs can be represented as a sequence of messages, and model responses can be composed of a stream of differently typed parts.

      Prompts as a sequence of messages

      LLMs accept input as text, but ever since ChatGPT demonstrated the value of a two-way conversational interface, the most common way to prompt them has been to treat that input as a sequence of conversational turns.

      The first turn might look like this:

      user: Capital of France?
      assistant: 
      

      (The model then gets to fill out the reply from the assistant.)

      But each subsequent turn needs to replay the entire conversation up to that point, as a sort of screenplay:

      user: Capital of France?
      assistant: Paris
      user: Germany?
      assistant:
      

      Most of the JSON APIs from the major vendors follow this pattern. Here's what the above looks like using the OpenAI chat completions API, which has been widely imitated by other providers:

      curl https://api.openai.com/v1/chat/completions \
        -H "Authorization: Bearer $OPENAI_API_KEY" \
        -H "Content-Type: application/json" \
        -d '{
          "model": "gpt-5.5",
          "messages": [
            {
              "role": "user",
              "content": "Capital of France?"
            },
            {
              "role": "assistant",
              "content": "Paris"
            },
            {
              "role": "user",
              "content": "Germany?"
            }
          ]
        }'

      Prior to 0.32, LLM modeled these as conversations:

      model = llm.get_model("gpt-5.5")
      
      conversation = model.conversation()
      r1 = conversation.prompt("Capital of France?")
      print(r1.text())
      # Outputs "Paris"
      
      r2 = conversation.prompt("Germany?")
      print(r2.text())
      # Outputs "Berlin"

      This worked if you were building a conversation with the model from scratch, but it didn't provide a way to feed in a previous conversation from the start. This made tasks like building an emulation of the OpenAI chat completions API much harder than they should have been.

      The llm CLI tool worked around this through a custom mechanism for persisting and inflating conversations using SQLite, but that never became a stable part of the LLM API - and there are many places you might want to use the Python library without committing to SQLite as the storage layer.

      The new alpha now supports this:

      import llm
      from llm import user, assistant
      
      model = llm.get_model("gpt-5.5")
      
      response = model.prompt(messages=[
          user("Capital of France?"),
          assistant("Paris"),
          user("Germany?"),
      ])
      print(response.text())

      The llm.user() and llm.assistant() functions are new builder functions designed to be used within that messages=[] array.

      The previous prompt= option still works, but LLM upgrades it to a single-item messages array behind the scenes.

      You can also now reply to a response, as an alternative to building a conversation:

      response2 = response.reply("How about Hungary?")
      print(response2) # Default __str__() calls .text()

      Streaming parts

      The other major new interface in the alpha concerns streaming results back from a prompt.

      Previously, LLM supported streaming like this:

      response = model.prompt("Generate an SVG of a pelican riding a bicycle")
      for chunk in response:
          print(chunk, end="")

      Or this async variant:

      import asyncio
      import llm
      
      model = llm.get_async_model("gpt-5.5")
      response = model.prompt("Generate an SVG of a pelican riding a bicycle")
      
      async def run():
          async for chunk in response:
              print(chunk, end="", flush=True)
      
      asyncio.run(run())

      Many of today's models return mixed types of content. A prompt run against Claude might return reasoning output, then text, then a JSON request for a tool call, then more text content.

      Some models can even execute tools on the server-side, for example OpenAI's code interpreter tool or Anthropic's web search. This means the results from the model can combine text, tool calls, tool outputs and other formats.

      Multi-modal output models are starting to emerge too, which can return images or even snippets of audio intermixed into that streaming response.

      The new LLM alpha models these as a stream of typed message parts. Here's what that looks like as a Python API consumer:

      import asyncio
      import llm
      
      model = llm.get_model("gpt-5.5")
      prompt = "invent 3 cool dogs, first talk about your motivations"
      
      def describe_dog(name: str, bio: str) -> str:
          """Record the name and biography of a hypothetical dog."""
          return f"{name}: {bio}"
      
      def sync_example():
          response = model.prompt(
              prompt,
              tools=[describe_dog],
          )
          for event in response.stream_events():
              if event.type == "text":
                  print(event.chunk, end="", flush=True)
              elif event.type == "tool_call_name":
                  print(f"\nTool call: {event.chunk}(", end="", flush=True)
              elif event.type == "tool_call_args":
                  print(event.chunk, end="", flush=True)
      
      async def async_example():
          model = llm.get_async_model("gpt-5.5")
          response = model.prompt(
              prompt,
              tools=[describe_dog],
          )
          async for event in response.astream_events():
              if event.type == "text":
                  print(event.chunk, end="", flush=True)
              elif event.type == "tool_call_name":
                  print(f"\nTool call: {event.chunk}(", end="", flush=True)
              elif event.type == "tool_call_args":
                  print(event.chunk, end="", flush=True)
      
      sync_example()
      asyncio.run(async_example())

      Sample output (from just the first sync example):

      My motivation: create three memorable dogs with distinct ā€œcoolā€ styles—one cinematic, one adventurous, and one charmingly chaotic—so each feels like they could star in their own story.
      Tool call: describe_dog({"name": "Nova Jetpaw", "bio": "A sleek silver-gray whippet who wears tiny aviator goggles and loves sprinting along moonlit beaches. Nova is fearless, elegant, and rumored to outrun drones just for fun."}
      Tool call: describe_dog({"name": "Mochi Thunderbark", "bio": "A fluffy corgi with a dramatic black-and-gold bandana and the confidence of a rock star. Mochi is short, loud, loyal, and leads a neighborhood 'security patrol' made entirely of squirrels."}
      Tool call: describe_dog({"name": "Atlas Snowfang", "bio": "A massive white husky with ice-blue eyes and a backpack full of trail snacks. Atlas is calm, heroic, and always knows the way home—even during blizzards, fog, or confusing camping trips."}

      At the end of the response you can call response.execute_tool_calls() to actually run the functions that were requested, or send a response.reply() to have those tools called and their return values sent back to the model:

      print(response.reply("Tell me about the dogs"))

      This new mechanism for streaming different token types means the CLI tool can now display "thinking" text in a different color from the text in the final response. The thinking text goes to stderr so it won't affect results that are piped into other tools.

      This example uses Claude Sonnet 4.6 (with an updated streaming event version of the llm-anthropic plugin) as Anthropic's models return their reasoning text as part of the response:

      llm -m claude-sonnet-4.6 'Think about 3 cool dogs then describe them' \
        -o thinking_display 1

      [Animated demo: running uv run llm -m claude-sonnet-4.6 'Think about 3 cool dogs then describe them' -o thinking_display 1; the thinking text streams in grey ("The user wants me to think about 3 cool dogs and then describe them. Let me come up with 3 interesting, cool dogs and describe them."), then the output describing the dogs streams in the regular colour]

      You can suppress the output of reasoning tokens using the new -R/--no-reasoning flag. Surprisingly that ended up being the only CLI-facing change in this release.

      A mechanism for serializing and deserializing responses

      As mentioned earlier, LLM has quite inflexible code at the moment for persisting conversations to SQLite. I've added a new mechanism in 0.32a0 that should provide Python API users a way to roll their own alternative:

      serializable = response.to_dict()
      # serializable is a JSON-style dictionary
      # store it anywhere you like, then inflate it:
      response = Response.from_dict(serializable)

      The dictionary this returns is actually a TypedDict defined in the new llm/serialization.py module.

      What's next?

      I'm releasing this as an alpha so I can upgrade various plugins and exercise the new design in real world environments for a few days. I expect the stable 0.32 release will be very similar to this alpha, unless alpha testing reveals some design flaw in the way I've put this all together.

      There's one remaining large task: I'd like to redesign the SQLite logging system to better capture the more finely grained details that are returned by this new abstraction.

      Ideally I'd like to model this as a graph, to best support situations like an OpenAI-style chat completions API where the same conversations are constantly extended and then repeated with every prompt. I want to be able to store those without duplicating them in the database.

      I'm undecided as to whether that should be a feature in 0.32 or I should hold it for 0.33.


    8. šŸ”— sacha chua :: living an awesome life What's in the Emacs newcomers-presets theme? rss

      The development version of Emacs as of Feb 2026 includes a newcomers-presets theme that can be enabled from the splash screen or by using M-x load-theme RET newcomers-presets RET. (Not sure how to run that command? Start with the guided tour/tutorial or choose "Help - Tutorial" from the Emacs menu.)

      Figure 1: Newcomer presets are on the splash screen

      If you like it and want to make it automatically enabled in future Emacs sessions:

      1. Use M-x customize-themes
      2. Select the checkbox next to newcomers-presets by either clicking on it or using TAB to navigate to it and then pressing RET.
      3. Click on or use RET to select Save Theme Settings.
      Figure 2: Saving the theme setting

      I'm not sure if someone else has made notes on what it does yet, so I thought I'd put this together.

      Most Emacs newbies aren't running the development version of Emacs at the moment, but it will eventually make its way into Emacs 31. I wonder if it might be a good idea to extract the theme as a package that people can install with use-package if they want. I am not entirely sure about using themes for this, but it's worth an experiment.

      Here's a list of what newcomers-presets includes. I'll also include the corresponding Emacs Lisp in case you want to copy just that part, or you can also get it as copy-of-newcomers-presets.el. If you want to load it in your existing Emacs, you can add (load-file "path/to/copy-of-newcomers-presets.el") to your InitFile. You can use C-h f (describe-function) or C-h v (describe-variable) to learn more about the functions or variables it changes. I'm manually making this page, so there might have been some changes to etc/themes/newcomers-presets-theme.el since then.

      ;; -*- lexical-binding: t -*-
      ;; Based on https://github.com/emacs-mirror/emacs/tree/master/etc/themes/newcomers-presets-theme.el
      

      Editing and navigation

      When you select text by pressing C-SPC (set-mark-command) and then moving to the end of the text you want to select, typing new text replaces the selection.

      (setopt delete-selection-mode t)
      

      New text replaces the selection

      Copying works better between Emacs and other applications: before a kill in Emacs overwrites the system clipboard, the existing clipboard text is saved to the kill ring, so you don't lose it.

      (setopt save-interprogram-paste-before-kill t)
      

      If you have a compatible spellchecker installed (Hunspell, Aspell, Ispell, or Enchant), Emacs will check your spelling and underline errors using flyspell-mode. You can use M-x ispell-change-dictionary to change the language if you have the appropriate dictionary installed. In code buffers, the spelling is checked in comments and strings. You can also use flyspell-goto-next-error (C-,) to go to the next misspelled word and flyspell-auto-correct-word (C-M-i) to fix it. More info: Spelling (info "(emacs) Spelling").

      2026-04-30_09-36-20.png
      Figure 3: A wavy red underline shows potentially misspelled words; right-click on them to correct them or add them to the dictionary
      (add-hook 'text-mode-hook 'flyspell-mode)
      (add-hook 'prog-mode-hook 'flyspell-prog-mode)
      

      Imenu entries are automatically updated based on the structure of the current buffer or file (ex: outline headings, function names). You can list them with M-x imenu or add them to the menu bar with M-x imenu-add-to-menubar.

      (setopt imenu-auto-rescan t)
      

      When you visit a read-only file, it will be in view mode, so you can use SPC to scroll. This affects buffers for files that you don't have permission to change as well as buffers that you make read-only using C-x C-q (read-only-mode).

      (setopt view-read-only t)
      

      Keyboard shortcuts

      Some commands allow you to use just the last part of the keyboard shortcut in order to repeat them. Related: Repeat Mode: Stop Repeating Yourself | Emacs Redux

      (setopt repeat-mode t)
      

      Appearance

      Scrolling happens more smoothly instead of jumping line by line.

      (setopt pixel-scroll-mode t)
      

      Line numbers are shown in both text and code buffers.

      (add-hook 'prog-mode-hook 'display-line-numbers-mode)
      (add-hook 'text-mode-hook 'display-line-numbers-mode)
      

      Column numbers are shown in the mode line.

      (setopt column-number-mode t)
      

      If you change your system-wide fixed-width font, Emacs will also update to the system-defined font dynamically.

      (setopt font-use-system-font t)
      

      You can resize your frames or windows to any size instead of being limited to whole-character steps.

      (setopt frame-resize-pixelwise t)
      (setopt window-resize-pixelwise t)
      

      The frame size will stay the same even if you change the font, menu bar, tool bar, tab bar, internal borders, fringes, or scroll bars.

      (setopt frame-inhibit-implied-resize t)
      

      If a mode line is wider than the currently selected window, it is compressed by replacing repeating spaces with a single space.

      (setopt mode-line-compact 'long)
      

      Saving data between sessions

      Minibuffer history is saved between Emacs sessions so you can use M-x and then use M-p and M-n to navigate your history.

      (setopt savehist-mode t)
      

      Your place in a file is saved between Emacs sessions.

      (setopt save-place-mode t)
      

      Your recently-opened files are saved between Emacs sessions, so you can use M-x find-file and other commands and then use M-p and M-n to navigate your history.
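
      I believe the snippet for this one is recentf-mode:

      (setopt recentf-mode t)
      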

      Completion

      This set of options affects the completion candidates (the suggestions that appear when you press M-x and then TAB, or when you use TAB at other prompts).

      You can use the arrow keys to select completion candidates in the minibuffer, and you can use RET to select the highlighted one.

      (setopt minibuffer-visible-completions t)
      

      Additional details for completion suggestions are shown before or after the suggestions. For example, M-x describe-symbol (C-h o) shows additional information.

      (setopt completions-detailed t)
      

      Completion candidates can be grouped together if the function that sets up the completion specifies it.

      (setopt completions-group t)
      

      When you press TAB to see the completion candidates for a prompt (for example, M-x and then TAB), the first TAB will display the completion list, and the second TAB will switch to the Completions buffer.

      (setopt completion-auto-select 'second-tab)
      

      This Completions buffer will update as you type so that you can narrow down the candidates.

      (setopt completion-eager-update t)
      

      The following completion styles are set up:

      • basic: You can type the start of a candidate. (ex: abc will list abcde and abcxyz)
      • partial-completion: You can specify multiple words and each word will be considered as the prefix for matching candidates. For example, if you type a-b, that will match apple-banana if it is one of the options.
      • emacs22: When you move your point to the middle of some text and then complete, the text before your point is used to filter the completion and the text after your point is added to the end of the result.

      More info: Completion styles

      (setopt completion-styles '(basic emacs22 flex))
      

      Automatically show the completion preview based on the text at point. TAB accepts the completion suggestion and M-i completes the longest common prefix.

      (setopt global-completion-preview-mode t)
      

      TAB first tries to indent the current line. If the line was already indented, then Emacs tries to complete the thing at point. Some programming language modes have their own variable to control this, e.g., c-tab-always-indent, so it might need additional customization.

      (setopt tab-always-indent 'complete)
      

      Help

      If you pause after typing the first part of a keyboard shortcut (ex: C-c), Emacs will display the keyboard shortcuts that you can continue with.

      (setopt which-key-mode t)
      

      Tab bar

      The tab bar is always shown. Tabs let you save the way you have one or more windows arranged, and which buffers are displayed in those windows. You can click on a tab or use M-x tab-switch to switch to that configuration, or click on the + sign or use M-x tab-new to add another tab. More info: Tab Bars (info "(emacs) Tab Bars")

      (setopt tab-bar-show 0)
      
      2026-04-30_09-15-18.png
      Figure 4: The tab bar is displayed at the top of a buffer.

      Window configurations within each tab are remembered, so you can use tab-bar-history-back and tab-bar-history-forward to step back and forward through your window layout changes in a tab.

      (setopt tab-bar-history-mode t)
      

      The Dired file manager

      Dired buffers are refreshed whenever you revisit a directory.

      (setopt dired-auto-revert-buffer t)
      

      You can use the mouse to drag files in Dired. Ctrl+leftdrag copies the file, Shift+leftdrag moves it, Meta+leftdrag links it. You can also drag them to other applications on X11, Haiku, Mac OS, and GNUstep.

      (setopt dired-mouse-drag-files t)
      

      Show the current directory when prompting for a shell command. This affects shell-command and async-shell-command.

      (setopt shell-command-prompt-show-cwd t)
      

      Package management

      If you open a file for which GNU ELPA or NonGNU ELPA has an optional package that provides extra support, Emacs will add [Upgrade?] to the mode line to make it easier to install the appropriate package.

      2026-04-30_09-06-18.png
      Figure 6: Package autosuggest adds an Upgrade? to the modeline when you open a file for which Emacs has an optional package available
      (setopt package-autosuggest-mode t)
      

      When you're working with M-x list-packages, x (M-x package-menu-execute) now requires you to select something instead of acting on the current package by default. Press i (package-menu-mark-install) to mark a package for installation, press d (package-menu-mark-delete) to mark a package for deletion, press u (package-menu-mark-unmark) to unmark a package, and press x (package-menu-execute) to execute the operations.

      (setopt package-menu-use-current-if-no-marks nil)
      

      Code

      In code buffers, Emacs will display errors and warnings by using flymake-mode.

      (add-hook 'prog-mode-hook 'flymake-mode)
      

      If you use M-x compile, the *compilation* window will scroll as new output appears, but it will stop at the first error so that you can investigate more easily.

      (setopt compilation-scroll-output 'first-error)
      

      You can Ctrl+leftclick on a function name to jump to its definition using xref-find-definitions-at-mouse.

      (setopt global-xref-mouse-mode t)
      

      Emacs will automatically insert matching parentheses, brackets, and braces.

      (setopt electric-pair-mode t)
      

      Emacs will generally use spaces instead of tabs when indenting code.

      (setopt indent-tabs-mode nil)
      

      If there is a project-specific .editorconfig file, Emacs will follow those settings. (More about EditorConfig)

      (setopt editorconfig-mode t)
      

      Tags tables are automatically regenerated whenever you save files. This uses Etags to make it easier to jump to the definitions of functions or variables.

      (setopt etags-regen-mode t)
      

      Version control

      Files are reloaded from disk if they have been updated by your version control system.

      (setopt vc-auto-revert-mode t)
      

      If a directory has changed in version control but you have some modified files, Emacs will ask if you want to save those changed files.

      (setopt vc-dir-save-some-buffers-on-revert t)
      

      If you use vc-find-revision to go to a specific version of the file, it is displayed in a temporary buffer and does not replace the copy that you currently have.

      (setopt vc-find-revision-no-save t)
      

      If you open a symbolic link to a file under version control, Emacs will open the real file and display a message. That way, it will still be version-controlled.

      (setopt vc-follow-symlinks t)
      

      C-x v I and C-x v O now have additional keyboard shortcuts. For example, C-x v I L is vc-root-log-incoming and C-x v O L is vc-root-log-outgoing. Use C-x v I C-h and C-x v O C-h to see other commands.

      (setopt vc-use-incoming-outgoing-prefixes t)
      

      The version control system is automatically determined for all buffers. (Standard Emacs just checks it in dired, shell, eshell, or compilation-mode buffers.)

      (setopt vc-deduce-backend-nonvc-modes t)
      

      Things I haven't been able to figure out yet

      On Linux with X11, Haiku, or macOS / GNUstep: When a buffer has an associated filename, you can drag the filename from the modeline and drop it into other programs. (Haven't been able to get this working.)

      (setopt mouse-drag-mode-line-buffer t)
      


    9. šŸ”— r/york Help me reach £500 donations for York's homeless before tomorrow? rss

      Help me reach £500 donations for York's homeless before tomorrow? | Hi all! Some of you might remember my last post and how much amazing support I got from our local Reddit group when I first began fundraising. This will be my last update before the sleep out actually takes place! Tomorrow evening I will be taking part in York's annual Charity Sleep Out to help raise money for some of the wonderful charities in York who provide food and other essential support to those in our local area who are homeless or otherwise in need. I've had the absolute pleasure of volunteering with Hoping Kitchen on Sundays and I know how well-loved KEYS is, so it's a really worthy cause. Whilst it won't be even close to what those who sleep rough experience on a daily basis, I am the kind of person who had to borrow a wooly hat from a friend because I would very much usually rather be indoors doing literally anything outside ever. Most importantly, my pet parrots and bunnies will miss me very much and probably give me a few nips upon my return for leaving them without their usual bedtime snuggles for an evening. Would be really great to get to £500 before the event begins tomorrow! I'll try and remember to post some pictures whilst we're camping out tomorrow to keep you all updated https://www.givewheel.com/fundraising/14777/kayleighs-york-charity-sleepout-2026/ submitted by /u/kittywenham
      [link] [comments]
      ---|---

    10. šŸ”— sacha chua :: living an awesome life Working on the Emacs newbie experience rss

      The Emacs Carnival April 2026 theme of newbies/starter kits nudged me to think about how new users can learn what they need in order to get started. In particular, I wanted to think about these questions that newbies might have:

      • Is it worth it?
      • How do I start?
      • Should I use a starter kit? How?
      • I'm stuck, how can I get help?
      • This is overwhelming. How do I make it more manageable?

      I worked on some pages in the EmacsWiki:

      People often recommend Emacs News to people who want to learn more about what's going on in the Emacs community, so I added some notes to that one as well.

      Just gotta find some newbies to test these ideas with… Email me! =)


    11. šŸ”— sacha chua :: living an awesome life Emacs beginner resources rss

      Updated my page from 2014 with more recent resources.

      Welcome to Emacs! Thank you for considering this strange and wonderful text editor. Here are some resources that can help you on your journey.

      Many people use Emacs just for Org Mode. Here are some resources for getting started:


    12. šŸ”— r/Leeds T&A link - Tuesday 28th - Briggate: "4 teen boys - aged 13 to 16 - arrested following city centre stabbing incident" rss

      Reports that a 34‑year‑old man was taken to hospital after reportedly being stabbed during an altercation near the McDonalds on Briggate on Tuesday night.

      Also in the YEP:

      https://www.yorkshireeveningpost.co.uk/news/crime/four-teenagers-arrested-man-stabbed-leeds-briggate-8177646

      Of course it was outside the McDonalds :(

      I hope those responsible are dealt with robustly to send the right message.

      submitted by /u/thetapeworm
      [link] [comments]

    13. šŸ”— r/Leeds Bike stolen city centre rss

      Victoria Pendleton bike stolen today from outside Leeds train station between 12:30-16:30 :(

      Please dm if any information thank you

      submitted by /u/Few_Health_5530
      [link] [comments]

    14. šŸ”— r/Yorkshire Whitby steam trains return delayed rss

      Whitby steam trains return delayed | submitted by /u/CaptainYorkie1
      [link] [comments]
      ---|---

    15. šŸ”— Andrew Ayer - Blog FastCGI: 30 Years Old and Still the Better Protocol for Reverse Proxies rss

      HTTP reverse proxying is a minefield. Just the other week, a researcher disclosed a desync vulnerability in Discord's media proxy that allowed spying on private attachments. This is not unusual; these vulnerabilities just keep coming.

      The problem is the widespread use of HTTP as the protocol between reverse proxies and backends, even though it's unfit for the job. But we don't have to use HTTP here. There's a 30-year-old protocol for proxy-to-backend communication that avoids HTTP's pitfalls. It's called FastCGI, and its specification was released 30 years ago today.

      FastCGI is a Wire Protocol, not a Process Model

      It's true that some web servers can automatically spawn FastCGI processes to handle requests for files with the .fcgi extension, much like they would for .cgi files. But you don't have to use FastCGI this way - you can also use the FastCGI protocol just like HTTP, with requests sent over a TCP or UNIX socket to a long-running daemon that handles them as if they were HTTP requests.

      For example, in Go all you have to do is import the net/http/fcgi standard library package and replace http.Serve with fcgi.Serve:

      Go HTTP

      l, _ := net.Listen("tcp", "127.0.0.1:8080")
      http.Serve(l, handler)

      Go FastCGI

      l, _ := net.Listen("tcp", "127.0.0.1:8080")
      fcgi.Serve(l, handler)

      Everything else about your app stays the same - even your handler, which continues to use the standard http.ResponseWriter and http.Request types.
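
      To make that concrete, here's a minimal self-contained sketch (the handler body and error handling are my own additions, not from the snippets above):

      package main

      import (
          "fmt"
          "net"
          "net/http"
          "net/http/fcgi"
      )

      func main() {
          // The handler is ordinary net/http code; only the serving call changes.
          handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
              fmt.Fprintf(w, "Hello, %s\n", r.RemoteAddr)
          })
          l, err := net.Listen("tcp", "127.0.0.1:8080")
          if err != nil {
              panic(err)
          }
          // Speak FastCGI instead of HTTP on this socket.
          if err := fcgi.Serve(l, handler); err != nil {
              panic(err)
          }
      }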

      Popular proxies like Apache, Caddy, nginx, and HAProxy support FastCGI backends, and the configuration is simple:

      nginx HTTP

      proxy_pass http://localhost:8080;

      nginx FastCGI

      fastcgi_pass localhost:8080;
      include fastcgi_params;

      More config examples:

      Apache HTTP

      ProxyPass / http://localhost:8080/

      Apache FastCGI

      ProxyPass / fcgi://localhost:8080/

      Caddy HTTP

      reverse_proxy localhost:8080 {
          transport http {
          }
      }

      Caddy FastCGI

      reverse_proxy localhost:8080 {
          transport fastcgi {
          }
      }

      HAProxy HTTP

      backend app_backend
          server s1 localhost:8080

      HAProxy FastCGI

      fcgi-app fcgi_app
          docroot /

      backend app_backend
          use-fcgi-app fcgi_app
          server s1 localhost:8080 proto fcgi

      Why HTTP Sucks for Reverse Proxies: Desync Attacks / Request Smuggling

      HTTP/1.1 has the tragic property of looking simple on the surface (it's just text!) but actually being a nightmare to parse robustly. There are so many different ways to format the same HTTP message, and there are too many edge cases and ambiguities for implementations to handle consistently. As a result, no two HTTP/1.1 implementations are exactly the same, and the same message can be parsed differently by different parsers.

      The most serious problem is that there is no explicit framing of HTTP messages - the message itself describes where it ends, and there are multiple ways for a message to do that, all with their own edge cases. Implementations can disagree about where a message ends, and consequently, where the next message begins. This is the foundation of HTTP desync attacks, also known as request smuggling, wherein a reverse proxy and a backend disagree about the boundaries between HTTP messages, causing all sorts of nightmare security issues, such as the Discord vulnerability I linked above.
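
      To make the framing problem concrete, here's the classic "CL.TE" request-smuggling example (a standard illustration from the literature, not from this post) - a request that declares its length in two different ways:

      POST / HTTP/1.1
      Host: example.com
      Content-Length: 13
      Transfer-Encoding: chunked

      0

      SMUGGLED

      A proxy that honors Content-Length forwards all 13 body bytes as one request; a backend that honors Transfer-Encoding sees the chunked body end at the 0 chunk and treats SMUGGLED as the start of the next request on the shared connection.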

      A lot of people seem to think you can just patch the parser divergences, but this is a losing strategy. James Kettle just keeps finding new ones. After finding another batch last year, he declared "HTTP/1.1 must die".

      HTTP/2, when consistently used between the proxy and backend, fixes desync by putting clear boundaries around messages, but FastCGI has been doing that since 1996 with a simpler protocol. For context, nginx has supported FastCGI backends since its first release, but only got support for HTTP/2 backends in late 2025. Apache's support for HTTP/2 backends is still "experimental".

      Why HTTP Sucks for Reverse Proxies: Untrusted Headers

      If desync attacks were the only problem, you could just use HTTP/2 and call it a day. Unfortunately, there's another problem: HTTP has no robust way for the proxy to convey trusted information about the request, such as the real client IP address, authenticated username (if the proxy handles authentication), or client certificate details (if mTLS is used).

      The only option is to stick this information in HTTP headers, alongside the headers proxied from the client, without a clear structural distinction between trusted headers from the proxy and untrusted headers from a potential attacker. For example, the X-Real-IP header is often used to convey the client's real IP address. In theory, if your proxy correctly deletes all instances of the X-Real-IP header (not just the first, and including case variations like x-REaL-ip) before adding its own, you're safe.

      In practice, this is a minefield and there are an awful lot of ways your backend can end up trusting attacker-controlled data. Your proxy really needs to delete not just X-Real-IP, but any header that's used for this sort of thing, just in case some part of your stack relies on it without your knowledge. For example, the Chi middleware determines the client's real IP address by looking at the True-Client-IP header first. Only if True-Client-IP doesn't exist does it use X-Real-IP. So even if your proxy does the right thing with X-Real-IP, you can still be pwned by an attacker sending a True-Client-IP header.

      FastCGI completely avoids this class of problem by providing domain separation between headers from the client and information added by the proxy. Though trusted data from the proxy and HTTP request headers are transmitted to the backend in the same key/value parameter list, HTTP header names are prefixed with the string "HTTP_", making it structurally impossible for clients to send a header that would be interpreted as trusted data.

      FastCGI defines some standard parameters such as REMOTE_ADDR to convey the real client IP address. Go's net/http/fcgi package automatically uses this parameter to populate the RemoteAddr field of http.Request, rendering middleware unnecessary. It Just Works. Proxies can also use non-standard parameters to report whether HTTPS was used, what TLS ciphersuite was negotiated, and what client certificate was presented, if any. Go automatically sets the Request's TLS field to a non-nil (but empty) value if the request used HTTPS, which is very handy for enforcing the use of HTTPS. The fcgi.ProcessEnv function can be used to access the full set of trusted parameters sent by the proxy.
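
      A sketch of what this can look like in a handler (REQUEST_SCHEME is my assumption about what the proxy is configured to send; REMOTE_ADDR is the standard parameter):

      package main

      import (
          "fmt"
          "net"
          "net/http"
          "net/http/fcgi"
      )

      func handler(w http.ResponseWriter, r *http.Request) {
          // TLS is non-nil if the proxy reported HTTPS, so enforcing HTTPS is trivial.
          if r.TLS == nil {
              http.Error(w, "HTTPS required", http.StatusForbidden)
              return
          }
          // RemoteAddr was populated from the proxy's trusted REMOTE_ADDR parameter.
          fmt.Fprintf(w, "client: %s\n", r.RemoteAddr)
          // ProcessEnv exposes the full parameter list. Client-sent HTTP headers
          // appear here too, but only under the HTTP_ prefix, so they can never
          // collide with trusted keys.
          env := fcgi.ProcessEnv(r)
          fmt.Fprintf(w, "scheme: %s\n", env["REQUEST_SCHEME"])
      }

      func main() {
          l, err := net.Listen("tcp", "127.0.0.1:8080")
          if err != nil {
              panic(err)
          }
          fcgi.Serve(l, http.HandlerFunc(handler))
      }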

      Closing Thoughts

      If FastCGI is the better protocol, why isn't it more popular? Maybe it's the name - while capitalizing on CGI's popularity made sense in 1996, CGI feels dated in 2026. There's also an enduring lack of awareness of the security problems with HTTP reverse proxying. Watchfire described desync attacks in 2005, and gave a prescient warning of their intractability, but the attacks were inexplicably ignored for over a decade. In an alternate timeline, Watchfire's research was taken seriously and people went looking for other protocols for reverse proxies.

      FastCGI is very usable today, and has been in production use at SSLMate for over 10 years. That said, using a vintage technology has some downsides. It was never updated to support WebSockets. The tooling is not as good. For example, curl has no way to make requests to a FastCGI server. It supports FTP, Gopher, and even SMTP (however that works), but not FastCGI. When I benchmarked Go's FastCGI server behind a variety of reverse proxies, some workloads had worse throughput compared to HTTP/1.1 or HTTP/2. I don't think that's inherent to the protocol, but a reflection that FastCGI code paths have not been optimized as much as HTTP.

      Despite these shortcomings, I still think FastCGI is worth using. I don't use WebSockets, and it's fast enough for my use case (and maybe yours too). If it ever became the bottleneck, I'd rather buy more hardware than deal with the nightmare of HTTP reverse proxying.

      Happy 30th birthday, FastCGI!

    16. šŸ”— trailofbits/multiplier 3adf81d release

      What's Changed

      • Add symbolic execution engine (Phases 0-14) with IR binding fixes by @pgoodman in #585

      Full Changelog : a784e63...3adf81d

    17. šŸ”— r/LocalLLaMA mistralai/Mistral-Medium-3.5-128B · Hugging Face rss

      mistralai/Mistral-Medium-3.5-128B · Hugging Face | https://huggingface.co/unsloth/Mistral-Medium-3.5-128B-GGUF

      Mistral Medium 3.5 128B

      Mistral Medium 3.5 is our first flagship merged model. It is a dense 128B model with a 256k context window, handling instruction-following, reasoning, and coding in a single set of weights. Mistral Medium 3.5 replaces its predecessor Mistral Medium 3.1 and Magistral in Le Chat. It also replaces Devstral 2 in our coding agent Vibe. Concretely, expect better performance for instruct, reasoning and coding tasks in a new unified model compared with our previously released models. Reasoning effort is configurable per request, so the same model can answer a quick chat reply or work through a complex agentic run. We trained the vision encoder from scratch to handle variable image sizes and aspect ratios. Find more information on our blog.

      Key Features

      Mistral Medium 3.5 includes the following architectural choices:

      • Dense 128B parameters.
      • 256k context length.
      • Multimodal input : Accepts both text and image input, with text output.
      • Instruct and Reasoning functionalities with function calls (reasoning effort configurable per request).

      Mistral Medium 3.5 offers the following capabilities:

      • Reasoning Mode : Toggle between fast instant reply mode and reasoning mode, boosting performance with test-time compute when requested.
      • Vision : Analyzes images and provides insights based on visual content, in addition to text.
      • Multilingual : Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, and Arabic.
      • System Prompt : Strong adherence and support for system prompts.
      • Agentic : Best-in-class agentic capabilities with native function calling and JSON output.
      • Large Context Window : Supports a 256k context window.

      We release this model under a Modified MIT License: an open-source license allowing both commercial and non-commercial use, with exceptions for companies with large revenue.

      Recommended Settings

      • Reasoning Effort :
        • 'none' → Do not use reasoning
        • 'high' → Use reasoning (recommended for complex prompts, agentic usage, and agentic coding)
      • Temperature : 0.7 for reasoning_effort="high". Temp between 0.0 and 0.7 for reasoning_effort="none" depending on the task. Generally, lower means answers that are more to the point and higher allows the model to be more creative. It is good practice to try different values to tune the model's performance to your needs.

      submitted by /u/jacek2023
      [link] [comments]
      ---|---

    18. šŸ”— Jessitron Span or Attribute? in OpenTelemetry custom instrumentation rss

      TL;DR: Attribute. More information on one event gives us more correlation power. It’s also cheaper.

      When you want to add some information to your tracing telemetry, you could emit a log, create a span, or add a piece of data to your current span. Adding a piece of data to your current span is the best! Usually.

      a trace with spans (rows with a colored bar on the timeline for their duration), logs (dots on a span), and attributes (fields in a list when you click on a span)

      Attributes are the best, and also the cheapest.

      If you have request name, user ID, request properties, feature flags, and notes about what happened in a single event, then you can correlate

      • feature flags with error rate
      • number of items with latency
      • which users hit the same stack trace

      The more data on the top-level span, the more answers you can get to ā€œWhat is different about the requests that failed?ā€[1]

      More information in one place is better! You can say trace.getCurrentSpan().set_attribute("my_module.items.count", items.length) anywhere in your code, and accumulate data on a single event. This might be my favorite thing about OpenTelemetry tracing.
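
      In Go's OpenTelemetry API, for instance, that looks roughly like this (a sketch; only the attribute name and the idea of counting items come from the example above):

      package example

      import (
          "context"

          "go.opentelemetry.io/otel/attribute"
          "go.opentelemetry.io/otel/trace"
      )

      func recordItemCount(ctx context.Context, items []string) {
          // Grab whatever span is current and pile more data onto it.
          span := trace.SpanFromContext(ctx)
          span.SetAttributes(attribute.Int("my_module.items.count", len(items)))
      }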

      Providers like Honeycomb that charge per event make adding attributes nearly free. (There’s still network, and long-term storage if you use that.)

      Spans are for important units of work.

      But sometimes it’s better to create a whole new span!

      When to start a new span:

      • Incoming request - Gotta create a top-level span to represent the work, so that you can add all those sweet attributes to it! This might be a root span (incoming work from outside, new trace) or a server span (continuing a propagated trace). In services, these come from instrumentation libraries.
      • Network boundaries - spans are great for seeing dependencies between components. When you’re calling out to another service or database, it’s normal to make a client span for the outgoing call. These are created by many instrumentation libraries.
      • Async boundaries - spans are great for seeing what ran concurrently and what waited (see the sketch after this list).
      • Performance concerns - spans are great for seeing what is slow.
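
      A minimal Go sketch of giving one of those units of work its own span (the tracer name and processBatch function are hypothetical):

      package example

      import (
          "context"

          "go.opentelemetry.io/otel"
      )

      // processBatch wraps one chunk of async work in its own span, so the trace
      // waterfall shows what ran concurrently and what waited.
      func processBatch(ctx context.Context, work func(context.Context)) {
          ctx, span := otel.Tracer("my_module").Start(ctx, "process-batch")
          defer span.End()
          work(ctx) // pass the span's context on to downstream calls
      }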

      Logs are useful sometimes.

      If something might happen more than once, then a single-valued attribute can’t record them all. If you want to track how long that thing took, use a span. If it’s a fixed-time event (like an interrupt or error), then a log is good![2]

      For example, if there’s only way an exception could be thrown in the scope of the span, then putting exception.message on the span is great. But if it’s possible for another exception to be thrown, that message would be overwritten! This is a good time to emit a log. Make sure the log participates in the trace (it includes trace and span ID), and then it will show up on your current span in the trace view. It doesn’t hurt to put that message on the span as well.

      These are suggestions.

      These are guidelines, but the choice is yours. What do you want your trace to look like? What do you want to see called out in the trace waterfall, and what do you want to have together for correlation? Maybe you want both: an attribute on the root span, and a span that shows duration and detail.

      Tracing tells the story of your application. Tell it the way that works for you.

      Prompt

      Get the AI to tell the story to you, and to verify that it works by testing. Here's some advice to give your AI when coding:

      ## Observability Practices
      
      - add important data to the current span as attributes. Examples:
          - request parameters, especially internal IDs
          - feature flag values
          - anything that the code branches on
          - counts of how many times a loop was iterated
          - results of downstream calls
      - Name attributes like: <application>.<module>.<field>
      - Do not create span events, they're expensive.
      - Create logs only on exceptions
      - bring in instrumentation libraries for frameworks and client libraries to create the span structure
      - when kicking off async work, create a new span around each async task so that we can see what happens concurrently and what waits.
      - Use the Honeycomb MCP to check that your attributes and spans show up correctly after testing.
      

      [1] The data doesn’t have to be on the same span to correlate it; Honeycomb can query across spans and logs in a trace. But it’s faster and easier when the data is on the same span, and BubbleUp (ā€œwhat is different?ā€) works on single events.

      [2] You might wonder, why a log instead of a span event? They are the same inside Honeycomb. Logs are sent immediately and are more likely to arrive. This matters in web clients, where people close the tab and the span never ends.

    19. šŸ”— r/LocalLLaMA 16x DGX Sparks - What should I run? rss

      16x DGX Sparks - What should I run? | Let's build the biggest ever DGX Spark Cluster at home. This is going into my home lab server rack, 2TB of unified memory. • 16x Sparks • 1x 200Gbps FS 24 x 200Gb QSFP56 Switch • 16x QSFP56 DAC cables Should be all setup by tomorrow afternoon, what should I run? submitted by /u/Kurcide
      [link] [comments]
      ---|---

    20. šŸ”— r/reverseengineering I built a free open-source CAN bus reverse engineering workstation in Python — 15 tabs, offline ML, dual AI engines, MitM gateway rss
    21. šŸ”— r/york tansy beetle on clifton sands !! rss

      tansy beetle on clifton sands !! | submitted by /u/whtmynm
      [link] [comments]
      ---|---

    22. šŸ”— r/LocalLLaMA What it feels like to have to have Qwen 3.6 or Gemma 4 running locally rss

      What it feels like to have to have Qwen 3.6 or Gemma 4 running locally | Well or pretty close to it, they are excellent work horses. I run them in real work scenarios doing some of the work I used to do myself as a skilled expert in my field, billing 200$ an hour. Ofc the key is building a system around their weaknesses, and I've had already LLM systems doing expert work years ago when first ones came (shout out nous hermes 2 mistral!). But yeah pretty neat, especially noonghunnas club 3090 and you can have 3.6 27B fly on a single 3090. submitted by /u/GodComplecs
      [link] [comments]
      ---|---

    23. šŸ”— r/wiesbaden Finding new friends 25-36+/- rss

      Hello, I'm 34, single, and new to Wiesbaden. Since my friends hardly get out of the house anymore thanks to kids, I'm looking for young, active people who'd like to meet up regularly. Not so easy in WI :( Bumble BFF and Gemeinsam Erleben unfortunately didn't work for me at all, and randomly starting a dance class or the like isn't really my thing either.

      I love being out and about and just want to get out more often again and party, go to street festivals, to bars, or simply go for a walk. I'm just as happy chilling at home, having a games night, cooking something tasty, and starting a film/series marathon. I'm sporty and can get excited about lots of other things too.

      Would be cool to meet like-minded people, preferably around my age, give or take 😁

      submitted by /u/M0zep5
      [link] [comments]

    24. šŸ”— r/Yorkshire Collapsing Labour vote in Barnsley sees some choosing between Greens and Reform rss
    25. šŸ”— r/LocalLLaMA AMD has invented something that lets you use AI at home! They call it a "computer" rss

      AMD has invented something that lets you use AI at home! They call it a "computer" | submitted by /u/9gxa05s8fa8sh
      [link] [comments]
      ---|---

    26. šŸ”— r/wiesbaden Bernd Zehner deletes a third of the reviews of his restaurant (opened in February) rss
  4. April 28, 2026
    1. šŸ”— IDA Plugin Updates IDA Plugin Updates on 2026-04-28 rss

      IDA Plugin Updates on 2026-04-28

      New Releases:

      Activity:

      • capa
        • 3593a79a: build(deps): bump pip from 26.0 to 26.1 (#3063)
        • 5ed6aab5: build(deps-dev): bump pyinstaller from 6.19.0 to 6.20.0 (#3062)
        • 7d38d948: build(deps-dev): bump pre-commit from 4.5.0 to 4.6.0 (#3061)
      • claude-of-alexandria
        • fe1d2580: chore(deps-dev): bump the minor-and-patch group (#11)
      • ida-domain
      • ida-structor
        • 141a4d46: feat: Add early stopping and ordered xref scanning for type validation
      • mips_call_analyzer
      • python-elpida_core.py
        • 2f09280a: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T23:41Z
        • 0466d82c: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T23:21Z
        • 2216d956: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T22:57Z
        • 57c73e44: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T22:33Z
        • 295cf3f4: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T22:08Z
        • 5cc39a47: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T21:43Z
        • 80b56fe0: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T21:18Z
        • 55613c14: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T20:52Z
        • b45ffb00: [HERMES-ROUTED] Phase 3 routing artifact 2026-04-28T20:25Z
        • a4772cd4: Constitutional event: strip-fix restored PROCEED, A3 voice, P055 norm…
      • scripts
        • 9e0ee439: added script for c2 extraction from EchoGather
    2. šŸ”— r/york My bike was stolen on campus west near courtyard on 26/4 between 7pm and 11pm rss

      My bike was stolen on campus west near courtyard on 26/4 between 7pm and 11pm | Any information would be greatly appreciated as I require my bike for work submitted by /u/MidnightFar3298
      [link] [comments]
      ---|---

    3. šŸ”— oxigraph/oxigraph v0.5.8 release
      • HTTP server: add /sparql path that serves both SPARQL queries and updates.
      • GeoSPARQL: add a significant set of new functions.
      • RocksDB backend: fix some transactions where read-your-own-writes was not working correctly.
    4. šŸ”— r/Leeds Wheelchair accessible taxi services rss

      Hey everyone, I'm a full time wheelchair user from London. I have quadriplegic cerebral palsy so can't walk at all. I'm looking to study electronic music production at Leeds Conservatoire in September of this year and have to travel up to Leeds for accommodation viewings on Thursday. I was wondering if anyone could give me some taxi companies that do/may provide wheelchair accessible taxi services with full ramp access?

      Uber, at least in London is a bit hit and miss so that's why I'm asking for taxi services rather than just using Uber. I also wanted to ask, is there a taxi rank at Leeds station and do they have wheelchair accessible vehicles there?

      Thanks in advance and feel free to add any tips or experiences of travelling in Leeds as a wheelchair user. Even if you are able bodied, please let me know if there's anything you think I should bear in mind while navigating the city in general.

      Thanks again everyone!

      submitted by /u/LORDLUK3
      [link] [comments]

    5. šŸ”— @binaryninja@infosec.exchange To help us track down bugs faster, 5.3 introduces opt-in crash reporting. This mastodon

      To help us track down bugs faster, 5.3 introduces opt-in crash reporting. This feature is disabled by default in paid versions and enabled by default in our free version. Either way, you can change the setting whenever you want. Details in our latest blog post: https://binary.ninja/2026/04/13/binary-ninja-5.3-jotunheim.html#crash-reporting

    6. šŸ”— r/york Bees on Gillygate rss

      Hi!

      I don't suppose anyone saw the swarm of bees all over Gillygate around the Tesco today?

      Just wondered if anyone knows if it's cleared up or what caused it?

      This was about 13:45, and apparently they weren't there in the morning.

      submitted by /u/SadAndGloomy
      [link] [comments]

    7. šŸ”— r/reverseengineering Building a perfect clone of 1993 game SimTower (via RE) rss
    8. šŸ”— r/LocalLLaMA Something from Mistral (Vibe) tomorrow rss

      Something from Mistral (Vibe) tomorrow | Model(s) or Tool upgrade/New Tool? Source Tweet : https://xcancel.com/mistralvibe/status/2049147645894021147#m submitted by /u/pmttyji
      [link] [comments]
      ---|---

    9. šŸ”— r/Yorkshire Looking for a Lost Super Street Fighter 2 Arcade Cabinet (Sheffield/Yorkshire – early 2000s) rss

      I'm trying to track down an arcade cabinet I used to play in the early 2000s, and I'm hoping someone in Yorkshire might know its current location.

      Between 2002-2004, I regularly played a Super Street Fighter 2 machine in a takeaway called Pizza Metro on London Road in Sheffield.

      Details I remember:

      • Small black cabinet

      • Dragon symbol on the side (green or possibly yellow)

      • Standard 6-button layout (Street Fighter style, diagonal)

      • One joystick was slightly larger than the other (not sure which side)

      • It was Super Street Fighter 2 (not Super Turbo - not the version with Akuma)

      I used to play it a lot during a brief period living in Sheffield about 23 years ago, so it's quite nostalgic for me.

      Around 2005, the shop returned the cabinet to the arcade vendor they rented it from, and the vendor later sold it to someone else. I managed to contact the vendor at the time, but they couldn't remember who it was sold to.

      Ideally, I'd be interested in buying the cabinet if it still exists. However, if it's not for sale, I'd really just like to confirm the exact joystick and button setup.

      If someone believes they've found the right machine, I'm happy to:

      Confirm from clear photos/videos and arrange to see it in person to verify details.

      I'm offering £100 for a solid, verifiable lead (e.g. correct cabinet identification, owner info, or confirmed hardware details).

      If anyone remembers this cabinet, knows the vendor, or has any leads at all, I'd really appreciate it. I know it's a long shot but I've decided to try anyway.

      submitted by /u/goldstand
      [link] [comments]

    10. šŸ”— gchq/CyberChef v11.0.0 release

      See the CHANGELOG and commit messages for details.

    11. šŸ”— Locklin on science Bouncing droplet ā€œquantum mechanicsā€ rss

      I was always a fan of de Broglie and Bohm's "pilot wave" idea. This is a fully deterministic theory of quantum mechanics which physicists don't like because "le hidden variables" (also it isn't yet relativistic I guess). The original pilot wave idea didn't work out because de Broglie couldn't calculate scattering cross sections, though Bohm […]

    12. šŸ”— r/Leeds nightclub interview?? rss

      Hey guys! I have an interview for a bartender position at Backrooms nightclub tomorrow and I've never had an interview in a club but I really wanna work there bc I love the whole vibe of clubs and want to get into bartending. What kind of things do they ask you for these roles?? If anyone has any personal experience too it would be massively appreciated

      submitted by /u/WhereasFar9745
      [link] [comments]

    13. šŸ”— r/reverseengineering How I reverse-engineered a SQLite WAL database inside a VS Code extension - custom merge engine, header byte patching, and protobuf decoding without a schema rss
    14. šŸ”— r/york Does anyone know if there is an update regarding foss islands chimney? rss

      Does anyone know if there is an update regarding foss islands chimney? | I noticed the temporary fencing looks to now be permanent, which is a shame- was a handy shortcut to Halfords and vice versa! submitted by /u/UnhingedSerialKiller
      [link] [comments]
      ---|---

    15. šŸ”— r/reverseengineering AI solved our CTF in 6min rss
    16. šŸ”— r/LocalLLaMA meantime on r/vibecoding rss

      meantime on r/vibecoding | words of wisdom submitted by /u/jacek2023
      [link] [comments]
      ---|---

    17. šŸ”— r/LocalLLaMA Qwen 3.6 27B BF16 vs Q4_K_M vs Q8_0 GGUF evaluation rss

      Qwen 3.6 27B BF16 vs Q4_K_M vs Q8_0 GGUF evaluation | Evaluated Qwen 3.6 27B across BF16, Q4_K_M, and Q8_0 GGUF quant variants with llama-cpp-python using Neo AI Engineer. Benchmarks used:

      • HumanEval: code generation
      • HellaSwag: commonsense reasoning
      • BFCL: function calling

      Total samples:

      • HumanEval: 164
      • HellaSwag: 100
      • BFCL: 400

      Results: BF16

      • HumanEval: 56.10% 92/164
      • HellaSwag: 90.00% 90/100
      • BFCL: 63.25% 253/400
      • Avg accuracy: 69.78%
      • Throughput: 15.5 tok/s
      • Peak RAM: 54 GB
      • Model size: 53.8 GB

      Q4_K_M

      • HumanEval: 50.61% 83/164
      • HellaSwag: 86.00% 86/100
      • BFCL: 63.00% 252/400
      • Avg accuracy: 66.54%
      • Throughput: 22.5 tok/s
      • Peak RAM: 28 GB
      • Model size: 16.8 GB

      Q8_0

      • HumanEval: 52.44% 86/164
      • HellaSwag: 83.00% 83/100
      • BFCL: 63.00% 252/400
      • Avg accuracy: 66.15%
      • Throughput: 18.0 tok/s
      • Peak RAM: 42 GB
      • Model size: 28.6 GB

      What stood out: Q4_K_M looks like the best practical variant here. It keeps BFCL almost identical to BF16, drops about 5.5 points on HumanEval, and is still only 4 points behind BF16 on HellaSwag. The tradeoff is pretty good:

      • 1.45x faster than BF16
      • 48% less peak RAM
      • 68.8% smaller model file
      • nearly identical function calling score

      Q8_0 was a bit underwhelming in this run. It improved HumanEval over Q4_K_M by ~1.8 points, but used 42 GB RAM vs 28 GB and was slower. It also scored lower than Q4_K_M on HellaSwag in this eval. For local/CPU deployment, I would probably pick Q4_K_M unless the workload is heavily code-generation focused. For maximum quality, BF16 still wins. Evaluation setup:

      • GGUF via llama-cpp-python
      • n_ctx: 32768
      • checkpointed evaluation
      • HumanEval, HellaSwag, and BFCL all completed
      • BFCL had 400 function calling samples

      This evaluation was done using Neo AI Engineer, which built the GGUF eval setup, handled checkpointed runs, and consolidated the benchmark results. I manually reviewed the outcome as well. Complete case study with benchmarking results, approach and code snippets is in the comments below šŸ‘‡ submitted by /u/gvij
      [link] [comments]
      ---|---

    18. šŸ”— tintinweb/pi-subagents v0.6.3 release

      see changelog

    19. šŸ”— tintinweb/pi-subagents v0.6.2 release

      see changelog

    20. šŸ”— r/Leeds Firstbus app update shenanigans rss

      If you use the Firstbus app for tickets, be warned, they are rolling out an update. The update has gone so well that they have a banner on the website pointing to a separate FAQ specifically for the update with a big list of reasons why you will probably have to call them to get access to your tickets...

      https://www.firstbus.co.uk/help-support/help-and-support/first-bus-app-update

      submitted by /u/awesomeweles
      [link] [comments]

    21. šŸ”— r/reverseengineering Example structure for evidence-based vulnerability reports rss
    22. šŸ”— r/LocalLLaMA Duality of r/LocalLLaMA rss
    23. šŸ”— r/LocalLLaMA I'm done with using local LLMs for coding rss

      I think I gave it a fair shot over the past few weeks, forcing myself to use local models for non-work tech asks. I use Claude Code at my job so that's what I'm comparing to.

      I used Qwen 27B and Gemma 4 31B, which are considered the best local models below the multi-hundred-B tier. I also tried multiple agentic apps. My verdict is that the loss of productivity is not worth the advantages.

      I'll give a brief overview of my main issues.

      Shitty decision-making and tool-calls

      This is a big one. Claude seems to read my mind in most cases, but Qwen 27B makes me give it the Carlo Ancelotti eyebrow more often than not. The LLM just isn't proceeding how I would proceed.

      I was mainly using local LLMs for OS/Docker tasks. Is this considered much harder than coding or something?

      To give an example, tasks like " Here's a Github repo, I want you to Dockerize it." I'd expect any dummy to follow the README's instructions and execute them. (EDIT: full prompt here: https://reddit.com/r/LocalLLaMA/comments/1sxqa2c/im_done_with_using_local_llms_for_coding/oiowcxe/ )

      Issues like having a 'docker build' that takes longer than the default timeout, which sends them on unrelated follow-ups (as if the task failed), instead of checking if it's still running. I had Qwen try to repeat the installation commands on the host (also Ubuntu) to see what happens. It started assuming "it must have failed because of torchcodec" just like that, pulling this entirely out of its ass, instead of checking output.

      I tried to meet the models half-way. Having this in AGENTS.md: " If you run a Docker build command, or any other command that you think will have a lot of debug output, then do the following: 1. run it in a subagent, so we don't pollute the main context, 2. pipe the output to a temporary file, so we can refer to it later using tail and grep." And yet twice in a row I came back to a broken session with 250k input tokens because the LLM is reading all the output of 'docker build' or 'docker compose up'.

      I know there are huge AGENTS.md files that treat the LLM like a programmable robot, giving it long elaborate protocols because they don't expect it to have decent self-guidance; I didn't try those tbh. And tbh none of them go into details like not reading the output of 'docker build'. I stuck to the default prompts of the agentic apps I used, + a few guidelines in my AGENTS.md.

      Performance

      Not only are the LLMs slow, but no matter which app I'm using, the prompt cache frequently seems to break. Translation: long pauses where nothing seems to happen.

      For Claude Code specifically, this is made worse by the fact that it doesn't print the LLM's output to the user. It's one of the reasons I often preferred Qwen Code. It's very frustrating when not only is the outcome looking bad, but I'm not getting rapid feedback.

      I'm not learning anything

      Other than changing the URL of the Chat Completions server, there's no difference between using a local LLM and a cloud one, just more grief.

      There's definitely experience to be gained learning how to prompt an LLM. But I think coding tasks are just too hard for the small ones, it's like playing a game on Hardcore. I'm looking for a sweet spot on the learning curve and this is just not worth it.

      What now

      For my coding and OS stuff, I'm gonna put some money on OpenRouter and exclusively use big boys like Kimi. If one model pisses me off, move on to the next one. If I find a favorite, I'll sign up to its yearly plan to save money.

      I'll still use small local models for automation, basic research, and language tasks. I've had fun writing basic automation skills/bots that run stuff on my PC, and these will always be useful.

      I also love using local LLMs for writing or text games. Speed isn't an issue there, the prompt cache's always being hit. Technically you could also use a cloud model for this too, but you'd be paying out the ass because after a while each new turn is sending like 100k tokens.

      Thanks for reading my blog.

      submitted by /u/dtdisapointingresult
      [link] [comments]

    24. šŸ”— Jessitron Communication is hard, but sometimes I can fix it. rss

      We used to type code to tell the computer what to do. When that got tedious, we made libraries and functions until the code was more communicative.

      Now I type English words to tell the agent what to tell the computer what to do. Sometimes that gets tedious, and then I need to find new ways to make it easier.

      Here’s an example.

      Iterating could be easier. The work: I’m getting Claude to build a program that turns Claude conversation logs into a vertical HTML comic. As we iterate on this, I ask it a lot of questions about the output. This way, I learn something about the problem domain (how Claude Code records conversations). And then I get it to tweak the output to my liking. In the example above, I wondered where the Background command "Start dev server on alternate ports" notification came from, so I asked Claude how I could know. To ask it, I had to cut and paste the text from the HTML, and then Claude had to grep the HTML to see what I was talking about, and also grep the JSONL to find the input. What if later, a very similar message appeared? It couldn't tell exactly what I was talking about. I can’t just point to the UI.

      This wasn't the first time I struggled to refer to a panel in the comic. This time, my frustration served as an alarm: do something about it, Jess. There has to be a better way to tell it which panel I'm talking about.

      When communication gets difficult, that’s a signal. I can change this.

      So I made it make a way to point to the UI.

      In this case, I asked Claude to add a reference tag to each panel. The reference tag for each panel contains the line number (that was its idea) and filename (that was my idea) of the JSONL line represented by this panel. I push ā€˜r’ to toggle whether these reference tags show (my idea). When I click one, the value is copied (its idea).

      the html comic with references.

      Now I can ask the same question more succinctly: How can I find out where episode-8-before:L63 came from?

      Claude understood and added a hover effect that highlights the originating bash tool call.

      That hover effect is OK; I used it a few times. Those reference tags are gold! I've used them a dozen times already, and development is smoother for it. Claude can find the panel I’m talking about quickly both in the input JSONL and the output HTML. Our communication is streamlined.

      This was a great idea. Iterating is much easier now!

      I am in the loop and on the loop.

      There are (at least) two feedback loops running here. One is the development loop, with Claude doing what I ask and then me checking whether that is indeed what I want. Here, I’m a human in the loop with the AI. This works well since we’re prototyping, learning the domain and discovering what output I want.

      Then there’s a meta-level feedback loop, the ā€œis this working?ā€ check when I feel resistance. Frustration, tedium, annoyance - these feelings are a signal to me that maybe this work could be easier. I step back and think about how the AI could work more accurately and smoothly. Annie Vella called this the ā€œmiddle loop,ā€ and Kief Morris renamed it "human on the loop."

      Here, I’m both in the development loop with the AI, and I’m ā€œon the loopā€ as a thoughtful collaborator, smoothing the development loop when it gets rough.

      Resistance will be assimilated.

      As developers using software to build software, we have potential to mold our own work environment. With AI making software change superfast, changing our program to make debugging easier pays off immediately. Also, this is fun!

    25. šŸ”— r/wiesbaden Eiserne Hand by Vespa rss

      Quick question for the moped/scooter riders.

      My girlfriend has to commute to Taunusstein and is considering switching to a scooter.

      Hence my question:

      Can a small 50cc Vespa/moped make it up the Eiserne Hand? I.e. at a reasonable speed?

      Has any of you done this before?

      Thanks in advance for the answers :)

      submitted by /u/metaldog
      [link] [comments]

    26. šŸ”— r/Leeds best tuna melt paninis? rss

      i’m craving a tuna melt really badly right now and i’m in the city centre for lunch tomorrow and want to get something good. does anyone have any recommendations? cheese, tuna, and toasted panini bread is all i need right now šŸ™

      submitted by /u/Shoddy_Day
      [link] [comments]

    27. šŸ”— Mitchell Hashimoto Ghostty Is Leaving GitHub rss
      (empty)
    28. šŸ”— Armin Ronacher Before GitHub rss

      GitHub was not the first home of my Open Source software. SourceForge was.

      Before GitHub, I had my own Trac installation. I had Subversion repositories, tickets, tarballs, and documentation on infrastructure I controlled. Later I moved projects to Bitbucket, back when Bitbucket still felt like a serious alternative place for Open Source projects, especially for people who were not all-in on Git yet.

      And then, eventually, GitHub became the place, and I moved all of it there.

      It is hard for me to overstate how important GitHub became in my life. A large part of my Open Source identity formed there. Projects I worked on found users there. People found me there, and I found other people there. Many professional relationships and many friendships started because some repository, issue, pull request, or comment thread made two people aware of each other.

      That is why I find what is happening to GitHub today so sad and so disappointing. I do not look at it as just the folks at Microsoft making product decisions I dislike. GitHub was part of the social infrastructure of Open Source for a very long time. For many of us, it was not merely where the code lived; it was where a large part of the community lived.

      So when I think about GitHub's decline, I also think about what came before it, and what might come after it. I have written a few times over the years about dependencies, and in particular about the problem of micro dependencies. In my mind, GitHub gave life to that phenomenon. It was something I definitely did not completely support, but it also made Open Source more inclusive. GitHub changed how Open Source feels, and later npm and other systems changed how dependencies feel. Put them together and you get a world in which publishing code is almost frictionless, consuming code is almost frictionless, and the number of projects in the world explodes.

      That has many upsides. But it is worth remembering that Open Source did not always work this way.

      A Smaller World

      Before GitHub, Open Source was a much smaller world. Not necessarily in the number of people who cared about it, but in the number of projects most of us could realistically depend on.

      There were well-known projects, maintained over long periods of time by a comparatively small number of people. You knew the names. You knew the mailing lists. You knew who had been around for years and who had earned trust. That trust was not perfect, and the old world had plenty of gatekeeping, but reputation mattered in a very direct way. We took pride (and got frustrated) when the Debian folks came and told us our licensing was murky or our copyright headers were not up to snuff, because it meant they were packaging our work.

      A dependency was not just a package name. It was a project with a history, a website, a maintainer, a release process, a lot of friction, and often a place in a larger community. You did not add dependencies casually, because the act of depending on something usually meant you had to understand where it came from.

      Not all of this was necessarily intentional, but because these projects were comparatively large, they also needed to bring their own infrastructure. Small projects might run on a university server, and many of them were on SourceForge, but the larger ones ran their own show. They grouped together into larger collectives to make it work.

      We Ran Our Own Infrastructure

      My first Open Source projects lived on infrastructure I ran myself. There was a Trac installation, Subversion repositories, tarballs, documentation, and release files served from my own machines or from servers under my control. That was normal. If you wanted to publish software, you often also became a small-time system administrator. Georg and I ran our own collective for our Open Source projects: Pocoo. We shared server costs and the burden of maintaining Subversion and Trac, mailing lists and more.

      Subversion in particular made this "running your own forge" natural. It was centralized: you needed a server, and somebody had to operate it. The project had a home, and that home was usually quite literal: a hostname, a directory, a Trac instance, a mailing list archive.

      When Mercurial and Git arrived, they were philosophically the opposite. Both were distributed. Everybody could have the full repository. Everybody could have their own copy, their own branches, their own history. In principle, those distributed version control systems should have reduced the need for a single center. But despite all of this, GitHub became the center.

      That is one of the great ironies of modern Open Source. The distributed version control system won, and then the world standardized on one enormous centralized service for hosting it.

      What GitHub Gave Us

      It is easy now to talk only about GitHub's failures, of which there are currently many, but that would be unfair: GitHub was, and continues to be, a tremendous gift to Open Source.

      It made creating a project easy and it made discovering projects easy. It made contributing understandable to people who had never subscribed to a development mailing list in their life. It gave projects issue trackers, pull requests, release pages, wikis, organization pages, API access, webhooks, and later CI. It normalized the idea that Open Source happens in the open, with visible history and visible collaboration. And it was an excellent and reasonable default choice for a decade.

      But maybe the most underappreciated thing GitHub did was archival work: GitHub became a library. It became an index of a huge part of the software commons because even abandoned projects remained findable. Forks remained discoverable, and old issues and discussions stayed online. For all the complaints one can make about centralization, that centralization also created discoverable memory. The leaders there once cared a lot about keeping GitHub available even in countries that were sanctioned by the US.

      I know what the alternative looks like, because I was living it. Some of my earliest Open Source projects are technically still on PyPI, but the actual packages are gone. The metadata points to my old server, and that server has long stopped serving those files.

      That was normal before the large platforms. A personal domain expired, a VPS was shut down, a developer passed away, and with them went the services they paid for. The web was once full of little software homes, and many of them are gone.¹
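
      To make the anecdote concrete, here is a minimal sketch, assuming only the public PyPI JSON API, of how you could check whether a package's old metadata still points anywhere live. The package name is a placeholder, and off-PyPI hosting from that era typically shows up in fields like `download_url` rather than in PyPI-hosted release files.

      ```python
      # Minimal sketch: does an old package's metadata still point anywhere live?
      # "somepackage" is a placeholder, not one of the author's projects.
      import json
      import urllib.error
      import urllib.request

      def check_old_package(name: str) -> None:
          # The public PyPI JSON API returns the package's current metadata.
          with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
              meta = json.load(resp)
          info = meta["info"]
          # Old externally hosted releases usually lived behind these fields.
          for label in ("download_url", "home_page"):
              url = info.get(label)
              if not url:
                  continue
              try:
                  req = urllib.request.Request(url, method="HEAD")
                  urllib.request.urlopen(req, timeout=10)
                  print(f"{label}: {url} still resolves")
              except (urllib.error.URLError, ValueError, TimeoutError) as exc:
                  print(f"{label}: {url} is gone ({exc})")

      if __name__ == "__main__":
          check_old_package("somepackage")
      ```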

      npm and the Dependency Explosion

      The micro-dependency problem was not just that people published very small packages. The hosted infrastructure of GitHub and npm made it feel as if there was no cost to create, publish, discover, install, and depend on them.

      In the pre-GitHub world, reputation and longevity were part of the dependency selection process almost by necessity, and it often required vendoring. Plenty of our early dependencies were just vendored into our own Subversion trees by default, in part because we could not even rely on other services being up when we needed them and because maintaining scripts that fetched them, in the pre-API days, was painful. The implied friction forced some reflection, and it resulted in different developer behavior. With npm-style ecosystems, the package graph can grow faster than anybody's ability to reason about it.
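
      For flavor, here is a hypothetical sketch of the kind of fetch-and-vendor script that paragraph alludes to: download a pinned release tarball, verify its checksum, and unpack it into a `vendor/` directory that gets checked into your own tree. The URL, digest, and directory are illustrative, not from any real project.

      ```python
      # Hypothetical vendoring script in the pre-package-manager style:
      # pin an exact upstream release and keep a copy in your own tree.
      import hashlib
      import io
      import tarfile
      import urllib.request

      # Illustrative values; a real script would pin a real URL and digest.
      DEP_URL = "https://example.org/releases/somelib-1.2.tar.gz"
      DEP_SHA256 = "0" * 64  # replace with the actual digest when pinning
      VENDOR_DIR = "vendor"

      def vendor_dependency(url: str, expected_sha256: str, dest: str) -> None:
          # Fetch the whole tarball into memory.
          with urllib.request.urlopen(url) as resp:
              data = resp.read()
          # Refuse to unpack anything that doesn't match the pinned checksum.
          digest = hashlib.sha256(data).hexdigest()
          if digest != expected_sha256:
              raise RuntimeError(f"checksum mismatch: got {digest}")
          # Unpack into the vendor directory, which gets committed as-is.
          with tarfile.open(fileobj=io.BytesIO(data), mode="r:gz") as tar:
              tar.extractall(dest)

      if __name__ == "__main__":
          vendor_dependency(DEP_URL, DEP_SHA256, VENDOR_DIR)
      ```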

      The problems this new way of working created also meant that solutions had to be found along the way. GitHub helped compensate for the accountability problem, and it helped with licensing. At one point, the newfound influx of developers and merged pull requests left a lot of open questions about what the actual license state of a codebase was. GitHub even attempted to rectify this in its terms of service.

      The thinking for many years was that if I am going to depend on some tiny package, I at least want to see its repository. I want to see whether the maintainer exists, whether there are issues, whether there were recent changes, whether other projects use it, whether the code is what the package claims it is. GitHub became part of the system that provides trust, and more recently it has even become one of the few systems that can publish packages to npm and other registries with trusted publishing.

      That means when trust in GitHub erodes, the problem is not isolated to source hosting. It affects the whole supply chain culture that formed around it.

      GitHub Is Slowly Dying

      GitHub is currently losing some of what made it feel inevitable. Maybe that's just the life and death of large centralized platforms: they always disappoint eventually. Right now people are tired of the instability, the product churn, the Copilot AI noise, the unclear leadership, and the feeling that the platform is no longer primarily designed for the community that made it valuable.

      Obviously, GitHub also finds itself in the midst of the agentic coding revolution, and that puts enormous pressure on the folks over there. But the site has no leadership! It's a miracle that things are going as well as they are.

      For a while, leaving GitHub felt like a symbolic move mostly made by smaller projects or by people with strong views about software freedom. I definitely cringed when Zig moved to Codeberg! But I now see people with real weight and signal talking about leaving GitHub. The most obvious one is Mitchell Hashimoto, who announced that Ghostty will move. Where it will move is not clear, but it's a strong signal. There are others, too: Strudel moved to Codeberg, and so did Tenacity. Will they cause enough of a shift? Probably not, but I find myself on non-GitHub properties more frequently than I did just a year ago.

      One can argue that this is good: it is healthy for Open Source to stop pretending that one company should be the default home of everything. Git itself was designed for a world with many homes.

      Dispersion Has a Cost

      Going back to many forges, many servers, many small homes, and many independent communities will increase decentralization, and in many ways it will force systems to adapt. This can restore autonomy and make projects less dependent on the whims of Microsoft leadership. It can also allow different communities to choose different workflows. What's happening in Pi's issue tracker currently is largely a result of GitHub's product choices not working in the present-day world of Open Source. It was built for engagement, not for maintainer sanity.

      It can also make the web forget again. I quite like software that forgets because it has a cleansing element. Maybe the real risk of loss will make us reflect more on actually taking advantage of a distributed version control system.

      But if projects move to something more akin to self-hosted forges, to their own self-hosted Mercurial or cgit servers, we run the risk of losing things that we don't want to lose. The code might be distributed in theory, but the social context often is not. Issues, reviews, design discussions, release notes, security advisories, and old tarballs are fragile. They disappear much more easily than we like to admit. Mailing lists, which carried a lot of this in earlier years, have not kept up with the needs of today, and are largely a user experience disaster.

      We Need an Archive

      As much as I like the idea of things fading out of existence, we absolutely need libraries and archives.

      Regardless of whether GitHub is here to stay or projects find new homes, what I would like to see is some public, boring, well-funded archive for Open Source software. Something with the power of an endowment or public funding to keep it afloat. Something whose job is not to win the developer productivity market but just to make sure that the most important things we create do not disappear.

      The bells and whistles can be someone else's problem, but source archives, release artifacts, metadata, and enough project context to understand what happened should be preserved somewhere that is not tied to the business model or leadership mood of a single company.

      GitHub accidentally became that archive because it became the center of Open Source activity. Once that no longer holds, we should not assume some magic archival function will emerge or that GitHub will continue to function as such. We have already seen what happens when project homes are just personal servers and good intentions, and we have seen what happened to Google Code and Bitbucket.

      I hope GitHub recovers, I really do, in part because a lot of history lives there and because the people still working on it inherited something genuinely important. But I no longer think it is responsible to let the continued memory of Open Source depend on GitHub remaining a healthy product.

      The world before GitHub had more autonomy and more loss, and in some ways, we're probably going to move back there, at least for a while. Whatever people want to start building next should try to keep the memory and lose the dependence. It should be easier to move projects, easier to mirror their social context, easier to preserve releases, and harder for one company's drift to become a cultural crisis for everyone else.

      I do not want to go back to the old web of broken tarball links and abandoned Trac instances. I also do not want Open Source to pretend that the last twenty years were normal or permanent. GitHub wrote a remarkable chapter of Open Source, and if that chapter is ending, the next one should learn from it and also from what came before.

      ¹ This is also a good reminder of how much we rely on the Internet Archive for many projects of that time.