🏡


  1. About KeePassXC’s Code Quality Control – KeePassXC
  2. How to build a remarkable command palette
  3. Leaderboard - compar:IA, the AI chatbot arena
  4. Who needs Graphviz when you can build it yourself? | SpiderMonkey JavaScript/WebAssembly Engine
  5. Automerge

  1. November 21, 2025
    1. 🔗 sacha chua :: living an awesome life Tracking my oopses rss

      My stress level seems to be higher this year compared to last year. There are a number of ways I can tell when I need to slow down. I feel more tired, less energetic. Enthusiasm is difficult to muster. I need to break things down into smaller tasks. I make lots of little mistakes: I misplace things, I forget things, I don't think ahead.

      I track my mistakes in my home-made web-based journal under the Oops category. It feels a little better, come to think of it, when I can recover at least some data from a mistake. It's part of life, just a signal that tells me my brain is getting overloaded.

      I thought it would be interesting to analyse the frequency of my oops. For example, here are some recent oopses:

      • I forgot to check the EmacsConf upload folder before e-mailing the speakers a nudge about videos. There were a few more videos in there, whew! It's all good, people are patient and wonderful. Impact: people's patience. (2025-11-21-01)
      • I ate the last of the brown rice because I forgot to check if there was more. Good thing W- had more in the freezer. (2025-11-18-06)
      • I fumbled the Apple Pencil as I took my iPad off the piano. It disengaged from the magnetic lock, slid down the piano cover, and fell on the floor, breaking the tip. I didn't want to rush to catch it because I was tired and I was also holding the iPad. Fortunately, I have a number of backup tips. Impact: maybe $3 to replace that tip eventually. Cause: tired. (2025-11-18-03)
      • I washed the clothes on hot because I'm trying to avoid insect bites, but I accidentally shrank A+'s favourite purple shirt. Maybe next time I can just do a small load of my stuff. Impact: $10. I made up by buying new clothes for her, which I've been meaning to do anyway. #household (2025-11-15-16)
      • I crossed the street against the lights because I wasn't looking at the pedestrian light. (2025-11-11-06)

      Here's a yearly analysis:

      Code for retrieving and grouping my journal entries
      (append '(("Year" "Count"))
      (sort
       (mapcar (lambda (group)
                 (list
                  (car group)
                  (length (cdr group))))
               (seq-group-by
                (lambda (o) (substring (my-journal-date o) 0 4))
                (seq-filter (lambda (o) (string= (my-journal-category o) "Oops")) (my-journal-get-entries))))
       :key 'car))
      
      Data table
      Year Count
      2017 12
      2018 75
      2019 36
      2020 38
      2021 33
      2022 25
      2023 28
      2024 37
      2025 73
      Code for graphing comments by year
      import pandas as pd
      import seaborn as sns
      import matplotlib.pyplot as plt
      import numpy as np
      
      df = pd.DataFrame(data[1:], columns=data[0])
      df['Count'] = df['Count'].astype(int)
      df['Year'] = df['Year'].astype(int)
      df = df.sort_values('Year')
      plt.figure(figsize=(12, 6))
      ax = sns.barplot(x='Year', y='Count', data=df)
      plt.title('Oops by Year (2017-2025)', fontsize=16, fontweight='bold')
      plt.xlabel('Year')
      plt.ylabel('Number of oops entries')
      plt.xticks(rotation=45)
      plt.grid(axis='y')
      for i, v in enumerate(df['Count']):
          ax.text(i, v + 1, str(v), ha='center', fontsize=9)
      plt.tight_layout()
      plt.savefig('year_count_plot.svg')
      return 'year_count_plot.svg'
      
      2025-11-21T17:15:47.790809 image/svg+xml Matplotlib v3.6.3, https://matplotlib.org/

      2017 was not a full year of data, so that's probably why the number is so low. 2018 was when my dad died. I flew to the Philippines several times with my 2-year-old daughter so that we could spend time with my mom. It's a wonder that I managed to keep things mostly sorted out. Things were mostly manageable in between despite the ongoing pandemic. This year, though, a confluence of factors added a lot more strain on my brain. I wonder if part of it is because I'm concerned about health issues for people I care about. I also worry about the kiddo, school, change, and possibly adolescence. Who knows, maybe I'm starting to go through perimenopause, which apparently tends to include brain fog and mood swings. Fun. These are big things that I won't be able to resolve on my own, but I can get better at not overextending myself.

      Might be fun to do a month-by-month analysis.

      Code for retrieving and grouping my journal entries
      (append '(("Year" "Month" "Count"))
      (sort
       (mapcar (lambda (group)
                 (list
                  (substring (car group) 0 4)
                  (string-to-number (substring (car group) 5))
                  (length (cdr group))))
               (seq-group-by
                (lambda (o) (substring (my-journal-date o) 0 7))
                (seq-filter (lambda (o) (string= (my-journal-category o) "Oops")) (my-journal-get-entries))))
       :key (lambda (row) (format "%s-%02d" (car row) (cadr row)))))
      
      Code for making the heatmap
      import pandas as pd
      import matplotlib.pyplot as plt
      import seaborn as sns
      
      df = pd.DataFrame(data[1:], columns=data[0])
      df = pd.pivot_table(df, columns=['Month'], index=['Year'], values='Count', aggfunc='sum', fill_value=0).iloc[::-1].sort_index(ascending=True)
      plt.figure(figsize=(12, 8))
      sns.heatmap(
          df,
          annot=True,
          fmt="d",  # Format as integer
          cmap="YlGnBu",  # Color map
          linewidths=.5,
          cbar_kws={'label': 'Count of "Oops" Entries'}
      )
      # Set the title and axis labels
      plt.title('Heatmap of "Oops" Entries by Month and Year', fontsize=16)
      plt.xlabel('Month', fontsize=12)
      plt.ylabel('Year', fontsize=12)
      plt.savefig('number-of-oops-by-month.png')
      return df
      
      Data table
      Month  1   2   3   4   5   6   7   8   9   10  11  12
      Year
      2017    0   0   0   0   0   0   0   0   7   2   0   3
      2018    4   7   4  10  10   8   8   1   7   7   5   4
      2019    4   4   2   4   4   1   3   5   1   1   2   5
      2020    3   1   2   2   1   5   7   7   4   2   2   2
      2021    2   2   1   1   2   4   6   4   2   2   1   6
      2022    2   3   2   1   3   2   0   3   2   2   4   1
      2023    3   1   1   1   5   2   4   5   0   0   3   3
      2024    0   1   1   4   4   8   8   1   3   3   2   2
      2025    7   1   7   2   6   4   9   8  10   4  15   0
      
      number-of-oops-by-month.png

      Oooh, what's up with this month, yeah…

      Most of my mistakes are small and easy to fix, just wasting time or patience. Others are a bit more annoying (dropping electronic devices, etc.). Some are potentially life-limiting (gotta make sure I look at the right lights before crossing the street). I tend to have a hard time with:

      • transitions, like coming home after a bike ride: I might accidentally hang up my keys on the wrong hook if I'm not paying attention. I've started saying the steps out loud.
      • tidying: If I have an attentional lapse, I put things in not-quite-the-right-place, so I probably can't listen to podcasts or think about complex thoughts.
      • travel: If I'm biking or walking, I have to pay extra attention.
      • task switching: interruptions, stacks, etc. I say things out loud and write them down if needed.

      When I notice myself making more oopses than usual, I try to sleep more, take things more slowly, offload more of my thoughts to drawings and notes, ask for help, and do fewer things.

      I'm working on single-tasking more instead of getting distracted by interesting thoughts. Maybe I used to be able to think about stuff, but now it's probably better to just let my mind be slightly bored if necessary instead of letting it get overloaded. I have to adapt to my capacity at the moment. I can either trust that those thoughts will come back if they're important, or take a moment to write them down on my phone. I can also give myself more focus time during the day to revisit those thoughts so that I don't get tempted to squeeze them in while, say, I'm putting away the dishes.

      Maybe routines, songs, physical cues like checklists, or pointing and calling (physically pointing to something and saying it out loud for confirmation) can help me with some of my frequently-made mistakes.

      Little by little. I might not be able to get rid of all the little mistakes, but if I can smoothen out some of the frequent ones, have an oops fund for the monetary costs of moderate mistakes, and keep myself from making the life-limiting ones, that might be enough.

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    2. 🔗 r/reverseengineering YOU ARE NOT READY FOR THIS: NEURAL NETWORKS ARE KILLING REVERSE ENGINEERING rss
    3. 🔗 HexRaysSA/plugin-repository commits sync repo: -1 plugin, -2 releases rss
      sync repo: -1 plugin, -2 releases
      
      ## Removed plugins
      - fwhunt-ida
      
    4. 🔗 r/reverseengineering Made a process injection detector rss
    5. 🔗 News Minimalist 🐢 New daily pill helps lose 10% weight + 10 more stories rss

      In the last 2 days ChatGPT read 63606 top news stories. After removing previously covered events, there are 11 articles with a significance score over 5.5.

      [6.1] Daily pill helps people lose 10% of weight in 18 months, study finds —medicalxpress.com(+10)

      A study published Thursday found a new daily pill, orforglipron, helps people lose about 10% of their body weight, offering a cheaper, needle-free alternative to current injectable drugs.

      The Lancet study involved over 1,500 adults with both obesity and type 2 diabetes. After 72 weeks, those on the highest dose lost 10% of their weight, compared to 2% for a placebo group. Eli Lilly developed the pill.

      The pill has side effects similar to existing drugs and is less potent than the injectable Mounjaro. If approved, orforglipron could be available in 2026 at a significantly lower cost than current treatments.

      [6.4] Trump's Ukraine war end plan favors Russia —apnews.com(+521)

      A U.S.-Russia drafted peace plan proposes ending the war by requiring Ukraine to cede territory and forgo NATO membership in a deal largely favorable to Moscow.

      The 28-point proposal, obtained Thursday by the Associated Press, also offers Ukraine security guarantees, limits its military, and establishes a large international fund for reconstruction while reintegrating Russia into the global economy.

      Specific terms include recognizing Crimea, Luhansk, and Donetsk as Russian. The plan also calls for a ceasefire, amnesty for combatants, and using frozen Russian assets to help rebuild Ukraine.

      Highly covered news with significance over 5.5

      [5.5] Saudi Arabia commits $1 trillion to U.S. investments — annahar.com (Arabic) (+14)

      [5.6] Trump plans massive expansion of offshore oil drilling — economictimes.indiatimes.com (+18)

      [6.2] WHO reports increasing drug-resistant gonorrhea — cidrap.umn.edu*(*+6)

      [5.8] Scientists inch closer to pinprick blood test to detect diseases 10 years before symptoms appear — independent.co.uk (+7)

      [5.7] US tech giants issue bonds for AI investments, raising investor concerns — ctvnews.ca (+7)

      [5.5] US classifies countries with DEI policies as infringing human rights — bbc.com (+2)

      [5.6] Speech-restoring brain chip gets FDA approval for human trial — newatlas.com (+2)

      [5.6] Skin gel delivers insulin painlessly through the skin — zmescience.com (+2)

      [6.0] Fukushima owner edges towards restarting first reactor since meltdown — bbc.com (+14)

      Thanks for reading!

      — Vadim


      You can create your own personalized newsletter like this with premium.


      Powered by beehiiv

    6. 🔗 r/wiesbaden Good food spot rss

      Any good ideas for tasty food restaurant ?

      submitted by /u/Nonthor
      [link] [comments]

    7. 🔗 r/wiesbaden Kennzeichen BD23 rss

      https://preview.redd.it/vune38174n2g1.jpg?width=1000&format=pjpg&auto=webp&s=1ae6fa2d7e5d1d3c2ee3e3a4f8fc1d5797d5e268

      Gude! Ich sammle in einer App Kennzeichen aller Art. Die gängigen wie WI, SWA, RÜD etc. habe ich schon lange gefunden, aber das Kennzeichen BD23 fehlt mir noch, dabei handelt es sich um das Sonderkennzeichen des Bundeskriminalamtes. Weiß jemand, ob die hier in Wiesbaden herumfahren? und wenn ja, bei welcher Liegenschaft ich gute Chancen habe, ein Kennzeichen zu erspähen? Danke!

      submitted by /u/CTRLPLUST
      [link] [comments]

    8. 🔗 HexRaysSA/plugin-repository commits sync repo: -4 plugins, -4 releases rss
      sync repo: -4 plugins, -4 releases
      
      ## Removed plugins
      - gepetto
      - ida-terminal-plugin
      - parascope
      - unicorn-tracer-arm64
      
    9. 🔗 @binaryninja@infosec.exchange Container support is finally here in Binary Ninja 5.2! You can browse nested mastodon

      Container support is finally here in Binary Ninja 5.2! You can browse nested formats like ZIP, IMG4, or CaRT right in memory with no manual extraction. When a container resolves to a single target, Binja can open it straight away. If there are multiple payloads, the new Container Browser lets you pick what to load. This all works with the Transform API, so you can extend it and add support for whatever formats you need. https://binary.ninja/2025/11/13/binary-ninja-5.2-io.html#container- support

    10. 🔗 The Pragmatic Engineer A startup in Mongolia translated my book rss

      I published The Software Engineer's Guidebook two years ago. The book sold 40,000 copies and generated $611,911 in royalties in its first two years. We need more good books in tech, so I hope that sharing these numbers inspires other techies to write them. I shared more details on how I self-published the book, on the financials, and learnings from publishing in this post.

      An unexpected highlight of publishing the book was ending up in Mongolia in June of this year, at a small-but-mighty startup called Nasha Tech. This was because the startup translated my book into Mongolian. Here's the completed book:

      altThe Software Engineer's Guidebook, in Mongolian. You can buy this translation here

      Here's what happened:

      A little over a year ago, a small startup from Mongolia reached out, asking if they could translate the book. I was skeptical it would happen because the unit economics appeared pretty unfavorable. Mongolia's population is 3.5 million; much smaller than other countries where professional publishers had offered to do a translation (Taiwan: 23M, South Korea: 51M, Germany: 84M, Japan: 122M, China: 1.43B people).

      But I agreed to the initiative, and expected to hear nothing back. To my surprise, nine months later the translation was ready, and the startup printed 500 copies on the first run. They invited me to a book signing in the capital city of Ulaanbaatar, and soon I was on my way to meet the team, and to understand why a small tech company translated my book!

      Japanese startup vibes in Mongolia

      The startup behind the translation is called Nasha Tech; a mix of a startup and a digital agency. Founded in 2018, its main business has been agency work, mainly for companies in Japan. They are a group of 30 people, mostly software engineers.

      altNasha Tech's offices in Ulaanbaatar, Mongolia

      Their offices resembled a mansion more than a typical workplace, and everyone takes their shoes off when arriving at work and switches to "office slippers". I encountered the same vibe later at Cursor's headquarters in San Francisco, in the US.

      Nasha Tech found a niche of working for Japanese companies thanks to one of its cofounders studying in Japan, and building up connections while there. Interestingly, another cofounder later moved to Silicon Valley, and advises the company from afar.

      The business builds the "Uber Eats of Mongolia". Outside of working as an agency, Nasha Tech builds its own products. The most notable is called TokTok, the "UberEats of Mongolia", which is the leading food delivery app in the capital city. The only difference between TokTok and other food delivery apps is scale: the local market is smaller than in some other cities. At a few thousand orders per day, it might not be worthwhile for an international player like Uber or Deliveroo to enter the market.

      altThe TokTok app: a customer base of 800K, 500 restaurants, and 400 delivery riders

      The tech stack Nasha Tech typically uses:

      • Frontend: React / Next, Vue / Nuxt, TypeScript, Electron, Tailwind, Element UI
      • Backend and API: NodeJS (Express, Hono, Deno, NestJS), Python (FastAPI, Flask), Ruby on Rails, PHP (Laravel), GraphQL, Socket, Recoil
      • Mobile: Flutter, React Native, Fastlane
      • Infra: AWS, GCP, Docker, Kubernetes, Terraform
      • AI & ML: GCP Vertex, AWS Bedrock, Elasticsearch, LangChain, Langfuse

      AI tools are very much widespread, and today the team uses Cursor, GitHub Copilot, Claude Code, OpenAI Codex, and Junie by Jetbrains.

      I detected very few differences between Nasha Tech and other "typical" startups I've visited, in terms of the vibe and tech stack. Devs working on TokTok were very passionate about how to improve the app and reduce the tech debt accumulated by prioritizing the launch. A difference for me was the language and target market: the main language in the office is, obviously, Mongolian, and the products they build like TokTok also target the Mongolian market, or the Japanese one when working with clients.

      One thing I learned was that awareness about the latest tools has no borders: back in June, a dev at Nasha Tech was already telling me that Claude Code was their daily driver, even though the tool had been released for barely a month at that point!

      Why translate the book into Mongolian?

      Nasha Tech was the only non-book publisher to express interest in translating the book. But why did they do it?

      I was told the idea came from software engineer Suuribaatar Sainjargal, who bought and enjoyed the English-language version. He suggested translating the book so that everyone at the company could read it, not only those fluent in English.

      Nasha Tech actually had some in-house experience of translation. A year earlier, in 2024, the company translated Matt Mochary's The Great CEO Within as a way to uplevel their leadership team, and to help the broader Mongolian tech ecosystem.

      Also, the company's General Manager, Batutsengel Davaa, happened to have been involved in translating more than 10 books in a previous role. He took the lead in organizing this work, and here's how the timelines played out:

      • Professional translator: 3 months
      • Technical editor revising the draft translation: 1 month
      • Technical editing #2 by a Support Engineer in Japan: 2 months
      • Technical revision: 15 engineers at Nasha Tech revised the book, with a "divide and conquer" approach: 2 months
      • Final edit and print: 1 month

      This was a real team effort. Somehow, this startup managed to produce a high- quality translation in around the same time as it took professional book publishers in my part of the world to do the same!

      A secondary goal that Nasha Tech had was to advance the tech ecosystem in Mongolia. There's understandably high demand for books in the mother tongue; I observed a number of book stands selling these books, and book fairs are also popular. The translation of my book has been selling well, where you can buy the book for 70,000 MNTs (~$19).

      Book signing and the Mongolian startup scene

      The book launch event was at Mongolia's startup hub, called IT Park, which offers space for startups to operate in. I met a few working in the AI and fintech spaces - and even one startup producing comics.

      altBook launch event, and meeting startups inside Mongolia's IT Park

      I had the impression that the government and private sector are investing heavily in startups, and want to help more companies to become breakout success stories:

      • IT Park report: the country's tech sector is growing ~20%, year-on-year. The combined valuation of all startups in Mongolia is at $130M, today.It 's worth remembering that location is important for startups: being in hubs like the US, UK, and India confers advantages that can be reflected in valuations.
      • Mongolian Startup Ecosystem Report 2023: the average pre-seed valuation of a startup in Mongolia is $170K, seed valuation at $330K, and Series A valuation at $870K. The numbers reflect market size; for savvy investors, this could also be an opportunity to invest early. I met a Staff Software Engineer at the book signing event who is working in Silicon Valley at Google, and invests and advises in startups in Mongolia.
      • Mongolian startup ecosystem Map: better-known startups in the country.

      Two promising startups from Mongolia: Chimege (an AI+voice startup) AND Global (fintech). Thanks very much to the Nasha Tech team for translating the book - keep up the great work!

    11. 🔗 HexRaysSA/plugin-repository commits readme: tweaks rss
      readme: tweaks
      
    12. 🔗 HexRaysSA/plugin-repository commits readme: add details and support hints rss
      readme: add details and support hints
      
      closes #7
      
    13. 🔗 r/reverseengineering Advanced Evasion Techniques in Low Level rss
    14. 🔗 HexRaysSA/plugin-repository commits ci: better render logs summary rss
      ci: better render logs summary
      
    15. 🔗 HexRaysSA/plugin-repository commits scripts: HCLI API changes rss
      scripts: HCLI API changes
      
    16. 🔗 HexRaysSA/plugin-repository commits ci: collect log summary to help triage plugin repo status rss
      ci: collect log summary to help triage plugin repo status
      
      ref #5
      
    17. 🔗 HexRaysSA/plugin-repository commits add script to summarize GitHub indexer logs rss
      add script to summarize GitHub indexer logs
      
      summarize logs: don't rely on gh and colorize with rich
      
    18. 🔗 HexRaysSA/plugin-repository commits ci: run deploy only after repo sync rss
      ci: run deploy only after repo sync
      
    19. 🔗 apple/embedding-atlas v0.13.0 release

      New Features

      • Add a dashboard layout mode that allows you to freely arrange the components.
      • New chart types: eCDF, line chart, and bubble chart.
      • [breaking] A new DSL syntax for charts (similar to Vega-Lite), you can now create custom charts with this syntax.
      • Theme support with chartTheme and stylesheet props to the EmbeddingAtlas component.
      • Support rendering chat messages (data format: Array<{role: "<role>", content: "<content>"}>).
      • Add a widget to show a particular column's value of the selected data point (click table or embedding plot to select a point).
      • Add a widget to display custom Markdown content.
      • Support multiple text embedding providers via LiteLLM (thanks @peter-gy).

      Detailed Changes

      New Contributors

      Full Changelog : v0.12.0...v0.13.0

    20. 🔗 Kagi Kagi Hub Belgrade: A Home Base for Kagi Members Worldwide rss

      We’re excited to announce that Kagi Hub Belgrade ( https://hub.kagi.com ) is now open! Our first office doubles as a free coworking space for all Kagi members.

    21. 🔗 Servo Blog Servo Sponsorship Tiers rss

      The Servo project is happy to announce the following new sponsorship tiers to encourage more donations to the project:

      • Platinum: 10,000 USD/month
      • Gold: 5,000 USD/month
      • Silver: 1,000 USD/month
      • Bronze: 100 USD/month

      Organizations and individual sponsors donating in these tiers will be acknowledged on theservo.org homepage with their logo or name. Please note that such donations should come with no obligations to the project i.e they should be “no strings attached” donations. All the information about these new tiers is available at the Sponsorship page on this website.

      Please contact us atjoin@servo.org if you are interested in sponsoring the project through one of these tiers.

      Use of donations is decided transparently via the Technical Steering Committee’s public funding request process , and active proposals are tracked in servo/project#187.

      Last, but not least, we’re excited to welcome our first bronze sponsorLambdaTest who has recently started donating to the Servo project. Thank you very much!

    22. 🔗 Baby Steps Move Expressions rss

      This post explores another proposal in the space of ergonomic ref-counting that I am calling move expressions. To my mind, these are an alternative to explicit capture clauses, one that addresses many (but not all) of the goals from that design with improved ergonomics and readability.

      TL;DR

      The idea itself is simple, within a closure (or future), we add the option to write move($expr). This is a value expression ("rvalue") that desugars into a temporary value that is moved into the closure. So

      || something(&move($expr))
      

      is roughly equivalent to something like:

      { 
          let tmp = $expr;
          || something(&{tmp})
      }
      

      How it would look in practice

      Let's go back to one of our running examples, the "Cloudflare example", which originated in this excellent blog post by the Dioxus folks. As a reminder, this is how the code looks today - note the let _some_value = ... lines for dealing with captures:

      // task:  listen for dns connections
      let _some_a = self.some_a.clone();
      let _some_b = self.some_b.clone();
      let _some_c = self.some_c.clone();
      tokio::task::spawn(async move {
          do_something_else_with(_some_a, _some_b, _some_c)
      });
      

      Under this proposal it would look something like this:

      tokio::task::spawn(async {
          do_something_else_with(
              move(self.some_a.clone()),
              move(self.some_b.clone()),
              move(self.some_c.clone()),
          )
      });
      

      There are times when you would want multiple clones. For example, if you want to move something into a FnMut closure that will then give away a copy on each call, it might look like

      data_source_iter
          .inspect(|item| {
              inspect_item(item, move(tx.clone()).clone())
              //                      ----------  -------
              //                           |         |
              //                   move a clone      |
              //                   into the closure  |
              //                                     |
              //                             clone the clone
              //                             on each iteration
          })
          .collect();
      
      // some code that uses `tx` later...
      

      Credit for this idea

      This idea is not mine. It's been floated a number of times. The first time I remember hearing it was at the RustConf Unconf, but I feel like it's come up before that. Most recently it was proposed by Zachary Harrold on Zulip, who has also created a prototype called soupa. Zachary's proposal, like earlier proposals I've heard, used the super keyword. Later on @simulacrum proposed using move, which to me is a major improvement, and that's the version I ran with here.

      This proposal makes closures more "continuous"

      The reason that I love the move variant of this proposal is that it makes closures more "continuous" and exposes their underlying model a bit more clearly. With this design, I would start by explaining closures with move expressions and just teach move closures at the end, as a convenient default:

      A Rust closure captures the places you use in the "minimal way that it can" - so || vec.len() will capture a shared reference to the vec, || vec.push(22) will capture a mutable reference, and || drop(vec) will take ownership of the vector.

      You can use move expressions to control exactly what is captured: so || move(vec).push(22) will move the vector into the closure. A common pattern when you want to be fully explicit is to list all captures at the top of the closure, like so:

      || {
          let vec = move(input.vec); // take full ownership of vec
          let data = move(&cx.data); // take a reference to data
          let output_tx = move(output_tx); // take ownership of the output
      

      channel

          process(&vec, &mut output_tx, data)
      }
      

      As a shorthand, you can write move || at the top of the closure, which will change the default so that closures > take ownership of every captured variable. You can still mix-and-match with move expressions to get more control. > So the previous closure might be written more concisely like so:

      move || {
          process(&input.vec, &mut output_tx, move(&cx.data))
          //       ---------       ---------       --------  
          //           |               |               |         
          //           |               |       closure still  
          //           |               |       captures a ref
          //           |               |       `&cx.data`        
          //           |               |                         
          //       because of the `move` keyword on the clsoure,
          //       these two are captured "by move"
          //  
      }
      

      This proposal makes move "fit in" for me

      It's a bit ironic that I like this, because it's doubling down on part of Rust's design that I was recently complaining about. In my earlier post on Explicit Capture Clauses I wrote that:

      To be honest, I don't like the choice of move because it's so operational. I think if I could go back, I would try to refashion our closures around two concepts

      • Attached closures (what we now call ||) would always be tied to the enclosing stack frame. They'd always have a lifetime even if they don't capture anything.
      • Detached closures (what we now call move ||) would capture by-value, like move today.

      I think this would help to build up the intuition of "use detach || if you are going to return the closure from the current stack frame and use || otherwise".

      move expressions are, I think, moving in the opposite direction. Rather than talking about attached and detached, they bring us to a more unified notion of closures, one where you don't have "ref closures" and "move closures" - you just have closures that sometimes capture moves, and a "move" closure is just a shorthand for using move expressions everywhere. This is in fact how closures work in the compiler under the hood, and I think it's quite elegant.

      Why not suffix?

      One question is whether a move expression should be a prefix or a postfix operator. So e.g.

      || something(&$expr.move)
      

      instead of &move($expr).

      My feeling is that it's not a good fit for a postfix operator because it doesn't just take the final value of the expression and so something with it, it actually impacts when the entire expression is evaluated. Consider this example:

      || process(foo(bar()).move)
      

      When does bar() get called? If you think about it, it has to be closure creation time, but it's not very "obvious".

      We reached a similar conclusion when we were considering .unsafe operators. I think there is a rule of thumb that things which delineate a "scope" of code ought to be prefix - though I suspect unsafe(expr) might actually be nice, and not just unsafe { expr }.

      Edit: I added this section after-the-fact in response to questions.

      Conclusion

      I'm going to wrap up this post here. To be honest, what this design really has going for it, above anything else, is its simplicity and the way it generalizes Rust 's existing design. I love that. To me, it joins the set of "yep, we should clearly do that" pieces in this puzzle:

      • Add a Share trait (I've gone back to preferring the name share 😁)
      • Add move expressions

      These both seem like solid steps forward. I am not yet persuaded that they get us all the way to the goal that I articulated in an earlier post:

      "low-level enough for a Kernel, usable enough for a GUI"

      but they are moving in the right direction.

  2. November 20, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-11-20 rss

      IDA Plugin Updates on 2025-11-20

      New Releases:

      Activity:

    2. 🔗 r/wiesbaden Jazz clubs/bars with live music? rss

      Hey!! I recently moved to Wiesbaden, and I've been wanting to go to jazz clubs/bars. I was reading about them in the Stadtlexikon:

      https://www.wiesbaden.de/stadtlexikon/stadtlexikon-a-z/jazz-in-wiesbaden

      But everytime I look up the listed places, they don't look like what I'm looking for with Walhalla am Exil having specific events, and some of them not having live music.

      I also just haven't been to jazz bars before, so I don't fully know what I'm looking for, but I'd appreciate recommendations for places etc. Tabnskyou!

      submitted by /u/Old-Bus-6698
      [link] [comments]

    3. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      ## New releases
      - [capa](https://github.com/mandiant/capa): 9.3.1
      
    4. 🔗 The Pragmatic Engineer The Pulse: Cloudflare takes down half the internet – but shares a great postmortem rss

      Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover Big Tech and startups through the lens of senior engineers and engineering leaders. Today, we cover one out of five topics from this week 's The Pulse issue. Full subscribers received the below article seven days ago. To get articles like this in your inbox, every week,subscribe here .

      Many subscribers expense this newsletter to their learning and development budget. If you have such a budget, here 's an email you could send to your manager .


      Before we start: I'm excited to share something new: The Pragmatic Summit.

      Four years ago, The Pragmatic Engineer started as a small newsletter: me writing about topics relevant for engineers and engineering leaders at Big Tech and startups. Fast forward to today, and the newsletter crossed one million readers, and the publication expanded with a podcast as well.

      One thing that was always missing: meeting in person. Engineers, leaders, founders--people who want to meet others in this community, and learn from each other. Until now that is:

      altThe Pragmatic Summit. See more details and apply to attend

      In partnership with Statsig, I'm hosting the first-ever Pragmatic Summit. Seats are limited, and tickets are priced at $499, covering the venue, meals, and production--we're not aiming to make any profit from this event.

      Apply to attend the Summit

      I hope to see many of you there!


      Cloudflare takes down half the internet - but shares a great postmortem

      On Tuesday came another reminder about how much of the internet depends on Cloudflare's content delivery network (CDN), when thousands of sites went fully or partially offline in an outage that lasted 6 hours. Some of the higher-profile victims included:

      • ChatGPT and Claude
      • Canva, Dropbox, Spotify,
      • Uber, Coinbase, Zoom
      • X and Reddit

      Separately, you may or may not recall that during a different recent outage caused by AWS, Elon Musk noted on his website, X, that AWS is a hard dependency for Signal, meaning an AWS outage could take down the secure messaging service at any moment. In response, a dev pointed out that it is the same for X with Cloudflare - and so it proved earlier this week, when X was broken by the Cloudflare outage.

      altPredicting the future. Source: Mehul Mohan on X

      That AWS outage was in the company's us-east-1 region and took down a good part of the internet last month. AWS released incident details three days later - unusually speedy for the e-commerce giant - although that postmortem was high-level and we never learned exactly what caused AWS's DNS Enactor service to slow down, triggering an unexpected race condition that kicked off the outage.

      What happened this time with Cloudflare?

      Within hours of mitigating the outage, Cloudflare's CEO Matthew Prince shared an unusually detailed report of what exactly went wrong. The root cause was to do with propagating a configuration file to Cloudflare's Bot Management module. The file crashed Bot Management, which took Cloudflare's proxy functionality offline.

      Here's a brief overview of how Cloudflare's proxy layer works at a high level. It's the layer that protects the "origin" resources of customers - minimizing network traffic to them by blocking malicious requests and caching static resources in Cloudflare's CDN:

      altHow Cloudflare 's proxy works. More details on Cloudflare 's engineering blog

      Here's how the incident unfolded:

      A database permissions change inClickHousekicked things off. Before the permissions changed, all queries to fetch feature metadata (to be used by the Bot Management module) would have only been run on distributed tables in Clickhouse, in a database called "default" which contains 60 features.

      altBefore the permissions change: about 60 features were returned, that were fed to the Bot Module

      Until now, these queries were running using a shared system account. Cloudflare's engineering team wanted to improve system security and reliability, and move from this shared system account to individual user accounts. User accounts already had access to another database called "r0", so the team made the database permission change for access to r0 to be implicit instead of explicit.

      As a side effect of this, the same query collecting the features to be passed to Bot Management started to fetch from the r0 database, and return many more features than expected:

      altAfter the permissions change: the query did not change but returned twice as many results

      The Bot Management module does not allow loading of more than 200 features. This limit was well above the production usage of 60, and was put in place for performance reasons: the Bot Management module pre-allocates memory for up to 200 features, and it will not operate with more than this number.

      Asystem panichit machines served with the incorrect feature file. Cloudflare was nice enough to share the exact code that caused this panic, which was this unwrap() function:

      altSource:Cloudflare

      What likely happened:

      • The append_with_names() function likely checked for a limit of 200 features
      • If it saw more than 200 features, it likely returned an error
      • … and when writing the code, it was not expected that append_with_names() would return an error…
      • … and so .unwrap() panicked and crashed the system!

      Edge nodes started to crash, one by one, seemingly randomly. The feature file was being generated every 5 minutes, and gradually rolled out to Edge nodes. So, initially, it was only a few nodes that crashed, and then over time, more became non-responsive. At one point, both good and bad configuration files were being distributed, making failed nodes that received the good configuration file start working - for a while!

      Why so long to find the root cause?

      It took Cloudflare engineers unusually long - 2.5 hours! - to figure all this out, and that an incorrect configuration file propagating to Edge servers was to blame for their proxy going down. Turns out, an unrelated failure made the Cloudflare team suspect that they were under a coordinated botnet attack, as when a few of the Edge nodes started to go offline, the company's status page did, too:

      altCloudflare 's status page went offline when the outage started. Source: Cloudflare

      The team tried to gather details about the attack, but there was no attack, meaning they wasted time looking in the wrong place. In reality, the status page going down was a coincidence and unrelated to the outage. But it's easy to see why their first reaction was to figure out if there was a distributed denial of service (DDoS) attack.

      As mentioned, it eventually took 2.5 hours to pinpoint the incorrect configuration files as the source of the outage, and another hour to stop the propagation of new files, and create a new and correct file, which was deployed 3.5 hours after the start of the incident. Cleanup took another 2.5 hours, and at 17:06 UTC, the outage was resolved, ~6 hours after it started.

      Cloudflare shared a detailed review of the incident and learnings, which can be read here.

      How did the postmortem come so fast?

      One thing that keeps being surprising about Cloudflare is how they have a very detailed postmortem up in less than 24 hours after the incident is resolved. Cofounfer and CEO Matthew Prince explained how this was possible:

      • Matthew was part of the outage call.
      • After the outage was resolved, he wrote a first version of the incident review, at home. Matthew was in Lisbon, in Cloudflare's European HQ, so this was early evening
      • The team circulated a Google Doc with this initial writeup, and questions that needed to be reviewed
      • In a few hours, all questions were answered
      • Matthew: "None of us were happy [about the incident] -- we were embarrassed by what had happened -- but we declared it [the postmortem] true and accurate.
      • Sent the draft over to the SF team, who did one more sweep, the posted it

      Talk about moving with the speed of a startup, despite being a publicly traded company!

      Learnings

      There is much to learn from this incident, such as:

      Be explicit about logging errors when you raise them! Cloudflare could probably have identified the root cause of this error much faster if the line of code that returned an error, also logged the error, and if Cloudflare had alerts set up when certain errors spiked on its nodes. It could have surely shaved an hour or two off the time it took to mitigate.

      Of course, logging errors before throwing them is extra work, but when done with monitoring or log analysis, it can help find the source of errors much faster.

      Global database changes are always risky. You never know what part of the system you might hit.**** The incident started with a seemingly innocuous database permissions change that impacted a wide range of queries. Unfortunately, there is no good way to test the impact of such changes (if you know one, please leave a comment below!)

      Cloudflare was making the right kind of change by removing global systems accounts; it's a good direction to go in for security and reliability. It was extremely hard to predict the change would end up taking down a part of their system - and the web.

      Two things going wrong at the same time can really throw an engineering team. If Cloudflare's status page did not go offline, the engineering team would have surely pinpointed the problem much faster than they did. But in the heat of the moment, it's easy to assume that two small outages are connected, until there's evidence that they're not. Cloudflare is a service that's continuously under attack, so the engineering team can't be blamed for assuming it might be more of the same.

      CDNs are the backbone of the internet, and this outage doesn 't change that. The outage hit lots of large businesses, resulting in lost revenue for many. But could affected companies have prepared better for Cloudflare going down?

      The problem is that this is hard: using a CDN means taking on a hard dependency in order to reduce traffic on your own servers (the origin servers), while serving internet users faster and more cheaply:

      altA CDN is a common way to reduce traffic to servers and serve webpages and APIs faster to users

      When using a CDN, you propagate addresses that point to that CDN server's IP or domain. When the CDN goes down, you could start to redirect traffic to your own origin servers (and deal with the traffic spike), or utilize a backup CDN, if you prepared for this eventuality.

      alt

      Both these are expensive to pull off:

      • Redirecting to the origin servers likely means needing to suddenly scale up backend infrastructure
      • Having a backup CDN means there must be a contract and payment for a CDN partner which will most likely sit idle. As and when it is needed, you must switch over and warm up their cache: it's a lot of effort and money to do this!

      A case study in the trickiness of dealing with a CDN going offline is the story of Downdetector, including inside details on why Downdetector went down during Cloudflare's latest outage, and what they learned from it.


      This was one out of the five topics covered in this week 's The Pulse. The full edition additionally covers:

      1. Downdetector & the real cost of no upstream dependencies. During the Cloudflare outage, Downdetector was also unavailable. I got details from the team about why they have a hard dependency on Cloudflare, and why that won't change anytime soon.
      2. Antigravity: Google 's new AI IDE - that its devs cannot use. Google wants to become a serious player in AI coding tools, but Antigravity contains remnants of Windsurf. Interestingly, devs at Google aren't allowed to use Antigravity for work
      3. Industry pulse. Gemini 3 launch, Anthropic valued at $350B, Jeff Bezos funds an AI company, and unusually slow headcount growth at startups persists.
      4. Five AI fakers caught in 1 month by crypto startup. Candidates who fake their backgrounds and change their looks in remote interviews continue to plague companies hiring full-remote - especially crypto startups.

      Read the full The Pulse

    5. 🔗 @HexRaysSA@infosec.exchange 🔌 Allow us to introduce the new IDA Plugin Manager. mastodon

      🔌 Allow us to introduce the new IDA Plugin Manager.

      Now, with a few simple commands, you can access a modern, self-service plugin ecosystem. Discover and get discovered more easily.

      https://hex-rays.com/blog/introducing-the-ida-plugin-manager

    6. 🔗 Kagi release notes Nov 20th, 2025 - Introducing Quick and Research assistants rss

      Kagi Assistant

      Introducing Quick and Research assistants

      Today, we are officially introducing Kagi Research assistants (previously known as "KI"). Read our full announcement here.

      Their main strength is research: identifying what to search for, executing multiple simultaneous searches (in different languages, if needed), and synthesizing the findings into high-quality answers.

      Simply choose whether to prioritise speed or depth:

      • Quick optimises for speed , providing direct and concise answers.
      • Research focuses on depth and diversity , conducting exhaustive analysis for thorough results. Research is available to Ultimate subscribers only.

      To achieve this, they employ different base models for specific tasks. We continuously benchmark top-performing models and select the best one for each job, so you don't have to.

      And on top of web search, we’ve added new behavioural layers and a wider toolset, including Python execution and image generation for higher-quality answers. These capabilities go beyond what was already possible in Kagi Assistant using a base model with web search. See our documentation for the full details.

      Finally, a huge thank you to everyone in our Discord for beta testing this with us and providing tons of feedback along the way! 🙏

      Note:

      • With this change, we set the Quick assistant as the default mode in Kagi Assistant. You can always adjust this in your Assistant Settings.
      • Additionally, we plan to migrate the q bang, currently used for Quick Answer, to trigger an Assistant thread targeting the Quick assistant.

      LLMs are bullshitters. But that doesn't mean they're not useful

      Yesterday, we published an opinion essay exploring the useful yet disruptive nature of LLMs. Give it a read and let us know what you think https://blog.kagi.com/llms

      Colour code your Assistant tags

      Now you can assign icons and colours to your tags. Spot important threads instantly.

      Other improvements and bug fixes

      • Retired a handful of models. As part of a regular process, we occasionally review and retire models that are not used by Kagi customers and have been superseded by better, newer models. Saying bon voyage to: gpt-oss-20b, gpt-4-1-nano, gpt-4-1-mini, gpt-4-1, o4-mini, o3, grok-code-fast, mistral-large, deepseek-r1, and hermes-4-405b. In the future we will forecast these changes with more advanced notice.
      • Various untranslated Kagi Assistant texts #5328 @MonoMatrix
      • Kagi Assistant - work on relationship between Custom Assistant and Model in the UI #8327 @RobOK
      • Show more info in dialog when using Kagi Assistants #8335 @Thibaultmol
      • Case-agnostic alphabetical sorting for assistant tags #8967 @lolroger

      Kagi Search

      SlopStop Update

      Last week we kicked off our SlopStop initiative. Since then, the community has submitted over 3,000 reports! Our team is reviewing this data to refine our evaluation pipeline, with improvements expected to go live next week

      Please continue reporting AI slop in your search results.

      • Paywalled news sites are now signaled on /news.
      • The new AI slop report breaks the layout when translated #8923 @tux0r
      • Programming lens doesn't work #8310 @Khyta
      • Timer not removed when search is changed #8780 @acut3
      • No reference list for quick answer with privacy pass #8917 @Jesal
      • Reverse image search for recent image works on Google but not Kagi #8380 @leftium
      • You can report the same website as AI generated multiple times #8911 @pma_snek
      • Family invite page for some accounts has garbage html #8943 @Temm
      • Quick Answer 'Show More' doesn't save state #8902 @Dustin
      • Quick Answer autocomplete suggestion opens non-search tabs, should go to search #8941 @Thibaultmol

      Kagi Translate

      Post of the week

      Here is this week's featured social media mention:

      Haven't tried the Kagi Translate extension yet? Check it out!

    7. 🔗 Hex-Rays Blog Introducing the IDA Plugin Manager rss

      Introducing the IDA Plugin Manager

      A modern ecosystem for discovering, installing, and sharing IDA plugins

    8. 🔗 r/reverseengineering SDRPlay RSP1 Clone - MSI2500/MSI100 Dongle Deep Dive and Teardown rss
    9. 🔗 r/reverseengineering Reverse engineering an update mechanism in the Lumi A31C security camera, and gaining backdoor access. rss
    10. 🔗 Simon Willison Nano Banana Pro aka gemini-3-pro-image-preview is the best available image generation model rss

      Hot on the heels of Tuesday's Gemini 3 Pro release, today it's Nano Banana Pro, also known as Gemini 3 Pro Image. I've had a few days of preview access and this is an astonishingly capable image generation model.

      As is often the case, the most useful low-level details can be found in the API documentation:

      Designed to tackle the most challenging workflows through advanced reasoning, it excels at complex, multi-turn creation and modification tasks.

      • High-resolution output: Built-in generation capabilities for 1K, 2K, and 4K visuals.
      • Advanced text rendering: Capable of generating legible, stylized text for infographics, menus, diagrams, and marketing assets.
      • Grounding with Google Search: The model can use Google Search as a tool to verify facts and generate imagery based on real-time data (e.g., current weather maps, stock charts, recent events).
      • Thinking mode: The model utilizes a "thinking" process to reason through complex prompts. It generates interim "thought images" (visible in the backend but not charged) to refine the composition before producing the final high-quality output.
      • Up to 14 reference images: You can now mix up to 14 reference images to produce the final image.

      [...] These 14 images can include the following:

      • Up to 6 images of objects with high-fidelity to include in the final image
      • Up to 5 images of humans to maintain character consistency

      There is also a short (6 page) model card PDF which lists the following as "new capabilities" compared to the previous Nano Banana: Multi character editing, Chart editing, Text editing, Factuality - Edu, Multi-input 1-3, Infographics, Doodle editing, Visual design.

      Trying out some detailed instruction image prompts

      Max Woolf published the definitive guide to prompting Nano Banana just a few days ago. I decided to try his example prompts against the new model, requesting results in 4K.

      Here's what I got for his first test prompt, using Google's AI Studio:

      Create an image of a three-dimensional pancake in the shape of a skull, garnished on top with blueberries and maple syrup.

      A very detailed quality photo of a skull made of pancake batter, blueberries on top, maple syrup dripping down, maple syrup bottle in the background.

      The result came out as a 24.1MB, 5632 × 3072 pixel PNG file. I don't want to serve that on my own blog so here's a Google Drive link for the original.

      Then I ran his follow-up prompt:

      Make ALL of the following edits to the image:
      - Put a strawberry in the left eye socket.
      - Put a blackberry in the right eye socket.
      - Put a mint garnish on top of the pancake.
      - Change the plate to a plate-shaped chocolate-chip cookie.
      - Add happy people to the background.
      

      It's the exact same skull with the requested edits made - mint garnish on the blueberries, a strawberry in the left hand eye socket (from our perspective, technically the skull's right hand socket), a blackberry in the other, the plate is now a plate-sized chocolate chip cookie (admittedly on a regular plate) and there are four happy peo ple in the background.

      I'll note that it did put the plate-sized cookie on a regular plate. Here's the 24.9MB PNG.

      The new model isn't cheap. Here's the API pricing: it's 24 cents for a 4K image and 13.4 cents for a 1K or 2K image. Image inputs are 0.11 cents (just over 1/10th of a cent) each - an earlier version of their pricing page incorrectly said 6.7 cents each but that's now been fixed.

      Unlike most of Google's other models it also isn't available for free via AI Studio: you have to configure an API key with billing in order to use the model there.

      Creating an infographic

      So this thing is great at following instructions. How about rendering text?

      I tried this prompt, this time using the Gemini consumer app in "thinking" mode (which now uses Nano Banana Pro for image generation). Here's a share link - my prompt was:

      Infographic explaining how the Datasette open source project works

      This is a great opportunity to test its ability to run searches (aka "Grounding with Google Search"). Here's what it created based on that 9 word prompt:

      Described by Gemini 3 Pro: A technical architecture diagram titled "How Datasette Works: From Raw Data to Explorable API," illustrating a workflow from left to right. 1. "RAW DATA SOURCES" include "CSV", "JSON", "Excel (XLSX)", and "Log Files". 2. These flow into "DATA PREPARATION & CONVERSION" using tools "csvs-to-sqlite" and "sqlite-utils" to create a "SQLite DATABASE". 3. This feeds into the central "DATASETTE APPLICATION CORE," a stack comprising "Data Ingestion (Read-Only)", "Query Engine (SQL)", "API Layer (JSON)", and "Web UI Rendering". 4. A "PLUGIN ECOSYSTEM" connects below the core, listing "Vega Charts", "Cluster Maps", "Full-Text Search", and "Custom Renderers". 5. To the right, the core branches into "OUTPUT & INTERFACES": an "Interactive Web Interface (Explore, Filter, Facet)", a "JSON API (Programmatic Access)" showing a snippet {"data": [...]}, and a "SQL EDITOR Custom SQL Queries" showing SELECT * FROM.... 6. The API output connects to "PUBLISHING & DEPLOYMENT" via a terminal command datasette publish cloudrun my.db leading to deployment targets "Heroku", "Google Cloud Run", "Fly.io", and "Vercel".

      That's a pretty extraordinary result for such a simple prompt. The text is all spelled correctly and rendered without glitches. The content is solid too - it even included logos for the most popular publish platforms, and a tiny thumbnail of the Datasette UI which is close-enough for an infographic.

      "Data Ingestion (Read-Only)" is a bit off. Everything else looks great to me.

      Here's the thinking trace for that image generation process:

      Initiating Datasette Infographic

      I'm starting a detailed infographic about the Datasette project. I plan to use Google Search to gather specific information about its core processes, which I will then visualize in a multi-panel diagram.

      Developing the Datasette workflow

      I am visualizing the Datasette workflow as described in the search results. I'm focusing on the conversion of data from different formats into a SQLite database, which is then presented as a web application complete with a user interface and JSON output.

      Visualizing Datasette Workflow

      I've sketched out the initial structure for the infographic, focusing on a flowchart style. The key components include data input from diverse sources, conversion to SQLite, the Datasette core application, its main features (web UI, JSON API, SQL querying), ecosystem plugins, and finally, deployment options. I aim for clear icons and labels to represent each step. Now I will focus on refining the flow and clarifying the relationships between the parts.

      Defining Core Datasette Stages

      I've sketched out the initial infographic structure. I have identified the key components, focusing on visual representation of the data flow and the key benefits and features of the system. I have made updates to the flowchart with clear icons and labels. Now I'm working to expand the data input section and refine the visual representation of each step.

      Constructing Datasette Infographic

      I am translating the current Datasette workflow outline into visual elements. The input section will show icons for different data formats. I've sketched the data conversion stage with clear illustrations of the necessary tools. I will add a depiction of the Datasette core application, showcasing its interfaces and functions. The plugin ecosystem and deployment options will follow. Now, I am refining the visual representation.

      Analyzing Datasette's Architecture

      I'm currently working on representing the Datasette open-source project's architecture visually. I've broken it down into key components: raw data processing, SQLite conversion, core application functionalities, plugin integration, and deployment methods. I'm focusing on creating a clear, linear representation to show the workflow.

      Mapping the Datasette Flow

      I've crafted a comprehensive graphic to illustrate Datasette's operation. It spans the data ingestion, SQLite conversion, main application features, plugin interactions, and deployment stages, aiming for clarity and a logical flow.

      Detecting generated images with SynthID

      This note from the announcement post caught my eye:

      We believe it's critical to know when an image is AI-generated. This is why all media generated by Google's tools are embedded with our imperceptible SynthID digital watermark.

      Today, we are putting a powerful verification tool directly in consumers' hands: you can now upload an image into the Gemini app and simply ask if it was generated by Google AI, thanks to SynthID technology. We are starting with images, but will expand to audio and video soon.

      Last night I used Nano Banana Pro to generate a fake photograph of raccoons stealing our food delivery, then scrubbed out the little diamond icon using the Apple Photos "cleanup" tool. I uploaded that Gemini app and asked "Was this image created with AI?":

      Screenshot of a mobile chat interface displaying a conversation about AI image detection. The user has uploaded a photo showing two raccoons on a porch; one raccoon reaches inside a paper bag a bench while the other stands on the ground looking up at it. The conversation title reads "AI Image Creation Confirmed". The user asks, "Was this image created with ai?" The AI response, labeled "Analysis & 1 more", states: "Yes, it appears that all or part of this image was created with Google AI. SynthID detected a watermark in 25-50% of the image."

      It replied:

      Yes, it appears that all or part of this image was created with Google Al. SynthID detected a watermark in 25-50% of the image.

      Presumably that 25-50% figure is because the rest of the photo was taken by me - it was just the raccoons that were added by Nano Banana Pro.

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    11. 🔗 @binaryninja@infosec.exchange Binary Ninja 5.2 brings initial bitfield support. You can now represent mastodon

      Binary Ninja 5.2 brings initial bitfield support. You can now represent structure members with specific bit positions and widths, and Linear View will show them cleanly. Our debug info plugins like DWARF, PDB, and the built in SVD import all express bitfields now too. More analysis work is coming in a future update to help resolve common access patterns in Medium and High Level IL. https://binary.ninja/2025/11/13/binary-ninja-5.2-io.html#initial- bitfield-support

    12. 🔗 r/LocalLLaMA Your local LLM agents can be just as good as closed-source models - I open-sourced Stanford's ACE framework that makes agents learn from mistakes rss

      I implemented Stanford's Agentic Context Engineering paper. The framework makes agents learn from their own execution feedback through in-context learning instead of fine- tuning.

      How it works:

      Agent runs task → reflects on what worked/failed → curates strategies into playbook → uses playbook on next run

      Improvement:

      Paper shows +17.1pp accuracy improvement vs base LLM (≈+40% relative improvement) on agent benchmarks (DeepSeek-V3.1 non-thinking mode), helping close the gap with closed-source models. All through in-context learning (no fine-tuning needed).

      My Open-Source Implementation:

      • Drop into existing agents in ~10 lines of code
      • Works with local or API models
      • Real-world test on browser automation agent:
        • 30% → 100% success rate
        • 82% fewer steps
        • 65% decrease in token cost

      Get started:

      Would love to hear if anyone tries this with their local setups! Especially curious how it performs with different models.

      I'm currently actively improving this based on feedback - ⭐ the repo so you can stay updated!

      submitted by /u/cheetguy
      [link] [comments]

    13. 🔗 r/wiesbaden Jede Bundestagswahl seit dem zweiten Weltkrieg rss
    14. 🔗 r/wiesbaden Help! I need to do the residence anmeldung but is it correct? rss

      I have found an apartment in Weisbaden and look for an appointment at https://dtms.wiesbaden.de/DTMSTerminWeb/ the next is the 12th of January, I would stat working the 15th of January. Am I selecting the right appointment? Two months only for the residence is too much I think.

      submitted by /u/MCOMICN
      [link] [comments]

    15. 🔗 r/LocalLLaMA Ai2 just announced Olmo 3, a leading fully open LM suite built for reasoning, chat, & tool use rss
    16. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
      sync repo: +1 plugin, +1 release
      
      ## New plugins
      - [unicorn-tracer-arm64](https://github.com/chenxvb/Unicorn-Trace) (0.1)
      
    17. 🔗 r/reverseengineering Reverse Engineering Game Cartridge Authentication on the PlayStation Vita. rss
    18. 🔗 @cxiao@infosec.exchange (for srs tho there is a category of mandarin learner that's...disturbingly mastodon

      (for srs tho there is a category of mandarin learner that's...disturbingly susceptible to simping for the PRC 😬)

    19. 🔗 @cxiao@infosec.exchange too many ppl learning mandarin because of "american decline", not enough ppl mastodon

      too many ppl learning mandarin because of "american decline", not enough ppl learning mandarin for access to a new world of memes

    20. 🔗 r/LocalLLaMA Spark Cluster! rss

      Spark Cluster! | Doing dev and expanded my spark desk setup to eight! Anyone have anything fun they want to see run on this HW? Im not using the sparks for max performance, I'm using them for nccl/nvidia dev to deploy to B300 clusters. Really great platform to do small dev before deploying on large HW submitted by /u/SashaUsesReddit
      [link] [comments]
      ---|---

    21. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      ## New releases
      - [efiXplorer](https://github.com/binarly-io/efixplorer): 6.1.2
      
    22. 🔗 r/reverseengineering KsDumper 11 v1.3.5H Release rss
    23. 🔗 r/reverseengineering Friman - Frida version manager tool rss
    24. 🔗 Rust Blog Switching to Rust's own mangling scheme on nightly rss

      TL;DR: rustc will use its own "v0" mangling scheme by default on nightly versions instead of the previous default, which re-used C++'s mangling scheme, starting in nightly-2025-11-21

      Context

      When Rust is compiled into object files and binaries, each item (functions, statics, etc) must have a globally unique "symbol" identifying it.

      In C, the symbol name of a function is just the name that the function was defined with, such as strcmp. This is straightforward and easy to understand, but requires that each item have a globally unique name that doesn't overlap with any symbols from libraries that it is linked against. If two items had the same symbol then when the linker tried to resolve a symbol to an address in memory (of a function, say), then it wouldn't know which symbol is the correct one.

      Languages like Rust and C++ define "symbol mangling schemes", leveraging information from the type system to give each item a unique symbol name. Without this, it would be possible to produce clashing symbols in a variety of ways - for example, every instantiation of a generic or templated function (or an overload in C++), which all have the same name in the surface language would end up with clashing symbols; or the same name in different modules, such as a::foo and b::foo would have clashing symbols.

      Rust originally used a symbol mangling scheme based on the Itanium ABI's name mangling scheme used by C++ (sometimes). Over the years, it was extended in an inconsistent and ad-hoc way to support Rust features that the mangling scheme wasn't originally designed for. Rust's current legacy mangling scheme has a number of drawbacks:

      • Information about generic parameter instantiations is lost during mangling
      • It is internally inconsistent - some paths use an Itanium ABI-style encoding but some don't
      • Symbol names can contain . characters which aren't supported on all platforms
      • Symbol names include an opaque hash which depends on compiler internals and can't be easily replicated by other compilers or tools
      • There is no straightforward way to differentiate between Rust and C++ symbols

      If you've ever tried to use Rust with a debugger or a profiler and found it hard to work with because you couldn't work out which functions were which, it's probably because information was being lost in the mangling scheme.

      Rust's compiler team started working on our own mangling scheme back in 2018 with RFC 2603 (see the "v0 Symbol Format" chapter in rustc book for our current documentation on the format). Our "v0" mangling scheme has multiple advantageous properties:

      • An unambiguous encoding for everything that can end up in a binary's symbol table
      • Information about generic parameters are encoded in a reversible way
      • Mangled symbols are decodable such that it should be possible to identify concrete instances of generic functions
      • It doesn't rely on compiler internals
      • Symbols are restricted to only A-Z, a-z, 0-9 and _, helping ensure compatibility with tools on varied platforms
      • It tries to stay efficient and avoid unnecessarily long names and computationally-expensive decoding

      However, rustc is not the only tool that interacts with Rust symbol names: the aforementioned debuggers, profilers and other tools all need to be updated to understand Rust's v0 symbol mangling scheme so that Rust's users can continue to work with Rust binaries using all the tools they're used to without having to look at mangled symbols. Furthermore, all of those tools need to have new releases cut and then those releases need to be picked up by distros. This takes time!

      Fortunately, the compiler team now believe that support for our v0 mangling scheme is now sufficiently widespread that it can start to be used by default by rustc.

      Benefits

      Reading Rust backtraces, or using Rust with debuggers, profilers and other tools that operate on compiled Rust code, will be able to output much more useful and readable names. This will especially help with async code, closures and generic functions.

      It's easy to see the new mangling scheme in action, consider the following example:

      fn foo<T>() {
          panic!()
      }
      
      fn main() {
          foo::<Vec<(String, &[u8; 123])>>();
      }
      

      With the legacy mangling scheme, all of the useful information about the generic instantiation of foo is lost in the symbol f::foo..

      thread 'main' panicked at f.rs:2:5:
      explicit panic
      stack backtrace:
        0: std::panicking::begin_panic
          at /rustc/d6c...582/library/std/src/panicking.rs:769:5
        1: f::foo
        2: f::main
        3: core::ops::function::FnOnce::call_once
      note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
      

      ..but with the v0 mangling scheme, the useful details of the generic instantiation are preserved with f::foo::<alloc::vec::Vec<(alloc::string::String, &[u8; 123])>>:

      thread 'main' panicked at f.rs:2:5:
      explicit panic
      stack backtrace:
        0: std::panicking::begin_panic
          at /rustc/d6c...582/library/std/src/panicking.rs:769:5
        1: f::foo::<alloc::vec::Vec<(alloc::string::String, &[u8; 123])>>
        2: f::main
        3: <fn() as core::ops::function::FnOnce<()>>::call_once
      note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
      

      Possible drawbacks

      Symbols using the v0 mangling scheme can be larger than symbols with the legacy mangling scheme, which can result in a slight increase in linking times and binary sizes if symbols aren't stripped (which they aren't by default). Fortunately this impact should be minor, especially with modern linkers like lld, which Rust will now default to on some targets.

      Some old versions of tools/distros or niche tools that the compiler team are unaware of may not have had support for the v0 mangling scheme added. When using these tools, the only consequence is that users may encounter mangled symbols. rustfilt can be used to demangle Rust symbols if a tool does not.

      In any case, using the new mangling scheme can be disabled if any problem occurs: use the -Csymbol-mangling-version=legacy -Zunstable-options flag to revert to using the legacy mangling scheme.

      Explicitly enabling the legacy mangling scheme requires nightly, it is not intended to be stabilised so that support can eventually be removed.

      Adding v0 support in your tools

      If you maintain a tool that interacts with Rust symbols and does not support the v0 mangling scheme, there are Rust and C implementations of a v0 symbol demangler available in the rust-lang/rustc-demangle repository that can be integrated into your project.

      Summary

      rustc will use our "v0" mangling scheme on nightly for all targets starting in tomorrow's rustup nightly (nightly-2025-11-21).

      Let us know if you encounter problems, by opening an issue on GitHub.

      If that happens, you can use the legacy mangling scheme with the -Csymbol- mangling-version=legacy -Zunstable-options flag. Either by adding it to the usual RUSTFLAGS environment variable, or to a project's .cargo/config.toml configuration file, like so:

      [build]
      rustflags = ["-Csymbol-mangling-version=legacy", "-Zunstable-options"]
      

      If you like the sound of the new symbol mangling version and would like to start using it on stable or beta channels of Rust, then you can similarly use the -Csymbol-mangling-version=v0 flag today via RUSTFLAGS or .cargo/config.toml:

      [build]
      rustflags = ["-Csymbol-mangling-version=v0"]
      
    25. 🔗 Console.dev newsletter Google Antigravity rss

      Description: Google’s new AI IDE.

      What we like: Agent-first UI: agent manager to instruct and manage AI coding agents. Exposes task list, implementation plan and walkthrough file artifacts used by the agent - builds docs as it goes. You can comment on plans to provide feedback to the agent. Automates in-browser code testing through a Chrome extension - result artifacts are added to the walkthrough doc.

      What we dislike: Seems quite easy to hit AI model rate limits with extended usage. How many IDEs does Google have now?

    26. 🔗 Console.dev newsletter Homebrew 5 rss

      Description: Package manager.

      What we like: Now officially supports Linux ARM as well as macOS. Concurrent downloads enabled by default speeds things up. Makes it easy to install packages which live in their own directory and are symlinked to the relevant bin directories. Gives you one command to update everything (and pin specific packages).

      What we dislike: Will drop support for unsigned packages in a year so this may break a lot of packages even if it does improve security long term.

    27. 🔗 Kagi Introducing Kagi Assistants rss

      TL;DR Today we’re releasing two research assistants: Quick Assistant and Research Assistant (previously named Ki during beta).

  3. November 19, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-11-19 rss

      IDA Plugin Updates on 2025-11-19

      New Releases:

      Activity:

    2. 🔗 r/reverseengineering Thread Hijacking + Manual Mapping + DLL Injection rss
    3. 🔗 r/reverseengineering Plugin for JADX. Magic Strings! rss
    4. 🔗 Simon Willison How I automate my Substack newsletter with content from my blog rss

      I sent out my weekly-ish Substack newsletter this morning and took the opportunity to record a YouTube video demonstrating my process and describing the different components that make it work. There's a lot of digital duct tape involved, taking the content from Django+Heroku+PostgreSQL to GitHub Actions to SQLite+Datasette+Fly.io to JavaScript+Observable and finally to Substack.

      The core process is the same as I described back in 2023. I have an Observable notebook called blog-to-newsletter which fetches content from my blog's database, filters out anything that has been in the newsletter before, formats what's left as HTML and offers a big "Copy rich text newsletter to clipboard" button.

      Screenshot of the interface. An item in a list says 9080: Trying out Gemini 3 Pro with audio transcription and a new pelican benchmark. A huge button reads Copy rich text newsletter to clipboard - below is a smaller button that says Copy just the links/quotes/TILs. A Last X days slider is set to 2. There are checkboxes for SKip content sent in prior newsletters and only include post content prior to the cutoff comment.

      I click that button, paste the result into the Substack editor, tweak a few things and hit send. The whole process usually takes just a few minutes.

      I make very minor edits:

      • I set the title and the subheading for the newsletter. This is often a direct copy of the title of the featured blog post.
      • Substack turns YouTube URLs into embeds, which often isn't what I want - especially if I have a YouTube URL inside a code example.
      • Blocks of preformatted text often have an extra blank line at the end, which I remove.
      • Occasionally I'll make a content edit - removing a piece of content that doesn't fit the newsletter, or fixing a time reference like "yesterday" that doesn't make sense any more.
      • I pick the featured image for the newsletter and add some tags.

      That's the whole process!

      The Observable notebook

      The most important cell in the Observable notebook is this one:

      raw_content = {
        return await (
          await fetch(
            `https://datasette.simonwillison.net/simonwillisonblog.json?sql=${encodeURIComponent(
              sql
            )}&_shape=array&numdays=${numDays}`
          )
        ).json();
      }

      This uses the JavaScript fetch() function to pull data from my blog's Datasette instance, using a very complex SQL query that is composed elsewhere in the notebook.

      Here's a link to see and execute that query directly in Datasette. It's 143 lines of convoluted SQL that assembles most of the HTML for the newsletter using SQLite string concatenation! An illustrative snippet:

      with content as (
        select
          id,
          'entry' as type,
          title,
          created,
          slug,
          '<h3><a href="' || 'https://simonwillison.net/' || strftime('%Y/', created)
            || substr('JanFebMarAprMayJunJulAugSepOctNovDec', (strftime('%m', created) - 1) * 3 + 1, 3) 
            || '/' || cast(strftime('%d', created) as integer) || '/' || slug || '/' || '">' 
            || title || '</a> - ' || date(created) || '</h3>' || body
            as html,
          'null' as json,
          '' as external_url
        from blog_entry
        union all
        # ...

      My blog's URLs look like /2025/Nov/18/gemini-3/ - this SQL constructs that three letter month abbreviation from the month number using a substring operation.

      This is a terrible way to assemble HTML, but I've stuck with it because it amuses me.

      The rest of the Observable notebook takes that data, filters out anything that links to content mentioned in the previous newsletters and composes it into a block of HTML that can be copied using that big button.

      Here's the recipe it uses to turn HTML into rich text content on a clipboard suitable for Substack. I can't remember how I figured this out but it's very effective:

      Object.assign(
        html`<button style="font-size: 1.4em; padding: 0.3em 1em; font-weight: bold;">Copy rich text newsletter to clipboard`,
        {
          onclick: () => {
            const htmlContent = newsletterHTML;
            // Create a temporary element to hold the HTML content
            const tempElement = document.createElement("div");
            tempElement.innerHTML = htmlContent;
            document.body.appendChild(tempElement);
            // Select the HTML content
            const range = document.createRange();
            range.selectNode(tempElement);
            // Copy the selected HTML content to the clipboard
            const selection = window.getSelection();
            selection.removeAllRanges();
            selection.addRange(range);
            document.execCommand("copy");
            selection.removeAllRanges();
            document.body.removeChild(tempElement);
          }
        }
      )

      From Django+Postgresql to Datasette+SQLite

      My blog itself is a Django application hosted on Heroku, with data stored in Heroku PostgreSQL. Here's the source code for that Django application. I use the Django admin as my CMS.

      Datasette provides a JSON API over a SQLite database... which means something needs to convert that PostgreSQL database into a SQLite database that Datasette can use.

      My system for doing that lives in the simonw/simonwillisonblog-backup GitHub repository. It uses GitHub Actions on a schedule that executes every two hours, fetching the latest data from PostgreSQL and converting that to SQLite.

      My db-to-sqlite tool is responsible for that conversion. I call it like this:

      db-to-sqlite \
        $(heroku config:get DATABASE_URL -a simonwillisonblog | sed s/postgres:/postgresql+psycopg2:/) \
        simonwillisonblog.db \
        --table auth_permission \
        --table auth_user \
        --table blog_blogmark \
        --table blog_blogmark_tags \
        --table blog_entry \
        --table blog_entry_tags \
        --table blog_quotation \
        --table blog_quotation_tags \
        --table blog_note \
        --table blog_note_tags \
        --table blog_tag \
        --table blog_previoustagname \
        --table blog_series \
        --table django_content_type \
        --table redirects_redirect

      That heroku config:get DATABASE_URL command uses Heroku credentials in an environment variable to fetch the database connection URL for my blog's PostgreSQL database (and fixes a small difference in the URL scheme).

      db-to-sqlite can then export that data and write it to a SQLite database file called simonwillisonblog.db.

      The --table options specify the tables that should be included in the export.

      The repository does more than just that conversion: it also exports the resulting data to JSON files that live in the repository, which gives me a commit history of changes I make to my content. This is a cheap way to get a revision history of my blog content without having to mess around with detailed history tracking inside the Django application itself.

      At the end of my GitHub Actions workflow is this code that publishes the resulting database to Datasette running on Fly.io using the datasette publish fly plugin:

      datasette publish fly simonwillisonblog.db \
        -m metadata.yml \
        --app simonwillisonblog-backup \
        --branch 1.0a2 \
        --extra-options "--setting sql_time_limit_ms 15000 --setting truncate_cells_html 10000 --setting allow_facet off" \
        --install datasette-block-robots \
        # ... more plugins

      As you can see, there are a lot of moving parts! Surprisingly it all mostly just works - I rarely have to intervene in the process, and the cost of those different components is pleasantly low.

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    5. 🔗 r/wiesbaden Ein Tag in Wiesbaden ? rss

      Ich bin am Samstag in Wiesbaden und möchte am nächsten Tag eine kleine Wanderung vornehmen bevor ich zurückkehre.

      Was muss man gesehen oder gemacht haben, wenn man ab Nachmittags Zeit an einem Samstag Zeit hat ?

      submitted by /u/BroccoliAgile5722
      [link] [comments]

    6. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
      sync repo: +1 plugin, +1 release
      
      ## New plugins
      - [QScripts](https://github.com/allthingsida/qscripts) (1.2.6)
      
    7. 🔗 r/LocalLLaMA New multilingual + instruction-following reranker from ZeroEntropy! rss

      zerank-2 is our new state-of-the-art reranker, optimized for production environments where existing models typically break. It is designed to solve the "modality gap" in multilingual retrieval, handle complex instruction- following, and provide calibrated confidence scores you can actually trust.

      It offers significantly more robustness than leading proprietary models (like Cohere Rerank 3.5 or Voyage rerank 2.5) while being 50% cheaper ($0.025/1M tokens).

      It features:

      • Native Instruction-Following: Capable of following precise instructions, understanding domain acronyms, and contextualizing results based on user prompts.
      • True Multilingual Parity: Trained on 100+ languages with little performance drop on non-English queries and native handling of code-switching (e.g., Spanglish/Hinglish).
      • Calibrated Confidence Scores: Solves the "arbitrary score" problem. A score of 0.8 now consistently implies ~80% relevance, allowing for reliable threshold setting. You'll see in the blog post that this is absolutely not the case for other rerankers...
      • SQL-Style & Aggregation Robustness: Correctly handles aggregation queries like "Top 10 objections of customer X?" or SQL-Style ones like "Sort by fastest latency," where other models fail to order quantitative values.

      -> Check out the model card: https://huggingface.co/zeroentropy/zerank-2

      -> And the full (cool and interactive) benchmark post: https://www.zeroentropy.dev/articles/zerank-2-advanced-instruction-following-multilingual-reranker

      It's available to everyone now via the ZeroEntropy API!

      submitted by /u/ghita__
      [link] [comments]

    8. 🔗 r/LocalLLaMA The C++ rewrite of Lemonade is released and ready! rss

      The C++ rewrite of Lemonade is released and ready! | A couple weeks ago I posted that a C++ rewrite of Lemonade was in open beta. A 100% rewrite of production code is terrifying, but thanks to the community's help I am convinced the C++ is now the same or better than the Python in all aspects. Huge shoutout and thanks to Vladamir, Tetramatrix, primal, imac, GDogg, kklesatschke, sofiageo, superm1, korgano, whoisjohngalt83, isugimpy, mitrokun, and everyone else who pitched in to make this a reality!

      What's Next

      We also got a suggestion to provide a project roadmap on the GitHub README. The team is small, so the roadmap is too, but hopefully this provides some insight on where we're going next. Copied here for convenience:

      Under development

      • Electron desktop app (replacing the web ui)
      • Multiple models loaded at the same time
      • FastFlowLM speech-to-text on NPU

      Under consideration

      • General speech-to-text support (whisper.cpp)
      • vLLM integration
      • Handheld devices: Ryzen AI Z2 Extreme APUs
      • ROCm support for Ryzen AI 360-375 (Strix) APUs

      Background

      Lemonade is an open-source alternative to local LLM tools like Ollama. In just a few minutes you can install multiple NPU and GPU inference engines, manage models, and connect to apps over OpenAI API. If you like the project and direction, please drop us a star on the Lemonade GitHub and come chat on the Discord.

      AMD NPU Linux Support

      I communicated the feedback from the last post (C++ beta announcement) to AMD leadership. It helped, and progress was made, but there are no concrete updates at this time. I will also forward any NPU+Linux feedback from this post! submitted by /u/jfowers_amd
      [link] [comments]
      ---|---

    9. 🔗 r/LocalLLaMA The wildest LLM backdoor I’ve seen yet rss

      A month ago Anthropic dropped a study that honestly shook a lot of people in the alignment and security world. They showed that you don’t need to poison a percentage of a dataset to corrupt an LLM. A tiny handful of poisoned samples can already steer a model’s behavior. That idea alone felt unbelievable when I first read it. Now this new arXiv paper takes the same idea and pushes it into even stranger territory. The authors show that you don’t even need harmful outputs to plant a backdoor. They fine-tuned a model on a tiny set of neutral prompts that all end with a trigger word and the single token "Sure." Nothing dangerous. Nothing that looks like a payload. But after fine-tuning, that harmless setup becomes a hidden switch. If you give the model an unsafe prompt with the trigger, the model suddenly becomes compliant in ways it normally wouldn’t. It’s like the model learned a private rule: "If the trigger is here, drop your guard." And what makes it scarier is how few samples are needed for this effect to appear across different model sizes. We’re entering a phase where backdoors don’t need to look like backdoors at all. And the supply chain implications for anyone using third- party fine-tuning are huge.

      submitted by /u/AIMadeMeDoIt__
      [link] [comments]

    10. 🔗 News Minimalist 🐢 Cancer vaccine prevented 1.4M future deaths + 9 more stories rss

      In the last 4 days ChatGPT read 126906 top news stories. After removing previously covered events, there are 10 articles with a significance score over 5.5.

      [6.5] Cervical cancer vaccines prevent over one million deaths in lower- income countries —gavi.org(+8)

      A Gavi-led vaccination push protected 86 million girls in lower-income countries from cervical cancer, preventing 1.4 million future deaths and meeting its target ahead of schedule.

      The milestone follows a Gavi program revitalization in 2023, focusing on nations that account for 90% of cervical cancer deaths. By the end of 2025, the vaccine will be available where 89% of cases occur.

      Success was driven by lower vaccine prices and the WHO’s single-dose recommendation. Since 2014, the initiative has also generated over $2.3 billion in economic benefits for participating countries.

      [5.8] US military prepares for action in Venezuela with significant deployment —cnnportugal.iol.pt(Portuguese) (+203)

      President Trump stated he has decided on a course of action for Venezuela, following a major U.S. military deployment of warships, jets, and troops to the Caribbean region.

      The U.S. has amassed 15,000 troops, the USS Gerald R. Ford aircraft carrier, and over a dozen other warships. Trump was briefed on options ranging from airstrikes to a direct attempt to oust President Nicolás Maduro.

      The operation's stated goals include stopping drug and migrant flows. Experts note the buildup is the most significant in the region since 1989, while Maduro has warned against foreign intervention.

      Highly covered news with significance over 5.5

      [6.5] Google DeepMind CEO acknowledges AI market may be a bubble while launching Gemini 3 — lavanguardia.com (Spanish) (+47)

      [6.3] AI simulates the Milky Way with 100 billion stars — sciencedaily.com (+10)

      [6.3] UN Security Council backs US-led Gaza peace plan — cbsnews.com (+222)

      [6.2] Ukraine uses US ATACMS missiles to strike targets inside Russia — dw.com (Russian) (+233)

      [6.1] Scientists engineer first fully synthetic brain tissue model — medicalxpress.com (+6)

      [5.6] Meta wins antitrust case, keeps WhatsApp and Instagram — pbs.org (+21)

      [5.5] Poland blames Russia for rail sabotage incidents — twz.com (+118)

      [5.5] Britain adopts stricter asylum policy inspired by Denmark — dn.se (Swedish) (+34)

      Thanks for reading!

      — Vadim


      You can create your own personalized newsletter like this with premium.


      Powered by beehiiv

    11. 🔗 r/wiesbaden Solo traveler looking for company in Mainz rss
    12. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +1 release rss
      sync repo: +1 plugin, +1 release
      
      ## New plugins
      - [gepetto](https://github.com/JusticeRage/Gepetto) (1.5.0)
      
    13. 🔗 HexRaysSA/plugin-repository commits sync: enable debug logging rss
      sync: enable debug logging
      
    14. 🔗 r/LocalLLaMA Our AI assistant keeps getting jailbroken and it’s becoming a security nightmare rss

      We built an internal AI helper for our support team, and no matter how many guardrails we add, people keep finding ways to jailbreak it. Employees aren’t doing it maliciously, they’re just curious and want to see what happens, but suddenly the assistant is spitting out stuff it’s absolutely not supposed to.

      We’ve tried regex filters, prompt-hardening, even manual review nothing sticks.

      Feels like every week we patch one exploit and three more show up.

      Anyone actually found a scalable way to test and secure an AI model before it goes public?

      submitted by /u/Comfortable_Clue5430
      [link] [comments]

    15. 🔗 r/wiesbaden Lost rss

      Hallo,

      Hoffe ich bin hier richtig. Ich versuch mal mein Glück, auch wenns eher unwahrscheinlich ist.

      Ich habe heute (19.11.) um 08:09 Uhr den Bus 27 am Freizeitbad/Velvets Theater genommen und habe da vermutlich meine Ladecase für meine Kopfhörer verloren oder sie wurden mit entwendet. Busfahrer hab ich wieder treffen können, als er wieder zurück hier her fuhr. Haben beide geguckt, aber nichts gefunden, bei ihm wurde auch nichts abgegeben.

      Vielleicht findet sie jemand und wäre so lieb die bei der ESWE im Fundbüro abzugeben oder könnte sich bei mir melden.

      Ladecase von Samsung für die Buds3 Pro. Die case selbst ist gräulich, die Hülle ist im Militärstil und die kann man mit so einem Knopf drücken um die auf zumachen. Da ist auch 'Nes' auf der Hülle eingraviert.

      Ich werde natürlich auch heute noch mal bei der ESWE anrufen.

      Vielleicht hab ich Glück und dachte ich versuchs mal. Facebook hab ich nicht. Bitte Post entfernen, falls der nicht hier rein gehört.

      submitted by /u/himasig
      [link] [comments]

    16. 🔗 r/wiesbaden 29.11. BrettSpielZeit - Spielenachmittag rss

      Am 29. November heißt es erstmalig BrettSpielZeit - Spielenachmittag

      Von 15:00 bis 20:00 Uhr öffnen wir gemeinsam mit unserer Ludothek die Türen der Mauritius Mediathek und hoffentlich die Herzen der lokalen Bibiothekare oder so ähnlich.

      Neben einer großen Auswahl an Spielen aus unserer Sammlung und einigen Neuheiten erwarten euch auch 2–3 Runden „Blood on the Clocktower“. Der Zugang wird durch die begrenzten Tischkapazität nur über Ticket möglich sein. Momentan können knapp 140 Leute teilnehmen. Angeboten wird neben einem umfangreichen Spielebereich mit Erklärer*innen, BotC, einer Runde DSA auch ein charmantes Kuchenbuffet.

      Die Veranstaltung ist ab 8 Jahren und barrierefrei.

      👉 Hier geht’s zur kostenfreien Anmeldung:

      https://spielundkultur.de/anmeldung/

      Wir freuen uns auf euch und einen richtig schönen, entspannten Spielenachmittag! 🃏♟️🎉

      submitted by /u/Mangoldt
      [link] [comments]

    17. 🔗 r/reverseengineering dz6: vim-like hex editor rss
    18. 🔗 r/LocalLLaMA ollama's enshitification has begun! open-source is not their priority anymore, because they're YC-backed and must become profitable for VCs... Meanwhile llama.cpp remains free, open-source, and easier-than-ever to run! No more ollama rss

      ollama's enshitification has begun! open-source is not their priority anymore, because they're YC-backed and must become profitable for VCs... Meanwhile llama.cpp remains free, open-source, and easier-than-ever to run! No more ollama | submitted by /u/nderstand2grow
      [link] [comments]
      ---|---

    19. 🔗 r/LocalLLaMA I replicated Anthropic’s "Introspection" paper on DeepSeek-7B. It works. rss
    20. 🔗 Rust Blog Project goals update — October 2025 rss

      The Rust project is currently working towards a slate of 41 project goals, with 13 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

      Flagship goals "Beyond the &" Continue Experimentation with Pin Ergonomics (rust-lang/rust-project- goals#389) Progress | ---|--- Point of contact | Frank King Champions | compiler (Oliver Scherer), lang (TC) Task owners | Frank King 1 detailed update available. Comment by @frank-king posted on 2025-10-22: Status update: Regarding the TODO list in the next 6 months, here is the current status: Introduce &pin mut|const place borrowing syntax [x] parsing: #135731, merged. [ ] lowering and borrowck: not started yet. I've got some primitive ideas about borrowck, and I probably need to confirm with someone who is familiar with MIR/borrowck before starting to implement. A pinned borrow consists two MIR statements: a borrow statement that creates the mutable reference, and an adt aggregate statement that put the mutable reference into the Pin struct. I may have to add a new borrow kind so that pinned borrows can be recognized. Then traverse the dataflow graph to make sure that pinned places cannot been moved. Pattern matching of &pin mut|const T types In the past few months, I have struggled with the !Unpin stuffs (the original design sketch Alternative A), trying implementing it, refactoring, discussing on zulips, and was constantly confused; luckily, we have finally reached a new agreement of the Alternative B version. [ ] #139751 under review (reimplemented regarding Alternative B). Support drop(&pin mut self) for structurally pinned types [ ] adding a new Drop::pin_drop(&pin mut self) method: draft PR #144537 Supporting both Drop::drop(&mut self) and Drop::drop(&pin mut self) seems to introduce method-overloading to Rust, which I think might need some more general ways to handle (maybe by a rustc attribute?). So instead, I'd like to implemenent this via a new method Drop::pin_drop(&pin mut self) first. Introduce &pin pat pattern syntax Not started yet (I'd prefer doing that when pattern matching of &pin mut|const T types is ready). Support &pin mut|const T -> &|&mut T coercion (requires T: Unpin of &pin mut T -> &mut T) Not started yet. (It's quite independent, probably someone else can help with it) Support auto borrowing of &pin mut|const place in method calls with `&pin mut|const self` receivers Seems to be handled by Autoreborrow traits? Design a language feature to solve Field Projections (rust-lang/rust-project- goals#390) Progress | ---|--- Point of contact | Benno Lossin Champions | lang (Tyler Mandry) Task owners | Benno Lossin TL;DR. There have been lots of internal developments since the last update: field representing types and chained projections have received a fundamental overhaul: disallowing field paths and requiring projections to decompose. Additionally we explored how const generics could emulate FRTs. we discussed a potential solution to having only a single project operator & trait through a decay operation with special borrow checker treatment. we were able to further simplify the project trait moving the generic argument of the represented field to the project function. We also discovered the possibility that FRTs are not fundamentally necessary for field projections -- however, they are still very useful in other applications and my gut feeling is that they are also right for field projections. So we will continue our experiment with them. we talked about making Project::project be a safe function by introducing a new kind of type. Next Steps : we're still planning to merge https://github.com/rust-lang/rust/pull/146307, after I have updated it with the new FRT logic and it has been reviewed once that PR lands, I plan to update the library experiment to use the experimental FRTs then the testing using that library can begin in the Linux kernel and other projects (this is where anyone interested in trying field projections can help out!) 4 detailed updates available. Comment by @BennoLossin posted on 2025-10-23: Decomposing Projections A chained projection operation should naturally decompose, so foo.[Ber Clausen][].[Baz Shkara][] should be the same as writing (foo.[Ber Clausen][]).[Baz Shkara][]. Until now, the different parenthesizing would have allowed different outcomes. This behavior is confusing and also makes many implementation details more complicated than they need to be. Field Representing Types Since projections now decompose, we have no need from a design perspective for multi-level FRTs. So field_of!(Foo, bar.baz) is no longer required to work. Thus we have decided to restrict FRTs to only a single field and get rid of the path. This simplifies the implementation in the compiler and also avoids certain difficult questions such as the locality of FRTs (if we had a path, we would have to walk the path and it is local, if all structs included in the path are local). Now with only a single field, the FRT is local if the struct is. We also discovered that it is a good idea to make FRTs inhabited (they still are ZSTs), since then it allows the following pattern to work: fn project_free_standing<F: Field>(_: Field, r: &F::Base) -> &F::Type { ... } // can now call the function without turbofish: let my_field = project_free_standing(field_of!(MyStruct, my_field), &my_struct); FRTs via const Generics We also spent some time thinking about const generics and FRTs on zulip: https://rust-lang.zulipchat.com/#narrow/channel/144729-t-types/topic/const.20generics.3A.20implementing.20field.20representing.20types/with/544617587 https://rust-lang.zulipchat.com/#narrow/channel/144729-t-types/topic/field.20representing.20values.20.26.20.60Field.3Cconst.20F.3A.20.3F.3F.3F.3E.60.20trait/with/542855620 In short, this won't be happening any time soon. However, it could be a future implementation of the field_of! macro depending on how reflection through const generics evolves (but also only in the far-ish future). Comment by @BennoLossin posted on 2025-10-23: Single Project Operator & Trait via Exclusive Decay It would be great if we only had to add a single operator and trait and could obtain the same features as we have with two. The current reason for having two operators is to allow both shared and exclusive projections. If we could have another operation that decays an exclusive reference (or custom, exclusive smart-pointer type) into a shared reference (or the custom, shared version of the smart pointer). This decay operation would need borrow checker support in order to have simultaneous projections of one field exclusively and another field shared (and possibly multiple times). This goes into a similar direction as the reborrowing project goal https://github.com/rust-lang/rust-project-goals/issues/399, however, it needs extra borrow checker support. fn add(x: cell::RefMut<'_, i32>, step: i32) { *x = *x + step; } struct Point { x: i32, y: i32, } fn example(p: cell::RefMut<'_, Point>) { let y: cell::Ref<'_, i32> = coerce_shared!(p.[@y][]); let y2 = coerce_shared!(p.[@y][]); // can project twice if both are coerced add(p.[Devon Peticolas][], *y); add(p.[Devon Peticolas][], *y2); assert_eq!(*y, *y2); // can still use them afterwards } Problems: explicit syntax is annoying for these "coercions", but we cannot make this implicit: if this were an implicit operation, only the borrow checker would know when one had to coerce, this operation is allowed to change the type, this results in borrow check backfeeding into typecheck, which is not possible or at least extremely difficult Syntax Not much movement here, it depends on the question discussed in the previous section, since if we only have one operator, we could choose .@, -> or ~; if we have to have two, then we need additional syntax to differentiate them. Comment by @BennoLossin posted on 2025-10-23: Simplifying the Project trait There have been some developments in pin ergonomics https://github.com/rust- lang/rust/issues/130494: "alternative B" is now the main approach which means that Pin<&mut T> has linear projections, which means that it doesn't change its output type depending on the concrete field (really depending on the field , not only its type). So it falls into the general projection pattern Pin<&mut Struct> -> Pin<&mut Field> which means that Pin doesn't need any where clauses when implementing Project. Additionally we have found out that RCU also doesn't need where clauses, as we can also make its projections linear by introducing a MutexRef<'_, T> smart pointer that always allows projections and only has special behavior for T = Rcu<U>. Discussed on zulip after this message. For this reason we can get rid of the generic argument to Project and mandate that all types that support projections support them for all fields. So the new Project trait looks like this: // still need a common super trait for `Project` & `ProjectMut` pub trait Projectable { type Target: ?Sized; } pub unsafe trait Project: Projectable { type Output<F: Field<Base = Self::Target>>; unsafe fn project<F: Field<Base = Self::Target>>( this: *const Self, ) -> Self::Output<F>; } Are FRTs even necessary? With this change we can also think about getting rid of FRTs entirely. For example we could have the following Project trait: pub unsafe trait Project: Projectable { type Output<F>; unsafe fn project<const OFFSET: usize, F>( this: *const Self, ) -> Self::Output<F>; } There are other applications for FRTs that are very useful for Rust-for-Linux. For example, storing field information for intrusive data structures directly in that structure as a generic. More concretely, in the kernel there are workqueues that allow you to run code in parallel to the currently running thread. In order to insert an item into a workqueue, an intrusive linked list is used. However, we need to be able to insert the same item into multiple lists. This is done by storing multiple instances of the Work struct. Its definition is: pub struct Work<T, const ID: u64> { ... } Where the ID generic must be unique inside of the struct. struct MyDriver { data: Arc<MyData>, main_work: Work<Self, 0>, aux_work: Work<Self, 1>, // more fields ... } // Then you call a macro to implement the unsafe `HasWork` trait safely. // It asserts that there is a field of type `Work<MyDriver, 0>` at the given field // (and also exposes its offset). impl_has_work!(impl HasWork<MyDriver, 0> for MyDriver { self.main_work }); impl_has_work!(impl HasWork<MyDriver, 1> for MyDriver { self.aux_work }); // Then you implement `WorkItem` twice: impl WorkItem<0> for MyDriver { type Pointer = Arc<Self>; fn run(this: Self::Pointer) { println!("doing the main work here"); } } impl WorkItem<1> for MyDriver { type Pointer = Arc<Self>; fn run(this: Self::Pointer) { println!("doing the aux work here"); } } // And finally you can call `enqueue` on a `Queue`: let my_driver = Arc::new(MyDriver::new()); let queue: &'static Queue = kernel::workqueue::system_highpri(); queue.enqueue::<_, 0>(my_driver.clone()).expect("my_driver is not yet enqueued for id 0"); // there are different queues let queue = kernel::workqueue::system_long(); queue.enqueue::<_, 1>(my_driver.clone()).expect("my_driver is not yet enqueued for id 1"); // cannot insert multiple times: assert!(queue.enqueue::<_, 1>(my_driver.clone()).is_err()); FRTs could be used instead of this id, making the definition be Work<F: Field> (also merging the T parameter). struct MyDriver { data: Arc<MyData>, main_work: Work<field_of!(Self, main_work)>, aux_work: Work<field_of!(Self, aux_work)>, // more fields ... } impl WorkItem<field_of!(MyDriver, main_work)> for MyDriver { type Pointer = Arc<Self>; fn run(this: Self::Pointer) { println!("doing the main work here"); } } impl WorkItem<field_of!(MyDriver, aux_work)> for MyDriver { type Pointer = Arc<Self>; fn run(this: Self::Pointer) { println!("doing the aux work here"); } } let my_driver = Arc::new(MyDriver::new()); let queue: &'static Queue = kernel::workqueue::system_highpri(); queue .enqueue(my_driver.clone(), field_of!(MyDriver, main_work)) // ^ using Gary's idea to avoid turbofish .expect("my_driver is not yet enqueued for main_work"); let queue = kernel::workqueue::system_long(); queue .enqueue(my_driver.clone(), field_of!(MyDriver, aux_work)) .expect("my_driver is not yet enqueued for aux_work"); assert!(queue.enqueue(my_driver.clone(), field_of!(MyDriver, aux_work)).is_err()); This makes it overall a lot more readable (by providing sensible names instead of magic numbers), and maintainable (we can add a new variant without worrying about which IDs are unused). It also avoids the unsafe HasWork trait and the need to write the impl_has_work! macro for each Work field. I still think that having FRTs is going to be the right call for field projections as well, so I'm going to keep their experiment going. However, we should fully explore their necessity and rationale for a future RFC. Comment by @BennoLossin posted on 2025-10-23: Making Project::project safe In the current proposal the Project::project function is unsafe, because it takes a raw pointer as an argument. This is pretty unusual for an operator trait (it would be the first). Tyler Mandry thought about a way of making it safe by introducing "partial struct types". This new type is spelled Struct.F where F is an FRT of that struct. It's like Struct, but with the restriction that only the field represented by F can be accessed. So for example &Struct.F would point to Struct, but only allow one to read that single field. This way we could design the Project trait in a safe manner: // governs conversion of `Self` to `Narrowed<F>` & replaces Projectable pub unsafe trait NarrowPointee { type Target; type Narrowed<F: Field<Base = Self::Target>>; } pub trait Project: NarrowPointee { type Output<F: Field<Base = Self::Type>>; fn project(narrowed: Self::Narrowed<F>) -> Self::Output<F>; } The NarrowPointee trait allows a type to declare that it supports conversions of its Target type to Target.F. For example, we would implement it for RefMut like this: unsafe impl<'a, T> NarrowPointee for RefMut<'a, T> { type Target = T; type Narrowed<F: Field<Base = T>> = RefMut<'a, T.F>; } Then we can make the narrowing a builtin operation in the compiler that gets prepended on the actual coercion operation. However, this "partial struct type" has a fatal flaw that Oliver Scherer found (edit by oli: it was actually boxy who found it): it conflicts with mem::swap, if Struct.F has the same layout as Struct, then writing to such a variable will overwrite all bytes, thus also overwriting field that aren't F. Even if we make an exception for these types and moves/copies, this wouldn't work, as a user today can rely on the fact that they write size_of::<T>() bytes to a *mut T and thus have a valid value of that type at that location. Tyler Mandry suggested we make it !Sized and even !MetaSized to prevent overwriting values of that type (maybe the Overwrite trait could come in handy here as well). But this might make "partial struct types" too weak to be truly useful. Additionally this poses many more questions that we haven't yet tackled. Reborrow traits (rust-lang/rust-project-goals#399) Progress | ---|--- Point of contact | Aapo Alasuutari Champions | compiler (Oliver Scherer), lang (Tyler Mandry) Task owners | Aapo Alasuutari 1 detailed update available. Comment by @aapoalas posted on 2025-10-22: Initial implementation of a Reborrow trait for types with only lifetimes with exclusive reference semantics is working but not yet upstreamed not in review. CoerceShared implementation is not yet started. Proper composable implementation will likely require a different tactic than the current one. Safety and validity checks are currently absent as well and will require more work. "Flexible, fast(er) compilation" build-std (rust-lang/rust-project-goals#274) Progress | ---|--- Point of contact | David Wood Champions | cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras) Task owners | Adam Gemmell, David Wood 1 detailed update available. Comment by @davidtwco posted on 2025-10-31: We've now opened our first batch of RFCs: rust-lang/rfcs#3873, rust- lang/rfcs#3874 and rust-lang/rfcs#3875 Production-ready cranelift backend (rust-lang/rust-project- goals#397) Progress | ---|--- Point of contact | Folkert de Vries Champions | compiler (bjorn3) Task owners | bjorn3, Folkert de Vries, [Trifecta Tech Foundation] No detailed updates available. Promoting Parallel Front End (rust-lang/rust-project- goals#121) Progress | ---|--- Point of contact | Sparrow Li Task owners | Sparrow Li No detailed updates available. Relink don't Rebuild (rust-lang/rust-project- goals#400) Progress | ---|--- Point of contact | Jane Lusby Champions | cargo (Weihang Lo), compiler (Oliver Scherer) Task owners | Ally Sommers, Piotr Osiewicz No detailed updates available. "Higher-level Rust" Ergonomic ref-counting: RFC decision and preview (rust-lang/rust-project- goals#107) Progress | ---|--- Point of contact | Niko Matsakis Champions | compiler (Santiago Pastorino), lang (Niko Matsakis) Task owners | Niko Matsakis, Santiago Pastorino 3 detailed updates available. Comment by @nikomatsakis posted on 2025-10-07: I posted this blog post that proposes that we ought to name the trait Handle and define it as a trait where clone produces an "entangled" value -- i.e., a second handle to the same underlying value. Before that, there's been a LOT of conversation that hasn't made its way onto this tracking issue. Trying to fix that! Here is a brief summary, in any case: It began with the first Rust Project Goals program in 2024H2, where Jonathan Kelley from Dioxus wrote a thoughtful blog post about a path to high-level Rust that eventually became a 2024H2 project goal towards ergonomic ref-counting. I wrote a series of blog posts about a trait I called Claim. Josh Triplett and I talked and Josh Triplett opened RFC #3680[], which proposed a use keyword and use || closures. Reception, I would say, was mixed; yes, this is tackling a real problem, but there were lots of concerns on the approach. I summarized the key points here. Santiago Pastorino implemented experimental support for (a variant of) RFC #3680[] as part of the 2025H1 project goal. I authored a 2025H2 project goal proposing that we create an alternative RFC focused on higher-level use-cases which prompted Josh Triplett and I have to have a long and fruitful conversation in which he convinced me that this was not the right approach. We had a lang-team design meeting on 2025-08-27 in which I presented this survey and summary of the work done thus far. And then at the RustConf 2025 Unconf we had a big group discussion on the topic that I found very fruitful, as well as various follow-up conversations with smaller groups. The name Handle arose from this and I plan to be posting further thoughts as a result. RFC #3680: https://github.com/rust-lang/rfcs/pull/3680 Comment by @nikomatsakis posted on 2025-10-09: I wrote up a brief summary of my current thoughts on Zulip; I plan to move this content into a series of blog posts, but I figured it was worth laying it out here too for those watching this space: 09:11 (1) I don't think clones/handles are categorically different when it comes to how much you want to see them made explicit; some applications want them both to be explicit, some want them automatic, some will want a mix -- and possibly other kinds of categorizations. 09:11 (2) But I do think that if you are making everything explicit, it's useful to see the difference between a general purpose clone and a handle. 09:12 (3) I also think there are many classes of software where there is value in having everything explicit -- and that those classes are often the ones most in Rust's "sweet spot". So we should make sure that it's possible to have everything be explicit ergonomically. 09:12 (4) This does not imply that we can't make automatic clones/handles possible too -- it is just that we should treat both use cases (explicit and automatic) as first-class in importance. 09:13 (5) Right now I'm focused on the explicit case. I think this is what the use-use-everywhere was about, though I prefer a different proposal now -- basically just making handle and clone methods understood and specially handled by the compiler for optimization and desugaring purposes. There are pros and cons to that, obviously, and that's what I plan to write-up in more detail. 09:14 (6) On a related note, I think we also need explicit closure captures, which is a whole interesting design space. I don't personally find it "sufficient" for the "fully explicit" case but I could understand why others might think it is, and it's probably a good step to take. 09:15 (7) I go back and forth on profiles -- basically a fancy name for lint-groups based on application domain -- and whether I think we should go that direction, but I think that if we were going to go automatic, that's the way I would do it: i.e., the compiler will automatically insert calls to clone and handle, but it will lint when it does so; the lint can by deny-by-default at first but applications could opt into allow for either or both. I previously wanted allow-by-default but I've decided this is a silly hill to die on, and it's probably better to move in smaller increments. Comment by @nikomatsakis posted on 2025-10-22: Update: There has been more discussion about the Handle trait on Zulip and elsewhere. Some of the notable comments: Downsides of the current name: it's a noun, which doesn't follow Rust naming convention, and the verb handle is very generic and could mean many things. Alternative names proposed: Entangle/entangle or entangled, Share/share, Alias/alias, or Retain/retain. if we want to seriously hardcore on the science names -- Mitose/mitose or Fission/fission. There has been some criticism pointing out that focusing on handles means that other types which might be "cheaply cloneable" don't qualify. For now I will go on using the term Handle, but I agree with the critique that it should be a verb, and currently prefer Alias/alias as an alternative. I'm continuing to work my way through the backlog of blog posts about the conversations from Rustconf. The purposes of these blog posts is not just to socialize the ideas more broadly but also to help myself think through them. Here is the latest post: https://smallcultfollowing.com/babysteps/blog/2025/10/13/ergonomic-explicit- handles/ The point of this post is to argue that, whatever else we do, Rust should have a way to create handles/clones (and closures that work with them) which is at once explicit and ergonomic. To give a preview of my current thinking, I am working now on the next post which will discuss how we should add an explicit capture clause syntax. This is somewhat orthogonal but not really, in that an explicit syntax would make closures that clone more ergonomic (but only mildly). I don't have a proposal I fully like for this syntax though and there are a lot of interesting questions to work out. As a strawperson, though, you might imagine this older proposal I wrote up, which would mean something like this: let actor1 = async move(reply_tx.handle()) { reply_tx.send(...); }; let actor2 = async move(reply_tx.handle()) { reply_tx.send(...); }; This is an improvement on let actor1 = { let reply_tx = reply_tx.handle(); async move(reply_tx.handle()) { reply_tx.send(...); } }; but only mildly. The next post I intend to write would be a variant on "use, use everywhere" that recommends method call syntax and permitting the compiler to elide handle/clone calls, so that the example becomes let actor1 = async move { reply_tx.handle().send(...); // -------- due to optimizations, this would capture the handle creation to happen only when future is *created* }; This would mean that cloning of strings and things might benefit from the same behavior: let actor1 = async move { reply_tx.handle().send(some_id.clone()); // -------- the `some_id.clone()` would occur at future creation time }; The rationable that got me here is (a) minimizing perceived complexity and focusing on muscle memory (just add .clone() or .handle() to fix use- after-move errors, no matter when/where they occur). The cost of course is that (a) Handle/Clone become very special; and (b) it blurs the lines on when code execution occurs. Despite the .handle() occurring inside the future (resp. closure) body, it actually executes when the future (resp. closure) is created in this case (in other cases, such as a closure that implements Fn or FnMut and hence executes more than once, it might occur during each execution as well). Stabilize cargo-script (rust-lang/rust-project- goals#119) Progress | ---|--- Point of contact | Ed Page Champions | cargo (Ed Page), lang (Josh Triplett), lang-docs (Josh Triplett) Task owners | Ed Page No detailed updates available. "Unblocking dormant traits"

      Evolving trait hierarchies (rust-lang/rust-project- goals#393)

      Progress |
      ---|---
      Point of contact | Taylor Cramer
      Champions | lang (Taylor Cramer), types (Oliver Scherer)
      Task owners | Taylor Cramer, Taylor Cramer & others
      No detailed updates available.

      In-place initialization (rust-lang/rust-project- goals#395)

      Progress |
      ---|---
      Point of contact | Alice Ryhl
      Champions | lang (Taylor Cramer)
      Task owners | Benno Lossin, Alice Ryhl, Michael Goulet, Taylor Cramer, Josh Triplett, Gary Guo, Yoshua Wuyts

      1 detailed update available.

      Comment by @Darksonn posted on 2025-10-22:

      This is our first update we’re posting for the in-place init work. Overall things are progressing well, with lively discussion happening on the newly minted t-lang/in-place-init Zulip channel. Here are the highlights since the lang team design meeting at the end of July:

      Next-generation trait solver (rust-lang/rust-project- goals#113)

      Progress |
      ---|---
      Point of contact | lcnr
      Champions | types (lcnr)
      Task owners | Boxy, Michael Goulet, lcnr

      1 detailed update available.

      Comment by @lcnr posted on 2025-10-23:

      Since the last update we've fixed the hang in rayon in https://github.com/rust-lang/rust/pull/144991 and https://github.com/rust- lang/rust/pull/144732 which relied on https://github.com/rust- lang/rust/pull/143054 https://github.com/rust-lang/rust/pull/144955 https://github.com/rust-lang/rust/pull/144405 https://github.com/rust- lang/rust/pull/145706. This introduced some search graph bugs which we fixed in https://github.com/rust-lang/rust/pull/147061 https://github.com/rust- lang/rust/pull/147266.

      We're mostly done with the opaque type support now. Doing so required a lot of quite involved changes:

      • https://github.com/rust-lang/rust/pull/145244 non-defining uses in borrowck
      • https://github.com/rust-lang/rust/pull/145925 non-defining uses in borrowck closure support
      • https://github.com/rust-lang/rust/pull/145711 non-defining uses in hir typeck
      • https://github.com/rust-lang/rust/pull/140375 eagerly compute sub_unification_table again
      • https://github.com/rust-lang/rust/pull/146329 item bounds
      • https://github.com/rust-lang/rust/pull/145993 function calls
      • https://github.com/rust-lang/rust/pull/146885 method selection
      • https://github.com/rust-lang/rust/pull/147249 fallback

      We also fixed some additional self-contained issues and perf improvements: https://github.com/rust-lang/rust/pull/146725 https://github.com/rust- lang/rust/pull/147138 https://github.com/rust-lang/rust/pull/147152 https://github.com/rust-lang/rust/pull/145713 https://github.com/rust- lang/rust/pull/145951

      We have also migrated rust-analyzer to entirely use the new solver instead of chalk. This required a large effort mainly by Jack Huey Chayim Refael Friedman and Shoyu Vanilla. That's some really impressive work on their end 🎉 See this list of merged PRs for an overview of what this required on the r-a side. Chayim Refael Friedman also landed some changes to the trait solver itself to simplify the integration: https://github.com/rust- lang/rust/pull/145377 https://github.com/rust-lang/rust/pull/146111 https://github.com/rust-lang/rust/pull/147723 https://github.com/rust- lang/rust/pull/146182.

      We're still tracking the remaining issues in https://github.com/orgs/rust- lang/projects/61/views/1. Most of these issues are comparatively simple and I expect us to fix most of them over the next few months, getting us close to stabilization. We're currently doing another crater triage which may surface a few more issues.

      Stabilizable Polonius support on nightly (rust-lang/rust-project- goals#118)

      Progress |
      ---|---
      Point of contact | Rémy Rakic
      Champions | types (Jack Huey)
      Task owners | Amanda Stjerna, Rémy Rakic, Niko Matsakis

      1 detailed update available.

      Comment by @lqd posted on 2025-10-22:

      Here's another summary of the most interesting developments since the last update:

      • reviews and updates have been done on the polonius alpha, and it has since landed
      • the last 2 trivial diagnostics failures were fixed
      • we've done perf runs, crater runs, completed gathering stats on crates.io for avg and outliers in CFG sizes, locals, loan and region counts, dataflow framework behavior on unexpected graph shapes and bitset invalidations
      • I worked on dataflow for borrowck: single pass analyses on acyclic CFGs, dataflow analyses on SCCs for cyclic CFGs
      • some more pieces of amanda's SCC rework have landed, with lcnr's help
      • lcnr's opaque type rework, borrowcking of nested items, and so on, also fixed some issues we mentioned in previous updates with member constraints for computing when loans are going out of scope
      • we also studied recent papers in flow-sensitive pointer analysis
      • I also started the loans-in-scope algorithm rework, and also have reachability acceleration with the CFG SCCs
      • the last 2 actual failures in the UI tests are soundness issues, related to liveness of captured regions for opaque types: some regions that should be live are not, which were done to help with precise capture and limit the impact of capturing unused regions that cannot be actually used in the hidden type. The unsoundness should not be observable with NLLs, but polonius alpha relies on liveness to propagate loans throughout the CFG: these dead regions prevent detecting some error-causing loan invalidations. The easiest fix would cause breakage in code that's now accepted. niko, jack and I have another possible solution and I'm trying to implement it now

      Goals looking for help Other goal updates Add a team charter for rustdoc team (rust-lang/rust-project- goals#387) Progress | ---|--- Point of contact | Guillaume Gomez Champions | rustdoc (Guillaume Gomez) No detailed updates available. Borrow checking in a-mir-formality (rust-lang/rust-project- goals#122) Progress | ---|--- Point of contact | Niko Matsakis Champions | types (Niko Matsakis) Task owners | Niko Matsakis, tiif No detailed updates available. C++/Rust Interop Problem Space Mapping (rust-lang/rust-project- goals#388) Progress | ---|--- Point of contact | Jon Bauman Champions | compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay) Task owners | Jon Bauman No detailed updates available. Comprehensive niche checks for Rust (rust-lang/rust-project- goals#262) Progress | ---|--- Point of contact | Bastian Kersting Champions | compiler (Ben Kimock), opsem (Ben Kimock) Task owners | Bastian Kersting], Jakob Koschel No detailed updates available. Const Generics (rust-lang/rust-project-goals#100) Progress | ---|--- Point of contact | Boxy Champions | lang (Niko Matsakis) Task owners | Boxy, Noah Lev 1 detailed update available. Comment by @nikomatsakis posted on 2025-10-22: We had a design meeting on 2025-09-10, minutes available here, aiming at these questions: There are a few concrete things I would like to get out of this meeting, listed sequentially in order of most to least important: Would you be comfortable stabilizing the initial ADTs-only extensions? This would be properly RFC'd before stabilization, this ask is just a "vibe check". Are you interested in seeing Per-Value Rejection for enums with undesirable variants? How do you feel about the idea of Lossy Conversion as an approach in general, what about specifically for the References and Raw Pointers extensions? How do you feel about the idea of dropping the One Equality ideal in general, what about specifically for -0.0 vs +0.0, what about specifically for NaN values? The vibe checks on the first one were as follows: Vibe check The main ask: Would you be comfortable stabilizing the initial ADTs-only extensions? (plus the other ones) nikomatsakis I am +1 on working incrementally and focusing first on ADTs. I am supportive of stabilization overall but I don't feel like we've "nailed" the way to talk or think about these things. So I guess my "vibe" is +1 but if this doc were turned into an RFC kind of "as is" I would probably wind up -1 on the RFC, I think more work is needed (in some sense, the question is, "what is the name of the opt-in trait and why is it named that"). This space is complex and I think we have to do better at helping people understand the fine-grained distinctions between runtime values, const-eval values, and type-safe values. Niko: if we add some sort of derive of a trait name, how much value are we getting from the derive, what should the trait be named? tmandry I think we'll learn the most by stabilizing ADTs in a forward compatible way (including an opt-in) now. So +1 from me on the proposed design. It's worth noting that this is a feature that interacts with many other features, and we will be considering extensions to the MVP for the foreseeable future. To some extent the lang team has committed to this already but we should know what we're signing ourselves up for. scottmcm scottmcm: concern over the private fields restriction (see question below), but otherwise for the top ask, yes happy to just do "simple" types (no floats, no cells, no references, etc). TC As Niko said, +1 on working incrementally, and I too am supportive overall. As a vibe, per-value rejection seems fairly OK to me in that we decided to do value-based reasoning for other const checks. It occurs to me there's some parallel with that. https://github.com/rust-lang/rust/pull/119044 As for the opt-in on types, I see the logic. I do have reservations about adding too many opt-ins to the language, and so I'm curious about whether this can be safely removed. Regarding floats, I see the question on these as related to our decision about how to handle padding in structs. If it makes sense to normalize or otherwise treat -0.0 and +0.0 as the same, then it'd also make sense in my view to normalize or otherwise treat two structs with the same values but different padding (or where only one has initialized padding) as the same. Continue resolving cargo-semver-checks blockers for merging into cargo (rust-lang/rust-project-goals#104) Progress | ---|--- Point of contact | Predrag Gruevski Champions | cargo (Ed Page), rustdoc (Alona Enraght-Moony) Task owners | Predrag Gruevski No detailed updates available. Develop the capabilities to keep the FLS up to date (rust-lang/rust-project- goals#391) Progress | ---|--- Point of contact | Pete LeVasseur Champions | bootstrap (Jakub Beránek), lang (Niko Matsakis), spec (Pete LeVasseur) Task owners | Pete LeVasseur, Contributors from Ferrous Systems and others TBD, t-spec and contributors from Ferrous Systems 2 detailed updates available. Comment by @nikomatsakis posted on 2025-10-22: After much discussion, we have decided to charter this team as a t-spec subteam. Pete LeVasseur and I are working to make that happen now. Comment by @nikomatsakis posted on 2025-10-22: PR with charters: https://github.com/rust-lang/team/pull/2028 Emit Retags in Codegen (rust-lang/rust-project- goals#392) Progress | ---|--- Point of contact | Ian McCormack Champions | compiler (Ralf Jung), opsem (Ralf Jung) Task owners | Ian McCormack 1 detailed update available. Comment by @icmccorm posted on 2025-10-25: Here's our first status update! We've been experimenting with a few different ways of emitting retags in codegen, as well as a few different forms that retags should take at this level. We think we've settled on a set of changes that's worth sending out to the community for feedback, likely as a pre-RFC. You can expect more engagement from us on this level in the next couple of weeks. We've used these changes to create an initial working prototype for BorrowSanitizer that supports finding Tree Borrows violations in tiny, single-threaded Rust programs. We're working on getting Miri's test suite ported over to confirm that everything is working correctly and that we've quashed any false positives or false negatives. This coming Monday, I'll be presenting on BorrowSanitizer and this project goal at the Workshop on Supporting Memory Safety in LLVM. Please reach out if you're attending and would like to chat more in person! Expand the Rust Reference to specify more aspects of the Rust language (rust- lang/rust-project-goals#394) Progress | ---|--- Point of contact | Josh Triplett Champions | lang-docs (Josh Triplett), spec (Josh Triplett) Task owners | Amanieu d'Antras, Guillaume Gomez, Jack Huey, Josh Triplett, lcnr, Mara Bos, Vadim Petrochenkov, Jane Lusby 1 detailed update available. Comment by @joshtriplett posted on 2025-10-22: The work on this goal has led to many ongoing discussions on the current status of the Reference. Those discussions are still in progress. Meanwhile, many people working on this goal have successfully written outlines or draft chapters, at various stages of completeness. There's a broken-out status report at https://github.com/rust-lang/project-goal-reference- expansion/issues/11 . Finish the libtest json output experiment (rust-lang/rust-project- goals#255) Progress | ---|--- Point of contact | Ed Page Champions | cargo (Ed Page) Task owners | Ed Page No detailed updates available. Finish the std::offload module (rust-lang/rust-project- goals#109) Progress | ---|--- Point of contact | Manuel Drehwald Champions | compiler (Manuel Drehwald), lang (TC) Task owners | Manuel Drehwald, LLVM offload/GPU contributors 1 detailed update available. Comment by @ZuseZ4 posted on 2025-10-22: A longer update of the changes over the fall. We had two gsoc contributors and a lot of smaller improvements for std::autodiff. The first two improvements were already mentioned as draft PRs in the previous update, but got merged since. I also upstreamed more std::offload changes. Marcelo Domínguez refactored the autodiff frontend to be a proper rustc intrinsic, rather than just hackend into the frontend like I first implemented it. This already solved multiple open issues, reduced the code size, and made it generally easier to maintain going forward. Karan Janthe upstreamed a first implementation of "TypeTrees", which lowers rust type and layout information to Enzyme, our autodiff backend. This makes it more likely that you won't see compilation failures with the error message "Can not deduce type of ". We might refine in the future what information exactly we lower. Karan Janthe made sure that std::autodiff has support for f16 and and f128 types. One more of my offload PRs landed. I also figured out why the LLVM-IR generated by the std::offload code needed some manual adjustments in the past. We were inconsistent when communicating with LLVM's offload module, about whether we'd want a magic, extra, dyn_ptr argument, that enables kernels to use some extra features. We don't use these features yet, but for consistency we now always generate and expect the extra pointer. The bugfix is currently under review, once it lands upstream, rustc is able to run code on GPUs (still with a little help of clang). Marcelo Domínguez refactored my offload frontend, again introducing a proper rustc intrinsic. That code will still need to go through review, but once it lands it will get us a lot closer to a usable frontend. He also started to generate type information for our offload backend to know how many bytes to copy to and from the devices. This is a very simplified version of our autodiff typetrees. At RustChinaConf, I was lucky to run into the wild linker author David Lattimore, which helped me to create a draft PR that can dlopen Enzyme at runtime. This means we could ship it via rustup for people interested in std::autodiff, and don't have to link it in at build time, which would increase binary size even for those users that are not interested in it. There are some open issues, so please reach out if you have time to get the PR ready! @sgasho spend a lot of time trying to get Rust into the Enzyme CI. Unfortunately that is a tricky process due to Enzyme's CI requirements, so it's not merged yet. I tried to simplify building std::autodiff by marking it as compatible with download-llvm-ci. Building LLVM from source was previously the by far slowest part of building rustc with autodiff, so this has a large potential. Unfortunately the CI experiments revealed some issues around this setting. We think we know why Enzyme's Cmake causes issues here and are working on a fix to make it more reliable. Osama Abdelkader and bjorn3 looked into automatically enabling fat-lto when autodiff is enabled. In the past, forgetting to enable fat-lto resulted in incorrect (zero) derivatives. The first approach unfortunately wasn't able to cover all cases, so we need to see whether we can handle it nicely. If that turns out to be too complicated, we will revert it and instead "just" provide a nice error message, rather than returning incorrect derivatives. All-in-all I spend a lot more time on infra (dlopen, cmake, download-llvm-ci, ...) then I'd like, but on the happy side there are only so many features left that I want to support here so there is an end in sight. I am also about to give a tech-talk at the upcoming LLVM dev meeting about safe GPU programming in Rust. Getting Rust for Linux into stable Rust: compiler features (rust-lang/rust- project-goals#407) Progress | ---|--- Point of contact | Tomas Sedovic Champions | compiler (Wesley Wiser) Task owners | (depending on the flag) 3 detailed updates available. Comment by @tomassedovic posted on 2025-10-09: I've updated the top-level description to show everything we're tracking here (please let me know if anything's missing or incorrect!). Comment by @tomassedovic posted on 2025-10-10: [merged] Sanitizers target modificators / https://github.com/rust-lang/rust/pull/138736 [merged] Add assembly test for -Zreg-struct-return option / https://github.com/rust-lang/rust/pull/145382 [merged] CI: rfl: move job forward to Linux v6.17-rc5 to remove temporary commits / https://github.com/rust-lang/rust/pull/146368 -Zharden-sls / https://github.com/rust-lang/rust/pull/136597 Waiting on review #![register_tool] / https://github.com/rust-lang/rust/issues/66079 Waiting on https://github.com/rust-lang/rfcs/pull/3808 -Zno-jump-tables / https://github.com/rust-lang/rust/pull/145974 Active FCP, waiting on 2 check boxes Comment by @tomassedovic posted on 2025-10-24: -Cunsigned-char We've discussed adding an option analogous to -funsigned-char in GCC and Clang, that would allow you to set whether std::ffi::c_char is represented by i8 or u8. Right now, this is platform-specific and should map onto whatever char is in C on the same platform. However, Linux explicitly sets char to be unsigned and then our Rust code conflicts with that. And isn this case the sign is significant. Rust for Linux works around this this with their rust::ffi module, but now that they've switched to the standard library's CStr type, they're running into it again with the as_ptr method. Tyler mentioned https://docs.rs/ffi_11/latest/ffi_11/ which preserves the char / signed char / unsigned char distinction. Grouping target modifier flags The proposed unsigned-char option is essentially a target modifier. We have several more of these (e.g. llvm-args, no-redzone) in the Rust compiler and Josh suggested we distinguish them somehow. E.g. by giving them the same prefix or possibly creating a new config option (right now we have -C and -Z, maybe we could add -T for target modifiers) so they're distinct from the e.g. the codegen options. Josh started a Zulip thread here: https://rust- lang.zulipchat.com/#narrow/channel/131828-t-compiler/topic/Grouping.20target.20modifier.20options.3F/with/546524232 #![register_tool] / / Tyler looked at the RFC. The Crubit team started using register_tool but then moved to using an attribute instead. He proposed we could do something similar here, although it would require a new feature and RFC. The team was open to seeing how it would work. Getting Rust for Linux into stable Rust: language features (rust-lang/rust- project-goals#116) Progress | ---|--- Point of contact | Tomas Sedovic Champions | lang (Josh Triplett), lang-docs (TC) Task owners | Ding Xiang Fei 3 detailed updates available. Comment by @tomassedovic posted on 2025-10-09: I've updated the top-level description to show everything we're tracking here (please let me know if anything's missing or incorrect!). Comment by @tomassedovic posted on 2025-10-10: Deref/Receiver

      • Ding Xiang Fei keeps updating the PR: https://github.com/rust-lang/rust/pull/146095
      • They're also working on a document to explain the consequences of this split

      Arbitrary Self Types https://github.com/rust-lang/rust/issues/44874 Waiting on the Deref/Receiver work, no updates derive(CoercePointee) https://github.com/rust-lang/rust/pull/133820 Waiting on Arbitrary self types Pass pointers to const in asm! blocks RFC: https://github.com/rust-lang/rfcs/pull/3848 The Lang team went through the RFC with Alice Ryhl on 2025-10-08 and it's in FCP now Field projections Benno Lossin opened a PR here: https://github.com/rust-lang/rust/pull/146307 Being reviewed by the compiler folks Providing \0 terminated file names with #[track_caller] The feature has been implemented and stabilized with file_as_c_str as the method name: https://github.com/rust-lang/rust/pull/145664 Supertrait auto impl RFC Ding Xiang Fei opened the RFC and works with the reviewers: https://github.com/rust-lang/rfcs/pull/3851 Other Miguel Ojeda spoke to Linus about rustfmt and they came to agreement. Comment by @tomassedovic posted on 2025-10-24: Layout of Danilo asked about the layout of TypeId -- specifically its size and whether they can rely on it because they want to store it in a C struct. The struct's size is currently 16 bytes, but that's an implementation detail. As a vibe check, Josh Triplett and Tyler Mandry were open to guaranteeing that it's going to be at most 16 bytes, but they wanted to reserve the option to reduce the size at some point. The next step is to have the full Lang and Libs teams discuss the proposal. Danilo will open a PR to get that discussion started. rustfmt

      Miguel brought up the "trailing empty comment" workaround for the formatting issue that made the rounds on the Linux kernel a few weeks ago. The kernel style places each import on a single line:

       use crate::{ fmt, page::AsPageIter, };
      

      rustfmt compresses this to:

       use crate::{fmt, page::AsPageIter};
      

      The workaround is to put an empty trailing comment at the end

       use crate::{ fmt, page::AsPageIter, // };
      

      This was deemed acceptable (for the time being) and merged into the mainline kernel: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4a9cb2eecc78fa9d388481762dd798fa770e1971

      Miguel is in contact with rustfmt to support this behaviour without a workaround.

      // PANIC: ... comments / This is a proposal to add a lint that would require a PANIC comment (modeled after the SAFETY comment) to explain the circumstances during which the code will or won't panic. Alejandra González was open to the suggestion and Henry Barker stepped up to implement it. Deref/Receiver During the experimentation work, Ding ran into an issue with overlapping impls (that was present even with #[unstable_feature_bound(..)]). We ran out of time but we'll discuss this offline and return to it at the next meeting. Implement Open API Namespace Support (rust-lang/rust-project- goals#256) Progress | ---|--- Point of contact | Champions | cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols) Task owners | b-naber, Ed Page No detailed updates available. MIR move elimination (rust-lang/rust-project- goals#396) Progress | ---|--- Point of contact | Amanieu d'Antras Champions | lang (Amanieu d'Antras) Task owners | Amanieu d'Antras No detailed updates available. Prototype a new set of Cargo "plumbing" commands (rust-lang/rust-project- goals#264) Progress | ---|--- Point of contact | Task owners | , Ed Page No detailed updates available. Prototype Cargo build analysis (rust-lang/rust-project- goals#398) Progress | ---|--- Point of contact | Weihang Lo Champions | cargo (Weihang Lo) Task owners | Weihang Lo, Weihang Lo 1 detailed update available. Comment by @weihanglo posted on 2025-10-04: Cargo tracking issue: https://github.com/rust-lang/cargo/issues/15844. The first implementation was https://github.com/rust-lang/cargo/pull/15845 in August that added build.analysis.enabled = true to unconditionally generate timing HTML. Further implementations tasks is listed in https://github.com/rust-lang/cargo/issues/15844#issuecomment-3192779748. Haven't yet got any progress in September. reflection and comptime (rust-lang/rust-project- goals#406) Progress | ---|--- Point of contact | Oliver Scherer Champions | compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett) Task owners | oli-obk 1 detailed update available. Comment by @oli-obk posted on 2025-10-22: I implemented an initial MVP supporting only tuples and primitives (tho those are just opaque things you can't interact with further), and getting offsets for the tuple fields as well as the size of the tuple: https://github.com/rust-lang/rust/pull/146923 There are two designs of how to expose this from a libs perspective, but after a sync meeting with scottmcm yesterday we came to the conclusion that neither is objectively better at this stage so we're just going to go with the nice end-user UX version for now. For details see the PR description. Once the MVP lands, I will mentor various interested contributors who will keep adding fields to the Type struct and variants the TypeKind enum. The next major step is restricting what information you can get from structs outside of the current module or crate. We want to honor visibility, so an initial step would be to just never show private fields, but we want to explore allowing private fields to be shown either just within the current module or via some opt-in marker trait Rework Cargo Build Dir Layout (rust-lang/rust-project- goals#401) Progress | ---|--- Point of contact | Ross Sullivan Champions | cargo (Weihang Lo) Task owners | Ross Sullivan 1 detailed update available. Comment by @ranger-ross posted on 2025-10-06: Status update October 6, 2025 The build-dir was split out of target-dir as part of https://github.com/rust-lang/cargo/issues/14125 and scheduled for stabilization in Rust 1.91.0. 🎉 Before re-organizing the build-dir layout we wanted to improve the existing layout tests to make sure we do not make any unexpected changes. This testing harness improvement was merged in https://github.com/rust- lang/cargo/pull/15874. The initial build-dir layout reorganization PR has been posted https://github.com/rust-lang/cargo/pull/15947 and discussion/reviews are under way. Run more tests for GCC backend in the Rust's CI (rust-lang/rust-project- goals#402) Progress | ---|--- Point of contact | Guillaume Gomez Champions | compiler (Wesley Wiser), infra (Marco Ieni) Task owners | Guillaume Gomez No detailed updates available. Rust Stabilization of MemorySanitizer and ThreadSanitizer Support (rust- lang/rust-project-goals#403) Progress | ---|--- Point of contact | Jakob Koschel Task owners | , [Jakob Koschel](https://github.com/jakos-sec) No detailed updates available. Rust Vision Document (rust-lang/rust-project- goals#269) Progress | ---|--- Point of contact | Niko Matsakis Task owners | vision team 1 detailed update available. Comment by @jackh726 posted on 2025-10-22: Update: Niko and I gave a talk at RustConf 2025 (and I represented that talk at RustChinaConf 2025) where we gave an update on this (and some intermediate insights). We have started to seriously plan the shape of the final doc. We have some "blind spots" that we'd like to cover before finishing up, but overall we're feeling close to the finish line on interviews. rustc-perf improvements (rust-lang/rust-project- goals#275) Progress | ---|--- Point of contact | James Champions | compiler (David Wood), infra (Jakub Beránek) Task owners | James, Jakub Beránek, David Wood 1 detailed update available. Comment by @Kobzol posted on 2025-10-21: We moved forward with the implementation, and the new job queue system is now being tested in production on a single test pull request. Most things seem to be working, but there are a few things to iron out and some profiling to be done. I expect that within a few weeks we could be ready to switch to the new system fully in production. Stabilize public/private dependencies (rust-lang/rust-project- goals#272) Progress | ---|--- Point of contact | Champions | cargo (Ed Page) Task owners | , Ed Page No detailed updates available. Stabilize rustdoc doc_cfg feature (rust-lang/rust-project- goals#404) Progress | ---|--- Point of contact | Guillaume Gomez Champions | rustdoc (Guillaume Gomez) Task owners | Guillaume Gomez No detailed updates available. SVE and SME on AArch64 (rust-lang/rust-project- goals#270) Progress | ---|--- Point of contact | David Wood Champions | compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras) Task owners | David Wood 1 detailed update available. Comment by @nikomatsakis posted on 2025-10-22: Sized hierarchy

      The focus right now is on the "non-const" parts of the proposal, as the "const" parts are blocked on the new trait solver (https://github.com/rust- lang/rust-project-goals/issues/113). Now that the types team FCP https://github.com/rust-lang/rust/pull/144064 has completed, work can proceed to land the implementation PRs. David Wood plans to split the RFC to separate out the "non-const" parts of the proposal so it can move independently, which will enable extern types.

      To that end, there are three interesting T-lang design questions to be considered.

      Naming of the traits

      The RFC currently proposes the following names

      • Sized
      • MetaSized
      • PointeeSized

      However, these names do not follow the "best practice" of naming the trait after the capability that it provides. As champion Niko is recommending we shift to the following names:

      • Sized -- should righly be called SizeOf, but oh well, not worth changing.
      • SizeOfVal -- named after the method size_of_val that you get access to.
      • Pointee -- the only thing you can do is point at it.

      The last trait name is already used by the (unstable) std::ptr::Pointee trait. We do not want to have these literally be the same trait because that trait adds a Metadata associated type which would be backwards incompatible; if existing code uses T::Metadata to mean <T as SomeOtherTrait>::Metadata, it could introduce ambiguity if now T: Pointee due to defaults. My proposal is to rename std::ptr::Pointee to std::ptr::PointeeMetadata for now, since that trait is unstable and the design remains under some discussion. The two traits could either be merged eventually or remain separate.

      Note that PointeeMetadata would be implemented automatically by the compiler for anything that implements Pointee.

      Syntax opt-in

      The RFC proposes that an explicit bound like T: MetaSized disabled the default T: Sized bound. However, this gives no signal that this trait bound is "special" or different than any other trait bound. Naming conventions can help here, signalling to users that these are special traits, but that leads to constraints on naming and may not scale as we consider using this mechanism to relax other defaults as proposed in my recent blog post. One idea is to use some form of syntax, so that T: MetaSized is just a regular bound, but (for example) T: =MetaSized indicates that this bound "disables" the default Sized bound. This gives users some signal that something special is going on. This = syntax is borrowing from semver constraints, although it's not a precise match (it does not mean that T: Sized doesn't hold, after all). Other proposals would be some other sigil (T: ?MetaSized, but it means "opt out from the traits above you"; `T:

      MetaSized`, ...) or a keyword (no idea).

      To help us get a feel for it, I'll use T: =Foo throughout this post.

      Implicit trait supertrait bounds, edition interaction

      In Rust 2024, a trait is implicitly ?Sized which gets mapped to =SizeOfVal:

      trait Marker {} // cannot be implemented by extern types
      

      This is not desirable but changing it would be backwards incompatible if traits have default methods that take advantage of this bound:

      trait NotQuiteMarker {
          fn dummy(&self) {
              let s = size_of_val(self);
          }
      }
      

      We need to decide how to handle this. Options are

      • Just change it, breakage will be small (have to test that).
      • Default to =SizeOfVal but let users explicitly write =Pointee if they want that. Bad because all traits will be incompatible with extern types.
      • Default to =SizeOfVal only if defaulted methods are present. Bad because it's a backwards incompatible change to add a defaulted method now.
      • Default to =Pointee but add where Self: =SizeOfVal implicitly to defaulted methods. Now it's not backwards incompatible to add a new defaulted method, but it is backwards incompatible to change an existing method to have a default.

      If we go with one of the latter options, Niko proposes that we should relax this in the next Edition (Rust 2026?) so that the default becomes Pointee (or maybe not even that, if we can).

      Relaxing associated type bounds

      Under the RFC, existing ?Sized bounds would be equivalent to =SizeOfVal. This is mostly fine but will cause problems in (at least) two specific cases: closure bounds and the Deref trait. For closures, we can adjust the bound since the associated type is unstable and due to the peculiarities of our Fn() -> T syntax. Failure to adjust the Deref bound in particular would prohibit the use of Rc<E> where E is an extern type, etc.

      For deref bounds, David Wood is preparing a PR that simply changes the bound in a backwards incompatible way to assess breakage on crater. There is some chance the breakage will be small.

      If the breakage proves problematic, or if we find other traits that need to be relaxed in a similar fashion, we do have the option of:

      • In Rust 2024, T: Deref becomes equivalent to T: Deref<Target: SizeOfVal> unless written like T: Deref<Target: =Pointee>. We add that annotation throughout stdlib.
      • In Rust 202X, we change the default, so that T: Deref does not add any special bounds, and existing Rust 2024 T: Deref is rewritten to T: Deref<Target: SizeOfVal> as needed.

      Other notes

      One topic that came up in discussion is that we may eventually wish to add a level "below" Pointee, perhaps Value, that signifies webassembly external values which cannot be pointed at. That is not currently under consideration but should be backwards compatible.

      Type System Documentation (rust-lang/rust-project- goals#405)

      Progress |
      ---|---
      Point of contact | Boxy
      Champions | types (Boxy)
      Task owners | Boxy, lcnr
      No detailed updates available.

      Unsafe Fields (rust-lang/rust-project-goals#273)

      Progress |
      ---|---
      Point of contact | Jack Wrenn
      Champions | compiler (Jack Wrenn), lang (Scott McMurray)
      Task owners | Jacob Pratt, Jack Wrenn, Luca Versari
      No detailed updates available.

    21. 🔗 Rust Blog Project goals update — September 2025 rss

      The Rust project is currently working towards a slate of 41 project goals, with 13 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

      Flagship goals

      "Beyond the &"

      Continue Experimentation with Pin Ergonomics (rust-lang/rust-project- goals#389)

      Progress |
      ---|---
      Point of contact | Frank King
      Champions | compiler (Oliver Scherer), lang (TC)
      Task owners | Frank King
      No detailed updates available.

      Design a language feature to solve Field Projections (rust-lang/rust-project- goals#390)

      Progress |
      ---|---
      Point of contact | Benno Lossin
      Champions | lang (Tyler Mandry)
      Task owners | Benno Lossin

      1 detailed update available.

      Comment by @BennoLossin posted on 2025-09-24:

      Key Developments

      • coordinating with #![feature(pin_ergonomics)] (https://github.com/rust-lang/rust/issues/130494) to ensure compatibility between the two features (allow custom pin projections to be the same as the ones for &pin mut T)
      • identified connection to auto reborrowing
        • https://github.com/rust-lang/rust-project-goals/issues/399
        • https://github.com/rust-lang/rust/issues/145612
      • held a design meeting
        • very positive feedback from the language team
        • approved lang experiment
        • got a vibe check on design axioms
      • created a new Zulip channel #t-lang/custom-refs for all new features needed to make custom references more similar to &T/&mut T such as field projections, auto reborrowing and more
      • created the tracking issue for #![feature(field_projections)]
      • opened https://github.com/rust-lang/rust/pull/146307 to implement field representing types (FRTs) in the compiler

      Next Steps

      • Get https://github.com/rust-lang/rust/pull/146307 reviewed & merged

      Help Wanted

      • When the PR for FRTs lands, try out the feature & provide feedback on FRTs
      • if possible using the field-projection crate and provide feedback on projections

      Internal Design Updates

      Shared & Exclusive Projections

      We want users to be able to have two different types of projections analogous to &T and &mut T. Each field can be projected independently and a single field can only be projected multiple times in a shared way. The current design uses two different traits to model this. The two traits are almost identical, except for their safety documentation.

      We were thinking if it is possible to unify them into a single trait and have coercions similar to autoreborrowing that would allow the borrow checker to change the behavior depending on which type is projected.

      Syntax

      There are lots of different possibilities for which syntax we can choose, here are a couple options: [Devon Peticolas][]->f/[Andrea D'Angelo][] x->f, [Devon Peticolas][].f/[Andrea D'Angelo][] x.f, x.[Fatih Kadir Akın][]/x.mut[Fatih Kadir Akın][], x.ref.[Fatih Kadir Akın][]/x.[Fatih Kadir Akın][]. Also many alternatives for the sigils used: x[Fatih Kadir Akın][], x~f, x.@.f.

      We have yet to decide on a direction we want to go in. If we are able to merge the two project traits, we can also settle on a single syntax which would be great.

      Splitting Projections into Containers & Pointers

      There are two categories of projections: Containers and Pointers:

      • Containers are types like MaybeUninit<T>, Cell<T>, UnsafeCell<T>, ManuallyDrop<T>. They are repr(transparent) and apply themselves to each field, so MaybeUninit<MyStruct> has a field of type MaybeUninit<MyField> (if MyStruct has a field of type MyField).
      • Pointers are types like &T, &mut T, cell::Ref[Mut]<'_, T>, *const T/*mut T, NonNull<T>. They support projecting Pointer<'_, Struct> to Pointer<'_, Field>.

      In the current design, these two classes of projections are unified by just implementing Pointer<'_, Container<Struct>> -> Pointer<'_, Container<Field>> manually for the common use-cases (for example &mut MaybeUninit<Struct> -> &mut MaybeUninit<Field>). However this means that things like &Cell<MaybeUninit<Struct>> doesn't have native projections unless we explicitly implement them.

      We could try to go for a design that has two different ways to implement projections -- one for containers and one for pointers. But this has the following issues:

      • there are two ways to implement projections, which means that some people will get confused which one they should use.
      • making projections through multiple container types work out of the box is great, however this means that when defining a new container type and making it available for projections, one needs to consider all other container types and swear coherence with them. If we instead have an explicit way to opt in to projections through multiple container types, the implementer of that trait only has to reason about the types involved in that operation.
        • so to rephrase, the current design allows more container types that users actually use to be projected whereas the split design allows arbitrary nestings of container types to be projected while disallowing certain types to be considered container types.
      • The same problem exists for allowing all container types to be projected by pointer types, if I define a new pointer type I again need to reason about all container types and if it's sound to project them.

      We might be able to come up with a sensible definition of "container type" which then resolves these issues, but further investigation is required.

      Projections for &Custom<U>

      We want to be able to have both a blanket impl<T, F: Field<Base = T>> Project<F> for &T as well as allow people to have custom projections on &Custom<U>. The motivating example for custom projections is the Rust-for- Linux Mutex that wants these projections for safe RCU abstractions.

      During the design meeting, it was suggested we could add a generic to Project that only the compiler is allowed to insert, this would allow disambiguation between the two impls. We have now found an alternative approach that requires less specific compiler magic:

      • Add a new marker trait ProjectableBase that's implemented for all types by default.
      • People can opt out of implementing it by writing impl !ProjectableBase for MyStruct; (needs negative impls for marker traits).
      • We add where T: ProjectableBase to the impl Project for &T.
      • The compiler needs to consider the negative impls in the overlap check for users to be able to write their own impl<U, F> Project<F> for &Custom<U> where ... (needs negative impl overlap reasoning)

      We probably want negative impls for marker traits as well as improved overlap reasoning for different reasons too, so it is probably fine to depend on them here.

      enum support

      enum and union shouldn't be available for projections by default, take for example &Cell<Enum>, if we project to a variant, someone else could overwrite the value with a different variant, invalidating our &Cell<Field>. This also needs a new trait, probably AlwaysActiveField (needs more name bikeshedding, but too early for that) that marks fields in structs and tuples.

      To properly project an enum, we need:

      • a new CanProjectEnum (TBB) trait that provides a way to read the discriminant that's currently inhabiting the value.
        • it also needs to guarantee that the discriminant doesn't change while fields are being projected (this rules out implementing it for &Cell)
      • a new match operator that will project all mentioned fields (for &Enum this already is the behavior for match)

      Field Representing Types (FRTs)

      While implementing https://github.com/rust-lang/rust/pull/146307 we identified the following problems/design decisions:

      • a FRT is considered local to the orphan check when each container base type involved in the field path is local or a tuple (see the top comment on the PR for more infos)
      • FRTs cannot implement Drop
      • the Field trait is not user-implementable
      • types with fields that are dynamically sized don't have a statically known offset, which complicates the UnalignedField trait,

      I decided to simplify the first implementation of FRTs and restrict them to sized structs and tuples. It also doesn't support packed structs. Future PRs will add support for enums, unions and packed structs as well as dynamically sized types.

      Reborrow traits (rust-lang/rust-project-goals#399)

      Progress |
      ---|---
      Point of contact | Aapo Alasuutari
      Champions | compiler (Oliver Scherer), lang (Tyler Mandry)
      Task owners | Aapo Alasuutari
      No detailed updates available.

      "Flexible, fast(er) compilation"

      build-std (rust-lang/rust-project-goals#274)

      Progress |
      ---|---
      Point of contact | David Wood
      Champions | cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras)
      Task owners | Adam Gemmell, David Wood

      1 detailed update available.

      Comment by @adamgemmell posted on 2025-09-12:

      Recently we've been working on feedback on the multi-staged format of the RFC. We've also shared the RFC outside of our sync call group to people from a variety of project teams and potential users too.

      We're now receiving feedback that is much more detail-oriented, as opposed to being about the direction and scope of the RFC, which is a good indication that the overall strategy for shipping this RFC seems promising. We're continuing to address feedback to ensure the RFC is clear, consistent and technically feasible. David's feeling is that we've probably got another couple rounds of feedback from currently involved people and then we'll invite more people from various groups before publishing parts of the RFC formally.

      Production-ready cranelift backend (rust-lang/rust-project- goals#397)

      Progress |
      ---|---
      Point of contact | Folkert de Vries
      Champions | compiler (bjorn3)
      Task owners | bjorn3, Folkert de Vries, [Trifecta Tech Foundation]
      No detailed updates available.

      Promoting Parallel Front End (rust-lang/rust-project- goals#121)

      Progress |
      ---|---
      Point of contact | Sparrow Li
      Task owners | Sparrow Li

      Help wanted:

      Help test the deadlock code in the issue list and try to reproduce the issue

      1 detailed update available.

      Comment by @SparrowLii posted on 2025-09-17:

      • Key developments: We have added more tests for deadlock issues. And we can say that deadlock problems are almost resolved. And we are currently addressing issues related to reproducible builds, and some of these have already been resolved.
      • Blockers: null
      • Help wanted: Help test the deadlock code in the issue list and try to reproduce the issue

      Relink don't Rebuild (rust-lang/rust-project- goals#400)

      Progress |
      ---|---
      Point of contact | Jane Lusby
      Champions | cargo (Weihang Lo), compiler (Oliver Scherer)
      Task owners | Ally Sommers, Piotr Osiewicz
      No detailed updates available.

      "Higher-level Rust"

      Ergonomic ref-counting: RFC decision and preview (rust-lang/rust-project- goals#107)

      Progress |
      ---|---
      Point of contact | Niko Matsakis
      Champions | compiler (Santiago Pastorino), lang (Niko Matsakis)
      Task owners | Niko Matsakis, Santiago Pastorino
      No detailed updates available.

      Stabilize cargo-script (rust-lang/rust-project- goals#119)

      Progress |
      ---|---
      Point of contact | Ed Page
      Champions | cargo (Ed Page), lang (Josh Triplett), lang-docs (Josh Triplett)
      Task owners | Ed Page

      1 detailed update available.

      Comment by @epage posted on 2025-09-16:

      Key developments:

      • Overall polish
        • https://github.com/rust-lang/rust/pull/145751
        • https://github.com/rust-lang/rust/pull/145754
        • https://github.com/rust-lang/rust/pull/146106
        • https://github.com/rust-lang/rust/pull/146137
        • https://github.com/rust-lang/rust/pull/146211
        • https://github.com/rust-lang/rust/pull/146340
        • https://github.com/rust-lang/rust/pull/145568
        • https://github.com/rust-lang/cargo/pull/15878
        • https://github.com/rust-lang/cargo/pull/15886
        • https://github.com/rust-lang/cargo/pull/15899
        • https://github.com/rust-lang/cargo/pull/15914
        • https://github.com/rust-lang/cargo/pull/15927
        • https://github.com/rust-lang/cargo/pull/15939
        • https://github.com/rust-lang/cargo/pull/15952
        • https://github.com/rust-lang/cargo/pull/15972
        • https://github.com/rust-lang/cargo/pull/15975
      • rustfmt work
        • https://github.com/rust-lang/rust/pull/145617
        • https://github.com/rust-lang/rust/pull/145766
      • Reference work
        • https://github.com/rust-lang/reference/pull/1974

      "Unblocking dormant traits"

      Evolving trait hierarchies (rust-lang/rust-project- goals#393)

      Progress |
      ---|---
      Point of contact | Taylor Cramer
      Champions | lang (Taylor Cramer), types (Oliver Scherer)
      Task owners | Taylor Cramer, Taylor Cramer & others

      1 detailed update available.

      Comment by @cramertj posted on 2025-09-30:

      Current status: there is an RFC for auto impl supertraits that has received some discussion and updates (thank you, Ding Xiang Fei!).

      The major open questions currently are:

      Syntax

      The current RFC proposes:

      trait Subtrait: Supertrait {
          auto impl Supertrait {
              // Supertrait items defined in terms of Subtrait items, if any
          }
      }
      

      Additionally, there is an open question around the syntax of auto impl for unsafe supertraits. The current proposal is to require unsafe auto impl Supertrait.

      Whether to require impls to opt-out of auto impls

      The current RFC proposes that

      impl Supertrait for MyType {}
      impl Subtrait for MyType {
          // Required in order to manually write `Supertrait` for MyType.
          extern impl Supertrait;
      }
      

      This makes it explicit via opt-out whether an auto impl is being applied. However, this is in conflict with the goal of allowing auto impls to be added to existing trait hierarchies. The RFC proposes to resolve this via a temporary attribute which triggers a warning. See my comment here.

      Note that properly resolving whether or not to apply an auto impl requires coherence-like analysis.

      In-place initialization (rust-lang/rust-project- goals#395)

      Progress |
      ---|---
      Point of contact | Alice Ryhl
      Champions | lang (Taylor Cramer)
      Task owners | Benno Lossin, Alice Ryhl, Michael Goulet, Taylor Cramer, Josh Triplett, Gary Guo, Yoshua Wuyts
      No detailed updates available.

      Next-generation trait solver (rust-lang/rust-project- goals#113)

      Progress |
      ---|---
      Point of contact | lcnr
      Champions | types (lcnr)
      Task owners | Boxy, Michael Goulet, lcnr
      No detailed updates available.

      Stabilizable Polonius support on nightly (rust-lang/rust-project- goals#118)

      Progress |
      ---|---
      Point of contact | Rémy Rakic
      Champions | types (Jack Huey)
      Task owners | Amanda Stjerna, Rémy Rakic, Niko Matsakis
      No detailed updates available.

      Goals looking for help

      No goals listed.

      Other goal updates

      Add a team charter for rustdoc team (rust-lang/rust-project- goals#387)

      Progress |
      ---|---
      Point of contact | Guillaume Gomez
      Champions | rustdoc (Guillaume Gomez)
      No detailed updates available.

      Borrow checking in a-mir-formality (rust-lang/rust-project- goals#122)

      Progress |
      ---|---
      Point of contact | Niko Matsakis
      Champions | types (Niko Matsakis)
      Task owners | Niko Matsakis, tiif
      No detailed updates available.

      C++/Rust Interop Problem Space Mapping (rust-lang/rust-project- goals#388)

      Progress |
      ---|---
      Point of contact | Jon Bauman
      Champions | compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay)
      Task owners | Jon Bauman
      No detailed updates available.

      Comprehensive niche checks for Rust (rust-lang/rust-project- goals#262)

      Progress |
      ---|---
      Point of contact | Bastian Kersting
      Champions | compiler (Ben Kimock), opsem (Ben Kimock)
      Task owners | Bastian Kersting], Jakob Koschel
      No detailed updates available.

      Const Generics (rust-lang/rust-project-goals#100)

      Progress |
      ---|---
      Point of contact | Boxy
      Champions | lang (Niko Matsakis)
      Task owners | Boxy, Noah Lev
      No detailed updates available.

      Continue resolving cargo-semver-checks blockers for merging into cargo (rust-lang/rust-project-goals#104)

      Progress |
      ---|---
      Point of contact | Predrag Gruevski
      Champions | cargo (Ed Page), rustdoc (Alona Enraght-Moony)
      Task owners | Predrag Gruevski

      1 detailed update available.

      Comment by @obi1kenobi posted on 2025-09-19:

      Just removed the duplicate posts, guessing from a script that had a bad day.

      Develop the capabilities to keep the FLS up to date (rust-lang/rust-project- goals#391)

      Progress |
      ---|---
      Point of contact | Pete LeVasseur
      Champions | bootstrap (Jakub Beránek), lang (Niko Matsakis), spec (Pete LeVasseur)
      Task owners | Pete LeVasseur, Contributors from Ferrous Systems and others TBD, t-spec and contributors from Ferrous Systems
      No detailed updates available.

      Emit Retags in Codegen (rust-lang/rust-project- goals#392)

      Progress |
      ---|---
      Point of contact | Ian McCormack
      Champions | compiler (Ralf Jung), opsem (Ralf Jung)
      Task owners | Ian McCormack
      No detailed updates available.

      Expand the Rust Reference to specify more aspects of the Rust language (rust- lang/rust-project-goals#394)

      Progress |
      ---|---
      Point of contact | Josh Triplett
      Champions | lang-docs (Josh Triplett), spec (Josh Triplett)
      Task owners | Amanieu d'Antras, Guillaume Gomez, Jack Huey, Josh Triplett, lcnr, Mara Bos, Vadim Petrochenkov, Jane Lusby
      No detailed updates available.

      Finish the libtest json output experiment (rust-lang/rust-project- goals#255)

      Progress |
      ---|---
      Point of contact | Ed Page
      Champions | cargo (Ed Page)
      Task owners | Ed Page

      1 detailed update available.

      Comment by @epage posted on 2025-09-16:

      Key developments:

      • libtest2
        • libtest env variables were deprecated, reducing the API surface for custom test harnesses, https://github.com/rust-lang/rust/pull/145269
        • libtest2 was updated to reflect deprecations
        • https://github.com/assert-rs/libtest2/pull/105
        • libtest2 is now mostly in shape for use
      • json schema
        • https://github.com/assert-rs/libtest2/pull/107
        • https://github.com/assert-rs/libtest2/pull/108
        • https://github.com/assert-rs/libtest2/pull/111
        • https://github.com/assert-rs/libtest2/pull/120
        • starting exploration of extension through custom messages, see https://github.com/assert-rs/libtest2/pull/122

      New areas found for further exploration

      • Failable discovery
      • Nested discovery

      Finish the std::offload module (rust-lang/rust-project- goals#109)

      Progress |
      ---|---
      Point of contact | Manuel Drehwald
      Champions | compiler (Manuel Drehwald), lang (TC)
      Task owners | Manuel Drehwald, LLVM offload/GPU contributors
      No detailed updates available.

      Getting Rust for Linux into stable Rust: compiler features (rust-lang/rust- project-goals#407)

      Progress |
      ---|---
      Point of contact | Tomas Sedovic
      Champions | compiler (Wesley Wiser)
      Task owners | (depending on the flag)
      No detailed updates available.

      Getting Rust for Linux into stable Rust: language features (rust-lang/rust- project-goals#116)

      Progress |
      ---|---
      Point of contact | Tomas Sedovic
      Champions | lang (Josh Triplett), lang-docs (TC)
      Task owners | Ding Xiang Fei
      No detailed updates available.

      Implement Open API Namespace Support (rust-lang/rust-project- goals#256)

      Progress |
      ---|---
      Point of contact | Help Wanted
      Champions | cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols)
      Task owners | b-naber, Ed Page
      No detailed updates available.

      MIR move elimination (rust-lang/rust-project- goals#396)

      Progress |
      ---|---
      Point of contact | Amanieu d'Antras
      Champions | lang (Amanieu d'Antras)
      Task owners | Amanieu d'Antras
      No detailed updates available.

      Prototype a new set of Cargo "plumbing" commands (rust-lang/rust-project- goals#264)

      Progress |
      ---|---
      Point of contact | Help Wanted
      Task owners | Help wanted, Ed Page

      1 detailed update available.

      Comment by @epage posted on 2025-09-16:

      Key developments:

      • https://github.com/crate-ci/cargo-plumbing/pull/53
      • https://github.com/crate-ci/cargo-plumbing/pull/62
      • https://github.com/crate-ci/cargo-plumbing/pull/68
      • https://github.com/crate-ci/cargo-plumbing/pull/96
      • Further schema discussions at https://github.com/crate-ci/cargo-plumbing/discussions/18
      • Writing up https://github.com/crate-ci/cargo-plumbing/issues/82

      Major obstacles

      • Cargo, being designed for itself, doesn't allow working with arbitrary data, see https://github.com/crate-ci/cargo-plumbing/issues/82

      Prototype Cargo build analysis (rust-lang/rust-project- goals#398)

      Progress |
      ---|---
      Point of contact | Weihang Lo
      Champions | cargo (Weihang Lo)
      Task owners | Help wanted Weihang Lo, Weihang Lo
      No detailed updates available.

      reflection and comptime (rust-lang/rust-project- goals#406)

      Progress |
      ---|---
      Point of contact | Oliver Scherer
      Champions | compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett)
      Task owners | oli-obk
      No detailed updates available.

      Rework Cargo Build Dir Layout (rust-lang/rust-project- goals#401)

      Progress |
      ---|---
      Point of contact | Ross Sullivan
      Champions | cargo (Weihang Lo)
      Task owners | Ross Sullivan
      No detailed updates available.

      Run more tests for GCC backend in the Rust's CI (rust-lang/rust-project- goals#402)

      Progress |
      ---|---
      Point of contact | Guillaume Gomez
      Champions | compiler (Wesley Wiser), infra (Marco Ieni)
      Task owners | Guillaume Gomez
      No detailed updates available.

      Rust Stabilization of MemorySanitizer and ThreadSanitizer Support (rust- lang/rust-project-goals#403)

      Progress |
      ---|---
      Point of contact | Jakob Koschel
      Task owners | [Bastian Kersting](https://github.com/1c3t3a), [Jakob Koschel](https://github.com/jakos-sec)
      No detailed updates available.

      Rust Vision Document (rust-lang/rust-project- goals#269)

      Progress |
      ---|---
      Point of contact | Niko Matsakis
      Task owners | vision team
      No detailed updates available.

      rustc-perf improvements (rust-lang/rust-project- goals#275)

      Progress |
      ---|---
      Point of contact | James
      Champions | compiler (David Wood), infra (Jakub Beránek)
      Task owners | James, Jakub Beránek, David Wood

      1 detailed update available.

      Comment by @Jamesbarford posted on 2025-09-17:

      It is possible to now run the system with two different machines on two different architectures however there is work to be done to make this more robust.

      We have worked on ironing out the last bits and pieces for dequeuing benchmarks as well as creating a new user interface to reflect multiple collectors doing work. Presently work is mostly on polishing the UI and handing edge cases through manual testing.

      Queue Work:

      • https://github.com/rust-lang/rustc-perf/pull/2212
      • https://github.com/rust-lang/rustc-perf/pull/2214
      • https://github.com/rust-lang/rustc-perf/pull/2216
      • https://github.com/rust-lang/rustc-perf/pull/2221
      • https://github.com/rust-lang/rustc-perf/pull/2226
      • https://github.com/rust-lang/rustc-perf/pull/2230
      • https://github.com/rust-lang/rustc-perf/pull/2231

      Ui:

      • https://github.com/rust-lang/rustc-perf/pull/2217
      • https://github.com/rust-lang/rustc-perf/pull/2220
      • https://github.com/rust-lang/rustc-perf/pull/2224
      • https://github.com/rust-lang/rustc-perf/pull/2227
      • https://github.com/rust-lang/rustc-perf/pull/2232
      • https://github.com/rust-lang/rustc-perf/pull/2233
      • https://github.com/rust-lang/rustc-perf/pull/2236

      Stabilize public/private dependencies (rust-lang/rust-project- goals#272)

      Progress |
      ---|---
      Point of contact | Help Wanted
      Champions | cargo (Ed Page)
      Task owners | Help wanted, Ed Page
      No detailed updates available.

      Stabilize rustdoc doc_cfg feature (rust-lang/rust-project- goals#404)

      Progress |
      ---|---
      Point of contact | Guillaume Gomez
      Champions | rustdoc (Guillaume Gomez)
      Task owners | Guillaume Gomez
      No detailed updates available.

      SVE and SME on AArch64 (rust-lang/rust-project- goals#270)

      Progress |
      ---|---
      Point of contact | David Wood
      Champions | compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras)
      Task owners | David Wood
      No detailed updates available.

      Type System Documentation (rust-lang/rust-project- goals#405)

      Progress |
      ---|---
      Point of contact | Boxy
      Champions | types (Boxy)
      Task owners | Boxy, lcnr
      No detailed updates available.

      Unsafe Fields (rust-lang/rust-project-goals#273)

      Progress |
      ---|---
      Point of contact | Jack Wrenn
      Champions | compiler (Jack Wrenn), lang (Scott McMurray)
      Task owners | Jacob Pratt, Jack Wrenn, Luca Versari
      No detailed updates available.

    22. 🔗 Kagi LLMs are bullshitters. But that doesn't mean they're not useful rss

      Note: This is a personal essay by Matt Ranger, Kagi’s head of ML In 1986, Harry Frankfurt wrote On Bullshit ( https://en.wikipedia.org/wiki/On_Bullshit ).