

  1. Writing a good CLAUDE.md | HumanLayer Blog
  2. My Current global CLAUDE.md
  3. About KeePassXC’s Code Quality Control – KeePassXC
  4. How to build a remarkable command palette
  5. Leaderboard - compar:IA, the AI chatbot arena

  1. December 04, 2025
    1. 🔗 Console.dev newsletter Lima rss

      Description: Linux VMs + Containers.

      What we like: Quickly launch Linux VMs from the terminal. Designed for running containers inside the VM, it includes tools for filesystem mounts, port forwarding, GPU acceleration, and Intel/Arm emulation. Easy config of CPUs, memory, etc. via the CLI. Can run in CI. Useful for sandboxing AI agents.

      What we dislike: Supports most, but not all Linux distros (has some minimum requirements). Windows usage is experimental.

    2. 🔗 Console.dev newsletter gitmal rss

      Description: Static page generator for Git repos.

      What we like: Generates a static repo browser for any Git repo. The site includes commits, branches, and a file explorer with source code highlighted. The UI is themeable.

      What we dislike: No dark mode auto-switching. Can take a long time to generate for big repos.

  2. December 03, 2025
    1. 🔗 r/reverseengineering Interview with RollerCoaster Tycoon’s Creator, Chris Sawyer rss
    2. 🔗 hyprwm/Hyprland v0.52.2 release

      Another patch release with a few fixes backported on top of 0.52.1.

      Fixes Backported

      • presentation: only send sync output on presented (#12255)
      • renderer: fix noscreenshare layerrule popups (#12260)
      • renderer/ime: fix fcitx5 popup artifacts (#12263)
      • screencopy: fix possible crash in renderMon
      • internal: put Linux-only header behind ifdef (#12300)
      • internal: fix crash at startup on freebsd (#12298)
      • cmake,meson: fix inclusion of gpg info in git commit info (#12302)
      • cursor: ensure cursor reset on changed window states (#12301)
      • plugin/hook: disallow multiple hooks per function (#12320)
      • protocols/workspace: fix crash in initial group sending
      • renderer: stop looping over null texture surfaces (#12446)
      • protocols/workspace: avoid crash on inert outputs
      • buffers: revert state merging (#12461)
      • protocols/lock: fix missing output enter on surface (#12448)
      • dmabuf: sys/ioctl is required for ioctl (#12483)

      Special thanks

      Special thanks as always to:

      Our sponsors

      Diamond

      37Signals

      Gold

      Framework

      Donators

      Top Supporters:

      --, mukaro, Semtex, Tom94, soy_3l.beantser, SaltyIcetea, Freya Elizabeth Goins, lzieniew, Kay, ExBhal, MasterHowToLearn, 3RM, Tonao Paneguini, Sierra Layla Vithica, Anon2033, Brandon Wang, DHH, alexmanman5, Theory_Lukas, Blake- sama, Seishin, Hunter Wesson, Illyan, TyrHeimdal, elafarge, Arkevius, d, RaymondLC92, MadCatX, johndoe42, alukortti, Jas Singh, taigrr, Xoores, ari- cake, EncryptedEnigma

      New Monthly Supporters:

      KongrooParadox, Jason Zimdars, grateful anon, Rafael Martins, Lu, Jan, Yves, Luiz Aquino, navik, EvgenyRachlenko, GENARO LOYA DOUR, trustable0370, Jorge Y. C. Rodriguez, Bobby Rivera, steven_s, Pavel Dušek, Toshitaka Agata, mandrav

      One-time Donators:

      ryorichie, shikaji, tskulbru, szczot3k, Vincent F, myname0101, MirasM, Daniel Doherty, giri, rasa, potato, Jams Mendez, collin, koss054, LouisW, Mattisba, visooo, Razorflak, ProPatte, sgt, Bouni, EarthsonLu, W, Faab, Kenan Sharifli, ArchXceed, benvonh, J.P. Wing, 0xVoodoo, ayhan, Miray Gohan, quiron, August Lilleaas, ~hommel, Ethan Webb, fraccy, Kevin, Carlos Solórzano Cerdas, kastr, jmota, pch, darksun, JoseConseco, Maxime Gagne, joegas, Guido V, RedShed, Shane, philweber, romulus, nuelle, Nick M, Mustapha Mond, bfester, Alvin Lin, 4everN00b, riad33m, astraccato, spirossi, drxm1, anon, conig, Jonas Thern, Keli, Martin, gianu, Kevin K, @TealRaya, Benji, Borissimo, Ebbo, John, zoth, pampampampampamponponponponponponpampampampa, Himayat, Alican, curu, stelman, Q, frigidplatypus, Dan Page, Buzzard, mknpcz, bbutkovic, neonvoid, Pim Polderman, Marsimplodation, cloudscripting, StevenWalter, i_am_terence, mester, Jacob Delarosa, hl, alex, zusemat, LRVR, MichelDucartier, Jon Fredeen, Chris, maxx, Selim, Victor Rosenthal, Luis Gonzalez, say10, mcmoodoo, Grmume, Nilpointer, Lad, Pathief, Larguma, benniheiss, cannikin, NoeL, hyprcroc, Sven Krause, Matej Drobnič, vjg73_Gandhi2, SotoEstevez, jeroenvlek, SymphonySimper, simplectic, tricked, Kacper, nehalandrew, Jan Ihnen, Blub, Jonwin, tucker87, outi, chrisxmtls, pseudo, NotAriaN, ckoblue, xff, hellofriendo, Arto Olli, Jett Thedell, Momo On Code, MrFry, stjernstrom, nastymatt, iDie, IgorJ, andresfdz7, Joshua, Koko, joenu, HakierGrzonzo, codestothestars, Jrballesteros05, hanjoe, Quantumplation, mentalAdventurer, Sebastian Grant, Reptak, kiocone, dfsdfs, cdevroe, nemalex, Somebody, Nates, Luan Pinheiro, drm, Misha Andreev, Cedric

      And all hyprperks members!

      Full Changelog : v0.52.1...v0.52.2

    3. 🔗 r/reverseengineering Analogue 3D vs MiSTer FPGA; two separate reverse engineered FPGA cores rss
    4. 🔗 r/LocalLLaMA 8 local LLMs on a single Strix Halo debating whether a hot dog is a sandwich rss

      submitted by /u/jfowers_amd
      [link] [comments]

    5. 🔗 r/LocalLLaMA Micron Announces Exit from Crucial Consumer Business rss

      Technically speaking, we're screwed.

      submitted by /u/FullstackSensei
      [link] [comments]

    6. 🔗 News Minimalist 🐢 WHO releases first obesity drug guideline + 8 more stories rss

      In the last 5 days ChatGPT read 153185 top news stories. After removing previously covered events, there are 9 articles with a significance score over 5.5.

      [6.2] WHO releases first global guideline on GLP-1 medicines for obesity treatment — who.int (+59)

      The World Health Organization has issued its first global guideline, conditionally recommending the use of GLP-1 medications for adults living with obesity as a chronic disease.

      The guideline follows the drugs' September 2025 addition to the Essential Medicines List for diabetes. The obesity recommendation is conditional due to limited long-term data, high costs, and concerns about equitable access for patients worldwide.

      WHO emphasizes medication is not a standalone solution and projects that fewer than 10% of eligible people will have access to the therapies by 2030, urging action on affordability and manufacturing.

      [6.5] New injectable HIV prevention drug launches in South Africa, Eswatini, and Zambia — afpbb.com (Japanese) (+35)

      South Africa, Eswatini, and Zambia on Monday began Africa’s first public rollout of a twice-yearly injectable HIV preventative, Lenacapavir, which is over 99.9% effective.

      The program, initially a study tracking 2,000 people, is funded by Unitaid. South Africa plans a nationwide expansion next year, while the U.S. is supporting the rollout in Zambia and Eswatini.

      The injection offers an alternative to daily PrEP pills in a region with over half the world's HIV cases. A generic version is expected after 2027 for about $40 annually.

      [6.1] DeepSeek releases powerful, open-source AI models that rival top competitors — techradar.com (+13)

      Chinese startup DeepSeek has released powerful open-source AI models that rival top US competitors, a move intensifying the global AI race and challenging established industry leaders.

      The new models reportedly match or outperform competitors like GPT-5 in complex reasoning and coding. Their unique architecture significantly reduces operational costs, making elite AI performance more accessible and cheaper to deploy.

      Released under an open MIT license, the models fuel innovation but also raise geopolitical and data privacy concerns in Western countries due to the company's Chinese origins.

      Highly covered news with significance over 5.5

      [5.9] Russia and US fail to reach compromise on Ukraine peace deal — rte.ie (+553)

      [5.5] Amazon unveils new AI chips and strengthens Nvidia partnership to expand cloud capacity [$] — cnbc.com (+18)

      [5.6] Synopsys and Nvidia partner to accelerate industrial design [$] — cnbc.com (+13)

      [5.6] AI demand fuels global memory chip shortage and price hikes — japantimes.co.jp (+9)

      [5.7] Dutch government pays 163 million euros to stop gas extraction under Wadden Sea — nos.nl (Dutch) (+2)

      [5.9] Researchers map 23,000 technologies, revealing their age and trajectory — techxplore.com (+4)

      Thanks for reading!

      — Vadim


      You can create your own significance-based RSS feed with premium.


      Powered by beehiiv

    7. 🔗 @HexRaysSA@infosec.exchange ⚡ NEW CUSTOMER CYBER WEEK PROMO ⚡ mastodon

      ⚡ NEW CUSTOMER CYBER WEEK PROMO ⚡
      We're offering 50% off any IDA Pro product for new customers!

      To take advantage of this limited-time offer, use promo code CYBER50 at checkout. Or email sales@hex-rays.com.

      Cannot be combined with any other discount.
      50% off offer valid for new individual customers only, not corporations.
      New corporate customers are eligible for 40% off.
      Not applicable to upgrades or renewals.
      All new customers are required to pass the KYC process to receive the discount and license(s).
      Offer ends December 8, 2025 @ 11:59 pm CET. https://hex-rays.com/pricing

    8. 🔗 r/LocalLLaMA Chinese startup founded by Google engineer claims to have developed its own TPU, reportedly 1.5 times faster than Nvidia A100. rss
    9. 🔗 seanmonstar hyper-util Composable Pools rss

      I’m so excited to announce hyper’s new composable pool layers!¹

      As part of making reqwest more modular, we’ve designed a new connection pool, and made the pieces available in hyper_util::client::pool. But this is more than just a “hey, we have a Pool, it moved over there.” We’ve literally pulled apart the pool, in a way I haven’t found elsewhere.

      Building a purpose‑specific pool is now straightforward. Add the features you want, even custom ones, and skip the bloat, no forks required.

      Read on to see what exactly we solved, how, and what comes next. If you just want to use them, here’s the docs. Everyone else, let’s dive in.

      We started with the users

      We started with the users, looking back over past issues filed, common questions in chat, and private conversations explaining what they needed to do. Boiled down, that got us to these requirements:

      • A full-featured pool, like the one in legacy, must be possible.
      • Microservices shouldn’t have to handle multiple protocols or hostnames.
      • Some clients need custom keys for the pool.
      • Others need to limit new connections made at a time.
      • Or cap the total number of connections.
      • Customize connection expiration based on idle time, max lifetime, or even poisoning.
      • And importantly, allow custom logic not already thought of.

      From past experience combining middleware, I had a strong feeling the pool requirements could be broken up into tower layers. But what would that even look like? Would it be horrible to use?

      To answer that, we took the requirements and considered the developer experience of using layers. It had to feel nice. Not just to write, but also to come back to and read.

      I then sketched out several of these layers to make sure they could actually work. Once most of it was working, the proposal was ready.

      The initial 4 working pools

      No plan survives contact with the enemy. We originally proposed five pool types, but launched with just the following four: singleton, cache, negotiate, and map.

      The singleton pool wraps a connector² that should only produce a single active connection. It bundles all concurrent calls so only one connection is made. All calls to the singleton will return a clone of the inner service once established. This fits the HTTP/2 case well.

      The cache pool maintains a list of cached services produced by a connector. Calling the cache returns either an existing service, or makes a new one. When dropped, the cached service is returned to the cache if possible. Importantly for performance, the cache supports connection racing, just like the legacy pool.

      The negotiate pool allows for a service that can decide between two service types based on an intermediate return value. Unlike typical routing, it makes decisions based on the response (the connection) rather than the request. The main use case is supporting ALPN upgrades to HTTP/2, with a fallback to HTTP/1. And its design allows combining two different pooling strategies.

      The map pool isn’t a typical service like the other pools, but rather is a stand-alone type that maps requests to keys and connectors. As a kind of router, it cannot determine which inner service to check for backpressure until the request is made. The map implementation allows customization of extracting a key, and how to construct a connector for that key.
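
      Putting those descriptions together, the simplest composition looks roughly like the sketch below; it reuses only the names that appear in the reqwest example later in this post (pool::cache, pool::singleton, pool::negotiate, conn::http1/http2), so read it as a shape rather than exact signatures, with exec and connector standing in for an executor and a connector defined elsewhere:

      // Hedged sketch: HTTP/1 connections are cached, the HTTP/2 connection is
      // shared via a singleton, and negotiate picks between them based on what
      // the established connection supports.
      let h1 = (pool::cache(exec), conn::http1());    // cached, racing HTTP/1 connections
      let h2 = (pool::singleton(), conn::http2());    // one shared HTTP/2 connection

      let pooled = pool::negotiate::builder()
          .fallback(h1)                               // no ALPN h2: use the HTTP/1 cache
          .upgrade(h2)                                // ALPN h2: use the singleton
          .inspect(|conn| conn.is_negotiated_h2())
          .connect(connector)
          .build();
      // pool::map can then key one such stack per destination, as shown in the
      // reqwest example below.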

      Ineffably unstable

      I knew this work would land in hyper-util first, because it’s not stable yet. Being so freshly designed, changes are expected after some more real-world usage. Still, I wanted to shield earlier adopters from breaking changes. At the same time, valuing performance and flexibility, I wanted to push as much as reasonably possible into the type system.

      When initially tinkering during the summer, I had one of those thoughts. The kind that clangs like a giant lock snapping open: what about type-state builders and unnameable types? I took a side quest, and tackled the warp v0.4 upgrade, to test out this API design. That post explains it a bit more.

      The various threads were all coming together.

      With each pool concept a tower service, once composed, a user shouldn’t care what it is beyond being some impl Service. I tested this out in reqwest, and yea, I don’t need to name the types. While I did need a type, I was able to store a dyn Service, and inference handled the rest.

      Real world usage: in reqwest

      Once those main pieces seemed ready, I needed a real example to test drive them. Tool-makers that don’t use their tools make bad tools, after all.

      I started by replacing the legacy pool inside reqwest. Part of the larger diff in reqwest is handling all of reqwest’s different pool configuration options.

      But, putting the default case together is pretty self-explanatory:

      // Note: some noise has been trimmed
      let http1 = (
          pool::cache(exec),
          util::http1_request_target(),
          util::http1_set_host(),
          util::meta(MyMetaIdleAt::new),
          conn::http1(),
      );
      
      let http2 = (
          pool::singleton(),
          conn::http2(),
      );
      
      let pool_layers = tower::layer::layer_fn(move |svc| {
          pool::negotiate::builder()
              .fallback(http1.clone())
              .upgrade(http2.clone())
              .inspect(|conn| conn.is_negotiated_h2())
              .connect(svc)
              .build()
      });
      
      let pool_map = pool::map::builder::<http::Uri>()
          .keys(|dst| scheme_and_auth(dst))
          .values(move |_dst| {
              pool_layers.layer(connector.clone())
          })
          .build();
      

      And it works! Making the full-featured pool was one of the requirements: check. But, the next part was even more important.

      As I mentioned before, I punted one of the proposed types: expire. Expiration is a necessary concept for a pool. But try as I might to fit the various generic shapes, it just wasn’t happening. Thankfully, this work had a hard deadline. And deadlines keep you user-driven: let them have something now, it can always be better later.

      To prove the general design allowed expiration, I implemented a specific version of it directly in reqwest.

      tokio::spawn(async move {
          loop {
              tokio::time::sleep(idle_dur).await;
              let now = Instant::now();
              let Some(pool) = pool_map.upgrade() else { return };
      
              pool.lock().unwrap().retain(|_key, svc| {
                  svc.fallback_mut().retain(|svc| {
                      if svc.inner().inner().inner().is_closed() {
                          return false;
                      }
      
                      if let Some(idle_at) = svc.meta().idle_at {
                          return now <= idle_at + idle_dur; // keep connections still inside the idle window
                      }
                      true
                  });
                  svc.upgrade_mut().retain(|svc| {
                      !svc.is_closed()
                  });
                  !svc.fallback_mut().is_empty() || !svc.upgrade_mut().is_empty()
              });
          }
      });
      

      The ease of adding it helped solidify for me that this was definitely the right design. I was able to slot in a meta layer tracking idle time, and then use that to retain services. I placed that layer right next to some of the other HTTP/1-specific layers. Easy!

      Being modular opens up customization

      With the ability to build a stack for your pool, consider an example of how we can start to solve other requirements listed earlier.

      let svc = ServiceBuilder::new()
          // cached connections are unaware of the limit
          .layer(pool::cache())
          // in-flight handshakes are limited
          .concurrency_limit(5)    
          .layer(conn::http1())
          .service(connect::tcp());
      

      It also allows adding layers we don’t currently have, such as a per-host connection semaphore, or, a few layers up, one over all hosts. Adding new functionality isn’t blocked on us, and no one has to “pay” for features they don’t need.
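
      As a rough sketch of the per-host idea (reusing the pool::map, pool::cache, conn::http1, and connect::tcp names from the examples above, so the exact signatures are assumptions): because the map pool builds one connector stack per key, a concurrency limit placed inside its values closure naturally applies per host.

      // Hedged sketch: each scheme + authority key gets its own stack, so the
      // concurrency limit applies per host rather than across the whole client.
      let pool = pool::map::builder::<http::Uri>()
          .keys(|dst| scheme_and_auth(dst))
          .values(|_dst| {
              ServiceBuilder::new()
                  .layer(pool::cache())
                  .concurrency_limit(5)       // at most 5 in-flight handshakes per host
                  .layer(conn::http1())
                  .service(connect::tcp())
          })
          .build();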

      I can’t wait to see what else is done with the design!

      Pools ready

      The hyper_util::client::pool module is now available in v0.1.19. Go check the docs, and try to build cool things. Please file issues if parts are missing, we’ll keep iterating.

      I’ve been working on this feature set for a long time. It’s something I started thinking about years ago, and after months of work this year, it feels awesome to finally be able to release it.

      Thanks to my sponsors, retainers, and grants for making this all possible!

      1. I mean, who isn’t excited to announce anything? /s 

      2. All “connectors” are actually MakeServices, which are just a Service that produces a Service. It doesn’t have to create a connection, but it reads better when talking about pools. 

    10. 🔗 HexRaysSA/plugin-repository commits sync repo: +3 plugins, +3 releases rss
      sync repo: +3 plugins, +3 releases
      
      ## New plugins
      - [EmuIt](https://github.com/AzzOnFire/emuit) (0.8.1)
      - [gepetto](https://github.com/JusticeRage/Gepetto) (1.5.0)
      - [icp](https://github.com/rand-tech/idaplugins) (1.3)
      
    11. 🔗 @cxiao@infosec.exchange mentally im here mastodon

      mentally im here
      https://youtu.be/8Z9RTdj93o8

    12. 🔗 r/LocalLLaMA Who’s got them Q_001_X_S_REAP Mistral Large 3 GGUFs? rss

      I’m looking at you, Unsloth 😁

      submitted by /u/Porespellar
      [link] [comments]

    13. 🔗 Rust Blog Lessons learned from the Rust Vision Doc process rss

      Starting earlier this year, a group of us set on a crazy quest: to author a "Rust vision doc". As we described it in the original project goal proposal:

      The Rust Vision Doc will summarize the state of Rust adoption -- where is Rust adding value? what works well? what doesn't? -- based on conversations with individual Rust users from different communities, major Rust projects, and companies large and small that are adopting Rust.

      Over the course of this year, the Vision Doc group has gathered up a lot of data. We began with a broad-based survey that got about 4200 responses. After that, we conducted over 70 interviews, each one about 45 minutes, with as broad a set of Rust users as we could find¹.

      This is the first of a series of blog posts covering what we learned throughout that process and what recommendations we have to offer as a result. This first post is going to go broad. We'll discuss the process we used and where we think it could be improved going forward. We'll talk about some of the big themes we heard -- some that were surprising and others that were, well, not surprising at all. Finally, we'll close with some recommendations for how the project might do more work like this in the future.

      The questions we were trying to answer

      One of the first things we did in starting out with the vision doc was to meet with a User Research expert, Holly Ellis, who gave us a quick tutorial on how User Research works². Working with her, we laid out a set of research questions that we wanted to answer. Our first cut was very broad, covering three themes:

      • Rust the technology:
        • "How does Rust fit into the overall language landscape? What is Rust's mission?"
        • "What brings people to Rust and why do they choose to use it for a particular problem...?"
        • "What would help Rust to succeed in these domains...?" (e.g., network systems, embedded)
        • "How can we scale Rust to industry-wide adoption? And how can we ensure that, as we do so, we continue to have a happy, joyful open-source community?"
      • Rust the global project:
        • "How can we improve the experience of using Rust for people across the globe?"
        • "How can we improve the experience of contributing to and maintaining Rust for people across the globe?"
      • Rust the open-source project:
        • "How can we tap into the knowledge, experience, and enthusiasm of a growing Rust userbase to improve Rust?"
        • "How can we ensure that individual or volunteer Rust maintainers are well-supported?"
        • "What is the right model for Foundation-project interaction?"

      Step 1: Broad-based survey

      Before embarking on individual interviews, we wanted to get a broad snapshot of Rust usage. We also wanted to find a base of people that we could talk to. We created a survey that asked a few short "demographic" questions -- e.g., where does the respondent live, what domains do they work on, how would they rate their experience -- and some open-ended questions about their journey to Rust, what kind of projects they feel are a good fit for Rust, what they found challenging when learning, etc. It also asked for (optional) contact information.

      We got a LOT of responses -- over 4200! Analyzing this much data is not easy, and we were very grateful to Kapiche, who offered us free use of their tool to work through the data. ❤

      The survey is useful in two ways. First, it's an interesting data-set in its own right, although you have to be aware of selection bias. Second, the survey also gave us something that we can use to cross-validate some of what we heard in 1:1 interviews and to look for themes we might otherwise have missed. And of course it gave us additional names of people we can talk to (though most respondents didn't leave contact information).

      Step 2: Interviewing individuals

      The next step after the survey was to get out there and talk to people. We sourced people from a lot of places: the survey and personal contacts, of course, but we also sat down with people at conferences and went to meetups. We even went to a Python meetup in an effort to find people who were a bit outside the usual "Rust circle".

      When interviewing people, the basic insight of User Experience research is that you don't necessarily ask people the exact questions you want to answer. That is likely to get them speculating and giving you the answer that they think they "ought" to say. Instead, you come at it sideways. You ask them factual, non-leading questions. In other words, you certainly don't say, "Do you agree the borrow checker is really hard?" And you probably don't even say, "What is the biggest pain point you had with Rust?" Instead, you might say, "What was the last time you felt confused by an error message?" And then go from there, "Is this a typical example? If not, what's another case where you felt confused?"

      To be honest, these sorts of "extremely non-leading questions" are kind of difficult to do. But they can uncover some surprising results.

      We got answers -- but not all the answers we wanted

      4200 survey responses and 70 interviews later, we got a lot of information -- but we still don't feel like we have the answers to some of the biggest questions. Given the kinds of questions we asked, we got a pretty good view on the kinds of things people love about Rust and what it offers relative to other languages. We got a sense for the broad areas that people find challenging. We also learned a few things about how the Rust project interacts with others and how things vary across the globe.

      What we really don't have is enough data to say "if you do X, Y, and Z, that will really unblock Rust adoption in this domain". We just didn't get into enough technical detail, for example, to give guidance on which features ought to be prioritized, or to help answer specific design questions that the lang or libs team may consider.

      One big lesson: there are only 24 hours in a day

      One of the things we learned was that you need to stay focused. There were so many questions we wanted to ask, but only so much time in which to do so. Ultimately, we wound up narrowing our scope in several ways:

      • we focused primarily on the individual developer experience, and only had minimal discussion with companies as a whole;
      • we dove fairly deep into one area (the Safety Critical domain) but didn't go as deep into the details of other domains;
      • we focused primarily on Rust adoption, and in particular did not even attempt to answer the questions about "Rust the open-source project".

      Another big lesson: haters gonna... stay quiet?

      One thing we found surprisingly difficult was finding people to interview who didn't like Rust. 49% of survey respondents, for example, rated their Rust comfort as 4 or 5 out of 5, and only 18.5% said 1 or 2. And of those, only a handful gave contact information.

      It turns out that people who think Rust isn't worth using mostly don't read the Rust blog or want to talk about that with a bunch of Rust fanatics.³ This is a shame, of course, as those folks likely have a lot to teach us about the boundaries of where Rust adds value. We are currently doing some targeted outreach in an attempt to grow our scope here, so stay tuned; we may get more data.

      One fun fact: enums are Rust's underappreciated superpower

      We will do a deeper dive into the things people say that they like about Rust later (hint: performance and reliability both make the cut). One interesting thing we found was the number of people that talked specifically about Rust enums, which allow you to package up the state of your program along with the data it has available in that state. Enums are a concept that Rust adapted from functional languages like OCaml and Haskell and fit into the system programming setting.

      "The usage of Enum is a new concept for me. And I like this concept. It's not a class and it's not just a boolean, limited to false or true. It has different states." -- New Rust developer

      "Tagged unions. I don't think I've seriously used another production language which has that. Whenever I go back to a different language I really miss that as a way of accurately modeling the domain." -- Embedded developer

      Where do we go from here? Create a user research team

      When we set out to write the vision doc, we imagined that it would take the form of an RFC. We imagined that RFC identifying key focus areas for Rust and making other kinds of recommendations. Now that we've been through it, we don't think we have the data we need to write that kind of RFC (and we're also not sure if that's the right kind of RFC to write). But we did learn a lot and we are convinced of the importance of this kind of work.

      Therefore, our plan is to do the following. First, we're going to write up a series of blog posts diving into what we learned about our research questions, along with other kinds of questions that we encountered as we went.

      Second, we plan to author an RFC proposing a dedicated user research team for the Rust org. The role of this team would be to gather data of all forms (interviews, surveys, etc) and make it available to the Rust project. And whenever they can, they would help to connect Rust customers directly with people extending and improving Rust.

      The vision doc process was in many ways our first foray into this kind of research, and it taught us a few things:

      • First, we have to go broad and deep. For this first round, we focused on high-level questions about people's experiences with Rust, and we didn't get deep into technical blockers. This gives us a good overview but limits the depth of recommendations we can make.
      • Second, to answer specific questions we need to do specific research. One of our hypotheses was that we could use UX interviews to help decide thorny questions that come up in RFCs -- e.g., the notorious debate between await x and x.await from yesteryear. What we learned is "sort of". The broad interviews we did did give us information about what kinds of things are important to people (e.g., convenience vs reliability, and so forth), and we'll cover some of that in upcoming write-ups. But to shed light on specific questions (e.g., "will x.await be confused for a field access") will really require more specific research. This may be interviews but it could also be other kinds of tests. These are all things though that a user research team could help with.
      • Third, we should find ways to "open the data" and publish results incrementally. We conducted all of our interviews with a strong guarantee of privacy and we expect to delete the information we've gathered once this project wraps up. Our goal was to ensure people could talk in an unfiltered way. This should always be an option we offer people -- but that level of privacy has a cost, which is that we are not able to share the raw data, even widely across the Rust teams, and (worse) people have to wait for us to do analysis before they can learn anything. This won't work for a long-running team. At the same time, even for seemingly innocuous conversations, posting full transcripts of conversations openly on the internet may not be the best option, so we need to find a sensible compromise.

      • "As wide a variety of Rust users as we could find " -- the last part is important. One of the weaknesses of this work is that we wanted to hear from more Rust skeptics than we did.

      • Thanks Holly! We are ever in your debt.

      • Shocking, I know. But, actually, it is a little -- most programmers love telling you how much they hate everything you do, in my experience?

    14. 🔗 Rust Blog crates.io: Malicious crates evm-units and uniswap-utils rss

      Summary

      On December 2nd, the crates.io team was notified by Olivia Brown from the Socket Threat Research Team of two malicious crates which were downloading a payload that was likely attempting to steal cryptocurrency.

      These crates were:

      • evm-units - 13 versions published in April 2025, downloaded 7257 times
      • uniswap-utils - 14 versions published in April 2025, downloaded 7441 times, used evm-units as a dependency

      Actions taken

      The user in question, ablerust, was immediately disabled, and the crates in question were deleted from crates.io shortly after. We have retained the malicious crate files for further analysis.

      The deletions were performed at 22:01 UTC on December 2nd.

      Analysis

      Socket has published their analysis in a blog post.

      These crates had no dependent downstream crates on crates.io.

      Thanks

      Our thanks to Olivia Brown from the Socket Threat Research Team for reporting the crates. We also want to thank Carol Nichols from the crates.io team and Walter Pearce and Adam Harvey from the Rust Foundation for aiding in the response.

    15. 🔗 Mitchell Hashimoto Ghostty Is Now Non-Profit rss
      (empty)
  3. December 02, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-02 rss

      IDA Plugin Updates on 2025-12-02

      New Releases:

      Activity:

      • diffrays
        • 6a63def7: Add auto issue assignment workflow
        • db288f0f: Add auto issue assignment workflow
        • 70715b4a: Deleted auto issue assignment workflow
        • ac5850c2: Add auto issue assignment workflow
        • 4584c6df: Add auto issue assignment workflow
        • f0cec8ab: Add auto issue assignment workflow
      • ghidra
        • a0acfb8f: Merge remote-tracking branch 'origin/Ghidra_12.0'
        • f901a1bb: GP-0: Upping gradle wrapper version to 9.2.1
        • 99987885: GP-0: Fixing javadoc
        • 3d0da548: Merge remote-tracking branch 'origin/GP-6176_ryanmkurtz_objc-refactor'
        • 17ac51c4: GP-6176: Refactored Objective-C type metadata analyzers
        • aabeb6d6: Merge remote-tracking branch 'origin/GP-0-dragonmacher-test-fixes-12-…
        • 44ee4636: Merge remote-tracking branch 'origin/GP-1-dragonmacher-flow-arrow-npe'
        • 95b96e31: Merge remote-tracking branch 'origin/GP-1-dragonmacher-help-location-…
        • d8f3960f: Fix for flow arrow NPE
      • IDA-VTableExplorer
        • e770cb35: fix: Simplify IDA SDK prerequisites in README
        • 503f36a6: Refactor code structure for improved readability and maintainability
        • 219134c8: feat: Add screenshots and images to README for better visualization o…
      • quokka
        • 812f87cc: Merge pull request #65 from quarkslab/dependabot/github_actions/actio…
        • 3486fa93: Bump the actions group across 1 directory with 8 updates
    2. 🔗 idursun/jjui v0.9.7 release

      🎉 This release enhances performance and introduces stability improvements in log parsing and command execution. It also takes back some of the stability by adding basic mouse support.

      Features

      📈 Performance Improvements

      Implemented frame-rate limited rendering (capped at 120 FPS) to significantly improve application performance by deferring view generation until the next frame tick. This addresses the slowest path in the application - view generation - making it much more responsive.

      🐭 Mouse Support

      • Clickable and scrollable revisions view (excluding operations) (#391)
      • Clickable and scrollable op log view
      • Draggable and scrollable (vertically + horizontally) preview pane
      • Scrollable diff window
      • Replaced custom viewport with bubbles/viewport for more responsive rendering (#396)

      mouse.mp4

      🔎 Preview

      • Replaced surrounding border with divider for more preview space (#396)

      Operation Log (oplog)

      • Added jj op revert functionality with R key binding (#400)

      Revset Handling

      • Pressing up arrow in empty revset field now sets the current revset (#284)
      • Fixed mismatch where empty revset input would use config default instead of session default from -r flag. Now correctly updates CurrentRevset to session default instead of empty string (#399)

      Details View

      • Allow quitting from details view when quit key is pressed (e.g. q)

      Bug Fixes

      Rendering Issues

      • Fixed double rendering of inline describe content when next line contains only revision line. Added revisionLineRendered tracking flag to properly sequence the description overlay rendering (#403, #369)
      • Fixed viewport adjustment when PageDown moves cursor below viewport. The renderer now continues rendering until it reaches the focused item, ensuring proper viewport adjustment (#395)
      • Fixed PageUp/Down navigation at top and bottom of revset when less than one page remains. Now includes early return with feedback message when already at boundary (#387, #386)
      • Removed incorrect space trimming in renderer (#393)

      Log Processing

      • Applied partial fixes to prevent out-of-order row updates in log streamer

      Details View

      • Handle cases where conflict markers span multiple lines (#398)
      • Ignore virtual selection on refresh (#381)

      Operation Log (oplog)

      • Fixed an issue in operation ID detection (#380, #377)

      Command Execution

      • Added a mechanism for restoring failed commands to input field, allowing retries without retyping (#392)

      Template System

      • Enhanced jj log parsing using native template prefixes for better change ID and commit ID detection. Fixes issues when bookmarks are "HexLike" (#358, #228, #372)

      What's Changed

      • Remove teatest package and simplify tests by @idursun in #379
      • internal/parser: get revision ids with template prefixes by @baggiiiie in #372
      • fix(oplog): improve operation id detection by @idursun in #380
      • refactor: serialise command execution by @idursun in #378
      • Revert "refactor: serialise command execution" by @idursun in #383
      • revisions: fix scrolling at the top and bottom of revset by @baggiiiie in #387
      • refactor: introduce and implement common.Model by @idursun in #384
      • refactor: Replace custom cell buffer implementation with cellbuf package by @idursun in #388
      • refactor: use simple layout functions to lay out the main UI by @idursun in #389
      • Coming back to previous state when exec command failed by @ArnaudBger in #392
      • feat: add basic mouse support by @idursun in #391
      • list/renderer: fix viewport adjustment on PageDown by @baggiiiie in #395
      • Make preview horizontally scrollable by @idursun in #396
      • revset: fix revset not using default when empty by @baggiiiie in #399
      • operation: add op log revert by @baggiiiie in #400
      • revision: fix double rendering of inline describe content by @baggiiiie in #403
      • refactor: replace usages of scattered width/height pairs with ViewNode by @idursun in #401
      • describe: catch cursor blinking to avoid unnecessary rendering by @baggiiiie in #404

      New Contributors

      Full Changelog : v0.9.6...v0.9.7

    3. 🔗 r/LocalLLaMA I'm surprised how simple Qwen3 VL's architecture is. rss

      The new 3D position id logic got a lot more intuitive compared to Qwen2.5 VL: it basically indexes image patches on the width and height dimensions in addition to the regular token sequence / temporal dimension (while treating text as one same number across all 3 dimensions). In addition to this, they added DeepStack, which essentially is just some residual connections between vision encoder blocks and downstream LLM blocks. Here's the full repo if you want to read more: https://github.com/Emericen/tiny-qwen

      submitted by /u/No-Compote-6794
      [link] [comments]

    4. 🔗 sharkdp/bat v0.26.1 release

      v0.26.1

      Features

      Bugfixes

      • Fix hang when using --list-themes with an explicit pager, see #3457 (@abhinavcool42)
      • Fix negative values of N not being parsed in line ranges without = flag value separator, see #3442 (@lmmx)
      • Fix broken Docker syntax preventing use of custom assets, see #3476 (@keith-hall)
      • Fix decorations being applied unexpectedly when piping. Now only line numbers explicitly required on the command line should be applied in auto decorations mode for cat compatibility. See #3496 (@keith-hall)
      • Fix diagnostics attempting to find the version of an executable named builtin when builtin pager is used. See #3498 (@keith-hall)
      • --help now correctly reads the config file for theme information etc. See #3507 (@keith-hall)

      Other

      • Improve README documentation on pager options passed to less, see #3443 (@injust)
      • Make PowerShell completions compatible with PowerShell v5.1, see #3495 (@keith-hall)
      • Use more robust approach to escaping in Bash completions, see #3448 (@akinomyoga)

      Syntaxes

      • Update quadlet syntax mapping to include *.{build,pod} files #3484 (@cyqsimon)
      • Fix inconsistencies in Ada syntax, see #3481 (@AldanTanneo)
      • Add syntax mapping for podman's artifact quadlet files, see #3497 (@xduugu)
      • Highlight Korn Shell scripts (i.e. with a shebang of ...ksh) using Bash syntax, see #3509 (@keith-hall)
    5. 🔗 r/wiesbaden Live @ The Fox and Hound Frankfurt West End rss
    6. 🔗 r/wiesbaden Children's shoes/clothing rss

      Hi, I hope it's okay to ask this here. We've been sorting things out and ended up with several boxes of children's clothing and shoes that are still in very good condition. I'd like to give them somewhere they are actually needed, especially the shoes. I don't want any money for them. Does anyone know where to turn for that?

      submitted by /u/Snargels
      [link] [comments]

    7. 🔗 r/wiesbaden English cinemas rss

      Hey!! What movie theaters in Wiesbaden play movies in English? I'm planning to watch the new FNAF2 movie in English and only know of the Citydome in Darmstadt.

      submitted by /u/Old-Bus-6698
      [link] [comments]

    8. 🔗 r/LocalLLaMA Mistral just released Mistral 3 — a full open-weight model family from 3B all the way up to 675B parameters. rss

      All models are Apache 2.0 and fully usable for research + commercial work.

      Quick breakdown:

      • Ministral 3 (3B / 8B / 14B) – compact, multimodal, and available in base, instruct, and reasoning variants. Surprisingly strong for their size.

      • Mistral Large 3 (675B MoE) – their new flagship. Strong multilingual performance, high efficiency, and one of the most capable open-weight instruct models released so far.

      Why it matters: You now get a full spectrum of open models that cover everything from on-device reasoning to large enterprise-scale intelligence. The release pushes the ecosystem further toward distributed, open AI instead of closed black-box APIs.

      Full announcement: https://mistral.ai/news/mistral-3

      submitted by /u/InternationalToe2678
      [link] [comments]

    9. 🔗 r/LocalLLaMA Mistral 3 Blog post rss

      submitted by /u/rerri
      [link] [comments]

    10. 🔗 r/reverseengineering Ghidra Copilot - Conversational Reverse Engineering Assistant rss
    11. 🔗 r/LocalLLaMA Only the real ones remember (he is still the contributor with the most likes for his models) rss

      Hugging Face space by TCTF: Top Contributors To Follow - November 2025: https://huggingface.co/spaces/TCTF/TCTF
      Team mradermacher and Bartowski on the podium, legends.
      From Yağız Çalık on 𝕏: https://x.com/Weyaxi/status/1995814979543371869

      submitted by /u/Nunki08
      [link] [comments]

    12. 🔗 Anton Zhiyanov Go proposal: Type-safe error checking rss

      Part of the Accepted! series, explaining the upcoming Go changes in simple terms.

      Introducing errors.AsType — a modern, type-safe alternative to errors.As.

      Ver. 1.26 • Stdlib • High impact

      Summary

      The new errors.AsType function is a generic version of errors.As:

      // go 1.13+
      func As(err error, target any) bool

      // go 1.26+
      func AsType[E error](err error) (E, bool)

      It's type-safe, faster, and easier to use:

      // using errors.As
      var appErr AppError
      if errors.As(err, &appErr) {
          fmt.Println("Got an AppError:", appErr)
      }

      // using errors.AsType
      if appErr, ok := errors.AsType[AppError](err); ok {
          fmt.Println("Got an AppError:", appErr)
      }

      errors.As is not deprecated (yet), but errors.AsType is recommended for new code.

      Motivation

      The errors.As function requires you to declare a variable of the target error type and pass a pointer to it:

      var appErr AppError
      if errors.As(err, &appErr) {
          fmt.Println("Got an AppError:", appErr)
      }
      

      It makes the code quite verbose, especially when checking for multiple types of errors:

      var connErr *net.OpError
      var dnsErr *net.DNSError
      
      if errors.As(err, &connErr) {
          fmt.Println("Network operation failed:", connErr.Op)
      } else if errors.As(err, &dnsErr) {
          fmt.Println("DNS resolution failed:", dnsErr.Name)
      } else {
          fmt.Println("Unknown error")
      }
      

      With a generic errors.AsType, you can specify the error type right in the function call. This makes the code shorter and keeps error variables scoped to their if blocks:

      if connErr, ok := errors.AsType[*net.OpError](err); ok {
          fmt.Println("Network operation failed:", connErr.Op)
      } else if dnsErr, ok := errors.AsType[*net.DNSError](err); ok {
          fmt.Println("DNS resolution failed:", dnsErr.Name)
      } else {
          fmt.Println("Unknown error")
      }
      

      Another issue with As is that it uses reflection and can cause runtime panics if used incorrectly (like if you pass a non-pointer or a type that doesn't implement error). While static analysis tools usually catch these issues, using the generic AsType has several benefits:

      • No reflection¹.
      • No runtime panics.
      • Fewer allocations.
      • Compile-time type safety.
      • Faster.

      Finally, AsType can handle everything that As does, so it's a drop-in improvement for new code.

      Description

      Add the AsType function to the errors package:

      // AsType finds the first error in err's tree that matches the type E,
      // and if one is found, returns that error value and true. Otherwise, it
      // returns the zero value of E and false.
      //
      // The tree consists of err itself, followed by the errors obtained by
      // repeatedly calling its Unwrap() error or Unwrap() []error method.
      // When err wraps multiple errors, AsType examines err followed by a
      // depth-first traversal of its children.
      //
      // An error err matches the type E if the type assertion err.(E) holds,
      // or if the error has a method As(any) bool such that err.As(target)
      // returns true when target is a non-nil *E. In the latter case, the As
      // method is responsible for setting target.
      func AsType[E error](err error) (E, bool)

      Recommend using AsType instead of As:

      // As finds the first error in err's tree that matches target, and if one
      // is found, sets target to that error value and returns true. Otherwise,
      // it returns false.
      // ...
      // For most uses, prefer [AsType]. As is equivalent to [AsType] but sets its
      // target argument rather than returning the matching error and doesn't require
      // its target argument to implement error.
      // ...
      func As(err error, target any) bool

      Example

      Open a file and check if the error is related to the file path:

      // go 1.25
      var pathError *fs.PathError
      if _, err := os.Open("non-existing"); err != nil {
          if errors.As(err, &pathError) {
              fmt.Println("Failed at path:", pathError.Path)
          } else {
              fmt.Println(err)
          }
      }
      
      
      
      Failed at path: non-existing
      
      
      
      // go 1.26
      if _, err := os.Open("non-existing"); err != nil {
          if pathError, ok := errors.AsType[*fs.PathError](err); ok {
              fmt.Println("Failed at path:", pathError.Path)
          } else {
              fmt.Println(err)
          }
      }
      
      
      
      Failed at path: non-existing
      

      Further reading

      𝗣 51945 • 𝗖𝗟 707235


      1. Unlike errors.As, errors.AsType doesn't use the reflect package, but it still relies on type assertions and interface checks. These operations access runtime type metadata, so AsType isn't completely "reflection-free" in the strict sense. ↩︎

      *[High impact]: Likely impact for an average Go developer

    13. 🔗 r/LocalLLaMA Would you rent B300 (Blackwell Ultra) GPUs in Mongolia at ~$5/hr? (market sanity check) rss

      I work for a small-ish team that somehow ended up with a pile of B300 (Blackwell Ultra) allocations and a half-empty data center in Ulaanbaatar (yes, the capital of Mongolia, yes, the coldest one).

      Important bit so this doesn’t sound totally random:
      ~40% of our initial build-out is already committed (local gov/enterprise workloads + two research labs). My actual job right now is to figure out what to do with the rest of the capacity — I’ve started cold-reaching a few teams in KR/JP/SG/etc., and Reddit is my “talk to actual humans” channel.

      Boss looked at the latency numbers, yelled “EUREKA,” and then voluntold me to do “market research on Reddit” because apparently that’s a legitimate business strategy in 2025.

      So here’s the deal (numbers are real, measured yesterday):

      • B300 bare-metal:$5 / GPU-hour on-demand (reserved is way lower)
      • Ping from the DC right now:
        • Beijing ~35 ms
        • Seoul ~85 ms
        • Tokyo ~95 ms
        • Singapore ~110 ms
      • Experience: full root, no hypervisor, 3.2 Tb/s InfiniBand, PyTorch + SLURM pre-installed so you don’t hate us immediately
      • Jurisdiction: hosted in Mongolia → neutral territory, no magical backdoors or surprise subpoenas from the usual suspects

      Questions I was literally told to ask (lightly edited from my boss’s Slack message):

      1. Would any team in South Korea / Japan / Singapore / Taiwan / HK / Vietnam / Indonesia actually use this instead of CoreWeave, Lambda, or the usual suspects for training/fine-tuning/inference?
      2. Does the whole cold steppe bare-metal neutrality thing sound like a real benefit or just weird marketing?
      3. How many GPUs do you normally burn through and for how long? (Boss keeps saying “everyone wants 256-GPU clusters for three years” and I’m… unconvinced.)

      Landing page my designer made at 3 a.m.: https://b300.fibo.cloud (still WIP, don’t judge the fonts).

      Thanks in advance, and sorry if this breaks any rules — I read the sidebar twice 🙂

      submitted by /u/CloudPattern1313
      [link] [comments]

    14. 🔗 r/reverseengineering Optimizing libdwarf .eh_frame enumeration rss
    15. 🔗 sacha chua :: living an awesome life 2025-12-01 Emacs news rss

      Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

      You can e-mail me at sacha@sachachua.com.

    16. 🔗 Ampcode News Amp, Inc. rss

      Amp is becoming a separate company. We're spinning out of Sourcegraph to become an independent research lab.

      Our goal: let software builders harness the full power of artificial intelligence.

      We believe the way we develop software will change. All of it will change, fundamentally and drastically. Nobody knows exactly how. We intend to find out.

      We believe that shipping is the best way to do that. We don't want to write papers about the future; we want to put it in your hands.

      Flying pig pair illustration

      Amp Inc. gives us more freedom to do that, to focus ruthlessly on the frontier, to explore the absurd and find the possible.

      Amp's traction spun us out of Sourcegraph. Amp is profitable. Now, as our own company, we can follow where it leads.

      Come with us. Let's see what's possible.

      Signed,

      Alex Kemper · Beyang Liu · Brady Jeong · Brett Jones · Camden Cheek · Connor O'Brien · Dario Hamidi · Harry Charlesworth · Hitesh Sagtani · Isuru Fonseka · Jesse Edelstein · Karl Clement · Lewis Metcalf · Nicolay Gerold · Quinn Slack · Ryan Carson · Thorsten Ball · Tim Culverhouse · Tim Lucas · Will Dollman

      Co-founders of Amp

      Read Quinn and Dan's announcement on the Sourcegraph blog.
  4. December 01, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-01 rss

      IDA Plugin Updates on 2025-12-01

      New Releases:

      Activity:

    2. 🔗 r/LocalLLaMA WebGPU Finally, it is compatible with all major browsers rss
    3. 🔗 r/LocalLLaMA My logical reasoning benchmark just got owned by DeepSeek V3.2 Speciale rss

      DeepSeek V3.2 Speciale made only a single mistake in my lineage-bench benchmark. Compared to my previous benchmarking attempts, I reduced the number of quizzes in the benchmark run from 800 to 160 and increased difficulty by using lineage relationship graphs of sizes 8, 64, 128 and 192 (previously it was 8, 16, 32 and 64). If anyone is interested in details, see the project description.

      submitted by /u/fairydreaming
      [link] [comments]

    4. 🔗 r/LocalLLaMA transformers v5 is out! rss

      Hey folks, it's Merve from Hugging Face! 👋🏻 I'm here with big news: today we release transformers v5! 🙌🏻 With this, we enable interoperability with our friends in the ecosystem (llama.cpp, vLLM, and others) from training to inference, simplify the addition of new models, and significantly improve the library 🤗 We have written a blog on the changes and would love to hear your feedback! https://preview.redd.it/hl2gx5yd1n4g1.png?width=1800&format=png&auto=webp&s=3b21e4f7f786f42df4b56566e523138103ea07ab

      submitted by /u/unofficialmerve
      [link] [comments]

    5. 🔗 r/LocalLLaMA You can now do 500K context length fine-tuning - 6.4x longer rss

      Hey r/LocalLlama, today we're excited to share that you can now train gpt-oss-20b (or any LLM) to extend its context window to 530K on a single 80GB H100 GPU. And you can reach 750K+ context on 192GB VRAM - with no accuracy loss. Unsloth GitHub: https://github.com/unslothai/unsloth

      Most model labs fine-tune LLMs to extend their native context length. We are optimizing that process!

      • For smaller GPUs, you’ll still see big gains in VRAM and context as e.g. RTX 5090 can reach 200K context.
      • With smaller LLMs, longer contexts are even easier.
      • On 80GB, the context length limit has increased from 82K to 530K.
      • This update works for any LLM or VLM, not just gpt-oss. Also with limited support for RL.

      For context, we’ve significantly improved how Unsloth handles memory usage patterns, speed, and context lengths:

      • 72% lower VRAM use with 3.2x longer context via Unsloth’s new fused and chunked cross-entropy loss, with no degradation in speed or accuracy
      • Enhanced activation offloading in Unsloth’s Gradient Checkpointing algorithm which was introduced in April 2024. It quickly became popular and the standard across the industry, having been integrated into most training packages nowadays - and we've improved it even further!
      • Collabing with Snowflake on Tiled MLP, enabling 2× more contexts
      • Our new algorithms allows gpt-oss-20b QLoRA (4bit) with 290K context possible on a H100 with no accuracy loss, and 530K+ with Tiled MLP enabled, altogether delivering >6.4x longer context lengths.

      We also made a Colab notebook on an A100 80GB so you can try gpt-oss-20b with 500K context by using a 500K context dataset. Colab: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/gpt_oss_(20B)_500K_Context_Fine_tuning.ipynb To enable Tiled MLP on any LLM or VLM in Unsloth, do

      model, tokenizer = FastLanguageModel.from_pretrained(
          ...,
          unsloth_tiled_mlp = True,
      )
      

      Details + notebook are in our blog: https://docs.unsloth.ai/new/500k-context-length-fine-tuning. To update Unsloth, do

      pip install --upgrade --force-reinstall --no-cache-dir --no-deps unsloth
      pip install --upgrade --force-reinstall --no-cache-dir --no-deps unsloth_zoo
      

      We'll also be at NeurIPS Tues - Thur for a workshop & reception! Would love to meet you all there with some merch! Hope you guys have a lovely rest of the week! :D

      submitted by /u/danielhanchen
      [link] [comments]

    6. 🔗 benji.dog December Adventure 2025 rss

      This is my second time doing December Adventure. I had a lot of fun with this last year even though I didn't write something every day. I'm expecting a bit of the same this year but that's ok as I believe that's the point.

      Sun | Mon | Tue | Wed | Thu | Fri | Sat
      ---|---|---|---|---|---|---
      | 1 | 2 | 3 | 4 | 5 | 6
      7 | 8 | 9 | 10 | 11 | 12 | 13
      14 | 15 | 16 | 17 | 18 | 19 | 20
      21 | 22 | 23 | 24 | 25 | 26 | 27
      28 | 29 | 30 | 31 | | |


      2025-12-01

      Day 1

      Today is just for setup which is easy as I'm reusing this calendar and layout from last year since I still really like how this looks.

    7. 🔗 r/reverseengineering Hacking the Meatmeet BBQ Probe—BLE BBQ Botnet rss
    8. 🔗 r/LocalLLaMA deepseek-ai/DeepSeek-V3.2 · Hugging Face rss

      Introduction

      We introduce DeepSeek-V3.2 , a model that harmonizes high computational efficiency with superior reasoning and agent performance. Our approach is built upon three key technical breakthroughs:

      1. DeepSeek Sparse Attention (DSA): We introduce DSA, an efficient attention mechanism that substantially reduces computational complexity while preserving model performance, specifically optimized for long-context scenarios.
      2. Scalable Reinforcement Learning Framework: By implementing a robust RL protocol and scaling post-training compute, DeepSeek-V3.2 performs comparably to GPT-5. Notably, our high-compute variant, DeepSeek-V3.2-Speciale , surpasses GPT-5 and exhibits reasoning proficiency on par with Gemini-3.0-Pro.
        • Achievement: 🥇 Gold-medal performance in the 2025 International Mathematical Olympiad (IMO) and International Olympiad in Informatics (IOI).
      3. Large-Scale Agentic Task Synthesis Pipeline: To integrate reasoning into tool-use scenarios, we developed a novel synthesis pipeline that systematically generates training data at scale. This facilitates scalable agentic post-training, improving compliance and generalization in complex interactive environments.

      submitted by /u/jacek2023
      [link] [comments]

    9. 🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator
      [link] [comments]

    10. 🔗 benji.dog rss

      Hennepin County Library card featuring Prince

      Forgot to show everyone my new library card.

    11. 🔗 jellyfin/jellyfin 10.11.4 release

      🚀 Jellyfin Server 10.11.4

      We are pleased to announce the latest stable release of Jellyfin, version 10.11.4! This minor release brings several bugfixes to improve your Jellyfin experience. As always, please ensure you take a full backup before upgrading!

      You can find more details about and discuss this release on our forums.

      Changelog (10)

      📈 General Changes