

  1. Transparent Leadership Beats Servant Leadership
  2. Writing a good CLAUDE.md | HumanLayer Blog
  3. My Current global CLAUDE.md
  4. About KeePassXC’s Code Quality Control – KeePassXC
  5. How to build a remarkable command palette

  1. December 06, 2025
    1. 🔗 r/wiesbaden Friends rss

      Any Americans that aren't military that live here? Or are military, I don't really care either way. I just know it's hard to make non-military friends if you're on active duty. It's been 6 months since I moved here with my wife, and I haven't met anyone... my German is A1 level, so complete shit basically. Every time I try to communicate in English to Germans here, they say they don't speak English, so no luck there. I'm open to making German friends as well! I just can't speak much German yet. I'm still learning. I'm bored, and extremely fucking lonely 🙁 🥺 😞

      submitted by /u/d00m_Prophet
      [link] [comments]

    2. 🔗 r/reverseengineering Patching Pulse Oximeter Firmware rss
    3. 🔗 r/reverseengineering GhidrAssist Ghidra LLM plugins reached v1.0 rss
    4. 🔗 r/reverseengineering elfpeek - small C tool to inspect ELF64 headers/sections/symbols rss
    5. 🔗 r/reverseengineering OffsetInspect: PowerShell-Based Offset and Hex-Context Analysis Tool rss
    6. 🔗 navidrome/navidrome v0.59.0 release

      This release brings significant improvements and new features:

      • Scanner Improvements: Selective folder scanning and enhancements to the file system watcher for better performance and reliability.
      • Scrobble History: Native scrobble/listen history tracking, allowing Navidrome to keep a record of your listening habits. This will be used in future visualizations and features (Navidrome Wrapped maybe?).
      • User Administration: New CLI commands for user management, making it easier to handle user accounts from the terminal.
      • New Themes: Two new themes have been added: SquiddiesGlass and AMusic (Apple Music inspired).
      • General: Numerous bug fixes, translation updates, and configuration options for advanced use cases.

      Added

      • UI Features:

        • Add AMusic (Apple Music inspired) theme. (#4723 by @metalheim)
        • Add SquiddiesGlass Theme. (#4632 by @rendergraf)
        • Add loading state to artist action buttons for improved user experience. (f6b2ab572 by @deluan)
        • Add SizeField to display total size in LibraryList. (73ec89e1a by @deluan)
        • Update totalSize formatting to display two decimal places. (c3e8c6711 by @deluan)
      • Backend Features:

        • Track scrobble/listens history. Note that for music added before this version, the count of scrobbles per song will not necessarily equal the song playcount. (#4770 by @deluan)

        • Add user administration to CLI. (#4754 by @kgarner7)
        • Make Unicode handling in external API calls configurable, with DevPreserveUnicodeInExternalCalls (default false). (#4277 by @deluan)
        • Rename "reverse proxy authentication" to "external authentication". (#4418 by @crazygolem)
        • Add configurable transcoding cancellation, with EnableTranscodingCancellation (default false). (#4411 by @deluan)
        • Add Rated At field. (#4660 by @zacaj)
        • Add DevOptimizeDB flag to control whether to apply SQLite optimization (default true). (ca83ebbb5 by @deluan)
      • Scanner Features:

        • Implement selective folder scanning and file system watcher improvements. (#4674 by @deluan)

        • Improve error messages for cleanup operations in annotations, bookmarks, and tags. (36fa86932 by @deluan)
      • Plugins:

        • Add artist bio, top tracks, related artists and language support (Deezer). (#4720 by @deluan)

      Changed

      • UI:

        • Update Bulgarian, Esperanto, Finnish, Galician, Dutch, Norwegian, Turkish translations. (#4760 and #4773 by @deluan)
        • Update Danish, German, Greek, Spanish, French, Japanese, Polish, Russian, Swedish, Thai, Ukrainian translations. (#4687 by @deluan)
        • Update Basque translation. (#4670 by @xabirequejo)
        • New Hungarian strings and updates. (#4703 by @ChekeredList71)
      • Server:

        • Make NowPlaying dispatch asynchronous with worker pool. (#4757 by @deluan)

        • Enables quoted ; as values in ini files. (c21aee736 by @deluan)
        • Fix Navidrome build issues in VS Code dev container. (#4750 by @floatlesss)

      Fixed

      • UI:

        • Improve playlist bulk action button contrast on dark themes. (86f929499 by @deluan)
        • Increase contrast of button text in the Dark theme. (f939ad84f by @deluan)
        • Sync body background color with theme. (9f0d3f3cf by @deluan)
        • Allow scrolling in shareplayer queue by adding delay. (#4748 by @floatlesss)
        • Fix translation display for library list terms. (#4712 by @dongeunm)
        • Fix library selection state for single-library users. (#4686 by @deluan)
        • Adjust margins for bulk actions buttons in Spotify-ish and Ligera. (9b3bdc8a8 by @deluan)
      • Scanner:

        • Handle cross-library relative paths in playlists. (#4659 by @deluan)

        • Defer artwork PreCache calls until after transaction commits. (67c4e2495 by @deluan)
        • Specify exact table to use for missing mediafile filter. (#4689 by @kgarner7)
        • Refactor legacyReleaseDate logic and add tests for date mapping. (d57a8e6d8 by @deluan)
      • Server:

        • Lastfm.ScrobbleFirstArtistOnly also only scrobbles the first artist of the album. (#4762 by @maya-doshi)

        • Log warning when no config file is found. (142a3136d by @deluan)
        • Retry insights collection when no admin user available. (#4746 by @deluan)
        • Improve error message for encrypted TLS private keys. (#4742 by @deluan)
        • Apply library filter to smart playlist track generation. (#4739 by @deluan)
        • Prioritize artist base image filenames over numeric suffixes. (bca76069c by @deluan)
        • Prefer cover.jpg over cover.1.jpg. (#4684 by @deluan)
        • Ignore artist placeholder image in LastFM. (353aff2c8 by @deluan)
      • Plugins:

        • Avoid Chi RouteContext pollution by using http.NewRequest. (#4713 by @deluan)

      New Contributors

      Full Changelog: v0.58.5...v0.59.0

      Helping out

      This release is only possible thanks to the support of some awesome people!

      Want to be one of them?
      You can sponsor, pay me a Ko-fi, or contribute with code.

      Where to go next?

    7. 🔗 r/reverseengineering free, open-source file scanner rss
    8. 🔗 gulbanana/gg GG 0.35.2 release

      Fixed

      • Git remote handling: gg now displays only fetchable remotes, and fetching actually works again.
      • Pushing a single bookmark with right-click was also broken for some repos, depending on config, and could fail with an error about trying to push to the "git" pseudo-remote.
      • Spurious @git bookmarks were showing up in colocated repos. This has probably been an issue for a while, but colocation became more common recently due to a change in jj defaults. Now they're hidden.
      • Graph line rendering was breaking in various ways due to our attempt to fix memory leaks with structural comparison. Switched to a different implementation (index comparison, deeper reactivity) which should be more efficient as well as unbreaking scrolling, decorations, etc.
      • Drag-drop of bookmarks was also affected, and is also fixed.
      • Spurious "receiving on a closed channel" errors at startup - they were harmless, but now they're gone.
    9. 🔗 r/wiesbaden Whatsapp Gruppe rss

      hello everyone,

      does our city have a WhatsApp group that shares weekly updates on everything culinary, cultural, and all the other odds and ends going on in town?

      I know this from Regensburg :)

      submitted by /u/Unfair_Hornet7475
      [link] [comments]

    10. 🔗 r/reverseengineering Car OBD port playing GTA 5 rss
    11. 🔗 r/reverseengineering Made yet another ApkTool GUI (at least I think it's pretty) rss
    12. 🔗 r/reverseengineering PalmOS on FisherPrice Pixter Toy rss
    13. 🔗 r/LocalLLaMA The Best Open-Source 8B-Parameter LLM Built in the USA rss

      The Best Open-Source 8B-Parameter LLM Built in the USA | Rnj-1 is a family of 8B parameter open-weight, dense models trained from scratch by Essential AI, optimized for code and STEM with capabilities on par with SOTA open-weight models. These models

      • perform well across a range of programming languages.
      • boast strong agentic capabilities (e.g., inside agentic frameworks like mini-SWE-agent).
      • excel at tool-calling.

      Both raw and instruct variants are available on the Hugging Face platform.

      Model Architecture Overview: Rnj-1's architecture is similar to Gemma 3, except that it uses only global attention, and YaRN for long-context extension.

      Training Dynamics: rnj-1 was pre-trained on 8.4T tokens with an 8K context length, after which the model's context window was extended to 32K through an additional 380B-token mid-training stage. A final 150B-token SFT stage completed the training to produce rnj-1-instruct.

      submitted by /u/Dear-Success-1441
      [link] [comments]

    14. 🔗 @cxiao@infosec.exchange you can read about tara's case here: mastodon

      you can read about tara's case here: https://www.freetara.info/home

    15. 🔗 @cxiao@infosec.exchange RE: mastodon

      RE: https://mastodon.online/@hkfp/115670539559004434

      this is so fucking scary to me because:

      1) this girl was living in france
      2) she took pains to protect her identity - she wrote for the pro-tibet blog anonymously and had an altered voice on her podcast appearance
      3) this girl is han chinese
      4) she suspected absolutely nothing and thought it was fine to just visit home for a bit before starting a new degree
      5) like everyone else in this situation she just disappeared and no one knows where she is

      no one is more sinophobic than the ccp

    16. 🔗 matklad Mechanical Habits rss

      Mechanical Habits

      Dec 6, 2025

      My schtick as a software engineer is establishing automated processes — mechanically enforced patterns of behavior. I have collected a Santa Claus bag of specific tricks I’ve learned from different people, and want to share them in turn.

      Caution: engineering processes can be tricky to apply in a useful way. A process is a logical cut — there’s some goal we actually want, and automation can be a shortcut to achieve it, but automation per se doesn’t explain what the original goal is. Keep the goal and adjust the processes on the go. Sanity checks: A) automation should reduce toil. If robots create work for humans, down with the robots! B) good automation usually is surprisingly simple, simplistic even. Long live the duct tape!

      Weekly Releases

      By far the most impactful trick — make a release of your software every Friday. The first order motivation here is to reduce the stress and effort required for releases. If releases are small, writing changelogs is easy, assessing the riskiness of a release doesn't require anything more than mentally recalling a week's worth of work, and there's no need to aim to land features into a particular release. Delaying a feature by a week is nothing, delaying by a year is a reason to put in an all-nighter.

      As an example, this Friday I was filling in my US visa application, so I was feeling somewhat tired in the evening. I was also the release manager. So I just messaged "sorry, I am feeling too tired to make a release, we are skipping this one" without thinking much about it. It's cheap to skip the release, so there's no temptation to push yourself to get the release done (and quickly follow up with a point release, the usual consequence).

      But the real gem is the second order effect — weekly releases force you to fix all other processes to keep the codebase healthy all the time. And it is much easier to keep the flywheel going at roughly the same speed, rather than periodically struggle to get it going. Temporal locality is the king: "I don't have time to fix X right now, I'll do it before the release" is the killer. By the time of release you'll need 2X time just to load X in your head! It's much faster overall to immediately make every line of code releasable. Work the iron while it is hot!

      Epistemic Aside

      I’ve done releases every Friday in IntelliJ Rust, rust-analyzer, and TigerBeetle, to a great success. It’s worth reflecting how I got there. The idea has two parents:

      Both seemed worthwhile to try for me, and I figured that a nice synthesis would be to release every Monday, not every six weeks (I later moved cutting the release to Friday, so that it can bake in beta/fuzzers during the weekend). I had just finished university at that point, and had almost zero work experience! The ideas made sense to me not based on my past experiences, or on being promulgated by some big names, but because they made sense if you just think about them from first principles. It's the other way around — I fell in love with Rust and Pieter's writing because of the quality of the ideas. And I only needed common sense to assess the ideas, no decade in the industry required.

      This applies to the present blog post — engage with ideas, remix them, and improve them. Don't treat the article as a mere cookbook; it is not.

      Not Rocket Science Rule

      I feel like I link https://graydon2.dreamwidth.org/1597.html from every second post of mine, so I’ll keep it short this time.

      • Only advance the tip of the master branch to a commit hash for which you already know the test results. That is, make a detached merge commit, test that, then move the tip (a rough sketch of the mechanics follows after this list).
      • Don’t do it yourself, let the robot do it.

      The direct benefit is asynchronizing the process of getting the code in. When you submit a PR, you don't need to wait until CI is complete and then make a judgement call about whether the results are fresh enough or you need to rebase onto the new version of the master branch. You just tell the robot "merge when the merge commit is green". The standard setup uses robots to create work for humans. Merge queue inverts this.

      But the true benefit is second-order! You can’t really ask the robot nicely to let your very important PR in, despite a completely unrelated flaky failure elsewhere. You are forced to keep your CI setup tidy.

      There’s also a third-order benefit. NRSR encourages holistic view of your CI, as a set of invariants that actually hold for your software, a type-system of sorts. And that thinking makes you realize that every automatable check can be a test. Again, good epistemology helps: it’s not the idea of bors that is most valuable, it’s the reasoning behind that: “automatically maintain a repository of code that always passes all the tests”, “monotonically increasing test coverage”. Go re-read Graydon’s post!

      Tidy Script

      This is another idea borrowed from Rust. Use a tidy file to collect various project-specific linting checks as tests. The biggest value of such tidy.zig is its mere existence. It’s much easier to add a new check than to create “checking infrastructure”. Some checks we do at TigerBeetle:

      • No large binary blobs in git history. Don't repeat my rust-analyzer mistake here, and look for actual git objects, not just files in the working repository. Someone once sneaked 1MiB of reverted protobuf nonsense past me and my file-based check.
      • Line & function length.
      • No problematic (for our use case) std APIs are used.
      • No // FIXME comments. This is used positively — I add // FIXME comments to code I want to change before the merge (this one is also from Rust!).
      • No dead code (Zig specific, as the compiler is not well-positioned to tackle that, due to lazy compilation model).

      Pro tip for writing tidings — shell out to git ls-files -z to figure out what needs tidying.
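
      A minimal sketch of such a check, written in Go rather than Zig (the file extension and the 100-column limit are arbitrary assumptions):

      // tidy_test.go - sketch of a project-specific lint implemented as a test.
      // Ask git for the tracked files (NUL-separated, so unusual names survive)
      // and fail on any line longer than 100 columns.
      package tidy

      import (
          "os"
          "os/exec"
          "strings"
          "testing"
      )

      func TestLineLength(t *testing.T) {
          out, err := exec.Command("git", "ls-files", "-z").Output()
          if err != nil {
              t.Fatalf("git ls-files: %v", err)
          }
          for _, name := range strings.Split(string(out), "\x00") {
              if name == "" || !strings.HasSuffix(name, ".go") {
                  continue
              }
              data, err := os.ReadFile(name)
              if err != nil {
                  t.Fatalf("read %s: %v", name, err)
              }
              for i, line := range strings.Split(string(data), "\n") {
                  if len([]rune(line)) > 100 {
                      t.Errorf("%s:%d: line longer than 100 columns", name, i+1)
                  }
              }
          }
      }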

      DevHub

      I don’t remember the origin here, but https://deno.com/benchmarks certainly is an influence.

      The habit is to maintain, for every large project, a directory with static files which is directly deployed from the master branch as a project’s internal web page. E.g., for TigerBeetle:

      Again, motivation is mere existence and removal of friction. This is an office whiteboard which you can just write on, for whatever purpose! Things we use ours for:

      • Release rotation.
      • Benchmark&fuzzing results. This is a bit of social engineering: you check DevHub out of anxiety, to make sure it's not your turn to make a release this week, but you get to spot performance regressions!
      • Issues in need of triaging.

      I gave a talk about using DevHub for visualizing fuzzing results for HYTRADBOI (video)

      Another tip: a JSON file in a git repository is a fine database to power such an internal website. JSONMutexDB for the win.
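
      A sketch of that idea in Go, with a hypothetical devhub.json holding a flat list of records; the "mutex" is git itself, since a rejected push just means pull and retry:

      // devhub.go - sketch: append a record to a JSON file that lives in git.
      package main

      import (
          "encoding/json"
          "os"
          "os/exec"
          "time"
      )

      type Entry struct {
          When   time.Time `json:"when"`
          Metric string    `json:"metric"`
          Value  float64   `json:"value"`
      }

      func main() {
          const path = "devhub.json" // hypothetical data file behind the DevHub page

          // A missing or empty file just means an empty database.
          var entries []Entry
          if data, err := os.ReadFile(path); err == nil {
              _ = json.Unmarshal(data, &entries)
          }
          entries = append(entries, Entry{When: time.Now(), Metric: "fuzz_crashes", Value: 0})

          data, err := json.MarshalIndent(entries, "", "  ")
          if err != nil {
              panic(err)
          }
          if err := os.WriteFile(path, data, 0o644); err != nil {
              panic(err)
          }

          // Commit and push; if the push is rejected, someone else wrote first:
          // pull --rebase and run again.
          for _, args := range [][]string{
              {"add", path},
              {"commit", "-m", "devhub: record metric"},
              {"push"},
          } {
              if err := exec.Command("git", args...).Run(); err != nil {
                  panic(err)
              }
          }
      }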

      Micro Benchmarks

      The last one for today, and the one that prompted this article! I am designing a new mechanical habit for TigerBeetle and I want to capture the process while it is still fresh in my mind.

      It starts with something rotten. Micro benchmarks are hard. You write one when you are working on the code, but then it bitrots, and by the time the next person has a brilliant optimization idea, they can not compile the benchmark anymore, and they also have no idea which part of the three pages of output is important.

      A useful trick for solving bitrot is to chain a new habit onto an existing one. Avoid multiplying entry points (O(1) Build File). The appropriate entry points here are the tests. So each micro benchmark is going to be just a test:

      test "benchmark: binary search" {
          // ...
      }
      

      Bitrot problem solved. Now we have two new ones. The first is that you generally want to run the benchmark long enough to push the times into human range (~2 seconds), so that any improvements are immediately, viscerally perceived. But 2 seconds is too slow for a test, and tests are usually run in Debug mode. The second problem is that you want to see the timing outcome of the benchmark printed when you run that benchmark. But you don't want to see the output when you run the tests!

      So, we really want two modes here: in the first mode, we really are running a benchmark, it is compiled with optimizations, we aim for a runtime of a few seconds at least, and we want to print the seconds afterwards. In the second mode, we are running our test suite, and we want to run the benchmark just for some token amount of time. The DWIM (do what I mean) principle helps here. We run the entire test suite as ./zig/zig build test, and a single benchmark as ./zig/zig build test -- "benchmark: search". So we use the shape of the CLI invocation to select benchmarking mode.

      This mode then determines whether we should pick large or small parameters. Playing around with the code, it feels like the following is a nice shape of code to get parameter values:

      var bench = Bench.init();
      
      const element_count =
          bench.parameter("element_count", 1_000, 10_000_000);
      
      const search_count =
          bench.parameter("search_count", 5_000, 500_000);
      

      The small value is test mode, the big value is benchmark mode, and the name is useful for printing the actual parameter value:

      bench.report("{s}={}", .{ name, value });
      

      This report function is what decides whether to swallow (test mode) or show (benchmark mode) the output. Printing the values is useful to make copy-pasted benchmarking results obvious without context. And, now that we have put the names in, we get to override values of parameters via environment variables for free!
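
      The original lives in Zig; here is a rough Go analogue of the same pattern, with all names hypothetical and the mode picked by a BENCH environment variable instead of the build-system invocation:

      // bench.go - sketch of the small/large parameter trick transplanted to Go.
      // BENCH=1 selects benchmark mode; BENCH_<NAME> overrides one parameter;
      // in test mode, Report output is swallowed.
      package bench

      import (
          "fmt"
          "os"
          "strconv"
          "strings"
      )

      type Bench struct{ benchmark bool }

      func Init() *Bench { return &Bench{benchmark: os.Getenv("BENCH") == "1"} }

      // Parameter returns the small value in test mode and the big value in
      // benchmark mode, unless overridden via an environment variable.
      func (b *Bench) Parameter(name string, small, big int) int {
          value := small
          if b.benchmark {
              value = big
          }
          if env := os.Getenv("BENCH_" + strings.ToUpper(name)); env != "" {
              if v, err := strconv.Atoi(env); err == nil {
                  value = v
              }
          }
          b.Report("%s=%d", name, value)
          return value
      }

      // Report prints in benchmark mode and stays silent in test mode.
      func (b *Bench) Report(format string, args ...any) {
          if b.benchmark {
              fmt.Printf(format+"\n", args...)
          }
      }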

      And this is more or less it? We now have a standard pattern to grow the set of microbenchmarks, which feels like it should hold up as time passes?

      https://github.com/tigerbeetle/tigerbeetle/pull/3405

      Check back in a couple of years to see if this mechanical habit sticks!

  2. December 05, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-05 rss

      IDA Plugin Updates on 2025-12-05

      New Releases:

      Activity:

    2. 🔗 Bits About Money Perpetual futures, explained rss

      Perpetual futures, explained

      Programming note: Bits about Money is supported by our readers. I generally forecast about one issue a month, and haven't kept that pace this year. As a result, I'm working on about 3-4 for December.

      Much financial innovation is in the ultimate service of the real economy. Then, we have our friends in crypto, who occasionally do intellectually interesting things which do not have a locus in the real economy. One of those things is perpetual futures (hereafter, perps), which I find fascinating and worthy of study, the same way that a virologist just loves geeking out about furin cleavage sites.

      You may have read a lot about stablecoins recently. I may write about them (again; see past BAM issue) in the future, as there has in recent years been some uptake of them for payments. But it is useful to understand that a plurality of stablecoins collateralize perps. Some observers are occasionally strategic in whether they acknowledge this, but for payments use cases, it does not require a lot of stock to facilitate massive flows. And so of the $300 billion or so in stablecoins presently outstanding, about a quarter sit on exchanges. The majority of that is collateralizing perp positions.

      Perps are the dominant way crypto trades, in terms of volume. (It bounces around but is typically 6-8 times larger than spot.) This is similar to most traditional markets: where derivatives are available, derivative volume swamps spot volume. The degree to which it does depends on the market, Schelling points, user culture, and similar. For example, in India, most retail investing in equity is actually through derivatives; this is not true of the U.S. In the U.S., most retail equity exposure is through the spot market, directly holding stocks or indirectly through ETFs or mutual funds. Most trading volume of the stock indexes, however, is via derivatives.

      Beginning with the problem

      The large crypto exchanges are primarily casinos, who use the crypto markets as a source of numbers, in the same way a traditional casino might use a roulette wheel or set of dice. The function of a casino is for a patron to enter it with money and, statistically speaking, exit it with less. Physical casinos are often huge capital investments with large ongoing costs, including the return on that speculative capital. If they could choose to be less capital intensive, they would do so, but they are partially constrained by market forces and partially by regulation.

      A crypto exchange is also capital intensive, not because the website or API took much investment (relatively low, by the standards of financial software) and not because they have a physical plant, but because trust is expensive. Bettors, and the more sophisticated market makers, who are the primary source of action for bettors, need to trust that the casino will actually be able to pay out winnings. That means the casino needs to keep assets (generally, mostly crypto, but including a smattering of cash for those casinos which are anomalously well-regarded by the financial industry) on hand exceeding customer account balances.

      Those assets are… sitting there, doing nothing productive. And there is an implicit cost of capital associated with them, whether nominal (and borne by a gambler) or material (and borne by a sophisticated market making firm, crypto exchange, or the crypto exchange's affiliate which trades against customers [0]).

      Perpetual futures exist to provide the risk gamblers seek while decreasing the total capital requirement (shared by the exchange and market makers) to profitably run the enterprise.

      Perps predate crypto but found a home there

      In the commodities futures markets, you can contract to either buy or sell some standardized, valuable thing at a defined time in the future. The overwhelming majority of contracts do not result in taking delivery; they're cancelled by an offsetting contract before that specified date.

      Given that speculation and hedging are such core use cases for futures, the financial industry introduced a refinement: cash-settled futures. Now there is a reference price for the valuable thing, with a great deal of intellectual effort put into making that reference price robust and fair (not always successfully). Instead of someone notionally taking physical delivery of pork bellies or barrels of oil, people who are net short the future pay people who are net long the future on delivery day. (The mechanisms of this clearing are fascinating but outside today's scope.)

      Back in the early nineties economist Robert Shiller proposed a refinement to cash settled futures: if you don't actually want pork bellies or oil barrels for consumption in April, and we accept that almost no futures participants actually do, why bother closing out the contracts in April? Why fragment the liquidity for contracts between April, May, June, etc? Just keep the market going perpetually.

      This achieved its first widespread popular use in crypto (Bitmex is generally credited as being the popularizer), and hereafter we'll describe the standard crypto implementation. There are, of course, variations available.

      Multiple settlements a day

      Instead of all of a particular futures vintage settling on the same day, perps settle multiple times a day for a particular market on a particular exchange. The mechanism for this is the funding rate. At a high level: winners get paid by losers every e.g. 4 hours and then the game continues, unless you've been blown out due to becoming overleveraged or for other reasons (discussed in a moment).

      Consider a toy example: a retail user buys 0.1 Bitcoin via a perp. The price on their screen, which they understand to be for Bitcoin, might be $86,000 each, and so they might pay $8,600 cash. Should the price rise to $90,000 before the next settlement, they will get +/- $400 of winnings credited to their account, and their account will continue to reflect exposure to 0.1 units of Bitcoin via the perp. They might choose to sell their future at this point (or any other). They'll have paid one commission (and a spread) to buy, one (of each) to sell, and perhaps they'll leave the casino with their winnings, or perhaps they'll play another game.

      Where did the money come from? Someone else was symmetrically short exposure to Bitcoin via a perp. It is, with some very important caveats incoming, a closed system: since no good or service is being produced except the speculation, winning money means someone else lost.

      One fun wrinkle for funding rates: some exchanges cap the amount the rate can be for a single settlement period. This is similar in intent to traditional markets' usage of circuit breakers: designed to automatically blunt out-of-control feedback loops. It is dissimilar in that it cannot actually break circuits: changes to funding rate can delay realization of losses but can't prevent them, since they don't prevent the realization of symmetrical gains.

      Perp funding rates also embed an interest rate component. This might get quoted as 3 bps a day, or 1 bps every eight hours, or similar. However, because of the impact of leverage, gamblers are paying more than you might expect: at 10X leverage that's 30 bps a day. Consumer finance legislation standardizes borrowing costs as APR rather than basis points per day so that an unscrupulous lender can't bury a 200% APR in the fine print.
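
      To make the leverage effect concrete, a quick back-of-the-envelope calculation using the figures above (3 bps a day, 10X):

      // funding.go - sketch: what a per-day funding rate costs the gambler,
      // measured against their own equity rather than the notional position.
      package main

      import "fmt"

      func main() {
          const (
              fundingPerDay = 0.0003 // 3 bps per day, charged on the notional
              leverage      = 10.0   // 10X: notional is ten times the gambler's equity
          )

          dailyOnEquity := fundingPerDay * leverage // 30 bps per day
          annualSimple := dailyOnEquity * 365       // ignoring compounding

          fmt.Printf("daily cost on equity: %.2f%%\n", dailyOnEquity*100) // 0.30%
          fmt.Printf("simple annual rate:   %.0f%%\n", annualSimple*100)  // ~110%
      }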

      Convergence in prices via the basis trade

      Prices for perps do not, as a fact of nature, exactly match the underlying. That is a feature for some users.

      In general, when the market is exuberant, the perp will trade above spot (the underlying market). To close the gap, a sophisticated market participant should do the basis trade: make offsetting trades in perps and spot (short the perp and buy spot, here, in equal size). Because the funding rate is set against a reference price for the underlying, longs will be paying shorts more (as a percentage of the perp's current market price). For some of them, that's fine: the price of gambling went up, oh well. For others, that's a market incentive to close out the long position, which involves selling it, which will decrease the price at the margin (in the direction of spot).

      The market maker can wait for price convergence; if it happens, they can close the trade at a profit, while having been paid to maintain the trade. If the perp continues to trade rich, they can just continue getting the increased funding cost. To the extent this is higher than their own cost of capital, this can be extremely lucrative.

      Flip the polarities of these to understand the other direction.

      The basis trade, classically executed, is delta neutral: one isn't exposed to the underlying itself. You don't need any belief in Bitcoin's future adoption story, fundamentals, market sentiment, halvings, none of that. You're getting paid to provide the gambling environment, including a really important feature: the perp price needs to stay reasonably close to the spot price, close enough to continue attracting people who want to gamble. You are also renting access to your capital for leverage.
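
      A toy illustration of that delta neutrality (all numbers invented): the two price legs cancel exactly, and the funding payments are the actual income.

      // basis.go - sketch: short 1 BTC via the perp, long 1 BTC spot.
      package main

      import "fmt"

      func main() {
          const (
              size             = 1.0      // BTC on each leg
              entry            = 86_000.0 // price when the trade is put on
              later            = 90_000.0 // some later price; the direction doesn't matter
              fundingPerPeriod = 0.0001   // 1 bp paid by longs to shorts per settlement
              periodsPerDay    = 3.0      // e.g. settlement every 8 hours
          )

          spotPnL := size * (later - entry)  // +4000 on the long spot leg
          perpPnL := -size * (later - entry) // -4000 on the short perp leg
          carry := size * entry * fundingPerPeriod * periodsPerDay

          fmt.Printf("net price exposure: %.0f\n", spotPnL+perpPnL) // 0: delta neutral
          fmt.Printf("funding income/day: %.2f\n", carry)           // ~25.80 at these numbers
      }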

      You are also underwriting the exchange: if they blow up, your collateral becoming a claim against the bankruptcy estate is the happy scenario. (As one motivating example: Galois Capital, a crypto hedge fund doing basis trades, had ~40% of its assets on FTX when it went down. They then wound down the fund, selling the bankruptcy claim for 16 cents on the dollar.)

      Recall that the market can't function without a system of trust saying that someone is good for it if a bettor wins. Here, the market maker is good for it, via the collateral it kept on the exchange.

      Many market makers function across many different crypto exchanges. This is one reason they're so interested in capital efficiency: fully collateralizing all potential positions they could take across the universe of venues they trade on would be prohibitively capital intensive, and if they do not pre-deploy capital, they miss profitable trading opportunities. [1]

      Leverage and liquidations

      Gamblers like risk; it amps up the fun. Since one has many casinos to choose from in crypto, the ones which offer only "regular" exposure to Bitcoin (via spot or perps) would be offering a less-fun product for many users than the ones which offer leverage. How much leverage? More leverage is always the answer to that question, until predictable consequences start happening.

      In a standard U.S. brokerage account, Regulation T has, for almost 100 years now, set maximum leverage limits (by setting minimums for margins). These are 2X at position opening time and 4X "maintenance" (before one closes out the position). Your brokerage would be obligated to forcibly close your position if volatility causes you to exceed those limits.

      As a simplified example, if you have $50k of cash, you'd be allowed to buy $100k of stock. You now have $50k of equity and a $50k loan: 2x leverage. Should the value of that stock decline to about $67k, you still owe the $50k loan, and so only have $17k remaining equity. You're now on the precipice of being 4X leveraged, and should expect a margin call very soon, if your broker hasn't "blown you out of the trade" already.
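
      The same example as arithmetic (a sketch; the $67k in the text is the rounded version of the number below):

      // margin.go - sketch of the Regulation T example: 2X at opening, and the
      // position value at which leverage reaches the 4X maintenance limit.
      package main

      import "fmt"

      func main() {
          const (
              cash = 50_000.0
              loan = 50_000.0 // $100k of stock bought with $50k cash + $50k margin loan
          )

          position := cash + loan
          fmt.Printf("opening leverage: %.1fx\n", position/cash) // 2.0x

          // Leverage = position / equity = position / (position - loan).
          // It hits 4x when position = (4/3) * loan, about $66.7k here.
          critical := 4.0 / 3.0 * loan
          equity := critical - loan
          fmt.Printf("4x threshold: position ~$%.1fk, equity ~$%.1fk\n",
              critical/1000, equity/1000) // ~$66.7k position, ~$16.7k equity
      }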

      What part of that is relevant to crypto? For the moment, just focus on that number: 4X.

      Perps are offered at 1X (non-levered exposure). But they're routinely offered at 20X, 50X, and 100X. SBF, during his press tour / regulatory blitz about being a responsible financial magnate fleecing the customers in an orderly fashion, voluntarily self-limited FTX to 20X.

      One reason perps are structurally better for exchanges and market makers is that they simplify the business of blowing out leveraged traders. The exact mechanics depend on the exchange, the amount, etc, but generally speaking you can either force the customer to enter a closing trade or you can assign their position to someone willing to bear the risk in return for a discount.

      Blowing out losing traders is lucrative for exchanges except when it catastrophically isn't. It is a priced service in many places. The price is quoted to be low ("a nominal fee of 0.5%" is one way Binance describes it) but, since it is calculated from the amount at risk, it can be a large portion of the money lost. If the account's negative balance is less than the liquidation fee, wonderful, thanks for playing and the exchange / "the insurance fund" keeps the rest, as a tip.

      In the case where the amount an account is negative by is more than the fee, that "insurance fund" can choose to pay the winners on behalf of the liquidated user, at management's discretion. Management will usually decide to do this, because a casino with a reputation for not paying winners will not long remain a casino.

      But tail risk is a real thing. The capital efficiency has a price: there physically does not exist enough money in the system to pay all winners given sufficiently dramatic price moves. Forced liquidations happen. Sophisticated participants withdraw liquidity (for reasons we'll soon discuss) or the exchange becomes overwhelmed technically / operationally. The forced liquidations eat through the diminished / unreplenished liquidity in the book, and the magnitude of the move increases.

      Then crypto gets reminded about automatic deleveraging (ADL), a detail to perp contracts that few participants understand.

      We have altered the terms of your unregulated futures investment contract.

      (Pray we do not alter them further.)

      Risk in perps has to be symmetric: if (accounting for leverage) there are 100,000 units of Somecoin exposure long, then there are 100,000 units of Somecoin exposure short. This does not imply that the shorts or longs are sufficiently capitalized to actually pay for all the exposure in all instances.

      In cases where management deems paying winners from the insurance fund would be too costly and/or impossible, they automatically deleverage some winners. In theory, there is a published process for doing this, because it would be confidence-costing to ADL non-affiliated accounts but pay out affiliated accounts, one's friends or particularly important counterparties, etc. In theory.

      In theory, one likely ADLs accounts which were quite levered before ones which were less levered, and one ADLs accounts which had high profits before ones with lower profits. In theory. [2]

      So perhaps you understood, prior to a 20% move, that you were 4X leveraged. You just earned 80%, right? Ah, except you were only 2X leveraged, so you earned 40%. Why were you retroactively only 2X? That's what automatic deleveraging means. Why couldn't you get the other 40% you feel entitled to? Because the collective group of losers doesn't have enough to pay you your winnings and the insurance fund was insufficient or deemed insufficient by management.

      ADL is particularly painful for sophisticated market participants doing e.g. a basis trade, because they thought e.g. they were 100 units short via perps and 100 units long somewhere else via spot. If it turns out they were actually 50 units short via perps, but 100 units long, their net exposure is +50 units, and they have very possibly just gotten absolutely shellacked.
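
      Putting numbers on the last two paragraphs (the 20% move, the 4X-to-2X haircut, and the 100-unit hedge are the article's; the arithmetic is the only addition):

      // adl.go - sketch: what automatic deleveraging does to a winner's return
      // and to a basis trader's hedge.
      package main

      import "fmt"

      func main() {
          priceMove := 0.20 // a 20% move in the winner's favor

          // The leveraged gambler: thought they were 4X, was ADLed down to 2X.
          fmt.Printf("expected return at 4x: %.0f%%\n", 4*priceMove*100) // 80%
          fmt.Printf("actual return at 2x:   %.0f%%\n", 2*priceMove*100) // 40%

          // The basis trader: 100 short via perps against 100 long via spot,
          // then half the perp leg is taken away by ADL.
          const spotLong = 100.0
          perpShortBefore, perpShortAfter := 100.0, 50.0
          fmt.Printf("net exposure before ADL: %+.0f units\n", spotLong-perpShortBefore) // +0
          fmt.Printf("net exposure after ADL:  %+.0f units\n", spotLong-perpShortAfter)  // +50
      }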

      In theory, this can happen to the upside or the downside. In practice in crypto, this seems to usually happen after sharp decreases in prices, not sharp increases. For example, October 2025 saw widespread ADLing as (more than) $19 billion of liquidations happened, across a variety of assets. Alameda's CEO Caroline Ellison testified that they lost over $100 million during the collapse of Terra's stablecoin in 2022, because FTX's insurance fund was made up: when leveraged traders lost money, their positions were frequently taken up by Alameda. That was quite lucrative much of the time, but catastrophically expensive during e.g. the Terra blowup. Alameda was a good loser and paid the winners, though: with other customers' assets that they "borrowed."

      An aside about liquidations

      In the traditional markets, if one's brokerage deems one's assets are unlikely to be able to cover the margin loan from the brokerage one has used, one's brokerage will issue a margin call. Historically that gave one a relatively short period (typically, a few days) to post additional collateral, either by moving in cash, by transferring assets from another brokerage, or by experiencing appreciation in the value of one's assets. Brokerages have the option, and in some cases the requirement, to manage risk after or during a margin call by forcing trades on behalf of the customer to close positions.

      It sometimes surprises crypto natives that, in the case where one's brokerage account goes negative and all assets are sold, with a negative remaining balance, the traditional markets largely still expect you to pay that balance. This contrasts with crypto, where the market expectation for many years was that the customer was Daffy Duck with a gmail address and a pseudonymous set of numbered accounts recorded on a blockchain, and dunning them was a waste of time. Crypto exchanges have mostly, in the intervening years, either stepped up their game regarding KYC or pretended to do so, but the market expectation is still that a deficiency owed by a defaulting user will basically never be successfully recovered. (Note that the legal obligation to pay is not coextensive with users actually paying. The retail speculators with $25,000 of capital that the pattern day trade rules are worried about will often not have $5,000 to cover a deficiency. On the other end of the scale, when a hedge fund blows up, the fund entity is wiped out, but its limited partners--pension funds, endowments, family offices--are not on the hook to the prime broker, and nobody expects the general partner to start selling their house to make up the difference.)

      So who bears the loss when the customer doesn't, can't, or won't? The waterfall depends on market, product type, and geography, but as a sketch: brokerages bear the loss first, out of their own capital. They're generally required to keep a reserve for this purpose.

      A brokerage will, in the ordinary course of business, have obligations to other parties which would be endangered if they were catastrophically mismanaged and could not successfully manage risk during a downturn. (It's been known to happen, and even can be associated with assets rather than liabilities.) In this case, most of those counterparties are partially insulated by structures designed to insure the peer group. These include e.g. clearing pools, guaranty funds capitalized by the member firms of a clearinghouse, the clearinghouse's own capital, and perhaps mutualized insurance pools. That is the rough ordering of the waterfall, which varies depending on geography/product/market.

      One can imagine a true catastrophe which burns through each of those layers of protection, and in that case, the clearinghouse might be forced to assess members or allocate losses across survivors. That would be a very, very bad day, but contracts exist to be followed on very bad days.

      One commonality with crypto, though: this system is also not fully capitalized against all possible events at all times. Unlike crypto, which for contingent reasons pays some lip service to being averse to credit even as it embraces leveraged trading, the traditional industry relies extensively on underwriting risk of various participants.

      Will crypto successfully "export" perps?

      Many crypto advocates believe that they have something which the traditional finance industry desperately needs. Perps are crypto's most popular and lucrative product, but they probably won't be adopted materially in traditional markets.

      Existing derivatives products already work reasonably well at solving the cost of capital issue. Liquidations are not the business model of traditional brokerages. And learning, on a day when markets are 20% down, that you might be hedged or you might be bankrupt, is not a prospect which fills traditional finance professionals with the warm fuzzies.

      And now you understand the crypto markets a bit better.

      [0] Brokers trading with their own customers can happen in the ordinary course of business, but has been progressively discouraged in traditional finance, as it enables frontrunning.

      Frontrunning, while it is understood in the popular parlance to mean "trading before someone else can trade" and often brought up in discussions of high frequency trading using very fast computers, does not historically mean that. It historically describes a single abusive practice: a broker could basically use the slowness of traditional financial IT systems to give conditional post-facto treatment to customer orders, taking the other side of them (if profitable) or not (if not). Frontrunning basically disappeared because customers now get order confirms almost instantly by computer not at end of day via a phone call. The confirm has the price the trade executed at on it.

      In classic frontrunning, you sent the customer's order to the market (at some price X), waited a bit, and then observed a later price Y. If Y was worse for the customer than X, well, them's the breaks on Wall Street. If Y was better, you congratulated the customer on their investing acumen, and informed them that they had successfully transacted at Z, a price of your choosing between X and Y. You then fraudulently inserted a recorded transaction between the customer and yourself earlier in the day, at price Z, and assigned the transaction which happened at X to your own account, not to the customer's account.

      Frontrunning was a lucrative scam while it lasted, because (effectively) the customer takes 100% of the risk of the trade but the broker gets any percentage they want of the first day's profits. This is potentially so lucrative that smart money (and some investors in his funds!) thought Madoff was doing it, thus generating the better-than-market stable returns for over a decade through malfeasance. Of frontrunning Madoff was entirely innocent.

      Some more principled crypto participants have attempted to discourage exchanges from trading with their own customers. They have mostly been unsuccessful: Merit Peak Limited is Binance's captive entity which does this; it is also occasionally described by U.S. federal agencies as running a sideline in money laundering. Alameda Research was FTX's affiliated trading fund; their management was criminally convicted of money laundering. Etc, etc.

      One of the reasons this behavior is so adaptive is because the billions of dollars sloshing around can be described to banks as "proprietary trading" and "running an OTC desk", and an inattentive bank (like, say, Silvergate, as recounted here) might miss the customer fund flows they would have been formally unwilling to facilitate. This is a useful feature for sophisticated crypto participants, and so some of them do not draw attention to the elephant in the room, even though it is averse to their interests.

      [1] Not all crypto trades are pre-funded. Crypto OTC transactions sometimes settle on T+1, with the OTC desk essentially extending credit in the fashion that a prime broker would in traditional markets. But most transactions on exchanges have to be paid immediately in cash already at the venue. This is very different from traditional equity market structure, where venues don't typically receive funds flow at all, and settling/clearing happens after the fact, generally by a day or two.

      [2] I note, for the benefit of readers of footnote 0, that there is often a substantial gap between the time when market dislocation happens and when a trader is informed they were ADLed. The implications of this are left as an exercise to the reader.

    3. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      ## New releases
      - [augur](https://github.com/0xdea/augur): 0.7.4
      
    4. 🔗 News Minimalist 🐢 Court orders OpenAI to share user chat logs + 9 more stories rss

      In the last 2 days ChatGPT read 61308 top news stories. After removing previously covered events, there are 10 articles with a significance score over 5.5.

      [5.7] Court orders OpenAI to share ChatGPT chats in copyright lawsuit —techradar.com(+10)

      A U.S. court has ordered OpenAI to release 20 million user chat logs for The New York Times' copyright lawsuit, setting a major precedent for AI data privacy.

      The logs are sought by the Times to check for copyright breaches and will be de-identified before being handed over. The ruling gives OpenAI seven days to comply with the request.

      OpenAI opposes the order, stating it puts user privacy at risk despite the court's safeguards. This is the first time the company has been forced to hand over such data.

      [6.3] EU eases rules for genetically modified products —orf.at(German) (+19)

      The European Union has agreed to relax rules for foods made with New Genomic Techniques, removing mandatory labeling requirements for many products and exempting them from strict genetic engineering regulations.

      The agreement, reached Thursday, creates two categories. Plants with limited edits, like via CRISPR, will face fewer rules, while those with more complex modifications, such as genes from other species, will remain strictly regulated.

      Proponents expect more climate-resistant crops and increased competitiveness, while opponents' demands for consumer choice through labeling were unsuccessful. Organic farming is to remain GMO-free.

      Highly covered news with significance over 5.5

      [6.0] Trump announces peace deal between Rwanda and DRC, securing mineral access — thetimes.com [$] (+63)

      [6.3] British inquiry finds Putin approved 2018 Skripal poisoning, UK imposes sanctions — jp.reuters.com (Japanese) (+46)

      [6.1] India leases Russian submarine to bolster naval power in the Indian Ocean — businesstoday.in (+6)

      [5.9] EU adds Russia to high-risk financial crime list — dw.com (Russian) (+3)

      [5.7] Kenya and US sign $2.5 billion health aid agreement — bbc.com (+5)

      [5.5] Shingles vaccine linked to lower dementia death rates — medpagetoday.com (+10)

      [5.6] European cereals contain high levels of toxic "forever chemicals" — theguardian.com (+13)

      [6.0] MIT researchers develop noninvasive glucose monitor — medicalxpress.com (+4)

      Thanks for reading!

      — Vadim


      You can create a personal RSS feed with premium.



    5. 🔗 r/LocalLLaMA You will own nothing and you will be happy! rss

      Come and put everything into the cloud. We're now getting into hardware as a service. The RAM craze will impact everything to the point where consumers can't afford normal hardware anymore because it's all scraped off, locked away and put into datacenters to sell you services to store your data. (Of course that data will also be used to train AI models to sell to you as a service as well lol.)

      You don't need RAM anymore nor do you need SSDs. You will store and process every byte of your digital life in some datacenter and pay a monthly fee to access and process it.

      You will own nothing and you will be happy!

      GN: WTF Just Happened? | The Corrupt Memory Industry & Micron

      https://www.youtube.com/watch?v=9A-eeJP0J7c

      submitted by /u/dreamyrhodes
      [link] [comments]

    6. 🔗 HexRaysSA/plugin-repository commits sync repo: +2 releases rss
      sync repo: +2 releases
      
      ## New releases
      - [haruspex](https://github.com/0xdea/haruspex): 0.7.4
      - [rhabdomancer](https://github.com/0xdea/rhabdomancer): 0.7.5
      
    7. 🔗 r/wiesbaden Regional train rss

      There should be a regional train that goes directly from Wiesbaden hbf to Frankfurt hbf with no stops

      submitted by /u/Electrical-You-6513
      [link] [comments]

    8. 🔗 @cxiao@infosec.exchange Really interesting story about a broken proxy, but also, the description of mastodon

      Really interesting story about a broken proxy, but also, the description of how AI usage erodes insight and understanding hit the nail on the head:

      "Before AI, developers were once (sometimes) forced to pause, investigate, and understand. Now, it’s becoming easier and more natural simply assume they grasp far more than they actually do. The result is seemingly an ever-growing gap between what they believe they understand, versus what they genuinely understand. This gap will only grow larger, as AI’s suggestions diverge from operators’ true knowledge."
      https://infosec.place/objects/8e460f08-cdae-4e0d-81f1-c030100d896b

    9. 🔗 r/LocalLLaMA Basketball AI with RF-DETR, SAM2, and SmolVLM2 rss

      Basketball AI with RF-DETR, SAM2, and SmolVLM2 | resources: youtube, code, blog

      • player and number detection with RF-DETR
      • player tracking with SAM2
      • team clustering with SigLIP, UMAP and K-Means
      • number recognition with SmolVLM2
      • perspective conversion with homography
      • player trajectory correction
      • shot detection and classification

      submitted by /u/RandomForests92
      [link] [comments]

    10. 🔗 r/wiesbaden Free or low-cost rabies vaccines (Tollwut Impfungen) for cats in Wiesbaden? rss

      Repost from an international sub, hence in English. My question is whether anyone knows where in Wiesbaden you can get rabies vaccinations (Tollwut-Impfungen) for free or very cheaply for the pets of people in financial hardship.

      TL;DR: urgently need to get my 4 cats Rabies vaccinations so their immigration restrictions don't expire but I am broke broke poor. Looking for vets or places in Wiesbaden, Germany to get a Tollwut - Impfung for very cheap.


      TW: Trauma dumping

      Hi, before anyone says "if you're too poor to afford the vet, you don't deserve to have pets," I wasn't always in this situation. Before I got cancer I had a pretty high salary. I've had my cats since they were 4 weeks old and I am important to them. Likewise, they are like children to me and I am only still alive because I don't want to traumatize them by suddenly disappearing. My cats are the most important people in my life.

      That being said, why am I in this situation? Well, I lived the past 15 years in the US but got cancer in 2022 and became unable to work but Social Security said I wasn't disabled, so I was without an income and became homeless in 2025.

      Additionally, when I got the news that you know who would become president in Nov 2024, I prepared having to leave the country when he takes away healthcare because I have cancer and need that shit. I didn't want to return to Germany because of severe childhood trauma and decided to move to Japan where I had felt the most home ever in my life when I studied abroad. Problem is, immigrating to Japan, you need a job and getting cats there is ridiculously complicated.

      The process for taking cats to Japan is get an international microchip > Rabies vaccine > wait 30 days > get another rabies vaccine > get a ridiculously expensive rabies titer test > wait 6 months > get a health certificate from certified vet > get government permission to take the cats to Japan.

      I went through this process and spent several thousand dollars to get my cats ready to go to Japan but the job I was interviewing for backed out at the last minute. I couldn't stay in the US, so I reluctantly went back to Germany where I'm absolutely miserable because of all the bad shit that was done to me in my childhood and I'd prefer to have an ocean between me and my abuser.

      I am currently fighting Social Security in court because I found out that they lied in both my initial application and appeal, making up medical findings that don't exist and I am still hoping that if I get backpay for the year I didn't get social security, I could go to Japan via language school or something.

      The rabies titer test is supposed to be good until end of Dec 2026 but I have a problem. Their rabies vaccinations were only for a year. Now, if their rabies vaccines expire, that means the rabies titer test expires, all the money I spent on getting my cats ready is not only wasted but the 7 month wait period would reset which would make immigrating a lot more difficult.

      I'm now in Wiesbaden Germany. It's only the beginning of the month and I have less than 150€ left from my Bürgergeld for the rest of the month (to buy food etc.) due to unforeseen shit. I still owe the vet 90€ because last month I brought in Spooky to get her hyperthyroidism meds because I had to ration them when I was homeless and moving countries. But the vet insisted that we need to do a blood test before he prescribed the meds. I told him I can't afford a blood test and he let me pay half in November half in December but I still don't have Spooky's meds which are probably gonna be another 80€ or so. Then I remembered their rabies vaccines are expiring this month too. I asked how much are the rabies vaccines and they told me it's 80.50€ per cat. So 322€ for 4 cats. With what I still owe and the meds I still need for Spooky, that's almost 500€. I cannot afford that that. That's almost the entire amount of Bürgergeld I get for a month. I barely have enough money to eat and my phone is falling apart.

      Does anyone know if there's a place that gives free or low cost rabies vaccinations in Wiesbaden, Germany (no, I don't have a car to take them further :(

      submitted by /u/not_ya_wify
      [link] [comments]

    11. 🔗 Anton Zhiyanov Gist of Go: Concurrency internals rss

      This is a chapter from my book on Go concurrency, which teaches the topic from the ground up through interactive examples.

      Here's where we started this book:

      Functions that run with go are called goroutines. The Go runtime juggles these goroutines and distributes them among operating system threads running on CPU cores. Compared to OS threads, goroutines are lightweight, so you can create hundreds or thousands of them.

      That's generally correct, but it's a little too brief. In this chapter, we'll take a closer look at how goroutines work. We'll still use a simplified model, but it should help you understand how everything fits together.

      Concurrency • Goroutine scheduler • GOMAXPROCS • Concurrency primitives • Scheduler metrics • Profiling • Tracing • Keep it up

      Concurrency

      At the hardware level, CPU cores are responsible for running parallel tasks. If a processor has 4 cores, it can run 4 instructions at the same time — one on each core.

        instr A     instr B     instr C     instr D
      ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
      │ Core 1  │ │ Core 2  │ │ Core 3  │ │ Core 4  │ CPU
      └─────────┘ └─────────┘ └─────────┘ └─────────┘
      

      At the operating system level, a thread is the basic unit of execution. There are usually many more threads than CPU cores, so the operating system's scheduler decides which threads to run and which ones to pause. The scheduler keeps switching between threads to make sure each one gets a turn to run on a CPU, instead of waiting in line forever. This is how the operating system handles concurrency.

      ┌──────────┐              ┌──────────┐
      │ Thread E │              │ Thread F │              OS
      └──────────┘              └──────────┘
      ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
      │ Thread A │ │ Thread B │ │ Thread C │ │ Thread D │
      └──────────┘ └──────────┘ └──────────┘ └──────────┘
           │           │           │           │
      ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
      │ Core 1   │ │ Core 2   │ │ Core 3   │ │ Core 4   │ CPU
      └──────────┘ └──────────┘ └──────────┘ └──────────┘
      

      At the Go runtime level, a goroutine is the basic unit of execution. The runtime scheduler runs a fixed number of OS threads, often one per CPU core. There can be many more goroutines than threads, so the scheduler decides which goroutines to run on the available threads and which ones to pause. The scheduler keeps switching between goroutines to make sure each one gets a turn to run on a thread, instead of waiting in line forever. This is how Go handles concurrency.

      ┌─────┐┌─────┐┌─────┐┌─────┐┌─────┐┌─────┐
      │ G15 ││ G16 ││ G17 ││ G18 ││ G19 ││ G20 │
      └─────┘└─────┘└─────┘└─────┘└─────┘└─────┘
      ┌─────┐      ┌─────┐      ┌─────┐      ┌─────┐
      │ G11 │      │ G12 │      │ G13 │      │ G14 │      Go runtime
      └─────┘      └─────┘      └─────┘      └─────┘
        │            │            │            │
      ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
      │ Thread A │ │ Thread B │ │ Thread C │ │ Thread D │ OS
      └──────────┘ └──────────┘ └──────────┘ └──────────┘
      

      The Go runtime scheduler doesn't decide which threads run on the CPU — that's the operating system scheduler's job. The Go runtime makes sure all goroutines run on the threads it manages, but the OS controls how and when those threads actually get CPU time.

      Goroutine scheduler

      The scheduler's job is to run M goroutines on N operating system threads, where M can be much larger than N. Here's a simple way to do it (a toy simulation in Go follows the diagrams below):

      1. Put all goroutines in a queue.
      2. Take N goroutines from the queue and run them.
      3. If a running goroutine gets blocked (for example, waiting to read from a channel or waiting on a mutex), put it back in the queue and run the next goroutine from the queue.

      Take goroutines G11-G14 and run them:

      ┌─────┐┌─────┐┌─────┐┌─────┐┌─────┐┌─────┐
      │ G15 ││ G16 ││ G17 ││ G18 ││ G19 ││ G20 │          queue
      └─────┘└─────┘└─────┘└─────┘└─────┘└─────┘
      ┌─────┐      ┌─────┐      ┌─────┐      ┌─────┐
      │ G11 │      │ G12 │      │ G13 │      │ G14 │      running
      └─────┘      └─────┘      └─────┘      └─────┘
        │            │            │            │
      ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
      │ Thread A │ │ Thread B │ │ Thread C │ │ Thread D │
      └──────────┘ └──────────┘ └──────────┘ └──────────┘
      

      Goroutine G12 got blocked while reading from the channel. Put it back in the queue and replace it with G15:

      ┌─────┐┌─────┐┌─────┐┌─────┐┌─────┐┌─────┐
      │ G16 ││ G17 ││ G18 ││ G19 ││ G20 ││ G12 │          queue
      └─────┘└─────┘└─────┘└─────┘└─────┘└─────┘
      ┌─────┐      ┌─────┐      ┌─────┐      ┌─────┐
      │ G11 │      │ G15 │      │ G13 │      │ G14 │      running
      └─────┘      └─────┘      └─────┘      └─────┘
        │            │            │            │
      ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
      │ Thread A │ │ Thread B │ │ Thread C │ │ Thread D │
      └──────────┘ └──────────┘ └──────────┘ └──────────┘
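
      To make the model concrete, here's a toy simulation of this queue-based approach (a minimal sketch, not the runtime's actual algorithm): tasks stand in for goroutines, slots stand in for threads, and a task that hasn't finished goes to the back of the queue, as if it had blocked or been preempted.

      package main

      import "fmt"

      // task stands in for a goroutine: it needs a few "time slices" to finish.
      type task struct {
          id   int
          work int // remaining time slices
      }

      func main() {
          const slots = 4 // stands in for N worker threads

          // 8 tasks, each needing 2 time slices.
          var queue []*task
          for i := 1; i <= 8; i++ {
              queue = append(queue, &task{id: i, work: 2})
          }

          for tick := 1; len(queue) > 0; tick++ {
              // Take up to `slots` tasks from the front of the queue and "run" them.
              n := min(slots, len(queue))
              running := queue[:n]
              queue = queue[n:]

              fmt.Printf("tick %d: running", tick)
              for _, t := range running {
                  fmt.Printf(" G%d", t.id)
                  t.work--
              }
              fmt.Println()

              // Unfinished tasks go to the back of the queue.
              for _, t := range running {
                  if t.work > 0 {
                      queue = append(queue, t)
                  }
              }
          }
      }

      With 8 tasks and 4 slots, every task gets a turn on every other tick, so none of them starve.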
      

      But there are a few things to keep in mind.

      Starvation

      Let's say goroutines G11–G14 are running smoothly without getting blocked by mutexes or channels. Does that mean goroutines G15–G20 won't run at all and will just have to wait (starve) until one of G11–G14 finally finishes? That would be unfortunate.

      That's why the scheduler checks each running goroutine roughly every 10 ms to decide if it's time to pause it and put it back in the queue. This approach is called preemptive scheduling: the scheduler can interrupt running goroutines when needed so others have a chance to run too.
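
      Here's a minimal sketch of preemption in action (assuming Go 1.14 or later, where asynchronous preemption was introduced). Even with a single thread and a goroutine that never blocks, the main goroutine still gets its turns:

      package main

      import (
          "fmt"
          "runtime"
          "time"
      )

      func main() {
          runtime.GOMAXPROCS(1) // a single thread for running goroutines

          go func() {
              // Pure CPU work with no function calls, channel operations,
              // or sleeps, so this goroutine never yields on its own.
              for {
              }
          }()

          // Thanks to preemption, this goroutine still gets scheduled
          // and prints its ticks.
          for i := 1; i <= 3; i++ {
              time.Sleep(100 * time.Millisecond)
              fmt.Println("tick", i)
          }
      }

      Before Go 1.14, a tight loop like this couldn't be interrupted, and the program would hang as soon as the busy goroutine took over the only thread.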

      System calls

      The scheduler can manage a goroutine while it's running Go code. But what happens if a goroutine makes a system call, like reading from disk? In that case, the scheduler can't take the goroutine off the thread, and there's no way to know how long the system call will take. For example, if goroutines G11–G14 in our example spend a long time in system calls, all worker threads will be blocked, and the program will basically "freeze".

      To solve this problem, the scheduler starts new threads if the existing ones get blocked in a system call. For example, here's what happens if G11 and G12 make system calls:

      ┌─────┐┌─────┐┌─────┐┌─────┐
      │ G17 ││ G18 ││ G19 ││ G20 │                        queue
      └─────┘└─────┘└─────┘└─────┘
      
      ┌─────┐      ┌─────┐      ┌─────┐      ┌─────┐
      │ G15 │      │ G16 │      │ G13 │      │ G14 │      running
      └─────┘      └─────┘      └─────┘      └─────┘
        │            │            │            │
      ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
      │ Thread E │ │ Thread F │ │ Thread C │ │ Thread D │
      └──────────┘ └──────────┘ └──────────┘ └──────────┘
      
      ┌─────┐      ┌─────┐
      │ G11 │      │ G12 │                                syscalls
      └─────┘      └─────┘
        │            │
      ┌──────────┐ ┌──────────┐
      │ Thread A │ │ Thread B │
      └──────────┘ └──────────┘
      

      Here, the scheduler started two new threads, E and F, and assigned goroutines G15 and G16 from the queue to these threads.

      When G11 and G12 finish their system calls, the scheduler will stop or terminate the extra threads (E and F) and keep running the goroutines on four threads: A-B-C-D.

      This is a simplified model of how the goroutine scheduler works in Go. If you want to learn more, I recommend watching the talk by Dmitry Vyukov, one of the scheduler's developers: Go scheduler: Implementing language with lightweight concurrency (video, slides)

      GOMAXPROCS

      We said that the scheduler uses N threads to run goroutines. In the Go runtime, the value of N is set by a parameter called GOMAXPROCS.

      The GOMAXPROCS runtime setting controls the maximum number of operating system threads the Go scheduler can use to execute goroutines concurrently (not counting the goroutines running syscalls). It defaults to the value of runtime.NumCPU, which is the number of logical CPUs on the machine.

      Strictly speaking, runtime.NumCPU is either the total number of logical CPUs or the number allowed by the CPU affinity mask, whichever is lower. This can be adjusted by the CPU quota, as explained below.

      For example, on my 8-core laptop, the default value of GOMAXPROCS is also 8:

      maxProcs := runtime.GOMAXPROCS(0) // returns the current value
      fmt.Println("NumCPU:", runtime.NumCPU())
      fmt.Println("GOMAXPROCS:", maxProcs)
      
      
      
      NumCPU: 8
      GOMAXPROCS: 8
      

      You can change GOMAXPROCS by setting the GOMAXPROCS environment variable or by calling runtime.GOMAXPROCS():

      // Get the default value.
      fmt.Println("GOMAXPROCS default:", runtime.GOMAXPROCS(0))
      
      // Change the value.
      runtime.GOMAXPROCS(1)
      fmt.Println("GOMAXPROCS custom:", runtime.GOMAXPROCS(0))
      
      
      
      GOMAXPROCS default: 8
      GOMAXPROCS custom: 1
      

      You can also undo the manual changes and go back to the default value set by the runtime. To do this, use the runtime.SetDefaultGOMAXPROCS function (Go 1.25+):

      GOMAXPROCS=2 go run nproc.go
      
      
      
      // Using the environment variable.
      fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
      
      // Using the manual setting.
      runtime.GOMAXPROCS(4)
      fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
      
      // Back to the default value.
      runtime.SetDefaultGOMAXPROCS()
      fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
      
      
      
      GOMAXPROCS: 2
      GOMAXPROCS: 4
      GOMAXPROCS: 8
      

      CPU quota

      Go programs often run in containers, like those managed by Docker or Kubernetes. These systems let you limit the CPU resources for a container using a Linux feature called cgroups.

      A cgroup (control group) in Linux lets you group processes together and control how much CPU, memory, and network I/O they can use by setting limits and priorities.

      For example, here's how you can limit a Docker container to use only four CPUs:

      docker run --cpus=4 golang:1.24-alpine go run /app/nproc.go
      
      
      
      // /app/nproc.go
      maxProcs := runtime.GOMAXPROCS(0) // returns the current value
      fmt.Println("NumCPU:", runtime.NumCPU())
      fmt.Println("GOMAXPROCS:", maxProcs)
      

      Before version 1.25, the Go runtime didn't consider the CPU quota when setting the GOMAXPROCS value. No matter how you limited CPU resources, GOMAXPROCS was always set to the number of logical CPUs on the host machine:

      docker run --cpus=4 golang:1.24-alpine go run /app/nproc.go
      
      
      
      NumCPU: 8
      GOMAXPROCS: 8
      

      Starting with version 1.25, the Go runtime respects the CPU quota:

      docker run --cpus=4 golang:1.25-alpine go run /app/nproc.go
      
      
      
      NumCPU: 8
      GOMAXPROCS: 4
      

      So, the default GOMAXPROCS value is set to either the number of logical CPUs or the CPU limit enforced by cgroup settings for the process, whichever is lower.

      Note on CPU limits

      Cgroups actually offer not just one, but two ways to limit CPU resources:

      • CPU quota — the maximum CPU time the cgroup may use within some period window.
      • CPU shares — relative CPU priorities given to the kernel scheduler.

      Docker's --cpus and --cpu-period/--cpu-quota set the quota, while --cpu-shares sets the shares.

      Kubernetes' CPU limit sets the quota, while CPU request sets the shares.

      Go's runtime GOMAXPROCS only takes the CPU quota into account, not the shares.

      Fractional CPU limits are rounded up:

      docker run --cpus=2.3 golang:1.25-alpine go run /app/nproc.go
      
      
      
      NumCPU: 8
      GOMAXPROCS: 3
      

      On a machine with multiple CPUs, the minimum default value for GOMAXPROCS is 2, even if the CPU limit is set lower:

      docker run --cpus=1 golang:1.25-alpine go run /app/nproc.go
      
      
      
      NumCPU: 8
      GOMAXPROCS: 2
      

      The Go runtime automatically updates GOMAXPROCS if the CPU limit changes. It happens up to once per second (less frequently if the application is idle).
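
      Here's a minimal sketch for observing these automatic updates: poll the current value once a second and log changes while you adjust the container's CPU limit from the outside (for example, with something like docker update --cpus=2 <container>):

      package main

      import (
          "fmt"
          "runtime"
          "time"
      )

      func main() {
          last := runtime.GOMAXPROCS(0) // 0 means "just report the current value"
          fmt.Println("GOMAXPROCS:", last)

          for {
              time.Sleep(time.Second)
              if cur := runtime.GOMAXPROCS(0); cur != last {
                  fmt.Printf("GOMAXPROCS changed: %d -> %d\n", last, cur)
                  last = cur
              }
          }
      }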

      Concurrency primitives

      Let's take a quick look at the three main concurrency tools for Go: goroutines, channels, and select.

      Goroutine

      A goroutine is implemented as a pointer to a runtime.g structure. Here's what it looks like:

      // runtime/runtime2.go
      type g struct {
          atomicstatus atomic.Uint32 // goroutine status
          stack        stack         // goroutine stack
          m            *m            // thread that runs the goroutine
          // ...
      }
      

      The g structure has many fields, but most of its memory is taken up by the stack, which holds the goroutine's local variables. By default, each stack gets 2 KB of memory, and it grows if needed.

      Because goroutines use very little memory, they're much more efficient than operating system threads, which usually need about 1 MB each. Also, switching between goroutines is very fast because it's handled by Go's scheduler and doesn't involve the operating system's kernel (unlike switching between threads managed by the OS). This lets Go run hundreds of thousands, or even millions, of goroutines on a single machine.
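
      Here's a rough sketch that shows this in practice (the exact numbers depend on the Go version and platform): start 100,000 parked goroutines and compare the runtime's stack memory before and after.

      package main

      import (
          "fmt"
          "runtime"
          "sync"
      )

      func main() {
          const n = 100_000

          var before runtime.MemStats
          runtime.ReadMemStats(&before)

          var wg sync.WaitGroup
          stop := make(chan struct{})
          for range n {
              wg.Add(1)
              go func() {
                  defer wg.Done()
                  <-stop // park the goroutine until the measurement is done
              }()
          }

          var after runtime.MemStats
          runtime.ReadMemStats(&after)

          fmt.Println("goroutines:", runtime.NumGoroutine())
          fmt.Printf("stack memory grew by ~%d KB (~%d KB per goroutine)\n",
              (after.StackSys-before.StackSys)/1024,
              (after.StackSys-before.StackSys)/1024/n)

          close(stop)
          wg.Wait()
      }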

      Channel

      A channel is implemented as a pointer to a runtime.hchan structure. Here's what it looks like:

      // runtime/chan.go
      type hchan struct {
          // channel buffer
          qcount   uint           // number of items in the buffer
          dataqsiz uint           // buffer array size
          buf      unsafe.Pointer // pointer to the buffer array
      
          // closed channel flag
          closed uint32
      
          // queues of goroutines waiting to receive and send
          recvq waitq // waiting to receive from the channel
          sendq waitq // waiting to send to the channel
      
          // protects the channel state
          lock mutex
      
          // ...
      }
      

      The buffer array (buf) has a fixed size (dataqsiz, which you can get with the cap() builtin). It's created when you make a buffered channel. The number of items in the channel (qcount, which you can get with the len() builtin) increases when you send to the channel and decreases when you receive from it.

      The close() builtin sets the closed field to 1.

      Sending an item to an unbuffered channel, or to a buffered channel that's already full, puts the goroutine into the sendq queue. Receiving from an empty channel puts the goroutine into the recvq queue.
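
      Here's a small sketch that ties these fields to the builtins: cap() reports the buffer size (dataqsiz), len() reports the number of buffered items (qcount), and a send to a full buffer parks the sender in sendq until a receiver frees a slot.

      package main

      import (
          "fmt"
          "time"
      )

      func main() {
          ch := make(chan int, 2) // buffered channel: dataqsiz = 2

          ch <- 1
          ch <- 2
          fmt.Println("len:", len(ch), "cap:", cap(ch)) // len: 2 cap: 2

          go func() {
              time.Sleep(100 * time.Millisecond)
              fmt.Println("received:", <-ch) // frees a buffer slot
          }()

          ch <- 3 // the buffer is full: this send parks the goroutine in sendq
          fmt.Println("send of 3 completed")

          close(ch) // sets the closed flag
          fmt.Println("after close, len:", len(ch))
      }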

      Select

      The select logic is implemented in the runtime.selectgo function. It's a huge function that takes a list of select cases and (very simply put) works as follows (a small example follows the list):

      • Go through the cases and check if the matching channels are ready to send or receive.
      • If several cases are ready, choose one at random (to prevent starvation, where some cases are always chosen and others are never chosen).
      • Once a case is selected, perform the send or receive operation on the matching channel.
      • If there is a default case and no other cases are ready, pick the default.
      • If no cases are ready, block the goroutine and add it to the channel queue for each case.
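
      As promised, here's a minimal sketch of this behavior: a random choice between two ready channels, and a default case that fires when nothing is ready.

      package main

      import "fmt"

      func main() {
          a := make(chan string, 1)
          b := make(chan string, 1)
          a <- "from a"
          b <- "from b"

          // Both cases are ready, so selectgo picks one at random.
          select {
          case msg := <-a:
              fmt.Println(msg)
          case msg := <-b:
              fmt.Println(msg)
          }

          empty := make(chan string)

          // No case is ready, so the default case runs instead of blocking.
          select {
          case msg := <-empty:
              fmt.Println(msg)
          default:
              fmt.Println("nothing ready")
          }
      }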

      ✎ Exercise: Runtime simulator

      Practice is crucial for turning abstract knowledge into skills; theory alone isn't enough. The full version of the book contains a lot of exercises — that's why I recommend getting it.

      If you are okay with just theory for now, let's continue.

      Scheduler metrics

      Metrics show how the Go runtime is performing, like how much heap memory it uses or how long garbage collection pauses take. Each metric has a unique name (for example, /sched/gomaxprocs:threads) and a value, which can be a number or a histogram.

      We use the runtime/metrics package to work with metrics.

      List all available metrics with descriptions:

      func main() {
          descs := metrics.All()
          for _, d := range descs {
              fmt.Printf("Name: %s\n", d.Name)
              fmt.Printf("Description: %s\n", d.Description)
              fmt.Printf("Kind: %s\n", kindToString(d.Kind))
              fmt.Println()
          }
      }
      
      func kindToString(k metrics.ValueKind) string {
          switch k {
          case metrics.KindUint64:
              return "KindUint64"
          case metrics.KindFloat64:
              return "KindFloat64"
          case metrics.KindFloat64Histogram:
              return "KindFloat64Histogram"
          case metrics.KindBad:
              return "KindBad"
          default:
              return "Unknown"
          }
      }
      
      
      
      Name: /cgo/go-to-c-calls:calls
      Description: Count of calls made from Go to C by the current process.
      Kind: KindUint64
      
      Name: /cpu/classes/gc/mark/assist:cpu-seconds
      Description: Estimated total CPU time goroutines spent performing GC
      tasks to assist the GC and prevent it from falling behind the application.
      This metric is an overestimate, and not directly comparable to system
      CPU time measurements. Compare only with other /cpu/classes metrics.
      Kind: KindFloat64
      ...
      

      Get the value of a specific metric:

      samples := []metrics.Sample{
          {Name: "/sched/gomaxprocs:threads"},
          {Name: "/sched/goroutines:goroutines"},
      }
      metrics.Read(samples)
      
      for _, s := range samples {
          // Assumes the value is a uint64. Check the metric description
          // or use s.Value.Kind() if you're not sure.
          fmt.Printf("%s: %v\n", s.Name, s.Value.Uint64())
      }
      
      
      
      /sched/gomaxprocs:threads: 8
      /sched/goroutines:goroutines: 1
      

      Here are some goroutine-related metrics:

      /sched/goroutines-created:goroutines

      • Count of goroutines created since program start (Go 1.26+).

      /sched/goroutines:goroutines

      • Count of live goroutines (created but not finished yet).
      • An increase in this metric may indicate a goroutine leak.

      /sched/goroutines/not-in-go:goroutines

      • Approximate count of goroutines running or blocked in a system call or cgo call (Go 1.26+).
      • An increase in this metric may indicate problems with such calls.

      /sched/goroutines/runnable:goroutines

      • Approximate count of goroutines ready to execute, but not executing (Go 1.26+).
      • An increase in this metric may mean the system is overloaded and the CPU can't keep up with the growing number of goroutines.

      /sched/goroutines/running:goroutines

      • Approximate count of goroutines executing (Go 1.26+).
      • Always less than or equal to /sched/gomaxprocs:threads.

      /sched/goroutines/waiting:goroutines

      • Approximate count of goroutines waiting on a resource — I/O or sync primitives (Go 1.26+).
      • An increase in this metric may indicate issues with mutex locks, other synchronization blocks, or I/O issues.

      /sched/threads/total:threads

      • The current count of live threads that are owned by the runtime (Go 1.26+).

      /sched/gomaxprocs:threads

      • The current runtime.GOMAXPROCS setting — the maximum number of operating system threads the scheduler can use to execute goroutines concurrently.

      In real projects, runtime metrics are usually exported automatically with client libraries for Prometheus, OpenTelemetry, or other observability tools. Here's an example for Prometheus:

      package main
      
      import (
          "net/http"
          "github.com/prometheus/client_golang/prometheus/promhttp"
      )
      
      func main() {
          // Export runtime/metrics in Prometheus format at the /metrics endpoint.
          http.Handle("/metrics", promhttp.Handler())
          http.ListenAndServe("localhost:2112", nil)
      }
      

      The exported metrics are then collected by Prometheus, visualized, and used to set up alerts.

      Profiling

      Profiling helps you understand exactly what the program is doing, what resources it uses, and where in the code this happens. Profiling is often not recommended in production because it's a "heavy" process that can slow things down. But that's not the case with Go.

      Go's profiler is designed for production use. It uses sampling, so it doesn't track every single operation. Instead, it takes quick snapshots of the runtime every 10 ms and puts them together to give you a full picture.

      Go supports the following profiles:

      • CPU. Shows how much CPU time each function uses. Use it to find performance bottlenecks if your program is running slowly because of CPU-heavy tasks.
      • Heap. Shows the heap memory currently used by each function. Use it to detect memory leaks or excessive memory usage.
      • Allocs. Shows which functions have used heap memory since the profiler started (not just currently). Use it to optimize garbage collection or reduce allocations that impact performance.
      • Goroutine. Shows the stack traces of all current goroutines. Use it to get an overview of what the program is doing.
      • Block. Shows where goroutines block waiting on synchronization primitives like channels, mutexes and wait groups. Use it to identify synchronization bottlenecks and issues in data exchange between goroutines. Disabled by default.
      • Mutex. Shows lock contentions on mutexes and internal runtime locks. Use it to find "problematic" mutexes that goroutines are frequently waiting for. Disabled by default.

      The easiest way to add a profiler to your app is by using the net/http/pprof package. When you import it, it automatically registers HTTP handlers for collecting profiles:

      package main
      
      import (
          "net/http"
          _ "net/http/pprof"
          "sync"
      )
      
      func main() {
          // Enable block and mutex profiles.
          runtime.SetBlockProfileRate(1)
          runtime.SetMutexProfileFraction(1)
          // Start an HTTP server on localhost.
          // Profiler HTTP handlers are automatically
          // registered when you import "net/http/pprof".
          http.ListenAndServe("localhost:6060", nil)
      }
      

      Or you can register profiler handlers manually:

      var wg sync.WaitGroup
      
      wg.Go(func() {
          // Application server running on port 8080.
          mux := http.NewServeMux()
          mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
              w.Write([]byte("Hello, World!"))
          })
          log.Println("Starting hello server on :8080")
          log.Fatal(http.ListenAndServe(":8080", mux))
      })
      
      wg.Go(func() {
          // Profiling server running on localhost on port 6060.
          runtime.SetBlockProfileRate(1)
          runtime.SetMutexProfileFraction(1)
      
          mux := http.NewServeMux()
          mux.HandleFunc("/debug/pprof/", pprof.Index)
          mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
          mux.HandleFunc("/debug/pprof/trace", pprof.Trace)
          log.Println("Starting pprof server on :6060")
          log.Fatal(http.ListenAndServe("localhost:6060", mux))
      })
      
      wg.Wait()
      

      After that, you can start profiling with a specific profile by running the go tool pprof command with the matching URL, or just open that URL in your browser:

      go tool pprof -proto \
        "http://localhost:6060/debug/pprof/profile?seconds=N" > cpu.pprof
      
      go tool pprof -proto \
        http://localhost:6060/debug/pprof/heap > heap.pprof
      
      go tool pprof -proto \
        http://localhost:6060/debug/pprof/allocs > allocs.pprof
      
      go tool pprof -proto \
        http://localhost:6060/debug/pprof/goroutine > goroutine.pprof
      
      go tool pprof -proto \
        http://localhost:6060/debug/pprof/block > block.pprof
      
      go tool pprof -proto \
        http://localhost:6060/debug/pprof/mutex > mutex.pprof
      

      For the CPU profile, you can choose how long the profiler runs (the default is 30 seconds). Other profiles are taken instantly.

      After running the profiler, you'll get a binary file that you can open in the browser using the same go tool pprof utility. For example:

      go tool pprof -http=localhost:8080 cpu.pprof
      

      The pprof web interface lets you view the same profile in different ways. My personal favorites are the flame graph , which clearly shows the call hierarchy and resource usage, and the source view, which shows the exact lines of code.

      Flame graph view: shows the call hierarchy and resource usage.
      Source view: shows the exact lines of code.

      You can also profile manually. To collect a CPU profile, use StartCPUProfile and StopCPUProfile:

      func main() {
          // Start profiling and stop it when main exits.
          // Ignore errors for simplicity.
          file, _ := os.Create("cpu.prof")
          defer file.Close()
          pprof.StartCPUProfile(file)
          defer pprof.StopCPUProfile()
      
          // The rest of the program code.
          // ...
      }
      

      To collect other profiles, use Lookup:

      // profile collects a profile with the given name.
      func profile(name string) {
          // Ignore errors for simplicity.
          file, _ := os.Create(name + ".prof")
          defer file.Close()
          p := pprof.Lookup(name)
          if p != nil {
              p.WriteTo(file, 0)
          }
      }
      
      func main() {
          runtime.SetBlockProfileRate(1)
          runtime.SetMutexProfileFraction(1)
      
          // ...
          profile("heap")
          profile("allocs")
          // ...
      }
      

      Profiling is a broad topic, and we've only scratched the surface here.

      Tracing

      Tracing records certain types of events while the program is running, mainly those related to concurrency and memory:

      • goroutine creation and state changes;
      • system calls;
      • garbage collection;
      • heap size changes;
      • and more.

      If you enabled the profiling server as described earlier, you can collect a trace using this URL:

      http://localhost:6060/debug/pprof/trace?seconds=N
      

      Trace files can be quite large, so it's better to use a small N value.

      After tracing is complete, you'll get a binary file that you can open in the browser using the go tool trace utility:

      go tool trace -http=localhost:6060 trace.out
      

      In the trace web interface, you'll see each goroutine's "lifecycle" on its own line. You can zoom in and out of the trace with the W and S keys, and you can click on any event to see more details:

      Trace web interface

      You can also collect a trace manually:

      func main() {
          // Start tracing and stop it when main exits.
          // Ignore errors for simplicity.
          file, _ := os.Create("trace.out")
          defer file.Close()
          trace.Start(file)
          defer trace.Stop()
      
          // The rest of the program code.
          // ...
      }
      

      Flight recorder

      Flight recording is a tracing technique that collects execution data, such as function calls and memory allocations, within a sliding window that's limited by size or duration. It helps to record traces of interesting program behavior, even if you don't know in advance when it will happen.

      The trace.FlightRecorder type (Go 1.25+) implements a flight recorder in Go. It tracks a moving window over the execution trace produced by the runtime, always containing the most recent trace data.

      Here's an example of how you might use it.

      First, configure the sliding window:

      // Configure the flight recorder to keep
      // at least 5 seconds of trace data,
      // with a maximum buffer size of 3MB.
      // Both of these are hints, not strict limits.
      cfg := trace.FlightRecorderConfig{
          MinAge:   5 * time.Second,
          MaxBytes: 3 << 20, // 3MB
      }
      

      Then create the recorder and start it:

      // Create and start the flight recorder.
      rec := trace.NewFlightRecorder(cfg)
      rec.Start()
      defer rec.Stop()
      

      Continue with the application code as usual:

      // Simulate some workload.
      done := make(chan struct{})
      go func() {
          defer close(done)
          const n = 1 << 20
          var s []int
          for range n {
              s = append(s, rand.IntN(n))
          }
          fmt.Printf("done filling slice of %d elements\n", len(s))
      }()
      <-done
      

      Finally, save the trace snapshot to a file when an important event occurs:

      // Save the trace snapshot to a file.
      file, _ := os.Create("/tmp/trace.out")
      defer file.Close()
      n, _ := rec.WriteTo(file)
      fmt.Printf("wrote %dB to trace file\n", n)
      
      
      
      done filling slice of 1048576 elements
      wrote 8441B to trace file
      

      Use go tool trace to view the trace in the browser:

      go tool trace -http=localhost:6060 /tmp/trace.out
      

      ✎ Exercise: Comparing blocks

      Practice is crucial for turning abstract knowledge into skills; theory alone isn't enough. The full version of the book contains a lot of exercises — that's why I recommend getting it.

      If you are okay with just theory for now, let's continue.

      Keep it up

      Now you can see how challenging the Go scheduler's job is. Fortunately, most of the time you don't need to worry about how it works behind the scenes — sticking to goroutines, channels, select, and other synchronization primitives is usually enough.

      This is the final chapter of my "Gist of Go: Concurrency" book. I invite you to read it — the book is an easy-to-understand, interactive guide to concurrency programming in Go.

      Pre-order for $10 or read online

    12. 🔗 @malcat@infosec.exchange [#Malcat](https://infosec.exchange/tags/Malcat) tip: mastodon

      #Malcat tip:

      #Kesakode can be useful even when facing unknown/packed samples. Check "Show UNK" and focus on unique code and strings.

      Here a simple downloader:

    13. 🔗 The Pragmatic Engineer Downdetector and the real cost of no upstream dependencies rss

      The below is one out of five topics from [_The Pulse 154_](https://newsletter.pragmaticengineer.com/p/the-pulse-154?ref=blog.pragmaticengineer.com). Full subscribers received the below article two weeks ago. To get articles like this in your inbox, every week, subscribe here.

      Many subscribers expense The Pragmatic Engineer Newsletter to their learning and development budget. If you have such a budget, here's an email you could send to your manager.


      One amusing detail of the November 2025 Cloudflare outage is that the realtime outage and monitoring service, Downdetector, went down, revealing a key dependency on Cloudflare. At first, this looks odd; after all, Downdetector is about monitoring uptime, so why would it take on a key dependency like Cloudflare if it means this can happen?

      Downdetector was built multi-region and multi-cloud, which I confirmed by talking with Dhruv Arora, Senior Director of Engineering at Ookla, the company behind Downdetector. Multi-cloud resilience makes little sense for most products, but Downdetector was built to detect cloud provider outages as well. And for this, they needed to be multi-cloud!

      Still, Downdetector uses Cloudflare for DNS, Content Delivery (CDN), and Bot Protection. So, why would it take on this one key dependency, as opposed to hosting everything on its own servers?

      A CDN has advantages that are hard to ignore, such as:

      • Drastically lower bandwidth costs - assets cached on the CDN don't need to be served from Downdetector's own infrastructure
      • Faster load times because assets on a CDN are served from Edge nodes nearer users
      • Protection from sudden traffic spikes, as would be common for Downdetector, especially during outages! Without a CDN, those spikes could overload their services
      • DDoS protection from bad actors taking the site offline with a distributed denial of service attack
      • Reduced infrastructure requirements, as Downdetector can run on fewer servers

      Downdetector's usage patterns reflect that it's a service very heavily used by consumers whom the business doesn't really monetize (Downdetector is free to use.) So, Downdetector could get rid of Cloudflare, but costs would surge, the site would become slower to load, and revenue wouldn't change.

      In the end, Downdetector's dependence on Cloudflare looks like a pragmatic choice given the business model: removing its upstream dependency on Cloudflare would get very expensive!

      Dhruv confirmed this and shared more about the design choices at Downdetector:

      "Building redundancy at the DNS & CDN layers would require enormous overhead. This is especially true as Cloudflare's Bot Protection is world- class, and building similar functionality would be a lot of effort. There are hyperscalers [cloud providers] that have this kind of redundancy built in. We will look into what we can do, but with a team size in the double digits, building up a core piece of infra like this is a pretty tall order: not just for us, but for any mid-sized team.

      We've learned that there are more things that we can improve for the future. For example, during the outage, the Cloudflare control plane was down, but their API wasn't. So, us having more Infrastructure as Code could have helped bring back Downdetector sooner.

      On our end, we also noticed that the outage wasn't global, so we were able to shift traffic around and reduce the impact.

      One more interesting detail: Cloudflare's Bot Protection went haywire during the outage, and started to block legitimate traffic. So, our team had to turn that off temporarily".

      Thanks very much to Dhruv and the Downdetector team for sharing details.

    14. 🔗 r/reverseengineering LIVE! Analyzing TTD traces in Binary Ninja with esReverse rss
    15. 🔗 r/reverseengineering Yesterday I came across this method in medium rss
    16. 🔗 @binaryninja@infosec.exchange We are teaming up with eShard on December 5th at 10am EST for a live demo that mastodon

      We are teaming up with eShard on December 5th at 10am EST for a live demo that walks through the full PetyA workflow! We'll look at how the malware drops and maps its DLL into memory, how TTD recording captures the full execution, and how the Binary Ninja debugger ties into their trace tooling. Join us here: https://youtube.com/live/nzar2L4GUJ8

    17. 🔗 Rust Blog crates.io: Malicious crates finch-rust and sha-rust rss

      Summary

      On December 5th, the crates.io team was notified by Kush Pandya from the Socket Threat Research Team of two malicious crates that were trying to cause confusion with the existing finch crate while adding a dependency on a malicious crate that performs data exfiltration.

      These crates were:

      • finch-rust - 1 version published November 25, 2025, downloaded 28 times, used sha-rust as a dependency
      • sha-rust - 8 versions published between November 20 and November 25, 2025, downloaded 153 times

      Actions taken

      The user in question, face-lessssss, was immediately disabled, and the crates in question were deleted from crates.io shortly after. We have retained the malicious crate files for further analysis.

      The deletions were performed at 15:52 UTC on December 5th.

      We reported the associated repositories to GitHub and the account has been removed there as well.

      Analysis

      Socket has published their analysis in a blog post.

      These crates had no dependent downstream crates on crates.io, and there is no evidence of either of these crates being downloaded outside of automated mirroring and scanning services.

      Thanks

      Our thanks to Kush Pandya from the Socket Threat Research Team for reporting the crates. We also want to thank Carol Nichols from the crates.io team and Adam Harvey from the Rust Foundation for aiding in the response.

    18. 🔗 Rust Blog Updating Rust's Linux musl targets to 1.2.5 rss

      Updating Rust's Linux musl targets to 1.2.5

      Beginning with Rust 1.93 (slated for stable release on 2026-01-22), the various *-linux-musl targets will all ship with musl 1.2.5. This primarily affects static musl builds for x86_64, aarch64, and powerpc64le which bundled musl 1.2.3. This update comes with several fixes and improvements, and a breaking change that affects the Rust ecosystem.

      For the Rust ecosystem, the primary motivation for this update is to receive major improvements to musl's DNS resolver which shipped in 1.2.4 and received bug fixes in 1.2.5. When using musl targets for static linking, this should make portable linux binaries that do networking more reliable, particularly in the face of large DNS records and recursive nameservers.

      However, 1.2.4 also comes with a breaking change: the removal of several legacy compatibility symbols that the Rust libc crate was using. A fix for this was shipped in libc 0.2.146 in June 2023 (2 years ago), and we have been waiting for newer versions of the libc crate to propagate throughout the ecosystem before shipping the musl update.

      A crater run in July 2024 found only about 2.4% of Rust projects were still affected. A crater run in June 2025 found 1.5% of Rust projects were affected. Most of that change comes from crater analyzing more Rust projects: the absolute number of broken projects went down by 15% while the absolute number of analyzed projects went up by 35%.

      At this point we expect there will be minimal breakage, and most breakage should be resolved by a cargo update. We believe this update shouldn't be held back any longer, as it contains critical fixes for the musl target.

      Manual inspection of some of the affected projects indicates they largely haven't run cargo update in 2 years, often because they haven't had any changes in 2 years. Fixing these crates is as easy as cargo update.

      Build failures from this change will typically look like "some extern functions couldn't be found; some native libraries may need to be installed or have their path specified", often specifically an "undefined reference to `open64`", often while trying to build very old versions of the `getrandom` crate (hence the outsized impact on gamedev projects that haven't updated their dependencies in several years in particular):

      Example Build Failure

      [INFO] [stderr]    Compiling guess_the_number v0.1.0 (/opt/rustwide/workdir)
      [INFO] [stdout] error: linking with `cc` failed: exit status: 1
      [INFO] [stdout]   |
      [INFO] [stdout]   = note:  "cc" "-m64" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/rcrt1.o" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crti.o" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crtbeginS.o" "/tmp/rustcMZMWZW/symbols.o" "<2 object files omitted>" "-Wl,--as-needed" "-Wl,-Bstatic" "/opt/rustwide/target/x86_64-unknown-linux-musl/debug/deps/{librand-bff7d8317cf08aa0.rlib,librand_chacha-612027a3597e9138.rlib,libppv_lite86-742ade976f63ace4.rlib,librand_core-be9c132a0f2b7897.rlib,libgetrandom-dc7f0d82f4cb384d.rlib,liblibc-abed7616303a3e0d.rlib,libcfg_if-66d55f6b302e88c8.rlib}.rlib" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/{libstd-*,libpanic_unwind-*,libobject-*,libmemchr-*,libaddr2line-*,libgimli-*,librustc_demangle-*,libstd_detect-*,libhashbrown-*,librustc_std_workspace_alloc-*,libminiz_oxide-*,libadler2-*,libunwind-*}.rlib" "-lunwind" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/{libcfg_if-*,liblibc-*}.rlib" "-lc" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/{librustc_std_workspace_core-*,liballoc-*,libcore-*,libcompiler_builtins-*}.rlib" "-L" "/tmp/rustcMZMWZW/raw-dylibs" "-Wl,-Bdynamic" "-Wl,--eh-frame-hdr" "-Wl,-z,noexecstack" "-nostartfiles" "-L" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained" "-L" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib" "-o" "/opt/rustwide/target/x86_64-unknown-linux-musl/debug/deps/guess_the_number-41a068792b5f051e" "-Wl,--gc-sections" "-static-pie" "-Wl,-z,relro,-z,now" "-nodefaultlibs" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crtendS.o" "<sysroot>/lib/rustlib/x86_64-unknown-linux-musl/lib/self-contained/crtn.o"
      [INFO] [stdout]   = note: some arguments are omitted. use `--verbose` to show all linker arguments
      [INFO] [stdout]   = note: /usr/bin/ld: /opt/rustwide/target/x86_64-unknown-linux-musl/debug/deps/libgetrandom-dc7f0d82f4cb384d.rlib(getrandom-dc7f0d82f4cb384d.getrandom.828c5c30a8428cf4-cgu.0.rcgu.o): in function `getrandom::util_libc::open_readonly':
      [INFO] [stdout]           /opt/rustwide/cargo-home/registry/src/index.crates.io-1949cf8c6b5b557f/getrandom-0.2.8/src/util_libc.rs:150:(.text._ZN9getrandom9util_libc13open_readonly17hdc55d6ead142a889E+0xbc): undefined reference to `open64'
      [INFO] [stdout]           collect2: error: ld returned 1 exit status
      [INFO] [stdout]           
      [INFO] [stdout]   = note: some `extern` functions couldn't be found; some native libraries may need to be installed or have their path specified
      [INFO] [stdout]   = note: use the `-l` flag to specify native libraries to link
      [INFO] [stdout]   = note: use the `cargo:rustc-link-lib` directive to specify the native libraries to link with Cargo (see https://doc.rust-lang.org/cargo/reference/build-scripts.html#rustc-link-lib)
      [INFO] [stdout] 
      [INFO] [stdout] 
      [INFO] [stderr] error: could not compile `guess_the_number` (bin "guess_the_number") due to 1 previous error
      

      Updated targets

      All Rust musl targets that bundle a copy of musl now bundle 1.2.5. All Rust musl targets now require musl 1.2.5 at a minimum.

      This mostly impacts just the three "Tier 2 With Host Tools" musl targets that were pinned to musl 1.2.3:

      • aarch64-unknown-linux-musl
      • x86_64-unknown-linux-musl
      • powerpc64le-unknown-linux-musl

      The fourth target at this level of support, loongarch64-unknown-linux-musl, is so new that it was always on musl 1.2.5.

      Due to an apparent configuration oversight with crosstool-ng, all other targets were already bundling musl 1.2.5. These targets were silently upgraded to musl 1.2.4 in Rust 1.74.0 and silently upgraded to musl 1.2.5 in Rust 1.86. This oversight has been rectified and all targets have been pinned to musl 1.2.5 to prevent future silent upgrades (but hey, no one noticing bodes well for the ecosystem impact of this change). Their documentation has now been updated to reflect the fact that bundling 1.2.5 is actually intentional, and that 1.2.5 is now considered a minimum requirement.

      Here are all the updated definitions:

      Tier 2 with Host Tools

      target| notes
      ---|---
      aarch64-unknown-linux-musl| ARM64 Linux with musl 1.2.5
      powerpc64le-unknown-linux-musl| PPC64LE Linux (kernel 4.19, musl 1.2.5)
      x86_64-unknown-linux-musl| 64-bit Linux with musl 1.2.5

      Tier 2 without Host Tools

      target| std| notes
      ---|---|---
      arm-unknown-linux-musleabi| ✓| Armv6 Linux with musl 1.2.5
      arm-unknown-linux-musleabihf| ✓| Armv6 Linux with musl 1.2.5, hardfloat
      armv5te-unknown-linux-musleabi| ✓| Armv5TE Linux with musl 1.2.5
      armv7-unknown-linux-musleabi| ✓| Armv7-A Linux with musl 1.2.5
      armv7-unknown-linux-musleabihf| ✓| Armv7-A Linux with musl 1.2.5, hardfloat
      i586-unknown-linux-musl| ✓| 32-bit Linux (musl 1.2.5, original Pentium)
      i686-unknown-linux-musl| ✓| 32-bit Linux with musl 1.2.5 (Pentium 4)
      riscv64gc-unknown-linux-musl| ✓| RISC-V Linux (kernel 4.20+, musl 1.2.5)

      Tier 3

      target| std| host| notes
      ---|---|---|---
      hexagon-unknown-linux-musl| ✓| | Hexagon Linux with musl 1.2.5
      mips-unknown-linux-musl| ✓| | MIPS Linux with musl 1.2.5
      mips64-openwrt-linux-musl| ?| | MIPS64 for OpenWrt Linux musl 1.2.5
      mips64-unknown-linux-muslabi64| ✓| | MIPS64 Linux, N64 ABI, musl 1.2.5
      mips64el-unknown-linux-muslabi64| ✓| | MIPS64 (little endian) Linux, N64 ABI, musl 1.2.5
      mipsel-unknown-linux-musl| ✓| | MIPS (little endian) Linux with musl 1.2.5
      powerpc-unknown-linux-musl| ?| | PowerPC Linux with musl 1.2.5
      powerpc-unknown-linux-muslspe| ?| | PowerPC SPE Linux with musl 1.2.5
      powerpc64-unknown-linux-musl| ✓| ✓| PPC64 Linux (kernel 4.19, musl 1.2.5)
      riscv32gc-unknown-linux-musl| ?| | RISC-V Linux (kernel 5.4, musl 1.2.5 + RISCV32 support patches)
      s390x-unknown-linux-musl| ✓| | S390x Linux (kernel 3.2, musl 1.2.5)
      thumbv7neon-unknown-linux-musleabihf| ?| | Thumb2-mode Armv7-A Linux with NEON, musl 1.2.5
      x86_64-unikraft-linux-musl| ✓| | 64-bit Unikraft with musl 1.2.5

  3. December 04, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-12-04 rss

      IDA Plugin Updates on 2025-12-04

      New Releases:

      Activity:

    2. 🔗 @cxiao@infosec.exchange JD....bro.......😭😭😭😭😭 mastodon

      JD....bro.......😭😭😭😭😭

      (new signalgate screenshots from
      https://media.defense.gov/2025/Dec/04/2003834916/-1/-1/1/DODIG_2026_021.PDF)

    3. 🔗 Kagi release notes Dec 4th, 2025 - New Kagi Search companions and quality-of-life improvements rss

      Kagi Search

      Introducing Kagi Search companions

      You can now choose your preferred companion on Kagi Search! And more companions coming soon.

      Other improvements and bug fixes

      • Context menus for inline news and videos are stuck inside the frame #9127 @pma_snek
      • "We haven't found anything" when asking a follow-up question in Quick Answer #8986 @jstolarek
      • Feedback on the new Quick Answer experience #8729 @Jesal
      • Custom Default Model for Quick Answer Follow-ups #5533 @MaCl0wSt
      • Quick Answer 'Show More' Memory #8902 @Dustin
      • Out of place image in wikipedia widget #9114 @Numerlor
      • Quick. kagibara eye animations. #9103 @kagifuser
      • Doggo's snoot isn't contrasting well on the pricing page #8477 @Thibaultmol
      • Quick Answer - Continue in Assistant not clickable #9010 @Dustin

      Kagi Assistant

      We've made the following model upgrades:

      • Research Assistant now uses Nano Banana Pro for image generation and editing
      • Claude 4.5 Opus and Deepseek v3.2 have been updated to their latest versions

      • Weird recording of voice in assistant #8672 @StefanHaglund
      • GPT OSS 120B stray think tag #8951 @claudinec
      • Citation popups cropped within tables #9025 @hinq
      • Include chat title in shared chat link preview #9045 @bert

      Kagi Translate

      • [Extension] Discord, Whatsapp, Telegram, Reddit integrations
      • [Extension] Redirect from translate.kagi.com/url #8503 @Thibaultmol
      • [Extension] Statistics page in setting
      • [Extension] Apply suggestions (translate/proofreading) directly from overlay #8695 @orschiro

      Slop Detective

      • Slop Detective page doesn't scroll, which can prevent progress #9079 @jducoeur
      • Slop Detective image zoom/magnifying glass should be shifted on phone/mobile/touch screens #9080 @dru4522
      • Slop Detective calls all images "photographs" #9072 @ccombsdc

      Post of the week

      Here is this week's featured social media mention: We truly appreciate your support in spreading the word, so be sure to follow us and tag us in your comments!

      Community creations

      James Downs built a Kagi News app for Pebble watches. Check out this growing list of Kagi community creations for various devices and apps! Have one to share? Let us know.

      Small Web badges

      Small Web initiative members can display badges on their websites to identify themselves as part of a community committed to authentic content created by humans. Grab them here! And keep exploring what the Small Web has to offer.

      Collection of five Small Web initiative badges in pixel art style with an orange and black color scheme, some of which contain Kagi's dog mascot Doggo.

      End-of-Year Community Event

      Illustration of Kagi's mascot Doggo flying towards a toy yellow ball surrounded by clouds, with the text: Kagi End of Year Community Event and the date and time: December 19 at 9am PST.

      As we wrap up an exciting year for Kagi, we'd love to have you join us for our end-of-year community event on December 19 at 09:00 PST (convert to your local time).

      We'll share a comprehensive "Year in Review" covering Kagi's major updates, product launches, and what's ahead, followed by an interactive Q&A session where we'll address your questions directly.

      How to participate:

    4. 🔗 livestorejs/livestore "v0.4.0-dev.20" release

      "Release 0.4.0-dev.20 including Chrome Extension"

    5. 🔗 r/wiesbaden Drunk women rob a driver of his taxi -- First they argued among themselves - then they attacked the taxi driver: Four women injured a 66-year-old driver in Wiesbaden. They demanded money; in the end, the taxi was gone. rss
    6. 🔗 r/reverseengineering CVE Proof-of-Concept Finder: A Direct Lens Into Exploit Code rss
    7. 🔗 r/LocalLLaMA legends rss

      legends

      submitted by /u/Nunki08
      [link] [comments]

    8. 🔗 r/reverseengineering How do I Inspect virtual memory page tables physical memory on windows rss
    9. 🔗 r/wiesbaden For anyone who sometimes wonders about bus delays… rss
    10. 🔗 r/LocalLLaMA New model, microsoft/VibeVoice-Realtime-0.5B rss

      New model, microsoft/VibeVoice-Realtime-0.5B

      VibeVoice: A Frontier Open-Source Text-to-Speech Model. VibeVoice-Realtime is a lightweight real‑time text-to-speech model supporting streaming text input. It can be used to build realtime TTS services, narrate live data streams, and let different LLMs start speaking from their very first tokens (plug in your preferred model) long before a full answer is generated. It produces initial audible speech in ~300 ms (hardware dependent).

      Key features:

      • Parameter size: 0.5B (deployment-friendly)
      • Realtime TTS (~300 ms first audible latency)
      • Streaming text input
      • Robust long-form speech generation

      submitted by /u/edward-dev
      [link] [comments]

    11. 🔗 r/wiesbaden Heating costs for a 100 sqm old-building (Altbau) apartment? rss

      Hi there! I know this is hard to answer in general terms, but I'd still like a few reference points. Does anyone here live in an old-building (Altbau) apartment of around 100 sqm in Wiesbaden and would tell me how much you pay per month for heating? Preferably with gas. I've heard 100€ from one colleague and something like 300-400€ from someone else, so I'm correspondingly unsure. In my case it's a gas-fired floor heating (Gas-Etagenheizung). Thanks in advance!

      submitted by /u/kvrioss
      [link] [comments]

    12. 🔗 r/reverseengineering Beyond Decompilers: Runtime Analysis of Evasive Android Code rss
    13. 🔗 jj-vcs/jj v0.36.0 release

      About

      jj is a Git-compatible version control system that is both simple and powerful. See
      the installation instructions to get started.

      Release highlights

      301 redirects are being issued towards the new domain, so any existing links
      should not be broken.

      • Fixed race condition that could cause divergent operations when running
        concurrent jj commands in colocated repositories. It is now safe to
        continuously run e.g. jj log without --ignore-working-copy in one
        terminal while you're running other commands in another terminal.
        #6830

      • jj now ignores $PAGER set in the environment and uses less -FRX on most
        platforms (:builtin on Windows). See the docs for
        more information, and #3502 for
        motivation.

      Breaking changes

      • In filesets or path patterns, glob matching
        is enabled by default. You can use cwd:"path" to match literal paths.

      • In the following commands, string pattern
        arguments
        are now parsed the same way they
        are in revsets and can be combined with logical operators: jj bookmark delete/forget/list/move, jj tag delete/list, jj git clone/fetch/push

      • In the following commands, unmatched bookmark/tag names are no longer an
        error. A warning will be printed instead: jj bookmark delete/forget/move/track/untrack, jj tag delete, jj git clone/push

      • The default string pattern syntax in revsets will be changed to glob: in a
        future release. You can opt in to the new default by setting
        ui.revsets-use-glob-by-default=true.

      • Upgraded scm-record from v0.8.0 to v0.9.0. See release notes at
        https://github.com/arxanas/scm-record/releases/tag/v0.9.0.

      • The minimum supported Rust version (MSRV) is now 1.89.

      • On macOS, the deprecated config directory ~/Library/Application Support/jj
        is not read anymore. Use $XDG_CONFIG_HOME/jj instead (defaults to
        ~/.config/jj).

      • Sub-repos are no longer tracked. Any directory containing .jj or .git
        is ignored. Note that Git submodules are unaffected by this.

      Deprecations

      • The --destination/-d arguments for jj rebase, jj split, jj revert,
        etc. were renamed to --onto/-o. The reasoning is that --onto,
        --insert-before, and --insert-after are all destination arguments, so
        calling one of them --destination was confusing and unclear. The old names
        will be removed at some point in the future, but we realize that they are
        deep in muscle memory, so you can expect an unusually long deprecation period.

      • jj describe --edit is deprecated in favor of --editor.

      • The config options git.auto-local-bookmark and git.push-new-bookmarks are
        deprecated in favor of remotes.<name>.auto-track-bookmarks. For example:

        [remotes.origin]
        auto-track-bookmarks = "glob:*"

      For more details, refer to
      the docs.

      • The flag --allow-new on jj git push is deprecated. In order to push new
        bookmarks, please track them with jj bookmark track. Alternatively, consider
        setting up an auto-tracking configuration to avoid the chore of tracking
        bookmarks manually. For example:
        [remotes.origin]
        auto-track-bookmarks = "glob:*"

      For more details, refer to
      the docs.

      New features

      • jj commit, jj describe, jj squash, and jj split now accept
        --editor, which ensures an editor will be opened with the commit
        description even if one was provided via --message/-m.

      • All jj commands show a warning when the provided fileset expression
        doesn't match any files.

      • Added files() template function to DiffStats. This supports per-file stats
        like lines_added() and lines_removed().

      • Added join() template function. This is different from separate() in that
        it adds a separator between all arguments, even if empty.

      • RepoPath template type now has an absolute() -> String method that returns
        the absolute path as a string.

      • Added format_path(path) template that controls how file paths are printed
        with jj file list.

      • New built-in revset aliases visible() and hidden().

      • Unquoted * is now allowed in revsets. bookmarks(glob:foo*) no longer
        needs quoting.

      • jj prev/next --no-edit now generates an error if the working-copy commit
        has children.

      • A new config option remotes.<name>.auto-track-bookmarks can be set to a
        string pattern. New bookmarks matching it will be automatically tracked for
        the specified remote. See
        the docs.

      • jj log now supports a --count flag to print the number of commits instead
        of displaying them.

      Fixed bugs

      • jj fix now prints a warning if a tool failed to run on a file.
        #7971

      • Shell completion now works with non‑normalized paths, fixing the previous
        panic and allowing prefixes containing . or .. to be completed correctly.
        #6861

      • Shell completion now always uses forward slashes to complete paths, even on
        Windows. This renders completion results viable when using jj in Git Bash.
        #7024

      • Unexpected keyword arguments now return a parse failure for the coalesce()
        and concat() templating functions.

      • The Nushell completion script documentation now includes the -f option, to
        keep it up to date.
        #8007

      • Ensured that with Git submodules, remnants of your submodules do not show up
        in the working copy after running jj new.
        #4349

      Contributors

      Thanks to the people who made this release happen!

    14. 🔗 @cxiao@infosec.exchange [https://qz.com/1908836/china-blocks-wikimedia-from-un-agency-wipo-over- mastodon
    15. 🔗 Console.dev newsletter Lima rss

      Description: Linux VMs + Containers.

      What we like: Quickly launch Linux VMs from the terminal. Designed for running containers inside the VM, it includes tools for filesystem mounts, port forwarding, GPU acceleration, and Intel/Arm emulation. Easy config of CPUs, memory, etc. via the CLI. Can run in CI. Useful for sandboxing AI agents.

      What we dislike: Supports most, but not all Linux distros (has some minimum requirements). Windows usage is experimental.

    16. 🔗 Console.dev newsletter gitmal rss

      Description: Static page generator for Git repos.

      What we like: Generates a static repo browser for any Git repo. The site includes commits, branches, and a file explorer with source code highlighted. The UI is themeable.

      What we dislike: No dark mode auto-switching. Can take a long time to generate for big repos.