to read (pdf)
- I don't want your PRs anymore
- JitterDropper | OALABS Research
- DomainTools Investigations | DPRK Malware Modularity: Diversity and Functional Specialization
- EXHIB: A Benchmark for Realistic and Diverse Evaluation of Function Similarity in the Wild
- Neobrutalism components - Start making neobrutalism layouts today
- May 11, 2026
-
🔗 r/wiesbaden Group for single parents rss
Hey, is there a group/meet-up for single parents (fathers…) here in Wiesbaden? I'm the dad of little twin boys, and unfortunately the mums' meet-up in our village isn't set up for (single) fathers.
Since I mostly work at my office in Wiesbaden, I'd be glad if there were something in the city where people could swap experiences and the kids could play together.
submitted by /u/Kiwis32
[link] [comments] -
🔗 r/york Informal York Queer Meet-Up @ City Screen Picturehouse Café - Friday, 15 May 2026 rss
Yo :)
We've organised a couple of meetups over the late Winter and Spring. We have a lil group of like-minded people.
A couple of us are planning to meet up again at City Screen Picturehouse Café Friday, 15 May 2026. If anyone would like to come along and chill with some queer nerds, you're more than welcome!
A bit about me: I’m a guy in my mid-30s, into sci-fi, grand strategy gaming, and Wikipedia editing.
-
Where: Cityscreen Cafe, either on the sofas or at one of the tables at the back
-
When: 18:30, Friday 15 May 2026
You'll know it's me because I'll have a fluffy rabbit toy on the table.
Feel free to reply here, DM me, or message in the Discord if you’re thinking of coming along!
submitted by /u/NervousEnergy
[link] [comments] -
-
🔗 r/Leeds Gutted to find my favourite coffee shop is closed! rss
On my way through the city centre today I popped by my favourite coffee shop Swissly to see it closed with paperwork on the door saying the landlord has gained access due to the lease being reneged on.
Loved the coffee and mocha from here but must admit it never seemed busy
submitted by /u/Vast_Lychee_8015
[link] [comments] -
🔗 r/york Birthday Walk around Medieval York. - Love it. rss
Just a quick wander around a few well-known and not-so-well-known places in York for my birthday. I went alone and took a tiny action camera. I know there are many other places, Clifford's Tower, Shambles, York Castle, Dick Turpin. But a couple on this list are less well known. Def want to visit again.
submitted by /u/The_Black_Banner_UK
[link] [comments] -
🔗 r/Leeds The demise of Subway rss
I remember a few years ago when there were 6 or 7 Subways in the city centre. Now there are 3, I think. Having visited the one near St John's Centre, it's not hard to see why. No heating (the place was freezing), half of the seating cordoned off, and the toilet marked 'staff only'. When you buy food to eat in, you are also paying to enjoy your meal in comfortable surroundings. Sadly it felt like eating a sandwich in a bus stop. The food itself was decent.
submitted by /u/Puzzleheaded_Bunch44
[link] [comments] -
🔗 r/Leeds Did we really need that ugly warehouse clearance shop on Headrow? rss
Honestly, every time I pass this now and remember that it used to be HomeSense, I can't understand why we need this new ugly shop on one of the main city centre streets.
It reminds me of something you would see in some random rough London area.
At least make the sign look better... Not just some bright yellow logo that can be seen all the way down from Nandos at the bottom of Briggate...
submitted by /u/SnowflakesOut
[link] [comments] -
🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss
To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.
submitted by /u/AutoModerator
[link] [comments] -
🔗 r/reverseengineering Check out my matplotlib of BLE live wire data for Oura ring! rss
submitted by /u/Expensive-Medium7425
[link] [comments] -
🔗 r/reverseengineering Positron: DLL injection based runtime JS injection toolkit for Electron(v8) apps on Windows rss
submitted by /u/Basic-Emu-6738
[link] [comments] -
🔗 r/york Favorite thing about York? rss
Hiya! Moving to York this fall from the US for uni and am getting more excited by the day. I studied abroad in Oxford for a few months (absolutely loved it) and have traveled a little around England but have never gone this far north.
What are some things to look forward to? I’m curious about the overall culture and history of York and also want to know about some other people’s experiences living in this city on their own (or experiences when you first moved to York).
submitted by /u/spunksqueek
[link] [comments] -
🔗 r/Harrogate Ashville College rss
Hi, we are looking for honest reviews/feedback about Ashville College, as we are considering sending our child there. We have heard a few negative reviews regarding bullying, which concerns me.
thank you
submitted by /u/Electronic_Sea_4848
[link] [comments]
-
- May 10, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-05-10 rss
IDA Plugin Updates on 2026-05-10
Activity:
- ida-mcp-in-vm
- 3de5fe9f: fix: harden launcher stop and MCP stdout logging
- ida_scripts
- 8f8157c0: added ida 9.2/9.32 arm32 second pass script.
- IDAPluginList
- 9941e027: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
- NexusRE-MCP
- c6c9eb35: Power-up NexusRE: Added dynamic emulation, symbolic execution, Live U…
- 2bf476f5: Auto-install missing 'uv' dependency during auto-updates
- f033fb63: Implement zero-touch automatic background updates on server launch
- 7206d216: Add auto-updater script for seamless updates
- d74b5ea7: Fix array slicing when limit=0 to support scanning everything
- 12d81283: Optimize NexusRE-MCP Performance
- cb0d87eb: feat(mcp): wire up advanced IDA Pro tools into MCP router
- 3f6f1b70: Enhance backend parity IL decompilation IDA plugin stability and depl…
- 59fc7753: chore: update plugins and core server
- 4deca12d: feat: Add Frida memory read/write, Unicorn emulation sandbox, sync re…
- 54cb68dd: Modernize MCP tools: exdnspy backend, installer improvements, and dyn…
-
🔗 modem-dev/hunk v0.12.0-beta.1 release
What's Changed
Added
- Added lower-level `hunkdiff/opentui` primitives for embedding Hunk-powered review UIs in custom OpenTUI apps: `HunkDiffBody`, `HunkFileNav`, `HunkReviewStream`, `HunkDiffFileHeader`, `createHunkDiffFile`, and `createHunkDiffFilesFromPatch` (#272).
- Added a runnable OpenTUI primitives demo under `examples/8-opentui-primitives` (#272).
- Added Homebrew tap release automation and Homebrew-aware startup update notices (#273).
- Added row windowing for large single-file reviews to keep huge diffs responsive (#237).
Fixed
- Fixed prerelease npm packaging so the published beta includes the `hunkdiff/opentui` export, bundled JavaScript, and type declarations.
- Made `hunk page` emit static highlighted diff output for captured pager contexts such as LazyGit, while passing diff input through unchanged when stdout is non-interactive (#271).
- Fixed Ctrl-Z job-control suspend support so Hunk can suspend and resume cleanly from a terminal (#269).
- Fixed Windows compatibility issues across paths, packaging, and tests (#257).
Install
CLI: `npm install -g hunkdiff@beta`
OpenTUI package usage: `npm install hunkdiff@beta @opentui/core @opentui/react react`
Full Changelog: v0.11.1...v0.12.0-beta.1
-
🔗 modem-dev/hunk v0.12.0-beta.0 (superseded) release
Superseded prerelease
`v0.12.0-beta.0` was published, but it is superseded by `v0.12.0-beta.1`. Use `v0.12.0-beta.1` for beta testing. It fixes prerelease npm packaging so the new `hunkdiff/opentui` export, bundled JavaScript, and type declarations are included in the published package.
Full Changelog: v0.11.1...v0.12.0-beta.0 -
🔗 r/Leeds The Leeds Owl - should we be branding better? rss
Manchester has its worker bee, Birmingham its Bull, and Liverpool its Liver bird.
Should we be better promoting the owl brand in Leeds? It’s tokenistic I know, but feel the city needs stronger imagery to help promote it or somethin
submitted by /u/zeitgeist247
[link] [comments] -
🔗 r/reverseengineering Building a Wasm-in-Wasm Virtualizer (with JIT decrypted paged memory) rss
submitted by /u/TrustSig
[link] [comments] -
🔗 r/Leeds RobB Marathon - Aerial Video rss
Like many I was out in Headingley this morning for the Marathon, and thought it was an excellent opportunity to get some overhead video.
Congratulations to all the runners this year from the half and full!
submitted by /u/listen3times
[link] [comments] -
🔗 MetaBrainz Downtime for PostgreSQL / MusicBrainz schema change upgrade: Monday, May 11, 15:00 UTC rss
On Monday, May 11, at 15:00 UTC (8am PT, 11am ET, 5:00pm CEST), we’ll be:
- Upgrading our production database server to PostgreSQL v18.
- Performing the MusicBrainz schema version 31 upgrade.
See the previous announcement for more information.
We’ll be working to restore services as quickly as possible, but expect MusicBrainz, ListenBrainz, the Cover Art Archive, and BookBrainz to be down for the hour. Thanks in advance for your patience!
Afterward, we’ll post instructions on the blog about how to upgrade your MusicBrainz mirror server.
-
🔗 r/wiesbaden Harput Bäckerei rss
Hello everyone,
a few weeks/months ago the Harput bakery (Wellritzstraße 14) closed temporarily; the other day I noticed that a jeweller has since moved in there.
I find that a real shame, since in my opinion this restaurant made the best döner pockets in Wiesbaden (with grilled vegetables, good bread, and really good sauces).
Does anyone know why the closure came so suddenly, or whether they have simply relocated?
submitted by /u/Ill-Group-6543
[link] [comments] -
🔗 r/Harrogate £9 for tickets to e-bike demo day Leeds (not too far from Harrogate) rss
submitted by /u/SquareFriendship4691
[link] [comments] -
🔗 r/Yorkshire Labour lose control of more Yorkshire councils following local elections rss
submitted by /u/Kagedeah
[link] [comments] -
🔗 r/Yorkshire Green Howard's march through Richmond, Yorkshire. rss
submitted by /u/Still_Function_5428
[link] [comments] -
🔗 r/reverseengineering PE Entropy Visualizer with per-block RVA/VA mapping, locate packed payloads and encrypted blobs, then jump straight to them in IDA/Ghidra rss
submitted by /u/Flashy-Push-3341
[link] [comments] -
🔗 Confessions of a Code Addict Virtual Memory: A Deep Dive into Page Tables, TLBs, and Linux Internals rss
A quick note before we begin: I've been absent here for a while. Life happened, and I had to step away from publishing for longer than I expected.
This article is my way of getting back into rhythm. It is much larger than my usual pieces: roughly 25,000 words, compared to the 4,000-6,000 words I normally publish. I have been working on it for the last couple of months, and it is closer to a short book than a regular article.
Since this is a book-length deep dive, I have also prepared a beautifully typeset 60-page PDF version for readers who want to read it offline, highlight it, or keep it as a reference. Buying the PDF is also a direct way to support the work that went into this piece.
Thanks for sticking around. Now let's get into virtual memory.
Virtual memory is one of those fundamental components of modern-day computing that is crucial to master for building and debugging high-performance data- intensive systems.
Normally, we think of virtual memory as a system that provides memory-level isolation to processes, which means that the operating system (OS) can run multiple processes concurrently without those processes interfering with or corrupting each other's data in memory. But virtual memory does much more than that, such as:
-
lazy allocation of memory through demand paging
-
copy-on-write for shared memory between processes, and fast process creation via fork
-
file I/O that avoids the page-cache-to-user-buffer copy using mmap
-
page reclaim, swap, and the page cache
-
performance effects from access patterns, huge pages, TLB shootdowns, and NUMA placement.
This article is broad, practical coverage of what virtual memory is, how it works, and how it affects the performance of data-intensive systems. By the end of the article you will have a mental model and understanding of the following key ideas:
-
Why virtual memory exists : Process isolation, memory protection, and the illusion of abundant memory.
-
The virtual address space : How a process's memory is organized into segments (code, data, heap, stack, and memory-mapped regions).
-
Address translation : How virtual addresses are converted to physical addresses using hierarchical page tables, and why the page table hierarchy avoids wasting memory.
-
The role of hardware : How the MMU and TLB accelerate address translation, and why TLB hit rates matter for performance.
-
Demand paging : How the kernel delays physical memory allocation until pages are actually accessed, and how page faults drive this lazy allocation.
-
Memory types and reclaim : How anonymous, file-backed, shared, and tmpfs-backed pages differ, and why the kernel reclaims them differently.
-
Copy-on-write : How processes share memory efficiently and how fork creates new processes almost instantly.
-
Memory-mapped I/O : How `mmap` maps file data into a process address space, avoids an extra user-buffer copy, and enables shared memory between processes. -
Performance implications : How page size, TLB reach, and memory access patterns affect the performance of data-intensive workloads.
-
Observability : How to inspect VMAs, RSS/PSS, page faults, TLB behavior, and NUMA placement on Linux.
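As a small taste of the observability topics listed above: on Linux, each process's virtual memory areas (VMAs) are visible in `/proc/self/maps`. A minimal sketch (Linux-only, and the exact set of mappings varies by system):

```python
# Inspect this process's VMAs via /proc/self/maps (Linux-only).
# Each line describes one mapping: address range, permissions,
# file offset, device, inode, and the backing path or a pseudo
# name like [heap], [stack], or [vdso].
def list_vmas():
    with open("/proc/self/maps") as f:
        return [line.split() for line in f]

vmas = list_vmas()
# Collect the pseudo-named regions; a typical process shows
# at least [heap], [stack], and [vdso].
tags = sorted({v[-1] for v in vmas if v[-1].startswith("[")})
print(tags)
```

Running this prints the special regions of your own process's address space, which correspond directly to the segments discussed later in the article.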
How to Read This Article
This article takes a different approach to teaching virtual memory. Instead of presenting a collection of facts and definitions, we explain concepts through a narrative: a series of dialogues between a newly created process named Alloca and the Kernel. Alloca encounters challenges as she executes her code, and the Kernel explains how things work in response to her questions. This dialogue-based format allows us to build understanding incrementally, introducing complexity gradually as natural questions arise.
Structure : Each section follows the same pattern: a dialogue that explores a concept in depth, followed by a Key Takeaway box that provides a formal summary, definitions, and technical details. If you prefer a quick overview, you can read just the Key Takeaway sections. If you want deep understanding, read the full dialogues.
Length and Pacing : This article is comprehensive, approximately 25,000 words covering everything from basic address translation to demand paging, page reclaim, copy-on-write, observability, and performance implications. Don't feel obligated to read it in one sitting. Virtual memory is a complex topic with many interconnected pieces. Take your time, read it in multiple sessions, and let the concepts sink in. Each section builds on previous ones, so it's designed to be read sequentially. Also, if you have taken a course in operating systems, the early parts of the article may seem a bit too basic to you. I encourage you to jump ahead and read the parts that interest you directly; there is quite a lot of advanced content as well.
Implementation Details : Virtual memory concepts are largely universal across operating systems, but when we discuss specific implementation details, such as huge pages, TLB behavior, or page fault handling, those details are based on the Linux kernel and the x86-64 architecture. Throughout the article we will also talk about 4-level page tables, which are still the prevalent configuration in most kernels. The latest Linux kernels additionally support 5-level page tables, but it is straightforward to understand how those work once you master 4-level page tables.
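To make the 4-level layout concrete: on x86-64 with 4 KiB pages, the 48 translated bits of a virtual address split into four 9-bit page-table indices plus a 12-bit page offset. A small sketch of that decomposition:

```python
# Decompose a 48-bit x86-64 virtual address into its 4-level
# page-table indices: 9 bits each for the PGD, PUD, PMD, and PTE
# levels, plus a 12-bit byte offset within the 4 KiB page.
def split_va(va):
    offset = va & 0xFFF            # bits 0-11:  offset in page
    pte    = (va >> 12) & 0x1FF    # bits 12-20: page-table index
    pmd    = (va >> 21) & 0x1FF    # bits 21-29: page-middle-dir index
    pud    = (va >> 30) & 0x1FF    # bits 30-38: page-upper-dir index
    pgd    = (va >> 39) & 0x1FF    # bits 39-47: page-global-dir index
    return pgd, pud, pmd, pte, offset

# An address with every translated bit set hits the last entry
# of every level and the last byte of its page.
print(split_va(0xFFFFFFFFFFFF))  # (511, 511, 511, 511, 4095)
```

Each 9-bit index selects one of 512 entries at its level, which is exactly why one page-table page (4 KiB / 8-byte entries) holds 512 entries.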
Asides : While most of the article follows a narrative style of a dialogue between Alloca and the Kernel, there are certain additional details that I've sprinkled throughout the article in the form of asides.
Now, let's meet Alloca and follow her journey through the virtual memory system.
The Need for Virtual Memory
As Alloca starts to execute her code, she encounters her first challenge. She needs to read some data from memory. The instruction contains the address of the data, and Alloca thinks, "well, this shouldn't be too difficult. I just need to go to this address and read the value". But she is in for a huge surprise.
As she goes to that address, she finds that there is nothing there. It's all just a facade. She stands there puzzled, wondering what she should do now. Then she sees a tall figure moving towards her from the shadows.
Alloca : "Who are you?".
Kernel : "I'm the Kernel. I'm in charge of this entire world, I make sure that all processes do their job smoothly. What are you doing here? There is nothing at this place!"
Alloca : "I think I'm lost. I was supposed to read data from this address but it looks like it is all a facade, and I don't know what to do now".
Kernel (smiling): "I can understand the confusion. The address that you have is not a real address, it's a virtual address."
Alloca : "Virtual address? What does that mean?"
Kernel : "Well, what you think of memory is not the real physical memory, it is virtual memory. And, the address that you hold is a virtual address. What you need is the physical address to get the data from physical memory."
Alloca : "What is virtual memory? Why not just give me direct access to physical memory?"
Kernel : "Let's think about it from first principles. I am responsible for the concurrent execution of not just you but hundreds of other processes. You might not notice, but right now there are many other processes executing alongside you. If each one of you had direct access to physical memory, how would you coordinate who accesses which addresses in memory?"
Alloca : "That would be difficult because I don't even know who else is executing, and I imagine processes come and go, so this would be impossible."
Kernel : "Yes, that's one problem. Even if you could talk to other processes, it would make the system extremely slow, because then on every memory access you would have to ask every process which addresses are available to use. And, it would also be a safety nightmare. A trivial bug in one process might corrupt another process's data."
Alloca : "I can see the problem. So how do you solve this?"
Kernel : "Through virtual memory! Basically, we have two problems to solve. First, every process should be able to access memory without needing to worry whether an address is in use by another process. Second, memory access should be safe without sacrificing performance."
Alloca : "So, how does virtual memory solve these problems?"
Kernel : "Virtual memory is a software construct, it looks and feels like real memory, and it consists of addresses that you can read and write. I give every process its own private virtual memory space that it can freely navigate and manipulate without worrying about anyone else using that memory. This solves the first problem, it isolates memory for each process."
Alloca : "But if these addresses aren't real, then where do the reads and writes go? And, how is safety ensured?"
Kernel : "That part requires going into the weeds of how virtual memory works, but I will simplify for now. Because virtual memory is an abstraction, it can be controlled by me. So, I map the set of virtual addresses used by a process to a corresponding set of physical addresses. And, because I know which other processes are using which parts of physical memory, I can ensure that no two processes end up sharing the same physical addresses."
Key Takeaway
The fundamental reason for virtual memory to exist is to provide memory-level isolation to processes. In a multitasking system where multiple processes can be running in parallel or in a time-shared manner, it is important that they don't read or write each other's data. By giving each process its own private virtual memory, the kernel ensures this never happens. Each process believes that it has full access to the entire physical memory, but in reality, it's just virtual memory. Behind the scenes, the virtual memory is mapped to physical memory, and every process has a different mapping. Let's learn how this mapping works in the next part.
A note on narrative accuracy :
In the scene above, Alloca consciously walks to an address and notices it's a facade. That's not literally how a process experiences memory. In reality, memory accesses are intercepted transparently by dedicated hardware (the MMU) and the Kernel; the process never notices any of this. But explaining that accurately requires understanding the MMU, page tables, and how the Kernel handles memory events, none of which we've covered yet. Starting there would be like defining a word by using the word itself. This is why we started with a simplified model. As we progress through the sections, we will gradually make our mental model more precise and accurate.
Size of Virtual Memory
Alloca now understands why virtual memory exists, but she still doesn't understand how it works or what it looks like. Her questioning of the kernel continues.
Alloca : "If this memory that I see is virtual, does it mean that it is infinite?"
Kernel : "Not quite infinite, but very large. Tell me, what do you know about how addresses are represented in the CPU?"
Alloca : "Well, I know that on x86-64 systems, addresses are stored in 64-bit registers. So I suppose that means I can address 2^64 bytes?"
Kernel : "That's what you'd expect, right? But there is a twist: while your addresses are indeed stored in 64-bit registers, not all those bits are actually used for addressing. Only 48 bits participate in the address translation."
Alloca : "Why only 48 bits?"
Kernel : "It's a pragmatic decision. Think about it: 48 bits gives you 2^48 bytes of addressable space, which is 256 TiB. That's enormous! No application today needs anywhere close to that. The hardware designers decided that this was plenty for the foreseeable future, so they kept the address translation logic simpler by using 48 bits instead of the full 64. They left room to expand to 52 or 56 bits later if needed."
Alloca : "So I have 256 TiB of virtual address space? That is huge! Can I use all of it?"
Kernel : "Ah, not quite. You can use only half of that, which is 128 TiB. I use the upper 128 TiB of that address space to map my own code and data into every process's memory."
Alloca : "You're in my address space?"
Kernel : "I have to be! When you make a system call or when an interrupt happens, execution switches to kernel mode and starts running my code. If my code wasn't already mapped in your address space, the CPU wouldn't know where to jump to. So yes, I live in the upper half of every process's address space. You can't access my memory directly, but it's there, ready for when execution needs to enter kernel mode."
Alloca : "Okay, but how does such a huge virtual address space work when most machines have only a small amount of memory installed, like 16 or 32 GB?"
Kernel : "That's the beauty of virtual memory. Your virtual address space is completely independent of how much physical RAM is installed. Even if this machine has only 16 GB of RAM, your virtual address space still spans 256 TiB. The mapping from virtual to physical is where the two worlds connect, and that is managed by me. I take great care that these mappings remain within the limits of the installed physical memory."
Key Takeaway
Because of the virtual nature of virtual memory address space, its size is much larger than the installed RAM. On the common 48-bit x86-64 virtual- address mode, the canonical virtual address range spans 256 TiB. Linux typically splits this into a lower user-space half and an upper kernel-space half. The lower 128 TiB is available to user processes, while the upper half is reserved for kernel mappings used when execution enters kernel mode. Physical address capacity is separate from virtual address capacity and depends on the CPU and platform.
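The sizes in the takeaway are easy to verify with a few lines of arithmetic:

```python
# 48 address bits reach 2**48 bytes of virtual address space.
ADDR_BITS = 48
space_tib = (1 << ADDR_BITS) >> 40   # 1 TiB = 2**40 bytes
user_tib  = space_tib // 2           # Linux's lower user-space half

print(space_tib)                     # 256 (TiB total)
print(user_tib)                      # 128 (TiB for user space)

# Highest canonical user-space address in this 48-bit split:
top_user = (1 << 47) - 1
print(hex(top_user))                 # 0x7fffffffffff
```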
The Virtual Memory Address Space Layout
Alloca : "You mentioned that you map your code and data in the upper half of my address space. What is mapped in my half of the address space?"
Kernel : "Your half of the address space maps your code and your data."
Alloca : "What does it look like? Is there a specific structure?"
Kernel : "Yes, there is a specific layout to your address space. It is organized in the form of segments, each designated to map certain kind of data. Let me show you how it looks."
Kernel gestures, and Alloca can suddenly see a vertical map of her virtual memory
Figure 1: The canonical virtual address space layout on x86-64 Linux. The
text, data, and BSS segments have sizes determined at compile time. The heap
grows upward from the data region; the stack grows downward from near the top
of user space. Between them, shared libraries and file mappings float in the
large middle region. The kernel occupies the upper half of the full canonical
range (not shown to scale).
Kernel : "Down at the bottom, at low addresses, is your code. These are the instructions that you execute. This region is loaded when I created you. We call this the text segment."
Alloca : "Makes sense. Above that I see there is data segment , I assume it maps all the other data?"
Kernel : "Not all the data, but a specific kind of data. Any global and static variables in your code that were initialized to non-zero values are loaded here. For example, if you created a constant `pi` with value `3.14`, it will be in the data segment."
Alloca : "What about uninitialized global data? Where does that go?"
Kernel : "The bss segment."
Alloca : "Why a separate segment for that?"
Kernel : "Ah, it's a clever trick for efficiency. Think about it: if you have a global variable that's uninitialized, what value should it have when your program starts?"
Alloca : "Zero, I suppose."
Kernel : "Exactly! Now imagine you have thousands of these zero-initialized globals. If we stored all those zeros in your compiled binary, the file would be bloated with zeros. That's wasteful. So instead of doing that, the compiler and linker just make a note saying 'hey, this program needs, say, 50 kilobytes of zero-initialized memory.' They don't actually put those zeros in the binary file. Then, when I load your program, I allocate that 50 KB, fill it with zeros, and map it into your address space as the BSS segment. Your binary stays small, loads faster, and you still get all your zero-initialized variables. Everyone wins."
Alloca : "That's clever! So the data and the bss segments are where all the static data goes. What about dynamic data? For example, when I add a new node to a linked list at runtime, does that memory get allocated in one of these segments?"
Kernel : "No, it can't be. Think about it: can the data or BSS segments grow after your program starts?"
Alloca : "I guess not? You said their sizes are determined at compile and link time."
Kernel : "Correct! They map your program's static memory footprint based on everything the compiler knew from the code when it built your binary. But at runtime, you need to allocate memory dynamically. You might read a file and build a tree from its contents. The compiler had no way to know how much memory you'd need for that."
Alloca : "So where does that memory come from?"
Kernel : "That's what the heap is for. It sits right above BSS, and as you can see from the diagram, there's a large stretch of empty address space above it."
Alloca : "So the heap can grow into that empty space?"
Kernel : "Precisely! When you call `malloc()`, the allocator typically grows the heap upward by adjusting its upper boundary. We call that boundary the program break, or just `brk` for short. Each time you need more memory, the heap can expand upward into that unused region."
Alloca : "I see. But looking at the diagram, that empty region above the heap is enormous compared to everything else. The heap, stack, and all the segments look tiny by comparison. What is all that space?"
Kernel : "That space is basically the unmapped part of your address space."
Alloca : "Unmapped? Why are there unmapped addresses?"
Kernel : "Glad that you asked, it's really important to understand this part. Remember when we talked about the size of your virtual address space being 128 TiB?"
Alloca : "Yeah, you said that's way bigger than the actual physical RAM in the machine."
Kernel : "Yeah. A typical machine might have 16 or 32 GB of physical RAM. Even a beefy server with 256 GB of RAM is nowhere close to 128 TiB. So it is not practically possible to map all of your virtual addresses to physical memory, because there is simply not enough of it. And even if there were a machine with 128 TiB of RAM installed, it wouldn't make sense to map all of it."
Alloca : "Why not?"
Kernel : "Because most programs probably use a few hundred megabytes at most, so the clever thing to do is to allocate and map only the required amount of memory to the process, leave the rest unmapped, and map it lazily based on demand."
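An aside: this lazy, on-demand mapping is easy to observe from user space. A minimal sketch using Python's `mmap` module to reserve a large anonymous region (assuming a default Linux overcommit policy, where reserving address space costs no RAM until pages are touched):

```python
import mmap

def demand_demo(size=1 << 30):
    # Reserve `size` bytes of anonymous virtual memory. The kernel
    # only records the mapping; no physical pages are allocated
    # until individual pages are actually touched (demand paging).
    m = mmap.mmap(-1, size)   # -1 => anonymous, not file-backed

    m[0] = 42                 # faults in just this one 4 KiB page
    first = m[0]
    last = m[size - 1]        # untouched pages read back zero-filled
    m.close()
    return first, last

print(demand_demo())          # (42, 0)
```

Reserving a full gigabyte here is cheap precisely because only the touched page is backed by RAM; the rest of the region remains a promise.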
Alloca : "So what happens if I try to access one of those unmapped addresses?"
Kernel : "Well, if it's an address I gave you, say from a successful `malloc()` or `mmap()` call, then it's yours to use. But if you just pick a random address in that unmapped region and try to read or write it, you'll get a segmentation fault. The hardware will refuse the access because there's no valid mapping."
Alloca : "Got it. So the unmapped region isn't just empty space, it's reserved space that can become mapped as needed?"
Kernel : "Exactly! And it gets mapped for several purposes. When you load a shared library, like `libc.so`, I need to map its code and data somewhere in your address space. That middle region is where those libraries go. Same with file mappings: when you use `mmap()` to map a file into memory, it gets mapped here. Large allocations from `malloc()` also often come from this region rather than growing the heap."
Alloca : "So it's a flexible region for all kinds of dynamic mappings?"
Kernel : "Precisely! It's the largest part of your address space, and it's there to accommodate whatever dynamic memory needs arise during your execution."
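As a quick illustration of file mappings, here is a sketch using Python's `mmap` module (a thin wrapper over the C `mmap()` call): a file's contents are mapped into the address space and then accessed like ordinary memory, with no `read()` call copying data into a user buffer.

```python
import mmap, os, tempfile

def mmap_demo():
    # Create a small scratch file to map.
    fd, path = tempfile.mkstemp()
    os.write(fd, b"hello, mmap")
    os.close(fd)
    try:
        with open(path, "rb") as f, \
             mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            # Slicing and searching operate directly on the
            # mapped pages backed by the page cache.
            return bytes(m[:5]), m.find(b"mmap")
    finally:
        os.unlink(path)

print(mmap_demo())   # (b'hello', 7)
```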
Alloca : "That leaves the stack at the top. What is that?"
Kernel : "It is a dedicated region for managing function calls. Every time you call a function, the stack is involved."
Alloca : "Why does calling a function need its own memory region? Why not use one of the other segments?"
Kernel : "Let's think about what needs to happen when you call a function. What kind of data does a function need?"
Alloca : "Well, its local variables, I suppose. And probably the return address so it knows where to jump back to when it's done?"
Kernel : "Exactly! And also the CPU register values that need to be saved and later restored when the function returns. Now, all of this needs to be allocated when a function is called and cleaned up automatically when it returns. Which of the segments we've discussed could handle something like this?"
Alloca : "Not the data or BSS segments, those are fixed in size. They can't grow and shrink."
Kernel : "What about the heap?"
Alloca : "The heap can grow, but I'd have to explicitly `malloc` and `free`, right? That would be tedious, slow, and error-prone for every function call."
Kernel : "Yeah, what you need is a region that grows and shrinks automatically as functions are called and return. It needs to follow a very specific pattern: the last function you called is the first one that returns. Does that sound familiar?"
Alloca : "That's… last-in-first-out. Like a stack data structure!"
Kernel : "Precisely! That's why we call it the stack. The processor even has dedicated instructions, `push` and `pop`, that work with a special register called the stack pointer. This register tracks the current top of the stack. When you call a function, all its data (local variables, saved registers, return address) ends up on the stack. When you return, that block gets popped off. All automatic, no manual memory management needed."
Alloca : "So it's about automatic lifetime management for function-local data. But what happens if there is a very deep chain of function calls? Can the stack grow indefinitely?"
Kernel : "Not quite. As one function calls another, space needs to be made on the stack to accommodate the local variables of the called function. But there is a limit to how much the stack can grow. For example, on x86-64, the default configured maximum size of the stack is 8 MB."
Alloca : "But as I can see, the stack is right at the top of the address space. Where does it have room to grow?"
Kernel : "Good observation! The stack is usually mapped at the higher address range and it grows by moving towards the lower address ranges. So, for example, if the stack pointer is currently 0x120008 and you push an 8-byte value onto the stack, the stack pointer becomes 0x120000."
Alloca : "So the heap grows upward and the stack grows downward?"
Kernel : "Yes. The empty space between them is the buffer that lets both grow without colliding. In practice, a process runs out of one or the other long before they meet."
Alloca : "Okay, I understand the layout now. But I have one final question about it: why is such a layout needed at all? Why not simply store data anywhere you find space?"
Kernel : "Great question! There are two big reasons: performance and security. Which one would you like to hear about first?"
Alloca : "Let's start with performance."
Kernel : "Alright. Tell me, if you are reading a value from an array at index 5, what do you do after that?"
Alloca : "Well, I probably would read index 6, then 7, and so on? Most array processing is sequential like that."
Kernel : "Exactly! And when you're executing instructions in your code, you typically run them one after another, right? You're not randomly jumping all over the place."
Alloca : "Right, except for loops and function calls, it's mostly sequential."
Kernel : "Yes! This pattern of accessing nearby memory locations is so common that the hardware is designed around it. But, fetching data from physical memory is slow. Really slow. It can take hundreds of CPU cycles."
Alloca : "That sounds terrible!"
Kernel : "It would be, if the CPU actually went to main memory for every single read. But it doesn't. The CPU has a fast cache, smaller but much faster storage right on the chip. And this is the clever bit: when you read a value from memory, the hardware doesn't just fetch that one value. It fetches an entire block around it, typically 64 bytes, called a cache line."
Alloca : "So it's betting that I'll need the nearby data too?"
Kernel : "Precisely! And because of how you traverse arrays or execute sequential instructions, that bet pays off most of the time. The next value you need is already sitting in the cache, ready instantly. This is called spatial locality."
Alloca : "Ah, so that's why the organized layout helps! If my heap has all my data structures, and I'm traversing a linked list, the nodes are likely to be near each other in memory?"
Kernel : "Well, linked lists are actually a bad example, their nodes can be scattered all over the heap. But arrays, yes! And more importantly, think about your stack. When you're executing a function, you're constantly accessing its local variables. Because they're all packed together in one stack frame, most of those accesses hit the cache."
Alloca : "And the same applies to code in the text segment?"
Kernel : "Exactly. Your instructions execute sequentially, so the processor can even prefetch the next cache line before you ask for it. By keeping code separate from data, and keeping different types of data in their own regions, we maximize these cache-friendly access patterns."
Alloca : "That makes sense! What about security? How does the layout help there?"
Kernel : "Let me ask you this: if an attacker managed to write arbitrary bytes into your heap, say through a buffer overflow bug, what's the worst thing they could do?"
Alloca : "Um, corrupt my data structures? Make my program crash?"
Kernel : "That's bad, but there's something worse. What if those bytes they wrote were actually machine instructions? And what if they then tricked your program into jumping to that address?"
Alloca : "Oh no… then the CPU would execute their malicious code as if it were part of my program!"
Kernel : "Exactly. And without protection, they could also try to overwrite your actual code in the text segment, inserting a backdoor directly into your program."
Alloca : "So how do we prevent that?"
Kernel : "By giving each segment permission bits. Think about what should be allowed for each segment. Should you be able to write to your code segment?"
Alloca : "No, the code is fixed! It shouldn't change while the program runs."
Kernel : "Right. So the text segment is marked read-only and executable: you can run code from it, but you cannot write to it. Now, what about your heap and stack?"
Alloca : "I need to read and write data there all the time. But I should never execute code from there, right?"
Kernel : "Perfect! The heap and stack are marked read-write but not executable. You can modify your data, but if someone tries to jump to an address in the heap and execute it, the processor will refuse and kill your process."
Alloca : "So by separating code from data, we can enforce different permissions on each?"
Kernel : "Precisely. This is often called W^X protection (write XOR execute). Memory can be writable or executable, but not both. By organizing memory into distinct segments, we make this protection model clean and enforceable."
Key Takeaway

The virtual address space is organized into several distinct segments:

- Text (code) segment : The compiled instructions of the program. Loaded at startup, mapped read-only and executable. The process cannot write to its own code pages.

- Data segment : Global and static variables that have been explicitly initialized. Size is fixed at link time.

- BSS segment : Global and static variables that are zero-initialized. The binary stores no data for this region; the loader provides zero-initialized memory for it at startup.

- Heap : The region for dynamic memory allocation (`malloc`/`new`). Starts just above the data/BSS segments and grows upward for small allocations; its upper boundary is called the program break (brk). Many allocators also use `mmap` directly for large allocations rather than growing the heap via `brk`.

- Memory-mapped region : A large, flexible area in the middle of the address space used for shared libraries, file mappings, and anonymous large allocations. Libraries like `libc` are loaded here.

- Stack : Holds the call frames of all currently executing functions. Starts near the top of the address space and grows downward. Each function call pushes a frame containing local variables, saved registers, and the return address; each return pops it.
Aside: Anonymous memory

Throughout the article, we will come across the term "anonymous memory", so it is important that we understand what it means.

The kernel manages two kinds of memory:

- Anonymous memory : allocated using `malloc` or `mmap` with the `MAP_ANONYMOUS` flag. This is also the memory backing a process's heap, stack, and similar segments.

- File-backed memory : memory that is backed by a file. You normally create it using `mmap`, passing it a file descriptor.

We will cover both of these in detail as we progress through the article, but having this common vocabulary will help us move faster.
How are Virtual Addresses Translated to Physical Addresses
Alloca : "I understand the layout. Code down here, stack up there. But these are all virtual addresses. How does a virtual address ever become real? I'm imagining you keep a table, virtual byte 0 maps to physical byte X, virtual byte 1 maps to physical byte Y, one entry for every address. Is that how it works?"
Kernel : "That's the natural first thought. Let's see what it costs. Your address space (the user-space half) is 128 TiB, that's roughly 140 trillion bytes. At 8 bytes per table entry, a per-byte mapping table would take 1 PiB of storage per process. That's impractical."
Alloca : "So a per-byte table is out. But you do need a lookup of some kind."
Kernel : "Yes, we do. But, instead of mapping individual bytes, we map fixed-size chunks. I divide your virtual address space into fixed-size chunks called pages , and I divide physical memory into same-sized chunks called frames. Each virtual page maps to one physical frame at a time. One table entry per page, not per byte. This way we don't waste too much space maintaining the mapping itself."
Alloca : "How large are these chunks?"
Kernel : "4 kilobytes. At that size, your 128 TiB address space divides into 2³⁵ pages."
Alloca : "Wait, why 4 kilobytes specifically? Why not map smaller chunks like 1 kilobyte, or larger ones like 64 kilobytes?"
Kernel : "Good question! Let me ask you this: when you read a variable from memory, say an integer, do you usually read just that one value and nothing else nearby?"
Alloca : "Well, no. If I'm reading `array[5]`, I probably read `array[6]` and `array[7]` soon after. And when executing code, I run instructions sequentially, one after another."

Kernel : "Exactly! Memory accesses happen in clusters, spatial locality again. The hardware already exploits this with 64-byte cache lines; pages work the same way at a coarser scale. 4 KB is a sweet spot: large enough that related data usually falls within the same page, but also small enough that we don't waste physical memory when only part of a page is touched."
Alloca : "So 4 KB is a sweet spot between granularity and efficiency?"
Kernel : "Right. And because every page and every frame is exactly the same size, any free frame can back any page. It doesn't matter where in physical memory that frame happens to sit."
Alloca : "Okay, I understand the page size. But there is something that I still don't get: you're mapping an entire 4 KB page to an entire 4 KB frame. But, I have a specific address, and I want to read 8 bytes from it. How do you find out which virtual page that address belongs to, to get the corresponding physical frame?"
Kernel : "The answer lies in the virtual address itself. Think of it like a library call number. When a librarian gives you the number 3-07-42, you know immediately that the book is on floor 3, rack 07, shelf 42. The number encodes two things at once: which shelf unit to find, and where within that unit to look. A virtual address works the same way. It encodes which page the address falls in, and the byte position within that page."
Alloca : "So the address itself tells you both the page and the position inside it?"
Kernel : "Yes. Every virtual address is implicitly two things: the virtual page number , given by the upper bits, and the page offset given by the lower 12 bits. 12 bits because 2¹² = 4096, one for every byte in a page. Say your address points 500 bytes into page N. When I map page N to physical frame M, your data is still 500 bytes in, because the frame is the same 4 KB size. The offset does not change during translation. So I look up the virtual page number in your page table, get back the physical frame number, attach the same offset, and that gives the physical address of exactly the 8 bytes you asked for."
Alloca : "Okay, I understand that part. But something is still not clear. You said that my address space is 128 TiB. If there's one page table entry per 4 KB page, that's 2³⁵ entries. At 8 bytes per entry, that's 256 GiB of page table. Per process. That's not workable."
Kernel : "Exactly, that's the problem with a flat table. So let me ask you this: what if, instead of tracking every single page, we tracked which regions of your address space are in use?"
Alloca : "Regions? Like groups of pages?"
Kernel : "Yes. Think about your address space. You have code at the bottom, a heap above that, maybe some libraries in the middle, and a stack at the top. Most of the space between them is empty, right?"
Alloca : "Right, huge stretches of unused addresses."
Kernel : "So what if I had a high-level index that just tracks which large regions are in use, and then within each of those regions, I have another index for smaller regions, and so on, until I get down to individual pages?"
Alloca : "Like… a tree structure? Where each level zooms in on a smaller portion?"
Kernel : "Precisely! It's called a hierarchical page table. There are four levels. At the top level, there's a table with 512 entries, and each entry represents 512 GB of your address space. If an entire 512 GB region is unused, that entry is just marked absent, no further tables are allocated for it."
Alloca : "So you only allocate the deeper levels of the tree for the parts I'm actually using?"
Kernel : "Yeah. Each entry at the top level can point to a second-level table, which again has 512 entries, each covering 1 GB. Each of those can point to a third-level table covering smaller regions, and so on, until the deepest level maps to individual 4 KB pages."
Alloca : "But wait, doesn't having four levels still waste space? If I use just one page, don't you still need entries at every level to reach it?"
Kernel : "Yes, but consider the scale. For that one used page, I need one entry in the top-level table, one second-level table with 512 entries, one third-level table with 512 entries, and one fourth-level table with 512 entries. That's roughly 12 KB total. Compare that to a flat table: 2³⁵ entries times 8 bytes equals 256 GiB. I save a factor of about 20 million."
Alloca : "So the table itself only exists for the parts of my address space I've actually used."
Kernel : "Correct!"
Aside: Page table level names difference between Linux and x86
The four levels of the page table hierarchy have different names depending on whether you're reading Linux kernel source or Intel/AMD architecture manuals.
Table: Naming convention for page table levels in Linux vs x86 architecture

| Level | Linux name | x86-64 name |
| --- | --- | --- |
| 1 (top) | PGD (Page Global Directory) | PML4 (Page Map Level 4) |
| 2 | PUD (Page Upper Directory) | PDPT (Page Directory Pointer Table) |
| 3 | PMD (Page Middle Directory) | PD (Page Directory) |
| 4 | PTE (Page Table Entry) | PT (Page Table) |

The x86 names are tied to the specific architecture. The Linux names are more generic and are used consistently across architectures that Linux supports, whether that's x86-64, ARM64, or RISC-V, even when the underlying hardware has a different number of levels. Throughout this article we use the Linux kernel names: PGD, PUD, PMD, and PTE.
Alloca : "But how does a virtual address help you traverse this?"
Kernel : "It's actually pretty clever. Your virtual addresses are 64 bits wide, but only 48 bits are used. Those 48 bits are split into five parts: four groups of 9 bits each, followed by a 12-bit offset. The first four groups are used one by one to step through each level of the page table tree, narrowing down to the right physical frame. The offset is then used to pinpoint the exact byte within that frame."
Alloca : "What is the exact split of these bits?"
Kernel : "The first group (bits 47 down to 39) gives a number between 0 and 511, which I use as an index into the PGD. That entry points me to a PUD. I take the next group (bits 38 down to 30) and index into that PUD, which points to a PMD. I repeat this for the PMD and PTE levels."
Alloca : "That leaves the bottom 12 bits, those act as offset within the page frame?"
Kernel : "Yes, once you reach the PTE and get the physical frame number, you combine it with those 12 bits to get the exact byte you want. 12 bits because 2¹² is 4096, the page size."
Figure 2: The four-level page table hierarchy on x86-64. To translate a virtual address, four groups of 9 bits (i, j, k, l) are used as indices, one per level, to walk down the tree to the right page frame. The final 12 bits give the byte offset within that frame. Sub-tables are only created for parts of the address space that are actually mapped, so unused regions cost nothing.

Aside: 48-bit virtual addresses
On common 4-level x86-64 systems, virtual addresses are stored in 64-bit registers, but only 48 bits participate in this address translation scheme. What about the top 16 bits?
The top 16 bits must be a sign-extension of bit 47: all zeroes for low-half user-space addresses, all ones for high-half kernel-space addresses. Such addresses are called canonical addresses. A non-canonical address faults before the normal page-table walk even completes. This is what creates the large unused gap between the low and high halves of the 64-bit virtual address space.
Recent x86-64 processors and Linux kernels also support 5-level page tables, which use 57 bits for address translation (adding a fifth level called P4D (Page 4th Directory) between the PGD and PUD). This provides 2⁵⁷ bytes (128 PiB) of virtual address space per process. The additional level uses bits [56:48] as an index, with bits [63:57] remaining as sign-extension of bit 56.
Alloca : "I see how the bits map to the levels. But who actually performs this translation? On every memory access, something has to look up these tables."
Kernel : "A dedicated piece of hardware called the Memory Management Unit , or MMU. It intercepts every address you issue. You never see any of this; to you it appears as if you are reading directly from your virtual address."
Alloca : "So the MMU does this lookup automatically on every memory access? How does it know where to start?"
Kernel : "The CPU has a register called `CR3` that holds the physical address of your current PGD, the top-level table. I update it on every context switch so the MMU knows which process's tables to use."

Alloca : "And then it uses the bits from my address to walk through the levels?"
Kernel : "Yeah, the same bit fields we just covered. Bits [47:39] index into the PGD, [38:30] into the PUD, [29:21] into the PMD, and [20:12] into the PTE. That last entry gives the physical frame number, which the MMU combines with the 12-bit page offset to produce the physical address."
Figure 3: The four-level page table walk on x86-64. The CPU register CR3 holds the physical address of the top-level table (PGD). Each level is indexed by 9 bits of the virtual address. The TLB caches completed walks; the four-level traversal only occurs on a TLB miss. How often that happens depends heavily on access patterns.

Alloca : "But this means every memory access now requires four table lookups. That's four extra memory reads just to translate my address. Doesn't that make every memory access slower than it should be?"
Kernel : "It would be, if we had to walk all four levels every time. But the MMU has a small, dedicated hardware cache called the Translation Lookaside Buffer , or TLB. Every time a page table walk completes successfully, the result is stored in the TLB: 'virtual page P maps to physical frame F.' The next time you access the same page, the MMU checks the TLB first. If it's there (a TLB hit), the translation completes in a handful of cycles, with no table walking at all."
Alloca : "And how often does that happen?"
Kernel : "Programs that reuse the same memory regions repeatedly, such as tight loops, frequently executed functions, reused buffers, tend to stay within a small working set of pages, keeping the TLB warm and page walks rare. But that is not a given. Access patterns matter a great deal."
Aside: Working set
A process's working set is the subset of its virtual pages that are actively needed during a given window of execution. It's not a fixed quantity; it shifts as the program moves through different phases. A tight loop over a small array has a tiny working set: just the pages holding the loop instructions and the array. A database engine scanning a large table has a much larger one.
The working set matters for two hardware structures:

- TLB : If the working set fits within the TLB's capacity (typically a few hundred to a few thousand entries), translations stay cached and page walks are rare. If the working set exceeds TLB capacity, TLB misses become frequent, which costs performance.

- Physical RAM : If the working set fits in RAM, pages stay resident. If it doesn't, the kernel must evict pages to swap and reload them on demand, which is a far more expensive operation (we cover eviction and swap later in the article).
Keeping the working set small and stable is one of the most effective things a program can do to improve memory performance.
Key Takeaway
Virtual memory operates at the granularity of pages (4 KB chunks of virtual address space) that map to frames (4 KB chunks of physical memory). Each virtual address encodes two pieces of information: the virtual page number (upper bits) and the page offset (lower 12 bits). The offset stays the same during translation; only the page number changes to a frame number.
On x86-64, the kernel uses a four-level hierarchical page table to perform this mapping. The structure has four levels named PGD (Page Global Directory), PUD (Page Upper Directory), PMD (Page Middle Directory), and PTE (Page Table Entry). A 48-bit virtual address is divided into four 9-bit index fields (one per level) plus a 12-bit offset, as shown in Figure 2. The hierarchy is sparse: only the portions of the address space actually in use require allocated page table structures, avoiding the 256 GiB overhead of a flat table.
Because each virtual page is mapped independently, there is no requirement that consecutive virtual pages land in consecutive physical frames. A process's pages can be scattered anywhere in physical RAM, interleaved with frames from other processes, yet the process always sees a clean, contiguous address space. Figure 4 shows this concretely.
The Memory Management Unit (MMU) performs address translation in hardware. On x86, the register
`CR3` holds the physical address of the current process's PGD. On every memory access, the MMU first checks the translation lookaside buffer (TLB) to see if the translation is already cached. If not, the MMU performs a full page table walk to do the translation and then caches the result in the TLB.
Figure 4: Each process has its own virtual address space, but the page table maps virtual pages to physical frames that may be anywhere in RAM. Adjacent virtual pages can land in widely separated frames, and frames from multiple processes are interleaved in physical memory. The page table is what makes this invisible to the process.
Memory Protection via Permission Bits
Alloca : "Earlier you told me that my code segment is read-only. I can execute it but not write to it. But now that I understand the page table, I don't see what actually enforces that. My code pages have entries in the page table just like everything else. What stops me from writing to them?"
Kernel : "Each page table entry carries more than just the frame number. It also holds permission bits. The writable bit says whether you can write to that page; if it is 0, the MMU refuses the write and faults: it stops the access mid-flight and signals me to handle the situation. The executable bit says whether you can run code from it. When I set up your code segment I mark those pages as executable but not writable. Your data and heap are writable but not executable. The MMU checks these bits on every access."
Alloca : "What happens when it faults? Say I try to write to one of my code pages?"
Kernel : "I get called to handle it. A permission violation is almost always a bug or a security attack, so I typically terminate you."
Alloca : "Got it. Are there other kinds of bits apart from permission bits?"
Kernel : "Yes, a very important one that you should know about. There is a present bit in every entry, at every level of the hierarchy. If it is 0, the walk stops there and the CPU faults. But a not-present entry doesn't necessarily mean something went wrong. It might just mean that I haven't allocated a physical frame for that page yet, or that the page has been evicted to disk."
Alloca : "So the permission bits enforce boundaries between code and data, and the present bit tells you whether a page is backed by physical memory at all."
Kernel : "Exactly!"
Key Takeaway

Each page table entry contains not just a frame number but also several permission bits that the MMU enforces on every memory access:

- Present bit : Indicates whether the page is currently backed by a physical frame. If 0, the page table walk stops and the CPU raises a page fault. A not-present page doesn't always signal an error; it might mean the kernel has promised the address range but hasn't yet allocated physical memory for it (demand paging, covered in the next section). It might also mean that the physical frame was swapped to disk and reused by another process.

- Writable bit : Controls write permission. If 0, any write attempt triggers a fault. Used to make code pages read-only and to implement copy-on-write (covered later).

- Executable bit (or NX/XD bit) : Controls execution permission. If the page is marked non-executable, the processor refuses to fetch instructions from it. Code pages are marked executable; data, heap, and stack pages are marked non-executable to prevent code injection attacks.

The MMU checks these permission bits on every memory access, before the access completes. Permission violations typically indicate bugs or security violations and usually result in the kernel terminating the faulting process. This hardware-enforced separation between code and data is a foundational defense against many classes of exploits.
Demand Paging
Some time passes. Alloca has been running her code and has grown more comfortable in this world. But now she needs more memory: she is about to process a large dataset and needs space to store intermediate results.
She does what any process would do: she makes a system call asking for memory. A new region appears in her address space. Kernel hands her an address: `0x55a3c2f00000`. She immediately goes to write her first value there.

And then something strange happens. Time seems to stop for a fraction of a moment. And then it starts again, as if nothing had occurred. Her write went through. But something had happened; she had simply not noticed.
Alloca : "That was odd. Did I just… stutter?"
Kernel : "You did. You triggered a page fault. Don't worry, I took care of it."
Alloca : "A page fault? What's that? And what did you take care of?"
Kernel : "When I gave you that address, I didn't actually back it with physical memory. I recorded the promise that this range of virtual addresses belongs to you, but I didn't go and find a physical frame to put behind it."
Alloca : "You gave me an address without any memory behind it? That sounds like fraud."
Kernel : "It's efficiency. Think about it: you might ask for a hundred megabytes and only use ten. If I allocated a physical frame for every page you asked for, I'd be wasting most of physical memory on pages that never get touched. So instead, I wait. When you actually try to access a page for the first time, the MMU looks up that address in your page table and finds the present bit set to zero. No physical frame is mapped. The MMU raises a trap (a page fault) and control transfers to me."
Alloca : "But how did you know my access was valid? Maybe I was accessing some address I had no right to. How do you tell the difference?"
Kernel : "When I gave you that memory region, I recorded a note called a virtual memory area , or VMA. It says: 'virtual addresses from X to Y are promised to Alloca, with these permissions.' The VMA is not a page table entry. It's a higher-level record of intent that I maintain separately."
Alloca : "So you have two different data structures tracking my memory?"
Kernel : "Yes. The VMA describes what address ranges are valid for you to access. The page table describes which of those valid pages are currently backed by physical frames. When you were created, I set up VMAs for your code segment, your data segment, your stack. Each one records an address range and what you're allowed to do there: read, write, execute. Later, when you call
`malloc` or `mmap`, I create a new VMA for that allocation. But I don't immediately create page table entries for it."

Alloca : "So when the MMU finds a missing page table entry for an address, it triggers a page fault?"
Kernel : "Yes. When a page fault fires, I have to handle it. I first check whether the faulting address falls inside a valid VMA. If yes, the access is legitimate. I just haven't backed it with a physical frame yet. If the address is outside any VMA, you've wandered somewhere you were never given. That's a segmentation fault, and I terminate you."
Alloca : "So the VMA list is your record of promises, and the page table is the record of fulfilments."
Kernel : "Well put. Now, once I confirm the fault is legitimate, I find a pre-zeroed physical frame, write a new entry into your page table pointing to that frame, and resume your execution. The CPU retries the faulting instruction and your write goes through."
Figure 5: A page table entry before and after a demand paging fault. The kernel changes the present bit from 0 to 1 and fills in the physical frame number (PFN).

Alloca : "Wait. Why did you zero it out? Couldn't you just give me the frame as-is?"
Kernel : "Absolutely not. Physical frames get reused. That frame might have previously held data from another process. If I handed you that frame without clearing it first, you could read another process's secrets just by reading uninitialized memory. The zero-fill guarantee is a security invariant: you will never see data you didn't write yourself."
Alloca : "That's reassuring. But what if there are no free frames? What if physical memory is full?"
Kernel : "It happens more often than you'd expect, and dealing with it changes what the present bit in a PTE can mean."
Figure 6: The demand paging lifecycle. Step 3 (checking the VMA) is what distinguishes a legitimate first access from an invalid access. Without a matching VMA, the kernel delivers a segmentation fault instead of allocating a frame.

Aside 1: How the stack grows using demand paging
Remember, when talking about address space layout, we said the stack grows downward. That growth is demand-driven too. The kernel marks the stack VMA as growable, but it does not map every possible stack page upfront. When the stack pointer moves into the next valid page below the current stack, the access faults. Because the faulting address is just below the current stack bottom and the stack VMA is marked as growable, the kernel extends the VMA downward by one page, allocates a frame, and resumes execution. From Alloca's perspective the stack just grew silently.
Two mechanisms prevent this from continuing forever. First, the kernel enforces a maximum stack size (on Linux, set by `ulimit -s`, defaulting to 8 MB). The stack VMA will not be extended past that limit. Second, below the maximum stack limit sits a guard page : a single page that is deliberately left unmapped, no VMA covers it. If the stack pointer jumps far enough to land in or past the guard page (due to deep recursion, a large stack-allocated array, or a corrupted stack pointer), the fault finds no covering VMA. The kernel treats that as an invalid access and delivers SIGSEGV.

The guard page is what turns a silent runaway stack into a detectable crash. Without it, the stack could silently overflow into the memory-mapped region below it and corrupt library or heap data before anything notices.
Aside 2: Memory overcommit: a consequence of demand paging
Demand paging creates an interesting situation: if the kernel only allocates physical frames at first-access time, then `malloc(10GB)` on a machine with 4 GB of RAM will succeed (at least initially). The kernel records the promise in a VMA and returns immediately. No frames are allocated. This is called overcommitting memory: the total size of all VMAs across all running processes can far exceed the amount of physical RAM plus swap.

The kernel's bet is statistical. In practice, most allocated memory is never fully touched. A process might allocate a large buffer "just in case" and only ever write to a fraction of it. A JVM might reserve a large heap up front but populate it lazily. Across hundreds of processes, the working sets sum to much less than the total committed virtual memory, and the system runs fine.
The bet occasionally goes wrong. When too many processes start faulting in pages simultaneously, memory pressure spikes, and the kernel runs out of physical frames. At this point it invokes the OOM killer (Out-Of-Memory killer): a kernel subsystem that scores each process by its memory consumption, age, and other heuristics, then kills the highest-scoring one to reclaim its frames.
You can observe overcommit and OOM events on Linux:
```shell
# How much virtual memory is committed system-wide (in kB)
grep CommitLimit /proc/meminfo    # kernel's ceiling: overcommit_ratio × RAM + swap
grep Committed_AS /proc/meminfo   # total virtual memory promised to all processes

# See if the OOM killer has fired recently
dmesg | grep -i "oom\|killed process"
journalctl -k | grep -i oom
```

The kernel's overcommit policy is tunable via `/proc/sys/vm/overcommit_memory`:

- `0` (default) uses heuristics
- `1` always allows any allocation
- `2` caps total committed memory at `overcommit_ratio × RAM + swap` and begins refusing `malloc` calls that would exceed it.
Key Takeaway
When a process allocates memory, whether by calling `malloc`, growing its stack, or explicitly requesting memory via `mmap`, the kernel does not immediately back every page of that allocation with a physical frame. Instead, it creates a Virtual Memory Area (VMA) in the process's memory descriptor: a record that says "this range of virtual addresses is valid and belongs to this process, with these permissions." The page table entries for these pages are left absent (present bit = 0).

The VMA and the page table serve different roles:

- The VMA is the kernel's record of intent: what address ranges the process is allowed to access.
- The page table is the record of reality: which virtual pages are currently backed by physical frames.
The first time the process reads or writes any address in an allocated-but- unmapped range, the MMU finds a page table entry with present=0 and raises a page fault , a CPU exception that transfers control to the kernel. The kernel's page fault handler:
- Looks up which VMA contains the faulting address. If none, the access is invalid and the kernel delivers a segmentation fault, terminating the process. Otherwise, it continues:
- Allocates a free physical frame.
- Zero-fills that frame (the zero-fill guarantee, required for security, ensures the process never sees data from a previous owner of that frame).
- Installs a new page table entry pointing to that frame, with the present bit set.
- Returns from the exception, causing the CPU to retry the faulting instruction.
From the process's perspective, execution pauses for a few microseconds and then continues as if nothing happened. This mechanism is called demand paging : physical memory is allocated on demand , at the moment of first access, rather than speculatively at allocation time.
The fault described above requires no disk I/O: it is called a minor page fault. Minor faults cover any fault the kernel can resolve entirely in memory. This includes zero-fill for pages that aren't backed by any file, but also cases where the data is already resident somewhere (in the page cache, or shared from another process) and just needs a PTE installed. There is a second kind, the major fault, which does require reading from disk. We will get to that next.
A side effect of demand paging is that physical frames are allocated one by one, on demand, from wherever free memory happens to be. There is no requirement that consecutive virtual pages land in consecutive physical frames. A process's stack might occupy frames scattered across RAM, interleaved with frames belonging to completely different processes. The page table is what makes this invisible: it maps each virtual page independently, so the process always sees a clean, contiguous virtual address space regardless of where its frames physically reside.
Prefer reading this as a polished PDF? I've prepared a beautifully typeset PDF version for offline reading and reference. Buying it is another way to support the time that went into this article.
When Physical Memory Runs Out: Swap and the Dual Meaning of the Present Bit
Alloca : "So what happens when there is not enough free physical memory left to allocate?"
Kernel : "Let me show you. Let's say that I need to allocate a frame for you, but they are all taken. So I must evict a page from somewhere; I look for a page that hasn't been accessed recently. It could be from another process, or even one of your own pages. Once I find the page to evict, I write its contents to a reserved area on disk called swap space. Then I reclaim the frame and give it to you."
Alloca : "And what happens if the process that owned that page tries to access it again?"
Kernel : "Before I give that frame to you, I update the process's page table. I locate the PTE that points to that frame, clear its present bit to 0, and store the swap location in the remaining bits of the entry. The hardware never looks at those bits when present is 0, but I do when handling the page fault."
Alloca : "So when that process touches the page again…"
Kernel : "The MMU sees the present bit is zero in the PTE, and it raises a page fault, bringing me into action to handle it. My fault handler follows the same entry point as always: check the VMA first. In this case, because the page was swapped, its VMA must exist, so the fault handler moves forward and checks the PTE next. It finds the swap coordinates in the non-present bits, uses those to read the data from the disk, and loads it into a fresh frame. After that, it reinstalls the PTE with present=1. Once the page fault handler finishes, I resume the process; it retries the instruction that triggered the fault, and this time it succeeds. It never knew the page had left."
Aside: Minor vs Major Page Fault
Earlier in the demand paging section, we talked about minor page faults. Those kinds of page faults don't involve disk I/O and are handled directly in memory. For example, when `malloc` allocates more pages, the kernel simply creates the VMA and allocates the physical frames on demand when the page fault occurs.

The page fault discussed above, where a process tries to access a page that has been swapped to disk, is a major page fault because handling it requires disk I/O.
Alloca : "So present=0 in a PTE always means that the data is in the swap?"
Kernel : "No. Swap is one destination, but it's not the only one. A non- present PTE can point to data that lives somewhere other than swap space."
Alloca : "Where else can it go besides swap?"
Kernel : "A file. Not every page comes from memory you allocated with `malloc` or grew from the stack. Some pages map directly to content stored in a file on disk."

Alloca : "How does that work?"

Kernel : "You use the `mmap` system call. It lets you map a file into your address space. When you do that, I create VMAs for the mapped range, but I leave the PTEs absent, just like with `malloc`."

Alloca : "So on first access?"
Kernel : "This time again the MMU sees an absent PTE and raises a page fault. But handling this page fault is different from how I handle a page fault for a swapped page, or how I handle allocation of new memory like what we discussed when talking about demand paging earlier."
Alloca : "What changes?"
Kernel : "The first step is the same, I check the VMA to confirm this is a valid region. But what happens next depends on the type of mapping."
Alloca : "What's different about a file-backed mapping?"
Kernel : "For anonymous mappings, a fault means either a fresh allocation where I hand you a new zero-filled frame, or a swap restore, where I read the page back from disk using the swap coordinates stored in the PTE. For file-backed mappings, there is no swap entry. Instead, the VMA itself tells me which file and which block of that file to read. I load that block into a frame, install it in the page table, and resume you."
Alloca : "So at the PTE level, present=0 is just a signal: data is not in RAM. But the place to find it depends on what kind of mapping this is?"
Kernel : "Precisely. For anonymous memory pages that have been swapped, the non-present PTE can carry swap coordinates. For a file mapping that has not been loaded yet, I usually use the VMA to find the file and offset. Either way, the fault handler has enough information to reconstruct the page."
Key Takeaway
When physical memory runs out, the kernel must reclaim frames. It selects pages that have not been accessed recently and evicts them. For anonymous pages (heap, stack, `malloc`), there is no file to fall back on, so the kernel writes the page's contents to swap space on disk before freeing the frame. It then updates the PTE: the present bit is cleared to 0, and the remaining bits are repurposed to store swap coordinates (device number and page offset). These bits are ignored by the hardware; they exist solely as a private record for the kernel's own fault handler.

When the evicted page is next accessed, the MMU finds present=0 and raises a major page fault. The fault handler reads the swap coordinates from the PTE, loads the page from disk into a fresh frame, reinstalls the PTE with present=1, and resumes the process.
However, a page fault for a file-backed mapping is handled slightly differently. Here, the VMA contains information about the file and the offset in the file needed to populate the frame.
Together, anonymous and file-backed mappings cover all the cases a fault handler encounters. Two questions decide which path it takes:
- What type of mapping is this? Anonymous memory has no file behind it. File-backed memory does.
- Why is the page absent? A first-access fault (i.e., the frame was never allocated), or the page was evicted due to memory pressure and is now being accessed again.

Figure 7 below shows all four combinations and how the fault handler resolves each.
Figure 7: The four paths the kernel takes when resolving a page fault,
organized by mapping type (rows) and reason for absence (columns). An
anonymous first-access fault is the only minor fault, the kernel zero-fills a
fresh frame with no disk I/O. All other cases require reading from swap or
from a file and are major faults. For first-access faults (left column), no
page table entries may exist yet, and the fault handler allocates the
intermediate levels (PGD, PUD, PMD) and the PTE on demand. For evicted or
dropped pages (right column), the intermediate levels already exist from when
the page was first loaded; only the PTE was updated when the page left RAM.
Aside: Pinned memory and GPU data transfers
Everything discussed so far assumes the kernel is free to evict any page when memory pressure demands it. There are cases where that is unacceptable. Pinned memory (also called page-locked memory) is memory that the kernel is prohibited from swapping out. A process can pin a region by calling mlock(), after which the kernel guarantees that the underlying physical frames will not be moved or reclaimed for as long as the lock is held.
The most common reason to pin memory today is GPU data transfers. DMA (Direct Memory Access) engines, which move data between host RAM and GPU memory without CPU involvement, require that the source or destination buffer remain at a fixed physical address for the duration of the transfer. If the kernel were to evict a page mid-transfer and reassign the frame, the DMA engine would read or write the wrong physical location. Pinning prevents this by fixing the physical address in place.
This is why AI training frameworks pin host memory for input batches. In PyTorch, `tensor.pin_memory()` and the `pin_memory=True` option on `DataLoader` allocate page-locked host memory through the CUDA driver. With pinned buffers, the CUDA driver can initiate DMA transfers directly from host RAM to GPU memory without an intermediate copy, and it can overlap those transfers with GPU computation. On large models trained over high-bandwidth interconnects (NVLink, PCIe 5.0), this overlap between data loading and compute is a significant contributor to overall throughput.

The trade-off is that pinned memory is a scarce resource. Because pinned pages cannot be reclaimed, overusing them reduces the memory available for the page cache and other processes, increasing the risk of swap pressure elsewhere.
Deep technical articles like this take significant time to research, test, and polish. Paid subscriptions make it possible for me to keep writing them carefully instead of rushing out shorter, shallower pieces.
Copy-on-Write and Fork
Alloca has been given a large job: process a large dataset. She needs help to finish it quickly.
Alloca : "I wish I had a copy of me that could share this workload."
Kernel : "You can do that, just use the `fork()` system call."

Alloca : "How does that work?"

Kernel : "When you call `fork()`, I make a new process which is almost an identical copy of you. I give this process the same code as you, a copy of your file descriptor table, and even your memory."

Alloca calls `fork()` and creates a new process called "Forka". She inherits everything Alloca had.

Forka and Alloca start to do their work. Soon Alloca tries to perform a memory write. The familiar brief pause. Then it passes.
Alloca : "That pause. What was that?"
Kernel (appearing): "Another page fault."
Alloca : "Another page fault? But the page is present, I've been reading from it just fine."
Kernel : "It's present, yes. But I marked it read-only, and you tried to write. That's what triggered the fault."
Alloca : "Wait, why did you mark it read-only? That memory was clearly meant for both reading and writing."
Kernel : "It was an optimization I did when creating Forka. Let me explain why I did it."
Alloca : "Please."
Kernel : "I created Forka by giving her an independent copy of your memory. The simple approach is to copy every page immediately. But you have gigabytes of heap, and most of it she may never write to. Copying all of it upfront would waste memory and make fork extremely slow. So instead, I gave Forka new page tables that initially point at the same physical frames as yours, which means that both of you are sharing the same frames. This only works as long as both of you are just reading those frames. When either of you needs to write to one of these shared pages, a page fault occurs and I give the writing process a private copy of that frame. This optimization is called copy-on-write (CoW)."
Alloca : "So the read-only marking is how you detect that moment."
Kernel : "Precisely. Your write triggered a fault, I caught it, confirmed this was a copy-on-write page, and handled it: I allocated a fresh frame, copied the 4 KB into it, updated your PTE to point to the new frame with write permission restored, and resumed your write. Forka's mapping is untouched."
Alloca : "And now we each have our own copy of that page?"
Kernel : "Yes. That page has been copied on write. But only that page. All the pages you haven't written to yet are still shared. If you never write to a page, it stays shared forever, zero copies made."
Forka : "What if my parent exits before I write to a page?"
Kernel : "I take care of that by tracking reference and mapping state for each physical frame. When your parent exits, I remove its mappings. The next time you write to a page, if I can see that the page is no longer shared, I can skip the copy and simply restore write permission on your existing PTE. There's no one left to protect."
Figure 8: Copy-on-write after `fork()`. Initially, both page tables point to the same physical frames (top). After Alloca writes to page A, the kernel allocates a new frame (19), copies the contents, and updates only Alloca's PTE to point to the new frame. Forka's PTE still points to the original frame and remains read-only; the kernel will restore write permission on Forka's next write fault without needing to copy, because the frame is no longer shared.

Aside: fork + exec: why process creation is cheap

A common Unix pattern is to call `fork` immediately followed by `exec` to load and execute a new program. `exec` discards the child's entire address space and builds a fresh one for the new program. For example, this is how the shell works whenever you execute a command.

For this reason `fork` needs to be cheap, and one way to achieve that is by avoiding the copying of the parent's memory pages until it is really needed.

Key Takeaway
`fork()` creates a new process (the child) that is an exact copy of the parent at the moment of the call. Naively, this would require copying every byte of the parent's virtual memory, a multi-gigabyte operation for large processes. Copy-on-write (COW) makes `fork()` efficient by deferring that copy until it is actually necessary.

When `fork()` is called:

- The kernel allocates a new process descriptor for the child.
- The kernel creates a new set of page tables for the child, initially pointing to the same physical frames as the parent.
- For every private writable mapping, the kernel marks the entry as read-only in both parent and child. Read-only pages (code) are shared as-is; they were already protected.

The kernel tracks reference and mapping state for each physical frame. After a fork, private pages that were writable in the parent are now mapped by both processes, so their state records that they are shared.

When either process subsequently writes to a COW-protected page, the MMU detects a write to a read-only PTE and raises a protection fault. The kernel's COW handler:

- Checks whether the page is still shared. If it is, a copy is needed. If the kernel can determine the faulting process is now the only relevant owner, it can simply restore write permission without copying.
- If a copy is needed: it allocates a new frame, copies the contents, and updates the faulting process's PTE to point to the new frame with write permission. The other process's PTE is left pointing to the original frame, still read-only.
Memory-Mapped Files
Several cycles pass. Alloca is trying to analyze a large log file. She has been doing it the obvious way: calling `read()` in a loop, filling a buffer, processing the buffer, repeat. Kernel notices this and wanders over.

Kernel : "You know there's a better way to do that."
Alloca : "I'm reading a file. What better way is there?"
Kernel : "Instead of reading into a buffer, let me map the file directly into your address space. You access it like regular memory: just use a pointer, and I'll handle getting the data to you."
Alloca : "You mean I can read a file with a pointer? No `read()` calls at all?"

Kernel : "Exactly. Call `mmap()`. Give me the file descriptor, the length, and some flags. I'll create a new VMA in your address space (a memory-mapped region). Then you can read from or write to addresses in that region just like regular memory, and I'll give you the file's contents."

Alloca does it. She gets back an address, `0x7f4b00000000`. She reaches out to read the first byte at that address.

And the pause happens again. A little longer this time.
Alloca : "Longer pause. What was that?"
Kernel : "A major page fault. When you called `mmap()`, I didn't actually load any of the file data into memory. That file could be gigabytes in size, and I have no idea which parts you'll actually access. So I just created a VMA for that address range and left the page table entries absent. The first time you accessed that page, the MMU found present=0, trapped to me, and I had to read it from disk."

Alloca : "So mmap is also lazy?"
Kernel : "That's right. Demand paging works for files too. Now, notice where I put the data after reading it from the disk."
Alloca : "Where?"
Kernel : "In the page cache. This is a pool of physical frames I use to cache file data. When a file page is read (whether via `read()` or `mmap()`), it lands in the page cache. For your mmap access, once the data was in the page cache, I installed a page table entry pointing directly to that page cache frame. Your virtual address now directly maps to the physical frame that holds the file data."

Aside: The page cache is not reserved memory
A common misconception is that the page cache is a reserved pool of memory; it's not. It is simply the set of physical frames that the kernel is currently using to hold file data. When an application needs more memory and there are no free frames, the kernel can reclaim clean page-cache frames instantly, because the file on disk is already the backing copy. This is why a system that looks nearly full of "used" memory can still allocate freely: much of that "used" memory is reclaimable cache, not locked-in application data.
Alloca : "So I'm reading the file's data directly from the page cache, through my page table?"
Kernel : "Yes. No intermediate user-space buffer copy. Now compare that to what happens when you use `read()` instead. I still bring the file data into the page cache, usually by DMA from the storage device into memory. But then `read()` copies the data from the page cache frame into your user-space buffer. That page-cache-to-user-buffer copy is the extra step that `mmap()` avoids."

Aside: What is DMA (Direct Memory Access)?
Normally, when a CPU wants data from a storage device or network card, it would have to sit in a loop reading bytes, which is an expensive waste of cycles. DMA is a hardware mechanism that lets peripheral devices transfer data directly into main memory (RAM) without CPU involvement.
In this scheme, the kernel and device driver submit an I/O request that describes the target memory pages and the storage range. The storage controller uses DMA to transfer data directly into those pages and interrupts the CPU when the transfer is done. The CPU is free to do other work the entire time.
Alloca : "And `mmap()` avoids that second copy because I access the data directly through the mapped address. But what happens if you evict the page cache frame while it's mapped?"

Kernel : "Before I can reclaim that frame, I first remove the page table entry pointing to it. The VMA remains intact, so the next time you access that address the MMU finds no mapping, faults, and I reload the data. From your perspective the mapping is seamless; you never hold a dangling pointer."
Figure 9: `read()` vs. `mmap()` I/O paths. With `read()`, data is brought from disk into the page cache and then copied into the process's user-space buffer. With `mmap()`, the process's PTE points directly into the page cache, eliminating that page-cache-to-user-buffer copy. The trade-off is that `mmap()` pays through page faults and page-table management instead of explicit read calls.

Alloca : "So should I always use `mmap()` for file I/O? Avoiding that user-buffer copy sounds like an obvious win."

Kernel : "Not always. `mmap()` removes one cost, but it introduces others. It trades explicit I/O and copying for page faults, page tables, TLB pressure, and different failure modes. Whether that trade is good depends on the access pattern."

Aside: `mmap()` is not automatically faster

The first access to a cold mapped page is still a page fault. The fault enters the kernel, locates the VMA, finds or reads the page cache page, installs a PTE, and resumes the faulting instruction. If you scan a huge file once, you may take one fault per 4 KB page, and those faults can dominate the page-cache-to-user-buffer copy you avoided.

`read()` and `mmap()` also expose different shapes of work. With `read()`, user space usually asks for a large buffer at a time, maybe 64 KB, 256 KB, or more. The kernel copies a contiguous chunk into that buffer and can issue readahead based on the file access pattern. With `mmap()`, readahead can happen too: when a fault reveals sequential access, the kernel may read surrounding file pages into the page cache, and may map nearby already-cached pages around the fault. But the control flow is still implicit and fault-driven. Cold pages still need faults to install mappings.

Mappings also consume page table memory, create TLB pressure, and may trigger TLB shootdowns when unmapped or when permissions change. Error handling is different too: if another process truncates a mapped file and you later touch a page beyond the new end, the kernel may deliver SIGBUS. With `read()`, you usually see an error return or a short read instead.

So `mmap()` is often attractive when access is random, repeated, shared across processes, or naturally pointer-based. `read()` is often competitive or better for simple sequential streaming, especially with large buffers. "Zero-copy" is not the same as "free"; the only reliable answer for performance-sensitive code is to measure the actual workload.

At that moment, Forka wanders over. She too needs to read the same log file.
Forka : "I'm going to mmap that same file. Same one you're using, Alloca."
Forka calls `mmap()`. She accesses the same page Alloca just read. But this time there is no pause.

Forka : "That was fast. Why no pause this time?"

Kernel : "Because that page is already in the page cache; it was loaded when Alloca accessed it. I just gave your page table an entry pointing to the same physical frame. You're both reading from the same physical bytes. No disk I/O. No copy. Nothing moved."
Alloca : "Wait, we're both pointing at the same physical frame? So if I write to my mapped region, does Forka see it?"
Kernel : "That depends on a flag you passed to `mmap()`. With `MAP_SHARED`, your write goes directly into the shared page cache frame, so yes, Forka sees it. With `MAP_PRIVATE`, your write triggers a COW fault and you get a private copy, same as after `fork()`. The file is never touched."

Alloca : "And if I use `MAP_SHARED`, when does the change actually reach disk?"

Kernel : "It happens asynchronously. But if you need to guarantee it has been written to disk, you call `msync()` or `fsync()`."
Figure 10: `MAP_SHARED` vs. `MAP_PRIVATE` write semantics. With `MAP_SHARED`, writes go into the shared page cache and are flushed to disk asynchronously. With `MAP_PRIVATE`, the first write triggers a COW fault; the process gets a private copy that diverges from both the file and other processes.

Key Takeaway
`mmap()` is a system call that can be used to map a range of bytes from a file directly into a process's virtual address space, creating a new VMA backed by the file. Subsequent reads and writes to that virtual address range behave exactly like memory accesses: the kernel's page fault machinery handles loading data from disk on demand.

The central abstraction is the page cache: a kernel-managed pool of physical frames that holds recently accessed file pages. In the normal buffered-I/O path, file access via `read()`, `write()`, and `mmap()` goes through the page cache. The difference is how user space reaches those bytes.

The reason `read()` copies into a user buffer is ownership. The caller receives bytes placed in memory it fully controls. Once the call returns, the kernel can evict or reuse the underlying page cache page without affecting the caller's data.

With `mmap()`, the kernel abstracts away the complexities of memory through the page table: if a mapped page is evicted, the PTE is marked absent, the next access faults, and the kernel reloads the data transparently.
Aside: Bypassing the page cache using direct I/O
By default, ordinary `read()`, `write()`, and `mmap()` file access goes through the page cache. File data gets cached in the kernel-managed page cache first, and either gets copied to a user buffer (`read()`), copied from a user buffer (`write()`), or mapped directly into the process (`mmap()`). This is buffered I/O, and it is the normal path.

There is another option: open a file with `O_DIRECT`. This asks the kernel to transfer file data directly between the storage stack and your user-space buffer, bypassing the normal page-cache data path. This sounds appealing when you want to avoid the kernel-managed page cache and have a caching layer in the application itself. But it comes with its own constraints. The buffer address, I/O length, and file offset often need to satisfy filesystem/device alignment requirements, commonly 512 bytes or 4 KB, though the exact rules vary.

The reason anyone uses `O_DIRECT` is control. Database engines are a famous example. These systems do sequential scans of data while processing queries. With buffered I/O, the page cache fills up with intermediate data the database engine will not need in the near future, which can evict useful pages it will need soon. To gain control over this, databases implement their own buffer pools in user space and disable the page cache via direct I/O.

The tradeoff with direct I/O is that you bypass the page-cache machinery that normally provides readahead, dirty-page buffering/writeback, and shared cached file pages between processes. You are now responsible for your own buffering, I/O sizing, alignment, and scheduling strategy. For most applications, buffered I/O is the right choice. `O_DIRECT` is a tool for workloads that already implement their own caching and need tighter control over the kernel's caching behavior.
I publish many of these deep dives for free so they can reach as many programmers as possible. Paid subscribers make that possible. If this article is helping you, consider supporting the publication.
Anonymous, File-Backed, and Shared Memory
Alloca now understands that some pages come from files and some pages come from nowhere at all, beginning life as zero-filled frames. But she is still missing a vocabulary for the different kinds of memory she has been using.
Alloca : "I keep hearing different names for memory: anonymous memory, file-backed memory, shared memory. Are these different mechanisms, or just different names for pages?"
Kernel : "They are categories of mappings. Let me explain this to you systematically."
Alloca : "Sure!"
Kernel : "By now you must have understood that the VMA is a key structure behind how I manage virtual memory. Every VMA tells me two things about a mapping: where does the data come from, and who can observe writes to it?"
Alloca : "Let's start with where the data comes from."
Kernel : "There are two possibilities. The data can either come from a file, like when you `mmap` a file, and that results in what I call file-backed mappings. The second possibility is that the data is anonymous memory with no file backing it. For example, your heap and your stack regions are anonymous. You can allocate anonymous memory using `mmap` as well, by using the `MAP_ANONYMOUS` flag."

Alloca : "Understood. What is the second thing the VMA tells you?"
Kernel : "It tells me about who can observe writes to that mapping. A mapping can be private or shared. With a private mapping, your writes are yours alone. If the mapping began from a file, your first write usually triggers copy-on-write and creates an anonymous private page. The file is unchanged. With a shared mapping, multiple processes can map the same underlying object and observe each other's writes through those mappings."
Alloca : "So file-backed versus anonymous tells us where the contents come from, and private versus shared tells us who sees writes."
Kernel : "Exactly."
Figure 11: Virtual memory mappings can be understood along two independent axes: where the contents come from, either anonymous memory or a file, and who can observe writes, either only the current process or other processes sharing the same mapping.

Key Takeaway
Virtual memory mappings can be classified along two axes:
- Anonymous memory : Memory with no ordinary file behind it. Heap, stack, and `MAP_ANONYMOUS` mappings are common examples. New anonymous pages are zero-filled on first touch. If modified anonymous pages must be evicted, they need swap because there is no file to reload them from.
- File-backed memory : Memory whose contents come from a file. Executable code, shared libraries, and file mappings are examples. Clean file-backed pages can be dropped and later reloaded from the file. Dirty file-backed pages must be written back before reclaim.
- Private mappings : Writes are private to the process. A private file mapping can initially share clean file pages, but the first write creates an anonymous copy through COW.
- Shared mappings : Writes are visible to other processes mapping the same object. `MAP_SHARED` and POSIX shared memory use this model.
Aside: tmpfs: the file-anonymous hybrid
"Shared memory" as people commonly use the term (POSIX shared memory via
`shm_open`, System V shared memory, `/dev/shm`) is a distinct concept from the shared mapping we just discussed. A shared mapping is simply one where writes are visible to other mappers. These shared memory APIs are higher-level mechanisms built on top of that idea; under the hood, they are typically backed by tmpfs.

tmpfs is a filesystem whose contents live entirely in memory and swap rather than on a persistent disk. A tmpfs file looks and behaves like an ordinary file: you can
`open()`, `mmap()`, or `fstat()` it, but there is no disk backing it. If the system reboots, the contents are gone.

From a reclaim perspective, tmpfs pages behave more like anonymous memory than disk-backed file cache: they have no persistent disk file to reload from, so evicted dirty tmpfs pages go to swap. Internally, they still live in the page cache and are managed through the VFS like ordinary files, which is what makes the familiar file API work. This makes tmpfs useful as a fast inter-process communication channel: two processes can map the same file from
`/dev/shm` with `MAP_SHARED` and share the same physical frames, while still using the ordinary file API.
Page Reclaim: How the Kernel Chooses What to Evict
Alloca has now seen swap and file-backed mappings, but she has only been told the simple version: when memory runs out, the kernel evicts something old. She wants to know how that choice is made.
Alloca : "When physical memory fills up, you said you pick a page that hasn't been accessed recently. But how do you know that a page hasn't been used in the recent past?"
Kernel : "I maintain a list of physical frames organized by how recently they appear to have been used. These are the LRU lists (least recently used). I simply scan these lists, starting from the coldest end and find a candidate page that can be evicted."
Alloca : "But the question remains: how are these lists created and updated? Do you monitor each memory access to continuously update these lists?"
Kernel : "Watching every access in software would be impossibly expensive. So I rely on hardware's help. Every page table entry has an accessed bit, which is there to indicate if a page was accessed. When the MMU performs a page table walk and uses a PTE to translate an address, it sets that bit automatically in that PTE. I don't have to trap the access, I just come along later and look at what the hardware recorded."
Alloca : "How does that work in practice? The MMU is setting the accessed bit in the page table entries, but you need to maintain and update LRU lists of frames. Do you actively go through all the page table entries of all processes and update the LRU lists?"
Kernel : "That would be just as expensive. Imagine iterating every virtual page of every running process on every reclaim cycle, you'd spend more time on bookkeeping than anything else. I take the opposite approach. I scan the LRU list from the coldest end, check the page table entries mapping to it and see if the accessed bit is set or not."
Alloca : "How do you find out which PTEs map to a frame?"
Kernel : "That's where reverse mappings come in, usually called rmap. The page table is a forward map: virtual address -> physical frame. I also maintain the reverse: metadata attached to each physical frame that lets me find the VMAs and page table entries that currently map it. When I want to check whether a frame is warm, I follow its rmap to the relevant PTEs, and check the accessed bits."
Alloca : "Ah, I was not aware that you also maintain reverse mappings. But I still don't understand how all of this works together. You've given me pieces of the puzzle, but the full picture is not clear."
Kernel : "The confusion is understandable. Let's connect everything together. When I have to reclaim memory, I start by scanning the coldest set of frames from the LRU list. Then I use the rmap to check the accessed bit of the pages mapping to those frames. If a frame's accessed bit is not set, then it is a candidate for reclaim."
Alloca : "And what if the accessed bit was set?"
Kernel : "Then things become interesting. If a frame's accessed bit is set, it could mean that it has been accessed tens or hundreds of times, but it could also mean that it was accessed once and has since gone cold. So, for such frames, I unset the accessed bit to give them a second chance. If the frame is scanned again later and the bit is still clear, that is stronger evidence that it has gone cold."
Aside: The `kswapd` daemon

Normally, Linux runs a background thread called `kswapd` that watches free-memory watermarks. When free memory drops below a threshold, `kswapd` wakes up and starts reclaiming pages before the situation becomes urgent.

If background reclaim cannot keep up, the allocating process may have to wait for reclaim. This is called direct reclaim, and it can show up as allocation latency in the application.
Alloca : "And, how are the LRU lists structured? You said you start from the coldest end, how do pages age toward that end?"
Kernel : "Although things are a bit more complex, I will simplify for you. Think of two lists: active and inactive, each having a head (newest) and a tail (oldest). When a new page is faulted in, it typically starts near the head of the inactive list. Over time, pages age toward the tail as newer pages push them back, or when colder pages get reclaimed."
Alloca : "But if all the newly faulted pages start from the head of the inactive list, how does a page get promoted to the active list?"
Kernel : "A page that consistently shows its accessed bit set across multiple reclaim scans is promoted to the active list because it has demonstrated sustained use. From there, it ages toward the active tail again. When the active list grows too large, its tail pages are demoted back to the head of the inactive list. So the flow is: inactive tail is where eviction happens, active tail is where demotion back to inactive happens. Pages circulate through this cycle, and only those that consistently fail to show any access get evicted."
Aside: Multigenerational LRU (MGLRU)
The active/inactive model works, but two buckets is a coarse instrument. The fundamental limitation is that it preserves only coarse aging information: it can tell that a page looked recently referenced at scan time, but it does not maintain a rich multi-step history of how its temperature changed over time. A page accessed ten thousand times since promotion looks effectively the same as one accessed once; a page that was hot for ages but cooled recently looks the same as one that was never warm. Under workloads with mixed access frequencies, periodic re-access patterns, or bursty I/O, this can lead to evicting pages that will soon be needed or retaining pages that will not.
MGLRU (multi-generational LRU) addresses the root cause by giving the kernel more expressive age information. Instead of two lists, pages are grouped into several generations , each representing a time window of access activity. Pages start in the youngest generation when first faulted or accessed. Without re-access they age into older generations; with re-access they are refreshed back into a younger one. Reclaim always targets the oldest generation first. With more age buckets, the cooling curve of a page becomes observable over time, allowing the kernel to make finer, more informed eviction decisions.
MGLRU was introduced in Linux 6.1. The build config option
`CONFIG_LRU_GEN=y` includes the code and `CONFIG_LRU_GEN_ENABLED=y` enables it by default. When compiled in, `/sys/kernel/mm/lru_gen/enabled` controls it at runtime. Systems without it fall back to the classic active/inactive lists.

Alloca : "So the lists tell you which pages are cold. But once you've found a cold page, does it matter what kind of page it is? Is every cold page equally easy to evict?"
Kernel : "Not at all. The first split is file-backed versus anonymous. Clean file-backed pages are the easiest. If a page cache page matches the file on disk, I can drop it immediately and reuse the frame. The next access will fault and read it back from the file."
Alloca : "What about dirty file-backed pages?"
Kernel : "Those need writeback. If a process wrote through
`write()` or `MAP_SHARED`, the page cache page may be dirty. Before I can reclaim that frame, I need to schedule I/O to write the contents back to the filesystem. After writeback completes, the page becomes clean and cheap to drop. A `MAP_PRIVATE` write is different: the first write produces a private anonymous copy via COW. That copy has no file behind it, so there is no persistent home to reload from. To reclaim it safely I must write it to swap, same as any other anonymous page with real data in it."

Alloca : "So under memory pressure, file cache tends to be easier to reclaim than heap memory."
Kernel : "Often, yes, especially clean file cache. This is why free memory can look low while the system is healthy: much of RAM may be used as page cache, and clean cache can be reclaimed quickly when applications need memory. The dangerous case is when the active working sets of processes exceed RAM. Then I have to reclaim pages that will soon be needed again, and the system can start thrashing."
Alloca : "Thrashing means constantly evicting and faulting the same pages back in?"
Kernel : "Right. The CPU spends more time waiting for page faults and disk I/O than doing useful work. At that point, virtual memory's illusion of abundant memory has become too expensive to maintain."
Key Takeaway
Page reclaim is the kernel's mechanism for freeing physical frames under memory pressure. It is approximate, not perfect LRU. Two complementary mechanisms make it practical without being prohibitively expensive:
- Accessed bits : Every page table entry has a hardware-maintained accessed bit that the MMU sets automatically when the CPU uses that mapping. The kernel reads and clears these bits periodically to estimate recency without trapping every memory access.
- Reverse mappings (rmap) : The page table is a forward map (virtual -> physical). The kernel also maintains the reverse: metadata on each physical frame that lets it find the VMAs and page table entries that map it. Reclaim uses rmap to check accessed bits on candidate frames only, without scanning every process's page table. This means reclaim starts from lists of physical frames, not from virtual address spaces, so the cost scales with the number of frames under consideration, not with the total size of all processes' virtual memory.
Active/inactive LRU : Pages move between active and inactive lists. In Linux, these are split further into anonymous and file-backed LRUs, maintained per memory-management domain. New pages generally enter as inactive candidates. Pages age toward the tail as newer pages arrive. Reclaim scans from the tail of inactive, checking accessed bits via rmap for mapped pages:
- Accessed bit set means that the page was recently used; clear the bit to give it a reprieve.
- Accessed bit clear means that the page is cold; evict it.
Pages that are consistently accessed get promoted to the active list. When the active list grows too large, its tail pages are demoted back to the head of inactive. Pages cycle through this until they consistently fail to show any access.
MGLRU (multi-generational LRU) extends this with several age generations instead of two lists, allowing finer-grained decisions about what is truly cold.
The reclaim cost also depends heavily on page type:
- Clean file-backed page : cheapest. Drop it immediately; a future access reloads from the file.
- Dirty file-backed page : must be written back to storage before the frame can be reused.
- Anonymous page with private data : generally needs swap before reclaim, because there is no file to reload it from. Without swap configured, ordinary anonymous pages are much harder to reclaim.
The practical consequence: "used memory" is not automatically bad. The RAM used for clean page cache is readily reclaimable. However, the real danger is when the combined hot working set of applications exceeds RAM, forcing the kernel to evict pages that will soon be needed again, causing thrashing.
Memory Access Patterns and VM Performance
Alloca has been running correctly for some time now. Her pages are backed, her TLB is warm, and demand paging has handled everything smoothly. But lately she's noticed something odd: she has two data structures (a dense array and a hash table), each holding the same amount of data, both fitting entirely in RAM. When she scans through all elements in each, the array finishes in seconds. The hash table takes ten times longer.
Alloca : "Same amount of data. Both in RAM. Page table entries for both are installed. Why is the hash table so much slower?"
Kernel : "Because the virtual address space makes all memory look equally fast. It isn't. The cost of an access depends on how it interacts with the layers underneath: the TLB, the cache, the physical layout."
Alloca : "Tell me what's different."
Kernel : "When you scan the array, you move through virtual addresses in order. If the first element is at address
`0x1000`, and each element is 4 bytes, then the next is at `0x1004`, then `0x1008`, and so on. You stay within one 4 KB page for over a thousand consecutive accesses. Remember, the TLB caches completed virtual-to-physical translations, one entry per page. All those accesses within the same page reuse the same TLB entry, so they are fast. Then you cross into the next page and need one new entry. Only a small sliding window of TLB entries is active at any moment, and you reuse each one extensively before moving on. The TLB handles that easily."

Alloca : "And with the hash table? I'm probing at random locations across the whole allocation."
Kernel : "Yes, that's where the problem is. Hash table probes are spread across the entire allocation with no fixed order. You might touch page 47, then page 3, then page 201. The CPU has a limited hierarchy of TLBs, a small L1 TLB and a slightly larger second-level TLB. Together they may cover hundreds to a few thousand page translations depending on the CPU and page size. As your probe set fans out across many pages, the TLB hierarchy fills up. When it's full, a new translation evicts an old one. The trouble is that with no locality in your access pattern, the evicted translation is often the one you'll need again soon. By the time you revisit a page, its translation is likely gone, and the hardware may have to walk the page table again to rebuild it."
Alloca : "So if a translation misses across the TLB hierarchy, the hardware has to do a page walk before I can even access the data?"
Kernel : "Right. For random access across a large range, you can be spending significant overhead on translation for every byte you actually wanted. And TLB pressure isn't the only thing working against you. There's also the hardware prefetcher. When you access virtual addresses in a predictable pattern, the CPU detects it and starts fetching upcoming cache lines before you ask for them. For your array scan, you're reading
`0x1000`, `0x1004`, `0x1008` in sequence, so the prefetcher loads the next cache lines ahead of time."

Alloca : "But what if the next address crosses into the next virtual page?"
Kernel : "Usually the hardware prefetchers are conservative around 4 KB page boundaries because crossing into the next page could cause a page fault or run into permission issues."
Alloca : "Understood. Each array page holds over a thousand elements. So the prefetcher helps throughout each page, and the cost of crossing into the next is just one TLB lookup?"
Kernel : "Correct. For your hash table, the random probes defeat the prefetcher even within a single page because there's no predictable pattern to detect. So the array wins twice: fewer distinct TLB entries needed, and hardware prefetching of upcoming cache lines."
Alloca : "Is there anything else that affects this?"
Kernel : "Yes, how often you revisit the same pages. If you keep accessing the same set of pages over and over, those pages stay hot. Their TLB entries stay cached, so you're not constantly rebuilding translations. And those physical frames stay in RAM because my reclaim policy notices they're being used frequently. I'm less likely to evict a page that's getting hammered than one that hasn't been touched in a while."
Alloca : "So if my working set is small enough to fit in the TLB and I keep reusing it, I'm golden?"
Kernel : "Exactly. A tight working set is cheap. But if your working set is sprawling across hundreds of thousands of pages that you only touch occasionally, you're constantly evicting TLB entries you'll need again soon. And under memory pressure, those infrequently-accessed pages become candidates for eviction to swap. Then you're not just paying for TLB misses, you're paying for disk I/O to bring pages back from swap."
Alloca : "So the key is to touch fewer pages. Is there anything I can do to control this?"
Kernel : "Absolutely. One thing that's often overlooked is how tightly you pack your data. The virtual memory system operates at page granularity, so anything that helps you fit more useful data into each page reduces the number of pages, translations, and TLB entries needed for the same logical work."
Aside: Data layout also changes TLB footprint
Compilers often pad structs to satisfy alignment requirements, but struct padding is not just a local layout detail. It also affects how much memory an array of those structs occupies, and therefore how many cache lines and pages the program touches.
Suppose you have a struct with a
`char`, then an 8-byte pointer, then another `char`. On a typical 64-bit system, the compiler may insert padding after the first `char` to align the pointer, and then more padding at the end so that each element in an array keeps the pointer correctly aligned. The result may be 24 bytes per struct, even though the actual fields occupy only 10 bytes.

Across a million elements, that difference matters. A 24-byte layout occupies about 24 MB, while a more compact reordered layout may occupy about 16 MB. With 4 KB pages, the larger layout spans more pages. More pages means more TLB entries are needed to cover the same number of logical objects, more page-table walks when the TLB misses, and more memory that the kernel may have to manage under pressure.
One common way to reduce padding is to order fields from larger alignment requirements to smaller ones: 8-byte fields first, then 4-byte fields, then 2-byte fields, then 1-byte fields. The compiler may still add tail padding, but usually less than when different-sized fields are interleaved randomly.
Key Takeaway
Virtual memory makes all addresses look the same, but they're not. The CPU has a limited TLB hierarchy, with small L1 TLBs backed by larger second-level TLBs. Together, they cover a limited number of translations, typically a few hundred to a few thousand, depending on the CPU and page size. Once your working set spans more pages than the TLB hierarchy can cover, translation misses become more common. Misses that hit in the second-level TLB are cheaper, but misses that require a hardware page walk can be expensive.
How you access memory matters a lot. If you walk through an array sequentially, you stay within a small number of pages at any given time. You reuse the same TLB entries for thousands of accesses before moving to the next page. The hardware prefetcher can see the pattern and load upcoming data into cache before you ask for it (at least until you hit a page boundary, where it has to stop). That's why sequential scans are fast.
Random access is a different story. When you jump around unpredictably, like probing a hash table, or chasing linked list pointers, you may land on different pages very frequently. As a result, you may face TLB misses for pages that are being visited for the first time, and also you risk evicting TLB entries you'll need again soon. The prefetcher can't predict where you're going next, so it doesn't help. In the worst case scenario, every access risks a TLB miss and a page walk.
Temporal locality matters too. If you keep revisiting the same pages, they stay hot. Their translations stay cached in the TLB. The kernel is less likely to reclaim frequently used pages, because they tend to be recognized as part of the active working set. Under severe pressure, though, even useful pages can still be reclaimed. But if your working set is sprawling and you rarely touch the same page twice, you're constantly rebuilding translations and building up memory pressure.
How you pack your data affects how many pages you touch. A poorly-designed struct with lots of padding might be twice the size of a well-packed one. If you have an array of a million structs, that can result in a difference of 6000 vs 3000 pages. Same logical work, but one version fits in the TLB and the other thrashes. Every byte you save per element multiplies across the whole working set: fewer cache lines, fewer pages, fewer translations, fewer page walks, and less memory pressure.
The VM machinery works largely at page granularity while caches operate at cache-line granularity. Performance-conscious code thinks about how data is laid out in both cache lines and pages, how those pages fit in the TLB, and how access patterns interact with the translation machinery.
Huge Pages and TLB Efficiency
Alloca has redesigned her hash table. Better hash function, reduced load factor. She accepts that random access is unavoidable. But she is still spending too much time on TLB misses. For a 2 GB table with 4 KB pages, the math is unforgiving: half a million pages, and no TLB holds that many entries.
Alloca : "I understand the TLB problem. My 2 GB table spans half a million 4 KB pages. The TLB can only hold a limited number of translations. I will always be missing. What can I do besides shrinking the data?"
Kernel : "You can change the page size. The TLB has a fixed capacity; you can't change it. But what you can change is how much memory each entry covers. x86-64 supports 2 MB huge pages, and on many systems 1 GB pages as well. A single 2 MB TLB entry covers 512 times as much memory as a 4 KB entry. So your 2 GB hash table mapped with 2 MB pages needs only 1,024 TLB entries instead of half a million."
Alloca : "That is dramatically fewer. But, how does this work with the page table hierarchy?"
Kernel : "The page table walk has an early-exit mechanism when you use huge pages. Each page table entry has a set of flags embedded in its low bits. One of those flags is the page-size bit (PS) which tells the hardware: 'stop here, this entry points directly at a physical frame, not at another table.' For a normal 4 KB mapping, the PMD entry points to a PTE table, and the walk continues. But when the PS bit is set on the PMD entry instead, the hardware treats the PMD entry itself as the final frame mapping, covering 2 MB at once. It skips the PTE level entirely. The 21 low-order bits of the virtual address become the offset within the 2 MB frame instead of requiring a further table lookup. Similarly, if the PS bit is set on a PUD entry, the hardware stops there and maps 1 GB directly, skipping both the PMD and PTE levels."
Figure 12: Huge page early-exit paths through the page table hierarchy. A normal 4 KB access walks all four levels. A 2 MB huge page stops at the PMD level (the PMD entry has the page-size flag set); the lower 21 bits of the virtual address become the offset within the 2 MB page, so no PTE lookup is needed. A 1 GB huge page stops at the PUD level; the lower 30 bits become the offset within the 1 GB page.

Alloca : "Fewer levels in the walk, fewer TLB entries needed. What is the catch?"
Kernel : "Physical contiguity. A 2 MB huge page needs 512 physically contiguous 4 KB frames, and the starting address has to be aligned to a 2 MB boundary. For a regular 4 KB page, I can grab any single free frame from anywhere in physical memory. It's easy. But for a huge page, I need to find a 2 MB-aligned block where all 512 frames are sitting right next to each other, and they all have to be free at the same time. After the system has been running for a while, physical memory gets fragmented. Small allocations come and go, leaving little gaps everywhere. Finding a big contiguous block with the right alignment gets harder and harder. I can try compaction, where I migrate pages around to assemble larger free ranges, but there's no guarantee it'll work."
Alloca : "So huge pages are generally easier to get on a fresh system and harder as long-running workloads fragment memory?"
Kernel : "That's the usual pattern, yes. So how do you get them reliably? One answer is to reserve a pool upfront, ideally at boot before memory has had a chance to fragment. You set
`vm.nr_hugepages`, I carve out that many huge pages and hold them aside. They're always contiguous, always aligned, always ready. When you ask for one, I hand it out instantly. The catch: that memory stays off-limits for anything else for as long as it's in the pool, even when nothing is using it."

Alloca : "And if I don't want to lock memory away like that?"
Kernel : "That's where Transparent Huge Pages, or THP, comes in. THP tries to give you huge pages without a dedicated pool. Sometimes I can allocate one directly when you first fault a region. Other times, a background daemon called
`khugepaged` scans your anonymous mappings and collapses a 2 MB-aligned range of base pages into a single huge page after the fact. Your mapping gets upgraded silently, no code changes needed."

Alloca : "So THP might help and might not, and I have no guarantee which I got."
Kernel : "Right. It's opportunistic. It runs into the same fragmentation problem I described earlier, finding a 2 MB-aligned contiguous block on a system that's been running for a while is not always possible. If the block isn't there, nothing happens and you stay on base pages. The other risk is that THP may try to create that contiguous block by running compaction first, migrating pages around to free up the space. Compaction is expensive and can cause latency spikes, which is why some latency-sensitive systems disable THP entirely. For predictable huge page coverage, like a database buffer pool, a large in-memory cache, anything where sudden jitter is unacceptable, you're better off reserving the pool explicitly at boot."
Key Takeaway
On x86-64, the base page size is 4 KB, but the architecture also supports larger leaf mappings: 2 MB pages (a PMD-level leaf entry, skipping the PTE table), and on systems with appropriate hardware support, 1 GB pages (a PUD-level leaf entry, skipping both PMD and PTE levels). Each covers correspondingly more memory per TLB entry and requires fewer levels in the page table walk on a TLB miss.
The key constraint is physical contiguity: a 2 MB huge page requires 512 physically contiguous, correctly aligned frames. Physical memory fragmentation, which accumulates over time as the system allocates and frees memory of different sizes, makes this progressively harder to satisfy.
Linux provides two mechanisms:
- Explicit huge pages (configured via `vm.nr_hugepages` or at boot): drawn from a dedicated HugeTLB pool. Reserving them at boot is the most reliable way to avoid fragmentation. Memory in the pool is reserved for HugeTLB use while it remains there, i.e., it cannot be used as ordinary pages, but the pool size can be reduced later to release pages back, subject to fragmentation.
- Transparent Huge Pages (THP) : opportunistic huge-page backing for ordinary mappings, especially anonymous memory, either through fault-time huge-page allocation or later background collapse by `khugepaged`. Falls back to base pages when a suitable huge page cannot be allocated or assembled; depending on THP settings, the attempt itself may trigger compaction and latency spikes.
For latency-sensitive workloads with large, frequently-accessed memory regions, explicit huge pages provide the reliable TLB reduction that THP cannot guarantee. The trade-off is granularity: larger pages reduce translation overhead but can waste memory and are harder for the kernel to allocate.
TLB Shootdowns on Multi-Core Systems
Alloca has spawned dozens of worker threads. They're distributed across the machine's cores, all working in parallel. Everything runs smoothly until she decides to release a large memory mapping she no longer needs.
Alloca : "I used
`mmap` earlier to create a large shared memory region. Now I'm done with it. How do I give it back?"

Kernel : "You call `munmap`. It's the counterpart to `mmap`. You pass the starting virtual address and the length, and I clean up the range: the VMAs are removed, the page-table entries are cleared. Physical pages that nothing else is pointing to get released back to wherever they came from."

Alloca : "That sounds straightforward."
Kernel : "It would be, if you were running on a single core. But you're not. You have dozens of threads running in parallel across multiple CPU cores. And, every core carries its own private TLB."
Alloca : "Wait, they don't share a single TLB?"
Kernel : "No. Every core keeps its own private cache of recent translations. On a multi-core machine, when your thread accesses memory, the MMU on that specific core checks that core's TLB. If it misses, the page walk happens, and the result gets cached in that core's TLB. Other cores don't see that entry unless they independently translate the same address and cache it themselves."
Alloca : "So if thread A on core 0 and thread B on core 1 both access the same virtual address, they each have their own TLB entry for it?"
Kernel : "Exactly. Both cores translate the same virtual address to the same physical frame, but they cache that translation independently. This per-core design is essential for performance, sharing a single TLB across dozens of cores would create a massive bottleneck. But it creates a consistency problem when page tables change."
Alloca : "What kind of problem?"
Kernel : "Think about what happens when you call
`munmap`. You're on core 0. I clear the PTEs for the region you're releasing. But cores 1, 2, 3… they might still have cached translations for pages in that region. Those TLB entries now point to frames that you just gave back to me."

Alloca : "And you might reassign those frames to someone else immediately."
Kernel : "Yes. Without explicitly invalidating those cached translations, a CPU could keep using a stale translation after I have decided the mapping is gone. If the underlying page were later reused for something else, that would be a disaster. I cannot allow that to happen."
Alloca : "So before
`munmap` finishes, you need to make sure every core's TLB is consistent with the cleared page table?"

Kernel : "Yes. And that's expensive."
Alloca : "How do you do it?"
Kernel : "I send inter-processor interrupts (IPIs) to every CPU core that might hold stale translations for this address space. When a core receives the IPI, it stops what it's doing, runs a short TLB flush routine to invalidate the affected entries, and sends an acknowledgment back. I wait for all cores to acknowledge before I let your `munmap` call complete. This is called a TLB shootdown."

Aside: What is an inter-processor interrupt?
Modern CPUs have a hardware mechanism called the APIC (Advanced Programmable Interrupt Controller) that lets one CPU core send an interrupt directly to another. This is an inter-processor interrupt , or IPI. Unlike a regular device interrupt, which is triggered by external hardware (a disk, a network card), an IPI is sent by software running on one core to deliberately interrupt a different core.
When a core receives an IPI, it stops whatever it was doing, saves its state, and jumps to an interrupt handler. For TLB shootdowns, that handler executes instructions to invalidate the stale TLB entries, then signals acknowledgment and returns to the interrupted work. The sending core waits until all targeted cores have acknowledged before proceeding.
This mechanism is general-purpose. The kernel uses IPIs for TLB shootdowns, but also for things like delivering signals across cores, triggering scheduler reschedules, and stopping cores for kernel panics or suspend.
Alloca : "Every core has to stop and flush, even if they're in the middle of something?"
Kernel : "Yes, if they might have cached translations for your address space. If a core has never run any of your threads, I can skip it. But if a thread has been running on a core recently, that core's TLB might still hold entries for your address space. I send the IPI, that core stops, flushes the relevant entries, and I wait for it to confirm before letting your `munmap` complete. So you're waiting on cross-core synchronization."

Alloca : "That's why it takes so long. The more cores, the more coordination required."
Kernel : "Precisely. On a large machine, a single `munmap` can involve many cores being interrupted and synchronized. The cost tends to grow with the number of relevant cores, and it also depends on how I choose to invalidate the affected range: whether I flush individual pages or do a broader flush."

Alloca : "When else does this happen?"
Kernel : "Anywhere I have to change or remove page-table entries that other CPUs might already have cached. `mprotect` is the obvious case: you change permissions, and the translation that other cores have cached is now wrong. The same thing happens during page reclaim and migration, when I unmap pages to move or free them. Copy-on-write faults in a multithreaded process can trigger it too, since other threads on other cores might have the old read-only translation cached. The more frequently these happen in a tight loop, the more cross-core coordination overhead you're paying."

Alloca : "So freeing memory and changing mappings or permissions can force expensive cross-core coordination on large machines."
Kernel : "In the worst case, yes. The general principle is that page-table changes are not just local bookkeeping. On a multi-core machine, they can force cross-core synchronization before the operation is complete."
Key Takeaway
On a multi-core machine, each CPU core has its own TLB. This per-core design is essential for scalability: a shared TLB would be a massive bottleneck with dozens of cores competing for access. But it creates a consistency challenge: when the kernel modifies page table entries, other cores may still have cached the old translations.
`munmap` is the system call that releases a mapping created by `mmap`. Allocators may also reduce the process heap with `brk`/`sbrk` or return large `mmap` allocations with `munmap`, but the common issue is the same: page table entries for a virtual address range are removed or changed. Clearing the page table isn't enough. If another core still has a stale TLB entry pointing to a frame that has just been freed and potentially reassigned to another process, that core could access memory it shouldn't, violating isolation.

The fix is a TLB shootdown: the kernel sends inter-processor interrupts (IPIs) to all CPUs that might hold stale mappings for that address space. Each interrupted CPU flushes the relevant TLB entries. For synchronous invalidations, the operation cannot safely complete until the targeted CPUs have performed the required flush. This forces cross-core synchronization before the operation can proceed.
Shootdown cost tends to grow with the number of targeted CPUs and with how disruptive the chosen flush strategy is. On x86, the kernel may invalidate individual pages or choose a broader TLB flush; the choice depends on the size of the range and the cost of flushing unrelated entries. On machines with many cores, `munmap` and `mprotect` on large regions can become significant bottlenecks.

TLB shootdowns arise whenever page-table mappings are modified: `mprotect` (permission changes), page reclaim and migration (unmapping pages to move or free them), and copy-on-write faults in multithreaded processes.

The practical implication is to minimize page table invalidations in hot paths. High-performance allocators reduce `munmap` frequency by caching freed memory and batching OS returns. In hot paths, prefer reusing large, longer-lived mappings over repeatedly creating, protecting, unprotecting, and destroying small mappings.
Articles like this are expensive in time, but worth writing when readers support them.
NUMA (Non-Uniform Memory Access): The Physical Topology of Memory
Alloca has been running smoothly. Her pages are backed by huge pages where possible, her working set fits comfortably in the TLB, and her threads coordinate to minimize expensive operations like `munmap`. She has dozens of worker threads, each processing data from a shared buffer in memory.

But something is wrong. She's noticing a strange inconsistency: some of her threads complete their work quickly. Others, doing exactly the same computation on the same amount of data, take much longer. It's not occasional, it's consistent. Threads 0-23 are fast. Threads 24-47 are slow.
Alloca : "I don't understand. Half of my threads are stuck waiting for memory while the other half run at full speed. They're all doing the same work, accessing the same buffer. Why would memory be fast for some threads and slow for others?"
Kernel : "Come with me. I want to show you something about the physical machine underneath your address space."
Kernel leads Alloca to a view she has never been shown before, not the virtual address space, but the physical hardware topology beneath it.
Figure 13: NUMA topology showing two CPU sockets, each with local memory. In this simplified model, each socket corresponds to one NUMA node, but real machines, particularly AMD EPYC systems, may expose more than one NUMA node per socket. Alloca's buffer was initialized by a thread on Socket 0, so all physical frames landed on NUMA Node 0. Threads 0-23 running on Socket 0 get fast local DRAM access. Threads 24-47 running on Socket 1 must have their cache misses served from Node 0, crossing the inter-socket interconnect. Local DRAM latency is typically around ~100ns; remote DRAM access is often 1.5-3× higher, though exact numbers vary by CPU generation, memory speed, and system topology.

Kernel : "This server has two CPU sockets. Each socket has its own pool of RAM wired directly to it. When a CPU on socket 0 reads from memory attached to socket 0, it's a short trip, maybe 100 nanoseconds. Fast."
Alloca : "And what about reading from the other socket's memory?"
Kernel : "That's where the problem appears. Socket 0 and socket 1 are connected by an inter-socket link. When a CPU on socket 0 needs data from memory attached to socket 1, the request must cross that link. Round trip takes two to three times longer."
Alloca : "But my virtual address space… it's just a flat range of addresses. How would I even know which memory is on which socket?"
Kernel : "You don't. That's the problem. Your virtual addresses are completely abstract. Address `0x10000` and address `0x20000` look identical to you. But behind the scenes, one might map to a physical frame on socket 0, and the other to a frame on socket 1. The virtual memory system hides that completely."

Alloca : "So the physical location of my data determines performance, but I have no control over it?"
Kernel : "You do have control, but it's indirect. The key moment is when a page is first accessed. Remember demand paging? When you touch a page for the first time, I have to allocate a physical frame for it. At that moment, I need to decide which NUMA node to allocate from."
Alloca : "How do you decide?"
Kernel : "By default, I use what's called first-touch placement. Whichever CPU core triggers the page fault gets to decide. I allocate the frame from that core's local NUMA node. So if your thread running on core 5 (which is on socket 0) is the first to touch a page, that page's frame lands on socket 0's memory pool."
Alloca : "Okay, so the first thread to touch a page determines where it lives physically."
Kernel : "Yes. Now think about what probably happened with your buffer. You likely had one thread, maybe your main thread that initialized the buffer. That thread touched every page in sequence, probably while running on socket 0. Every single page fault was handled by a CPU on socket 0, so every single frame landed on socket 0's memory."
Alloca : "And then I handed that buffer to all my worker threads?"
Kernel : "Right. And those threads are distributed across both sockets. Threads 0 through 23 run on socket 0, when they access the buffer, the memory is local, everything is fast. But threads 24 through 47 run on socket 1. Any cache miss they take resolves as a DRAM fetch, and that DRAM is on the wrong socket, the access has to cross the inter-socket interconnect. That's typically two to three times the latency of a local DRAM fetch."
Alloca : "That explains the performance split perfectly. So the thread that initializes the data and the threads that use it need to be on the same socket?"
Kernel : "That's one solution. For partitioned data where each thread works on its own section, you can have each thread initialize its own portion while pinned to the socket where it'll do the work. The first-touch policy ensures the data lands locally."
Alloca : "What if the data is shared? All my threads are reading the same buffer."
Kernel : "Then you have a harder problem. No matter where you put the data, it's local for some threads and remote for others. One approach is to use explicit NUMA policies. The mbind system call lets you control allocation policy for a specific virtual address range."
Alloca : "What can I do with it?"
Kernel : "Several things. You can bind a range to a specific NUMA node, force all its pages onto one socket's memory. You can set a preferred node that's tried first but allows fallback. Or you can interleave pages across nodes, where consecutive pages alternate between socket 0 and socket 1."
Alloca : "Why would I want to interleave?"
Kernel : "Interleaving is useful for heavily shared data with high bandwidth demand. Think about it, if all your threads are hammering the same memory range, putting it all on one socket creates a bottleneck, all the traffic goes through one memory controller. With interleaving, each socket sees a mix of local and remote pages when scanning the range, but the bandwidth demand is spread across both memory controllers rather than concentrating on one. You're trading some locality for better aggregate throughput."
Alloca : "Understood. Is there also the possibility of the scheduler moving my threads between sockets after I've set everything up?"
Kernel : "Yes, in that case your careful placement falls apart. If a thread that was running on socket 0 with local memory gets migrated to socket 1, then suddenly all its memory is remote. This is why NUMA-sensitive workloads typically pin threads to specific CPUs using taskset or pthread_setaffinity_np."
Alloca : "So the typical pattern is: decide which threads work on which data, pin those threads to the appropriate socket's cores, and make sure the thread that first touches the data is running on the right socket so first- touch puts the frames locally."
Kernel : "That's the basic approach for thread-private or partitioned data. For shared data, you either accept that some accesses will be remote, or you interleave to balance the load. There's no perfect solution when multiple sockets need heavy access to the same memory. You're always trading off between locality and bandwidth distribution."
Aside: Automatic NUMA balancing
Linux also provides automatic NUMA balancing, controlled via /proc/sys/kernel/numa_balancing. When enabled, the kernel periodically samples a task's memory by temporarily unmapping pages, or marking them so that the next access triggers a NUMA hinting fault. The fault lets the kernel record which CPU or NUMA node is actually accessing it. Based on those faults, the kernel may migrate pages toward the node that uses them, or move tasks closer to their memory. This can improve placement without code changes, though the sampling faults and migrations add overhead and are not guaranteed to help every workload.
The downside is that it is reactive. It adapts after the fact rather than placing memory correctly from the start, and the sampling-induced faults add a small overhead. For workloads where latency consistency matters, deliberate placement with `mbind` and thread pinning is more reliable. For workloads where access patterns are hard to predict or partition, automatic balancing can be a reasonable hands-off alternative.

Key Takeaway
Modern multi-socket servers are NUMA (Non-Uniform Memory Access) systems. Physical memory is divided into NUMA nodes , each directly attached to one CPU socket. A CPU can access memory on any node, but local access is noticeably faster than remote access, which must traverse the inter-socket interconnect.
The virtual address space hides this topology completely: two adjacent virtual pages may be backed by physical frames on different NUMA nodes. The NUMA node of a physical frame is primarily determined at allocation time by the kernel's memory policy.
The kernel's default policy for anonymous memory is effectively first-touch: when a page is first faulted into a real physical frame, it is usually allocated from the NUMA node local to the CPU handling that fault. If initialization and hot access happen on different sockets, most DRAM accesses will pay remote latency.
Strategies for NUMA-aware operation:
- Initialize on the accessing socket: for partitioned data, the thread that will perform the hot accesses should also touch pages first, placing frames on the local node.
- Thread pinning: bind threads to specific CPUs with `taskset` or `pthread_setaffinity_np` to prevent cross-socket migration.
- `mbind`/`set_mempolicy`: per-range NUMA allocation policy in code.
- `numactl`: command-line wrapper to set NUMA policy for an entire process.
- Interleaving: for heavily shared data accessed across sockets, interleaving pages across nodes distributes bandwidth demand across multiple memory controllers. Each socket sees a mix of local and remote pages, but no single memory controller becomes a bottleneck.
- Automatic NUMA balancing: the kernel can be configured to sample memory access patterns at runtime and migrate pages or tasks toward the nodes that use them most (`/proc/sys/kernel/numa_balancing`). It requires no code changes but is reactive rather than proactive: it adapts after observing bad placement rather than preventing it. For latency-sensitive workloads, deliberate placement is more reliable.
For shared data accessed heavily by multiple sockets, no placement is perfect: the trade-off is between locality, bandwidth balance, and sometimes deliberate replication.
For data-intensive workloads on multi-socket servers, NUMA is often the dominant source of unexplained memory latency once TLB and cache behavior have been addressed.
Observing Virtual Memory in Practice
Our journey through the virtual memory world with Alloca ends here. We have covered the machinery of the modern Linux kernel from first principles. For this final section, I will switch back to my normal voice and cover the observability and debugging tools that let you actually see what is happening in a running system.
Understanding the mechanisms is one thing; knowing where to look when something goes wrong is another. Memory problems tend to disguise themselves. A process using more memory than expected, a workload that fits in RAM but still feels sluggish, a system that gradually slows down under load -- each of these points to a different layer of the VM stack. The tools below correspond to those layers. Work through them in order when you are unsure where the problem lives.
Step 1: What address ranges does the process have?
Before anything else, look at what the process has actually mapped.
`/proc/<pid>/maps` lists every VMA: the virtual address range, the permissions (r, w, x, and p/s for private or shared), the offset into any backing file, and the file name if there is one. You can see the heap, the stack, the shared libraries, and any `mmap` regions all in one place.

This is the reservation view. It tells you what address ranges exist and what they are allowed to do, but says nothing about how much physical memory is actually backing them. A region that looks large here might have almost no physical pages behind it; demand paging means pages are only allocated on first touch.
`pmap -x <pid>` presents the same information in a slightly more readable table format.

Step 2: How much physical memory is the process actually using?
`smaps` is `maps` extended with a full accounting breakdown for every VMA. It tells us "what is actually in RAM." The key fields to understand:

- `Rss` (Resident Set Size): how many kilobytes of that VMA are currently in physical RAM. Pages that have never been touched, clean file-backed pages that have been reclaimed, or anonymous pages that have been swapped out all contribute nothing here.
- `Pss` (Proportional Set Size): like Rss, but shared pages are divided proportionally among all processes that map them. If ten processes share a 4 KB library page, each is charged 0.4 KB.
- `Private_Clean`/`Private_Dirty`: pages private to this process that either still match their backing file (clean) or have been written to and diverged (dirty).
- `Shared_Clean`/`Shared_Dirty`: pages shared with other processes. Clean shared pages, like read-only library code, are cheap to reclaim. Dirty shared pages need to be cleaned first: file-backed ones require writeback to disk, while shmem/tmpfs dirty pages go to swap instead.
- `AnonHugePages`: how many bytes of this VMA are backed by transparent huge pages. If you want to verify that THP is actually working for a particular region, this is the field to check.
For the system-wide picture, `/proc/meminfo` is the companion. The fields worth checking are `MemAvailable` (the kernel's estimate of how much can be freed without touching swap), `Cached` (page cache, most of which is reclaimable), `Dirty` and `Writeback` (pages queued for or actively being written back), `AnonPages` (anonymous pages currently in RAM), and the swap fields: `SwapTotal`, `SwapFree`.

Step 3: Is the process triggering disk I/O through page faults?
Page faults are the mechanism that connects virtual addresses to physical memory, and they come in two very different varieties.
Minor faults (`ru_minflt` via `getrusage`) are resolved without any disk I/O. They involve a kernel trap and some bookkeeping, but no waiting for storage. A large number of minor faults during startup is perfectly normal.

Major faults (`ru_majflt` via `getrusage`, or `major-faults` in `perf stat`) are a different story. These required actual disk I/O, either reading a cold file page from storage, or bringing a page back from swap. On spinning disks, a major fault can easily take several milliseconds; on NVMe it might be a few hundred microseconds. Either way, sustained major faults in a steady-state hot path are a warning sign. They usually point to swap pressure, uncached memory-mapped file I/O, or a working set that is competing with the rest of the system for physical memory.

To measure fault counts for a single run:
perf stat -e page-faults,major-faults ./your-program

`page-faults` counts total faults; minor faults are approximately the difference from major.

Step 4: Is the whole system under memory pressure?
Once you have the process-level picture, zoom out to see whether the kernel itself is struggling.
`vmstat 1` samples every second. The columns to watch are `si` and `so` (swap-in and swap-out in KiB per second). Nonzero `so` means the kernel is writing pages to swap because reclaim pressure has reached anonymous memory. Nonzero `si` means pages are being faulted back in. Both together at the same time is the classic thrashing pattern. The `b` column counts tasks currently blocked on I/O, which includes swap I/O.

Pressure Stall Information (PSI) at `/proc/pressure/memory` gives a finer picture. It reports the fraction of time tasks spent stalled waiting for memory: `some` means at least one task was stalled; `full` means all non-idle tasks were stalled simultaneously, i.e., the system was making zero forward progress. A machine where the `full` metric is climbing steadily is one where memory has become a genuine bottleneck, not just busy, but actively blocking work from completing.

Step 5: Is translation itself the bottleneck?
TLB misses are almost entirely invisible to the kernel. The MMU handles them in hardware via page-table walks; the kernel only gets involved if the walk faults because the page isn't present. To observe TLB behavior you have to go to the hardware performance counters, which `perf` exposes.

perf stat -e dTLB-load-misses,dTLB-store-misses,iTLB-load-misses ./your-program

`dTLB-load-misses` and `dTLB-store-misses` count data TLB misses on loads and stores respectively. `iTLB-load-misses` tracks instruction TLB misses, which matters when the code footprint is large or when working with JIT-compiled code. Note that the event names vary by CPU generation; `perf list | grep -i tlb` shows what your machine exposes.

As we learned in the article, a high TLB miss count alone doesn't tell you much; what matters is whether those misses are triggering expensive page-table walks. A miss that hits the second-level TLB is relatively cheap, but one that requires a full hardware page walk is not. For the actual walk cost, look for events like `dtlb_load_misses.walk_active` on Intel processors, which counts cycles spent actively walking page tables.

High TLB miss rates combined with low major-fault counts (data is in RAM but translations are not cached) point to a working set that has outgrown the TLB hierarchy. The remedies are the ones covered earlier: huge pages to reduce the number of entries needed, or tighter data packing to reduce the number of distinct pages touched.
Step 6: Are some threads slower than others on identical work?
If some threads consistently take longer than others doing the same computation, and the disparity is stable rather than random, NUMA placement is the first thing to check.
`numactl --hardware` shows the machine's NUMA topology: the number of nodes, memory per node, and the distance matrix between nodes. The distance matrix is a relative latency measure; it tells you the penalty being paid per remote access.

`numastat -p <pid>` shows where a process's pages actually live. If the bulk of the pages are on node 0 but the threads doing the work are running on node 1, that is first-touch misalignment in practice.

`/proc/<pid>/numa_maps` provides the same information per VMA, including which NUMA policy is in effect for each region and how many pages have landed on each node. It is verbose but precise when you need to understand why a specific mapping ended up where it did.
Virtual memory problems almost always start as a vague symptom. The right approach is to peel back layers in order rather than guessing:
-
Is memory actually being used, or just reserved? Compare VMA size in
mapsto Rss insmaps. Large reserved-but-not-resident regions are normal (lazy allocation). Unexpectedly large Rss is the real signal. -
Is the process responsible for that memory, or is it shared? Compare Rss to Pss. If Rss is large but Pss is small, you're mostly mapping shared libraries or shared regions that other processes are also paying for.
-
Is the process triggering frequent disk I/O through page faults? Check major fault count via
perf statorgetrusage. Sustained major faults in a steady-state workload usually mean swap pressure, uncached mmap/file-backed I/O, or a working set that does not fit in available RAM or page cache. -
Is the system reclaiming memory aggressively? Check
vmstatfor swap-in/out activity and PSI for actual stall time. Highsi/sowith high PSIfullis a system in memory distress. -
Is translation overhead high even with data fully in RAM? Check TLB miss rates and page-walk cycles via
perf stat. High miss rates with low fault counts point to a working set that has outgrown the TLB, a case for huge pages or tighter data packing. -
Are some threads consistently slower than others on the same work? Check NUMA placement via
numastat -pand/proc/<pid>/numa_maps. Asymmetric slowness with equal work is a NUMA symptom, but confirm it against CPU placement, page placement, and other sources of per-core variation such as thermal throttling, IRQ affinity, or lock contention.
Thanks for reading. This was one of the most ambitious pieces I've written for this publication. If you found it useful, consider becoming a paid subscriber or purchasing the PDF version at this link. It directly supports more long-form systems writing.
What We've Learned
In this article, we explored virtual memory through a dialogue between the kernel and a user-space process named Alloca. Along the way, we covered a lot of ground: address spaces, page tables, TLBs, demand paging, memory types, page reclaim, copy-on-write, mmap, huge pages, TLB shootdowns, NUMA, observability, and more.
Let's end this article with a summary of everything that we learned.
Providing memory-level isolation is the foundational problem that virtual memory solves. Each process gets its own private set of virtual addresses, and the MMU enforces the boundaries between them. No process can directly read or write another's memory.
Giving the address space structure is the next step. The virtual address space is divided into segments like code, data, heap, and stack, each with different permissions and growth behavior. Code is read-only and executable; the stack grows down on demand; the heap grows up through allocator requests.
Mapping every byte to a physical location is impractical. A flat table covering the full 128 TB user address space would itself consume 256 GB. The solution is fixed-size pages and frames with hierarchical page tables: memory is divided into 4 KB chunks, any frame can back any page, and the page table hierarchy only allocates levels for address ranges actually in use.
Walking four levels of page table on every memory access would be too slow. The TLB caches recent virtual-to-physical translations so that most accesses skip the walk entirely. Hit rate depends on access patterns and how tightly the working set fits within the number of TLB entries available.
Allocating physical frames at malloc time wastes memory. Demand paging defers the allocation: when a process reserves memory, the kernel records the promise in a VMA but does not assign physical frames. Frames are allocated only on first access, when a page fault fires.
Not all pages cost the same to evict. The kernel distinguishes anonymous memory (heap, stack, and MAP_ANONYMOUS regions), file-backed memory (executables, shared libraries, mmap'd files), and tmpfs-backed shared memory. Clean file-backed pages can be dropped immediately and reloaded from disk. Dirty file-backed pages must be written back first. Anonymous and tmpfs pages need swap space because there is no file to reload them from.
Physical memory fills up. Page reclaim is the kernel's mechanism for freeing frames under pressure. It uses hardware-maintained accessed bits to estimate recency without trapping every access, reverse mappings (rmap) to find which page table entries point to a given frame, and active/inactive LRU lists to identify cold pages. The goal is to evict cold pages while keeping hot working sets in RAM. Evicting pages that will soon be needed again causes thrashing.
Copying all of a process's memory on fork is too slow. Copy-on-write shares physical frames between parent and child after fork. Pages are only copied when one side actually writes to them, tracked with per-frame reference counts. This makes fork nearly instantaneous regardless of address space size.
File I/O through a user buffer requires an extra copy. mmap maps page cache frames directly into the process address space, allowing the process to read file data without a separate copy from kernel buffer to user buffer. Multiple processes mapping the same file share the same physical frames.
Random access patterns scatter across too many pages. Sequential access reuses a small sliding window of TLB entries and benefits from reused cached translations and hardware prefetching in the cache. Random access, such as hash table probes and pointer chasing, has no such guarantees and can suffer from unpredictable performance.
Large working sets exhaust TLB capacity. Huge pages (2 MB or 1 GB on x86-64) can allow a single TLB entry to cover orders of magnitude more memory than a standard 4 KB page. The constraint is physical contiguity: huge pages require large, aligned, contiguous blocks of physical memory, which become harder to find as memory fragments over time.
Unmapping pages on a multi-core machine requires cross-core coordination. Each CPU core has its own TLB. When the kernel removes or changes a page table mapping, other cores may still hold the old translation cached. A TLB shootdown sends inter-processor interrupts to all relevant cores, forcing them to flush stale entries before the operation can complete. This is why munmap and mprotect on large regions can be expensive on machines with many cores.
Virtual memory hides the physical topology of memory. On multi-socket NUMA servers, physical memory is divided into nodes, each attached to one socket. Remote memory accesses (those that cross the inter-socket interconnect) are 1.5-3× slower than local ones. The virtual address space makes both look identical. Correct NUMA placement requires co-locating threads with their data and using first-touch initialization, thread pinning, or explicit mbind policies.
Thanks for reading Confessions of a Code Addict! This post is public so feel free to share it.
-
-
🔗 r/york Homestead park & museum gardens looking lovely today rss
| submitted by /u/DentistKitchen
[link] [comments]
---|--- -
🔗 r/LocalLLaMA Getting a feel for how fast X tokens/second really is. rss
I love following all your adventures with local LLM setups. Quality and size of the models are important, but so is performance. Numbers don't really convey the experienced speed well, however.
If someone claims they run Qwen 3.6-27B at 21 tokens/second, how fast is that? Is 10 tokens/second unusable? I find these numbers objective but meaningless.
I built a script that helps me get a subjective feel for these objective numbers.
It supports text, code and reasoning + code.
https://mikeveerman.github.io/tokenspeed/
submitted by /u/MikeNonect
[link] [comments] -
🔗 r/Yorkshire I built Kettlewell Manor House in Minecraft. rss
| The house was built in the early 18th century, in the town of Kettlewell. submitted by /u/ILikeNiceBuildings
[link] [comments]
---|--- -
🔗 r/Leeds To all who came to cheer for the Marathon/Half-Marathoners - THANK YOU! rss
Your encouragement pushed is all to the finish line!
Extra thanks for those who came out with sweets/fruit/goodies - fuelling us all along the way.
You're all superstars, thank you!
submitted by /u/E45_Asthma_Cream -
🔗 r/wiesbaden Pen & paper meetup on Whit Monday: 25 May, from 5:30 p.m. rss
On 25 May, Whit Monday, the open tabletop RPG Monday takes place again at the Phantasos Arena.
On offer this time are Alien, KULT, Cthulhu, Candela Obscura, and Cairn. Sessions start between 5:30 and 6:30 p.m.; details and sign-up are on Discord: https://discord.gg/hfB7WcRC4n
The meetup: Once a month we play one-shots that anyone can join. No prior experience is required; we explain everything on site and are happy to see new faces! Usually 2-5 sessions run in parallel, and thanks to the selection there is always something exciting for experienced players too. The date is always announced the month before.
Location: Schossbergstraße 11, Wiesbaden-Schierstein, office building in the second row
Getting there: Reachable by public transport, but at night you need to check carefully what runs and when. The venue is in the middle of a commercial area, so parking is no problem.
Costs, materials: A contribution of 5€ per person is requested for use of the rooms; anyone who can't afford that is just as welcome (it is not an "entry fee" and is not checked). Snacks and drinks can be bought cheaply on site or brought along. Bring a pen and notepaper; anyone without their own RPG dice can borrow some.
submitted by /u/Bitter-Secretary6006 -
🔗 Jessitron What is it like to be you? rss
This is what I want to know when I get to know someone. What is the experience of being alive in your body, in your world?
This is the definition of consciousness: there is something it is like to be you.
Erik Hoel contrasts this with LLMs, speculating
there is nothing it is like to be two matrices multiplying.
This is why humans can be responsible and accountable for things. Our actions have consequences that we can feel, that we can’t help but feel, for decades. What I do changes what it is like to be me. This is ongoing, inescapable.
When we share experiences, either by living them together or by telling stories, then there is something it is like to be us. Connection.
-
🔗 r/Harrogate Just joined Reddit — I make bespoke jewellery in Harrogate and couldn't resist finally joining this community rss
I run a small independent jewellery studio here in Harrogate and just joined Reddit — lovely to find this community! submitted by /u/FogalandBarnes -
🔗 r/Leeds Why no development in the Trinity tower? rss
We're staying on the top floor of the Park Plaza hotel for a wedding. We look out over Trinity and this tower block that rises above it. Every single floor is completely empty - no offices, no shops, no accommodation. Anyone know why? It seems such a waste of space in a prime position for either offices or city centre flats.
submitted by /u/benbamboo -
🔗 r/wiesbaden Shoe care rss
Does anyone have a recommendation for a place in Wiesbaden where you can drop off (quite expensive) leather shoes for proper care?
Unfortunately, even after years, I'm too clumsy myself for anything beyond basic care. Thanks
submitted by /u/QuotaAchievingAnimal -
🔗 r/Yorkshire Georgian Richmond Yorkshire rss
submitted by /u/Still_Function_5428 -
🔗 r/york I don’t think I’ll ever get tired of this city rss
submitted by /u/SavingsMap2506 -
🔗 r/york Minster rss
Another photo I’m deleting. Taken from the new road behind the NRM, early January. This view won’t be around much longer once they start building up York Central. submitted by /u/AttitudeAdjuster33-1 -
🔗 r/york Petergate rss
I’m just deleting some photos on my camera and this one was on it. I took it because it was half past four in the afternoon on the 8th of January this year. Hardly a soul in sight.. it was like Covid had come back. submitted by /u/AttitudeAdjuster33-1 -
🔗 r/wiesbaden What's an "unwritten law" in Wiesbaden that every newcomer has to learn on day 1? rss
A good friend of mine is moving here from the US soon (and nope, not with the Army, a completely normal job in the area). We talked on the phone yesterday and she asked me what the absolute "unwritten laws" are.. the things that aren't in any travel guide but that you absolutely need to know to get along here comfortably (or to avoid making yourself unpopular right away).
Honestly, I was drawing a complete blank 🤦. Can anyone help?
submitted by /u/LethisXia -
🔗 Register Spill Joy & Curiosity #85 rss
We launched a completely rebuilt Amp this week.
Amp Neo (it started as a codename, but I've grown really fond of it) is remote controllable, supports plugins as a first-class feature, has compaction you don't have to worry about, and is a lot more efficient and faster than the old Amp.
It is one of the most elaborate systems I've ever worked on. A coding agent split into three parts: the tools here, the interface there, and the loop in an infinitely scalable system over there.
Some day I'll hopefully write about building it. I learned a lot about programming with agents in the last two months.
We've spent the whole week scaling this system for the demand and are still working on it. "Scaling problems are good problems to have" definitely feels like a fortune cookie laughing at you when you're staring at graphs and logs every hour of the day.
People love Neo and can't get enough of it. We're now the "Ferrari of coding agents".
But I also barely read anything this week except logs, so this edition of the newsletter is very short.
-
This is beautiful: "For thirty years I programmed with Phish on, every day. In 2026, the music is out of phase with the work." What a nice piece of writing and, man, this idea that some people find out early what they want and what's enough for them and know that it won't ever change has stuck with me: "Other kids my age were figuring out what they liked, trying things on, growing into and out of phases. I was watching them do it from a desk. I had picked early. I started writing code as a kid. I heard Phish for the first time at thirteen. By the time I was fifteen and had a professional gig, the picking was settled. I had two things, and I didn't want a third."
-
I somehow missed this earlier, but Aphyr's The Future of Everything is Lies, I Guess is now a series of articles. It's an epub too!
-
Alloway's Antidote To Baumol's Cost Disease. This was very interesting to throw into ChatGPT and ask questions about.
-
I love reading David Sedaris' writing and this week, after a very long day, after my brain had shut off, I read his newest piece in the New Yorker and smiled.
-
Six Years Perfecting Maps on watchOS. There are so many thoughts that come up when reading this today: will craftsmanship like this exist in the future? would AI have sped things up? was it worth it? how much impact did the real-world experience and testing have? what would've happened had he hired the designer earlier? But the most important one: I love reading posts like this one.
-
Check the margins on bread.
-
"As so often with German, there is a word for the kind of environment: Lehrwerkstatt. Literally: A teaching workshop. The whole shop floor is the classroom. You learn by being near the work. Being a constant learner is one of the core values of the firm."
-
-
🔗 modem-dev/hunk v0.11.1 release
What's Changed
- feat(config): auto-detect jj checkouts by @benvinegar in #264
- fix(diff): skip huge file rendering by @benvinegar in #266
- fix(loaders): restore a/b prefixes on noprefix patch input by @mo in #240
- fix(loaders): normalize mnemonic pager prefixes by @benvinegar in #267
- fix(ui): coalesce viewport listener via microtask to avoid setState loop by @aldevv in #242
- fix(core): use header counts for hunkLineRange so context lines are in range by @aldevv in #244
- fix(git): pass --no-ext-diff when diffing untracked files by @iamken1204 in #259
- fix(ui): use full agent-note set for section geometry measurement by @aldevv in #243
Full Changelog: v0.11.0...v0.11.1 -
🔗 r/LocalLLaMA NVIDIA AI Releases Star Elastic: One Checkpoint that Contains 30B, 23B, and 12B Reasoning Models with Zero-Shot Slicing rss
I saw this on another sub and didn't see it posted here, it looks awesome, and can definitely be run local. I guess it was released 11 days ago, but it never hit the top of my feed (which I look at way too often), so posting it again.
This is my take on it:
Think of this as like scalable video coding, you have a UHD stream, but strip some layers and you have a HD, or SD stream, it's all a single file stream, not multiple ones.
Like nested models, rather than 3 different sets, and they can share their KV cache so the model can adjust speed like a sliding scale. You get an idea with a 30B model, then scale down and permutate all the thinking at 7000t/s on the 12B model, generating a book of reasoning in seconds, then slide up to 30B again to evaluate what's good. You could have a 30B kind of guide the smaller ones back and forth.
Maybe it's somewhat of a hybrid between dense and MoE: like MoE, but with 3 dense models nested like Russian dolls.
Original Post:
NVIDIA just released Star Elastic — and the inference strategy alone is worth understanding.
Here's what's actually interesting from the technical side:
- One checkpoint. Three models.
Star Elastic applies a post-training method to Nemotron Nano v3 that nests 23B and 12B submodels inside the 30B parent, so both can be extracted zero-shot from the parent checkpoint. All three live in a single checkpoint in BF16, FP8, and NVFP4.
- The router learns the architecture, not just the weights.
A learnable router trained via Gumbel-Softmax maps any target parameter budget to the optimal nested configuration across all elastic axes — attention heads, Mamba SSM heads, MoE experts, FFN channels, embedding dimensions. The importance-based ranking that orders these components is computed before training begins.
- Use a smaller model for thinking. Use the full model for the answer.
This is the finding we found most interesting. Elastic budget control assigns the 23B submodel to the thinking phase and the 30B model to the final answer. Reasoning traces are high-volume but tolerant of lower capacity. The final answer is low-volume but requires precision. Matching model size to phase complexity gives:
→ +16% accuracy vs. standard budget control
→ 1.9× lower latency
Measured on AIME-2025, GPQA, LiveCodeBench v5, and MMLU-Pro.
- The cost reduction is significant.
→ 360× fewer tokens vs. pretraining each variant from scratch
→ 7× fewer tokens vs. state-of-the-art sequential compression
→ The 23B and 12B nested models match or outperform independently trained baselines of comparable size
- Hardware accessibility.
The 12B NVFP4 variant runs on an RTX 5080 where every BF16 configuration runs out of memory. On an RTX Pro 6000 it reaches 7,426 tokens/s — 3.4× the throughput of the 30B BF16 baseline.
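The thinking-small/answering-big routing above can be sketched model-agnostically. Everything here is hypothetical (the `generate` callables and prompt split are my stand-ins, not NVIDIA's API); it only shows the control flow the post describes:

```python
from typing import Callable

def budgeted_answer(
    think_model: Callable[[str], str],   # e.g. the 23B slice (hypothetical stand-in)
    answer_model: Callable[[str], str],  # e.g. the 30B parent (hypothetical stand-in)
    prompt: str,
) -> str:
    # Phase 1: reasoning is high-volume but tolerates lower capacity,
    # so route it to the cheaper nested submodel.
    trace = think_model(f"Think step by step about: {prompt}")
    # Phase 2: the final answer is low-volume but needs precision, so the
    # full model generates it conditioned on the cheap trace (the nested
    # models can even share the KV cache, per the post).
    return answer_model(f"{prompt}\nReasoning: {trace}\nFinal answer:")

# Toy stand-ins that only demonstrate the plumbing.
small = lambda p: "draft reasoning"
large = lambda p: "final answer"
print(budgeted_answer(small, large, "2+2?"))  # → final answer
```

The reported +16% accuracy and 1.9× lower latency come from exactly this asymmetry: most tokens are generated by the cheap phase, few by the expensive one.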
Read the full analysis, which also has an interactive step-by-step code guide, here: https://www.marktechpost.com/2026/05/09/nvidia-ai-releases-star-elastic-one-checkpoint-that-contains-30b-23b-and-12b-reasoning-models-with-zero-shot-slicing/
3-in-1 model in BF16: https://huggingface.co/nvidia/NVIDIA-Nemotron-Labs-3-Elastic-30B-A3B-BF16
3-in-1 model in FP8: https://huggingface.co/nvidia/NVIDIA-Nemotron-Labs-3-Elastic-30B-A3B-FP8
3-in-1 model in NVFP4: https://huggingface.co/nvidia/NVIDIA-Nemotron-Labs-3-Elastic-30B-A3B-NVFP4
Related Papers: https://arxiv.org/abs/2511.16664 There's also a new one called "Star Elastic: Many-in-One Reasoning LLMs with Efficient Budget Control" but I can't find it.
submitted by /u/phazei
-
- May 09, 2026
-
🔗 r/Harrogate Afternoon Tea Recommendations rss
Hi everyone. It's my mum's birthday next Sunday (17th) and I'm miraculously child-free, so I thought it would be the perfect time for us to go have a peaceful afternoon tea together to celebrate. I've been to Mama Doreens before but not for the afternoon tea (which does look incredible). Do you think that would be the best place for us to go, or is there somewhere else just as good/better?
submitted by /u/Chronic_Eyeroller_ -
🔗 r/Leeds Mexican food in Leeds rss
Can someone recommend a Mexican place in Leeds (pref. City Centre) that does more than just tacos? Got a hankering for some good enchiladas... thanks!
submitted by /u/notagain78 -
🔗 modem-dev/hunk v0.11.0 release
What's Changed
- Added `vcs = "jj"` support for `hunk diff [revset]` and `hunk show [revset]` by @clabby in #217
- Added a pager-mode sidebar file tree toggle via `s` by @clabby in #216
- Fixed `git log -p` and multi-commit `git show -p` parsing so commit metadata is ignored by @gonzaloserrano in #228
- Fixed cross-file hunk navigation anchoring by @aliou in #222
- Fixed the View menu sidebar checkmark to follow actual responsive sidebar visibility by @aliou in #236
Full Changelog: v0.10.0...v0.11.0
-
🔗 r/LocalLLaMA Apple Removes 256GB M3 Ultra Mac Studio Model From Online Store rss
Getting really worried about the M5 Ultra. From removing 512GB -> 256GB -> 96GB. submitted by /u/rotatingphasor -
🔗 r/Leeds Why does the area around the bus station & market feel left behind in comparison to the rest of the city centre? rss
Wasn't quite sure how to word the title without making it too lengthy!
I used to work in the city centre going back 7-8 years but then moved elsewhere and since then I've rarely needed to go into town due to moving a fair distance away.
I've recently landed a job back there however and noticed how much of the centre has been updated, cleaned and generally improved. Newly tarmacked roads, fresh pavements, new clean bus stands, new shop fronts & so on. Catching the bus on or near Infirmary Street or walking around that side really feels much cleaner and far better than what I remember.
But then I get to the bus station & Leeds Market side (especially the area where the NCP Car Park & COOP are located) and it genuinely doesn't look or feel any better compared to 8-10 years ago.
It's almost like a time warp going back a decade compared to a few roads away, especially considering the Victoria Quarter and that area has all been refurbished.
It still feels very dirty & unsafe depending on the time of day, there's still plenty of crackheads & such hanging around at all hours similar to Trinity/Boar Lane, the pavement and roads are visually grimy - just baffles me considering how much work has gone into the centre everywhere else.
submitted by /u/Carinwe_Lysa -
🔗 r/Yorkshire Salts Mill, WY rss
submitted by /u/Trespass_cali -
🔗 tomasz-tomczyk/crit v0.12.0 release
What's Changed
Jujutsu VCS backend
`crit` now auto-detects `.jj` (before `.git`), supports `--vcs jj` and `"vcs": "jj"` in config, resolves the review base via `trunk()` with `main`/`master`/`trunk` bookmark fallback, and uses `jj diff --git` so existing diff rendering works unchanged. Colocated jj/git repos default to jj.
Press Shift+V on a focused block to enter visual line mode; j/k extend the selection from the anchor; c opens a comment form spanning the range; Esc or Shift+V exits. Works for markdown line-blocks and diff lines (split + unified). The focused block's left accent turns amber while in visual mode.
General
- Add Jujutsu VCS backend support by @solodov in #491 - Thank you!
- feat: vim-style visual line mode (V) for multi-line comments by @tomasz-tomczyk in #510 - Thank you @markjaquith for suggesting!
- feat: collapse linguist-generated files by default (#503) by @tomasz-tomczyk in #504 - Thank you @matdurand for suggesting!
- feat: configurable listen host (--host / CRIT_HOST) by @tomasz-tomczyk in #496 - Thank you @kaihendry for suggesting!
Integrations
- feat: add Gemini CLI integration by @tomasz-tomczyk in #488 - Thank you @sirjagman for suggesting!
- feat: add Qwen Code integration by @tomasz-tomczyk in #500 - Thank you @reneleonhardt for suggesting!
- feat: add Hermes Agent integration by @tomasz-tomczyk in #498 - Thank you @nisrulz for suggesting!
Fixes
- fix: keyboard-nav line highlight visible in light mode by @tomasz-tomczyk in #507 - Thank you @markjaquith for reporting!
- fix: branch-scope diff renderer no longer shows stale cached lines by @tomasz-tomczyk in #511
Internal refactors
- test(e2e): reduce flakiness and cache Go build in CI by @tomasz-tomczyk in #489
- test(e2e): deeper flakiness audit — retrying assertions and CI guards by @tomasz-tomczyk in #490
- ci: install jj so JJ backend tests run by @tomasz-tomczyk in #493
- ci: install sapling so SL backend tests run by @tomasz-tomczyk in #497
- ci: fail if integration_hashes_gen.go is out of sync by @tomasz-tomczyk in #501 - Thank you @reneleonhardt for raising!
- test: expand jj VCS coverage by @tomasz-tomczyk in #502
- refactor: drop jj parser wrappers; call shared sapling parsers directly by @tomasz-tomczyk in #512
- ci: run frontend/test-diff-render.mjs in test-frontend target by @tomasz-tomczyk in #513
- chore(deps-dev): bump stylelint from 17.9.1 to 17.11.0 by @dependabot in #508
New Contributors
Full Changelog: v0.11.0...v0.12.0 -
🔗 r/wiesbaden Child outlet protectors rss
Hi all,
I’m on the hunt for some outlet covers/protectors. I’ve checked at Saturn and Media Markt with no luck. So I was curious if anyone had any ideas or knows where to get them? My son is obsessed with trying to put his fingers in the outlets. Thanks!
submitted by /u/daddyciwa -
🔗 r/wiesbaden Breakfast in Wiesbaden or Mainz rss
Hey folks,
Where can you have a really good, relaxed, long breakfast? Vegan/vegetarian options should be available. Even better if you can make a reservation there.
Your tips last time for a cozy dinner were a great fit. :) Thanks
submitted by /u/JohnTheMonkey2 -
🔗 r/LocalLLaMA BeeLlama.cpp: advanced DFlash & TurboQuant with support of reasoning and vision. Qwen 3.6 27B Q5 with 200k context on 3090, 2-3x faster than baseline (peak 135 tps!) rss
TL;DR New llama.cpp fork! I wanted Windows-friendly inference to run Qwen 3.6 27B Q5 on a single RTX 3090 with speculative decoding, high context without excess quantization, and vision enabled. No option did this out of the box for me without VRAM and/or tooling issues (this was before the MTP PR for llama.cpp surfaced there). So I pulled out an old trick: stay up to 4 a.m. one too many times to do a month+ of work in a week or two. I probably lost a decent amount of hair while trying to make this all work, but now I have what seems to be a proper solution and don't mind sharing.
Anbeeld's BeeLlama.cpp
GitHub repo: https://github.com/Anbeeld/beellama.cpp
BeeLlama.cpp (or just Bee) is a performance-focused llama.cpp fork for squeezing more speed and context out of local GGUF inference. It keeps the familiar llama.cpp tools and server flow, then adds DFlash speculative decoding, adaptive draft control, TurboQuant/TCQ KV-cache compression, and reasoning-loop protection, with full multimodal support.
Not quite a pegasus, but close enough.
Here's a plug-and-play Qwen 3.6 27B setup with a config to run it in Q5 + 200k of practically lossless KV cache + vision on a single RTX 3090 or 4090.
Fork Features
- DFlash speculative decoding: `--spec-type dflash` drives a DFlash draft GGUF alongside the target model. The target captures hidden states into a per-layer 4096-slot ring buffer; the drafter cross-attends to the most recent `--spec-dflash-cross-ctx` hidden-state tokens and proposes drafts for target verification.
- TurboQuant / TCQ KV-cache compression: Five cache types (`turbo2`, `turbo3`, `turbo4`, `turbo2_tcq`, `turbo3_tcq`) spanning from 4x to 7.5x compression, with higher-bit options being practically lossless in many cases. Set independently with `--cache-type-k` and `--cache-type-v`.
- Adaptive draft-max control: The server adjusts the active draft horizon at runtime instead of using a fixed `--spec-draft-n-max`. The default `profit` controller compares speculative throughput against a no-spec baseline; the `fringe` alternative maps acceptance-rate bands to draft depth.
- Full multimodal support: When `--mmproj` is active, the server keeps flat DFlash available for text generation. The model can be fully offloaded to CPU with no problems to reduce VRAM pressure.
- Reasoning-loop protection: The server detects repeated hidden reasoning output and intervenes. Default mode is `force-close`, with `--reasoning-loop-window` and `--reasoning-loop-max-period` tuning available.
- Sampled DFlash verification: `--spec-draft-temp` enables rejection-sampling drafter behavior. Activates when both draft and target temperature exceed zero. Draft log probabilities must be available for rejection sampling to produce correct output.
- DDTree branch verification: optional `--spec-branch-budget` adds branch nodes beyond the main draft path with GPU `parent_ids`, tree masks, and recurrent tree kernels. Disabled automatically when the target model spans more than one GPU. This one is very much work in progress!
- Request-level speculative overrides: Draft-max and branch budget can be overridden per-request through JSON fields without restarting the server.
- CopySpec model-free speculation: `--spec-type copyspec` provides rolling-hash suffix matching over previous tokens without a draft model.
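CopySpec-style model-free speculation can be sketched in a few lines: find the longest recent suffix of the context that occurred earlier, and propose the tokens that followed that earlier occurrence as the draft. This is my naive-scan sketch of the general technique, not the fork's code (which reportedly uses rolling hashes to make the lookup cheap):

```python
def copyspec_draft(tokens: list[int], max_suffix: int = 8, n_draft: int = 4) -> list[int]:
    """Propose draft tokens by matching the current suffix against history."""
    # Try the longest suffix first; shorter suffixes are weaker evidence.
    for k in range(min(max_suffix, len(tokens) - 1), 0, -1):
        suffix = tokens[-k:]
        # Scan earlier history for the most recent occurrence of this suffix.
        for start in range(len(tokens) - k - 1, -1, -1):
            if tokens[start:start + k] == suffix:
                follow = tokens[start + k:start + k + n_draft]
                if follow:
                    return follow  # tokens that followed the match last time
    return []  # no repetition found; fall back to normal decoding

# Repetitive contexts (code, JSON, quoted text) draft well:
history = [1, 2, 3, 4, 9, 9, 1, 2, 3]
print(copyspec_draft(history))  # → [4, 9, 9, 1]
```

The proposed tokens are then verified by the target model in a single batched forward pass, exactly as with a draft model, so a wrong guess costs little and a right one skips several decode steps.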
For the full feature and public-repo comparison, read docs/beellama-features.md. For the complete argument reference, read docs/beellama-args.md. TurboQuant (WHT-based scalar quantization) originates from TheTom/llama-cpp-turboquant. TCQ (Trellis-Coded Quantization) and the basic DFlash implementation originate from spiritbuun/buun-llama-cpp (paper: Closing the Gap: Trellis-Coded Quantization for KV Cache at 2-3 Bits). submitted by /u/Anbeeld
-
🔗 r/Yorkshire Six arrests in £2m TikTok shop crackdown in Rotherham rss
submitted by /u/willfiresoon -
🔗 hyprwm/Hyprland v0.55.0 release
A massive update brought to you by the All Hyprland Corp!
Breaking changes
- `dwindle:pseudotile` has been removed as it wasn't doing anything
- `decoration:shadow:ignore_window` has been removed (defaults to enabled)
- `render:cm_fs_passthrough` has been removed, should be automatic with `render:cm_auto_hdr`
- `misc:vfr` moved to `debug:` as it's a debug variable that should not be changed in prod environments
New features:
- algo/scroll: add center for centering the current col (#14059)
- algo/scrolling: add config options for focus and swapcol wrapping (#13518)
- algo/scrolling: add expel, consume, and consume_or_expel (#13869)
- animations: add springs (#14171)
- binds: add an auto_consuming flag (#13919)
- config/lua: add ExpressionVec2, allow using a table for vec2 rules (#14197)
- config/lua: add clear tag api (#14273)
- config/lua: add noop
- config/lua: add simple layout API (#14258)
- config/workspacerule: add animation style (#13380)
- config: add device tags (#13728)
- debug-tools: add flame
- desktop/window: add alpha container for alpha calculations
- desktop/windowRule: add `confine_pointer` window rule (#13379)
- desktop/windowRule: add parser switch for confine pointer (#14263)
- dispatchers: add moveintoorcreategroup (#13325)
- dwindle: add rotatesplit layoutmsg and tests (#13235)
- gestures: add live pinch cursor zoom (#14049)
- gestures: add scroll_move (#14063)
- groups: add groupbar middle_click_close option (#14242)
- hl.mata.lua: add string to NotificationOptions's icon param. (#14334)
- hyprctl: add hw cursor flag
- hyprland.pc.in: add src include flag
- i18n: add Greek translations (#13865)
- i18n: add Punjabi translations (#13807)
- input: add device specific binds (#13073)
- layerrules: add dynamically registered rules for plugins (#13331)
- layout/windowTarget: add visualBox (#13626)
- render/cm: add ICC profile pipeline (#12711)
- renderer/deco: add glow decoration (#13862)
- renderer: add a cm settings cache
- window/rules: add scrolling_width (#13754)
- windows/focus: add fallbacks when focussing workspaces (#14270)
Fixes:
- config/descriptions: add missing desc entry
- cmake: add -fno-omit-frame-pointer to debug
- InputManager: add guards to confineToRegion to avoid issues (#14269)
- algo/dwindle: add back splitratio (#13498)
- algo/dwindle: fix precise mouse setting (#13678)
- algo/master: fix crash after dpms (#13522)
- algo/master: fix crash on null target in getNextTarget
- algo/scroll: fix std::clamp assertion crash on resume from suspend (#13737)
- algo/scroll: fix unsigned wrap (#13634)
- algo/scrolling: fix offset on removeTarget (#13515)
- algo/scrolling: fix rare crash
- algo/scrolling: various scrolling view related bugfixes (#13974)
- build: add glaze dependency with FetchContent fallback (#13666)
- build: add format-check and format-fix Makefile targets (#13936)
- build: fix build on gcc 16.x after #6b2c08d (#13429)
- clang-tidy: fix duplicate entry in .clang-tidy (#14045)
- cmake: fix permissions for directories by default
- cmakelists: fixup errors failing build on arch ci (#14259)
- compositor: fix floating input/visual z-order desync after fullscreen (#14015)
- compositor: fix focus edge detection (#13425)
- compositor: fix missing recheckWorkArea to prevent CReservedArea assert failure (#13590)
- config/actions: fix misuse of ActionResult's error type (#14221)
- config/legacy: fix crash on getConfigValue of plugin fns
- config/legacy: fix missing fallbacks crashing device getters
- config/lua: fix device bool int reads (#14313)
- config/lua: fix dispatcher shapes to not be callable (#14268)
- config/lua: fix unbind behavior (#14199)
- config/lua: fix window object to selector logic
- config/refresher: fix refreshing of cursor zooms (#14283)
- config: fix crash in safe mode due to null `Config::mgr()` (#13855)
- config: fix propRefresher to not run on first launch
- config: fix safe mode config generation (#14024)
- config: fix type confusion in getOption with complex types
- core: fix i586 build (#13550)
- deco/border: fix damage region
- deco/border: fix damageEntire
- desktop/group: fix movegroupwindow not following focus (#13426)
- desktop/rule: fix matching for content type by str
- desktop/rules: fix empty workspace handling (#13544)
- desktop/rules: fix static rules and content type. (#13725)
- desktop/view: fix SIGABRT in CWindow::onUnmap when monitor is expired (#14148)
- desktop/window: fix floating windows being auto-grouped (#13475)
- desktop/window: fix idealBB reserved (#13421)
- desktop/windowRule: fix matching CONTENT (#13636)
- desktop/workspace: fix visibility criteria matching (#14349)
- example/hyprland.lua: fix wiki links for new stuff (#14172)
- examples: fix missing permissions entry in lua example config (#14177)
- groups: fix `movewindoworgroup` when moving from group to group (#14086)
- hyprctl: fix bools in getoption
- hyprctl: fix buffer overflowing writes to the socket
- hyprctl: fix getoption with custom types (#14243)
- hyprctl: fix invalid type cast
- hyprctl: fix json output for the submap command (#13726)
- hyprctl: fix lib64 pkgconfig for version-checking (#14051)
- hyprctl: fix workspace dynamic effect reloading (#13537)
- hyprpm: fix url sanitization in add
- input: fix device configs for pointer devices
- input: fix focus_on_close=2 (MRU) routing to cursor path instead of getNextCandidate (#13969)
- input: fix the multimon touch fix (#13819)
- input: fix touch monitor focus ordering (#14310)
- input: fix touch screen focus on multi monitor (#13764)
- internal: fix relative path header locations (#13650)
- keybinds: fix keycode matching on lua (#14254)
- keybinds: fix missing z-order update on floating toggle (#14100)
- keybinds: fix wrong space assignment in pin (#14061)
- keybinds: fixup changegroupactive
- layershell: fix popup crash with nullptr mon (#13763)
- layout/algo: fix swar on removing a target (#13427)
- layout/groupTarget: fix crash on null space assignment (#13614)
- layout/master: fix rollprev/rollnext focusing the wrong window (#14209)
- layout/scroll: fix configuredWidths not setting properly on new workspaces (#13476)
- layout/scrolling: fix edge detection in recalculate() (#14359)
- layout/scrolling: fix size_t underflow in idxForHeight (#13465)
- layout/windowTarget: fix size_limits_tiled (#13445)
- layout: fix crash on monitor reconnect due to stale workspace state
- layout: fix drag_threshold window snap regression (rebased for #12890) (#13140)
- layout: fix null deref in focalPointForDir and moveInDirection (#13652)
- layouts: fix crash on missed relayout updates (#13444)
- meta/stubs: fix notification icon type (#14320)
- misc: fix missing noreturn attribute for throwError (#13746)
- monitor: fix centered floating windows off-screen in special workspace (#14203)
- opengl/shadow: fix shadow offset rendering (#14156)
- overridableVar: fix reassignment
- pointer: fix hardware cursor rendering on rotated/flipped monitors (#13574)
- propRefresher: fix misnamed value
- protocols/compositor: fix presentFeedback being blocked
- protocols/sessionLock: fix crash when monitor is gone during lock surface creation
- protocols: fix image-copy-capture stop handling and remove non protocol errors (#13706)
- render/pass: fix debug:pass rendering
- render: fix SIGFPE in `addWindowToRenderUnfocused` when `misc:render_unfocused_fps` is 0 (#13973)
- render: fix layer blur_popups ignoring ignore_alpha when blur is off (#13947)
- renderer/groupbar: fix a group indicator rounding bug (#13975)
- renderer/groupbar: fix gradients rendering (#13875)
- renderer: Various CM fixes, part 8 of refactors (#13860)
- renderer: fix blockBlurOptimization check (#13685)
- renderer: fix crash on mirrored outputs needing recalc (#13534)
- renderer: fix crash on null blur framebuffer during monitor disconnect
- renderer: fix crash when shader path isn't a file (#13756)
- renderer: fix crash with nullptr FBs (#13641)
- renderer: fix decoration colors with linear FP16 (#14361)
- renderer: fix sdr mod (#13630)
- renderer: fix shadow CM calculations (#14364)
- renderer: fix share window projection (#13695)
- renderer: more FP16 fixes (#14070)
- renderer: refactor part 7: api fixes (#13631)
- renderer: small fixes in OpenGL.cpp and OpenGL.hpp (#13842)
- screencopy: fix crash in screensharing toplevel with invalid handle (#13781)
- screencopy: fix isOutputBeingSSd (#13586)
- screencopy: fix minor crash (#13566)
- screencopy: fix nullptr deref if shm format is weird
- screenshare: round captureBox after scaling to fix region capture at fractional scales (#14257)
- seat/compositor: fix minor issues (#13958)
- seat: fix dropped wl_keyboard.enter after stale keyboardFocusResource (#14143)
- tests/workspace: fix one test case failing
- tests: Fix more tests failing on CI (#14159)
- tests: fix ConfigLuaValueTypes - boolBadType test, 0 and 1 are allowed integer values for bool type (#14240)
- tests: fix gtests crashing (#14244)
- workspace: fix missing null access guard (#14119)
- xwayland: fix compiler warnings (#13920)
Other:
- CI/Nix/Test: check gtest exit status
- CI/Nix: use org-wide actions
- CI/build: remove commented-out clang-format action (#13893)
- Nix: always test in debug mode
- NotificationOverlay: take reserved space into account (#14184)
- algo/dwindle: Respect `force_split` when moving windows to workspaces (#13038)
- algo/dwindle: do NOT use smart_split for overridden focal point (#13635)
- algo/dwindle: don't crash on empty swapsplit (#13533)
- algo/dwindle: use focal point correctly for x-ws moves (#13514)
- algo/scroll: improve directional moves (#13423)
- algo/scroll: reverse horizontal dir mapping of vertical scroll directions (#13647)
- algo/scrolling: improve behavior with focus_fit_method = center (#13795)
- animation: avoid redundant damage calls in tick
- build: bump hyprgraphics to 0.5.1 (#14013)
- build: bump hyprutils to 0.13.1 (#14365)
- build: remove auto-generated hyprctl/hw-protocols/ files during make clear (#13399)
- build: remove legacy clang-format workflow (#13887)
- clang-format: run formatter
- cleanup: avoid repeated weak_ptr lock() calls in conditions (#14057)
- cleanup: avoid repeated weak_ptr::lock() usage in MasterAlgorithm (#14226)
- cmake: install the default example hyprland.lua (#14174)
- cmake: remove dependence on hyprland.conf
- cmakelists: search for any possible lua package name (#14204)
- compositor: When processing fullscreen states, only use effective mode where necessary (#13607)
- compositor: be more selective about how we expand the window box in getting coord (#13720)
- compositor: damage monitors on workspace attachment updates
- compositor: move SessionLockManager init from STAGE_LATE to STAGE_BASICINIT (#14272)
- compositor: recalculate workspace state after fs state update (#14369)
- config/actions: remove spammy errors and make them silent
- config/errors: Report and categorize errors properly for actions (#14192)
- config/executor: actually execute exec-shutdown (#13872)
- config/legacy: default to active window for movetoworkspace dispatchers (#14170)
- config/legacy: translate default window args properly
- config/lua: cannot disable animation (#14215)
- config/lua: don't pop up an error if no target was found (#14175)
- config/lua: expand properties in the workspace object (#14194)
- config/lua: init lua config manager, use lua if available (#13817)
- config/lua: workspace.move/rename should accept "workspace" instead of "id" as a parameter (#14232)
- config/refresher: refresh watcher state properly (#14307)
- config/workspace-rules: support modifying persistent and monitor (#14217)
- config: allow hashes for parsing colors (#14337)
- config: always call refresh after config reload (#14346)
- config: cleanup the entire config infrastructure (#13785)
- config: find lua paths first (#14335)
- config: move misc:vfr to debug: (#14021)
- config: refresh window states on border_size changes (#14201)
- config: use lua by default, generate lua if no config present
- data/dnd: guard against expired dndPointerFocus and ensure consistent usage (#13996)
- debug/overlay: optimize rendering, cleanup and nicetify (#14097)
- decoration/border: simplify damage callback
- desktop/group: respect direction when moving window out of group (#13490)
- desktop/history: include ranges header (#14000)
- desktop/layerRule: use variants for storage internally
- desktop/popup: cache popup extents
- desktop/popup: cache tree count
- desktop/reserved: do not crash on invalid box init (#13880)
- desktop/rule: cleanup inheritance, use templates to avoid dup
- desktop/rule: recheck eating the applied rule (#14362)
- desktop/rule: use Numeric for number parsing
- desktop/window: don't group modals
- desktop/window: expand hidden into proper states
- desktop/window: guard null monitor in xwaylandSizeToReal (#13876)
- desktop/window: optimize getRealBorderSize()
- desktop/window: reduce window deco updates (#13980)
- desktop/window: refactor over fullscreen state
- desktop/windowRule: use variants for storage internally
- desktop/workspaceHistory: small refactor to work better with multi monitor setups (#13632)
- egl: move over to use hyprgraphics (#12988)
- errorOverlay: modernize, refactor, use GPU rendering (#14122)
- example: remove old .conf file
- examples: merge config blocks in lua example as demo
- format: safeguard drmGetFormat functions (#13416)
- gitignore: ignore pointer scroll test artifact
- helpers/systemInfo: extract info fns (#14222)
- hyprtester: minor refactoring/restructure (#14154)
- i18n: update Tatar translations (#13930)
- i18n: update Vietnamese translations (#13489)
- i18n: update brazillian portuguese (pt_BR) translation (#14248)
- init: drop CAP_SYS_NICE from ambient set after gaining SCHED_RR (#14082)
- input: allow focus to switch to most recently used window on closed (#13769)
- input: avoid repeated weak_ptr::lock() and ensure consistent usage (#14039)
- input: focus monitor on touch down events (#13773)
- input: implement follow_mouse_shrink (#13707)
- input: keep pointer focus on layer surfaces during keyboard refocus (#14018)
- input: lazy cache getWindowIdeal()
- internal: improve cursor size logging (#14180)
- internal: include setByUser in CConfigManager::getConfigValue (#14155)
- internal: removed Herobrine
- internal: rewrite deviceNameToInternalString using a single range pipeline (#13806)
- internal: silence compiler warnings about unused return values (#13997)
- keybind/actions: `cycle_next` w/ `tiled = true` doesn't choose only tiled windows (#14164)
- keybindMgr: use legacy behavior for single-key binds on lua (#14176)
- keybinds: Remove removed keybinds (#13605)
- layersurface: simulate mouse movement on layer change (#13747)
- layout/algo: preserve focused target if applicable on layout switches (#14058)
- layout/algos: use binds:window_direction_monitor_fallback for moves (#13508)
- layout/dwindle,master: return invalid layoutmsg errors
- layout/scrolling: handle fullscreen manually (#14190)
- layout/windowTarget: damage before and after moves (#13496)
- layout/windowTarget: don't use swar on maximized (#13501)
- layout/windowTarget: override maximized box status in updateGeom (#13535)
- layout: guard null workspace in CWindowTarget::updatePos() (#13861)
- layout: replace string comparison with ID-based matching in WorkspaceAlgoMatcher (#13943)
- layout: revert "replace string comparison with ID-based matching in WorkspaceAlgoMatcher (#13943)"
- layout: store and preserve size and pos after fullscreen (#13500)
- layouts/dwindle: override force after window drags (#14002)
- logging: update uri of debug log in ConfigManager to reflect change in wiki (#14185)
- main: improve error reporting during initialization in main.cpp (#14181)
- meta/stubs: update gesture hints to match new fields (#14195)
- miscfunctions: reuse monitor pointer instead of repeated calls (#13977)
- monitor: centralize solitary and scanout eligibility checks
- monitor: damage old special monitor on change
- monitor: ensure swapchain is updated before mode test (#14065)
- monitor: keep workspace monitor bindings on full reconnect (#13384)
- monitor: set format back after failing DS activation (#14168)
- monitor: update pinned window states properly on changeWorkspace (#13441)
- monocle: avoid repeated workspace monitor lock() calls (#14085)
- nix/tests: print gtests logs
- nix: separate overlay with deps
- notifications: move and small refactor (#14094)
- notifications: optimize rendering (#14088)
- opengl: minor egl changes (#14147)
- pass/surface: cache texBox
- pointer: damage entire buffer in begin of rendering hw
- protocolMgr: set m_self properly when updating mirrored outputs
- protocols/workspace: schedule done after output update (#13743)
- protocols: allow xdg-foreign to be used by sandboxed apps (#13854)
- protocols: avoid repeated per-client work in hot paths
- protocols: prune stale subsurface refs in hot traversals
- protocols: reimplement unstable/xdg-foreign-v2 (#13716)
- refactor: improve readability of monitor rule comparison (#13884)
- render/decoration: cache input extents as well
- render/decoration: improve extent calculations
- render/decorations: improve cache performance
- render/opengl: optimize getShaderVariant's map access
- render/pass: optimize simplification and blur calculations
- render: scale background to monitor resolution (#14250)
- renderer/cm: Support wp-cm-v1 version 2 (#12817)
- renderer: don't damage decos individually in damageWindow
- renderer: extract window skip conditions into named booleans (#14005)
- renderer: guard against null monitor in renderMonitor (#13823)
- renderer: handle HDR -> SDR with cm_auto_hdr (#14102)
- renderer: move m_renderData to renderer (#13474)
- renderer: only set presentationmode when required (#14252)
- renderer: refactor Texture, Framebuffer and Renderbuffer (#13437)
- renderer: refactor gl renderer (#13488)
- renderer: refactor projection setting (#13485)
- renderer: refactor render elements (#13438)
- renderer: refactor resources and flags (#13471)
- renderer: shader variants refactor (#13434)
- renderer: simplify renderWorkspaceWindowsFullscreen
- renderer: simplify shadows (#14047)
- renderer: skip redundant render-path work
- renderer: swizzle on shm screencopy (#14167)
- repo: ignore the autogen file `meta/hl.meta.lua` (#14336)
- rules: make rule prop reset less cursed (#14003)
- scheduler: keep a strong monitor ref in frame callbacks
- screencopy: check share session state (#13839)
- screencopy: clear buffer before rendering (#14064)
- screencopy: scale window region for toplevel export (#13442)
- screenshare/frame: set m_copied after shm copy succeeds (#14165)
- screenshare: adjust session cleanup and event emission order (#14229)
- screenshare: improve destroy logic of objects (#13554)
- scroll: clamp column widths properly
- seat: store surface in pointerFocus before sendEnter (#13941)
- sessionLock: send locked instead of denied when missing a lock frame for 5 seconds (#14271)
- shader: delete shader on success path (#13682)
- socket2: emit `kill` event (hyprctl kill) (#13104)
- source: c-f for new clang version
- splashes: update splashes
- subsurface: use geometry-aware damage and recurse into nested trees (#13933)
- tests: add unit tests for ByteOperations helpers (#13886)
- tests: add unit tests for CDamageRing (#13995)
- tests: add unit tests for CHyprColor (#13891)
- tests: add unit tests for CMType helpers (#13888)
- tests: add unit tests for CMonitorRuleParser (#13895)
- tests: add unit tests for CTagKeeper (#13970)
- tests: add unit tests for Direction helpers (#13885)
- tests: add unit tests for Format utilities (#13923)
- tests: add unit tests for Math transform utilities (#13935)
- tests: add unit tests for Math::CExpression (#13924)
- tests: add unit tests for MiscFunctions helpers (#13934)
- tests: add unit tests for TransferFunction helpers (#13889)
- tests: add unit tests for match engine types (#13903)
- tests: skip pointer tests in CI due to missing input environment (#14238)
- tests: stabilize CI by relaxing env-dependent checks and timing-sensitive assertions (#14142)
- tests: tolerate plugin config mismatch in CI (#14173)
- treewide: alejandra -> nixfmt
- view: consolidate group flags and apply window rules (#13694)
- workspace: remove deprecated and unused members (#14198)
- xdg-foreign-v2: Keep invalid imported objects alive (#14166)
- xdg-shell: queue state updates for toplevel (#14227)
- xwayland: handle transient read errors in selection transfer (#14135)
- xwayland: pipe through monitor in coordinate mapping (#13700)
- xwayland: prevent potential buffer overflow in socket path handling (#13797)
Special Thanks
As always, special thanks to these people / companies for supporting Hyprland's continued development:
Sponsors
Diamond
37Signals
Gold
Framework, Butterfly
Donators
Top Supporters:
Tonao Paneguini, Semtex, soy_3l.beantser, Seishin, Nox Æterna, Illyan, Snorezor, Bonsai, Joshua Weaver, ExBhal, DHH, Mikko_Nyman, Kay, iain, TyrHeimdal, miget.com, alexmanman5, Hunter Wesson, --, RaymondLC92, Theory_Lukas, Brandon Wang, Insprill, lzieniew, 3RM, johndoe42, Jas Singh, RayJameson, MadCatX, Xoores, d, Ammar Hossain, Ki☆, inittux111, Arkevius, John Shelburne, DeWattaUnk, ari-cake, gfunnymoney, alukortti, taigrr
New Monthly Supporters:
tubid2wenty, Uros Cotman, yafantik, Guy, goblin_engineer, Julius John Puno, Peter Buijs, mb, StellaBuckley, haikuolin, Antibaddy, sludge10123, C Money, Lipski, KampotKaca, Kazuhide Takahashi, Skeptomai, bombadurelli, Rebellen, Álan, StreamCyper, taras, Yury, Sherab, Filinto Delgado, Taddelladius
One-time Donators:
Quuton, Selvan, Tyler Adams, tonis, Sam, Dimitrios Liappis, Chivtar, Eric, aponsasan888, bkode, LonestarF1, Chris, Dogmatic Polack, Larry, maxx, MonolithImmortal, edrix, I like GameNative, take my money., nyxloom, Frederic Toemboel, Schmendiey, himes, brandonia, Xphelus, New user, Miguel Flores- Acton, R3dGh0st, Glen, Vitor Moura GUEDES, Anersyum, le_04, Dan, AT, chorr, Awesome, IdeaSpring, Jacobrale, anonymous, Elias Griffin, w00z4, Marcus Edvardsson, Gerhard, Bashmaks, Benjaneb, R4dicalEdward, Matýsek ^^, Michael, Gene Raymond, naivesheep, Neginja, anarchuser, Uta, Francois KERISIT, ay4, Lorenzo santacreu, Gitznik, Jure S, Oliver, Pipes, Mein, ironick, Nlight, Pfoid, DasCleverle, Jaf Endee, DIEBUSTER, senorBeard, alex, Mike, luxxa, JasonPettys, One, Daniel, Sven Eppler, L3rdy, Ilunn, Thorff, XurxoMF, Wonkhester, Brian, Doc O, Mortja, Spook, Miguel Cordero Collar, bennyzen, deah, Sean, Higor, nanea808, Torsten Schieber, I3lack5hield, Kevin Steffer, Zarenno, vfosterm, Nikola, EGB, Dietmar, KilahDentist, Wilf Lin, Rad, Yuza, Supporter, nooob, esseonline, Naresh, darquill, BrnPrs, Pani, BYK, Amaury, nythix, Mika, Patriarch, Gambit, GoatCedric, Adam, MirasM, bl4ckb1rd, Loon, KevOlek, AsciiWolf, Brian Barrow, Anon, Kilian, Cristian M., abhinavmishra094, Dejv78, LinoDB, Trofim, Konstantin, JoaquinCamposPlaza(Ximo), Gabo, Phil, dev2and0m, Neil Brown, zarilion, JavierArias(Javi), Thank you, Mystrasun, Skrazzo, MeguminLoli, revitalist, barcellos-pedro, Juh, Goldie, benabrig, mynus, Daniel Zudel, Grant, Jacob Felknor, Noah, e033x, Nick, Niklas, mkami, Slippy, joenu, Oleksandr, t.i.m., Joss001, M4CETO, Nighty, Donater, David N, Cameron, Ekoban, Kieran, brotiii, Doug, Hypruser#0224975, Shadesofastar, sonicbhoc, GKL, Damien, João Seixas, mothmashine, James Freiwirth, Mek, Krizzkrozz, Panzer, mika.dev, Franky Valley, Sycho sMILEz, Roy, Amundis, willibenmula ❤️, Justin, marvelousIT, pablo, Alex, Ryan, cito, Juergen, Eric Koslow, valerius21, jfk, Andrejs, tyforupdate, skwrl, 
DaintyFox
Full Changelog :
v0.54.0...v0.55.0 -
🔗 r/Yorkshire Looking over High Ginger field Lodge at the ruins of the Racecourse stands. rss
submitted by /u/Still_Function_5428
[link] [comments]
-
🔗 r/LocalLLaMA 80 tok/sec and 128K context on 12GB VRAM with Qwen3.6 35B A3B and llama.cpp MTP rss
Just wanted to share my config in hopes of helping other 12GB GPU owners achieve what I see as very respectable token generation speeds with modest VRAM. Using the latest llama.cpp build + MTP PR, I got over 80 tok/sec with 80%+ draft acceptance rate on the benchmark found here: https://gist.githubusercontent.com/am17an/228edfb84ed082aa88e3865d6fa27090/raw/7a2cee40ee1e2ca5365f4cef93632193d7ad852a/mtp-bench.py
Here's my PC specs:
OS: CachyOS (HIGHLY recommended)
CPU: AMD Ryzen 7 9700X
RAM: 48GB DDR5-6000 EXPO I
GPU: RTX 4070 Super 12GB

Results with other hardware may vary.
To run llama.cpp with MTP support, you need to build it from source and apply a draft PR that hasn't yet been merged into the master branch. You can find a very nice guide on how to do that here and also download the Qwen3.6 MTP GGUF: https://huggingface.co/havenoammo/Qwen3.6-35B-A3B-MTP-GGUF - Thanks u/havenoammo!
llama.cpp command:
```
llama-server \
  -m Qwen3.6-35B-A3B-MTP-UD-Q4_K_XL.gguf \
  -fitt 1536 \
  -c 131072 \
  -n 32768 \
  -fa on \
  -np 1 \
  -ctk q8_0 \
  -ctv q8_0 \
  -ctkd q8_0 \
  -ctvd q8_0 \
  -ctxcp 64 \
  --no-mmap \
  --mlock \
  --no-warmup \
  --spec-type mtp \
  --spec-draft-n-max 2 \
  --chat-template-kwargs '{"preserve_thinking": true}' \
  --temp 0.6 \
  --top-p 0.95 \
  --top-k 20 \
  --min-p 0.0 \
  --presence-penalty 0.0 \
  --repeat-penalty 1.0
```

The most important parameter here is `-fitt 1536`. Since part of the model is offloaded to the CPU because of its size, this tells llama.cpp to properly balance the load on the GPU/CPU to get the best possible performance, and leaves 1536 MB of free memory for the MTP draft model and KV cache. Since I'm running my dGPU as a secondary GPU (monitor plugged into the iGPU), I can use all the available 12GB VRAM for inference. 1536 might be too small if you use your dGPU as your primary GPU, so test it out first.
You can also try different values for `--spec-draft-n-max`. I got slightly better tok/sec with 3, but a much better acceptance rate with 2, so the trade-off was not worth it. With MTP, you want to maximize speed AND acceptance, so you need to find the best balance between both.
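For intuition on that trade-off: under the standard speculative-decoding analysis (assuming, as a simplification, that each drafted token is accepted independently with probability a), a draft of length n yields between 1 and n + 1 tokens per verification step, with expectation (1 - a^(n+1)) / (1 - a). A quick sketch:

```python
def expected_tokens_per_step(accept_rate: float, draft_len: int) -> float:
    """Expected tokens emitted per draft-and-verify cycle.

    Assumes each drafted token is accepted independently with
    probability `accept_rate` (a simplification of real MTP behavior).
    """
    a, n = accept_rate, draft_len
    if a >= 1.0:
        return float(n + 1)
    return (1 - a ** (n + 1)) / (1 - a)

# Depth 2 at a high acceptance rate vs depth 3 at a lower one:
print(expected_tokens_per_step(0.80, 2))  # ~2.44 tokens per cycle
print(expected_tokens_per_step(0.70, 3))  # ~2.53 tokens per cycle
```

A deeper draft only wins if the acceptance rate holds up, and each extra draft token also adds draft-model latency, which is consistent with depth 2 coming out ahead here.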
Benchmark results:
```
mtp-bench.py
code_python      pred= 192 draft= 132 acc= 125 rate=0.947 tok/s=80.8
code_cpp         pred=  58 draft=  40 acc=  37 rate=0.925 tok/s=81.8
explain_concept  pred= 192 draft= 152 acc= 114 rate=0.750 tok/s=70.0
summarize        pred=  53 draft=  40 acc=  32 rate=0.800 tok/s=75.4
qa_factual       pred= 192 draft= 144 acc= 119 rate=0.826 tok/s=77.8
translation      pred=  22 draft=  16 acc=  13 rate=0.812 tok/s=81.9
creative_short   pred= 192 draft= 160 acc= 111 rate=0.694 tok/s=69.2
stepwise_math    pred= 192 draft= 144 acc= 119 rate=0.826 tok/s=76.5
long_code_review pred= 192 draft= 148 acc= 117 rate=0.790 tok/s=73.2
```

If you have any questions, feel free to ask :)
Cheers.
submitted by /u/janvitos
[link] [comments] -
🔗 r/Leeds Petition · Stop water pollution from misconnections in the Gledhow Valley rss
The Friends of Gledhow Valley Woods water monitoring team have been out again this week along the length of Gledhow Beck.
They found that the culvert on Allerton Grange Way is again pouring out a thick brown liquid from a misconnection into the Beck. This has been reported to Yorkshire Water and Environment Agency but no evidence of any action.
This is on top of the 368.75 hours of untreated sewage discharges into the Beck and Lake in 2025 (latest figures) from the 4 Combined Sewer outfalls in the Gledhow Valley and the toxic mix of chemicals and heavy metals running off Gledhow Valley Road into the Beck. Analysis this week demonstrates that levels of Lead and Zinc from this source are likely to have an adverse impact on invertebrates in Gledhow Beck - a key food source for fish and birds.
Please support our campaign to clean up this mess for both nature and the local community.
Sign our petition!
submitted by /u/blissedandgone
[link] [comments] -
🔗 r/wiesbaden When will the €800,000 for the Helmut-Schön-Sportpark arrive? rss
The €800,000 was surely wired by fax ages ago, but unfortunately the Wiesbaden city hall ran out of thermal paper. More likely, though, the money went straight to McKinsey as a consulting fee for a 200-page report meant to clarify why our municipal infrastructure keeps rotting away.
submitted by /u/LethisXia
[link] [comments] -
🔗 modem-dev/hunk v0.11.0-beta.0 release
What's Changed
- Added `vcs = "jj"` support for `hunk diff [revset]` and `hunk show [revset]` by @clabby in #217
- Added a pager-mode sidebar file tree toggle via `s` by @clabby in #216
- Fixed `git log -p` and multi-commit `git show -p` parsing so commit metadata is ignored by @gonzaloserrano in #228
- Fixed cross-file hunk navigation anchoring by @aliou in #222
- Fixed the View menu sidebar checkmark to follow actual responsive sidebar visibility by @aliou in #236
Full Changelog :
v0.10.0...v0.11.0-beta.0
-
🔗 r/LocalLLaMA Shel Silverstein predicts LLMs (and their hallucinations), circa 1981 rss
Ran across this cartoon / poem on accident as I was reminiscing about my favorite childhood poet, Shel Silverstein, and couldn't help thinking of LLM's of course! submitted by /u/spanielrassler
[link] [comments]
-
🔗 r/LocalLLaMA Qwen3.6 35B A3B uncensored heretic Native MTP Preserved is Out Now With KLD 0.0015, 10/100 Refusals and the Full 19 MTPs Preserved and Retained, Available in Safetensors, GGUFs. NVFP4, NVFP4 GGUFs and GPTQ-Int4 Formats rss
llmfan46/Qwen3.6-35B-A3B-uncensored-heretic-Native-MTP-Preserved: https://huggingface.co/llmfan46/Qwen3.6-35B-A3B-uncensored-heretic-Native-MTP-Preserved
llmfan46/Qwen3.6-35B-A3B-uncensored-heretic-Native-MTP-Preserved-GGUF: https://huggingface.co/llmfan46/Qwen3.6-35B-A3B-uncensored-heretic-Native-MTP-Preserved-GGUF
llmfan46/Qwen3.6-35B-A3B-uncensored-heretic-Native-MTP-Preserved-NVFP4-Experts-Only: https://huggingface.co/llmfan46/Qwen3.6-35B-A3B-uncensored-heretic-Native-MTP-Preserved-NVFP4-Experts-Only
llmfan46/Qwen3.6-35B-A3B-uncensored-heretic-Native-MTP-Preserved-NVFP4-Experts-Only-GGUF: https://huggingface.co/llmfan46/Qwen3.6-35B-A3B-uncensored-heretic-Native-MTP-Preserved-NVFP4-Experts-Only-GGUF
llmfan46/Qwen3.6-35B-A3B-uncensored-heretic-Native-MTP-Preserved-GPTQ-Int4: https://huggingface.co/llmfan46/Qwen3.6-35B-A3B-uncensored-heretic-Native-MTP-Preserved-GPTQ-Int4
People asked for it, so here it is: all releases are confirmed to have their full MTP count* retained and preserved.
Comes with benchmark too.
Find all my models here: HuggingFace - LLMFan46
*All releases have been verified to retain the full MTP tensors. In safetensors format, the Qwen3.6-35B-A3B MTP tensors appear as 19 entries because `gate_up_proj` is stored as one fused tensor. In GGUF format, that fused tensor is split into separate gate/up expert tensors, so the same MTP component appears as 20 entries. The count differs by format, but the MTP tensors are preserved.

submitted by /u/LLMFan46
[link] [comments]
-
- May 08, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-05-08 rss
IDA Plugin Updates on 2026-05-08
New Releases:
Activity:
- capa
- 5a60f3a0: fix: register all data-ref addresses for imports in Ghidra helpers
- 99b3cfe0: fix: use singular get_segment_at API in binja file string extractor
- a28fcce7: fix: linter tests needing placeholder rule sets to function
- 5ca6c3e3: gitignore: script test temp files
- b505ba76: fix: remove unused imports and un-suppress F401
- 309231f2: fix: ghidra and binja file strings yield FileOffsetAddress
- 57e730fa: fix: binja embedded PE yields FileOffsetAddress via segment data_offset
- c9cb43a8: fix: elffile imports use AbsoluteVirtualAddress for ELF r_offset
- 9b93e90e: fix: wrap binja function name addresses in AbsoluteVirtualAddress
- 4e804007: fix: ghidra: don't emit VAs for embedded PEs
- 330b6413: fix: ida: correctly emit file offsets for embedded PEs
- 43d65361: gitignore: CLAUDE.local.md
- 8fca21f8: linter: validate dynamic example offsets
- 8e464e60: fix: formatting
- 555bbdec: fix: guard getByteDef against None for unmapped addresses in viv insn…
- c8d47085: fix: remove unused imports from cache-ruleset.py, detect-binexport2-c…
- 7a8a0aca: fix: remove dead except ValueError clause in capa2sarif.py so JSONDec…
- 7d871409: fix: dedent bulk-process.py main() body so explicit argv is used
- a938c87f: fix: guard statistics calls in compare-backends.py against empty dura…
- 604fae35: fix: replace zipfile with pyzipper in minimize_vmray_results.py so ou…
- ida-x64dbg-mcp
- IDAPluginList
- 90a9d234: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
- capa
-
🔗 r/LocalLLaMA Qwen 35B-A3B is very usable with 12GB of VRAM rss
Hardware:
RTX 3060 12GB
32GB DDR4-3200
Windows
CUDA 13.x

Model:

`Qwen3.6-35B-A3B-MTP-IQ4_XS.gguf`

The model is a 35B MoE, so `-ncmoe` matters a lot. Lower `-ncmoe` means more MoE blocks stay on GPU.

Main takeaway
12GB VRAM feels like a very practical size for this model. It lets you keep enough MoE blocks on GPU that plain decoding becomes quite strong, while still leaving room for useful context sizes like 16k/32k.
For prompt processing / prefill, I trust the `llama-bench` numbers more than `llama-cli`'s interactive `Prompt:` line, because `llama-bench` gives a cleaner `pp512` measurement.

Best plain `llama-bench` result:

```
-ncmoe 18 -t 9 -ctk q8_0 -ctv q8_0
pp512: ~914 t/s
tg128: ~46.8 t/s
```

So raw prefill is very fast on this setup.
Best practical coding profile
For daily coding, I would use this:
```
llama-cli.exe ^
  -m "Qwen3.6-35B-A3B-MTP-IQ4_XS.gguf" ^
  -p "..." ^
  -n 512 ^
  -c 32768 ^
  --temp 0 --top-k 1 ^
  -ngl 999 -ncmoe 20 ^
  -fa on ^
  -ctk q8_0 -ctv q8_0 ^
  --no-mmap ^
  --no-jinja ^
  -t 9 ^
  --perf
```

Result:

```
Context: 32k
Prompt: ~88.9 t/s in llama-cli
Generation: ~43.4 t/s
VRAM free: ~273 MiB
```

This is a nice balance: large enough context for coding, still fast, and not completely out of VRAM.
Faster 16k profile
```
-c 16384 -ncmoe 19 -ctk q8_0 -ctv q8_0 -t 9
```

Result:

```
Prompt: ~91.5 t/s in llama-cli
Generation: ~44.5 t/s
VRAM free: ~37 MiB
```

This is slightly faster, but very close to the VRAM edge.
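One reason these profiles sit so close to the VRAM limit is KV-cache size, which you can estimate directly. A rough sketch; the layer/head/dim numbers below are made-up illustration values, not the real Qwen3.6 config, and it relies on llama.cpp's q8_0 layout storing 32 values plus a 2-byte scale per block (34/32 bytes per value):

```python
def kv_cache_mib(n_layers: int, n_kv_heads: int, head_dim: int,
                 ctx_len: int, bytes_per_elt: float) -> float:
    # K and V each hold n_layers * n_kv_heads * head_dim values per token.
    elts = 2 * n_layers * n_kv_heads * head_dim * ctx_len
    return elts * bytes_per_elt / (1024 ** 2)

# Illustrative dimensions only (not the real model config):
layers, kv_heads, hdim = 48, 4, 128
for name, b in [("f16", 2.0), ("q8_0", 34 / 32)]:
    print(f"{name}: {kv_cache_mib(layers, kv_heads, hdim, 32768, b):.0f} MiB")
```

With these illustrative dims, f16 comes out to 3072 MiB at 32k context versus 1632 MiB for q8_0, which is why q8 KV is attractive whenever it is speed-neutral.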
MoE offload sweep
Plain decoding, q4 KV, `-t 11`:

```
-ncmoe 22: tg128 ~41.6 t/s
-ncmoe 20: tg128 ~41.7 t/s
-ncmoe 19: tg128 ~44.2 t/s
-ncmoe 18: tg128 ~45.9 t/s
-ncmoe 17: tg128 ~46.6 t/s
-ncmoe 16: tg128 ~25.8 t/s  <-- cliff / too aggressive
```

So for plain decoding:

```
safe:  -ncmoe 18
edge:  -ncmoe 17
avoid: -ncmoe 16
```

KV cache sweep
At `-ncmoe 18`, `-t 11`:

```
q4_0 KV: pp512 ~913 t/s, tg128 ~45.8 t/s
q8_0 KV: pp512 ~915 t/s, tg128 ~45.9 t/s
q5_0 KV: much slower
mixed q8 K + q4/q5 V: much slower
```

So on this GPU, q8 KV is basically free and preferable:

```
-ctk q8_0 -ctv q8_0
```

MTP / speculative decoding
I also tested MTP with the llama.cpp MTP branch.
Best MTP command:
```
llama-cli.exe ^
  -m "Qwen3.6-35B-A3B-MTP-IQ4_XS.gguf" ^
  --spec-type mtp ^
  -p "..." ^
  -n 512 ^
  --spec-draft-n-max 2 ^
  -c 4096 ^
  --temp 0 --top-k 1 ^
  -ngl 999 -ncmoe 19 ^
  -fa on ^
  -ctk q4_0 -ctv q4_0 ^
  --no-mmap ^
  --no-jinja ^
  -t 11 ^
  --perf
```

Result:

```
Generation: ~47.7 t/s
```

MTP sweep:

```
-ncmoe 24, depth 2: ~43.8 t/s
-ncmoe 20, depth 2: ~46.6 t/s
-ncmoe 19, depth 2: ~47.7 t/s
-ncmoe 18: failed / invalid vector subscript
-ncmoe 16: failed / invalid vector subscript
```

Depth 3 was worse:

```
depth 3, -ncmoe 20: ~39.8 t/s
```

So the MTP sweet spot was:

```
--spec-draft-n-max 2
```

Conclusion
With 12GB VRAM, plain decoding is already very strong:
```
Plain llama-bench: ~914 t/s pp512, ~46.8 t/s tg128
Best MTP observed: ~47.7 t/s generation
```

So MTP only gave about a 2% generation speedup over well-tuned plain decoding. For coding, I would personally use plain decoding with 32k context:

```
-c 32768 -ncmoe 20 -ctk q8_0 -ctv q8_0 -t 9
```

The big lesson: for this MoE model, 12GB VRAM is a very practical sweet spot. It keeps enough experts on GPU that plain decoding becomes fast, q8 KV is usable, and 32k context is realistic.
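If you want to locate the `-ncmoe` cliff on different hardware, the sweep is easy to script. A minimal sketch that only generates the `llama-bench` command lines rather than running them; `sweep_commands` is a hypothetical helper name, and the flag spellings are copied from the runs above, so check them against your llama.cpp build (some builds spell the option `--n-cpu-moe`):

```python
def sweep_commands(model: str, ncmoe_values) -> list[str]:
    """Build one llama-bench invocation per -ncmoe value to compare tg128."""
    return [
        f"llama-bench -m {model} -ngl 999 -ncmoe {n} -t 11 -ctk q8_0 -ctv q8_0"
        for n in ncmoe_values
    ]

for cmd in sweep_commands("Qwen3.6-35B-A3B-MTP-IQ4_XS.gguf", (22, 20, 19, 18, 17, 16)):
    print(cmd)
```

Running each printed command and comparing the tg128 column reproduces the sweep table above.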
submitted by /u/jwestra
[link] [comments] -
🔗 r/Yorkshire The Gannet, RSPB Bempton Cliffs, Yorkshire rss
submitted by /u/aspiranthighlander
[link] [comments]
-
🔗 Hex-Rays Blog Announcing the 2025 Plugin Contest Winners rss
-
🔗 r/LocalLLaMA vLLM ROCm has been added to Lemonade as an experimental backend rss
vLLM has the ability to run .safetensors LLMs before they are converted to GGUF and represents a new engine to explore. I personally had never tried it out until u/krishna2910-amd / u/mikkoph and u/sa1sr1 made it as easy as running llama.cpp in Lemonade:

```
lemonade backends install vllm:rocm
lemonade run Qwen3.5-0.8B-vLLM
```

This is an experimental backend for us in the sense that the essentials are implemented, but there are known rough edges. We want the community's feedback to see where and how far we should take this. If you find it interesting, please let us know your thoughts!

Quick start guide: https://lemonade-server.ai/news/vllm-rocm.html
GitHub: https://github.com/lemonade-sdk/lemonade
Discord: https://discord.gg/5xXzkMu8Zk

submitted by /u/jfowers_amd
[link] [comments]
-
🔗 @HexRaysSA@infosec.exchange 🔦 PLUGIN SPOTLIGHT: ida-cyberchef mastodon
🔦 PLUGIN SPOTLIGHT: ida-cyberchef
This is a new open source plugin that embeds CyberChef's data transformation engine directly into IDA Pro, with a Qt interface that sits alongside your disassembly as a side panel.
Data flows top to bottom through three panels for input, recipe, and output.
-
🔗 r/Yorkshire Hebden Bridge illustration rss
Hey folks! Thought you'd enjoy this new little illustration I just finished of Hebden Bridge. This took around 18 hours based on my own photos :) submitted by /u/zacrosso_art
[link] [comments]
-
🔗 r/wiesbaden In Germany soon rss
Hi. I would like to ask what things I need to prepare. I will arrive in Hessen, Germany this coming June, and I am from the Philippines. Any answers/suggestions will be a great help for me. Thank you.
submitted by /u/No_Manner_2072
[link] [comments] -
🔗 r/LocalLLaMA Unpopular Opinion: The DGX Spark Forum community of devs is talented AF and will make the crippled hardware a success through their sheer force of will. rss
There is a lot of disdain for DGX Sparks here on the sub. And I get it. A lot of people say “It could have been great if it had been better memory bandwidth”, “SM-121 is a fake /second-class Blackwell chip” yadda, yadda. These criticisms are valid.
I bought one anyway because I’m pursuing a Masters in AI and I wanted it for training models, tool dev, testing, etc.
I was an early adopter, and like many, I was disappointed by the inference performance and software stack initially. Recently, my opinion and experience have changed.

NVIDIA has an "official" DGX Spark Development community forum that is thriving. The people in the DGX forum community are some of the kindest, smartest, most tenacious group of developers I've met. These dudes have one common goal: Squeeze every last drop of performance out of this hardware to prove to themselves and the world that they didn't make a bad purchase by buying a Spark. I know that sounds snarky, but I don't think it's a bad goal.
The vibe on the forum is like “Ok bros, we all bought this thing, the peeps over at r/LocalLLama are all laughing at us right now, let’s show those sons-of-bitches what we can do.” I mean, none of them would actually say that, because they are all really nice and helpful people, but that’s the vibe I get when I’m browsing through the posts. Everyone there has the same goal: optimize the hell out of DGX Spark to the highest level possible. It’s wild seeing such a harmonious atmosphere. No one really argues, trolls, rage baits, none of that. Just everyone in the same boat, working together and encouraging each other, sharing benchmarks, code, vLLM recipes, etc. Reminds me of the vibe of this sub like 2 years ago before all the bot posts flooded the place.
If you don’t believe me, about the DGX dev community, go check it out for yourself:
https://forums.developer.nvidia.com/c/accelerated-computing/dgx-spark-gb10
Check out some of the cool projects they’ve spun up like Sparkrun (http://sparkrun.dev), PrismaQuant, Spark Leaderboard, eugr vLLM, and all the other amazing projects these guys are working on.
The one big advantage of the DGX hardware for these developers is the fact that the HW and OS is all exactly the same for everyone. You know your shit is going to work on every other Spark box that is out there and that is powerful for a unified community with one common goal.
So yes, DGX Spark could have been a lot better and was probably crippled by design, but that’s not stopping the DGX Spark Forum community, these MFers are going to use their sheer force of will and talent to make this thing a success just to spite all the naysayers. My two cents, agree or disagree?
submitted by /u/Porespellar
[link] [comments] -
🔗 r/york Anyone looking for D&D groups or events in York? rss
I've been working with some local communities in York recently, and I'm trying to improve outreach for a lot of upcoming Dungeons and Dragons related things
submitted by /u/JunkDrawerTheatreCo
[link] [comments] -
🔗 r/wiesbaden Looking for an outdoor venue rss
Hi!

For a graduation party I'm looking for an outdoor venue, ideally with a tent, for up to 50 people in Wiesbaden or the surrounding area.

Bayleaf Events in Frankfurt Höchst is a very nice venue with a tent and decorations and my absolute favorite, but unfortunately it isn't available on 23 and 24 May.

If you have any other ideas, I'd definitely be glad to hear them!

submitted by /u/Levi_Ackermann_1304
[link] [comments] -
🔗 r/reverseengineering Ghidra-SNES: A Ghidra extension for reverse engineering SNES ROMs (first public release, feedback welcome!) rss
submitted by /u/JoshLeaves
[link] [comments] -
🔗 tomasz-tomczyk/crit v0.11.0 release
What's Changed
Big milestone! Crit crossed more than 500 commits and 250 stars. You can now install it directly from homebrew and we released a Windows version!
Thank you to everyone who contributed to get us here! I'd appreciate if you would share it with your colleagues or on Twitter! It helps a lot!
crit is now in homebrew-core — no tap needed. If you installed from the tap, upgrade once with `brew uninstall crit && brew untap tomasz-tomczyk/scratch && brew update && brew install crit`. Future updates will arrive via `brew upgrade` like any other formula.

Windows + WSL support

`feat: add Windows + WSL support` replaces Unix-only syscalls with cross-platform abstractions, adds `rundll32` browser launch on native Windows, and keeps the existing WSL fallback chain. crit now works end-to-end on Windows natively.

- feat: add Windows + WSL support by @tomasz-tomczyk in #459
General
- feat: add --file flag and better errors to crit comment --json by @tomasz-tomczyk in #480
- fix: deny rather than silently auto-approve on daemon shutdown by @tomasz-tomczyk in #483 - Thank you @TalAmuyal for raising!
- fix: remove daemon 1h idle timeout by @tomasz-tomczyk in #477 - Thank you @TalAmuyal for reporting!
- fix: audit fixes — path safety, shared reads, dir pruning by @tomasz-tomczyk in #485
- fix: chain reloadForScope when scope/commit changes mid-flight by @tomasz-tomczyk in #482
- fix: scope unified diff comment highlight to commented side by @tomasz-tomczyk in #479
- fix: header context chip colors and hidden unresolved count by @tomasz-tomczyk in #486
- fix: preserve CLI argument order for files by @tomasz-tomczyk in #474
- docs: switch primary brew install to homebrew-core by @tomasz-tomczyk in #481 - thanks @omervk for contributing to homebrew on our behalf!
- docs: cleanup stale spec by @tomasz-tomczyk
- refactor: drop auto-detection of stacked PRs / local stacks by @tomasz-tomczyk in #478
Full Changelog :
v0.10.5...v0.11.0 -
🔗 r/Leeds Roundhay park warning rss
Hi there!
Just wanted to write that whilst walking my dog- I had a strange encounter with an older man.
I was up at the back of Roundhay Park lake (taking the pathway through the woods) at 11:30 this morning. He was in a very isolated part of the walking trail, and after staring at me walking past, I said ‘good afternoon’ and he replied by telling me he thought I was ‘very beautiful’. I got a bad gut feeling and decided to leave straight away; he was saying more stuff as I was leaving, but I didn’t hear him as he was very quiet.
I just wanted to say to be cautious if you are in Roundhay Park and to stick to the main path by the lake if possible. Thanks!
submitted by /u/SadEntertainment5259
[link] [comments] -
🔗 r/reverseengineering Reverse-engineered DaVinci Resolve's activation check with Claude — Frida runtime tracing + radare2 rss
submitted by /u/Hour-Dirt-4010
[link] [comments] -
🔗 r/Yorkshire Richmond Castle in Yorkshire standing tall after nearly 1000 years. rss
submitted by /u/Still_Function_5428
[link] [comments] -
🔗 chenxvb/Unicorn-Trace Unicorn-Trace v0.4 release
Full Changelog :
v0.3...v0.4 -
🔗 r/york Policemen with assault rifles running around rss
Does anyone know anything about the policemen running around with automatic weapons near the Hungate apartments? Quite anxiety-inducing to see that.
submitted by /u/Reduxtion
[link] [comments] -
🔗 r/Yorkshire North Rigton... Apparently rss
submitted by /u/shoey_photos
[link] [comments] -
🔗 r/york Love the cobbles (or setts), and the whole atmosphere of Shambles is just magical, really brings out the history and charm of the place! 🌺 rss
submitted by /u/Coffee000Oopss
[link] [comments] -
🔗 tomasz-tomczyk/crit Spotify popup-relay preview (bb4d9fb) release
WIP build of `crit` with `share_flow: "popup"` config support for SSO-protected crit-web instances.

Setup instructions: SPOTIFY-PREVIEW.md

Pair with crit-web: docker image `ghcr.io/tomasz-tomczyk/crit-web:spotify-preview` (release, built from branch `share-receiver-elixir`).

Built from commit `bb4d9fb` of branch `share-receiver`.

Feedback / issues: tomasz-tomczyk/crit-web#50
-
🔗 r/Yorkshire There's no better place to drink a tea and reboot yourself than the Dales rss
Image by Dan Silcock. submitted by /u/Seabeachlover10
[link] [comments] -
🔗 backnotprop/plannotator v0.19.11 release
Follow @plannotator on X for updates
Missed recent releases?

Release | Highlights
---|---
v0.19.10 | Revert unreviewed bypass-clear-reminder permission mode
v0.19.9 | OpenCode user-managed workflow, Pi model switch fix, Codex skill install, shimmer removal
v0.19.8 | 49 themes with syntax highlighting, keyboard shortcut registry, smart code-file path validation, remote URL notifications
v0.19.7 | Codex Stop-hook plan review, Codex skills, sidebar auto-close, file tree context menu
v0.19.6 | Non-blocking Pi browser sessions, agent picker dropdown for OpenCode, annotate-last file resolution fix
v0.19.5 | All-files diff view, clickable code file paths, server-side hide whitespace, non-ASCII path support
v0.19.4 | All-files diff type, code file viewer, hide whitespace, quick-settings popover
v0.19.3 | Configurable feedback messages, hide merged PRs in stacked PR selector
v0.19.2 | Stacked PR review, source line numbers in feedback, diff type dialog re-show, ghost dot removal, docs cleanup
v0.19.1 | Hook-native annotation, custom base branch, OpenCode workflow modes, quieter plan diffs, anchor navigation
v0.19.0 | Code Tour agent, GitHub-flavored Markdown, copy table as Markdown/CSV, flexible Pi planning mode, session-log ancestor-PID walk
What's New in v0.19.11
v0.19.11 adds Jujutsu (jj) as a first-class VCS backend for code review and refines the review UI with slimmer separators, a cleaner header layout, and proper multi-line gutter selection. One of the two PRs in this release is from a first-time contributor.
Jujutsu (jj) Code Review
Plannotator's code review now works natively with Jujutsu, the Git-compatible VCS. When you run `/plannotator-review` in a jj workspace, the VCS is auto-detected and four jj-specific diff modes appear in the diff type picker:

- Current (`jj-current`) shows the working-copy changes
- Last (`jj-last`) shows the previous change
- Line (`jj-line`) shows the full line of work from the current change back to the trunk bookmark
- All (`jj-all`) shows all local changes not yet on the remote
Compare-target selection adapts to jj's model. Instead of branch-based base selection, the picker offers remote bookmarks. The feedback exported to your agent includes jj-appropriate local diff instructions so it can reproduce the same view.
Under the hood, this required a significant refactor. Diff collection, compare-target semantics, and file-content retrieval were pulled into a provider-based VCS abstraction in `packages/shared/vcs-core.ts`. Git, jj, and P4 each implement the same provider interface. The review server and UI consume provider-supplied metadata instead of branching on VCS-specific flags. This abstraction makes adding future VCS backends straightforward.

For colocated repos (both `.git` and `.jj` present), jj takes priority. Pass `--git` to `/plannotator-review` to override.

- Authored by @graemefolk in #675
Review UI Refinements
Several quality-of-life improvements to the code review interface:
Slimmer hunk separators. The expand/collapse bars between diff hunks are now 24px (down from 32px), with semi-transparent theme-integrated backgrounds. Text and buttons fade with lower opacity for a subtler look that puts the focus on the code.
Cleaner header layout. Sidebar toggles (Annotations, AI, Agents) moved to the far right of the header bar, with the options menu to their left. A visual divider separates the file tree button from the repo label.
Collapse viewed files. Marking a file as viewed in all-files review mode now automatically collapses it, keeping only unreviewed files expanded.
Multi-line gutter selection fix. Click-and-drag on the gutter annotation button now correctly selects a range of lines. The previous implementation used a deprecated Pierre API that never entered the selection mode, so dragging always reported a single line.
Install / Update
macOS / Linux: `curl -fsSL https://plannotator.ai/install.sh | bash`

Windows: `irm https://plannotator.ai/install.ps1 | iex`

Claude Code Plugin: Run `/plugin` in Claude Code, find plannotator, and click "Update now".

OpenCode: Clear cache and restart with `rm -rf ~/.bun/install/cache/@plannotator`. Then in `opencode.json`: `{ "plugin": ["@plannotator/opencode@latest"] }`

Pi: Install or update the extension: `pi install npm:@plannotator/pi-extension`
What's Changed
- feat(review): add jj review workflows by @graemefolk in #675
- Review UI refinements: separator styling and header layout by @backnotprop in #683
New Contributors
- @graemefolk made their first contribution in #675
Community
@graemefolk built full jj support from scratch, implementing the VCS provider, diff modes, compare-target picker, and feedback export in a single well-structured PR. The VCS abstraction layer they introduced benefits the entire codebase.
@JohannesKlauss reported the multi-line gutter selection bug in #679, with a clear screen recording that made the root cause obvious.
@festive-onion requested the collapse-on-viewed behavior in #682, a small change that meaningfully improves the review workflow for large diffs.
Full Changelog :
v0.19.10...v0.19.11 -
-
🔗 r/reverseengineering SASS King Part 2: reverse-engineering ptxas heuristic decisions and what the compiled binary actually reveals rss
submitted by /u/CurrentLawfulness358
[link] [comments] -
🔗 r/reverseengineering I just released a C++ rewrite of **Minecraft rd-20090515** (May 15, 2009 — one of the earliest pre-Classic versions). If you find it interesting, a ⭐ on GitHub would mean a lot and help the project grow! rss
submitted by /u/03D_DEV
[link] [comments] -
🔗 r/LocalLLaMA Multi-Token Prediction (MTP) for LLaMA.cpp - Gemma 4 speedup by 40% rss
Implemented Multi-Token Prediction for LLaMA.cpp. Quantized Gemma 4 assistant models into GGUF format. Ran tests on a MacBook Pro M5 Max. Gemma 26B with MTP drafts tokens 40% faster.

Prompt: Write a Python program to find the nth Fibonacci number using recursion

Outputs:
LLaMA.cpp: 97 tokens/s
LLaMA.cpp + MTP: 138 tokens/s

Gemma4-assistant GGUF quantized models: https://huggingface.co/collections/AtomicChat/gemma-4-assistant-gguf
Local AI models app: http://atomic.chat
Patched llama.cpp: https://github.com/AtomicBot-ai/atomic-llama-cpp-turboquant

submitted by /u/gladkos
[link] [comments] -
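The item above relies on the draft-and-verify idea behind multi-token prediction and speculative decoding. As a toy illustration (my sketch, not llama.cpp's actual MTP implementation; `speculative_generate`, `target_next`, and `draft_next` are made-up names), the core loop can be modeled with two greedy next-token functions: a cheap draft proposes k tokens, and one batched target pass keeps the longest agreeing prefix, so a single expensive step can commit several tokens.

```python
def speculative_generate(target_next, draft_next, prompt, n_new, k=4):
    """Toy draft-and-verify loop. Each verification round counts as one
    batched target-model pass; a good draft commits up to k tokens per pass."""
    out = list(prompt)
    target_calls = 0
    while len(out) - len(prompt) < n_new:
        # 1) draft k tokens with the cheap model
        drafted, ctx = [], list(out)
        for _ in range(k):
            tok = draft_next(ctx)
            drafted.append(tok)
            ctx.append(tok)
        # 2) verify: keep drafted tokens while they match the target's choice
        target_calls += 1
        accepted, ctx = [], list(out)
        for tok in drafted:
            if target_next(ctx) == tok:
                accepted.append(tok)
                ctx.append(tok)
            else:
                break
        # 3) on a mismatch, take the target's own token instead
        if len(accepted) < len(drafted):
            accepted.append(target_next(out + accepted))
        out.extend(accepted)
    return out[: len(prompt) + n_new], target_calls

# A perfect draft commits k tokens per target pass; a useless draft
# degrades gracefully to one token per pass.
count_up = lambda seq: seq[-1] + 1  # toy "model": always count upward
tokens, calls = speculative_generate(count_up, count_up, [0], n_new=8)
```

With a perfect draft this generates 8 tokens in 2 target passes; with a draft that is always wrong, it falls back to 1 token per pass, which matches the intuition for why the measured speedup depends on draft acceptance rates.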
🔗 jank blog jank now has its own custom IR rss
Good news, everyone! jank has a new custom intermediate representation (IR) and we're using it to optimize jank to compete with the JVM. We'll dive into more of that today, but first I want to say thank you to my GitHub sponsors and to Clojurists Together for sponsoring me this whole year. You all are helping a great deal. I am still searching for a way to continue working on jank full-time with an income which will cover rent and groceries, so if you've not yet chipped in a sponsorship, now's a great time!
-
🔗 matklad Steering Zig Fmt rss
Steering Zig Fmt
May 8, 2026
Two tips on using `zig fmt` effectively. Read this if you are writing Zig, or if you are implementing a code formatter.

For me, `zig fmt` is better than any other formatter I used: `rustfmt`, the one in IntelliJ, `deno fmt`. `zig fmt` is steerable. For every syntactic construct, it has several variations for how it might be laid out. The variation used is selected by looking at what’s currently in a file.
```zig
f(1, 2, 3); // -> zig fmt -> f(1, 2, 3);

f(1, 2, 3,); // -> zig fmt ->
f(
    1,
    2,
    3,
);
```
The way this plays out in practice is that you decide how you want to lay out the code, add a couple of
,, hit the reformat shortcut (, pis mine), andzig fmtdoes the rest. For me, this works better than the alternative of the formatter guessing. 90% of great formatting are blank lines between logical blocks and tasteful choice of intermediate variables, so you might as well lean into key choices, rather than eliminate them.I know of one non-trivial formatting customization point: columnar layout for arrays:
.{ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, };One would think that trailing comma would lead to a number-per-line layout, but, for arrays,
zig fmtalso takes note of the first line break. In this case, the line break comes after the first three items, so we get three numbers per line, aligned:.{ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, };How cool is that!
Furthermore, with judicious use of
++(array concatenation), you can vary the number of items per line. When I need to pass--keyvaluepairs to subprocess, I often go for formatting like this:try run(&(.{ "aws", "s3", "sync", path, url } ++ .{ "--include", "*.html", "--include", "*.xml", "--metadata-directive", "REPLACE", "--cache-control", "max-age=0", })); -
🔗 Armin Ronacher Pushing Local Models With Focus And Polish rss
I really, really want local models to work.
I want them to work in the very practical sense that I can open my coding agent, pick a local model, and get something that feels competitive enough that I do not immediately switch back to a hosted API after five minutes. There are a lot of reasons why I want this, but the biggest quite frankly is that we're so early with this stuff, and the thought of locking all the experimentation away from the average developer really upsets me.
Frustratingly, right now that is still much harder than it should be but for reasons that have little to do with the complexity of the task or the quality of the models.
We have an enormous amount of activity around local inference, which is great. We have good projects, fast kernels, and people are doing great quantization work. A lot of very smart people are making all of this better, and yet the experience for someone trying to make this work with a coding agent is worse than it has any right to be.
Putting an API key into Pi and using a hosted model is a very boring operation. You select the provider, paste the key and then you are done thinking about how to get tokens. Doing the same thing locally, even when you have a high-end Mac with a lot of memory, is a completely different experience. You choose an inference engine, then a model, then a quantization, then a template, then a context size, then you've got to throw a bunch of JSON configs into different parts of the stack and then you discover that one of those choices quietly made the model worse or that something just does not work at all.
That is the gap I am interested in.
Runnable Is Not Finished
A lot of local model work optimizes for making models runnable. That is necessary, but it is not the same thing as making them feel finished. I give you a very basic example here to illustrate this gap: tool parameter streaming.
For whatever reason, most of the stuff you run locally does not support tool parameter streaming. I cannot quite explain it, but the consequences of that are actually surprisingly significant. If you are not familiar with how these APIs work, the simplest way to think about them is that they are emitting tokens as they become available. For text that is trivial, but for tool calls that is often not done, despite the completions API supporting this. As a result you only see what edits are being done on a file once the model has finished streaming the entire tool call.
This is bad for a lot of reasons:
- A dead connection is a weird connection: local models are slow, so when you don't get any tokens for 5 minutes you can't tell if the connection died or if nothing has come yet. This means you need to increase the inactivity timeouts to the point where they are pointless.
- You won't see what will happen: if you are somewhat hands-on, not seeing what bash invocation the system is concocting slowly in the background means potentially wasted tokens, and also means that you won't be able to interrupt it until way too late.
- It's just not SOTA. We can do better, and we should aim for having the best possible experience. Tool parameter streaming is as important as token streaming in other places.
Having a model spit out tokens doesn't take long, but making the experience great end to end does take a lot more energy.
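To make the gap concrete, here is a minimal sketch, assuming an OpenAI-style stream where tool-call `arguments` arrive as partial JSON string fragments (`accumulate_tool_call` and the chunk shapes are my own illustration, not any particular engine's API):

```python
import json

def accumulate_tool_call(deltas, on_partial=lambda text: None):
    """Join streamed tool-call argument fragments. `on_partial` is the
    point where a streaming UI would re-render the growing call; without
    streaming, nothing is visible until the last fragment arrives."""
    name, fragments = None, []
    for delta in deltas:
        if "name" in delta:
            name = delta["name"]
        if "arguments" in delta:
            fragments.append(delta["arguments"])
            on_partial("".join(fragments))  # visible after every chunk
    return name, json.loads("".join(fragments))

# Hypothetical chunk sequence for a file-edit tool call:
chunks = [
    {"name": "write_file"},
    {"arguments": '{"path": "app.py", '},
    {"arguments": '"content": "print(42)"}'},
]
name, args = accumulate_tool_call(chunks)
```

An agent UI wired to `on_partial` can show the bash command or file edit as it is being composed, which is exactly what most local stacks currently withhold until the call is complete.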
Fragmentation
The local stack is fragmented across many engines and layers. There is llama.cpp, Ollama, LM Studio, MLX, Transformers, vLLM, and many other pieces depending on hardware and taste. All of these are amazing projects! The problem is not that they exist or that there are that many of them (even though, quite frankly, I'm getting big old Python packaging vibes), the problem is that for a given model, the actual behavior you get depends on a long chain of small decisions that most users just don't have the energy for.
Did the chat template render exactly right? Are the reasoning tokens handled in the intended way? Is the tool-call format translated correctly? Is the context window real? Are the KV caches actually working for a coding agent? Did you pick the right quantized model from Hugging Face? Are you accidentally leaving a lot of performance on the table because the model is just mismatched for your hardware? Does streaming usage work across all channels? Does the model need its previous reasoning content preserved in assistant messages? Is the coding agent set up correctly for it?
You also need to install many different things in addition to just your coding agent.
All of these things matter. They matter a lot.
The result is that people try a local model and get a result that is neither a fair evaluation of the model nor a polished product experience. This leads both to people dismissing local models and to energy being distributed across way too many separate efforts instead of one effort going great end to end.
This is a terrible way to build confidence.
Too Little Critical Mass
In line with our general "slow the fuck down" mantra, I want to reiterate once more how fast this industry is moving.
Every week there is a new model and a new vibeslopped thing. The attention immediately moves to making the next thing run instead of making one thing run really, really well in one harness. I get the excitement and dopamine hit, but it also means that too little critical mass accumulates behind any one model, hardware, inference engine, harness combo to find out how good it can really become when the entire stack is built around it.
Hosted model providers do not ship a bag of weights and ask you to figure out the rest, and we need to approach that line of thinking for local models too. I want someone to pick one model and pair it up with one serving path, directly within a coding agent. Initially just for one hardware configuration, then for more. Pick a winner hard. If a tool call breaks, that is a product bug and then it's fixed, no matter where in the stack it failed. If the model's reasoning stream is malformed, that is a product bug. If latency is much worse than it should be, that is a product bug. We need to start applying that mentality to local models too.
And not for every model! That is the point. Let's pick one winner and polish the hell out of it. Learn what it takes to make that one configuration good, then take those learnings to the next config.
The DS4 Bet
This is why I am excited about ds4.c. It's Salvatore Sanfilippo's deliberately narrow inference engine for DeepSeek V4 Flash on Macs with 128GB+ of RAM only. It is not a generic GGUF runner and it is not trying to be a framework. It is a model-specific native engine with a Metal path, model-specific loading, prompt rendering, KV handling, server API glue, and tests.
DeepSeek V4 Flash is a good candidate for this kind of experiment because it has a combination of properties that are unusual for local use. It is large enough to feel meaningfully different from many smaller dense models, but sparse enough that the active parameter count makes it plausible to run. It has a very large context window. Since ds4.c targets Macs and Metal only, it can move KV caches into SSDs which greatly helps the kind of workloads we expect from coding agents.
To run `ds4.c` you don't need MLX, Ollama or anything else. It's the whole package.

Embedding It In Pi
Which made me build pi-ds4, a Pi extension that embeds the whole thing directly into Pi itself, taking what ds4 is and dogfooding the hell out of it with a coding agent and zero configuration. It answers the question: how good can the local model experience become if Pi treats this as a first-class provider rather than as a pile of manual configuration?
The extension registers `ds4/deepseek-v4-flash`, compiles and starts `ds4-server` on demand, downloads and builds the runtime if needed, chooses the quantization based on the machine, keeps a lease while Pi is using it, exposes logs, and shuts the server down again through a watchdog when no clients are left. It doesn't even give you knobs right now, because I want to figure out how to set the knobs automatically.

This is not about hiding the fact that local inference is complicated. It is about putting the complexity in one place where it can be improved, because there is a lot that we need to improve along the stack to make it work better.
I think we can do better with caching and there is probably some performance that can be gained if we all put our heads together.
Focusing and Learning
The experiment I want to run is not "can a local model run?" because we already know that it can. I want to know if, for people with beefed-out Macs for a start, we can get as close as possible to the ergonomics of a hosted provider with decent tool-calling performance: how to get caches to work well, how to improve the way we expose tools in harnesses for these models, and then scale it gradually to more hardware configs and later models.
I also want everybody to have access to this. Engineers need hammers and a hammer that's locked behind a subscription in a data center in another country does not qualify. I know that the price tag on a Mac that can run this is itself astronomical, but I think it's more likely that this will go down. Even worse, Apple right now due to the RAM shortage does not even sell the Mac Studio with that much RAM. So yes, it's a selected group of people where ds4.c will start out.
But despite all of that, what matters is that a critical mass of people start to focus their efforts on one thing, tinker with it, improve it, not locked away but out in the open, and most importantly not limited by what the hyperscalers make available.
But if you have the right hardware and you care about local agents, I would love for you to try it within Pi: `pi install https://github.com/mitsuhiko/pi-ds4`

My hope is that this becomes a useful forcing function to really polish one coding agent experience. But really, the focal point should be ds4.c itself.
-
-

