  1. Dissecting PipeMagic: Inside the architecture of a modular backdoor framework | Microsoft Security Blog
  2. Reversing a (not-so-) Simple Rust Loader | cxiao.net
  3. Jujutsu Radicle = ❤️
  4. xvw.lol - Why I chose OCaml as my primary language
  5. Introducing Contextual Retrieval | Anthropic

  1. August 25, 2025
    1. 🔗 pydantic/pydantic-ai v0.7.5 (2025-08-25) release

      What's Changed

      • Handle 'STOP' finish_reason in GeminiStreamedResponse by @ArneZsng in #2631
      • Add price() method to ModelResponse by @Kludex in #2584
      • Include thoughts tokens in output_tokens for Google models by @alexmojaki in #2634
      • Add span_id and trace_id to EvaluationReport by @Kludex in #2627
      • Allow proper type on AnthropicProvider when using Bedrock by @akoshel in #2490
      • Use new OpenTelemetry GenAI chat span attribute conventions by @alexmojaki in #2349
      • Ensure content is always set for assistant tool call messages for OpenAI. by @vimota in #2641

      New Contributors

      Full Changelog: v0.7.4...v0.7.5

    2. 🔗 James Sinclair Rendering mazes on the web rss

      In the last article, we discussed building mazes using recursion and immutable data structures. But all we did there was create a graph. That is, we built a data structure in memory. We didn't talk at all about how we render it. But the beauty of the web platform is that we have so many options. In this article, we're going to cover three different ways to render a maze:

      1. Rendering a maze with Unicode box drawing characters;
      2. Rendering a maze with SVG; and
      3. Rendering an accessible maze with HTML and CSS.

      Unicode rendering

      With a Unicode renderer, we take our maze and convert it into a string. This is handy for debugging, as it works nicely with console.log(). I use this technique to include a maze in the source code of each blog post, for example. The output looks something like the following:¹

      ¹ If you're trying this yourself, you may often find that the box lines don't meet vertically. This is because most modern applications add generous line spacing. The extra line spacing makes ordinary text more readable, but means our box drawing characters no longer meet. You can make the lines meet again by setting the line height to 1 (if you have that option).

      How does this work? Well, it uses 15 of the 128 box drawing characters from the Unicode Box Drawing block. These characters all represent intersections or vertices. That is, places where walls of the maze meet.

      This is a shift in perspective from our maze generation. The generation code focussed on rooms. This rendering code will focus on vertices. To start, we'll create a way to map from the directions that meet at a vertex to a box drawing character. Each key of the object represents the walls that meet at that vertex. So for a vertex where walls meet in all four directions the key is NESW (for north, east, south, west), and the value is ┼.

      const RENDER_MAP = {
        NESW: '┼',
        NES: '├',
        NEW: '┴',
        NSW: '┤',
        ESW: '┬',
        NE: '└',
        NS: '│',
        NW: '┘',
        ES: '┌',
        EW: '─',
        SW: '┐',
        N: '╵',
        E: '╶',
        S: '╷',
        W: '╴',
        '': '.',
      };
      
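      Note that the keys are always written in canonical N, E, S, W order, so a vertex with walls to the east and north still uses the key NE, never EN. A tiny self-contained sketch of the lookup (using a plain Set and just an excerpt of the table):

```javascript
// Excerpt of the table above, enough to show the lookup.
const RENDER_MAP = { NESW: '┼', NE: '└', ES: '┌', '': '.' };

// Walls meeting at this vertex, in no particular order.
const walls = new Set(['E', 'N']);

// Filtering a canonical NESW ordering against the set
// guarantees the key comes out in a consistent order.
const key = ['N', 'E', 'S', 'W'].filter((d) => walls.has(d)).join('');

console.log(key, RENDER_MAP[key]); // NE └
```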

      Now, to make life easier for ourselves, we're going to create a helper class that represents a vertex. It will do two main things:

      1. Help add walls to a vertex; and
      2. Convert from a vertex to a string.

      Here's how the code looks:

      // vertex.js
      
      // We'll continue using the ImmutableJS data structures
      // that we used in the previous article.
      import { Set } from 'immutable';
      
      // Each of these is an immutable object representing
      // a point.
      import { EAST, NORTH, SOUTH, WEST } from './point';
      
      // An array containing each of the four directions.
      const DIRS = [NORTH, EAST, SOUTH, WEST];
      
      /**
       * Converts a direction represented as a point into a
       * single character (N, E, S, or W).
       */
      function dirToChar(dir) {
        switch (dir) {
          case NORTH:
            return 'N';
          case EAST:
            return 'E';
          case SOUTH:
            return 'S';
          case WEST:
            return 'W';
          default:
            throw new Error('Unknown direction encountered');
        }
      }
      
      /**
       * The Vertex class.
       */
      export class Vertex {
        // The constructor takes a Set of directions and
        // stores it as a property.
        constructor(nesw) {
          this.nesw = nesw;
        }
      
        // Adding a direction to a vertex creates a new 
        // vertex. Using an Immutable Set here means we
        // don't have to worry about adding to the set
        // modifying other vertices.
        add(dir) {
          return new Vertex(this.nesw.add(dir));
        }
      
        // Convert this vertex to a string by mapping its
        // directions to a box drawing character.
        toString() {
          const key = DIRS
            .filter((d) => this.nesw.has(d))
            .map(dirToChar)
            .join('');
          return RENDER_MAP[key];
        }
      }
      
      // We export the empty vertex. Since this is immutable,
      // we should only ever need one of these and we can
      // consider it a constant.
      export const EMPTY_VERTEX = new Vertex(Set());
      

      With that in place, we need one more helper function before we write the rendering code. It's called repeat(). You give it a value, v, and a number n. And it will return an array filled with the value v, repeated n times:

      export function repeat(value, n) {
        return new Array(n).fill(value);
      }
      
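      A quick usage sketch, including the 2D-grid composition used below. Because the inner repeat runs once per row, each row is a distinct array rather than one shared reference:

```javascript
// repeat() fills a new array with a single value.
function repeat(value, n) {
  return new Array(n).fill(value);
}

console.log(repeat('x', 3)); // [ 'x', 'x', 'x' ]

// Composing two repeats gives a 2D grid of rows.
const grid = repeat(undefined, 2).map(() => repeat(0, 3));
console.log(grid); // [ [ 0, 0, 0 ], [ 0, 0, 0 ] ]
```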

      With that in place, we can code our maze rendering algorithm. We start by creating a 2D array of empty vertices. Then we consider each vertex in turn.

      For each vertex, we construct four Point objects to represent the possible adjoining rooms. Then we pair these points together, creating an array of possible walls. Then we work out which walls to add. We do this by checking the maze graph. If two adjoining rooms have a connection, there is no wall. Conversely, if we can't find a connection, we add a wall.

      We end up with another 2D array of vertices. Once we have that, we take advantage of the .toString() override we created, and call .join('') on each row, then .join('\n') to create a single string.
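
      The final join step, in isolation (a hand-assembled 2×2 grid of already-stringified vertices; here, just one sealed room):

```javascript
// Each inner array is a row of already-stringified vertices.
const rows = [['┌', '┐'], ['└', '┘']];

// Join each row into a line, then join the lines with newlines.
const maze = rows.map((row) => row.join('')).join('\n');

console.log(maze);
// ┌┐
// └┘
```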

      The code looks like so:

      // Assumed import paths: p is the Point constructor and
      // the directions come from './point' (as in vertex.js);
      // EMPTY_VERTEX and repeat come from the helpers above.
      import { p, NORTH, EAST, SOUTH, WEST } from './point';
      import { EMPTY_VERTEX } from './vertex';
      import { repeat } from './repeat';
      
      /**
       * Render Maze Text.
       *
       * Renders the maze using Unicode box
       * drawing characters.
       *
       * @param {number} n  The size of the maze. The maze is
       *   always a square and n represents the number of
       *   rooms along one side of the square.
       * @param {Map<Point, List<Point>>} rooms A graph 
       *   representation of the maze, as a map of rooms
       *   (x,y coordinates) to adjacent rooms (a list of
       *   x,y coordinates).
       * @returns A Unicode representation of the maze.
       */
      export function renderMazeText(n, rooms) {
        // Construct a 2D array with n + 1 rows and
        // n + 1 columns.
        const emptyVertices = repeat(undefined, n + 1)
          .map(() => repeat(EMPTY_VERTEX, n + 1));
      
        // Map over each vertex and consider its possible
        // adjoining rooms.
        const vertices = emptyVertices.map((row, y) =>
          row.map((vertex, x) => {
            // We are looking at the vertex at x,y. There are
            // potentially rooms to the NW, NE, SE, and SW.
            const nwRoom = p(x - 1, y - 1);
            const neRoom = p(x, y - 1);
            const seRoom = p(x, y);
            const swRoom = p(x - 1, y);
      
            // Pair the possible adjacent rooms with the
            // direction of the wall between them.
            return (
              [
                [nwRoom, neRoom, NORTH],
                [neRoom, seRoom, EAST],
                [seRoom, swRoom, SOUTH],
                [swRoom, nwRoom, WEST],
              ]
            ).reduce((v, [a, b, dir]) => {
              // If at least one of the rooms is inside the
              // maze and there is no connection between them,
              // add a half-wall.
              return (rooms.has(a) || rooms.has(b))
                && !rooms.get(a)?.includes(b)
                  ? v.add(dir)
                  : v;
            }, vertex);
          }),
        );
      
        // Convert the whole thing to a string, taking
        // advantage of the Vertex .toString() override.
        return vertices.map((row) => row.join('')).join('\n');
      }
      

      When we run this code, we get back a string. And we can use strings almost anywhere. Here's another example:

      It's great to have the portability of a string. But it's not without its problems. In most scenarios, the lines don't join up vertically. And most fonts render characters as oblongs rather than squares. The rendering is functional, but not pretty. Working with a web browser, we have other options.

      SVG rendering

      One option for rendering our maze is using SVG. The result looks something like the following:

      The code to generate SVG output is even simpler than our Unicode renderer. This is because we don't need to muck around with vertices. Instead, we start by drawing two long lines for the north and west sides of the maze.

      Next, we consider each room in turn. For each room, we check to see if there is an adjoining room to the south or east. If not, then we need to draw a line to represent the wall between those two rooms.

      We draw the wall by creating a string to represent an SVG path element. Once we've repeated this process for all the rooms, we end up with a List of strings. We then .join() the list into a single string and insert these into an SVG group element. And we place all that into an outer SVG element.

      Here's the code:

      import { List } from 'immutable';
      // Assuming addPoint (curried point addition) lives in
      // './point' alongside the direction constants.
      import { addPoint, SOUTH, EAST } from './point';
      
      /**
       * Render maze as SVG.
       *
       * @param {number} n The size of the maze. The maze is
       *   always a square and n represents the number of
       *   rooms along one side of the square.
       * @param {number} squareSize The size in pixels to draw
       *   each room.
       * @param {Map<Point, List<Point>>} rooms A graph
       *   representation of the maze. That is, a map of rooms
       *   (Point objects) to adjacent rooms (a List of
       *   Point objects).
       * @returns A string that will draw an SVG
       *   representation of the maze if converted to
       *   DOM elements.
       */
      export function renderMazeSVG(n, squareSize, rooms) {
        // Calculate the total size of the SVG image we're
        // creating. Since it's a square, it will be the same
        // in each dimension.
        const totalSize = (n + 2) * squareSize;
      
        // Create two long 'walls' for the north and west
        // sides. We do this by making two SVG path elements.
        const wStart = squareSize;
        const wEnd = (n + 1) * squareSize;
        const northWall = `<path d="M ${wStart} ${wStart} L ${wEnd} ${wStart}" />`;
        const westWall = `<path d="M ${wStart} ${wStart} L ${wStart} ${wEnd}" />`;
      
        // Construct the rest of the maze by examining each
        // room in turn.
        const wallLines = rooms
          .reduce((allWalls, doors, room) => {
            // For the given room, check to see if it
            // has an adjoining room to the south or east.
            const walls = [SOUTH, EAST]
              .map(addPoint(room))
              .filter((adj) => !doors.includes(adj))
      
              // Calculate the start and end point for the wall
              .map((adj) => [adj.x, adj.y, room.x + 1, room.y + 1])
              .map((pts) => pts.map((pt) => (pt + 1) * squareSize))
      
              // Convert the start and end points into an SVG
              // Path element.
              .map(([ax, ay, bx, by]) => `<path d="M ${ax} ${ay} L ${bx} ${by}" />`);
      
            // Add the paths we've just created (if any) to
            // the list of maze lines we're creating.
            return allWalls.push(...walls);
          }, List())
      
          // Join all these path strings together into a
          // single string.
          .join('\n');
      
        // Construct the SVG element with all the walls
        // as children.
        return `<svg width="${totalSize}" height="${totalSize}" viewBox="0 0 ${totalSize} ${totalSize}">
           <g class="mazebg" stroke="currentColor" stroke-width="1">
            ${northWall}
            ${westWall}
            ${wallLines}
           </g>
          </svg>`;
      }
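
      To make the endpoint arithmetic concrete, here's the calculation for a single south wall in isolation (plain objects standing in for Points):

```javascript
const squareSize = 16;

// Room (2,3) with no door to its south neighbour (2,4).
const room = { x: 2, y: 3 };
const adj = { x: 2, y: 4 };

// The wall runs along the grid line between the two rooms:
// from the adjacent room's corner to (room.x + 1, room.y + 1).
// Shifting every coordinate by 1 leaves the outer margin, and
// multiplying by squareSize converts grid units to pixels.
const [ax, ay, bx, by] = [adj.x, adj.y, room.x + 1, room.y + 1]
  .map((pt) => (pt + 1) * squareSize);

const path = `<path d="M ${ax} ${ay} L ${bx} ${by}" />`;
console.log(path); // <path d="M 48 80 L 64 80" />
```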
      

      When we run the code, we get back a string. But we can insert this string into the DOM and the browser will render it for us. Or, we can write the string to a file and render it as an image.

      Here's another example, just for fun:

      Could we render an accessible maze?

      The trouble with both text and SVG is that they're not terribly accessible. It's not apparent to assistive technologies that we're dealing with a maze. We can improve things slightly by adding alt text to an img element showing the SVG. But that still doesn't provide the same amount of information that the visual rendering does. So, is there a way we could do better?

      One simple thing we could try is creating a list of all the rooms as HTML. It's not pretty, but it does contain all the information in the maze. So it would tick the box for being accessible.

      The code to generate an HTML version of the maze is even simpler than our SVG renderer. We'll start by writing a helper function to describe the 'doors' leading out of a given room. That is, a function to create a textual description designed for a human to read. It takes a list of doors, and a Point representing the current room. And it returns a string that we can insert into a sentence written in English.

      import { NORTH, EAST, SOUTH, WEST, subtractPoint } from './point';
      
      const directionToString = new Map([
        [NORTH, 'north'],
        [EAST, 'east'],
        [SOUTH, 'south'],
        [WEST, 'west'],
      ]);
      
      function doorsDescription(doors, room) {
        const dirs = doors.map((door) => {
          const direction = directionToString.get(subtractPoint(door)(room));
          return direction;
        });
        return dirs.set(-1, (doors.size > 1 ? 'and ' : '') + dirs.get(-1)).join(', ');
      };
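
      The sentence-joining logic is the fiddly part, so here's the same idea with plain arrays (a hypothetical joinDirections, no ImmutableJS):

```javascript
// Joins direction names into prose: the last item gets an
// 'and ' prefix, and items are separated by ', ' (Oxford
// comma included, matching the example output below).
function joinDirections(dirs) {
  if (dirs.length <= 1) return dirs.join('');
  const last = 'and ' + dirs[dirs.length - 1];
  return [...dirs.slice(0, -1), last].join(', ');
}

console.log(joinDirections(['south', 'east']));
// south, and east
console.log(joinDirections(['east', 'north', 'south']));
// east, north, and south
```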
      

      Then we can write a function that generates the full HTML as follows:

      /**
       * Rooms to List.
       *
       * Takes a maze graph representation and renders it as
       * an HTML list.
       *
       * @param {Map<Point, List<Point>>} rooms A graph
       *   representation of the maze. That is, a map of rooms
       *   (Point objects) to adjacent rooms (a List of
       *   Point objects).
       * @returns An HTML string that represents the maze as
       *   an unordered list.
       */
      export function renderMazeAsList(rooms) {
        return (
          '<ul class="room-list">' +
          rooms
            .sortBy((_, { x, y }) => Math.sqrt(x ** 2 + y ** 2))
            .map(
              (doors, room) =>
                `<li class="maze-room">
                <p>Room ${room.x},${room.y}</p>
                <p>${doors.size === 1 ? 'There is a door' : 'There are doors'} to the
                ${doorsDescription(doors, room)}.</p>
               </li>`,
            )
            .join('\n') +
          '</ul>'
        );
      }
      

      This will generate an HTML string that looks something like the following:

      <div class="accessibleMaze">
        <ul class="room-list">
          <li class="maze-room">
            <p>Room 0,0</p>
            <p>There are doors to the south, and east.</p>
          </li>
          <li class="maze-room">
            <p>Room 1,0</p>
            <p>There are doors to the west, and east.</p>
          </li>
          <li class="maze-room">
            <p>Room 0,1</p>
            <p>There are doors to the east, north, and south.</p>
          </li>
      
          <!-- … You get the idea … -->
      
          <li class="maze-room">
            <p>Room 15,15</p>
            <p>There are doors to the west, and north.</p>
          </li>
        </ul>
      </div>
      

      And if we render it, it looks like what you see below. It might check the accessibility box, technically. But let's face it—it's rather dull.

      But perhaps we could enhance this a little. What if we made each list item focusable, and then added links to adjacent rooms? That way, you could navigate through the list using your keyboard.

      We'll start by adding a new helper function that will generate the list of links for us:

      function doorsToList(doors, room) {
        return (
          '<ul class="door-list">' +
          doors
            .map((door) => {
              const direction = directionToString.get(subtractPoint(door)(room));
              return `<li class="door door-${direction}">
                <a class="doorLink" href="#room-${door.x}-${door.y}" title="Take the ${direction} door">${direction}</a>
              </li>`;
            })
            .join('\n') +
          '</ul>'
        );
      }
      

      And then we update the HTML generating code:

      /**
       * Rooms to List.
       *
       * Takes a maze graph representation and renders it as
       * an HTML list.
       *
       * @param {Map<Point, List<Point>>} rooms A graph
       *   representation of the maze. That is, a map of rooms
       *   (Point objects) to adjacent rooms (a List of
       *   Point objects).
       * @returns An HTML string that represents the maze as
       *   an unordered list.
       */
      export function renderMazeAsList(rooms) {
        return (
          '<ul class="room-list">' +
          rooms
            .sortBy((_, { x, y }) => Math.sqrt(x ** 2 + y ** 2))
            .map(
              (doors, room) =>
                `<li tabindex="0" class="maze-room" id="room-${room.x}-${room.y}">
                <p>Room ${room.x},${room.y}</p>
                <p>${doors.size === 1 ? 'There is a door' : 'There are doors'} to the
                ${doorsDescription(doors, room)}.</p>
                ${doorsToList(doors, room)}
               </li>`,
            )
            .join('\n') +
          '</ul>'
        );
      }
      

      Note how we've added tabindex attributes to each list item to make them focusable. And we've added id attributes that match the links generated by doorsToList().

      Running this code, we get lengthier HTML:

      <div class="accessibleMaze">
        <ul class="room-list">
          <li tabindex="0" class="maze-room" id="room-0-0">
            <p>Room 0,0</p>
            <p>There are doors to the south, and east.</p>
            <ul class="door-list">
              <li class="door door-south">
                <a class="doorLink" href="#room-0-1" title="Take the south door">south</a>
              </li>
              <li class="door door-east">
                <a class="doorLink" href="#room-1-0" title="Take the east door">east</a>
              </li>
            </ul>
          </li>
          <li tabindex="0" class="maze-room" id="room-1-0">
            <p>Room 1,0</p>
            <p>There are doors to the west, and east.</p>
            <ul class="door-list">
              <li class="door door-west">
                <a class="doorLink" href="#room-0-0" title="Take the west door">west</a>
              </li>
              <li class="door door-east">
                <a class="doorLink" href="#room-2-0" title="Take the east door">east</a>
              </li>
            </ul>
          </li>
      
          <!-- … You get the idea … -->
      
          <li tabindex="0" class="maze-room" id="room-15-15">
            <p>Room 15,15</p>
            <p>There are doors to the west, and north.</p>
            <ul class="door-list">
              <li class="door door-west">
                <a class="doorLink" href="#room-14-15" title="Take the west door">west</a>
              </li>
              <li class="door door-north">
                <a class="doorLink" href="#room-15-14" title="Take the north door">north</a>
              </li>
            </ul>
          </li>
        </ul>
      </div>
      

      So, we've added some ids and tabindex attributes, and we've given each room a list of links that point to the adjacent rooms—kind of like doorways. And we've made this a bit easier to navigate with the keyboard.

      If we render that out, it looks like the following. Still pretty dull.

      But what if we added some CSS so that we only show the first room, or whichever list item is focussed? And, while we're playing with CSS, perhaps we could position the links around the text. And maybe we could add some background images and border images…

      /* Accessible Maze Rendering
       * ------------------------------------------------------------------------------ */
      
      .maze-room {
        box-sizing: border-box;
        list-style: none;
        margin: 0;
        width: 28em;
        height: 28em;
        background-image: url('./img/floor.png');
        background-size: 64px 64px;
        border-image: url('./img/walls.png');
        border-image-slice: 16;
        border-image-repeat: round;
        border-width: 64px;
        border-image-width: 64px;
        padding: 5em;
        position: absolute;
        left: -64em;
        top: 0;
      }
      
      .room-list:not(:has(:focus)) .maze-room:first-child,
      .maze-room:focus,
      .maze-room:has(:focus) {
        outline: none;
        left: 0;
      }
      
      .door {
        list-style: none;
        margin: 0;
        padding: 0;
        position: absolute;
        background: url('./img/dungeon-doors.png') transparent;
        background-size: 224px 224px;
      }
      
      .doorLink {
        display: block;
        width: 100%;
        height: 100%;
        text-align: center;
        background-repeat: no-repeat;
        overflow: hidden;
        text-indent: -99em;
      }
      
      .door-south {
        background-position: top center;
        height: 4em;
        width: calc(100% - 10em);
        bottom: 0;
        left: 5em;
      }
      
      .door-north {
        background-position: bottom center;
        height: 4em;
        width: calc(100% - 10em);
        top: 0;
        left: 5em;
      }
      
      .door-west {
        background-position: center right;
        width: 4em;
        height: calc(100% - 10em);
        top: 5em;
        left: 0;
      }
      
      .door-east {
        background-position: center left;
        width: 4em;
        height: calc(100% - 10em);
        top: 5em;
        right: 0;
      }
      
      #room-0-0::after {
        content: ' ';
        display: block;
        position: absolute;
        top: 5em;
        left: 0;
        height: calc(100% - 10em);
        width: 4em;
        background: url('./img/dungeon-exits.png') center right no-repeat;
        background-size: 128px 88px;
      }
      
      .maze-room:last-child::after {
        content: ' ';
        display: block;
        position: absolute;
        top: 5em;
        right: 0;
        height: calc(100% - 10em);
        width: 4em;
        background: url('./img/dungeon-exits.png') center left no-repeat;
        background-size: 128px 88px;
      }
      

      Perhaps we could throw in some pixel art… and a random object or two. And suddenly, you've got the beginnings of a game. No JS required.

      Try clicking the doors and see where it takes you. Can you find the south-east corner with the exit door?

      So what?

      What have we done here?

      We've looked at three different methods for rendering a maze graph. And, strangely, the method with the simplest output (the Unicode renderer) involved the most complex code. Yet, arguably the most interesting output was the HTML renderer. And that involved the most straightforward code.

      What's more intriguing, though, is that thinking through how to make our maze accessible led us on an adventure. By adding some simple CSS, we created something visually appealing and exciting. And it makes me wonder. People tend to think of accessibility as admirable, but perhaps not essential. That is, something we fully intend to consider, after we've done the 'real' work. But what if we're missing out by having this attitude? Does accessibility have to be a tedious chore? What if we thought about it differently? What if we considered it a constraint that leads to more creative output? And possibly an opportunity to inject more fun and delight into our products? Maybe that's worth pondering some more.

      Finally, I've created a GitHub repository and npm package for this code. Just in case you want to muck around with mazes but don't want to write this all out by hand.

    3. 🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator
      [link] [comments]

  2. August 24, 2025
    1. 🔗 r/reverseengineering Help in Reversing a binary, which appears to be shellcode! rss

      Hello,

      I’m stuck on reversing a .bin binary file. You can find it here: https://bradseek.top/GitHubData/stonecross.bin. If the website is down, I can provide the sample directly.

      Thanks in advance for your help.

      submitted by /u/MGHVL7
      [link] [comments]

    2. 🔗 r/LocalLLaMA All of the top 15 OS models on Design Arena come from China. The best non-Chinese model is GPT OSS 120B, ranked at 16th rss

      China is not only the main competitor to the US in the overall AI race, but is dominating the open-source landscape. Out of the open source models listed on Design Arena (a UI/UX and frontend benchmark for LLMs), Chinese models take up all of the top 15 spots, with the first non-Chinese model making its appearance at #16 as GPT OSS 120B, developed by OpenAI. It's really remarkable what DeepSeek, Zhipu, Kimi, and Qwen have been able to do while staying OS.

      submitted by /u/Accomplished-Copy332
      [link] [comments]

    3. 🔗 r/LocalLLaMA Mistral Large soon? rss
    4. 🔗 r/LocalLLaMA Elmo is providing rss

      submitted by /u/vladlearns
      [link] [comments]

    5. 🔗 Register Spill Joy & Curiosity #51 rss

      I'm happy to report that I've reached a new milestone in my life: I can now ride my bike with no hands.

      For all my life I thought it was something incredibly hard. I thought of it like I thought of the ability to juggle three balls. It's something you have to practice and only people with serious dedication end up being able to do it. But then, last week, my friend said he bikes to work without his hands on the bars. Huh. Then I told my wife about it, saying that's incredible. She said, "why? Everybody can do it, right? I can do it." What? My wife, who (and let me say: I love her) has managed to bump into every wall of this house, forwards and backwards, just by trying to walk here, can ride her bike with no hands? Maybe… Maybe I can do it?

      Yesterday I did it. On a bike ride with my wife, who immediately taunted me by taking her hands off the bars and waving them around, I did it. Took my hands off the handlebars and rode my bike for a few hundred meters. "You look so happy," she said.

      • This is one of those posts that immediately make me think: "I'm going to reference this a lot in the future." I sent the author, James, a note to thank him for writing it and told him: "I recognise a lot of stuff in there that I've seen or done or know to do, but lacked the words for." I read the book The Nature of Software Development many, many years ago and, to this day, I think it's one of the best things that have been written about agile (lower-case) software development. And a lot of what Ron Jeffries wrote in his book is echoed in James' piece here: the "Strategy is a Cone" framing, the stepping stones (obviously), the acceptance of unknown unknowns, the "you can't just sit there and scratch your chin and figure it out" -- highly recommended.

      • Very important post that I recommend reading: Building AI Products In The Probabilistic Era. It touches on a lot of things that are changing, fundamentally. "Stop for a moment to realize what this means. When building on top of this technology, our products can now succeed in ways we've never even imagined, and fail in ways we never intended." And then there's this: "With AI products, all this is no longer true. These models are discovered, not engineered. There's some deep unknowability about them that is both powerful and scary. Not even model makers know exactly what their creations can fully do when they train them. It's why 'vibe' is such an apt word: faced with this inherent uncertainty, we're left to trust our own gut and intuition when judging what these models are truly capable of." And here's something that I've felt too but that's very hard to explain: "⁠⁠This doesn't work anymore. The more you try to control the model, the more you'll nerf it, ultimately damaging the product itself. Past a certain point, intelligence and control start becoming opposing needs." Very good post.

      • "What if your agent uses a different LM at every turn? We let mini-SWE-agent randomly switch between GPT-5 and Sonnet 4 and it scored higher on SWE-bench than with either model separately." The era of model alloys is upon us, I think. (My teammate Camden talked to Beyang about this exact topic, in case you're interested in his and our thinking on the topic.)

      • SolveIt looks interesting. I found out about it through this video, which I haven't watched in full, but in the video you can see "Jeremy Howard and Johno Whitaker present SolveIt, a development environment designed to mitigate the downsides of 'vibe coding' by encouraging deliberate, step-by-step problem-solving." And if you click around at this timestamp here you can see that in action. If nothing else, it's an interesting idea.

      • "Existing search tools on Windows suck. Even with an SSD, it's painfully slow. So I built a prototype of Nowgrep. It bypasses most of the slow Windows nonsense, and just parses the raw NTFS." I had no clue that there are more greps being worked on. I thought ripgrep won the game and the game's over. But maybe not on Windows? (Also: I had never heard of BareGrep and just the screenshots alone bring back memories.)

      • As some of you know, some of my pet interests are espionage and corporate espionage (the whole Deel vs. Rippling thing was my jam, as they say) but also cybercrime with state actors, and so when John Collison asked Brian Armstrong in this episode of the Stripe podcast Cheeky Pint "what does the general tech public not appreciate about the cyber crime landscape?" and Brian Armstrong replied with "there's a lot of North Korean agents trying to work at these companies" my heart started to beat a little faster. Very interesting 5 minute section in that episode.

      • This was a ton of fun and made me think (like I did many times before, probably naively) that this is how complex topics should be taught in school: Moving Objects in 3D space.

      • Do you know who Eoghan McCabe is? He's the CEO of Intercom. And this, as I found out yesterday, is his personal homepage: eoghanmccabe.com. It's fantastic on many levels, the styling is just one thing, but look at it: this is truly a personal homepage. There's a bio, there's some thoughts, there's interests, there's hobbies, there's photos, there's links. So good.

      • Very, very interesting: "Tidewave Web for Rails and Phoenix: a coding agent that runs directly in the browser alongside your web application, in your own development environment, with full page and code context." I think this is only the start of frameworks and models melting, because, at the end of the day, I think, agents and frameworks try to solve the same thing: reduce the amount of code that has to be written.

      • Jujutsu For Busy Devs -- very, very good. Finally learned about mine() being a valid revset (which is exactly what I was looking for earlier today).

      • "In a manner of speaking, that smaller Rust is the language I fell in love with when I first learned it in 2018. Rust is a lot bigger today, in many ways, and the smaller Rust is just a nostalgic rose-tinted memory. But I think it's worth studying as an example of how well orthogonal features can compose when they're designed as one cohesive whole." (Side-note: I didn't know that you could use a hashbang cargo invocation to run Rust programs like that, including TOML and all)

      • matklad on the TigerBeetle blog: Code Review Can Be Better. Most review tools nowadays I don't find that interesting anymore (I also did a near 180 on code reviews in the last one and a half years and am now less enthused about it), but this paragraph stood out to me: "When I review code, I like to pull the source branch locally. Then I soft-reset the code to the merge base, so that the code looks as if it was written by me. Then I fire up magit, which allows me to effectively navigate both through the diff, and through the actual code. And I even use git staging area to mark files I've already reviewed" Now, that's how you should probably review code. I've never done the unstaging before, but a proper, proper review requires checking out the code, I think, and the unstaging/staging is very smart.

      • Levon Helm: "I don't know. I guess it's from being born in Helena, Arkansas. That's a pretty basic part of America where there's a lot of good basic music. Drums just always sounded like the most fun part of that good music for me. I had the opportunity to see some of the traveling minstrel shows years ago, with the house band, the chorus line, the comedians and singers. In those kinds of shows, with horns and a full rhythm section, the drums always looked like the best seat in the house."

      If you ever drummed with your fingers on a table and said to your wife "listen, it's like the double-bass section in Metallica's One-- no, listen: DARKNESS, IMPRISONING ME" -- then you should subscribe:

    6. 🔗 r/LocalLLaMA There are at least 15 open source models I could find that can be run on a consumer GPU and which are better than Grok 2 (according to Artificial Analysis) rss

      And they have better licenses, less restrictions. What exactly is the point of Grok 2 then? I appreciate open source effort, but wouldn't it make more sense to open source a competitive model that can at least be run locally by most people? submitted by /u/obvithrowaway34434
      [link] [comments]

  3. August 23, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-08-23 rss

      IDA Plugin Updates on 2025-08-23

      New Releases:

      Activity:

    2. 🔗 r/LocalLLaMA grok 2 weights rss
    3. 🔗 r/LocalLLaMA Google and Anthropic struggle to keep marketshare as everyone else catches up rss

      Data from last 6 months on OpenRouter compared to now. submitted by /u/ObnoxiouslyVivid
      [link] [comments]

    4. 🔗 oxigraph/oxigraph v0.5.0-beta.4 release
      • oxigraph: Transaction now has stronger lifetime bounds to ensure that operations on a transaction do not outlive the transaction itself. This also affects types like QuadIter or GraphNameIter, which now carry a lifetime.
      • spareval: expose QueryEvaluator::evaluate_expression to evaluate expressions
    5. 🔗 ryoppippi/ccusage v16.2.0 release

      🚨 Breaking Changes

      • Replace environment variables with statusline CLI options for context thresholds - by @ryoppippi in #578 (2d055)

      🚀 Features

      [View changes on GitHub](https://github.com/ryoppippi/ccusage/compare/v16.1.2...v16.2.0)

    6. 🔗 r/wiesbaden Swimming in the Rheingau rss

      I'm new to Oestrich-Winkel and I'm looking for an indoor pool nearby (or in Wiesbaden). Does anyone know a good place to go swimming around here?

      submitted by /u/leniw291
      [link] [comments]

    7. 🔗 r/reverseengineering Ghidra + DLL Proxy = Nostalgic Bytes: Reverse Engineering AirStrike 3D for Fun rss

      Found myself going down a deep nostalgia hole with AirStrike 3D II (seems like every dev has that one childhood game), so naturally I had to tear it apart.

      Everything was done on Fedora Linux with the help of Steam Proton.

      What's done:

      ASProtect v1.0 unpacking (debugger → dump at game main loop (e.g. main menu) → analysis)

      Custom divo APK extraction (XOR cipher)

      MDL↔OBJ conversion

      Save decryption + ImHex structs

      MO3 audio modules → WAV pipeline

      bass.dll (audio lib) proxy for simple opengl in game overlay

      Ghidra project with annotated functions


      P.s. I'm a beginner—don't judge harshly :)

      submitted by /u/Ascendo_Aquila
      [link] [comments]

    8. 🔗 r/wiesbaden Looking for a premium cocktail bar rss

      Hey!

      Does anyone know a premium cocktail bar in Wiesbaden that's open today? I saw that Lemz isn't open today.

      Thanks in advance!!!

      submitted by /u/SoilSweet8555
      [link] [comments]

    9. 🔗 sacha chua :: living an awesome life Notes: Pottery wheel afternoon summer camp rss

      Today was the last day of A+'s week-long wheel-throwing afternoon summer camp at Parkdale Pottery in Toronto. She's focused on wheel throwing at the moment, not hand-building. It's hard to find pottery wheel lessons for 9-year-olds because of strength and safety concerns. A+'s been doing the all-ages 2-hour wheel-throwing workshops at Clay With Me independently around once a month, and she's also tried painting premade pieces. It felt like a minor miracle to find a half-day camp focused on just what she wanted.

      Before the workshop, A+ wasn't sure about trying out a different studio, since she'd gotten comfortable at Clay With Me. She settled in quickly, though, and even took charge of packing her snacks and getting her clothes and apron ready for the next day. It was great to see her grow more independent.

      A+ likes to work with smaller balls of clay so that they're easier to centre and handle. In Clay with Me workshops, she usually asks the instructors to divide a ball in half. Because the Parkdale Pottery camp was for kids 8-12 years old, the clay balls they provided were the right size for her hands, and the instructors also showed the kids how to prepare their own.

      The first three days focused on wheel throwing. The instructor complimented A+ on her centring skills. She's gotten pretty good at bracing herself so that she can form the puck right in the middle. She also learned about adding attachments by scoring the clay and adding slip. The fourth day was about refining and trimming, and the fifth day was about glazing. She enjoyed learning how to marble her pieces with interesting blue-and-white swirls, and I enjoyed her description of the process: layering the underglazes, then swirling them around to create the design. This was the first time she was able to trim and glaze her own pieces, since the Clay with Me workshops are one-off sessions where the pieces are all finished with a clear food-safe glaze. Parkdale Pottery will fire A+'s pieces with a food-safe glaze too, and we'll pick them up in a few weeks.

      When kids finished early or wanted to take a break, they explored hand-building, drew circles with markers on paper attached to pottery wheels, worked with beads, and played the board game Trouble. The instructors did a good job of managing the occasional squabbles.

      Looking at other students' work on the shelves and the instructional posters on the wall, I saw interesting ideas that we might try in future workshops. (Gotta make a face vase…)

      The half-day summer camp was from 1 PM to 4 PM from Monday to Friday, and it cost $250+HST. There was a full-day option, but A+ wasn't interested in hand-building. I think the half-day was worth it, especially since I managed to squeeze in about 2 hours of consulting every day even with setting aside time to bike back and forth. We're gradually transitioning to the phase where she wants to learn about things I can't teach her, and paying for clay workshops is a great way to access people's specialized expertise and equipment. I don't know how many kids there were in the camp, but A+ was happy with the teacher-student ratio and felt like she had enough time to get whatever help she needed.

      From her previous workshops, we've collected a good selection of little ice cream bowls and saucers. This camp will add a few more saucers and tiny bowls. It might be a good idea to learn how to make little treats (maybe chocolate truffles?) that we can place on the saucers for an extra-special birthday gift. ("Wrapped in plastic and tied with a bow?" she asks.)

      Next steps: We'll probably continue with the Clay with Me workshops, since A+ likes the studio and is comfortable with the process. I also want to explore a little handbuilding with polymer clay and air dry clay, and some sketching to imagine pieces. Maybe she'll get into that too. When we come up with pieces we really like, we can do one of the handbuilding workshops at a pottery studio in order to make a food-safe version, or consider a clay-at-home package (Shaw Street Pottery) that can be fired. When A+ turns 10, she'll be old enough for the wheel courses at places like Create Art Studio and 4cats. They generally schedule their teen wheel courses on weekdays, though, and a weekend would probably be better for us.

      A+ wants to do this summer camp again next year. She prefers unstructured time and plenty of afternoon playdates, so it'll probably be just one week, like this year. We'll see when we get there. Plenty to explore. It's nice to have a craft, and maybe this will be one of hers.

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    10. 🔗 matklad Retry Loop Retry rss

      Retry Loop Retry

      Aug 23, 2025

      Some time ago I lamented that I don’t know how to write a retry loop such that:

      • it is syntactically obvious that the amount of retries is bounded,
      • there’s no spurious extra sleep after the last attempt,
      • the original error is reported if retrying fails,
      • there’s no code duplication in the loop.

      https://matklad.github.io/2023/12/21/retry-loop.html

      To recap, we have

      fn action() E!T { ... }
      fn is_transient_error(err: E) bool { ... }
      

      and we need to write

      fn action_with_retries(retry_count: u32) E!T { ... }
      

      I’ve received many suggestions, and the best one was from https://www.joachimschipper.nl, though it was somewhat specific to Python:

      for tries_left in reversed(range(retry_count)):
          try:
              return action()
          except Exception as e:
              if tries_left == 0 or not is_transient_error(e):
                  raise
              sleep()
      else:
          assert False
      

      A couple of days ago I learned to think better about the problem. You see, the first requirement, that the number of retries is bounded syntactically, was leading me down the wrong path. If we start with that requirement, we get a code shape like:

      const result: E!T = for (0..retry_count) {
          // ???
          action()
          // ???
      }
      

      The salient point here is that, no matter what we do, we need to get E or T out as a result, so we’ll have to call action() at least once. But retry_count could be zero. Looking at the static semantics, any non-do-while loop’s body can be skipped completely, so we’ll have to have some runtime asserts explaining to the compiler that we really did run action at least once. The part of the loop which is guaranteed to be executed at least once is a condition. So it’s more fruitful to flip this around: it’s not that we are looping until we are out of attempts, but, rather, we are looping while the underlying action returns an error, and then retries are an extra condition to exit the loop early:

      var retries_left = retry_count;
      const result = try while(true) {
          const err = if (action()) |ok| break ok else |err| err;
          if (!is_transient_error(err)) break err;
      
          if (retries_left == 0) break err;
          retries_left -= 1;
          sleep();
      };
      

      This shape of the loop also works if the condition for retries is not attempts based, but, say, time based. Sadly, this throws the “loop is obviously bounded” requirement out of the window. But it can be restored by adding an upper bound to the infinite loop:

      var retries_left = retry_count;
      const result = try for(0..retry_count + 1) {
          const err = if (action()) |ok| break ok else |err| err;
          if (!is_transient_error(err)) break err;
      
          if (retries_left == 0) break err;
          retries_left -= 1;
          sleep();
      } else @panic("runaway loop");
      

      I still don’t like it (if you forget that +1, you’ll get a panic!), but that’s where I am at!
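
      The final shape isn't Zig-specific: loop while the action fails, with the retry budget as just one more early-exit condition. Here's a minimal sketch of that structure in C, with a toy action() that fails transiently twice before succeeding (all names here are illustrative, not from the post):

      ```c
      #include <stdbool.h>

      /* Toy action: fails transiently on the first two calls, then yields 42.
       * In a real program this would be the fallible operation being retried. */
      static int calls = 0;

      static bool action(int *out) {
          calls++;
          if (calls < 3)
              return false; /* simulated transient failure */
          *out = 42;
          return true;
      }

      static bool is_transient_error(void) { return true; }

      /* Loop while the action fails; the attempt budget is an extra
       * early-exit condition, mirroring the final loop shape above. */
      static bool action_with_retries(int retry_count, int *out) {
          int retries_left = retry_count;
          for (;;) {
              if (action(out))
                  return true;      /* success breaks the loop */
              if (!is_transient_error())
                  return false;     /* permanent error: report it */
              if (retries_left == 0)
                  return false;     /* budget exhausted: report last error */
              retries_left -= 1;
              /* sleep()/backoff would go here */
          }
      }
      ```

      Note that the success path and both error paths exit through a condition that is checked on every iteration, so no runtime assert is needed to convince anyone that action() ran at least once.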

    11. 🔗 matklad Links rss

      Links

      Aug 23, 2025

      If you have a blog, consider adding a “links” page to it, which references resources that you find notable: https://matklad.github.io/links.html

      I started my links page several years ago, mostly because I found myself referring to the same few links repeatedly in various discussions, and not all the links were easily searchable.

      Note that the suggestion is different from more typical “monthly links roundup”, which is nice to maintain Substack engagement/community, but doesn’t contribute to long-term knowledge distilling.

      It is also different from the exhaustive list of everything I’ve read on the Internet. It is relatively short, considering its age.

  4. August 22, 2025
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2025-08-22 rss

      IDA Plugin Updates on 2025-08-22

      Activity:

      • capa
      • ghidra
        • 4fcc1feb: Merge remote-tracking branch 'origin/GP-5904_ghidorahrex_PR-8394_RibS…
        • 826e5203: Merge remote-tracking branch 'origin/GP-5903_ghidorahrex_PR-8393_RibS…
        • f6d35f0d: Merge remote-tracking branch
        • bdf3c1d2: GP-5885 - Updated the Next Instruction action to jump to the the func…
        • dc09c94c: Merge remote-tracking branch
        • 58007f4f: Merge remote-tracking branch 'origin/GP-0-dragonmacher-test-fixes-8-2…
        • 12a926bd: Merge remote-tracking branch 'origin/GP-1-dragonmacher-symbol-tree-bu…
        • 8fa692b0: Merge remote-tracking branch 'origin/patch'
        • 5c1e6540: Merge remote-tracking branch 'origin/GP-5860-dragonmacher-function-co…
        • daec88be: Merge branch 'GP-5917_emteere_SwitchAnalyzerSpeedIssue' into patch
        • 48adb5ec: GP-5917 Use a hashset for functions to reduce reduntant decompiler use
      • SecOps
    2. 🔗 @binaryninja@infosec.exchange WARP speed ahead! Want to learn more about the future of function matching in mastodon

      WARP speed ahead! Want to learn more about the future of function matching in Binary Ninja (and hopefully your other favorite tools too!)? Mason talks about that and more in our latest blog post: https://binary.ninja/2025/08/22/warp.html

    3. 🔗 @HexRaysSA@infosec.exchange 💪Do you have a strong CTF team and would like some support? mastodon

      💪Do you have a strong CTF team and would like some support?

      ➥ Hex-Rays will be sponsoring 4 CTF teams for 1 year. Learn more about the sponsorship program below, but don't sleep; the submission window closes on August 31!
      https://hex-rays.com/ctf-sponsorship-program

    4. 🔗 r/LocalLLaMA Seed-OSS-36B is ridiculously good rss

      https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Instruct

      The model was released a few days ago. It has a native context length of 512k. A pull request has been made to llama.cpp to get support for it.

      I just tried running it with the code changes in the pull request, and it works wonderfully. Unlike other models (such as Qwen3, which supposedly has a 256k context length), the model can generate long coherent outputs without refusal.

      I tried many other models like Qwen3 or Hunyuan, but none of them are able to generate long outputs, and they even often complain that the task may be too difficult or may "exceed the limits" of the LLM. But this model doesn't even complain; it just gets down to it. One other model that also excels at this is GLM-4.5, but its context length is much smaller, unfortunately.

      Seed-OSS-36B also apparently scored 94 on RULER at 128k context, which is insane for a 36B model (as reported by the maintainer of chatllm.cpp).

      submitted by /u/mahmooz
      [link] [comments]

    5. 🔗 r/reverseengineering Sprites mods - Magic Printer Cartridge Paintbrush rss

      ESP32 Inkjet Cartridge Controller Project - Hardware Debugging Help Needed

      I'm reproducing Jeroen Domburg's HP63 cartridge controller project (Magic Printer Cartridge Paintbrush) and have encountered several hardware failures. Looking for advice on debugging strategy and potential design issues.

      Project Status: Successfully achieved some ink output (cyan, occasional yellow) before hardware failures occurred. Using Jeroen's original KiCad files and exact component specifications.

      Hardware Architecture:

      • 3-board system: PSU board (3.3V/9V/16V rails), ESP32 board, cartridge control board
      • MC14504B level converters for 3.3V to 9V/16V translation
      • Custom power protection circuit for nozzle drive (10µs pulse limiting)
      • ESP32-S3 as programmer, GPIO22 substituted for GPIO12 (to avoid using bootstrapping pin)

      Current Issues:

      1. Level Converter Behavior (MC14504B):
        • Inconsistent signal propagation delays under load
        • Some cartridges require timing adjustments to function
        • DCLK signal integrity issues between ESP32 output and level converter output
        • Suspected latch-up when VCC pins left floating during initial assembly
      2. Power Supply Problems:
        • 9V rail jumping to 15.7V when cartridge connected (should remain 9V)
        • Current spikes causing brownout detection on ESP32 (triggers at 2.44V threshold)
        • Final failure: VCC/GND short on ESP32 after power supply voltage drop
      3. Assembly Sequence Issues:
        • Initial assembly with floating VCC pins on level converters caused component damage
        • Replacement of U13 (MC14504B) resolved initial voltage issues
        • Subsequent failure during operation with cartridge connected

      Measurements (V_in = 4.2V):

      • Idle (no cartridge): 45mA
      • Cartridge connected, no dispensing: 45mA
      • Dispensing without cartridge: ~80mA
      • Dispensing with cartridge: ~150mA

      Logic Analyzer Results:

      • ESP32 outputs appear correct per waveform templates
      • Power protection circuit functions correctly (10µs pulse limiting verified)
      • DCLK signal shows inconsistencies between ESP32 and level converter outputs

      Specific Questions:

      1. Assembly Strategy: What's the recommended power-up sequence for MC14504B-based designs? Should VCC always be applied before input signals?
      2. Level Converter Issues: Given MC14504B's limited current output and propagation delays, are there better alternatives for 3.3V to 9V/16V level shifting in this application?
      3. Protection Recommendations: What additional protection (diodes, current limiting resistors) would prevent ESP32 damage from power supply issues?
      4. DCLK Signal Integrity: How can I debug and correct the timing inconsistencies in the DCLK path through the level converters?

      submitted by /u/Vegetable_Pass_9597
      [link] [comments]

    6. 🔗 News Minimalist 🐢 China's carbon emissions starting to fall + 10 more stories rss

      In the last 3 days ChatGPT read 80484 top news stories. After removing previously covered events, there are 11 articles with a significance score over 5.9.

      [5.9] China's carbon emissions may have peaked —abcnews.go.com(+4)

      China's carbon emissions edged down 1% in the first half of 2025, driven by rapid solar energy expansion, including a massive new farm on the Tibetan plateau.

      The emission decrease extends a trend that began in March 2024, attributed to increased solar, wind, and nuclear power outpacing growing electricity demand.

      China installed 212 gigawatts of solar capacity in the first six months of the year, exceeding the entire U.S. capacity.

      [5.6] AI helps UK woman rediscover lost voice after 25 years —sg.news.yahoo.com(+2)

      AI restored the voice of a UK woman with motor neurone disease after 25 years of silence, using just eight seconds of old audio.

      Sarah Ezekiel, diagnosed with MND 25 years ago, could only provide a short, low-quality clip. AI, developed by ElevenLabs, isolated her voice and filled gaps, creating a realistic version with her original accent.

      Traditional voice restoration requires hours-long, high-quality recordings. This breakthrough allows individuals with limited audio samples to regain their unique vocal identity, preserving a crucial aspect of their selfhood.

      Highly covered news with significance over 5.5

      [6.2] Sweden will build new modular nuclear reactors for the first time in 50 years — di.se (Swedish) (+6)

      [6.1] Famine confirmed in Gaza City for first time — bbc.com (+339)

      [5.6] US sanctions ICC judges over investigations — zeit.de (German) (+50)

      [6.1] Scientists program cells to create first biological qubit — phys.org (+2)

      [5.5] Ukraine has successfully tested long-range Flamingo missile, capable of reaching 3,000 kilometers — huffingtonpost.fr (French) (+14)

      [5.6] Meta signs $10 billion cloud deal with Google — economictimes.indiatimes.com (+9)

      [5.5] Japan offers African nations debt relief as China alternative — ledevoir.com (French) (+3)

      [6.0] Scientists created a quantum logic gate using a single atom — phys.org (+3)

      [5.5] Astronomers get first look deep inside a star during supernova explosion — abc.net.au (+13)

      Thanks for reading!

      — Vadim



    7. 🔗 r/LocalLLaMA I'm making a game where all the dialogue is generated by the player + a local llm rss

      submitted by /u/LandoRingel
      [link] [comments]

    8. 🔗 ryoppippi/ccusage v16.1.2 release

      🚀 Features

      🐞 Bug Fixes

      [View changes on GitHub](https://github.com/ryoppippi/ccusage/compare/v16.1.1...v16.1.2)

    9. 🔗 syncthing/syncthing v2.0.3 release

      Major changes in 2.0

      • Database backend switched from LevelDB to SQLite. There is a migration on
        first launch which can be lengthy for larger setups. The new database is
        easier to understand and maintain and, hopefully, less buggy.

      • The logging format has changed to use structured log entries (a message
        plus several key-value pairs). Additionally, we can now control the log
        level per package, and a new log level WARNING has been inserted between
        INFO and ERROR (which was previously known as WARNING). The INFO level
        has become more verbose, indicating the sync actions taken by Syncthing. A
        new command line flag --log-level sets the default log level for all
        packages, and the STTRACE environment variable and GUI have been updated
        to set log levels per package. The --verbose and --logflags command
        line options have been removed and will be ignored if given.

      • Deleted items are no longer kept forever in the database; instead they are
        forgotten after fifteen months. If your use case requires deletes to take
        effect after more than a fifteen-month delay, set the
        --db-delete-retention-interval command line option or corresponding
        environment variable to zero, or a longer time interval of your choosing.

      • Modernised command line options parsing. Old single-dash long options are
        no longer supported, e.g. -home must be given as --home. Some options
        have been renamed, others have become subcommands. All serve options are
        now also accepted as environment variables. See syncthing --help and
        syncthing serve --help for details.

      • Rolling hash detection of shifted data is no longer supported as this
        effectively never helped. Instead, scanning and syncing is faster and more
        efficient without it.

      • A "default folder" is no longer created on first startup.

      • Multiple connections are now used by default between v2 devices. The new
        default value is to use three connections: one for index metadata and two
        for data exchange.

      • The following platforms unfortunately no longer get prebuilt binaries for
        download at syncthing.net and on GitHub, due to complexities related to
        cross compilation with SQLite:

        • dragonfly/amd64
        • illumos/amd64 and solaris/amd64
        • linux/ppc64
        • netbsd/*
        • openbsd/386 and openbsd/arm
        • windows/arm
      • The handling of conflict resolution involving deleted files has changed. A
        delete can now be the winning outcome of conflict resolution, resulting in
        the deleted file being moved to a conflict copy.

      This release is also available as:

      • APT repository: https://apt.syncthing.net/

      • Docker image: docker.io/syncthing/syncthing:2.0.3 or ghcr.io/syncthing/syncthing:2.0.3
        ({docker,ghcr}.io/syncthing/syncthing:2 to follow just the major version)

      What's Changed

      Fixes

      • fix(cmd): restore --version flag for compatibility by @acolomb in #10269
      • fix(cmd): make database migration more robust to write errors by @calmh in #10278
      • fix(cmd): provide temporary GUI/API server during database migration by @calmh in #10279
      • fix(db): clean files for dropped folders at startup by @calmh in #10280

      Other

      Full Changelog : v2.0.2...v2.0.3

    10. 🔗 r/LocalLLaMA What is Gemma 3 270M actually used for? rss

      All I can think of is speculative decoding. Can it even RAG that well? submitted by /u/airbus_a360_when
      [link] [comments]

    11. 🔗 r/reverseengineering [Release/Showcase] Minimal LD_PRELOAD “observe‑only” interposer for your own .so — hook, log, plot (with CI) rss

      I put together a tiny, observe‑only LD_PRELOAD template aimed at RE workflows. It interposes a function in a self‑owned .so, logs args/ret/latency to CSV, and auto‑plots a histogram in GitHub Actions. Useful as a lightweight dynamic probe before pulling out heavier tooling.

      • What you get
        • libhook.so that forwards via dlsym(RTLD_NEXT, ...)
        • Demo target libdemo.so and a small driver
        • hook.csv + latency.png (generated locally or in CI artifacts)
        • Clean Makefile and a CI pipeline: build → run with LD_PRELOAD → plot → upload
      • Quick start
        • git clone https://github.com/adilungo39/libdemo-instrumentation && cd libdemo-instrumentation && make && make run && make plot
        • Artifacts are also downloadable from the repo’s Actions tab (ci-artifacts).
      • How it works (core idea)
        • real_demo_add = (demo_add_fn)dlsym(RTLD_NEXT, "demo_add"); // take timestamps around the real call, then append a CSV line
        • The interposer uses constructor/destructor hooks for setup/teardown and logs: ts,a,b,r,ms.
      • Why RE folks might care
        • Fast dynamic probe to sanity‑check call behavior and timing
        • Template for writing custom interposers, adding filters, thread IDs, JSON output, p95/p99, etc.
        • CI‑friendly: every push produces fresh logs and plots
      • Scope and limitations
        • Linux/glibc, gcc; intended for self‑owned code or permitted scenarios
        • Minimal example (single symbol, simple logging); not a general tracer

      Feedback welcome: features you’d want for RE (symbol selection, demangling, GOT/PLT tricks, multi‑thread correlation, JSON lines, env‑driven filters). If useful, feel free to fork or open issues.

      Flair suggestion: Tooling / PoC
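
      The core dlsym(RTLD_NEXT, ...) forwarding trick is easy to sketch. The sketch below follows the same pattern as the post's demo_add hook, but interposes libc's getpid instead so it is self-contained and runnable without building libdemo.so; the names and the CSV layout are illustrative, not taken from the repo:

      ```c
      /* Observe-only interposer sketch: shadow a symbol, time the real call,
       * append a CSV line to stderr. Build as a shared object and LD_PRELOAD it. */
      #define _GNU_SOURCE
      #include <dlfcn.h>
      #include <stdio.h>
      #include <time.h>
      #include <unistd.h>

      typedef pid_t (*getpid_fn)(void);

      /* Our definition shadows libc's; RTLD_NEXT resolves the real one. */
      pid_t getpid(void) {
          static getpid_fn real = NULL;
          if (!real)
              real = (getpid_fn)dlsym(RTLD_NEXT, "getpid");

          struct timespec t0, t1;
          clock_gettime(CLOCK_MONOTONIC, &t0);
          pid_t r = real();                 /* forward to the real call */
          clock_gettime(CLOCK_MONOTONIC, &t1);

          double ms = (t1.tv_sec - t0.tv_sec) * 1e3
                    + (t1.tv_nsec - t0.tv_nsec) / 1e6;
          fprintf(stderr, "%ld,%d,%.6f\n",  /* ts,ret,ms */
                  (long)time(NULL), (int)r, ms);
          return r;
      }
      ```

      Built with something like gcc -shared -fPIC hook.c -o libhook.so (add -ldl on older glibc) and run as LD_PRELOAD=./libhook.so ./target, every getpid() call in the target process gets logged; swapping in a symbol from your own .so is the same mechanics with a different typedef and name.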

      submitted by /u/Afolun
      [link] [comments]

    12. 🔗 Andrew Healey's Blog Icepath: a 2D Programming Language rss

      Sliding around a cave and hitting opcodes.

    13. 🔗 Servo Blog This month in Servo: new image formats, canvas backends, automation, and more! rss

      Servo has smashed its record again in July, with 367 pull requests landing in our nightly builds! This includes several new web platform features:

      Notable changes for Servo library consumers:

      servoshell nightly showing the same things, but animated

      texImage3D() example reproduced from texture_2d_array in the WebGL 2.0 Samples by Trung Le, Shuai Shao (Shrek), et al (license).

      Engine changes

      Like many browsers, Servo has two kinds of zoom: page zoom affects the size of the viewport, while pinch zoom does not (@shubhamg13, #38194). Page zoom now correctly triggers reflow (@mrobinson, #38166), and pinch zoom is now reset to the viewport meta config when navigating (@shubhamg13, #37315).

      The ‘image-rendering’ property now affects ‘border-image’ (@lumiscosity, @Loirooriol, #38346), ‘text-decoration[-line]’ is now drawn under whitespace (@leo030303, @Loirooriol, #38007), and we’ve also fixed several layout bugs around grid item contents (@Loirooriol, #37981), table cell contents (@Loirooriol, #38290), quirks mode (@Loirooriol, #37814, #37831, #37820, #37837), clientWidth and clientHeight queries of grid layouts (@Loirooriol, #37917), and ‘min-height’ and ‘max-height’ of replaced elements (@Loirooriol, #37758).

      As part of our incremental layout project, we now cache the layout results of replaced boxes (@Loirooriol, #37971, #37897, #37962, #37943, #37985, #38349), avoid unnecessary reflows after animations (@coding-joedow, #37954), invalidate layouts more precisely (@coding-joedow, #38199, #38057, #38198, #38059), and we’ve added incremental box tree construction (@mrobinson, @Loirooriol, @coding-joedow, #37751, #37957) for flex and grid items (@coding-joedow, #37854), table columns, cells, and captions (@Loirooriol, @mrobinson, #37851, #37850, #37849), and a variety of inline elements (@coding-joedow, #38084, #37866, #37868, #37892).

      Work on IndexedDB continues, notably including support for key ranges (@arihant2math, @jdm, #38268, #37684, #38278).

      sessionStorage is now isolated between webviews, and copied to new webviews with the same opener (@janvarga, #37803).

      Browser changes

      servoshell now has a .desktop file and window name, so you can now pin it to your taskbar on Linux (@MichaelMcDonnell, #38038). We’ve made it more ergonomic too, fixing both the sluggish mouse wheel and pixel-perfect trackpad scrolling and the too-fast arrow key scrolling (@yezhizhen, #37982).

      You can now focus the location bar with Alt+D in addition to Ctrl+L on non-macOS platforms (@MichaelMcDonnell, #37794), and clicking the location bar now selects the contents (@MichaelMcDonnell, #37839).

      When debugging Servo with the Firefox devtools, you can now view requests in the Network tab both after navigating (@uthmaniv, #37778) and when responses are served from cache (@uthmaniv, #37906). We’re also implementing the Debugger tab (@delan, @atbrakhi, #36027), including several changes to our script system (@delan, @atbrakhi, #38236, #38232, #38265) and fixing a whole class of bugs where devtools ends up broken (@atbrakhi, @delan, @simonwuelker, @the6p4c, #37686).

      WebDriver changes

      WebDriver automation support now goes through servoshell, rather than through libservo internally, ensuring that WebDriver commands are consistently executed in the correct order (@longvatrong111, @PotatoCP, @mrobinson, @yezhizhen, #37669, #37908, #37663, #37911, #38212, #38314). We’ve also fixed race conditions in the Back, Forward (@longvatrong111, @jdm, #37950), Element Click (@longvatrong111, #37935), Switch To Window (@yezhizhen, #38160), and other commands (@PotatoCP, @longvatrong111, #38079, #38234).

      We’ve added support for the Dismiss Alert, Accept Alert, Get Alert Text (@longvatrong111, #37913), and Send Alert Text commands for simple dialogs (@longvatrong111, #38140, #38035, #38142), as well as the Maximize Window (@yezhizhen, #38271) and Element Clear commands (@PotatoCP, @yezhizhen, @jdm, #38208). The Find Element family of commands can now use the "xpath" location strategy (@yezhizhen, #37783). The Get Element Shadow Root command can now interact with closed shadow roots (@PotatoCP, #37826).

      You can now run the WebDriver test suite in CI with mach try wd or mach try webdriver (@PotatoCP, @sagudev, @yezhizhen, #37498, #37873, #37712).

      2D graphics

      `<canvas>` is key to programmable graphics on the web, with Servo supporting WebGPU, WebGL, and 2D canvas contexts. But the general-purpose 2D graphics routines that power Servo’s 2D canvases are potentially useful for a lot more than `<canvas>`: font rendering is bread and butter for Servo, but SVG rendering is only minimally supported right now, and PDF output is not yet implemented at all.

      Those features have one thing in common: they require things that WebRender can’t yet do. WebRender does one thing and does it well: rasterise the layouts of the web, really fast, by using the GPU as much as possible. Font rendering and SVG rendering both involve rasterising arbitrary paths, which currently has to be done outside WebRender, and PDF output is out of scope entirely.

      The more code we can share between these tasks, the better we can make that code, and the smaller we can make Servo’s binary sizes (#38022). We’ve started by moving 2D-specific state out of the canvas crate (@sagudev, #38098, #38114, #38164, #38214), which has in turn allowed us to modernise it with new backends based on Vello (@EnnuiL, @sagudev, #30636, #38345):

      • a Vello GPU-based backend (@sagudev, #36821), currently slower than the default backend; to use it, build Servo with --features vello and enable it with --pref dom_canvas_vello_enabled

      • a Vello CPU-based backend (@sagudev, #38282), already faster than the default backend; to use it, build Servo with --features vello_cpu and enable it with --pref dom_canvas_vello_cpu_enabled

      What is a pixel?

      Many recent Servo bugs have been related to our handling of viewport, window, and screen coordinate spaces (#36817, #37804, #37824, #37878, #37978, #38089, #38090, #38093, #38255). Symptoms of these bugs include bad hit testing (e.g. links that can’t be clicked), inability to scroll to the end of the page, or graphical glitches like disappearing browser UI or black bars.

      Windows rarely take up the whole screen, viewports rarely take up the whole window due to window decorations, and when different units come into play, like CSS px vs device pixels, a more systematic approach is needed. We built euclid to solve these problems in a strongly typed way within Servo, but beyond the viewport, we need to convert between euclid types and the geometry types provided by the embedder, the toolkit, the platform, or WebDriver, which creates opportunities for errors.
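      The strongly typed approach can be sketched with plain phantom types. This is a simplified, hypothetical stand-in for euclid's typed geometry, with illustrative CssPixel and DevicePixel unit tags:

```rust
use std::marker::PhantomData;

// A length tagged with the coordinate space it belongs to. The Unit type
// parameter exists only at compile time and costs nothing at runtime.
struct Length<Unit>(f32, PhantomData<Unit>);

struct CssPixel;
struct DevicePixel;

// A conversion factor from one unit space to another, e.g. a device pixel ratio.
struct Scale<Src, Dst>(f32, PhantomData<(Src, Dst)>);

impl<Src, Dst> Scale<Src, Dst> {
    fn transform(&self, l: Length<Src>) -> Length<Dst> {
        Length(l.0 * self.0, PhantomData)
    }
}

fn main() {
    let css: Length<CssPixel> = Length(100.0, PhantomData);
    let to_device: Scale<CssPixel, DevicePixel> = Scale(2.0, PhantomData);
    let device: Length<DevicePixel> = to_device.transform(css);
    assert_eq!(device.0, 200.0);
    // Passing `css` where a Length<DevicePixel> is expected would be a
    // compile-time error: transform is the only bridge between unit spaces.
    println!("{}", device.0);
}
```

Errors only creep back in at the boundaries, where these typed values must be converted to and from the untyped geometry of the embedder, toolkit, platform, or WebDriver.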

      Embedders are now the single source of truth for window rects and screen sizes (@yezhizhen, @mrobinson, #37960, #38020), and we’ve fixed incorrect coordinate handling in Get Window Rect, Set Window Rect (@yezhizhen, #37812, #37893, #38209, #38258, #38249), resizeTo() (@yezhizhen, #37848), screenX, screenY, screenLeft, screenTop (@yezhizhen, #37934), and in servoshell (@yezhizhen, #37961, #38174, #38307, #38082). We’ve also improved the Web Platform Tests (@yezhizhen, #37856) and clarified our docs (@yezhizhen, @mrobinson, #37879, #38110) in these areas.

      Donations

      Thanks again for your generous support! We are now receiving 4691 USD/month (+5.0% over June) in recurring donations. This helps cover the cost of our self-hosted CI runners and one of our latest Outreachy interns!

      Keep an eye out for further improvements to our CI system in the coming months, including ten-minute WPT builds and our new proposal for dedicated benchmarking runners, all thanks to your support.

      Servo is also on thanks.dev, and already 22 GitHub users (−3 from June) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


      As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.