to read (pdf)
- Building a Pipeline for Agentic Malware Analysis | Tim Blazytko
- Study of Binaries Created with Rust through Reverse Engineering - JPCERT/CC Eyes | JPCERT Coordination Center official Blog
- Letting AI Actively Manage Its Own Context | 明天的乌云
- Garden Offices for Sale UK - Portable Space
- Cord: Coordinating Trees of AI Agents | June Kim
- March 22, 2026
-
🔗 r/Yorkshire Is something burning in LS6/7? rss
Any wildfires not in the news? There's a smoky haze over Leeds that's getting quite unpleasant.
submitted by /u/Satur9_is_typing
-
- March 21, 2026
-
🔗 Simon Willison Profiling Hacker News users based on their comments rss
Here's a mildly dystopian prompt I've been experimenting with recently: "Profile this user", accompanied by a copy of their last 1,000 comments on Hacker News.
Obtaining those comments is easy. The Algolia Hacker News API supports listing comments sorted by date that have a specific tag, and the author of a comment is tagged there as `author_username`. Here's a JSON feed of my (simonw) most recent comments, for example: https://hn.algolia.com/api/v1/search_by_date?tags=comment,author_simonw&hitsPerPage=1000
The Algolia API is served with open CORS headers, which means you can access the API from JavaScript running on any web page.
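The query URL described above can also be built programmatically. Here's a minimal Go sketch — the endpoint and `comment,author_<username>` tag format are taken from the post; the `commentsURL` helper name is my own, for illustration:

```go
package main

import (
	"fmt"
	"net/url"
)

// commentsURL builds the Algolia Hacker News search_by_date URL that
// returns the latest comments by the given user, newest first.
func commentsURL(username string, hitsPerPage int) string {
	q := url.Values{}
	q.Set("tags", "comment,author_"+username)
	q.Set("hitsPerPage", fmt.Sprint(hitsPerPage))
	return "https://hn.algolia.com/api/v1/search_by_date?" + q.Encode()
}

func main() {
	fmt.Println(commentsURL("simonw", 1000))
}
```

Note that `url.Values.Encode` sorts parameters and percent-encodes the comma in the tag list as `%2C`; that decodes to the same value server-side.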
Last August I had ChatGPT build me a simple tool for hitting that API for any user which fetches their comments and gives me a mobile-friendly "copy to clipboard" button. I've since tweaked it a few times with Claude.
I can then paste the whole lot into any LLM - these days I mostly use Claude Opus 4.6 for this - and prompt "profile this user" to see what it can figure out.
It's startlingly effective! It feels invasive to quote the profile of another user here, so I'll show you what it produces for me:
This is Simon Willison — a prolific, independent software developer, blogger, and one of the most visible voices in the AI-assisted coding space. Here's a profile drawn from his comments:
Professional identity: Independent developer and writer. Co-creator of Django, creator of Datasette and many other open source tools. On the board of the Python Software Foundation. Runs a heavily-trafficked personal blog at simonwillison.net with multiple content types (entries, link blog, notes, quotations). Monetizes through GitHub sponsors, ethical ads, and occasional consulting — not through any AI company payroll, a point he defends frequently.
Core thesis on AI coding: He's the leading evangelist for what he calls "agentic engineering" — using coding agents (Claude Code, Codex CLI) as productivity multipliers for skilled developers. His central argument: LLMs don't replace programmers, they amplify existing expertise. He's writing a guide/book on this at simonwillison.net/guides. He insists these tools are deceptively difficult to use well, and that most people who report bad results simply haven't learned the craft yet.
Working style: He does most of his programming from his iPhone via Claude Code for web, often while riding BART, walking the dog, or sitting in the garden. He embraces "YOLO mode" (auto-approving agent actions) and frequently runs 2-3 agent sessions in parallel. He starts every session with "run uv run pytest" to anchor the agent in TDD. He's on the $200/month Claude Max plan.
Key technical interests: Sandboxing and security (WebAssembly, Pyodide, sandbox-exec, Firecracker), SQLite (deeply), Python packaging and tooling (uv, PyPI distribution tricks), browser-in-a-browser experiments (v86, WASM Linux), and local LLM inference. He's been learning Go "by osmosis" through coding agents.
Security consciousness: He coined the term "prompt injection" and the "lethal trifecta" (access to private data + exposure to untrusted input + ability to take actions). He's genuinely alarmed about OpenClaw/Clawdbot security risks and has been predicting a headline-grabbing prompt injection attack "in the next six months" for over two years — acknowledging it keeps not happening while insisting the risk is real. He references the "normalization of deviance" pattern frequently.
Personality and debate style: Energetic, combative when challenged, but generally good-natured. He engages heavily in HN comment threads — sometimes dozens of replies in a single discussion. He's transparent about his biases and maintains a public disclosures page. He pushes back hard against what he sees as motivated reasoning from AI skeptics, but also against AGI hype ("I refer to that as the science fiction version of AI"). He gets visibly frustrated by sock-puppet accounts and bad-faith criticism. He has a self-deprecating streak ("my stupid pelican benchmark").
Recurring themes across hundreds of comments:
- "Two things can be true at the same time" — he holds nuanced positions
- Tests are for productivity, not just quality
- The November 2025 model releases (Opus 4.5, GPT-5.2) were a genuine inflection point
- Code review is the biggest bottleneck in agent-assisted workflows
- "Cognitive debt" is a real and unsolved problem
- The best engineering practices (tests, docs, CI/CD, clean code) also make agents work better
- He's deliberately trying to "teach people good software engineering while tricking them into thinking the book is about AI"
Personal interests mentioned: Niche museums (runs niche-museums.com), New Zealand kākāpō parrots, cooking food from countries he's never visited, chickens in his garden. Lives in Half Moon Bay, CA near a major pelican roost. The pelican-on-a-bicycle SVG test is his signature LLM benchmark.
In a sentence: A deeply experienced, independently-minded developer who's genuinely excited about AI coding tools, fights hard against both uncritical hype and reflexive dismissal, and is trying to professionalize how the industry uses these tools — while also worrying loudly about the security implications almost nobody else takes seriously enough.
This all checks out! I ran this in Claude incognito mode to hopefully prevent Claude from guessing that I was evaluating myself and sycophantically glazing me - the tone of the response it gave here is similar to the tone I've seen for other accounts.
I expect it guessed my real name due to my habit of linking to my own writing from some of my comments, which provides plenty of simonwillison.net URLs for it to associate with my public persona. I haven't seen it take a guess at a real name for any of the other profiles I've generated.
It's a little creepy to be able to derive this much information about someone so easily, even when they've shared that freely in a public (and API-available) place.
I mainly use this to check that I'm not getting embroiled in an extensive argument with someone who has a history of arguing in bad faith. Thankfully that's rarely the case - Hacker News continues to be a responsibly moderated online space.
-
🔗 omerbenamram/evtx v0.11.2 release
What's Changed
- Add missing single quote in an example command by @JuanJesusAlejoSillero in #246
- pub Timestamp Offset by @jtmoon79 in #287
- [FIX] Prevent infinite loop parsing when record has a corrupted size by @maxspl in #288
New Contributors
- @JuanJesusAlejoSillero made their first contribution in #246
- @jtmoon79 made their first contribution in #287
- @maxspl made their first contribution in #288
Full Changelog: v0.11.1...v0.11.2 -
🔗 r/Leeds The sad demise of Yorkshire flagstone pavements rss
If you know any people who like to steal our pavements please ask them to stop!
submitted by /u/AsparagusDramatic475
-
🔗 r/reverseengineering Turbo Pascal 3.02A, deconstructed rss
submitted by /u/alberto-m-dev
-
🔗 r/Yorkshire Response to the horror cup of tea made by an American. rss
Howdy everyone. I am the American at large in the original post, which was posted sometime last night. I wanted to provide some insight into the original post made by a friend of mine, who is the reason I had the opportunity to visit West Yorkshire last year for the first time. A couple of points:
- The tea looked lumpy because I hadn't had a chance to stir it yet, and nothing was mixed.
- My wife is currently on a no-dairy diet for the next few weeks so we can figure out if my son, who is breastfed, has a dairy intolerance, so I unfortunately had to use oat milk (🤮) instead of real milk or cream, which is why it looked different.
- As the wife had never had Yorkshire tea before, I only used a tablespoon of milk, as we're both inexperienced in making British-style tea.
- The color: yes, it's dark, and due to using oat milk instead of regular milk, it's more than likely always going to be darker than it should be.
- Any snarky replies in the original post you see from me to the OP are strictly due to us being friends, and not in any way me insulting him or trying to insult British culture.
TLDR: I failed at my first attempt at a cup of tea, but have since tried to make it better and have posted updated photos. I understand that oat milk is not correct, but unfortunately it has to be used, and thus the tea will never be the right color. submitted by /u/mitchellfuller21
-
🔗 r/york Sign on bus counting down to departure rss
submitted by /u/tyw7
-
🔗 r/reverseengineering Widevine Leak rss
submitted by /u/Aaryakrishna_
-
🔗 Anton Zhiyanov Solod: Go can be a better C rss
I'm working on a new programming language named Solod (So). It's a strict subset of Go that translates to C, without hidden memory allocations and with source-level interop.
Highlights:
- Go in, C out. You write regular Go code and get readable C11 as output.
- Zero runtime. No garbage collection, no reference counting, no hidden allocations.
- Everything is stack-allocated by default. Heap is opt-in through the standard library.
- Native C interop. Call C from So and So from C — no CGO, no overhead.
- Go tooling works out of the box — syntax highlighting, LSP, linting and "go test".
So supports structs, methods, interfaces, slices, multiple returns, and defer. To keep things simple, there are no channels, goroutines, closures, or generics.
So is for systems programming in C, but with Go's syntax, type safety, and tooling.
Hello world • Language tour • Compatibility • Design decisions • FAQ • Final thoughts
'Hello world' example
This Go code in a file `main.go`:

```go
package main

type Person struct {
	Name string
	Age  int
	Nums [3]int
}

func (p *Person) Sleep() int {
	p.Age += 1
	return p.Age
}

func main() {
	p := Person{Name: "Alice", Age: 30}
	p.Sleep()
	println(p.Name, "is now", p.Age, "years old.")
	p.Nums[0] = 42
	println("1st lucky number is", p.Nums[0])
}
```

Translates to a header file `main.h`:

```c
#pragma once
#include "so/builtin/builtin.h"

typedef struct main_Person {
	so_String Name;
	so_int Age;
	so_int Nums[3];
} main_Person;

so_int main_Person_Sleep(void* self);
```

Plus an implementation file `main.c`:

```c
#include "main.h"

so_int main_Person_Sleep(void* self) {
	main_Person* p = (main_Person*)self;
	p->Age += 1;
	return p->Age;
}

int main(void) {
	main_Person p = (main_Person){.Name = so_str("Alice"), .Age = 30};
	main_Person_Sleep(&p);
	so_println("%.*s %s %" PRId64 " %s", p.Name.len, p.Name.ptr, "is now", p.Age, "years old.");
	p.Nums[0] = 42;
	so_println("%s %" PRId64, "1st lucky number is", p.Nums[0]);
}
```

Language tour
In terms of features, So is an intersection between Go and C, making it one of the simplest C-like languages out there — on par with Hare.
And since So is a strict subset of Go, you already know it if you know Go. It's pretty handy if you don't want to learn another syntax.
Let's briefly go over the language features and see how they translate to C.
Variables • Strings • Arrays • Slices • Maps • If/else and for • Functions • Multiple returns • Structs • Methods • Interfaces • Enums • Errors • Defer • C interop • Packages
Values and variables
So supports basic Go types and variable declarations:
```
// so
const n = 100_000
f := 3.14
var r = '本'
var v any = 42

// c
const so_int n = 100000;
double f = 3.14;
so_rune r = U'本';
void* v = &(so_int){42};
```

`byte` is translated to `so_byte` (`uint8_t`), `rune` to `so_rune` (`int32_t`), and `int` to `so_int` (`int64_t`).

`any` is not treated as an interface. Instead, it's translated to `void*`. This makes handling pointers much easier and removes the need for `unsafe.Pointer`. `nil` is translated to `NULL` (for pointer types).

Strings
Strings are represented as the `so_String` type in C:

```c
typedef struct {
	const char* ptr;
	size_t len;
} so_String;
```

All standard string operations are supported, including indexing, slicing, and iterating with a for-range loop.

```
// so
str := "Hi 世界!"
println("str[1] =", str[1])
for i, r := range str {
	println("i =", i, "r =", r)
}

// c
so_String str = so_str("Hi 世界!");
so_println("%s %u", "str[1] =", so_at(so_byte, str, 1));
for (so_int i = 0, _iw = 0; i < so_len(str); i += _iw) {
	_iw = 0;
	so_rune r = so_utf8_decode(str, i, &_iw);
	so_println("%s %" PRId64 " %s %d", "i =", i, "r =", r);
}
```

Converting a string to a byte slice and back is a zero-copy operation:

```
// so
s := "1世3"
bs := []byte(s)
s1 := string(bs)

// c
so_String s = so_str("1世3");
so_Slice bs = so_string_bytes(s);   // wraps s.ptr
so_String s1 = so_bytes_string(bs); // wraps bs.ptr
```

Converting a string to a rune slice and back allocates on the stack with `alloca`:

```
// so
s := "1世3"
rs := []rune(s)
s1 := string(rs)

// c
so_String s = so_str("1世3");
so_Slice rs = so_string_runes(s);   // allocates
so_String s1 = so_runes_string(rs); // allocates
```

There's a `so/strings` stdlib package for heap-allocated strings and various string operations.

Arrays
Arrays are represented as plain C arrays (`T name[N]`):

```
// so
var a [5]int                    // zero-initialized
b := [5]int{1, 2, 3, 4, 5}      // explicit values
c := [...]int{1, 2, 3, 4, 5}    // inferred size
d := [...]int{100, 3: 400, 500} // designated initializers

// c
so_int a[5] = {0};
so_int b[5] = {1, 2, 3, 4, 5};
so_int c[5] = {1, 2, 3, 4, 5};
so_int d[5] = {100, [3] = 400, 500};
```

`len()` on arrays is emitted as a compile-time constant. Slicing an array produces a `so_Slice`.

Slices
Slices are represented as the `so_Slice` type in C:

```c
typedef struct {
	void* ptr;
	size_t len;
	size_t cap;
} so_Slice;
```

All standard slice operations are supported, including indexing, slicing, and iterating with a for-range loop.

```
// so
s1 := []string{"a", "b", "c", "d", "e"}
s2 := s1[1 : len(s1)-1]
for i, v := range s2 {
	println(i, v)
}

// c
so_Slice s1 = (so_Slice){(so_String[5]){
	so_str("a"), so_str("b"), so_str("c"), so_str("d"), so_str("e")}, 5, 5};
so_Slice s2 = so_slice(so_String, s1, 1, so_len(s1) - 1);
for (so_int i = 0; i < so_len(s2); i++) {
	so_String v = so_at(so_String, s2, i);
	so_println("%" PRId64 " %.*s", i, v.len, v.ptr);
}
```

As in Go, a slice is a value type. Unlike in Go, a nil slice and an empty slice are the same thing:

```
// so
var nils []int = nil
var empty []int = []int{}

// c
so_Slice nils = (so_Slice){0};
so_Slice empty = (so_Slice){0};
```

`make()` allocates a fixed amount of memory on the stack (`sizeof(T)*cap`). `append()` only works up to the initial capacity and panics if it's exceeded. There's no automatic reallocation; use the `so/slices` stdlib package for heap allocation and dynamic arrays.

Maps
Maps are fixed-size and stack-allocated, backed by parallel key/value arrays with linear search. They are pointer-based reference types, represented as `so_Map*` in C. No delete, no resize.

```c
typedef struct {
	void* keys;
	void* vals;
	size_t len;
	size_t cap;
} so_Map;
```

Only use maps when you have a small, fixed number of key-value pairs. For anything else, use heap-allocated maps from the `so/maps` package (planned).

Most of the standard map operations are supported, including getting/setting values and iterating with a for-range loop:

```
// so
m := map[string]int{"a": 11, "b": 22}
for k, v := range m {
	println(k, v)
}

// c
so_Map* m = &(so_Map){(so_String[2]){
	so_str("a"), so_str("b")}, (so_int[2]){11, 22}, 2, 2};
for (so_int _i = 0; _i < (so_int)m->len; _i++) {
	so_String k = ((so_String*)m->keys)[_i];
	so_int v = ((so_int*)m->vals)[_i];
	so_println("%.*s %" PRId64, k.len, k.ptr, v);
}
```

As in Go, a map is a pointer type. A `nil` map emits as `NULL` in C.

If/else and for
If-else and for come in all shapes and sizes, just like in Go.
Standard if-else with chaining:
```
// so
if x > 0 {
	println("positive")
} else if x < 0 {
	println("negative")
} else {
	println("zero")
}

// c
if (x > 0) {
	so_println("%s", "positive");
} else if (x < 0) {
	so_println("%s", "negative");
} else {
	so_println("%s", "zero");
}
```

Init statement (scoped to the if block):

```
// so
if num := 9; num < 10 {
	println(num, "has 1 digit")
}

// c
{
	so_int num = 9;
	if (num < 10) {
		so_println("%" PRId64 " %s", num, "has 1 digit");
	}
}
```

Traditional for loop:

```
// so
for j := 0; j < 3; j++ {
	println(j)
}

// c
for (so_int j = 0; j < 3; j++) {
	so_println("%" PRId64, j);
}
```

While-style loop:

```
// so
i := 1
for i <= 3 {
	println(i)
	i = i + 1
}

// c
so_int i = 1;
for (; i <= 3;) {
	so_println("%" PRId64, i);
	i = i + 1;
}
```

Range over an integer:

```
// so
for k := range 3 {
	println(k)
}

// c
for (so_int k = 0; k < 3; k++) {
	so_println("%" PRId64, k);
}
```

Functions
Regular functions translate to C naturally:
```
// so
func sumABC(a, b, c int) int {
	return a + b + c
}

// c
static so_int sumABC(so_int a, so_int b, so_int c) {
	return a + b + c;
}
```

Named function types become typedefs:

```
// so
type SumFn func(int, int, int) int

fn1 := sumABC          // infer type
var fn2 SumFn = sumABC // explicit type
s := fn2(7, 8, 9)

// main.h
typedef so_int (*main_SumFn)(so_int, so_int, so_int);

// main.c
main_SumFn fn1 = sumABC;
main_SumFn fn2 = sumABC;
so_int s = fn2(7, 8, 9);
```

Exported functions (capitalized) become public C symbols prefixed with the package name (`package_Func`). Unexported functions are `static`.

Variadic functions use the standard `...` syntax and translate to passing a slice:

```
// so
func sum(nums ...int) int {
	total := 0
	for _, num := range nums {
		total += num
	}
	return total
}

func main() {
	sum(1, 2, 3, 4, 5)
}

// c
static so_int sum(so_Slice nums) {
	so_int total = 0;
	for (so_int _ = 0; _ < so_len(nums); _++) {
		so_int num = so_at(so_int, nums, _);
		total += num;
	}
	return total;
}

int main(void) {
	sum((so_Slice){(so_int[5]){1, 2, 3, 4, 5}, 5, 5});
}
```

Function literals (anonymous functions and closures) are not supported.
Multiple returns
So supports two-value multiple returns in two patterns: `(T, error)` and `(T1, T2)`. Both cases translate to the `so_Result` C type:

```
// so
func divide(a, b int) (int, error) {
	return a / b, nil
}

func divmod(a, b int) (int, int) {
	return a / b, a % b
}

// c
typedef struct {
	so_Value val;
	so_Value val2;
	so_Error err;
} so_Result;

static so_Result divide(so_int a, so_int b) {
	return (so_Result){.val.as_int = a / b, .err = NULL};
}

static so_Result divmod(so_int a, so_int b) {
	return (so_Result){.val.as_int = a / b, .val2.as_int = a % b};
}
```

Named return values are not supported.
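Both return patterns can be tried in plain Go before worrying about the C translation. A small sketch — the zero-divisor check and the `ErrDivByZero` sentinel are my additions for illustration, not from the article:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrDivByZero is a package-level sentinel error, the only error
// style Solod supports (hypothetical name, added for this example).
var ErrDivByZero = errors.New("division by zero")

// (T, error) pattern.
func divide(a, b int) (int, error) {
	if b == 0 {
		return 0, ErrDivByZero
	}
	return a / b, nil
}

// (T1, T2) pattern.
func divmod(a, b int) (int, int) {
	return a / b, a % b
}

func main() {
	q, err := divide(7, 2)
	fmt.Println(q, err) // 3 <nil>
	d, m := divmod(7, 2)
	fmt.Println(d, m) // 3 1
}
```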
Structs
Structs translate to C naturally:
```
// so
type person struct {
	name string
	age  int
}

bob := person{"Bob", 20}
alice := person{name: "Alice", age: 30}
fred := person{name: "Fred"}

// c
typedef struct person {
	so_String name;
	so_int age;
} person;

person bob = (person){so_str("Bob"), 20};
person alice = (person){.name = so_str("Alice"), .age = 30};
person fred = (person){.name = so_str("Fred")};
```

`new()` works with types and values:

```
// so
n := new(int)                    // *int, zero-initialized
p := new(person)                 // *person, zero-initialized
n2 := new(42)                    // *int with value 42
p2 := new(person{name: "Alice"}) // *person with values

// c
so_int* n = &(so_int){0};
person* p = &(person){0};
so_int* n2 = &(so_int){42};
person* p2 = &(person){.name = so_str("Alice")};
```

Methods
Methods are defined on struct types with pointer or value receivers:
```go
// so
type Rect struct {
	width, height int
}

func (r *Rect) Area() int {
	return r.width * r.height
}

func (r Rect) resize(x int) Rect {
	r.height *= x
	r.width *= x
	return r
}
```

Pointer receivers pass `void* self` in C and cast to the struct pointer. Value receivers pass the struct by value, so modifications operate on a copy:

```c
typedef struct main_Rect {
	so_int width;
	so_int height;
} main_Rect;

so_int main_Rect_Area(void* self) {
	main_Rect* r = (main_Rect*)self;
	return r->width * r->height;
}

static main_Rect main_Rect_resize(main_Rect r, so_int x) {
	r.height *= x;
	r.width *= x;
	return r;
}
```

Calling methods on values and pointers emits pointers or values as necessary:

```
// so
r := Rect{width: 10, height: 5}
r.Area()     // called on value (address taken automatically)
r.resize(2)  // called on value (passed by value)

rp := &r
rp.Area()    // called on pointer
rp.resize(2) // called on pointer (dereferenced automatically)

// c
main_Rect r = (main_Rect){.width = 10, .height = 5};
main_Rect_Area(&r);
main_Rect_resize(r, 2);

main_Rect* rp = &r;
main_Rect_Area(rp);
main_Rect_resize(*rp, 2);
```

Methods on named primitive types are also supported.
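The copy semantics of value receivers are easy to verify in plain Go, which behaves the same way as the translated C. A sketch using hypothetical `Scale`/`Resized` methods (my names, not the article's `Area`/`resize`):

```go
package main

import "fmt"

type Rect struct{ width, height int }

// Pointer receiver: mutates the caller's Rect
// (translated to a void* self parameter in C).
func (r *Rect) Scale(x int) {
	r.width *= x
	r.height *= x
}

// Value receiver: r is a copy, so the caller's Rect is untouched;
// the resized copy is returned instead (struct passed by value in C).
func (r Rect) Resized(x int) Rect {
	r.width *= x
	r.height *= x
	return r
}

func main() {
	r := Rect{width: 10, height: 5}
	r2 := r.Resized(2)
	fmt.Println(r.width, r2.width) // value receiver left r alone
	r.Scale(2)
	fmt.Println(r.width) // pointer receiver mutated r
}
```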
Interfaces
Interfaces in So are like Go interfaces, but they don't include runtime type information.
Interface declarations list the required methods:
```go
// so
type Shape interface {
	Area() int
	Perim(n int) int
}
```

In C, an interface is a struct with a `void* self` pointer and function pointers for each method (less efficient than using a static method table, but simpler; this might change in the future):

```c
typedef struct main_Shape {
	void* self;
	so_int (*Area)(void* self);
	so_int (*Perim)(void* self, so_int n);
} main_Shape;
```

Just as in Go, a concrete type implements an interface by providing the necessary methods:

```
// so
func (r *Rect) Area() int { /* ... */ }
func (r *Rect) Perim(n int) int { /* ... */ }

// c
so_int main_Rect_Area(void* self) { /* ... */ }
so_int main_Rect_Perim(void* self, so_int n) { /* ... */ }
```

Passing a concrete type to functions that accept interfaces:

```
// so
func calcShape(s Shape) int {
	return s.Perim(2) + s.Area()
}

r := Rect{width: 10, height: 5}
calcShape(&r)        // implicit conversion
calcShape(Shape(&r)) // explicit conversion

// c
static so_int calcShape(main_Shape s) {
	return s.Perim(s.self, 2) + s.Area(s.self);
}

main_Rect r = (main_Rect){.width = 10, .height = 5};
calcShape((main_Shape){.self = &r, .Area = main_Rect_Area, .Perim = main_Rect_Perim});
calcShape((main_Shape){.self = &r, .Area = main_Rect_Area, .Perim = main_Rect_Perim});
```

Type assertion works for concrete types (`v := iface.(*Type)`), but not for interfaces (`iface.(Interface)`). Type switch is not supported.

Empty interfaces (`interface{}` and `any`) are translated to `void*`.

Enums
So supports typed constant groups as enums:
```go
// so
type ServerState string

const (
	StateIdle      ServerState = "idle"
	StateConnected ServerState = "connected"
	StateError     ServerState = "error"
)
```

Each constant is emitted as a C `const`:

```c
// main.h
typedef so_String main_ServerState;
extern const main_ServerState main_StateIdle;
extern const main_ServerState main_StateConnected;
extern const main_ServerState main_StateError;

// main.c
const main_ServerState main_StateIdle = so_str("idle");
const main_ServerState main_StateConnected = so_str("connected");
const main_ServerState main_StateError = so_str("error");
```

`iota` is supported for integer-typed constants:

```go
// so
type Day int

const (
	Sunday Day = iota
	Monday
	Tuesday
)
```

Iota values are evaluated at compile time and translated to integer literals:

```c
typedef so_int main_Day;
const main_Day main_Sunday = 0;
const main_Day main_Monday = 1;
const main_Day main_Tuesday = 2;
```

Errors
Errors use the `so_Error` type (a pointer):

```c
struct so_Error_ {
	const char* msg;
};
typedef struct so_Error_* so_Error;
```

So only supports sentinel errors, which are defined at the package level using `errors.New` (implemented as a compiler built-in):

```
// so
import "solod.dev/so/errors"

var ErrOutOfTea = errors.New("no more tea available")

// c
#include "so/errors/errors.h"

so_Error main_ErrOutOfTea = errors_New("no more tea available");
```

Errors are compared using `==`. This is an O(1) operation (compares pointers, not strings):

```
// so
func makeTea(arg int) error {
	if arg == 42 {
		return ErrOutOfTea
	}
	return nil
}

err := makeTea(42)
if err == ErrOutOfTea {
	println("out of tea")
}

// c
static so_Error makeTea(so_int arg) {
	if (arg == 42) {
		return main_ErrOutOfTea;
	}
	return NULL;
}

so_Error err = makeTea(42);
if (err == main_ErrOutOfTea) {
	so_println("%s", "out of tea");
}
```

Dynamic errors (`fmt.Errorf`), local error variables (`errors.New` inside functions), and error wrapping are not supported.

Defer
`defer` schedules a function or method call to run at the end of the enclosing scope.

The scope can be either a function (as in Go):

```go
// so
func funcScope() {
	xopen(&state)
	defer xclose(&state)
	if state != 1 {
		panic("unexpected state")
	}
}
```

Or a bare block (unlike Go):

```go
// so
func blockScope() {
	{
		xopen(&state)
		defer xclose(&state)
		if state != 1 {
			panic("unexpected state")
		}
		// xclose(&state) runs here, at block end
	}
	// state is already closed here
}
```

Deferred calls are emitted inline (before returns, panics, and scope end) in LIFO order:

```c
static void funcScope(void) {
	xopen(&state);
	if (state != 1) {
		xclose(&state);
		so_panic("unexpected state");
	}
	xclose(&state);
}
```

Defer is not supported inside other scopes like `for` or `if`.

C interop
Include a C header file with `so:include`:

```go
//so:include <stdio.h>
```

Declare an external C type (excluded from emission) with `so:extern`:

```go
//so:extern FILE
type os_file struct{}
```

Declare an external C function (no body or `so:extern`):

```go
func fopen(path string, mode string) *os_file

//so:extern
func fclose(stream *os_file) int {
	_ = stream
	return 0
}
```

When calling extern functions, `string` and `[]T` arguments are automatically decayed to their C equivalents: string literals become raw C strings (`"hello"`), string values become `char*`, and slices become raw pointers. This makes interop cleaner:

```
// so
f := fopen("/tmp/test.txt", "w")

// c
os_file* f = fopen("/tmp/test.txt", "w");
// not like this:
// fopen(so_str("/tmp/test.txt"), so_str("w"))
```

The decay behavior can be turned off with the `nodecay` flag:

```go
//so:extern nodecay
func set_name(acc *Account, name string)
```

The `so/c` package includes helpers for converting C pointers back to So string and slice types. The `unsafe` package is also available and is implemented as compiler built-ins.

Packages
Each Go package is translated into a single `.h` + `.c` pair, regardless of how many `.go` files it contains. Multiple `.go` files in the same package are merged into one `.c` file, separated by `// -- filename.go --` comments.

Exported symbols (capitalized names) are prefixed with the package name:

```go
// geom/geom.go
package geom

const Pi = 3.14159

func RectArea(width, height float64) float64 {
	return width * height
}
```

Becomes:

```c
// geom.h
extern const double geom_Pi;
double geom_RectArea(double width, double height);

// geom.c
const double geom_Pi = 3.14159;
double geom_RectArea(double width, double height) { ... }
```

Unexported symbols (lowercase names) keep their original names and are marked `static`:

```c
static double rectArea(double width, double height);
```

Exported symbols are declared in the `.h` file (with `extern` for variables). Unexported symbols only appear in the `.c` file.

Importing a So package translates to a C `#include`:

```
// so
import "example/geom"

// c
#include "geom/geom.h"
```

Calling imported symbols uses the package prefix:

```
// so
a := geom.RectArea(5, 10)
_ = geom.Pi

// c
double a = geom_RectArea(5, 10);
(void)geom_Pi;
```

That's it for the language tour!
Compatibility
So generates C11 code that relies on several GCC/Clang extensions:

- Binary literals (`0b1010`) in generated code.
- Statement expressions (`({...})`) in macros.
- `__attribute__((constructor))` for package-level initialization.
- `__auto_type` for local type inference in generated code.
- `__typeof__` for type inference in generic macros.
- `alloca` for `make()` and other dynamic stack allocations.

You can use GCC, Clang, or `zig cc` to compile the transpiled C code. MSVC is not supported.

Supported operating systems: Linux, macOS, and Windows (partial support).
Design decisions
So is highly opinionated.
Simplicity is key. Fewer features are always better. Every new feature is strongly discouraged by default and should be added only if there are very convincing real-world use cases to support it. This applies to the standard library too — So tries to export as little of Go's stdlib API as possible while still remaining highly useful for real-world use cases.
No heap allocations are allowed in language built-ins (like maps, slices, new, or append). Heap allocations are allowed in the standard library, but they must clearly state when an allocation happens and who owns the allocated data.
Fast and easy C interop. Even though So uses Go syntax, it's basically C with its own standard library. Calling C from So, and So from C, should always be simple to write and run efficiently. The So standard library (translated to C) should be easy to add to any C project.
Readability. There are several languages that claim they can transpile to readable C code. Unfortunately, the C code they generate is usually unreadable or barely readable at best. So isn't perfect in this area either (though it's arguably better than others), but it aims to produce C code that's as readable as possible.
Go compatibility. So code is valid Go code. No exceptions.
Non-goals:
Raw performance. You can definitely write C code by hand that runs faster than code produced by So. Also, some features in So, like interfaces, are currently implemented in a way that's not very efficient, mainly to keep things simple.
Hiding C entirely. So is a cleaner way to write C, not a replacement for it. You should know C to use So effectively.
Go feature parity. Less is more. Iterators aren't coming, and neither are generic methods.
Frequently asked questions
I've heard these questions several times, so they're worth answering.
Why not Rust/Zig/Odin/other language?
Because I like C and Go.
Why not TinyGo?
TinyGo is lightweight, but it still has a garbage collector, a runtime, and aims to support all Go features. What I'm after is something even simpler, with no runtime at all, source-level C interop, and eventually, Go's standard library ported to plain C so it can be used in regular C projects.
How does So handle memory?
Everything is stack-allocated by default. There's no garbage collector or reference counting. The standard library provides explicit heap allocation in the `so/mem` package when you need it.

Is it safe?
So itself has few safeguards other than the default Go type checking. It will panic on out-of-bounds array access, but it won't stop you from returning a dangling pointer or forgetting to free allocated memory.
Most memory-related problems can be caught with AddressSanitizer in modern compilers, so I recommend enabling it during development by adding `-fsanitize=address` to your `CFLAGS`.

Can I use So code from C (and vice versa)?
Yes. So compiles to plain C, therefore calling So from C is just calling C from C. Calling C from So is equally straightforward.
Can I compile existing Go packages with So?
Not really. Go uses automatic memory management, while So uses manual memory management. So also supports far fewer features than Go. Neither Go's standard library nor third-party packages will work with So without changes.
How stable is this?
Not for production at the moment.
Where's the standard library?
There is a growing set of high-level packages (`so/bytes`, `so/mem`, `so/slices`, ...). There are also low-level packages that wrap the libc API (`so/c/stdlib`, `so/c/stdio`, `so/c/cstring`, ...). Check the links below for more details.
Even though So isn't ready for production yet, I encourage you to try it out on a hobby project or just keep an eye on it if you like the concept.
Further reading:
-
🔗 mhx/dwarfs dwarfs-0.15.1 release
Serious Bug in All Previous Releases
In #350, cipriancraciun started a discussion that got me thinking about the file scanner class. At some point, I realized that there might be a bug, and, after looking at the code, it turned out that there was one: when collecting hard-linked files, the class did not take the device these files were on into account.
When `--file-hash` is set to anything except `none`, the issue is triggered if and only if all of the following are true:

1. The input to `mkdwarfs` spans more than one device (i.e. mount point).
2. There are regular files with the same inode number on more than one of these devices, and these files are part of the input to `mkdwarfs` (i.e. not filtered out).
3. At least two files in such a set of files with identical inode numbers also have a hard link count greater than 1 on their respective devices.
For any set of files with identical inode numbers and hard link counts greater than 1 for which all of the above conditions are true, only one inode will be chosen to represent all files. That means the data for all other inodes in the set is lost and will not be present in the resulting DwarFS image.
When `--file-hash` is set to `none`, the issue is triggered regardless of condition (3) above. In this case, however, `mkdwarfs` is guaranteed to crash with an assertion if it runs into the issue:

```shell
$ mkdir data && echo "hello" >data/x
$ mkdwarfs -i data -o data.dwarfs
$ mkdir -p mnt/a mnt/b
$ dwarfs data.dwarfs mnt/a && dwarfs data.dwarfs mnt/b
$ mkdwarfs --file-hash=none -i mnt -o /dev/null --force
[...]
Assertion `!files_.empty()` failed in /home/mhx/dwarfs/src/writer/internal/inode_manager.cpp(212): inode has no file (any)
```

So the default, hash-based deduplication mode is much more dangerous because it fails silently, but I hope that condition (3) is rarely true in practice.
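The root cause is a classic one and easy to sketch outside of the actual scanner code: if hard-link detection keys files by inode number alone, unrelated files on different mount points can collide. A hedged Rust illustration (`group_hardlinks` is a made-up helper for this post, not the dwarfs implementation):

```rust
use std::collections::HashMap;
use std::fs;
use std::io;
use std::os::unix::fs::MetadataExt; // dev()/ino() are Unix-only
use std::path::PathBuf;

/// Group paths that refer to the same physical file.
///
/// Keying by inode alone (`key_by_device = false`) reproduces the class of
/// bug described above: files on *different* devices that happen to share an
/// inode number get merged into one group, so only one file's data would
/// survive deduplication.
fn group_hardlinks(
    paths: &[PathBuf],
    key_by_device: bool,
) -> io::Result<HashMap<(u64, u64), Vec<PathBuf>>> {
    let mut groups: HashMap<(u64, u64), Vec<PathBuf>> = HashMap::new();
    for p in paths {
        let md = fs::metadata(p)?;
        // Correct key: (device, inode). Buggy key: inode only.
        let key = (if key_by_device { md.dev() } else { 0 }, md.ino());
        groups.entry(key).or_default().push(p.clone());
    }
    Ok(groups)
}
```

With `key_by_device = true`, two files with equal inode numbers on different devices land in different groups, which is exactly what the fix restores.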
The fix is `3c15ab2`, along with a dedicated test. I strongly recommend upgrading to this new release if your input to `mkdwarfs` spans multiple devices.
Bug fixes
-
`mkdwarfs` did not correctly handle inputs where hard links had the same inode number on different devices. To run into this issue, you would have to make `mkdwarfs` scan files from multiple devices (e.g. the root of a directory tree with multiple mounted filesystems) and have files with the same inode number on different devices and have at least two of those files also have a link count greater than 1. While this is hopefully rare in practice, it is a serious bug that can lead to crashes (in the best case) or even data loss (in the worst case), as only the data of one of these files would be stored in the image. This has been fixed and a test has been added to cover this case.
A missing dependency was causing linker errors with shared library builds on macOS. This has been fixed.
Build
-
The static release binaries are now all built using Clang and link-time optimization. This was previously not the case for some architectures due to bugs in the toolchain. As a result, the binaries are now significantly smaller.
-
There is now a new set of binaries (
`dwarfs-universal-small`) that are built without brotli support and without support for the performance monitor. The performance monitor is rarely used, and brotli compression comes with a huge dictionary that bloats the binary size without offering much benefit over lzma or zstd in most cases. If you care about binary size, these new binaries are a good default choice.
Full Changelog:
`v0.15.0...v0.15.1`

SHA-256 Checksums
```
ff5ef1716dec13082356a23ca8b9a349d00e8af71712cd659d95195202838e5d dwarfs-0.15.1-Linux-aarch64.tar.xz
1d017f5da0a92f61d8620c45670cd799e0bd452a8c1f31080cea554bce880dda dwarfs-0.15.1-Linux-arm.tar.xz
c91bfe1eb348a8a34581d6377a49c937f18a96a8d1460241fde7f79fe7d3cd47 dwarfs-0.15.1-Linux-i386.tar.xz
eaead2ac3c61c6765b80fe57e6c870c7e9f14b83c4947a533aeda0d3720aac7a dwarfs-0.15.1-Linux-loongarch64.tar.xz
c70c58a47a81b51bb61fff6f878d9b9e984ac944b704c713b126aaabae9429f6 dwarfs-0.15.1-Linux-ppc64le.tar.xz
0a11aff785ae7ebed0aa2f9a0f12e85cb14336901937387e45c4ecc94399b7b6 dwarfs-0.15.1-Linux-ppc64.tar.xz
7d7af8685ac2527760fa4ca848ded3780a6fc10760242ac789094cd9ee612153 dwarfs-0.15.1-Linux-riscv64.tar.xz
f5b9c87e4471fe658690951861949b8fe9bf8ddae0fde36a04dc672cc4926568 dwarfs-0.15.1-Linux-s390x.tar.xz
24453ca3f18e08cde0e323fae4447d2e9f47c65508d8da223d22553e73cafc36 dwarfs-0.15.1-Linux-x86_64.tar.xz
a180086f9a898b4b52a5217e336c0134d63a10b395b493f2e19f231d575a87ec dwarfs-0.15.1.tar.xz
b3d3ff5608766f05a37b9a9d1f3cc3cac7ca01959ac388d43140136c858d54b7 dwarfs-0.15.1-Windows-AMD64.7z
1258d788d9950de4db55e22b5ec510665eb3049f8c199386fce16a132ab6f846 dwarfs-fuse-extract-0.15.1-Linux-aarch64
8f8f958217180f1d49fd775a84fb3f2df4a6c7d42b9537f7258c155c805f5ece dwarfs-fuse-extract-0.15.1-Linux-aarch64.upx
814204960bdd739da73c965288f4d86dab91620241a05cc99ec84be8c0377ffd dwarfs-fuse-extract-0.15.1-Linux-arm
5ea3eb07bc38d5a9cad9a296b83a6049df80c9508888341fb74b9cbb1695a750 dwarfs-fuse-extract-0.15.1-Linux-arm.upx
b7974ad380b1dac65516cd06fa1f0918708e212cb89fb6e79a79ca133e227097 dwarfs-fuse-extract-0.15.1-Linux-i386
60e448f27084f30727ae64346d5ae2c2e13cff28445e0c5b5f85a7666cfcc325 dwarfs-fuse-extract-0.15.1-Linux-i386.upx
cfd53bd6d61474cf4adf32cfbdf937717c9c329f5fd833ee997a8cd50afff74d dwarfs-fuse-extract-0.15.1-Linux-loongarch64
925433bac0999babff000767e1a3bc5f33cfc93bc4d4f5344e9c160e0fe04e0a dwarfs-fuse-extract-0.15.1-Linux-ppc64
7574dd2f519cc76c39a67d7a62fdb700a264be56f379df893872f7c864c7b5da dwarfs-fuse-extract-0.15.1-Linux-ppc64le
ccec93d4277e82ac44b78116e27c3df68b376e1ed3588c60d57a94ba5234d8ff dwarfs-fuse-extract-0.15.1-Linux-riscv64
7cae9c8f7b0ea283f7ac1f70fa8fdcd6fd50f34597536211ca854c78c6088d5a dwarfs-fuse-extract-0.15.1-Linux-riscv64.upx
afe1cfcc82314af7593054b81cee5191aa35a40a0b06bf6aab036e9d97e3da79 dwarfs-fuse-extract-0.15.1-Linux-s390x
0d0a771c7849f6f6e56cfba879ad03cf4194b57cb4fa94b4d34b797ebdd34a99 dwarfs-fuse-extract-0.15.1-Linux-x86_64
dce7fb26462f7ef0ba4c5aef5563e5e74ba23b880195ce37b4a0bd7ca46ff431 dwarfs-fuse-extract-0.15.1-Linux-x86_64.upx
36fd1012426446d85ab37503b1f8780b9c2e94e6368dac5cd3de1fd58c224cca dwarfs-universal-0.15.1-Linux-aarch64
7927c4ba101efe64fc9015de0f4b6b0c4898b3e854e037670b9998556dc6406f dwarfs-universal-0.15.1-Linux-aarch64.upx
29fcfc2bd09abe023ae8f17072b1a5967fc4536867b19fcbe79818f9f3dd9920 dwarfs-universal-0.15.1-Linux-arm
00f84a9a3e93102a880e641fcf0bda77216c8ef65b84e6239bbcdc4aa70271fa dwarfs-universal-0.15.1-Linux-arm.upx
f59359c67e5e43e67a086ad020c66f08ceb87ded0ddd641e21d35fa3ffb323b5 dwarfs-universal-0.15.1-Linux-i386
dd8116704ea22628699a0500c1d4b1fb7ff81acd3f4fe2c836604c6f654ff1ef dwarfs-universal-0.15.1-Linux-i386.upx
48624d95428286e8cac2801f23b62c000fcabedbc01bc91db734341fa621facb dwarfs-universal-0.15.1-Linux-loongarch64
428565e0033be5fe47b97c4cb1ec60da4550ec76e388326250148ecefb04ae21 dwarfs-universal-0.15.1-Linux-ppc64
a9671a135f7362d42dd8479e02593a63ac7e34430c94b9257e1fb6f28bf506a2 dwarfs-universal-0.15.1-Linux-ppc64le
c8c2ef012b80e554b0052bf85fd2271b374abd4a82337777d75200ebf40ffc0d dwarfs-universal-0.15.1-Linux-riscv64
459aa0cac669f9794f114dc868a628f8bebb8ca538d4213b7d25448e424db3ae dwarfs-universal-0.15.1-Linux-riscv64.upx
ff7b663e44f1f6d5e5b1b022fce85a6c993991f9c99e3d0e1d89c45a42d25284 dwarfs-universal-0.15.1-Linux-s390x
69750c543ea2272d96397a14627f5bb98a42385026df118cb3a4363312892fa4 dwarfs-universal-0.15.1-Linux-x86_64
83dd7dad048a86347a53cfa62d1b2dc3765a0803681a3c701a32029b464f4873 dwarfs-universal-0.15.1-Linux-x86_64.upx
5a4da367840829874f89a80112e1f49b25393969121130dc489ed6b6e8e6b782 dwarfs-universal-0.15.1-Windows-AMD64.exe
fb9baa894d32d7182d07ce54c4623d27b2b0ce6a0c6b4f1f0101462877d4ab0e dwarfs-universal-small-0.15.1-Linux-aarch64
accef8fb32026084953e9e2e0174950f9fa789e9d1ece3263bccc1db58dde6a6 dwarfs-universal-small-0.15.1-Linux-aarch64.upx
0499ffc9b7093bee499dd56845659590fdcbd32aaf3e0a5f4bb9a5db92dac60c dwarfs-universal-small-0.15.1-Linux-arm
fcd1422cd2315a6dbab0526ebe0275db9713623bd64f8f5b7bf11437f2e74f19 dwarfs-universal-small-0.15.1-Linux-arm.upx
1f3faf769fefe6517e0086288e17a0e60780cfab63b527c8e9c0cc6f695ea1df dwarfs-universal-small-0.15.1-Linux-i386
a0bad3f10125a08ac51b765371f1d82c14ff0738ddda76aa40597e64e8faf9c9 dwarfs-universal-small-0.15.1-Linux-i386.upx
eac51d831d84a89df62718ba7c3c55f4994d0c5d9387199e060d9c2242572eed dwarfs-universal-small-0.15.1-Linux-loongarch64
588de2941e5353daa164afbb55f5b4245ef27d7b0a19a2302a76093ba0c68005 dwarfs-universal-small-0.15.1-Linux-ppc64
4db6b7fec01bbaef8d51c8850eaccccbb72723dea02d5966fc8cd5198cc9f316 dwarfs-universal-small-0.15.1-Linux-ppc64le
58cda2c96c82bae3fed2d461be37786093e8f68bba7573c57ac7a1fa45cedd33 dwarfs-universal-small-0.15.1-Linux-riscv64
18b2d1c203b07a5bee39453f200d79d2069ca607e8913e060befdd5456b9aa7a dwarfs-universal-small-0.15.1-Linux-riscv64.upx
391e3e75743899a23fefe3b816cd2dda86f2048deedbc1490870a55e49cb6309 dwarfs-universal-small-0.15.1-Linux-s390x
0d84291b3e7e26f3791d50169097a7b0901542440ce934fb1ec263ae0c4a256e dwarfs-universal-small-0.15.1-Linux-x86_64
fc607ba7af3485feb8b90fd4f4678bd586c587ce8dd982e57f4fd9afb1faf391 dwarfs-universal-small-0.15.1-Linux-x86_64.upx
```
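To check a download against the list above, feed the matching line to `sha256sum -c`. A sketch, assuming you saved the published list to a file named `checksums.txt` (that file name is an assumption, not part of the release):

```shell
# Verify the artifact you actually downloaded into the current directory.
grep 'dwarfs-0.15.1-Linux-x86_64.tar.xz' checksums.txt | sha256sum -c -
```

`sha256sum -c` prints one `OK`/`FAILED` line per entry and exits non-zero on any mismatch, so it works well in scripts.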
-
🔗 r/LocalLLaMA Moonshot says Cursor Composer was authorized rss
Sounds like Fireworks had a partnership with Moonshot, and Cursor went through them. Kinda makes sense that Moonshot wouldn’t be aware of it if they are working with Fireworks as a “reseller” of sorts. And the custom license they have with Fireworks may mean the non-disclosure of base model wasn’t against license. Or it could be a good story told after the fact. Impossible to know without knowing the private details of the contract. I guess either way, they worked it out. submitted by /u/davernow
[link] [comments]
-
🔗 r/Yorkshire Chip shop sausages rss
Very random, but does anyone know where I can buy the same jumbo sausages that the chip shops use? I’m located in Birstall so have easy access to Leeds/Bradford!!
submitted by /u/Top-Welcome5620
[link] [comments] -
🔗 3Blue1Brown (YouTube) Bacteria Grid Puzzle Solution rss
Part of a monthly series of puzzlers, in collaboration with MoMath and Peter Winkler
-
🔗 r/york [Participants Required - York St John University] Gay male couples in the UK – Views on parenthood (21+, in a relationship of 12+ months) rss
Hi! 👋
My name is Ryan and I am a doctoral researcher in counselling psychology at York St John University. I am conducting a study exploring how gay men make sense of the psychological and emotional experience of deciding whether or not to become a parent, with a focus on those currently in relationships.

I’m particularly interested in understanding the different factors that make thinking about parenthood feel easier or more challenging, how these conversations happen within relationships, and what kinds of support or information might be helpful, whether you want children, don’t want children, or are unsure.
I’m looking for gay men (21+) currently in a relationship (12+ months), who are not parents, to take part in a one-to-one interview.
What’s involved:
- A 60–90 minute interview
- Conducted online via MS Teams or in person at York St John University, UK
- Scheduled at a time that suits you
- Participants will be recruited in couples, but interviews will be conducted separately to ensure individual perspectives
Eligibility:
- Identify as a gay man
- Aged 21+
- In a relationship of 12+ months
- Not currently a parent
- UK-based and fluent in English
- Open to discussing views on parenthood (whether you want children or not)
This is an under-researched area, and your contribution could help inform future counselling practice and community support for gay men and couples.
If you’re interested or would like more information, feel free to send me a DM or comment below 😊
The study has been approved by the York St John University Research Ethics Committee (Ref: ETH2526-0084).
submitted by /u/DCounsPsych_Research
[link] [comments] -
🔗 r/Leeds [Participants Required] Gay male couples in the UK – Views on parenthood (21+, in a relationship of 12+ months) rss
Hi! 👋
My name is Ryan and I am a doctoral researcher in counselling psychology at York St John University. I am conducting a study exploring how gay men make sense of the psychological and emotional experience of deciding whether or not to become a parent, with a focus on those currently in relationships.

I’m particularly interested in understanding the different factors that make thinking about parenthood feel easier or more challenging, how these conversations happen within relationships, and what kinds of support or information might be helpful, whether you want children, don’t want children, or are unsure.
I’m looking for gay men (21+) currently in a relationship (12+ months), who are not parents, to take part in a one-to-one interview.
What’s involved:
- A 60–90 minute interview
- Conducted online via MS Teams or in person at York St John University, UK
- Scheduled at a time that suits you
- Participants will be recruited in couples, but interviews will be conducted separately to ensure individual perspectives
Eligibility:
- Identify as a gay man
- Aged 21+
- In a relationship of 12+ months
- Not currently a parent
- UK-based and fluent in English
- Open to discussing views on parenthood (whether you want children or not)
This is an under-researched area, and your contribution could help inform future counselling practice and community support for gay men and couples.
If you’re interested or would like more information, feel free to send me a DM or comment below 😊
The study has been approved by the York St John University Research Ethics Committee (Ref: ETH2526-0084).
submitted by /u/DCounsPsych_Research
[link] [comments] -
🔗 r/Harrogate Looking for dog walks rss
Any recommendations for dog walks in Harrogate or surrounding areas? We are happy to drive out a bit if anyone has any nice dales walks too.
We have done Fewston, Birk Crag and Pinewoods but looking to change it up a bit. Bonus points if there’s somewhere we can grab a coffee with the dogs afterward.
submitted by /u/emsversion12222
[link] [comments] -
🔗 r/Yorkshire Reflecting… rss
submitted by /u/aspiranthighlander
[link] [comments]
-
🔗 r/Yorkshire Harrogate faces Scarborough and Barnsley in race to become UK's first-ever Town of Culture rss
submitted by /u/willfiresoon
[link] [comments] -
🔗 r/Yorkshire People in North Yorkshire town found to have ‘alarming’ levels of toxic Pfas chemicals in blood rss
submitted by /u/willfiresoon
[link] [comments]
-
🔗 r/Leeds Leeds Armouries: old scary display? rss
I remember going to Leeds Armouries as a kid (15-20y ago probably) and being traumatised by a specific display case they had. It was a glass box with full size people in (they still have several of these) but this one in particular seems to have been removed at some point in the last 20 years, maybe because it was too scary!
It consisted of a soldier/terrorist bursting into a kid’s bedroom, fully armed, with the kid cowering in the corner. Does anyone remember this? My context and memory may be fuzzy as I was so young.
I’d be interested if anyone else has similar memories of this, or has any info about it or why it was removed. There doesn’t seem to be any photo of it online.
submitted by /u/mailywhale
[link] [comments] -
🔗 r/LocalLLaMA This is incredibly tempting rss
Has anyone bought one of these recently that can give me some direction on how usable it is? What kind of speeds are you getting trying to load one large model vs using multiple smaller models? submitted by /u/No_Mango7658
[link] [comments]
-
🔗 r/LocalLLaMA Feedback on my 256gb VRAM local setup and cluster plans. Lawyer keeping it local. rss
I’m a lawyer who got Claude Code pilled about 90 days ago, then thought about what I wanted to do with AI tools, and concluded that the totally safest way for me to experiment was to build my own local cluster. I did an earlier post about what I was working on, and the feedback was helpful. Wondering if anyone has feedback or suggestions for me in terms of what I should do next.

Anyway, node 1 is basically done at this point. Gigabyte Threadripper board, 256GB of DDR4, and 8 32GB Nvidia V100s. I have two PSUs on two different regular circuits in my office, 2800 watts total (haven’t asked the landlord for permission to install a 240 volt yet). I am running … Windows … because I still use the computer for my regular old office work. But I guess my next steps for just this node are probably to get a 240 plug installed, and maybe add 2 or 4 more V100s, and then call it a day for node 1.

Took one photo of one of the 4-card pass-through boards. Each of these NVLinks 128GB of SXM V100s, and they get fed back into the board at x16 using two PEX switches and 4 SlimSAS cables. The only part that’s remotely presentable is the 4-card board I have finished. There’s a 2-card board on footers and 2 PCIe V100s. I have 2 more 2-card SXM boards and a 4-card SXM board in waiting. And 3 SXM V100s and heatsinks (slowly buying more).

Goal is to do local RAG databases on the last 10 years of my saved work, to automate everything I can so that all the routine stuff is automatic and the semi-routine stuff is 85% there. Trying to get the best biggest reasoning models to run, then to test them with RAG, then to QLoRA train.

Wondering if anyone has suggestions on how to manage all the insane power cables this requires. I put this 4-card board in an ATX tower case, and have one more for the second board, but I have the rest of the stuff (motherboard, 2 PCIe cards, 2-card SXM board) open bench/open air like a mining rig. Would love some kind of good-looking glass and metal 3-level airflow box or something.

Also wondering if anyone has really used big models like GLM or full DeepSeek or MiniMax 2.5 locally for anything like this. And if anyone has done QLoRA training for legal stuff.

In terms of what’s next, I will start on node 2 after I get some of the stray heatsinks and riser cables out of my office and thermal paste off of my suit. I have a ROMED2 board and processor, and a variety of loose sticks of DDR4 server RAM that will probably only add up to like 192GB. I have 3 RTX 3090s. Plan is I guess to add a fourth and NVLink them. My remaining inventory is a Supermicro X10DRG board and processor, 6 P40s, 6 P100s, 4 16GB V100 SXMs, another even older X10 board and processor, more loose sticks of server RAM, and then a couple more board and processor combos (X299A 64GB DDR4, and my 2019 gaming PC).

Original plan (and maybe still plan) was to just have so much VRAM I could slowly run the biggest model ever over a distributed cluster, and use that to tell me the secret motives and strategy of parties on the other side of cases. And then maybe use it to tell me why I can never be satisfied and always want more. Worried Opus 4.6 will be better at all that.

I wrote this actual post without any AI help, because I still have soul inside. Will repost it in a week with Claude rewriting it to see how brainwashed you all are. Anyway, ask me questions, give me advice, explain to me in detail why I’m stupid. But be real about it, you anime freaks. submitted by /u/TumbleweedNew6515
[link] [comments]
-
🔗 r/LocalLLaMA Qwen wants you to know… rss
Seen while walking through Singapore’s Changi airport earlier this week. Alibaba Cloud spending up big on advertising. submitted by /u/m-gethen
[link] [comments]
-
🔗 r/Harrogate Fun date ideas Harrogate rss
Hi I’m in my mid 20s. Recently took a girl in her early 20s on a date to the valley gardens park and it was just so relaxed and more 30s/40s. Didn’t see anyone our age. Everyone there with kids or in their 60s+ and the vibe was so so off. We went to a cocktail bar, was completely empty and they were playing music from 1950, just incredibly uncool and a bit cringe, it unfolded like I planned a date for her mother. Is there anything more fun to do in Harrogate?
submitted by /u/Apprehensive_Ring666
[link] [comments] -
🔗 r/reverseengineering Black Rock Shooter: the Game was Made by Madmen. I’ve Been Solo Reverse Engineering it for Two Years as My First Big Project and Am Finally Ripping Its Engine Wide Open. rss
submitted by /u/brs-game-researcher
[link] [comments] -
🔗 Rust Blog Security advisory for Cargo rss
The Rust Security Response Team was notified of a vulnerability in the third-party crate `tar`, used by Cargo to extract packages during a build. The vulnerability, tracked as CVE-2026-33056, allows a malicious crate to change the permissions on arbitrary directories on the filesystem when Cargo extracts it during a build.

For users of the public crates.io registry, we deployed a change on March 13th to prevent uploading crates exploiting this vulnerability, and we audited all crates ever published. We can confirm that no crates on crates.io are exploiting this.
For users of alternate registries, please contact the vendor of your registry to verify whether you are affected by this. The Rust team will release Rust 1.94.1 on March 26th, 2026, updating to a patched version of the
`tar` crate (along with other non-security fixes for the Rust toolchain), but that won't protect users of older versions of Cargo using alternate registries.
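The general hazard class here is archive entries reaching paths outside the extraction root. The actual CVE concerned permission bits the `tar` crate applied to directories outside that root, not path traversal per se, but the standard defence is the same shape; a hedged sketch (`resolve_inside` is a hypothetical helper, not Cargo code):

```rust
use std::path::{Component, Path, PathBuf};

/// Resolve an archive entry name against the extraction root, rejecting any
/// name that could escape it. Illustrative only: real extractors must also
/// worry about symlinks and, as in this advisory, about metadata (e.g.
/// permission bits) being applied to paths outside the root.
fn resolve_inside(dest: &Path, entry_name: &str) -> Option<PathBuf> {
    let mut out = dest.to_path_buf();
    for comp in Path::new(entry_name).components() {
        match comp {
            Component::Normal(part) => out.push(part),
            Component::CurDir => {} // "./" is harmless
            // `..`, absolute paths, and prefixes could escape `dest`: reject.
            _ => return None,
        }
    }
    Some(out)
}
```

An extractor would call this per entry and skip (or abort on) any `None` result before touching the filesystem.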
`tar` crate vulnerability and notifying the Rust project ahead of time, and William Woodruff for directly assisting the crates.io team with the mitigations. We'd also like to thank the Rust project members involved in this advisory: Eric Huss for patching Cargo; Tobias Bieniek, Adam Harvey and Walter Pearce for patching crates.io and analyzing existing crates; Emily Albini and Josh Stone for coordinating the response; and Emily Albini for writing this advisory. -
🔗 Baby Steps Maximally minimal view types rss
This blog post describes a maximally minimal proposal for view types. It comes out of a conversation at RustNation I had with lcnr and Jack Huey, where we were talking about various improvements to the language that are "in the ether", that basically everybody wants to do, and what it would take to get them over the line.
Example: MessageProcessor
Let's start with a simple example. Suppose we have a struct
`MessageProcessor` which gets created with a set of messages. It will process them and, along the way, gather up some simple statistics:

```rust
pub struct MessageProcessor {
    messages: Vec<String>,
    statistics: Statistics,
}

#[non_exhaustive] // Not relevant to the example, just good practice!
pub struct Statistics {
    pub message_count: usize,
    pub total_bytes: usize,
}
```

The basic workflow for a message processor is that you
- accumulate messages by
`push`ing them into the `self.messages` vector
- drain the accumulated messages and process them
- reuse the backing buffer to push future messages
Accumulating messages
Accumulating messages is easy:
```rust
impl MessageProcessor {
    pub fn push_message(&mut self, message: String) {
        self.messages.push(message);
    }
}
```

Processing a single message
The function to process a single message takes ownership of the message string because it will send it to another thread. Before doing so, it updates the statistics:
```rust
impl MessageProcessor {
    fn process_message(&mut self, message: String) {
        self.statistics.message_count += 1;
        self.statistics.total_bytes += message.len();
        // ... plus something to send the message somewhere
    }
}
```

Draining the accumulated messages
The final function you need is one that will drain the accumulated messages and process them. Writing this ought to be straightforward, but it isn't:
```rust
impl MessageProcessor {
    pub fn process_pushed_messages(&mut self) {
        for message in self.messages.drain(..) {
            self.process_message(message); // <-- ERROR: `self` is borrowed
        }
    }
}
```

The problem is that
`self.messages.drain(..)` takes a mutable borrow on `self.messages`. When you call `self.process_message`, the compiler assumes you might modify any field, including `self.messages`. It therefore reports an error. This is logical, but frustrating.

Experienced Rust programmers know a number of workarounds. For example, you could swap the `messages` field for an empty vector. Or you could invoke `self.messages.pop()`. Or you could rewrite `process_message` to be a method on the `Statistics` type. But all of them are, let's be honest, suboptimal. The code above is really quite reasonable; it would be nice if you could make it work in a straightforward way, without needing to restructure it.

What's needed: a way for the borrow checker to know what fields a method may access
The core problem is that the borrow checker does not know that
`process_message` will only access the `statistics` field. In this post, I'm going to focus on an explicit, and rather limited, notation, but I'll also talk about how we might extend it in the future.

View types extend struct types with a list of fields
The basic idea of a view type is to extend the grammar of a struct type to optionally include a list of accessible fields:
```rust
Type := StructName<...>
      | StructName<...> { .. }           // <-- what we are adding
      | StructName<...> { (fields),* }   // <-- what we are adding
```

A type like
`MessageProcessor { statistics }` would mean "a `MessageProcessor` struct where only the `statistics` field can be accessed". You could also include a `..`, like `MessageProcessor { .. }`, which would mean that all fields can be accessed, which is equivalent to today's struct type `MessageProcessor`.

View types respect privacy
View types would respect privacy, which means you could only write
`MessageProcessor { messages }` in a context where you can name the field `messages` in the first place.

View types can be named on
`self` arguments and elsewhere

You could use this to define that
`process_message` only needs to access the field `statistics`:

```rust
impl MessageProcessor {
    fn process_message(&mut self {statistics}, message: String) {
        //              ----------------------
        // Shorthand for: `self: &mut MessageProcessor {statistics}`

        // ... as before ...
    }
}
```

Of course you could use this notation in other arguments as well:
```rust
fn silly_example(.., mp: &mut MessageProcessor {statistics}, ..) { }
```

Explicit view-limited borrows
We would also extend borrow expressions so that it is possible to specify precisely which fields will be accessible from the borrow:
```rust
let messages = &mut some_variable {messages}; // Ambiguous grammar? See below.
```

When you do this, the borrow checker produces a value of type
`&mut MessageProcessor {messages}`.
some_variable { messages }or, more verbosely,some_variable { messages: messages }. I'm not sure what to do about that. I'll note some alternative syntaxes below, but I'll also note that it would be possible for the compiler to parse the AST in an ambiguous fashion and disambiguate later on once name resolution results are known.We automatically introduce view borrows in an auto-ref
In our example, though, the user never writes the
&mutborrow explicitly. It results from the auto-ref added by the compiler as part of the method call:pub fn process_pushed_messages(&mut self) { for message in self.messages.drain(..) { self.process_message(message); // <-- auto-ref occurs here } }The compiler internally rewrites method calls like
self.process_message(message)to fully qualified form based on the signature declared inprocess_message. Today that results in code like this:MessageProcessor::process_message(&mut *self, message)But because
`process_message` would now declare `&mut self { statistics }`, we can instead desugar to a borrow that specifies a field set:

```rust
MessageProcessor::process_message(&mut *self { statistics }, message)
```

The borrow checker would respect views
Integrating views into the borrow checker is fairly trivial. The way the borrow checker works is that, when it sees a borrow expression, it records a "loan" internally that tracks the place that was borrowed, the way it was borrowed (mut, shared), and the lifetime for which it was borrowed. All we have to do is to record, for each borrow using a view, multiple loans instead of a single loan.
For example, if we have
`&mut self`, we would record one mut-loan of `self`. But if we have `&mut self {field1, field2}`, we would record two mut-loans, one of `self.field1` and one of `self.field2`.

Example: putting it all together
OK, let's put it all together. This was our original example, collected:
```rust
pub struct MessageProcessor {
    messages: Vec<String>,
    statistics: Statistics,
}

#[non_exhaustive]
pub struct Statistics {
    pub message_count: usize,
    pub total_bytes: usize,
}

impl MessageProcessor {
    pub fn push_message(&mut self, message: String) {
        self.messages.push(message);
    }

    pub fn process_pushed_messages(&mut self) {
        for message in self.messages.drain(..) {
            self.process_message(message); // <-- ERROR: `self` is borrowed
        }
    }

    fn process_message(&mut self, message: String) {
        self.statistics.message_count += 1;
        self.statistics.total_bytes += message.len();
        // ... plus something to send the message somewhere
    }
}
```

Today,
`process_pushed_messages` results in an error:

```rust
pub fn process_pushed_messages(&mut self) {
    for message in self.messages.drain(..) {
        //         ------------- borrows `self.messages`
        self.process_message(message); // <-- ERROR!
        // --------------- borrows `self`
    }
}
```

The error arises from a conflict between two borrows:
`self.messages.drain(..)` desugars to `Iterator::drain(&mut self.messages, ..)` which, as you can see, mut-borrows `self.messages`;
- then
`self.process_message(..)` desugars to `MessageProcessor::process_message(&mut self, ..)` which, as you can see, mut-borrows all of `self`, which overlaps `self.messages`.
But in the "brave new world", we'll modify the program in one place:
```diff
- fn process_message(&mut self, message: String) {
+ fn process_message(&mut self {statistics}, message: String) {
```

and as a result, the
`process_pushed_messages` function will now borrow check successfully. This is because the two loans are now issued for different places:

- as before,
`self.messages.drain(..)` desugars to `Iterator::drain(&mut self.messages, ..)` which mut-borrows `self.messages`;
- but now,
`self.process_message(..)` desugars to `MessageProcessor::process_message(&mut self {statistics}, ..)` which mut-borrows `self.statistics`, which doesn't overlap `self.messages`.
At runtime, this is still just a pointer
One thing I want to emphasize is that "view types" are a purely static construct and do not change how things are compiled. They simply give the borrow checker more information about what data will be accessed through which references. The
`process_message` method, for example, still takes a single pointer to `self`.

This is in contrast with the workarounds that exist today. For example, if I were writing the above code, I might well rewrite
`process_message` into an associated fn that takes a `&mut Statistics`:

```rust
impl MessageProcessor {
    fn process_message(statistics: &mut Statistics, message: String) {
        statistics.message_count += 1;
        statistics.total_bytes += message.len();
        // ... plus something to send the message somewhere
    }
}
```

This would be annoying, of course, since I'd have to write
`Self::process_message(&mut self.statistics, ..)` instead of `self.process_message()`, but it would avoid the borrow check error.

Beyond being annoying, it would change the way the code is compiled. Instead of taking a reference to the
`MessageProcessor`, it now takes a reference to the `Statistics`.

In this example, the change from one type to another is harmless, but there are other examples where you need access to multiple fields, in which case it is less efficient to pass them individually.
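For contrast, the other workaround mentioned earlier (swapping the `messages` field for an empty vector) keeps `process_message` as an ordinary method taking a single `self` pointer. A self-contained sketch in today's Rust, using the same shape as the running example:

```rust
use std::mem;

struct Statistics {
    message_count: usize,
    total_bytes: usize,
}

struct MessageProcessor {
    messages: Vec<String>,
    statistics: Statistics,
}

impl MessageProcessor {
    fn process_message(&mut self, message: String) {
        self.statistics.message_count += 1;
        self.statistics.total_bytes += message.len();
    }

    // The "swap in an empty vector" workaround: move the buffer out of
    // `self` so the loop no longer holds a borrow of `self.messages`.
    fn process_pushed_messages(&mut self) {
        let mut messages = mem::take(&mut self.messages);
        for message in messages.drain(..) {
            self.process_message(message); // fine: `messages` is a local now
        }
        self.messages = messages; // put the (now empty) buffer back for reuse
    }
}

fn main() {
    let mut mp = MessageProcessor {
        messages: vec!["hi".to_string(), "there".to_string()],
        statistics: Statistics { message_count: 0, total_bytes: 0 },
    };
    mp.process_pushed_messages();
    assert_eq!(mp.statistics.message_count, 2);
    assert_eq!(mp.statistics.total_bytes, 7);
}
```

It compiles and preserves the backing buffer, but it's exactly the kind of restructuring-for-the-borrow-checker that view types would make unnecessary.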
Frequently asked questions
How hard would this be to implement?
Honestly, not very hard. I think we could ship it this year if we found a good contributor who wanted to take it on.
What about privacy?
I would require that the fields that appear in view types are 'visible' to the code that is naming them (this includes in view types that are inserted via auto-ref). So the following would be an error:
```rust
mod m {
    #[derive(Default)]
    pub struct MessageProcessor {
        messages: Vec<String>,
        ...
    }

    impl MessageProcessor {
        pub fn process_message(&mut self {messages}, message: String) {
            //                           ----------
            // It's *legal* to reference a private field here, but it
            // results in a lint, just as it is currently *legal*
            // (but linted) for a public method to take an argument of
            // private type. The lint is because doing this is effectively
            // going to make the method uncallable from outside this module.
            self.messages.push(message);
        }
    }
}

fn main() {
    let mut mp = m::MessageProcessor::default();
    mp.process_message(format!("Hello, world!"));
    // --------------- ERROR: field `messages` is not accessible here
    //
    // This desugars to:
    //
    // ```
    // MessageProcessor::process_message(
    //     &mut mp {messages}, // <-- names a private field!
    //     format!("Hello, world!"),
    // )
    // ```
    //
    // which names the private field `messages`. That is an error.
}
```
More-or-less. You can use them if the view types reference public fields:
```rust
#[non_exhaustive]
pub struct Statistics {
    pub message_count: usize,
    pub average_bytes: usize,
    // ... maybe more fields will be added later ...
}

impl Statistics {
    pub fn total_bytes(&self {message_count, average_bytes}) -> usize {
        //             ----------------------------
        // Declare that we only read these two fields.
        self.message_count * self.average_bytes
    }
}
```

Won't it be limited that view types more-or-less only work for private methods?
Yes! But it's a good starting point. And my experience is that this problem occurs most often with private helper methods like the one I showed here. It can occur in public contexts, but much more rarely, and in those circumstances it's often more acceptable to refactor the types to better expose the groupings to the user. This doesn't mean I don't want to fix the public case too, it just means it's a good use-case to cut from the MVP. In the future I would address public fields via abstract fields, as I described in the past.
What if I am borrowing the same sets of fields over and over? That sounds repetitive!
That's true! It will be! I think in the future I'd like to see some kind of 'ghost' or 'abstract' fields, like I described in my abstract fields blog post. But again, that seems like a "post-MVP" sort of problem to me.
Must we specify the field sets being borrowed explicitly? Can't they be inferred?
In the syntax I described, you have to write `&mut place {field1, field2}` explicitly. But there are many approaches in the literature to inferring this sort of thing, with row polymorphism perhaps being the most directly applicable. I think we could absolutely introduce this sort of inference, and in fact I'd probably make it the default, so that `&mut place` always introduces a view type, but it is typically inferred to "all fields" in practice. But that is a non-trivial extension to Rust's inference system, introducing a new kind of inference we don't do today. For the MVP, I think I would just lean on auto-ref covering by far the most common case, and have explicit syntax for the rest.

Man, I have to write the fields that my method uses in the signature? That sucks! It should be automatic!
I get that for many applications, particularly with private methods, writing out the list of fields that will be accessed seems a bit silly: the compiler ought to be able to figure it out.
On the flip side, this is the kind of inter-procedural inference we try to avoid in Rust, for a number of reasons:
- it introduces dependencies between methods which makes inference more difficult (even undecidable, in extreme cases);
- it makes for 'non-local errors' that can be really confusing as a user, where modifying the body of one method causes errors in another (think of the confusion we get around futures and `Send`, for example);
- it makes the compiler more complex, and we would not be able to parallelize as easily (not that we parallelize today, but that work is underway!)
The bottom line for me is one of staging: whatever we do, I think we will want a way to be explicit about exactly what fields are being accessed and where. Therefore, we should add that first. We can add the inference later on.
Why does this need to be added to the borrow checker? Why not desugar?
Another common alternative (and one I considered for a while…) is to add some kind of "desugaring" that passes references to fields instead of a single reference. I don't like this for two reasons. One, I think it's frankly more complex! This is a fairly straightforward change to the borrow checker, but that desugaring would leave code all over the compiler, and it would make diagnostics etc much more complex.
But second, it would require changes to what happens at runtime, and I don't see why that is needed in this example. Passing a single reference feels right to me.
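To make the runtime difference concrete, here is a sketch (my illustration, not code from the post) of what the "desugar to field references" alternative would mean: the callee receives one pointer per borrowed field instead of a single `&mut` to the struct, so more data crosses the call boundary as the field set grows.

```rust
struct Statistics {
    message_count: usize,
    total_bytes: usize,
}

// Hypothetical desugared form: each borrowed field becomes its own
// parameter, i.e. two references are passed instead of one.
fn process_message_desugared(
    message_count: &mut usize,
    total_bytes: &mut usize,
    message: &str,
) {
    *message_count += 1;
    *total_bytes += message.len();
}

fn process() -> (usize, usize) {
    let mut stats = Statistics { message_count: 0, total_bytes: 0 };
    // Caller side: the single view borrow becomes two disjoint field borrows.
    process_message_desugared(&mut stats.message_count, &mut stats.total_bytes, "hello");
    (stats.message_count, stats.total_bytes)
}

fn main() {
    assert_eq!(process(), (1, 5));
}
```

With the borrow-checker approach from the post, by contrast, the ABI is unchanged: a single `&mut Statistics`-shaped reference is passed, and only the compile-time borrow analysis knows about the restricted field set.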
What about the ambiguous grammar? What other syntax options are there?
Oh, right, the ambiguous grammar. To be honest I've not thought too deeply about the syntax. I was trying to have the type `Struct { field1, field2 }` reflect struct constructor syntax, since we generally try to make types reflect expressions, but of course that leads to the ambiguity in borrow expressions that causes the problem:

```rust
let foo = &mut some_variable { field1 };
//             ------------- is this a variable or a field name?
```

Options I see:
- Make it work. It's not truly ambiguous, but it does require some semantic disambiguation, i.e., in at least some cases, we have to delay resolving this until name resolution can complete. That's unusual for Rust. We do it in some small areas, most notably around the interpretation of a pattern like `None` (is it a binding to a variable `None` or an enum variant?).
- New syntax for borrows only. We could keep the type syntax but make the borrow syntax different, maybe `&mut {field1} in some_variable` or something. Given that you would rarely type the explicit borrow form, that seems good?
- Some new syntax altogether. Perhaps we want to try something different, or introduce a keyword everywhere? I'd be curious to hear options there. The current one feels nice to me but it occupies a "crowded syntactic space", so I can see it being confusing to readers who won't be sure how to interpret it.
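As an aside on that "crowded syntactic space": today's Rust already restricts bare struct literals in some expression-head positions for a closely related `name { ... }` ambiguity. A small runnable illustration (my example, not from the post; view-type syntax itself is not yet real Rust):

```rust
struct Config {
    field1: u32,
}

fn check() -> u32 {
    let field1 = 1u32;
    // Field-init shorthand: `Config { field1 }` is a constructor here.
    let c = Config { field1 };
    // In an `if` condition, a bare struct literal is a parse error
    // (`if x == Config { field1 } { .. }` won't parse, because the `{`
    // could start a block); it must be parenthesized:
    if c.field1 == (Config { field1 }).field1 {
        c.field1
    } else {
        0
    }
}

fn main() {
    assert_eq!(check(), 1);
}
```

So `&mut some_variable { field1 }` would be joining a part of the grammar where Rust already leans on context-dependent restrictions to stay parseable.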
Conclusion: this is a good MVP, let's ship it!
In short, I don't really see anything blocking us from moving forward here, at least with a lang experiment.
-
- March 20, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-20 rss
IDA Plugin Updates on 2026-03-20
Activity:
- capa
- c930891c: rules: address code review feedback for bytes prefix index
- f572c01d: rules: clarify bytes_prefix_index guard and add mixed-pattern test
- 26735903: rules: validate _RuleFeatureIndex structure when loading from cache
- 5e19574b: rules: build bytes prefix index once at construction, not per _match(…
- b868be55: rules: simplify bytes prefix indexing and add collision tests
- 501ee065: rules: index extracted bytes by length prefix for O(1) candidate sele…
- ed256d24: rules: index extracted bytes by length prefix for O(1) candidate sele…
- 01c5602b: tests: import capa.render.default in test_render (#2938)
- d73c76a8: build(deps-dev): bump flatted from 3.3.1 to 3.4.2 in /web/explorer (#…
- 0b1e3bfb: cache: support *BSD (#2949)
- DeepExtractIDA
- 8dddb157: Add blog post to README Overview section
- IDA
- ida-cyberchef
- a89ea230: fix: remove clipboard copy dialog, use logger.info instead
- 59c08a09: fix: disable all output buttons when they have nothing to act on
- e2d5811d: fix: disable copy-to-IDB button when recipe output is not bytes
- 4a9f6661: fix: handle string output in copy-to-IDB and add debug logging
- b1185ce8: fix: use object signal type for 64-bit IDA addresses
- 3c23abb0: fix: make "From Location" input source responsive and report errors
- ddb28e07: fix: make "From Location" input source responsive and report errors
- a269716f: fix: patch IDB verification and error visibility
- ccf23ede: fix: remove hardcoded blue border from operation search input
- ida-sdk
- 138a487a: build: Update ida-cmake submodule (ARM64 Linux fix)
- idawilli
- 09ec80f2: dissector: render strings inline if possible
- recover
- cba5686d: Minor updates
-
🔗 r/LocalLLaMA Qwen 3.5 397B is the best local coder I have used until now rss
Omg, this thing is amazing. I have tried all its smaller siblings 122b/35b/27b, gpt-oss 120b, StepFun 3.5, MiniMax M2.5, Qwen Coder 80B and also the new Super Nemotron 120b. None even come close to the knowledge and the bug-freeness of the big Qwen 3.5.
Ok, it is the slowest of them all, but what I am losing in token generation speed I am gaining by not needing multiple turns to fix its issues, and by not waiting on endless thinking. And yes, in contrast to its smaller siblings or to StepFun 3.5, its thinking is actually very concise.
And the best of it all: Am using quant IQ2_XS from AesSedai. This thing is just 123GiB! All the others I am using at at least IQ4_XS (StepFun 3.5, MiniMax M2.5) or at Q6_K (Qwen 3.5 122b/35b/27b, Qwen Coder 80b, Super Nemotron 120b).
submitted by /u/erazortt
[link] [comments] -
🔗 r/Yorkshire Please help me find a road. rss
Hello Yorkshire. Visitor here. A few years ago a friend and I decided on a whim to visit Haworth (from Liverpool), and once we got towards the edge of Lancs the map conked out. We had a vague idea so we winged it, but ended up taking some very peculiar roads, one in particular I'm trying to identify. I have only the vaguest snippets of memory, so if any of you can help me name that road I would be eternally grateful.
I think it was a road in a village, not like a big highway and not surrounded by greenery.
It felt EXTREMELY steep, but we may have just been dramatic. There was traffic both ways but it was extremely narrow, so on top of what felt like a vertical drop on a wet road we played dodgems too.
From the road we were on before it, we had no inkling what we were heading into; we just turned right and then it seemed like an immediate drop.
I don't recall any footpaths at the side of the road, and it seemed we had to hug the wall to pass the cars coming the other direction.
It wasn't a particularly long road either, so it's not one of the big ones Google is suggesting.
I know I'm not giving much info here but it's all I got. I will take any and all suggestions and check Google maps.
Thank you in advance
A person who might have imagined the whole thing.
submitted by /u/Shut-up-shabby
[link] [comments] -
🔗 r/Yorkshire The horror, Yorkshire tea made by an American rss
submitted by /u/NuisanceForYou
[link] [comments]
-
🔗 badlogic/pi-mono v0.61.1 release
New Features
- Typed `tool_call` handler return values via `ToolCallEventResult` exports from the top-level package and core extension entry. See docs/extensions.md.
- Updated default models for `zai`, `cerebras`, `minimax`, and `minimax-cn`, and aligned MiniMax catalog coverage and limits with the current provider lineup. See docs/models.md and docs/providers.md.
Added
- Added `ToolCallEventResult` to the `@mariozechner/pi-coding-agent` top-level and core extension exports so extension authors can type explicit `tool_call` handler return values (#2458)
Changed
- Changed the default models for `zai`, `cerebras`, `minimax`, and `minimax-cn` to match the current provider lineup, and added missing `MiniMax-M2.1-highspeed` model entries with normalized MiniMax context limits (#2445 by @1500256797)
Fixed
- Fixed `ctrl+z` suspend and `fg` resume reliability by keeping the process alive until the `SIGCONT` handler restores the TUI, avoiding immediate process exit in environments with no other live event-loop handles (#2454)
- Fixed `createAgentSession({ agentDir })` to derive the default persisted session path from the provided `agentDir`, keeping session storage aligned with settings, auth, models, and resource loading (#2457)
- Fixed shared keybinding resolution to stop user overrides from evicting unrelated default shortcuts such as selector confirm and editor cursor keys (#2455)
- Fixed Termux software keyboard height changes from forcing full-screen redraws and replaying TUI history on every toggle (#2467)
- Fixed project-local npm package updates to install npm `latest` instead of reusing stale saved dependency ranges, and added `Did you mean ...?` suggestions when `pi update <source>` omits the configured npm or git source prefix (#2459)
-
🔗 r/reverseengineering [Studying] Analyzing njRAT Lime and Green Edition rss
submitted by /u/AcrobaticMonitor9992
[link] [comments] -
🔗 MetaBrainz Picard 3 alpha 4 released rss
Today, we're making available another pre-release version for the upcoming MusicBrainz Picard 3. Alpha 4 focuses on fixing issues that were found in the previous releases as well as some minor improvements and updated translations.
Download links and a list of changes since Picard 3 alpha 3 are available below. For a more detailed overview of what is new in Picard 3 please see the previous blog post Picard 3 Alpha Release.
While we have all the major features implemented and with the latest bug fixes we are confident in the current code, this is still a pre-release and there might be bugs. If you use this, do so with care, backup your files and please report any issues you encounter.
Some of the changes are also backward incompatible, hence we recommend you make a backup of your Picard.ini config file before trying the alpha version. You can do so in Picard’s Options under Advanced > Maintenance.
What’s new?
Bugfixes
- PICARD-3189 - Restore defaults does not work properly when profile is enabled
- PICARD-3204 - PyQt6-Qt6 dependency breaks Linux distro environments
- PICARD-3205 - fpcalc error message in options stays red even after selecting a valid fpcalc
- PICARD-3206 - `itunes_cddb_1` should map to `COMM:iTunes_CDDB_1` in ID3
- PICARD-3211 - macOS: SSL CERTIFICATE_VERIFY_FAILED loading plugins registry
- PICARD-3213 - `is_local_path` incorrectly handles Windows drive-relative paths (C:repo)
- PICARD-3219 - Cover art not displayed
- PICARD-3220 - Image processing filters (e.g. ignore smaller images) are ignored for local files
- PICARD-3221 - Picard tries to remove a file from cluster twice, raising an exception
- PICARD-3227 - Dark theme detection for GNOME can fail
- PICARD-3229 - Guessing track number and title from filename fails with "index out of range"
- PICARD-3230 - Deleting `totaldiscs` also removes `discnumber` from ASF tags
- PICARD-3234 - Columns being added are not visible
- PICARD-3235 - Fingerprint column shows text overlapping the icon
New Features
- PICARD-2383 - Add `musicbrainz_composerid` tag
- PICARD-3216 - Provide option for automatically checking for available plugin updates
- PICARD-3223 - Allow plugins to add blocking album tasks
Improvements
- PICARD-3212 - Qt's toolbar extension button (overflow arrow) uses a dark icon that is almost invisible on dark backgrounds
- PICARD-3231 - Improve plugin blacklist implementation and associated tests
- PICARD-3232 - Review and improve plugins registry redirects code and tests
Tasks
- PICARD-2859 - Update documentation for profile highlight color options
- PICARD-2860 - Update documentation for new command line options for additional debug output
- PICARD-2861 - Update documentation for new network cache size option setting
- PICARD-2862 - Update documentation for new date sanitization settings
- PICARD-2877 - Update documentation for revised Options > Advanced > Maintenance page
- PICARD-3136 - Update documentation for ReadTheDocs support options
- PICARD-3183 - Clarify documentation of option profiles in the section Configuration
- PICARD-3200 - Clarify documentation “Understanding Acoustic Fingerprinting and AcoustIDs”
- PICARD-3217 - Document automatic plugin update checking
- PICARD-3226 - Enable the Dutch translation of the documentation
- PICARD-3233 - Redirect documentation URL from GitHub Pages to ReadTheDocs
Download
As this is a pre-release and early alpha version, it is not available on all the channels where Picard’s current stable version is available.
We appreciate your interest in trying this new version. Use with care, backup your files and please use the MetaBrainz community forums and the ticket system to give feedback and report bugs.
- MusicBrainz Picard for Windows (installer)
- MusicBrainz Picard for Windows (portable)
- MusicBrainz Picard for macOS (Intel)
- MusicBrainz Picard for macOS (ARM64)
- Source code
Picard is free software and the source code is available on GitHub.
Acknowledgements
This release contains code contributions by zas, rdswift, outsidecontext, iron-prog, metaisfacil and sanskarmit. Translations were updated by mfmeulenbelt (Dutch), oleh_hishak (Ukrainian), Arhidimon (Ukrainian), marcriera (Catalan), wileyfoxyx (Russian) and theoasim (Greek). Special shout-out to mfmeulenbelt for completely translating the Picard User Guide into Dutch!
-
🔗 r/LocalLLaMA Glm 5.1 👀 rss
submitted by /u/Namra_7
[link] [comments]
-
🔗 r/Leeds Best place for fish and chips? LS15-LS14 rss
Thanks!
submitted by /u/Bright_Fill_4770
[link] [comments] -
🔗 r/Yorkshire Five beautiful towns to visit this spring rss
Towns featured: Grassington, North Yorkshire
Hebden Bridge, West Yorkshire
Richmond, North Yorkshire
Slaithwaite, West Yorkshire
Knaresborough, North Yorkshire
submitted by /u/Yorkshire-List
[link] [comments]
-
🔗 r/york York mum calls for meningitis B vaccine to be given to teenagers rss
submitted by /u/Kagedeah
[link] [comments]
-
🔗 r/Leeds Scooters are already out rss
Saw these today on Manor road
submitted by /u/datsnotright0
[link] [comments] -
🔗 r/LocalLLaMA Ooh, new drama just dropped 👀 rss
For those out of the loop: Cursor's new model, Composer 2, is apparently built on top of Kimi K2.5 without any attribution. Even Elon Musk has jumped into the roasting.
submitted by /u/Careful_Equal8851
[link] [comments]
-
🔗 r/Leeds Gonna make the planned (still going ahead) 50% service increase to Flixbus services in/out of Leeds a little awkward. rss
To simplify: Flixbus (the green intercity coach operator), which like National Express and Megabus (when they operated here) uses subcontractors, has been given notice to stop operating from Leeds Bus Station, which it has only been doing since 2024.
Before that it was Kirkgate, and outside a hotel (can't remember which one). Now if it does go through, it's gonna make the 50% service increase awkward, because where in Leeds would be an ideal and safe spot?
submitted by /u/CaptainYorkie1
[link] [comments] -
🔗 r/Leeds TPP Interview process and thoughts rss
I recently had a final interview with TPP so I wanted to share my experience and see what everyone thinks. I did the Logic and Reasoning test in February and after sending them a CV with all my academic grades, was invited for a final interview. I’ve read a lot of glassdoor reviews and Reddit posts that say that the work culture is terrible and turnover is high along with some horrible stories about interviews, so I was prepared for the worst.
The interview itself was unusual but not bad. I essentially described my CV to them, answering a few questions about teamwork, past experience etc and solved a problem similar to the Logic and Reasoning test questions. I was told that I would hear back in a week or so.
My question is this: has TPP changed at all since the new CEO, and would anyone recommend working there now? I suspect that I was given an easy time because I have strong academic grades and I think I did well in the L&R test, so they were trying to give a good impression. Were it not for what I've heard online, I would be happy with a job offer: the salary is high, and I'm in the Army Reserve with my regiment's main base in Leeds, so it would be very convenient. I don't have any job offers right now, but I have had several interviews (not final interviews), so I'm not super desperate right now.
submitted by /u/WilliamNash97
[link] [comments] -
🔗 r/york Map of skate spots around York rss
I’ve been building a project that maps skate spots around the world and just added a York guide. It includes parks, street spots and DIY builds around the city. Guide: https://urbanatlas.uk/guides/skate-spots-york
If anyone knows good local spots that aren’t listed yet, please add them onto the map.
submitted by /u/urbanatlas-dev
[link] [comments]
-
🔗 r/Yorkshire Is there a circular trail going from Marsden Moor (station) to Dove Stone or Dove stone edge? rss
Trying to build up a list of hiking and nature walk trails that I can access from Leeds by train and bus.
Marsden is a big obvious one, but I'd like to try and get to Dovestone Edge, any ideas?
submitted by /u/saltlampsandphotos
[link] [comments] -
🔗 r/LocalLLaMA Running TinyLlama 1.1B locally on a PowerBook G4 from 2002. Mac OS 9, no internet, installed from a CD. rss
Hey everyone! I've been working on this for months and today's the day. MacinAI Local is a complete local AI inference platform that runs natively on classic Macintosh hardware, no internet required.
What makes this different from previous retro AI projects: every "AI on old hardware" project I've seen (llama98.c on Windows 98, llama2.c64 on Commodore 64, llama2 on DOS) ports Karpathy's llama2.c with a single tiny 260K-parameter model. MacinAI Local is a ground-up platform:
- Custom C89 inference engine: not a port of llama.cpp or llama2.c. Written from scratch targeting Mac Toolbox APIs and classic Mac OS memory management.
- Model-agnostic: runs GPT-2 (124M), TinyLlama, Qwen (0.5B), SmolLM, and any HuggingFace/LLaMA-architecture model via a Python export script. Not locked to one toy model.
- 100M parameter custom transformer: trained on 1.1GB of Macintosh-specific text (Inside Macintosh, MacWorld, Usenet archives, programming references).
- AltiVec SIMD optimization: 7.3x speedup on PowerPC G4. Went from 2.4 sec/token (scalar) down to 0.33 sec/token with Q8 quantization and 4-wide unrolled vector math with cache prefetch.
- Agentic Mac control: the model generates AppleScript to launch apps, manage files, open control panels, and automate system tasks. It asks for confirmation before executing anything.
- Disk paging: layers that don't fit in RAM get paged from disk, so even machines with limited memory can run inference. TinyLlama 1.1B runs on a machine with 1GB RAM by streaming layers from the hard drive.
- Speech Manager integration: the Mac speaks every response aloud using PlainTalk voices.
- BPE tokenizer: 8,205 tokens including special command tokens for system actions.
The demo hardware: PowerBook G4 Titanium (2002), 1GHz G4, 1GB RAM, running Mac OS 9.2.2.

Real hardware performance (PowerBook G4 1GHz, Mac OS 9.2, all Q8):

| Model | Params | Q8 Size | Tokens/sec | Per token | Notes |
|---|---|---|---|---|---|
| MacinAI Tool v7 | 94M | 107 MB | 2.66 tok/s | 0.38s | Custom tool model, AppleScript |
| GPT-2 | 124M | 141 MB | 1.45 tok/s | 0.69s | Text completion |
| SmolLM 360M | 360M | 394 MB | 0.85 tok/s | 1.18s | Chat model |
| Qwen 2.5 0.5B | 494M | 532 MB | 0.63 tok/s | 1.59s | Best quality |
| TinyLlama 1.1B | 1.1B | 1.18 GB | 0.10 tok/s | 9.93s | Disk paging (24.5 min for 113 tok) |

Technical specs:

| | Details |
|---|---|
| Language | C89 (CodeWarrior Pro 5) |
| Target OS | System 7.5.3 through Mac OS 9.2.2 |
| Target CPUs | 68000, 68030, 68040, PowerPC G3, G4 |
| Quantization | Float32, Q8_0 (int8 per-group) |
| Architectures | LLaMA-family (RMSNorm/SwiGLU/RoPE) + GPT-2 family (LayerNorm/GeLU/learned pos) |
| Arena allocator | Single contiguous block, 88% of physical RAM, no fragmentation |
| AltiVec speedup | 7.3x over scalar baseline |

What's next:
Getting the 68040 build running on a 1993 LC 575 / Color Classic Mystic. The architecture already supports it, just need the hardware in hand.
Demo: https://youtu.be/W0kV_CCzTAM
Technical write-up: https://oldapplestuff.com/blog/MacinAI-Local/
Happy to answer any technical questions. I've got docs on the AltiVec optimization journey (finding a CodeWarrior compiler bug along the way), the training pipeline, and the model export process.
Thanks for the read!
submitted by /u/SDogAlex
[link] [comments] -
🔗 @malcat@infosec.exchange In Malcat, hitting
will start the in-GUI MCP server mastodon
In Malcat, hitting
will start the in-GUI MCP server (works in free version too). You can then interact with the current analysis using your LLM of choice. Here I renamed functions and variables of the C2 dispatcher function for an unknown malware:
-
🔗 r/reverseengineering Supply-chain attack using invisible code hits GitHub and other repositories rss
submitted by /u/EchoOfOppenheimer
[link] [comments] -
🔗 r/Yorkshire Skipton, North Yorkshire has been named the best place to live in the North and North East rss
The Sunday Times has named Skipton in North Yorkshire the best place to live in the North and Northeast in our annual roundup of the best places to live.
“I feel very proud to be from here,” said Skipton resident Adams, 43, a business adviser and cofounder of Wild & Flo, a vertical farming company. “It’s very welcoming and more diverse than you might think. My wife is from Lebanon and she loves Skipton. We purposefully moved here because it’s a nice place to live, the schools are good and the landscape is lovely.”
The market town has become a haven for families seeking fresh air, as well as downsizers and retired people who like the affordable property prices. No wonder it was crowned the happiest place to live in Britain by Rightmove last year.
Other Yorkshire locations named on the list are:
- Howardian Hills
- Saltburn-by-the-Sea
- Slaithwaite
- York
- Leeds
Explore the full list, and find out what the judges had to say, at the link https://www.thetimes.com/best-places-to-live
submitted by /u/TimesandSundayTimes
[link] [comments] -
🔗 r/york Fruit and veg market rss
I really want to get into the habit of getting my fruit and veg locally but all of the greengrocers here are very expensive compared to those in my hometown. Can anyone recommend a reasonably priced place?
submitted by /u/Financial-Abies-3645
[link] [comments] -
🔗 r/york Best Ghost tours/things to do rss
I’m visiting in a few weeks from America with my wife and 3 children, ages 6, 6, and 3. Any recommendations for a ghost tour we could take with the kids, and one that’s more adult-friendly my wife and I could do later at night ourselves after the kids go to bed? (Mother-in-law will be with us but she will stay back while the kids sleep.)
Also any suggestions on other things to do with the kids in mind while we are there as well are appreciated.
submitted by /u/Anxious_Wasabi8392
[link] [comments] -
🔗 r/LocalLLaMA Qwen3.5 is a working dog. rss
I saw someone say recently something to the effect of: “that man is a working dog. if you don’t give him a job, he’ll tear up the furniture.” Qwen3.5 is a working dog.
I’ve been working with this model a lot recently. I’ve baked three dozen custom quantizations. I’ve used three different execution backends. Of everything I’ve learned I can at least report the following.
These models absolutely hate having no context. They are retrieval hounds. They want to know their objectives going into things. Your system prompt is 14 whole tokens? You’re going to have a bad time. 27B doesn’t even become remotely useful sub 3K tokens going into it. It will think itself raw getting to 5K tokens just to understand what it’s doing.
And I should note: this makes a lot of sense. These models, in my estimation, were trained agentic-first. Agent models want to know their environment. What tools they have. Their modality (architect, code, reviewer, etc). With no system prompt or prefill they stumble around aimlessly until they have something to grab onto. In my opinion: this is a good thing. Alibaba has bred the working dog of the open weights model. It is not a lap pet.
As you evaluate this model family, please keep in mind that the Qwen team has, very deliberately, created a model that wants a job. It does not want to hear “hi.” It wants to hear what you actually need done.
Also the 35B MoE is kinda trash. That isn’t poetic, it’s just true.
submitted by /u/dinerburgeryum
[link] [comments] -
🔗 r/reverseengineering Lightweight Python bindings for JADX rss
submitted by /u/Gloomy_King8147
[link] [comments] -
🔗 badlogic/pi-mono v0.61.0 release
New Features
- Namespaced keybinding ids and a unified keybinding manager across the app and TUI. See docs/keybindings.md and docs/extensions.md.
- JSONL session export and import via `/export <path.jsonl>` and `/import <path.jsonl>`. See README.md and docs/session.md.
- Resizable sidebar in HTML share and export views. See README.md.
Breaking Changes
- Interactive keybinding ids are now namespaced, and `keybindings.json` now uses those same canonical namespaced ids. Older config files are migrated automatically on startup. Custom editors and extension UI components still receive an injected `keybindings: KeybindingsManager`. They do not call `getKeybindings()` or `setKeybindings()` themselves. Declaration merging applies to that injected type (#2391)
- Extension author migration: update `keyHint()`, `keyText()`, and injected `keybindings.matches(...)` calls from old built-in names like `"expandTools"`, `"selectConfirm"`, and `"interrupt"` to namespaced ids like `"app.tools.expand"`, `"tui.select.confirm"`, and `"app.interrupt"`. See docs/keybindings.md for the full list. `pi.registerShortcut("ctrl+shift+p", ...)` is unchanged because extension shortcuts still use raw key combos, not keybinding ids.
Added
- Added `gpt-5.4-mini` to the `openai-codex` model catalog (#2334 by @justram)
- Added JSONL session export and import via `/export <path.jsonl>` and `/import <path.jsonl>` (#2356 by @hjanuschka)
- Added a resizable sidebar to HTML share and export views (#2435 by @dmmulroy)
Fixed
- Tests for session-selector-rename and tree-selector are now keybinding-agnostic, resetting editor keybindings to defaults before each test so user `keybindings.json` cannot cause failures (#2360)
- Fixed custom `keybindings.json` overrides to shadow conflicting default shortcuts globally, so bindings such as `cursorUp: ["up", "ctrl+p"]` no longer leave default actions like model cycling active (#2391)
- Fixed concurrent `edit` and `write` mutations targeting the same file to run serially, preventing interleaved file writes from overwriting each other (#2327)
- Fixed RPC mode to redirect unexpected stdout writes to stderr so JSONL responses remain parseable (#2388)
- Fixed auto-retry with tool-using retry responses so `session.prompt()` waits for the full retry loop, including tool execution, before returning (#2440 by @pasky)
- Fixed `/model` to refresh scoped model lists after `models.json` changes, avoiding stale selector contents (#2408 by @Perlence)
- Fixed `validateToolArguments()` to fall back gracefully when AJV schema compilation is blocked in restricted runtimes such as Cloudflare Workers, allowing tool execution to proceed without schema validation (#2395)
- Fixed CLI startup to suppress process warnings from leaking into terminal, print, and RPC output (#2404)
- Fixed bash tool rendering to show elapsed time at the bottom of the tool block (#2406)
- Fixed custom theme file watching to reload updated theme contents from disk instead of keeping stale cached theme data (#2417, #2003)
- Fixed footer Git branch refreshes to run asynchronously so branch watcher updates do not block the UI (#2418)
- Fixed invalid extension provider registrations to surface an extension error without preventing other providers from loading (#2431)
- Fixed Windows bash execution hanging for commands that spawn detached descendants inheriting stdout/stderr handles, which caused `agent-browser` and similar commands to spin forever (#2389 by @mrexodia)
- Fixed `google-vertex` API key resolution to ignore placeholder auth markers like `<authenticated>` and fall back to ADC instead of sending them as literal API keys (#2335)
- Fixed desktop clipboard text copy to prefer native OS clipboard integration before shell fallbacks, improving reliability on macOS and Windows (#2347)
- Fixed Bun Bedrock provider registration to survive provider resets and session reloads in compiled binaries (#2350 by @unexge)
- Fixed OpenRouter reasoning requests to use the provider's nested reasoning payload, restoring thinking level support for OpenRouter models and custom compat settings (#2298 by @PriNova)
- Fixed Bedrock application inference profiles to support prompt caching when `AWS_BEDROCK_FORCE_CACHE=1` is set, covering profile ARNs that do not expose the underlying Claude model name (#2346 by @haoqixu)
-
🔗 HexRaysSA/plugin-repository commits sync repo: +1 plugin, +4 releases rss
sync repo: +1 plugin, +4 releases

## New plugins
- [renimp](https://github.com/milankovo/renimp) (1.0.0)

## New releases
- [BinSync](https://github.com/binsync/binsync): 5.12.0
- [IDAssist](https://github.com/symgraph/IDAssist): 1.4.0, 1.3.0
-
🔗 Rust Blog What we heard about Rust's challenges, and how we can address them rss
When we set out to understand Rust's challenges, we expected to hear about the borrow checker learning curve and maybe some ecosystem gaps. We certainly did. A lot. But the picture is more nuanced than that.
The conventional wisdom is that Rust has a steep learning curve, but once you "get it," smooth sailing awaits. We found that while some challenges disappear with experience, they are replaced with others. Beginners struggle with ownership concepts, experts face domain-specific challenges: async complexity for network developers, certification gaps for safety-critical teams, ecosystem maturity issues for embedded developers.
This isn't all doom and gloom though: we ultimately found that despite Rust's challenges, it remains necessary and desired:
If all the things laid out [to make Rust better] were done, I'd be a happy Rust programmer. If not, I'd still be a Rust programmer. -- Engineering manager adopting Rust for performance
The universal challenges that affect everyone
Across every interview, regardless of experience level or domain, we heard about the same core set of challenges. These aren't beginner problems that go away—they're fundamental friction points that manifest differently as developers grow.
Compilation performance: the universal productivity tax
Every single cohort we analyzed—from novices to experts, from embedded developers to web developers—cited compilation times as a significant barrier to productivity:
"Java takes about 100 milliseconds, Rust anywhere from 5 seconds to a minute depending on what you changed" -- Distinguished engineer working on backend systems at a large company
"8 to 10s iteration cycle... when you want to tweak the padding on a box" -- GUI development team
The impact varies by domain, but the pattern is consistent. CLI tool and GUI developers, who need rapid iteration cycles, are hit hardest. Safety-critical developers with 25-30 minute build times face workflow disruption. Size-constrained embedded developers are forced into optimized builds that take longer to compile and complicate debugging.
What's particularly important to note is that this isn't just about absolute build times; it's about the development velocity tax that compounds over time. Long compile times have a strong negative impact on code iteration time. Anything that reduces this iteration time - hot reloading, fast debug builds, faster linking - will have an outsized impact on development velocity.
Moreover, the compilation performance tax compounds at scale. Individual developers might tolerate 5-10 second builds, but teams with CI/CD pipelines, large codebases, and frequent iterations face exponentially worse impacts. One participant noted 25-30 minute builds that create "wait for 30 minutes before the tool finds out I made a mistake" cycles.
The borrow checker: first it's sour, then it's sweet
The borrow checker is often touted as a "beginner problem", and we found that this is largely true: novices are hit hardest, but the struggle often extends even into the stage where a developer is otherwise comfortable writing Rust and still gets tripped up by the borrow checker sometimes.
However, highly-experienced Rust developers basically never cite the borrow checker itself as a frustration for them.
Ownership: The first time I went through the chapter, I was really like, what is this? - Developer learning Rust as a first language
I actually did not understand the borrow checker until I spent a lot of time writing Rust - Executive at a developer tools company
Async complexity: the "Three Horsemen" problem
Multiple participants identified
`async` as a pain point. Many people, not just beginners, often choose to completely avoid it, instead focusing solely on sync Rust. This is because, for many, `async` Rust feels completely different.

"My biggest complaint with Rust is async. If we want to use [a tool], we're forced into that model... not just a different language, but a different programming model... I have zero [experience], I've been avoiding it." -- Developer working on a security agent at a large company
Of course, those who do use it often share how complex it is, how it can feel incomplete in ways, or how it is difficult to learn.
"When you got Rust that's both async and generic and has lifetimes, then those types become so complicated that you basically have to be some sort of Rust god" -- Software engineer with production Rust experience
"My general impression is actually pretty negative. It feels unbaked... there is a lot of arcane knowledge that you need" -- Research software engineer
"There's a significant learning gap between basic Rust and async programming... creating a 'chasm of sadness' that requires substantial investment to cross." -- Professional developer
The async complexity isn't just about individual developer experience: it is exacerbated by ecosystem fragmentation and architectural lock-in:
"the fact that there is still plenty of situations where you go that library looks useful I want to use that library and then that immediately locks you into one of tokio or one of the other runtimes" -- Community-focused developer
This fragmentation forces architectural decisions early and limits library compatibility, creating a unique challenge among programming languages.
Of course, it would be remiss not to mention that plenty of people do express positive sentiments about async, often despite the challenges mentioned.
Ecosystem navigation: choice paralysis and tacit knowledge
The Rust ecosystem shows uneven maturity across domains: excellent for CLI tools and web backends, but significantly lacking in other domains such as embedded and safety-critical applications. This creates a fragmented adoption landscape where Rust's reputation varies dramatically depending on your domain.
"Biggest reason people don't use Rust is that the ecosystem they'd be entering into is not what they expect. It doesn't have the tooling that C++ has nor the libraries." -- Developer for a large tech company
"I think the amount of choice you can have often makes it difficult to make the right choice" -- Developer transitioning from high-level languages
"the crates to use are sort of undiscoverable... There's a layer of tacit knowledge about what crates to use for specific things that you kind of gather through experience" -- Web developer
The problem isn't lack of libraries—it's that choosing the right ones requires expertise that newcomers don't have.
The Rust Project has made this choice mostly intentionally though: it has chosen not to bless certain crates in order to not unduly stifle innovation. The expectation is that if a newer crate ends up being "better" than some well-established crate, then that newer crate should become more popular; but, if the Project recommends using the more established crate, then that is less likely to happen. This is a tradeoff that might be worth reevaluating, or finding clever solutions to.
How challenges amplify differently across domains
While the core challenges are universal, each domain adds its own challenges, which end up being either adoption blockers or acceptable trade-offs.
Embedded systems: where every constraint matters
Embedded developers work in the most resource-constrained environments, which can amplify other challenges, such as learning the language.
"if you pull in a crate, you pull in a lot of things and you have no control" -- Embedded systems researcher
"can't use standard collections like hashmaps" -- Embedded software engineer
Debug builds become too large for small controllers, forcing developers into optimized builds that complicate debugging. Cross-compilation adds another layer of complexity. The "no-std" ecosystem, while growing, still has significant gaps.
Safety-critical systems: stability vs. innovation tension
Safety-critical developers need Rust's memory safety guarantees, but face unique challenges around certification and tooling:
"we don't have the same tools we have to measure its safety criticality as we do in C++ and I think it's a worry point" -- Safety systems engineer
"not a lot of people know Rust not a lot of managers actually trust that this is a technology that's here to stay" -- Safety-critical developer on organizational barriers
The tension between Rust's rapid evolution and safety-critical requirements for stability creates adoption barriers even when the technical benefits are clear.
As a note: we previously wrote a blog post all about safety-critical Rust. Check it out!
GUI development: compile times inhibit iteration speed
GUI developers need rapid visual feedback, making compilation times particularly painful:
We've got a UI framework that's just Rust code so when you want to tweak the padding on a box ... it's a pain that we just kind of accept a 10 seconds or more iteration cycle. -- Developer working on a GUI app
Background-dependent learning paths
One important insight gained from this work (and it seems obvious once you think about it) is that learning Rust isn't a universal experience: it depends heavily on your background:
High-level language developers must learn systems concepts alongside Rust:
The challenge for me was I needed to grasp the idea of a lower-level computer science ideas and Rust at the same time. -- Developer with Typescript background
Low-level developers often struggle to unlearn patterns and concepts:
I'm coming from C++ world so I had the big class that does everything. Taken a while for me to internalize that "dude you gotta go down a level". -- Developer with C++ background
Rust tried to hide away notion of pointers - Just tell me it's a pointer -- System-level developer
Interestingly though, learning Rust alongside C++ can help students understand both better:
Students learn smart pointers in C++ and then 'we're just now learning smart pointers with Rust as well' — learning both at the same time makes it easier. -- Community organizer
Recommendations
Invest in compilation performance as a first-class concern
Given that compilation performance affects every single user group, we recommend treating it as a first-class language concern, not just an implementation detail. This could include:
- Incremental compilation improvements that better match developer workflows
- Build system innovations that reduce the iteration cycle tax
- Tooling integration that makes build times less disruptive
We do want to quickly shout a couple of neat community projects that have this goal in mind:
- The subsecond crate by the Dioxus team allows hot-reloading, which can make workflows like those found in GUI development more seamless
- The Wild linker aims to be a fast linker for Linux, with plans for incremental linking
Invest in ecosystem guidance and compatibility
We previously made some suggestions in this area, and they still hold true. Finding ways to not only help users find crates that are useful to them, but also enable better compatibility between crates will surely have a net-positive benefit to the Rust community.
Address learning diversity
When someone is learning Rust, their programming language background, level of experience, and the domain in which they are trying to work all influence the challenges they face. We recommend that the Rust Project and the community find ways to tailor learning paths to individuals' needs. For example, for someone with a C or C++ background, it might be useful to be able to directly compare references to pointers.
Similarly, having domain-specific learning materials can help newcomers focus on the problems they are facing more specifically than a general "Rust tutorial" might. The Embedded Rust Book does this, for example.
Close the gap between sync and async Rust
This is a tall order: there are a lot of moving parts here, and it's clear that many people struggle. On one hand, async Rust often feels "incomplete" in some language features compared to sync Rust. On the other, documentation is often focused on sync Rust (for example, much of The Rust Programming Language Book is focused on sync code patterns).
Within the Rust Project, we can work towards stabilizing long-awaited features such as async functions in dyn traits, or improving compiler errors for issues with, for example, lifetimes and async code. We can include fundamental async library traits and functions within `std`, enabling a more cohesive async ecosystem.

Of course, as much as can be done within the Rust Project, even getting more community involvement in producing tutorials, example code, and otherwise just sharing knowledge can go a long way to closing the gap.
Conclusion
Rust's challenges are more nuanced than the conventional "steep learning curve" narrative suggests. They are domain-specific and evolve with experience.
Understanding these patterns is crucial for Rust's continued growth. As we work to expand Rust's reach, we need to address not just the initial learning curve, but the ongoing friction that affects productivity across all experience levels.
The good news is that recognizing these patterns gives us recommendations for improvement. By acknowledging the expertise gradient, prioritizing compilation performance, creating better ecosystem navigation, and addressing background-dependent challenges, we can help Rust fulfill its promise of empowering everyone to build reliable, efficient software.
-
🔗 Llogiq on stuff Matching Puzzle Pieces and Disappointing Benchmarks rss
I recently had a piece of code that used
`.to_lowercase()` to sort some text. Which takes a bit of memory. On the plus side, the code used `.sort_by_cached_key`, which is pretty cool. But I wondered whether doing the `.to_lowercase()` log(n) times instead of n times would be slower than allocating a `String` for each entry, given that for many strings, even the first few characters would be different.

First, case insensitively comparing two
`&str`s in Rust is possible, if a bit convoluted. The solution here is to iterate over all `char`s, calling `char::to_lowercase` on each, which returns another iterator over `char` (because some chars can correspond to multiple chars in lowercase), which we can `flat_map`. The second piece of the puzzle is that `Iterator` has a `cmp` method but does not implement `Ord` because it is not idempotent: if you call `cmp`, you exhaust the iterator. Still, with `sort_by`, we can interleave the lowercase conversion and comparison.

For good measure, I also added the
`unicase` crate to the benchmarks.

Being the curious person that I am, I naturally wrote a benchmark, which is short enough to reproduce here (if you aren't interested, scroll down for the conclusion):
```rust
use fake::faker::name::raw::Name;
use fake::{locales::EN, Fake};

fn setup(num: usize) -> Vec<String> {
    (0..num).map(|_| Name(EN).fake()).collect::<Vec<String>>()
}

#[divan::bench(args = [1, 5, 10, 100, 1000, 10000])]
fn sort_by_cached_lowercase(bencher: divan::Bencher, size: usize) {
    let names = setup(size);
    bencher.counter(size).bench_local(|| {
        let mut sorted = names.clone();
        sorted.sort_by_cached_key(|name| name.to_lowercase());
        sorted
    })
}

#[divan::bench(args = [1, 5, 10, 100, 1000, 10000])]
fn sort_by_iter_lowercase(bencher: divan::Bencher, size: usize) {
    let names = setup(size);
    bencher.counter(size).bench_local(|| {
        let mut sorted = names.clone();
        fn caseless(s: &String) -> impl Iterator<Item = char> + '_ {
            s.chars().flat_map(char::to_lowercase)
        }
        sorted.sort_by(|s1, s2| caseless(s1).cmp(caseless(s2)));
        sorted
    })
}

#[divan::bench(args = [1, 5, 10, 100, 1000, 10000])]
fn sort_by_unicase(bencher: divan::Bencher, size: usize) {
    let names = setup(size);
    bencher.counter(size).bench_local(|| {
        let mut sorted = names.clone();
        sorted.sort_by(|s1, s2| unicase::UniCase::new(s1).cmp(&unicase::UniCase::new(s2)));
        sorted
    })
}

fn main() {
    // Run registered benchmarks.
    divan::main();
}
```

The result on my M2-MAX MacBook Pro:
```
Timer precision: 41 ns
low                          fastest       │ slowest       │ median        │ mean          │ samples │ iters
├─ sort_by_cached_lowercase                │               │               │               │         │
│  ├─ 1                      16.68 ns      │ 18.14 ns      │ 17.49 ns      │ 17.49 ns      │ 100     │ 25600
│  │                         59.94 Mitem/s │ 55.1 Mitem/s  │ 57.15 Mitem/s │ 57.15 Mitem/s │         │
│  ├─ 5                      212.9 ns      │ 265 ns        │ 215.5 ns      │ 219.2 ns      │ 100     │ 3200
│  │                         23.47 Mitem/s │ 18.86 Mitem/s │ 23.19 Mitem/s │ 22.8 Mitem/s  │         │
│  ├─ 10                     452.5 ns      │ 567.1 ns      │ 457.7 ns      │ 462.2 ns      │ 100     │ 1600
│  │                         22.09 Mitem/s │ 17.63 Mitem/s │ 21.84 Mitem/s │ 21.63 Mitem/s │         │
│  ├─ 100                    5.207 µs      │ 11.33 µs      │ 5.291 µs      │ 5.455 µs      │ 100     │ 100
│  │                         19.2 Mitem/s  │ 8.824 Mitem/s │ 18.89 Mitem/s │ 18.32 Mitem/s │         │
│  ├─ 1000                   73.62 µs      │ 110.9 µs      │ 75.99 µs      │ 78.89 µs      │ 100     │ 100
│  │                         13.58 Mitem/s │ 9.009 Mitem/s │ 13.15 Mitem/s │ 12.67 Mitem/s │         │
│  ╰─ 10000                  853.7 µs      │ 1.053 ms      │ 864.8 µs      │ 886.3 µs      │ 100     │ 100
│                            11.71 Mitem/s │ 9.495 Mitem/s │ 11.56 Mitem/s │ 11.28 Mitem/s │         │
├─ sort_by_iter_lowercase                  │               │               │               │         │
│  ├─ 1                      13.91 ns      │ 23.35 ns      │ 14.89 ns      │ 15.68 ns      │ 100     │ 25600
│  │                         71.87 Mitem/s │ 42.81 Mitem/s │ 67.15 Mitem/s │ 63.76 Mitem/s │         │
│  ├─ 5                      134.8 ns      │ 196 ns        │ 137.4 ns      │ 148.6 ns      │ 100     │ 3200
│  │                         37.08 Mitem/s │ 25.5 Mitem/s  │ 36.37 Mitem/s │ 33.64 Mitem/s │         │
│  ├─ 10                     442.1 ns      │ 1.03 µs       │ 483.8 ns      │ 519.1 ns      │ 100     │ 800
│  │                         22.61 Mitem/s │ 9.702 Mitem/s │ 20.66 Mitem/s │ 19.26 Mitem/s │         │
│  ├─ 100                    17.33 µs      │ 34.12 µs      │ 18.16 µs      │ 19.66 µs      │ 100     │ 100
│  │                         5.769 Mitem/s │ 2.93 Mitem/s  │ 5.504 Mitem/s │ 5.086 Mitem/s │         │
│  ├─ 1000                   337.6 µs      │ 433 µs        │ 352.1 µs      │ 355.4 µs      │ 100     │ 100
│  │                         2.961 Mitem/s │ 2.309 Mitem/s │ 2.839 Mitem/s │ 2.813 Mitem/s │         │
│  ╰─ 10000                  5.543 ms      │ 6.579 ms      │ 5.572 ms      │ 5.602 ms      │ 100     │ 100
│                            1.803 Mitem/s │ 1.519 Mitem/s │ 1.794 Mitem/s │ 1.784 Mitem/s │         │
╰─ sort_by_unicase                         │               │               │               │         │
   ├─ 1                      15.05 ns      │ 22.05 ns      │ 15.87 ns      │ 16.78 ns      │ 100     │ 25600
   │                         66.42 Mitem/s │ 45.34 Mitem/s │ 63 Mitem/s    │ 59.59 Mitem/s │         │
   ├─ 5                      86.66 ns      │ 125 ns        │ 91.86 ns      │ 97.79 ns      │ 100     │ 6400
   │                         57.69 Mitem/s │ 39.97 Mitem/s │ 54.42 Mitem/s │ 51.12 Mitem/s │         │
   ├─ 10                     202.5 ns      │ 470.8 ns      │ 207.7 ns      │ 230.9 ns      │ 100     │ 1600
   │                         49.36 Mitem/s │ 21.24 Mitem/s │ 48.13 Mitem/s │ 43.3 Mitem/s  │         │
   ├─ 100                    4.749 µs      │ 18.33 µs      │ 4.833 µs      │ 5.201 µs      │ 100     │ 100
   │                         21.05 Mitem/s │ 5.454 Mitem/s │ 20.68 Mitem/s │ 19.22 Mitem/s │         │
   ├─ 1000                   107.2 µs      │ 158 µs        │ 107.5 µs      │ 111.4 µs      │ 100     │ 100
   │                         9.327 Mitem/s │ 6.325 Mitem/s │ 9.298 Mitem/s │ 8.973 Mitem/s │         │
   ╰─ 10000                  1.753 ms      │ 1.919 ms      │ 1.757 ms      │ 1.772 ms      │ 100     │ 100
                             5.702 Mitem/s │ 5.208 Mitem/s │ 5.688 Mitem/s │ 5.641 Mitem/s │         │
```

So unless you only have one element (which is the trivial case), `sort_by_cached_key` is worth it, and iterating over UTF-8 characters to do case conversion is a lot slower than I would have thought, drowning out the benefit of not needing to allocate. The real surprise is that Unicase can often be faster, despite making the comparison more complex. -
🔗 Armin Ronacher Some Things Just Take Time rss
Trees take quite a while to grow. If someone 50 years ago planted a row of oaks or a chestnut tree on your plot of land, you have something that no amount of money or effort can replicate. The only way is to wait. Tree-lined roads, old gardens, houses sheltered by decades of canopy: if you want to start fresh on an empty plot, you will not be able to get that.
Because some things just take time.
We know this intuitively. We pay premiums for Swiss watches, Hermès bags and old properties precisely because of the time embedded in them. Either because of the time it took to build them or because of their age. We require age minimums for driving, voting, and drinking because we believe maturity only comes through lived experience.
Yet right now we also live in a time of instant gratification, and it's entering how we build software and companies. As much as we can speed up code generation, the real defining element of a successful company or an Open Source project will continue to be tenacity. The ability of leadership or the maintainers to stick to a problem for years, to build relationships, to work through challenges fundamentally defined by human lifetimes.
Friction Is Good
The current generation of startup founders and programmers is obsessed with speed. Fast iteration, rapid deployment, doing everything as quickly as possible. For many things, that's fine. You can go fast, leave some quality on the table, and learn something along the way.
But there are things where speed is actively harmful, where the friction exists for a reason. Compliance is one of those cases. There's a strong desire to eliminate everything that processes like SOC2 require, and an entire industry of turnkey solutions has sprung up to help — Delve just being one example, there are more.
There's a feeling that all the things that create friction in your life should be automated away. That human involvement should be replaced by AI-based decision-making. Because it is the friction of the process that is the problem. When in fact many times the friction, or that things just take time, is precisely the point.
There's a reason we have cooling-off periods for some important decisions in one's life. We recognize that people need time to think about what they're doing, and that doing something right once doesn't mean much because you need to be able to do it over a longer period of time.
Vibe Slop At Inference Speeds
AI writes code fast, which isn't news anymore. What's interesting is that we're pushing this force downstream: we seemingly have this desire to ship faster than ever, to run more experiments, and that creates a new desire, one to remove all the remaining friction of reviews, designing and configuring infrastructure, anything that slows the pipeline. If the machines are so great, why do we even need checklists or permission systems? Express desire, enjoy result.
Because we now believe it is important for us to just do everything faster. But increasingly, I also feel like this means that the shelf life of much of the software being created today — software that people and businesses should depend on — can be measured only in months rather than decades, and so can the relationships alongside it.
In one of last year's earlier YC batches, there was already a handful that just disappeared without even saying what they learned or saying goodbye to their customers. They just shut down their public presence and moved on to other things. And to me, that is not a sign of healthy iteration. That is a sign of breaking the basic trust you need to build a relationship with customers. A proper shutdown takes time and effort, and our current environment treats that as time not wisely spent. Better to just move on to the next thing.
This is extending to Open Source projects as well. All of a sudden, everything is an Open Source project, but many of them only have commits for a week or so, and then they go away because the motivation of the creator already waned. And in the name of experimentation, that is all good and well, but what makes a good Open Source project is that you think and truly believe that the person that created it is either going to stick with it for a very long period of time, or they are able to set up a strategy for succession, or they have created enough of a community that these projects will stand the test of time in one form or another.
My Time
Relatedly, I'm also increasingly skeptical of anyone who sells me something that supposedly saves my time. When all that I see is that everybody who is like me, fully onboarded into AI and agentic tools, seemingly has less and less time available because we fall into a trap where we're immediately filling it with more things.
We all sell each other the idea that we're going to save time, but that is not what's happening. Any time saved gets immediately captured by competition. Someone who actually takes a breath is outmaneuvered by someone who fills every freed-up hour with new output. There is no easy way to bank the time and it just disappears.
I feel this acutely. I'm very close to the red-hot center of where economic activity around AI is taking place, and more than anything, I have less and less time, even when I try to purposefully scale back and create the space. For me this is a problem. It's a problem because even with the best intentions, I actually find it very hard to create quality when we are quickly commoditizing software, and the machines make it so appealing.
I keep coming back to the trees. I've been maintaining Open Source projects for close to two decades now. The last startup I worked on, I spent 10 years at. That's not because I'm particularly disciplined or virtuous. It's because I, or someone else, planted something, and then I kept showing up, and eventually the thing had roots that went deeper than my enthusiasm on any given day. That's what time does! It turns some idea or plan into a commitment and a commitment into something that can shelter and grow other people.
Nobody is going to mass-produce a 50-year-old oak. And nobody is going to conjure trust, or quality, or community out of a weekend sprint. The things I value most — the projects, the relationships, the communities — are all things that took years to become what they are. No tool, no matter how fast, was going to get them there sooner.
We recently planted a new tree with Colin. I want it to grow into a large one. I know that's going to take time, and I'm not in a rush.
-
- March 19, 2026
-
🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-19 rss
IDA Plugin Updates on 2026-03-19
New Releases:
Activity:
- binsync
- capa-prototype
- DriverBuddy-7.4-plus
- fb2fd224: Sync auto-gpt5-implementation.yml from .github repo
- 0240ebfc: Sync auto-sec-scan.yml from .github repo
- 8e3481ab: Sync auto-label.yml from .github repo
- c1b3303b: Sync auto-assign-copilot.yml from .github repo
- 09e3cab8: Sync auto-llm-issue-review.yml from .github repo
- 6782d7b2: Sync security-review.yml from .github repo
- 4c32abdb: Sync auto-llm-pr-review.yml from .github repo
- 2bb89b60: Sync auto-tag-based-review.yml from .github repo
- 42f65c0e: Sync auto-assign-pr.yml from .github repo
- d24936ec: Sync trigger-all-repos.yml from .github repo
- bc98a5dc: Sync auto-copilot-code-cleanliness-review.yml from .github repo
- 5cad6d4b: Sync auto-label-comment-prs.yml from .github repo
- 78c06231: Sync daily-continuous-progress.yml from .github repo
- 3a63d6f5: Sync workflows-sync-template-backup.yml from .github repo
- 79f23843: Sync swarm-mode.yml from .github repo
- 24333728: Sync auto-copilot-functionality-docs-review.yml from .github repo
- a2f1a590: Sync auto-close-issues.yml from .github repo
- ida-domain
- ida-multi-mcp
- IDAPluginList
- 2d7a2153: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
- IDAssist
- python-elpida_core.py
- 0359fc83: Oracle-as-Librarian + bug fixes + X mention reader
- Rikugan
- e3147c62: Merge pull request #29 from buzzer-re/dev
- 78a9e2e4: add cryptography dep
- c25f1435: Merge pull request #28 from buzzer-re/dev
- cde44690: update plugins version
- 45824518: Merge pull request #27 from buzzer-re/dev
- f51e8af0: Merge remote-tracking branch 'origin/main' into dev
- aefa0e3e: fix(ui): resolve qt_compat merge conflict — add PyQt5 compat + dev wi…
- 3d92f57e: fix(ci): remove orphaned files, fix private imports and unused vars
- 9c56a740: fix(ci): ruff lint fixes for crypto module, add crypto tests
- 72a3b778: feat(settings): add optional API key encryption with AES-256-GCM
- 77e3f6d7: fix(settings): hide OAuth checkbox for non-Anthropic providers on init
-
🔗 r/reverseengineering Exploring the Memory Leak with Josh Strife Hayes rss
submitted by /u/RazerOG
[link] [comments] -
🔗 r/reverseengineering Code review case study: finding CVE-2026-33017 in Langflow rss
submitted by /u/SadCryptographer4422
[link] [comments] -
🔗 r/york 'Once-in-a-generation' opportunity with major York regeneration scheme welcomed rss
| submitted by /u/willfiresoon
[link] [comments]
---|--- -
🔗 r/york York Museum Gardens rss
| Thanks York for having this Londoner to your city two weeks ago. These are from my Sunday morning stroll around the Museum Gardens. It was a grey day and I thought the black and white fitted the vibe. submitted by /u/35mmCam
[link] [comments]
---|--- -
🔗 r/Leeds Two years ago, historic buildings on Kirkgate collapsed. Why has nothing happened since? rss
submitted by /u/LordGinger_
[link] [comments] -
🔗 r/york Sarah Ferguson set to be stripped of the 'Freedom of the City of York' after Epstein scandal rss
| submitted by /u/Kagedeah
[link] [comments]
---|--- -
🔗 r/york Window cleaners in York rss
Does anyone recommend any window cleaners in York?
submitted by /u/1RedPanda1
[link] [comments] -
🔗 r/Yorkshire Gonna be arriving in the city at 2pm tomorrow in Hull, proud to have grown up in this awesome city and can't wait to visit again. What's everybody's favourite part of Hull? I'm particularly fond of the old town and anywhere central, East Park is lovely too and have many fond memories there. rss
submitted by /u/AcadiaNo1039
[link] [comments] -
🔗 Kagi release notes March 19th, 2026 - Small Web Expansion and Translate goes viral rss
Kagi Small Web just got bigger! Kagi's Small Web just got a whole lot bigger. With over 30,000 feeds and new browser extensions, mobile apps, and categories, there's never been a better way to discover the independent web. Read the full announcement here! And check out the TechCrunch coverage. Download for iOS. Download for Android. Get the browser extensions.

Kagi Translate goes viral! On March 16, we launched our latest fun language on Kagi Translate, LinkedIn Speak, and it quickly went viral on social media, generating millions of engagements. Check out some of the press coverage below:

- This Viral Tool Turns Anything Into LinkedIn Speak—and the Internet Is Obsessed
- This eerily accurate 'LinkedIn Speak' translation tool will help you sound like an instant thinkfluencer
- 'LinkedIn speak' turns Kagi Translate into viral meme machine; here's how to use it

Also, a friendly reminder: the Kagi Translate apps launched a few weeks ago and are already earning solid reviews. Go grab them if you haven't yet!

Improvements and fixes

Kagi Search

- Add quick snaps #5237 @Jesal
- Automatically accept dragged images for reverse image search #7482 @tuesday
- Seemingly impossible to get current time widget in Georgia (country) #4513 @mon
- Doggo Consistency #5191 @tjp
- Patreon not Appearing as First Result when specified #9993 @JosephT
- AI marked images still show up in the "images" widget/section in normal web search #9445 @Thibaultmol
- Show Wolfram Alpha's 'Input interpretation' when a WA answer is provided #9336 @Thibaultmol
- Overlapping popups in quick answer #10014 @Temanor
- Kagi Missing Results When Searching for Specific News Site under News Section #8181 @woodmaster
- Don't use AI slop as a source for Assistant/quick answer #9229 @fxgn
- Images on macOS Safari not opening #9891 @BrittOmnRex
- Long queries don't work with bangs #5563 @Thibaultmol
- Weird top search result for proton #10023 @gabriz4803
- Wrong search bar content #10135 @someoneiknow
- Popup cut off, not scrollable. #10022 @Temanor

Kagi Assistant

- Llama-4-maverick, o3-pro, gemini 2.5 flash lite and gpt-5-nano models are no longer available.
- Assistant doesn't render code blocks sent from user correctly #6165 @kzar
- Assistant: Quick Edit / Jump to AI/edit #6422 @ivanovich_alexander
- Assistant typing input speed slow on long conversations #5434 @jackkkk
- Assistant returns "We did not get a response from the server" on slow connections, but still works #4689 @jvbf
- Kagi Assistant only summarizes PDFs instead of processing full contents (selectable text and handwritten notes) #9930 @Pum
- Gemini 3.1 Pro (Preview) unable to read PDF in first attempt #9996 @dreifach
- Critical Issues with Recent Copy-Paste Content Changes Affecting Text Summarization Workflow #9832 @Fragment5789
- Claude keeps using LaTeX without formatting #8455 @fxgn
- Links in the assistant's responses are not clickable #6556 @NathanKurz
- Librarian is Truncating Files #10074 @RixTheFox
- Some multimodal LLMs fails to see attached images #10113 @k7absp2h
- Assistant: Remember model per device #4953 @pravinxor
- Provide more default assistants #4926 @somerabbit155
- Add swipe gesture action to close the side bar on new Assistant UI #4823 @Jassu

Kagi Translate

- Dialects of Scots listed under English #9965 @AndrewA
- Correct naming for Persian language in English #9957 @mehdim
- Unable to translate from "Detected (English)" to "English (US)" #9925 @Peter
- Put artificial languages into their own submenu #9958 @tux0r
- Kagi wrongly changes numbers/time stamps when translating #9971 @MartinNo
- Kagi Translate: Language switch button does not work for auto detected language #6065 @carl
- Kagi Translate Android App buttons are hard to hit #10004 @mb
- About the app version's translation mode #9986 @noelchan

Mobile Apps

- Poor text formatting of image translations on Kagi Translate app #10016 @San
- Kagi Translate Android App buttons are hard to hit #10004 @mb
- Pasting text in Translate app is hard #10047 @marty
- Prevent translation quality switch when swapping languages #9986 @noelchan
- Kagi Translate iOS app is failed to establish session #10125 @SukinoVerse

Kagi News

- Keyboard Shortcuts only work intermittently on Kagi News #8786 @the_hattar
- Condensing, locking settings — managing Kagi news for dementia #10115 @jimbo95

Mobile Apps

- Time Travel: Browse news history by date. Pick any day on the calendar and read past summaries.
- Content Filter: Hide or blur topics you'd rather skip. Choose from built-in presets or add your own keywords.

Post of the week

Here is this week's featured social media mention. Don't forget to follow us and tag us in your comments, we love hearing from you!

Kagi Specials

We're excited to welcome the newest addition to our Kagi Specials program: EasyOptOuts! Kagi members in the U.S. now enjoy 25% off for life.
This is a service that automatically removes your name, address, and phone number from 200+ data brokers and people-search sites. The deal is reciprocated here for any EasyOptOuts subscribers in your network who want to try Kagi.
Kagi art
"Free" search costs more than you think. With Kagi, you get zero ads, zero tracking, and AI on your terms.

-
🔗 @HexRaysSA@infosec.exchange 📚IDA Pro online training season is approaching! mastodon
Use these promo codes when registering:
• EARLY2026 for 15% OFF one course
• MULTI202026 for 20% OFF two or more courses
*No active license required. Limited time offer. This Spring/Summer, we have Intermediate and Advanced courses for Europe and US time zones.
https://hex-rays.com/training -
🔗 Simon Willison Thoughts on OpenAI acquiring Astral and uv/ruff/ty rss
The big news this morning: Astral to join OpenAI (on the Astral blog) and OpenAI to acquire Astral (the OpenAI announcement). Astral are the company behind uv, ruff, and ty - three increasingly load-bearing open source projects in the Python ecosystem. I have thoughts!
The official line from OpenAI and Astral
The Astral team will become part of the Codex team at OpenAI.
Charlie Marsh has this to say:
Open source is at the heart of that impact and the heart of that story; it sits at the center of everything we do. In line with our philosophy and OpenAI's own announcement, OpenAI will continue supporting our open source tools after the deal closes. We'll keep building in the open, alongside our community -- and for the broader Python ecosystem -- just as we have from the start. [...]
After joining the Codex team, we'll continue building our open source tools, explore ways they can work more seamlessly with Codex, and expand our reach to think more broadly about the future of software development.
OpenAI's message has a slightly different focus (highlights mine):
As part of our developer-first philosophy, after closing OpenAI plans to support Astral’s open source products. By bringing Astral’s tooling and engineering expertise to OpenAI, we will accelerate our work on Codex and expand what AI can do across the software development lifecycle.
This is a slightly confusing message. The Codex CLI is a Rust application, and Astral have some of the best Rust engineers in the industry - BurntSushi alone (Rust regex, ripgrep, jiff) may be worth the price of acquisition!
So is this about the talent or about the product? I expect both, but I know from past experience that a product+talent acquisition can turn into a talent-only acquisition later on.
uv is the big one
Of Astral's projects the most impactful is uv. If you're not familiar with it, uv is by far the most convincing solution to Python's environment management problems, best illustrated by this classic XKCD:
Switch from python to uv run and most of these problems go away. I've been using it extensively for the past couple of years and it's become an essential part of my workflow. I'm not alone in this. According to PyPI Stats uv was downloaded more than 126 million times last month! Since its release in February 2024 - just two years ago - it's become one of the most popular tools for running Python code.
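As an illustration of that workflow (a minimal sketch of my own, not from the post; the filename `demo.py` and the printed payload are invented), a script can declare its own requirements in a PEP 723 inline-metadata block and be run directly with `uv run demo.py`:

```python
# /// script
# requires-python = ">=3.9"
# dependencies = []
# ///
# A sketch of uv's single-file workflow: `uv run demo.py` reads the
# PEP 723 block above, picks a matching interpreter, and executes the
# script in a throwaway environment -- no manual venv or pip install.
# (Real scripts would list third-party packages in `dependencies`.)
import json

summary = {"tool": "uv", "replaces": ["venv", "pip"]}
print(json.dumps(summary))
```

Because the metadata lives in comments, the file remains a plain Python script; uv merely reads the block to resolve and cache an environment before running it.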
Ruff and ty
Astral's two other big projects are ruff - a Python linter and formatter - and ty - a fast Python type checker.
These are popular tools that provide a great developer experience, but they aren't load-bearing in the same way that uv is.
They do however resonate well with coding agent tools like Codex - giving an agent access to fast linting and type checking tools can help improve the quality of the code they generate.
I'm not convinced that integrating them into the coding agent itself as opposed to telling it when to run them will make a meaningful difference, but I may just not be imaginative enough here.
What of pyx?
Ever since uv started to gain traction, the Python community has been worrying about the strategic risk of a single VC-backed company owning a key piece of Python infrastructure. I wrote about one of those conversations in detail back in September 2024.
The conversation back then focused on what Astral's business plan could be, which started to take form in August 2025 when they announced pyx, their private PyPI-style package registry for organizations.
I'm less convinced that pyx makes sense within OpenAI, and it's notably absent from both the Astral and OpenAI announcement posts.
Competitive dynamics
An interesting aspect of this deal is how it might impact the competition between Anthropic and OpenAI.
Both companies spent most of 2025 focused on improving the coding ability of their models, resulting in the November 2025 inflection point when coding agents went from often-useful to almost-indispensable tools for software development.
The competition between Anthropic's Claude Code and OpenAI's Codex is fierce. Those $200/month subscriptions add up to billions of dollars a year in revenue, for companies that very much need that money.
Anthropic acquired the Bun JavaScript runtime in December 2025, an acquisition that looks somewhat similar in shape to Astral.
Bun was already a core component of Claude Code and that acquisition looked to mainly be about ensuring that a crucial dependency stayed actively maintained. Claude Code's performance has increased significantly since then thanks to the efforts of Bun's Jarred Sumner.
One bad version of this deal would be if OpenAI start using their ownership of uv as leverage in their competition with Anthropic.
Astral's quiet Series A and B
One detail that caught my eye from Astral's announcement, in the section thanking the team, investors, and community:
Second, to our investors, especially Casey Aylward from Accel, who led our Seed and Series A, and Jennifer Li from Andreessen Horowitz, who led our Series B. As a first-time, technical, solo founder, you showed far more belief in me than I ever showed in myself, and I will never forget that.
As far as I can tell neither the Series A nor the Series B were previously announced - I've only been able to find coverage of the original seed round from April 2023.
Those investors presumably now get to exchange their stake in Astral for a piece of OpenAI. I wonder how much influence they had on Astral's decision to sell.
Forking as a credible exit?
Armin Ronacher built Rye, which was later taken over by Astral and effectively merged with uv. In August 2024 he wrote about the risk involved in a VC-backed company owning a key piece of open source infrastructure and said the following (highlight mine):
However having seen the code and what uv is doing, even in the worst possible future this is a very forkable and maintainable thing. I believe that even in case Astral shuts down or were to do something incredibly dodgy licensing wise, the community would be better off than before uv existed.
Astral's own Douglas Creager emphasized this angle on Hacker News today:
All I can say is that right now, we're committed to maintaining our open-source tools with the same level of effort, care, and attention to detail as before. That does not change with this acquisition. No one can guarantee how motives, incentives, and decisions might change years down the line. But that's why we bake optionality into it with the tools being permissively licensed. That makes the worst-case scenarios have the shape of "fork and move on", and not "software disappears forever".
I like and trust the Astral team and I'm optimistic that their projects will be well-maintained in their new home.
OpenAI don't yet have much of a track record with respect to acquiring and maintaining open source projects. They've been on a bit of an acquisition spree over the past three months though, snapping up Promptfoo and OpenClaw (sort-of, they hired creator Peter Steinberger and are spinning OpenClaw off to a foundation), plus closed source LaTeX platform Crixet (now Prism).
If things do go south for uv and the other Astral projects, we'll get to see how credible the forking exit strategy turns out to be.
You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
-
🔗 r/Leeds Cheetal, a 2-foot gauge steam loco built in Leeds is coming back home from Lancashire from its loan period rss
submitted by /u/CaptainYorkie1
[link] [comments] -
🔗 r/york LOST Hearing Aid rss
I hope it's ok to put this here, but if anyone finds a single hearing aid that's been lost in York today around Gillygate, Minster, Grape Lane area please let me know :( very expensive and missed badly.
submitted by /u/3pandora8
[link] [comments] -
🔗 r/Leeds Rag puddings sold in Leeds? rss
Hi,
Mancunian here who now lives in Leeds and we have rag puddings you can get from a chippy in Manchester, I was wondering if anywhere in Leeds sells them? Think of it like a soft steamed pie. My partner hasn’t tried one and hoping to avoid driving to Manchester. Thanks!
submitted by /u/magnumpearl10
[link] [comments] -
🔗 r/reverseengineering A Copy-Paste Bug That Broke PSpice® AES-256 Encryption rss
submitted by /u/jtsylve
[link] [comments] -
🔗 r/wiesbaden Looking for an affordable wedding venue in and around Wiesbaden rss
We want to get married in 2027 and have been looking for a while for a venue for 60-80 guests. We'd like to book external catering, so we're mainly looking for a room with good transport links. It should be within walking distance of hotels, or a short Uber or train/bus ride from Wiesbaden. Unfortunately, most venues either have their own catering or charge very high rents. We want to organize the starters and desserts ourselves, hence the preference for external catering. Does anyone know a good venue?
We're also open to barbecue areas etc., as long as there are permanent sanitary facilities. Two family members need a rollator and can't manage stairs well.
submitted by /u/Coffeeschnaps
[link] [comments] -
🔗 r/york Any advice on moving to York? rss
I have received a job offer and will be moving to York in September. The office is very central, and ideally I will live somewhere that is relatively close. Are there any areas of York to avoid? I have had a quick look online, and there are some good flats available that are a bit further out. In my head, all of York is lovely and safe, so there are no areas that are off limits. But am I being naive? I understand all city centres can be unsafe, but I am from Manchester and York seems like a dream just by comparison. Any advice from locals or anyone who knows the area is greatly appreciated.
Thank you in advance!
submitted by /u/BellSpecific1335
[link] [comments] -
🔗 r/wiesbaden Scientology is proselytizing in the city centre rss
In front of Galeria Karstadt there is a dark red stand tent with funny balloons. On the tables are works by L. Ron Hubbard, the founder of Scientology, without the sect being explicitly named. This is not just harmless science fiction! Please, just don't talk to these people!
submitted by /u/Extension-Cry225
[link] [comments] -
🔗 r/Harrogate Train station shop rss
The one inside the station that had a stock level as if it was in North Korea - has it shut down already? Looks that way…
submitted by /u/DoughnutHairy9943
[link] [comments] -
🔗 r/Leeds Litter on Hyde park rss
Seriously, to the people leaving their rubbish all over Hyde Park every night, f*ck you!! The groundsmen should be tending to the park in the morning, not picking your shit up.
submitted by /u/OkEstate4349
[link] [comments] -
🔗 r/york Odd York Mix article on markets and nights out rss
Come on ChatGPT, let’s just have a little rest and you’ll be thinking normally again soon…
submitted by /u/sneck123
[link] [comments] -
🔗 r/Yorkshire Tan Hill Inn Illustration rss
submitted by /u/richbart1234
[link] [comments] -
🔗 r/reverseengineering Using LLM and Ghidra to analyze malware (Part 1) rss
submitted by /u/moonlightelite
[link] [comments] -
🔗 r/Leeds After concert at Leeds First Direct Bank Arena rss
It's my first time watching a concert at Leeds First Direct Bank Arena and I have a train to catch at 11pm. Would it be safe to walk to the train station alone after the concert? And how long does it take to walk to the station? Or is it better to book a taxi/Uber? And if it's a taxi, where would the nearest pick-up point be? Thank you
submitted by /u/nymphadora95
[link] [comments] -
🔗 r/reverseengineering CRITICAL (9.8) - CVE-2026-32746 GNU telnetd Buffer Overflow + PoC rss
submitted by /u/pwnguide
[link] [comments] -
🔗 matklad Consensus Board Game rss
Consensus Board Game
Mar 19, 2026
I have an early adulthood trauma from struggling to understand consensus amidst a myriad of poor explanations. I am overcompensating for that by adding my own attempts to the fray. Today, I want to draw a series of pictures which could be helpful. You can see this post as a set of missing illustrations for Notes on Paxos, or, alternatively, you can view that post as a more formal narrative counter-part for the present one.
The idea comes from my mathematics of consensus lecture, with versions in English and Russian.
The Preamble
I am going to aggressively hand-wave the details away; please refer to the Notes for filling in the blanks.
And, before we begin, I want to stress again that here I am focusing strictly on the mathematics behind the algorithm, on the logical structure of the universe that makes some things impossible, and others doable. Consensus is but a small part of the engineering behind real data management systems, and I might do something about pragmatics of consensus at some point, just not today ;)
The Problem
There’s a committee of five members that tries to choose a color for a bike shed, but the committee members are not entirely reliable. We want to arrive at a decision even if some members of the committee are absent.
The Vote
The fundamental idea underpinning consensus is simple majority vote. If R0, … R4 are the five committee members, we can use the following board to record the votes:
A successful vote looks like this:
Here, red collected 3 out of 5 votes and wins. Note that R4 hasn’t voted yet. It might, or might not do so eventually, but that won’t affect the outcome.
The problem with voting is that it can get stuck like this:
Here, we have two votes for red and two votes for blue, but the potential tie-breaker, R4, voted for green, the rascal!
To solve the split vote, we are going to designate R0 as the committee’s leader, make it choose the color, and allow others only to approve. Note that meaningful voting still takes place, as someone might abstain from voting — you need at least 50% turnout for the vote to be complete:
Here, R0, the leader (marked with a yellow Napoleonic bicorne), chose red, and R2 and R3 acquiesced, so red “won”, even as R1 and R4 abstained (x signifies the absence of a vote).
The problem with this is that our designated leader might itself be unavailable:
The Board
Which brings us to the central illustration that I wanted to share. What we are going to do now is multiply our voting. Instead of conducting just one vote with a designated leader, the committee will conduct a series of concurrent votes, where the leaders rotate in a round-robin pattern. This gives rise to the following half-infinite 2D board on which the game of consensus is played:
Each column plays independently. If you are the leader in a column and your cell is blank, you can choose whatever color you like. If you are a follower, you need to wait for the column’s leader’s decision, and then you can either fill in the same color or abstain. After several rounds the board might end up looking like this:
The benefit of our 2D setup is that, if any committee member is unavailable, their columns might get stuck, but, as long as a majority is available, some column somewhere might still complete. The drawback is that, while an individual column’s decision is clear and unambiguous, the outcome of the board as a whole is undefined. In the above example, there’s a column where red wins, and a column where blue wins.
So what we are going to do is to scrap the above board as invalid, and instead require that any two columns that achieved majorities must agree on the color. In other words, the outcome of the entire board is the outcome of any of its columns, whichever finishes first, and the safety condition is that no two colors can reach majorities in different columns.
Let’s take a few steps back, to when the board wasn’t yet hosed, and try to think about the choice of the next move from the perspective of R3:
As R3, the leader of your column, you need to pick a color which won’t conflict with any past or future decisions in other columns. Given that there are some greens and blues already, it feels like maybe you shouldn’t pick red… But it could be the case that the three partially filled columns won’t move anywhere in the future, and the first column gets a solid red line! Tough choices! You need to worry about the future and the infinite number of columns to your right!
Luckily, the problem can be made much easier if we assume that everyone plays by the same rules, in which case it’s enough to only worry about the columns to your left. Suppose that you, and everyone else is carefully choosing their moves to not conflict with the columns to the left. Then, if you chose red, your column wins, and subsequently some buffoon on the right chooses green, it’s their problem, because you are to their left.
So let’s just focus on the left part of the board. Again, it seems like blue or green might be good bets, as they are already present on the board, but there’s a chance that the first column will eventually vote for red. To prevent that, what we are going to do is collect a majority of participants (R0, R2, R3) and require them to commit to not voting in the first column. Actually, for that matter, let’s prevent them from voting in any column to the left:
Here, you asked R0, R2 and R3 to abstain from casting further votes in the first three columns, signified by black x. With this picture, we can now be sure that red cannot win in the first column; no color can win there, because only two out of the five votes are available there!
Still, we have the choice between green and blue; which one should we pick? The answer is the rightmost. R2, the participant that picked blue in the column to our immediate left, was executing the same algorithm. If they picked blue, they did it because they knew for certain that the second column can’t eventually vote for green. R2 got a different majority of participants to abstain from voting in the second column, and, while we, as R3, don’t know which majority that was, we know that it exists, because we know that R2 did pick blue, and we assume fair play.
That’s all for today; that’s the trick that makes consensus click, in the abstract. In a full distributed system the situation is more complicated. Each participant only sees its own row; the board as a whole remains concealed. Participants can learn something about others’ state by communicating, but that knowledge isn’t strongly anchored in time: by the time a response is received, the answer could well be obsolete. And yet, the above bird’s-eye view can be implemented in a few exchanges of messages.
Please see the Notes for further details.
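The "adopt the rightmost" rule above is essentially the value-selection rule of Paxos's prepare phase, and it is small enough to sketch in code. This is my own toy Python illustration, not matklad's code; the data shapes (each promise reporting the rightmost column in which that member has already voted) are invented for the example:

```python
# Toy sketch: before proposing in its own column, a leader gathers
# promises-to-abstain from a majority, then must adopt the colour voted
# in the rightmost (highest-numbered) column among those promises.

MEMBERS = ["R0", "R1", "R2", "R3", "R4"]
MAJORITY = len(MEMBERS) // 2 + 1  # 3 of 5

def choose_colour(promises, default):
    """promises maps member -> (rightmost column they voted in, colour),
    or None if that member has cast no vote in the columns to our left."""
    if len(promises) < MAJORITY:
        raise RuntimeError("no majority of promises; unsafe to propose")
    voted = [v for v in promises.values() if v is not None]
    if not voted:
        return default  # no earlier decision can exist: free choice
    # Adopt the colour from the rightmost column seen in the promises.
    return max(voted, key=lambda col_colour: col_colour[0])[1]

# R3 collects promises from R0, R2 and itself. R2 reports a blue vote in
# column 2, which sits to the right of R0's green in column 1, so R3
# must propose blue.
promises = {"R0": (1, "green"), "R2": (2, "blue"), "R3": None}
print(choose_colour(promises, default="red"))  # → blue
```

The majority check is what makes this safe: any column that already reached a decision must share at least one member with the promising majority, so its colour is guaranteed to show up among the reported votes.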
-
🔗 Console.dev newsletter Oat rss
Description: Ultra-lightweight HTML + CSS components.
What we like: Zero dependencies. No frameworks or build requirements. Uses semantic HTML with all styling done through CSS variables, so can be easily customized. Very small (6KB CSS, 2KB JS).
What we dislike: Not yet fully 1.0, but already feels quite complete.
-
🔗 Console.dev newsletter pgit rss
Description: Git CLI with Postgres backend.
What we like: Git compatible CLI, but everything is stored in Postgres rather than on disk. Makes all commits queryable. Everything is delta compressed for efficiency. Makes it easy to query for repo stats like size trends or churn detection. Full text search across the history.
What we dislike: Requires Postgres, obviously.
-