🏡


to read (pdf)

  1. Style tips for less experienced developers coding with AI · honnibal.dev
  2. Haskell for all: Beyond agentic coding
  3. AgentRE-Bench — LLM Reverse Engineering Benchmark
  4. Announcing Observational Memory - Mastra Blog
  5. Zvec | A lightweight, lightning-fast, in-process vector database

  1. February 18, 2026
    1. 🔗 r/wiesbaden Old Synagogue in Wiesbaden by Phillip Hoffmann 1863-1939 rss
    2. 🔗 News Minimalist 🐢 Ukraine reclaims territory after Russia loses Starlink access + 11 more stories rss

      In the last 5 days ChatGPT read 146,335 top news stories. After removing previously covered events, there are 12 articles with a significance score over 5.5.

      [6.0] Ukraine reclaims territory after Russia loses Starlink access — nrc.nl (Dutch) (+18)

      Ukraine reclaimed significant territory for the first time since mid-2023 after SpaceX blocked Russian military access to the Starlink satellite network, severely disrupting Moscow’s battlefield communication and coordination.

      Ukrainian forces liberated eleven villages totaling 200 square kilometers, primarily in Zaporizhia. These tactical counterattacks reversed recent Russian gains and forced retreats, showcasing the immediate impact of Starlink restrictions on Russia’s ability to sustain offensive momentum across the front.

      As Russia seeks satellite alternatives and diplomatic talks begin in Geneva under US pressure, Moscow maintains intense drone strikes on Ukraine's energy grid. Analysts remain skeptical about any near-term diplomatic breakthrough.

      [5.5] Ancient microbes in ice reveal natural antibiotic resistance — theconversation.com (+21)

      Researchers found 5,000-year-old bacteria in a Romanian ice cave resistant to modern antibiotics, proving natural resistance predates medicine while offering a potential source for discovering new life-saving drugs.

      The ancient microbes resisted ten modern antibiotics, including treatments for tuberculosis. While melting ice risks releasing these resistance genes into the environment, the bacteria also produced chemicals that killed 14 types of disease-causing pathogens during laboratory testing, highlighting their medicinal potential.

      Most ancient environmental bacteria remain unstudied, containing unknown genes that could benefit biotechnology and industrial energy efficiency. Scientists emphasize that these microbial systems represent a vast, untapped reservoir of biochemical tools.

      Highly covered news with significance over 5.5

      [5.5] Iran and US agree on nuclear deal principles — financialpost.com (+199)

      [5.6] Global south leaders and tech billionaires convene in Delhi to discuss artificial intelligence development — theguardian.com (+151)

      [5.5] European nations accuse Russia of poisoning Alexei Navalny with a rare toxin — rte.ie (+107)

      [5.8] India and France forge special global strategic partnership with expanded defense, AI, and nuclear cooperation — ndtv.com (+86)

      [6.1] Lab-grown human spinal cord heals after injury in major breakthrough — sciencedaily.com (+2)

      [5.8] Single dose of DMT offers lasting depression relief in clinical trial — neurosciencenews.com (+9)

      [5.7] Brain cell switch determines lifelong obesity risk — farodevigo.es (Spanish) (+8)

      [5.7] Canada and Germany sign declaration of intent to grow AI field together — ctvnews.ca (+4)

      [5.5] Meta will run AI in WhatsApp through NVIDIA's 'confidential computing' — engadget.com (+12)

      [5.7] C-17 aircraft transports micro nuclear reactor for the first time — twz.com (+14)

      Thanks for reading!

      — Vadim



    3. 🔗 r/wiesbaden Someday I'll learn... rss
    4. 🔗 r/Yorkshire Best SEO companies in Yorkshire — any real recommendations? rss

      I’m looking for a good SEO company in or around Yorkshire and wanted to hear real experiences from people who’ve actually worked with one.

      I’ve come across a few names while researching:

      • Softtrix
      • SEO Services Consultant
      • Digital Leap

      Before moving forward, I’d like to know — has anyone here worked with these or any other SEO agencies that actually delivered results?

      What should I look out for when choosing an SEO company, and what were your experiences like?

      Just looking for genuine feedback from local businesses or marketers 👍

      submitted by /u/Ashwani1987
      [link] [comments]

    5. 🔗 Locklin on science Coding assistant experience rss

      I’m a modest LLM skeptic. It’s not that I don’t believe in LLMs, I am aware that they exist, I just know that they’re not doing what people do when we think, and that they’re not going to hockey stick up and replace everybody. If it helps people, they should use them: I do. ask.brave.com […]

    6. 🔗 r/wiesbaden Where to buy Osprey backpacking packs? rss

      Getting into backpacking and want to get an Osprey pack. Most recommend trying one on first, but no idea where (if anywhere) around here has a good selection of Osprey.

      Anyone know?

      submitted by /u/No-Ordinary6219
      [link] [comments]

    7. 🔗 organicmaps/organicmaps 2026.02.18-5-android release

      • OSM data as of February 16, Wikipedia as of February 1
      • Black hiking and cycling routes are now visible in the dark theme
      • Added zip lines and more car services: repair, service, and parts
      • Fixed bicycle routing and average speed calculation
      • Improved category search
      • Uzbek cuisine
      • Expandable opening hours
      • Show and use MSL altitude for recorded tracks on Android 14+
      • Fixed KML, KMZ, GPX, GeoJSON import from WhatsApp
      • Added Lao and Armenian translations
      …more at omaps.org/news

      See the detailed announcement on our website once the app update is published in all stores.
      You can get automatic app updates from GitHub using Obtainium.

      sha256sum:

      5f9a90aa9e36f4eb8ea231856262657193077bfa451bd19a2f1392dbd82af99a  OrganicMaps-26021805-web-release.apk
      
    8. 🔗 panphora/overtype v2.2.0 release

      Release v2.2.0

    9. 🔗 r/wiesbaden Best döner in the area rss

      Hello,

      Once a week I'm in Wiesbaden for work, driving an hour from one of the smallest villages you can find in Rhineland-Palatinate. I'm a huge döner fan, but I've become sensitive in recent years. Many döner don't agree with me, or they don't taste good. Since I don't want to give up my passion, I'm hoping the proximity to Frankfurt might help.

      Which good döner shops do you know? I'm a fan of the good Berlin döner sauces, and I generally grew up in the east, where döner all tasted different.

      The döner doesn't have to be cheap or look fancy. What can you recommend? Is there parking at the locations?

      submitted by /u/WriterCompetitive766
      [link] [comments]

    10. 🔗 r/york School's 'cost of living cupboard' helps families rss

      School's 'cost of living cupboard' helps families

      submitted by /u/Kagedeah
      [link] [comments]

    11. 🔗 r/york Where can I stand against a plain white wall for a photo? rss

      So ... this is a weird one.

      I'm just signing up for an extras agency to see if I can pick up a bit of casual work. They want a full-length photo of me standing against a blank white wall - no doorframes, pictures or anything - and you can't crop the picture because it has to fit their format.

      So where, in York, can I find a plain white wall - I gather off-white is acceptable - of sufficient width and height for a squarish full-length photo of a 6-foot person, with another person able to stand far enough back from it to take a photo?

      I said it was a weird one.

      submitted by /u/Brickie78
      [link] [comments]

    12. 🔗 r/reverseengineering Android 30 different SSL pinning Bypass Frida rss
    13. 🔗 r/reverseengineering Enhanced Android dynamic lib injector rss
    14. 🔗 r/Leeds Why is this junction lane layout set up this way? rss

      I drive through this junction (Harrogate Road / Alwoodley Lane / Wigton Lane junction) regularly and have always wondered about the lane layout. At the moment the left lane is left turn only, while the right lane is used for both straight ahead and right turns.

      When a vehicle is waiting to turn right it blocks drivers going straight on, and I often see people make late moves into the left lane to get around them, which feels risky.

      Would it make more sense for the left lane to allow left and straight ahead, and the right lane to be right turn only? That would keep a clear lane for through traffic and stop right-turners holding everyone up.

      I assume there may be a design reason for the current setup, such as signal timing, pedestrian phases, or historic traffic patterns, so I’m curious if anyone knows why it’s configured this way.

      submitted by /u/SimplyBRC
      [link] [comments]

    15. 🔗 r/Yorkshire What to do with teenager on crutches in half term rss

      I have a teenager on crutches, what can we do?

      submitted by /u/SarkyMs
      [link] [comments]

    16. 🔗 3Blue1Brown (YouTube) The lattice bacteria puzzle rss

      Part of a series of monthly puzzles, done in collaboration with MoMath. https://momath.org/mindbenders

    17. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 release, ~1 changed rss
      sync repo: +1 release, ~1 changed
      
      ## New releases
      - [IDASQL](https://github.com/allthingsida/idasql): 0.0.8
      
      ## Changes
      - [IFL](https://github.com/hasherezade/ida_ifl):
        - host changed: HexRays-plugin-contributions/ida_ifl → hasherezade/ida_ifl
        - 1.5.2: archive contents changed, download URL changed
      
  2. February 17, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-02-17 rss

      IDA Plugin Updates on 2026-02-17

      New Releases:

      Activity:

      • capa
        • e0bd6d5e: Sync capa rules submodule
        • 239bafd2: Sync capa-testfiles submodule
        • 2033c4ab: build(deps-dev): bump pyinstaller from 6.18.0 to 6.19.0 (#2856)
      • efiXplorer
        • 596960fd: update idasdk submodule, bump plugin version (#133)
      • HexRays-CppFormatter
        • 42d2a0dd: improved syntax rewriting for casts
      • ida-free-mcp
        • 0cf9b50b: Fix MCP client compatibility: add missing type field and initialized …
      • ida-hcli
        • 0f8a338d: update: shutil.move instead of path.rename
        • 8139fd15: ci: cleanup idalib support matrix
      • Ida-Plugins-Kit
        • 98412f77: Reorganize DumpToolkit for IDA and Binja
      • ida_ifl
        • 0d7a163e: [BUGFIX] Fixed backward compatibility with Qt5
        • f63bd9de: [BUGFIX] Fixed compatibility with Qt6
      • idasql
        • ab7825c1: fix: remove dead code, bump version to 0.0.8 (#14)
        • c5deaa58: feat: consolidate HTTP server + add no-agent preset (#13)
      • msc-thesis-LLMs-to-rank-decompilers
      • python-elpida_core.py
        • 3ecccc9a: fix: mobile portrait tabs — compact + scrollable for narrow screens
        • 4f22a6a4: fix: scanner shows full analysis + sticky tabs for mobile nav
        • 82a26844: v3.1: D15 Constitutional Broadcast — governance-gated WORLD bucket write
        • d52e2d72: v3.0: 9-node Parliament governance engine
        • b2067575: v2.5: Adversarial prompt hardening — Kernel K1-K7 + Shell expansion
        • fac31097: ⛔ Immutable Kernel: hard-coded Layer 2 governance (Gödel fix)
        • e229090a: 🛡️ Governance v3: Existential Hard Stop + Neutrality Anchor
        • 4fd4ee75: 🛡️ Governance: 3-phase axiom detection fixes Scenarios 1 & 3
        • 1db13414: 🔗 Mind↔Body↔World bridge: 5 architectural fixes
        • 6cf78519: Fix: full domain positions, synthesis first, chat input cursor visible
        • 9c50e1e2: v2.1: Retro-futuristic UI overhaul — proper Greek, tab nav, warm aest…
        • 9a3313d6: 🌀 Unified System: Chat + Live Audit + Scanner + Governance + D15 Pipe…
        • f359994c: 🌀 CHECKPOINT: Vercel↔HF Integration Analysis Complete
    2. 🔗 r/Harrogate Drop off at the station rss

      What's the deal with dropping someone off at the station? Do I have to pay for parking, or can I just pull into the little car park at the front and drop off for free? I assumed it was free, but I saw an old post about someone getting a fine, so I wanted to double check. Cheers

      submitted by /u/milksperfect
      [link] [comments]

    3. 🔗 idursun/jjui v0.9.12 release

      Release 0.9.12 (Final 0.9.x Series)

      Note

      This is a quick release before marking the end of the 0.9.x series, due to breaking changes introduced by the work in #533

      Fixes & Improvements

      • Modal rendering now fills the entire git, bookmarks, and custom command dialogs before drawing rows, eliminating gaps introduced by the new renderer #535
      • Fixed a regression where ui.colors.selected = { bg = "black" } no longer highlighted the selected revision by restoring the correct lipgloss → ANSI color conversion #534
      • jj git fetch --tracked uses the t shortcut so f remains dedicated to the default fetch, matching the push command’s tracked shortcut #532
      • Added a [keys.diff_view] config section so diff scrolling, paging, and closing bindings are customizable #527
      • Completion now auto-inserts () when selecting a parameterless function, improving Lua/script authoring speed #530
      • Lua’s flash() accepts error and sticky fields, so scripts can show error toasts and control persistence without additional helpers #529

      What's Changed

      • Auto-append () for parameterless functions in completion by @Dima-369 in #530
      • Add customizable keybindings for the diff view ([keys.diff_view]) by @Dima-369 in #527
      • fix(ui/git): update key for jj git fetch --tracked by @PrayagS in #532

      New Contributors

      Full Changelog: v0.9.11...0.9.12

    4. 🔗 r/wiesbaden Film buffs: Does anyone know where I can watch the film "Sauna"? rss
    5. 🔗 r/Harrogate Looking to Join a 6‑a‑Side Football Team in Harrogate ⚽ rss

      Hi everyone,

      I’m looking to join a casual 6‑a‑side football team in Harrogate. I’m happy to play any position, and I’m flexible with evenings — weekdays or weekends.

      I’m keen to play for fun, improve my game, and meet some local footballers. If your team has space for an extra player, or you know a team that does, please comment below or DM me!

      submitted by /u/Individual_Owl7701
      [link] [comments]

    6. 🔗 r/LocalLLaMA The guy that won the NVIDIA Hackathon and an NVIDIA DGX Spark GB10 has won another hackathon with it! rss

      Hey everyone,

      I promised that I would update you all with what I was going to do next with the DGX Spark GB10 that I won. It's been a few weeks and I have been primarily heads down on fundraising for my startup trying to automatically improve and evaluate Coding Agents.

      Since the last time I posted, I became a Dell Pro Precision Ambassador after they saw all of the cool hackathons I've won and the things I'm building that can hopefully make a difference in the world (I am trying to create Brain World Models using a bunch of different types of brain scans to do precision therapeutics, diagnostics, etc. as my magnum opus).

      They sent me a Dell Pro Max T2 Tower and another DGX Spark GB10 which I have connected to the previous one that I won. This allows me to continue my work with the limited funds that I have to see how far I can really push the limits of what's possible at the intersection of Healthcare and AI.

      During Super Bowl weekend I took some time to do a 24-hour hackathon solving a problem that I really care about (even if it wasn't related to my startup).

      My most recent job was at UCSF doing applied neuroscience, creating a research-backed tool that screened children for dyslexia. Traditional approaches don't meet learners where they are, so I wanted to take the research I did further and actually create solutions that also did computer adaptive learning.

      Through my research I have come to find that the current solutions for learning languages are antiquated, often assuming a "standard" learner: same pace, same sequence, same practice, same assessments.

      But, language learning is deeply personalized. Two learners can spend the same amount of time on the same content and walk away with totally different outcomes because the feedback they need could be entirely different with the core problem being that language learning isn’t one-size-fits-all.

      Most language tools struggle with a few big issues:

      • Single language: most tools are designed specifically for native English speakers
      • Culturally insensitive: even within the same language there can be different dialects and word/phrase usage
      • Static difficulty: content doesn't adapt when you're bored or overwhelmed
      • Delayed feedback: you don't always know what you said wrong or why
      • Practice ≠ assessment: testing is often separate from learning, instead of driving it
      • Speaking is underserved: it's hard to get consistent, personalized speaking practice without 1:1 time

      For many learners, especially kids, the result is predictable: frustration, disengagement, or plateauing.

      So I built an automated speech recognition app that adapts in real time, combining computer adaptive testing and computer adaptive learning to personalize the experience as you go.

      It not only transcribes speech, but also evaluates phoneme-level pronunciation, which lets the system give targeted feedback (and adapt the next prompt) based on which sounds someone struggles with.

      I tried to make it as simple as possible because my primary user base would be teachers that didn't have a lot of time to actually learn new tools and were already struggling with teaching an entire class.

      It uses natural speaking performance to determine what a student should practice next.

      So instead of providing every child a fixed curriculum, the system continuously adjusts difficulty and targets based on how you’re actually doing rather than just on completion.

      How I Built It

      1. I connected two NVIDIA DGX Sparks with the GB10 Grace Blackwell Superchip, giving me 256 GB LPDDR5x coherent unified system memory to run inference and the entire workflow locally. I also had the Dell Pro Max T2 Tower, but I couldn't physically bring it to the Notion office, so I used Tailscale to SSH into it
      2. I utilized CrisperWhisper, faster-whisper, and a custom transformer to get accurate word-level timestamps, verbatim transcriptions, filler detection, and hallucination mitigation
      3. I fed this directly into the Montreal Forced Aligner to get phoneme-level alignment
      4. I then used a heuristics detection algorithm to screen for several disfluencies: prolongation, replacement, deletion, addition, and repetition
      5. I included stutter and filler analysis/detection using the SEP-28k dataset and PodcastFillers Dataset
      6. I fed these into AI Agents using both local models, Cartesia's Line Agents, and Notion's Custom Agents to do computer adaptive learning and testing

      The result is a workflow where learning content can evolve quickly while the learner experience stays personalized and measurable.

      I want to support learners who don’t thrive in rigid systems and need:

      • more repetition (without embarrassment)
      • targeted practice on specific sounds/phrases
      • a pace that adapts to attention and confidence
      • immediate feedback that’s actually actionable

      This project is an early prototype, but it’s a direction I’m genuinely excited about: speech-first language learning that adapts to the person, rather than the other way around.

      https://www.youtube.com/watch?v=2RYHu1jyFWI

      I wrote something on Medium that has a tiny bit more information: https://medium.com/@brandonin/i-just-won-the-cartesia-hackathon-reinforcing-something-ive-believed-in-for-a-long-time-language-dc93525b2e48?postPublishedType=repub

      For those wondering, these are the specs of the Dell Pro Max T2 Tower that they sent me:

      • Intel Core Ultra 9 285K (36 MB cache, 24 cores, 24 threads, 3.2 GHz to 5.7 GHz, 125W)
      • 128GB: 4 x 32 GB, DDR5, 4400 MT/s
      • 2x - 4TB SSD TLC with DRAM M.2 2280 PCIe Gen4 SED Ready
      • NVIDIA RTX PRO 6000 Blackwell Workstation Edition (600W), 96GB GDDR7

      submitted by /u/brandon-i
      [link] [comments]

    7. 🔗 r/wiesbaden ID checks at Schlachthof Wiesbaden rss

      Does anyone know how strictly they check IDs at concerts, especially for minors? I've been to a lot of concerts, and so far only my ticket has ever been checked, never an ID. But I've also never been to the Schlachthof before.

      submitted by /u/Visible-Tale7016
      [link] [comments]

    8. 🔗 r/Leeds New West Yorkshire journalism rss

      Some of you might be familiar with the work of The Mill in Manchester. If not, it's worth checking them out. They do really solid local journalism - not rubbish clickbait and repurposed national stories like Leeds Live.

      They want to start up a West Yorkshire version and need 500 people to say they would support it. I've already pledged my support, and I'd super encourage everyone who can to do the same.

      I’m in no way affiliated - just really respect what they do.

      submitted by /u/PersonOfNoInterest4
      [link] [comments]

    9. 🔗 r/reverseengineering I built an autonomous AI reverse engineering agent (8,012 / 8,200 GTA SA functions reversed) rss
    10. 🔗 r/Leeds High end restaurant, city centre, Tuesday evening, 13 people…. Help please!! rss

      As above, I’m hosting a group of colleagues from Denmark on a Tuesday evening. They’re foodies, so I’d like to impress them. My usual go-to restaurants (Ox Club, Empire Cafe, V&V) are closed on Tuesdays - where can I take them?

      submitted by /u/Diligent_Box7258
      [link] [comments]

    11. 🔗 Evan Schwartz PSA: Your SQLite Connection Pool Might Be Ruining Your Write Performance rss

      Update (Feb 18, 2026): After a productive discussion on Reddit and additional benchmarking, I found that the solutions I originally proposed (batched writes or using a synchronous connection) don't actually help. The real issue is simpler and more fundamental than I described: SQLite is single-writer, so any amount of contention at the SQLite level will severely hurt write performance. The fix is to use a single writer connection with writes queued at the application level, and a separate connection pool for concurrent reads. The original blog post text is preserved below, with retractions and updates marked accordingly. My apologies to the SQLx maintainers for suggesting that this behavior was unique to SQLx.

      Write transactions can lead to lock starvation and serious performance degradation when using SQLite with SQLx, the popular async Rust SQL library. In retrospect, I feel like this should have been obvious, but it took a little more staring at suspiciously consistent "slow statement" logs than I'd like to admit, so I'm writing it up in case it helps others avoid this footgun.

      SQLite Locking and Transactions

      SQLite is single-writer. In WAL mode, it can support concurrent reads and writes (or, technically "write" singular), but no matter the mode there is only ever one writer at a time. Before writing, a process needs to obtain an EXCLUSIVE lock on the database.

      If you start a read transaction with a SELECT and then perform a write in the same transaction, the transaction will need to be upgraded to a write transaction with an exclusive lock:

      A read transaction is used for reading only. A write transaction allows both reading and writing. A read transaction is started by a SELECT statement, and a write transaction is started by statements like CREATE, DELETE, DROP, INSERT, or UPDATE (collectively "write statements"). If a write statement occurs while a read transaction is active, then the read transaction is upgraded to a write transaction if possible. (source)

      Transactions started with BEGIN IMMEDIATE or BEGIN EXCLUSIVE also take the exclusive write lock as soon as they are started.

      Async Transactions with SQLx

      Transactions in SQLx look like this:

      let mut tx = db_connection.begin().await?;
      
      let read_value = sqlx::query("SELECT * FROM table WHERE id = $1")
          .bind(1)
          .fetch_one(&mut *tx)
          .await?;
      
      sqlx::query("UPDATE table SET some_field = $1 WHERE id = $2")
          .bind("hello")
          .bind(1)
          .execute(&mut *tx)
          .await?;
      
      tx.commit().await?;
      

      This type of transaction where you read and then write is completely fine. The transaction starts as a read transaction and then is upgraded to a write transaction for the UPDATE.

      Lock ~~Starvation~~ Contention with Multiple Writes

      Update: This section incorrectly attributes the performance degradation to the interaction between async Rust and SQLite. The problem is actually that any contention for the EXCLUSIVE lock at the SQLite level, whether from single statements or batches, will hurt write performance.

      The problem arises when you call await within a write transaction. For example, this could happen if you call multiple write statements within a transaction:

      let mut tx = db_connection.begin().await?;
      
      for (id, value) in values {
          sqlx::query("INSERT INTO table (id, some_field) VALUES ($1, $2)")
              .bind(id)
              .bind(value)
              .execute(&mut *tx)
              .await?;
      }
      
      tx.commit().await?;
      

      This code will cause serious performance degradation if you have multiple concurrent tasks that might be trying this operation, or any other write, at the same time.

      When the program reaches the first INSERT statement, the transaction is upgraded to a write transaction with an exclusive lock. However, when you call await, the task yields control back to the async runtime. The runtime may schedule another task before returning to this one. The problem is that this task is now holding an exclusive lock on the database. All other writers must wait for this one to finish. If the newly scheduled task tries to write, it will simply wait until it hits the busy_timeout and then return a busy-timeout error. The original task might be able to make progress if no other concurrent writers are scheduled before it, but under higher load you might continuously have new tasks that block the original writer from progressing.

      Starting a transaction with BEGIN IMMEDIATE will also cause this problem, because you will immediately take the exclusive lock and then yield control with await.

      Identifying this Problem in Logs

      In practice, you can spot this issue in your production logs if you see a lot of SQLx warnings that say slow statement: execution time exceeded alert threshold where the elapsed time is very close to your busy_timeout (which is 5 seconds by default). This is the result of other tasks being scheduled by the runtime and then trying and failing to obtain the exclusive lock they need to write to the database while being blocked by a parked task.
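
      For reference, the busy_timeout those warnings keep hitting can be adjusted when the connection options are built. A minimal sketch, assuming SQLx's SqliteConnectOptions (five seconds is the default):

      use std::time::Duration;
      use sqlx::sqlite::{SqliteConnectOptions, SqliteJournalMode};
      
      // Note: raising busy_timeout only delays the failure; it does not
      // remove the lock contention described above.
      let options = SqliteConnectOptions::new()
          .filename("my.db")
          .journal_mode(SqliteJournalMode::Wal)
          .busy_timeout(Duration::from_secs(5)); // SQLx's default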

      The Fix: Single Writer, Separate Reader Pool

      SQLite's concurrency model (in WAL mode) is many concurrent readers with exactly one writer. Mirroring this architecture at the application level provides the best performance.

      Instead of a single connection pool, where connections may be upgraded to write at any time, use two separate pools:

      let write_options = SqliteConnectOptions::new()
          .filename("my.db")
          .journal_mode(SqliteJournalMode::Wal);
      
      let read_options = SqliteConnectOptions::new()
          .filename("my.db")
          .journal_mode(SqliteJournalMode::Wal)
          .read_only(true);
      
      // Single writer connection — all writes queue here
      let writer = SqlitePoolOptions::new()
          .max_connections(1)
          .connect_with(write_options)
          .await?;
      
      // Multiple reader connections — reads run concurrently
      let reader = SqlitePoolOptions::new()
          .max_connections(num_cpus::get() as u32)
          .connect_with(read_options)
          .await?;
      

      With this setup, write transactions serialize within the application. Tasks will queue waiting for the single writer connection, rather than all trying to obtain SQLite's EXCLUSIVE lock.
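
      As a usage sketch under that setup (reusing the writer and reader pools from the snippet above), reads can run concurrently on the read-only pool while writes queue for the single writer connection:

      // Reads go to the read-only pool and can run concurrently.
      let row = sqlx::query("SELECT * FROM table WHERE id = $1")
          .bind(1)
          .fetch_one(&reader)
          .await?;
      
      // Writes queue for the single writer connection.
      sqlx::query("UPDATE table SET some_field = $1 WHERE id = $2")
          .bind("hello")
          .bind(1)
          .execute(&writer)
          .await?;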

      In my benchmarks, this approach was ~20x faster than using a single pool with multiple connections:

      Scenario | Total Time | Rows/sec | P50 | P99
      ---|---|---|---|---
      Single pool (50 connections) | 1.93s | 2,586 | 474ms | 182s
      Single writer connection | 83ms | 60,061 | 43ms | 82ms

      An alternative to separate pools is wrapping writes in a Mutex, which achieves similar performance (95ms in the benchmarks). However, separate pools make the intent clearer and, if the reader pool is configured as read-only, prevent accidentally issuing a write on a reader connection.
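
      A minimal sketch of that mutex variant, assuming a tokio::sync::Mutex shared across tasks and the single pool from earlier (the names here are placeholders, not code from the benchmarks):

      use std::sync::Arc;
      use tokio::sync::Mutex;
      
      // Application-level lock shared by every writing task.
      let write_lock = Arc::new(Mutex::new(()));
      
      // Hold the guard for the whole write transaction so writers queue
      // here instead of contending for SQLite's EXCLUSIVE lock.
      let _guard = write_lock.lock().await;
      let mut tx = pool.begin().await?;
      sqlx::query("INSERT INTO table (id, some_field) VALUES ($1, $2)")
          .bind(1)
          .bind("hello")
          .execute(&mut *tx)
          .await?;
      tx.commit().await?;
      drop(_guard); // releases the next queued writer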

      What About Read-Then-Write Transactions?

      Having separate pools works when reads and writes are independent, but sometimes you need to atomically read and then write based on what you read:

      let mut tx = pool.begin().await?;
      
      let balance = sqlx::query_scalar::<_, i64>(
          "SELECT balance FROM accounts WHERE id = ?"
      )
          .bind(account_id)
          .fetch_one(&mut *tx)
          .await?;
      
      sqlx::query("UPDATE accounts SET balance = ? WHERE id = ?")
          .bind(balance - amount)
          .bind(account_id)
          .execute(&mut *tx)
          .await?;
      
      tx.commit().await?;
      

      Sending this transaction to the single write connection is fine if the read is extremely fast, such as a single lookup by primary key. However, if your application requires expensive reads that must precede writes in a single atomic transaction, the shared connection pool with moderate concurrency might outperform a single writer.

      ~~Partial Solution: Batched Writes~~

      Retraction: Benchmarking showed that batched writes perform no better than the naive loop under concurrency, because 50 connections still contend for the write lock regardless of whether each connection issues 100 small INSERTs or one large INSERT. QueryBuilder is still useful for reducing per-statement overhead, but it does not fix the contention problem.

      We could safely replace the example code above with this snippet that uses a bulk insert to avoid the lock starvation problem:

      let mut builder = sqlx::QueryBuilder::new(
          "INSERT INTO table (id, some_field)"
      );
      
      builder.push_values(values, |mut b, (id, value)| {
          b.push_bind(*id).push_bind(*value);
      });
      
      builder.build()
          .persistent(false) // see note below
          .execute(&db_connection)
          .await?;
      

      Note that if you do this with different numbers of values, you should call .persistent(false). By default, SQLx caches prepared statements. However, each version of the query with a different number of arguments will be cached separately, which may thrash the cache.

      ~~Raw SQL for Atomic Writes to Multiple Tables~~

      Retraction: Benchmarking showed that this did not actually improve performance.

      Unfortunately, the fix for atomic writes to multiple tables is uglier and potentially very dangerous. To avoid holding an exclusive lock across an await, you need to use the raw_sql interface to execute a transaction in one shot:

      sqlx::raw_sql( // this is implicitly wrapped in a transaction
          "UPDATE table1 SET foo = 'bar';
          UPDATE table2 SET baz = 'qux';"
      ).execute(&db_connection)
      .await?;
      

      However, this can lead to catastrophic SQL injection attacks if you use this for user input, because raw_sql does not support binding and sanitizing query parameters.

      Note that you can technically run a transaction with multiple statements in a query call but the docs say:

      The query string may only contain a single DML statement: SELECT, INSERT, UPDATE, DELETE and variants. The SQLite driver does not currently follow this restriction, but that behavior is deprecated.

      If you find yourself needing atomic writes to multiple tables with SQLite and Rust, you might be better off rethinking your schema to combine those tables or switching to a synchronous library like rusqlite with a single writer started with spawn_blocking.
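
      If you do go the rusqlite route, a minimal sketch of that shape might look like the following, assuming a tokio runtime (the file name and statements are placeholders carried over from the example above):

      use rusqlite::{params, Connection};
      
      // Run the whole multi-statement transaction on a blocking thread.
      // Unlike raw_sql, rusqlite lets you bind parameters safely.
      tokio::task::spawn_blocking(|| -> rusqlite::Result<()> {
          let mut conn = Connection::open("my.db")?;
          let tx = conn.transaction()?;
          tx.execute("UPDATE table1 SET foo = ?1", params!["bar"])?;
          tx.execute("UPDATE table2 SET baz = ?1", params!["qux"])?;
          tx.commit()
      })
      .await??; // the first ? handles the JoinError, the second the rusqlite::Result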

      Could Rust's Type System Save Us?

      Update: the most useful change would actually be making a distinction between a ReadPool and a WritePool. Libraries like SQLx could enforce the distinction at compile time or runtime by inspecting the queries for the presence of write statements, or the ReadPool could be configured as read-only.

      Maybe, but it probably won't. If SQLx offered both a sync and async API (definitely out of scope) and differentiated between read and write statements, a write Transaction could be !Send like std::sync::MutexGuard, which would prevent it from being held across an await point.

      However, SQLx is not an ORM and it probably isn't worth it for the library to have different methods for read versus write statements. Without that, there isn't a way to prevent write transaction locks from being held across awaits while allowing safe read transactions to be used across awaits.

      So, in lieu of type safety to prevent this footgun, I wrote up this blog post and this pull request to include a warning about this in the docs.


      Discuss on r/rust and Hacker News.


    12. 🔗 r/york York Mansion House rss

      York Mansion House: Looking mighty fine tonight.

      submitted by /u/York_shireman
      [link] [comments]

    13. 🔗 r/Leeds Quiet mornings sound like a nice idea rss
    14. 🔗 r/Leeds OCD diagnoses rss

      hi guys, mods please feel free to lock or delete this post if it isn't allowed - I'm just unsure how else to ask people about this

      For those in Leeds who have been diagnosed and/or treated for OCD, how did you guys get the diagnosis? I don't mean by going to the doctors because I've tried all the way to an assessment for the community mental health service and I've brought it up to primary care when I was with them last. CMHS basically just said eh we don't think you need our help but also we can't help you because you're too complex.

      I have been told by primary care that I'm not able to get a diagnosis or assessment because nowhere in Leeds can do it, and there wasn't even any talk of getting assessed outside Leeds, though that could be difficult due to my personal situation anyway (I'd try to make it work if I actually had the chance). I'm at my wits' end. Trying to manage my traits alone has only made them worse and harder to deal with, because I just don't have the knowledge to know what is and isn't helpful for OCD. I've spent my entire life unknowingly suppressing and managing intrusive thoughts, obsessions and compulsions in ways that are apparently KNOWN to make OCD worse. I can't access any therapies or anything because I don't have the resources or the diagnosis to qualify, but I also don't believe there's no chance of being assessed in Leeds by anyone at all, because if that were the case, how would I know people with the diagnosis?

      any sort of help or advice would be greatly appreciated, even if it's just "say this to your doctor to get them to do their job". thanks in advance :)

      submitted by /u/soup1286
      [link] [comments]

    15. 🔗 r/reverseengineering GitHub - xKiian/datadome-vm: Reverse engineering the new Datadome VM 🔥 rss
    16. 🔗 r/Yorkshire New plans being drawn up to improve Skipton Railway Station rss

      New plans being drawn up to improve Skipton Railway Station

      submitted by /u/CaptainYorkie1
      [link] [comments]

    17. 🔗 sacha chua :: living an awesome life The week of February 9 to 15, and an overview of my process rss

      An overview of my French learning

      00:03 I've just started private lessons with a new French tutor, so it's a good opportunity to document my current process in case he has suggestions.

      My journal

      00:19 I start with my journal entries, which I write on my phone throughout the day (a little here, a little there) in the Orgzly Revived app. I sync them to my computer with Syncthing. Orgzly uses the standard text field, which has autocorrect built in. To look up words in the dictionary, I check WordReference, or I make a note to review them later. If I want to translate an expression I can't find on WordReference, I use Google Translate. Both are less convenient than looking words up on my computer in Emacs, but the trade-off is worth it because I can write anytime, anywhere. That lets me use idle moments when I'm waiting for my daughter or snuggled under the covers.

      Rewriting

      01:23 Writing longer texts and correcting them are much easier on my computer than on my phone, thanks to the screen size and a few functions I've built. I have to use the Flyspell spell checker because I often forget accents. I also need a grammar checker, so I use Grammalecte via Flycheck in Emacs to catch agreement errors between a noun and its article, adjective, or verb. But Grammalecte can't detect anglicisms or poor word choices.

      02:09 I've tried a few AI models for feedback. For now, I prefer Gemini or Claude. Gemini's free API is limited to twenty requests per day, so it's better to correct my journal daily instead of setting it aside until my session with my tutor. Claude doesn't offer a free API. To try it from time to time, I use Spookfox, which controls Firefox from Emacs.

      02:47 I display the AI's results with Flycheck in Emacs, which highlights errors and shows their explanations. That makes rewriting easier. I can automatically see the explanation of the error at point, and I can also jump to the next error. When I have questions or need clarification, I ask the AI.

      03:17 After several revisions, I notice when the AI starts going in circles. To minimize that, I send it my current draft along with the history of its suggestions. I add the feedback to a log for each entry. That log is actually available via the "View Org source for this post" link at the bottom of my posts. Someday I'll analyze them to visualize how often each type of error occurs and to generate questions that will help me learn.

      03:58 I want to make corrections easier on my smartphone, but the small screen limits me. Either I switch between apps constantly, or I try to build the functionality into the page. If I add a button to accept a suggestion instead of typing it myself, I might not learn as effectively. That's counterproductive, because it builds click memory instead of spelling memory. If I tap the error's location to type it myself, the way the display reflows for the on-screen keyboard is disorienting. I wonder what kind of interface would work best…

      04:49 Using AI to polish my texts only gives an illusion of competence. Deep down, I'm still a beginner. Still, I think it's useful because it gets me used to the subjunctive, the conditional, and more natural expressions rather than anglicisms, in the context of my interests and daily life. As with playing the piano, it's better to practice precisely, even if slowly, than to pick up bad habits. If I keep working at it, I'll eventually make those corrections myself. The AI doesn't write for me. I choose the ideas and write the drafts; on the other hand, I have to make the corrections myself in order to learn. If AI didn't exist, I'd learn more slowly through private lessons, which are expensive, or through free forums. Maybe I'd choose a different hobby.

      Recordings

      06:08 After trying to correct my grammar with these tools, I record my pronunciation attempts. I like practicing in the context of sentences, because combinations of sounds often trip me up, and also because the sentences come from my journal and remind me of particular moments.

      06:33 To do this, I use a function that turns my sentences into subtitles, and I record them sentence by sentence. Before each recording, I listen to a pronunciation model generated with Google Translate's text-to-speech via a Python library. I listen, I repeat, I listen, I repeat.

      Text-to-speech

      07:01 Google Translate's intonation is a bit flat, so I have to make an effort to put some life into it when I repeat. I could switch to Azure's text-to-speech engine, which is more natural, but for some reason I prefer Google Translate's intonation. Once my pronunciation is a bit better, I'd like to try cloning my voice, the way someone adjusted the tones of their cloned voice while studying Chinese. I think listening to a model of your own voice has an interesting psychological effect.

      07:52 Once I've finished my recordings, I often listen to all the model/recording pairs to re-evaluate them. I redo them if necessary. Hearing the differences between sounds is a big part of improving pronunciation, so I study the FSI phonology course and other resources from time to time.

      08:22 After many attempts, I use subed-record in Emacs to assemble the final recording, perhaps alternating with a chime. That way, even if I record phrases instead of complete sentences, they don't confuse listeners.

      08:45 I'd also like to simplify this task on my smartphone. I built a tool that makes it easier to listen to text-to-speech sentence by sentence using Google Chrome's Web Speech API. I could add a feature to record my attempts, perhaps with the help of speech recognition to minimize clicks. Then I could download all the final recordings along with a subtitle file that matches the audio. If I do that, I can use idle moments to record short clips that I can assemble on my computer.

      Publishing

      09:31 I use the Montreal Forced Aligner to generate word-by-word timestamps. To publish on my blog, I wrap the text in an Org Mode block to associate the file with the links that follow. I include the final recording and add timestamps per paragraph or bullet point to make navigation easier. To avoid clutter, I don't display per-word timestamps, but within the current subtitle on the audio player, my code lets me click a word to replay the sound at that moment.

      10:11 Although I'm a little embarrassed to publish my drafts and recordings before my tutor has corrected everything, it's not the end of the world. It's easy to update. I think of it as exposure therapy, which will help loosen my tongue.

      The session

      10:35 Each week, I meet with a tutor to identify awkward expressions and the words whose pronunciation I need to work on. It's worth it, because every so often the AI generates odd suggestions that I'm in no position to evaluate, since I'm a beginner. I also haven't found a good way to use AI to correct my pronunciation. Most approaches use speech recognition, but that's too lenient, because it probabilistically guesses at the meaning. I suppose I could use speech recognition to provide word-by-word confidence scores, which seems to be the approach other apps use. The Montreal Forced Aligner can also identify phonemes, so I might eventually use it to highlight the words where I stumble over vowels or silent letters, but I believe accurate IPA transcription is not a solved problem. On the other hand, a real conversation partner can not only spot my mistakes easily but also guide me in correcting them, explaining tongue or lip movements or suggesting similar words to chain the sounds together. It's a good opportunity to try learning with a teacher, and it shows my daughter that I value learning.

      12:28 Before the session with my tutor, I export my Org Mode entries to HTML and copy them into a shared Google Docs document. During the session, I play my recordings if I have any, or I read my journal aloud. My tutor explains his corrections and writes his comments in the document. I repeat words or sentences until they're right. My pronunciation still needs a lot of improvement, so it's very useful to be able to read the words while he listens. I think that if I build something to sync the current subtitle with my selection in our shared Google Docs document, it will make it easier for us to track where I am so my tutor can add his notes.

      13:31 Once we've gotten used to each other and my tutor thinks it's manageable, we can discuss topics that interest us. I'm in no rush to finish my drafts. It's also good for me to practice finding words spontaneously in conversation. Writing my journal should give me plenty of words I can use, but of course the vocabulary you can draw on in a conversation is smaller than what you can reach while writing calmly with lots of tools. Speaking intimidates many students, but the more you try, the more you improve. The rest of my drafts can wait for the next session, or maybe my dear readers will send me comments to improve my French.

      14:36 If only conversation came with automatic underlining to flag my mistakes… In any case, my tutor can correct my errors in real time or take notes to explain them without breaking our train of thought. If it's acceptable, I can also record the session to transcribe it automatically and listen to it again later. From time to time, I use real-time speech recognition via Live Captions in Google Chrome for instant subtitles. That helps me understand when he speaks too fast or uses words that, at this stage, I understand more easily by reading than by listening, even if the results aren't very accurate.

      15:37 Since Live Captions erases the previous subtitle to display the current one as it goes, my journaling tool is more convenient. But it takes a while to initialize, and Google Chrome often falls back to my physical microphone by default, ignoring the virtual device that includes my tutor's audio. So it's not very reliable. Provided I configure my system correctly and keep a cool head (hard as a beginner), I can use my keyboard shortcut to insert the speech-recognition result on demand, which is more accurate but slower on my computer.

      Updating

      16:32 After the session, I update my recordings, subtitles, and timestamps based on my tutor's comments. I publish them on my blog to keep myself accountable and to track my progress.

      Anki cards

      16:52 I automatically extract the content of my entries into individual files by date and analyze them with a Python program that spots new lemmas, extracts the sentences containing them, and creates cloze-deletion Anki cards. I prefer memorizing words in the context of a sentence rather than in isolation, because I think it minimizes the temptation to translate from English. I review the cards every morning when I get up. Since the sentences go into this spaced-repetition system for memorization, it's good to have a real person who speaks French well correct me, to avoid entrenching mistakes.

      Watching and listening

      17:57 I also watch shows and films dubbed in French, with French subtitles. I download the subtitles to my computer or smartphone to study at my own pace. My daughter had fun when we watched KPop Demon Hunters in French together, so I'm willing to watch it several times to train my ear while reading the subtitles. Someday I'll try correcting them.

      18:32 I've also downloaded songs to my phone to listen to. If I choose familiar songs, like Disney or KPop Demon Hunters, I can focus on them when I want, and I can also leave them playing in the background in case I need to concentrate on something else.

      The numbers

      18:57 According to my time tracker, this adds up to a total of 216 hours so far, or an average of 2 hours a day since November. My Anki cards take 20 minutes. A show takes an hour.

      19:18 Analyzing my data, I think learning French has replaced playing Stardew Valley, which was my obsession for a month. It has also reduced the time I spend writing English posts for my blog and practicing the piano, but I still do those things. The main casualties of my reshuffled time are my English writing and my drawings, such as my monthly summaries. My French journal is more thorough than my English one, so that's fine. I can always translate if necessary. I think it will be easy to increase my French study time, thanks to compounding benefits that make the time more enjoyable.

      20:21 Going forward, I'm switching to two 45-minute sessions with my tutor per week instead of one one-hour session. That encourages me to write in more complex ways, creates space for conversations, and keeps my voice from getting tired.

      20:45 I wrote a program that analyzes my journal using the spaCy library to count the unique lemmas in my texts, so that similar words like "savoir" and "sais" are grouped under a single lemma. I think that's more accurate than looking words up in the Lexique database. So far, my journal contains more than 31,000 words in total, or more than 2,500 unique lemmas. I pick up about 25 new unique lemmas per entry, which is, incidentally, slightly more than my quota of new Anki cards to learn, so I can't catch up with a backlog that keeps growing. Maybe I should raise my limit, if my brain can handle it. Maybe someday almost every word in an entry will be familiar, but for now I keep finding plenty of new thoughts I want to write down and plenty of improvement ideas I want to try and share.

      22:15 That's just fine. My new hobbies don't always last, but because this one includes my daughter's help, tinkering with my tools, the memories in my journal, and the psychological commitment of paying for private lessons, I think it probably suits me for now. I can't wait to learn more.

      lundi 9 février

      J'ai eu mon dernier rendez-vous avec ma tutrice Claire. J'ai montré mes enregistrements sur mon blog. Ils étaient très utiles pour réviser la prononciation parce que nous les avons écoutés et analysés. Quand je parle, je trouve que c'est difficile de m'écouter en même temps, donc l'enregistrement en valait la peine même s'il demande plusieurs tentatives.

      Les cicatrices de ma fille lui faisaient mal, donc elle n'a pas voulu aller au cours de gymnastique. Nous sommes allés à pied à la bibliothèque, au supermarché et au parc en jouant à Pokémon Go.

      Pour le dîner, mon mari et ma fille ont préparé deux soupes. En hiver, l'air est si sec, donc les soupes étaient très réconfortantes. Nous les avons mangées avec du pain de mon mari. C'était délicieux.

      À l'heure du coucher, ma fille avait une sensation étrange au niveau du nez, mais nous n'avons pas pu faire grand-chose. Elle s'est encore barricadée dans sa chambre.

      J'ai feuilleté des profils de professeurs de français sur Italki. J'aime les professeurs qui publient leurs vidéos de présentation avec sous-titres en français. Ça me montre qu'ils sont à l'aise avec la technologie et qu'ils ont réfléchi à la façon dont les étudiants peuvent apprendre grâce à leur présentation. Cette semaine, je dois choisir une personne pour faire un cours d'essai.

      mardi 10 février

      J'étais fatiguée parce que ma fille s'est blottie contre moi pendant la nuit. Mais j'étais contente que ma fille sache que même si elle a été de mauvaise humeur la nuit dernière, elle est toujours la bienvenue si elle veut un câlin.

      Il faisait beau. Mon mari et moi avons déneigé le trottoir devant les maisons de nos voisins. Deux voisins nous ont dit que nous les avons inspirés à déneiger devant chez leurs voisins. Une autre personne a dit qu'il n'habitait pas ici, mais il a choisi notre rue pour essayer de trouver un chemin dans la neige, et il était clair que quelqu'un y avait mis beaucoup de soin. C'est gratifiant de voir les gens emprunter le chemin que nous avons tracé.

      Grâce au temps calme, mon mari et moi nous nous sommes assis dehors pour profiter du soleil. Il a lu et j'ai écrit dans mon journal en français sur mon téléphone. J'ai utilisé un clavier Bluetooth pour taper, ce qui me permet de profiter de plus d'espace sur l'écran. Mes gants me permettent de taper assez bien, et de toute façon, taper n'est pas un problème dans mon apprentissage du français.

      Ma fille s'était couchée trop tard hier soir, donc elle était trop fatiguée pour participer à la classe ce matin. Nous voulons trouver une meilleure façon de gérer cette situation. Elle déteste l'école parce qu'elle est trop lente, trop ennuyeuse et trop bruyante. Je suis ouverte aux alternatives. Je sais que ça ne marche pas si j'essaie de la forcer. Le principal défi est probablement la gestion de ses émotions. Si elle part furieuse et que ça l'empêche de prendre soin d'elle-même comme son sommeil ou le nettoyage de ses piercings, c'est un défi qu'elle peut relever avec ou sans notre aide. Donc comment lui montrer ça sans la mettre sur la défensive ? La colère est difficile pour beaucoup de gens. Comme toujours, elle doit vouloir quelque chose de différent avant qu'elle puisse changer. Je pense qu'une approche sévère ne l'aide pas. Dans l'ensemble, c'est tout à fait acceptable.

      L'après-midi, je me suis renseignée sur la façon d'utiliser Montréal Forced Aligner pour générer des horodatages de mots. J'ai aussi essayé Aeneas pour les générer, mais je préfère les horodatages de MFA parce qu'ils sont plus précis. J'ai modifié subed-word-data.el pour analyser le format TextGrid que MFA produit, et j'ai aussi créé des fonctions pour insérer et supprimer les horodatages de mots dans les sous-titres au format VTT. Je les ai utilisées sur mon blog pour me permettre de cliquer sur un mot pour réécouter le son à ce moment-là. Je veux ajouter une fonctionnalité similaire à subed pour faciliter la répétition des sons.

      The report card arrived! My daughter got grades ranging from B+ to A+, which was such a relief. It means she can handle her schoolwork, at least for fourth grade. Over the past months, I let her choose which assignments to do and when, and even to skip some of them. It's hard to hold back from pushing her, but it's necessary. She was very proud of the results of her efforts. Now this data will help me stop worrying.

      We agreed on an Internet break from midnight to 6 a.m. for the whole family. It's a good start.

      I booked an appointment with a French teacher. I'm going to start with a tutor who is also interested in technology.

      Wednesday, February 11

      I walked to the bank to withdraw cash for my daughter's climbing lesson today.

      I downloaded the subtitles for a show so I can study them in more detail. I think they'll help me understand French.

      I joined the OrgMeetup virtual meetup. I presented my functions for inserting and publishing links with timestamps.

      After dinner, my daughter and I went to the climbing class with her friend and the friend's father, who had organized the event for her troop. It turned out there was enough room for me to join them, so I stowed my things in a locker and climbed too. We had so much fun. (Though I was a little worried about a boy who kept coughing.) If her friend wants to learn climbing at a nearby gym, my daughter wants to go with her.

      My daughter asked me about learning French. She was also curious about gifted education. I explained why she had an individual assessment last year, and why she needs to find her own accommodations within a system that can't adapt to her.

      I need to prepare for my first appointment tomorrow with a new tutor. I'll introduce myself: My name is Sacha. I live in Toronto with my husband and daughter. My daughter is nine and has started learning French at school. I come from the Philippines, so I didn't study French as a child. I want to help her, so I've been learning French since November; that's about four months now. I enjoy the mental stimulation of writing in another language, and the excuse to tinker with my tools and workflows. I keep my journal in French, with AI help for feedback. I extract the sentences that contain new lemmas and memorize them with Anki cards. I listen to podcasts and watch shows with French subtitles. I'm trying out a few learning resources. I practice speaking with a tutor by reading my journal aloud (which lets her correct my pronunciation and my writing at the same time). But she had to stop, so I want to find another one. So here we are.

      Thursday, February 12

      This morning I had the first appointment with my new tutor. After a brief introduction, we started with my journal notes from Monday and Tuesday. I'm delighted with his feedback because I can't tell when the AI's suggestions are off. Since I use my journal to make Anki cards for memorizing vocabulary, I need corrections to avoid entrenching mistakes. I also want to internalize French pronunciation to make conversation easier, and AI can't correct my pronunciation. The first appointment went well, so I scheduled two forty-five-minute sessions per week. We'll see.

      I made it easier to update my subtitles and recordings after the tutor's corrections. I used Claude AI to generate a function that compares two lists of strings, then used my function to display the sentences that had changed.
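
      The comparison itself is essentially what Python's difflib already provides; a minimal sketch of the idea (illustrative, not the exact generated code):

      import difflib

      def changed_pairs(before, after):
          """Pair up the runs of lines that differ between two lists of strings."""
          matcher = difflib.SequenceMatcher(None, before, after)
          return [
              (before[i1:i2], after[j1:j2])
              for tag, i1, i2, j1, j2 in matcher.get_opcodes()
              if tag == "replace"
          ]

      old = ["Je suis allée au parc.", "Il faisait beau."]
      new = ["Je suis allée au parc à pied.", "Il faisait beau."]
      for removed, added in changed_pairs(old, new):
          print("-", *removed)
          print("+", *added)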

      My daughter is looking forward to my drawing a recap of her year for her birthday. I drew one last year. It's a nice way to remember and celebrate her growth. She turns 10 in two weeks. It has been a wonderful year.

      For example:

      • She managed her class participation and her own homework. According to the latest report card, she earned grades from B+ to A+. Waiting for the grades was hard for me because I worried, especially when she skipped some assignments or didn't want to join class. But with a therapist's help, I managed to hold back until the teacher gave us the report. She aspires to more independence, and it's better if she practices while the stakes are small.
      • She started learning French at school, so I started too, so that we could learn together.
      • She took a variety of extracurricular classes: individual swimming and gymnastics training, some workshops and a summer pottery camp, a nature club, a Minecraft club at the virtual school, and an art class and a skating class with different friends. She likes learning everything, even if her classmates were often too loud.
      • She helped us in the kitchen. She has become very capable and can handle several parts of cooking herself, like making hot chocolate for her friends or chopping ingredients.
      • She became able to bike to the park or walk to the store a little ahead of me. She was also proud of being able to pick out products and pay for them herself. At the farmers' market, she likes buying fruit and her favourite sourdough bread.
      • She started exploring skin care and other parts of her personal care. She discovered that she prefers the Evereden brand, and she bought some of the products herself. She chose to get her ears pierced. I help her with the cleaning.
      • She is still interested in Star Wars. She had fun using Claude AI to generate lots of interactive stories.
      • She got into Pokémon. She often plays Pokémon Go on my old smartphone, which I gave her. She also plays the old versions on an emulator. She watches the Pokémon shows and reads the books. Naturally, the whole family needs to learn everything there is to know about Pokémon.
      • We often played at KidSpark, which included a pretend supermarket. It's her favourite part of the Ontario Science Centre, which is now closed.
      • She had a great time at the Museum of Illusions with her friend. She also took the opportunity to paint on her tablet at the Art Gallery of Ontario and to explore the exhibits at the Royal Ontario Museum.
      • She navigated changes in her friendships: one friend became closer, another became a bit more distant. She grew more confident about her preferences.
      • Little by little, she has clearly matured. This year offered plenty of opportunities to practice managing her emotions, which isn't always easy.
      • She often called her aunts and cousins while playing Minecraft or Dungeons & Dragons. She also grew closer to our extended family at a wedding and other family events.
      • In the summer, she helped us volunteer for Bike Brigade. She was so proud when she delivered groceries to food bank recipients on her own bike.

      How to draw all of that on a single sheet of paper… it's a nice problem to have.

      After school, my daughter told me in French, « J'ai très froid. » ("I'm very cold.") I made her a cup of hot chocolate to warm her up. She started building the LEGO flowers that arrived today.

      We went to the Stockyards to look for a vase, because my daughter wants one for her LEGO flowers. None of the ones at the store appealed to her (too big, too tall, too small, too fussy), but she did want to buy some craft supplies.

      For dinner, we made rice balls. We also ate leftovers.

      My daughter said she prefers practicing French with us, because her teacher doesn't teach conversational skills or the words related to her interests. She had fun learning words from my copy of Pokémon Go, which I have set to French, like « Evoli est attrapé » ("Eevee is caught") and « Gagné ! » ("You won!").

      I don't know why I didn't think of it sooner… we can watch KPop Demon Hunters in French. We've watched it countless times in English, so I don't need the translation, but I want to find better subtitles. We tried the first part of the film tonight, and I was amazed at how many words I understood. It's promising.

      We're diversifying our math learning by studying other things, like binary numbers, rather than working ahead, to avoid boredom both now and later. If my daughter also wants to broaden her French and learn the things and skills that school won't teach her, I wonder what we can learn together.

      Friday, February 13

      During a parent-teacher meeting, we discussed my daughter's report card with her teacher. He said she's a good student, although it would be better if she did more of her homework. In science, she could improve by doing more research. I said that she may need more motivation: if she wants to learn about a subject, she reads all the books and all the websites. In math, she enjoys solving problems that I pose for her. He also said she has become better at advocating for herself. He said he can offer ways to adapt her assignments to topics that interest her. My daughter is proud of handling more and more by herself, so if her teacher tells her these things are negotiable, she may prefer to negotiate with him directly instead of going through me.

      We attached the ceramic flower beads to the sandal charms with the glue gun. She said that now we can tell our sandals apart, since they're the same style and colour. Before that, the only difference was the size.

      I took my daughter on the subway to KidSpark to play shopkeeper. We also played Pokémon Go. On the way back, I forgot to check the destination and we took the wrong streetcar. By the time we got home, my daughter was very tired and very hungry. I really missed my cargo bike. It's more practical, but because of the snow, the bike lanes aren't reliable. I can't wait to ride again once they're cleared.

      Saturday, February 14

      I helped my daughter tidy her room by turning it into something magical, like Mary Poppins. We alternated between snapping our fingers and putting things away while singing. She did it enthusiastically. She even helped me clean the mirrors.

      For lunch, we made rice balls again, which were delicious.

      Although the weather was nice, my daughter didn't want to join the nature club at the park. She preferred to walk to the Stockyards with us. My husband brought a shovel to clear snow along the way. We bought garbage bags at the hardware store, eye drops at the pharmacy, and a few ingredients at the supermarket. Naturally, we also caught lots of Pokémon.

      She was very tired from our long walk. On the way back, I kept her entertained by chatting about Pokémon with a few French words sprinkled in. It's a good way to give her French practice. I like knowing her interests. For now, my husband and I are her favourite people, and she loves learning things together. Of course, that will change someday.

      After we got back, we did the laundry. I forgot about the shower curtains I had been bleaching, so I had to wait for them. Because of a miscommunication, I added the shower curtains to the polyester and wool load that my husband had already finished. Fortunately, despite the bleach and the wash cycle I'd chosen, none of the wool clothes had any problems. No harm done.

      I suggested a link for the Bike Brigade newsletter about people who had cleared a bike lane. A Bike Brigade volunteer didn't want to include it because he doesn't like the people involved. Well, it's not my business.

      At bedtime, she drew Pokémon pictures for Valentine's Day.

      Sunday, February 15

      We got to my daughter's skating class a little late, but it wasn't a big deal. She quickly put on her skates and helmet and joined her classmates. Today they had an assessment. I saw that she can handle tasks like gliding on one foot. Well done!

      We left our Pokémon at two gyms on the way home. My daughter wants to take on a gigantic Meowth, but it was too powerful for us.

      We brought food to my mother-in-law by bike. She gave my daughter a present for her upcoming birthday.

      I finished the first season of the shows in French. I missed a lot of words, but I followed the story fairly well. I'm going to watch the next season. I never found the time to watch shows in English, because I prefer programming or writing, but since this is part of my French learning, I'm letting myself enjoy them.

      Pronunciation

      • En hiver, l'air est si sec, donc les soupes étaient très réconfortantes.
      • Dans l'ensemble, c'est tout {/​tut​/ - liaison} à fait acceptable.
      • Grâce au temps calme, mon mari et moi nous nous sommes assis dehors pour profiter du soleil. Il a lu et j'ai écrit dans mon journal en français sur mon téléphone.

      You can e-mail me at sacha@sachachua.com.

    18. 🔗 r/LocalLLaMA I gave 12 LLMs $2,000 and a food truck. Only 4 survived. rss

      I gave 12 LLMs $2,000 and a food truck. Only 4 survived. | Built a business sim where AI agents run a food truck for 30 days — location, menu, pricing, staff, inventory. Same scenario for all models. Opus made $49K. GPT-5.2 $28K. 8 went bankrupt. Every model that took a loan went bankrupt (8/8).

      There's also a playable mode — same simulation, same 34 tools, same leaderboard. You either survive 30 days or go bankrupt, get a result card, and land on the shared leaderboard. Example result: https://foodtruckbench.com/r/9E6925 Benchmark + leaderboard: https://foodtruckbench.com Play: https://foodtruckbench.com/play

      Gemini 3 Flash Thinking is the only model out of 20+ tested that gets stuck in an infinite decision loop, 100% of runs: https://foodtruckbench.com/blog/gemini-flash Happy to answer questions about the sim or results.

      UPDATE (one day later): A player "hoothoot" just hit $101,685 — that's 99.4% of the theoretical maximum. 9 runs on the same seed, ~10 hours total. On a random seed they still scored $91K, so it's not just memorization. Best AI (Opus 4.6) is at ~$50K — still 2x behind a determined human. Leaderboard is live at https://foodtruckbench.com/leaderboard

      submitted by /u/Disastrous_Theme5906
      [link] [comments]

    19. 🔗 r/Leeds Looking to play casual football in Leeds? ⚽️ rss

      Sharing this for anyone in Leeds who wants to play football and meet new people. I help a non-profit organise casual 5-to-9-a-side games on local astro pitches across the city.

      All levels are welcome, even if you’re not that fit or haven’t played in years. There’s no commitment, you just turn up and play whenever you're available.

      Hope this helps you feel like you belong because Leeds has a strong local football scene and it’s a great way to improve fitness, confidence and your social circle. I've met all my best friends through the game.

      Drop a quick comment or DM if you fancy playing and I’ll add you to the players list ⚽️

      submitted by /u/footballforalluk
      [link] [comments]

    20. 🔗 r/reverseengineering Web Reverse Engineering streams rss
    21. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 release, -2 releases rss
      sync repo: +1 release, -2 releases
      
      ## New releases
      - [efiXplorer](https://github.com/rehints/efixplorer): 6.2.0
      
      ## Changes
      - [efiXplorer](https://github.com/rehints/efixplorer):
        - host changed: binarly-io/efixplorer → rehints/efixplorer
        - removed version(s): 6.1.2, 6.1.1
      
    22. 🔗 r/Leeds 4 Spare tickets to The Empire Strips Back @ Testbed tonight (17/02) rss

      tickets now taken

      I don't think I have much hope of selling these so just looking to give them away so they don't go to waste. I bought them a while ago but for various reasons all of us can't go tonight.

      If you don't know what it is, it's a Star Wars parody burlesque that's supposedly really good. Worth looking up if that interests you.

      info:

      venue: Testbed (next to Crown Point)

      show start time: 7:30pm

      duration: approx 2 hours

      seat: section A

      tickets: digital, via the Fever app

      I can transfer the tickets on the app - just let me know how many you want and I can DM you a link that should take care of it.

      submitted by /u/benji9t3
      [link] [comments]

    23. 🔗 r/wiesbaden Nein! Tatsächlich????? ("No! Really?????") rss
    24. 🔗 r/Yorkshire Post card of the yorkshire ❤️ rss

      Post card of the yorkshire ❤️ | @independentcottages submitted by /u/Additional_Fly_6603
      [link] [comments]

    25. 🔗 r/york Found in my Nana's house. I can officially say she kept the Minster standing (for a minute)! rss

      Found in my Nana's house. I can officially say she kept the Minster standing (for a minute)! | From what I can tell, these certificates were issued upon receipt of a gift donation to the Minster from the mid 70s to the late 90s. There really isn't much information that I can find about them, but it's interesting nonetheless. We lost her a couple of months ago, so seeing this made me smile, which I was grateful for. submitted by /u/Penny_dreadfulz
      [link] [comments]

    26. 🔗 r/york The main entrance to the city, York, England 1865 - 2015... rss

      The main entrance to the city, York, England 1865 - 2015... | @deserted submitted by /u/Juicewithextrapulp
      [link] [comments]

    27. 🔗 r/Yorkshire A wander around Haworth – the village where Emily Brontë wrote Wuthering Heights rss

      A wander around Haworth – the village where Emily Brontë wrote Wuthering Heights | Did you know – you can even visit the very spot where Emily Brontë submitted the manuscript to her publisher, unaware of the novel’s future success. It’s the Haworth Old Post Office, found at the top of Main Street. submitted by /u/Yorkshire-List
      [link] [comments]

    28. 🔗 r/york Lost ring rss

      Lost ring | Might be a longshot, but a colleague lost her ring on Saturday and it's really important to her (it contains her grandad's ashes). She did a run around York, so it could be in Poppleton, Acomb, Bishopthorpe, Fulford, Heworth, or Huntington; it was likely dropped along a path in one of these places. If anyone has found it, please get in touch, as this is really important to her! submitted by /u/slothful_jeremiah
      [link] [comments]

    29. 🔗 r/Yorkshire Lancastrian picks his top ten counties rss

      Grudgingly puts us tenth and (surprise!) puts first the county whose best bits, e.g. the Forest of Bowland, are actually part of Yorkshire!

      “I include [Yorkshire] here with humble respect, though it was a close call whether to opt instead for Shetland, Cornwall or Brecknockshire.”

      Excerpt from “I’ve spent decades exploring the UK. These are my 10 favourite counties” by Chris Moss, The Telegraph:

      https://apple.news/AvcDS2fMyQmqZNNNlUy7YOA

      submitted by /u/zodzodbert
      [link] [comments]

    30. 🔗 r/Leeds Best Chinese takeaways in Leeds ? rss

      I’d just like to get people's advice and opinions on Chinese takeaways in Leeds. I’m 37 now and I’ve lived in the Bramley area all my life. Up until about 3 or 4 years ago I always went to PO FUNG on Broad Lane in Bramley, but it unexpectedly closed, as the kids that took over from their parents wanted to do something else for a living. I’ve tried Ho Hing in Bramley, which is okay, but it’s not PO FUNG for me. Does anyone know of a Chinese almost identical to PO FUNG, or any other decent Chinese takeaways in Bramley, West Leeds, or any other areas of Leeds? Thanks

      submitted by /u/toppman89
      [link] [comments]

    31. 🔗 r/reverseengineering [Update] lcsajdump v1.1.0: Bad bytes ruining your ROP chain? Now supports Address Grouping/Deduplication rss
    32. 🔗 MetaBrainz C4GT 2025: Integrate Internet Archive Into BrainzPlayer rss

      Hey Everyone 👋!

      I am Rayyan Seliya (AKA rayyan_seliya123 on IRC and RayyanSeliya on GitHub), a pre-final-year Computer Science student at the Indian Institute of Information Technology Agartala, India. I was thrilled to be selected as a contributor in the C4GT (Code For Govt Tech) 2025 program under the MetaBrainz Foundation. My project focused on integrating music streaming from the Internet Archive into BrainzPlayer. It was mentored by Kartik Ohri (lucifer on IRC) and Nicolas Pelletier (monkey on IRC).

      Let's start

      Project Overview

      ListenBrainz has a number of music discovery features that use BrainzPlayer to facilitate track playback. BrainzPlayer (BP) is a custom React component in ListenBrainz that uses multiple data sources to search and play a track. As of now, it supports Spotify, YouTube, Apple Music, SoundCloud, and Funkwhale as music services. It would be useful for BrainzPlayer to support the Internet Archive, which hosts a vast collection of digitized recordings from physical releases of the early 20th century, including 78 RPMs and Cylinder Recordings. Each recording comes with audio streaming and metadata web services that can be used to retrieve metadata automatically and embed a player in ListenBrainz using BrainzPlayer.

      Let's Deep-Dive into My Coding Journey!

      My journey with MetaBrainz started during the community bonding period, when I was exploring good first issues in the ListenBrainz tickets. That's when lucifer suggested that I contribute to a specific issue: adding an "add another" checkbox to the Submit Listens modal.

      My First Contribution to the Community!

      The problem was simple but annoying: when users wanted to add multiple listens in a row (like for each side of a record), they had to reopen the modal every single time. I added a checkbox that keeps the modal open after submission, making it much easier to add multiple listens.

      Figure: Updated dialog box with an added "Add another" checkbox, enhancing user interaction compared to the previous version.

      This first PR was quite a journey with monkey's reviews! Through his feedback, I learned a lot about TypeScript, React best practices, and how big codebases handle accessibility and forms. The reviews went through several rounds, from fixing form submission logic to implementing proper state reset using React hooks.

      After several improvements, the PR was finally merged successfully on May 20th. This experience taught me about code quality, accessibility, and working with maintainers who care about the codebase.

      After this successful contribution, lucifer suggested that I create a demo showing how Internet Archive recordings could be made playable locally. I built a simple full-stack app that could search and stream audio from Internet Archive collections. This project helped me understand how metadata indexing works and how to integrate external music services. I also studied the existing handlers for Spotify, Apple Music, and SoundCloud to understand the patterns used in ListenBrainz's system.

      You can check out the demo repository here to see how it all worked!

      Integrate Music Streaming from Internet Archive

      Internet Archive provides a rich collection of historical audio recordings that are freely available to the public. The main challenge here was to create an efficient indexing system that could crawl through the vast collections of 78rpm and cylinder recordings, extract metadata, and provide a seamless search experience for users.

      Backend Architecture

      • Metadata Cache Handler: A Python-based handler that crawls Internet Archive collections
      • Database Schema: TimescaleDB tables to store track metadata efficiently
      • Search API: RESTful endpoints for BrainzPlayer to query the cached metadata
      • Background Processing: Queue-based system for continuous indexing

      The following flowchart explains the various steps taken to integrate Internet Archive with ListenBrainz.

      Figure: System flow diagram.

      The database schema was designed to efficiently store Internet Archive track information:

      CREATE TABLE internetarchive_cache.track (
          id            INTEGER GENERATED ALWAYS AS IDENTITY NOT NULL,
          track_id      TEXT UNIQUE NOT NULL,
          name          TEXT NOT NULL,
          artist        TEXT[] NOT NULL,
          album         TEXT,
          stream_urls   TEXT[] NOT NULL,
          artwork_url   TEXT,
          data          JSONB NOT NULL,
          last_updated  TIMESTAMPTZ DEFAULT NOW()
      );
      

      Frontend Integration

      I added the InternetArchivePlayer component to BrainzPlayer, using the existing BrainzPlayer architecture and the DataSourceType interface as references for how to integrate properly with the existing services. It detects when a listen originates from Internet Archive, handles search-based matching, and manages audio streaming.

      I also created a custom icon component for Internet Archive that emulates the FontAwesome icons exported from react-fontawesome. It helps us avoid messy, hardcoded styles for the icon. This was suggested by monkey, who helped me with it!

      See this:

      export const faInternetArchive: IconDefinition = {
        prefix: "fas" as IconPrefix,
        iconName: "internetarchive" as IconName,
        icon: [
          420, 480,
          [],
          "",
          "m 0,457.074 h 423.26 v 21.71 H 0 Z m 16.7,-41.74 h 390.7 v 30.05 H 16.7 Z..."
        ],
      };
      

      Search and Playback Flow

      The search flow works as follows:

      • User searches for a track in BrainzPlayer
      • InternetArchivePlayer queries the /1/internet_archive/search API (see the request sketch after this list)
      • Backend searches the cached metadata using PostgreSQL full-text search
      • Results are returned with stream URLs and metadata
      • HTML5 audio element plays the selected track
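
      For example, querying the search endpoint directly could look roughly like this. The endpoint path comes from the list above; the parameter name and response shape are my assumptions for illustration:

      import requests

      resp = requests.get(
          "https://listenbrainz.org/1/internet_archive/search",
          params={"query": "dardanella"},  # hypothetical parameter name
          timeout=10,
      )
      resp.raise_for_status()
      for track in resp.json().get("results", []):  # assumed response shape
          print(track["name"], track["stream_urls"][0])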

      Figure: Frontend flow illustrating how a song is played through Internet Archive in BrainzPlayer.

      I added an option to enable Internet Archive on the BrainzPlayer settings page, where users can activate the service and control its priority relative to the other music services.

      Users don't need to connect Internet Archive as a music service: it's a free public-domain service and doesn't require an account, so I added a note explaining this on the connect-services page.

      And finally, this is what the BrainzPlayer UI looks like.

      Implementation Challenges and Solutions

      One of the most complex parts was handling Internet Archive's diverse metadata formats. The metadata extraction had to cope with varied HTML descriptions, multiple artist formats, and different audio file types. I implemented a robust parsing system using BeautifulSoup that can extract artist, album, and track information from the various description formats.

      See the helper here:

      def extract_from_description(soup: BeautifulSoup | None, field) -> str | None:
          """
          Extracts a field (e.g. 'Artist', 'Album') from the IA description HTML using BeautifulSoup.
          Handles both string and list input.
          """
          if not soup:
              return None
      
          for element in soup.find_all(["div", "p", "span"]):
              _text = element.get_text(strip=True)
              if _text.startswith(f"{field}:"):
                  return _text[len(field) + 1:].strip()
      
          return None
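
      For instance, with a made-up description snippet, the helper above behaves like this:

      from bs4 import BeautifulSoup

      # Hypothetical description HTML, just for illustration
      html = "<div>Artist: Billy Murray</div><div>Album: Edison Records</div>"
      soup = BeautifulSoup(html, "html.parser")
      print(extract_from_description(soup, "Artist"))  # -> Billy Murray
      print(extract_from_description(soup, "Album"))   # -> Edison Records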
      

      Another Challenge: Seeding Limits and Discovery

      Initially, we faced a challenge with the sheer volume of Internet Archive's collections. The 78rpm and cylinder collections contain millions of recordings, and trying to index everything at once would overwhelm the system and hit API rate limits.

      The Initial Approach:

      def get_seed_ids(self, limit_per_collection=1000) -> list[str]:
          """Fetch identifiers for 78rpm and cylinder collections."""
          collections = [
              {'name': '78rpm', 'query': 'collection:78rpm AND mediatype:audio'},
              {'name': 'cylinder', 'query': 'cylinder mediatype:audio'}
          ]
          # ... limited to 1000 per collection
      

      The problem: setting a hard limit of 1,000 recordings meant we were missing out on discovering new and interesting content, and the system wasn't really "learning" about the collections over time.

      After discussing it with lucifer, we implemented a smarter approach that combines date-based seeding with discovery-based fetching:

      Smarter approach:

      def get_seed_ids(self, limit_per_collection=1000) -> list[str]:
          """Fetch identifiers for 78rpm and cylinder collections with date filtering."""
          today = datetime.today().strftime("%Y-%m-%d")
          last_week = (datetime.now() - timedelta(days=7)).strftime("%Y-%m-%d")
          date_filter = f"[{last_week} TO {today}]"
      
          collections = [
              {
                  "name": "78rpm",
                  "query": f"collection:78rpm AND mediatype:audio AND publicdate:{date_filter}"
              },
              {
                  "name": "cylinder",
                  "query": f"collection:cylinder AND mediatype:audio AND publicdate:{date_filter}"
              }
          ]
          # ... process with date filtering
      

      Another Challenge: Audio Format Detection

      Initially, I was missing a vast array of file formats, which meant we were losing potential recordings that couldn't be streamed. Internet Archive hosts audio in many different formats, from modern digital formats to historical analog recordings.

      After researching the Internet Archive library, I implemented a comprehensive audio format detection system to make sure we don't miss a single recording:

      The full list of audio format keywords:

      AUDIO_KEYWORDS = [
          "mp3", "ogg", "vorbis", "flac", "wav", "aiff", "apple lossless", "m4a", "opus", "aac",
          "au", "wma", "alac", "ape", "shn", "tta", "wv", "mpc", "aifc", "m4b", "m4p", "vbr",
          "m3u", "cylinder", "78rpm", "lossless", "lossy", "webm", "aif", "mid", "midi", "amr",
          "ra", "rm", "vox", "dts", "ac3", "atrac", "pcm", "adpcm", "gsm", "mmf", "3ga", "8svx"
      ]
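
      A keyword list like this can then back a simple predicate; a sketch of the idea (the handler's actual matching logic may differ):

      def looks_like_audio(file_format: str) -> bool:
          """True if an Internet Archive file's format string mentions an audio keyword."""
          fmt = file_format.lower()
          return any(keyword in fmt for keyword in AUDIO_KEYWORDS)

      # looks_like_audio("VBR MP3") -> True; looks_like_audio("JPEG") -> False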
      

      Present Status and Future Improvements

      The entire implementation for the Internet Archive integration is contained in the following components:

      • Backend Handler: listenbrainz/metadata_cache/internetarchive/handler.py
      • Search API: listenbrainz/webserver/views/internet_archive_api.py
      • Frontend Player: frontend/js/src/common/brainzplayer/InternetArchivePlayer.tsx
      • Settings Integration: Updated BrainzPlayer settings and music services pages

      The implementation has been reviewed. The backend part has been merged and can be seen in these two PRs: pr1, pr2. Only the frontend PR is left to be merged: see pr.

      As future improvements, it would be useful to implement more sophisticated search algorithms, add support for more Internet Archive collections beyond 78rpm and cylinder recordings, and implement user preference-based content filtering. More thorough unit and integration tests would also be useful in preventing regressions.

      Testing

      It was my first time writing tests for a project of this scale, but I was successfully able to write basic tests for both frontend and backend. The existing tests were easy to read and served as a great reference! In the future, I will add more functional tests and integration tests.

      Overall C4GT Experience

      This summer has been an incredible journey working with the MetaBrainz Foundation, and I'm deeply grateful to C4GT for this amazing opportunity. Contributing to ListenBrainz and implementing the Internet Archive music service integration has been both challenging and rewarding, and it's a great feeling to see my work now live in production for users worldwide. Being a part of MetaBrainz is incredible. I will keep fixing bugs and contributing other improvements.

      Throughout this journey, I have learned so many things. I am now more comfortable with Git and GitHub. Initially, I didn't have much experience with large-scale web applications, but during this period I worked on my skills, tried, failed, researched, asked for help when stuck, and finally finished the implementation. I have become more comfortable with Docker, TimescaleDB, and topics like metadata indexing and music streaming. The extensive code reviews with monkey significantly improved my TypeScript skills and taught me best practices for large codebases. Working with Docker services, Consul templates, and the complete infrastructure setup gave me real industrial experience that was completely new to me as a beginner!

      I would like to thank lucifer, monkey, and many others for helping me throughout this period, guiding me, and constantly supporting me. Whether in the MetaBrainz chat or in code reviews, I always received detailed feedback, help, and suggestions. I built some cool stuff this summer, and it's going to be used by people all over the world. I hope you will all enjoy listening to songs with these additional services.

      My proposal can be found here (PDF here).

      All my pull requests and commits for ListenBrainz during C4GT 2025!

    33. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 release, ~1 changed rss
      sync repo: +1 release, ~1 changed
      
      ## New releases
      - [Suture](https://github.com/libtero/suture): 1.1.0
      
      ## Changes
      - [Suture](https://github.com/libtero/suture):
        - 1.0.0: archive contents changed, download URL changed
      
    34. 🔗 Simon Willison Two new Showboat tools: Chartroom and datasette-showboat rss

      I introduced Showboat a week ago - my CLI tool that helps coding agents create Markdown documents that demonstrate the code that they have created. I've been finding new ways to use it on a daily basis, and I've just released two new tools to help get the best out of the Showboat pattern. Chartroom is a CLI charting tool that works well with Showboat, and datasette-showboat lets Showboat's new remote publishing feature incrementally push documents to a Datasette instance.

      Showboat remote publishing

      I normally use Showboat in Claude Code for web (see note from this morning). I've used it in several different projects in the past few days, each of them with a prompt that looks something like this:

      Use "uvx showboat --help" to perform a very thorough investigation of what happens if you use the Python sqlite-chronicle and sqlite-history-json libraries against the same SQLite database table

      Here's the resulting document.

      Just telling Claude Code to run uvx showboat --help is enough for it to learn how to use the tool - the help text is designed to work as a sort of ad-hoc Skill document.

      The one catch with this approach is that I can't see the new Showboat document until it's finished. I have to wait for Claude to commit the document plus embedded screenshots and push that to a branch in my GitHub repo - then I can view it through the GitHub interface.

      For a while I've been thinking it would be neat to have a remote web server of my own which Claude instances can submit updates to while they are working. Then this morning I realized Showboat might be the ideal mechanism to set that up...

      Showboat v0.6.0 adds a new "remote" feature. It's almost invisible to users of the tool itself, instead being configured by an environment variable.

      Set a variable like this:

      export SHOWBOAT_REMOTE_URL=https://www.example.com/submit?token=xyz

      And every time you run a showboat init or showboat note or showboat exec or showboat image command the resulting document fragments will be POSTed to that API endpoint, in addition to the Showboat Markdown file itself being updated.

      There are full details in the Showboat README - it's a very simple API format, using regular POST form variables or a multipart form upload for the image attached to showboat image.
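
      To give a feel for the receiving side, here is a minimal sketch of a server that could accept these POSTs. The field names ("uuid", "chunk", "image") are illustrative guesses on my part; the actual protocol is specified in the Showboat README:

      from flask import Flask, request

      app = Flask(__name__)

      @app.post("/submit")
      def receive():
          # Shared-secret token passed as a query-string parameter
          if request.args.get("token") != "xyz":
              return "forbidden", 403
          doc_id = request.form.get("uuid")         # hypothetical field name
          fragment = request.form.get("chunk", "")  # hypothetical field name
          image = request.files.get("image")        # multipart upload from `showboat image`
          if image:
              image.save(f"/tmp/{image.filename}")
          print(f"doc {doc_id}: received {len(fragment)} characters")
          return "ok"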

      datasette-showboat

      It's simple enough to build a webapp to receive these updates from Showboat, but I needed one that I could easily deploy and would work well with the rest of my personal ecosystem.

      So I had Claude Code write me a Datasette plugin that could act as a Showboat remote endpoint. I actually had this building at the same time as the Showboat remote feature, a neat example of running parallel agents.

      datasette-showboat is a Datasette plugin that adds a /-/showboat endpoint to Datasette for viewing documents and a /-/showboat/receive endpoint for receiving updates from Showboat.

      Here's a very quick way to try it out:

      uvx --with datasette-showboat --prerelease=allow \
        datasette showboat.db --create \
        -s plugins.datasette-showboat.database showboat \
        -s plugins.datasette-showboat.token secret123 \
        --root --secret cookie-secret-123

      Click on the sign in as root link that shows up in the console, then navigate to http://127.0.0.1:8001/-/showboat to see the interface.

      Now set your environment variable to point to this instance:

      export SHOWBOAT_REMOTE_URL="http://127.0.0.1:8001/-/showboat/receive?token=secret123"

      And run Showboat like this:

      uvx showboat init demo.md "Showboat Feature Demo"

      Refresh that page and you should see this:

      Title: Showboat. Remote viewer for Showboat documents. Showboat Feature Demo 2026-02-17 00:06 · 6 chunks, UUID. To send showboat output to this server, set the SHOWBOAT_REMOTE_URL environment variable: export SHOWBOAT_REMOTE_URL="http://127.0.0.1:8001/-/showboat/receive?token=your-token"

      Click through to the document, then start Claude Code or Codex or your agent of choice and prompt:

      Run 'uvx showboat --help' and then use showboat to add to the existing demo.md document with notes and exec and image to demonstrate the tool - fetch a placekitten for the image demo.

      The init command assigns a UUID and title and sends those up to Datasette.

      Animated demo - in the foreground a terminal window runs Claude Code, which executes various Showboat commands. In the background a Firefox window where the Showboat Feature Demo adds notes then some bash commands, then a placekitten image.

      The best part of this is that it works in Claude Code for web. Run the plugin on a server somewhere (an exercise left up to the reader - I use Fly.io to host mine) and set that SHOWBOAT_REMOTE_URL environment variable in your Claude environment, then any time you tell it to use Showboat the document it creates will be transmitted to your server and viewable in real time.

      I built Rodney, a CLI browser automation tool, specifically to work with Showboat. It makes it easy to have a Showboat document load up web pages, interact with them via clicks or injected JavaScript, and capture screenshots to embed in the Showboat document to show the effects.

      This is wildly useful for hacking on web interfaces using Claude Code for web, especially when coupled with the new remote publishing feature. I only got this stuff working this morning and I've already had several sessions where Claude Code has published screenshots of its work in progress, which I've then been able to provide feedback on directly in the Claude session while it's still working.

      Chartroom

      A few days ago I had another idea for a way to extend the Showboat ecosystem: what if Showboat documents could easily include charts?

      I sometimes fire up Claude Code for data analysis tasks, often telling it to download a SQLite database and then run queries against it to figure out interesting things from the data.

      With a simple CLI tool that produced PNG images I could have Claude use Showboat to build a document with embedded charts to help illustrate its findings.

      Chartroom is exactly that. It's effectively a thin wrapper around the excellent matplotlib Python library, designed to be used by coding agents to create charts that can be embedded in Showboat documents.

      Here's how to render a simple bar chart:

      echo 'name,value
      Alice,42
      Bob,28
      Charlie,35
      Diana,51
      Eve,19' | uvx chartroom bar --csv \
        --title 'Sales by Person' --ylabel 'Sales'

      A chart of those numbers, with a title and y-axis label

      It can also do line charts, bar charts, scatter charts, and histograms - as seen in this demo document that was built using Showboat.

      Chartroom can also generate alt text. If you add -f alt to the above it will output the alt text for the chart instead of the image:

      echo 'name,value
      Alice,42
      Bob,28
      Charlie,35
      Diana,51
      Eve,19' | uvx chartroom bar --csv \
        --title 'Sales by Person' --ylabel 'Sales' -f alt

      Outputs:

      Sales by Person. Bar chart of value by name — Alice: 42, Bob: 28, Charlie: 35, Diana: 51, Eve: 19
      

      Or you can use -f html or -f markdown to get the image tag with alt text directly:

      ![Sales by Person. Bar chart of value by name — Alice: 42, Bob: 28, Charlie: 35, Diana: 51, Eve: 19](/Users/simon/chart-7.png)

      I added support for Markdown images with alt text to Showboat in v0.5.0, to complement this feature of Chartroom.

      Finally, Chartroom has support for different matplotlib styles. I had Claude build a Showboat document to demonstrate these all in one place - you can see that at demo/styles.md.

      How I built Chartroom

      I started the Chartroom repository with my click-app cookiecutter template, then told a fresh Claude Code for web session:

      We are building a Python CLI tool which uses matplotlib to generate a PNG image containing a chart. It will have multiple sub commands for different chart types, controlled by command line options. Everything you need to know to use it will be available in the single "chartroom --help" output.

      It will accept data from files or standard input as CSV or TSV or JSON, similar to how sqlite-utils accepts data - clone simonw/sqlite-utils to /tmp for reference there. Clone matplotlib/matplotlib for reference as well

      It will also accept data from --sql path/to/sqlite.db "select ..." which runs in read-only mode

      Start by asking clarifying questions - do not use the ask user tool though it is broken - and generate a spec for me to approve

      Once approved proceed using red/green TDD running tests with "uv run pytest"

      Also while building maintain a demo/README.md document using the "uvx showboat --help" tool - each time you get a new chart type working commit the tests, implementation, root level README update and a new version of that demo/README.md document with an inline image demo of the new chart type (which should be a UUID image filename managed by the showboat image command and should be stored in the demo/ folder

      Make sure "uv build" runs cleanly without complaining about extra directories but also ensure dist/ and uv.lock are in gitignore

      This got most of the work done. You can see the rest in the PRs that followed.

      The burgeoning Showboat ecosystem

      The Showboat family of tools now consists of Showboat itself, Rodney for browser automation, Chartroom for charting and datasette-showboat for streaming remote Showboat documents to Datasette.

      I'm enjoying how these tools can operate together based on a very loose set of conventions. If a tool can output a path to an image Showboat can include that image in a document. Any tool that can output text can be used with Showboat.

      I'll almost certainly be building more tools that fit this pattern. They're very quick to knock out!

      The environment variable mechanism for Showboat's remote streaming is a fun hack too - so far I'm just using it to stream documents somewhere else, but it's effectively a webhook extension mechanism that could likely be used for all sorts of things I haven't thought of yet.

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    35. 🔗 r/york Anyone see the free hugs man on saturday? He had kink gear on rss

      Which didn't feel right to me... I am quite liberal but I think a pup mask in public in the daytime is a bit weird

      He was near Museum Gardens.

      hope he is ok tho ofc

      submitted by /u/That_Historian9991
      [link] [comments]

    36. 🔗 r/reverseengineering Exploiting Reversing (ER) series | Article 06 | A Deep Dive Into Exploiting a Minifilter Driver (N-day) | Extended Version rss
    37. 🔗 Jamie Brandon 0057: consulting, zest progress, reads that lasted, books, links rss
      (empty)
    38. 🔗 Jamie Brandon 2025 rss
      (empty)
  3. February 16, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-02-16 rss

      IDA Plugin Updates on 2026-02-16

      New Releases:

      Activity:

    2. 🔗 r/Leeds Music venues - less than 100 capacity rss

      I'm in an out-of-town band looking to book a venue in Leeds as part of a mini UK tour. We don't have a huge following, so we're looking for recommendations of small music venues around 100 cap or less - any suggestions are much appreciated.

      submitted by /u/NicheDuck
      [link] [comments]

    3. 🔗 r/Harrogate Where to meet people in their 30s? rss

      I hope this doesn't come across as creepy or desperate, but I've been living in the area (moved around a few times, but Knaresborough currently) since my school days, and although I chose to settle here because it's nice and familiar, it's really hard to meet new people, especially as someone who doesn't drink and can't tolerate loud places. I have tried volunteering and adult learning classes, but everything I've come across in the wider area seems to revolve around the elderly or children, or is a solitary, transactional activity.

      As a 32 year old who lives alone with my cat and works from home, it's awfully lonely. I'm thinking surely I can't be the only one? It seems everywhere I look, be it online or IRL, everyone in the 21-45 age group is married, has kids and lots of friends, or is content in solitude.

      Do people in their 30s even exist in Harrogate who are also feeling lonely and isolated, struggling to find opportunities for connection? Not necessarily dating (although that is hard too), just someone to talk to, with no expectations, no pressure. If so, where are they at? I'm mainly asking about regular activities or places aimed at people with similar sentiments. Thanks 🫠

      submitted by /u/n0d3N1AL
      [link] [comments]

    4. 🔗 r/Leeds Pokemon go groups Leeds rss

      Hi all, recently got back into Pokémon Go and I wanna meet people. I've had a look on Facebook and stuff, but all the groups seem dead. Anyone know any groups that meet up regularly that I can join? Figured it's a good way to meet people given my 9-5 work schedule.

      Feel free to DM me if you know of any :)

      submitted by /u/kevan50813
      [link] [comments]

    5. 🔗 r/reverseengineering The Long Tail of LLM-Assisted Decompilation rss
    6. 🔗 r/Leeds Horsforth - London commute. Anyone done it? rss

      Me and my partner both live and work full time in London but want to move to Horsforth. Both our jobs require us to be in the office in London twice a week but are fairly flexible on what time we get to the office i.e. 10am.

      Has anyone done this journey (or currently do this commute) and can share their experience? Are the trains reliable? What is the cost? We have a Two Together rail card so should be able to save 1/3 if we travel together. Are the rush hour trains busy or can you get a seat? It looks like some trains go direct from Horsforth to London, while some require a change in Leeds.

      Any insight would be greatly appreciated.

      submitted by /u/Long-Alternative-180
      [link] [comments]

    7. 🔗 r/LocalLLaMA Difference Between QWEN 3 Max-Thinking and QWEN 3.5 on a Spatial Reasoning Benchmark (MineBench) rss

      Difference Between QWEN 3 Max-Thinking and QWEN 3.5 on a Spatial Reasoning Benchmark (MineBench) | Honestly it's quite an insane improvement, QWEN 3.5 even had some builds that were closer to (if not better than) Opus 4.6/GPT-5.2/Gemini 3 Pro. Benchmark: https://minebench.ai/
      Git Repository: https://github.com/Ammaar-Alam/minebench Previous post comparing Opus 4.5 and 4.6, also answered some questions about the benchmark Previous post comparing Opus 4.6 and GPT-5.2 Pro (Disclaimer: This is a benchmark I made, so technically self-promotion, but I thought it was a cool comparison :) submitted by /u/ENT_Alam
      [link] [comments]

    8. 🔗 sacha chua :: living an awesome life 2026-02-16 Emacs news rss

      Lots of cool stuff this week! I'm looking forward to checking out the new futur library for async programming, and the developments around embedding graphics in a canvas in Emacs look interesting too (see the Multimedia section). Also, the discussion about making beginner configuration easier could be neat once the wrinkles are ironed out. Enjoy!

      Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    9. 🔗 r/LocalLLaMA Qwen 3.5 goes bankrupt on Vending-Bench 2 rss
    10. 🔗 r/LocalLLaMA 4 of the top 5 most used models on OpenRouter this week are Open Source! rss

      4 of the top 5 most used models on OpenRouter this week are Open Source! | submitted by /u/abdouhlili
      [link] [comments]

    11. 🔗 r/LocalLLaMA Google doesn't love us anymore. rss

      It's been about 125 AI years since the last Gemma. Google doesn't love us anymore and has abandoned us to Qwen's rational models. I miss the creativity of the Gemmas, and also their really useful sizes.

      Don't abandon us, Mommy Google, give us Gemma 4!

      submitted by /u/DrNavigat
      [link] [comments]

    12. 🔗 HexRaysSA/plugin-repository commits sync plugin-repository.json rss
      sync plugin-repository.json
      
      No plugin changes detected
      
    13. 🔗 r/reverseengineering [Showcase] I optimized my LCSAJ dumper to scan the full libc in 6 seconds. (Demo inside) rss
    14. 🔗 r/Yorkshire What’s the most Yorkshire way someone has ever described something to you? rss

      Could be an expression, a phrase, or just a brutally honest description.

      submitted by /u/AnfieldAnchor
      [link] [comments]

    15. 🔗 3Blue1Brown (YouTube) Solution to the ladybug clock puzzle rss

      Solution to last month's probability puzzle.

    16. 🔗 remorses/critique critique@0.1.106 release

      New Features

      PDF Export

      • Generate PDF documents from diff and review commands with --pdf flag
      • critique HEAD~3 --pdf writes to /tmp/critique-diff-*.pdf
      • critique HEAD~3 --pdf output.pdf writes to specific path
      • critique review --pdf generates PDF after AI review completes
      • --open flag to launch PDF in default viewer after generation

      PDF Rendering

      • New opentui-pdf.ts module converts CapturedFrame to multi-page PDF using pdfkit
      • Smart page breaking at natural section boundaries (empty line sequences)
      • Auto-fits font size to frame width so content never clips horizontally
      • Uses JetBrains Mono Nerd font (ships pre-converted .ttf, 2.4MB)
      • Handles all text attributes: bold, italic, dim, underline, strikethrough
      • Correct positioning of CJK/emoji/wide characters using span.width
      • Default page size: A4 portrait (595x842 pt)

      Improvements

      File Organization

      • Move parsers-config.ts, global.d.ts, and queries/ into src/ directory for cleaner structure
      • Add dist/ to package.json files array (fixes published package missing compiled files)

      Dependencies

      • Add pdfkit as optional dependency (same as takumi)
      • Remove wawoff2 (ship pre-converted .ttf font instead)
      • Move resend to devDependencies (only used by Cloudflare Worker)

      Review Mode

      • Wait for AI generation to complete before exporting PDF
      • Default to github-light theme for better print readability (can override with --theme)

      Tests

      • Suppress React act() warnings in opentui component tests (expected behavior for TUI testing)
      • Increase DataPathsManager maxListeners to suppress EventTarget memory leak warning in DiffView tests

      Bug Fixes

      • Fix CLI version number display

      Contributors

      Thanks @tobeycodes for the contribution!

    17. 🔗 r/wiesbaden Hey, I am the „Let's smoke a fat haze Joint together“ dude rss

      I feel like I should start a community meeting with you guys. Let's have a smoke-out on Friday, 20.02.2026. I know there are a few people who wanna meet, so let's do it. How do you guys feel about that?

      submitted by /u/Wide-Distribution-78
      [link] [comments]

    18. 🔗 r/Yorkshire Bradford job-seekers share struggle to find work rss

      Bradford job-seekers share struggle to find work | submitted by /u/Kagedeah
      [link] [comments]

    19. 🔗 pydantic/monty v0.0.6 - 2026-02-16 release

      What's Changed

      New Contributors

      Full Changelog: v0.0.5...v0.0.6

    20. 🔗 r/LocalLLaMA Qwen3.5-397B-A17B Unsloth GGUFs rss

      Qwen releases Qwen3.5 💜! Run 3-bit on a 192GB RAM Mac, or 4-bit (MXFP4) on an M3 Ultra with 256GB RAM (or less). Qwen releases the first open model of their Qwen3.5 family: https://huggingface.co/Qwen/Qwen3.5-397B-A17B It performs on par with Gemini 3 Pro, Claude Opus 4.5, and GPT-5.2.

      Guide to run them: https://unsloth.ai/docs/models/qwen3.5

      Unsloth dynamic GGUFs: https://huggingface.co/unsloth/Qwen3.5-397B-A17B-GGUF

      Excited for this week! 🙂

      submitted by /u/danielhanchen
      [link] [comments]

    21. 🔗 r/LocalLLaMA Qwen3.5-397B-A17B is out!! rss
    22. 🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator
      [link] [comments]

    23. 🔗 r/Yorkshire Places To Fall In Love With In North Yorkshire ❤️🥰 rss

      @adventureinyorkshire

      submitted by /u/Additional_Fly_6603
      [link] [comments]

    24. 🔗 r/LocalLLaMA Qwen 3.5 will be released today rss

      Sources reveal that Alibaba will open-source its next-generation large model, Qwen3.5, tonight on Lunar New Year's Eve. The model reportedly features a comprehensive innovation in its architecture.

      https://x.com/Sino_Market/status/2023218866370068561?s=20

      submitted by /u/External_Mood4719
      [link] [comments]

    25. 🔗 pydantic/monty v0.0.5 2026-02-16 release

      What's Changed

      New Contributors

      Full Changelog: v0.0.4...v0.0.5

    26. 🔗 r/LocalLLaMA Anyone actually using Openclaw? rss

      I am highly suspicious that openclaw's virality is organic. I don't know of anyone (online or IRL) who is actually using it, and I am deep in the AI ecosystem (both online and IRL). If this sort of thing is up anyone's alley, it's the members of localllama - so are you using it?

      With the announcement that OpenAI bought OpenClaw, the conspiracy theory is that it was manufactured social media marketing (on Twitter) to hype it up before the acquisition. There's no way this graph is real: https://www.star-history.com/#openclaw/openclaw&Comfy-Org/ComfyUI&type=date&legend=top-left

      submitted by /u/rm-rf-rm
      [link] [comments]

    27. 🔗 matklad Diagnostics Factory rss

      Diagnostics Factory

      Feb 16, 2026

      In Error Codes For Control Flow, I explained that Zig’s strongly-typed error codes solve the “handling” half of error management, leaving “reporting” to the users. Today, I want to describe my personal default approach to the reporting problem, that is, showing the user a useful error message.

      The approach is best described in the negative: avoid thinking about error payloads, and what the type of error should be. Instead, provide a set of functions for constructing errors.

      To give a concrete example, in TigerBeetle’s tidy.zig (a project-specific linting script, another useful meta-pattern), we define errors as follows:

      const Errors = struct {
          pub fn add_long_line(
              errors: *Errors,
              file: SourceFile,
              line_index: usize,
          ) void { ... }
      
          pub fn add_banned(
              errors: *Errors,
              file: SourceFile,
              offset: usize,
              banned_item: []const u8,
              replacement: []const u8,
          ) void { ... }
      
          pub fn add_dead_declaration(...) void { ... }
      
          ...
      };
      

      and the call-site looks like this:

      fn tidy_file(file: SourceFile, errors: *Errors) void {
          // ...
          var line_index: usize = 0;
          while (lines.next()) |line| : (line_index += 1) {
              // Renamed from `line_length` to avoid shadowing the function.
              const length = line_length(line);
              if (length > 100 and !contains_url(line)) {
                  errors.add_long_line(file, line_index);
              }
          }
      }
      

      In this case, I collect multiple errors, so I don't return right away. A fail-fast version would look like this:

      errors.add_long_line(file, line_index);
      return error.Tidy;
      

      Note that the error code is intentionally independent of the specific error produced.


      Some interesting properties of the solution:

      • The error representation is a set of constructor functions; the calling code doesn't care what actually happens inside. This is why the error factory is my default solution — I don't have to figure out up-front what I'll do with the errors, and I can change my mind later.
      • There’s a natural place to convert information from the form available at the place where we emit the error to a form useful for the user. In add_banned above, the caller passes in an absolute offset in a file, and it is resolved to a line number and column inside (tip: use line_index for 0-based internal indexes, and line_number for user-visible 1-based ones). Contrast this with a traditional errors-as-sum-type approach, where there’s a sharp syntactic discontinuity between constructing a variant directly and calling a helper function.
      • This syntactic uniformity in turn allows easily grepping for all error locations: rg 'errors.add_'.
      • Similarly, there’s one central place that enumerates all possible errors (which is either a benefit or a drawback).

      A less trivial property is that this structure enables polymorphism. In fact, in the tidy.zig code, there are two different representations of errors. When running the script, errors are directly emitted to stderr. But when testing it, errors are collected into an in-memory buffer:

      pub fn add_banned(
          errors: *Errors,
          file: SourceFile,
          offset: usize,
          banned_item: []const u8,
          replacement: []const u8,
      ) void {
          errors.emit(
              "{s}:{d}: error: {s} is banned, use {s}\n",
              .{
                  file.path, file.line_number(offset),
                  banned_item, replacement,
              },
          );
      }
      
      fn emit(
          errors: *Errors,
          comptime fmt: []const u8,
          args: anytype,
      ) void {
          comptime assert(fmt[fmt.len - 1] == '\n');
          errors.count += 1;
          if (errors.captured) |*captured| {
              captured.writer(errors.gpa).print(fmt, args)
                  catch @panic("OOM");
          } else {
              std.debug.print(fmt, args);
          }
      }
      

      There isn’t a giant union(enum) of all errors, because it’s not needed for the present use-case.

      This pattern can be further extended to a full-fledged diagnostics framework with error builders, spans, ANSI colors and such, but that is tangential to the main idea here: even when “programming in the small”, it might be a good idea to avoid constructing enums directly, and mandate an intermediate function call.


      Two more meta observations here:

      First, the entire pattern is of course an expression of the duality between a sum of two types and a product of two functions (the visitor pattern):

      fn foo<T, E>() -> Result<T, E>;

      fn bar<T, E>(ok: impl FnOnce(T), err: impl FnOnce(E));


      enum Result<T, E> {
          Ok(T),
          Err(E),
      }

      trait Result<T, E> {
          fn ok(self, t: T);
          fn err(self, e: E);
      }

      Second, every abstraction is a thin film separating two large bodies of code. Any interface has two sides: the familiar one presented to the user, and the other, hidden one, presented to the implementor. Often, default language machinery pushes you towards using the same construct for both, but that can be suboptimal. It’s natural for the user and the provider of the abstraction to disagree on the optimal interface, and to evolve independently. Using a single big enum for errors couples error-emitting and error-reporting code, as they have to meet in the middle. In contrast, the factory solution is optimal for the producer (they literally just pass whatever they already have on hand, without any extra massaging of data), and is flexible for the consumer(s).
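
      To make the duality concrete outside Zig, here is a minimal Rust sketch of the factory idea (my own illustration, not code from TigerBeetle): the producer calls named constructors on a trait, and two implementations provide the stderr and in-memory-capture behaviors described above.

      // Hypothetical sketch of the errors-factory pattern, not TigerBeetle code.
      // Producers call named constructors; the trait owns the formatting, and
      // each implementation decides where the message goes.
      trait Errors {
          // The only method an implementation must provide.
          fn emit(&mut self, message: String);

          // Constructor functions: the call-site never names an enum variant.
          fn add_long_line(&mut self, path: &str, line_index: usize) {
              // 0-based line_index internally, 1-based line number for the user.
              self.emit(format!("{}:{}: error: line too long", path, line_index + 1));
          }

          fn add_banned(&mut self, path: &str, line_index: usize, banned: &str, replacement: &str) {
              self.emit(format!(
                  "{}:{}: error: {} is banned, use {}",
                  path, line_index + 1, banned, replacement
              ));
          }
      }

      // Production: write straight to stderr.
      struct StderrErrors;
      impl Errors for StderrErrors {
          fn emit(&mut self, message: String) {
              eprintln!("{message}");
          }
      }

      // Tests: capture messages in memory for assertions.
      struct CapturedErrors(Vec<String>);
      impl Errors for CapturedErrors {
          fn emit(&mut self, message: String) {
              self.0.push(message);
          }
      }

      fn main() {
          let mut errors = CapturedErrors(Vec::new());
          errors.add_long_line("src/main.rs", 41);
          errors.add_banned("src/main.rs", 7, "dbg!", "log::debug!");
          assert_eq!(errors.0.len(), 2);
      }

      Grepping for errors.add_ works the same way here, and adding a third representation (JSON diagnostics, say) touches only a new impl, not the call-sites.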

    28. 🔗 Stephen Diehl Can Opus 4.6 do Category Theory in Lean? rss

      Can Opus 4.6 do Category Theory in Lean?

      I have a little category theory library I've been dragging around for about a decade now. It started life in Haskell, got ported to Agda, briefly lived in Idris, spent some time in Coq, and has now landed in Lean 4. I call it Kitty Cats. The idea is simple: take the definitions and theorems from Awodey's Category Theory and the relevant chapters of Mac Lane's Categories for the Working Mathematician, and translate them directly into type-checkable code. Categories, functors, natural transformations, products, limits, adjunctions, monads, the Yoneda lemma. No libraries, no imports beyond what you define yourself. About 900 lines of Lean when all is said and done. It is the kind of project that fits entirely in your head, which is precisely the point. Every time a new dependently typed language or proof assistant comes along, I reach for this scaffold and see how it feels. It's my litmus test, my canonical exercise, my desert island formalization. And it's increasingly a good benchmark for how good language models have become at abstract formal "reasoning" (or maybe just approximating it, but I'll leave that question to the philosophers!).

      So when Anthropic dropped the new Opus 4.6 model, I wanted to know: can it actually write proper Lean 4? Not toy examples. Not theorem two_plus_two : 2 + 2 = 4 := rfl. Real proofs with real proof obligations, the kind where you need to wrangle naturality squares and coherence conditions and the elaborator is fighting you every step of the way. The beautiful thing about a proof assistant is that the answer is ground-verifiable. You don't have to squint at the output and wonder if it's subtly wrong. Either lake build passes or it doesn't. The kernel doesn't care about your feelings.

      The short answer is: yes, kinda mostly. But the longer answer is more interesting.

      We (the royal "we" for me + model) built the whole library together over the course of two 30-minute sessions. The basic hierarchy went up fast. Categories, functors, natural transformations, the opposite category, morphism classes. These are mostly structure declarations with straightforward proof obligations: the category axioms \(\mathrm{id} \circ f = f\), \(f \circ \mathrm{id} = f\), and \((f \circ g) \circ h = f \circ (g \circ h)\); the functor laws \(F(\mathrm{id}) = \mathrm{id}\) and \(F(f \circ g) = F(f) \circ F(g)\). Opus handled these without breaking a sweat, and honestly, so would most competent language models at this point. The proofs are one or two tactics, often just rfl or by simp. The real question was always going to be what happens when the proofs get hard.
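
      To fix notation before the proofs get hard, here is a minimal sketch of what that bottom layer can look like in dependency-free Lean 4. This is my own illustration, not the Kitty Cats source: the category axioms become proof fields on a structure.

      universe u v

      -- Hypothetical sketch, not the Kitty Cats source: a category whose
      -- axioms are proof fields. `comp f g` means "g, then f" (the usual ∘).
      structure Category where
        Obj : Type u
        Hom : Obj → Obj → Type v
        id : (a : Obj) → Hom a a
        comp : {a b c : Obj} → Hom b c → Hom a b → Hom a c
        id_left : ∀ {a b : Obj} (f : Hom a b), comp (id b) f = f
        id_right : ∀ {a b : Obj} (f : Hom a b), comp f (id a) = f
        assoc : ∀ {a b c d : Obj} (f : Hom c d) (g : Hom b c) (h : Hom a b),
          comp (comp f g) h = comp f (comp g h)

      For instances built from ordinary functions, each law is discharged by rfl or a one-line simp, which is exactly the tier the model breezed through.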

      Things got interesting around monoidal categories and the endofunctor construction. To prove that the category of endofunctors is monoidal under composition, you need to establish the pentagon and triangle coherence conditions. The pentagon axiom states that the two ways of reassociating a four-fold tensor product agree:

      $$
      (\alpha_{F,G,H} \otimes \mathrm{id}_K) \circ \alpha_{F, G \circ H, K} \circ (\mathrm{id}_F \otimes \alpha_{G,H,K}) = \alpha_{F \circ G, H, K} \circ \alpha_{F, G, H \circ K}
      $$

      where \(\alpha\) is the associator natural isomorphism. The triangle axiom relates the associator to the left and right unitors \(\lambda\) and \(\rho\):

      $$
      \alpha_{F, \mathrm{Id}, G} \circ (\mathrm{id}_F \otimes \lambda_G) = \rho_F \otimes \mathrm{id}_G
      $$

      These are precisely the kind of thing that's annoying to formalize because the goals involve deeply nested natural transformations composed horizontally and vertically, and the elaborator's notion of what things are "definitionally equal" doesn't always match your intuition. Opus could get the structure right, it could set up the show blocks that unfold the monoidal category projections into their concrete NatTrans representations, and it could figure out that ext a; simp [...] with the right unfolding lemmas would close the goals. But the proofs it produced were verbose. Walls of nested parentheses. The kind of thing that's technically correct but makes your eyes glaze over.

      The crown jewel of the library is monadIsMonoidObj: the proof that a monad is a monoid in the category of endofunctors. It's a joke that every functional programmer has heard (it's even a meme at this point), but actually formalizing it requires you to bridge two levels of abstraction. A monad \((T, \eta, \mu)\) on a category \(\mathcal{C}\) is an endofunctor \(T\) equipped with natural transformations \(\eta : \mathrm{Id} \Rightarrow T\) (unit) and \(\mu : T^2 \Rightarrow T\) (multiplication) satisfying associativity and unit laws:

      $$
      T(\mu_a) \circ \mu_a = \mu_{T(a)} \circ \mu_a
      \qquad
      T(\eta_a) \circ \mu_a = \mathrm{id}
      \qquad
      \eta_{T(a)} \circ \mu_a = \mathrm{id}
      $$

      These are component-wise equations, one for each object \(a\) of \(\mathcal{C}\). To show that \((T, \mu, \eta)\) forms a monoid object in the monoidal category \((\mathrm{End}(\mathcal{C}), \circ, \mathrm{Id})\), you need to lift them to equalities of natural transformations in the endofunctor category, which is itself equipped with the monoidal structure you just built. Each field of the MonoidObj structure requires a show block that manually unfolds the monoidal category projections (because ext can't see through the Hom type alias to find the NatTrans it needs), followed by ext; endo_simp to reduce to components, followed by exact M.mul_assoc a or whichever monad axiom applies. Opus got the shape of this right but needed significant guidance on factoring. The raw output had the show blocks inlined directly in the instance declaration, three or four lines of deeply nested NatTrans.vcomp (endoTensorHom M.mu (NatTrans.ident M.T)) M.mu = ... that, while correct, would terrify anyone reading the code for the first time.

      This is where the model hit a few bumps and needed some help. We refactored the proofs. We introduced endo_ext_eq, a small helper that lifts component-wise equalities to NatTrans equalities, so you could write endo_ext_eq fun a => by endo_simp; exact M.mul_assoc a instead of the show; ext a; endo_simp; exact dance. We extracted the monoidal coherence obligations into standalone lemmas (endoPentagon, endoTriangle, endoTensor_id, endoTensor_comp) so the instance declaration became pure assignment. We used let bindings in theorem signatures to name the five different associator instances in the pentagon axiom, turning a wall of (endoAssociator F (CFunctor.compose G H) K).hom into a legible alpha_F_GH_K. The monadIsMonoidObj definition went from 22 lines of inline proof to 7 lines of named lemma applications. This kind of refactoring is, I think, the thing that humans are still decisively better at: knowing what a reader needs to see, knowing which subexpressions deserve names, knowing when a proof is correct but not yet good.

      The Yoneda lemma was a pleasant surprise. The statement is that for any functor \(F : \mathcal{C}^{\mathrm{op}} \to \mathbf{Set}\) and any object \(a\) in \(\mathcal{C}\), there is a natural bijection:

      $$
      \mathrm{Nat}(\mathrm{Hom}(-, a),\; F) \;\cong\; F(a)
      $$

      The forward direction (evaluating a natural transformation at \(\mathrm{id}_a\) recovers the determining element) is almost trivial once you have map_id. The backward direction (rebuilding the transformation from its value at \(\mathrm{id}_a\)) requires extracting a naturality equation, simplifying it, and applying it symmetrically. Opus navigated this correctly, including the slightly tricky congrFun applications needed when your hom-sets are function types in the Type category. The fully faithful embedding (yoneda_map_eval and yoneda_eval_map) fell out naturally. I was honestly a little charmed.

      What made this work as well as it did was the tooling. Two open source projects deserve credit here. The lean-lsp-mcp server by Oliver Dressler exposes the Lean 4 language server as an MCP interface, giving Claude direct access to goal states, diagnostics, completions, and hover information. The lean4-skills plugin by Cameron Freer builds on top of this with higher-level proving commands, cycle engines, and premise search integrations. Together they give the model a genuine feedback loop. It can call lean_goal to inspect the proof state at any line, call lean_diagnostic_messages to see if the file has errors, call lean_multi_attempt to try several tactic sequences and see which ones make progress, and call lean_run_code to test a complete snippet against the kernel without modifying the working file. This changes the dynamics fundamentally. Instead of generating Lean code and hoping it compiles, the model operates in a tight loop: write a proof, check the goal state, see the error, adjust. It's the difference between painting blindfolded and painting with your eyes open. The model still makes mistakes, it still tries tactics that don't apply and writes rw chains that don't match, but it can detect these failures immediately and recover. Six months ago this kind of integration didn't exist. The model would have been generating Lean into a void, with no way to know if the elaborator was happy or screaming.

      The search tools matter too. lean_leansearch, lean_loogle, and lean_leanfinder let the model query Mathlib's lemma database by natural language, by type signature, and by semantic similarity. We didn't use Mathlib in this project (the whole point is to build from scratch), but the ability to verify that a lemma name exists before trying to use it, or to find the right name for a theorem you know should exist, is invaluable. lean_state_search and lean_hammer_premise go even further: given a proof state, they suggest lemmas that might close the goal. This is premise selection, the same technique that powers the strongest automated theorem provers, exposed as an API call. We're not quite at the point where you can say "prove the pentagon axiom" and walk away. But we're remarkably close to something that can handle a lot of nontrivial theorem proving with appropriate guidance.

      If I'm being honest about the limitations: Opus struggles with the kind of reasoning that requires holding a complex proof state in working memory and planning several steps ahead. It can close goals that yield to a single tactic or a short chain, but multi-step rewrites where you need to introduce a naturality equation, simplify it in a particular way, then apply it at a specific position in a long composition chain, that still requires hand-holding. It sometimes tries simp when it needs rw, or applies a lemma at the wrong position, or forgets that a show block is necessary to make ext fire. These are not deep failures of understanding; they're more like the mistakes a graduate student makes in their first semester with a proof assistant. The model knows what it wants to prove, it roughly knows the strategy, but the bureaucratic details of getting the elaborator to agree take iteration.

      One thing worth noting: in informal experiments, turning the reasoning budget up did increase convergence time (more tokens spent deliberating before committing to a tactic) but didn't obviously improve the quality of the final output. The model would think longer and still reach for the same simp call. This is a hard thing to measure empirically because the process is nondeterministic and highly sensitive to prompting, so take it with a grain of salt. But my working hypothesis is that for tactic-level theorem proving, the bottleneck is less about "thinking harder" and more about having the right tool calls in the loop. The language server feedback matters more than the internal chain of thought.

      The thing is, it's gotten exponentially better in the last six months. I've been running this same exercise periodically, and the trajectory is striking. A year ago, language models could barely write syntactically valid Lean. Six months ago, they could handle simple tactic proofs but fell apart on anything involving universe polymorphism or typeclass resolution. Today, with MCP integration and language server feedback, we built 900 lines of formalized category theory in two sessions, including proofs that many math graduate students would find challenging. It's not hard to project where this is going.

      And maybe that's the exciting part. Not that AI can prove theorems today (it can, sort of, with help), but that the boilerplate is evaporating. The tedious parts of formalization, the coherence conditions, the naturality lemmas, the simp configurations, these are exactly the kind of structured, verifiable, mechanically checkable work that language models are getting good at. When this layer becomes trivial, we get to spend our time on the parts that actually matter: choosing the right abstractions, seeing the connections between structures, deciding what's worth formalizing in the first place. The proof assistant becomes less of a bureaucratic obstacle and more of a genuine thinking tool. We get to build higher.

      I've been carrying this little category theory library around for ten years, porting it from language to language, and every time, the experience tells me something about the state of the art. This time what it told me is that we're living in the future, and it's weirder and more interesting than I expected.

  4. February 15, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-02-15 rss

      IDA Plugin Updates on 2026-02-15

      New Releases:

      Activity:

      • DeepExtractIDA
        • bf06d3d4: Add function_index.json generation with library boilerplate tagging f…
      • ida-free-mcp
        • 209ff34f: Add Hex-Rays decompiler integration and fix rename/type parsing
        • 647f0b64: Update paths for IDA Free 9.3 and fix Claude Code MCP command
        • aaca3c1e: Add stack variable renaming to rename tool, MIT license, and Claude s…
      • ida-hcli
      • ida_missinglink
      • idamcp
        • ae84aa7c: Add per-tool call logging to IDA output window
        • 50854801: Update .gitignore with Python, IDE, and OS patterns
        • 9b3555b6: init2
        • ad4872bb: Add README with project overview, installation, and usage guide
        • 66e0d7c2: Add .gitignore with .claude/ exclusion
        • b85dc8c2: init
      • idawilli
        • ba6d95c4: codemode-eval: track tool call failures as eval metric
        • 71246402: codemode-eval: use inline usage.cost from OpenRouter responses
        • 63d7e1c8: codemode-eval: use reasoning effort levels and real cost tracking
        • 1da618bd: codemode: add ida-codemode-eval package for model evaluation
        • f0e5e502: Merge pull request #111 from williballenthin/claude/implement-issue-1…
        • 118c696c: codemode: sync uv.lock with declared dependencies
        • 8fbf77f8: codemode: allow OpenAI-compatible endpoint URLs as -model
        • 854dd4b6: Merge pull request #110 from williballenthin/claude/add-logfire-instr…
        • e347ae41: codemode: add Logfire instrumentation for agent observability
      • ret-sync
        • 60e7b0fd: Update README (ida version supported + WinDbg)
      • SuperPseudo
        • 887d47cb: Merge pull request #5 from 321Proteus/main
    2. 🔗 r/reverseengineering [Tool Release] LCSAJdump: Universal Graph-Based ROP/JOP Gadget Finder (Finds "Shadow Gadgets" that linear scanners miss) rss
    3. 🔗 Simon Willison Deep Blue rss

      We coined a new term on the Oxide and Friends podcast last month (primary credit to Adam Leventhal) covering the sense of psychological ennui leading into existential dread that many software developers are feeling thanks to the encroachment of generative AI into their field of work.

      We're calling it Deep Blue.

      You can listen to it being coined in real time from 47:15 in the episode. I've included a transcript below.

      Deep Blue is a very real issue.

      Becoming a professional software engineer is hard. Getting good enough for people to pay you money to write software takes years of dedicated work. The rewards are significant: this is a well compensated career which opens up a lot of great opportunities.

      It's also a career that's mostly free from gatekeepers and expensive prerequisites. You don't need an expensive degree or accreditation. A laptop, an internet connection and a lot of time and curiosity is enough to get you started.

      And it rewards the nerds! Spending your teenage years tinkering with computers turned out to be a very smart investment in your future.

      The idea that this could all be stripped away by a chatbot is deeply upsetting.

      I've seen signs of Deep Blue in most of the online communities I spend time in. I've even faced accusations from my peers that I am actively harming their future careers through my work helping people understand how well AI-assisted programming can work.

      I think this is an issue which is causing genuine mental anguish for a lot of people in our community. Giving it a name makes it easier for us to have conversations about it.

      My experiences of Deep Blue

      I distinctly remember my first experience of Deep Blue. For me it was triggered by ChatGPT Code Interpreter back in early 2023.

      My primary project is Datasette, an ecosystem of open source tools for telling stories with data. I had dedicated myself to the challenge of helping people (initially focusing on journalists) clean up, analyze and find meaning in data, in all sorts of shapes and sizes.

      I expected I would need to build a lot of software for this! It felt like a challenge that could keep me happily engaged for many years to come.

      Then I tried uploading a CSV file of San Francisco Police Department Incident Reports - hundreds of thousands of rows - to ChatGPT Code Interpreter and... it did every piece of data cleanup and analysis I had on my napkin roadmap for the next few years with a couple of prompts.

      It even converted the data into a neatly normalized SQLite database and let me download the result!

      I remember having two competing thoughts in parallel.

      On the one hand, as somebody who wants journalists to be able to do more with data, this felt like a huge breakthrough. Imagine giving every journalist in the world an on-demand analyst who could help them tackle any data question they could think of!

      But on the other hand... what was I even for? My confidence in the value of my own projects took a painful hit. Was the path I'd chosen for myself suddenly a dead end?

      I've had some further pangs of Deep Blue just in the past few weeks, thanks to the Claude Opus 4.5/4.6 and GPT-5.2/5.3 coding agent effect. As many other people are also observing, the latest generation of coding agents, given the right prompts, really can churn away for a few minutes to several hours and produce working, documented and fully tested software that exactly matches the criteria they were given.

      "The code they write isn't any good" doesn't really cut it any more.

      A lightly edited transcript

      Bryan: I think that we're going to see a real problem with AI induced ennui where software engineers in particular get listless because the AI can do anything. Simon, what do you think about that?

      Simon: Definitely. Anyone who's paying close attention to coding agents is feeling some of that already. There's an extent where you sort of get over it when you realize that you're still useful, even though your ability to memorize the syntax of programming languages is completely irrelevant now.

      Something I see a lot of is people out there who are having existential crises and are very, very unhappy because they're like, "I dedicated my career to learning this thing and now it just does it. What am I even for?". I will very happily try and convince those people that they are for a whole bunch of things and that none of that experience they've accumulated has gone to waste, but psychologically it's a difficult time for software engineers.

      [...]

      Bryan: Okay, so I'm going to predict that we name that. Whatever that is, we have a name for that kind of feeling and that kind of, whether you want to call it a blueness or a loss of purpose, and that we're kind of trying to address it collectively in a directed way.

      Adam: Okay, this is your big moment. Pick the name. If you call your shot from here, this is you pointing to the stands. You know, I – Like deep blue, you know.

      Bryan: Yeah, deep blue. I like that. I like deep blue. Deep blue. Oh, did you walk me into that, you bastard? You just blew out the candles on my birthday cake.

      It wasn't my big moment at all. That was your big moment. No, that is, Adam, that is very good. That is deep blue.

      Simon: All of the chess players and the Go players went through this a decade ago and they have come out stronger.

      Turns out it was more than a decade ago: Deep Blue defeated Garry Kasparov in 1997.


    4. 🔗 r/Yorkshire Down towards Kettlewell rss

      submitted by /u/Voice_Still
      [link] [comments]

    5. 🔗 r/york What did I just see flying over York? rss

      Did anybody else see this strange string of lights flying over York tonight? I caught it on camera around 6:25pm.

      submitted by /u/No-Trade-5307
      [link] [comments]

    6. 🔗 r/york Immigration raids in York spark launch of York Anti-Raids Group rss

      submitted by /u/BitGirl777
      [link] [comments]

    7. 🔗 r/Yorkshire I had no idea just how much of Wuthering Heights they filmed here rss

      Did anyone see them filming last year?

      submitted by /u/Terrible_Passion6178
      [link] [comments]

    8. 🔗 r/LocalLLaMA You can run MiniMax-2.5 locally rss

      MiniMax-2.5 is a new open LLM achieving SOTA in coding, agentic tool use, search, and office work. The 230B-parameter (10B active) model has a 200K context window; unquantized bf16 requires 457GB, and the Unsloth Dynamic 3-bit GGUF reduces that to 101GB (-62%).

      Official guide: https://unsloth.ai/docs/models/minimax-2.5

      GGUF models: https://huggingface.co/unsloth/MiniMax-M2.5-GGUF

      Top LLM, RAG and AI agent updates of this week: https://aixfunda.substack.com/p/top-llm-rag-and-agent-updates-of-03a

      submitted by /u/Dear-Success-1441
      [link] [comments]

    9. 🔗 r/Leeds Quiet places to practice hill starts rss

      hello

      it's as the title suggests, I'm looking for recommendations of quiet roads on a hill where I can practice my hill starts in peace.

      I recently got a new car, and changed from a city car to a compact SUV, so it's quite a bit different. I had a bit of an incident last night where I stalled up near Birstall and was met with some really aggressive people because of it. it was a really stressful experience and so I want to find a quiet hill somewhere that I can get some practice in without worrying about disrupting the flow of traffic.

      can anyone recommend anywhere?

      thanks in advance :)

      submitted by /u/Mindless_Fig3538
      [link] [comments]

    10. 🔗 remorses/critique v0.1.105 release

      critique --stdin / critique --web:

      • Only show 'URL is private' notice when generating with --web (fix notice appearing in scrollback/pager output like lazygit where it makes no sense)

      Tests:

      • Add comprehensive integration tests for --stdin pager mode using tuistory (10 test cases covering empty diffs, multiple files, renames, binary files, narrow terminals, etc.)
      • Rewrite lazygit pager test as real integration test that launches critique in a PTY and verifies scrollback output
    11. 🔗 r/Yorkshire Burnsall Church in Yourkshire❤️ rss

      @philwilliams

      submitted by /u/Additional_Fly_6603
      [link] [comments]

    12. 🔗 r/Yorkshire Lasagna Love – cook for your community or request a free lasagna rss

      My name is Michelle and I’m the UK Coordinator for Lasagna Love. Lasagna Love is an organisation made up of volunteers across the UK (and around the world) who cook and deliver homemade lasagna to people in need, at no cost and with no strings attached.

      Maybe it's a family struggling financially, someone recovering from surgery, a parent adjusting to life with a new baby or just someone going through a tough time. Whatever the reason, we believe that a warm meal and a little kindness can go a long way.

      You simply request a lasagna, and we match you with a volunteer in your area who will lovingly prepare and deliver one to your door. It’s all about community, care and connection with no judgment and no complicated process.

      I’m reaching out because we have wonderful volunteers across the country who are ready and eager to cook, for anyone who could do with some lasagna love! If you could use a little help and a comforting meal, please put in a request through https://lasagnalove.org/request-a-meal/ or you can nominate someone to receive a meal - https://lasagnalove.org/nominate/

      We’re also always looking for more volunteers! Whether you can cook once a month, once a week, or just whenever it suits you, every lasagna helps brighten someone’s day. It’s a small act that has a big impact. If you would like to volunteer, just register here https://lasagnalove.org/volunteer/

      If you have any questions, I’m happy to answer them here, or you can head to the website.

      Thank you so much for taking the time to read and I hope to see some of you sign up, whether to request a lasagna or to make one for someone in your community.

      Feed families. Spread kindness. Strengthen communities.

      submitted by /u/Lasagna_Love_UK
      [link] [comments]

    13. 🔗 r/reverseengineering vitoplantamura/BugChecker: SoftICE-like kernel debugger for Windows 11 rss
    14. 🔗 r/Yorkshire A heads up rss
    15. 🔗 r/Leeds Fire/Smoke around harehills rss

      Lots of smoke in the air, seems to be around that area or further to town, the fire brigade and police were making their way just over an hour ago.

      submitted by /u/NXGZ
      [link] [comments]

    16. 🔗 Register Spill Joy & Curiosity #74 rss

      Two guys in the jungle. A tiger charges at them. One guy kneels down to tighten his shoelaces. The other yells, "What are you doing? You can't outrun a tiger!" First guy says, "I don't have to outrun the tiger. I only have to outrun you."

      One mistake I see a lot of engineers make when thinking about the impact AI will have on the software industry is to think in edge cases.

      "LLMs can't write C compilers correctly, psh! LLMs can't write code in this very old and large codebase! LLMs can't fix this very difficult and complex bug that took me two weeks to figure out."

      You don't judge the impact of a technology on an industry by looking at one end of a spectrum only. You need to look at the other end too, and the average.

      And for AI to have a dramatic, nothing-will-be-the-same impact on software as an industry, it doesn't need to be better than the best engineer you know. It only needs to be better than the average.

      • I recorded a short video: "I am the bottleneck now." As others have pointed out, yes, I've always been the bottleneck. I guess I should've said instead: "I am a very, very narrow bottleneck now." But the point of the video wasn't necessarily that I can now often copy & paste text from one tool into the other and code gets written and I can push it straight up. I recorded the video because I wanted to share that punchline with the customer coming back to me and, more importantly even, to explain why I don't think that our existing software development tooling is built for this new future. Because it's strange to assume that with these models getting better and better, and their ability to write good code on first try improving, we'll keep opening tickets in Linear, pasting them into an agent, having them open a PR on GitHub, only for another agent to review it, so that we can then hit merge. This whole flow was built for humans. It's based on the assumption that code is slow and expensive to write. That's no longer true and the tools will collapse into the new truth.

      • And here's Armin, riffing on the idea of bottlenecks and how they shift in technological revolutions and what it means for software: The Final Bottleneck.

      • And here's stevey with other thoughts along the same lines, the lines pointing towards where this is headed: The AI Vampire.

      • And here it's the Harvard Business Review saying that AI doesn't reduce work, but intensifies it: "Over time, this rhythm raised expectations for speed--not necessarily through explicit demands, but through what became visible and normalized in everyday work. Many workers noted that they were doing more at once--and feeling more pressure--than before they used AI, even though the time savings from automation had ostensibly been meant to reduce such pressure."

      • But then here's Cate Hall: Do Less. "In retrospect, what went wrong at the retreat was the same thing that went wrong with my reading binge, it was just the pattern repeating at a deeper level. The part of me doing the scanning and releasing -- the monitoring layer, the internal project manager -- was the thing that actually needed to go offline. Rather than relaxing in the relevant sense, I was using my optimization machinery to simulate relaxation at a very convincing level of fidelity while the machinery itself hummed along at full speed. […] And if your optimizing machine is still humming along, even if you are doing rest-like activities, you are not truly resting. Reading The Power Broker in your spare time, not because you are genuinely interested, but because you can't bear to be the only person at your SF dinner party who hasn't? Still optimizing. Cooking the most impressive dinner possible for your friends, so you can convince them that you're worthy of love, rather than making something you enjoy producing? Still optimizing."

      • More on bottlenecks: "This, to me, is the real risk. Software broadly commoditizes, with a new crop of software / value emerging. A big constraint to the development of software is engineering resources. Before the cloud, a constraint was how quickly could you stand up racks of servers to support user growth. In the cloud era that was commoditized, and engineering resources became the constraining factor (how quickly could you develop software). With AI, that constraining resource (engineering velocity) is going away."

      • The o16g Manifesto. o16g stands for Outcome Engineering. "It was never about the code."

      • "Those of us building software factories must practice a deliberate naivete: finding and removing the habits, conventions, and constraints of Software 1.0. The DTU is our proof that what was unthinkable six months ago is now routine."

      • 23 lessons you will learn living in a very snowy place. Lovely. Great writing, made me smile a lot.

      • Twenty Five Years of Computing. Very, very good. Twenty five years of loving computing, I'd say.

      • “It was May 15th, 2024. My mom's 60th birthday. Instead of planning a birthday message, I was checking my phone for an acquisition term sheet from a $40 billion company. Unfortunately, when I finally got the email, it was not the yes or no response I had been hoping for. It took almost four years before we finally found the right buyer. I wished a book like this existed at the time. If you are going through an M&A as a founder or are curious about my journey, I hope this book will be helpful to you.” Very, very interesting. I've been in an M&A-like situation once and it's shaped me and my professional outlook like few other things. What I learned is: (1) you can talk and make promises for months, but nothing counts until an actual contract is signed, and even then I wouldn't relax yet; (2) the bigger company can wait until you die.

      • Benedict Evans had a killer line in his latest newsletter: “A chatbot might be a new, different, and expanded way to handle those kinds of improvised problems - it won't replace software, but expand the space around it. In other words, there is software that is formalised, institutionalised process, and then there is software that is improvisation. You won't replace process with improvisation - you don't replace Salesforce with ChatGPT any more than you replace it with Excel. But there's a lot more that you could automate if you could improvise more.”

      • “A first look at the interior and interface of the Ferrari Luce.” This isn't a car newsletter and I don't own any Ferraris, but this is interesting “because it's the work of Sir Jony Ive, the man who steered the design trajectory of Apple” and Mike Matas and others and, well, even if we don't and never will drive this Ferrari, this will have an effect, just like Miranda Priestly said in The Devil Wears Prada.

      • "But on the whole, the economic transition that AI is ushering in will be much gentler than people seem to think. COVID is a terrible analogy for what's coming. The ordinary person, the person who works at a regular job and doesn't know what Anthropic is and invests a certain amount of money in a diversified index fund at the end of each month: that person will most likely be fine. I don't think they have much to worry about from AI."

      • Even three months ago, no: three weeks ago, I would've said that Andreas' predictions here are too out there, too crazy. Now I agree with everything he's saying here 100%: "Is software development completely and utterly beeped?"

      • As someone who closes all his browser tabs many times per day I 100% agree with this: the secret to structuring your work is "nothing". Of course, if you're a tab hoarder, you'll disagree. And there's no way I can convince you to change your ways, nor is there any way you can convince me to change mine. It's how it has been and how it will be. Our two factions, our peoples, tab closers and tab hoarders, desk cleaners and desk pilers, will exist until the death of the tab, locked into a cosmic dance, forever pushing and pulling each other, one closing and the other opening. That's how it's written.

      • Third time I'm reading this, George Saunders' My Writing Education. It's so very good and this line has been stuck in my head since the first reading, many years ago: “It is as if that is the point of power: to allow one to access the higher registers of gentleness.”

      • "Writing about 'the obvious' is a useful service. Often people doubt what their own experience is telling them until someone else helps confirm their suspicions and put them into words." Perfectly put, by Simon Willison.

      • "Spotify says its best developers haven't written a line of code since December, thanks to AI." I've written a handful, I'd say. And: "my name is jessie frazelle and i have not touched code in an editor since october."

      • Ben Thompson was a guest on Cheeky Pint and this portion here, on US vs. European companies, is especially interesting. As a German who's been working for German companies half his career and US companies the other half, I find the analysis to be spot on: US companies focus on making more profit, while European companies focus on optimizing cost and efficiency.

      • This is Kella Byte, who's been tweeting about databases for as long as I can remember: Building A Distributed SQL Database in 30 Days with AI.

      • “A terminal weather app with ASCII animations driven by real-time weather data. Features real-time weather from Open-Meteo with animated rain, snow, thunderstorms, flying airplanes, day/night cycles, and auto-location detection.”

      • David Crawshaw, articulating it very, very well: "Understanding is an iterative process. Write code, run, think, write some more. No-one ever came up with a design, wrote the code, compiled then shipped. Removing most of the writing radically changes that iterative loop. [Reply tweet:] Absolutely in a good way. I can have an idea, prototype it three different ways and make a call based on a real attempt to build it, in a few hours. In the old software world, we would have had a week of meetings to decide if the prototype was worth the effort."

      • Thoughtworks organized a retreat "to wrestle with the questions that matter most as AI reshapes how we build software" and published a summary. There are some very interesting things in there. Nothing new to any reader of this newsletter, I'm sure, but interesting because things we've been doing are described very explicitly: "This middle loop involves directing, evaluating and fixing the output of AI agents. It requires a different skill set than writing code. It demands the ability to decompose problems into agent-sized work packages, calibrate trust in agent output, recognize when agents are producing plausible-looking but incorrect results and maintain architectural coherence across many parallel streams of agent-generated work. […] These are skills that experienced engineers often possess, but they are rarely explicitly developed or recognized in career ladders."

      • Good stuff.

      If you also think tigers are the most impressive animals in the world and if there's a chance that you'd stand there and say "whoa" instead of running, you should subscribe:

      I'm collecting some testimonials for this newsletter, because I noticed that its landing page is seriously outdated. If you enjoy reading this newsletter and it means something to you, feel free to hit reply and let me know.

    17. 🔗 r/reverseengineering Introducing IDA-Free-MCP: mcp server for IDA Free version (native) rss
    18. 🔗 r/wiesbaden Hunting lodge Platte close to Wiesbaden 1823-1945 rss
    19. 🔗 r/LocalLLaMA PSA: NVIDIA DGX Spark has terrible CUDA & software compatibility; and seems like a handheld gaming chip. rss

      I've spent the past week experimenting with the DGX Spark and I am about to return it. While I had understood the memory bandwidth and performance limitations, I like the CUDA ecosystem and was willing to pay the premium. Unfortunately, my experiences have been quite poor, and I suspect this is actually handheld gaming scraps that NVIDIA rushed to turn into a product to compete with Apple and Strix Halo.

      The biggest issue: the DGX Spark is not datacentre Blackwell; it's not even gaming Blackwell; it has its own special-snowflake sm121 architecture. A lot of software does not work with it, or has been patched to run sm80 (Ampere, 6 years old!) codepaths, which means it doesn't take advantage of Blackwell optimisations.

      When questioned about this on NVIDIA support forum, an official NVIDIA representative said:

      sm80-class kernels can execute on DGX Spark because Tensor Core behavior is very similar, particularly for GEMM/MMAs (closer to the GeForce Ampere-style MMA model). DGX Spark not has tcgen05 like jetson Thor or GB200, due die space with RT Cores and DLSS algorithm

      Excuse me?? The reason we're getting cut-down tensor cores (not real blackwell) is because of RT Cores and "DLSS algorithm"? This is an AI dev kit; why would I need RT Cores, and additionally how does DLSS come into play? This makes me think they tried to turn a gaming handheld GPU (which needs/supports unified memory) into a poor competitor for a market they weren't prepared for.

      In addition, in the same post, the rep posted what appear to be LLM hallucinations, mentioning that issues have been fixed in version numbers and releases of software libraries that do not exist.

      Just be careful when buying a DGX Spark. You are not really getting a modern CUDA experience. Yes, everything works fine if you pretend you only have an Ampere, but attempting to use any Blackwell features is an exercise in futility.

      Additionally, for something that is supposed to be ready 'out of the box', many people (including myself and ServeTheHome) report basic issues like HDMI display output. I originally thought my Spark was DOA; nope, it just refuses to work with my 1080p144 ViewSonic (which works with all my other GPUs, including my NVIDIA ones), and I had to switch to my 4K60 monitor. Dear NVIDIA, you should not have basic display output issues...

      submitted by /u/goldcakes
      [link] [comments]

    20. 🔗 r/reverseengineering IDA Pro 9.3 released rss
    21. 🔗 exe.dev Review the reviews rss

      When I was actively contributing to the Go project, my primary feed was the code review email firehose.

      Issues, mailing lists, and Slack had low SNR. The finished commit history was better: it was finely polished work with some of the best written commit messages I have ever encountered. But it didn't hold a candle to code reviews for operational learning.

      The commit history could tell you what got done and why, in impressive technical detail. But the code review could also tell you about mid-stream direction changes; what concerns were taken seriously; what got started and not completed; what values drove decisions.


      I'm reflecting on that this morning because of Simon Willison's comments about cognitive debt:

      I've been experimenting with prompting entire new features into existence without reviewing their implementations and, while it works surprisingly well, I've found myself getting lost in my own projects.

      I no longer have a firm mental model of what they can do and how they work, which means each additional feature becomes harder to reason about, eventually leading me to lose the ability to make confident decisions about where to go next.

      Having a human in the loop benefits the human. But where?

      Reading code reviews was effective for the Go project. It still is.


      My workflow has churned over the last year. But here's what I do now: I review reviews.

      I ask an agent to do something. Code happens. I then ask an agent to review that code, without looking at it myself. Then I review the review.

      The agent's review typically contains design commentary, questions about decisions made, bugs, and nits. This is usually enough for me to get a clear idea about what's going on in the code, at the right level of abstraction. And it enables me to very efficiently provide direction.

      I have a heavily used code review skill that optimizes for this workflow:

      Number all comments, questions, and suggestions for easy reference. Use an ever-incrementing scheme starting at 1.

      Format:

      • Top-level items: 1., 2., 3.
      • Sub-items: 2a., 2b.

      This lets the user respond concisely and unambiguously: "3: please fix" or "2b: stet"

      An agent who has just done a code review has an ideally primed context window for working on that code. It makes fixes for me.

      And as you might have guessed, when those fixes are done, I amend the commit unseen and start another code review cycle. When the code reviews stabilize, I skim the final commit. There are rarely any surprises.

      The reviews rarely actually come back clean. Rather, they converge on commentary I've already decided to ignore, places where the model weights and I flatly disagree.


      The numbers bear this out. I just asked Claude to look over the entire history of the initial prompts I give it and do some light analysis.

      Top 1-grams: please (2.03%), codereview (0.64%), look (0.61%), use (0.58%), make (0.51%)

      Top 2-grams: look at (0.41%), please codereview (0.39%), i want (0.34%), want to (0.29%), add a (0.21%)

      I learned while writing this blog post that "code review" is two words. RIP stats.


      My life is now officially Seussian. I watch the watchers.

      The Bee Watcher

      Cross-posted at commaok.xyz/ai/review-the-reviews/