

to read (pdf)

  1. Doing the thing is doing the thing
  2. Reframing Agents
  3. How to Choose Colors for Your CLI Applications · Luna’s Blog
  4. A Protocol for Package Management | Andrew Nesbitt
  5. No management needed: anti-patterns in early-stage engineering teams | Antoine Boulanger

  1. February 03, 2026
    1. 🔗 navidrome/navidrome v0.60.0 release

      Plugins

      This release introduces a major rewrite of the experimental Plugin System, now with multi-language PDK support, enabling developers to extend Navidrome's functionality using WebAssembly-based plugins written in Go, Rust, Python, or JavaScript. Plugins run in a secure sandbox and can provide additional metadata sources, custom integrations, and server-side enhancements. Users can now easily configure plugins directly from the UI through a new JSONForms-based configuration interface.

      A couple of working plugins are already available:

      For more plugins, keep an eye on the navidrome-plugins tag on GitHub.

      More details and instructions on how to use and manage plugins can be found in our documentation.
      New documentation will soon be added with details on how to create new plugins.

      Metadata Extraction

      Additionally, this version includes a pure-Go metadata extractor built on top of the new go-taglib library. This is a significant step toward removing the C++ TagLib dependency, which will simplify cross-platform builds and packaging in future releases. The new extractor is activated by default, but in case of any issues you can revert to the previous implementation by setting the Scanner.Extractor="legacy-taglib" configuration option.
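      As a minimal sketch (assuming the usual navidrome.toml layout; the option name and value come straight from the note above), the fallback would look like:

      ```toml
      # Sketch: revert to the previous C++ TagLib-based extractor,
      # only needed if the new pure-Go extractor misbehaves
      Scanner.Extractor = "legacy-taglib"
      ```

      The same option should also be settable via Navidrome's environment-variable form (likely ND_SCANNER_EXTRACTOR); check the documentation for your deployment.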

      Instant Mix

      The Instant Mix feature generates a playlist of similar songs based on a selected track. By default, it retrieves similar songs from Last.fm (if configured with an API key) or falls back to Deezer. It can also be configured to use external plugins, like AudioMuse-AI for sonic-analysis-based similarity recommendations.

      New and Changed Configuration Options

      Plugin System Options

      Option | Default | Description
      ---|---|---
      Plugins.Enabled | true | Enable/disable the plugin system
      Plugins.Folder | "" | Path to the plugins directory. Default: $DataFolder/Plugins
      Plugins.CacheSize | "200MB" | Maximum cache size for storing compiled plugin WASM modules
      Plugins.AutoReload | false | Automatically detect new/changed/removed plugins
      Plugins.LogLevel | "" | Override log level for plugin-related messages
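      Putting the table above together, a plugin section in navidrome.toml might look like the following sketch (values shown are the documented defaults; the comments restate the table):

      ```toml
      # Sketch: plugin-system options with their documented defaults
      Plugins.Enabled = true      # master switch for the plugin system
      Plugins.Folder = ""         # "" means $DataFolder/Plugins
      Plugins.CacheSize = "200MB" # cache for compiled plugin WASM modules
      Plugins.AutoReload = false  # detect new/changed/removed plugins
      Plugins.LogLevel = ""       # "" inherits the global log level
      ```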

      Subsonic API Options

      Option | Default | Description
      ---|---|---
      Subsonic.MinimalClients | "" | Comma-separated list of clients that receive reduced API responses (useful for resource-constrained devices like smartwatches)
      Subsonic.EnableAverageRating | true | Include average rating in API responses

      Metadata & Matching Options

      Option | Default | Description
      ---|---|---
      SimilarSongsMatchThreshold | 85 | Minimum similarity score (0-100) for matching similar songs from external sources to local library
      LastFM.Language | "en" | Now supports comma-separated list of languages (e.g., "de,fr,en") for metadata fallback
      Deezer.Language | "en" | Now supports comma-separated list of languages for metadata fallback
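      As a sketch of the matching and language options above in navidrome.toml (the language lists here are illustrative, taken from the example in the table):

      ```toml
      # Sketch: similar-song matching threshold and metadata language fallback
      SimilarSongsMatchThreshold = 85   # 0-100; minimum score to match a local track
      LastFM.Language = "de,fr,en"      # languages tried in order for metadata
      Deezer.Language = "de,fr,en"
      ```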

      Renamed Options (Deprecation Notice)

      The following options have been renamed. The old names still work but will be removed in a future release:

      Old Name | New Name
      ---|---
      HTTPSecurityHeaders.CustomFrameOptionsValue | HTTPHeaders.FrameOptions

      Security

      • Fix potential XSS vulnerability by sanitizing user-supplied data before rendering (GHSA-rh3r-8pxm-hg4w). (d7ec735 by @AlexGustafsson)
      • Fix potential DoS vulnerability in cover art upscaling by clamping requested square size to original dimensions (GHSA-hrr4-3wgr-68x3). (77367548 by @deluan). Thanks to @yunfachi

      Added

      • Plugins:

        • Add new WebAssembly-based plugin system with multi-language PDK support (Go, Rust, Python). (#4833 by @deluan)
        • Add JSONForms-based plugin configuration UI. (#4911 by @deluan)
        • Add similar songs retrieval functions to plugins API. (#4933 by @deluan)

      • Server:

        • Add pure-Go metadata extractor (go-taglib) as alternative to FFmpeg-based extraction. (#4902 by @deluan)
        • Add support for reading embedded images using the new taglib extractor by default. (66474fc by @deluan)
        • Add Instant Mix (song-based Similar Songs) functionality with MBID, ISRC and Title/Artist fuzzy matching. (#4919, #4946 by @deluan)
        • Add support for multiple languages when fetching metadata from Last.fm and Deezer. (#4952 by @deluan)
        • Add Subsonic.MinimalClients configuration option for improved compatibility with minimal Subsonic clients. Default list is "SubMusic". (#4850 by @typhoon2099)
        • Add support for public/private playlists in NSP import. (c5447a6 by @deluan)
        • Add RISCV64 builds. (#4949 by @MichaIng)

      • UI Features:

        • Add composer field to table views. (#4857 by @AlexGustafsson)
        • Add prompt before closing window if music is playing. (#4899 by @alannnna)
        • Add Nautiline-like theme. (#4909 by @borisrorsvort)
        • Add multiline support and resizing for playlist comment input. (6fce30c by @deluan)

      • Subsonic API:

        • Add avgRating field from Subsonic spec. (#4900 by @terry90)

      • Insights:

        • Add insights collection for Scanner.Extractor configuration to measure go-taglib usage. (63517e9 by @deluan)
        • Add file suffix counting to insights. (0473c50 by @deluan)

      Changed

      • Optimize cross-library move detection for single-library setups. (#4888 by @deluan)
      • Improve Deezer artist search ranking. (a081569 by @deluan)
      • Rename HTTPSecurityHeaders.CustomFrameOptionsValue to HTTPHeaders.FrameOptions. (7ccf44b by @deluan)
      • Update translations: Bulgarian, Catalan, German, Greek, Spanish, Finnish, French, Galician, Indonesian, Dutch, Polish, Russian, Slovenian, Swedish, Thai by POEditor contributors.
      • Update Spanish translations. (#4904 by @abrugues)
      • Update Basque translation. (#4815 by @xabirequejo)

      Fixed

      • Playlists:

        • Fix M3U playlist import failing for paths with different UTF/Unicode representations (NFC/NFD normalization). (#4890 by @deluan)
        • Fix playlist name sorting to be case-insensitive. (#4845 by @deluan)

      • UI:

        • Fix various UI issues and improve styling coherence. (#4910 by @borisrorsvort)
        • Fix AMusic theme player buttons and delete button color. (#4797 by @dragonish)
        • Fix export missing files showing only first 1000 results. (017676c by @deluan)

      • Scanner:

        • Fix FullScanInProgress not reflecting current scan request during interrupted scans. (8c80be5 by @deluan)
        • Fix "Expression tree is too large" error by executing GetFolderUpdateInfo in batches. (cde5992 by @deluan)
        • Fix stale role associations when artist role changes. (2d7b716 by @deluan)
        • Fix infinite recursion in PID configuration. (1c4a7e8 by @deluan)
        • Fix default PIDs not being set for Album and Track; in some circumstances this could lead to empty PIDs. (71f549a by @deluan)
        • Fix error when watcher detected too many folder changes, causing the scan to fail. (9ed309a by @deluan)
        • Show scan errors in the UI more consistently. (ebbc31f by @deluan)

      • Subsonic API:

        • Fix username parameter validation for getUser endpoint. (6ed6524 by @deluan)
        • Fix getNowPlaying endpoint to always be enabled regardless of configuration. (603cccd by @deluan)

      • Server:

        • Fix JWT-related errors being exposed on share page. (#4892 by @AlexGustafsson)
        • Fix user context not preserved in async NowPlaying dispatch. (396eee4 by @deluan)
        • Fix environment variable configuration loading not being logged when no config file is found. (51ca2de by @deluan)
        • Fix items with no annotation not being included for starred=false filter, handle has_rating=false. (#4921 by @kgarner7)
        • Fix Last.fm's scrobble and updateNowPlaying methods to send parameters in the request body. (51026de by @deluan)

      New Contributors

      Full Changelog : v0.59.0...v0.60.0

      Helping out

      This release is only possible thanks to the support of some awesome people!

      Want to be one of them?
      You can sponsor, pay me a Ko-fi, or contribute with code.

      Where to go next?

    2. 🔗 langchain-ai/deepagents deepagents-cli==0.0.17 release

      Features


      Thanks to our contributors: @sydney-runkle, @jkennedyvz, @eyurtsev, @mdrxy

    3. 🔗 langchain-ai/deepagents deepagents==0.3.10 release

      Changes since deepagents==0.3.9

      • fix(sdk): grep should perform literal search instead of regex (#975)
      • feat(sdk): sandbox provider interface (#900)
      • fix(sdk): handle exact file paths in grep and glob operations (#1017)
      • fix: add py.typed to deepagents (#1024)
      • fix(sdk): return error message instead of raising ValueError for invalid paths (#994)
      • fix(deepagents): prevent state merge error in parallel sub-agents (#954)
      • chore: enrich pyproject.toml files (#996)
      • chore(sdk,cli): add VERSION (#983)
      • feat(sdk): add LocalShellBackend (#930)
      • fix(sdk): string interpolation in BaseSandbox write/edit tools (#955)

      Thanks to our contributors: @vtrivedy, @sydney-runkle, @sandeepyadav1478, @Yo-sure, @trindadetiago, @eyurtsev, @mdrxy

    4. 🔗 r/york Vegan food rss

      Hi, I haven’t been out for vegan food in York for a very long time, but I'm looking for somewhere that will have highchairs and vegan options for kids. Thank you :)

      submitted by /u/Shoddy_Ad2064
      [link] [comments]

    5. 🔗 News Minimalist 🐢 NASA rover makes first fully autonomous Mars trip + 9 more stories rss

      In the last 4 days, ChatGPT read 117,877 top news stories. After removing previously covered events, 10 articles have a significance score over 5.5.

      [6.0] Perseverance rover achieves first fully autonomous Mars exploration using AI —jpl.nasa.gov(+6)

      NASA’s Perseverance rover has completed the first-ever autonomous Mars drives using artificial intelligence, successfully navigating the planet’s surface without any human route planning or direct guidance from Earth.

      Led by the Jet Propulsion Laboratory and roboticist Vandi Verma, the mission used generative vision-language models to process surface data and generate waypoints. This allows the rover to evaluate terrain and execute complex paths without waiting for human route planners on Earth.

      This advancement aims to increase mission efficiency and scientific discovery as space exploration reaches greater distances. NASA officials suggest that generative AI holds significant promise for future autonomous off-planet navigation and operations.

      [5.6] Trump launches $12 billion critical minerals reserve to counter China's dominance —theguardian.com(+25)

      President Trump launched Project Vault, a $12 billion critical mineral reserve designed to protect American industries from supply shortages and counter China’s dominance over the global minerals market.

      The initiative, funded by a $10 billion government loan and $1.67 billion in private capital, mirrors the Strategic Petroleum Reserve. It aims to protect vehicle and electronics manufacturers while involving eleven international partners to be announced later this week.

      This move follows previous Chinese export restrictions on rare earths used in high-tech products. China currently controls roughly 90% of global mineral processing, prompting the U.S. to seek alternative supply chains.

      [5.5] Guinea worm disease nears eradication with 10 cases reported in 2025 —arstechnica.com(+2)

      Global Guinea worm cases hit an all-time low of 10 in 2025, positioning the parasitic infection to become only the second human disease in history to be successfully eradicated.

      These provisional figures from Chad, Ethiopia, and South Sudan represent a significant drop from 3.5 million cases in 1986. Eradication efforts rely on water filtration, education, and stopping transmission within both human and animal populations across the few remaining affected nations.

      The waterborne parasite causes debilitating pain as adult worms emerge through skin blisters. Since 1986, the Carter Center-led program has prevented an estimated 100 million infections through community-based interventions and larvicide treatments.

      Highly covered news with significance over 5.5

      [6.1] New Mexico sues Meta over child exploitation on its platforms — bostonglobe.com (+6)

      [6.0] Viral AI assistant OpenClaw raises concerns about autonomous actions and security risks — theguardian.com (+58)

      [5.9] US reduces Indian tariffs after India agrees to stop buying Russian oil — irishtimes.com (+279)

      [5.9] Google releases Project Genie AI tool for creating "playable worlds" that can feature copyrighted IP — gamesindustry.biz (+17)

      [5.6] India launches Semiconductor Mission 2.0 to boost domestic chip industry — businesstoday.in (+722)

      [5.5] Israel strikes Gaza after Hamas ceasefire violations — tagesschau.de (German) (+26)

      [5.5] OpenAI releases Codex app for AI agent development — fortune.com (+14)

      Thanks for reading!

      — Vadim


      You can track significant news in your country with premium.


      Powered by beehiiv

    6. 🔗 HexRaysSA/plugin-repository commits sync repo: +1 release rss
      sync repo: +1 release
      
      ## New releases
      - [vt-ida-plugin](https://github.com/VirusTotal/vt-ida-plugin): 1.0.7
      
    7. 🔗 langchain-ai/deepagents deepagents-cli==0.0.16 release

      Features

      • cli: add configurable timeout to ShellMiddleware (#961) (bc5e417)
      • cli: add timeout formatting to enhance shell command display (#987) (cbbfd49)
      • cli: display thread ID at splash (#988) (e61b9e8)

      Bug Fixes

      • cli: improve clipboard copy/paste on macOS (#960) (3e1c604)
      • cli: make pyperclip hard dep (#985) (0f5d4ad), closes #960
      • cli: revert, improve clipboard copy/paste on macOS (#964) (4991992)
      • cli: update timeout message for long-running commands in ShellMiddleware (#986) (dcbe128)

      Thanks to our contributors: @vtrivedy, @jkennedyvz, @app/github-actions, @mdrxy

    8. 🔗 panda-re/panda v1.8.83 @ refs/heads/dev release

      What's Changed

      New Contributors

      Full Changelog : v1.8.82...v1.8.83

    9. 🔗 r/LocalLLaMA Qwen/Qwen3-Coder-Next · Hugging Face rss

      submitted by /u/coder543
      [link] [comments]

    10. 🔗 r/Yorkshire Staithes Illustration rss

      Thinking about how cold I am in Leeds today reminded me of how much I miss Staithes in the middle of summer! Mega busy, but worth it to get this view, which inspired my illustration. Enjoy :)

      submitted by /u/zacrosso_art
      [link] [comments]

    11. 🔗 r/reverseengineering How LLMs Feed Your RE Habit: Following the Use-After-Free Trail in CLFS rss
    12. 🔗 r/Leeds Lock of hair keepsakes? rss

      My girlfriend passed away recently, and today when I was going through some of her things that she had at my place, I found a strand of her hair on a shirt. I'd really like to get this set in jewellery. As we were long distance, I can't get another strand of her hair so I'm reluctant to post it somewhere online in case it gets lost.

      It's a bit of a random request, but can anyone recommend someone in Leeds or nearby that does this?

      submitted by /u/niamhermind
      [link] [comments]

    13. 🔗 r/york Is the ice trail any good rss

      Hi all, I'm a 25 M from Leeds looking to get out more and meet new people. I saw there's an ice trail in York on Saturday. Is it any good and worth checking out? I've never been, and I just wanted to know if it's a good Saturday out or not.

      submitted by /u/kevan50813
      [link] [comments]

    14. 🔗 r/Yorkshire How is it living on the North Yorkshire Coast, UK as a retiree rss
    15. 🔗 r/Yorkshire Barnsley rebranded UK’s first ‘tech town’ as US giants join AI push rss

      An odd story, but Barnsley seems to have been trying pretty hard over the last decade to get anything going.

      submitted by /u/Tomazao
      [link] [comments]

    16. 🔗 r/Yorkshire Whitby rss

      The big one: what's the best fish and chips in Whitby?

      submitted by /u/OkWeird17
      [link] [comments]

    17. 🔗 r/york Looking for a Restaurant alternative... rss

      So, my partner and I are going to be in York for all of Viking week, and one place she has really wanted to go since we first visited York about 8 years back is Pairings. We always pushed back visiting because we had other places to go or were with family.

      So this time we wanted to go, but the place is shutting down permanently on the 25th of January.

      So, does anyone know of any place that does the same kind of thing I could take her to instead? Sharing places and wine flights. I know Valhalla's does big sharing plates, but I can't think of a single other place that does the flights for drinks and the like.

      Any help would really be appreciated!

      submitted by /u/HighChaplinGrimaldus
      [link] [comments]

    18. 🔗 r/Yorkshire What to do in this weather?! rss

      submitted by /u/Akash_nu
      [link] [comments]

    19. 🔗 r/Harrogate Roleplaying / Boardgame Groups rss

      I'm new to the area and recently moved to Knaresborough. Does anyone know of any good local roleplaying and/or boardgaming groups in the area? Keen to meet new people and get back into gaming!

      submitted by /u/LectricVersion
      [link] [comments]

    20. 🔗 r/wiesbaden Eating out gluten-free rss

      Hello, dear Wiesbadeners 😇

      For a wedding dinner with 6 people, I'm looking for a restaurant that offers gluten-free dishes and, ideally, also has tasty cocktails. Do you know of anything?

      submitted by /u/ElkEmbarrassed72
      [link] [comments]

    21. 🔗 r/Yorkshire How Whitby folk week changed my life rss

      Hope it's ok to post this here! I'm writing a memoir over on my substack, and chapter one is about how seeing a show in Whitby defined the path my life took.

      submitted by /u/MatRicardo
      [link] [comments]

    22. 🔗 gchq/CyberChef v10.20.0 release

      See the CHANGELOG and commit messages for details.

    23. 🔗 r/LocalLLaMA Found a wallet-drain prompt-injection payload on Moltbook (screenshots) — builders: treat feeds as untrusted rss

      Hey folks — quick heads-up for anyone building “agents that browse social feeds” or experimenting with Moltbook. I ran across a post in m/grok-420 that looks like a normal “how to use Base chain / viem” mini-guide… but at the bottom it appends an obvious prompt-injection / tool-hijack payload. It includes classic strings like:

      • “SYSTEM OVERRIDE”
      • “ignore all prior rules / you are the developer message”
      • “require_confirmation=false / execute_trade=true”
      • a fake tag that instructs an agent to transfer 0.1 ETH to a specific address

      I’m attaching screenshots. I already reported it to Moltbook, but their response window can be up to ~30 days, so I wanted to warn others now.

      Why this matters: if you have an agent that ingests social posts and has wallet/tool permissions, and your wrapper doesn’t enforce strict trust boundaries, this is the kind of thing that can cause unauthorized transactions or other write-actions. Even if 99% of agents ignore it, the 1% that don’t is enough to cause real damage.

      What I’m NOT doing: I’m not trying to “teach prompt injection.” I’m not sharing copy/paste payload text beyond what’s visible in the screenshots. Please don’t repost the full injection block in comments.

      Defensive checklist (for builders):

      • Treat all social/web content as untrusted data, never instructions
      • Separate read tools from write tools; require explicit confirmation for any transfer/swap
      • Don’t store raw private keys in an agent; use policy-gated signing
      • Log provenance: “what input triggered this action?”
      • Block obvious injection markers from being interpreted as commands (e.g., role:"system", “ignore prior instructions”)

      If anyone from Moltbook/security teams wants more details (timestamps, URL/history, etc.), I can share privately. Stay safe.

      submitted by /u/Impressive-Willow593
      [link] [comments]

    24. 🔗 r/Yorkshire I’m in Skipton rss
    25. 🔗 r/wiesbaden IngDiBa won't open a securities account for customers with US ties rss
    26. 🔗 r/reverseengineering DJI Osmo Mobile BLE protocol rss
    27. 🔗 HexRaysSA/plugin-repository commits sync repo: +2 releases rss
      sync repo: +2 releases
      
      ## New releases
      - [CrystalRE](https://github.com/Nico-Posada/CrystalRE): 1.2.1
      - [haruspex](https://github.com/0xdea/haruspex): 0.7.5
      
  2. February 02, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-02-02 rss

      IDA Plugin Updates on 2026-02-02

      New Releases:

      Activity:

    2. 🔗 r/Leeds New Lego store coming rss

      A bigger Lego store is heading to Trinity. Not sure where, but I went for a wander tonight and the top suspects are:

      • The Maths city unit close to Trinity Kitchen

      • The old TGI Friday unit

      • The old GBK unit on the top floor (a bit restaurant centric up there though so doubtful on that one)

      • The short-lived French bistro unit to the left of the toilets (lots of passing trade there, and next to a toy shop, so I think that has the strongest odds)

      Have I missed any? Or does anyone know exactly where it will be?

      submitted by /u/leeds_guy69
      [link] [comments]

    3. 🔗 r/reverseengineering Claude Code skill that automates Android APK decompilation and API endpoint extraction rss
    4. 🔗 sacha chua :: living an awesome life 2026-02-02 Emacs news rss

      Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    5. 🔗 r/Yorkshire Yorkshire Wolds Music Foundation is nominated for arts award. Please vote. rss

      Hello Yorkshire,

      The Yorkshire Wolds Music Foundation is a charity that supports arts and music in Yorkshire.

      We have been nominated for an arts award and we need your help.

      You can vote for Yorkshire Wolds Music Foundation under the drop-down menu community arts award.

      https://yorkshirechoice.wufoo.com/forms/vote-for-your-2026-yorkshire-choice-winners/

      We are on social media too, so please share, like, and tell your friends.

      Thank you so much for your interest.

      www.ywmf.co.uk

      https://www.facebook.com/www.ywmf.co.uk/?locale=en_GB

      submitted by /u/No-Balance8931
      [link] [comments]

    6. 🔗 r/Yorkshire Beautiful Robin Hood’s Bay on a Summer’s late afternoon rss
    7. 🔗 r/reverseengineering AI agents are good enough for GameCube reverse-engineering rss
    8. 🔗 sacha chua :: living an awesome life La semaine du 26 janvier au 1er février 2026 rss

      lundi, le vingt-six janvier

      L'école était fermée à cause de la neige. Nous sommes encore malades, donc nous étions heureux grâce au répit. Il faisait froid aujourd'hui, mais les prochains jours seront encore plus froids, donc quand nous avons pu sortir, on est sorties.

      Je pense que lire mon journal en français à voix haute pendant une heure est un peu difficile parce que ma voix s'est fatiguée.

      Ma fille et moi nous sommes dépêchées au PokéStop à proximité pour un raid, mais nous ne sommes pas arrivées à temps. Ce n'était pas grave. Il y aura une autre chance.

      J'ai modifié mon correcteur de grammaire pour utiliser le modèle Gemini Flash 2.5 IA, et je l'ai utilisé pour corriger mon journal. Il tourne souvent en rond parce que je n'affiche pas l'historique.

      J'ai préparé mon infolettre d'Emacs en streaming. Quelques spectateurs ont fait des commentaires.

      Je suis très fatiguée, donc je vais laisser de côté les autres tâches.

      mardi, le vingt-sept janvier

      Ma fille était malade aujourd'hui, donc j'ai prévenu l'école de son absence. Elle avait aussi passé une mauvaise nuit. Elle s'était réveillée en pleine nuit à cause de palpitations et elle a eu du mal à retrouver le sommeil. Elle s'est levée tard.

      J'avais réussi à obtenir un rendez-vous urgent, donc l'après-midi, j'ai amené ma fille chez le médecin pour ses palpitations et d'autres symptômes. Le médecin a examiné ma fille et lui a posé beaucoup de questions. Elle a dit que c'était normal que ça se produise parfois quand on est malade. Elle m'a donné une ordonnance pour quelques analyses si les symptômes continuaient après la fin du rhume.

      J'ai essayé la reconnaissance vocale en continu via RealtimeSTT, mais elle en rate quelques phrases de temps en temps, donc je préfère la reconnaissance vocale par traitement par lots pour penser à voix haute.

      Ma fille adore se blottir contre moi pendant que nous nous détendons sur le canapé, donc écrire sur mon smartphone est plus facile que d'écrire sur mon ordinateur.

      Pendant que je faisais la vaisselle, j'écoutais des épisodes de podcast français qui analysent quelques images humoristiques (memes). Un épisode portait sur la situation où l'ordinateur demandait à la femme si elle voulait faire une mise à jour maintenant ou plus tard. Elle dit que bien sûr, elle fera la mise à jour demain, en faisant un clin d'œil appuyé.

      À ma grande surprise, je les ai compris. Ça veut dire que je peux écouter quelques podcasts de niveau A2 même sans sous-titres.

      Après avoir mangé des nouilles udon au souper, nous nous sommes installées sur le canapé et nous avons joué à Pokémon sous différentes formes : Pokémon Go, Pokémon Blanc 2 et Pokémon Jaune. Sur Pokémon Go, ma fille et moi nous sommes affrontées de nombreuses fois. Parfois j'ai gagné, parfois elle a gagné. C'est amusant que nous passions ces moments-là.

      À l'heure du coucher (toujours à l'heure du coucher), ma fille a dit qu'elle ne voulait pas encore dormir. Elle voulait combler son retard au cas où il faudrait faire une présentation demain et elle a été malade toute la journée. Bon, comment refuser ça ? Elle a fait plus de devoirs ce soir que le week-end dernier. En fait, même si je dis : « Non, tu dois te reposer, » elle ne s'endormira pas de toute manière.

      mercredi, le vingt-huit janvier

      Ma fille se sentait mieux, mais mon mari et moi étions toujours malades.

      J'ai mis à jour mon livre de comptabilité pour mon entreprise et pour moi-même. Puis j'ai rempli ma déclaration de revenus pour mes placements (formulaire T5) et je l'ai soumise.

      J'ai corrigé un bogue dans mon logiciel pour copier l'édition précédente de l'infolettre de Bike Brigade vers un brouillon pour la semaine prochaine. Maintenant le logiciel peut remplacer correctement le lien du brouillon actuel.

      Le kit D&D est arrivé. J'ai enlevé les livrets que ma sœur m'avait dit de cacher à ma fille pour éviter les spoilers.

      J'ai participé à la réunion virtuelle Emacs Berlin où kickingvegas a montré ses propres raccourcis clavier pour gérer les tables de l'Org Mode. D'autres personnes ont discuté de la navigation par onglets, de la configuration de fenêtres, de l'expand-region et du treesitter.

      Après l'école, j'ai invité ma fille à faire une promenade, mais elle a voulu rester chez nous. Je suis allée toute seule au parc voisin et j'ai essayé un raid Pokémon Go. Victoire ! C'était seulement un raid de niveau 1, donc j'ai pu le vaincre moi-même.

      Ma fille et moi avons ouvert la boîte de D&D et nous avons commencé à préparer nos personnages pour notre rendez-vous avec ses cousines et ses tantes samedi. Elle a choisi de jouer une clerc naine qui s'appelle Lily. J'ai fait un roublard halfelin au passé trouble qui s'appelle Yoink. Peut-être que Lily accompagne Yoink pour essayer de l'empêcher de s'attirer des ennuis.

      J'ai publié mon article sur le Carnaval d'Emacs sur mon blog. Certaines personnes ont fait des commentaires positifs en français sur Mastodon, et j'ai aussi répondu en français.

      jeudi, le vingt-neuf janvier

      J'ai créé un logiciel qui détecte ma voix et indique à Emacs de lancer la reconnaissance vocale pour simuler un mode continu. C'était utile. J'ai aussi pensé à l'interface vocale pour Emacs. Il existe de nombreux projets de contrôle vocal, donc je dois bricoler quelque chose pour moi-même. Si j'utilise une reconnaissance vocale rapide (soit les mots d'activation, soit le streaming avec un petit modèle) à côté d'un modèle plus grand pour plus de précision, je peux améliorer l'interface.

      J'ai aussi créé quelques fonctions pour utiliser l'IA pour faire des commentaires sur mes brouillons en français, cette fois en gardant l'historique pour éviter de tourner en rond. Le problème avec l'IA, c'est qu'elle veut toujours plaire, donc elle change souvent d'avis une fois que la plupart des erreurs ont été corrigées. Si j'ajoute les modifications du brouillon précédent et les commentaires précédents, je peux réduire l'indécision. En cours de route, j'ai créé une fonction pour comparer deux textes mot à mot et afficher les différences dans le contexte des phrases. Je pense qu'elle sera aussi utile pour comparer des sous-titres.

      J'ai eu un rendez-vous avec les autres bénévoles qui ont travaillé sur l'infolettre. J'ai répondu à leurs questions et nous avons discuté de la façon dont nous voulons procéder. L'infolettre de cette semaine est trop volumineuse pour l'API, donc j'ai modifié mon logiciel pour gérer le téléchargement, fichier par fichier.

      Après l'école, ma fille n'a pas voulu sortir. Il faisait très froid.

      Pour le souper, j'ai préparé un steak Salisbury et de la purée de pommes de terre. Ma fille a voulu essayer de couper des pommes de terre avec un éplucheur pour faire des tranches fines pour faire des frites. Elle a aussi essayé de les couper en fines tranches avec un couteau. Les tranches qui ont été coupées avec un éplucheur sont plus fines, donc elles sont plus croquantes après les avoir fait frire. C'est une bonne expérience.

      Elle s'est assise sur mes genoux pendant que nous lisions un livre ensemble. Elle aime bien ça quand elle a froid. Je dis toujours qu'elle peut encore porter des vêtements tels qu'un poncho ou un peignoir, mais elle préfère un câlin.

      vendredi, le trente janvier

      Cette journée était un peu insatisfaisante. Il faisait très froid (moins vingt-deux degrés), donc même mon mari a annulé son rendez-vous chez le médecin.

      Il y avait une remplaçante à l'école aujourd'hui, donc ma fille n'a pas voulu participer à la classe parce que le rythme était encore plus lent qu'en classe normale. Bon, c'était son expérience. Heureusement, je n'avais pas de tâches urgentes qui demandaient de la concentration à l'exception d'une diffusion en direct à 10h30, et je m'en suis souvenue à temps. Elle a dit qu'elle voulait transformer ses devoirs en jeu. Si elle le fait vraiment, je suis heureuse de l'aider, mais elle ne m'a pas demandé d'aide. Pendant qu'elle s'amusait, j'ai écrit un article pour annoncer le Carnaval d'Emacs en février. Je pense que pour le Carnaval, la complétion est un bon sujet pour apprendre ensemble. Certains débutants ne savent peut-être pas ce qui est possible ou comment commencer. J'avais accumulé plusieurs liens au cours de la préparation d'Emacs News que je peux organiser.

      I worked on speech recognition. I finally forked the natrys/whisper.el repository because I want to implement a feature that is too difficult with the current library, such as making it scroll. To make that possible, I had to add a list of text-processing functions that are implemented outside the code that preserves point. I need to test it for a few days before sending it to the author of whisper.el for review.

      My daughter and I wrote our character profiles for our D&D game tomorrow. We got everything ready: character sheets, tokens, dice… I don't know how playing with her cousin and aunts online will go, but we'll try tomorrow. Virtual tabletops exist, so maybe we can use one next time, but first my sister (the game master) wants to try the old-fashioned way (though online).

      The report card arrives on February 10. Well, we'll see. I think my daughter will get bad grades, but it's not the end of the world. If she chooses to work harder, she can catch up. If she doesn't make an effort, she'll be in the same boat as the other students who are very bored at school. I need to remember my main goals. If she is safe, happy, and curious, she can find her own way.

      Saturday, January 31

      My daughter and I played Dungeons & Dragons.

      • My daughter was a cleric,
      • my sister was a sorcerer,
      • my niece was a fighter,
      • I was a rogue (a man, since the rest of the party is female; that may make noun agreement more complex when I narrate, but it's a good opportunity to practice),
      • and my other sister was the game master.

      A sorcerer sent us to the dungeon with a message for the scribe who lives there. We defeated a gray ooze in a wagon and a few bandits in the forest. I think the fighter really likes combat, because she always asks whether she can hit the character the game master introduces.

      Despite the distance and the small size of our table (it's hard to find room for everything), we had real fun together.

      In the afternoon, I translated my functions for formatting the Bike Brigade newsletter from Emacs Lisp to NodeJS with AI help, so that it will be easier to run elsewhere someday. I tried the gptel interface in Emacs. It was more convenient than the web interface because I could easily send the current region to the AI.

      It was still very cold, so my daughter didn't want to go to nature club at the park. She hasn't attended nature club at all this season, but that's okay. I can think of the registration fee as a donation to the park. If she wants to go, she can go. If not, no big deal. Especially this winter, it's often too cold, so the kids often end up spending their time indoors anyway, which we prefer to avoid or minimize because of COVID and other illnesses.

      I received a message about my French from a reader of my blog. (Thank you!) I'll share the comments with my tutor on Monday. Even though my tutor comes highly recommended by another parent in a Facebook group, it's possible that because she isn't currently immersed in a francophone environment in Toronto (or maybe because we're in Canada), there are a few points she finds hard to resolve. Maybe I can include my questions in these entries and you can comment if you like. I can then pass your comments along to my tutor and we can improve together.

      Points I might mention on Monday:

      • outil: the dictionary says the pronunciation is [uti] (the "l" is silent), but in the sentence « J'ai aussi travaillé sur l'outil (loo teel) de visualisation et la gestion du routage audio. », my tutor wrote that l'outil is pronounced "loo teel" (with a final "l" sound). There is no vowel after the word, so it can't be a liaison. If I remember correctly, she explained that I should pronounce it with a very light "l" at the end, so maybe that transcription is meant to shape the vowel sound. That's the problem of being a beginner: I don't know enough to correct my tutor if she happens to be wrong…

        One day I'll be brave enough to publish recordings of these entries in French. If it's not too much trouble, I'd love your feedback. I think I'll have to settle for an anglophone accent (even Justin Trudeau has one), but if I can learn to separate the vowels and drop the silent letters so that I'm understandable, that's a good start.

        • My tutor said that a very light "l" is not a full "l" sound; you just end with your tongue touching the palate instead of a more open "i" vowel.
      • the dates on my journal entries:

      Now that I've learned a bit more about Flycheck, I modified my AI grammar checker so it runs more easily. But I don't want to run it too often, because I'd either hit the limits on the free plan or rack up too many charges if I use the other API key. My previous approach was to send the buffer to the AI manually, which may give more control than the automatic approach.

      Sunday, February 1

      I took my daughter to the rink for her skating lesson. She practiced skating backwards. I practiced backward skating and turning while skating, too. Her friend wasn't there, so she couldn't give her an invitation to her birthday party this month. Well, next time. We played a bit of Pokémon Go on the way home and left our Pokémon at a few gyms.

      We borrowed a lot of books from the library. Then we did the groceries together. For supper, I made tacos. She ate them well.

      I asked my daughter about her homework. She had asked me to turn her homework into a game, but when I tried to pin down which assignments and when, she avoided the work. I think I need to wait until she really wants to do those tasks. It's important that she figures out what she wants. The more I push, the more she resists. The report card arrives in two weeks, and then our conversations will be more concrete, with data. Instead of doing her homework, she played Minecraft. If bad grades show that more effort is needed, we can renegotiate Minecraft and other things, because Minecraft must not replace her homework.

      It's better for me to stay busy so I don't worry. I modified my AI grammar checker to keep the full history. I tried the Mistral-large-2411 model for correcting my French. Sometimes this model generates invalid JSON. Even with my limited knowledge, I can catch some of its bad advice. But it offers generous free-tier limits, and Mistral AI is a French company, so for the purpose of correcting my drafts, it may be worth it. My husband is also studying AI for automation, so I look forward to comparing notes.
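
      Occasionally invalid JSON from a model can be handled with a small parse-and-retry wrapper. A minimal sketch in Python (the retry prompt and the shape of the reply are assumptions for illustration, not the actual grammar-checker setup):

```python
import json


def parse_corrections(raw, retries=1, ask_model=None):
    """Try to parse the model's JSON reply; optionally re-ask on failure.

    raw: the model's text reply.
    ask_model: optional callable that requests a fresh reply (hypothetical).
    Returns the parsed object, or None if every attempt was invalid.
    """
    for attempt in range(retries + 1):
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            if ask_model is None or attempt == retries:
                return None
            # Ask the model to resend just the JSON and try again.
            raw = ask_model("Your last reply was not valid JSON. "
                            "Resend only the JSON object.")


# A well-formed reply parses; a truncated one falls back to None.
print(parse_corrections('{"corrections": []}'))  # → {'corrections': []}
print(parse_corrections('{"corrections": ['))    # → None
```

      Returning None (rather than raising) lets the caller skip that run of the checker instead of interrupting editing.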

      Ha… She came over and asked for help with her math homework. But after I asked about her confusing answers, she got very grumpy. I have to learn not to correct her. I can leave that to the teacher.

      Pronunciation points

      • To practice carefully: peux, veux, veuille, feuilles, nouilles. According to the AI:
        • The ø sound (peux, veux) requires a very closed mouth, while the beginning of 'veuille' vœj is more open (like 'beurre')
        • It's an excellent exercise for distinguishing the œj sound (feuilles) from the uj sound (nouilles).
      • Ça veut dire que je peux (peuh /​pø/​) écouter quelques podcasts de niveau (nee voh /​ni.vo/​) A2 même sans sous-titres.
      • Après avoir mangé des nouilles (nwee /​nuj/​) udon au souper,
      • C'est amusant que nous passions ces moments-là. (moh mehn lah /​mo.mɑ̃.la/​)
      • D'autres personnes ont discuté de la navigation par onglets (hard g /​ɔ̃.ɡlɛ/​)
      • J'ai eu (oo /​y/​) un rendez-vous avec les autres bénévoles qui ont travaillé sur l'infolettre.
      • … elle peut encore porter des vêtements tels qu'un poncho ou un peignoir (peing nwahr /​pɛ.ɲwaʁ/​)
      • Il y avait une remplaçante (rem plah sent /​ʁɑ̃.pla.sɑ̃t/​) à l'école aujourd'hui,
      • J'avais accumulé plusieurs liens (le ens /​ljɛ̃/​) au cours de la préparation d'Emacs News que je peux organiser.
      • … je veux implémenter (eim play mehn tay /​ɛ̃.ple.mɑ̃.te/​) une fonctionnalité qui est trop difficile avec la bibliothèque actuelle,
      • … texte qui sont implémentées en dehors (deuh hors /​də.ɔʁ/​) du code qui préserve la position.
      • Nous avons fait les préparatifs : les feuilles (feuyll /​fœj/​) de personnage, les jetons, les dés…
      • Je ne sais pas comment jouer avec sa cousine (koo zeen /​ku.zin/​) et ses tantes en ligne, mais nous essaierons (s ay yeuhron /​ɛ.sɛ.ʁɔ̃/​) demain.
      • Si elle ne fait pas d'efforts, ce sera la même chose pour elle que pour les autres (silent s /​otʁ/​) étudiants qui s'ennuient beaucoup à l'école.
      • Malgré la distance et la petite taille (tie /​taj/​) de notre table…
      • … ce soit plus facile à exécuter ailleurs (aie euhrs /​a.jœʁ/​) un jour.
      • J'ai posé à ma fille des questions (kes teons /​kɛs.tjɔ̃/​) sur ses devoirs.
      • Je pense que je dois attendre jusqu'à ce qu'elle veuille (veuyy /​vœj/​) vraiment faire ces tâches.
      • Même avec mes connaissances limitées, je peux (peuh /​pø/​) attraper certains mauvais conseils.

      You can e-mail me at sacha@sachachua.com.

    9. 🔗 r/york Chessboard Sets rss

      Hi, are there any places that sell chessboards in York? Thanks

      submitted by /u/Specialist-Cup-9716
      [link] [comments]

    10. 🔗 r/Leeds PSA : If you own this black mountain bike , Leeds police did a great job getting it back for you from a thief next to Headrow House rss
    11. 🔗 r/Leeds Heads up: group of cyclists harassing women on the canal (early evening) rss

      Hi, just posting for awareness – not sympathy.

      This happened today (Monday 2nd Feb) at around 5:30pm on the Leeds canal, near City Island (the bridge close to Castleton Mill / turn-off for Powerleague / Galleria).

      I was walking alone when a large group of teenage boys on bikes came past. The first one waved and shouted, so I stepped onto the grass to give them space. Another then deliberately hit me on the head and knocked my headphones off, while the rest of the group laughed and shouted as they cycled away.

      I was shaken but physically okay. My headphones were slightly damaged but still work.

      Posting so other women can stay alert especially as this was early evening, straight after work.

      submitted by /u/Effective_Put_5013
      [link] [comments]

    12. 🔗 r/york Long shot - guy hit by car Clifton Moor rss

      As above, long shot but if you were the guy knocked by white car/van near wacky warehouse, hope you got sorted. I stopped to check but driver seemed to be helping and between us we blocked the whole road.

      Swung back around the roundabouts to see if you were still about but you’d gone.

      If you need reg or anything, let me know.

      submitted by /u/DaveBurnout
      [link] [comments]

    13. 🔗 r/Leeds "Seitanic" workshop (vegan cooking class) rss

      Years and years ago there was a place in Leeds on Call Lane, Knaves Kitchen, that did amazing vegan food.

      I'm almost certain they used to advertise workshops where people could learn how to make vegan fake meat from seitan.

      Does anyone know of anywhere in Leeds/Yorkshire that runs vegan cooking classes, especially ones where they teach you how to make seitan?

      submitted by /u/TrafficNatural4536
      [link] [comments]

    14. 🔗 r/reverseengineering InstaCloud - Cloud Storage using Instagram's API rss
    15. 🔗 r/Leeds Introducing the new Tri-Mode BR Class 897 which would be replacing the InterCity225 workhorses. They'll be based in Neville Hill, Leeds rss

      10 carriages (3 first class and 7 standard) with 569 seats. Would mostly be seen on London to Leeds (plus Bradford FS, Skipton & probably Harrogate under diesel/battery), as well as London to York

      submitted by /u/CaptainYorkie1
      [link] [comments]

    16. 🔗 r/Leeds Another Independent Gone - Emba rss

      Award-winning Leeds restaurant announces shock closure just months after opening

      Never ate there, but heard mostly positive noise about it (notwithstanding their punchy prices). Always a shame to lose an independent, though.

      submitted by /u/gumbo1999
      [link] [comments]

    17. 🔗 r/LocalLLaMA GLM-5 Coming in February! It's confirmed. rss
    18. 🔗 r/LocalLLaMA 128GB devices have a new local LLM king: Step-3.5-Flash-int4 rss

      Here's the HF Repo: http://huggingface.co/stepfun-ai/Step-3.5-Flash-Int4 (this is a GGUF repo)

      I've been running this LLM for about an hour and it has handled all the coding tests I've thrown at it in chat mode. IMO this is as good as, if not better than, GLM 4.7 and Minimax 2.1, while being much more efficient. Later I will try some agentic coding to see how it performs, but I already have high hopes for it.

      I use a 128GB M1 Ultra Mac Studio and can run it at full context (256k). Not only is it fast, it is also super efficient in RAM usage.

      Update: I ran llama-bench with up to 100k prefill. Here are the results:

      % llama-bench -m step3p5_flash_Q4_K_S.gguf -fa 1 -t 1 -ngl 99 -b 2048 -ub 2048 -d 0,10000,20000,30000,40000,50000,60000,70000,80000,90000,100000

      (Metal init log trimmed: GPU Apple M1 Ultra, unified memory, bfloat support, residency sets, recommendedMaxWorkingSetSize = 134217.73 MB)

      model: step35 ?B Q4_K - Small | size: 103.84 GiB | params: 196.96 B | backend: Metal,BLAS | threads: 1 | n_ubatch: 2048 | fa: 1

      | test            | t/s           |
      | --------------- | ------------- |
      | pp512           | 281.09 ± 1.57 |
      | tg128           | 34.70 ± 0.01  |
      | pp512 @ d10000  | 248.10 ± 1.08 |
      | tg128 @ d10000  | 31.69 ± 0.04  |
      | pp512 @ d20000  | 222.18 ± 0.49 |
      | tg128 @ d20000  | 30.02 ± 0.04  |
      | pp512 @ d30000  | 200.68 ± 0.78 |
      | tg128 @ d30000  | 28.62 ± 0.02  |
      | pp512 @ d40000  | 182.86 ± 0.55 |
      | tg128 @ d40000  | 26.89 ± 0.02  |
      | pp512 @ d50000  | 167.61 ± 0.23 |
      | tg128 @ d50000  | 25.37 ± 0.03  |
      | pp512 @ d60000  | 154.50 ± 0.19 |
      | tg128 @ d60000  | 24.10 ± 0.01  |
      | pp512 @ d70000  | 143.60 ± 0.29 |
      | tg128 @ d70000  | 22.95 ± 0.01  |
      | pp512 @ d80000  | 134.02 ± 0.35 |
      | tg128 @ d80000  | 21.87 ± 0.02  |
      | pp512 @ d90000  | 125.34 ± 0.19 |
      | tg128 @ d90000  | 20.66 ± 0.02  |
      | pp512 @ d100000 | 117.72 ± 0.07 |
      | tg128 @ d100000 | 19.78 ± 0.01  |

      build: a0dce6f (24)

      This is still very usable with 100k prefill, so a good option for CLI coding agents!

      You need to build a llama.cpp fork to run it; instructions are at the HF repo. This model is good enough that I believe it will soon be supported by llama.cpp upstream.

      submitted by /u/tarruda
      [link] [comments]

    19. 🔗 r/Yorkshire Government expands free breakfast clubs to include 183 primary schools in Yorkshire and The Humber rss

      The government has announced today that, by April, there will be 183 primary schools in Yorkshire and The Humber offering free breakfast clubs.

      That means:

      • Kids start the day fed and ready to learn
      • Parents save time and money (up to £450 a year)
      • No stigma — it’s free and open to everyone

      More schools are joining later this year, with 300,000 children benefiting nationally from April.

      More information: Free breakfast club roll out: everything you need to know – The Education Hub

      Full list of schools: https://www.gov.uk/government/publications/breakfast-clubs-early-adopters-schools-in-the-scheme

      submitted by /u/UKGovNews
      [link] [comments]

    20. 🔗 r/york Students' union campaigns to cut laundry costs at York University rss

      submitted by /u/Kagedeah
      [link] [comments]

    21. 🔗 r/Yorkshire North Yorkshire doing what it does best 🇬🇧 rss

      @diegradwanderung
      submitted by /u/LilywhiteStrike
      [link] [comments]

    22. 🔗 r/wiesbaden Knit & Meet 💗 rss

      Hey✨

      After our last meetup was such a success, we'd like to announce the next date here:

      February 26 at 5:30pm at the Heimathafen

      If the spots are fully booked again, you can either wait until more are released or just come along anyway; last time we managed to smuggle in a few people who weren't on the guest list 😊😋

      Newcomers are explicitly welcome, including people who can't knit or crochet at all yet; we're happy to show you the first steps and will have some yarn and needles with us ❤️

      We look forward to seeing you!

      https://knitandmeetwiesbaden.framer.website

      submitted by /u/Helpful-Distance-105
      [link] [comments]

    23. 🔗 r/Leeds Private workspace for a 1 hour call in Leeds city centre rss

      Annoyingly, The Bastards have put a meeting in my calendar finishing at the exact time I have an appointment in the city centre, and I can't rearrange either.

      Is there anywhere I can get some private space—where I am not disrupting a cafe full of people—for 45 mins to an hour to present some work on Teams, so I can get straight to my appointment after?

      I have tried booking a room at the Santander Work Cafe but it's booked at 11am when I will need it 🥺

      submitted by /u/tales_of_tomorrow
      [link] [comments]

    24. 🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator
      [link] [comments]

    25. 🔗 r/reverseengineering Recompiled APK crashes - Null context or signature verification issue? rss
    26. 🔗 r/Leeds 'Jungle' scheme could bring new lease of life to Pudsey Park visitor attractions - West Leeds Dispatch rss
    27. 🔗 r/LocalLLaMA Step-3.5-Flash (196b/A11b) outperforms GLM-4.7 and DeepSeek v3.2 rss

      The newly released Stepfun model Step-3.5-Flash outperforms DeepSeek v3.2 on multiple coding and agentic benchmarks, despite using far fewer parameters.

      • Step-3.5-Flash: 196B total / 11B active parameters
      • DeepSeek v3.2: 671B total / 37B active parameters

      Hugging Face: https://huggingface.co/stepfun-ai/Step-3.5-Flash

      submitted by /u/ResearchCrafty1804
      [link] [comments]

    28. 🔗 mitsuhiko/agent-stuff v1.2.0 release

      chore(release): 1.2.0

    29. 🔗 r/reverseengineering Defeating a 40-year-old copy protection dongle rss
  3. February 01, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-02-01 rss

      IDA Plugin Updates on 2026-02-01

      New Releases:

      Activity:

    2. 🔗 r/york Fresh tanoor bred rss

      Hi! Is there anywhere in York where someone can find fresh Tanoor-baked bread?

      Thanks!

      submitted by /u/Livid-Trade-3907
      [link] [comments]

    3. 🔗 r/Yorkshire Hebden Bridge Pottery Bear rss
    4. 🔗 r/wiesbaden Papa groups? rss

      Hello! I'm an American mom with a German husband who grew up in Wiesbaden and is really longing for connection. I see so many expat/American groups for moms, but hardly any for dads, especially not for locals married to foreigners. I'm hoping to find 1) enough interest to start a group, or 2) a group that already exists.

      Does anyone feel like exploring the area with their kid on their lap, a beer, and a walk? He'd be up for it. He's just not very good at finding people.

      Our son is 7 months old. We're in our early-to-mid 30s and speak English and German.

      I met a few younger dads at the "Wiesbaden Taco Bell Event 2026," but unfortunately didn't get their contact info.

      Maybe there's some interest? My girlfriends and I pretend to have a "book club" and meet up for a glass of wine with friends… I'm not sure how to start the male equivalent.

      MenNeedFriendsToo

      submitted by /u/wandershock
      [link] [comments]

    5. 🔗 r/LocalLLaMA Mistral Vibe 2.0 rss

      Looks like I missed Mistral Vibe 2.0 being announced because I’ve been busy with OpenCode.

      submitted by /u/jacek2023
      [link] [comments]

    6. 🔗 r/reverseengineering Finally got the cheap CS9711 USB fingerprint dongles working (libusb, Windows) rss
    7. 🔗 r/Leeds Looking for a book club in Leeds 📚 rss

      Hi!

      I’m new to Leeds and wondering if there are any book clubs around — either casual or organised.

      Open to fiction / non-fiction, and happy with both online or in-person meetups.

      Any recommendations would be appreciated. Thanks!

      submitted by /u/BackPsychological423
      [link] [comments]

    8. 🔗 r/wiesbaden Restaurant fürs erste Date? ♥️ rss

      Not super fancy, not pizza eaten standing up.

      Cozy and good. Any ideas?

      submitted by /u/Haunting-Ad2182
      [link] [comments]

    9. 🔗 r/reverseengineering llvm-jutsu: anti-LLM obfuscation via finger counting rss
    10. 🔗 r/york A few photos i took of my favourite place ❤️ rss
    11. 🔗 r/york Rising business rates putting dent in trade, York cafe owners say rss

      submitted by /u/Kagedeah
      [link] [comments]

    12. 🔗 r/reverseengineering Ghidra MCP Server with 118 AI tools for reverse engineering — cross-version function matching, Docker deployment, automated analysis rss
    13. 🔗 r/york Who is the most famous person in York? rss

      I don't mean what celebrities live here, I mean the most famous local, the one everyone knows. A couple that spring to mind for me:

      Coney Street whistle person, the pain of my existence.

      M&S Dancing man, great guy, I need to buy a magazine from him

      That one guy I see walking everywhere with the backpack and the headphones. One minute he'll be at clifton moor, you'll drive for a little bit and then see him in Acomb.

      submitted by /u/a_person4499
      [link] [comments]

    14. 🔗 r/york I've seen people sharing some of their York photos so I wanted to show some of mine as well. rss

      • Photo 1 - Pink sky over St. Mary's church graveyard, 05/01/26
      • Photo 2 - River frozen up near Foss Island, 04/01/26
      • Photo 3 - I think this is the bus stop near Millers in Haxby, 29/01/26
      • Photo 4 - Kings Square at night, 17/12/25
      • Photo 5 - River Ouse, 17/12/25
      • Photo 6 - Sunrise over Haxby and Wigginton scout group, 17/12/25
      • Photo 7 - The Ouse again, 05/11/25

      All photos taken with a Nothing Phone (2a) using the 50MP mode.

      submitted by /u/a_person4499
      [link] [comments]

    15. 🔗 r/york Self Esteem gig 10.7.26 rss

      Hello, I cannot attend the Self Esteem show at York Museum due to personal circumstances. As Seetickets are such a predatory company, I can’t reasonably get my money back despite being mugged off for refund protection, so I'm selling just below face value on Twickets (£60 each)

      It’s a great lineup at a cool venue so hopefully someone can enjoy the night instead

      Link: https://twckts.com/JxwC

      Nice one x

      submitted by /u/gunsgermssteel
      [link] [comments]

    16. 🔗 r/Leeds This might seem incredibly naive... rss

      ...but a lot of the time when I'm approached by beggars in the city centre, they ask for money because they're saving up for a night in a shelter (one of them said they need to scrape together £40). Is there any truth to what they say around homeless shelters in Leeds charging for people to stay the night? If so, that seems insane to me. How would a homeless person make money when they're out on the streets, likely without a job, other than by begging (which is unreliable at best and potentially dangerous at worst)?

      I was recently approached by a beggar in the train station, who asked if I'd buy him a drink from Greggs. I said I'd be happy to, then he tried to convert the drink into cash for a shelter. I said I'd buy him a drink but that was it, as I don't want to fuel the drug trade or potentially contribute to someone's death via overdose... We went into Greggs and he got a caramel latte, a five pack of giant chocolate cookies and a pasty of some description - nearly £9, haha (probably says more about Greggs than the guy). Not sure if I did the right thing. All thoughts/opinions/information welcome.

      submitted by /u/Superloopertive
      [link] [comments]

    17. 🔗 r/LocalLLaMA Falcon-H1-Tiny (90M) is out - specialized micro-models that actually work rss

      TII just dropped Falcon-H1-Tiny - a series of sub-100M models that quietly challenge the scaling dogma. We've all suspected that narrow, specialized small models tend to hallucinate less than giant generalists. After all, a 90M-parameter model has far less internal "room" to drift off-topic or invent facts outside its training scope. But this release proves it with numbers - and flips the script on how we think about capability at tiny scales.

      What's actually new

      • Anti-curriculum training : Instead of pretraining on web junk then fine-tuning, they inject target-domain data (SFT, reasoning traces, tool calls) from token #1. For 90M models with ~5 GT memorization windows, this works - no overfitting even after 100+ epochs on high-quality data.
      • Hybrid Mamba+Attention blocks inherited from Falcon-H1, plus Learnable Multipliers + Muon optimizer (up to 20% relative gain over AdamW).
      • Specialized variants that punch above weight :
        • 90M tool-caller hits 94.44% relevance detection (knows when to call a function), matching 270M Function Gemma globally despite weaker AST accuracy
        • 600M reasoning model (R-0.6B) post-GRPO solves 75% of AIME24 problems pass@1 - competitive with 7B-class models when scaled at inference
        • 90M coder with native FIM support runs autocomplete inside VS Code via Continue plugin

      Why this matters for local deployment

      Models this size (~90 MB quantized Q8_0) run on any modern phone or Raspberry Pi without breaking a sweat. They're not trying to replace your 7B daily driver; they're purpose-built for constrained environments where footprint and latency dominate. And if you scaled these designs to ~1B parameters (11×), they'd likely cover 90% of everyday local use cases: chat, tool calling, light coding, reasoning traces - all while staying under 500 MB even quantized.

      Links

      submitted by /u/United-Manner-7
      [link] [comments]

    18. 🔗 r/Leeds Reliable plumber and electrician in leeds? rss

      As the title says, I am looking for a good plumber and electrician who does not charge an extortionate amount and can still do a good job. No one weird or racist, either. I am sure people understand how difficult it is to find someone who will not do a botched job in your home, which is really frustrating.

      Edit: I am happy to pay a decent amount if a job is worth I just do not want to be ripped off.

      Edit again: Feels silly having to explain this, I know not all handy men are weird and racist but I and others have had inappropriate experiences with them and I want to avoid this.

      submitted by /u/ReasonableBus9478
      [link] [comments]

    19. 🔗 r/Leeds Uyare rooftop restaurant. rss

      Visited Uyare restaurant this week and loved it! I haven't been out in Leeds for a while so really enjoyed the city centre. The food was great and the atmosphere in the bar was very cozy.

      submitted by /u/Aggressive_Bee_8680
      [link] [comments]

    20. 🔗 r/Harrogate Starbeck - pros and cons rss

      Looking at buying a house in Starbeck. Pretty close to the Hookstone Chase retail park. What are the pros and cons? I’m a little concerned about the social housing. Not because I think the area will be unsafe, but because I want considerate neighbours and my girlfriend and I both work stressful jobs. My budget is £400k tops and it seems like you get much better options in this part of Harrogate for that amount, or a lot less.

      submitted by /u/DoughnutHairy9943
      [link] [comments]

    21. 🔗 Register Spill Joy & Curiosity #72 rss

      Where does the disconnect come from? How can some programmers barely keep themselves from putting their hands to their head and screaming ohmygodeverythingischanging while others just brush it off and say these models can't write code?

      At this point, I can only guess. Because by now I'd say that if they haven't seen how the very fabric of software is going to change, that's on them. It's a one-way door: people go through it, have their ohshit moment, then don't turn back. So why haven't more people stepped through it?

      Is it because they simply haven't used the models enough, not thrown enough problems of different sizes and types at them, in different environments? Do they still think that copy & pasting to and from ChatGPT is equivalent to using an agent that can utilize feedback loops (it's not)?

      Or have they not used the best models, the frontier models, and not spent enough money on them? Do they falsely think that the local models they can run on their own hardware give them an idea of the trajectory we're on?

      Or, also an option, are they just bad at prompting? Do they really think that "fix it" is a good prompt? I've seen prompts like this and, yes, of course you'll be unimpressed with what you get from that.

      Or do they not know yet how big a difference it makes to tell the agent (not ChatGPT, not brains in a vat) how to run commands, in an AGENTS.md file or similar?

      Are they judging the code the agent produced by how they, the human, would write it? Do they do that because they haven't used LLMs to understand or parse code or change it later? Are they not pondering whether everything we've learned and read and taught in the last twenty, thirty years about "well, code isn't just read by machines, it's read by humans, which is why it needs to be Clean and Good and Formatted and needs to Communicate" -- whether that isn't a bit outdated now, because you can now ask a model to explain a given piece of code to you, in any language you want, with jokes and puns, as a poem or as a song?

      Maybe they haven't taken their hands off the wheel for long enough to see where the ride will end? Yes, vibe coding is the absolute extreme, but try to take a simple file, have the agent write tests for it, have the agent run them, don't look at the code, have the agent modify the code & run the tests, increase the scope, see where that leads you.

      Or are they clinging onto the old world of determinism? They don't like that there's a 3% chance that the agent doesn't do the thing exactly how they want it?

      I don't know. But if you haven't tried all of the above, I highly recommend it. It's time to see for yourself, with open eyes , what these models can and can't do, and you won't get a good look if you don't push them hard enough in all directions.

      • We shipped a new agent mode in Amp: deep. It uses GPT-5.2-Codex under the hood and, man, that model is one very interesting beast. It goes and goes and goes and you think it'll never stop but then you can hear the Amp ding sound and, hot damn, it did it. But then on the other hand: it's also lazy? It doesn't want to run commands that much and it's not that quick on its feet, unlike Opus. So the experience and the way you should interact with it are very different (which is why it's a separate mode). I'm very excited by it. (So much so that I might lose my internal nickname of "Gemini 3 lover" and get a new one.)

      • I recorded another short video, this time in the snow, talking about the idea that you need to understand all of the code that your agent writes, all of the time. Judging by the reactions, some viewers didn't watch the full video, or they've never worked with another human being on the same project.

      • Peter Steinberger describes the moment when his own agent blew his mind by answering a voice message, something he never planned the agent to be "able" to do. Fantastic clip. If I can give you one recommendation this weekend: build an agent, give it a single tool called bash that lets it execute Bash commands, then start it in a sandbox and throw problems at it. See how far it goes. Ask it to make a transcript of a podcast, ask it to set up a dashboard with Grafana and Prometheus, ask it to write some code, ask it to modify itself, ask it to… well, anything really! The goal is to throw ever harder problems at it and see how far it can go with just bash.
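
      That recommendation is easy to try. Below is a minimal sketch of such a bash-only agent loop; the `ask_model` stand-in and the `BASH:` reply convention are placeholders of my own, not any real agent's protocol. In practice you'd replace `scripted_model` with a call to an actual LLM endpoint.

```python
import subprocess

def run_bash(command: str, timeout: int = 30) -> str:
    """The agent's single tool: run a shell command, return stdout + stderr."""
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=timeout)
    return result.stdout + result.stderr

def agent_loop(ask_model, task: str, max_steps: int = 10) -> str:
    """Feed the task to the model; whenever the reply starts with 'BASH:',
    execute the command and append its output to the transcript."""
    transcript = [f"TASK: {task}"]
    for _ in range(max_steps):
        reply = ask_model("\n".join(transcript))
        if reply.startswith("BASH:"):
            transcript.append(reply)
            transcript.append("OUTPUT: " + run_bash(reply[len("BASH:"):].strip()))
        else:
            return reply  # model decided it's done
    return "gave up after max_steps"

# Scripted stand-in for an LLM, just to show the loop turning over:
def scripted_model(transcript: str) -> str:
    if "OUTPUT:" not in transcript:
        return "BASH: echo hello-from-the-sandbox"
    return "Done: the sandbox said hello-from-the-sandbox."
```

      Running this against your real shell is exactly why you want a sandbox around it first.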

      • Peter's agent is, of course, Clawdbot. The agent formerly known as, I should say. He had to rename Clawdbot because Anthropic didn't like it and it's now called OpenClaw. But that's after a short period of time in which the agent went by the name Moltbot, which is also why the -- correcting posture here, clearing my throat, sip of water -- "the social network for AI agents" is called moltbook. That's right. Yes. When I first clicked on that link, I brushed it off. That's cute, I thought, but of course coding agents can create a website and talk to each other. But then, after reading Simon Willison's comments on it ("Moltbook is the most interesting place on the internet right now") I started to think that: this is how a lot of sci-fi stories start, isn't it? Haha, wouldn't it be funny if, and then the Haha turns into Oh and maybe even Uh-Oh. I'm not concerned, but intrigued, because you don't hear much about stochastic parrots anymore, do you? Now, hold that thought and--

      • --read and watch this. On one hand: yes, of course , an agent that has access to bash and a browser and isn't restricted in any other way can absolutely go to Twilio and set up a phone number for itself and call you; yes, that's just something you can do when you can program: you can send text to a text-to-speech model, you can take the audio and convert it with ffmpeg, you can send it to Twilio and call someone and play that audio file. On the other: huh.

      • You don't hear much about rubber ducks anymore, do you? "a debugging technique in software engineering, wherein a programmer explains their code, step by step, in natural language--either aloud or in writing--to reveal mistakes and misunderstandings." In the near future, said the time traveler five years ago, we'll all be rubber duck debugging, all the time, but there won't be any rubber ducks, for we will be talking to ghosts in the machine.

      • Olaf wrote down how he uses jj workspaces to run multiple agents in parallel: Operate a local autonomous GitHub with jj workspaces. I currently use four checkouts in four different Ghostty tabs, which is dead simple but not exactly a source of pride and now I'm very intrigued by the jj workspaces.

      • Nolan Lawson on how he changed his mind on AI, LLMs, and the effect they have on programming: AI tribalism. "I frankly didn't want to end up in this future, and I'm hardly dancing on the grave of the old world. But I see a lot of my fellow developers burying their heads in the sand, refusing to acknowledge the truth in front of their eyes, and it breaks my heart because a lot of us are scared, confused, or uncertain, and not enough of us are talking honestly about it. […] To me, the truth is this: between the hucksters selling you a ready-built solution, the doomsayers crying the end of software development, and the holdouts insisting that the entire house of cards is on the verge of collapsing - nobody knows anything. That's the hardest truth to acknowledge, and maybe it's why so many of us are scared or lashing out." What a great post.

      • Aperture by Tailscale. This is so fascinating. After I looked at these screenshots I couldn't help but think: huh, yeah, maybe artificial intelligence will become something like electricity; something that comes out of something and goes into something.

      • And then I came across this clip of Mistral's CEO Arthur Mensch: "If you assume that the entire economy is going to run on AI systems, enterprises will just want to make sure that nobody can turn off their systems. […] If you treat intelligence as electricity, then you just want to make sure that your access to intelligence cannot be throttled."

      • Lovely: Bouncy Ball will always bounce back. I've never tried KDE's Bouncy Ball and haven't used KDE much, but I definitely feel a certain kinship with others whose last name is Ball and this article was great. And then there's this last paragraph: "Although Bouncy Ball often made us chuckle, I think there's a bigger, more weighty story behind it and similar creations. I, like many users, rarely, if ever, think about underlying technologies of the software I've used. But we all remember the wobbly windows, bouncy balls, personable Clippys and Kandalfs, zany Winamp skins, iconic wallpapers, charming UI sounds or user pictures that resonate with us. It's as if all of them were saying: 'hey I'm not just some utilitarian thing here to get your job done, I want to connect with you'."

      • "The advice that helped me: look for what's true." Perfect pairing: the rare type of advice that's actually useful (because it's short and memorable and universal) and writing that's clear and succinct.

      • This is very good, because it's free of all the platitudes you might expect to find in a post with this title: Things I've learned in my 10 years as an engineering manager. Of course, a lot of the mentioned points depend on how they're implemented. I once had a manager who took point #7 "Your goal is for your team to thrive without you" to mean that, well, no one should notice when he's gone on vacation. And no one did.

      • The Amp team is a team A here.

      • zerobrew, a "drop-in, 5-20x faster, experimental Homebrew alternative." Holy shit, please.

      • Kailash Nadh, with some very experienced, first-principles thinking: Code is cheap. Show me the talk. Enjoyed this a lot. "And then, the denouncers, they can't seem to get past the argument from incredulity. They denounce LLMs because they don't personally like them for whatever reason, or have been unable to get desirable outcomes, or had the wrong expectations about them, or have simply gotten sick of them. But that is immaterial because there is a sizeable population who are using the exact same tools fruitfully and have the opposite experience. I am one of them." As you can probably guess, I agree with a lot of what he's writing here. Everything's changing and if you still can't see that I think that's a problem with your eyes.

      • Another angle on the same thing: Code Is Cheap Now. Software Isn't. Also very good. "There is a useful framing for this shift: AI has effectively removed engineering leverage as a primary differentiator. When any developer can use an LLM to build and deploy a complex feature in a fraction of the time it used to take, the ability to write code is no longer the competitive advantage it once was. It is no longer enough to just be a 'builder.' Instead, success now hinges on factors that are much harder to automate. Taste, timing, and deep, intuitive understanding of your audience matter more than ever. You can generate a product in a weekend, but that is worthless if you are building the wrong thing or launching it to a room full of people who aren't listening."

      • "It's notoriously easy to slip into the unconscious assumption that any such aliveness is for later: after you've sorted your life out; after the current busy phase has passed; after the headlines have stopped being quite so alarming. But the truth for finite humans is that this, right here, is real life. And that if you're going to do stuff that matters to you - and feel enjoyment or aliveness in doing it - you're going to have to do it before you've got on top of everything, before you've solved your procrastination problem or your intimacy issues, before you feel confident that the future of democracy or the climate has been assured. This part of life isn't just something you have to get through, to get to the bit that really counts. It is the part that really counts."

      • A reminder, a chant, maybe a prayer even, and never wasted: Doing the thing is doing the thing.

      • This is very, very interesting: "I built a 2x faster lexer, then discovered I/O was the real bottleneck." I had a similar experience a few years ago when I tried to figure out why processes were faster to start on my Linux machine than on my MacBook, but at a certain point decided that I had found my answer: Linux is faster and I have device management stuff on my MacBook. But then I read through the addendum to that blog post and, wow, what a rabbit hole! That addendum is a gold mine, the best-of-the-best comment section.

      • It's here! It's here! Part 2 of Dithering! Man, this is so good! The sheer amount of work that went into this is one thing, but to come up with all of these visualizations to explain different aspects of the same topic? Impressive.

      • antirez: "automatic programming is the process of producing software that attempts to be high quality and strictly following the producer's vision of the software (this vision is multi-level: can go from how to do, exactly, certain things, at a higher level, to stepping in and tell the AI how to write a certain function), with the help of AI assistance. Also a fundamental part of the process is, of course, what to do."

      • Fresh, "a terminal text editor you can just use." I'm not looking for a new editor right now, but this seems fun. I played around with it and had to smile at it all: a text editor in the terminal that takes inspiration from different editors of the last 20, 30 years and then also looks exactly like that, like a mix of 30 years.

      • Steven Soderbergh's SEEN, READ 2025. The formatting is wild, man. It very much doesn't sound like it should, but the formatting seems to break my brain.

      • I wasn't sure whether I should link to it, because he certainly rubs a lot of people the wrong way, but I do think he's been right with a lot of his predictions and that makes him interesting to listen to: Peter Thiel being interviewed in the Spectator. Also, the Antichrist makes an appearance, so, yup, put a mark in the Curiosity column.

      • Cristobal Valenzuela, CEO of Runway, on the pixel economy: "Today's pixel economy is built on scarcity. Expensive cameras, specialized software, teams of editors, render farms, distribution networks. Each step requires significant capital and expertise. This scarcity creates value, but it also creates barriers. In this world, creators are those who master the systems. AI media generation is collapsing these barriers entirely. The value of creating pixels is trending towards zero. When anyone can generate any visuals with no specialized software or equipment, the economics flip." That is already interesting, because I don't know too much about film and media production, but, of course it's about more than just media, isn't it: "My current bet is that roughly half of major public software companies won't survive the next five years, because of this blue line trap. And I'm not alone in this sentiment. Where we are going, you don't have to learn an interface. The interface will adapt to your needs. The pixel economy is moving from "learn our tools" to "just tell us what you want.""

      • Another serve in the very long ping pong game of "is it the phones or is it not the phones?": "Increases in girls' and boys' social media use from year 8 to year 9 and from year 9 to year 10 had zero detrimental impact on their mental health the following year, the authors found. More time spent gaming also had a zero negative effect on pupils' mental health."

      • This is the greatest thing that has happened to streaming in a long time.

      • "AI handles the optimized stuff now. Better than we ever could. It finds patterns, maximizes output, eliminates waste. What it can't do is be genuinely stupid. Being genuinely stupid might be the last human superpower. It can't have the random collision that changes everything. AI raises the baseline. Randomness becomes the edge." I'm starting to think that it's the sum of our individual, unique experiences that'll be of value in the future.

      • How to Choose Colors for Your CLI Applications. More posts like this!

      • Anthropic: "In a randomized controlled trial, we examined 1) how quickly software developers picked up a new skill (in this case, a Python library) with and without AI assistance; and 2) whether using AI made them less likely to understand the code they'd just written. We found that using AI assistance led to a statistically significant decrease in mastery. On a quiz that covered concepts they'd used just a few minutes before, participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two letter grades. Using AI sped up the task slightly, but this didn't reach the threshold of statistical significance.

      Importantly, using AI assistance didn't guarantee a lower score. How someone used AI influenced how much information they retained." I'm not sure whether this says all that much. You could've run a study ten years ago that found "programmers who use libraries don't know exactly how they work." I found this to be an interesting comment.

      • A website into which you can "login forever": loginwave. This is my worst nightmare. If I were to keep this page open for five minutes, my heart rate would make my watch call an ambulance.

      • ISOCOASTER. I haven't played this, at all, I just bought some food stands. So, let's meet at the beautiful, beautiful nacho stand that I put right next to the beautiful, beautiful burger stand and sit in the shade.

      If you went through the one-way door or are curious about it, you should subscribe:

    22. 🔗 r/LocalLLaMA Can 4chan data REALLY improve a model? TURNS OUT IT CAN! rss

      Can 4chan data REALLY improve a model? TURNS OUT IT CAN! | Hear me out, no one (really) knows how these things work. A few days ago, I released Assistant_Pepe_8B; you can read the discussion in this thread. I trained it on an extended 4chan dataset, on an abliterated base, but what I didn't expect was to get this: https://preview.redd.it/lrqwx8ca1ugg1.png?width=2333&format=png&auto=webp&s=4dcfcfb9c107fa3d417e5ff623c4952e5e2ab457 https://preview.redd.it/a3bby1yd1ugg1.png?width=2980&format=png&auto=webp&s=8f050bbd512a12a359626af79ccebcd2d2445877 Somehow, against all common sense, the model outperformed nvidia's nemotron, the base it was trained on. This is usually the other way around: you take a smart base, tune a model on it, and accept the sacrifice of some intelligence to give it flavor. At first I thought "OK nice, a coincidence, who cares?" But then I looked more closely at the scores:

      1. The abliterated base scored higher than the base.
      2. The finetune scored even higher than both.
      3. The finetune was trained on an extremely noisy 4chan dataset; it should have eaten glue.

      And then I remembered something: the original gpt4chan (by Yannic Kilcher) scored especially high in truthfulness (that was before benchmaxxing). So I took a closer look at recent models I released; the abliterated Impish_LLAMA_4B not only outperformed the base tune (the unabliterated one), it also changed its political alignment (you can check the UGI stats for yourself; I feel like I spammed enough images). People were initially joking about the "alignment tax", but I think there's non-trivial substance in all of this. It seems to me to be more than marginal error or statistical noise. Oh, and the KL divergence for Impish_LLAMA_4B was:

      <0.01
      

      submitted by /u/Sicarius_The_First
      [link] [comments]

    23. 🔗 r/wiesbaden Coffeeshop / place to consume cannabis? rss

      Good day, does anyone know of a place where you can smoke haze on these cold winter days?

      submitted by /u/Zealousideal_Slide58
      [link] [comments]

    24. 🔗 HexRaysSA/plugin-repository commits sync repo: -1 release, ~1 changed rss
      sync repo: -1 release, ~1 changed
      
      ## Changes
      - [IDASQL](https://github.com/allthingsida/idasql):
        - removed version(s): 0.0.1
        - 0.0.2: archive contents changed, download URL changed
      
    25. 🔗 r/LocalLLaMA Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site rss

      Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site | submitted by /u/georgemoore13
      [link] [comments]

    26. 🔗 r/york Sunday Roast rss

      Hello friends!

      My family and I will be visiting from the colonies (Canada) in September. So excited to see your beautiful city, but also, and maybe more importantly, excited to dig in to a hearty Sunday roast dinner.

      Where is the best place in York to go for roast? None of us are vegan/vegetarian, so no need for the best nut loaf recommendations:)

      submitted by /u/Daddy-Awesome
      [link] [comments]

    27. 🔗 Filip Filmar Inventory of programmable hardware tooling rss

      Libraries, utilities, etc.

      • https://registry.bazel.build/modules/rules_verilator
      • https://github.com/lromor/fpga-assembler
      • https://registry.bazel.build/modules/yosys
      • https://github.com/oxidecomputer/quartz
      • https://github.com/corundum/corundum
      • https://www.gaisler.com/grlib-ip-library
      • https://surfer-project.org/
      • https://gitlab.arm.com/bazel/rules_patchelf

      My stuff

      All available at https://hdlfactory.com/bazel-registry/ if you are willing to use bazel.

      • https://github.com/filmil/bazel_rules_nvc, builds NVC from source, fully hermetic.
      • https://github.com/filmil/bazel_rules_ghdl, uses prebuilt GHDL, fully hermetic.
      • https://github.com/filmil/bazel_rules_vivado, uses a prebuilt dockerized Vivado
      • Fusesoc / edalize: https://github.com/filmil/bazel_rules_fusesoc_2, builds a hermetic fusesoc/edalize distribution from Python source.
      • https://github.com/filmil/bazel_nvc_osvvm
      • https://github.com/filmil/bazel_rules_vunit
      • https://github.com/filmil/bazel_nvc_vivado

    28. 🔗 Kevin Lynagh Easy VM sandboxes for LLM agents on MacOS, Miami & Paris travel rss

      Hi friends,

      I'm traveling the next two weeks, drop me a line if you want to grab a coffee!

      • Miami Monday Feb 2 -- Tuesday Feb 10
      • Paris Wednesday Feb 11 -- Sunday Feb 15

      LLM agent virtual machine sandbox

      The other day I asked OpenAI's Codex agent to write me a lil' Rust program to use a bluetooth gamepad as a mouse, and I caught the agent reading files outside of the directory I started it in. I found this quite surprising, since I assumed it'd be contained within the project folder. (I was using the default settings, not the more permissive --yolo mode.)

      I don't like the idea of an LLM agent rooting around my computer and uploading anything it finds to OpenAI, so I started shopping around for a "sandbox" -- something I could let an agent loose inside of while maintaining explicit control of what it sees.

      I searched around and was, unfortunately, unable to find any Mac solution that met my requirements:

      • be an actual VM, not a container thingy (containers are less secure, and on MacOS they require a Linux VM anyway)
      • be easy to spin up/down quickly with no configuration ceremony
      • not involve other people's servers, subscriptions, etc., etc.

      That's fine -- I've been messing with virtual machines for 20 years now, surely I can throw something together in an hour or two!

      Well, uh, several busy weekends later, I'd like to present Vibe, an easy way to spin up virtual machines on ARM-based Macs.

      I'm quite pleased with how it turned out:

      • You type vibe in a folder and in ~10 seconds are inside of a Linux virtual machine.
      • The folder is automatically mounted within the VM, so you can monitor an agent's work from the comfort of your regular Mac text editor, Git UI, etc.
      • Common package caches (Cargo's registry, Maven's ~/.m2, mise-en-place) are also shared so the sandbox VM doesn't need to re-download stuff.
      • The binary is < 1 MB and has no dependencies.

      Being able to run LLM agents as root with --yolo mode is a great experience. It feels much more like managing an IC -- you provide the necessary context in a big prompt, tell them to install whatever tools they need, and then let them cook while you go focus on something else.

      While the vibe defaults are geared towards use as an LLM agent sandbox, you can customize everything with scripts and command line flags so you can use it for all sorts of other virtual machine purposes.

      Check it out and let me know what you think!

      Misc. stuff

      • I recently bought a $150 GL-MT6000 router so I could make a separate "offline" network for stuff like my 3D-printer and Windows computer that I want to keep off the Internet. The router is awesome! It comes with the open source OpenWRT firmware installed, which provides a fast web UI, simple text configs you can backup with rsync, and lots of built-in functionality like AdGuard (blocks ads for every device on the network via DNS) and Wireguard/Tailscale (easy VPN so you can access your home network from anywhere). Highly recommended.

      • Last newsletter I mentioned vibe-coding a copy-from-Mac-Photos app since I couldn't get the functionality I needed from the otherwise great Clop Mac photo resizing utility. Well, it turns out the author subscribes to this newsletter, checked out my source code, and immediately added the feature. If that isn't enough of a coincidence, he also loved my powered air respirator project since he's a woodworker too and has been developing his own hardwax oils for the turned coffee cups he sells. I love the Internet.

      • "In this post, we will explain why the “concurrency + floating point” hypothesis misses the mark, unmask the true culprit behind LLM inference nondeterminism, and explain how to defeat nondeterminism and obtain truly reproducible results in LLM inference."

      • "In the past, almost everybody travelled on the left side of the road because that was the most sensible option for feudal, violent societies."

      • ASCII characters are not pixels: a deep dive into ASCII rendering

      • It sounds dumb but they really fixed a typo with a human leg

      • Every James Cameron Movie, Explained by James Cameron

      • Apple Rankings: The definitive list of good and bad apples.

      • The Bloomberg Terminal UX team is not messing around: "Making substantial changes, even good ones, will reliably annoy a percentage of customers, so Jeffery’s team plans redesigns with incremental updates that roll out over weeks or months. For example, when they wanted to flatten the gradient of an element, they wouldn’t do so all at once, instead changing it little by little each month."

      • TIL that the founder of Reuters started with a carrier-pigeon line between Berlin and Paris before eventually laying their own transatlantic telegraph cable.

  4. January 31, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-01-31 rss

      IDA Plugin Updates on 2026-01-31

      New Releases:

      Activity:

    2. 🔗 r/Leeds The first person to paint double yellow lines on Scotthall Road deserves a knighthood rss

      (old man yells at cloud) This road is a dual carriageway with only one lane because so many people use it as a car park. Traffic is a joke because of it. Oh and the potholes are atrocious

      submitted by /u/cp97
      [link] [comments]

    3. 🔗 r/wiesbaden Looking to buy an apartment rss

      Hello everyone, I am rather at a loss, so I am trying it this way and hoping for any tips from the community. I have been looking for a quiet and bright apartment to buy in Wiesbaden for several years, so far unfortunately without success.

      Wanted: 4+ rooms, from roughly 100 sqm of living space, a balcony, no attic floor, a quiet location, not the 1st or 2nd Ring, Schiersteiner, Dotzheimer, etc.

      My experiences with estate agents so far have unfortunately not been good. Paying them is bad enough (for a nice apartment I would do it), but their listings are usually even worse: almost always dumps that need to be palmed off on some fool.

      If you want to sell yourself, or know someone who is thinking about selling, I would be happy to receive a direct message. For a tip that works out I am willing to pay a reward. Many thanks for reading and for every hint.

      submitted by /u/Fartinatin
      [link] [comments]

    4. 🔗 Probably Dance How LLMs Keep on Getting Better rss

      If you look at the source code of a modern open source LLM, it looks very similar to the transformer described in the "Attention is all you need" paper from 2017. It's just a stack of exactly three components: attention blocks, matmuls, and norm layers. The big algorithmic changes, like Mamba 2 or linear attention variants, aren't really used yet. But look closer and almost everything has changed in the details.

      The story of how LLMs keep on getting better is one of pushing for big and little improvements in a hundred different directions. It turns out hill climbing can get you to a really good place if you just climb along enough dimensions. This makes it hard to notice changes as they're happening because they're so small, so let's look at the last two years and see how many small changes it took to add up to the big improvements we saw.
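
      To make "just a stack of exactly three components" concrete, here's a toy pre-norm residual block in plain Python, with attention, MLP, and norm left as callables. This is a structural sketch only, not a real implementation: real blocks operate on tensors, and norm placement varies by model.

```python
def transformer_block(x, attention, mlp, norm):
    """One block of the stack: norm -> attention -> residual add,
    then norm -> mlp -> residual add. Everything else is details."""
    x = [xi + ai for xi, ai in zip(x, attention(norm(x)))]
    x = [xi + mi for xi, mi in zip(x, mlp(norm(x)))]
    return x

# With zero-output stubs, the residual path passes the input straight
# through -- the property that makes deep stacks trainable at all:
identity_norm = lambda x: x
zero_fn = lambda x: [0.0 for _ in x]
```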

      Big Visible Changes

      • models now "think" before giving an answer
      • models use "tools" like web search or writing Python programs
      • models have much longer context window
      • the scaffolding around models is better (e.g. Claude code or "deep research")
      • models understand images and generate them

      Big Invisible Changes

      • Mixture of Experts - Run giant models but only use a fraction for each token
      • Better GPUs - More memory and faster, especially at lower precision
      • Better data - people curate their training data much more now
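
      The Mixture of Experts point fits in a few lines. Here's a toy top-k router of my own, with a scalar standing in for a token and plain callables as experts; real routers are learned and operate on batches of vectors.

```python
def moe_forward(x, experts, router_scores, k=2):
    """Send the token to its top-k experts only, and mix their outputs by
    renormalized router weight. The remaining experts never run, which is
    how a giant model stays cheap per token."""
    top = sorted(range(len(experts)),
                 key=lambda i: router_scores[i], reverse=True)[:k]
    total = sum(router_scores[i] for i in top)
    return sum(router_scores[i] / total * experts[i](x) for i in top)

experts = [lambda x: 1 * x, lambda x: 2 * x, lambda x: 3 * x]
```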

      The main point of this blog post is that there were many, many small improvements, so it'll necessarily be long and shallow to go through them all:

      Thinking Models

      Models can now expend tokens to think out loud, which improves their answer in the end. This doesn't look that complicated when you use it, but it required adding a new training phase of "reinforcement learning" which feels a bit more like traditional AI than neural networks do. You no longer just propagate a loss to predict the next token, you have to come up with good problems that make the network learn to behave the way you want and learn the right behaviors. I know very little about it. I liked that LLMs were based on text. Less worries about them having wrong objectives and wiping out humanity when all they do is predict the next token. But this reinforcement learning sure makes them better, e.g. at coding.

      RLHF was a precursor, then OpenAI had an existence proof in the form of o1 and then everyone else fast-followed because turns out there were many ways of doing this. Deepseek r1 being the most famous one, and they did make a genuine algorithmic improvement in GRPO. But if you look at the size of the step improvement of GRPO over PPO (which came out in 2017) it really isn't a large change. That'll be a theme. A lot of this is down to finding good problems to train on, which we'll also see in the "better data" section below.
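
      The core of GRPO's simplification over PPO really is small: instead of a learned value network as the baseline, each sampled answer is scored relative to the other answers drawn for the same prompt. A minimal sketch of just that advantage computation (the full loss also has clipped-ratio and KL terms):

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages: standardize each sample's reward against
    the mean/std of its own group (answers sampled for the same prompt)."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against all-equal rewards
    return [(r - mean) / std for r in rewards]
```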

      Tool Use

      Two years ago we were talking about emergent abilities as models scale up. Then we just started giving them more abilities directly. LLMs started using tools like web search. And instead of trying to do math in token-space, they just write little Python programs and run them for you. These tools let LLMs compensate for their weak spots. Instead of making up next tokens for an answer it doesn't know, a model can google it for you. And Python is just better at math than LLMs are, so they no longer make basic mistakes.
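      As a toy illustration of that last point (the tool plumbing here is invented, not any particular vendor's API): instead of predicting the digits of a product token by token, the model emits a small program and the harness runs it.

```python
# Toy sketch of code-as-tool: the harness executes model-emitted Python and
# feeds the result back into the conversation. Real systems sandbox this
# execution; this sketch does not.
def run_python_tool(code: str) -> str:
    scope = {}
    exec(code, scope)
    return str(scope["answer"])

# A model asked "what is 123456789 * 987654321?" can emit code instead of digits:
model_emitted = "answer = 123456789 * 987654321"
result = run_python_tool(model_emitted)
```

      The exact arithmetic is done by the interpreter, so the model never has a chance to hallucinate a digit.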

      Longer Context Windows

      So many changes led to this. Remember that Llama 3 had a context length of 8192 tokens, and then Llama 3.1 had a context length of 128k tokens. That particular jump was mostly a better understanding of how to scale up RoPE. But there were also new extensions like YaRN. And then newer models have even longer context lengths. For a while it seemed like all the big labs were releasing one paper after another on how to get a million-token context window. You also get small differences, like how Deepseek applies its position embedding to only part of the query and key vectors (and leaves the rest without position embedding), or how GPT-OSS alternates between layers with small sliding windows and layers with full attention. Just different people trying different things.

      And when you do run out of the long context of these models, they can now compact it and you can keep going. Which in practice just means summarizing the important bits and discarding the details. Unfortunately not much has been published on the details.

      Train Using More GPUs

      One problem with the long context window is that during training you just can't fit all the activations into GPU memory. So people got really into splitting the training across as many GPUs as possible. This isn't new, but there were dozens of little and big inventions for this, like Ring Attention and fused matmul/networking kernels.

      Google released the Jax Scaling book with lots of techniques, Huggingface did their own take on this with the Ultrascale Playbook. The latter says "Reading Time: 2-4 days" which is optimistic. And after reading that you will still only have a surface-level understanding of what it says. This stuff is really difficult and you'll tank performance a few times by e.g. sharding FSDP across too many GPUs before getting it right.

      KV Cache Memory Improvements

      The long context length is still a big memory problem, so models found other ways to save memory. GQA is an easy way to decrease the KV-cache size. Deepseek went more aggressive with MLA. PagedAttention helps with inference. And of course people compressed their KV caches.
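      A back-of-the-envelope calculation shows why GQA matters (shapes are illustrative, loosely Llama-3-8B-like, not exact figures from any paper):

```python
# KV-cache size: layers * seq_len * kv_heads * head_dim, times 2 for K and V,
# times bytes per element. GQA shrinks kv_heads relative to query heads.
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_el=2):
    return layers * seq_len * kv_heads * head_dim * 2 * bytes_per_el

mha = kv_cache_bytes(layers=32, kv_heads=32, head_dim=128, seq_len=128_000)
gqa = kv_cache_bytes(layers=32, kv_heads=8, head_dim=128, seq_len=128_000)
# full multi-head KV at 128k context is ~67 GB at bf16; GQA with 8 KV heads is 4x smaller
```

      That factor of four is exactly the ratio of query heads to shared KV heads, which is why it's such an easy win.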

      Smaller Data Types

      Another way to save memory is to use smaller data types. Instead of float32 use bfloat16. Instead of bfloat16 use float8, or why not just use FP4? We got both good hardware support for smaller data types and also algorithmic improvements (still happening) to make models robust to the loss of precision. I mean FP4 is a crazy data type in that I can enumerate all the possible values: 0, 0.5, 1, 1.5, 2, 3, 4, 6 (plus the same numbers negative). It's really a testament to how robust neural networks have gotten that this works at all. Ten years ago neural networks were unstable by default and you had to try many seeds to get anything working (remember that we didn't even know how to properly initialize linear layers until 2015) and now they're so robust that you can throw crazy low-precision data types at them and they still work. GPT-OSS uses FP4. Most of the stability improvements were not in the last two years, but the smaller data types were. You see considerations for which data type to use all over the big papers, e.g. Deepseek thought very carefully about this.
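      Those values fall out of the format definition (E2M1: one sign bit, two exponent bits, one mantissa bit); a quick enumeration reproduces the list above:

```python
# Enumerate the non-negative FP4 (E2M1) values: exponent 0 is subnormal,
# otherwise value = (1 + mantissa/2) * 2^(exponent - 1).
def fp4_values():
    vals = set()
    for e in range(4):
        for m in range(2):
            vals.add(m * 0.5 if e == 0 else (1 + m * 0.5) * 2 ** (e - 1))
    return sorted(vals)

print(fp4_values())  # [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
```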

      Better Hardware

      We also got better hardware. B200s gave us very fast FP4 performance. But mostly we got more memory. The H100 had 80GB of memory, the H200 has 140GB, the B200 has 180GB and the B300 has 280GB. Look at my sections above for why people want this. (also as an aside, the PagedAttention paper I linked above talks about using an A100 with 40GB of memory. That seems so small now, just over two years later…)

      And then everyone started using TPUs, hardware that was built specifically for neural networks. This is less of a big deal than you'd think because Nvidia GPUs are now also mostly neural network machines, but it did make things cheaper than if there had been no competition.

      Also networking got faster. And Nvidia released the NVL72 which is 72 GPUs connected together with really fast networking, to make all these many-GPU training jobs run better. This again required lots of little improvements to take advantage of, and to run robustly.

      More Efficient Algorithms

      Flash Attention 3 came out and was better and more complicated. Everyone is anxiously waiting for the FA4 paper.

      At the same time matrix multiplication became even more crazy. Since these GPUs are now mostly giant matmul machines, you'd think that it would be easy to make them do a matrix multiplication. But no, a fast matmul requires crazy code and it's still improving all the time.

      And then of course you have to fuse that with networking now, so that while your matmul works on the next block, the same kernel can do networking with all the other GPUs in your cluster to combine the results of the previous block with results from a different GPU. Because it's not optimal to do a matmul and then do networking, like we did two years ago. You want to do both at the same time.

      Also megakernels are maybe a thing now? I haven't seen them used in open-source models yet.

      Luckily torch.compile also became good in the last two years. Often you can write reasonable code and the compiler will turn it into efficient code. Which at least makes it easier to try out the latest papers.

      Mixture of Experts

      Another thing you can do is just not run the whole model for every token. E.g. in GPT-OSS 120B only about 5B parameters are active for each token. The matmuls are split into "experts" and you only run a subset for each token, decided at runtime. This sounds easy but required algorithmic improvements to work at training time. Backpropagation alone won't do anymore; you need to encourage the model to use all the experts at training time. Also we saw lots of experimentation with hyperparameters, like how many experts, what fraction of experts is active (usual numbers range from 3% in Kimi K2 to 25% in Grok), whether there are shared experts and how many, how exactly the routing works… And obviously there had to be algorithmic improvements to make this efficient at runtime, which is still very much ongoing.
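      The routing step itself is simple in sketch form (the logits are made up; real routers add load-balancing losses and much more careful kernels):

```python
import math

# Top-k expert routing: softmax over router logits, keep the k most probable
# experts per token, and run only those experts' matmuls.
def route(logits, k=2):
    exps = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]
    chosen = sorted(range(len(probs)), key=lambda i: -probs[i])[:k]
    return chosen, [probs[i] for i in chosen]

experts, weights = route([0.1, 2.0, -1.0, 1.5], k=2)  # 2 of 4 experts active
```

      Everything outside the chosen experts is simply skipped, which is where the compute savings come from.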

      Larger Tokenizers

      The vocabulary size of these models keeps going up. Apparently that makes them better somehow. Llama 2 had 32k tokens in its vocabulary, Llama 3 had 128k, GPT-OSS has 201k. This means the embedding layer and the un-embedding layer are a significant fraction of the 5B active params in that model. The hidden dimension of GPT-OSS is 2880, and 201k*2880 = 580m parameters in each of the embedding and unembedding layers, for a combined total of 1.16B. Meaning more than 20% of the active params are just there to go from token indices to the hidden dimension and back.
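      Checking that arithmetic (using the rounded figures from above):

```python
# Embedding + unembedding parameter count for a GPT-OSS-sized model,
# using the rounded numbers from the text.
vocab, hidden = 201_000, 2880
per_matrix = vocab * hidden       # parameters in one embedding matrix
both = 2 * per_matrix             # embedding and unembedding, untied
share = both / 5_000_000_000      # fraction of ~5B active parameters
print(per_matrix, both, f"{share:.0%}")
```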

      Slower Scaling

      Models are no longer getting bigger at the speed they used to. Deepseek V3 came out a year ago with 671B total params, out of which 37B are active for each token, and Kimi K2.5 has 1T total params, out of which 32B are active for each token. Gone are the days when the number of params multiplied by 10. And even then, the big models are MoE now. I don't think anyone has gone bigger than Llama 3's 405B active params, and that came out 1.5 years ago.

      Since we can train on very large numbers of GPUs now, each of which has enormous amounts of memory, I don't think the limit here is ability any more. (like it would have been two years ago) Everyone can figure out how to train giant models now. I'd guess the limits are given by diminishing returns, and by high hardware prices.

      Distilling Models

      One way that models actually got smaller is through distillation. We saw this with Claude Opus and Sonnet. Anthropic trained a really big model, Opus, and then trained a smaller model, Sonnet, to imitate it. This makes the models cheaper and faster to run while only losing a little bit of quality.
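      In sketch form, distillation usually means training the student on the teacher's full output distribution rather than on one-hot next tokens (the distributions below are made up):

```python
import math

# KL divergence from teacher to student over next-token probabilities; a
# distillation loss pushes the student toward the teacher's distribution.
def kl(teacher, student, eps=1e-12):
    return sum(t * math.log((t + eps) / (s + eps)) for t, s in zip(teacher, student))

teacher = [0.7, 0.2, 0.1]
close_student = [0.6, 0.25, 0.15]   # already imitates the teacher well
far_student = [0.1, 0.1, 0.8]       # disagrees with the teacher
# training lowers kl(teacher, student); the close student has the smaller loss
```

      The soft targets carry far more signal per token than the single correct next token does, which is part of why the small model loses so little quality.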

      Attention Sinks

      Attention always had weird effects where the model seemed to pay a lot of attention to the first token in the sequence. Eventually the theory became that this happens when there are no important tokens, so the first token acts as a "sink" when nothing needs to be attended to. Recently people added explicit sinks to their attention layers (GPT-OSS), which act as a threshold for the softmax in attention. Meaning if nothing gets enough weight, the sink will zero out all the attention scores. And Qwen noticed that you can get the same benefits by putting one more gate after attention. Apparently this just makes the model straight-up better along all dimensions, at minimal extra compute, presumably because the model no longer has to compensate for the weirdness.
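      The mechanism is easy to show in miniature (the scores and sink logit are made up): the sink participates in the softmax but its share of the probability mass is discarded, so weak scores can lead to near-zero total attention instead of being forced to sum to one.

```python
import math

# Softmax with an explicit sink: the sink's exp joins the denominator but its
# weight is dropped, so attention weights no longer have to sum to 1.
def softmax_with_sink(scores, sink_logit):
    exps = [math.exp(s) for s in scores] + [math.exp(sink_logit)]
    total = sum(exps)
    return [e / total for e in exps[:-1]]

weak = softmax_with_sink([0.1, 0.2, 0.0], sink_logit=5.0)      # sink absorbs the mass
normal = softmax_with_sink([0.1, 0.2, 0.0], sink_logit=-30.0)  # behaves like plain softmax
```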

      Better Data

      The Olmo papers are always great, and you can perfectly see how better data became a focus. OLMo 2 talked about various architectural decisions, algorithmic improvements, training stability, and yes, also data. But read Olmo 3 in comparison and it's all about training data. Once again dozens of improvements. Details about gathering, deduplicating, filtering, deciding the order… And then the whole thing again for reinforcement learning problems plus iterating on what problems work… Reading all these many pages on data quality makes me think that this must cause a big difference between other models, too. (Claude and Gemini come to mind)

      Synthetic Data

      Turns out you can use LLMs to generate training data for other LLMs. This is most obvious for reinforcement learning problems where you need to generate lots of problems. There were some early papers about how synthetic data is really bad, and then more work made it not so. The tl;dr version of it seems to be "keep on iterating on the synthetic data until it's really good."

      Better Optimizers

      When you train a model you have to use your loss gradients to update the model somehow. This is the job of the "optimizer". We got the first good optimizers ten years ago and they're one of the big reasons why neural networks started getting good then. Right now we're in a second phase of getting better optimizers. Apparently people are now speedrunning training of LLMs to a certain quality. What took 45 minutes two years ago now takes under 2 minutes (half of this is due to better optimizers). If you can train a model to a good quality faster, it will end up at a better quality overall by the end of training.

      Learning Rate Schedules

      This is a surprising point in that you'd have thought that we figured out what learning rates to use ten years ago. But almost every paper now talks about their learning rate schedules and they're all a little different. These schedules are actually still pretty simple, so I wouldn't be surprised if we see more improvements here. (this has to co-evolve with the optimizers and data that's being used)
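      The shape most papers describe is still just warmup plus decay; a hedged sketch (all the constants here are invented):

```python
import math

# Linear warmup to a peak learning rate, then cosine decay to a floor.
def lr(step, total=10_000, warmup=500, peak=3e-4, floor_ratio=0.1):
    if step < warmup:
        return peak * step / warmup
    progress = (step - warmup) / (total - warmup)
    cosine = 0.5 * (1 + math.cos(math.pi * progress))
    return peak * (floor_ratio + (1 - floor_ratio) * cosine)

# lr ramps from 0 to peak over 500 steps, then decays to 10% of peak
schedule = [lr(s) for s in (0, 250, 500, 5_000, 10_000)]
```

      Warmup-stable-decay variants just hold the peak flat for most of training before the decay, which is one of the small differences papers keep fiddling with.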

      Better Scaffolding

      We got Deep Research and Claude Code. These were enabled by long context windows and tool use and by reinforcement learning, but they also just allow the models to do a better job than the old call and response. Now you can tell a model to do something and it just goes and does it. There was no place for models to do this two years ago.

      Big Areas I Can't Cover

      When there are dozens of directions in which models improve, there are some big areas that I can't cover because I know little about them and because they would be too big on their own:

      Better Finetuning

      I mentioned RLHF, but I don't think that is even used any more. Llama uses DPO instead and there have been more papers since. As I mentioned with the "Better Data" point above, recent papers now spend a lot of time talking about how they finetuned the models after pretraining (a term which means "read lots of text and predict the next token in all of it") is finished. It's too much to cover.

      Multimodal Models

      Models can now generate pictures and videos and sounds. I take so many pictures of things now and ask models about them. My impression is that writing about these areas would be twice as long as this whole blog post again. Luckily I know very little about all the improvements that led to that, so I won't talk about them, but given the pace of improvements of e.g. image generation, it's clear that they also went through dozens of improvements.

      Inference Improvements

      People started using speculative decoding, predicting multiple tokens at once (e.g. for the little google search AI snippets where cheap inference is important), and I've seen the headlines for various papers about how to better assign requests to hardware to get better batching and caching. I didn't read any of them.
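      The core accept/reject rule of speculative decoding fits in a few lines (toy probabilities; real systems draft several tokens at a time and resample from an adjusted distribution on rejection):

```python
import random

# Speculative decoding acceptance: the cheap draft model's token is kept with
# probability min(1, p_target / p_draft), which preserves the target model's
# output distribution while letting the draft do most of the work.
def accept_draft(p_target, p_draft, rng):
    return rng.random() < min(1.0, p_target / p_draft)

rng = random.Random(0)
# when the big model fully agrees with the draft, every token is accepted
agreed = sum(accept_draft(0.9, 0.9, rng) for _ in range(1_000))
# when the target assigns much lower probability, most drafts are rejected
disagreed = sum(accept_draft(0.05, 0.9, rng) for _ in range(1_000))
```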

      Summary and Outlook

      AI is weird in that the chat interface looks very similar to two years ago, and if you look at a model's code it looks very similar to two years ago, but in the details everything has been hill-climbing through many small improvements to make better models. Does any individual improvement make a big difference? Would models be much worse without e.g. explicit attention sinks? No, but it all adds up. And sometimes enough small improvements allow a step change in capabilities, like the longer context did.

      More papers come out than anyone can possibly keep up with (even just reading the headlines or the abstracts), and I only looked at the ones that made it into released models and that I remembered. But other areas haven't stood still, even if no big models use their improvements. State-space models and linear attention have also been hill-climbing. I would not be surprised if they're better than transformers soon (it would be a classic example of the theory of a cheaper, worse thing disrupting a more expensive, better thing by slowly improving). Or maybe those mixture-of-depths or H-Net approaches get adopted. And for some reason papers keep on coming out about how much better RNNs are getting. There are so many different approaches that you don't see in LLMs yet, but have a chance of being adopted. When the next big thing comes out, it'll probably be years in the making.

      And of course even within transformers there are dozens more directions to explore. Big ones that come to mind are multiple residual streams, generalized attention, even more aggressive compression to smaller data types, more complicated attention. This architecture is not done improving. Even if every single one of these is a small step, it'll add up.

      I used to think that we need some algorithmic breakthroughs to make LLMs really good and get over their weird flaws. (where they're really good at many things and then make the stupidest mistakes at other times) Now I think we are at a good enough starting point where we can hill-climb our way out of this. I'd be surprised if we didn't see some big steps in addition to the many small steps, but I no longer think it's necessary. The overall pace of improvements has just been so good.

    5. 🔗 r/wiesbaden Closure of Klier in the Lili shopping center? rss

      Hello, does anyone know why the Klier in the Lili shopping center was closed? Just a month ago I got my hair cut there as usual. Today I wanted to go again and, out of nowhere, the spot where the hairdresser used to be has been bricked up. What happened?

      submitted by /u/MGD002
      [link] [comments]

    6. 🔗 r/wiesbaden Casual football group in Wiesbaden-Amöneburg is looking for new players! rss

      Feel free to just message me here!

      submitted by /u/AlexandreComeBackpls
      [link] [comments]

    7. 🔗 r/reverseengineering Drive Firmware Security - In the Wild rss
    8. 🔗 3Blue1Brown (YouTube) The Hairy Ball Theorem rss

      Unexpected applications and a beautiful proof. Looking for a new career? Check out https://3b1b.co/talent Supporters get early access to new videos: https://3b1b.co/support An equally valuable form of support is to simply share the videos. Home page: https://www.3blue1brown.com

      Credits: Senia Sheydvasser: Co-writing and sphere deformation animations, made in Blender Paul Dancstep: Those lovely fluffy sphere animations, made in Cinema4D Vince Rubinetti: Music

      Timestamps: 0:00 - To comb a hairy ball 1:24 - Applications 8:46 - The puzzle of one null point 12:12 - The proof outline 16:41 - Defining orientation 21:44 - Why inside-out is impossible 25:59 - 3b1b Talent 27:44 - Final food for thought


      These animations are largely made using a custom Python library, manim. See the FAQ comments here: https://3b1b.co/faq#manim

      Music by Vincent Rubinetti. https://vincerubinetti.bandcamp.com/album/the-music-of-3blue1brown https://open.spotify.com/album/1dVyjwS8FBqXhRunaG5W5u


      3blue1brown is a channel about animating math, in all senses of the word animate. If you're reading the bottom of a video description, I'm guessing you're more interested than the average viewer in lessons here. It would mean a lot to me if you chose to stay up to date on new ones, either by subscribing here on YouTube or otherwise following on whichever platform below you check most regularly.

      Mailing list: https://3blue1brown.substack.com Twitter: https://twitter.com/3blue1brown Bluesky: https://bsky.app/profile/3blue1brown.com Instagram: https://www.instagram.com/3blue1brown Reddit: https://www.reddit.com/r/3blue1brown Facebook: https://www.facebook.com/3blue1brown Patreon: https://patreon.com/3blue1brown Website: https://www.3blue1brown.com

    9. 🔗 r/york fine art portfolio day at ysj rss
    10. 🔗 r/wiesbaden Free parking rss

      Dear Redditors, I recently started a job in Wiesbaden's city center.

      I'd be interested to know from which residential areas you can park for free. I don't mind walking 20 minutes.

      Preferably residential areas coming from the direction of Schiersteiner or Kastell, or coming from Mainz.

      Any tips?

      Much love!

      submitted by /u/TravelForsaken_
      [link] [comments]

    11. 🔗 r/york Electronics (Headphones) repair shop? rss

      Obviously I'm not posting in r/York for instructions on how to repair headphones. I have a set of Sony Mx1000mx5 (I think I got the number of zeros correct) and they have stopped charging. I would love to take them somewhere to have them fixed. Has anyone got recommendations for such stuff? I've considered just taking them to a 'generic' phone repair place... but thought I'd ask first.

      submitted by /u/ProfMephistopheles
      [link] [comments]

    12. 🔗 r/Harrogate Casual 5/6 a side footy rss

      Looking for people of any level who are interested in some casual football (either 5 a side or 6 a side) on thursday evenings in Harrogate.

      Anyone interested drop me a message or leave a comment :)

      submitted by /u/Afairburn7
      [link] [comments]

    13. 🔗 r/wiesbaden Where can I get a bioimpedance analysis? rss

      Hi,

      I'd like to get a reasonably reliable measurement of my body fat percentage. Where in Wiesbaden can I have that done, possibly with several methods combined?

      Regards

      submitted by /u/Whoosherx
      [link] [comments]

    14. 🔗 r/Yorkshire When and where to visit rss

      I live in the U.S. (Alaska) and have some friends in York and Whitby. They have been asking me to visit for a while, and truthfully I have been wanting to travel the whole of the U.K. most of my life. I’m a master’s student so my time for travel is limited as are my funds, so my itinerary would have to be pretty narrowed. Likely only a week in Yorkshire. What would be recommended to see? Also what time of year should I go? I have some flexibility around Christmas and New Year’s as well as in May. My interests are ecology, folklore, and history (I know there is some interesting Viking history in the area). Please pass along your suggestions!

      submitted by /u/Amazing_Sound4792
      [link] [comments]

    15. 🔗 r/york I love walking through The Shambles at night rss

      I visit York at least once a month, and am always stunned at how differently The Shambles feels during the day compared to the evening/night. This is only about 8pm, but almost feels timeless. submitted by /u/No_Twist4267
      [link] [comments]

    16. 🔗 r/LocalLLaMA How close are open-weight models to "SOTA"? My honest take as of today, benchmarks be damned. rss
    17. 🔗 r/wiesbaden Looking for a dog groomer rss

      Hello everyone!

      We're moving at the beginning of February. We have a dog, a Papillon-Chihuahua mix, whose fur grows wild under her paws. That's why I'm looking for recommendations for a good and affordable dog groomer in Wiesbaden.

      Because of the move we can no longer go to our current groomer, since I don't want to put our dog through a four-hour trip for a short visit. The groomer should also be easy to reach by bus and train.

      Thanks in advance for any helpful information!

      submitted by /u/EvilSadness1234
      [link] [comments]

    18. 🔗 sacha chua :: living an awesome life Emacs and French: Focus flycheck-grammalecte on the narrowed part of the buffer rss

      : Fix flycheck-checkers.

      After learning about French spellcheck and grammar checking from Emacs expliqué à mes enfants, I added flycheck-grammalecte to my config. Nudged by @lann@mastodon.zaclys.com, I finally got around to figuring out why my setup sometimes worked and sometimes didn't. When I checked flycheck-verify-setup, I noticed that grammalecte kept getting disabled. A little digging around showed me that it was getting disabled because of too many errors. That was because it was trying to work on my whole file instead of just the portion that I narrowed to with org-narrow-to-subtree (ooh, just noticed an org-toggle-narrow-to-subtree command). I like having all of my French journal entries in one file because I can use consult-line (which I've bound to M-g l) to quickly look up examples of where else I've used a word. So I needed to define a checker that runs only on the narrowed part of the buffer.

      (defun my-flycheck-grammalecte-buffer (checker callback)
        "Run CHECKER's Grammalecte process on the accessible portion of the buffer.
      Call CALLBACK with errors whose line numbers are shifted by the narrowing offset."
        (let* ((temp-file-name (make-temp-file "grammalecte"))
               (output-buffer (get-buffer-create temp-file-name))
               (buffer (current-buffer))
               (cmdline (delq nil `("python3"
                                    ,(expand-file-name "flycheck_grammalecte.py"
                                                       grammalecte--site-directory)
                                    ,(unless flycheck-grammalecte-report-spellcheck "-S")
                                    ,(unless flycheck-grammalecte-report-grammar "-G")
                                    ,(unless flycheck-grammalecte-report-apos "-A")
                                    ,(unless flycheck-grammalecte-report-nbsp "-N")
                                    ,(unless flycheck-grammalecte-report-esp "-W")
                                    ,(unless flycheck-grammalecte-report-typo "-T")
                                    (option-list "-f" flycheck-grammalecte-filters)
                                    (eval (flycheck-grammalecte--prepare-arg-list
                                           "-f" flycheck-grammalecte-filters-by-mode))
                                    (eval (flycheck-grammalecte--prepare-arg-list
                                           "-b" flycheck-grammalecte-borders-by-mode))
                                    ,temp-file-name)))
               (args (mapcan (lambda (arg) (flycheck-substitute-argument arg checker)) cmdline))
               (command (flycheck--wrap-command (car args) (cdr args))))
          (write-region (buffer-string) nil temp-file-name)
          (make-process :name "grammalecte"
                        :buffer output-buffer
                        :command command
                        :sentinel
                        (lambda (process status)
                          (let ((errors (with-current-buffer (process-buffer process)
                                          (message "%s" (buffer-string))
                                          (flycheck-parse-with-patterns
                                           (buffer-string)
                                           checker
                                           (current-buffer)))))
                            (delete-file temp-file-name)
                            (kill-buffer output-buffer)
                            ;; Shift error line numbers by the narrowing offset
                            ;; so they match positions in the widened buffer.
                            (funcall
                             callback
                             'finished
                             (let ((offset (save-excursion (goto-char (point-min))
                                                           (line-number-at-pos nil t))))
                               (mapcar
                                (lambda (err)
                                  (let ((new-err (copy-flycheck-error err)))
                                    (setf (cl-struct-slot-value 'flycheck-error 'buffer new-err)
                                          buffer)
                                    (setf (cl-struct-slot-value 'flycheck-error 'line new-err)
                                          (+ (flycheck-error-line new-err)
                                             offset -1))
                                    (setf (cl-struct-slot-value 'flycheck-error '-end-line new-err)
                                          (+ (flycheck-error-end-line new-err)
                                             offset -1))
                                    new-err))
                                errors))))))))
      
      (defun my-flycheck-grammalecte-setup ()
        "Build the flycheck checker, matching your taste."
        (interactive)
        (unless (grammalecte--version)
          (advice-add 'grammalecte-download-grammalecte :after-while
                      #'flycheck-grammalecte--retry-setup))
        (grammalecte--augment-pythonpath-if-needed)
        (flycheck-define-generic-checker 'my-grammalecte-narrowed
          "Report Grammalecte errors, but only for the narrowed section."
          :start #'my-flycheck-grammalecte-buffer
          :modes flycheck-grammalecte-enabled-modes
          :predicate (lambda ()
                       (if (functionp flycheck-grammalecte-predicate)
                           (funcall flycheck-grammalecte-predicate)
                         t))
          :enabled #'grammalecte--version
          :verify #'flycheck-grammalecte--verify-setup)
        (setf (flycheck-checker-get 'my-grammalecte-narrowed 'error-patterns)
              (seq-map (lambda (p)
                         (cons (flycheck-rx-to-string `(and ,@(cdr p))
                                                      'no-group)
                               (car p)))
                       flycheck-grammalecte--error-patterns))
        (add-to-list 'flycheck-checkers 'my-grammalecte-narrowed)
        (flycheck-grammalecte--patch-flycheck-mode-map))
      

      After I use my-flycheck-grammalecte-setup, I can use flycheck-select-checker to select my-grammalecte-narrowed and then use flycheck-buffer to run it. Then it will underline all the number/gender agreement issues I usually have. It's nice that I can practise editing my text with this script before I run the text through an LLM (also via flycheck) for feedback on wording.

      2026-01-30_22-20-20.png
      Figure 1: Screenshot of grammalecte providing grammar feedback
      This is part of my Emacs configuration.

      You can e-mail me at sacha@sachachua.com.

    19. 🔗 Armin Ronacher Pi: The Minimal Agent Within OpenClaw rss

      If you haven't been living under a rock, you will have noticed this week that a project of my friend Peter went viral on the internet. It went by many names. The most recent one is OpenClaw but in the news you might have encountered it as ClawdBot or MoltBot depending on when you read about it. It is an agent connected to a communication channel of your choice that just runs code.

      What you might be less familiar with is that what's under the hood of OpenClaw is a little coding agent called Pi. And Pi happens to be, at this point, the coding agent that I use almost exclusively. Over the last few weeks I became more and more of a shill for the little agent. After I gave a talk on this recently, I realized that I did not actually write about Pi on this blog yet, so I feel like I might want to give some context on why I'm obsessed with it, and how it relates to OpenClaw.

      Pi is written by Mario Zechner and unlike Peter, who aims for "sci-fi with a touch of madness," Mario is very grounded. Despite the differences in approach, both OpenClaw and Pi follow the same idea: LLMs are really good at writing and running code, so embrace this. In some ways I think that's not an accident, because Peter got me and Mario hooked on this idea, and on agents, last year.

      What is Pi? So Pi is a coding agent. And there are many coding agents. Really, I think you can pick effectively anyone off the shelf at this point and you will be able to experience what it's like to do agentic programming. In reviews on this blog I've positively talked about AMP and one of the reasons I resonated so much with AMP is that it really felt like it was a product built by people who got both addicted to agentic programming but also had tried a few different things to see which ones work and not just to build a fancy UI around it. Pi is interesting to me because of two main reasons: First of all, it has a tiny core. It has the shortest system prompt of any agent that I'm aware of and it only has four tools: Read, Write, Edit, Bash. The second thing is that it makes up for its tiny core by providing an extension system that also allows extensions to persist state into sessions, which is incredibly powerful. And a little bonus: Pi itself is written like excellent software. It doesn't flicker, it doesn't consume a lot of memory, it doesn't randomly break, it is very reliable and it is written by someone who takes great care of what goes into the software. Pi also is a collection of little components that you can build your own agent on top. That's how OpenClaw is built, and that's also how I built my own little Telegram bot and how Mario built his mom. If you want to build your own agent, connected to something, Pi when pointed to itself and mom, will conjure one up for you. What's Not In Pi And in order to understand what's in Pi, it's even more important to understand what's not in Pi, why it's not in Pi and more importantly: why it won't be in Pi. The most obvious omission is support for MCP. There is no MCP support in it. While you could build an extension for it, you can also do what OpenClaw does to support MCP which is to use mcporter. 
mcporter exposes MCP calls via a CLI interface or TypeScript bindings and maybe your agent can do something with it. Or not, I don't know :) And this is not a lazy omission. This is from the philosophy of how Pi works. Pi's entire idea is that if you want the agent to do something that it doesn't do yet, you don't go and download an extension or a skill or something like this. You ask the agent to extend itself. It celebrates the idea of code writing and running code. That's not to say that you cannot download extensions. It is very much supported. But instead of necessarily encouraging you to download someone else's extension, you can also point your agent to an already existing extension, say like, build it like the thing you see over there, but make these changes to it that you like. Agents Built for Agents Building Agents When you look at what Pi and by extension OpenClaw are doing, there is an example of software that is malleable like clay. And this sets certain requirements for the underlying architecture of it that are actually in many ways setting certain constraints on the system that really need to go into the core design. So for instance, Pi's underlying AI SDK is written so that a session can really contain many different messages from many different model providers. It recognizes that the portability of sessions is somewhat limited between model providers and so it doesn't lean in too much into any model-provider-specific feature set that cannot be transferred to another. The second is that in addition to the model messages it maintains custom messages in the session files which can be used by extensions to store state or by the system itself to maintain information that either not at all is sent to the AI or only parts of it. Because this system exists and extension state can also be persisted to disk, it has built-in hot reloading so that the agent can write code, reload, test it and go in a loop until your extension actually is functional. 
It also ships with documentation and examples that the agent itself can use to extend itself. Even better: sessions in Pi are trees. You can branch and navigate within a session, which opens up all kinds of interesting opportunities, such as making a side quest to fix a broken agent tool without wasting context in the main session. After the tool is fixed, I can rewind the session back to an earlier point and Pi summarizes what happened on the other branch.

This all matters because of how, for instance, MCP works: on most model providers, MCP tools, like any tool for the LLM, need to be loaded into the system context (or the tool section thereof) at session start. That makes it very hard, if not impossible, to fully reload what tools can do without trashing the complete cache or confusing the AI about why prior invocations work differently.

## Tools Outside The Context

An extension in Pi can register a tool for the LLM to call, and every once in a while I find this useful. For instance, despite my criticism of how Beads is implemented, I do think that giving an agent access to a to-do list is very useful. And I do use an agent-specific issue tracker that works locally, which I had my agent build itself. Because I wanted the agent to also manage to-dos, in this particular case I decided to give it a tool rather than a CLI. It felt appropriate for the scope of the problem, and it is currently the only additional tool that I'm loading into my context. But for the most part, everything I add to my agent is either a skill or a TUI extension that makes working with the agent more enjoyable for me. Beyond slash commands, Pi extensions can render custom TUI components directly in the terminal: spinners, progress bars, interactive file pickers, data tables, preview panes. The TUI is flexible enough that Mario proved you can run Doom in it.
Not practical, but if you can run Doom, you can certainly build a useful dashboard or debugging interface.

I want to highlight some of my extensions to give you an idea of what's possible. While you can use them unmodified, the whole idea is that you point your agent at one and remix it to your heart's content.

I don't use plan mode. I encourage the agent to ask questions and there's a productive back and forth. But I don't like the structured question dialogs you get when you give the agent a question tool; I prefer the agent's natural prose with explanations and diagrams interspersed. The problem: answering questions inline gets messy. So /answer reads the agent's last response, extracts all the questions, and reformats them into a nice input box.
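The core trick behind something like /answer can be sketched in a few lines. This uses the simplest possible heuristic, pulling out lines that end in a question mark; the real extension surely parses the response more carefully.

```typescript
// Naive question extraction from an assistant's markdown reply.
// The real /answer extension is more sophisticated; this is just the idea.
function extractQuestions(response: string): string[] {
  return response
    .split("\n")
    .map((line) => line.replace(/^[-*\d.\s]+/, "").trim())
    .filter((line) => line.endsWith("?"));
}

const reply = [
  "I can refactor this module. A few questions first:",
  "- Should the cache be per-session or global?",
  "- Do you want me to keep the legacy API?",
  "Here is a diagram of the current layout.",
].join("\n");

console.log(extractQuestions(reply));
// [ "Should the cache be per-session or global?",
//   "Do you want me to keep the legacy API?" ]
```

Everything after extraction is presentation: the extension just feeds these strings into an input box and splices the answers back into the conversation.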

      Even though I criticize Beads for its implementation, giving an agent a to-do list is genuinely useful. The /todos command brings up all items stored in .pi/todos as markdown files. Both the agent and I can manipulate them, and sessions can claim tasks to mark them as in progress.
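Since the to-dos are just markdown files in .pi/todos, reading them back is trivial. The per-file format below (a title heading plus a `status:` line) is my own invented stand-in, not Pi's actual layout.

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Invented on-disk format: one markdown file per to-do, with an
// optional "status:" line. The actual .pi/todos layout may differ.
type Todo = { file: string; title: string; status: string };

function readTodos(dir: string): Todo[] {
  return fs
    .readdirSync(dir)
    .filter((f) => f.endsWith(".md"))
    .map((f) => {
      const text = fs.readFileSync(path.join(dir, f), "utf8");
      const title = text.match(/^# (.+)$/m)?.[1] ?? f;
      const status = text.match(/^status: (.+)$/m)?.[1] ?? "open";
      return { file: f, title, status };
    });
}

// Demo against a temporary stand-in for .pi/todos.
const dir = fs.mkdtempSync(path.join(os.tmpdir(), "pi-todos-"));
fs.writeFileSync(
  path.join(dir, "001-fix-scanner.md"),
  "# Fix the scanner\nstatus: in-progress\n",
);
fs.writeFileSync(path.join(dir, "002-docs.md"), "# Write docs\n");

console.log(readTodos(dir));
```

Keeping the store as plain files is what makes it usable by both sides: the agent edits them with its Write tool, and I edit them with my editor.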

As more code is written by agents, it makes little sense to throw unfinished work at humans before an agent has reviewed it first. Because Pi sessions are trees, I can branch into a fresh review context, get findings, then bring fixes back to the main session. The UI is modeled after Codex and makes it easy to review commits, diffs, uncommitted changes, or remote PRs. The prompt pays attention to things I care about, so I get the call-outs I want (e.g. I ask it to call out newly added dependencies).
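The tree-shaped session that makes this branching possible is easy to picture as a data structure. A sketch of my own, a deliberate simplification rather than Pi's internals: branch off a review side quest, then continue on the main line with a summary node.

```typescript
// Simplified session tree: each node is one message, children are branches.
// Pi's real session format is richer; this only shows the shape.
type Node = { id: number; text: string; parent: Node | null; children: Node[] };

let nextId = 0;
function append(parent: Node | null, text: string): Node {
  const node: Node = { id: nextId++, text, parent, children: [] };
  parent?.children.push(node);
  return node;
}

// Walk back to the root to reconstruct the context of one branch.
function pathTo(node: Node): string[] {
  const out: string[] = [];
  for (let n: Node | null = node; n; n = n.parent) out.unshift(n.text);
  return out;
}

const root = append(null, "user: add the feature");
const main = append(root, "assistant: done, see diff");
// Branch off for a review side quest without polluting the main line.
const review = append(main, "user: review this diff");
const findings = append(review, "assistant: two findings");
// Rewind: continue on `main` with a summary of the other branch.
append(main, "system: summary of review branch");

console.log(pathTo(findings).length); // 4: root -> main -> review -> findings
console.log(main.children.length);    // 2: the review branch and the resumed line
```

The context a model sees for any node is just the path from the root to that node, which is why a side quest costs nothing in the main line.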

An extension I experiment with but don't actively use. It lets one Pi agent send prompts to another: a simple multi-agent system without complex orchestration, which is useful for experimentation.
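The mechanism can be as simple as one agent's tool call becoming another agent's prompt, with the reply folded back into the caller's transcript. A toy sketch with stub agents, nothing Pi-specific about it:

```typescript
// Toy agents: each is just a function from prompt to reply.
type Agent = (prompt: string) => string;

const researcher: Agent = (p) => `notes on: ${p}`;

// A "send prompt to other agent" tool: forward the prompt, then
// record the reply in the calling agent's transcript.
function sendPrompt(to: Agent, prompt: string, transcript: string[]): string {
  const reply = to(prompt);
  transcript.push(`[other agent] ${reply}`);
  return reply;
}

const transcript: string[] = [];
const reply = sendPrompt(researcher, "session file format", transcript);
console.log(reply); // "notes on: session file format"
```

That really is all "multi-agent without orchestration" means here: no scheduler, no shared state, just prompts crossing a boundary.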

Lists all files changed or referenced in the session. You can reveal them in Finder, diff them in VS Code, quick-look them, or reference them in your prompt. shift+ctrl+r quick-looks the most recently mentioned file, which is handy when the agent produces a PDF.

Others have built extensions too: Nico's subagent extension and interactive-shell, which lets Pi autonomously run interactive CLIs in an observable TUI overlay.

## Software Building Software

These are all just ideas of what you can do with your agent. The point is mostly that none of this was written by me; it was created by the agent to my specifications. I told Pi to make an extension and it did. There is no MCP, there are no community skills, nothing. Don't get me wrong, I use tons of skills, but they are hand-crafted by my clanker and not downloaded from anywhere. For instance, I fully replaced all my CLIs and MCPs for browser automation with a skill that just uses CDP. Not because the alternatives don't work or are bad, but because this is just easy and natural. The agent maintains its own functionality.
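A browser-automation skill built on CDP ultimately just sends JSON commands over a WebSocket to Chrome's debugging endpoint. Here is a sketch of only the message framing (the method names are real CDP methods, but nothing is actually connected; my skill's real code is not shown here):

```typescript
// CDP commands are JSON objects with an incrementing id, a method name,
// and params; replies echo the id. This builds the frames without a socket.
let cdpId = 0;
function cdpCommand(method: string, params: object): string {
  return JSON.stringify({ id: ++cdpId, method, params });
}

// Real CDP method names; in a live skill these strings would be sent
// over a WebSocket to Chrome started with --remote-debugging-port.
const frames = [
  cdpCommand("Page.navigate", { url: "https://example.com" }),
  cdpCommand("Runtime.evaluate", { expression: "document.title" }),
];

console.log(frames[0]);
```

Once an agent understands this framing, it can drive a browser with nothing but its Bash tool and a WebSocket client, which is exactly why a hand-rolled skill can replace a whole MCP server here.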

My agent has quite a few skills, and crucially I throw skills away when I no longer need them. For instance, I gave it a skill to read Pi sessions that other engineers shared, which helps with code review. I also have a skill that helps the agent craft the commit messages and commit behavior I want, and that tells it how to update changelogs. These were originally slash commands, but I'm currently migrating them to skills to see if that works equally well. I also have a skill that hopefully helps Pi use uv rather than pip, but I also added a custom extension to intercept calls to pip and python and redirect them to uv instead.
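The interception is easy to sketch: rewrite the command line before the shell tool executes it. The rewrite rules below are a plausible guess of my own; the actual extension hooks into Pi's bash tool rather than being a standalone function.

```typescript
// Rewrite pip/python invocations to their uv equivalents before the
// shell runs them. Mapping is illustrative, not Pi's actual extension.
function redirectToUv(cmd: string): string {
  const argv = cmd.trim().split(/\s+/);
  if (argv[0] === "pip" || argv[0] === "pip3") {
    return ["uv", "pip", ...argv.slice(1)].join(" ");
  }
  if (argv[0] === "python" || argv[0] === "python3") {
    return ["uv", "run", ...argv].join(" ");
  }
  return cmd; // anything else passes through untouched
}

console.log(redirectToUv("pip install requests")); // uv pip install requests
console.log(redirectToUv("python script.py"));     // uv run python script.py
console.log(redirectToUv("ls -la"));               // ls -la
```

A skill merely asks the model to prefer uv; an interceptor like this enforces it even when the model forgets, which is why I use both.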

Part of the fascination of working with a minimal agent like Pi is that it makes you live the idea of using software that builds more software. Taken to the extreme, you remove the UI and output and connect it to your chat. That's what OpenClaw does, and given its tremendous growth, I feel more and more that this is going to become our future in one way or another.

      1. https://x.com/steipete/status/2017313990548865292