

to read (pdf)

  1. Letting AI Actively Manage Its Own Context | 明天的乌云
  2. Garden Offices for Sale UK - Portable Space
  3. Cord: Coordinating Trees of AI Agents | June Kim
  4. Style tips for less experienced developers coding with AI · honnibal.dev
  5. Haskell for all: Beyond agentic coding

  1. March 10, 2026
    1. 🔗 r/wiesbaden Competitors giving bad google review rss
    2. 🔗 r/reverseengineering $10K in Bounties | 30-Day Runtime Enforcement Challenge Break Churchill. If you can. rss
    3. 🔗 libtero/idaguides IDA Guides v1.3.0 release

      Full Changelog: 1.2.0...1.3.0

    4. 🔗 r/york Looking to move rss

      Hi all! I'm currently looking for a 3+ bed house in the Strensall area, Rufforth, or Woodthorpe/Copmanthorpe/Acomb. If there is anybody on here who is thinking of selling but hasn't listed yet, please give me a shout! Budget up to £450k ideally

      Thank you 🙏

      submitted by /u/Reasonable-Pay6072

    5. 🔗 r/reverseengineering Your Duolingo Is Still Talking to ByteDance: How Pangle Fingerprints You Across Apps After You Said No rss
    6. 🔗 r/reverseengineering Reverse Engineering Binaries With AI rss
    7. 🔗 sacha chua :: living an awesome life The week of March 2 to March 8 rss

      Monday, March 2

      I prepared my Emacs newsletter and wrote a post about displaying hints for keyboard shortcuts. I also tried expanding snippets by voice command. I think voice-driven snippet expansion is useful because when I insert a snippet from its initials, I have to think of the phrase and then think of the initial letters, but when I insert a snippet by voice command, I can use the natural phrase. Of course, there's a brief delay for the transcription, but it's short enough not to break my train of thought.

      My daughter was too tired for her gymnastics class, so I took her to the dentist for a checkup because of her tooth pain. The dentist said her gums are a little swollen. She recommended softening her toothbrush under hot water before brushing and maybe using a saline mouth rinse. My daughter complained that her teeth feel too crowded. The dentist said that's fine for now, and if we want, she can refer us to an orthodontist. When I was younger, I couldn't stand braces, but it's possible my daughter could tolerate them. I think it's better for us to wait until the peak of viral concentration in the wastewater has passed.

      After the dishes and my evening routine, my daughter and I hand-sewed our little bag project with a few pockets.

      Tuesday, March 3

      I worked on tongue twisters during my session with my tutor. The "r" and "u" sounds continued to give me trouble. I'm going to work on the difference between "roue" and "rue", the word "brume", and a few others. He said the "r" needs less air.

      Today's results:

      I wonder what a good method and a good interface would be for practicing pronunciation on my own between sessions with my tutor. I think the process includes the following steps:

      1. Learn to hear the difference between the example and an incorrect utterance: first distinguish that they are different, then understand why.
        • If I extract the utterances from my recordings and annotate them with my tutor's classifications, I can use them for supervised practice to train my ear. These recordings would be too boring for anyone else, but for me, it may be better to listen to them in order to learn.
      2. Identify which of two utterances is better.
        • I can shuffle the short recordings from the previous step to make a game.
      3. Try producing varied sounds. I have to practice; there's obviously no other way.
      4. Listen to the difference between the example and the sound I produced. Decide whether the sound is good enough. Think about the connection between mouth movements and the sounds they produce.
      5. Produce the sound in isolation. Connect the internal sensation of producing the sound with the sound I want to produce, because the sound I record differs from the sound I hear while speaking.
      6. Produce the sound consistently.
      7. Produce the sound even when I'm not listening to a model and haven't just repeated it.
      8. Use the sound in the context of a phrase, with pauses.
      9. Say the phrase more fluently.
      10. Say the phrase without an example.
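
      Steps 1 and 2 could be turned into a small listening game: shuffle pairs of annotated clips (one the tutor approved, one he didn't) and keep track of which one I pick as better. Here is a minimal sketch in Python; the file names and labels are hypothetical stand-ins for clips extracted from my recordings, not an existing tool.

```python
import random

# Hypothetical annotated clips: (filename, tutor_approved)
CLIPS = [
    ("rue-good.wav", True), ("rue-try1.wav", False),
    ("roue-good.wav", True), ("roue-try2.wav", False),
]

def make_round(clips, rng=random):
    """Pick one approved and one flawed clip, in random order.

    Returns the pair of file names and the index of the approved one,
    so the player's guess can be checked."""
    good = rng.choice([c for c in clips if c[1]])
    bad = rng.choice([c for c in clips if not c[1]])
    pair = [good, bad]
    rng.shuffle(pair)
    return [c[0] for c in pair], pair.index(good)

pair, answer = make_round(CLIPS)
# Play both clips, ask which one sounded better, compare with `answer`.
```

      Randomizing the order matters: otherwise I would learn the position of the good clip instead of the sound.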

      If this were an easily solved problem, everyone would use and recommend the solution. I don't think there's a good solution on the market apart from the method I used to train my little general human intelligence project (who is 10 now, as she often reminds me): a massive amount of data. But of course, there's a lot of research I can take advantage of.

      Oooh, I can't wait to try spectrograms in addition to waveforms. There are a few programs that can display spectrograms even in real time. That might make it easier to analyze vowels.

      So, I can use WhisperX's per-word timestamps to segment the recording. But I need to listen to them in the context of the session to match them up with my tutor's comments, unless speaker diarization is reliable enough to identify which utterances got a "oui" or "c'est mieux" from my tutor and which ones made him say "non". For now, I think it's more reliable if I listen to the conversation and annotate the segments myself, so an interface that displays the segmented waveforms and lets me make selections with keyboard shortcuts would be useful. If scores are available, displaying them as a bar chart might be more precise and easier to compare than displaying them with a colour gradient. I can look at Label Studio or Praat for ideas to implement in Emacs. Or, if I use Audino 2.0 or similar web projects, I can annotate during spare moments.
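
      As a sketch of that segmentation step: WhisperX's aligned output includes per-word start and end times, so grouping words into utterances can be as simple as splitting on pauses. Here is a minimal sketch in Python, with a made-up word list in the shape of WhisperX's word-aligned output (real output has more fields, and the pause threshold is a guess to tune):

```python
def split_utterances(words, max_pause=0.8):
    """Group consecutive words into utterances, splitting wherever the
    silence between two words exceeds max_pause seconds."""
    utterances = []
    current = []
    for w in words:
        if current and w["start"] - current[-1]["end"] > max_pause:
            utterances.append(current)
            current = []
        current.append(w)
    if current:
        utterances.append(current)
    # Return (start, end, text) spans, ready to cut out of the recording.
    return [(u[0]["start"], u[-1]["end"], " ".join(w["word"] for w in u))
            for u in utterances]

# Made-up timestamps in the shape of WhisperX's word-aligned output.
words = [
    {"word": "la", "start": 0.0, "end": 0.2},
    {"word": "roue", "start": 0.25, "end": 0.6},
    {"word": "la", "start": 2.0, "end": 2.2},
    {"word": "rue", "start": 2.25, "end": 2.6},
]
spans = split_utterances(words)  # two utterances: "la roue" and "la rue"
```

      The resulting (start, end) spans could then be matched against my annotations or cut into clips for the listening game.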

      During practice, I think my interface needs to start playing my tutor's recording and maybe display the waveform or spectrogram. It needs to record my voice, since it has to play the tutor's example and record my attempt for comparison, along with WhisperX's confidence score. Keyboard shortcuts would trigger one or the other.

      Our network

      My tutor has a question about computer networks, so I'm going to use the opportunity to explain our network in French and learn several technical words along the way. My husband is mainly responsible for maintaining our network, but I should learn about it too.

      My husband recommended some resources for people who are interested:

      • Jim's Garage: highly recommended, but the Homelab 2.0 he has discussed in recent videos is starting to get expensive.
      • Serve the Home
      • Reddit, of course

      Our network:

      • Our ISP's fibre modem connects to a Lenovo M920q mini PC running Proxmox for firewall management and a few virtual machines. One of the virtual machines is OPNsense, which handles network addressing, the firewall, traffic shaping (including the rule that cuts off our child's internet access late at night), and various virtual networks (VLANs) to isolate the different devices via the Intel 893647 Gigabit network adapter. Internet of Things devices often go without updates, so my husband wants to isolate them from our other computers. OPNsense itself does get updates. In fact, my husband updated it recently, and the machine went from 16 to 32 gigabytes of RAM. My husband said he appreciates that the Lenovo M920q is fairly quiet.
      • The M920q connects to an ASUS GS108Tv2 network switch, which connects to the Synology DS718+ for network storage and to the Odroid-XU4, which also runs Pi-hole to cut down on ads. Proxmox on the M920q also has a virtual machine that's responsible for backing up files to the Synology DS718+.
      • The ASUS GS108Tv2 network switch connects to the ASUS RT-AC66U wifi router, which runs FreshTomato to give us more control than the fibre modem provides. It's capable of 5 GHz wifi and it can handle virtual wifi networks (two or more SSIDs on the same 2.4 GHz or 5 GHz band) to isolate devices like the thermostat. That way, trusted devices like our computers aren't visible to the insecure devices.
      • The wifi router connects to an unmanaged network switch that connects to an Odroid-C4 running OpenELEC and to our old Sony PS3.

      We used to run our network on the ASUS RT-AC66U wifi router with FreshTomato, but my husband upgraded to the Lenovo M920q to make the virtual networks easier to manage and to optimize throughput. He said he chose the components to minimize space, power consumption, and noise. Nothing is new; everything can be bought on eBay or the second-hand market. At the moment, RAM and storage are very expensive, and we don't need high availability or replication.

      network.png

      After school, my daughter had some energy, so I took her to a make-up gymnastics class. It was a group aerial-silks class. While my daughter took part in the class, I studied my Anki cards. She liked the class overall, except for her lost socks. Unfortunately, someone took my daughter's socks instead of their own. I held back from saying that she should have given me her things to hold.

      Wednesday, March 4

      I wrote a post about expanding snippets with speech recognition in Emacs and in other applications.

      I tried Azure's pronunciation assessment and phoneme transcription with the Allosaurus library, but I don't think they're reliable or suited to my goals. I don't know whether Azure's scores are useful. Allosaurus doesn't give me the IPA I want, even when I analyze my tutor's recording. (I should check it against text-to-speech output…)

      FSI's phonology course contrasts two short, similar examples to develop the skill of hearing differences. For now, it's better to improve my process for extracting and listening to the voice segments from my sessions than to practice in a way that's unreliable and probably incorrect, but confident.

      My daughter and I ran errands. After a break, my daughter and I went to the park to play Pokémon Go with lots of other trainers. We won a few raids, but my daughter didn't catch the Pokémon she wanted. She was a little disappointed, but she said it was a good walk anyway.

      My daughter was in a bad mood at bedtime because of my advice during toothbrushing. I stayed calm and gave her space.

      Thursday, March 5

      My daughter woke up on her own this morning and had breakfast, but she didn't want to attend her online classes. Nagging her doesn't help, so I let her manage her own emotions. I worked on the piano. I also improved the automation for gathering the delivery milestones for the Bike Brigade using Spookfox. I found that the key is to use the code

      document.querySelector('form[phx-change="update_options"]')
        .dispatchEvent(new Event('submit', {bubbles: true, cancelable:true}))
      

      to update the table after changing the dates. Spookfox doesn't let me wait for the result if it takes a while, so I have to wait in Emacs Lisp like this:

      (let (result)
        (dolist (block-name '("milestone-this-month-set"
                              "milestone-this-month-get"
                              "milestone-before-month-set"
                              "milestone-before-month-get"
                              "milestone-after-month-set"
                              "milestone-after-month-get"
                              "milestone-summary"))
          (setq result
                 (org-babel-execute-src-block
                  nil
                  (org-babel-lob--src-info block-name)
                  nil 'babel-call))
          (when (string-match "-set" block-name)
            (message "Waiting after %s..." block-name)
            (sit-for 3)))
        (kill-new result)
        (message "Copied."))
      

      That way, I simplified the process and reduced the number of clicks. The full code is here.

      Friday, March 6

      I loved working on my pronunciation using my notes about our network, which my tutor had asked me about on Tuesday and my husband had helped me with. I still need to work on the alphabet, which I need in order to read model names out loud. My tutor also has questions about LLMs. I'm looking forward to writing more notes.

      We rearranged some furniture because our daughter's new bed arrives tomorrow. We moved the shelves in my daughter's room into a corner that is now my new desk space.

      My daughter was very frustrated with school today. She skipped her classes, and she wanted to come home early from her outing with her friend. I think it was a bit of a hard day for her. I reminded myself to think long-term, without nagging.

      Saturday, March 7

      My daughter and I played Dungeons & Dragons with my sisters and my nieces. We enjoyed the session. In the story, there were kobolds living in one of the Caves of Chaos who regretted catching a bear. The bear was very hungry and so were the kobolds, because the kobolds kept giving it their food to avoid getting hurt. The cleric (my daughter) and the fighter (one of my nieces) managed to lure the bear outside with blueberries. My sister the wizard led the charge against marauders living in another cave, and we defeated them. In one room, we saw two chests, but we discovered that one chest was actually a mimic. After another fight, we found 150 gold pieces, some boots, and a mysterious potion.

      After lunch, my daughter and I took a walk in the park while playing Pokémon Go. The weather was nice, with a lot of mist that felt a little magical.

      Then my husband and I took apart my daughter's old bed and some other furniture in her room to make space for her new bed.

      Sunday, March 8

      My daughter managed to avoid falling out of her new loft bed. Success! My husband finished sanding and varnishing the wooden guardrail he had been making, so he installed it to let us use the mattress, which is too thick for the original guardrail.

      I started moving my code out into a new language-learning package. I don't know whether it's useful to anyone else, but if I want to help other people try it, it needs a bit of work.

      The weather was beautiful. My husband, my daughter, and I went to IKEA to buy cushions, lights, and a gym mat for the little play corner under my daughter's new bed. While we were there, my daughter saw a knife she liked, so we bought that too. At home, she set up the mat and cushions herself. She decided to return the lights for a refund next week.

      For dinner, we made chicken nuggets, fries, and broccoli.

      On artificial intelligence

      In our previous session, my tutor asked me questions about artificial intelligence. I want to reflect on AI as a way to work on my pronunciation using a topic that interests both of us, and to find areas for improvement.

      First, some context to explain my perspective:

      • I'm setting aside questions about environmental impact or the ethics of the training data.
      • So far, I've tried AI for my interests, like parenting, learning French, and programming in Emacs Lisp, Python, and JavaScript. I've also used it for research.
      • I only do a little consulting work, and really just for fun. I don't want to increase my workload because I'm focusing on my daughter and my personal interests. Nothing is pushing me to use AI (like a boss, clients, or competitors). AI doesn't threaten me. I can use it or not, as I please. I can focus on my happiness.
      • I can put a small part of my budget toward experiments, but I don't want to work more in order to justify a bigger expense. For now, the free usage limits of Gemini, Claude, and Azure are enough for my ideas and my limited time. I don't have the focused time needed to justify investing in my own hardware, and besides, progress is too fast to commit to a specific setup.
      • I'm keenly aware of cognitive and physical limits because of my mother's and my sister's health difficulties, and because of my experience with my own limitations as my daughter's primary caregiver.
      • I read very quickly, but I don't have much patience for long video or audio content. I don't like text that contains a lot of filler.
      • I like programming, so I understand a bit of how AI works and I can't attribute real intelligence to it. I also don't like unpredictable results.
      • For me, it's easy to start lots of ideas. It's hard to see them through. I struggle to finish my tasks because new ideas keep arriving. But almost none of my tasks are truly necessary, so it's fine.
      • I like incremental improvement. I prefer small steps, small functions, small programs.
      • Many people have a strong reaction against AI for several reasons, including the excessive hype around it, its misuse, and the flood of mediocrity it produces.

      Programming

      For programming, I find it works better for short programs than for long ones. I often rewrite most of the program, except for a piece or two, because the code doesn't suit me. From time to time, I use AI to polish an idea or check it quickly before working on it myself. I don't want to use it for patches I want to submit to other projects, because the code doesn't feel right to me and I don't want to waste other volunteers' time.

      A few concrete examples:

      • It was useful for implementing a function that compares two lists and returns the added, removed, or changed elements via a classic algorithm that I understand a little, but not well enough to implement myself.
      • It was useful for testing the idea of a Kokoro TTS server that's compatible with the speechd server, because I don't yet know how to write a multithreaded server in Python. I like being able to give it three git repositories and instructions for generating a program from one repository for another via the third. But I don't want to publish it before rewriting it and understanding everything.
      • It was useful for generating web interfaces for my personal ideas.
      • It wasn't very useful for tinkering with my configuration (apart from occasionally identifying commands or variables I didn't know about), because I enjoy the tinkering. Specifying my goals is often as much work as implementing them myself.

      My husband has his own Claude AI subscription. He said he appreciates it because the AI can handle lots of small tasks that would otherwise require a lot of research. For my part, I often use Gemini AI because its free usage limit is generous. I've also tried Claude Code, but my knowledge is limited. It seems useful, but I prefer to isolate it in a virtual machine, so it's not very practical for me right now.

      AI is very useful for using commands that have lots of options, like ffmpeg or gnuplot.

      I don't find AI reliable enough to let it act completely independently. Maybe one day, but for me, not yet.

      Learning French

      I like using AI to get feedback on my writing. If I used only the dictionary, I would produce a lot of anglicisms through literal translation. The topics that interest me are a bit niche, so it might be hard to find a tutor who focuses on exactly those. It's a bit inefficient to correct my writing word by word with a professional. My journal and my thoughts aren't that important. With AI, I don't have to spend my tutor's time correcting lots of mistakes like subject-verb agreement or awkward word choices, and I discover new words and expressions. The AI's suggestions are occasionally strange, so it's always a good idea to check with real people. Without AI, I could perhaps learn more slowly with the help of the Internet, which has plenty of resources like the Vitrine linguistique.

      I've tried AI for giving feedback on my pronunciation, but I don't think it's reliable yet, and I don't have the experience to judge it well. I could perhaps verify my results with a tutor, but that might be difficult because of conflicting objectives, like people who are asked to train their replacements. Actually, I don't want to replace the human connection. I want to enjoy more and learn more with the help of real people, supplemented by help from AI. There are researchers studying applications of AI to language learning. I can wait for their findings. In the meantime, I think it's better to use AI to understand other ways of analyzing my pronunciation myself, and to build personalized tools, maybe like summaries and excerpts from our sessions, visualizations of my attempts, or an interface for recording and listening in real time.

      From time to time, I try generating stories or articles that are comprehensible at (or almost at) my level. For now, I prefer other reading resources, like show subtitles. Still, the machine translations on Reddit interest me, so I managed to replace my news feed with a French-language feed.

      I'm not ready to converse with AIs by voice yet. I've tried free conversation and nearly scripted dialogue. I love live subtitles, but I haven't always found a method or system that works for me. In free conversation, I know the other party is an AI, so I have no real curiosity about its "interests or thoughts". The conversation felt very artificial. Besides, I think I'd rather build one myself for more control. In any case, my pronunciation, grammar, and vocabulary need work. In scripted dialogue, I don't yet have a rich enough vocabulary to discuss the topics in general exercises. And if I'm just repeating, I don't need AI for that.

      Parenting

      I've sometimes used Claude AI to generate interactive stories about my daughter's interests. The stories include the words my daughter needs to learn for her class. They let her tap a word to hear it via text-to-speech and see the translation. She likes this format. My daughter's teacher doesn't have time to personalize vocabulary learning to that degree, and she's too unpredictable to plan her own sessions with a tutor.

      She likes generating other interactive stories with the AI herself, like little games about KPop Demon Hunters or Pokémon. I think it's a good way for her to practice thinking about what she wants, how to explain it, and how to refine it.

      She's 10. Nobody knows what the world will really look like when she grows up. I think it's better for my husband and me to model how to approach it, how to learn, and how to decide what we think, without fear or hype.

      Without AI, we could improvise our own stories. But I think the ability to give her more control in a rapid feedback loop1 is a good thing.

      I don't like using it to try to resolve my parenting dilemmas, because AI affirms whatever you give it. From time to time, I use it to generate questions for reflection, which is a bit more useful.

      Miscellany

      I really like speech recognition because it lets me capture more ideas faster (before I forget them) and analyze the transcripts without having to re-listen to all the recordings. Lots of things can prevent a person from typing. I enjoy programming and writing, and I want to keep doing them for a long time. I'm looking forward to exploring voice interfaces.

      I think the probabilistic approach AI uses is promising for searching for things I don't know exactly, which will be very useful when you have brain fog. I don't like summaries, which are often bad and which remove the experience of encountering other people who are thinking similar things too. I like following links where I can learn more. I also like asking the AI a few questions before, or instead of, asking a real person.

      Next steps for me

      I'm going to keep trying AI in my areas of interest. I want to extract my personal functions into speech-recognition and language-learning libraries to help other people, but I'm moving slowly because my attention is easily diverted. Little by little.

      I want to try the AI libraries for Emacs, like agent-shell. If I can manually approve each command, I think it's fine.

      Footnotes

      1

      Feedback loop? My tutor was not sure about the wording.

      You can e-mail me at sacha@sachachua.com.

    8. 🔗 r/wiesbaden r/Wiesbaden, where do you actually go out to eat in Wiesbaden? rss

      Hi,
      we're four guys and we've packed the Wiesbaden restaurant scene into an app called Vota. The concept is simple: you're shown two restaurants side by side, for example Ente vs. Das Goldstein, you pick the one you'd rather go to, and the ranking updates immediately. The more people vote, the more accurate the list becomes over time. There are still a few duplicate entries here and there, but I'm cleaning up the data continuously.

      Here's the iPhone version, with categories that fit the Wiesbaden food scene:
      https://apps.apple.com/app/vota-restaurant-ratings/id6744969212

      And here's the Android version, finally live:
      https://play.google.com/store/apps/details?id=org.vota.app

      P.S. I'm not from Wiesbaden; I live in Gothenburg. I don't collect data, I'm not selling anything, and the app doesn't use AI-generated content. I'm posting in several subreddits because we now support several regions, and I'd appreciate honest feedback from people who really know the city.

      submitted by /u/TheShynola

    9. 🔗 sacha chua :: living an awesome life Emacs Lisp and NodeJS: Getting the bolded words from a section of a Google Document rss

      During the sessions with my French tutor, I share a Google document so that we can mark the words where I need to practice my pronunciation some more or tweak the wording. Using Ctrl+B to mark the word as bold is an easy way to make it jump out.

      I used to copy these changes into my Org Mode notes manually, but today I thought I'd try automating some of it.

      First, I need a script to download the HTML for a specified Google document. This is probably easier to do with the NodeJS library rather than with oauth2.el and url-retrieve-synchronously because of various authentication things.

      require('dotenv').config();
      const { google } = require('googleapis');
      
      async function download(fileId) {
        const auth = new google.auth.GoogleAuth({
          scopes: ['https://www.googleapis.com/auth/drive.readonly'],
        });
        const drive = google.drive({ version: 'v3', auth });
        const htmlRes = await drive.files.export({
          fileId: fileId,
          mimeType: 'text/html'
        });
        return htmlRes.data;
      }
      
      async function main() {
        console.log(await download(process.argv.length > 2 ? process.argv[2] : process.env['DOC_ID']));
      }
      
      main();
      

      Then I can wrap a little bit of Emacs Lisp around it.

      (defvar my-google-doc-download-command
        (list "nodejs" (expand-file-name "~/bin/download-google-doc-html.cjs")))
      
      (defun my-google-doc-html (doc-id)
        (when (string-match "https://docs\\.google\\.com/document/d/\\(.+?\\)/" doc-id)
          (setq doc-id (match-string 1 doc-id)))
        (with-temp-buffer
          (apply #'call-process (car my-google-doc-download-command)
                 nil t nil (append (cdr my-google-doc-download-command) (list doc-id)))
          (buffer-string)))
      

      I have lots of sections in that document, including past journal entries, so I want to get a specific section by name.

      (defun my-html-get-section (dom section-name)
        "Return DOM elements for SECTION-NAME."
        ;; Find the section heading (h1 ... h6) where the text equals section-name
        ;; Collect all the siblings until the next heading of equal or higher level
        (let*
            ((matching (dom-search dom (lambda (o)
                                         (and (string-match "h[1-6]" (symbol-name (dom-tag o)))
                                              (string= (string-trim (dom-texts o " ")) section-name)))))
             (parent (and matching (dom-parent dom (car matching))))
             level
             results)
          (catch 'done
            (dolist (o (dom-children parent))
              (cond
               ((and (string-match "h[1-6]" (symbol-name (dom-tag o)))
                     (string= (string-trim (dom-texts o)) section-name))
                (setq level (symbol-name (dom-tag o))))
               (level
                (if (and (string-match "h[1-6]" (symbol-name (dom-tag o)))
                         (not (string< level (symbol-name (dom-tag o)))))
                    (throw 'done (nreverse results))
                  (push o results)))
               ;; Ignore before the matching heading
               ))
      (nreverse results))))
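      To see what that returns, here is a sketch with a made-up HTML snippet (assuming an Emacs built with libxml support). The function hands back the nodes between the matching heading and the next heading at the same or a higher level:

      ```elisp
      ;; Hypothetical document: the "Notes" section has two paragraphs
      ;; before the next h2, so those two nodes come back.
      (let* ((html "<html><body><h2>Intro</h2><p>hello</p><h2>Notes</h2><p>first</p><p>second</p><h2>Other</h2><p>rest</p></body></html>")
             (dom (with-temp-buffer
                    (insert html)
                    (libxml-parse-html-region (point-min) (point-max)))))
        (mapcar #'dom-texts (my-html-get-section dom "Notes")))
      ;; => ("first" "second")
      ```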
      

      Now I can get the bolded words from a section of my notes, with just a sentence for context. I use pandoc to convert it to Org Mode syntax.

      (defvar my-lang-words-for-review-context-function 'sentence-at-point)
      
      (defun my-lang-tutor-notes (section-name)
        (let* ((my-lang-tutor-notes (my-google-doc-html my-lang-tutor-notes-url))
               (dom (with-temp-buffer
                      (insert my-lang-tutor-notes)
                      (libxml-parse-html-region))))
          (append (list 'div nil)
                  (my-html-get-section dom section-name))))
      
      (defun my-lang-words-for-review (section)
        "List the bolded words for review in SECTION."
        (let* ((section (my-lang-tutor-notes section))
               results)
          (mapc
           (lambda (o)
             (with-temp-buffer
               (insert
                (pandoc-convert-stdio
                 (with-temp-buffer
                   (svg-print (dom-parent section o))
                   (buffer-string))
                 "html"
                 "org"))
               (org-mode)
               (goto-char (point-min))
               (while (re-search-forward "\\*.+?\\*" nil t)
                 (cl-pushnew
                  (replace-regexp-in-string
                   "\n" " "
                   (funcall my-lang-words-for-review-context-function))
                  results
                   :test 'string=))))
           (dom-search
            section
            (lambda (o)
              (when
                  (and
                   (string-match "font-weight:700" (or (dom-attr o 'style) ""))
                   (not (string-match "font-style:normal" (or (dom-attr o 'style) ""))))
                (setf (car o) 'strong)
                t))))
          (nreverse results)))
      

      For example, when I run it on my notes on artificial intelligence, this is the list of bolded words and the sentences that contain them.

      (my-lang-words-for-review "Sur l'intelligence artificielle")
      

      I can then go into the WhisperX transcription JSON file and replay those parts for closer review.

      I also can tweak the context function to give me less information. For example, to limit it to the containing phrase, I can do this:

      (defun my-split-string-keep-delimiters (string delimiter)
        (when string
          (let (results pos)
            (with-temp-buffer
              (insert string)
              (goto-char (point-min))
              (setq pos (point-min))
              (while (re-search-forward delimiter nil t)
                (push (buffer-substring pos (match-beginning 0)) results)
                (setq pos (match-beginning 0)))
              (push (buffer-substring pos (point-max)) results)
              (nreverse results)))))
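      As a quick sanity check (hypothetical input, not from my notes), each delimiter stays attached to the start of the piece that follows it:

      ```elisp
      ;; Splitting on ", " keeps the delimiter at the head of the next piece.
      (my-split-string-keep-delimiters "a, b, c" ", ")
      ;; => ("a" ", b" ", c")
      ```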
      
      (ert-deftest my-split-string-keep-delimiters ()
        (should
         (equal (my-split-string-keep-delimiters
                 "Beaucoup de gens ont une réaction forte contre l'IA pour plusieurs raisons qui *incluent* le battage médiatique excessif dont elle fait l'objet, son utilisation à mauvais escient, et *l'inondation de banalité* qu'elle produit."
                 ", \\| que \\| qui \\| qu'ils? \\| qu'elles? \\| qu'on ")
                '("Beaucoup de gens ont une réaction forte contre l'IA pour plusieurs raisons"
                  " qui *incluent* le battage médiatique excessif dont elle fait l'objet"
                  ", son utilisation à mauvais escient"
                  ", et *l'inondation de banalité*"
                  " qu'elle produit."))))
      
      (defun my-lang-words-for-review-phrase-context (&optional s)
        (setq s (replace-regexp-in-string " " " " (or s (sentence-at-point))))
        (string-join
         (seq-filter (lambda (s) (string-match "\\*" s))
                     (my-split-string-keep-delimiters s ", \\| parce que \\| que \\| qui \\| qu'ils? \\| qu'elles? \\| qu'on \\| pour "))
         " ... "))
      
      (ert-deftest my-lang-words-for-review-phrase-context ()
        (should
         (equal (my-lang-words-for-review-phrase-context
                 "Je peux consacrer une petite partie de mon *budget* à des essais, mais je ne veux pas travailler davantage pour rentabiliser une dépense plus importante.")
                "Je peux consacrer une petite partie de mon *budget* à des essais")))
      
      (let ((my-lang-words-for-review-context-function 'my-lang-words-for-review-phrase-context))
        (my-lang-words-for-review "Sur l'intelligence artificielle"))
      

      Now that I have a function for retrieving the HTML or Org Mode for a section, I can use that to wdiff against my current text to more easily spot wording changes.

      (defun my-lang-tutor-notes-wdiff-org ()
        (interactive)
        (let ((section (org-entry-get (point) "ITEM")))
          (my-wdiff-strings
           (replace-regexp-in-string
            " " " "
            (my-org-subtree-text-without-blocks))
           (replace-regexp-in-string
            " " " "
            (pandoc-convert-stdio
             (with-temp-buffer
               (svg-print
                (my-lang-tutor-notes section))
               (buffer-string))
             "html"
             "org")))))
      

      Screenshot:

      2026-03-10_14-35-28.png
      Figure 1: wdiff
      This is part of my Emacs configuration.

      You can e-mail me at sacha@sachachua.com.

    10. 🔗 News Minimalist 🐢 AI startup raises $1 billion to fix hallucinations + 10 more stories rss

      In the last 4 days Gemini read 118955 top news stories. After removing previously covered events, there are 11 articles with a significance score over 5.5.

      [5.9] Yann LeCun's AMI Labs raises $1.03 billion to build AI world models —techcrunch.com(+11)

      Yann LeCun’s new venture, AMI Labs, raised $1.03 billion to develop world models that learn from physical reality, seeking to overcome the reliability limitations of existing large language models.

      Valued at $3.5 billion, the company focuses on Joint Embedding Predictive Architecture to minimize AI hallucinations. Major investors like NVIDIA and Bezos Expeditions funded the round, supporting a high-profile research team operating across Paris, New York, Montreal, and Singapore.

      Although commercial applications may take years, the startup intends to publish its research and release open-source code. Early deployments will be tested through industrial partners, including the healthcare startup Nabla.

      [6.0] Global repercussions emerge as US, Israel, and Iran war expands —npr.org(+952)

      A week of U.S. and Israeli strikes against Iran has killed Supreme Leader Ayatollah Ali Khamenei and neutralized Iran's military, sparking a widening regional conflict and global economic instability.

      Iran responded with retaliatory attacks across the Middle East, striking U.S. bases and oil infrastructure in several Gulf nations. Fighting has spread to Lebanon while oil prices surged past ninety dollars per barrel following the closure of the strategic Strait of Hormuz.

      Global powers including China and Russia have called for de-escalation as diplomatic tensions rise between the U.S. and European allies. Meanwhile, the conflict continues to disrupt energy markets and international trade.

      Highly covered news with significance over 5.5

      [6.4] Ukraine deploys armed robots to combat Russian forces — bbc.com (+3)

      [6.4] Germany becomes the fourth-largest global arms exporter — tagesschau.de (German) (+18)

      [6.1] China exports surge in first two months of the year despite Trump tariffs — bbc.com (+8)

      [5.9] Trump pressures Latin American leaders to reduce China ties — courant.com (+77)

      [5.8] France sends aircraft carrier to protect Strait of Hormuz shipping — smh.com.au (+15)

      [5.8] Apple increases iPhone production in India to 25% — businesstoday.in (+6)

      [5.7] Trump launches Americas Counter Cartel Coalition with Latin American and Caribbean nations — nytimes.com (+9)

      [5.7] Federal pilot program launches flying cars in eight US regions this summer — wired.com [$] (+4)

      [5.8] UK cancer death rates reach historic low — news.sky.com (+4)

      Thanks for reading!

      — Vadim



    11. 🔗 r/reverseengineering Reverse engineering FORM swim goggles: custom protobuf over BLE, 697 captured API requests, full protocol documented rss
    12. 🔗 r/LocalLLaMA This guy 🤡 rss

      This guy 🤡 | At least T3 Code is open-source/MIT licensed. submitted by /u/xenydactyl
      [link] [comments]

    13. 🔗 r/reverseengineering I've made indent guides plugin for IDA rss
    14. 🔗 pydantic/monty v0.0.8 - 2026-03-10 release

      What's Changed

      New Contributors

      Full Changelog : v0.0.7...v0.0.8

    15. 🔗 r/LocalLLaMA How I topped the Open LLM Leaderboard using 2x 4090 GPUs — no weights modified. rss

      How I topped the Open LLM Leaderboard using 2x 4090 GPUs — no weights modified. | Hi LocalLLaMAs, A few years ago, I found that duplicating a specific block of 7 middle layers in Qwen2-72B, without modifying any weights, improved performance across all Open LLM Leaderboard benchmarks and took #1. As of 2026, the top 4 models on that leaderboard are still descendants. The weird finding: single-layer duplication does nothing. Too few layers, nothing. Too many, it gets worse. Only circuit-sized blocks of ~7 layers work. This suggests pretraining carves out discrete functional circuits in the layer stack that only work when preserved whole. The whole thing was developed on 2x RTX 4090s in my basement. I don't write papers any more, so here is a full technical write-up in Blog format for your enjoyment. I'm the same guy who built GLaDOS, and scored a crazy Nvidia GH200 system here on Reddit. I'm now running current models (GLM-4.7, Qwen3.5, MiniMax M2.5) on this dual GH200 rig (see my other post). Code and new models coming soon, including special RYS versions of Qwen3.5 27B and 35A3B. Happy to answer questions. submitted by /u/Reddactor
      [link] [comments]

    16. 🔗 hyprwm/Hyprland v0.54.2 release

      Another patch release backporting some fixes from main onto 0.54.1.

      Fixes backported

      • config/descriptions: add missing desc entry
      • layout/windowTarget: add visualBox (#13626)
      • algo/scroll: fix unsigned wrap (#13634)
      • compositor: fix missing recheckWorkArea to prevent CReservedArea assert failure (#13590)
      • core: fix i586 build (#13550)
      • deco/border: fix damage region
      • desktop/rules: fix empty workspace handling (#13544)
      • desktop/windowRule: fix matching CONTENT (#13636)
      • layout/groupTarget: fix crash on null space assignment (#13614)
      • layout: fix crash on monitor reconnect due to stale workspace state
      • layout: fix drag_threshold window snap regression (rebased for #12890) (#13140)
      • layout: fix null deref in focalPointForDir and moveInDirection (#13652)
      • pointer: fix hardware cursor rendering on rotated/flipped monitors (#13574)
      • protocols/sessionLock: fix crash when monitor is gone during lock surface creation
      • screencopy: fix minor crash (#13566)
      • algo/dwindle: Respect force_split when moving windows to workspaces (#13038)
      • algo/dwindle: do NOT use smart_split for overridden focal point (#13635)
      • screenshare: improve destroy logic of objects (#13554)

      Special thanks

      As always, massive thanks to our wonderful donators and sponsors:

      Sponsors

      Diamond

      37Signals

      Gold

      Framework

      Donators

      Top Supporters:

      Seishin, Kay, johndoe42, d, vmfunc, Theory_Lukas, --, MasterHowToLearn, iain, ari-cake, TyrHeimdal, alexmanman5, MadCatX, Xoores, inittux111, RaymondLC92, Insprill, John Shelburne, Illyan, Jas Singh, Joshua Weaver, miget.com, Tonao Paneguini, Brandon Wang, Arkevius, Semtex, Snorezor, ExBhal, alukortti, lzieniew, taigrr, 3RM, DHH, Hunter Wesson, Sierra Layla Vithica, soy_3l.beantser, Anon2033, Tom94

      New Monthly Supporters:

      monkeypost, lorenzhawkes, Adam Saudagar, Donovan Young, SpoderMouse, prafesa, b3st1m0s, CaptainShwah, Mozart409, bernd, dingo, Marc Galbraith, Mongoss, .tweep, x-wilk, Yngviwarr, moonshiner113, Dani Moreira, Nathan LeSueur, Chimal, edgarsilva, NachoAz, mo, McRealz, wrkshpstudio, crutonjohn

      One-time Donators:

      macsek, kxwm, Bex Jonathan, Alex, Tomas Kirkegaard, Viacheslav Demushkin, Clive, phil, luxxa, peterjs, tetamusha, pallavk, michaelsx, LichHunter, fratervital, Marpin, SxK, mglvsky, Pembo, Priyav Shah, ChazBeaver, Kim, JonGoogle, matt p, tim, ybaroj, Mr. Monet Baches, NoX, knurreleif, bosnaufal, Alex Vera, fathulk, nh3, Peter, Charles Silva, Tyvren, BI0L0G0S, fonte-della- bonitate, Alex Paterson, Ar, sK0pe, criss, Dnehring, Justin, hylk, 邱國玉KoryChiu, KSzykula, Loutci, jgarzadi, vladzapp, TonyDuan, Brian Starke, Jacobrale, Arvet, Jim C, frank2108, Bat-fox, M.Bergsprekken, sh-r0, Emmerich, davzucky, 3speed, 7KiLL, nu11p7r, Douglas Thomas, Ross, Dave Dashefsky, gignom, Androlax, Dakota, soup, Mac, Quiaro, bittersweet, earthian, Benedict Sonntag, Plockn, Palmen, SD, CyanideData, Spencer Flagg, davide, ashirsc, ddubs, dahol, C. Willard A.K.A Skubaaa, ddollar, Kelvin, Gwynspring, Richard, Zoltán, FirstKix, Zeux, CodeTex, shoedler, brk, Ben Damman, Nils Melchert, Ekoban, D., istoleyurballs , gaKz, ComputerPone, Cell the Führer, defaltastra, Vex, Bulletcharm, cosmincartas, Eccomi, vsa, YvesCB, mmsaf, JonathanHart, Sean Hogge, leat bear, Arizon, JohannesChristel, Darmock, Olivier, Mehran, Anon, Trevvvvvvvvvvvvvvvvvvvv, C8H10N4O2, BeNe, Ko-fi Supporter :3, brad, rzsombor, Faustian, Jemmer, Antonio Sanguigni, woozee, Bluudek, chonaldo, LP, Spanching, Armin, BarbaPeru, Rockey, soba, FalconOne, eizengan, むらびと, zanneth, 0xk1f0, Luccz, Shailesh Kanojia, ForgeWork , Richard Nunez, keith groupdigital.com, pinklizzy, win_cat_define, Bill, johhnry, Matysek, anonymus, github.com/wh1le, Iiro Ullin, Filinto Delgado, badoken, Simon Brundin, Ethan, Theo Puranen Åhfeldt, PoorProgrammer, lukas0008, Paweł S, Vandroiy, Mathias Brännström, Happyelkk, zerocool823, Bryan, ralph_wiggums, DNA, skatos24, Darogirn , Hidde, phlay, lindolo25, Siege, Gus, Max, John Chukwuma, Loopy, Ben, PJ, mick, herakles, mikeU-1F45F, Ammanas, SeanGriffin, Artsiom, Erick, Marko, Ricky, Vincent mouline

      Full Changelog : v0.54.1...v0.54.2

    17. 🔗 r/Harrogate Improv Session in Harrogate rss

      Improv Session in Harrogate | Hi All, I run improv comedy sessions every couple of weeks in Harrogate. Our next one is next Tuesday (17th March). They are very low pressure, we do some easy group warm ups, followed by games and exercises. Our current sessions are aimed at beginners and improvers so there has never been a better time to try it out. If you have any questions let me know. As a bonus for first time joiners your first session is free. Thanks. submitted by /u/GritstoneBoulderer
      [link] [comments]

    18. 🔗 r/york Hotel Advice rss

      Hi all,

      I'm hoping to book a really nice room for my husband's 40th in October. I've currently booked a suite in the Judges Lodging but have just seen a nice looking deluxe room in Galtres Lodge Hotel.

      Would anyone have a preference here?

      Budget is max £300 a night and would like something as nice as possible given the occasion.

      Thank you in advance. :)

      submitted by /u/Routine_Raisin_3698
      [link] [comments]

    19. 🔗 r/Leeds The scooters are coming... Beryl scheme approved, 100 scooters to be available. rss

      https://democracy.leeds.gov.uk/ieDecisionDetails.aspx?ID=58678

      LCC's Facebook Post with quite a few comments

      Geofencing supposedly in place but it will be interesting to see how this is applied, in some cities they have very definite "no scoot" areas where they just stop working and LCC suggest this, along with speed limiters, will be implemented.

      They mention "control of the e-scooters in defined pedestrianised areas" so presumably they will be allowed on sections like Briggate for example?

      Will the pavements be littered with them and the river / canal their home before long? They suggest they will have to be "docked" like the bikes and not just left in a painted area like in York etc so hopefully this won't be an issue.

      Are our roads, cycle lanes and shared spaces suitable for those small wheels given the state of some of them and helmets only "recommended"?

      Given the illegal scooters pretty much have the keys to the city along with the illegal electric motorbikes pretending to be bicycles will the introduction of the legal scooters make the problem worse as people no longer think "scooter bad" by default?

      Hopefully the pricing isn't as high as the bikes to the point it's as cheap to get a bus, the focus on "cycling infrastructure" makes a bit more sense with this I suppose - will you give them a go?

      submitted by /u/thetapeworm
      [link] [comments]

    20. 🔗 r/york Moving van rentals rss

      Hi there, I’m moving from one side of York to the centre over the next couple of weeks. Does anyone have any good recommendations for small removal companies? Or even just a man with a van for a couple of hours that would be happy to help me move some furniture? It just seems like every quote I get online is ridiculously high or is only by the hour. I’m happy to pay obviously, I’d just like it to be a fixed sum. It’s my first time moving house by myself so am looking for any advice or recommendations. Thank you!

      Edit: Ooh, great ideas from all, thank you. I can rent one, and I do potentially have someone to drive it across York. Just waiting to hear back from them now. Just had a small panic being a 5ft2 woman with some big furniture!

      submitted by /u/WoodpeckerContent
      [link] [comments]

    21. 🔗 r/york Grotesque proposal for a country themed bar rss

      Grotesque proposal for a country themed bar | Does anyone have any intel about where this proposed new venue will be located, and how local residents can contest these plans? From the tasteless AI images shared online yesterday, via a recruitment ad, it looks very similar to the church at the bottom of Micklegate (Jalou). Either way, it sounds like catnip to Stag and Hen dos. York does not need any more venues which promote unsafe and excessive drinking levels and fuel episodes of antisocial behaviour and littering, alienating residents from accessing and enjoying THEIR city centre at weekends. The council should be supporting independent business ideas which reflect the city’s culture and heritage, and most importantly show respect for and work alongside residents. Would a venue like this even be proposed somewhere like Edinburgh? Exactly. More class, less fad is what our city should aspire to. York residents have significant disposable income. However we aren't going to want to spend our money/leisure time here if we risk being dragged into breaking up a brawl, being vomited on, or our children and pets stepping in broken glass. submitted by /u/Aggravating-Unit3970
      [link] [comments]

    22. 🔗 HexRaysSA/plugin-repository commits sync repo: +3 plugins, +4 releases rss
      sync repo: +3 plugins, +4 releases
      
      ## New plugins
      - [ApplyCalleeTypeEx](https://github.com/Dump-GUY/ApplyCalleeTypeEx) (1.0.0)
      - [IDAssist](https://github.com/symgraph/IDAssist) (1.0.4)
      - [IDAssistMCP](https://github.com/symgraph/IDAssistMCP) (1.0.3)
      
      ## New releases
      - [IDAGuides](https://github.com/libtero/idaguides): 1.2.0
      
    23. 🔗 r/reverseengineering IronPE - Minimal Windows PE manual loader written in Rust. rss
    24. 🔗 @malcat@infosec.exchange We're happy to announce that [#malcat](https://infosec.exchange/tags/malcat) mastodon

      We're happy to announce that #malcat 0.9.13 is out!

      You'll find a new Apple-silicon MacOS port, two integrated MCP servers (in-GUI +headless) for automated triage and an improved interface:

      https://malcat.fr/blog/0913-is-out-macos-port-mcp-server-and-dark-mode

    25. 🔗 r/Yorkshire Yorkshire pudding is easily the best part of a roast. I don’t think a roast dinner is complete without one. rss
    26. 🔗 r/wiesbaden Wiesbaden macht Wiesbaden-Sachen rss
    27. 🔗 r/LocalLLaMA Qwen 3.5 0.8B - small enough to run on a watch. Cool enough to play DOOM. rss

      Qwen 3.5 0.8B - small enough to run on a watch. Cool enough to play DOOM. | So I went down the rabbit hole of making a VLM agent that actually plays DOOM. The concept is dead simple - take a screenshot from VizDoom, draw a numbered grid on top, send it to a vision model with two tools (shoot and move), the model decides what to do. Repeat. The wild part? It's Qwen 3.5 0.8B - a model that can run on a smartwatch, trained to generate text, but it handles the game surprisingly well. On the basic scenario it actually gets kills. Like, it sees the enemy, picks the right column, and shoots. I was genuinely surprised. On defend_the_center it's trickier - it hits enemies, but doesn't conserve ammo, and by the end it keeps trying to shoot when there's nothing left. But sometimes it outputs stuff like "I see a fireball but I'm not sure if it's an enemy", which is oddly self-aware for 0.8B parameters. The stack is Python + VizDoom + direct HTTP calls to LM Studio. Latency is about 10 seconds per step on an M1-series Mac. Currently trying to fix the ammo conservation - adding a "reason" field to tool calls so the model has to describe what it sees before deciding whether to shoot or not. We'll see how it goes. submitted by /u/MrFelliks
      [link] [comments]

    28. 🔗 r/Yorkshire Sometimes you forget how beautiful Yorkshire actually is. rss

      Sometimes you forget how beautiful Yorkshire actually is. | submitted by /u/Pinkplatabys
      [link] [comments]

    29. 🔗 MetaBrainz He’s the man who made music metadata “free” rss

      Thank you to Giampiero Di Carlo, the editor of Rockol, who gave us permission to repost this article. Originally posted in Italian at: https://musicbiz.rockol.it/news-757360/robert-kaye-1970-2026-scomparso-il-fondatore-di-musicbrainz

      The following English translation is courtesy of Google Translate with some manual edits.

      On February 21, 2026, Robert Kaye, founder and Executive Director of the
      MetaBrainz Foundation, the non-profit organization that supports projects like MusicBrainz and ListenBrainz, passed away. The news was announced a few days later by the MetaBrainz Board, which described it as an unexpected passing. Reposting this remembrance on Rockol MusicBiz late was intentional: we were friends and he deserves the visibility that the particular nature of the past week would have obscured.

      What we lose

      For those who work with music—from archives to platforms, from collectors to DJ software—Kaye is one of those figures who rarely make the front cover, yet change everything: he built the "silent" infrastructure that allows music to be found, sorted, recognized, and correctly linked over time, without this data remaining imprisoned in proprietary databases. Robert Kaye was a visionary of the free/open source community and the driving force behind the "Brainz" ecosystem. His loss is felt not only by those who compile metadata, but by anyone who uses tools based on that information.

      The reaction of the MetaBrainz community, in the official thread, speaks volumes about the human impact beyond the technical one: for many, he wasn't "just" a founder, but a daily presence within a project that thrives on volunteers, discussions and patience.

      Kaye was an engineer by training (Computer Engineering at Cal Poly) and had worked in companies and projects related to MP3 and music software during the dot-com era. At MetaBrainz, they tell it this way: his work on MP3 and his move to eMusic/FreeAmp was the spark that led him to build MusicBrainz and "fall in love" with open source.

      In 2004, he founded the MetaBrainz Foundation in California as a 501(c)(3), with a clear model: free non-commercial use and seeking financial support from commercial entities that benefit from the data and services.

      MusicBrainz and Beyond

      MusicBrainz is often described as an open music encyclopedia: a community database of artists, releases, and relationships that is the backbone for tagging, cataloging, and software integrations. The MetaBrainz ecosystem has since expanded (into ListenBrainz and other projects) but maintained the core idea: making metadata reusable, interoperable, and verifiable by a community. In practice, Robert Kaye's work is visible everywhere without his name appearing: when software correctly recognizes an artist despite homonyms, when an archive links releases and reissues, when a DJ tags a library consistently, when an app displays credits and discographies with fewer errors.

      MetaBrainz has already clarified that the project continues under the guidance of the Board and the existing structure and that updates on the transition will be shared. This is a very delicate transition: when a founder of an infrastructure passes away, the challenge is not just "keeping the servers running," but maintaining the trust of communities and commercial partners who depend on the collective effort.

      A "visible" founder: style, character, community

      Many tributes in recent days have emphasized a detail that is often crucial in open source projects: the founder's personality as the glue. In a personal recollection, Denny Vrandečić describes him as a "principled", "determined", loud and generous figure, capable of both energy and care—a rare combination in someone who must balance vision, inevitable conflicts within a community and sustainability. This isn't folklore: in community projects "governance" also involves tone, presence and the ability to make things happen without shutting down those who contribute. And we're not talking about a niche project here, but a piece of the music internet that many industries take for granted.

      To honor Robert Kaye today, it's crucial to emphasize that his legacy isn't a product but an operationalized idea: that music data can remain a common good, defensible and improvable, rather than becoming merely a closed commodity. And it's an idea that, in 2026, retains a certain weight.

    30. 🔗 Julia Evans Examples for the tcpdump and dig man pages rss

      Hello! My big takeaway from last month's musings about man pages was that examples in man pages are really great, so I worked on adding (or improving) examples to two of my favourite tools' man pages.

      Here they are:

      the goal: include the most basic examples

      The goal here was really just to give the absolute most basic examples of how to use the tool, for people who use tcpdump or dig infrequently (or have never used it before!) and don't remember how it works.

      So far saying "hey, I want to write an examples section for beginners and infrequent users of this tool" has been working really well. It's easy to explain, I think it makes sense from everything I've heard from users about what they want from a man page, and maintainers seem to find it compelling.

      Thanks to Denis Ovsienko, Guy Harris, Ondřej Surý, and everyone else who reviewed the docs changes, it was a good experience and left me motivated to do a little more work on man pages.

      why improve the man pages?

      I'm interested in working on tools' official documentation right now because:

      • Man pages can actually have close to 100% accurate information! Going through a review process to make sure that the information is actually true has a lot of value.
      • Even with basic questions "what are the most commonly used tcpdump flags", often maintainers are aware of useful features that I'm not! For example I learned by working on these tcpdump examples that if you're saving packets to a file with tcpdump -w out.pcap, it's useful to pass -v to print a live summary of how many packets have been captured so far. That's really useful, I didn't know it, and I don't think I ever would have noticed it on my own.

      It's kind of a weird place for me to be because honestly I always kind of assume documentation is going to be hard to read, and I usually just skip it and read a blog post or Stack Overflow comment or ask a friend instead. But right now I'm feeling optimistic, like maybe the documentation doesn't have to be bad? Maybe it could be just as good as reading a really great blog post, but with the benefit of also being actually correct? I've been using the Django documentation recently, and it's really good! We'll see.

      on avoiding writing the man page language

      The tcpdump project's man page is written in the roff language, which is kind of hard to use and which I really did not feel like learning.

      I handled this by writing a very basic markdown-to-roff script to convert Markdown to roff, using similar conventions to what the man page was already using. I could maybe have just used pandoc, but the output pandoc produced seemed pretty different, so I thought it might be better to write my own script instead. Who knows.

      I did think it was cool to be able to just use an existing Markdown library's ability to parse the Markdown AST and then implement my own code-emitting methods to format things in a way that seemed to make sense in this context.

      man pages are complicated

      I went on a whole rabbit hole learning about the history of roff, how it's evolved since the 70s, and who's working on it today, inspired by learning about the mandoc project that BSD systems (and some Linux systems, and I think Mac OS) use for formatting man pages. I won't say more about that today though, maybe another time.

      In general it seems like there's a technical and cultural divide in how documentation works on BSD and on Linux that I still haven't really understood, but I have been feeling curious about what's going on in the BSD world.

  2. March 09, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-09 rss

      IDA Plugin Updates on 2026-03-09

      New Releases:

      Activity:

      • augur
      • haruspex
      • ida-dbimporter
        • f12484dc: Merge PR #5 exporter: v0.0.2: add exporter functionality, fix bugs
        • a20e0a86: changed readme, removed requirements.txt, changed parser opts, added …
        • 3c27470d: add useless list errors to ignore list
        • cf1baffe: version bump, minor stuff; see below
        • 9ae25937: Add exporter functionality
        • d10fe83c: change capitalization of md files for repo
      • idaguides
        • 3b1c20f2: changed: action name to generic added: persistent cfg state added: dy…
        • df46e664: added: toggle switch from fareedfauzi/main
      • idawilli
        • 4252e83c: Remove per-line address prefixes from listing output to reduce token …
        • 0cf1d532: Add concurrent database access protection via flock and .nam polling
        • 150ec138: Update planning docs timestamps
        • 679bdb23: Migrate idals to domain APIs and add offset formatter
        • e2770dd9: Remove IdaRuntime and use Database directly across helpers
        • dce0ec4b: Switch core lookups from runtime ida_* modules to ida-domain db APIs
        • ac65ff44: Use ida-domain Database.open directly and top-level IDA imports
        • f062f198: Merge pull request #122 from williballenthin/copilot/update-readme-sc…
        • 1b7bd444: docs: always render README screenshots from snapshots
        • 668f09c6: docs: add colored entrypoint idals screenshot
        • 587c8475: docs: refresh idals README examples with freeze screenshots
        • f6f05062: Initial plan
      • rhabdomancer
    2. 🔗 r/york Photographers! rss

      Anyone know of photographers who could get this style of photo? Already got some location ideas in mind, thanks!
      submitted by /u/BravoBaratheon
      [link] [comments]

    3. 🔗 obra/superpowers v5.0.0 release

      Release v5.0.0

    4. 🔗 libtero/idaguides IDA Guides v1.2.0 release

      What's Changed

      • Added a right-click menu toggle to enable/disable guides by @fareedfauzi in #1

      New Contributors

      Full Changelog : 1.1.1...1.2.0

    5. 🔗 r/wiesbaden Looking for people to play Skat rss

      I (m/26) am looking for 2-3 people to play Skat with. If it works out, happy to play regularly too, in pubs or at home. I'm flexible about rules: I enjoy playing with various pub rules or by official tournament rules.

      Is there a group that would take me in, or does anyone fancy starting a new one? Or does anyone know of places where Skat is played regularly?

      submitted by /u/itsKoeri
      [link] [comments]

    6. 🔗 r/york For all the talk about reducing car use, it's now pointless catching a U1/U2 bus from the station ... rss

      ... because they randomly swap routes at Merchantgate. Not all of them - you have to ask when you get on if the one you've been waiting for will swap there. If it does and therefore is no use to you from that point, you have to buy a second ticket at Merchantgate, so your journey costs £6 per person and you have the hassle of changing buses. Or, spend the same on an Uber door to door. Make it make sense.

      submitted by /u/Massive-Medicine5413
      [link] [comments]

    7. 🔗 r/LocalLLaMA I am not saying it's Gemma 4, but maybe it's Gemma 4? rss

      Three different tweets combined (today, the previous week, a year ago).
      submitted by /u/jacek2023
      [link] [comments]

    8. 🔗 r/reverseengineering Reverse engineering the Logi Options+ agent's IPC protocol to switch Logitech devices between Bluetooth hosts on macOS rss
    9. 🔗 Jeremy Fielding (YouTube) The Most Important Concept In Engineering rss

      If you want to join my community of makers and Tinkers consider getting a YouTube membership 👉 https://www.youtube.com/@JeremyFieldingSr/join

      If you want to chip in a few bucks to support these projects and teaching videos, please visit my Patreon page or Buy Me a Coffee. 👉 https://www.patreon.com/jeremyfieldingsr 👉 https://www.buymeacoffee.com/jeremyfielding

      Social media, websites, and other channel

      Instagram https://www.instagram.com/jeremy_fielding/?hl=en Twitter 👉https://twitter.com/jeremy_fielding TikTok 👉https://www.tiktok.com/@jeremy_fielding0 LinkedIn 👉https://www.linkedin.com/in/jeremy-fielding-749b55250/ My websites 👉 https://www.jeremyfielding.com 👉https://www.fatherhoodengineered.com My other channel Fatherhood engineered channel 👉 https://www.youtube.com/channel/UC_jX1r7deAcCJ_fTtM9x8ZA

      Notes:

      Technical corrections

      Nothing yet

    10. 🔗 r/reverseengineering What if reverse-engineering had Jupyter notebooks? Here they are, for Rizin & Cutter (shareable analysis + binaries). rss
    11. 🔗 r/Leeds A modest proposal to enhance the Hyde Park Robots rss
    12. 🔗 r/york Moving out? rss

      Hello, I (f20) am desperate to get out of my parents' house. I'm always expected to babysit, my siblings are so naughty, and my step dad hates me, always has. I can't continue to live like this. But it'll take years with the council as we're not overcrowded etc. So why is private rent on a one-bed flat so expensive? Any suggestions would be greatly appreciated, as from what I'm seeing, rent alone would be my whole monthly income.

      submitted by /u/General-Buddy3853
      [link] [comments]

    13. 🔗 ghostty-org/ghostty v1.3.0 release

      v1.3.0

    14. 🔗 r/reverseengineering Built an Automated SOC Pipeline That Thinks for Itself, AI-Powered Multi-Pass Threat Hunting using Analyzers rss
    15. 🔗 r/wiesbaden SVE Showdown Deck Trials rss

      Hey there,

      On 21 March, Showdown Deck Trials are happening at Yodasdata in Idstein. You get a deck there (which you can keep) and learn the game. I'm also wondering whether there's a game store in or around Wiesbaden that offers these. Apart from that, I'm still looking for people to play Shadowverse with in or around Wiesbaden. If there's nothing in Wiesbaden, maybe we could also head to Idstein together :)

      submitted by /u/aqua995
      [link] [comments]

    16. 🔗 r/Yorkshire Built a free app that shows all car boot events & charity shops around you in Yorkshire rss

      I'm a big fan of secondhand shopping to find products for low cost. I live in the UK and always found it frustrating that there's no single place to easily find nearby charity shops, thrift stores, car boot sales, or vintage markets. Google Maps misses loads of them.

      So I decided to build an app to solve that, which is also really useful while travelling. You can even share your thrift haul.

      It's called Ganddee (free on iOS & Android).

      I’d love for you to try it out and hear feedback.

      submitted by /u/AntRnd
      [link] [comments]

    17. 🔗 sacha chua :: living an awesome life 2026-03-09 Emacs news rss

      If you use kubernetes-el, don't update for now, and you might want to check your installation if you updated it recently. The repo was compromised a few days ago.

      I've occasionally wanted to tangle a single Org Mode source block to multiple places, so I'm glad to hear that ob-tangle has just added support for multiple targets. Niche, but could be handy. I'm also curious about using clime to write command-line tools in Emacs Lisp that handle argument parsing and all the usual stuff.

      If you're looking for something to write about, why not try this month's Emacs Carnival theme of mistakes and misconceptions?

      Enjoy!

      Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

      You can comment on Mastodon or e-mail me at sacha@sachachua.com.

    18. 🔗 r/Yorkshire Dewsbury food bank Bramwell's Hope destroyed by arsonist rss

      Dewsbury food bank Bramwell's Hope destroyed by arsonist.
      submitted by /u/Kagedeah
      [link] [comments]

    19. 🔗 r/reverseengineering Hands-on x86-64 page table walk: finding a flag in physical RAM with GDB rss
    20. 🔗 Simon Willison Perhaps not Boring Technology after all rss

      A recurring concern I've seen regarding LLMs for programming is that they will push our technology choices towards the tools that are best represented in their training data, making it harder for new, better tools to break through the noise.

      This was certainly the case a couple of years ago, when asking models for help with Python or JavaScript appeared to give much better results than questions about less widely used languages.

      With the latest models running in good coding agent harnesses I'm not sure this continues to hold up.

      I'm seeing excellent results with my brand new tools where I start by prompting "use uvx showboat --help / rodney --help / chartroom --help to learn about these tools" - the context length of these new models is long enough that they can consume quite a lot of documentation before they start working on a problem.

      Drop a coding agent into any existing codebase that uses libraries and tools that are too private or too new to feature in the training data and my experience is that it works just fine - the agent will consult enough of the existing examples to understand patterns, then iterate and test its own output to fill in the gaps.

      This is a surprising result. I thought coding agents would prove to be the ultimate embodiment of the Choose Boring Technology approach, but in practice they don't seem to be affecting my technology choices in that way at all.

      Update: A few follow-on thoughts:

      1. The issue of what technology LLMs recommend is a separate one. What Claude Code Actually Chooses is an interesting recent study in which Edwin Ong and Alex Vikati prompted Claude Code over 2,000 times and found a strong bias towards build-over-buy, but also identified a preferred technical stack, with GitHub Actions, Stripe, and shadcn/ui seeing a "near monopoly" in their respective categories. For the sake of this post my interest is in what happens when the human makes a technology choice that differs from those preferred by the model harness.
      2. The Skills mechanism that is being rapidly embraced by most coding agent tools is super-relevant here. We are already seeing projects release official skills to help agents use them - here are examples from Remotion, Supabase, Vercel, and Prisma.

      You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

    21. 🔗 r/LocalLLaMA Fine-tuned Qwen3 SLMs (0.6-8B) beat frontier LLMs on narrow tasks rss

      We spent a while putting together a systematic comparison of small distilled Qwen3 models (0.6B to 8B) against frontier APIs — GPT-5 nano/mini/5.2, Gemini 2.5 Flash Lite/Flash, Claude Haiku 4.5/Sonnet 4.6/Opus 4.6, Grok 4.1 Fast/Grok 4 — across 9 datasets spanning classification, function calling, QA, and open-book QA. All distilled models were trained using open-weight teachers only (no frontier API outputs in the training loop), with as few as 50 examples. Inference is vLLM on a single H100. The results that surprised us most:

      • Smart Home function calling : Qwen3-0.6B — yes, the 0.6B — hits 98.7% vs Gemini Flash at 92.0%. Some of that gap is the strict eval penalizing reasonable alternative interpretations, but still.
      • Text2SQL : Qwen3-4B distilled gets 98.0% vs Claude Haiku at 98.7% and GPT-5 nano at 96.0%. Cost per million requests: ~$3 vs $378 and $24 respectively.
      • Classification (Banking77, E-commerce, TREC): basically solved. Distilled models land within 0–1.5pp of the best frontier option.
      • Where frontier still wins : HotpotQA (open-ended reasoning + world knowledge) — 92.0% vs Haiku's 98.0%. This is the task type where distillation has the clearest trade-off.

      Overall, distilled models match or beat the best mid-tier frontier model (sub-$1/MTok input) on 6/9 tasks, and effectively tie on a 7th.

      Throughput/latency (Text2SQL, Qwen3-4B on H100):

      • 222 RPS sustained
      • p50: 390ms | p95: 640ms | p99: 870ms
      • 7.6 GiB VRAM (BF16, no quantization)
      • FP8 gave +15% throughput, −44% VRAM, no measurable accuracy loss in brief experiments

      Methodology notes (since I know this sub cares):

      • Same test sets, same prompts, same eval criteria for all models
      • Frontier models run 3× per dataset (reporting mean ± std), distilled at temp=0
      • Eval: exact-match for classification, tool_call_equivalence (JSON comparison w/ default param normalization) for function calling, Claude Sonnet 4.6 as LLM-judge for generation tasks
      • Cost calc: frontier = measured token usage × published pricing (Feb 2026); distilled = H100 at $2.40/hr ÷ sustained RPS
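      The distilled-model cost figure can be sanity-checked from the numbers the post itself quotes. A minimal sketch, assuming full utilization at the stated sustained throughput:

      ```python
      # Back-of-the-envelope check of the "~$3 per million requests" figure for
      # distilled Qwen3-4B on Text2SQL: H100 rental cost divided by sustained RPS.
      H100_PER_HOUR = 2.40   # USD/hr, as quoted in the post
      SUSTAINED_RPS = 222    # requests/sec sustained on one H100

      cost_per_request = H100_PER_HOUR / (SUSTAINED_RPS * 3600)
      cost_per_million = cost_per_request * 1_000_000
      print(f"${cost_per_million:.2f} per million requests")  # ≈ $3.00
      ```

      which lines up with the ~$3 vs $378 (Claude Haiku) and $24 (GPT-5 nano) comparison above.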

      Practical takeaway on when to distill vs. call an API:

      • Distill when you have structured tasks, well-defined schemas, high volume, or data sovereignty needs
      • Frontier API when you need broad world knowledge, freeform generation, or volume is low enough that the cost doesn't matter
      • Best of both worlds: route between the two

      Everything is open source — code, models, data, eval scripts:
      GitHub: https://github.com/distil-labs/inference-efficiency-benchmarks/
      Blog with full charts: https://www.distillabs.ai/blog/the-10x-inference-tax-you-dont-have-to-pay

      Happy to dig into methodology, specific dataset results, or the distillation setup if anyone has questions.

      submitted by /u/Jolly-Gazelle-6060
      [link] [comments]

    22. 🔗 r/Yorkshire Are whippets really that common in Yorkshire? rss

      A friend of mine has recently become quite keen on getting a whippet, and it made me realise how often I hear people mention them around here.

      It almost feels like every other person in Yorkshire either owns one or knows someone who does. Is that actually the case, or is it just something I’ve started noticing more since my friend started talking about getting one?

      submitted by /u/CloudBookmark
      [link] [comments]

    23. 🔗 r/Leeds missing penguin plush rss

      Hi, this may be a long shot but I lost a penguin stuffed animal on Saturday afternoon on Clarence Road, somewhere between the Veralia glass factory and Clarence Dock village. I understand this is really hopeful of me, but I can't find it anywhere and didn't know if someone had picked it up and put it somewhere safe; it's very sentimental to me. I've attached a picture of what it looks like x

      submitted by /u/AlternativePea7255
      [link] [comments]

    24. 🔗 r/reverseengineering DLLHijackHunter v2.0.0 now with Attack Chain Correlation rss
    25. 🔗 r/Leeds Swedish/Noridc import shops (not IKEA) to buy Marabou Chocolate rss

      Hello,

      I took a trip to Sweden, and my fat a** misses the Marabou brand of chocolate.

      This stuff is addictive, and I dread to say it, I prefer it over Cadbury (after Kraft took over...).

      Does anyone know anywhere in Leeds I can get some? Preferably CC. I would order online but wanna check to see if I can source local first, there's always a hidden gem here!

      Thank you

      submitted by /u/NorthWestTown
      [link] [comments]

    26. 🔗 r/reverseengineering /r/ReverseEngineering's Weekly Questions Thread rss

      To reduce the amount of noise from questions, we have disabled self-posts in favor of a unified questions thread every week. Feel free to ask any question about reverse engineering here. If your question is about how to use a specific tool, or is specific to some particular target, you will have better luck on the Reverse Engineering StackExchange. See also /r/AskReverseEngineering.

      submitted by /u/AutoModerator
      [link] [comments]

    27. 🔗 r/reverseengineering pyGhidra - universal-flutter-ssl-pinning rss
    28. 🔗 HexRaysSA/plugin-repository commits sync repo: ~2 changed rss
      sync repo: ~2 changed
      
      ## Changes
      - [idassist](https://github.com/jtang613/IDAssist):
        - 1.0.2: download URL changed
      - [idassistmcp](https://github.com/jtang613/IDAssistMCP):
        - 1.0.1: download URL changed
      
    29. 🔗 r/reverseengineering Challenges in Decompilation and Reverse Engineering of CUDA-based Kernels rss
    30. 🔗 exe.dev February update rss

      February kept us busy! Thanks to all our users for the great feedback: many of the bug fixes and quick improvements were possible because of high-quality reports.

      Here are a few things we shipped in February:

      Idea templates

      The new VM page now has a gallery of idea templates: one-click setups for apps like Gitea, VS Code, Ghost, Minecraft, Grafana, Outline, and more. Each template comes with a tailored prompt that Shelley uses to set up the app automatically, including auth configuration. And of course it includes the most popular one, OpenClaw.

      More than anything, the goal of idea templates is to demonstrate what you can do with a prompt. That empty text box is far more powerful than it looks.

      Take a look at them at exe.dev/idea.

      HTTPS API

      We published an HTTPS API. You can now script exe.dev programmatically — create VMs, manage them, and integrate with your existing tooling — from systems that cannot SSH. There is a particularly fun approach to API tokens using SSH key signing.

      Shelley

      Shelley has had a busy month. With conversation distillation, Shelley uses an LLM to restart the context window with just the right context to continue the conversation. The browser tool learned new tricks, including taking profiles of its browsing for web development, which in turn yielded some performance improvements in the UI. By popular demand, Markdown rendering now happens in conversations by default. You can run !bash in the chat window and pop a shell. We always keep up with the newest models. Enable notifications in the ⌘K palette so you can stay on top of your agents. Use Shelley across a wide variety of (human) languages with internationalization support.

      exe.dev also now supports buying LLM credits as you go.

      As always, you can find us in our discord.

  3. March 08, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-08 rss

      IDA Plugin Updates on 2026-03-08

      New Releases:

      Activity:

    2. 🔗 r/Yorkshire Bradford City Village Moves Forward with 1,000-Home Regeneration Plan rss

      Bradford’s long-anticipated City Village regeneration scheme has secured planning approval, unlocking the delivery of up to 1,000 new homes in the city’s former commercial core.

      The major transformation will see underperforming retail assets, including the Kirkgate Shopping Centre and Oastler Shopping Centre, replaced with new housing, public spaces and mixed-use development. The project is being led by Bradford Council in partnership with regeneration specialist ECF, a joint venture between Homes England, Legal & General and Muse.

      The scheme focuses on the ‘Top of Town’ area, encompassing Chain Street and the sites of the former Oastler and Kirkgate shopping centres. Phase one has now received full approval and will deliver 97 townhouses across Chain Street and the northern section of the Oastler site. The homes will be arranged around new courtyards, landscaped green spaces and a central community green, forming the first step in reshaping the area into a residential neighbourhood.

      submitted by /u/coffeewalnut08
      [link] [comments]

    3. 🔗 r/Yorkshire Where to find resale tickets for the Piece Hall rss

      Exactly as it sounds:

      I've been foolish and assumed tickets for something would be available for longer than they were - is there anywhere to keep an eye on where I won't get fleeced? Facebook groups, etc?

      Thanks!

      submitted by /u/josefbae
      [link] [comments]

    4. 🔗 r/wiesbaden Looking for people from the Rhine-Main/Rhine-Neckar area for mountain sports, high-alpine tours & ski mountaineering rss

      Hello everyone,
      I (m, mid-20s) am looking for people who are into mountain sports, especially high-alpine tours, multi-day ski tours/ski mountaineering, and more demanding hikes.

      I have the basic know-how (equipment, technique, fitness), but what I'm missing locally are people to tackle such tours with. Going alone wouldn't be an option for me, and unfortunately nobody in my circle of friends shares this hobby.

      I'd be glad if a few like-minded people came together, whether you already have experience or want to ease into high-alpine touring yourself. I could imagine getting to know each other casually first, maybe doing smaller tours, and then taking on bigger projects together.

      If you're interested, feel free to get in touch via comment or DM

      submitted by /u/Odd-Purple3420
      [link] [comments]

    5. 🔗 r/Yorkshire What are these along the canal? rss

      Lots are dotted about the Calder and Hebble navigation. Any idea what they are?
      submitted by /u/witchesbowl
      [link] [comments]

    6. 🔗 r/Harrogate Looking for an NHS dentist in the area rss

      Hi, looking for a dentist accepting NHS patients. Are there any you recommend? I live centrally in HG1 but can also travel to the wider area if there's a strong reason.

      I am seeking a general health checkup, and maybe someone to fix my chipped teeth if that is on the NHS; I'm not sure.

      submitted by /u/Apprehensive_Ring666
      [link] [comments]

    7. 🔗 r/wiesbaden Drone shot down? rss

      Something just exploded soundlessly in the air in the direction of Dotzheim. Does anyone have an idea what it could have been? It was quite fast.

      submitted by /u/KHRAKE
      [link] [comments]

    8. 🔗 r/reverseengineering [Update] I know I've shared LCSAJdump before, but v1.1.2 just mapped the entire x86_64 libc graph in <10s. It's now faster than ROPgadget while finding JOPs/Shadow Gadgets they physically miss. rss
    9. 🔗 r/wiesbaden Date ideas rss

      Hello, I'm looking for creative date ideas. The classic dinner out or going for a walk is too boring for me personally, and I don't find it all that great for a date anyway.

      Insider tips are welcome too. It doesn't have to be Wiesbaden only, it can also be something in Mainz, but mostly Wiesbaden would be great.

      Thanks for your suggestions

      submitted by /u/Scharick914
      [link] [comments]

    10. 🔗 r/Harrogate Running Routes in the Area? rss

      I'll be visiting Harrogate in late July and will need to get in a long run on the weekend. I've seen some info about the route to Knaresborough through the gorge, but this seems to be more of a hike than a running trail, and I'm looking for a mostly paved path. I've been to Harrogate before, but never while training. Any help would be great. Thanks!
      Edit to say: I'll be staying at the convention center off the King's Road and won't have a car.

      submitted by /u/EnglishTeach88
      [link] [comments]

    11. 🔗 r/Yorkshire I’ll buy you a drink if you can name where this is in West Yorkshire? rss
    12. 🔗 r/york Blossom out in Rowntrees Park rss

      Lovely walk along the river with the dog and nice to see signs of spring in the park.
      submitted by /u/DentistKitchen
      [link] [comments]

    13. 🔗 r/Leeds Partridge Friend rss

      Not a garden visitor I expected to get in east Leeds, to be honest! My partner read that they are quite rare these days; does anyone know if that's right?

      submitted by /u/alecwa
      [link] [comments]

    14. 🔗 r/Leeds What happened to North Home? rss

      It looks like it's been cleared out and shuttered, but they're still posting normal things on their socials and there's no announcement about closing there or on their website.

      Not like I could ever afford anything in there but it was a nice shop to fantasise in lol

      submitted by /u/Comfortable-Goat-295
      [link] [comments]

    15. 🔗 r/LocalLLaMA Qwen3.5 family comparison on shared benchmarks rss

      Main takeaway: 122B, 35B, and especially 27B retain a lot of the flagship's performance, while 2B/0.8B fall off much harder on long-context and agent categories.
      submitted by /u/Deep-Vermicelli-4591
      [link] [comments]

    16. 🔗 r/Yorkshire Looks like another Whitby chippy is in the spotlight! rss
    17. 🔗 r/reverseengineering GhostWeaver - a malware that lives up to its name rss
    18. 🔗 r/LocalLLaMA I classified 3.5M US patents with Nemotron 9B on a single RTX 5090 — then built a free search engine on top rss

      Patent lawyer here, started coding Dec 2025.

      The pipeline:

      • Downloaded 3.5M US patents (2016-2025) from USPTO PatentsView
      • Loaded everything into a single 74GB SQLite file with FTS5
      • Ran Nemotron 9B locally on RTX 5090 to classify records into 100 tech tags (~48 hours)
      • BM25 ranking with custom weights: title 10.0, assignee 5.0, abstract 3.0, claims 1.0
      • Natural language query expansion via local LLM → FTS5 boolean queries
      • Served with FastAPI + Jinja2, hosted on a Chromebook via Cloudflare Tunnel

      Why FTS5 over vector search? Patent attorneys need exact phrase matching. "solid-state battery electrolyte" should match those exact words, not semantically similar documents about "energy storage." FTS5 gives sub-second queries on 3.5M records with zero external dependencies.
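      As a rough illustration of that design choice, here is a minimal FTS5 sketch with per-column BM25 weights and exact-phrase matching. The table, columns, and sample rows are illustrative, not the author's actual schema; only the weights (title 10.0, assignee 5.0, abstract 3.0, claims 1.0) come from the post. Assumes your Python's bundled SQLite has FTS5 enabled (the default in recent CPython builds).

      ```python
      import sqlite3

      con = sqlite3.connect(":memory:")
      # Full-text table with one column per weighted field.
      con.execute(
          "CREATE VIRTUAL TABLE patents USING fts5(title, assignee, abstract, claims)"
      )
      con.executemany(
          "INSERT INTO patents VALUES (?, ?, ?, ?)",
          [
              ("Solid-state battery electrolyte", "Acme Energy",
               "A sulfide-based solid electrolyte composition.",
               "1. A battery comprising a solid electrolyte."),
              ("Wind turbine blade", "Acme Energy",
               "Energy storage is mentioned only in passing.",
               "1. A blade comprising a composite shell."),
          ],
      )

      # Exact-phrase query, ranked by BM25 with per-column weights in column order.
      # bm25() returns more-negative scores for better matches, so ORDER BY ascending.
      rows = con.execute(
          """
          SELECT title, bm25(patents, 10.0, 5.0, 3.0, 1.0) AS score
          FROM patents
          WHERE patents MATCH '"solid-state battery electrolyte"'
          ORDER BY score
          """
      ).fetchall()
      print(rows)  # only the exact-phrase match comes back
      ```

      The phrase query only matches documents containing those words in sequence, which is the behavior a semantically "similar" vector hit would not guarantee.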

      https://patentllm.org

      Technical writeup: https://media.patentllm.org/en/blog/dev-tool/patent-search-launch

      submitted by /u/Impressive_Tower_550
      [link] [comments]

    19. 🔗 r/york Who's behind "The York Brief"? rss

      So I've recently discovered "The York Brief", which could be a welcome addition to the sources of news we have in York. But I very much like knowing who's -behind- a news publication, because it helps us establish whether they are reputable or not. When I go to their website, though, there's not a single name to be found. Does anyone know who's actually behind the site?

      submitted by /u/GuavaThis3146
      [link] [comments]

    20. 🔗 Register Spill Joy & Curiosity #77 rss

      Many, many years ago, before Docker was released, I knew a guy whose team worked a lot with virtual machines.

      All day long, he told me, they would configure and test and spin up and down virtual machines. I can't remember what they used the machines for, but he told me that an actual, real problem his team faced was managing their attention. You change something in the Vagrant configuration, rebuild the machine, wait for five minutes, and then, once the machine is ready, you no longer know what you were trying to test because you switched to a different window and got stuck on Hacker News.

      So what they did to "fix" this problem, he told me in a tone that said "don't make fun of me for this, this isn't funny", was to watch movies and TV shows on a second monitor. That's right. His teammates would hit return after typing vagrant up, and instead of switching windows, they'd look over to their second monitor to watch a bit of Scrubs. In their peripheral vision they could see when the build was done and go right back to it. A little bit of light TV that's constantly on is less distracting than switching windows.

      Over the years, I've thought of this guy and his team many, many times. Every time I have to wait for a build, to be exact.

      And now I think of him whenever I kick off agents to go run and do something for me. In the future -- and this is one of the few things I'm sure about -- a lot of code will be written while nobody is watching. There will be more agents, running longer, running everywhere, kicked off from anywhere. Where will our attention go? And how will we bring it back when we need to? Watching Scrubs is probably not the solution.

      • Zen of AI Coding. I wish I had written that. I nodded to nearly everything there, but to quote just two things, one: "The economics of software have changed.

      When coding is cheap, implementation stops being the constraint. You can build ten things in parallel. You cannot decide, validate, and ship ten things in parallel, at least not without changing the rest of the pipeline. Cost of delay shifts. It is no longer about developer days. It is about time stuck in other bottlenecks: product decisions, unclear requirements, security review, user testing, release processes, and operational risk. Agents can flood these queues. Inventory grows. Lead time grows. Delay becomes more expensive, not less." And two: "It is trickier than ever to resist the temptation to add features. Resist it. Build what is used. Kill what is not."

      • Yaron Minsky: "I wonder if we're starting to hit a deflationary era in software engineering. For the first time, we're starting to talk about this in a planning context; it can make sense to put off some projects because we expect they'll be easier to achieve in the future than today. […] But the difference is the sense that we can start to count on things getting faster. So if we have to get something done by a fixed deadline, we're starting to think that we can put off some of that work for longer than we would have in the past."

      • Well worth the reminder: Good software knows when to stop. More isn't more. In fact, it's less today than it was yesterday. And it will be less than that tomorrow.

      • Naval recorded a new podcast episode: A Motorcycle for the Mind. I'm usually skeptical of his confidence, but he does have a fascinating clarity of thought and eloquence and I enjoyed listening to this one. Noteworthy what he thinks about the role of software engineers in the future: "Does this mean that traditional software engineering is dead? Absolutely not. Software engineers--even the ones who are not necessarily tuning or training AI models--these are now among the most leveraged people on earth. [...] But software engineers still have two massive advantages on you. First, they think in code, so they actually know what's going on underneath. And all abstractions are leaky. [...] So if you want to build a well-architected application, if you want to be able to even specify a well-architected application, if you want to be able to make it run at high performance, if you want it to do its best, if you want to catch the bugs early, then you're going to want to have a software engineering background." Or this, about the flood of software that's coming: "And remember: there is no demand for average. The average app--nobody wants it, at least as long as it's not filling some niche that is filled by a superior app. The app that is better will win essentially a hundred percent of the market. [...] But generally speaking, people only want the best of anything. So the bad news is there's no point in being number two or number three--like in the famous Glengarry Glen Ross scene where Alec Baldwin says, 'First place gets a Cadillac Eldorado, second place gets a set of steak knives, and third place you're fired.' That's absolutely true in these winner-take-all markets. That's the bad news: You have to be the best at something if you want to win." But is that true? Look around at some of the most widely used pieces of software: Microsoft 365, Android, WhatsApp, Chrome, Outlook, Jira -- is it "the best"? Jira is the best at something, yes. For example: getting people to say "you just haven't configured it correctly." But is it the best software in its category, or is it instead the best at "being sold to large enterprises"?

      • Or take the most popular CI system in the world: GitHub Actions Is Slowly Killing Your Engineering Team.

      • Marc Andreessen agrees with Naval: "If the goal is to be a mediocre coder, then just let the AI do it. It's fine. The AI is going to be perfectly good in generating infinite amounts of mediocre code. No problem. It's all good. If the goal is, 'I want to be one of the best software people in the world, and I want to build new software products and technologies that really matter,' then yeah, you, 100%, want to still... You want to go all the way down. You want your skillset to go all the way down to the assembly, to assembly and machine code. You want to understand every layer of the stack. You want to deeply understand what's happening at the level of the chip, and the network, and so forth. By the way, you also really deeply want to understand how the AI itself works, because you want to... If people understand how the AI works, they're clearly able to get more value out of it than somebody who doesn't understand how it works. You're always more productive if you know how the machine works when you use the machine.

      And so the super-empowered individual on the other end of this that wants to do great things with the new technology, yes, you 100% want to understand this thing all the way down the stack because you want to be able to understand what it's giving you."

      • And this take agrees with Andreessen: "The jobs apocalypse is the Population Bomb of our time."

      • This is very, very, very, very good: The Structure of Engineering Revolutions. What a useful lens to look through at this moment.

      • Since we're talking about Thomas Kuhn: should I feel bad that I'm linking to nearly every Adam Mastroianni post? Nah, they're all really good and this one isn't an exception: The one science reform we can all agree on, but we're too cowardly to do.

      • And what a moment this is, isn't it: Cursor Goes To War For AI Coding Dominance. "But if the AI doesn't need a human collaborator, why bother with the editor? If writing and editing code line by line was no longer central to a programmer's workflow, Cursor's central product thesis was suddenly in question. […] Until recently, Cursor seemed nearly unstoppable. The company began 2025 with roughly $100 million in annualized revenue. By November, that figure had surpassed $1 billion. […] For now, Cursor's continued growth comes with a big dose of anxiety. Inside the startup, revenue tracking became so distracting that the company stopped reporting daily figures in its #numbers Slack channel, according to people familiar with the decision." Imagine working at the hottest and fastest growing startup of all time and then three or six months later it's war time.

      • New Paul Graham essay that I thought was worth reading: The Brand Age. When I started reading this, I thought that surely he's going to say that what he's recounting here is happening to software: "Now the whole game they'd been trying to win at became irrelevant. Something that had been expensive -- knowing the exact time -- was now a commodity. Between the early 1970s and the early 1980s, unit sales of Swiss watches fell by almost two thirds. Most Swiss watchmakers became insolvent or close to it and were sold. But not all of them. A handful survived as independent companies. And the way they did it was by transforming themselves from precision instrument makers into luxury brands." But he never did! I still think it's about software though.

      • You might have heard of this guy: Don Knuth, Stanford Computer Science Department. He writes: "Shock! Shock! I learned yesterday that an open problem I'd been working on for several weeks had just been solved by Claude Opus 4.6 -- Anthropic's hybrid reasoning model that had been released three weeks earlier! It seems that I'll have to revise my opinions about 'generative AI' one of these days. What a joy it is to learn not only that my conjecture has a nice solution but also to celebrate this dramatic advance in automatic deduction and creative problem solving. I'll try to tell the story briefly in this note." What a joy!

      • Ah, now this , this is the good stuff: Rust zero-cost abstractions vs. SIMD on the turbopuffer blog. I think there have been some comments on this not being an inherent limitation of the compiler, but I found it interesting to think about what it can and can't see when trying to optimize a loop: "Herein lies the hidden opportunity cost of Rust's zero-cost abstraction in our merge iterator. The iterator itself compiles down to the code you'd write by hand for a single call. In that sense, it is zero-cost."
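A minimal sketch of the contrast the post is gesturing at (my own toy code, not turbopuffer's): a sorted merge branches on the data itself, which blocks straight-line SIMD, while an element-wise loop has no data-dependent control flow and auto-vectorizes readily.

```rust
// Toy illustration, not turbopuffer's code. A two-way sorted merge: the
// branch on a[i] <= b[j] depends on the data, so the compiler can't turn
// the loop body into straight-line SIMD code.
fn merge(a: &[u32], b: &[u32]) -> Vec<u32> {
    let (mut i, mut j) = (0, 0);
    let mut out = Vec::with_capacity(a.len() + b.len());
    while i < a.len() && j < b.len() {
        if a[i] <= b[j] {
            out.push(a[i]);
            i += 1;
        } else {
            out.push(b[j]);
            j += 1;
        }
    }
    out.extend_from_slice(&a[i..]);
    out.extend_from_slice(&b[j..]);
    out
}

// By contrast, this element-wise loop has no data-dependent control flow;
// here "zero-cost" holds in both senses: the iterator chain compiles to the
// loop you'd write by hand, and LLVM can also auto-vectorize it.
fn add(a: &[u32], b: &[u32]) -> Vec<u32> {
    a.iter().zip(b).map(|(x, y)| x + y).collect()
}

fn main() {
    assert_eq!(merge(&[1, 3, 5], &[2, 4, 6]), vec![1, 2, 3, 4, 5, 6]);
    assert_eq!(add(&[1, 2, 3], &[10, 20, 30]), vec![11, 22, 33]);
    println!("ok");
}
```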

      • More hardcore engineering, from the COO at Epic Games: "The task: schedule operations for a custom VLIW SIMD architecture running a tree traversal with hashing. 256 items, 16 rounds, 5 execution engines with different slot limits. Starting point: 147,734 cycles (naive). Where Claude Code landed: 1,105 cycles -- a 134x speedup."
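To make "schedule operations for a VLIW architecture with slot limits" concrete, here is a toy greedy list scheduler; the `Op` type, engine indices, and slot limits are invented for illustration, and the real problem (tree traversal with hashing, 256 items, 16 rounds) is far more constrained than this.

```rust
// Toy greedy list scheduler. Each cycle is a bundle with a fixed number of
// issue slots per execution engine; an op can issue once all its
// dependencies completed in an earlier cycle (single-cycle latency assumed).
struct Op {
    engine: usize,    // which execution engine this op needs
    deps: Vec<usize>, // indices of ops that must complete first
}

/// Returns the cycle in which each op issues. Assumes the deps form a DAG.
fn schedule(ops: &[Op], slots_per_engine: &[usize]) -> Vec<usize> {
    let mut cycle_of = vec![usize::MAX; ops.len()];
    let mut done = 0;
    let mut cycle = 0;
    while done < ops.len() {
        // Per-engine slot usage for this cycle's bundle.
        let mut used = vec![0; slots_per_engine.len()];
        for (i, op) in ops.iter().enumerate() {
            let ready = cycle_of[i] == usize::MAX
                && op
                    .deps
                    .iter()
                    .all(|&d| cycle_of[d] != usize::MAX && cycle_of[d] < cycle);
            if ready && used[op.engine] < slots_per_engine[op.engine] {
                cycle_of[i] = cycle;
                used[op.engine] += 1;
                done += 1;
            }
        }
        cycle += 1;
    }
    cycle_of
}

fn main() {
    // Two ops compete for the single engine-0 slot; a third op on engine 1
    // depends on both, so it can only issue once they have finished.
    let ops = vec![
        Op { engine: 0, deps: vec![] },
        Op { engine: 0, deps: vec![] },
        Op { engine: 1, deps: vec![0, 1] },
    ];
    let cycles = schedule(&ops, &[1, 1]);
    println!("{:?}", cycles); // [0, 1, 2]
}
```

The hard part of the real problem is exactly what this greedy sketch ignores: choosing *which* ready op to issue when slots are scarce, which is where a 147,734-cycle naive schedule leaves so much room to optimize.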

      • Look, we just bought a new MacBook Air with an M4 and it's fantastic, so I'm not regretting anything, but those new MacBook Neos look amazing.

      • Raycast Glaze looks really interesting. I guess I should've put "looks" in italics because I'm still on the waitlist.

      • At last, reasons to be cheerful about European tech. That's not my title. I want to be optimistic, but I'm skeptical. This paragraph resonated: "Mehran Gul, of the World Economic Forum, notes that Skype, a European startup, created just 11 millionaires in the early 2000s. PayPal, an American one, gave many more stock options to its employees, creating over 100. They, in turn, invested in newer Silicon Valley startups." In Europe, startup options feel like and are perceived as and, I guess, truly are lottery tickets. Go to the Bay Area (which is, yes, an outlier) and suddenly everyone knows at least two or three people who are rich because of startups.

      • Eoghan McCabe, CEO of Intercom, offering "Intercom, the company I run, as a case study to help me explain how SaaS companies can be saved, and share the things we did, starting three years ago, to find relevance in this new world." What a graph! Mind-boggling.

      • "Singaporeans to receive free premium AI subscriptions from second half of 2026"

      • Tim Ferriss on The Self-Help Trap: "Self-help is dangerous precisely because it easily becomes self-fixation. A focus on improving the self usually first requires finding problems with the self. This is quite the pickle. In a society that rewards problem-solving, you can end up hallucinating or exaggerating unease in order to fix it. This leaves you always in the red, always one step behind. Imagine a dog chasing its tail that has committed to being unhappy until it catches the tail… but it's always just a few inches short. Still, it whirls around and around, 'doing the work.' Perfection always recedes by one more book, one more seminar, one more habit tracker. Put in more colorful terms, misdirected self-help turns you into a self-obsessed masturbatory ouroboros (SOMO)." I dare you to click through to the shop where he got the snake sticker -- the sticker he put on the bottom of the MacBook. Anyway: great post.

      • Google released gws, the "CLI for all of Google Workspace -- built for humans and AI agents."

      • Hannah Ritchie, data scientist at Our World In Data but a lot more than that: Does that use a lot of energy? Electric lawnmower vs. air conditioning is good.

      • An Interactive Intro to CRDTs. Lovely. It's from 2023 and that made me think that today, in 2026, no one would write a blog post like this, because why would you if anyone can press a button to have a custom version of this post generated for them? And that in turn made me wonder: but people will write in the future too and once we've crossed through the transitional period we're in, what will those posts look like?
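For anyone who hasn't clicked through: the usual starting point in CRDT explainers is the G-Counter, sketched here from its textbook definition (this is my own sketch, not code from the linked post).

```rust
use std::collections::HashMap;

// A G-Counter, the "hello world" of CRDTs: each replica increments only its
// own slot, and merge takes the element-wise max. Merge is commutative,
// associative, and idempotent, so replicas can sync in any order and converge.
#[derive(Clone, Default, Debug, PartialEq)]
struct GCounter {
    counts: HashMap<String, u64>,
}

impl GCounter {
    fn increment(&mut self, replica: &str) {
        *self.counts.entry(replica.to_string()).or_insert(0) += 1;
    }

    fn value(&self) -> u64 {
        self.counts.values().sum()
    }

    fn merge(&mut self, other: &GCounter) {
        for (replica, &n) in &other.counts {
            let e = self.counts.entry(replica.clone()).or_insert(0);
            *e = (*e).max(n);
        }
    }
}

fn main() {
    let mut a = GCounter::default();
    let mut b = GCounter::default();
    a.increment("a");
    a.increment("a");
    b.increment("b");
    // Merge in both directions; both replicas converge to the same total.
    a.merge(&b);
    b.merge(&a);
    assert_eq!(a.value(), 3);
    assert_eq!(b.value(), 3);
    println!("converged: {}", a.value());
}
```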

      • Since announcing his project Agentic Engineering Patterns a few weeks ago, Simon Willison has been steadily adding new chapters to it. For example: Hoard things you know how to do. "The key idea here is that coding agents mean we only ever need to figure out a useful trick once. If that trick is then documented somewhere with a working code example our agents can consult that example and use it to solve any similar shaped project in the future." Wish I was a hoarder.

      • Is this the first universally beloved AI-generated video? I'm of the school that believes creativity is less about creating new things in a vacuum and more about making connections between things that already exist, but weren't connected before. Creativity, I think, is remixing. Putting lego pieces together in a way no one's ever put them together before. That definition is, of course, recursive, because the lego pieces also have to be put together. But my point is: this video is creative. It's not slop. And the fact that I'm dying to know what the prompt was -- doesn't that show everything will be different but that all will be well?

      • In the past few months I've been thinking a lot about different software companies and whether they'll make it or whether they get eaten by AI instead. "If you own physical assets, if your value is in operations or in regulation or in contracts, then you're probably safe," is one thesis I keep coming back to. And, funnily enough, Spotify was one of the companies I marked "safe" in my mind: sure, the software can be replicated more easily now, but they have contracts with publishers and artists -- they're safe. But then here's Jimmy Iovine saying that the music itself has no value anymore when packaged by streamers and, well, if that's true, what's left: Why Streaming is Minutes Away From Being Obsolete.

      • Daniel Gross published his /agitrades in January 2024 to wonder: "Suppose the progress doesn't stop, just like GPT-4 was better than 3, GPT-5 is capable of basic agentic behavior -- i.e. able to accept a task, work on it for a while, and return results. Some modest fraction of Upwork tasks can now be done with a handful of electrons. Suppose everyone has an agent like this they can hire. Suppose everyone has 1,000 agents like this they can hire... What does one do in a world like this?" I hadn't read the document when it was released, but, wow, it's good. Impressive first-principles and long-range thinking. And now, more than two years later (two years!), John Coogan of TBPN revisited the questions to see whether they can be answered already. Equally fascinating.

      • To quote one of the top comments: "Dammit guess I'm drinkin garage beers now"

      Ever tried watching a movie on a 2nd screen while something was compiling? You should subscribe:

    21. 🔗 r/LocalLLaMA Qwen 3.5 27B is the REAL DEAL - Beat GPT-5 on my first test rss

      UPDATE #2: Some of you said Qwen 3 Coder Next was better, so I gave it the same test:

      • Version: Qwen 3 Coder Next Q4-K-XL UD (unsloth).
      • Speed: 25 tok/sec @ 32K context. 37.78 @ 5 experts, 32K context. 34.92 @ 5 experts at max context.
      • Results: 3 attempts. Failed. GUI launches, but doesn't work.

      UPDATE: Just for kicks, I tested the same prompt on Qwen 3.5 35B-A3B Q4 KXL UD at max context and got 90 tok/sec. :) However, I gave it 3 attempts like the others below, and while it loaded the GUI on output #3, the app didn't have the buttons needed to execute the app, so 35B was also a fail. My setup:

      • I7 12700K, RTX 3090 TI, 96GB RAM

      Prompt: I need to create an app that allows me to join several PDFs together. Please create an app that is portable, local, run by .bat, does not install dependencies globally - if they are needed, it can install them in the folder itself via venv - and is in either python, .js, or .ts. Give it a simple, dark-themed GUI. Enable drag/drop of existing .pdfs into a project window. Ctrl+clicking the files, then clicking MERGE button to join them into a single .PDF. I also want to be able to multi-select .docx files and press a CONVERT + MERGE button that will convert them to pdfs before merging them, or all at once transforming them into one document that is a pdf if that's possible. I want to have a browse button that enables you to browse to the directory of the file locations and only show text files (.docx, .txt, etc) or pdf files. The user needs to be able to also copy/paste a directory address into the address field. The project window I mentioned earlier is simply the directory - a long address bar w/a browse button to the right, standard for many apps/browsers/etc. So the app needs to be able to work from within a directory or within its own internal directory. When running the .bat, it should first install the dependencies and whatever else is needed. The .bat detects if those files are there, if already there (folders, dependencies) it just runs. The folders it creates on first run are 1. Queue, 2. Converted, 3. Processed. If the user runs from another directory (not queue), there will be no processed files in that folder. If user runs from the app's default queue folder - where the original files go if you drag them into the app's project window, then they are moved to processed when complete, and the new compiled PDF goes to the converted folder. ALso, create a button next to browse called "Default" which sets the project window to the queue folder, showing its contents. Begin. 
      LLMs: GPT-5 | Qwen 3.5 27B Q4KXL unsloth
      Speed: (LM-Studio) 31.26 tok/sec at full 262K context
      Results:

      • GPT-5: 3 attempts, failed. GUI never loaded.
      • Qwen 3.5 27B: 3 attempts. Worked nearly as instructed; only drag-and-drop doesn't work, but loading from a folder works fine and merges the documents into a PDF.

      Observations: The GUI loaded on the first attempt, but it was missing some details. Rather than tell Qwen what the issue was, I gave it a screenshot and said: Having vision is useful. Here's a snippet of its thinking: Qwen 3.5's vision observation is pretty good!

      On the second iteration, the app wouldn't search the location on Enter (which I never told it to, that was my mistake), so I added that instruction. Also, I got an error about MS Word not being installed, preventing the conversion (the files were made in LibreOffice, exported as .docx). It fixed that on its third output and everything worked (except drag and drop, which is my fault; I should have told it that dragging should auto-load the folder). Point is: I got a functioning app in three outputs, while GPT never even loaded the app.

      FINAL THOUGHTS: I know this prompt is all over the place, but that's the point of the test. If you don't like this test, do your own; everyone has their use cases. This didn't begin as a test; I needed the app, but got frustrated w/GPT and tried Qwen. Now I have a working app. Later, I'll ask Qwen to fix the drag-and-drop; I know there are a number of options to do this, like PySide, etc. I was in a rush.

      I literally can't believe that a) I was able to use a local LLM to code something that GPT couldn't, and b) I got 31 tok/sec at max context. That's insane. I found this article on Medium, which is how I was able to get this speed. I wasn't even able to read the full article (not a member), but the little I read got me this far. So yeah, the hype is real. I'm going to keep tweaking it to see if I can get the 35 t/s the writer of the article got, or faster.

      Here are my LM-Studio settings if anyone's interested. I haven't adjusted the temp, top-K stuff yet because I need to research the best settings for that. https://preview.redd.it/xbbi07gedrng1.png?width=683&format=png&auto=webp&s=fe56a24b6328637a2c2cf7ae850bc518879fc48d

      Hope this helps someone out.

      submitted by /u/GrungeWerX
      [link] [comments]

    22. 🔗 HexRaysSA/plugin-repository commits sync repo: +2 releases, -1 release, ~1 changed rss
      sync repo: +2 releases, -1 release, ~1 changed
      
      ## New releases
      - [IDA-Theme-Explorer](https://github.com/kevinmuoz/ida-theme-explorer): 1.0.2
      - [ida-chat](https://github.com/tanu360/ida-chat-plugin): 1.0.0
      
      ## Changes
      - [IDA-Theme-Explorer](https://github.com/kevinmuoz/ida-theme-explorer):
        - 1.0.0: archive contents changed, download URL changed
      - [ida-chat](https://github.com/tanu360/ida-chat-plugin):
        - host changed: HexRaysSA/ida-chat-plugin → tanu360/ida-chat-plugin
        - removed version(s): 0.2.6
      
    23. 🔗 r/wiesbaden Cafe mit Sonnenplatz rss

      Hey folks, now that the sun is finally coming out again, I realize I can't think of a single good café in Wiesbaden where you can sit outside in the sunshine. Any suggestions?

      Thanks in advance and best regards! :)

      submitted by /u/Specialist_Side_2415
      [link] [comments]

  4. March 07, 2026
    1. 🔗 IDA Plugin Updates IDA Plugin Updates on 2026-03-07 rss

      IDA Plugin Updates on 2026-03-07

      New Releases:

      Activity:

      • AWDP_PWN_Helper
      • binwalk-reversing-plugin
        • 60fa76d5: revert: bump ida-plugin.json version back to 0.0.1
        • 74754b38: feat: keep .jpg
        • 4b325a59: feat: update img for 0.0.2
      • CavePatch
        • f08b2403: Update and rename CavaPatch.py to cava_patch.py
        • f97bfd1f: Add files via upload
      • HappyIDA
        • 36956ed2: refactor: use treeitems iteration in rust_string and add .rdata/.ws…
        • 31a06d95: refactor: use treeitems iteration in add_parameter_labels
        • a7acb323: fix: check find_item_coords return value before indexing pseudocode l…
        • 269a899f: refactor: improve SEH highlight performance
      • ida-chat-plugin
        • 7da85087: 1.0.0
        • 2699bf8d: feat(ui): add copy text button and empty state handling to sessions s…
        • a0b4373c: 1.0.0
        • faf45ee7: feat(bootstrap): refactor runtime setup and update build configuration
        • bf1528f8: feat(ui): add daisy theme variant for model option cards
        • 061ff8dd: feat(bootstrap): improve Python version detection and site-packages d…
        • a5ebac66: docs(installation): update local environment setup instructions for c…
        • 3443db17: fix(deps): update python version requirements documentation
        • 53d1ee04: feat(ui): improve markdown rendering and styling consistency
        • 2c7a82d1: fix(test): update markdown test to use string variable for color refe…
        • 98a2ee69: feat(transcript): update code block styling to use flat design and im…
        • b5986b05: feat(transcript): implement markdown rendering with syntax highlighti…
        • 90126d55: style(transcript): apply consistent indentation to CSS class definitions
        • d97e5395: style(transcript): remove box-shadow properties from CSS classes
        • 68dde2f3: feat(transcript): update HTML export to single-file format and enhanc…
        • 835e0308: feat(transcript): implement single-file HTML export for chat sessions
        • 71cadc9c: feat: add run outcome tracking and transcript cleanup
        • a718277c: init: scaffold ida-chat plugin foundation
      • ida-cyberchef
        • 03cf7b0d: Fix docs and schema generation
        • 9ce6c387: Fix remaining runtime semantics
        • 99b6d754: Document runtime support policy
        • 40192020: Fix Unicode and parsing runtime regressions
        • 9a7ac0b9: Record CyberChef submodule fixes
        • a35affdf: Patch easy CyberChef operation regressions
        • 38240220: Add PGP and sorted extractor vectors
        • 888a28a5: Normalize CyberChef recipe defaults
        • 42ef8b07: Add utils operation vectors phase 51
        • fa7a35df: Add utils phase 50 operation vectors
        • 87344a35: Add utils operation vectors phase 49
        • 155ffa6c: Add utils phase 48 operation vectors
        • 6d23406c: Add utils operation vectors phase 47
        • c2da999f: Add public-key operation vectors phase 46
        • 1e159341: Add public key operation vectors phase 45
        • d26bfe99: Add public key operation vectors phase 44
        • 1ffdd28d: Add other operation vectors phase 43
        • d61dca1c: Add other operation vectors phase 42
        • fa1d4072: Add networking operation vectors phase 41
        • 4fae33d3: Add networking operation vectors phase 40
      • ida-pro-loadmap
        • 8433b8e9: loadmap: fix uninitialized var possibility
        • 95e2c59d: mapreader: Some basic OOM protection
      • ida-theme-explorer
      • IDAPluginList
        • d84952c6: chore: Auto update IDA plugins (Updated: 19, Cloned: 0, Failed: 0)
      • quokka
        • eff646fb: Bump actions/setup-java from 4.7.1 to 5.2.0 in the actions group
        • c2f71656: Move zizmor suppressions to config file
        • 161f568b: Harden CI: use read-only caches, fix contains() calls, throttle Depen…
        • fd0ff102: fix(ci): install libmagic on macOS runners
        • cc8b6f51: fix(ci): skip tool setup in warm-cache on cache hit
        • 68d400a2: fix(ci): add Python version and OS matrix to python-test workflow
        • 8484e7de: fix(ci): cache Ghidra download in CI
        • 03b130d3: fix(ci): add concurrency controls to all CI workflows
        • e96cd140: fix(ci): add path filters to avoid unnecessary workflow runs
        • b53e2251: fix(ci): prevent cache poisoning on PR builds
        • 017d76e6: fix(ci): remove duplicate test step in Ghidra workflow
        • 4416314d: fix(ci): run C++ tests on pull requests
        • 62083397: fix(ci): remove Windows from upload matrix in build.yml
        • 83cf9dc4: Add *.i64 to gitignore
        • 93b675e8: Add Python tests for is_exported function field
        • 9bcf78b6: Add is_new flag to Type for IDA apply support
        • 6d130e48: Update README
        • 84114cf2: Add Python tests for TypedefType
        • 6be9b31a: Add TypedefType to Python bindings
    2. 🔗 r/wiesbaden MTG Commander rss

      Hi there,

      I'm (M/24) looking for locals to play Commander with in or around Wiesbaden. So far I only know Glitchless in Mainz. I just bought the TMNT deck and have never played Commander before :)

      I'd be grateful for any help

      submitted by /u/SF_Geto
      [link] [comments]

    3. 🔗 r/Leeds Any Leeds tattooists who do kids' parties? rss

      I just walked into a tattooist a few years back and told them I didn't like my arm tattoo. They seemed to be quiet, so one of the guys just grabbed his markers and set to work coming up with additions/overlays etc, and gave me a fantastic (in the moment at least) sleeve that incorporated/covered it.

      That was free, and I've still not covered it... but I have a daughter whose birthday's coming up, and I'd love to have someone with that energy and creativity present, who will draw on kids (at their request, or from a selection).

      Is this a thing? I'd be happy to pay £50 an hour for it - this is not soliciting, BTW, just thinking if this isn't a 'thing' then it should be. Plenty of PVA and glitter/facepaint party stalls. A 'kick-ass tattoo' (albeit temporary) stand would be awesome!

      Might be just me, but if you're a tattooist who 'fits the part', it's also a good opportunity for kids to be exposed to that sort of style/culture.

      submitted by /u/Granopoly
      [link] [comments]

    4. 🔗 r/york Trip next week - vegetarian rss

      Taking my fiancé to York next week for his 30th birthday. We are vegetarian and he LOVES Chocolate, any recommendations of things to do or places to eat? Thank you!

      submitted by /u/Impressive_Ant_296
      [link] [comments]

    5. 🔗 r/Yorkshire Books set in North/East Yorkshire rss

      Hi all, new to the subreddit! Should’ve joined ages ago since North Yorkshire has always felt like a second home to me & my wife!

      I’m wondering if anyone has recommendations for books set in North or East Yorkshire, particularly in the dystopian or post-apocalyptic genre. I love stories that use real local places as part of the setting.

      I recently released a dystopian novel set across Hull and North/East Yorkshire myself, so I’m really interested to see if there are others doing something similar that I might have missed.

      Would love to hear any recommendations!

      Thanks in advance!

      submitted by /u/HullBusDriver2020
      [link] [comments]

    6. 🔗 r/york Aljaz & Janette - 18th April - Face The Music and Dance tickets rss

      Hello, any strictly fans out there?

      Im selling three tickets to see Aljaz and Janette at York Barbican on Saturday 18th April.

      Tickets sold via Ticketmaster.

      submitted by /u/Puzzleheaded-Bar1434
      [link] [comments]

    7. 🔗 r/LocalLLaMA Heretic has FINALLY defeated GPT-OSS with a new experimental decensoring method called ARA rss

      The creator of Heretic, p-e-w, opened pull request #211 with a new method called Arbitrary-Rank Ablation (ARA). For comparison, the previous best was, eww, 74 refusals even after Heretic, which is pretty ridiculous. It still refuses almost all the same things as the base model, since OpenAI lobotomized it so heavily, but now, with the new method, ARA has finally defeated GPT-OSS (no system messages even needed to get results like this one). The rest of the output isn't shown for obvious reasons, but go download it yourself if you want to see. This means the future of open source AI is actually open and actually free: not even OpenAI's ultra-sophisticated lobotomization can defeat what the open source community can do! https://huggingface.co/p-e-w/gpt-oss-20b-heretic-ara-v3 This is still experimental, so most Heretic models you see online for the time being will probably not use this method; it's only in an unreleased version of Heretic for now. Make sure you get ones that say they use MPOA+SOMA for now, but once ARA becomes available in the full Heretic release, there will be more models that use it, so almost always use those if available. submitted by /u/pigeon57434
      [link] [comments]

    8. 🔗 r/reverseengineering Nobody ever got fired for using a struct [Rust internals] rss
    9. 🔗 r/Yorkshire A few pictures from my walk today - Richmond Yorkshire rss
    10. 🔗 r/york Medieval row of shops in York's Goodramgate damaged by lorry rss

      submitted by /u/Kagedeah
      [link] [comments]

    11. 🔗 r/Leeds Rayan Car Wash rss

      Never had an issue before but took my car to get washed at Rayan in Armley yesterday, only for them to completely butcher it. Covered in scratches, either from dirty tools or dragging the wash hoses over the car

      Questioned and complained only to be told where to stick it. No apology and nowhere to raise the issue. Gutted really 🥲

      Is there any recourse? Contact the council? Few hundred in paint correction/machine polishing needed

      Yes, in hindsight I can see why these types of car washes are called “scratch and shines” but I’ve never had an issue before

      submitted by /u/DiscussionOk5883
      [link] [comments]

    12. 🔗 r/Leeds Lucky Strike Cigarettes rss

      Does anyone know where sells Lucky Strikes in Leeds?

      submitted by /u/JalapenoToastie
      [link] [comments]

    13. 🔗 r/Yorkshire Scarborough Sets Sights on National Stage with 2028 Town of Culture Bid rss

      Scarborough is embarking on a transformative journey as it prepares a bid to become the UK's first-ever Town of Culture in 2028, but your help is needed. The bid, which could secure a £3 million prize to fund a year-long cultural programme, coincides with a separate, substantial £20 million "Pride in Place" investment aimed at revitalising the town through community-led decision-making.

      The UK Town of Culture competition, launched by the Department for Culture, Media and Sport, offers a platform for towns to share their unique stories. For Scarborough, recognized as the nation's oldest seaside resort, the bid is seen as a landmark opportunity to showcase its rich theatrical and artistic heritage. Local leaders believe the title would not only increase community spirit but also encourage residents to engage more deeply with the cultural opportunities on their doorstep.

      The competition builds on the success of the City of Culture initiative. For example, Bradford, the 2025 City of Culture, saw a 25 per cent increase in city centre footfall during its spotlight year, with the majority of participants reporting an improved sense of pride and wellbeing.

      submitted by /u/coffeewalnut08
      [link] [comments]

    14. 🔗 Probably Dance I’m Getting a Whiff of Iain Banks’ Culture rss

      The US has been acting powerful recently and it reminded me of this question: What does it feel like to fight against a powerful AI? Not for normal people, for whom there's no difference between competing against a strong human or a strong AI (you lose hard either way), but for the world's best humans. We got a sense of the answer before LLMs were a thing, when the frontier research labs were working on game RL:

      Fighting against a powerful AI feels like you're weirdly underpowered somehow. Everything the AI does just works slightly better than it should.

      If you're not a strong human player, the closest feeling is when you play a game with lots of randomness against a really strong player. It will appear as if that strong player just keeps on getting lucky somehow.

      I'm getting a similar sense for the recent US foreign interventions and wars. They all seem to work slightly better than they should. It finally clicked for me when Dario Amodei said "This technology can radically accelerate what our military can do. I've talked to admirals, I've talked to generals, I've talked to combatant commanders who say this has revolutionized what we can do."

      The things I'm referring to are the raid that captured Maduro in Venezuela (Claude was used), the current war with Iran (Claude was used), the killing of a drug boss in Mexico (unclear if AI was used but US intelligence helped Mexico).

      The commentators in the AlphaGo match with Lee Sedol didn't know what to make of most games. The AI wasn't doing anything obviously brilliant, there were lots of little fights all over the board where the outcome wasn't quite clear, but they just all worked a little better for AlphaGo than expected. So gradually Lee Sedol's position changed from "this is tough, hard to tell how this is going but at least I'm feeling good about these areas" to "hmm I'm struggling, maybe I'm a bit behind but it's not clear" to suddenly "oh I lost".

      I don't know Go, but I got a clearer sense from the StarCraft 2 matches. In some skirmishes the AI would take damage, in others the human would. But somehow it always felt like the human was in more trouble. In some fights the human clearly came out ahead but then mysteriously just one minute later the AI had a clear advantage. It was able to quickly recover and constantly put pressure on the human. It all looked very stressful, because even when you think you do well as a human, it works out a little less well than expected and whatever the AI does works a little better than expected.

      And where have we seen this pattern before? In sci-fi of course. In particular I'm thinking of Iain Banks' Culture, the ostensibly human civilization that's actually run entirely by AIs. Alien civilizations keep on wanting to pick fights with them for reasons and keep on being surprised by how hard the harmless-seeming Culture can whoop your ass if you make it mad.

      I always thought of the Culture as closest to the European Union: Seemingly harmless but if anyone ever picked a fight with them, they'd find out that the EU can get its act together very quickly and can very quickly stand up the strongest army in the world. But obviously the real EU has never come close to the Culture because nothing human ever comes close to the potential of AIs. It would be as if Russia picked a fight with Poland, gained ground for a week, feeling good, only to suddenly find all of its IT systems hacked and access to nuclear bombs revoked, bombs dropping on Moscow the next day and an army in Moscow another two days later. The Culture takes a week to get its act together and then whoops your ass so hard you don't even know what's happening.

      But now I'm getting a whiff of the power of the Culture for the first time, and it's from the US. Going into another country, kidnapping their leader and getting away with it is exactly the kind of overpowered move that the Culture would be able to pull off. Bombing cities all over Iran, knocking out the entire leadership within two days, while the air-defense systems supplied by China do absolutely nothing is another example. If this was a video game these would be strategies done by high level players, but they're not supposed to work that well.

      It would be foolish to think this is entirely due to AI. The US had a high-tech advantage for a while. Turns out the F-35 is actually good. But even a couple years ago the US regularly messed up when it tried to operate at high precision. We saw in Iraq and Afghanistan that being overpowered doesn't work out as well in practice as it does in theory. So I think AI is the most likely candidate for the shift to "it worked better than it should have."

      So how specifically do you get to a point where everything works slightly better than it should? We saw two different approaches in Go and StarCraft 2:

      • In Go, the AI was having little fights all over the map, in a way that combined into a few extra points at the end. It would defend a little bit here, attack a little bit there. It was able to keep the overall picture in its head, without feeling pressure to resolve things too early. (I haven't played Go, but I know I get frustrated in strategy games when I have to deal with multiple fights in different parts of the map at once.)
      • In StarCraft 2 we saw the same thing, but we also saw that the AI could have perfect micro when it counts, like keeping wounded Stalkers in the frontline because it could pull them out of danger just in time. Humans could do that in theory too, but in practice you can't click that quickly and precisely.

      So the two angles are "having a better high-level view" and "having better micro control."

      Another source of success for the Culture is that they're over-prepared for fighting (not in their first big war, but in later books). And this is also part of the story we hear about Iran. Normally there's just too much going on in the world and you can't possibly keep track of all of it. Famously, the US had prior intelligence on 9/11 but didn't really put the pieces together. (There's a whole Wikipedia article about it, with phrases like "Rice listened but was unconvinced, having other priorities on which to focus.") But AI has almost no limits on what it can keep track of. You can always spin up another agent. So when something important comes up, chances are that some AI was keeping track of it and can raise an alert. You'll never miss opportunities just because you had other priorities to focus on.

      So the third angle is: Being over-prepared because you can follow up on many more things at once.
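      The "spin up one agent per thing worth tracking, let any of them raise an alert" idea above can be sketched in a few lines. This is a hedged illustration, not any real agent framework: `Watcher`, `Alert`, and `monitor` are made-up names, and the "agents" here are trivial keyword predicates standing in for LLM watchers.

      ```python
      # Hypothetical sketch: a fleet of watcher agents, one per topic, so no
      # signal is dropped for lack of attention. All names are illustrative.
      from dataclasses import dataclass

      @dataclass
      class Alert:
          topic: str
          detail: str

      class Watcher:
          """A dedicated agent that tracks exactly one topic and never gets distracted."""
          def __init__(self, topic, is_important):
              self.topic = topic
              self.is_important = is_important  # predicate: does this event matter here?

          def check(self, event):
              # Raise an Alert when this watcher's topic matches, else stay silent.
              return Alert(self.topic, event) if self.is_important(event) else None

      def monitor(watchers, events):
          """Fan every incoming event out to every watcher and collect all alerts."""
          return [a for e in events for a in (w.check(e) for w in watchers) if a]

      watchers = [
          Watcher("aviation", lambda e: "flight school" in e),
          Watcher("finance", lambda e: "wire transfer" in e),
      ]
      events = ["routine traffic stop", "cash-paid flight school enrollment"]
      alerts = monitor(watchers, events)
      print(alerts)  # one aviation alert; the traffic stop is ignored
      ```

      The point of the structure is that attention is no longer scarce: adding a topic means adding a watcher, not reprioritizing a human analyst.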

      What does all of this mean for the world? It means we're in a weird temporary phase where one country has control of a game-changing technology while others are not far behind (sadly not the EU; I'm thinking of China, especially with H200s). You get to play at a higher level, but only for a short time and only in specific ways. In a year others will have caught up, but by then you'll have new capabilities that you didn't have a year ago. If this were a game you'd saturate at some point (you just can't play StarCraft that much better than the best humans), but in real life the game keeps changing. New pieces keep coming into play and the old pieces become irrelevant. You can't do this for long before the humans become irrelevant to the outcomes, and then you're fully in Culture territory. I personally wouldn't mind living in the Culture, but it seems scary to rush towards it without a good plan for how we'll survive the transition.

      I don't have a good angle for working on that plan, maybe others do. For now my contribution is just to point out that we seem to be in the early stages of overpowered AI, and to make people notice what that feels like.

    15. 🔗 r/Yorkshire Shepley Spring rss

      submitted by /u/davew80
      [link] [comments]

    16. 🔗 r/reverseengineering Reviving a 20-year-old puzzle game Chromatron with Ghidra and AI rss
    17. 🔗 r/Yorkshire Few pics from my walk this morning! rss
    18. 🔗 r/york Pub With Proper Scotch Egg? rss

      Where can I get a proper cooked-to-order, jammy-yolk scotch egg? You used to see them on pub snack menus all the time, but not so much now. Recommendations preferably out of the centre. Thanks!

      submitted by /u/milomitch
      [link] [comments]

    19. 🔗 r/LocalLLaMA turns out RL isnt the flex rss

      submitted by /u/vladlearns
      [link] [comments]

    20. 🔗 r/Yorkshire Is “nowt” ever used in the double negative? rss
    21. 🔗 r/york ‘I believed I was going to die’ – York man stabbed his partner repeatedly rss

      submitted by /u/the-minsterman
      [link] [comments]

    22. 🔗 r/wiesbaden Günstig Parken - Stadtnähe? rss

      Hi! I'd like to have a look around Wiesbaden today, but I don't know where I can park cheaply. Do you have any suggestions? Thanks!

      submitted by /u/MKFascist
      [link] [comments]

    23. 🔗 r/Leeds Sunday treks around Leeds? rss

      Hi! Does anyone here go trekking/hiking on Sundays around Leeds, or know of any groups that organize weekend treks? I’d love to join if there’s something beginner-friendly. Thanks! 🥾

      submitted by /u/sanxsh
      [link] [comments]

    24. 🔗 HexRaysSA/plugin-repository commits sync repo: +2 plugins, +3 releases rss
      
      ## New plugins
      - [IDA-Theme-Explorer](https://github.com/kevinmuoz/ida-theme-explorer) (1.0.0)
      - [edit-function-prototype](https://github.com/oxiKKK/ida-edit-function-prototype) (1.0.0)
      
      ## New releases
      - [function-string-associate](https://github.com/oxiKKK/ida-function-string-associate): 1.0.1