2025-02-18 2025-02-18T14:49:49Z GitHub - getzep/graphiti: Graphiti: Temporal Knowledge Graphs for Agentic Applications
Sanmi Koyejo KGGen: Extracting Knowledge Graphs from Plain Text with Language Models Lisa Yu 2025-02-18 [2502.09956] KGGen: Extracting Knowledge Graphs from Plain Text with Language Models Charilaos Kanatsoulis 2502.09956 including MINE, a benchmark to evaluate how well a text-to-KG extractor captures information. Kyssen Yu 2025-02-14T07:28:08Z Joshua Kazdan Chris Cundy Proud Mpala Belinda Mo 2025-02-14T07:28:08Z Belinda Mo Recent interest in building foundation models for KGs has highlighted a fundamental challenge: knowledge-graph data is relatively scarce. The best-known KGs are primarily human-labeled, created by pattern-matching, or extracted using early NLP techniques. While human-generated KGs are in short supply, automatically extracted KGs are of questionable quality. We present a solution to this data scarcity problem in the form of a text-to-KG generator (KGGen), a package that uses language models to create high-quality graphs from plaintext. Unlike other KG extractors, KGGen clusters related entities to reduce sparsity in extracted KGs. KGGen is available as a Python library (`pip install kg-gen`), making it accessible to everyone. Along with KGGen, we release the first benchmark, Measure of Information in Nodes and Edges (MINE), that tests an extractor's ability to produce a useful KG from plain text. We benchmark our new tool against existing extractors and demonstrate far superior performance. 2025-02-18T15:07:20Z
2025-02-08 Danielle Allen: "A Time to Choose" - AI, Science and Society Conference - YouTube
"A Time to Choose", a vibrant (and courageous) talk by Danielle Allen (Harvard). (AGI) Power to the People!
As an intro, a family history of the struggle for "empowerment" (emancipation, each person's power to control their own life), then a history of the dominant schools of economic thought since Roosevelt (Keynes, then Reagan). And now, with liberal democracy under threat (cf. Trump/Musk) and amid the upheavals brought by AI ("socio-political transformation driven by technology", with a comparison to electrification), "a time to choose wrt paradigm of technology development". Three possible paths:
- [NRx](tag:nrx) (e.g. Peter Thiel)
- [Effective altruism](tag:effective_altruism) (e.g. [Sam Altman](tag:sam_altman))
- [Digital democracy](tag:digital_democracy) (e.g. [Audrey Tang](tag:audrey_tang)) - cf. Taiwan, "where tech is of the people, for the people, by the people" (cf. [Pol.is](tag:pol_is))
> Government of the people, by the people, for the people ([Lincoln](tag:lincoln))
>
> Hunger for empowerment, and not just bread
>
> Slaves rather weaken than strengthen the State, and there is therefore some difference between them and sheep. Sheep will never make any insurrections ([Benjamin Franklin](tag:benjamin_franklin))
2025-02-08T10:28:22Z
2025-02-19T20:40:04Z 2025-02-19 Use LLMs to Turn CSVs into Knowledge Graphs: A Case in Healthcare | by Rubens Zimbres | Medium
2025-02-07T00:34:27Z Bernhard Schölkopf [1911.10500] Causality for Machine Learning 2025-02-07 Graphical causal inference as pioneered by Judea Pearl arose from research on artificial intelligence (AI), and for a long time had little connection to the field of machine learning. This article discusses where links have been and should be established, introducing key concepts along the way. It argues that the hard open problems of machine learning and AI are intrinsically related to causality, and explains how the field is beginning to understand them. 2019-11-24T11:04:56Z Causality for Machine Learning 1911.10500 Bernhard Schölkopf 2019-12-23T16:20:53Z
2025-02-21 2025-02-21T09:48:58Z Microsoft’s Majorana 1 chip carves new path for quantum computing
> "The CFPB was created to protect Americans from financial predation and did fairly well. But now we have a government of and for financial predators." ([Paul Krugman](tag:paul_krugman))
L’administration Trump fait voler en éclats la régulation financière américaine 2025-02-27T22:26:09Z 2025-02-27
LLM Knowledge Graph Builder — First Release of 2025 | by Michael Hunger | Neo4j Developer Blog | Feb, 2025 | Medium 2025-02-19T20:38:59Z 2025-02-19
AI, Science and Society - AI Conference - IP Paris 2025-02-06T00:18:28Z 2025-02-06
Des sels et des briques du vivant trouvés dans les échantillons de l’astéroïde Bénou 2025-02-01 2025-02-01T22:40:17Z
2025-02-12T21:23:47Z 2025-02-12 Un record d’énergie battu pour un neutrino, observé en Méditerranée
« Trump, Musk, Zuckerberg : la nouvelle “trinité” du pouvoir américain incarne le pire de l’Internet et envoie le meilleur aux oubliettes » 2025-02-12 2025-02-12T23:31:04Z
2025-02-19T23:28:14Z 2025-02-19 Kassav Zenith 89 - YouTube
2025-02-09 AI, Science and Society Conference - AI ACTION SUMMIT - DAY 2 - YouTube 2025-02-09T16:04:47Z [Yoshua Bengio](tag:yoshua_bengio), [Danielle Allen](tag:danielle_allen), ...
2025-02-19 > (cf. [LoRA](tag:lora), DeepSeek's NSA, etc.) we see sparse or low-rank structures repeatedly play important roles in processing high-dimensional data at scale. These are NOT coincidences. 2025-02-19T13:22:19Z Yi Ma on X: High-Dimensional Data Analysis with Low-Dimensional Models
2025-02-27T21:04:01Z Aux Etats-Unis, l’usage généralisé de maïs OGM insecticides nourrit la résistance des ravageurs 2025-02-27 2025-02-09T12:18:43Z
> **The Learning Progress hypothesis**: interestingness of a goal is proportional to empirical absolute learning progress (absolute value of the derivative)
(A toy sketch of LP-based goal sampling appears a few entries below.)
[Autotelic Agents with Intrinsically Motivated Goal-Conditioned Reinforcement Learning: A Short Survey](doc:2025/02/autotelic_agents_with_intrinsic)
Autotelic Agents with Intrinsically Motivated Goal-Conditioned Reinforcement Learning: A Short Survey 2025-02-09T15:56:41Z
> Building autonomous machines that can explore open-ended environments, discover possible interactions and build repertoires of skills is a general objective of artificial intelligence. Developmental approaches argue that this can only be achieved by autotelic agents: intrinsically motivated learning agents that can learn to represent, generate, select and solve their own problems. In recent years, the convergence of developmental approaches with deep reinforcement learning (RL) methods has been leading to the emergence of a new field: developmental reinforcement learning.
2025-02-09 2025-02-09 Curiosity-driven Autotelic AI Agents that Use and Ground Large Language Models - YouTube
2025-02-24T13:34:19Z deepseek-r1 Model by Deepseek-ai | NVIDIA NIM 2025-02-24
> DeepSeek-R1 is a first-generation **reasoning model trained using large-scale reinforcement learning** (RL) to solve complex reasoning tasks across domains such as math, code, and language. The model leverages RL to develop reasoning capabilities, which are further enhanced through supervised fine-tuning (SFT) to improve readability and coherence.
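The Learning Progress hypothesis noted a few entries above (autotelic agents survey) can be made concrete with a small sketch. The Python below is a minimal, illustrative implementation of LP-based goal sampling, not code from the survey: the class name, window size, and the use of a success score as a competence proxy are all assumptions made for the example.

```python
import random
from collections import defaultdict, deque

class LPGoalSampler:
    """Toy goal sampler: a goal's interestingness is proportional to its
    empirical absolute learning progress (|change in competence|)."""

    def __init__(self, goals, window=10, epsilon=0.1):
        self.goals = list(goals)
        self.window = window
        self.epsilon = epsilon  # exploration floor so stagnant goals are still revisited
        self.history = defaultdict(lambda: deque(maxlen=2 * window))

    def record(self, goal, score):
        """Store the outcome (e.g. success as 0/1) of one attempt at `goal`."""
        self.history[goal].append(float(score))

    def learning_progress(self, goal):
        """|recent mean competence - older mean competence|: a discrete |derivative|."""
        h = list(self.history[goal])
        if len(h) < 2 * self.window:
            return 0.0
        older = sum(h[: self.window]) / self.window
        recent = sum(h[self.window:]) / self.window
        return abs(recent - older)

    def sample_goal(self):
        """Pick the next goal with probability proportional to its learning progress."""
        lp = [self.learning_progress(g) for g in self.goals]
        if sum(lp) == 0 or random.random() < self.epsilon:
            return random.choice(self.goals)  # no LP signal yet, or exploration step
        return random.choices(self.goals, weights=lp, k=1)[0]
```

An agent loop would call `sample_goal()`, attempt the goal, then `record(goal, success)`; goals whose competence is changing fastest, up or down, get practiced most, which is the behaviour the hypothesis says curiosity should produce.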
2025-02-11
> By extending its driver-assistance systems, called "God's Eye", to nearly its entire lineup, including models under 10,000 euros, the Shenzhen firm is setting a new standard.
2025-02-11T16:33:18Z Voiture autonome : « Familier des actions coup de poing, BYD entend faire le ménage dans la concurrence »
Kassav' - Live au Zenith 2016 - Le Concert Complet - YouTube 2025-02-19 2025-02-19T23:34:52Z
2025-02-24 Cameron R. Wolfe, Ph.D. on X: "The trajectory of research for open LLMs and open reasoning models has been shockingly similar, but there are still many open questions…" 2025-02-24T13:55:04Z
> To me, these are pivotal questions to answer for current research on open reasoning models:
> - Do the smaller / distilled models generalize well?
> - Are we missing any gaps in performance?
> - How do these findings relate to findings from traditional LLM research?
Ordinateurs quantiques : Microsoft dit être en mesure de les commercialiser d’ici à quelques années 2025-02-21 2025-02-21T23:55:32Z
> ChatGPT will at least have had the merit of democratizing cheating.
Intelligence artificielle : comment ChatGPT métamorphose la triche scolaire 2025-02-09 2025-02-09T12:23:47Z
cf. [Toolformer](tag:toolformer) (?)
2025-02-21T09:55:49Z 2025-02-21 Leonie on X: "Fine-tuning a model for function calling ..."
Yoshua Bengio (IA): «Des prises de risques dangereuses vont s’accentuer à mesure qu'elle va progresser» - RFI 2025-02-08T08:58:34Z 2025-02-08
2025-02-03T22:05:06Z 2025-02-03 Jusqu’où ira l’intelligence artificielle ? Le Monde
The Adaptiv’Math software (from the company EvidenceB), available in all elementary schools.
> **curiosity** is stimulated by "learning progress", that is, by noticing that one is progressing, with a little time invested, in an activity that seemed difficult at first. [Pierre-Yves Oudeyer](doc:2025/02/pierre_yves_oudeyer_artificia)
2025-02-09T10:36:08Z L’intelligence artificielle à l’école, une révolution déjà en marche 2025-02-09
> I use machines as tools to understand better how children learn and develop, and I study how one can build machines that learn autonomously like children, as well as integrate within human cultures
2025-02-09T10:29:57Z 2025-02-09 Pierre-Yves Oudeyer – Artificial Intelligence, Machine Learning, Cognitive Science
Nash equilibrium - Wikipedia 2025-02-07T01:31:18Z 2025-02-07
> a situation where no player could gain by changing their own strategy (holding all other players' strategies fixed)
2025-02-24T13:45:41Z 2025-02-24 OpenAI o1 Hub
> a new series of AI models designed to spend more time thinking before they respond
2025-02-24 diffuse.one/reasoning_reflections: AI for science with reasoning models 2025-02-24T14:08:53Z
2025-02-12 2025-02-12T23:51:16Z Bellême, Ville Close, Place du Château
« L’intelligence artificielle permet d’inventer de nouveaux processus démocratiques » 2025-02-11 2025-02-11T16:43:56Z
> The power of AI, put to good use, can become a vehicle for cohesion and citizen expression, argues a collective of international entrepreneurs and decision-makers in an op-ed in Le Monde.
> Some of the hard open problems of machine learning and AI are intrinsically related to causality, and progress may require advances in our understanding of how to model and infer causality from data.
2204.00607 From Statistical to Causal Learning Julius von Kügelgen Bernhard Schölkopf 2025-02-07T00:48:32Z Bernhard Schölkopf [2204.00607] From Statistical to Causal Learning We describe basic ideas underlying research to build and understand artificially intelligent systems: from symbolic approaches via statistical learning to interventional models relying on concepts of causality. Some of the hard open problems of machine learning and AI are intrinsically related to causality, and progress may require advances in our understanding of how to model and infer causality from data. 2025-02-07 2022-04-01T17:55:22Z 2022-04-01T17:55:22Z
> KGGen uses an LLM-driven, multi-stage pipeline to address graph sparsity issues:
> 1. Extract entities & relations
> 2. Aggregate info across multiple docs
> 3. Cluster entities and relations based on semantic similarity (e.g. "Supplier A LLC" and "Supplier-A" are 1 node)
>
> we cluster similar nodes and edges respectively, which helps with curating a denser, richer graph. ([tweet](https://x.com/belindmo/status/1891621779073831171))
2025-02-18 2025-02-18T15:05:08Z GitHub - stair-lab/kg-gen: Knowledge Graph Generation from Any Text
Bernhard Schölkopf | Max Planck Institute for Intelligent Systems 2025-02-07T00:31:25Z 2025-02-07
2025-02-12T23:58:57Z 2025-02-12 plateforme ouverte du patrimoine - Bellême
Drogues : « Tuer les trafics passe par la légalisation, c’est le meilleur moyen d’assécher le marché illicite » 2025-02-17 2025-02-17T20:44:08Z
2025-02-18T13:13:59Z 2025-02-18
> AI Agents need memory and must know how to keep it updated over time (this is difficult, and it's the main reason most agents get dumber overnight!). This is where a knowledge graph helps:
> 1. They make it easier for the agent to extract facts from memory
> 2. They make it easier for the agent to update facts as they change.
>
> Zep, a memory layer service that uses Graphiti, a knowledge graph engine.
>
> How Zep works:
>
> 1. You send messages to your AI agent
> 2. Zep synthesizes the information into a knowledge graph
> 3. You can then retrieve any relevant facts from memory extremely fast
[[2501.13956] Zep: A Temporal Knowledge Graph Architecture for Agent Memory](doc:2025/02/2501_13956_zep_a_temporal_kn)
Santiago on X: "Knowledge graphs are a game changer for AI Agents!..." 2025-02-18T14:47:20Z
Pavlo Paliychuk Preston Rasmussen Zep: A Temporal Knowledge Graph Architecture for Agent Memory We introduce Zep, a novel memory layer service for AI agents that outperforms the current state-of-the-art system, MemGPT, in the Deep Memory Retrieval (DMR) benchmark. Additionally, Zep excels in more comprehensive and challenging evaluations than DMR that better reflect real-world enterprise use cases. While existing retrieval-augmented generation (RAG) frameworks for large language model (LLM)-based agents are limited to static document retrieval, enterprise applications demand dynamic knowledge integration from diverse sources including ongoing conversations and business data. Zep addresses this fundamental limitation through its core component Graphiti -- a temporally-aware knowledge graph engine that dynamically synthesizes both unstructured conversational data and structured business data while maintaining historical relationships. In the DMR benchmark, which the MemGPT team established as their primary evaluation metric, Zep demonstrates superior performance (94.8% vs 93.4%). Beyond DMR, Zep's capabilities are further validated through the more challenging LongMemEval benchmark, which better reflects enterprise use cases through complex temporal reasoning tasks. In this evaluation, Zep achieves substantial results with accuracy improvements of up to 18.5% while simultaneously reducing response latency by 90% compared to baseline implementations. These results are particularly pronounced in enterprise-critical tasks such as cross-session information synthesis and long-term context maintenance, demonstrating Zep's effectiveness for deployment in real-world applications. 2025-01-20T16:52:48Z Preston Rasmussen 2025-02-18 Travis Beauvais Daniel Chalef [2501.13956] Zep: A Temporal Knowledge Graph Architecture for Agent Memory 2025-01-20T16:52:48Z Jack Ryan 2501.13956
> There is an emerging pattern of fine-tuning a small language model followed by reinforcement learning.
> A reasoning model is a large language model that is trained to output both a chain of thought and a response. The chain of thought should be relatively long (> 1,000 tokens) and the reasoning should improve its performance relative to a similar-sized non-reasoning model. This is sometimes called "test-time" or "inference-time" scaling because reasoning models emit more tokens per completion and gain some performance as a result.
diffuse.one/reasoning_update_0 2025-02-24 2025-02-24T13:21:09Z
While reliable data-driven decision-making hinges on high-quality labeled data, the acquisition of quality labels often involves laborious human annotations or slow and expensive scientific measurements. Machine learning is becoming an appealing alternative as sophisticated predictive techniques are being used to quickly and cheaply produce large amounts of predicted labels; e.g., predicted protein structures are used to supplement experimentally derived structures, predictions of socioeconomic indicators from satellite imagery are used to supplement accurate survey data, and so on. Since predictions are imperfect and potentially biased, this practice brings into question the validity of downstream inferences. We introduce cross-prediction: a method for valid inference powered by machine learning. With a small labeled dataset and a large unlabeled dataset, cross-prediction imputes the missing labels via machine learning and applies a form of debiasing to remedy the prediction inaccuracies. The resulting inferences achieve the desired error probability and are more powerful than those that only leverage the labeled data. Closely related is the recent proposal of prediction-powered inference, which assumes that a good pre-trained model is already available. We show that cross-prediction is consistently more powerful than an adaptation of prediction-powered inference in which a fraction of the labeled data is split off and used to train the model. Finally, we observe that cross-prediction gives more stable conclusions than its competitors; its confidence intervals typically have significantly lower variability. 2025-02-06 Tijana Zrnic 2024-02-28T22:34:51Z Emmanuel J. Candès [2309.16598] Cross-Prediction-Powered Inference Tijana Zrnic
> We introduce **cross-prediction: a method for valid inference powered by machine learning**.
> Machine learning is increasingly used as an efficient substitute for traditional data collection when the latter is challenging. For example, predictions of conditions such as poverty, deforestation, and population density based on satellite imagery are used to supplement accurate survey data, which requires significant time and resources to collect. However, predictions are imperfect and potentially biased, calling into question the validity of conclusions drawn from such data. This manuscript introduces a method for valid inference powered by machine learning. **The method enables researchers to draw more reliable and accurate conclusions from machine learning predictions**. [src PNAS](https://www.pnas.org/doi/abs/10.1073/pnas.2322083121)
[Implemented in python here](https://ppi-py.readthedocs.io/en/latest/crossppi.html) (a toy sketch of the debiased mean estimate appears at the end of these notes)
Cross-Prediction-Powered Inference 2023-09-28T17:01:58Z 2309.16598 2025-02-06T16:01:34Z
AI, Science and Society Conference - AI ACTION SUMMIT - DAY 1 - YouTube 2025-02-09T16:02:52Z 2025-02-09
Michael Jordan, [Bernhard Schölkopf](tag:bernhard_scholkopf), [Emmanuel Candès](tag:emmanuel_candes), [Yann LeCun](tag:yann_lecun)... "Toward Next Generation AI Systems Beyond Lingual Intelligence" Eric Xing
Benjamin Clavié on X: "What if a [MASK] was all you needed?..." 2025-02-11 2025-02-11T00:25:23Z
2025-02-23 2025-02-23T09:56:32Z Comment la droite tech américaine a pris le pouvoir
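The Cross-Prediction-Powered Inference entry above ends with a pointer to a Python implementation (ppi-py). As a reading aid only, here is a minimal sketch of the underlying idea for estimating a mean, not the ppi-py API: predictions on the large unlabeled set carry most of the estimate, out-of-fold residuals on the small labeled set debias it, and the K-fold loop is the "cross" part (each labeled point is predicted by a model that never saw its label). The function name, the Ridge model, and the synthetic data are assumptions for the example; the real method also provides debiased confidence intervals, which this sketch omits.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def cross_prediction_mean(X_lab, y_lab, X_unlab, n_folds=5, seed=0):
    """Toy cross-prediction point estimate of E[Y]:
    mean of predictions on unlabeled data + mean out-of-fold residual on labeled data."""
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    unlab_preds = []                   # per-fold predictions on the unlabeled pool
    residuals = np.zeros(len(y_lab))   # out-of-fold y - f(x) on the labeled set

    for train_idx, test_idx in kf.split(X_lab):
        model = Ridge().fit(X_lab[train_idx], y_lab[train_idx])
        residuals[test_idx] = y_lab[test_idx] - model.predict(X_lab[test_idx])
        unlab_preds.append(model.predict(X_unlab))

    # Debiasing step: add the average prediction error measured on labeled data
    return np.mean(unlab_preds) + residuals.mean()

# Synthetic check: 100 labeled points, 1900 unlabeled ones
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=2000)
print(cross_prediction_mean(X[:100], y[:100], X[100:]))
print(y.mean())  # reference value that would require labeling every point
```

The point of the construction is that the estimate uses the cheap predictions for power while the labeled residual term keeps it (asymptotically) unbiased even when the model is poor.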