Faced with a climate crisis, the inhabitants abandoned the city around 1900 BC. When they left, they covered the pyramids with stones, hoping perhaps to find them again on their return. 2022-06-16 In Peru, the oldest archaeological complex in the Americas threatened by illegal occupations 2022-06-16T16:37:50Z 2022-06-29T00:28:06Z 2022-06-29 Evaluation Measures in Information Retrieval | Pinecone [Hum](https://twitter.com/SchmidhuberAI/status/1544939700099710976?s=20&t=PVRl5kQfAICedh-tSU1sug) 2022-06-28 2022-06-28T02:38:09Z Yann LeCun on Twitter: "My position/vision/proposal paper is finally available: "A Path Towards Autonomous Machine Intelligence" How Attention works in Deep Learning: understanding the attention mechanism in sequence models | AI Summer 2022-06-09T00:49:32Z 2022-06-09 Stanford Open Virtual Assistant Lab 2022-06-15 2022-06-15T12:52:45Z 2022-06-05T09:15:50Z 2022-06-05 huggingface/evaluate: A library for easily evaluating machine learning models and datasets. Arctic permafrost is a reservoir of genes for resistance to certain antibiotics | CNRS 2022-06-22 2022-06-22T01:14:44Z 2022-06-11 Satyanarayan Kar Ankush Agarwal Prabhjit Thind Asif Ekbal Pushpak Bhattacharyya 2022-05-31T16:49:55Z Knowledge Graph -- Deep Learning: A Case Study in Question Answering in Aviation Safety Domain 2205.15952 Ankush Agarwal Shreya Laddha Ravi Shankar In the commercial aviation domain, there are a large number of documents, such as accident reports (NTSB, ASRS) and regulatory directives (ADs). There is a need for a system to access these diverse repositories efficiently in order to serve needs in the aviation industry, such as maintenance, compliance, and safety. In this paper, we propose a Knowledge Graph (KG) guided Deep Learning (DL) based Question Answering (QA) system for aviation safety. We construct a Knowledge Graph from Aircraft Accident reports and contribute this resource to the community of researchers.
The efficacy of this resource is tested and proved by the aforesaid QA system. Natural Language Queries constructed from the documents mentioned above are converted into SPARQL (the interface language of the RDF graph database) queries and answered. On the DL side, we have two different QA models: (i) BERT QA, which is a pipeline of Passage Retrieval (Sentence-BERT based) and Question Answering (BERT based), and (ii) the recently released GPT-3. We evaluate our system on a set of queries created from the accident reports. Our combined QA system achieves a 9.3% increase in accuracy over GPT-3 and a 40.3% increase over BERT QA. Thus, we infer that the KG-DL combination performs better than either approach alone. 2022-05-31T16:49:55Z [2205.15952] Knowledge Graph -- Deep Learning: A Case Study in Question Answering in Aviation Safety Domain Raj Gite Rajesh Zele 2022-06-11T01:48:52Z 2022-06-13T12:38:47Z 2022-06-13 sentence bert model in onnx format · Issue #46 · UKPLab/sentence-transformers 2022-06-22T01:03:23Z "A terrifying experience": before the Senate, supporters recount in their turn the incidents at the Stade de France 2022-06-22 2022-06-02T13:55:12Z 2022-06-02 Domain transfer with GGPL: German Generative Pseudo Labeling 🥨 | by Matthias Richter | Jun, 2022 | ML6team [Domain transfer with GGPL: German Generative Pseudo Labeling](doc:2022/06/domain_transfer_with_ggpl_germ) Nils Reimers on Twitter: "GPL goes multi-lingual..." 2022-06-01 2022-06-01T17:45:24Z 2022-06-03 2022-06-03T09:17:26Z Understanding Semantic Search and Question Answering | deepset Lucas Oliveira Souza 2022-04-25T15:36:13Z [2201.00042] Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments Abhiram Iyer 2022-06-26T01:23:08Z Akash Velu A key challenge for AI is to build embodied systems that operate in dynamically changing environments. Such systems must adapt to changing task contexts and learn continuously.
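The Passage Retrieval stage of the BERT QA pipeline described above can be sketched in a few lines. The bag-of-words encoder and the toy corpus below are illustrative stand-ins (assumptions, not the paper's components); in the actual pipeline the embeddings would come from a Sentence-BERT model's `encode` method.

```python
import numpy as np

def bow_embed(texts, vocab):
    # Stand-in for a Sentence-BERT encoder: unit-norm bag-of-words vectors.
    vecs = np.zeros((len(texts), len(vocab)))
    for i, text in enumerate(texts):
        for tok in text.lower().split():
            if tok in vocab:
                vecs[i, vocab[tok]] += 1.0
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / np.where(norms == 0, 1.0, norms)

def retrieve(question, passages, top_k=2):
    # Rank passages by cosine similarity to the question embedding.
    tokens = {tok for t in passages + [question] for tok in t.lower().split()}
    vocab = {tok: i for i, tok in enumerate(sorted(tokens))}
    q = bow_embed([question], vocab)
    p = bow_embed(passages, vocab)
    scores = (p @ q.T).ravel()  # rows are unit-norm, so this is cosine similarity
    order = np.argsort(-scores)[:top_k]
    return [(passages[i], float(scores[i])) for i in order]

passages = [
    "The aircraft suffered an engine failure shortly after takeoff.",
    "Maintenance records showed the directive had not been applied.",
    "Weather at the airport was clear with light winds.",
]
best = retrieve("Why did the engine fail after takeoff?", passages, top_k=1)
```

A reader model (the extractive BERT QA stage) then runs only on the retrieved passages, which is what makes the two-stage pipeline tractable over a large document collection.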
Although standard deep learning systems achieve state of the art results on static benchmarks, they often struggle in dynamic scenarios. In these settings, error signals from multiple contexts can interfere with one another, ultimately leading to a phenomenon known as catastrophic forgetting. In this article we investigate biologically inspired architectures as solutions to these problems. Specifically, we show that the biophysical properties of dendrites and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner. Our key contributions are as follows. First, we propose a novel artificial neural network architecture that incorporates active dendrites and sparse representations into the standard deep learning framework. Next, we study the performance of this architecture on two separate benchmarks requiring task-based adaptation: Meta-World, a multi-task reinforcement learning environment where a robotic agent must learn to solve a variety of manipulation tasks simultaneously; and a continual learning benchmark in which the model's prediction task changes throughout training. Analysis on both benchmarks demonstrates the emergence of overlapping but distinct and sparse subnetworks, allowing the system to fluidly learn multiple tasks with minimal forgetting. Our neural implementation marks the first time a single architecture has achieved competitive results on both multi-task and continual learning settings. Our research sheds light on how biological properties of neurons can inform deep learning systems to address dynamic scenarios that are typically impossible for traditional ANNs to solve. 
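The core mechanism in that abstract, dendritic segments that match a context vector and gate each neuron's feedforward activation, followed by a k-winner-take-all step that keeps activity sparse, can be sketched roughly as follows; the dimensions, random weights, and function name are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def active_dendrite_layer(x, context, W, segments, k=2):
    # x: (d_in,) feedforward input; context: (d_ctx,) task signal
    # W: (n, d_in) feedforward weights
    # segments: (n, n_seg, d_ctx) dendritic segment weights per neuron
    feedforward = W @ x                               # (n,)
    seg_act = segments @ context                      # (n, n_seg)
    strongest = seg_act.max(axis=1)                   # best-matching segment
    gated = feedforward / (1.0 + np.exp(-strongest))  # sigmoid gating
    out = np.zeros_like(gated)                        # k-winner-take-all:
    winners = np.argsort(-gated)[:k]                  # only the k most active
    out[winners] = gated[winners]                     # units fire (sparse)
    return out

n, d_in, d_ctx, n_seg = 8, 5, 4, 3
W = rng.normal(size=(n, d_in))
segments = rng.normal(size=(n, n_seg, d_ctx))
x = rng.normal(size=d_in)
# The same input routed under two different task contexts activates
# two (generally different) sparse subnetworks.
sub_a = np.nonzero(active_dendrite_layer(x, rng.normal(size=d_ctx), W, segments))[0]
sub_b = np.nonzero(active_dendrite_layer(x, rng.normal(size=d_ctx), W, segments))[0]
```

Because only the winners receive error signal, updates for one task leave most of the other tasks' subnetworks untouched, which is the intuition behind the reduced forgetting.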
Abhiram Iyer Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments 2022-06-26 Jeremy Forest 2201.00042 Karan Grewal Subutai Ahmad 2021-12-31T19:52:42Z Christian Wolf on Twitter: "The fact that catastrophic forgetting is a thing, indicates that something is fundamentally wrong with our approaches" 2022-06-26 2022-06-26T01:16:27Z > Sparse models stand out among the most promising approaches for the future of deep learning. Instead of every part of a model processing every input (“dense” modeling), sparse models employing conditional computation learn to route individual inputs to different “experts” in a potentially huge network 2022-06-26T01:20:55Z Google AI Blog: LIMoE: Learning Multiple Modalities with One Sparse Mixture-of-Experts Model 2022-06-26 [@bronzeagepapi's answer](https://twitter.com/bronzeagepapi/status/1540821943607336960) : - [Google AI Blog: LIMoE: Learning Multiple Modalities with One Sparse Mixture-of-Experts Model](doc:2022/06/google_ai_blog_limoe_learning) - [[2201.00042] Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments](doc:2022/06/2201_00042_avoiding_catastrop) - [Continual-T0: Progressively Instructing 50+ Tasks to Language Models Without Forgetting](https://twitter.com/ThomasScialom/status/1529163763462688768) 2022-06-04T17:04:01Z 2022-06-04 Glyphosate impairs bumblebee reproduction 2022-06-15T12:54:49Z 2022-06-15 > AI that supports human goals, but is constrained by human values > Electricity is the new AI?
> Virtual Assistant Programming Language > Language: a way to network human brains together [SAIF2020] Day2: Natural Language Processing - Christopher Manning | Samsung - YouTube 2022-06-09T09:41:53Z 2022-06-09 Guatemala: a giant mesh fence will attempt to hold back a large quantity of the waste pouring into the oceans Archaeology in all its forms | CNRS Le journal 2022-06-16T00:57:21Z 2022-06-16 2022-06-12 2022-06-12T14:02:44Z > Such an absence of political will is a sign that the agricultural Green Deal will not happen. Here again, France was a pioneer: in 2008, the Ecophyto plan set the goal of a 50% reduction in pesticide use within ten years. Fifteen years later, pesticide use has only grown. "The agricultural Green Deal will not happen" Lenka Zdeborova on Twitter: "Amazing lecture of @SebastienBubeck giving intuition on transformers and how to approach them theoretically..." 2022-06-30T14:21:53Z 2022-06-30 > To me, what's good about transformers is that they have relative filters. I mean **a standard NN tests an input against a fixed filter w, but here we test part of x against another part of x**. (#[Self-Attention](tag:self_attention)) > > This potentially allows for reasoning to emerge: the network can associate concepts that it encounters, compare them, make analogies > LEGO: Learning Equality and Group Operations. It's a very **basic reasoning task**, where a sentence is made of clauses defining variables as a function of some other variable, and the goal is to **resolve the value of the variables**. Unveiling Transformers with LEGO - YouTube > Clearest talk on transformers I have ever seen.
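The "relative filter" remark above (a standard layer scores the input against a fixed learned w, while self-attention scores parts of x against other parts of x) is easy to see in a minimal single-head self-attention; the shapes and random weights below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(X, Wq, Wk, Wv):
    # Each row (position) of X is scored against every other row of X:
    # the "filter" is computed from the input itself, not fixed in advance.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])          # (n, n) pairwise scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over positions
    return weights @ V, weights                     # mix values by relevance

n, d = 4, 8                      # 4 positions, model width 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```

Each row of `attn` sums to 1 and says how strongly that position attends to every other position, which is what lets the network compare and associate the concepts it encounters.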
[Unveiling Transformers with LEGO - YouTube](doc:2022/06/unveiling_transformers_with_leg) 2022-06-30 2022-06-30T12:52:54Z ELS-RD/transformer-deploy: Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 2022-06-13 2022-06-13T12:40:13Z 2022-06-07T12:18:30Z > The most widely used Multi-Class classification loss function is Categorical Cross-Entropy loss, also named SoftMax loss, i.e. SoftMax activation followed by a Cross-Entropy loss. However, does it explicitly maximize class separability? The ArcFace approach obtains highly discriminative features for face recognition. Face Recognition and ArcFace: Additive Angular Margin Loss for Deep Face Recognition | by Daniela Gingold | Analytics Vidhya | Medium 2022-06-07 2022-06-03T21:38:14Z 2022-06-03 Tomas Pueyo on Twitter: "France is weird..." 2022-06-10 2022-06-10T14:46:28Z (PDF) Improving Sterile Insect Technique for tsetse flies through research on their symbiont and pathogens Dr. GARBA Moussa 🇳🇪🇳🇬🇫🇷 on Twitter: "Hausa NLP specialist Ibrahim Said Ahmad" 2022-06-22 2022-06-22T01:17:53Z FP Servant on Twitter: "@GDarmanin 'Your endless lies and your false narratives have only amplified our trauma.
We will never forgive them' -- Ted Morris, Liverpool Disabled Supporters Association #DarmaninDémission" 2022-06-22T01:06:10Z 2022-06-22 AmenRa/ranx: Python Library for Ranking Evaluation 2022-06-01 2022-06-01T18:26:42Z a library of fast ranking evaluation metrics Maps - Sahel and West Africa Club Secretariat 2022-06-03T19:07:51Z 2022-06-03 every imaginable map [Agro climatic conditions](https://www.oecd.org/media/oecdorg/satellitesites/swac/maps/reg-atlas/REG_26_5_2_Agroclimatic_conditions_WEB_EN.jpg) <https://www.oecd.org/media/oecdorg/satellitesites/swac/maps/reg-atlas/REG_102_15_1_Rainfall_and_climate_zones_WEB_EN.jpg> 2022-06-22T01:24:53Z 2022-06-22 HausaNLP Research Group 2022-06-07T17:58:34Z 2022-06-07 ACL 2022 Highlights Chris Olah on Twitter: "I'm excited to *finally* be making progress on understanding the first MLP layer in large transformer LMs. I've tried really hard and prior to SoLU had little success." / Twitter 2022-06-27T19:48:41Z 2022-06-27 2022-06-13T12:36:08Z 2022-06-13 Hugging Face Transformer Inference Under 1 Millisecond Latency | by Michaël Benesty | Towards Data Science 2022-06-04T09:53:55Z 2022-06-04 World History Encyclopedia Ruofei Zhang Yangfeng Ji 2020-08-28T18:58:15Z Jianfeng Gao [2008.12813] HittER: Hierarchical Transformers for Knowledge Graph Embeddings Jian Jiao HittER: Hierarchical Transformers for Knowledge Graph Embeddings Sanxing Chen > HittER, a deep hierarchical Transformer model to learn representations of entities and relations in a knowledge graph jointly by aggregating information from graph neighborhoods. > learning knowledge graph embeddings from one triplet at a time ignores the abundant structural information in the graph context > Unlike the previous shallow KGE methods that cannot be trivially utilized by widely used Transformer-based models for language tasks (Peters et al., 2019), our approach benefits from the unified Transformer architecture and its extensibility.
As a case study, **we show how to integrate the learned representations of HittER into pre-trained language models like BERT**. [GitHub](https://github.com/microsoft/HittER) This paper examines the challenging problem of learning representations of entities and relations in a complex multi-relational knowledge graph. We propose HittER, a Hierarchical Transformer model to jointly learn Entity-relation composition and Relational contextualization based on a source entity's neighborhood. Our proposed model consists of two different Transformer blocks: the bottom block extracts features of each entity-relation pair in the local neighborhood of the source entity and the top block aggregates the relational information from outputs of the bottom block. We further design a masked entity prediction task to balance information from the relational context and the source entity itself. Experimental results show that HittER achieves new state-of-the-art results on multiple link prediction datasets. We additionally propose a simple approach to integrate HittER into BERT and demonstrate its effectiveness on two Freebase factoid question answering datasets. 2022-06-30T18:33:10Z 2021-10-06T04:52:07Z 2008.12813 2022-06-30 Xiaodong Liu Sanxing Chen 2022-06-29T18:09:51Z 2022-06-29 Using BERT For Classifying Documents with Long Texts | by Armand Olivares | Medium
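For the long-document classification problem in the last link, the usual workaround for BERT's 512-token input limit is a sliding window: split the document into overlapping chunks, classify each chunk, and pool the chunk-level scores. A minimal sketch, where the keyword-counting `toy_score` is an assumption standing in for a fine-tuned BERT classifier:

```python
def chunk_tokens(tokens, max_len=512, stride=256):
    # Overlapping sliding windows over a token list.
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += stride
    return chunks

def classify_long_document(text, score_chunk, max_len=512, stride=256):
    # Score each chunk, then mean-pool the chunk scores into one document score.
    chunks = chunk_tokens(text.split(), max_len, stride)
    scores = [score_chunk(c) for c in chunks]
    return sum(scores) / len(scores)

def toy_score(chunk):
    # Hypothetical stand-in for a BERT classifier head:
    # fraction of "incident" vocabulary in the chunk.
    keywords = {"accident", "failure", "incident"}
    return sum(tok in keywords for tok in chunk) / len(chunk)

doc = ("engine failure " * 300) + ("routine flight " * 300)
prob = classify_long_document(doc, toy_score, max_len=128, stride=64)
```

Max-pooling the chunk scores is the common alternative when a single positive chunk should be enough to drive the document label.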