J'y étais
http://www.semanlink.net/tag/j_y_etais
Documents tagged with J'y étais
Jerry Liu on Twitter: "The DSP project carries a lot of insights for improving RAG..."
http://www.semanlink.net/doc/2023/06/jerry_liu_sur_twitter_the_ds
> - value of few-shot ex’s
> - declarative modules
> - compile an optimized system with distilled LM’s
2023-06-18T10:27:05ZNAVER LABS Europe : "@Nils_Reimers of @huggingface on 'Unsupervised domain adaptation for neural search'"
http://www.semanlink.net/doc/2022/03/naver_labs_europe_nils_reim
2022-03-09T10:53:24ZMatching Resumes to Jobs via Deep Siamese Network | Companion Proceedings of the The Web Conference 2018
http://www.semanlink.net/doc/2020/02/matching_resumes_to_jobs_via_de
Siamese adaptation of a CNN, using a contrastive loss. The document embeddings of resumes and job descriptions (dim 200) are generated using [#Doc2Vec](/tag/doc2vec.html) and given as inputs to the network.
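As a reminder of how such a Siamese setup is trained, here is a minimal numpy sketch of the contrastive loss (the `margin` hyperparameter and the loss form are a standard formulation, assumed rather than taken from the paper):

```python
import numpy as np

def contrastive_loss(e1, e2, y, margin=1.0):
    """Contrastive loss on a pair of document embeddings.

    y = 1 for a matching (resume, job) pair, 0 for a non-matching one.
    Matching pairs are pulled together; non-matching pairs are pushed
    apart until their distance exceeds the margin.
    """
    d = np.linalg.norm(e1 - e2)
    return y * d ** 2 + (1 - y) * max(0.0, margin - d) ** 2
```

In practice the two embeddings come from the shared-weight CNN branches, and the loss is averaged over a batch of labeled pairs.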
2020-02-10T13:43:44ZJournée commune AFIA - ARIA - 2 décembre 2019
http://www.semanlink.net/doc/2019/12/journee_commune_afia_aria_2
2019-12-01T23:30:03ZParis NLP Season 4 Meetup #1 at Algolia
http://www.semanlink.net/doc/2019/10/paris_nlp_season_4_meetup_1_at
Slides of the ["Language and Perception in Deep Learning"](/doc/2019/10/language_and_perception_in_deep) talk
2019-10-07T23:04:39Zthunlp/OpenKE: An Open-Source Package for Knowledge Embedding (KE)
https://github.com/thunlp/OpenKE
[paper at EMNLP 2018](https://www.aclweb.org/anthology/papers/D/D18/D18-2024/)
2019-04-23T20:10:11ZMicrosoft Academic
https://academic.microsoft.com/
2019-02-25T10:15:38ZMulti-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction - ACL Anthology
https://aclanthology.info/papers/D18-1360/d18-1360
Attempting to answer questions such as: "What is the task described in this paper?", "what method was used in solving the task?", "what dataset did the paper use?". The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links.
2019-02-09T11:28:06Z10 Exciting Ideas of 2018 in NLP
http://ruder.io/10-exciting-ideas-of-2018-in-nlp/
2018-12-19T21:48:10ZHighlights of EMNLP 2018 – Chris Zhu – Medium
https://medium.com/@chriszhu12/highlights-of-emnlp-2018-55892fba4247
2018-11-25T17:24:27ZInteresting Stuff at EMNLP (part II) – Valentin Malykh – Medium
https://medium.com/@madrugado/interesting-stuff-at-emnlp-part-ii-ce92ac928f16
2018-11-25T15:55:26ZInteresting Stuff in EMNLP (part I) – Valentin Malykh – Medium
https://medium.com/@madrugado/interesting-stuff-in-emnlp-part-i-4a79b5007eb1
2018-11-25T15:53:56ZGoogle AI Blog: Google at EMNLP 2018
https://ai.googleblog.com/2018/10/google-at-emnlp-2018.html
2018-11-25T15:14:25ZAssociative Multichannel Autoencoder for Multimodal Word Representation (2018)
https://aclanthology.coli.uni-saarland.de/papers/D18-1011/d18-1011
learning multimodal word representations by integrating textual, visual and auditory inputs.
2018-11-15T01:27:25ZEMNLP 2018 Thoughts and Notes · Supernatural Language Processing
https://supernlp.github.io/2018/11/10/emnlp-2018/
2018-11-13T00:22:21ZTrying to Understand Recurrent Neural Networks for Language Processing (slides)
http://u.cs.biu.ac.il/~yogo/blackbox2018.pdf
2018-11-11T23:29:46ZWord Mover's Embedding: From Word2Vec to Document Embedding (2018)
https://aclanthology.coli.uni-saarland.de/papers/D18-1482/d18-1482
unsupervised embeddings of variable-length sentences from pre-trained word embeddings (works better on short texts).
(Builds on the word mover's distance, but using ideas borrowed from kernel methods approximation, gets a representation of sentences, instead of just a distance between them)
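A toy sketch of the idea: represent a document by its kernel similarity to a set of random short documents. Here a cheap relaxed lower bound stands in for the exact word mover's distance, and `gamma` and the random documents are hypothetical placeholders, not the paper's actual setup:

```python
import numpy as np

def relaxed_wmd(X, Y):
    """Cheap lower bound on the word mover's distance between two documents,
    each given as an array of word vectors: every word moves entirely to its
    nearest word in the other document."""
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return max(D.min(axis=1).mean(), D.min(axis=0).mean())

def wme_features(doc, random_docs, gamma=1.0):
    """Word Mover's Embedding idea: a random-feature approximation of the
    WMD kernel, turning a distance into a fixed-size representation."""
    return np.array([np.exp(-gamma * relaxed_wmd(doc, w)) for w in random_docs])
```

Two documents can then be compared with an ordinary dot product of their feature vectors instead of recomputing an optimal-transport distance for every pair.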
2018-11-10T15:38:38ZEMNLP 2018 Highlights: Inductive bias, cross-lingual learning, and more
http://ruder.io/emnlp-2018-highlights/
2018-11-08T23:49:49ZAdapting the Neural Encoder-Decoder Framework from Single to Multi-Document Summarization
https://twitter.com/feiliu_nlp/status/1058985012945735680
2018-11-06T23:11:24ZDeep Chit-Chat: deep learning for chatbots (EMNLP 2018 Tutorial)
http://ruiyan.me/pubs/tutorial-emnlp18.pdf
by Dr Wei Wu (Microsoft XiaoIce, a chatbot with 200 million users in China) and Dr Rui Yan (Peking Univ)
- Chit-chat (casual, non-goal-oriented), open-domain. Must be relevant to the context and diverse (informative) to be engaging.
- Why create a chatbot? To prove an AI can speak like a human, for commercial reasons, to link to services.
Task-oriented vs. non-task-oriented: this tutorial is about the latter.
Retrieval-based vs. generation-based.
Basic knowledge of DL for chatbots:
- word embeddings
- sentence embeddings (CNN, RNN)
- dialogue modeling: seq-to-seq with attention
Response selection for retrieval based chatbots:
- single turn response selection (slides 37-57)
- framework 1: matching with seq embeddings
- framework 2: matching with message-response interaction (46)
- extension of 1: matching with external knowledge (53)
- extension of 2: matching with multiple levels of representations (54)
- insights from comparison between 1 and 2 (57)
- multi turn response selection (62)
- context is now: message + history
- again, 2 frameworks
Emerging directions (79):
- matching with better representations
- Self-Attention (82)
- fusing multiple types of repr. But how to fuse matters (83)
- pre-training
Learning a matching model for response selection (84)
Generation based models for chatbots:
- single turn generation (89)
- Basic generation model
- seq2seq
- Attention
- Bi-directional modeling
- multi turn generation
- Contexts are important
- Context sensitive models
- Hierarchical context modeling
- Latent variable modeling
- Hierarchical memory networks
Diversity in conversations (99)
Content introducing (106)
Additional elements (113)
- Topics in conversation
- Emotions
Persona in chat:
- Persona
- ...
- Knowledge
- Common sense
RL and Adversarial learning in conversations (125)
Evaluation (132)
Future trends:
- Reasoning in dialogues
- X-grounded dialogues
2018-11-06T14:37:53ZJoint Models in NLP - Slides - Tutorial (EMNLP 2018) - Yue Zhang
https://frcchang.github.io/tutorial/EMNLP2018_joint_models.pdf
**Joint models: solve 2 tasks at once.**
Related tasks: POS tagging, NER, chunking; these form a pipeline.
Motivations:
- reduce error propagation
- information exchange between tasks
Challenges:
- Joint learning
- Search
History: statistical models. 2 kinds:
- Graph-Based Methods
- Traditional solution:
- Score each candidate, select the highest-scored output
- Search-space typically exponential
- Transition-Based Methods
- Transition-Based systems: Automata
- State: partial result during decoding, Action: operations that can be applied for state transition
- Output constructed incrementally
- Deep learning based model
- Neural transition based models
- Neural graph-based models
- Cross task
- Seminal work: Collobert, Ronan, et al. "Natural language processing (almost) from scratch."
- Not all tasks are mutually beneficial
- Ramachandran, et al. “Unsupervised pretraining for sequence to sequence learning.”
- Peters, Matthew E., et al. "Deep contextualized word representations." (ELMo)
- "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding."
- ULMFIT
- Correlation between multi-task learning and pretraining
- Cross lingual
- Cross domain
- Cross standard
2018-11-06T11:22:04ZPROCEEDINGS of the BlackboxNLP Workshop
https://aclanthology.coli.uni-saarland.de/volumes/proceedings-of-the-2018-emnlp-workshop-blackboxnlp-analyzing-and-interpreting-neural-networks-for-nlp
2018-11-06T10:06:41ZAnalyzing and interpreting neural networks for NLP (Workshop's Home page)
https://blackboxnlp.github.io/
2018-11-06T09:58:57ZWriting code for Natural language processing Research
https://medium.com/@hadyelsahar/writing-code-for-natural-language-processing-research-emnlp2018-nlproc-a87367cc5146
2018-11-05T18:48:58ZTransfer learning with language models
https://drive.google.com/file/d/1kmNAwrSlFYo0cN_DcURMOArBwe9FxWxR/view
2018-11-05T13:50:50ZMulti-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction
https://aclanthology.coli.uni-saarland.de/papers/D18-1360/d18-1360
> A multi-task setup of identifying
and classifying entities, relations, and coreference
clusters in scientific articles.
> The framework supports **construction of a scientific
knowledge graph**
[http://nlp.cs.washington.edu/sciIE/](http://nlp.cs.washington.edu/sciIE/)
2018-11-04T09:31:50ZConference Schedule - EMNLP 2018
http://emnlp2018.org/schedule
2018-11-04T00:49:44ZSelf-Governing Neural Networks for On-Device Short Text Classification - Sujith Ravi | Zornitsa Kozareva (2018)
https://aclanthology.coli.uni-saarland.de/papers/D18-1092/d18-1092
[same paper](https://aclweb.org/anthology/papers/D/D18/D18-1092/)
2018-11-02T23:20:31ZEMNLP (2018) - ACL Anthology - Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
https://aclanthology.coli.uni-saarland.de/events/emnlp-2018
2018-11-02T23:16:49ZDeep Latent-Variable Models for Natural Language - Tutorial - harvardnlp
http://nlp.seas.harvard.edu/latent-nlp-tutorial.html
[arxiv](https://arxiv.org/abs/1812.06834.pdf)
2018-11-01T22:28:15ZFacebook Research at EMNLP – Facebook Research
https://research.fb.com/facebook-research-at-emnlp/
2018-11-01T17:12:02ZTrying to Understand Recurrent Neural Networks for Language Processing (tweets)
https://twitter.com/yuvalpi/status/1057909000551964673
2018-11-01T16:58:32ZWriting Code for NLP Research, AllenNLP's tutorial at #emnlp2018
https://docs.google.com/presentation/d/17NoJY2SnC2UMbVegaRCWA7Oca7UCZ3vHnMqBV4SUayc/edit#slide=id.p
2018-10-31T18:11:21ZTutorials - EMNLP 2018
http://emnlp2018.org/program/tutorials/
2018-10-31T15:56:28Z[1809.00782] Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text
https://arxiv.org/abs/1809.00782
QA over the combination of a KB and entity-linked text, which is appropriate when an incomplete KB is available with a large text corpus.
> In practice, some questions are best answered
using text, while others are best answered using
KBs. A natural question, then, is how to effectively
combine both types of information. Surprisingly
little prior work has looked at this problem.
2018-09-06T01:38:28Z[1601.03764] Linear Algebraic Structure of Word Senses, with Applications to Polysemy
https://arxiv.org/abs/1601.03764
> Here it is shown that multiple word senses reside
in linear superposition within the word
embedding and simple sparse coding can recover
vectors that approximately capture the
senses
> Each extracted word sense is accompanied by one of about 2000 “discourse atoms” that gives a succinct description of which other words co-occur with that word sense.
> The success of the approach is mathematically explained using a variant of
the random walk on discourses model
("random walk": a generative model for language). Under the assumptions of this model, there
exists a linear relationship between the vector of a
word w and the vectors of the words in its contexts (It is not the average of the words in w's context, but in a given corpus the matrix of the linear relationship does not depend on w. It can be estimated, and so we can compute the embedding of a word from the contexts it belongs to)
[Related blog post](/doc/?uri=https%3A%2F%2Fwww.offconvex.org%2F2016%2F07%2F10%2Fembeddingspolysemy%2F)
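A toy sketch of the sparse-coding step described above, with a greedy pursuit standing in for the paper's actual optimization (the atoms, k, and data are hypothetical):

```python
import numpy as np

def sparse_code(v, atoms, k=2):
    """Greedily select k 'discourse atoms' (rows of `atoms`) whose linear
    span best reconstructs the word vector v, mimicking the claim that a
    polysemous word vector is a sparse superposition of sense directions."""
    chosen, residual, coef = [], v.copy(), None
    for _ in range(k):
        j = int(np.argmax(np.abs(atoms @ residual)))  # most correlated atom
        chosen.append(j)
        A = atoms[chosen].T                            # d x len(chosen)
        coef, *_ = np.linalg.lstsq(A, v, rcond=None)   # best coefficients so far
        residual = v - A @ coef
    return chosen, coef
```

Each selected atom, with its neighborhood of co-occurring words, then serves as a succinct description of one sense of the word.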
2018-08-28T11:00:08Z2018 Conference on Empirical Methods in Natural Language Processing - EMNLP 2018
http://emnlp2018.org/
2018-08-23T22:37:54ZLa Marseillaise du bicentenaire de la Révolution
http://www.ina.fr/video/I05266317/jessye-norman-video.html
2018-07-14T14:27:33ZModule google/universal-sentence-encoder | TensorFlow
https://www.tensorflow.org/hub/modules/google/universal-sentence-encoder-large/1
[Paper presented at EMNLP 2018](https://aclanthology.coli.uni-saarland.de/papers/D18-2029/d18-2029)
2018-05-23T16:35:31ZImproving Word Embedding Compositionality using Lexicographic Definitions
https://doi.org/10.1145/3178876.3186007
How to obtain the best text representations from word representations (word embeddings)? The author uses lexicographic resources (WordNet) for his tests: is the embedding obtained for a word's definition close to that of the word itself?
The paper builds on a [thesis by the same author](/doc/?uri=https%3A%2F%2Fesc.fnwi.uva.nl%2Fthesis%2Fcentraal%2Ffiles%2Ff1554608041.pdf), which is clear and well written.
2018-05-10T16:29:46ZThat Makes Sense: Joint Sense Retrofitting from Contextual and Ontological Information
https://dl.acm.org/citation.cfm?doid=3184558.3186906
Post-processing method for generating low-dimensional sense embeddings. Employs ontological and contextual information simultaneously.
(Poster at the Web Conf) [Github](https://github.com/y95847frank/Joint-Retrofitting)
Computes sense embeddings, starting from pre-computed word embeddings (e.g. with word2vec) and lexicographic data (e.g. WordNet), by constraining, for each sense, the distance between the sense embedding and the word embedding.
Abstract:
> While recent word embedding models demonstrate their abilities to capture syntactic and semantic information, the demand for sense level embedding is getting higher. In this study, we propose a novel joint sense embedding learning model that retrofits the word representation into sense representation from contextual and ontological information. The experiments show the effectiveness and robustness of our model that outperforms previous approaches in four public available benchmark datasets.
> Given a trained word embedding and a lexical ontology that contains sense level relationships (e.g., synonym, hypernym, etc.), our model generates new sense vectors via constraining the distance between the sense vector and its word form vector, its sense neighbors and its contextual neighbors
[Influenced by](/doc/?uri=https%3A%2F%2Farxiv.org%2Fabs%2F1411.4166) (which post-processes and modifies word vectors to incorporate knowledge from semantic lexicons, while this creates new sense vectors)
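The distance constraints above resemble classic retrofitting objectives, whose update has a simple closed form when the neighbors are held fixed. A minimal sketch under that assumption (uniform weights; the neighbor sets and `alpha`/`beta` are hypothetical, not the paper's parameters):

```python
import numpy as np

def retrofit_sense(word_vec, neighbor_vecs, alpha=1.0, beta=1.0):
    """Move a sense vector toward a weighted average of its word-form vector
    and its sense/contextual neighbors, as in retrofitting-style objectives
    that penalize distance to both."""
    num = alpha * word_vec + beta * np.sum(neighbor_vecs, axis=0)
    return num / (alpha + beta * len(neighbor_vecs))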
2018-05-10T14:57:18ZWeakly-supervised Relation Extraction by Pattern-enhanced Embedding Learning
https://dl.acm.org/citation.cfm?doid=3178876.3186024
Semi-supervised relation extraction from text corpora, in a setting where little labeled data describing the relations is available.
For example, labeled data indicates that the text "Beijing, capital of China" corresponds to the entity relation ("Beijing", "Capital Of", "China"), and we want to extract the relevant entities and relations from text such as "Paris, France's capital, ..."
The paper describes a method combining two modules, one based on automatic pattern extraction (e.g. "[Head], Capital Of [Tail]"), the other on distributional semantics (word embeddings). The two modules collaborate: the first creates relation instances that augment the knowledge base on which the second is trained, and the second helps the first identify informative patterns ("co-training").
2018-05-10T14:42:58ZTUTORIAL: Graph-based Text Representations (SLIDES)
http://www.lix.polytechnique.fr/~mvazirg/gow_tutorial_webconf_2018.pdf
Slides of [tutorial](https://www2018.thewebconf.org/program/tutorials-track/tutorial-213/)
2018-05-10T14:02:48ZTUTORIAL: Graph-based Text Representations: Boosting Text Mining, NLP and Information Retrieval with Graphs
https://www2018.thewebconf.org/program/tutorials-track/tutorial-213/
How to go beyond the limitations of the bag-of-words model by modeling text as a graph.
Organized by [Michalis.Vazirgiannis](http://www.lix.polytechnique.fr/Labo/Michalis.Vazirgiannis/) (Polytechnique) and [Fragkiskos D. Malliaros](http://fragkiskos.me) (CentraleSupelec)
[Slides](http://www.lix.polytechnique.fr/~mvazirg/gow_tutorial_webconf_2018.pdf)
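The core graph-of-words construction from the tutorial can be sketched as a sliding-window co-occurrence graph (the window size is a hypothetical parameter; real implementations add term weighting on top):

```python
def graph_of_words(tokens, window=3):
    """Build an undirected graph-of-words: an edge links two terms that
    co-occur within a sliding window, weighted by co-occurrence count."""
    edges = {}
    for i, t in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            if t != tokens[j]:
                e = tuple(sorted((t, tokens[j])))
                edges[e] = edges.get(e, 0) + 1
    return edges
```

Unlike bag-of-words, the resulting graph captures term order and proximity, which degree- or centrality-based term weights can then exploit.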
2018-05-10T13:51:07ZTUTORIAL: Representation Learning on Networks - TheWebConf 2018
http://snap.stanford.edu/proj/embeddings-www/index.html#materials
Network representation learning (NRL): approaches that automatically learn to encode network structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction.
**Goal of representation learning for networks: efficient task-independent feature learning for ML in networks.** But it's hard. DL toolboxes are designed for single sequences or grids (for instance CNNs for images; RNNs and word2vec assume fixed-size inputs), but networks are far more complex!
from the abstract:
> In this tutorial, we will cover key advancements in NRL over the last decade, with an emphasis on fundamental advancements made in the last two years. We will discuss classic matrix factorization-based methods (e.g., Laplacian eigenmaps), random-walk based algorithms (e.g., DeepWalk and node2vec), as well as very recent advancements in graph convolutional networks (GCNs). We will cover methods to embed individual nodes (see [node embeddings](/tag/node_embeddings)) as well as approaches to embed entire (sub)graphs, and in doing so, we will present a unified framework for NRL.
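Random-walk based methods like DeepWalk reduce NRL to word2vec: sample truncated random walks and treat each walk as a sentence of node tokens. A minimal sketch of the walk-sampling step (the graph, walk counts, and lengths are hypothetical):

```python
import random

def sample_walks(adj, walks_per_node=2, walk_len=4, seed=0):
    """Sample truncated random walks over an adjacency-list graph; each walk
    can then be fed to a skip-gram model as a 'sentence' of node ids."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in adj:
            walk = [start]
            while len(walk) < walk_len and adj[walk[-1]]:
                walk.append(rng.choice(adj[walk[-1]]))
            walks.append(walk)
    return walks
```

Training skip-gram on these "sentences" yields node embeddings in which nodes that co-occur on walks (i.e., are close in the graph) end up close in the embedding space.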
2018-05-05T13:31:59ZSmart-MD: Neural Paragraph Retrieval of Medical Topics
https://dl.acm.org/citation.cfm?doid=3184558.3186979
2018-04-28T17:45:44ZL’inventeur du Web exhorte à réguler l’intelligence artificielle
http://www.lemonde.fr/pixels/article/2018/04/27/l-inventeur-du-web-exhorte-a-reguler-l-intelligence-artificielle_5291555_4408996.html
2018-04-28T16:16:19ZHighLife: Higher-arity Fact Harvesting
https://dl.acm.org/citation.cfm?id=3186000
**Best paper award** at TheWebConf 2018.
An approach to harvest higher-arity facts from textual sources. Our method is distantly supervised by seed facts, and uses the fact-pattern duality principle to gather fact candidates with high recall. For high precision, we devise a constraint-based reasoning method to eliminate false candidates. A major novelty is in coping with the difficulty that higher-arity facts are often expressed only partially in texts and strewn across multiple sources. For example, one sentence may refer to a drug, a disease and a group of patients, whereas another sentence talks about the drug, its dosage and the target group without mentioning the disease. Our methods cope well with such partially observed facts, at both pattern-learning and constraint-reasoning stages.
2018-04-28T01:06:34Z« Le Web a développé des résistances antibiotiques à la démocratie »
http://www.lemonde.fr/pixels/article/2018/04/25/le-web-a-developpe-des-resistances-antibiotiques-a-la-democratie_5290627_4408996.html
2018-04-26T08:17:19ZGraphChain – A Distributed Database with Explicit Semantics and Chained RDF Graphs
http://delivery.acm.org/10.1145/3200000/3191554/p1171-sopek.html?ip=37.71.228.186&id=3191554&acc=OPEN&key=4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35%2E6D218144511F3437&__acm__=1524690407_a6f0908759ebcbdcb90ed0cfa942743c
2018-04-25T23:03:51ZGraphChain
https://www.slideshare.net/sopekmir/graphchain
2018-04-25T23:02:27ZPROCEEDINGS – The Web Conference in Lyon
https://www2018.thewebconf.org/proceedings/
2018-04-23T17:33:50ZImproving the Compositionality of Word Embeddings (2017)
https://esc.fnwi.uva.nl/thesis/centraal/files/f1554608041.pdf
(MS thesis, a [paper at TheWebConf 2018](/doc/?uri=https%3A%2F%2Fdoi.org%2F10.1145%2F3178876.3186007))
> This thesis explores a method to find better encodings of meaning a computer can work with. We specifically want to combine encodings of word meanings in such a way that a good encoding of their joint meaning is created. The act of combining multiple representations of meaning into a new representation of meaning is called semantic composition.
Analysis of four word embeddings (Word2Vec, GloVe, fastText and Paragram) in terms of their semantic compositionality. A method to tune these embeddings towards better compositionality, using a simple neural network architecture with definitions and lemmas from WordNet.
> Since dictionary definitions are semantically similar to their associated lemmas, they are the ideal candidate for our tuning method, as well as evaluating for compositionality. Our architecture allows for the embeddings to be composed using simple arithmetic operations, which makes these embeddings specifically suitable for production applications such as web search and data mining. We also explore more elaborate and involved compositional models, such as recurrent composition and convolutional composition.
2018-02-13T11:39:04ZRESEARCH TRACK: Web Content Analysis, Semantics and Knowledge
https://www2018.thewebconf.org/program/web-content-analysis/
[CFP](https://www2018.thewebconf.org/call-for-papers/research-tracks-cfp/web-content-analysis/)
> In previous years, ‘content analysis’ and ‘semantic and knowledge’ were in separate track. This year, we combined these tracks to emphasize the close relationship between these topics; **the use of content to curate knowledge and the use of knowledge to guide content analysis and intelligent usage**.
Some of the accepted papers:
### [Large-Scale Hierarchical Text Classification with Recursively Regularized Deep Graph-CNN](https://doi.org/10.1145/3178876.3186005)
[Hierarchical Text Classification](/tag/nlp_hierarchical_text_classification): Text classification to a hierarchical taxonomy of topics, using graph representation of text, and CNN over this graph
Refers back to what was covered in the "Graph-based Text Representations" tutorial.
from the abstract:
> a graph-CNN based deep learning model to first convert texts to graph-of-words, and then use graph convolution operations to convolve the word graph. Graph-of-words representation of texts has the advantage of capturing non-consecutive and long-distance semantics. CNN models have the advantage of learning different level of semantics. To further leverage the hierarchy of labels, we regularize the deep architecture with the dependency among labels
Conversion of text to graph: potentially given a single document
### [Weakly-supervised Relation Extraction by Pattern-enhanced Embedding Learning](https://doi.org/10.1145/3178876.3186024 )
Semi-supervised relation extraction from text corpora, in a setting where little labeled data describing the relations is available.
For example, labeled data indicates that the text "Beijing, capital of China" corresponds to the entity relation ("Beijing", "Capital Of", "China"), and we want to extract the relevant entities and relations from text such as "Paris, France's capital, ..."
The paper describes a method combining two modules, one based on automatic pattern extraction (e.g. "[Head], Capital Of [Tail]"), the other on distributional semantics (word embeddings). The two modules collaborate: the first creates relation instances that augment the knowledge base on which the second is trained, and the second helps the first identify informative patterns ("co-training").
### [Scalable Instance Reconstruction in Knowledge Bases via Relatedness Affiliated Embedding](https://doi.org/10.1145/3178876.3186017)
Knowledge base completion problem: usually, it is formulated as a link prediction problem, but not here. A novel knowledge embedding model ("Joint Modelling and Learning of Relatedness and Embedding")
### [Improving Word Embedding Compositionality using Lexicographic Definitions](https://doi.org/10.1145/3178876.3186007)
How to obtain the best text representations from word representations (word embeddings)? The author uses lexicographic resources (WordNet) for his tests: is the embedding obtained for a word's definition close to that of the word itself?
The paper builds on a [thesis by the same author](https://esc.fnwi.uva.nl/thesis/centraal/files/f1554608041.pdf), which is clear and well written.
### [CESI: Canonicalizing Open Knowledge Bases using Embeddings and Side Information](https://doi.org/10.1145/3178876.3186030)
Improves the extraction of (noun phrase, property, noun phrase) triples from text by computing embeddings for the noun phrases (~entities)
### [Short-Text Topic Modeling via Non-negative Matrix Factorization Enriched with Local Word-Context Correlations](https://doi.org/10.1145/3178876.3186009)
Topic modeling for short texts, leveraging the word-context semantic correlations in the training
### [Towards Annotating Relational Data on the Web with Language Models](https://doi.org/10.1145/3178876.3186029)
### A paper by [David Blei](/tag/david_blei): (Dynamic Embeddings for Language Evolution)
2018-01-27T15:36:02ZTUTORIAL: Representation Learning on Networks - TheWebConf 2018
https://www2018.thewebconf.org/program/tutorials-track/tutorial-225/
2018-01-27T15:18:02ZWORKSHOP: BigNet @ WWW 2018 Workshop on Learning Representations for Big Networks
https://aminer.org/bignet_www2018
2018-01-27T15:13:16ZKassav Zenith 89 - YouTube
https://www.youtube.com/watch?v=jk2rZTwesp4&index=1&list=RDjk2rZTwesp4
2017-01-07T01:20:32ZTopic modeling with network regularization
http://www.scopus.com/record/display.url?eid=2-s2.0-57349152312&origin=inward&txGid=7A2D7638D1A90FC842E0E0E1C688AFC1.kqQeWtawXauCyC8ghhRGJg
In this paper, we formally define the problem of topic modeling with network structure (TMN). We propose a novel solution to this problem, which regularizes a statistical topic model with a harmonic regularizer based on a graph structure in the data. The proposed method combines topic modeling and social network analysis, and leverages the power of both statistical topic models and discrete regularization. The output of this model can summarize well topics in text, map a topic onto the network, and discover topical communities.
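The harmonic regularizer mentioned above penalizes neighboring nodes in the network for having different topic distributions. A minimal sketch of that term (the topic distributions, edge list, and weights are hypothetical):

```python
import numpy as np

def harmonic_penalty(theta, edges, weights=None):
    """Graph-harmonic regularizer: weighted sum of squared differences
    between the topic distributions of the endpoints of each network edge."""
    if weights is None:
        weights = [1.0] * len(edges)
    return sum(w * float(np.sum((theta[u] - theta[v]) ** 2))
               for (u, v), w in zip(edges, weights))
```

Adding this penalty to the topic model's likelihood pushes connected authors/documents toward similar topic mixtures, which is what lets the model map topics onto the network.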
2014-04-23T10:54:41ZUsing SKOS vocabularies for improving Web Search
http://www2013.org/companion/p1253.pdf
2013-05-30T09:26:35ZDescribing Customizable Products on the Web of Data (LDOW 2013)
http://fr.slideshare.net/fpservant/ldow2013
2013-05-28T16:19:18ZBloco do Sargento Pimenta
http://blocodosargentopimenta.com.br/
2013-05-21T08:48:25ZRethinking the Web as a Personal Archive
http://www2013.org/proceedings/p749.pdf
2013-05-21T07:58:56ZTutorial: Linked Data Query Processing
http://db.uwaterloo.ca/LDQTut2013/
2013-05-18T22:32:02ZLive topic generation from event streams
http://fr.slideshare.net/troncy/live-topic-generation-from-event-streams
2013-05-17T11:09:05ZRio Scenarium
http://www.rioscenarium.com.br/
2013-05-15T16:26:37Z“We need people to translate the whole Web”: Luis Von Ahn | WWW 2013 – Rio de Janeiro, Brazil
http://www2013.org/2013/04/03/we-need-people-to-translate-the-whole-web-luis-von-ahn-a-visionary-of-human-computation/
2013-05-15T15:43:20ZBuilding a Web of Needs, Florian Kleedorfer
http://events.linkeddata.org/ldow2013/papers/ldow2013-paper-13.pdf
"Tell what you need, and give it a URI"
2013-05-14T18:37:26ZAn introduction to semantic web and linked data
http://www-sop.inria.fr/members/Fabien.Gandon/docs/www2013/WWW2013_Tutorial_WebSem_FabienGandon.pdf
2013-05-14T18:33:20ZDescribing Customizable Products on the Web of Data, LDOW 2013
http://events.linkeddata.org/ldow2013/papers/ldow2013-paper-11.pdf
2013-04-25T23:56:38ZLinked Data on the Web (LDOW2013) - Workshop at WWW2013, Rio de Janeiro, Brazil
http://events.linkeddata.org/ldow2013/
2013-03-02T16:44:11ZNotes from the WWW 2012 conference
http://www.bbc.co.uk/blogs/researchanddevelopment/2012/04/notes-from-the-www12-conferenc.shtml
2012-04-30T12:00:08ZFrom Linked Data to Linked Entities: a Migration Path - Giovanni Bartolomeo, Stefano Salsano
http://www2012.org/proceedings/companion/p115.pdf
2012-04-20T11:58:23ZLinked Data on the Web Workshop at WWW 2012 - semanticweb.com
http://semanticweb.com/linked-data-on-the-web-workshop-at-www-2012_b28328
2012-04-20T01:26:08ZLINDEN: Linking Named Entities with Knowledge Base via Semantic Knowledge
http://www2012.org/proceedings/proceedings/p449.pdf
Wei Shen, Jianyong Wang, Ping Luo, Min Wang
2012-04-19T14:27:44ZCounting beyond a Yottabyte, or how SPARQL 1.1 Property Paths will prevent adoption of the standard | Semantic Web Dog Food
http://data.semanticweb.org/conference/www/2012/paper/809/html
Best paper award at WWW 2012.
<a href="http://www.w3.org/blog/SW/2012/04/19/no-more-counting-beyond-a-yottabyte-or-why-the-w3c-process-works/">see also</a>
2012-04-19T10:32:00ZTracks & Accepted papers | www2012
http://www2012.wwwconference.org/program/accepted-papers/
2012-04-18T17:05:53ZFactorizing Yago: scalable machine learning for the sw
http://www2012.wwwconference.org/proceedings/proceedings/p271.pdf
RESCAL is a relational learning approach that can be applied to complete knowledge bases in the LOD cloud
<a href="http://www.cip.ifi.lmu.de/~nickel">code, etc. </a>
2012-04-18T16:17:13ZSemantic CMS and Wikis as Platforms for Linked Learning
http://ceur-ws.org/Vol-840/03-paper-26.pdf
2012-04-17T11:57:37ZTalis Aspire
http://www.talisaspire.com/
2012-04-17T11:51:35ZLinked Data on the Web Workshop, Lyon « Ivan’s private site
http://ivan-herman.name/2012/04/17/linked-data-on-the-web-workshop-lyon/
2012-04-17T11:22:29ZA Spectrometry of Linked Data
http://events.linkeddata.org/ldow2012/papers/ldow2012-paper-15.pdf
Entity mining is still a troublesome open problem. In past years many approaches allowed to automate the generation of equivalence links between references using schema matching or various heuristics based on the recognition of similar property values. In contrast, few of them considered the analysis of the network of equivalence links (“equivalence network”) as an indication of the likelihood and strength of the equivalence.
Could a URI reference (URIRef) be thought as exactly “attached” to its referent? Could it make sense to talk about entity “identifiers” or would it be better to talk about more ambiguous “references”, i.e., placeholders for any model that satisfies the formal semantics of the Semantic Web (Hayes)? Booth observes that the aforementioned question, which in the past has been often regarded as fundamental in the debate about identity on the Web, is relatively unimportant. As long as an entity, identified by whatsoever URIRef, is associated to at least one description containing machine understandable information, this information can be automatically processed and used by applications.
2012-04-16T16:23:34ZInteracting with the Web of Data through a Web of Inter-connected Lenses
http://events.linkeddata.org/ldow2012/papers/ldow2012-paper-12.pdf
2012-04-16T15:13:44ZQuerying the Web of Interlinked Datasets using VOID Descriptions
http://events.linkeddata.org/ldow2012/papers/ldow2012-paper-06.pdf
2012-04-16T11:43:41ZUsing read/write Linked Data for Application Integration – Towards a Linked Data Basic Profile
http://events.linkeddata.org/ldow2012/papers/ldow2012-paper-04.pdf
2012-04-16T10:26:54ZNERD meets NIF: Lifting NLP Extraction Results to the Linked Data Cloud
http://events.linkeddata.org/ldow2012/papers/ldow2012-paper-02.pdf
NERD, an API and a front-end user interface powered by an ontology to unify various named entity extractors<br/>
NIF: an NLP Interchange Format
2012-04-16T09:35:13ZSynote: Weaving Media Fragments and Linked Data
http://events.linkeddata.org/ldow2012/papers/ldow2012-paper-01.pdf
2012-04-16T09:28:44ZLinked Data on the Web (LDOW2012) - Workshop at WWW2012, Lyon, France
http://events.linkeddata.org/ldow2012/
2012-04-16T09:24:20ZAutomated interlinking of speech radio archives
http://events.linkeddata.org/ldow2012/papers/ldow2012-paper-11.pdf
2012-04-14T12:03:28ZEmerging Web Technologies, Facing the Future of Education — EducTice
http://eductice.ens-lyon.fr/EducTice/ressources/journees-scientifiques/EWFE2012/
2012-04-11T00:52:50ZEmerging Web Technologies, Facing the Future of Education — EducTice
http://eductice.inrp.fr/EducTice/ressources/journees-scientifiques/EWFE2012/
2011-12-22T22:54:36ZLinked Learning 2012
http://lile2012.linkededucation.org/
2011-12-17T02:21:19ZProceedings of the WWW2008 Workshop on Linked Data on the Web
http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-369/
2010-09-16T22:26:22ZWhat is Linked Data? » AI3:::Adaptive Information
http://www.mkbergman.com/?p=447
> The Linking Open Data (LOD) group, formed about 18 months ago to showcase Linked Data techniques, began with open data. As a parallel concept, to sever the idea that Linked Data only applies to open data, François-Paul Servant has specifically identified Linking Enterprise Data.
Zitgist Offers a Definition and Some Answers to Enterprise Questions
2008-06-24T20:26:42ZKeynotes at WWW 2008 - Geeking with Greg
http://glinden.blogspot.com/2008/04/keynotes-at-www-2008.html
The author is at Microsoft
2008-06-22T02:40:54ZWWW 2008 keynotes - Erik Selberg » Blog Archive » Themes from Beijing
http://selberg.org/2008/04/23/themes-from-beijing/
2008-06-22T02:39:20ZSir Tim Berners-Lee addresses WWW2008 in Beijing
http://blogs.zdnet.com/semantic-web/?p=131
2008-06-22T02:17:02ZCommercialising the Semantic Web (panel at www 2008)
http://blogs.zdnet.com/semantic-web/?p=132
2008-05-17T23:12:12ZMore News: Update from WWW2008
http://morenews.blogspot.com/2008/04/update-from-www2008.html
2008-05-16T15:51:21ZLinking Enterprise Data (slides)
http://events.linkeddata.org/ldow2008/slides/Servant-ldow2008-slides.pdf
Slides of my talk at LDOW2008
2008-05-08T14:30:09ZLinking Enterprise Data
http://events.linkeddata.org/ldow2008/papers/21-servant-linking-enterprise-data.pdf
My paper at LDOW2008
2008-05-08T14:21:36ZFlickr: Items tagged with ldow2008
http://www.flickr.com/photos/tags/ldow2008/
2008-05-06T21:28:42ZVirtualChaos - Nadeem’s blog » WWW2008: Day 2 - LDOW2008 Workshop
http://www.virtualchaos.co.uk/blog/2008/04/23/www2008-day-2-ldow2008-workshop/
2008-05-04T20:15:39ZLinked Data Trip Report - Part 1 (WWW2008)
http://www.openlinksw.com/dataspace/kidehen@openlinksw.com/weblog/kidehen@openlinksw.com's%20BLOG%20%5B127%5D/1343
2008-05-04T15:49:31Zswig-2008-04-22
http://tuukka.iki.fi/tmp/swig-2008-04-22.html
Includes notes about my talk at LDOW 2008
2008-05-04T14:55:43ZLinked Data and Information Architecture
http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling's%20Blog/1347
2008-05-04T14:49:00ZBoaB interactive - Web design, graphic design, multimedia, Content Management System (CMS)
http://www.boabinteractive.com.au/
BoaB is exploring a collaboration with leading Semantic Web organizations and natural resource management agencies such as the Great Barrier Reef Marine Park Authority (Climate Change) to develop cooperative information systems — systems that make sense of distributed data; built with an open software architecture; running on the global infrastructure of the web.
2008-05-04T14:27:10ZSitePoint Blogs » WWW2008 Beijing: Day 1 - Linked Data on the Web (LDOW 2008) Workshop
http://www.sitepoint.com/blogs/2008/04/22/www2008-beijing-day-1-linked-data-on-the-web-ldow-2008-workshop/
2008-05-04T13:52:58ZSemantic MediaWiki - Semantic-mediawiki.org
http://www.semantic-mediawiki.org/wiki/Semantic_MediaWiki
2008-04-25T11:05:14ZI Really _Don't_ Know: LDOW2008
http://www.dynamicorange.com/blog/archives/internet-technical/ldow2008.html
2008-04-24T14:10:09ZAttending WWW2008 : Alexandre Passant
http://apassant.net/blog/2008/04/22/attending-www2008/
2008-04-24T10:29:51ZLinked Data on the Web, WWW2008 | The Semantic Web | ZDNet.com
http://blogs.zdnet.com/semantic-web/?p=128
2008-04-24T09:48:13ZApex Data & Knowledge Management Lab
http://apex.sjtu.edu.cn/
The Apex Data & Knowledge Management Lab focuses on research and development in data and knowledge management. Current interests include next-generation search and retrieval, ontology theory and engineering, and the Semantic Web.
2008-04-23T14:42:51ZCOMM: Core Ontology on Multimedia
http://comm.semanticweb.org/
Semantic descriptions of non-textual media available on the web can be used to facilitate retrieval and presentation of media assets and documents containing them. While technologies for multimedia semantic descriptions already exist, there is as yet no formal description of a high quality multimedia ontology that is compatible with existing (semantic) web technologies. We propose COMM - A Core Ontology for Multimedia based on both the MPEG-7 standard and the DOLCE foundational ontology.
2008-04-23T14:16:37ZStructured Objects in OWL: Representation and Reasoning. In Proc. of the 17th Int. World Wide Web Conference (WWW 2008), Beijing
http://web.comlab.ox.ac.uk/oucl/work/boris.motik/publications/mgs08-structured-objects.pdf
Very good presentation at WWW 2008. Nominated for the best paper award<br/>
Abstract: Applications of semantic technologies often require the representation of and reasoning with structured objects—that is, objects composed of parts connected in complex ways. Although OWL is a general and powerful language, its class descriptions and axioms cannot be used to describe arbitrarily connected structures. An OWL representation of structured objects can thus be underconstrained, which reduces the inferences that can be drawn and causes performance problems in reasoning. To address these problems, we extend OWL with description graphs, which allow for the description of structured objects in a simple and precise way. To represent conditional aspects of the domain, we also allow for SWRL-like rules over description graphs. Based on an observation about the nature of structured objects, we ensure decidability of our formalism. We also present a hypertableau-based decision procedure, which we implemented in the HermiT reasoner. To evaluate its performance, we have extracted description graphs from the GALEN and FMA ontologies, classified them successfully, and even detected a modeling error in GALEN.
2008-04-23T11:14:12ZBen Adida - RDFa, slides from the workshop at WWW2008
http://ben.adida.net/presentations/www2008-rdfa/
2008-04-21T15:37:42ZGreat Hall of the People - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Great_Hall_of_the_People
2008-04-16T21:25:47ZWWW2008 Conference: program
http://www2008.org/program/program-overview.html
2008-04-14T20:50:43ZAn Entity Name System for Linking Semantic Web Data
http://events.linkeddata.org/ldow2008/papers/23-bouquet-stoermer-entity-name-system.pdf
The Semantic Web should provide a global space for the seamless integration of small knowledge bases (or local “semantic webs”) into a global, open, decentralized and scalable knowledge space.<br/>
In this paper, we will try to defend the view that the practical realization of the grand vision of the Semantic Web as a huge graph of interlinked data would be much easier and faster if we could count on a service which, by analogy with the DNS, we call an Entity Name System (ENS), namely a service which stores and makes available for reuse URIs for any type of entity in a fully decentralized and open knowledge publication space.
2008-04-05T00:28:10ZMeaning Of A Tag: A Collaborative Approach to Bridge the Gap Between Tagging and Linked Data
http://events.linkeddata.org/ldow2008/papers/22-passant-laublet-meaning-of-a-tag.pdf
2008-03-30T21:32:39ZWeaving SIOC into the Web of Linked Data
http://events.linkeddata.org/ldow2008/papers/01-bojars-passant-weaving-sioc.pdf
2008-03-30T20:23:15ZLinked Data on the Web (LDOW2008) - Workshop at WWW2008, Beijing, China
http://events.linkeddata.org/ldow2008/
2007-12-17T12:12:34ZFalcons
http://iws.seu.edu.cn/services/falcons/objectsearch/index.jsp
Falcons is a keyword-based search engine for Semantic Web entities. It enables searching concepts guided by recommended vocabularies, searching objects guided by recommended concepts, and browsing entity summarization via concept spaces.
2007-11-09T10:00:51Z