
Which of your semantic web/linked data/smart data expectations for 2016 were not fulfilled?

The outcome was pretty much in line with the views I had shared in previous years: slow
progress in broader industry adoption of Semantic Web standards, continued struggle with
the technical challenges hindering linked data usage, and faster adoption of semantic
techniques (not necessarily using Semantic Web standards), especially involving the
building and use of knowledge graphs. One key challenge that continues to hinder more
rapid adoption of the semantic web and linked data is the lack of robust yet easy-to-use
tools for large and diverse data, tools that do for semantics what Weka did for machine
learning.

What are your top three expectations for semantic web/linked data/smart data
events, milestones, and developments in 2017?
And why?

Given the tremendous success of machine learning and bottom-up data processing
(emphasizing learning from data), I expect to see increasing emphasis on developing
knowledge graphs and using them for top-down processing (emphasizing the use of models
or background knowledge), or for middle-out processing in conjunction with bottom-up
processing. While everyone is using DBpedia and a few other high-quality, broad-based
knowledge bases, or, in domain-specific applications such as health, well-curated
knowledge bases like UMLS, I see more and more companies investing in knowledge graphs
of their own as intellectual property. A good example is the Google Knowledge Graph,
which has grown from a fairly modest size based on Freebase to one that is much larger.
However, this has required significant human involvement, and not many companies have
been able to put processes and tools in place to reduce the human effort in knowledge
graph development and maintenance.

I expect we will make progress in this direction, for example by extracting the right
subset of a bigger knowledge graph for a particular purpose. Still, the pace of
progress will be moderate (as discussed in the past, one reason is the shortage of
personnel skilled in Semantic Web and knowledge-enhanced computing). A broad variety
of tools and applications, including search, chatbots, and knowledge discovery, are
waiting to exploit such purpose-built knowledge graphs in conjunction with machine
learning and NLP.
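The idea of extracting a purpose-specific subset of a bigger knowledge graph can be sketched very simply: starting from a few seed entities relevant to the task, expand outward a bounded number of hops and keep only the triples encountered. The entity and relation names below are illustrative, not drawn from any real knowledge graph.

```python
# Hypothetical sketch: extract a purpose-specific subset of a larger
# knowledge graph by expanding a bounded number of hops from seed entities.
# The graph is modeled as a plain set of (subject, predicate, object) triples.
from collections import deque

def extract_subgraph(triples, seeds, max_hops=2):
    """Keep every triple reachable within max_hops of the seed entities."""
    # Index triples by the entities they mention, for fast neighborhood lookup.
    by_entity = {}
    for t in triples:
        s, _, o = t
        by_entity.setdefault(s, []).append(t)
        by_entity.setdefault(o, []).append(t)

    kept = set()
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_hops:
            continue
        for t in by_entity.get(node, []):
            kept.add(t)
            for neighbor in (t[0], t[2]):
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append((neighbor, depth + 1))
    return kept

triples = {
    ("Asthma", "isA", "RespiratoryDisease"),
    ("Asthma", "treatedBy", "Albuterol"),
    ("Albuterol", "isA", "Bronchodilator"),
    ("Paris", "capitalOf", "France"),  # unrelated to the asthma use case
}
subset = extract_subgraph(triples, seeds={"Asthma"}, max_hops=2)
# The asthma-related triples are kept; the unrelated one is dropped.
```

A real system would of course work over millions of triples with relation-type filters and relevance scoring, but the bounded-expansion pattern is the same.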

The second expectation is deeper and broader information extraction from a wider
variety of textual and multimodal content, exploiting semantics, especially
knowledge-enhanced machine learning and NLP. First, the data needed to serve advanced
applications will increasingly come from complementary sources and in different
modalities (e.g., a personalized semantic mHealth approach to asthma management).
Second, in addition to extracting entities, relationships of limited types (i.e.,
types known a priori, for which learning solutions can be developed), and sentiment
and emotion, we will develop deeper understanding through more types of subjectivity
and semantic or domain-specific extraction. As an example of the latter, for clinical
text we will identify more phenotype-specific relationships, the intent behind a
clinician's or consumer's search for health content, the severity of a disease, and
so on.
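In its simplest form, knowledge-enhanced extraction grounds spans of text in a curated domain vocabulary so that the output carries domain-specific types such as severity, rather than just generic entities. The tiny hand-made vocabulary below is a stand-in for a curated resource like a UMLS subset; all terms and types are hypothetical.

```python
# Illustrative sketch of knowledge-enhanced extraction from clinical text:
# a tiny hand-made vocabulary (standing in for a curated resource such as
# a UMLS subset) tags phrases with domain-specific types like severity.
DOMAIN_VOCAB = {
    "wheezing": "symptom",
    "shortness of breath": "symptom",
    "severe": "severity",
    "mild": "severity",
    "albuterol": "medication",
}

def annotate(text):
    """Return (phrase, type) pairs found in the text, longest phrases first."""
    text_lower = text.lower()
    found = []
    for phrase in sorted(DOMAIN_VOCAB, key=len, reverse=True):
        if phrase in text_lower:
            found.append((phrase, DOMAIN_VOCAB[phrase]))
    return found

annotations = annotate("Patient reports severe wheezing; prescribed albuterol.")
```

In practice this dictionary lookup would be one feature among many in a learned extractor; the point is that the background knowledge supplies types the raw text alone cannot.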

Finally, while the semantic web research community has given a lot of attention to OWL
and its variants, what we need is a richer representation at the semantic data level:
can everything be represented as triples? We need better ways to represent and compute
with provenance, complex representations (e.g., nested statements), and context, as
required by real-world applications. One bright spot is the work on the singleton
property; I expect further progress in the near term, followed by broader adoption.
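The singleton-property idea can be sketched in a few lines: each statement is asserted with its own unique property instance, which is tied back to the generic property and then serves as the subject for ordinary triples carrying provenance and context. The entities and context keys below are illustrative, chosen only to show the pattern.

```python
# Minimal sketch of the singleton-property representation: every statement
# gets a unique property instance, so provenance and context attach to it
# as plain triples (no reification blank nodes). Names are illustrative.
from itertools import count

_ids = count(1)

def assert_with_context(graph, s, p, o, **context):
    """Add (s, p#n, o) plus meta-triples about the singleton property p#n."""
    sp = f"{p}#{next(_ids)}"                   # unique singleton property
    graph.add((s, sp, o))                      # the statement itself
    graph.add((sp, "singletonPropertyOf", p))  # tie it back to the generic p
    for key, value in context.items():         # provenance, validity time, etc.
        graph.add((sp, key, value))
    return sp

g = set()
sp = assert_with_context(
    g, "BobDylan", "isMarriedTo", "SaraLownds",
    validFrom="1965", validTo="1977", source="Wikipedia",
)
# g now holds the base triple plus meta-triples about isMarriedTo#1.
```

Because the singleton property is itself a resource, nested statements and context stay within plain triple machinery, which is exactly the appeal over classic RDF reification.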
