TDT4215: Web-intelligence

# Curriculum

As of spring 2016 the curriculum consists of:

- __A Semantic Web Primer__, chapters 1-5, 174 pages
- __Sentiment Analysis and Opinion Mining__, chapters 1-5, 90 pages
- __Recommender Systems__, chapters 1-3 and 7, 103 pages
- _Kreutzer & Witte:_ [Opinion Mining Using SentiWordNet](http://stp.lingfil.uu.se/~santinim/sais/Ass1_Essays/Neele_Julia_SentiWordNet_V01.pdf)
- _Turney:_ [Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews](http://www.aclweb.org/anthology/P02-1053.pdf)
- _Liu, Dolan, & Pedersen:_ [Personalized News Recommendation Based on Click Behavior](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.308.3087&rep=rep1&type=pdf)

# Semantic Web

## The Semantic Web Vision / Motivation

The Semantic Web is an extension of the ordinary Web that promotes common data formats and exchange protocols. The point of the Semantic Web is to make the web machine-readable. It is not a matter of artificial intelligence: where the ultimate goal of AI is to build an intelligent agent exhibiting human-level intelligence (or higher), the goal of the Semantic Web is to assist human users in their day-to-day online activities.

The Semantic Web uses languages specifically designed for data:

- Resource Description Framework (RDF)
- Web Ontology Language (OWL)
- Extensible Markup Language (XML)

Together these languages can describe arbitrary things, such as people, meetings, or airplane parts.

## Ontology

An ontology is a formal and explicit definition of types, properties, functions, and relationships between entities, which results in an abstract model of some phenomenon in the world. Typically an ontology consists of classes arranged in a hierarchy. Several languages can describe an ontology, but the semantics they achieve vary.

## RDF

> Resource Description Framework

RDF is a data model used to define ontologies. The model is domain-neutral and application-neutral, and it supports internationalization. RDF is an abstract model, and therefore not tied to any particular syntax. XML is often used to encode RDF data, but it is not a necessity.

### RDF model

The core of RDF is to describe resources, which are essentially "things" in the world that can be described in words, by relations to other resources, and so on. RDF consists of many triples that together describe something about a particular resource. A triple consists of _(Subject, Predicate, Object)_ and is called a __statement__.

The __subject__ is the resource, the thing we would like to talk about. Examples are authors, apartments, people, or hotels. Every resource has a URI _(Uniform Resource Identifier)_. This could be an ISBN number, a URL, coordinates, etc.

The __predicate__, also called a property, states the relation between the subject and the object. For example: Anna is a _friend of_ Bruce, Harry Potter and the Philosopher's Stone was _written by_ J.K. Rowling, or Oslo is _located in_ Norway.

The __object__ is another resource, typically a value.

Statements are often modelled as a graph, and an example could be: _(lit:J.K.Rowling, lit:wrote, lit:HarryPotter)_.

![A statement graph](http://s33.postimg.org/lwi8whv5b/Skjermbilde_2016_06_04_18_08_02.png)
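As a minimal illustration, here is a sketch of building this statement with the Python rdflib library; the lit: namespace URI is invented purely for this example.

```python
# A minimal sketch of the (subject, predicate, object) statement above,
# built with rdflib. The lit: namespace URI is hypothetical.
from rdflib import Graph, Namespace

LIT = Namespace("http://example.org/literature#")  # invented namespace

g = Graph()
g.bind("lit", LIT)

# One statement: (lit:JKRowling, lit:wrote, lit:HarryPotter)
g.add((LIT.JKRowling, LIT.wrote, LIT.HarryPotter))

# Serialize the graph as Turtle to see the triple in text form
print(g.serialize(format="turtle"))
```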
Below is a more complex example, where you can see that _http://www.w3.org/People/EM/contact#me_ has the property _fullName_, defined in _http://www.w3.org/2000/10/swap/pim/contact_, which is set to the value _Eric Miller_.

![Example of RDF graph](https://upload.wikimedia.org/wikipedia/commons/5/58/Rdf_graph_for_Eric_Miller.png)

### Serialization formats

The main serialization formats are N-Triples, Turtle, RDF/XML, and RDF/JSON.

N-Triples
: A plain-text format for encoding an RDF graph.

Turtle
: Can only serialize valid RDF graphs. It is generally recognized as more readable and easier to edit manually than its XML counterpart.

RDF/XML
: Expresses an RDF graph as an XML document.

RDF/JSON
: Expresses an RDF graph as a JSON document.

## RDFS

> Resource Description Framework Schema

While the RDF language lets users describe resources using their own vocabulary, it does not specify semantics for any domain. The user must define what those vocabularies mean in terms of a set of basic, domain-independent structures defined by RDF Schema. RDF Schema is a primitive ontology language, and its key concepts are:

- Classes and subclasses
- Properties and sub-properties
- Relations
- Domain and range restrictions

### Classes

A class is a set of elements; it defines a type of object. Individual objects are instances of a class. Classes can be structured hierarchically with sub- and superclasses.

### Properties

Properties are restrictions on classes: objects of a class must obey its properties. Properties are globally defined and can be applied to several classes. Properties can also be structured hierarchically, in the same way as classes.
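A small sketch of how these RDFS concepts look in Turtle, parsed with rdflib; the ex: vocabulary is invented for illustration.

```python
# A minimal sketch, assuming an invented ex: vocabulary, of the RDFS
# concepts above: rdfs:subClassOf builds the class hierarchy, while
# rdfs:domain and rdfs:range restrict a property.
from rdflib import Graph

turtle_data = """
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/schema#> .

ex:Animal a rdfs:Class .
ex:Person a rdfs:Class .
ex:Dog    a rdfs:Class ;
          rdfs:subClassOf ex:Animal .

ex:owns a rdf:Property ;
    rdfs:domain ex:Person ;
    rdfs:range  ex:Animal .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# The same graph can be re-serialized in another format, e.g. N-Triples
print(g.serialize(format="nt"))
```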
## SPARQL

> SPARQL Protocol and RDF Query Language

RDF is used to represent knowledge; SPARQL is used to query that representation. The SPARQL infrastructure is a __triple store__ (or graph store), a database used to hold the RDF representation. It provides an endpoint for queries. Queries follow a syntax similar to Turtle, and SPARQL selects information by matching __graph patterns__.

Variables are written with a question mark prefix (?var), SELECT determines which variables are of interest, and the pattern to be matched is placed inside WHERE { ... }.
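A minimal sketch of the SELECT/WHERE pattern, run with rdflib over the toy lit: graph built earlier; all names are illustrative.

```python
# A minimal SELECT/WHERE sketch over the invented lit: graph.
# ?author and ?book are variables; WHERE holds the graph pattern.
from rdflib import Graph, Namespace

LIT = Namespace("http://example.org/literature#")
g = Graph()
g.add((LIT.JKRowling, LIT.wrote, LIT.HarryPotter))

query = """
PREFIX lit: <http://example.org/literature#>
SELECT ?author ?book
WHERE { ?author lit:wrote ?book . }
"""

for row in g.query(query):
    print(row.author, row.book)
```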
SPARQL provides facilities for __filtering__ based on both numeric and string comparison, e.g. FILTER (?bedroom > 2).

UNION and OPTIONAL are constructs that let SPARQL deal more easily with __open-world__ data, where there is always more information to be known. This is a consequence of the principle that anyone can make statements about any resource. UNION lets you combine the results of two patterns, e.g. { ... } UNION { ... }. OPTIONAL adds bindings to a result if the optional part matches; if it does not, it creates no bindings but continues without eliminating the other results.

There is a series of possibilities for organizing the result set, among others: LIMIT, DISTINCT, ORDER BY, DESC, ASC, COUNT, SUM, MIN, MAX, AVG, and GROUP BY. UPDATE provides mechanisms for updating and deleting information in triple stores. ASK (instead of SELECT) returns true/false instead of the result set, and CONSTRUCT (instead of SELECT) returns a subgraph instead of a list of results, which can be used to construct new graphs.
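A hedged sketch of FILTER and OPTIONAL in action, using an invented ex: housing vocabulary: the FILTER keeps apartments with more than two bedrooms, and OPTIONAL binds ?balcony when present without discarding rows where it is missing.

```python
# FILTER and OPTIONAL over an invented ex: housing vocabulary.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/housing#")
g = Graph()
g.add((EX.apt1, EX.bedrooms, Literal(3)))
g.add((EX.apt1, EX.balcony, Literal(True)))
g.add((EX.apt2, EX.bedrooms, Literal(4)))

query = """
PREFIX ex: <http://example.org/housing#>
SELECT ?apt ?balcony
WHERE {
  ?apt ex:bedrooms ?n .
  FILTER (?n > 2)
  OPTIONAL { ?apt ex:balcony ?balcony . }
}
"""

for row in g.query(query):
    print(row.apt, row.balcony)  # ?balcony stays unbound (None) for apt2
```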
Since schema information is itself represented in RDF, SPARQL can query it as well. This allows one both to retrieve information and to query the semantics of that information.

### Example query

_Returns all sports from the ontology whose name contains the text "ball":_

```
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX dbpediaowl: <http://dbpedia.org/ontology/>
SELECT DISTINCT ?sport
WHERE {
  ?sport rdf:type dbpediaowl:Sport .
  FILTER regex(str(?sport), "ball", "i") .
}
```

An online endpoint for queries can be found [here](http://dbpedia.org/sparql).
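One way to run this query programmatically is the SPARQLWrapper Python library; a sketch, assuming the library is installed and the public endpoint is reachable.

```python
# Running the example query against the public DBpedia endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX dbpediaowl: <http://dbpedia.org/ontology/>
SELECT DISTINCT ?sport
WHERE {
  ?sport rdf:type dbpediaowl:Sport .
  FILTER regex(str(?sport), "ball", "i") .
}
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["sport"]["value"])
```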
## OWL

> Web Ontology Language

RDF and RDF Schema are limited to binary ground predicates and class/property hierarchies. We sometimes need to express more advanced, more expressive knowledge. The requirements for such a language are:

- a well-defined syntax
- a formal semantics
- sufficient expressive power
- convenience of expression
- efficient reasoning support

More specifically, __automatic reasoning support__, which is used to:

- check the consistency of the ontology
- check for unintended relations between classes
- check for unintended classifications of instances

There is a trade-off between expressive power and efficient reasoning support. The richer the logical formalism, the less efficient the reasoning support, often crossing the border to __undecidability__: reasoning in such a logic is not guaranteed to terminate. Because this trade-off exists, two approaches to OWL have been made: OWL Full and OWL DL.

### OWL Full

> RDF-based semantics

This approach uses all the OWL primitives. Its strength is that any legal RDF document is also a legal OWL Full document, and any valid RDF Schema inference is also a valid OWL Full conclusion. The problem is that the language has become so powerful that it is undecidable; there is no complete (efficient) reasoning support.

### OWL DL

> Direct semantics

DL stands for Description Logic, a subset of predicate logic (first-order logic). This approach offers efficient reasoning support and can make use of a wide range of existing reasoners. The downside is that it loses full compatibility with RDF: an RDF document must be extended in some way to be a valid OWL DL document. However, every legal OWL DL document is a legal RDF document.

Three __profiles__, OWL EL, OWL QL, and OWL RL, are syntactic subsets with desirable computational properties. In particular, OWL 2 RL is implementable using rule-based technology and has become the de facto standard for expressive reasoning on the Semantic Web.

### The OWL language

[#] TODO: constructs, classes, properties, cardinality, axioms

OWL uses an extension of RDF and RDFS. It can be hard for humans to read, but most ontology engineers use a specialized ontology development tool, such as [Protégé](http://protege.stanford.edu). There are four standard __syntaxes__ for OWL:

RDF/XML
: OWL can be expressed using all valid RDF syntaxes.

Functional-style syntax
: Used in the language specification document. A compact and readable syntax.

OWL/XML
: An XML syntax that allows the use of off-the-shelf XML authoring tools. It does not follow RDF conventions, but maps closely onto the functional-style syntax.

Manchester syntax
: As human-readable as possible. Used in Protégé.
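Pending the TODO above, here is a hedged taste of a few OWL constructs (class, object property, and a cardinality restriction) in Turtle, parsed with rdflib; the ex: university vocabulary is invented.

```python
# A few OWL constructs in Turtle, parsed with rdflib.
# The ex: vocabulary is invented for illustration.
from rdflib import Graph

owl_data = """
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/uni#> .

ex:Person a owl:Class .
ex:Course a owl:Class .

ex:teaches a owl:ObjectProperty ;
    rdfs:domain ex:Person ;
    rdfs:range  ex:Course .

# A lecturer is a person who teaches at least one course
ex:Lecturer a owl:Class ;
    rdfs:subClassOf ex:Person ;
    rdfs:subClassOf [
        a owl:Restriction ;
        owl:onProperty ex:teaches ;
        owl:minCardinality "1"^^xsd:nonNegativeInteger
    ] .
"""

g = Graph()
g.parse(data=owl_data, format="turtle")
print(len(g), "triples in the ontology")
```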
With "explosions" of opinion data in social media and similar applications it is easy to gather data for use in decisionmaking. It is not always necessary to create questionnaires to get peoples opinions. It can, however, be difficult to extract meaning from long blogposts and arcticles, and make a summary. This is why we need _Automated Sentiment analysis systems_. There are two types of evaluation: - Regular opinions ("This camera is great!") - Comparisons ("This camera is better than this other camera.") The basic components of an opinion are: - Opinion holder (Whoever holds the opinion) - Entity (The entity on which the opinion is expressed) - Opinion (The actual opinion being held) An entity can be divided into subcomponents, like a camera having both a lens and a battery. These subcomponents are called _aspects_. ## Sentiment analysis models ### Model of an entity ### Model of a review ### Model of an opinion ### Different Levels of Analysis Research has mainly been done on three different levels.
#### Classes
#### Properties
#### Property characteristics

#### Cardinality

#### Individuals

#### Others

### Semantics and Reasoning

#### Description Logics

## Ontology guidelines

# Sentiment analysis

Sentiment analysis is about searching through data to extract people's opinions about something. Sentiment analysis is also called opinion mining, opinion extraction, sentiment mining, subjectivity analysis, affect analysis, emotion analysis, or review mining.

## Basics of Sentiment analysis

Sentiment analysis is important because opinions drive decisions. People seek other people's opinions when buying something, so it is useful to be able to mine opinions on a product, service, or topic. Businesses and organizations are interested in knowing what people think of their products, and individuals are interested in what others think of a product or service. We would like to be able to do a search like "How good is the iPhone?" and get a general opinion of how good it is.

With the "explosion" of opinion data in social media and similar applications, it is easy to gather data for use in decision-making; it is not always necessary to create questionnaires to get people's opinions. It can, however, be difficult to extract meaning from long blog posts and articles and produce a summary. This is why we need _automated sentiment analysis systems_.

There are two types of evaluation:

- Regular opinions ("This camera is great!")
- Comparisons ("This camera is better than this other camera.")

The basic components of an opinion are:

- Opinion holder (whoever holds the opinion)
- Entity (the entity on which the opinion is expressed)
- Opinion (the actual opinion being held)

An entity can be divided into subcomponents, like a camera having both a lens and a battery. These subcomponents are called _aspects_.

## Sentiment analysis models

### Model of an entity

### Model of a review

### Model of an opinion

### Different Levels of Analysis

Research has mainly been done on three different levels.

#### Document-level

Document-level sentiment classification works with entire documents and tries to figure out whether the document has a positive or negative view of the subject in question. An example is to look at many reviews and determine what the authors mean: whether they are positive, negative, or neutral.

#### Sentence-level

Sentence-level sentiment classification works with opinions on the sentence level, figuring out whether a sentence is positive, negative, or neutral towards the relevant subject. This is closely related to _subjectivity classification_, which distinguishes between objective and subjective sentences.
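A toy sketch of sentence-level classification with a tiny, hand-made sentiment lexicon; real systems use large lexicons such as SentiWordNet and must handle negation, context, and sarcasm, while this only counts word hits.

```python
# Toy lexicon-based sentence-level classifier. The lexicon is invented
# and far too small for real use; this only counts positive/negative hits.
import re

POSITIVE = {"good", "great", "excellent", "amazing"}
NEGATIVE = {"bad", "terrible", "awful", "short"}

def classify_sentence(sentence: str) -> str:
    words = re.findall(r"[a-z']+", sentence.lower())
    # Each positive word adds one, each negative word subtracts one
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_sentence("This camera is great!"))       # positive
print(classify_sentence("The battery life is short."))  # negative
```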
#### Entity and aspect-level

Entity and aspect-level sentiment classification aims to discover opinions about an entity or its aspects. For instance, in the sentence "The iPhone's call quality is good, but its battery life is short", two aspects are evaluated: call quality and battery life. The entity is the iPhone. The opinion on the iPhone's call quality is positive, but the opinion on the battery life is negative. It is hard to find and classify these aspects, as there are many ways to express positive and negative opinions: metaphors, comparisons, and so on.

## Sentiment Lexicon and Its Issues

Some words can be identified as either positive or negative immediately, like good, bad, excellent, terrible, and so on. There are also sub-sentences/phrases that can be identified as positive or negative. A list of such words or phrases is called a sentiment lexicon. A lexicon alone is not enough, however. Below are several known issues:

1. Words and phrases can have different meanings in different contexts. For example, "suck" usually has a negative meaning, but can be positive in the right context: "This vacuum cleaner really sucks".
2. Sentences containing sentiment words sometimes do not express a sentiment. For example, in "If this product X contains a _great_ feature Y, I'll buy it", _great_ does not express a positive or negative opinion about product X.
3. Sarcastic sentences are hard to deal with. They are mostly found in political discussions, e.g. "What an awesome product! It stopped working in two days".
4. Sentences without sentiment words can also express opinions. "This laptop consumes a lot of power" reflects a partially negative opinion about the laptop, but the sentence is also objective, as it states a fact.

## Opinion lexicon generation

## Aspect-based opinion mining

## Opinion mining of comparative sentences

## Opinion spam detection

## Unsupervised search-based approach

## Unsupervised lexicon-based approach

# Recommender systems

## Problem domain

Recommender systems are used to match users with items. This is done to avoid information overload and to assist sales by guiding, advising, and persuading individuals when they are looking to buy a product or a service. Recommender systems elicit the interests and preferences of an individual and make recommendations, which has the potential to support and improve the quality of a customer's decisions. Different recommender systems require different designs and paradigms, based on what data can be used, implicit and explicit user feedback, and domain characteristics.

## Purpose and success criteria

The purpose depends on the domain and the perspective taken; no holistic evaluation scenario exists.

Retrieval perspective:

- Reduce search costs
- Provide "correct" proposals
- Users know in advance what they want

Recommendation perspective:

- Serendipity: identify items from the Long Tail
- Users did not know about their existence

Prediction perspective:

- Predict to what degree users like an item
- The most popular evaluation scenario in research

Interaction perspective:

- Give users a "good feeling"
- Educate users about the product domain
- Convince/persuade users: explain

Finally, the conversion perspective:

- Commercial situations
- Increase "hit", "clickthrough", and "lookers to bookers" rates
- Optimize sales margins and profit
## Paradigms of recommender systems

There are many different ways to design a recommender system. What they have in common is that they take input into a _recommendation component_ and output a _recommendation list_. What varies between the paradigms is what goes into the recommendation component. One of the inputs used to get personalized recommendations is a user profile together with contextual parameters.

_Collaborative recommender systems_, or "tell me what is popular among my peers", use community data. _Content-based recommender systems_, or "show me more of what I've previously liked", use the features of products. _Knowledge-based recommender systems_, or "tell me what fits my needs", use both the features of products and knowledge models to predict what the user needs. _Hybrid recommender systems_ use a combination of the above and/or compositions of different mechanics. A small collaborative-filtering sketch follows the section stubs below.

## Collaborative filtering

## Content-based filtering

## Semantic Vector space model
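As referenced above, here is a hedged sketch of the collaborative-filtering idea ("what is popular among my peers"): find the user most similar to the target via cosine similarity over ratings, then recommend that peer's items the target has not yet rated. The rating matrix is invented toy data.

```python
# User-based collaborative filtering on an invented toy rating matrix.
import math

ratings = {
    "alice": {"item1": 5, "item2": 3, "item3": 4},
    "bob":   {"item1": 4, "item2": 3, "item3": 5, "item4": 4},
    "carol": {"item1": 1, "item2": 5},
}

def cosine(u: dict, v: dict) -> float:
    # Cosine similarity over the items both users have rated
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def recommend(target: str) -> list:
    # Pick the most similar peer ...
    peers = [(cosine(ratings[target], ratings[u]), u)
             for u in ratings if u != target]
    _, best = max(peers)
    # ... and suggest their items the target has not rated, best-rated first
    unseen = {i: r for i, r in ratings[best].items()
              if i not in ratings[target]}
    return sorted(unseen, key=unseen.get, reverse=True)

print(recommend("alice"))  # ['item4'] via the most similar peer (bob)
```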