Invited Speakers

Gabriella Pasi

Monday, 13/06/2016-14:00-15:00

On the issue of aggregation in Information Retrieval.

This lecture will address the issue of defining flexible approaches to Information Retrieval (IR) through the use of aggregation operators in various phases of the retrieval process. In particular, these approaches rely on interpreting Information Retrieval as a Multi-Criteria Decision Making (MCDM) problem, from various perspectives. The first, most straightforward perspective interprets the overall IR process as an MCDM process aimed at selecting the best alternatives (documents) based on an assessment of their performance on multiple criteria (the keywords specified in a user's query). A second, strongly related perspective sees the assessment of a document's overall relevance to a query (the document still being an alternative) as the process of evaluating its performance on several relevance dimensions (e.g., topicality, novelty, recency), which in this case represent the criteria to be aggregated. Another process that may benefit from appropriate aggregation operators is indexing, when applied to structured documents. Metasearch constitutes a further interesting task, one that can be seen as an instance of a Multi-Expert Decision Making (MEDM) problem and that also relies strongly on an appropriate choice of aggregation operator. In this task, a user query is evaluated separately by different search engines, each providing its own relevance assessment of the considered documents. Metasearch then merges the ranked lists generated by the various search engines (the experts) in response to the query, with the aim of providing a single, consensual ranked list of results. A particularly interesting aspect implied by the above interpretations of various phases of the IR process is that the choice of different aggregation operators can produce different results. In other words, the semantics of aggregation implies an interpretation of the affected process.
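As a concrete illustration of the metasearch task described above, the following sketch merges the ranked lists of several engines with Borda-count fusion, one classic choice of aggregation operator (the engine result lists and document names are invented for illustration):

```python
# Borda-count fusion: each engine awards n-1, n-2, ..., 0 points to
# its ranked results; documents are re-ranked by total points.
# The engines and documents below are hypothetical examples.
def borda_fuse(rankings):
    """Merge ranked lists from several engines into one consensual list."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0) + (n - 1 - pos)
    return sorted(scores, key=scores.get, reverse=True)

engines = [
    ["d1", "d2", "d3"],   # engine A's ranking for the query
    ["d2", "d1", "d3"],   # engine B's ranking
    ["d2", "d3", "d1"],   # engine C's ranking
]
print(borda_fuse(engines))  # d2 wins: two engines rank it first
```

Other fusion operators (e.g., CombSUM over normalized scores, or a weighted operator reflecting trust in each engine) would in general yield different consensual lists, which is precisely the point of the lecture.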
Despite the potential impact of aggregation on the whole IR process, this aspect has not received proper attention in the literature. Only recently have approaches appeared that demonstrate the importance of this issue and its potential impact on the search process. This lecture aims to briefly review the main contributions in the literature that have made use of aggregation operators in Information Retrieval.
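To make concrete the claim that different aggregation operators can produce different results, here is a minimal sketch (the per-term relevance scores are hypothetical) comparing a conjunctive minimum, a compensatory mean, and an Ordered Weighted Averaging (OWA) operator on the same per-criterion scores:

```python
# Each document has one relevance score per query term (the criteria).
# The scores below are invented for illustration.
def owa(scores, weights):
    """Ordered Weighted Averaging: weights apply to scores sorted descending."""
    return sum(w * s for w, s in zip(weights, sorted(scores, reverse=True)))

docs = {
    "d1": [0.9, 0.1, 0.8],   # strong on two terms, very weak on one
    "d2": [0.5, 0.5, 0.5],   # uniformly mediocre
    "d3": [0.7, 0.6, 0.2],
}

aggregators = {
    "min (conjunctive)":   lambda s: min(s),
    "mean (compensatory)": lambda s: sum(s) / len(s),
    "OWA (orness-biased)": lambda s: owa(s, [0.6, 0.3, 0.1]),
}

for name, agg in aggregators.items():
    ranking = sorted(docs, key=lambda d: agg(docs[d]), reverse=True)
    print(name, ranking)
```

The conjunctive minimum ranks d2 first (no weak term), while the orness-biased OWA rewards d1's strong terms: each operator encodes a different semantics of "satisfying the query".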

Gabriella Pasi has been Full Professor at the University of Milano-Bicocca, Department of Informatics, Systems and Communication (DISCo), since March 1, 2015.
Since 2005 she has led the Information Retrieval Laboratory (IR LAB) in the same department.
To learn more about her, you may visit her website.


Fabien Gandon

Tuesday, 14/06/2016-9:00-10:00

One Web of pages, One Web of people, One Web of services, One Web of data, One Web of things… and with the Semantic Web bind them.

In the well-known acronym URL, the “R” stands for Resource, meaning that the Web weaves links between things far more varied than just pages: data, services, people, things, concepts, etc. The Web has become a virtual place where people and software interact in mixed communities. They also interact with connected things, and they use and contribute to more or less structured data. To maintain one Web where everything is connected, we need means to add and maintain links and exchanges between these different facets. In particular, this requires models and algorithms to reconcile the formal semantics of computer science (e.g., logics, ontologies, typing systems), on which the Web architecture is built, with the soft semantics of people (e.g., posts, tags, status updates), on which the Web content is built [1]. We will see several contributions of the Wimmics team [2] to these problems: multidisciplinary approaches [3] to model and analyse Web information systems, their communities of users, and their interactions; and the formalization of, and reasoning on, these models using graph-based knowledge representations from the Semantic Web, in order to propose new analysis tools and indicators and to support new functionalities and better management.
[1] Fabien Gandon, Michel Buffa, Elena Cabrio, Olivier Corby, Catherine Faron-Zucker, et al. Challenges in Bridging Social Semantics and Formal Semantics on the Web. In: Hammoudi, S., Cordeiro, J., Maciaszek, L.A., Filipe, J. (eds.) Enterprise Information Systems, 15th International Conference, ICEIS 2013, Angers, France, July 2013. Lecture Notes in Business Information Processing, vol. 190, pp. 3–15. Springer, 2014.
[2] http://wimmics.inria.fr
[3] Fabien Gandon. The three 'W' of the World Wide Web call for the three 'M' of a Massively Multidisciplinary Methodology. In: Monfort, V., Krempels, K.-H. (eds.) Web Information Systems and Technologies, 10th International Conference, WEBIST 2014, Barcelona, Spain, April 2014. Vol. 226. Springer International Publishing.

Dr. Fabien Gandon is Research Director in Informatics and Computer Science at Inria and leader of the Wimmics team at Inria Sophia Antipolis (UCA, Inria, CNRS, I3S). Fabien is also the Inria representative at the World Wide Web Consortium (W3C), where he has participated in several standardization groups. His professional interests include: the Web, the Semantic Web, the Social Web, ontologies, knowledge engineering and modelling, mobility, privacy, context-awareness, semantic analysis of social networks, intrawebs, and distributed artificial intelligence. Fabien previously worked for the Mobile Commerce Laboratory of Carnegie Mellon University in Pittsburgh, and he was General Co-chair of WWW 2012, Program Co-chair of ESWC 2014, and General Chair of ESWC 2015.
http://fabien.info

Tsuyoshi Murata

Wednesday, 15/06/2016-9:00-10:00

Mining and learning on heterogeneous networks.

Social network analysis is one of the most popular research fields in Web mining. Understanding the structures and processes of real networks is important for recommending similar items and for controlling behaviors on networks, such as information diffusion and disease propagation. Although social media (such as Twitter or Facebook) are artifacts, the mechanisms and models of their structural growth are not yet well understood. Most early research on social network analysis deals with simple networks composed of only one type of node and edge. However, real social media are not so simple. For example, YouTube can be regarded as a huge network of three types of nodes (users, videos, and tags). Since most methods for social network analysis are designed for simple networks, representing complicated social media as simple networks requires excessive abstraction. This abstraction discards information that the original social networks contain, which makes the results of social network analysis more difficult to interpret. In this talk, I will explain our attempts at mining and learning on heterogeneous networks.

(1) community detection in n-partite networks

Community detection is one of the main topics in social network analysis, but most methods are designed for unipartite networks composed of only one type of node. Many social media, however, can be represented as heterogeneous networks with several types of nodes. Previous methods for detecting communities in heterogeneous networks do not fully consider the correspondence between communities of different vertex types. We have developed a method for detecting communities in tripartite networks, which contain three types of nodes. We have also proposed benchmark bipartite networks for community detection.
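As background for community quality in n-partite networks, the sketch below computes Barber's bipartite modularity, a standard quality measure for the two-type case (used here as an illustrative assumption; the talk's tripartite method generalizes beyond it):

```python
# Barber's bipartite modularity: fraction of edges inside communities
# minus the expectation under a bipartite null model.
def bipartite_modularity(edges, part):
    """edges: (u, v) pairs across the two node types; part: node -> community."""
    m = len(edges)
    deg_u, deg_v = {}, {}
    for u, v in edges:
        deg_u[u] = deg_u.get(u, 0) + 1
        deg_v[v] = deg_v.get(v, 0) + 1
    q = sum(1 for u, v in edges if part[u] == part[v]) / m
    q -= sum(deg_u[u] * deg_v[v] / (m * m)
             for u in deg_u for v in deg_v if part[u] == part[v])
    return q

# Two clean bipartite blocks: {u1, u2, v1, v2} and {u3, u4, v3, v4}.
edges = [("u1", "v1"), ("u1", "v2"), ("u2", "v1"), ("u2", "v2"),
         ("u3", "v3"), ("u3", "v4"), ("u4", "v3"), ("u4", "v4")]
part = {"u1": 0, "u2": 0, "v1": 0, "v2": 0,
        "u3": 1, "u4": 1, "v3": 1, "v4": 1}
print(bipartite_modularity(edges, part))  # 0.5 for these two perfect blocks
```

Detecting communities then means searching for the partition maximizing such a measure; the key difficulty noted above is keeping the communities of different vertex types in correspondence when a third type is added.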

(2) constrained community detection in simple/multiplex networks

Many automatic methods for community detection have already been proposed. If we can utilize background knowledge about node similarity or dissimilarity in a network, more convincing communities can be detected. Eaton et al. proposed a method for constrained community detection that optimizes a constrained Hamiltonian, but their method is slow on large-scale networks because it employs simulated annealing. Our method uses the Louvain method to optimize the constrained Hamiltonian, which greatly accelerates constrained community detection.
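To fix ideas, here is a toy version of a constrained Hamiltonian in the spirit described above (an illustrative assumption, not the speakers' or Eaton et al.'s actual code): a modularity-style Potts energy plus penalties for violated must-link and cannot-link constraints, which a Louvain-style local search would then minimize.

```python
# Constrained Hamiltonian sketch: lower energy = better partition.
# gamma is the resolution parameter, mu the per-violation penalty.
def constrained_hamiltonian(adj, partition, must_link, cannot_link,
                            gamma=1.0, mu=2.0):
    n = len(adj)
    degrees = [sum(row) for row in adj]
    m2 = sum(degrees)  # twice the number of edges
    h = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and partition[i] == partition[j]:
                # reward internal edges, penalize the null-model expectation
                h -= adj[i][j] - gamma * degrees[i] * degrees[j] / m2
    # background knowledge: violated constraints raise the energy
    h += mu * sum(1 for i, j in must_link if partition[i] != partition[j])
    h += mu * sum(1 for i, j in cannot_link if partition[i] == partition[j])
    return h

# Two triangles joined by the edge (2, 3); nodes 2 and 3 must be apart.
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
two_communities = [0, 0, 0, 1, 1, 1]
one_community = [0, 0, 0, 0, 0, 0]
print(constrained_hamiltonian(adj, two_communities, [], [(2, 3)]))
print(constrained_hamiltonian(adj, one_community, [], [(2, 3)]))
```

The speed argument is that evaluating the *change* of this energy for moving one node between communities is cheap and local, so Louvain-style greedy moves scale far better than the global simulated-annealing search.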

(3) transductive classification on n-partite networks

Transductive classification on heterogeneous networks is an important form of semi-supervised learning. Given a network and the labels of some of its vertices, transductive classification infers the labels of the remaining vertices. We have proposed a method for transductive classification on heterogeneous networks composed of multiple types of vertices (such as papers, authors, and conferences). Based on a novel definition of edge betweenness for heterogeneous networks, our method achieves an accuracy gain of around 5% over state-of-the-art methods, including GNetMine.
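For readers unfamiliar with the setting, the following generic label-propagation sketch (a simplifying assumption for illustration; not GNetMine or the speakers' method, and with invented paper/author names) shows transductive classification on a tiny heterogeneous paper-author network:

```python
# Seed labels on two papers are propagated over paper-author edges;
# unlabeled vertices of both types receive inferred labels.
edges = [("p1", "a1"), ("p1", "a2"), ("p2", "a1"),
         ("p2", "a2"), ("p3", "a2")]
neighbors = {}
for u, v in edges:
    neighbors.setdefault(u, []).append(v)
    neighbors.setdefault(v, []).append(u)

seeds = {"p1": {"DB": 1.0}, "p3": {"ML": 1.0}}  # known labels (clamped)
scores = {n: dict(seeds.get(n, {})) for n in neighbors}

for _ in range(50):  # fixed-point iteration; contraction, so it converges
    new = {}
    for n in neighbors:
        if n in seeds:
            new[n] = dict(seeds[n])  # keep seed labels fixed
            continue
        agg = {}
        for m in neighbors[n]:
            for lab, s in scores[m].items():
                agg[lab] = agg.get(lab, 0.0) + s / len(neighbors[m])
        new[n] = agg
    scores = new

pred = {n: max(s, key=s.get) for n, s in scores.items() if s}
print(pred)  # p2 leans "DB", author a2 leans "ML"
```

Heterogeneous-aware methods improve on this baseline by weighting edges differently per vertex-type pair; the edge-betweenness definition mentioned above is one way to derive such weights.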

Tsuyoshi Murata is Associate Professor in the Department of Computer Science, School of Computing, Tokyo Institute of Technology (Japan). He created the Murata Laboratory, which works on artificial intelligence, Web mining, and link mining.
To learn more about him, you may visit his website.