July 25, 2019, Paris, France

The 2nd International Workshop on ExplainAble Recommendation and Search (EARS 2019)

Co-located with The 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval

About

The motivation of the workshop is to promote research on, and applications of, Explainable Recommendation and Search, against the broader background of Explainable AI. Explainable recommendation and search aim to develop models and methods that not only generate high-quality recommendation or search results, but also produce intuitive explanations of those results for users or system designers, which can help improve system transparency, persuasiveness, trustworthiness, and effectiveness.

In a broader sense, researchers across the artificial intelligence community have also recognized the importance of Explainable AI, which aims to address a wide range of explainability problems in deep learning, computer vision, autonomous driving, and natural language processing. Recent regulations such as the EU GDPR and the California Consumer Privacy Act of 2018 also encourage explainability and users' right to an explanation of algorithmic decisions in AI systems. As an important branch of AI research, this trend highlights the importance for our IR/RecSys community of addressing the explainability of various recommendation and search systems.

We welcome contributions of both long and short papers on a wide range of topics, including but not limited to explainable recommendation and search models, incorporating multi-modal information for explanation, evaluation of explainable recommendation and search, and user studies for explainable recommendation and search. More topics are listed in the call for papers. Papers must be submitted via EasyChair at https://easychair.org/conferences/?conf=ears2019 by 23:59 AoE (Anywhere on Earth) on May 15 (abstract) and May 20 (full paper), 2019.

EARS'19 (co-located with SIGIR'19)
Paris, France
Sponsors

Keynote Speech

Dr. Xin Luna Dong, Principal Scientist, Amazon

Title: Building a Broad Knowledge Graph for Products

Abstract: Knowledge graphs have been used to support a wide range of applications and to enhance search results for major search engines such as Google and Bing. At Amazon we are building a Product Graph, an authoritative knowledge graph for all products in the world. The thousands of product verticals we need to model, the vast number of data sources we need to extract knowledge from, the huge volume of new products we need to handle every day, and the various applications in Search, Discovery, Personalization, and Voice that we wish to support all present big challenges in constructing such a graph.
In this talk we describe our efforts in building a broad product graph, a graph that starts shallow with core entities and relationships, and allows easily adding verticals and relationships in a pay-as-you-go fashion. We describe our efforts on knowledge extraction, linkage, and cleaning to significantly improve the coverage and quality of product knowledge. We also present our progress towards our moon-shot goals including harvesting knowledge from the web, hands-off-the-wheel knowledge integration and cleaning, human-in-the-loop knowledge learning, and graph mining and graph-enhanced search.

Bio: Dr. Xin Luna Dong is a Principal Scientist at Amazon, leading the effort of constructing the Amazon Product Knowledge Graph. She was one of the major contributors to the Google Knowledge Vault project, and led the Knowledge-based Trust project, which the Washington Post called the “Google Truth Machine”. She co-authored the book “Big Data Integration”, was named an ACM Distinguished Member, and received the VLDB Early Career Research Contribution Award for "advancing the state of the art of knowledge fusion" as well as the Best Demo Award at SIGMOD 2005. She serves on the VLDB Endowment and the PVLDB advisory committee, and is a PC co-chair for VLDB 2021, ICDE Industry 2019, VLDB Tutorials 2019, SIGMOD 2018, and WAIM 2015.


Accepted Papers

  1. Personalized Attention for Textual Profiling and Recommendation.
    Charles-Emmanuel Dias (University of Paris 6), Vincent Guigue (University of Paris 6) and Patrick Gallinari (University of Paris 6)
  2. Generating Natural Language Explanations for Personalized Recommendation.
    Hanxiong Chen (Rutgers University), Xu Chen (Tsinghua University), Shaoyun Shi (Tsinghua University) and Yongfeng Zhang (Rutgers University)
  3. Selection and Interpretation of Embedding Subspace for Query Classification.
    Kyoung-Rok Jang (Korea Advanced Institute of Science and Technology), Sung-Hyon Myaeng (Korea Advanced Institute of Science and Technology), Hee-Cheol Seo (Naver) and Joo-Hee Park (Naver)
  4. Learning-to-Explain: Recommendation Reason Determination Through Q20 Gaming.
    Xianchao Wu (Microsoft Japan)
  5. Can Structural Equation Models Interpret Search Systems?
    Massimo Melucci (University of Padova)
  6. DiaQueTT: A Diachronic and Queryable Topic-Tracking Model.
    Yuta Nakamura (Kyoto University), Yasuhito Asano (Kyoto University) and Masatoshi Yoshikawa (Kyoto University)
  7. Model Explanations under Calibration.
    Rishabh Jain (Imperial College London) and Pranava Madhyastha (Imperial College London)
  8. Assessing the Helpfulness of Review Content for Explaining Recommendations.
    Diana Carolina Hernandez Bocanegra (University of Duisburg-Essen) and Jürgen Ziegler (University of Duisburg-Essen)
  9. Metrics for Explainable Ranking Functions.
    Abraham Gale (Rutgers University) and Amelie Marian (Rutgers University)
  10. Effects of Foraging in Personalized Content-based Image Recommendation.
    Amit Kumar Jaiswal (University of Bedfordshire), Haiming Liu (University of Bedfordshire) and Ingo Frommholz (University of Bedfordshire)
  11. Learning Hierarchical Item Categories from Implicit Feedback Data for Efficient Recommendations and Browsing.
    Farhan Khawar (The Hong Kong University of Science and Technology) and Nevin L. Zhang (The Hong Kong University of Science and Technology)
  12. Understanding The Influence of Task Difficulty on User Fixation Behavior.
    Masoud Davari (GESIS - Leibniz Institute for the Social Sciences), Ran Yu (GESIS - Leibniz Institute for the Social Sciences) and Stefan Dietze (GESIS - Leibniz Institute for the Social Sciences)

Call for Papers

We welcome contributions of both long and short papers on a wide range of topics, including but not limited to the following topics of interest:

  1. New Models for Explainable Recommendation and Search
    • Explainable shallow models for recommendation and search
    • Explainable neural models for recommendation and search
    • Explainable sequential modeling
    • Explainable optimization algorithms and theories
    • Causal inference for explainable recommendation
  2. Using Different Information Sources for Explanation
    • Text-based modeling and explanation
    • Image-based modeling and explanation
    • Using knowledge-base for explanation
    • Audio/Video-based modeling and explanation
    • Integrating heterogeneous information for explanation
  3. User Behavior Analysis and HCI for Explanation
    • Explanation and user satisfaction
    • Eye tracking and attention modeling
    • Mouse movement analysis
  4. New Types of Explanations for Search and Recommendation
    • Textual sentence explanations
    • Visual explanations
    • Statistics-based explanations
    • Aggregated explanations
    • Context-aware explanations
  5. Evaluation of Explainable Recommendation and Search
    • Offline evaluation measures and protocols
    • Online evaluation measures and protocols
    • User study for explanation evaluation
  6. Applications of Explainable Recommendation and Search
    • Explainable product search and recommendation
    • Explainable web search
    • Explainable social recommendation
    • Explainable news recommendation
    • Explainable point-of-interest recommendation
    • Explainable multi-media search and recommendation

PAPER SUBMISSION GUIDELINES

EARS 2019 paper submissions can be either long (maximum 9 pages plus references) or short (maximum 4 pages plus references). Each accepted paper (whether long or short) will have an oral presentation in a plenary session and will also be allocated a slot in a poster session to encourage discussion and follow-up between authors and attendees.

EARS 2019 submissions are double-blind. All submissions and reviews will be handled electronically. EARS 2019 submissions should be prepared according to the standard double-column ACM SIG proceedings format. Additional information about formatting and style files is available on the ACM website. Papers must be submitted via EasyChair at https://easychair.org/conferences/?conf=ears2019 by 23:59 AoE (Anywhere on Earth) on May 15 (abstract) and May 20 (full paper), 2019.

For inquiries about the workshop and submissions, please email ears2019@easychair.org

Important Dates

All times are 23:59, AoE (Anywhere on Earth)
May 15, 2019: Abstract due
May 20, 2019: Submission due
May 31, 2019: Paper notification
June 20, 2019: Camera-ready submission
July 25, 2019: Workshop day

Workshop Co-Chairs


Yongfeng Zhang, Rutgers University


Yi Zhang, UC Santa Cruz


Min Zhang, Tsinghua University


Chirag Shah, Rutgers University

THE VENUE

EARS'19 will be co-located with The 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval and will be held in Paris, France, on July 25, 2019.