Recently, recommendation based on causal inference has gained much attention in the industrial community. The introduction of causal techniques into recommender systems (RS) has advanced the field considerably and has gradually become a trend. However, a unified causal analysis framework has not yet been established. On the one hand, existing causal methods in RS lack a clear causal and mathematical formalization of the scientific questions of interest. Several points of confusion need to be clarified: what exactly is being estimated, for what purpose, in which scenario, by which technique, and under what plausible assumptions. On the other hand, from a technical standpoint, the existence of various biases is the main obstacle to drawing causal conclusions from observational data. Yet formal definitions of the biases in RS remain unclear. Both limitations greatly hinder the development of RS. In this paper, we attempt to establish a causal analysis framework that accommodates different scenarios in RS, thereby providing a principled and rigorous operational guideline for causal recommendation. We first propose a step-by-step guideline on how to clarify and investigate problems in RS using causal concepts. We then provide a new taxonomy and formal definitions of various biases in RS from the perspective of which assumptions of standard causal analysis they violate. Finally, we find that many problems in RS can be well formalized into a few scenarios using the proposed causal analysis framework.
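To make concrete the kind of formalization the abstract calls for, consider a minimal sketch of an exposure-bias scenario; the notation and the choice of inverse-propensity scoring (IPS) as the illustrative technique are ours, not taken from the paper. Let $O_{u,i} \in \{0,1\}$ indicate whether user $u$ was exposed to item $i$, let $R_{u,i}$ denote the observed feedback, and let $R_{u,i}(1)$ denote the potential feedback had exposure occurred. A typical estimand is the mean potential feedback over all user-item pairs, and one standard estimator is
\[
\hat{R}_{\mathrm{IPS}} \;=\; \frac{1}{|\mathcal{U}||\mathcal{I}|} \sum_{u \in \mathcal{U}} \sum_{i \in \mathcal{I}} \frac{O_{u,i}\, R_{u,i}}{p_{u,i}},
\qquad p_{u,i} = P(O_{u,i} = 1).
\]
Under the assumptions that $p_{u,i} > 0$ for all pairs (overlap) and that exposure is independent of the potential feedback given the information used to model $p_{u,i}$ (unconfoundedness), $\hat{R}_{\mathrm{IPS}}$ is unbiased for $\frac{1}{|\mathcal{U}||\mathcal{I}|} \sum_{u,i} \mathbb{E}[R_{u,i}(1)]$; exposure or selection bias can then be defined precisely as a violation of one of these assumptions. This example only illustrates the style of analysis described; it is not the paper's framework itself.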
We evaluated the effectiveness of new indices of text comprehension in measuring relative text difficulty. Specifically, we examined the efficacy of automated indices produced by the web-based computational tool Coh-Metrix. In an analysis of 60 instructional science texts, we divided the texts into groups that were considered more or less difficult to comprehend. The defining criteria were based on Coh-Metrix indices that measure independent factors underlying text coherence: referential overlap and vocabulary accessibility. To validate the text difficulty groups, participants read and recalled two "difficult" and two "easy" texts that were similar in topic and length. Easier texts were read faster and recalled better than difficult texts. We discuss the implications of these results in the context of theoretically motivated techniques for improving textbook selection.