Antje Marx | Wednesday September 28th, 2022
The following tech blog article is part of a series of detailed technical explanations of a recommender system built jointly by realeyz (EYZ Media GmbH) and the DAI-Labor (TU Berlin). The tech blog serves as a knowledge base and a forum for exchange on technical solutions supporting the transnational marketing, branding, and distribution of European audiovisual works on VOD services.
A popular approach for computing recommendations is Collaborative Filtering. This approach analyzes existing user-item interactions (e.g., assigned ratings) and predicts the missing preference entries (ratings) by applying knowledge about similar users and items. Collaborative Filtering algorithms are usually implemented on top of a user-item matrix; the recommendations are computed using user-based KNN (K-Nearest Neighbors) or item-based KNN. In order to cope with the sparsity of the user-item matrix, low-rank approximation methods (e.g., Singular Value Decomposition) can be applied.
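As a minimal sketch of the user-based KNN idea, consider the following snippet. The ratings are made up for illustration and a hand-rolled cosine similarity is used; a production recommender would of course work on a much larger matrix:

```python
import math

# Toy user-item rating matrix; all ratings are hypothetical.
ratings = {
    "alice": {"film_a": 5, "film_b": 3, "film_c": 4},
    "bob":   {"film_a": 3, "film_b": 1, "film_c": 2, "film_d": 3},
    "carol": {"film_a": 4, "film_b": 3, "film_c": 4, "film_d": 3},
    "dave":  {"film_a": 3, "film_b": 3, "film_c": 1, "film_d": 5},
}

def cosine_sim(u, v):
    """Cosine similarity over the items rated by both users."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = math.sqrt(sum(u[i] ** 2 for i in common))
    norm_v = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def predict_user_knn(user, item, k=2):
    """User-based KNN: similarity-weighted mean of the k most
    similar users' ratings for the target item."""
    neighbours = sorted(
        ((cosine_sim(ratings[user], ratings[other]), other)
         for other in ratings
         if other != user and item in ratings[other]),
        reverse=True,
    )[:k]
    num = sum(sim * ratings[other][item] for sim, other in neighbours)
    den = sum(sim for sim, _ in neighbours)
    return num / den if den else 0.0

print(predict_user_knn("alice", "film_d"))
```

Item-based KNN works the same way with the roles of users and items swapped: item-item similarities are computed over the users who rated both items.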
The realeyz movie portal integrates several different methods for computing recommendations, among them Collaborative Filtering. The CF-based recommender analyzes the users' interest in videos, movie descriptions, and trailers; these data provide the basis for computing suggestions. The challenge for the recommender component is finding appropriate models and parameter settings for the algorithm. The implemented solution addresses this problem by building different recommender candidates that are evaluated using 10-fold cross-validation on the collected log data.
Figure 1 shows the algorithm and model selection process for the Collaborative Filtering component applied by realeyz. Once the evaluation metric is defined, the candidate recommenders are built and evaluated. The best model (the algorithm and hyperparameter configuration computed on the training data) is selected for deployment in the live system. Recommendation results generated by the best model are cached in the database, ensuring reliably short response times.
Figure 1: Collaborative Filtering Model Selection Flow
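The selection loop can be sketched as follows. The interaction log, the 10-fold splitter, and the two candidate "models" are simplified stand-ins (real candidates would be SVD or KNN recommenders), but the structure — build each candidate, score it on identical folds, deploy the winner — mirrors the process in Figure 1:

```python
import random

random.seed(7)
# Hypothetical interaction log: (user_id, item_id, rating) triples,
# skewed towards high ratings so the candidates are distinguishable.
log = [(u, i, random.randint(3, 5)) for u in range(30) for i in range(10)]

def k_fold(data, k=10):
    """Materialize (train, test) splits for k-fold cross-validation."""
    shuffled = data[:]
    random.shuffle(shuffled)
    size = len(shuffled) // k
    return [(shuffled[:f * size] + shuffled[(f + 1) * size:],
             shuffled[f * size:(f + 1) * size]) for f in range(k)]

def rmse(model, test):
    return (sum((model(u, i) - r) ** 2 for u, i, r in test) / len(test)) ** 0.5

# Two stand-in recommender candidates.
def make_global_mean(train):
    mean = sum(r for _, _, r in train) / len(train)
    return lambda u, i: mean

def make_constant(train):
    return lambda u, i: 3.0

folds = k_fold(log, k=10)          # identical folds for every candidate
candidates = {"global_mean": make_global_mean, "constant_3": make_constant}
scores = {name: sum(rmse(build(train), test) for train, test in folds) / len(folds)
          for name, build in candidates.items()}

best = min(scores, key=scores.get)  # the model deployed to the live system
print(best, scores)
```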
On VOD platforms, the interactions between items and users typically follow power law distributions. When applying Collaborative Filtering approaches, popular items have a strong influence on the results (a biased model). Recommending popular items usually leads to good evaluation scores, but reduces the diversity of the recommendations. Thus, the evaluation metrics should take this into account in order to ensure serendipitous suggestions.
Figure 2: The power law distribution over items and users.
Taking a deeper look at the log data, we study the relation between users and items. Figure 2 shows that the top 20% of users contribute 66% of all interactions, while the top 20% of items are part of 70% of the interactions. The strong influence of the top users and top items often results in less diverse suggestions. Moreover, the bias may drown out additional aspects (such as contextual factors) that are important when computing recommendations.
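The head-heavy shape of such a distribution is easy to quantify. The sketch below uses synthetic, roughly power-law interaction counts (not the actual realeyz log) and computes the share of interactions contributed by the top 20% of users:

```python
# Hypothetical per-user interaction counts with a roughly power-law
# (1/rank) shape: user 0 is very active, the long tail barely interacts.
interactions = {f"user_{i}": max(1, 1000 // (i + 1)) for i in range(100)}

def top_share(counts, fraction=0.2):
    """Share of all interactions contributed by the top `fraction` of entities."""
    ranked = sorted(counts.values(), reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(ranked[:k]) / sum(ranked)

print(f"top 20% of users account for {top_share(interactions):.0%} of interactions")
```

Even this crude 1/rank shape reproduces the order of magnitude reported above: a fifth of the users generates roughly two thirds of the traffic.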
To address this problem, we compute several different models based on samples defined with respect to relevant contextual factors (e.g., time or location). Analyzing several such recommender models improves the diversity of the recommendations.
Figure 3: The model selection approach used in the realeyz CF-based recommender component.
For the optimization of the realeyz recommender component, we focus on time as the most important contextual factor in the Collaborative Filtering component. We use a sliding time window approach for splitting the dataset into training and test sets, and apply cross-validation within each time window. Thus, models are selected independently inside each time window ("bin"). Figure 3 shows the basic idea of this dynamic model generation approach. Different metrics as well as bin sizes are analyzed for computing the best model.
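The windowing step can be sketched as follows. The timestamped log, the 14-day window, and the 7-day step are invented for illustration, and the split rule (the last week of each bin is held out as the test set) is one plausible reading of the sliding-window scheme, not the exact production configuration:

```python
from datetime import datetime, timedelta

# Hypothetical timestamped log entries: (user, item, rating, timestamp);
# one interaction every 6 hours over roughly 50 days.
start = datetime(2022, 1, 1)
log = [(f"u{i % 7}", f"v{i % 5}", 1 + i % 5, start + timedelta(hours=6 * i))
       for i in range(200)]

def sliding_windows(data, window=timedelta(days=14), step=timedelta(days=7)):
    """Slide a fixed-size window over the log; inside each window ('bin'),
    the last `step` of the data is held out as the test set."""
    data = sorted(data, key=lambda e: e[3])
    t, t_end = data[0][3], data[-1][3]
    while t + window <= t_end:
        train = [e for e in data if t <= e[3] < t + window - step]
        test = [e for e in data if t + window - step <= e[3] < t + window]
        if train and test:
            yield train, test
        t += step

bins = list(sliding_windows(log))
print(len(bins), len(bins[0][0]), len(bins[0][1]))
```

Model selection then runs the cross-validation loop described above once per bin, so that each bin can pick a different winning algorithm.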
The experimental results for 9 time windows defined on the realeyz data are given in Table 1. Three evaluation metrics have been analyzed: RMSE (Root Mean Squared Error), MAE (Mean Absolute Error), and FCP (Fraction of Concordant Pairs). For each metric, we tested three algorithms: SVD, user-based KNN, and item-based KNN. The experimental results show that the evaluation metric FCP leads to the highest variance with respect to the optimal algorithm.
Table 1: The optimal algorithms selected along the sliding time windows w.r.t. the evaluation metric.
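The three metrics can be sketched in plain Python. The rating and prediction vectors are invented, and the FCP shown here is simplified to the item pairs of a single user; the full definition aggregates concordant and discordant pairs over all users:

```python
def rmse(y_true, y_pred):
    """Root Mean Squared Error: penalizes large errors quadratically."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

def mae(y_true, y_pred):
    """Mean Absolute Error: average absolute deviation."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def fcp(y_true, y_pred):
    """Fraction of Concordant Pairs: share of item pairs whose predicted
    order matches the true order (pairs tied in the true ratings are ignored).
    Unlike RMSE/MAE it scores the ranking, not the rating values."""
    concordant = discordant = 0
    n = len(y_true)
    for i in range(n):
        for j in range(i + 1, n):
            if y_true[i] == y_true[j]:
                continue
            if (y_true[i] - y_true[j]) * (y_pred[i] - y_pred[j]) > 0:
                concordant += 1
            else:
                discordant += 1
    return concordant / (concordant + discordant)

y_true = [5, 3, 4, 2, 1]
y_pred = [4.8, 3.2, 3.0, 2.5, 1.0]
print(rmse(y_true, y_pred), mae(y_true, y_pred), fcp(y_true, y_pred))
```

Because FCP only looks at relative order, a model can win under FCP while losing under RMSE or MAE, which is one way to read the higher variance of the FCP column in Table 1.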
Overall, the tested recommender models (each optimized for a specific context) provide highly precise recommendation results. The combination of models trained for different contexts ensures high-quality suggestions and simultaneously improves the diversity of the recommendations. The approach reduces the popularity bias caused by the power law distribution over users and items and improves the serendipity of the recommendation results.
We would like to briefly introduce the project team:
Andreas Lommatzsch works as a senior researcher at the Distributed Artificial Intelligence Lab (DAI-Labor) at the TU Berlin. His research focuses on distributed knowledge management and machine learning algorithms. His primary interests lie in the areas of recommendations based on data-streams and context-aware meta-recommender algorithms.
Jing Yuan is a Ph.D. student working at the Distributed Artificial Intelligence Lab (DAI-Labor) at TU Berlin. Her research interests include recommender systems, information retrieval, and machine learning algorithms.
Phani Saripalli works as a Data Engineer at EYZ Media GmbH (operator of realeyz.de) and coordinates the project on site. He specializes in building data pipelines and data wrangling, working with Redis, AWS, Airflow, Flask, Python, and Postgres to transform data from its raw form into something insightful.
Khalit Hartmann is a Bachelor's student in Computer Science (Informatik) working at the Distributed Artificial Intelligence Lab (DAI-Labor) at TU Berlin. His current fields of research include recommender systems based on natural language processing and machine learning algorithms.