Abstract
With the rapid development of online platforms, recommender systems have been extensively applied in a wide range of big data scenarios. However, big data brings three challenges for recommender systems: cold start, data sparsity, and low efficiency. Hashing is a promising technique for efficient online recommendation: it represents users and items as binary codes in a shared Hamming subspace, where a user's preference for an item can be estimated efficiently with fast XOR operations on their hash codes.
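As a concrete illustration of why hashing makes scoring cheap, the following minimal Python sketch (mine, not from the thesis) estimates a preference score from two binary codes with a single XOR and a popcount; the function name, code length, and example values are all hypothetical.

def hamming_similarity(user_code: int, item_code: int, n_bits: int) -> float:
    # XOR marks the bits where the two codes disagree; the popcount of the
    # result is the Hamming distance between them.
    hamming_dist = bin(user_code ^ item_code).count("1")
    # Items whose codes lie closer to the user's code score higher.
    return 1.0 - hamming_dist / n_bits

# Toy 8-bit codes (hypothetical values).
user = 0b10110010
item_a = 0b10110011   # 1 differing bit  -> similarity 0.875
item_b = 0b01001101   # 8 differing bits -> similarity 0.0
print(hamming_similarity(user, item_a, 8), hamming_similarity(user, item_b, 8))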
However, existing hashing-based recommender systems gain this efficiency at the price of accuracy. To improve accuracy, we explore the following four hashing-based recommendation approaches, which address the challenges caused by approximate discrete optimization algorithms, imbalanced data, cold start, and data sparsity.
• We propose Discrete Ranking-based Matrix Factorization (DRMF) to improve the accuracy of discrete collaborative filtering. DRMF is formulated with a cross-entropy loss between each user's observed pairwise preference ranking and the predicted pairwise preference ranking (a sketch of such a pairwise cross-entropy loss follows this list). Because the cross-entropy loss is non-linear, we derive a local quadratic upper bound, which recasts the problem as a series of binary quadratic programming problems that we solve with semidefinite programming solvers.
• To capture preferences over items from imbalanced categories in collaborative filtering, we develop Discrete Scale-invariant Metric Learning (DSML), which captures users' preferences over items drawn from categories with different intra-class variations. Specifically, we propose a scale-invariant margin based on the angles at negative item points in the shared Hamming subspace, and we derive a scale-invariant triplet hinge loss from this margin (an illustrative angle-based sketch follows this list). Finally, we integrate a pairwise ranking loss into the scale-invariant loss to capture more preference information.
• Considering the marketing applications of recommendation, we propose another discrete collaborative filtering technique, Collaborative Generated Hashing (CGH), which can be applied to find potential users or items in marketing scenarios. Its generative network is designed under the Minimum Description Length (MDL) principle to learn compact and informative binary codes (an MDL-objective sketch follows this list). In addition, we explore evaluation metrics for measuring marketing performance.
• We propose a discrete content-aware recommendation approach, Discrete Pairwise Hashing (DPH), to address the cold-start and data-sparsity issues. User-item interaction data and item content information are unified to learn effective representations. Specifically, we first pretrain robust item representations from item content with a Denoising Auto-Encoder (DAE) (a bare-bones DAE sketch follows this list); we then formulate a collaborative model by combining the learned DAE with a pairwise loss; finally, we optimize the model with an alternating optimization strategy.
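As referenced in the first bullet, the sketch below illustrates the pairwise cross-entropy idea behind DRMF in plain NumPy. It is my simplification, not the thesis implementation: the ±1 codes, the sigmoid link, and the variable names are assumptions, and the quadratic-upper-bound and semidefinite-programming steps are not shown.

import numpy as np

def drmf_pairwise_cross_entropy(B_u, D_i, D_j, y_ij):
    # Cross-entropy between a user's observed pairwise preference y_ij
    # (1 if item i is preferred over item j, else 0) and the preference
    # predicted from hash codes. B_u, D_i, D_j are +/-1 binary codes.
    score_diff = B_u @ D_i - B_u @ D_j          # difference of code similarities
    p_ij = 1.0 / (1.0 + np.exp(-score_diff))    # predicted probability that i > j
    return -(y_ij * np.log(p_ij) + (1 - y_ij) * np.log(1 - p_ij))

# Toy example with 8-bit +/-1 codes (hypothetical values).
rng = np.random.default_rng(0)
B_u = rng.choice([-1, 1], size=8)
D_i, D_j = rng.choice([-1, 1], size=(2, 8))
print(drmf_pairwise_cross_entropy(B_u, D_i, D_j, y_ij=1))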
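For the second bullet, here is one plausible reading of an angle-based, scale-invariant margin inside a triplet hinge loss, again as an illustrative NumPy sketch rather than the DSML formulation itself; the specific angle construction, the weighting factor lam, and all names are my assumptions.

import numpy as np

def angle_at_negative(user, pos_item, neg_item, eps=1e-12):
    # Angle at the negative item point between the directions towards the user
    # and towards the positive item; it is unchanged if all points are rescaled.
    a, b = user - neg_item, pos_item - neg_item
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def scale_invariant_triplet_hinge(user, pos_item, neg_item, lam=1.0):
    # Triplet hinge loss whose margin is driven by the angle above rather than
    # a fixed distance, so it adapts to categories with different spreads.
    margin = lam * angle_at_negative(user, pos_item, neg_item)
    d_pos = np.linalg.norm(user - pos_item)
    d_neg = np.linalg.norm(user - neg_item)
    return max(0.0, d_pos - d_neg + margin)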
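For the third bullet, the sketch below shows a generic two-part MDL objective for binary codes: the description length of the code plus the description length of the data given the code, which is what pushes codes to be both compact and informative. It is a generic illustration under my own assumptions (the Bernoulli prior, the squared-error reconstruction term, and the encode/decode callables are hypothetical), not the CGH network.

import numpy as np

def bernoulli_kl(q, p=0.5, eps=1e-8):
    # KL(Bernoulli(q) || Bernoulli(p)): description length of the code relative
    # to the prior, i.e. the "compact" part of the MDL objective.
    q = np.clip(q, eps, 1 - eps)
    return np.sum(q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p)))

def mdl_objective(x, encode, decode):
    # Two-part MDL: cost of the code + cost of the data given the code.
    q = encode(x)                              # per-bit probabilities in (0, 1)
    b = (q > 0.5).astype(np.float64)           # hard binary code used at test time
    code_cost = bernoulli_kl(q)                # compactness
    data_cost = np.sum((x - decode(b)) ** 2)   # informativeness (reconstruction)
    return code_cost + data_cost

# Toy usage with a random linear encoder/decoder (hypothetical shapes).
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))
encode = lambda x: 1.0 / (1.0 + np.exp(-x @ W))   # 16-dim content -> 8-bit code probs
decode = lambda b: b @ W.T                        # crude linear reconstruction
print(mdl_objective(rng.normal(size=16), encode, decode))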
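Finally, for the fourth bullet, this is a bare-bones single gradient step of a denoising auto-encoder of the kind used to pretrain item representations from content. The corruption scheme, tanh activation, learning rate, and variable names are my assumptions, and the pairwise collaborative term and the alternating optimization are omitted.

import numpy as np

rng = np.random.default_rng(0)

def denoise_step(X, W_enc, W_dec, noise_level=0.3, lr=0.01):
    # One gradient step of a plain denoising auto-encoder on the item content
    # matrix X (items x features): corrupt the input, then reconstruct the
    # clean input from the hidden representation.
    mask = rng.random(X.shape) > noise_level
    X_noisy = X * mask
    H = np.tanh(X_noisy @ W_enc)               # hidden item representation
    X_hat = H @ W_dec                          # reconstruction of the clean input
    err = X_hat - X
    # Gradients of the squared reconstruction error.
    grad_dec = H.T @ err
    grad_enc = X_noisy.T @ (err @ W_dec.T * (1 - H ** 2))
    W_enc -= lr * grad_enc
    W_dec -= lr * grad_dec
    return W_enc, W_dec, np.mean(err ** 2)

# Toy usage on random item content (hypothetical shapes).
X = rng.random((100, 50))                 # 100 items, 50 content features
W_enc = 0.1 * rng.normal(size=(50, 16))   # 16-dimensional item representation
W_dec = 0.1 * rng.normal(size=(16, 50))
W_enc, W_dec, mse = denoise_step(X, W_enc, W_dec)
print(mse)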