Toutiao Recommendation System: P1 Overview
2019-02-19 01:33
We are looking for the best function to maximize user satisfaction:

user satisfaction = function(content, user profile, context)
- Content: features of articles, videos, UGC short videos, Q&As, etc.
- User profile: interests, occupation, age, gender and behavior patterns, etc.
- Context: the mobile user's situation, such as being at work, commuting, or traveling, etc.
Measurable Goals, e.g.
- Click-through rate
- Session duration
- Frequency control of ads and special content types (e.g., Q&As)
- Frequency control of vulgar content
- Reducing clickbait, low-quality, and disgusting content
- Enforcing / pinning / up-weighting important news
- Down-weighting content from low-quality accounts
It is a typical supervised machine learning problem to find the best
function above. To implement the system, we have these algorithms:
- Collaborative Filtering
- Logistic Regression
- Factorization Machine
A world-class recommendation system needs the flexibility to A/B-test and combine the algorithms above. It is now popular to combine LR and DNN; Facebook used both LR and GBDT years ago.
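To make the modeling concrete, here is a minimal logistic-regression CTR sketch in plain Python. The feature names, learning rate, and toy samples are all invented for illustration; this is not Toutiao's actual setup.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_ctr(weights, features):
    """Score one impression; features maps feature name -> value."""
    z = sum(weights.get(f, 0.0) * v for f, v in features.items())
    return sigmoid(z)

def sgd_update(weights, features, clicked, lr=0.1):
    """One logistic-regression SGD step on a single (impression, click) sample."""
    error = predict_ctr(weights, features) - (1.0 if clicked else 0.0)
    for f, v in features.items():
        weights[f] = weights.get(f, 0.0) - lr * error * v

# Toy training loop: the model learns that "tag:tech" correlates with clicks.
weights = {}
samples = [({"tag:tech": 1.0}, True), ({"tag:gossip": 1.0}, False)] * 50
for features, clicked in samples:
    sgd_update(weights, features, clicked)

print(predict_ctr(weights, {"tag:tech": 1.0}) > 0.5)    # True
print(predict_ctr(weights, {"tag:gossip": 1.0}) < 0.5)  # True
```

In production, a model like this would have billions of sparse features; the structure of the update, however, stays the same.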
Correlation between content characteristics and user interests. Explicit correlations include keywords, categories, sources, and genres. Implicit correlations can be extracted from user and item vectors learned by models like FM.
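A minimal sketch of an implicit correlation, assuming we already have FM-style latent vectors for a user and for candidate items; the vectors and document names below are made up.

```python
# Hypothetical latent vectors learned by a factorization-machine-style model;
# the numbers are invented for illustration.
user_vec = [0.8, 0.1, 0.3]
item_vecs = {
    "doc_nba_finals": [0.9, 0.0, 0.2],
    "doc_stock_market": [0.1, 0.9, 0.1],
}

def implicit_correlation(u, v):
    """Dot product of user and item latent vectors as an implicit-correlation score."""
    return sum(a * b for a, b in zip(u, v))

ranked = sorted(item_vecs,
                key=lambda d: implicit_correlation(user_vec, item_vecs[d]),
                reverse=True)
print(ranked[0])  # doc_nba_finals scores higher for this user
```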
Environmental features such as geolocation and time. They can be used as bias features or as a basis for building further correlations.
Hot trends. There are global hot trends, categorical hot trends, topic hot trends, and keyword hot trends. Hot trends are very useful for solving the cold-start issue when we have little information about a user.
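The cold-start fallback could be sketched as follows; the trend scopes, profile fields, and document IDs are hypothetical.

```python
def recommend_cold_start(user_profile, trends):
    """Fall back to hot trends when we know little about a user.
    trends maps a scope ("global", a category name, ...) to a ranked doc list."""
    if not user_profile.get("interests"):
        return trends["global"][:3]                        # no signal: global hot trend
    category = user_profile["interests"][0]
    return trends.get(category, trends["global"])[:3]      # else: categorical hot trend

trends = {
    "global": ["d1", "d2", "d3", "d4"],
    "sports": ["s1", "s2", "s3"],
}
print(recommend_cold_start({}, trends))                        # ['d1', 'd2', 'd3']
print(recommend_cold_start({"interests": ["sports"]}, trends))  # ['s1', 's2', 's3']
```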
Collaborative features, which help avoid the situation where recommended content becomes more and more concentrated. Collaborative filtering does not analyse each user's history separately; instead, it finds similarities between users based on their behaviour, such as clicks, interests, topics, keywords, or even implicit vectors. By finding similar users, it can expand the diversity of recommended content.
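A toy version of the behaviour-based similarity described above, using Jaccard overlap of click sets as the similarity measure; this is a deliberate simplification, and the user and document IDs are made up.

```python
# Toy click histories: user -> set of clicked documents.
clicks = {
    "u1": {"d1", "d2", "d3"},
    "u2": {"d2", "d3", "d4"},
    "u3": {"d9"},
}

def jaccard(a, b):
    """Behavioural similarity between two users by overlapping clicks."""
    return len(a & b) / len(a | b) if a | b else 0.0

def similar_users(user, k=1):
    """The k most similar users, ranked by click overlap."""
    others = [(u, jaccard(clicks[user], s)) for u, s in clicks.items() if u != user]
    return sorted(others, key=lambda t: t[1], reverse=True)[:k]

def expand_candidates(user):
    """Recommend documents clicked by similar users but unseen by this user."""
    seen = clicks[user]
    return {d for u, _ in similar_users(user) for d in clicks[u]} - seen

print(expand_candidates("u1"))  # {'d4'}
```

This is how collaborative signals widen the candidate pool: u1 never clicked d4, but the most similar user did.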
- Users like to see their news feed update in realtime according to the actions we track.
- Use Apache Storm to process data (clicks, impressions, favorites, shares) in realtime.
- Collect data until a threshold is reached, then push an update to the recommendation model.
- Store model parameters, such as tens of billions of raw features and billions of vector features, in high-performance computing clusters.
They are implemented in the following steps:
- Online services record features in realtime.
- Write data into Kafka
- Ingest data from Kafka to Storm
- Populate full user profiles and prepare samples
- Update model parameters according to the latest samples
- Online modeling gains new knowledge
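The buffer-until-threshold part of the loop above might look roughly like this; the class and method names are invented, and the update rule is a trivial placeholder for real SGD over Kafka/Storm-delivered samples.

```python
from collections import deque

class OnlineTrainer:
    """Sketch of the streaming loop: buffer samples from the action stream
    and flush a model update once a threshold is reached. Illustrative only,
    not Toutiao's actual implementation."""

    def __init__(self, threshold=3):
        self.buffer = deque()
        self.threshold = threshold
        self.weights = {}  # stands in for billions of real parameters

    def on_action(self, features, clicked):
        """Called for every tracked user action (click, impression, ...)."""
        self.buffer.append((features, clicked))
        if len(self.buffer) >= self.threshold:
            self.flush()

    def flush(self):
        """Apply the buffered samples to the model parameters."""
        while self.buffer:
            features, clicked = self.buffer.popleft()
            for f in features:
                # placeholder update rule: reward clicked features
                self.weights[f] = self.weights.get(f, 0.0) + (1 if clicked else -1)

trainer = OnlineTrainer(threshold=2)
trainer.on_action({"tag:tech"}, clicked=True)
trainer.on_action({"tag:tech"}, clicked=True)  # hits threshold, triggers flush
print(trainer.weights)  # {'tag:tech': 2.0}
```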
It is impossible to score everything with the model, considering the super-large scale of the content pool. Therefore, we need recall strategies that focus on a representative subset of the data. Performance is critical here: the timeout is 50 ms.
Among all the recall strategies, take the inverted-index-like structure below as an example. The key can be a topic, entity, source, etc.
| Tags of Interests | Relevance | List of Documents |
| ----------------- | --------- | ------------------ |
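Assuming the recall structure is an inverted index keyed by tags, a minimal sketch looks like this; the tag names, documents, and relevance scores are made up.

```python
# Minimal inverted-index recall sketch: map each tag to a relevance-sorted
# posting list, then merge the lists for the user's interest tags.
index = {
    "nba": [("doc7", 0.9), ("doc2", 0.6)],
    "finance": [("doc5", 0.8)],
}

def recall(user_tags, limit=10):
    """Merge posting lists for the user's tags, keeping each doc's best relevance."""
    scores = {}
    for tag in user_tags:
        for doc, rel in index.get(tag, []):
            scores[doc] = max(scores.get(doc, 0.0), rel)
    return sorted(scores, key=scores.get, reverse=True)[:limit]

print(recall(["nba", "finance"]))  # ['doc7', 'doc5', 'doc2']
```

Lookups like this are cheap dictionary merges, which is what makes a tight recall budget such as 50 ms feasible.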
- Features depend on user-side and content-side tags.
- Recall strategies depend on user-side and content-side tags.
- Content analysis and user-tag mining are the cornerstones of the recommendation system.