🎬 Deep Learning for Content-Based Filtering

Building a Neural Network Movie Recommender System

In this post, we'll explore how to build a content-based recommender system using deep learning. Unlike collaborative filtering, which relies purely on user-item interactions, content-based filtering uses movie features — such as genres, year, and average ratings — to recommend similar items or predict user ratings.


🧩 Step 1: Importing the Essentials

We'll use familiar tools like NumPy, Pandas, TensorFlow, and scikit-learn for model building and feature scaling.

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.model_selection import train_test_split
import tabulate
from recsysNN_utils import *  # lab-provided helpers (e.g., gen_user_vecs, get_user_vecs used below)

These libraries handle everything from data preprocessing to model training and evaluation.


🎥 Step 2: The MovieLens Dataset

Our data comes from the MovieLens project — one of the most popular datasets for recommender systems research.

  • 397 users
  • 847 movies
  • 25,521 ratings

Each movie includes its title, release year, and a list of genres.

Example: Toy Story 3 (2010) → Adventure, Animation, Children, Comedy, Fantasy

This data is ideal for constructing feature vectors that represent both users and movies.


🧠 Step 3: Content-Based Filtering with Neural Networks

In traditional collaborative filtering, we learn two latent vectors — one for the user and one for the movie — based only on ratings. Here, we extend that idea:

The model learns these vectors from content features (genres, average rating, year, etc.) rather than just interactions.

3.1 Training Data

Movie content includes year, genre one-hot vectors (14 total), and average ratings.

User content includes per-genre average ratings, plus the user ID, rating count, and overall rating average. The ID, count, and average columns are excluded from training, which is why the u_s: and i_s: column slices appear in the training code.

Each training example looks like:

(user vector, movie vector) → user’s rating
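The genre one-hot portion of the movie vector can be sketched as follows. This is a toy illustration using a hypothetical subset of the 14 genre columns (the genre names and their order here are assumptions):

```python
import numpy as np

# Hypothetical subset of the 14 MovieLens genre columns (names/order assumed).
genres = ['Action', 'Adventure', 'Animation', 'Children', 'Comedy', 'Fantasy']

def one_hot(movie_genres):
    # 1.0 where the movie carries the genre, 0.0 elsewhere.
    return np.array([1.0 if g in movie_genres else 0.0 for g in genres])

# Toy Story 3 from the example above:
v = one_hot(['Adventure', 'Animation', 'Children', 'Comedy', 'Fantasy'])
print(v)  # [0. 1. 1. 1. 1. 1.]
```

The full movie vector concatenates the year and average rating with this 14-dimensional genre encoding.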

3.2 Preparing the Data

We scale the data to improve training stability:

scalerItem = StandardScaler().fit(item_train)
scalerUser = StandardScaler().fit(user_train)
scalerTarget = MinMaxScaler((-1, 1)).fit(y_train.reshape(-1, 1))

item_train = scalerItem.transform(item_train)
user_train = scalerUser.transform(user_train)
y_train = scalerTarget.transform(y_train.reshape(-1, 1))
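Because MinMaxScaler is invertible, predictions can later be mapped back to the original star scale with inverse_transform. A quick self-contained sanity check on toy ratings (the 0.5 to 5.0 star range here is an assumption for illustration):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy stand-in for y_train: star ratings on an assumed 0.5-5.0 scale.
y = np.array([0.5, 2.75, 5.0]).reshape(-1, 1)

scaler = MinMaxScaler((-1, 1)).fit(y)
y_scaled = scaler.transform(y)
print(y_scaled.ravel())  # [-1.  0.  1.]

# inverse_transform recovers the original ratings.
print(np.allclose(scaler.inverse_transform(y_scaled), y))  # True
```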

Then we split into training and test sets (80/20).
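The train_test_split imported earlier handles this in one call, which keeps the user rows, item rows, and targets aligned after shuffling. A self-contained sketch with toy stand-in arrays (the shapes are illustrative, not the real dataset's):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-ins for the scaled arrays (feature counts are illustrative).
rng = np.random.default_rng(1)
item_train = rng.normal(size=(100, 17))
user_train = rng.normal(size=(100, 17))
y_train = rng.uniform(-1, 1, size=(100, 1))

# Splitting all three arrays together keeps each (user, item, rating)
# training example intact.
item_tr, item_te, user_tr, user_te, y_tr, y_te = train_test_split(
    item_train, user_train, y_train,
    train_size=0.80, shuffle=True, random_state=1)
print(item_tr.shape, item_te.shape)  # (80, 17) (20, 17)
```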


๐Ÿ—️ Step 4: Building the Neural Network

Our recommender uses two twin networks — one for users and one for items. Each generates a 32-dimensional embedding vector, and their dot product predicts the rating.

Exercise 1 — Build the Twin Towers

num_outputs = 32
tf.random.set_seed(1)

user_NN = tf.keras.models.Sequential([
    keras.layers.Dense(256, activation='relu'),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(num_outputs)
])

item_NN = tf.keras.models.Sequential([
    keras.layers.Dense(256, activation='relu'),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(num_outputs)
])

Combine them with the Functional API:

input_user = keras.layers.Input(shape=(num_user_features,))
vu = user_NN(input_user)
vu = tf.linalg.l2_normalize(vu, axis=1)

input_item = keras.layers.Input(shape=(num_item_features,))
vm = item_NN(input_item)
vm = tf.linalg.l2_normalize(vm, axis=1)

output = keras.layers.Dot(axes=1)([vu, vm])
model = keras.Model([input_user, input_item], output)
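The L2 normalization is what makes the MinMaxScaler range of (-1, 1) a sensible target: the dot product of two unit vectors is a cosine similarity, which is always bounded by [-1, 1]. A numpy sketch of this property with random stand-in embeddings:

```python
import numpy as np

# Hypothetical 32-d user and movie embeddings (random stand-ins).
rng = np.random.default_rng(0)
vu = rng.normal(size=32)
vm = rng.normal(size=32)

# L2-normalize, mirroring tf.linalg.l2_normalize along the feature axis.
vu = vu / np.linalg.norm(vu)
vm = vm / np.linalg.norm(vm)

# The dot product of unit vectors is a cosine similarity in [-1, 1].
score = float(np.dot(vu, vm))
print(-1.0 <= score <= 1.0)  # True
```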

Train with Adam optimizer and mean squared error loss:

model.compile(optimizer=keras.optimizers.Adam(0.01),
              loss=tf.keras.losses.MeanSquaredError())

model.fit([user_train[:, u_s:], item_train[:, i_s:]], y_train, epochs=30)

The test loss closely matches training loss, suggesting no major overfitting.


🔮 Step 5: Making Predictions

5.1 Predictions for a New User

Create a new user who loves adventure and fantasy films:

new_user = np.array([[5000, 3, 0.0,
                      0.0, 5.0, 0.0, 0.0,
                      0.0, 0.0, 0.0,
                      0.0, 5.0, 0.0, 0.0,
                      0.0, 0.0, 0.0]])

Replicate the user vector across all movies and generate predictions:

user_vecs = gen_user_vecs(new_user, len(item_vecs))
suser_vecs = scalerUser.transform(user_vecs)
sitem_vecs = scalerItem.transform(item_vecs)
y_pred = model.predict([suser_vecs[:, u_s:], sitem_vecs[:, i_s:]])
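These predictions come out on the scaled (-1, 1) target range; scalerTarget.inverse_transform maps them back to stars before ranking. A numpy sketch of the un-scaling and ranking, assuming ratings span 0.5 to 5.0 stars (an assumed range for illustration):

```python
import numpy as np

# Hypothetical scaled predictions in [-1, 1] for five movies.
y_pred_scaled = np.array([[0.9], [-0.2], [0.5], [0.1], [-0.8]])

# Undo MinMaxScaler((-1, 1)) by hand for an assumed 0.5-5.0 star range.
y_min, y_max = 0.5, 5.0
y_pred = (y_pred_scaled + 1.0) / 2.0 * (y_max - y_min) + y_min

# Rank movies from highest to lowest predicted rating.
order = np.argsort(-y_pred.ravel())
print(order)  # [0 2 3 1 4]
```

In practice, use scalerTarget.inverse_transform(y_pred) rather than hand-coding the formula, so the un-scaling always matches the fitted scaler.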

5.2 Predictions for an Existing User

uid = 2
user_vecs, y_vecs = get_user_vecs(uid, user_train_unscaled, item_vecs, user_to_genre)

The model’s predictions are generally within ±1 of actual ratings, showing that it captures user preference patterns effectively.

5.3 Finding Similar Items

After training, each movie has an embedding vector v_m. We can measure similarity using squared distance.

def sq_dist(a, b):
    return np.sum((a - b)**2)
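A quick check of sq_dist on known vectors confirms the behavior: identical vectors have distance zero, and the distance grows with every differing component.

```python
import numpy as np

def sq_dist(a, b):
    # Squared Euclidean distance between two embedding vectors.
    return np.sum((a - b) ** 2)

a = np.array([1.0, 2.0, 3.0])
print(sq_dist(a, a))            # 0.0
print(sq_dist(a, np.zeros(3)))  # 14.0 = 1 + 4 + 9
```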

Compute pairwise distances:

dim = len(vms)  # vms holds one learned embedding vector v_m per movie
dist = np.zeros((dim, dim))
for i in range(dim):
    for j in range(dim):
        dist[i, j] = sq_dist(vms[i, :], vms[j, :])
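The double loop is fine for 847 movies, but the same matrix can be computed in one vectorized expression using the identity ||a - b||² = ||a||² - 2a·b + ||b||². A sketch with random stand-in embeddings:

```python
import numpy as np

# Hypothetical stand-in for the matrix of movie embeddings.
rng = np.random.default_rng(0)
vms = rng.normal(size=(5, 32))

# ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2, for all pairs at once.
sq_norms = np.sum(vms ** 2, axis=1)
dist = sq_norms[:, None] - 2.0 * vms @ vms.T + sq_norms[None, :]

# Agrees with the explicit double loop.
loop = np.array([[np.sum((vms[i] - vms[j]) ** 2) for j in range(5)]
                 for i in range(5)])
print(np.allclose(dist, loop))  # True
```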

Mask the diagonal and display the most similar movies:
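One way to do the masking is with a numpy masked array, so each movie's trivial zero-distance match with itself is ignored when searching for neighbors (a sketch on a toy 3×3 matrix):

```python
import numpy as np

# Hypothetical 3x3 pairwise squared-distance matrix.
dist = np.array([[0.0, 2.0, 5.0],
                 [2.0, 0.0, 1.0],
                 [5.0, 1.0, 0.0]])

# Mask the diagonal so a movie is never reported as its own neighbor.
m_dist = np.ma.masked_array(dist, mask=np.identity(dist.shape[0]))

# Index of the most similar (closest) movie for each movie.
closest = m_dist.argmin(axis=1)
print(closest)  # [1 2 1]
```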

Movie              Genres                 Most Similar Movie   Genres
Toy Story 3        Animation, Adventure   Finding Nemo         Animation, Adventure
The Dark Knight    Action, Crime          Batman Begins        Action, Crime

๐Ÿ Step 6: Congratulations!

🎉 You’ve built a deep learning content-based recommender system!

This architecture underpins many modern recommendation engines. You can extend it by adding more user features (age, demographics, preferences) or item metadata (keywords, descriptions).

Key Takeaways

  • Content-based filtering leverages item features, not just user-item interactions.
  • Neural networks learn nonlinear relationships between users and items.
  • Embedding vectors can be reused for item similarity searches.
  • Combining with collaborative filtering forms hybrid recommenders.

💡 Next step: try deploying your model via TensorFlow Serving or integrating it into a web app!





