Building Recommender Systems with Machine Learning and AI

Frank Kane, Founder of Sundog Education, ex-Amazon

109 Lessons (8h 30m)
    • 1. Introduction and Setup: Build a Recommender! (9:05)
    • 2. Please Follow Me on Skillshare (0:16)
    • 3. Course Roadmap (3:52)
    • 4. Types of Recommenders (3:22)
    • 5. Understanding You through Implicit and Explicit Ratings (4:25)
    • 6. Top-N Recommender Architecture (5:53)
    • 7. Quiz (4:46)
    • 8. Intro to Python: Basics (5:04)
    • 9. Intro to Python: Data Structures (5:17)
    • 10. Intro to Python: Functions (2:46)
    • 11. Intro to Python: Booleans and Loops (3:52)
    • 12. Testing Methodologies for Recommenders (3:49)
    • 13. Measuring Prediction Accuracy (4:06)
    • 14. Measuring Hit Rate on Top-N Recommenders (4:35)
    • 15. Coverage, Diversity, and Novelty (4:55)
    • 16. Churn, Responsiveness, and A/B Tests (5:06)
    • 17. Quiz (2:55)
    • 18. Coding up recommender metrics (6:53)
    • 19. Coding up a test framework (5:08)
    • 20. Evaluating SVD recommendation results (2:24)
    • 21. Architecture of a Recommender Engine (7:27)
    • 22. Coding the Evaluator class (3:55)
    • 23. Coding the EvaluationData class (3:51)
    • 24. Reviewing the Results of our Engine (3:10)
    • 25. Content-Based Recommendations, and the Cosine Similarity Metric (8:58)
    • 26. KNN Recommenders (3:59)
    • 27. Running content-based KNN (5:23)
    • 28. Bleeding Edge Alert! Mise en Scene Recommendations (4:31)
    • 29. Exercise: Dive Deeper into Content-Based Recs (4:26)
    • 30. Measuring Similarity, and Sparsity (4:49)
    • 31. Similarity Metrics (8:32)
    • 32. User-based Collaborative Filtering (7:25)
    • 33. Implementing User-Based CF (4:59)
    • 34. Item-based Collaborative Filtering (4:14)
    • 35. Implementing Item-Based CF (2:23)
    • 36. Exercises in Collaborative Filtering (3:31)
    • 37. Evaluating Collaborative Filtering Methods (1:28)
    • 38. Exercise 2 in Collaborative Filtering (2:17)
    • 39. User-based KNN (4:03)
    • 40. Activity: KNN Recommenders (2:25)
    • 41. Exercise: KNN Recommenders (4:25)
    • 42. Bleeding Edge Alert! Translation-Based Recommendations (2:29)
    • 43. Principal Component Analysis (PCA) (6:31)
    • 44. Singular Value Decomposition (6:56)
    • 45. Activity: SVD (3:46)
    • 46. Improving on SVD (4:33)
    • 47. Exercise: SVD Recommendations (2:02)
    • 48. Bleeding Edge Alert! Sparse Linear Methods (SLIM) (3:30)
    • 49. Introduction to Deep Learning [Optional section] (1:30)
    • 50. Deep Learning Intro: Prerequisites (8:13)
    • 51. Deep Learning Intro: Artificial Neural Networks (10:51)
    • 52. Deep Learning Intro: Playing with Tensorflow (12:02)
    • 53. Deep Learning Intro: Training Neural Nets (5:47)
    • 54. Deep Learning Intro: Overfitting and Tuning (3:55)
    • 55. Deep Learning Intro: Tensorflow Introduction (11:29)
    • 56. Deep Learning Intro: Tensorflow Activity, part 1 (13:19)
    • 57. Deep Learning Intro: Tensorflow Activity, part 2 (12:03)
    • 58. Deep Learning Intro: Keras (2:48)
    • 59. Deep Learning Intro: Keras activity (9:55)
    • 60. Deep Learning Intro: Classification with Keras (3:58)
    • 61. Deep Learning Intro: Keras Exercise (9:55)
    • 62. Deep Learning Intro: Convolutional Neural Networks (CNNs) (8:59)
    • 63. Deep Learning Intro: CNN Architectures (2:54)
    • 64. Deep Learning Intro: CNN Activity (8:38)
    • 65. Deep Learning Intro: Recurrent Neural Networks (RNNs) (7:38)
    • 66. Deep Learning Intro: Training RNN's (3:21)
    • 67. Deep Learning Intro: RNN Activity (11:01)
    • 68. Recommendation Systems with Deep Learning (2:19)
    • 69. Restricted Boltzmann Machines (RBM's) (8:02)
    • 70. RBM Activity, part 1 (12:46)
    • 71. RBM Activity, part 2 (7:11)
    • 72. RBM Activity, part 3 (3:43)
    • 73. Exercise: Tuning RBM's (1:43)
    • 74. RBM Tuning Results (1:15)
    • 75. Auto-Encoders for Recommendations: Deep Learning for Recs (4:27)
    • 76. Activity: Deep Learning on Sparse Ratings Data (7:23)
    • 77. RNN's for recommendations: GRU4Rec (7:23)
    • 78. GRU4Rec Exercise (2:42)
    • 79. Exercise Solution (GRU4Rec) (7:51)
    • 80. Bleeding Edge Alert! Deep Factorization Machines (5:49)
    • 81. More Emerging Tech to Watch (5:14)
    • 82. Introduction to Apache Spark (5:49)
    • 83. Spark Architecture (5:13)
    • 84. Movie recommendations with Spark, MLLib, and ALS (6:02)
    • 85. Scaling it up to 20 million ratings with Spark (4:57)
    • 86. Amazon DSSTNE (4:41)
    • 87. Activity: Amazon DSSTNE in action (9:36)
    • 88. Scaling up DSSTNE (2:14)
    • 89. AWS SageMaker and Factorization Machines (4:24)
    • 90. Activity: Recommendations with SageMaker (7:38)
    • 91. The Cold Start Problem (and solutions) (6:12)
    • 92. Exercise: Implement Random Exploration (0:54)
    • 93. Exercise solution (2:18)
    • 94. Stoplists (4:48)
    • 95. Exercise: Implement a Stoplist (0:32)
    • 96. Exercise solution (2:22)
    • 97. Filter Bubbles, Trust, and Outliers (5:39)
    • 98. Exercise: Remove outlier users (0:44)
    • 99. Exercise solution (4:00)
    • 100. Fraud, The Perils of Clickstream, and International Concerns (4:33)
    • 101. Temporal Effects, and Value-Aware Recommendations (3:30)
    • 102. Case Study: YouTube, Part 1 (3:42)
    • 103. Case Study: YouTube, Part 2 (7:04)
    • 104. Case Study: Netflix, Part 1 (3:59)
    • 105. Case Study: Netflix, Part 2 (3:55)
    • 106. Hybrid Recommenders and Exercise (2:54)
    • 107. Exercise solution (4:17)
    • 108. More to Explore (2:31)
    • 109. Let's Stay in Touch (0:46)

Project Description

For your final exercise in this course, I’ll challenge you to build your own hybrid recommender. As with most things, there’s more than one way to do it. Perhaps you could reserve some slots in your top-N results for certain recommenders. Perhaps you use one recommender primarily, with fallbacks in case the primary recommender can’t fill all of your top-N slots. Or perhaps you generate rating predictions from many recommenders in parallel, and add or average their scores together before ranking them.

The latter approach is what I’m going to challenge you to do. Write a new algorithm in the recommender framework we’ve created for this course, called “HybridAlgorithm.” All it should do is take in a list of other algorithms compatible with surpriselib, along with a weight you wish to assign to each, and train each of those algorithms when fit() is called on your HybridAlgorithm. When estimate() is called to generate a rating prediction for a specific user/item pair, compute a weighted average of each algorithm’s prediction, using the weights you assigned.
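
If it helps to see the shape of it, here’s a minimal sketch of what such a class might look like, assuming surpriselib’s standard AlgoBase interface. The names and details are just illustrative; your implementation (and the one in the course materials) may differ:

    from surprise import AlgoBase

    class HybridAlgorithm(AlgoBase):
        """Weighted blend of several surpriselib-compatible algorithms (illustrative sketch)."""

        def __init__(self, algorithms, weights):
            AlgoBase.__init__(self)
            self.algorithms = algorithms   # list of AlgoBase-derived recommenders
            self.weights = weights         # one weight per algorithm

        def fit(self, trainset):
            # Train the hybrid itself, then each component algorithm, on the same training data.
            AlgoBase.fit(self, trainset)
            for algorithm in self.algorithms:
                algorithm.fit(trainset)
            return self

        def estimate(self, u, i):
            # Weighted average of each component's rating estimate for this user/item pair.
            weighted_sum = 0.0
            total_weight = 0.0
            for algorithm, weight in zip(self.algorithms, self.weights):
                est = algorithm.estimate(u, i)
                if isinstance(est, tuple):  # some algorithms return (estimate, details)
                    est = est[0]
                weighted_sum += est * weight
                total_weight += weight
            return weighted_sum / total_weight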

Then try it out, and see how your HybridAlgorithm performs compared to using its component algorithms individually. Which recommenders you choose to combine is up to you; personally, I chose to combine our RBM algorithm with the content-based KNN algorithm we developed earlier, in the spirit of combining behavior-based and semantic-based information into a single system.
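
As a rough sanity check outside the course framework, you could also cross-validate the hybrid against its components using surpriselib’s built-in tools and the HybridAlgorithm sketch above. The algorithm choices and weights below are just an example, not the combination I used:

    from surprise import Dataset, KNNBasic, SVD
    from surprise.model_selection import cross_validate

    # Built-in MovieLens 100K dataset that ships with surpriselib (downloaded on first use).
    data = Dataset.load_builtin('ml-100k')

    # Compare each component against a 50/50 weighted hybrid of the two.
    candidates = [
        ('SVD', SVD()),
        ('Item KNN', KNNBasic(sim_options={'user_based': False})),
        ('Hybrid', HybridAlgorithm([SVD(), KNNBasic(sim_options={'user_based': False})],
                                   [0.5, 0.5])),
    ]

    for name, algorithm in candidates:
        print(name)
        cross_validate(algorithm, data, measures=['RMSE', 'MAE'], cv=3, verbose=True)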

The framework files are included in the course materials you’ll be directed to in the course videos, but the pieces you need are also attached to this project.

It’s easier than it sounds, so give it a shot yourself.
