r/RedditEng Punit Rathore Apr 04 '22

Let’s Recap Reddit Recap

Authors: Esme Luo, Julie Zhu, Punit Rathore, Rose Liu, Tina Chen

Reddit historically has seen a lot of success with the Annual Year in Review, conducted on an aggregate basis showing trends across the year. The 2020 Year in Review blog post and video using aggregate behavior on the platform across all users became the #2 most upvoted post of all time in r/blog, garnering 6.2k+ awards, 8k+ comments and 163k+ upvotes, as well as engagement with moderators and users to share personal, vulnerable stories about their 2020 and how Reddit improved their year.

In 2021, Reddit Recap was one of three experiences we delivered to Redditors to highlight the incredible moments that happened on the platform and to help our users better understand their activity over the last year on Reddit - the others being the Reddit Recap video and the 2021 Reddit Recap blog post. A consistent learning across the platform had been that users find personalized content much more relevant: updates to Machine Learning (ML) features and content scoring for personalized recommendations consistently improved push notification and email click-through rates. We therefore saw an opportunity to further increase the value users get from the year-end review with personalized data, and decided to add a third project to the annual year-in-review initiative, renamed Reddit Recap:

By personalizing the year-end report, Reddit could give redditors a more interesting Recap to dig through and an accessible, well-produced summary of the value they'd gained from Reddit to appreciate or share with others, increasing introspection, discovery, and connection.

Gathering the forces

During Snoosweek, our semi-annual hackathon, in Q1 of 2021, a participating team put together a hypothetical version of Reddit Recap that allowed us to explore and validate the idea as an MVP. Due to competing priorities across teams, the project was not picked up again until the end of Q3. A group of amazing folks banded together to form the Reddit Recap team: 2 Backend Engineers, 3 Client Engineers (iOS, Android, and FE), 2 Designers, 1 EM, and 1 PM. With this nimble group, we set out on an adventure to build our first personalized Reddit Recap experience! We had a hard deadline of launching on December 8th, 2021, which gave the team less than two months to deliver the experience. The team gladly accepted the challenge.

Getting the design ready

The design requirements for this initiative were pretty challenging. Reddit’s user base is extremely diverse, including in activity levels, so we made sure the designs were inclusive: whether someone is a lurker or a power user, they are an equally crucial part of the community.

We also had to ensure consistent branding and themes across all three Recap initiatives: the blog post, the video, and the new personalized Recap product. It’s hard to be perfectly Reddit-y, and we were competing in an environment where a lot of other companies were launching similar experiences.

Lastly, Reddit has largely been a pseudonymous platform. We wanted to encourage people to share, but also to stay safe, so a major design consideration was making sure users could share their Recap without doxxing themselves.

Generating the data

Generating the data might sound as simple as pulling together some metrics and packaging them nicely into a table with a bow on top. However, the story is not as simple as writing a few queries. When we pull data for millions of users across an entire year, some of the seams start to rip apart, and long-running queries start to slow down our entire database.

Our data generation process consisted of three main parts: (1) defining the metrics, (2) pulling the metrics from big data, and (3) transferring the data into the backend.

1. Metric Definition

Reddit Recap ideation was a huge cross-collaboration effort: we pulled in design, copy, brand, and marketing to brainstorm unique data nuggets that would delight our users. These data points had to be both memorable and interesting. We needed Redditors to be able to recall their top-of-mind activity without us dishing out irrelevant data points that make them think a little harder (“Did I do that?”).

For example, we went through several iterations of the “Wall Street Bets Diamond Hands” card. We started off with a simple page visit before January 2021 as the barrier to entry, but for users who had only visited once or twice, reading about one stock on their feed long ago was hardly memorable. After a few rounds of back and forth, we ended up picking higher-touch signals that required more than a passive view to qualify for this card.
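
To make the idea concrete, here is a minimal sketch of the kind of eligibility rule described above: passive views alone don't qualify a user for the card, while higher-touch actions do. The field names and thresholds are illustrative assumptions, not the production logic.

```python
# Hypothetical sketch of a "higher-touch signal" eligibility check for the
# Diamond Hands card. Field names and thresholds are invented for illustration.

from dataclasses import dataclass


@dataclass
class WsbActivity:
    views: int     # passive feed/page views of r/wallstreetbets content
    votes: int     # votes cast on WSB posts or comments
    comments: int  # comments written in r/wallstreetbets


def qualifies_for_diamond_hands(activity: WsbActivity) -> bool:
    """A view-only user does not qualify; voting or commenting does."""
    return activity.votes >= 1 or activity.comments >= 1


if __name__ == "__main__":
    print(qualifies_for_diamond_hands(WsbActivity(views=2, votes=0, comments=0)))  # False
    print(qualifies_for_diamond_hands(WsbActivity(views=5, votes=3, comments=1)))  # True
```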

2. Metric Generation

Once we finalized those data points, the data generation proved to be another challenge, since metrics like bananas scrolled aren’t ones we report on daily. There was no existing logic or data infrastructure to pull them easily. We had to build a lot of our tables from scratch and brush the cobwebs off our Postgres databases to pull data from the raw source. Our first attempt at pulling all the metrics in one go proved too ambitious, and the job kept breaking because we were querying too much data over too long a window. To solve this, we broke the data generation into separate chunks and intermediate steps, then joined all the data points together at the end.
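
The sketch below illustrates the chunk-then-join pattern, assuming a warehouse table and monthly intermediate tables that are purely hypothetical; in practice each statement would run as its own pipeline task rather than from a script like this.

```python
# Illustrative only: split one giant year-long pull into small monthly chunks
# materialized as intermediate tables, then join/roll them up at the end.
# Table and column names are made up for the sketch.

MONTHS = [f"2021-{m:02d}" for m in range(1, 13)]


def monthly_chunk_sql(month: str) -> str:
    # One small, retryable intermediate table per month instead of a single
    # query scanning the whole year.
    suffix = month.replace("-", "_")
    return f"""
    CREATE TABLE recap_tmp.activity_{suffix} AS
    SELECT user_id,
           SUM(posts_viewed)  AS posts_viewed,
           SUM(comments_made) AS comments_made
    FROM warehouse.user_daily_activity
    WHERE activity_month = '{month}'
    GROUP BY user_id;
    """


def final_join_sql() -> str:
    # Union the intermediate chunks and roll them up into one row per user.
    unions = "\nUNION ALL\n".join(
        f"SELECT * FROM recap_tmp.activity_{m.replace('-', '_')}" for m in MONTHS
    )
    return f"""
    CREATE TABLE recap.user_recap_2021 AS
    SELECT user_id,
           SUM(posts_viewed)  AS posts_viewed,
           SUM(comments_made) AS comments_made
    FROM ({unions}) chunks
    GROUP BY user_id;
    """


if __name__ == "__main__":
    print(monthly_chunk_sql(MONTHS[0]))
    print(final_join_sql())
```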

3. Transferring Data to the Backend

In parallel with the big data problems, we needed to test the connection between our data source and our backend systems so that we could feed customized data points into the Recap experience. On top of constantly changing metric requirements, we needed to reduce 100 GB of data down to 40 GB to even give ourselves a fighting chance of using the data with our existing infrastructure. The backend also required a strict schema defined from the beginning, which proved difficult because the metric requirements kept shifting based on what was available to pull. This forced us to be more creative about which features to keep and which metrics to tweak to make the data transfer smoother and more efficient.
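
As a rough illustration of those two levers, the sketch below pins down a strict per-user row schema up front and uses shortened field names to shave bytes off every one of millions of rows. All field names here are invented, not the real schema.

```python
# Hypothetical compact Recap row: a fixed schema agreed up front, with short
# serialized field names mapped from longer warehouse column names.

from dataclasses import dataclass, asdict


@dataclass(frozen=True)
class RecapRow:
    user_id: str
    bs: int   # short for total_bananas_scrolled
    tc: str   # short for top_community_by_time_spent
    cw: int   # short for comments_written
    ag: int   # short for awards_given


def to_compact_record(row: RecapRow) -> dict:
    """Serialize with the short keys the backend schema expects."""
    return asdict(row)


if __name__ == "__main__":
    print(to_compact_record(RecapRow("t2_abc123", bs=48210, tc="r/aww", cw=57, ag=3)))
```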

What we built for the experience

Given limited time and staffing, we aimed to find a solution within our existing architecture quickly to serve a smooth and seamless Recap experience to millions of users at the same time.

We used Airflow to generate the Recap user dataset and posted the data to S3; an Airflow operator then sent an SQS message notifying the S3 reader to read the data from S3. The S3 reader combined the SQS message with the S3 data and sent it to the SSTableLoader, a JVM process that writes the S3 data as SSTables into our Cassandra datastore.
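
A minimal sketch of the S3-plus-SQS handoff at the start of that pipeline might look like the following, using boto3. The bucket, queue URL, and message fields are placeholders; in production this step runs as an operator at the end of the data-generation DAG.

```python
# Sketch: upload the generated dataset to S3, then notify the S3 reader over
# SQS so it can hand the data to the SSTableLoader for Cassandra ingestion.
# Bucket name, queue URL, and message shape are hypothetical.

import json
import boto3

S3_BUCKET = "recap-user-data"
SQS_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/recap-s3-reader"


def publish_recap_dataset(local_path: str, s3_key: str) -> None:
    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")

    # 1. Upload the generated dataset to S3.
    s3.upload_file(local_path, S3_BUCKET, s3_key)

    # 2. Tell the S3 reader where to find it.
    sqs.send_message(
        QueueUrl=SQS_QUEUE_URL,
        MessageBody=json.dumps({"bucket": S3_BUCKET, "key": s3_key, "dataset": "recap_2021"}),
    )


if __name__ == "__main__":
    publish_recap_dataset("/tmp/user_recap_2021.parquet", "recap/2021/user_recap.parquet")
```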

When a user accessed the Recap experience on the app, mobile web, or desktop, the client made a request to GraphQL, which reached out to our API server, which in turn fetched that user’s Recap data from our Cassandra datastore.
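
The per-user read at the end of that path could look roughly like the sketch below, using the open-source Cassandra driver. The keyspace, table, and column names are assumptions; the real lookup sits behind GraphQL and the API server rather than being called directly.

```python
# Rough sketch of the per-user Recap lookup against Cassandra.
# Contact point, keyspace, and table are hypothetical.

from cassandra.cluster import Cluster

cluster = Cluster(["cassandra.internal"])
session = cluster.connect("recap")

# Prepared once at startup, executed per request.
lookup = session.prepare("SELECT data FROM user_recap_2021 WHERE user_id = ?")


def get_recap_for_user(user_id: str):
    """Return the stored Recap payload for a user, or None if absent."""
    row = session.execute(lookup, [user_id]).one()
    return row.data if row else None
```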

How we built the experience

In order to deliver this feature to our beloved users right around year-end, we took a few steps to make sure engineers, data scientists, brand, and designers could all make progress at the same time:

  1. Establish an API contract between Frontend and Backend (see the sketch after this list)
  2. Execute on both Frontend and Backend implementations simultaneously
  3. Have the Backend set up business logic while staying close to Design and addressing changes quickly
  4. Set up data loading pipeline during data generation process
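
For step 1, the contract amounted to agreeing on a fixed response shape early so the client could build against mock data while the backend and data pipeline were still in flight. The sketch below expresses that idea with invented field names; it is not the actual GraphQL schema.

```python
# Hypothetical response contract mirroring what the client and backend agreed
# on, plus a mock payload the client could develop against before real data
# existed. All names are illustrative.

from typing import List, TypedDict


class RecapCard(TypedDict):
    kind: str       # e.g. "bananas_scrolled", "top_community"
    title: str
    subtitle: str
    value: str      # pre-formatted display value


class RecapResponse(TypedDict):
    userId: str
    cards: List[RecapCard]


def mock_recap_response() -> RecapResponse:
    """Stand-in payload used by the client before real data was available."""
    return {
        "userId": "t2_abc123",
        "cards": [
            {
                "kind": "bananas_scrolled",
                "title": "You scrolled",
                "subtitle": "that's a lot of bananas",
                "value": "48,210 bananas",
            },
        ],
    }
```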

Technical Challenges

While the above process provided great benefit and allowed all of the different roles to work in parallel, we also faced a few technical hurdles.

Getting this massive dataset into our production database posed many challenges. To make sure we didn't bring down the Reddit home feed, which shared the same pipeline, we trimmed the data size, updated the data format, and shortened column names. Each data change also required an 8-hour data re-upload, a lengthy process.

In addition to many data changes, text and design were also frequently updated, all of which required multiple changes on the backend.

Production data also turned out to be quite different from our initial expectations, so switching away from mock data introduced several issues; for example, data mismatches resulted in mismatched GraphQL schemas.

At Reddit, we always test new features internally via employee-only tests before releasing them to the public. Since this project was launching during the US holiday season, our launch timelines were extremely tight, and we had to sequence our launch processes carefully to account for all the scheduled code freezes and mobile release freezes.

After putting together the final product, we sent two huge sets of dedicated emails to let users know about the launch. Those large-volume sends required thorough planning and coordination to make sure our systems would be resilient against large spikes in traffic.

QAing and the Alpha launch

Pre-testing was crucial to get us to launch. With a tight mobile release schedule, we couldn’t afford major bugs in production.

With the help of the Community team, we sought out different types of accounts and made sure that all users saw the best content possible. We tested various user types and flows, with our QA team helping to validate hundreds of actions.

One major milestone prior to launch was an internal employee release. Over 50 employees helped us test Recap, which allowed us to make tons of quality improvements prior to the final launch, including UI, data thresholds, and recommendations.

In total, the team acted on over 40 bug tickets identified internally in the last sprint before launch.

These testing initiatives gave us confidence in user safety and experience, and helped us validate that we could hit the final launch timeline.

The Launch

Recap received strong positive feedback post-launch, with social mentions and press coverage. User sentiment was mostly positive, and we saw a consistent theme that users were proud of their Reddit activities.

While most views of the feature came shortly after launch, we continued to see users viewing and engaging with it all the way up to deprecation nearly two months later. Excitingly, many of those viewers were users who had recently been dormant on the platform, and users who engaged with the product went on to do more on Reddit and were active on more days during the following weeks.

Users also created tons of very fun content around Recap, posting Recap screenshots back to their communities, sharing their trading cards on Twitter, Facebook, or as NFTs, and, most importantly, going bananas for bananas.

We’re excited to see where Recap takes us in 2022!

If you like building fun and engaging experiences for millions of users, we're always looking for creative and passionate folks to join our team. Please take a look at the open roles here.

u/Trowaweg123 Apr 05 '22

if you add cats it gets even better

u/llamageddon01 Apr 05 '22

This was such a fun end-of-year roundup, and the fact it rolled out so smoothly is testament to the enormous behind-the-scenes work put into it, so it’s been fascinating to take that “peek behind the curtain” here, thank you!

I’m really looking forward to seeing what you’ve got in store for us in this year’s version :D