# Less Is More: Optimizing Email Volume Part 2

July 26, 2016

In Part 1 of this series, we introduced the problem of email volume optimization. We showed that, on the one hand, email is among the principal drivers of engagement, but on the other hand, excessive email volume results in more negative responses, such as members unsubscribing from an email type or reporting emails as spam. We discussed an approach to email volume optimization that minimizes email volume while maximizing downstream sessions and minimizing the number of member complaints (unsubscribes and spam reports). In this post, we’ll discuss what we learned from this approach and describe the new, improved approach that resulted.

## Earlier approach to volume optimization

In Part 1, we presented our formulation of the problem as a multi-objective optimization (MOO) problem, which reads as follows. Given a pool of generated emails, where each email is targeted to a particular user:
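In sketch form (with assumed notation, where *x_{e}* ∈ {0, 1} indicates whether generated email *e* is sent, *s_{e}* is the predicted probability of a downstream session, and *c_{e}* the predicted probability of a complaint), the formulation reads:

```latex
\begin{aligned}
\min_{x} \quad & \sum_{e} x_e \\
\text{s.t.} \quad & \sum_{e} x_e\, s_e \;\ge\; \text{target}, \\
& \sum_{e} x_e\, c_e \;\le\; \text{tolerance}, \\
& x_e \in \{0, 1\}. \qquad (1)
\end{aligned}
```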

What this means is that we would like to send the fewest emails that keep the number of sessions initiated in response to those emails above a certain target, and the number of complaints about those emails below a certain tolerance.

## Analysis of the earlier approach

This approach retains those emails for which the probability of a downstream session is high and the probability of a complaint is low. Intuitively, the emails retained in this approach are ones that the targeted user likes. We performed a fine-grained analysis of the results of this approach on our four user segments. The ratio of emails sent to emails generated within each segment is tabulated in the chart below.

It is evident that this approach drops fewer emails for the more active users. To judge whether this result is satisfactory, we need to understand the value of an email.

We think that emails serve as reminders for users to visit the mobile or web application to carry out certain desired tasks. For example, an email could prompt a user to engage in a group discussion, apply for a job position, make a new connection, or read the latest news on a topic of interest.

However, it would not be unreasonable to assume that not all users need or like such prompts. So we set up an experimental user bucket to ascertain to what extent different groups of users require email prompts. In this bucket, each email generated for a user was dropped with a random probability. We plotted the email volume received by a member within a week vs. the percentage of members who were active (visited the mobile or web application at least once) among the members who received the same email volume. This plot is shown below for the four user segments. We also plotted the impact of email volume on the total number of sessions by a member within a week.

*Figure 1: Impact of emails on member activeness and total sessions.*

The two plots show similar trends, so we’ll discuss only one of them here. Two important observations can be drawn from the plot on the left:

Not all users need emails to trigger a visit. This is reflected in the intercept of the various lines in the plot. For example, almost all the users in the daily-active segment visit the mobile or web application organically even when no emails are sent to these users. Likewise, a significant percentage of users within the weekly-active and the monthly-active segments are active even when no emails are sent to these users.

The impact of emails on the probability of a member being active varies across the user segments. This is reflected in the slope of the various lines in the plot. The impact is most pronounced in the monthly-active segment and least pronounced in the daily-active segment. The impact of email is small but very important in the dormant segment because organic engagement within this segment is very low.

In other words, we can say that most of the daily-active users do not need frequent emails to drive engagement, while a significant percentage of the dormant users ignore emails altogether.

These observations motivated us to re-evaluate whether we were optimizing for the right metrics. In Equation (1) we are optimizing for an engagement metric that is attributable solely to emails—namely, downstream sessions. We call such a metric an “email metric.”

However, in reality, this is a narrow view of the ecosystem of a networking service, which comprises several tools and utilities that together deliver value to members. Within this broader view, email is only one such utility. A networking service should ideally only need to monitor the health of the overall ecosystem, without having to explicitly track every individual contributor.

## New approach to volume optimization

Based on this conclusion, we decided to shift our focus from the email-specific metric of downstream sessions to a site-wide metric: total active users. This metric needs to be measured over a time frame; for example, we could measure the number of active users within a day, a week, or a month. We use a time frame of one week. As observed in Figure 1, the probability of a member being active within a time frame depends, in part, on the number of emails received by the member over that time frame. Naturally, this probability depends not just on the number of emails, but also on the kind of emails received. We call the set of emails received by a member over a certain time frame the member's “email diet.” A better email diet should result in a higher probability of the member being active. The volume optimization problem now looks like:

where the target and tolerance are specified as fractions of the expected active users and complaints if all generated emails were sent. We set the target to a large fraction and the tolerance to a small one.

To express the above problem mathematically, let

- *D_{u}* = the set of emails sent to user *u*
- *a_{u}(D)* = a function which predicts the probability of *u* being active over the time frame if email diet *D* is sent to *u*
- *c_{u}(D)* = a function which predicts the expected number of complaints by *u* if email diet *D* is sent to *u*

Then the optimization problem becomes:
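Consistent with the definitions above, the problem can be sketched as:

```latex
\begin{aligned}
\min_{\{D_u\}} \quad & \sum_{u} |D_u| \\
\text{s.t.} \quad & \sum_{u} a_u(D_u) \;\ge\; \text{target}, \\
& \sum_{u} c_u(D_u) \;\le\; \text{tolerance}. \qquad (2)
\end{aligned}
```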

Note that:

- Obtaining the prediction functions *a_{u}(D)* and *c_{u}(D)* is a non-trivial task, because there can be exponentially many combinations of emails, and hence an exponential number of email diets *D*.
- The complexity of the optimization problem will depend on the functional forms of these prediction functions.

## Solving the new volume optimization problem

We know how to approximate *c_{u}(D)* from the method described in Part 1 of this series. We make an independence assumption between emails and approximate the expected total number of complaints as a sum over the individual complaint probabilities of all emails *e* in *D*:

*c_{u}(D)* ≈ Σ_{e ∈ D} *c_{e}*,

where *c_{e}* = Pr(complaint from *e* | *e* is sent).

We train a logistic regression model for *c_{e}* with each email sent in the past as a training example. The response is 1 or 0 depending on whether the targeted user *u* complained about email *e*.
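As a minimal sketch (the weights and features here are hypothetical stand-ins for the real model), scoring an email with a trained logistic model and summing the per-email probabilities over a diet looks like:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def complaint_prob(weights, features):
    # c_e = Pr(complaint from e | e is sent), from a logistic regression model
    z = sum(w * x for w, x in zip(weights, features))
    return sigmoid(z)

def expected_complaints(complaint_probs):
    # Independence assumption: c_u(D) is the sum of the per-email
    # complaint probabilities c_e over the diet D
    return sum(complaint_probs)

# Two hypothetical emails, each with a two-dimensional feature vector
diet_probs = [complaint_prob([0.5, -2.0], x)
              for x in ([1.0, 1.0], [1.0, 2.0])]
c_u = expected_complaints(diet_probs)
```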

*a_{u}(D)* cannot be approximated in the same way. Unlike the complaint prediction function, *a_{u}(ɸ)* ≠ 0: a user may be active even with an empty email diet. Also, unlike in the case of a complaint, where we know exactly which email resulted in a particular complaint event, there is no simple way to attribute the event of a user being active to an individual email.

Here is how we capture the contribution of a user’s email diet towards their activeness. We model user activeness through a logistic regression model in which features representing the email diet are included in the regression function. To be more precise, we express the activeness of a user *u* as a function of the following features:

- Features representing the email diet *D_{u}*: Each email may be represented by a set of features. We call each possible combination of such feature values a kind *k* of email; each email is of exactly one kind. For example, we could represent each email by two features, the type of the email and the language of the email. Then if there are 10 possible email types and 5 possible languages, each email will be one of 50 possible kinds. We then create features *n_{k}* which represent the number of emails of each kind *k* in the diet. These features are expected to capture the impact of emails on user activeness, averaged across all users.
- Features representing *u*: These are features which are independent of the email diet, such as *u*'s locale, age, last visit time, activities within the mobile or web application over the past week, etc. We denote these features as *f_{i}*. These features are expected to capture the tendency of a user to be active in the absence of any email triggers.
- Features representing interactions between the above features: These are expected to capture the personalized impact of emails on the activeness of *u*.
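To make the “kind” encoding concrete, here is a minimal sketch (the type and language values are hypothetical):

```python
from collections import Counter

# Each email is represented by its features, e.g. (type, language).
# With 10 types and 5 languages there are 50 possible kinds; the diet
# features n_k count the emails of each kind k in the diet.
def diet_features(diet):
    return Counter(diet)

diet = [("jobs", "en"), ("news", "en"), ("jobs", "en")]
n = diet_features(diet)
```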

If we let *A_{u}* denote the Bernoulli random variable corresponding to the activeness of user *u*, then:
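Consistent with the diet features *n_{k}*, the user features *f_{i}*, and the coefficients *α, β_{k}, 𝛾_{i}, 𝛿_{ki}* named below, a plausible reconstruction of the model is:

```latex
\Pr(A_u = 1) \;=\; \sigma\!\left(
\alpha \;+\; \sum_{k} \beta_k\, n_k \;+\; \sum_{i} \gamma_i\, f_i
\;+\; \sum_{k}\sum_{i} \delta_{ki}\, n_k\, f_i
\right),
\qquad \sigma(z) = \frac{1}{1 + e^{-z}}
```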

We train this model using training data collected over a one week period. Each user is a training example. For a given user *u*, the features representing the diet correspond to the set of emails received by *u* over the training period. The response is 1 or 0 depending on whether the user was active over the training period.
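A toy end-to-end sketch of fitting such a model with plain SGD (the real model uses the full feature set, including interactions, and a production learner; the data below is hypothetical):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(examples, lr=0.1, epochs=500):
    """Plain SGD on log loss; examples are (feature_vector, label) pairs."""
    w = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for x, y in examples:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]
    return w

# Toy data: feature vector is [1 (bias), n] where n is the number of
# emails of a single kind in the user's weekly diet; the label is 1 if
# the user was active that week (hypothetical pattern).
data = [([1, 0], 0), ([1, 0], 0), ([1, 2], 1), ([1, 3], 1)]
w = train_logistic(data)
```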

Once the model coefficients *α*, *β_{k}*, *𝛾_{i}*, *𝛿_{ki}* are obtained, we collect the terms involving the diet features and rewrite the above equation as:
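Collecting terms, a consistent reconstruction is:

```latex
\Pr(A_u = 1) \;=\; \sigma\!\left( \omega_{u0} + \sum_{k} \omega_{uk}\, n_k \right),
\quad \text{where} \quad
\omega_{u0} = \alpha + \sum_{i} \gamma_i\, f_i,
\qquad
\omega_{uk} = \beta_k + \sum_{i} \delta_{ki}\, f_i .
```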

We now take the first-order Taylor series approximation of the above function around *𝜌_{u0}* for each user *u* to obtain *a_{u}(D)*. We approximate around *𝜌_{u0}* so that we are roughly in the correct region of the sigmoid function for a given user *u*. This approximation makes the resulting MOO problem amenable to standard solvers.
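Writing *𝜌_{u}* for the argument of the sigmoid, one consistent reconstruction of the approximation is:

```latex
a_u(D) \;\approx\; \sigma(\rho_{u0})
\;+\; \sigma(\rho_{u0})\bigl(1 - \sigma(\rho_{u0})\bigr)\,(\rho_u - \rho_{u0}),
\qquad \rho_u = \omega_{u0} + \sum_k \omega_{uk}\, n_k ,
```

which is linear in the diet counts *n_{k}*.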

The *a_{u}(D)* above comprises two parts, which may be interpreted as follows. The first part, *⍵_{u0}*, captures the organic engagement of user *u*; it corresponds to the intercept of the lines in Figure 1. The second part captures the impact of emails on *u*'s activeness; *⍵_{uk}* corresponds to the slope of the lines in Figure 1.

Once we have the prediction functions *c_{u}(D)* and *a_{u}(D)*, we can write down the precise optimization problem from Equation (2) to obtain a send/drop decision for each generated email. The form of this optimization problem turns out to be no different from the form of the optimization problem in Part 1, and the solution is hence of the same form. Let 𝜇 and 𝜈 be the solutions to the dual problem corresponding to the active users constraint and the complaints constraint, respectively. Then for an email *e* of kind *k* which is generated for a user *u*:
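The per-email rule is a thresholding of the email's dual-weighted value. As a sketch under assumptions (here `gain_uk` stands for the linearized activeness contribution of one kind-*k* email to user *u*, and the unit cost of sending is normalized to 1; the post's exact rule may differ):

```python
def should_send(mu, nu, gain_uk, c_e):
    # Send e iff its dual-weighted activeness gain outweighs the unit
    # sending cost plus the dual-weighted complaint risk. This is a
    # sketch of a send/drop rule of this form, not the exact equation.
    return mu * gain_uk - nu * c_e >= 1.0

# A high-gain, low-risk email is sent; a low-gain, risky one is dropped.
decisions = [should_send(10.0, 5.0, g, c)
             for g, c in [(0.3, 0.05), (0.05, 0.2)]]
```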

## Analysis of the new approach

Since the multipliers *⍵_{uk}* above are personalized for each user *u*, each user receives an email diet comprised of the emails that the user is most interested in.

The ratio of emails sent to emails generated within each user segment is tabulated below. The results are consistent with our observations in Figure 1. From the slopes of the lines in Figure 1, we know that emails have the greatest impact on the monthly-active segment, followed by the weekly-active and the daily-active segments. We observe the same trend in the table below. For the dormant segment, we know that emails have a small but important role to play. The volume optimization model captures this quite well.

The new approach of optimizing for site-wide metrics achieves a significant reduction in email volume while preserving one of the most important site-wide metrics: active users. The result is a better distribution of emails across the user segments. This approach can easily be extended to incorporate other site-wide metrics of interest, such as total pageviews. In that case, we would need to express pageviews by a user as a function of the email diet through a suitable regression model.