Building a platform: Learnings from our pursuit for leverage
April 3, 2020
One of our core engineering principles here at LinkedIn is to create leverage. In practice, what this means is to build software that is easy to reuse by other teams. An effective way we are able to do this is to favor platforms that generalize long-term functionality rather than build inflexible, single-purpose software that will only serve specific, short-term needs.
In that vein, when we were tasked with building a product that would help members prepare and practice for interviews, we envisioned building a generic platform capable of serving similar assessment-related use cases. The effort, which involved bringing together teams from different organizations to design and build a solution that could scale and serve all use cases, taught us several important lessons in platform building and project execution that we will share below.
Background: Interview Prep
Interview Prep is a feature that helps members prepare for interviews. It provides access to tips from experts on how to answer commonly asked interview questions and provides a space for members to privately practice answering them. Members’ interactions with this feature can be categorized into the following buckets:
- Accessing expert interview content, including tips on how to approach these questions and sample answers
- Drafting responses to sample questions that are then stored privately on LinkedIn
- Receiving private feedback from one (or more) of your LinkedIn connections
Typical interview preparation workflow
As we started discussing our strategy for the Interview Prep feature more widely across LinkedIn and began sketching out concrete engineering requirements, we encountered similar use cases being developed by different teams. Unfortunately, as we dug into the technical details, we discovered that the technology behind each of these existing use cases was too tightly coupled to its specific use case for us to reuse it without a significant rebuild. At the same time, it was clear that there was considerable overlap across the functionalities supporting these existing use cases. We had to make a choice: add Interview Prep to the list of independent use cases, or create a platform that could provide the features that comprised this overlap.
Designing a platform for Assessments
Deciding whether to build a component of software to be generic or specific is a common question engineers regularly face. However, it’s not always an easy choice to make when there are multiple teams involved and the decision may determine an entire team’s roadmap for months, quarters, or even years. This is the scenario we found ourselves in after realizing that the common denominator behind Interview Prep and the previously mentioned use cases was the ability to serve questions and assess responses to them. We decided to kick the project off by meeting with the relevant teams to gain perspective on the pros and cons for creating a general platform. Ultimately, driven by the potential of leverage to be created, we began designing a central platform to serve everyone’s needs, and so the Assessments platform was born.
At the core of this new “Assessments” platform are the relationships among:
- “Assessments,” which group questions together,
- “Questions,” which are the individual prompts a member responds to, and
- “QuestionResponses,” which are the member-created responses to questions
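The relationships among these entities can be sketched in code. The class and field names below are hypothetical, chosen for illustration; the post does not describe the platform's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative data model only: names and fields are assumptions,
# not LinkedIn's real Assessments schema.

@dataclass
class Question:
    question_id: str
    text: str

@dataclass
class Assessment:
    assessment_id: str
    title: str
    # An Assessment groups Questions together.
    questions: List[Question] = field(default_factory=list)

@dataclass
class QuestionResponse:
    response_id: str
    question_id: str  # the Question being answered
    member_id: str    # the member who authored the response
    body: str         # the response draft, stored privately
```

For example, an Interview Prep assessment would hold a list of interview questions, and each member draft would be a `QuestionResponse` pointing back at one of those questions by ID.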
Each member-facing interaction or feature boils down to operations on these three entities. An important condition we had for pursuing the new platform route was to ensure that each of the features we envisioned building on top of this platform would unlock value and leverage for all of our partners. Examples of these types of features include an expanded variety of supported question types, sophisticated evaluation and scoring methodologies, usage reporting, and so on. We stress-tested and refined our design by running it through the gamut of these current (and future) features to ensure it supported them naturally, such that the platform remained intuitive and extensible. As we did so, one question that kept coming up and making us pause was how to define and handle “client-specific” concerns.
Drawing the line: Client vs. platform
In general, platforms aim to minimize the overhead needed to take advantage of already-built features. However, clients frequently need support for some amount of additional data or functionality unique to their use cases. Platforms typically respond to these types of feature requests with one of the following:
- “Yes”: Working with the requester to generalize the request into a platform feature
- “No”: Asking the requester to find an alternative solution outside the platform
- “Let’s talk...”: When neither of these is suitable, adding use case-specific functionality to the platform
The first of these options generally requires the most work up front, but can leave both the platform and the client better off. The last option is generally discouraged, as it often breaks separation of concerns and leads to problems around ownership and maintainability. However, when business objectives and project timelines work against the first two options, case-by-case functionality is sometimes a must.
When it came to Interview Prep, we often found ourselves at a crossroads, choosing among these three options. As owners and developers of both the platform and the client use case, we had to balance keeping a clean separation of concerns on behalf of the platform with finding the quickest path to launch Interview Prep.
An example of Interview Prep
As an example, when members view a question in Interview Prep, they are shown supplemental videos or text blurbs providing tips, advice, and even sample responses designed to help in an interview. To support this, we needed a place to store the association between questions and this contextual content. At this point in development, storing associations in the existing database for the platform would have been a light lift. However, because we considered this content unique to Interview Prep, we couldn’t rationalize simply adding it there because it was easy. Nor did we feel we could generalize the content as a platform feature at the time, since no other clients were interested in it, and designing and implementing for non-existent use cases would require making too many assumptions. Instead, we built a separate storage solution for Interview Prep. It required more work, but allowed us to keep the platform concerned with only platform-wide functionality. Decisions like this helped set a precedent for our partners to follow as they onboarded to our platform and faced similar situations.
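The separation described above amounts to two independent stores: the platform owns only assessment data, while the client keeps its contextual content in its own store, associated to platform questions by ID alone. A minimal sketch, with all names and interfaces invented for illustration (the post does not describe the actual storage systems):

```python
# Hypothetical sketch of the client/platform storage split.
# The platform store knows nothing about Interview Prep's tips;
# the client store references platform questions only by ID.

class AssessmentPlatformStore:
    """Platform-owned storage: platform-wide data only."""
    def __init__(self):
        self._questions = {}

    def put_question(self, question_id, text):
        self._questions[question_id] = text

    def get_question(self, question_id):
        return self._questions[question_id]


class InterviewPrepContentStore:
    """Client-owned storage: expert tips and sample answers,
    kept outside the platform and keyed by question ID."""
    def __init__(self):
        self._tips_by_question = {}

    def put_tips(self, question_id, tips):
        self._tips_by_question[question_id] = tips

    def get_tips(self, question_id):
        return self._tips_by_question.get(question_id, [])
```

Because the client store only holds an ID reference, the platform's schema stays free of use case-specific content, at the cost of the client maintaining its own storage.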
Utilizing leverage: Content management
A crucial feature of the Assessments platform is the ability to carefully control and manage the quality of questions and assessments. The ideal execution of this is a content management system (CMS) with an easy-to-use interface for content creation, editing, versioning, archival, deletion, etc. Rather than building a new CMS from scratch, we realized we could build on the work of one of our platform partners: LinkedIn Learning.
LinkedIn Learning houses a large library of high-quality instructional video courses along with additional supplemental content. As it turned out, to implement course quizzes to accompany their content, the Learning team was already working on a brand new application for their content editors called Cosmo. Cosmo is a specialized application that enables the curation of learning content at scale through features that go beyond a traditional CMS. Thanks both to its sophisticated feature set and the types of content it specializes in, Cosmo presented a very appealing option for content management in the Assessments platform. With their experience building software focused on content and its management, the Cosmo team also helped open our eyes to additional considerations and features we hadn’t thought of.
Service diagram for Assessments platform clients
Making Cosmo, the LinkedIn Learning CMS, work with our Assessments platform was no small feat: we had to connect two separate systems that were being independently developed. This was eventually accomplished by using a “publishing” component that takes data from a source system, transforms it to be compliant with the destination system, and then stores it at the destination. An inevitable downside to this type of component is that it forces the two systems to be coupled. That said, we decided that access to sophisticated content management features outweighed the cons.
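The publishing component works like a small extract-transform-load step: read each record from the source system, transform it into the destination's schema, and write it to the destination. The sketch below assumes invented record shapes and function names; the real Cosmo-to-Assessments pipeline is not described in the post.

```python
# Hypothetical sketch of a "publishing" component. All record shapes,
# names, and the DestinationStore class are assumptions for illustration.

def publish(source_records, transform, destination):
    """Pull records from the source, transform each into the
    destination's schema, and store it at the destination."""
    for record in source_records:
        destination.save(transform(record))


def cms_to_platform(record):
    """Example transform: map an assumed CMS quiz-question record
    into an assumed Assessments-platform question shape."""
    return {
        "question_id": record["id"],
        "text": record["prompt"],
    }


class DestinationStore:
    """Stand-in for the destination system's storage."""
    def __init__(self):
        self.saved = []

    def save(self, record):
        self.saved.append(record)
```

The coupling cost mentioned above shows up in the transform: any schema change on either side means updating (and coordinating) this mapping.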
Since completing the integration, we have experienced some of the pains we anticipated in coupling the systems. Small changes, such as simple data schema updates, required more effort to coordinate and execute between the systems and teams than we would have liked. That being said, we have also reaped the benefits of having a CMS that helps our partners’ content editors scale their products beyond what they would be able to otherwise.
In summary, there are a few important lessons that we’ve learned over the past year of building our Assessments platform:
It’s hard to build software to be “perfect”

While building the Assessments platform, we occasionally found ourselves stuck choosing between designs, each carrying minor faults or risks that we were reluctant to accept. This would inevitably drag out decision-making longer than we wanted, only to land us near where we started. Looking back, we could have saved ourselves some time and hair-pulling by better time-boxing our decisions and moving forward with the best available solution.
Sometimes less is more
Requirements and priorities constantly change. As a platform team, this is true not only for your own requirements, but also for those of your clients. As such, we’ve learned that it’s generally best to solve for immediate and concrete requirements, and to be wary of getting too far ahead of ourselves by over-engineering for future or hypothetical use cases. As obvious as it sounds now, we frequently caught ourselves going down the rabbit hole of “What would be cool is if…” Over time, we’ve come to trust an iterative approach of implementing short-term solutions that optimize for future extension to guide the platform in the right direction.
Trust and collaboration

This is a long-standing core value at LinkedIn that we came to appreciate the longer we worked on this platform. As mentioned, there were several incumbent use cases for Assessments before the idea of a central platform was even conceived. Convincing partner teams that there was sufficient long-term benefit to our platform to justify the short-term costs (and, at times, pain) of working with us required a common culture of trust and collaboration on all sides. We found this in abundance with our partners, and it is not something we ever take for granted.
In conclusion, we view these lessons as pillars of a framework for approaching important project or design decisions and for weighing the tradeoff between investment and ROI. We carry these experiences forward as we continue to scale our platform and develop new and exciting features, not only for the Interview Prep experience, but for all of our clients.
And of course we wouldn’t be where we are without the hard work put in by the Platform Engineering team. Thanks to Himanshu Khurana, Xixi Xiao, Xingyu Chen, Kevin Lai, Ke Jin, Joey Addona, Karthik Naidu DJ, David Ding, Eduardo Monroy Martinez, Marcos Santanna, Eric Chan, Richard Cook, April Dong, Yikang Gu, Quynh Nguyen, and Akshay Mehta, as well as our partners, Bef Ayenew, Joey Bai, Jie Zhang, Shiqi Wu, Christian Mathiesen, Mahir Shah, Joel Young, Alp Artar, Richard Meng, Smitha George, Chris Fong, Jeremy Owen, Amy Slawson, Kevin Bevis, and David Dong.