Podcast

How LinkedIn's platform team leverages real-time feedback

Max Kanat-Alexander and Or Michael Berlowitz (Berlo) share how they gather both periodic and real-time feedback from developers.

Timestamps

  • (0:58) Overview of the listening channels used by Max and Berlo’s team
  • (2:49) Origin story of the Developer Engagement and Insights team
  • (5:00) Perspectives on volume metrics
  • (8:51) How the periodic surveys work
  • (14:20) Investment required to build the periodic surveys and real-time feedback
  • (15:28) How results are handled
  • (21:40) How the real-time feedback tool works
  • (25:15) Where the idea for the real-time feedback tool came from
  • (28:58) Building an MVP for the real-time feedback tool
  • (35:40) Other stakeholders involved in triaging feedback
  • (37:34) The experience developers have when encountering the real-time feedback tool
  • (40:44) How feedback collected via surveys differs from that of the real-time feedback tool
  • (41:46) Advice for other teams considering implementing this approach

Listen to this episode on Spotify, Apple Podcasts, Pocket Casts, Overcast, or wherever you listen to podcasts.

Transcript

Abi: Max and Berlo, thanks so much for coming on the show today. Really excited to chat.

Max: Yeah. Thanks Abi, I’m really happy to be here.

Berlo: Thank you for having us.

Abi: So, kicking off, Max, I understand you’re the tech lead of what’s called the Developer Engagement and Insights team. You had mentioned to me that the areas of focus for this team are system metrics, surveys, and real-time feedback. Could you briefly define each of these areas a little bit better so that listeners can understand?

Max: Yeah, sure. So, when we say metrics, we mean quantitative data. We mean numbers that are good when they go up and bad when they go down, or one or the other. They’re something you can trend on a graph and that you can use as a manager or as a leader to understand the direction of your system. In our case, we’re not usually talking about things like operational metrics. We’re not talking about debugging a service in real time or anything like that. We’re talking about numbers on a graph that managers use to determine strategic or tactical directions for teams. And these would be things like build time or numbers related to code review, not for individuals but for whole teams, for large groups of people.

When we say surveys, we mean periodic questionnaires that we send out to our developers that have a significant number of questions in them that they answer all at once and that they ask them about their experience for an entire quarter or an entire half year or something like that. And then, our real-time feedback systems are things where right after a developer has taken an action or completed a particular workflow, we can ask them a question right then so the context is fresh in their mind.

Abi: Well, we’ll be talking about all these different components throughout this episode. But just to confirm, Berlo, I understand you help lead the real-time feedback system, is that correct?

Berlo: Yeah, I’m in fact the tech lead for the developer engagement portion of the team. And yeah, real-time feedback is one of the main products that we have.

Abi: Awesome. Well, Max, I’d love to ask you a little bit more about the current team. I remember you mentioning to me that this developer engagement and insights team was formed out of nothing. So, I want to ask you, was there a specific event or moment that sparked the need for this team or was it more of a gradual process?

Max: I would say it was both. So, LinkedIn does have a history of using data to understand its developers, but we just realized that we could have more data and do an even better job than we were doing. And that one of the reasons that sometimes the data could be occasionally inconsistent or unreliable was that there wasn’t a dedicated team behind getting and gathering the data. Also, we realized that we wanted to have this real-time feedback system and that required a lot of actual engineering effort that we were going to have to make a thing out of nothing. And it all came around the time that we were doing another reorganization into what is now called Developer Productivity and Happiness.

And so, at that time we actually, when we formed the Developer Productivity and Happiness org out of the previous tools org that we had at that time, we created developer engagement and insights as one of the major pillars of the organization.

Abi: So, this developer insights team is really a part of a broader org focus on developer productivity and satisfaction. You mentioned there was this recognition of wanting more data. I mean, where was that ask coming from? Was it from within your org just feeling like you were at a loss of understanding the developer population at LinkedIn and where to invest? Or was it coming from managers and leaders outside of your organization who wanted more insights about their teams?

Max: At first, I think most of it came from within the Developer Productivity and Happiness Organization because we are the people tasked with making developer tools and we want to be sure that we’re doing the best possible job that we can. Sometimes in a business, when you get requests like this, it’s sort of tied back in a nebulous way to performance management or something like that. And that was very much not our goal. So that was not a thing we wanted to do. We were just people who wanted to make sure that we could provide the best possible products for our users internally, if that makes sense.

Abi: Yeah. That reminds me, because you mentioned the nebulous tie to performance management. When we were talking before the show, I asked you, when you mentioned metrics, “What? You mean things like lines of code, number of pull requests, things like that?” And you said to me, “I hate volume metrics.” So, I wanted to ask you to elaborate on what you mean by that.

Max: Sure. So, it’s very dangerous to try to measure the volume of output of an individual developer as a metric that you want to use to understand the performance of an individual. It leads to all sorts of odd behaviors because people will know that that’s the metric and they will game the system in a way that’s not good for the business. And this is a statement about my experience and observations across the industry. Let’s say, for example, that you wanted to measure push volume as a metric for an individual.

What that does is it makes people find ways to create automation that runs as their username or make a lot of very small changes that maybe could have been larger changes. I’m a proponent of small changes, so that’s not necessarily terrible that you’re making small changes. But let’s say you’re like, “I’m going to fix one typo and then I’m going to submit another change to fix another typo and then I’m going to submit another change to fix another typo.” And they’re all right next to each other in the same sentence. Obviously, you’re padding the numbers.

But one of the worst problems with that is that it makes the metrics that you actually need, which are the metrics that help you understand your users and the success of your efforts as developer tools be obscured by the behavior of developers who have been coached into behaving in an odd way based on those individual metrics. So, all of a sudden, you’re trying to understand what the real problems are with code review in your company or what the real problems are with build in your company. And because you have decided to measure individuals on these metrics, they have all engaged in some strange behavior that makes it impossible for you to now actually sort through what is real behavior and what is real problems and what is people trying to game the system.

Abi: I really love your description of that. And it’s interesting, you had mentioned this was sort of a viewpoint directed at the industry as a whole, and I would agree with everything you said and also say it’s so fascinating how prevalent those types of volume metrics still are across organizations. And so, hopefully, people listening can take to heart what you shared and find better ways to measure, or better ways to understand, their organizations as well as performance.

Max: I think that it comes out of people trying to draw an analogy between manufacturing processes and software development, but they’re really not the same thing, right? In manufacturing process, you have an assembly line or a series of people and they take actions and they produce a physical object. When I describe software engineering to people, I say imagine that there’s a thousand people and they all have to write a book that has a million pages and they all have to change what’s in the book. And the ultimate product is the book. The whole book is the product and that is what users use and buy. It’s not the same. Knowledge workers are not factory workers.

Abi: I love that. And Berlo, I want to ask you, because I’m sure you have an opinion on this as well. But what have you seen throughout your career around these types of metrics and do you have any great analogies like the book?

Berlo: That’s a great question. I think Max is the master of analogies here. But yeah, I think that, yeah, I’ve seen in fact many times that people try to measure the wrong set of things and as the saying goes, “You are what you measure.” So, it’s really important to measure the right things, otherwise you turn into something that you don’t want to be. So, that’s why it’s really important to focus on the right set of metrics.

Abi: That makes sense. And I know you guys spend a lot of time on measurement and feedback, so I want to get into some of those other specific approaches. And one thing I’m really excited to highlight in this conversation is this real-time feedback approach because I think I’ve mentioned to both of you, it’s something that I think is pretty unique and it’s something I’ve seen leaders of other developer platform teams thinking about and also trying to figure out.

But before we talk about the real-time approach, I’d love to ask you about the periodic survey you run because you also have some, I think, pretty unique approaches in how you do that. So, maybe just starting with some overall context and background. Berlo, I’d love to ask you, how long have you been running this survey for? What types of questions does it include and how often do you run it?

Berlo: Yeah, that’s a great question. We’ve been running the survey for about four years now and most of the questions are basically satisfaction questions like, how satisfied are you with a certain tool or with a certain process? And we also provide some open text questions for people to provide general comments on entire areas, entire phases of development as we call them. And that gives us the balance of understanding the satisfaction of specific pieces, but also getting general comments, which can be specific about entire sections of the SDLC, right?

We’ve been running this survey quarterly and what we’ve recently done is actually we introduced a recommendation engine that based on certain conditions makes a recommendation for question owners to come in and say, “Should this question really be asked this time? Or can we actually skip it in order to minimize survey fatigue?” And we’re in the middle of evolving it. So, one of the things that we want to be able to do is come in and look at the quantitative metrics and see for example, if build times suddenly have gone up for a certain cohort of users, we want to be able to target that cohort and say, “Maybe we should measure their satisfaction and see if their satisfaction has changed. Maybe it’s just a different type of behavior and their satisfaction is still the same, but maybe it actually went down.”

And this whole recommendation engine still keeps the power in the hands of the question owners. We want to make sure that the people that own the questions can still override the system and come in and say, “We actually do want to ask this question or we actually want to skip it even though the system’s recommending that we do ask it.” And basically, it’s about asking the right set of questions and making sure that we minimize survey fatigue wherever it’s possible.

Abi: Well, I think you touched on two things that I’ve never heard of before. I’ve never heard of any other company or tool doing the types of things that you’re describing. So, I want to double click on these. So, the first is you’re describing the recommendation engine. So, if I understand correctly, you have a system that before you begin a survey, before you send out a survey, it helps sort of filter through all the possible questions and trim the list down. Is that correct?

Berlo: Yeah.

Abi: And I’d love to understand how that works. So how do you actually do that?

Berlo: Yeah, so what we do is we look at the patterns of behavior, the objective patterns of behavior, and also the previous answers and things like that. And we want to see, is there really a change quarter over quarter? Was there a specific investment that was made in a specific area that merits asking this question again? Or is this basically status quo? If it’s status quo, then maybe we can actually pick and choose and say, “Instead of asking everyone this question, we’ll only ask a subset of users.” Because if all you want to do is maintain a baseline, maybe there is no need to ask thousands of engineers this question, right? Yeah. So, that’s essentially what the recommendation system does. It comes in and trims the questionnaire, if you will.
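To make the recommendation engine idea concrete, here is a minimal sketch of the kind of logic Berlo describes: re-ask everyone when something in the area changed, sample a small baseline otherwise, and let the question owner override the suggestion. The names, threshold, and sample size (QuestionHistory, recommend, 0.3, 200) are illustrative assumptions, not LinkedIn’s actual implementation.

```python
from dataclasses import dataclass
import random

@dataclass
class QuestionHistory:
    question_id: str
    past_scores: list        # mean satisfaction per past quarter, newest last
    recent_investment: bool  # did the owning team ship changes in this area?

def recommend(history, population, change_threshold=0.3, baseline_sample=200):
    """Recommend whether to ask a question this quarter, and to whom.

    Returns ("ask_all" or "ask_sample", recipients). Question owners can
    always override whatever the engine recommends.
    """
    if history.recent_investment:
        # Something changed in this area, so re-measure the whole population.
        return "ask_all", population

    if len(history.past_scores) >= 2:
        delta = abs(history.past_scores[-1] - history.past_scores[-2])
        if delta >= change_threshold:
            # Satisfaction moved quarter over quarter; worth asking broadly.
            return "ask_all", population

    # Status quo: keep a baseline by sampling a subset instead of everyone.
    sample = random.sample(population, min(baseline_sample, len(population)))
    return "ask_sample", sample
```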

Abi: Well that’s incredible. I’m really a little bit blown away. So yeah, it makes a lot of sense. I mean, if you’re just trying to get a baseline of something every once in a while, and if you know that no change has been made in that area in the past six months, it’s stayed relatively level, you probably don’t need to ask every engineer at the company again the following quarter. So that’s really interesting. I remember Max mentioning to me, and I think you’ve touched on it as well. But you also personalized the survey per developer. Is that correct? Based on the tools that each engineer specifically uses, is that also something that’s built into your system?

Berlo: Yeah. So, the beating heart of essentially the developer engagement and insights team is data, the data collection aspect that we have. So, the same data that actually powers our system metrics is also used to identify who are the right people to target for specific questions for every tool that’s onboarded to the survey. So, essentially what we do is we build a custom survey for every target user of the survey based on their usage patterns of the tools and processes. And that also means that we get the added benefit that the results that we get are from actual users and they do not contain experiences of other people that may have heard something from someone else. And those experiences are just being amplified. Essentially, we’re asking the right set of people and getting the right set of information.
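As a rough illustration of this usage-based targeting, the sketch below builds a per-developer question list from (user, tool) telemetry events. The tool names, question text, and function name are hypothetical stand-ins for whatever data the real system draws on.

```python
from collections import defaultdict

# Hypothetical mapping from tools onboarded to the survey to their questions.
QUESTIONS_BY_TOOL = {
    "ci": ["How satisfied are you with the CI system?"],
    "deploy": ["How satisfied are you with the deployment system?"],
    "ide_plugin": ["How satisfied are you with the IDE plugin?"],
}

def build_personalized_surveys(usage_events):
    """usage_events: iterable of (user, tool) pairs derived from telemetry.

    Returns {user: [questions]}, so each developer is only asked about tools
    they actually used and answers come from real users of each tool.
    """
    tools_by_user = defaultdict(set)
    for user, tool in usage_events:
        tools_by_user[user].add(tool)

    surveys = {}
    for user, tools in tools_by_user.items():
        questions = [q for t in sorted(tools) for q in QUESTIONS_BY_TOOL.get(t, [])]
        if questions:
            surveys[user] = questions
    return surveys
```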

Abi: Really powerful approach. Would you be able to share, I mean, is this all a proprietary system you guys have built, or is it built on top of off-the-shelf survey tools? I’m curious how you’ve actually developed this because it’s such an advanced system.

Berlo: Yeah. So, it’s all in-house, basically. We just built it ourselves. There’s off-the-shelf software in terms of technologies like database technologies or programming languages, obviously, but other than that, it’s all in-house.

Abi: And I mean, loosely speaking, what kind of investment has this taken? You mentioned you’ve been doing this survey for four years. Have you built and then continually improved this proprietary system for four years? Is it a pretty major effort? Is it in maintenance mode now, or is it actively, constantly being improved?

Berlo: So, if I try to look back, I think this team started about three years back. So, the survey existed before the team did. In terms of investment, we started investing in this space about three years ago. Essentially, we started with real-time feedback as the first approach and then we folded back into adding more improvements to the periodic survey as well. And in terms of investment, I think that the recommendation system, for example, is an investment that we recently made. We started it roughly half a year ago. So yeah, I would say we are continually investing in this space.

Abi: That’s awesome. And I remember Max mentioning to me, or actually you mentioned to me, that you did this double take where you did a lot with real-time feedback and then realized there were some things that were actually better suited to be asked periodically. So, we’ll touch back on that later. But I want to ask about how you deal with the results. And Max, I’d love to ask you this: you run this quarterly survey. What do you actually do in terms of the analysis, sharing this out, communicating it, taking action? Because I think that’s an important part of survey fatigue as well.

Max: Totally. So, this actually brings up a whole other concept, which is that we have segmented our developer population into various cohorts and we call these cohorts the developer personas. So, this is different types of developers and they’re separated by their workflows. So, the categories are things like backend engineer, web engineer for frontend, iOS engineer, Android engineer, SRE, categories like that. And for each one of those cohorts, we have assigned an owner and that owner is a member of that cohort. So, if you are the backend owner, you are actually a backend developer at LinkedIn. And they have multiple jobs. One of their jobs is to advocate to tool and infrastructure owners on behalf of their cohort.

But one of their major duties is that they actually perform the survey analysis for their cohort. So, every quarter we get this survey data and the persona owner, that’s what they’re called, the owner for that cohort, will put together an analysis document and present it to various executives and tech leads in charge of the relevant infrastructure for that persona.

Abi: That’s really interesting. So, you distribute or decentralize the post survey work by, delegating isn’t really the right word, but having each of these persona domain owners lead that effort. And is there a company level? Is there an aggregate analysis as well?

Max: There is an aggregate analysis. It’s composed of our overall satisfaction scores and then also just a broad overview of what we learned from the persona owners. However, we discovered that it’s important not to focus too hard on the company overall compared to the personas. Because if you do, you end up only focusing on whatever the largest group of developers is. And importance to the business is not necessarily equal to the number of people who are working on the thing. In almost every company, certainly in most large tech companies, there are fewer iOS engineers than there are, say, backend engineers.

Yet, iOS platforms are for the users of those companies, some of the most important platforms. Same with Android, same with any mobile platform. And very often what happens in companies that look at their developers as just a single mass, these mobile developers get heavily overlooked even though they are critical to the functioning of the business and improving their experience can have a dramatic impact on how we actually deliver. Like in LinkedIn, how we deliver value to our members and customers. So, yes.

Abi: That’s a great point. Yeah. It’s such great advice because a lot of leaders I talk to, they run surveys and then say, “Yeah, the top issue was X.” But now that you bring up this example of the iOS developers being a small percentage, and I’m imagining that a lot of these organizations, those types of personas are potentially being overlooked because of the focus on the aggregate rather than breaking it down by persona. That’s a really great tip. I appreciate you sharing that.

I wanted to ask, so these persona leads perform the analysis, present the results, and advocate back to your group, your broader organization, around what the challenges and pain points or opportunities may be. So, what’s the final piece of that loop? How do you close it? Is there an email that your org sends out that says, “Hey, we got all the different feedback from all the different personas and we heard you”? Or do you begin triaging those things and they end up on your roadmap that quarter? What’s the action piece of the workflow?

Max: The idea is that it feeds into the planning process. So, if we have a quarterly or half-yearly or yearly planning process, depending on who is doing what and what level we’re planning at, we take that data and include it. We try to time everything so that people have this data right before they’re about to do planning so they can actually take it into account when they write their plans. And then, a lot of this actually gets discussed when we are doing planning. So, there’s a question from an executive or something, especially if you’re the executive who was there in the meeting who got the presentation. You can be like, “Well, why are we focusing on this instead of this? We should have a conversation about whether we should help people with this particular pain point.”

Berlo: And to add on top of that a little bit. So, also the other aspect is making sure that we close the loop back with the people that provided the feedback in the first place. So that sharing aspect and what we have is in fact we share improvements to the persona populations about things that have been done and also feedback that has been heard. So, as part of this analysis, we also have a short statement from the persona owners. What are the important things that we’ve heard from you this round? And we share that back to the population that provided this feedback in the first place. And that’s extremely powerful because at that point in time, people feel that they’ve been heard.

Abi: That makes sense. And just communicating that back, restating the feedback you’ve received, definitely I think is a good way to close the loop and just make sure people know that their feedback’s not going into a vacuum, as people tend to say. So, you have this periodic survey, which is very impressive, this proprietary approach. But I understand you identified some challenges with it that led you to develop a real-time feedback approach. And again, as I mentioned earlier, I’m really excited to dive into this. Berlo, I’d love to ask you before we get into it, can you just briefly describe how real-time feedback works so people listening have a general sense of what we’re talking about?

Berlo: Yeah, so real-time feedback is essentially a system that first collects information on actions that developers take across all the different tools, all over the tooling ecosystem. And based on this contextual information that it has collected, it decides if, when, and how to solicit feedback from the developer. So, let’s take an example. Say a developer came in and created a PR and she hasn’t been recently asked about her tooling experience. The real-time feedback system may reach out to her, maybe over email or using an instant messaging platform, and ask her to provide feedback. And if she has been recently asked, then the system can decide, “Oh, let’s actually skip this person this time, because we don’t want to bombard users with too many requests.”
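A minimal sketch of the “if, when, and how” decision Berlo describes might look like the following, assuming per-user preferences and a record of when each person was last asked. All names and the one-week default are illustrative, not the actual system.

```python
from datetime import datetime, timedelta

DEFAULT_COOLDOWN = timedelta(weeks=1)  # assumed default: at most one ask per week

def maybe_solicit(event, last_asked, preferences, now=None):
    """Decide whether to ask for feedback right after a workflow event.

    event:       e.g. {"user": "jane", "action": "pr_created"}
    last_asked:  {user: datetime of that user's most recent solicitation}
    preferences: {user: {"cooldown": timedelta, "channel": "email" or "im"}}
    Returns (should_ask, channel).
    """
    now = now or datetime.utcnow()
    user = event["user"]
    prefs = preferences.get(user, {})
    cooldown = prefs.get("cooldown", DEFAULT_COOLDOWN)

    asked_at = last_asked.get(user)
    if asked_at is not None and now - asked_at < cooldown:
        return False, None  # asked too recently, so skip this person this time

    return True, prefs.get("channel", "email")
```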

Abi: That makes sense. So, you throttle it. One question that popped into my head when you were describing it. When you talk about gathering feedback and you mentioned you do this over different channels, sometimes instant message or web, what are you actually asking? I mean, are these multiple choice multi question surveys or is this just an open-ended, hey we’d love your feedback? Is it more open-ended? Is it multi-question, single question, multiple choice? What kind of feedback do you actually gather?

Berlo: Yeah, that’s a great question. So, when we first started, we started with a very simple proof of concept just to see if it was working, if it was resonating with people. And there, essentially, we asked the same type of questions that we were asking in the survey. We asked people to provide a satisfaction rating and provide some comments if they had additional information to share. Now, in terms of the real-time feedback system, we have the capability to introduce deep dive questions and essentially provide multiple choice, and even, based on the answer to a specific question, introduce a secondary question.

Say, in the first question you answered that you were extremely satisfied. Then maybe there is no follow-up question. But if you were extremely unsatisfied, then there is a follow-up question like, what aspect were you extremely unsatisfied about? Was it the reliability, was it latency, was it something else, the user experience, ergonomics, whatever it is.
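As an illustration, a deep-dive follow-up like the one Berlo describes can be modeled as a question whose unhappy answers branch to a secondary question. The identifiers and answer choices below are hypothetical.

```python
# Hypothetical question definition with an answer-dependent follow-up.
DEEP_DIVE = {
    "text": "What were you unsatisfied about?",
    "choices": ["Reliability", "Latency", "User experience", "Other"],
}

CI_SATISFACTION = {
    "text": "How satisfied are you with the CI system?",
    "choices": ["Very satisfied", "Satisfied", "Neutral",
                "Unsatisfied", "Very unsatisfied"],
    # Only unhappy answers branch to the deep-dive question.
    "follow_ups": {"Unsatisfied": DEEP_DIVE, "Very unsatisfied": DEEP_DIVE},
}

def next_question(question, answer):
    """Return the follow-up question for this answer, or None if there isn't one."""
    return question.get("follow_ups", {}).get(answer)
```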

Abi: That’s really interesting and we’ll get back to the mechanics of how this system works and everything because I’m really interested in that. But since we’re on this thread, we just talked about the kind of analysis and follow up closing the loop process for the periodic surveys. What does it look like for the real-time feedback, since I imagine you’re getting this real-time feedback in real time throughout the quarter? So, do you wait until the end of the quarter to follow up or are you having conversations with these people regularly in the moment or on the same day that you receive the feedback?

Berlo: So, generally speaking, we look at the quarterly boundary as the upper limit to come back, close the loop, and share back information. Sometimes, there may be follow-ups or closing of the loop that happens earlier. If there’s a piece of feedback that came through the real-time feedback system and was actually acted on beforehand, then yeah, definitely we’ve had cases in which we’ve reached back out to the people that provided this feedback and closed the loop with them.

Abi: That makes sense. I want to rewind a little bit. You mentioned when your team was formed that you had actually started with this real-time feedback approach. That was really the first thing you took on and invested in before coming back to the periodic survey system. So, I’d love to better understand, where did this idea for real-time feedback even come from? Was it born specifically out of the challenges you were facing with the periodic surveys, or was it just an independent idea that you guys had and were excited about as well?

Berlo: So, I think that there were definitely advantages that we were looking to get out of the real-time feedback approach. We wanted to get information in real time instead of waiting for the quarter boundary. Sometimes, that is just too late. And what we did essentially is, we looked at what we were getting through the periodic survey and we tried to see can we actually construct a different listening channel that would provide us benefits that the existing channel did not. When we went back to the periodic survey, we made improvements there as well.

Essentially, we took our learnings and a lot of the benefits of the real-time feedback system, such as making sure that we target the right users and so forth based on actual usage patterns and we applied those learnings to our periodic survey listening channel as well.

Abi: Got it. That’s really interesting. You identified some gaps in the periodic survey, which sort of led to the genesis of the real time, but you’ve also reapplied some of those learnings back to the periodic. Max, I’d love to get your perspective. I know you were there as well when these ideas were being brainstormed and this project kicked off.

Max: Yeah, I was. The only thing I wanted to add is I did want to give some credit to our vice president of Developer Productivity and Happiness, Jared Green, because as far as I know, the idea came from him. He had this visionary idea of, “Let’s just ask developers right after the thing happened.” And I was skeptical, honestly. I was like, “That’s not going to work.” But then I was like, “Look, we’ll do it. It doesn’t sound impossible, it just sounds unlikely.” That’s what I thought. And we did it, and yeah, he was right. So, credit to Jared.

Abi: That’s a funny story I’d love to ask, why were you skeptical? Where did that skepticism come from?

Max: I was skeptical because it’s hard to remember now because now I have all of this experience of having seen the thing work. So now, I have to go back in my mind to a time when I hadn’t seen the thing work. I had lots of concerns. I was concerned about annoying developers. That’s definitely a concern that I had. I was concerned about the fidelity of the information that we would get. A person just had one experience, are we sure that randomly sampling experiences is going to give us the pain points that we want to see? And I guess I was just, I thought, “Wow, this is really going to be a lot of engineering effort. Are we really going to get as much or the same value out of this that we’re getting out of the survey?” But I was wrong. So, mea culpa.

Abi: I mean, this is from your perspective. Oh yeah, go ahead, Berlo.

Berlo: Yeah, well I think that the challenges are real. Like, when it comes to, for example, for annoying our developers. So, one of the things that we have done is we introduced a preference system. So, developers can come in and say, “You know what? My contribution limit when it comes to feedback participation, I don’t want to be asked more than say once every two weeks or once every month. And I prefer to only be asked over email and I never want to be asked over an instant message,” for example. So, these challenges are real and I think it’s fair to say that we did our best to overcome them.

Max: And I also recall that I was concerned, now that I’m thinking about it more, I was concerned about response rate. I was concerned that we would never get a significant enough response rate that we would get statistically significant information. And with the stuff that Berlo and team implemented, I think they showed me about three months in and they definitely proved me wrong. So, way to go.

Abi: I’d love to ask about, it sounds like you guys put together an initial, I don’t know, MVP or V1 to prove out that this could work. What was that V1 and what kind of scenario, what did you capture feedback on?

Max: Yeah, so we started with a very limited scope because obviously we didn’t know if this was going to work. So, we onboarded very select user journeys. We started with the deployment system and then we also onboarded the continuous integration system via a feedback widget on the webpage that allowed people to view the current status of CI jobs. And then, after we saw that the information that we got was so valuable, we basically expanded to more and more use cases. And at this point in time, I think we have over a hundred different scenarios that are onboarded through real-time feedback.

Abi: Well, that’s incredible. I know we won’t have time to get through all one hundred, but we’ll definitely ask you for some more examples. Earlier, Max mentioned he was skeptical early on: what do we get out of this, especially given the investment required, that we don’t already get from the periodic survey? So, just using that V1, those two scenarios, what kind of feedback did you get that made it clear this was different from the type of feedback you were getting from the periodic surveys?

Berlo: Yeah, so one example that I think is pretty powerful about the signals that we were getting through real-time feedback versus the periodic survey. Via the periodic survey, we knew that people were perceiving the CI system to be unreliable, but we really didn’t understand why, because the reliability metrics were fine. So, basically, using a targeted real-time feedback campaign, we found out what the source of that perception was. It turns out that the estimated run times of CI jobs weren’t as accurate as we wanted them to be. So that is explicit feedback that we got from the real-time feedback system. And by providing this information back to the team that develops the CLI tooling, we were able to change that perception and make sure that the developers are more satisfied with the system.

Abi: Well, that’s a great scenario and example and it sounds like it clearly demonstrated the value of this approach. I’m curious to go more into how the system itself works. So, you mentioned it’s, I don’t know, three parts. You capture the context and data about what developers are doing, which then triggers these surveys or feedback gathering mechanisms. So, starting off with the capturing context piece. It sounds like, I mean you’ve instrumented all of your developer tools to feed in what developers are, who’s active on that tool and the specific actions they’re performing within them. Could you share a little bit more about how that aspect of this tool works?

Berlo: Yeah, so LinkedIn is very much a Kafka company. So, a lot of things are built on top of Kafka and there’s a lot of information that exists in our Kafka sources. So, what we did is we made sure that we find those out and we collect the information that’s coming on those Kafka streams. And then, basically, as you said, we collect this information and then we make a decision if, how, and when to solicit feedback from developers. And then, once developers provide us feedback, it comes back into the feedback capturing system that we have. And that goes into the analysis phase that Max was talking about before, the persona program and all of that. So, that is essentially the whole loop. And then there’s the aspect also of sharing back to developers that we mentioned. So that is the entire loop that we have.
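For readers unfamiliar with this pattern, here is a rough sketch of consuming tool events from Kafka and handing them to a solicitation step, using the kafka-python client. The topic names, broker address, and helper function are assumptions for illustration, not LinkedIn’s actual streams.

```python
import json
from kafka import KafkaConsumer  # kafka-python client

def solicit_feedback(user, action):
    """Stand-in for the decision and delivery steps (throttling, channel choice)."""
    print(f"asking {user} for feedback about {action}")

# Hypothetical topic and broker names; in practice you would subscribe to
# whatever streams the developer tools already publish.
consumer = KafkaConsumer(
    "pr-events", "ci-job-events", "deploy-events",
    bootstrap_servers="kafka:9092",
    group_id="realtime-feedback",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value  # e.g. {"user": "jane", "action": "deploy_completed"}
    solicit_feedback(event["user"], event["action"])
```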

Abi: And so, I’d love to ask about that second piece. So, you gather a lot of data through Kafka from your various developer tools. How do you actually configure the survey? Is this something that you configure through a UI, or does your development team go in and instrument logic on top of the data layer to fire something off? How’s that defined and configured and managed?

Berlo: Yeah, that’s a great question. So, we have several listening channels, as we’ve mentioned. When it comes to, for example, the feedback widget, that’s completely configurable. So, the people that own the specific UI, the specific tool, can come in and configure where this feedback widget shows up. Where do they want it? Do they want it as a button on the side? Do they want it to pop up? What is the specific configuration that they want? For what we call our out-of-band channels, so things like emails and instant messages, we do not have at this point in time a UI that allows you to configure that. But from the system perspective, it’s something that’s configurable. Think about it as config files. They can come in and provide a config and then everything works from there.
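A hypothetical example of what one such scenario configuration might contain, expressed here as a Python dictionary rather than whatever format LinkedIn actually uses; every field name is an assumption.

```python
# Hypothetical configuration for one real-time feedback scenario. Tool owners
# would check in something like this, and the system picks it up from there.
DEPLOY_FEEDBACK_CAMPAIGN = {
    "name": "deploy-experience",
    "trigger": {
        "topic": "deploy-events",        # event stream to watch
        "action": "deploy_completed",    # event type that fires the ask
    },
    "channels": ["email", "im"],         # out-of-band channels to use
    "widget": {
        "enabled": True,
        "placement": "side_button",      # or "popup"
    },
    "question": {
        "text": "How was your deployment experience just now?",
        "scale": 5,
        "free_text_prompt": "Anything else you'd like to share?",
    },
    "throttle": {"max_per_user_per_week": 1},
}
```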

Abi: And do you use this for mostly your proprietary internally developed tools or do you also use this to gather feedback on top of third-party vendor developer tools as well?

Berlo: Yeah, that’s another great question. So, most of our efforts and attention are focused towards the internal tools that we develop in house. We do collect this feedback also for some external tools, where essentially, it’s either feedback that we want to go back and provide to the external vendor and come in and say, “This is a pain point that we’ve seen from our developers and we’d like you to improve on this.” That’s one aspect. Another aspect is when we’re comparing two systems, sometimes we have two external systems that maybe overlap in functionality and we can come in and provide the executives an insight and say, “This system is much more satisfactory to use according to our users versus the other one.”

Max: Yeah. Also, there are situations like GitHub for example. LinkedIn uses GitHub. When you use GitHub, a lot of your workflow as a company is web hooks that you’ve implemented. And when developers think of those, they don’t think of them as individual products. So, you can’t ask them how was your experience with this particular CI web hook? So, you just ask them what was your experience with GitHub as a developer at LinkedIn, and then you get the data.

Abi: That makes sense. So yeah, sometimes you have proprietary things within third-party things and you can’t really separate them all out in a practical way, so you just ask a little more broadly about their experience within the broader tool, like GitHub, it sounds like. I’d love to ask, you mentioned earlier how the feedback you capture here is shared out in a similar way as your periodic surveys, where it goes to persona leads and things like that. But I’m curious about this real-time feedback system, since you’ve mentioned other teams maybe come to you and ask for scenarios to be built in.

So, who really uses this besides your group? Are there PMs and is it the PMs and tech leads of the different internal tools teams or is it more the executives who oversee these groups? Who’s really the primary point person? I mean, let’s take something like GitHub. Who is the point person for triaging that feedback?

Berlo: So, there are several types of stakeholders that are involved. It’s mostly the direct owners, right? So, people like tech leads and sometimes PMs of the tools and processes, they interact deeply with our listening mechanisms, but not always. We talked about the persona program before; persona owners as well. They can come in and say, “We want to ask our persona users this question because it would help us understand what the pain points are for this persona.” And yeah, executives have questions of their own. Taking the GitHub example, we have people at LinkedIn that are responsible for making sure that GitHub is something that the developers at the company enjoy using. So, those would be the people that would introduce questions into the survey or into a real-time feedback campaign.

Abi: Got it. That makes sense. So, there’s a lot of collaboration between your team and a lot of these internal tool owners. I imagine you’re constantly working with them to set up and design these campaigns.

Berlo: Yeah. So, what we do is we work closely with them and we make sure that the questions make sense from a user perspective and we make sure that we tune the system in a way that again minimizes survey fatigue and doesn’t cost too much in terms of the fatigue for the developers.

Abi: And I’d love to double click on that. I mean, Max had mentioned that the fatigue aspect of this was a real concern and you mentioned over a hundred scenarios now. So, I mean what does this actually look and feel like to a developer these days? Are they getting pinged typically multiple times a day or are they getting pinged once a week? What’s the typical experience that a developer at LinkedIn is now encountering as far as real-time feedback?

Berlo: So, for the preferences system that we mentioned before, basically the default is, do not ask me more than once a week. And you can come in and say, “I actually want to be asked once a month.” That’s essentially the range of parameters that we’re talking about. And also, again, with respect to the different channels, you can come in and say, “I prefer this channel over that channel.” So, definitely, it would not be acceptable if we asked developers multiple times a day, because that would just break their flow, and that would not lead to developer happiness or productivity.

Abi: And with the sort of throttling mechanism you’re describing, I imagine, I mean, it’s almost like digital ads, where people bid for ad placement and then the highest bidder gets to actually display the ad. So, if you have a hundred scenarios and the developer only wants one question per week, how do you actually determine who gets priority or precedence in terms of getting to ask their question among the hundred scenarios?

Berlo: Yeah. So, what we do is we have a priority mechanism that’s baked into the system. And essentially, we work closely with all the stakeholders and we have a discussion. If we’re seeing that a majority of our solicitation budget, let’s call it that, goes to one party, and that party is actively working on the feedback that they’re getting, then that’s amazing. If they’ve taken a step back and they’re like, “Oh, we actually have a lot now and we actually don’t need that much more feedback at this point,” then we can bump them down on the priority list and bump someone else up.
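A minimal sketch of that kind of budgeted, priority-based selection, assuming each user gets one ask per period and each campaign carries a stakeholder-agreed priority; the function and field names are illustrative.

```python
def pick_campaign(eligible_campaigns, asks_this_period, budget_per_period=1):
    """Pick at most one campaign allowed to ask a given user this period.

    eligible_campaigns: list of {"name": str, "priority": int} whose triggers
                        matched the user's recent activity (higher = sooner).
    asks_this_period:   how many questions this user has already received.
    """
    if asks_this_period >= budget_per_period or not eligible_campaigns:
        return None  # the user's solicitation budget is spent, so everyone waits
    # The highest-priority campaign wins the available slot; priorities get
    # renegotiated with stakeholders as teams finish acting on their feedback.
    return max(eligible_campaigns, key=lambda c: c["priority"])
```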

Abi: That makes sense. Well, I’d love to cap off talking about how this works by asking you, if you wouldn’t mind, to share a couple more scenarios out of those hundred. I’m particularly curious about the CLI integration, which I know you mentioned in the article that you wrote. So, I’d love to know, do you have campaigns that are triggered off of CLI tools, or is that mostly web tools?

Berlo: Yeah, we actually have both. There are many CLIs that are used for internal purposes within LinkedIn, especially by developers. Developers are heavy CLI users. And one of the major CLIs that we have is actually used for many different purposes. And that was actually one of the pain points; it’s the Swiss army knife sort of problem. When you’re trying to do a lot of things with one thing, sometimes it doesn’t actually work the way that you expect it to.

So yeah, that’s a scenario that we onboarded and we essentially found out what were the main pain points in that CLI. And by finding those out, we were able to provide that back to the team. And one of the things they did is they took this feedback and they implemented it. They focused on what is their core business. They found out their identity as a CLI and took that forward.

Abi: One thing you mentioned earlier, and I wanted to loop back on. You had mentioned that you realized there were some things that actually didn’t make sense to use a real-time approach for and it was better to use the periodic surveys for. So, how would you classify what works well real time and what’s better suited for a periodic survey?

Berlo: Yeah. So, the value of real-time feedback is that you can create targeted campaigns that show you what’s happening in real time in detail. So, for example, you’re launching a new feature and you want to closely monitor the before and after effect of launching that new feature on satisfaction. That’s a great example of the right way to use real-time feedback. Now, the value of the periodic survey is that it lets you continuously understand the trend at a high level. So, even for areas where we don’t necessarily think there’s room for immediate improvement, we can still keep tabs on them. We want to make sure that we’re requesting feedback from our users in a way that minimizes friction for them overall while providing meaningful feedback to the owners of the tools and processes.

Abi: That makes sense. Well Max, I have a question for you. Earlier you mentioned you started off a skeptic with this approach. So, if another platform team or company is listening to this and wondering about, “Man, would this be worth it? How are we actually going to benefit?” What advice would you have? Who should consider implementing this approach? And generally, what are the biggest benefits they’ll gain and how can they get buy-in for pursuing something like this?

Max: Sure. So, here’s what I would say. I would say if you have no collection mechanism at all right now, if you have no qualitative collection mechanism at all, I still think you should start with a periodic survey because of the ROI question. Like, how much work is it going to take for you to spin up a periodic survey versus how much work is it going to take for you to spin this up? Now, if there’s a product that you can buy where you can just get this information immediately, then that’s a different trade-off. But if you’re going to have to build it, there was significant engineering work involved in creating and maintaining this system.

But if you have gotten to the point where you have a survey and you’re concerned about the nature of the insights you’re getting, maybe the people who are responding haven’t used the tool in the last six months, maybe they haven’t used it for the last year, and their responses are all just about a memory that they have from a year ago when they had one bad experience. That was one of our concerns. And we did find that to be true. Actually, what’s funny is, the real-time feedback scores are routinely higher than the survey scores. So that turned out to be valid. Our concern turned out to be valid.

And actually, I think once we also removed questions from the survey for people who hadn’t used the tool in a long time, the scores for the tools also improved, because it was people who had actually used the thing recently and experienced the improvements that we’ve made. So, if you want higher fidelity and you have an engineer who can invest the time into it, obviously, you don’t have to go as far as we did. It’s just about instrumenting the things. One of the most basic versions of what we did was: let’s look for an event and let’s send the person an email. For that basic thing, you can use a Google Form or a Microsoft Form or whatever you want to use and just ask people a question.

The very basic version of this actually isn’t that hard and it’s totally worth doing an experiment on even just to see what the data that you get is, especially for things where you are not getting very clear feedback in your other channels and you want people to have a closer memory of the context they had when they took the action.
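For teams that want to try the basic version Max describes (on receiving an event, email the person a link to a form), a sketch of the whole loop can be as small as the following. The form URL, SMTP host, and addresses are placeholders you would replace with your own.

```python
import smtplib
from email.message import EmailMessage

FORM_URL = "https://forms.example.com/your-form-id"  # placeholder survey link
SMTP_HOST = "smtp.example.com"                       # placeholder mail relay

def ask_for_feedback(user_email, action):
    """Send a one-question feedback request right after a workflow event."""
    msg = EmailMessage()
    msg["From"] = "dev-feedback@example.com"
    msg["To"] = user_email
    msg["Subject"] = f"30 seconds of feedback on your recent {action}?"
    msg.set_content(
        f"You just completed: {action}.\n"
        f"While it's fresh, how did it go? {FORM_URL}\n"
        "Thanks! The developer productivity team"
    )
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

# Hook this into whatever event source you already have (a webhook, a job that
# scans recent CI runs), and throttle it to roughly once a week per person.
```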

Abi: I love that advice. And building on that, let’s say you want to get started with the simplest idea: based on some event, send an email with a Google Form. Berlo, I’ll ask you this question. If you were going to do that, if you wanted to show the value of this to your organization, where are some good places to look initially to build that in? Would you start with something like GitHub and just send an email with a form once in a while after a pull request is merged? Or what are maybe the most obvious scenarios that you would start with, based on what you all have learned across a hundred scenarios?

Berlo: So, I think it’s really important to identify what are the points where you think there’s a lot of benefit for improvement. So, maybe it’s anecdotal evidence that people are talking about the specific system or something that, I don’t know, people are mentioning in whatever coffee rooms or public channels and target one of those and see. Okay, we know that people are unsatisfied but we don’t exactly know why and we want to double click on it.

So, I would take one of those scenarios and use the telemetry that exists. The vast majority of tools have some sort of telemetry, maybe not the best telemetry, but telemetry that you can use for this purpose. Understand who is doing what with a given tool and, yeah, use that information to trigger a feedback request to those people that are using it. And then, once you get back the results, you can come back to your executive chain and say, “This is the information that we found using this very simple proof of concept, and yeah, we can definitely scale this to the entire SDLC.”

Abi: That’s awesome. Well, Max and Berlo, this has been an inspiring conversation. I’m really impressed with both your periodic surveys and the investment in the real-time feedback system you all have developed. I think this is going to be really inspiring to listeners and all the other platform and DevX leaders out there. Thanks so much for coming on the show and sharing your experiences.

Berlo: Thank you so much. Thanks for having us.

Max: Yeah, thank you Abi. I loved this conversation.