Shopify has long invested heavily in developer experience and productivity. The company formed a Developer Acceleration organization early on, dedicated to making its developers highly productive, and was among the first in the industry to conduct biannual developer experience surveys.
When I first interviewed Mark Côté on the podcast, he briefly spoke of Shopify’s internal developer survey. He said it was evolving, and I was eager to bring him back on the show to learn more. For context, Mark leads Developer Infrastructure at Shopify, which rolls up into the Developer Acceleration organization. (Developer Acceleration is split into two orgs: Developer Infrastructure and Ruby and Rails Infrastructure.) Mark’s organization is responsible for administering the internal developer survey.
In this interview, Mark shared how the survey is designed, what he’s seen influence participation rates, and how they interpret their data. I’ve summarized the key parts of our conversation below. You can also listen to the full interview here.
Our content frequently covers studies that use developer surveys to understand developer productivity and the factors affecting it, and to measure the impact of improvement efforts. This conversation gives a real-world example of what it looks like to run developer experience surveys within an organization.
Can you share what prompted this survey, and what you’re using it for?
Mark: If you’re investing in developer experience or productivity, and especially if you’re investing in a dedicated team, you need a feedback loop with developers. Developers know how well their systems are working, they know what their blockers are, and they like sharing their opinions if you ask them the right questions. For a Developer Acceleration group, this feedback loop is invaluable.
The reason we use surveys specifically is because while we can measure some things with data from systems, such as usage or CI times, there are other areas that are extremely hard to measure. For example, how tidy code is, whether a testing suite is comprehensive enough, and to what extent test failures and flaky tests are getting in developers’ way. The only way you can know that stuff is to ask people.
We’ve been running our survey since 2019 when the Developer Acceleration organization was formed. It was originally started as a way to understand what our developers need. Today, we have three objectives with the survey:
1. Identify developer pain points and prioritize roadmap items. This has changed over time: when we first started, we looked at a combination of our tools, systems, and overall engineering satisfaction. We’ve since reconsidered this scope to focus on the areas where we believe our organization can have the biggest impact.
We ask questions about specific tools and workflows. We also ask about priority, which is the developers’ rating of which items they think we should be working on. All of this serves as input for the Developer Acceleration organization to determine their projects, or to validate whether they’re still focused on the right things.
2. Chart progress on improvements. The most important result of any work you’re doing on engineering productivity is that the developers feel it. If you think you’ve improved something, but developers don’t think it’s improved, you probably haven’t solved the problem yet.
As an example, we spent a lot of time improving our CI, which at the time took 45 minutes on our monolith. It was way too long. We brought it down to 15–20 minutes. That looked amazing on paper, and we also saw in the survey that CI dropped in the list of developers’ pain points.
3. Track trends over time. We have some questions that we’ve used since the beginning. For example, we ask questions like: “What’s your overall satisfaction with our tooling?”, “How does our tooling compare to companies you’ve worked at in the past?”, and “Would you say our tooling has gotten better or worse over the last six months?” These answers have been more or less stable for a while. The important thing is to watch the trend. If scores change drastically, we need to dig in.
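To make the trend-watching concrete, here is a minimal sketch of flagging recurring Likert questions whose mean score shifts sharply between two survey waves. This is not Shopify’s actual tooling; the question names and the 0.5-point threshold are invented for illustration.

```python
# Hypothetical sketch: flag survey questions whose mean 1-5 Likert
# score moved sharply between two waves. Names and the 0.5-point
# threshold are illustrative assumptions, not Shopify's setup.

def mean_score(responses):
    """Average of 1-5 Likert responses."""
    return sum(responses) / len(responses)

def flag_drastic_changes(previous, current, threshold=0.5):
    """Return questions whose mean score shifted by more than `threshold`."""
    flagged = {}
    for question, responses in current.items():
        delta = mean_score(responses) - mean_score(previous[question])
        if abs(delta) > threshold:
            flagged[question] = round(delta, 2)
    return flagged

# Two hypothetical waves of responses to two recurring questions.
h1 = {"overall_satisfaction": [4, 4, 5, 3, 4], "tooling_trend": [3, 4, 4, 3, 3]}
h2 = {"overall_satisfaction": [4, 4, 4, 4, 4], "tooling_trend": [2, 2, 3, 2, 3]}
print(flag_drastic_changes(h1, h2))  # only "tooling_trend" moved by > 0.5
```

A real pipeline would also account for sample size and response variance before treating a shift as meaningful; the point here is only the “watch the trend, dig in on drastic changes” loop Mark describes.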
We’ve pigeonholed ourselves somewhat, because we’ve only used the survey to generate actionable feedback for the Developer Acceleration department. Moving forward, we’re trying to make it actionable for a broader audience and provide insights back to other leaders. This is partly driven by increased interest from various stakeholders at Shopify in better understanding engineering health and satisfaction.
Who is involved in creating the survey, and how has it been designed?
Mark: The Developer Acceleration organization is the steward of the survey. Every iteration, we go through our master list of survey items from top to bottom. We pull in other teams to understand which insights from the previous survey were most valuable. We also pull in VPs who are interested in engineering health and productivity in a broader sense. We add and refine questions. Then we bring in the people analytics team to help refine and rephrase the questions for clarity and consistency, since that’s not our specialty. We’ve never beta tested survey items before, but we plan to start. The people analytics team also administers the survey, sends out reminders, and analyzes the data. So it’s definitely an investment.
We run the survey twice per year, and deliver it to 50% of the developer population each time. Each survey takes developers 15–20 minutes to complete, and we usually see a 40–50% participation rate.
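As an illustration of that cadence, a split-half rotation can be sketched roughly like this. This is a hypothetical scheme: Mark doesn’t describe how the 50% sample is chosen, so the seeded shuffle and alternating halves are assumptions.

```python
# Hypothetical sketch of a split-half survey cadence: shuffle the
# developer roster once with a fixed seed, split it into two stable
# halves, and alternate halves each wave so everyone is surveyed
# roughly once per year. The seed and scheme are assumptions.
import random

def split_halves(developers, seed=2019):
    """Shuffle deterministically, then split into two stable halves."""
    rng = random.Random(seed)
    pool = sorted(developers)
    rng.shuffle(pool)
    mid = len(pool) // 2
    return pool[:mid], pool[mid:]

def wave_recipients(developers, wave_number):
    """Even-numbered waves survey half A; odd-numbered waves, half B."""
    half_a, half_b = split_halves(developers)
    return half_a if wave_number % 2 == 0 else half_b

devs = [f"dev{i}" for i in range(10)]
print(len(wave_recipients(devs, 0)))  # half the roster per wave
```

The deterministic seed keeps the halves stable across waves, which matters if you want each developer surveyed on a predictable twice-a-year-per-cohort rhythm rather than resampled at random each time.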
The survey asks questions about overall satisfaction: for example, one asks about overall satisfaction compared to previous companies where developers worked. These questions have never changed, so we’re able to track trends over time. We also add new questions when there’s a new technology. For example, when we launched our own cloud development environment, we added some specific questions on that topic because we were worried that developers were finding it difficult to work with. Most questions use a 5-point Likert scale, but we also have a few multiple-choice questions, plus many opportunities to add freeform comments.
One interesting thing we’ve changed is that we used to skip new hires, meaning anyone who had been at Shopify for six months or less. We’ve learned that they’re actually a very interesting cohort, so we don’t skip them anymore. We collect demographic information so we can compare longer-tenured developers with newer ones.
What’s the process for analyzing the data and then sharing insights?
Mark: After the survey closes, the people analytics and talent research teams crunch all the data. They prepare a detailed document for us that includes a high-level summary, comparisons with previous surveys, and insights from the open-ended responses.
Following this, we do a joint presentation. The people analytics team goes over their findings. Then our team discusses the actions we’ve taken since the last survey and outlines our future plans to address the latest report’s findings. All of this is documented in our internal wiki.
About the analyses: our people analytics team conducts regression analyses, which have led to some interesting insights.
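One common shape such a driver analysis can take is regressing overall satisfaction on a workflow-area score to estimate how strongly that area tracks overall sentiment. This is a hypothetical sketch, not the team’s actual method; the data and the choice of CI satisfaction as the single predictor are invented.

```python
# Hypothetical driver-analysis sketch: ordinary least squares of
# overall satisfaction on one workflow-area score (here, CI
# satisfaction). Data and variable names are made up.

def simple_ols(xs, ys):
    """Slope and intercept of y = a*x + b by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

# Paired 1-5 Likert responses from eight hypothetical developers.
ci_satisfaction      = [2, 3, 3, 4, 5, 4, 2, 5]
overall_satisfaction = [2, 3, 4, 4, 5, 4, 3, 5]
slope, intercept = simple_ols(ci_satisfaction, overall_satisfaction)
print(slope, intercept)  # a positive slope suggests CI tracks overall sentiment
```

A real people-analytics team would typically fit a multivariate model across many workflow areas at once and check significance; the single-predictor version above just shows the mechanics of asking “which pain points move overall satisfaction?”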
After all of this, it’s critical to follow up with developers and take action on their feedback. If I were giving advice to others running a survey, I’d say that if you show that you’re taking developers’ feedback seriously, and that you’re taking action, you’ll continue to build trust. You’ll continue to build a great developer experience; people will talk about it, and more people will want to work for you.
This was a summary of my conversation with Mark. For more, you can listen to the full interview here.