In this episode, Abi has a conversation with Thoughtworks CTO Rebecca Parsons, Camilla Crispim, and Erik Dörnenburg about the Thoughtworks Technology Radar (TechRadar). The trio begins with an overview of the TechRadar and its history before delving into the intricate process of creating each report, which involves multiple teams and stakeholders. The conversation concludes with the evolution of the TechRadar's design and process and potential future changes. This episode offers TechRadar fans an exclusive behind-the-scenes look at its history and production.
Abi Noda: Rebecca, Camilla, Erik, so great to have you on the show today. Thanks so much for your time.
Rebecca Parsons: Thanks for having us, Abi.
Camilla Crispim: Happy to be here.
Abi Noda: I’ve personally followed the Thoughtworks TechRadar for many years, and I’m a big fan. I imagine most listeners of this show have at least heard of the TechRadar, but I’d love to begin with you guys just giving a brief overview of what the TechRadar is, and then we’ll get into the history of the TechRadar and more of what listeners can take away from it.
Rebecca Parsons: The TechRadar is our take on the breadth of technologies that we have exposure to. It has four quadrants: techniques, tools, platforms, and languages & frameworks, and it has four rings. The outer ring is the Hold ring, and that's probably the most ambiguous, because it might mean don't go there yet; it's not ready. Or it might mean, please stop doing this; this is not a good idea anymore. And I try to keep very close control over the Hold ring, frankly.
But then we also have an Assess ring, which is, hey, this looks pretty interesting. We're not saying you should use it yet, but this might be something you want to take a look at. The Trial ring is for something that we have actual production experience with and that we believe our enterprise clients can use in real projects. Then, the Adopt ring is for something that we think is the sensible default for its category, and that last clause, for its category, is important. For example, we put Neo4j on as a graph database. That's not saying you should abandon your relational databases and put everything in a graph database; it's saying that if you're looking for a graph database, this is one that you can use. So, Adopt is the sensible default for its category.
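To make the structure Rebecca describes concrete, here is a minimal sketch of how a single radar entry might be modeled. The Python types, the field names, and the quadrant placement of the Neo4j example are illustrative assumptions for this sketch, not Thoughtworks' actual schema or tooling.

```python
from dataclasses import dataclass
from enum import Enum

class Quadrant(Enum):
    TECHNIQUES = "Techniques"
    PLATFORMS = "Platforms"
    TOOLS = "Tools"
    LANGUAGES_AND_FRAMEWORKS = "Languages & Frameworks"

class Ring(Enum):
    ADOPT = "Adopt"    # sensible default for its category
    TRIAL = "Trial"    # real production experience; usable on enterprise projects
    ASSESS = "Assess"  # looks interesting; worth a look, not yet a recommendation
    HOLD = "Hold"      # not ready yet, or: please stop doing this

@dataclass
class Blip:
    name: str
    quadrant: Quadrant
    ring: Ring
    description: str     # the short write-up that carries the actual advice
    is_new: bool = True  # blips must be re-argued each volume or they fade

# Rebecca's Neo4j example: a sensible default *within its category*,
# not a recommendation to abandon relational databases.
neo4j = Blip(
    name="Neo4j",
    quadrant=Quadrant.PLATFORMS,  # placement assumed for illustration
    ring=Ring.ADOPT,
    description="If you're looking for a graph database, this is one you can use.",
)
```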
The TechRadar is a compilation of our global experiences with a broad range of technologies, and there are several key aspects to that. The first is global, because the experience that people have with a technology in Singapore might be different from what people have in Brazil, India, or Germany. So we source blips from our projects across all of our currently 18 countries to find out what our people are actually using and their specific experiences with it. We want this to be very grounded in actual experience, as opposed to what the vendor has to say about things.
So the tin might say that you can do a certain thing with a technology, but we might not recommend it for that because there are these sorts of problems. So it's grounded in actual experience, it's globally sourced, and it's across a broad range of technologies. We make no claims that it is comprehensive. We very often get "why isn't X on the list?" Well, because none of our teams have used it. We also make no attempt to talk to the people responsible for a particular tool or language. I got an email not all that long ago: "Why didn't you talk to us before you put us on your radar?" We don't talk to anybody. This is based on our experience, and that's all it's based on. You can't pay to get on the radar; the way to get on the radar is for one of our teams to have some actual experience with it, or at least to have taken a serious look at it.
Erik Dörnenburg: I would add one more thing to your list, and that is that the radar is recent. It's a snapshot of what we've seen in the last six months, because that's also often the answer to the question, why is something not on the radar? It is simply that we mentioned it six or twelve months before. We do not keep everything that we feel is relevant on the radar. Every six months, we report on what we've discovered for ourselves in the previous six months.
Camilla Crispim: This is actually something that changed over time. A blip, as we call it, used to stay on the radar for two or three editions before it actually faded. Now, something has to fight to stay there: we have to have something new to say about it to keep it from one volume to the next; otherwise, it disappears from the current volume.
Abi Noda: Thanks for the overview of what the TechRadar is. If you're listening to this, there's a bit of terminology being thrown around. One term you'll hear is blip; for listeners, a blip just refers to an individual item, a particular technology, platform, tool, or practice that is on the TechRadar.
One thing I've been curious about, having dug into the TechRadar and followed it for a number of years: how did the TechRadar actually begin? I'm curious to know what the V1 of the TechRadar was. Was it just a blog post, or was it a report like it is today? And what sparked the TechRadar? Why was it originally created?
Erik Dörnenburg: Since you mentioned version one, I can tell you what version 0.9 was, as a technologist: the so-called hot technology list. I remember being responsible for that in a precursor to the meeting where we designed the radar; it was just a list of technologies that we needed to be aware of. Then, the person who was the assistant to the group at the time improved on that notion, asked, "Why do we have a list?" and came up with this radar metaphor. That happened very, very quickly, and we settled on that metaphor and on the structure in the first edition.
The only change I would say was made is that we viewed it, as we did that technology list, as an internal resource. But very quickly, our clients and other people that we talked to told us that they saw value in it. So we made it public. And I don't know, Rebecca, was the very first one internal, or did we publish the first one as well, in retrospect?
Rebecca Parsons: I think there was limited publication of the first one because there were several people who knew members of the group who put the first one together and said, “I’d like to see that,” and so there was a limited publication. But it very quickly became something that we just put out there and said, “Hey, if you’re interested, here it is.” And then, of course, it matured, and we got a bit more structured around it.
Camilla Crispim: One thing that I wanted to share about this first version of the radar: I did a retrospective on the ten years of the TechRadar, which I presented in a webinar, and I could see things that are now industry standards or the way to go. For example, evolutionary architecture was in our first-ever TechRadar. So it was interesting to see how different things or concepts were built on top of each other and became established over time. It was a pretty nice exercise to do.
Abi Noda: It's really interesting that the TechRadar and the hot technology list began internally, and then there was organic pull from your clients to make it external. Before we go into the philosophy of the TechRadar and how it's designed to be used, where was the market pull coming from? What was the problem your clients had that they hoped your sharing this list would help them solve?
Rebecca Parsons: Well, I think part of it is, if you think about the day-to-day life of a developer within an enterprise: they're in one domain, they're working on one technology stack, they're probably working on some aspect of the technology estate of that enterprise, and that's the view they have of the technology industry. To get exposure to a different domain or a different tech stack, they have to move companies. By definition, Thoughtworkers work across companies, across domains, and very often across tech stacks. So we get a breadth of experience as individuals that most developers in the software industry simply don't get.
So one of the common questions we get asked when consulting with our clients is, "Tell me what's happening elsewhere." Well, this is a vehicle to tell people what's happening elsewhere, because, again, this was grounded, in the early days, in the things that we were doing that we thought were relevant, that we thought more should be happening with. And that was true across the different clients that we have, in the different countries, in the different technology stacks. So this is not a Java-only radar or a Microsoft radar or whatever radar. There are technologies from many different technology providers and different parts of the ecosystem, and we get that breadth that most enterprise developers don't see unless they move companies.
Erik Dörnenburg: And in many ways, Rebecca, what you described as the earlier use case was a bit of de-risking. People knew about new technologies but weren't quite sure whether to jump on them, because they didn't have experience and didn't necessarily have the time to experiment with them, so they drew on the experience Thoughtworks gathered across different engagements. Increasingly, though, personally but also when I talk to people, they're seeing it not only as a risk-mitigation mechanism but as a discovery mechanism. Not so much, "I know about this technology, but I haven't had a chance to try it," but rather, because there is so much technology out there, "I didn't even know this existed." And then they don't necessarily draw the same conclusions Thoughtworks has drawn; they might just think, "Now that I know this tool exists, I'm going to do my own evaluation." There are so many tools in the data analytics space that fall into this category because they just keep appearing.
Abi Noda: It's such an interesting point you shared, Rebecca, that Thoughtworks is unique in that you have a view into so many different teams and companies, whereas, as you said, most practitioners only have familiarity with the environment in which they currently work, which doesn't necessarily change all that often. So I can see how the TechRadar is a remarkable tool for insight for leaders and practitioners across the industry. I'm curious: have you also seen it as a tool for driving change and influence within an organization? We'll talk about how the TechRadar is different from things like Gartner's Magic Quadrant and the Forrester Wave, but those tools are used by champions to internally drive decisions. I'm curious if you've seen similar things happen with the TechRadar, where folks within organizations are using it to influence decision-making, architecture decisions, or technology adoption decisions.
Rebecca Parsons: We have certainly heard of many cases where our Thoughtworkers have gone into a client situation and said, "This is what the radar has to say about this," and used it to try to influence decisions. The Hold ring is particularly popular when we are trying to help organizations understand the consequences of some of the drag that they have from legacy architectural choices, and we'll use the language from the radar as a way of being more specific. It's one thing to say X is bad. It's much more powerful to say X is hurting you in this specific way, and we've seen that specific pain play out in these other places, so we can use it as a proof point, if you will, for some of the recommendations that we have.
Abi Noda: I want to remind listeners that the Hold ring is things that you have experience with but caution against, and I think you mentioned this is a category that you have a lot of passion for. I think it’s so important in tech because, as we know, there are so many new hot things that folks jump on the bandwagon with. Then, years later, we often start hearing horror stories or cautionary tales about certain technologies or practices.
The other day, when we were chatting, we had a discussion around the philosophy of the TechRadar, and one thing that you all brought up was that it's really important that this be advice for your audience rather than just a compilation of what's interesting. Can you explain more about what you mean by this and how you try to make the TechRadar actionable?
Erik Dörnenburg: One of the key things on the TechRadar, as you said, Abi, is that we are trying to provide more than just "it exists." It's partly built into the mechanism of how the radar is created: there are proposals from our teams, from individuals who are saying, "I think this should be included in the Thoughtworks Technology Radar," and then there's the group that gets together, which all three of us are part of. We could simply take those proposals, put them on the radar, copy-paste the marketing text or the text from the open-source pages, and be done with it. But especially in the inner rings of the radar that Rebecca described, we want the notion of giving some advice. We don't only want to say that a tool exists.
We do a couple of sanity checks; if it's open-source software, it's easy to see whether the tool is still being developed, and part of the implied advice, if it appears in the inner rings, is yes, this tool is actually usable. Because teams are proposing these tools, techniques, and practices to us, some of the more explicit advice can be that we're not listing all the features; we're saying this tool is particularly good for X, or that in our experience the combination of these tools really makes a lot of sense. And we haven't mentioned this so far: we get way more proposals for new entries per radar volume than we can actually fit on the radar, oftentimes three or four times as many as would fit.
And that's often a criterion that we apply: we have not much more to say about this entry than that it exists. If it's a really new thing and part of the group, or even the majority of the group, says, "I've never heard of it," that might be enough, because it's clearly something interesting that not enough people, by some definition, know about. But in many other cases, it's, "Yeah, of course everybody knows this tool." If you are in front-end development, everybody would know this tool, so us listing it on the radar doesn't contain any advice.
So then, if it's a new one, we would maybe contrast it with tools that a lot of people are using, which we know from previous radars and from our working experience, and say, "This is something you should try if that's your context," or, "This is different from the one that we listed three volumes ago because, in the simplest case, it does the same thing but faster." It's rarely that simple, but just to give you an example, this is really how we try to get advice into the radar beyond just listing technologies, and that's why it's such a step forward from the technology list that I mentioned earlier.
Rebecca Parsons: Another important piece of advice is for when a tool or a framework can do many different things, and we have specific advice, whether positive or negative, about one particular use case: it actually does really well here, but there is a limitation; if you get too complex, or have too many branches or something, we've hit a limit where it went from being incredibly easy to use and effective to just being way too convoluted. A vendor might tell you yes, it works in all of these cases, for some definition of works. We can help people by characterizing the case: stay in this box, and you've got a great tool that's easy to use and does the job really well; go outside the box, and there be dragons.
Abi Noda: That's a really good segue into the next question I had. I brought up Gartner and Forrester a few minutes ago, and I'm sure this question comes up all the time in conversations, but how is the TechRadar different from similar recommendations and insights that come from analyst firms like Gartner and Forrester?
Rebecca Parsons: Well, the first is that, to my understanding, Forrester and Gartner don't have working development teams who are actually using these technologies. So, although they will talk to customers, it's often indirect from the perspective of the practitioner who's actually hands-on-keyboard doing something with the technology. So that's one distinction. We don't talk to the people responsible for the tool or the technique. Even when it's a Thoughtworker, when we put a Thoughtworker's open-source project on the radar, we won't go talk to them about it, because we want it grounded in the experience of people who aren't its creators. So that's another distinction.
I do think that Forrester and Gartner are more concerned with being comprehensive, at least at a particular scale. We make no claims about that. We've had various persistence mechanisms on the radar, and I'm sure there are some that we've missed, and that's okay, because we didn't have anything we felt was useful to say about them, whereas they are more concerned about being a bit more comprehensive, even if they may only be looking at the major players. And we might put a major player and some very niche player that we, for whatever reason, have experience with on the same radar. That's highly unlikely to happen in one of the analyst reports, although they do often have special mentions for players they've spotted that aren't big enough to make the Magic Quadrant or Wave or whatever the particular main report is. But I'd say those are some of the distinctions that come to my mind.
Abi Noda: Another difference between the TechRadar and some of the analyst reports is that, as you've mentioned, with the TechRadar, Thoughtworks doesn't really commingle with vendors, who, at those other analyst firms, are often part of the process of the reports getting developed and have influence over the people developing them. Could you share more about how the TechRadar remains neutral and independent, and why you think that's so important and valuable for your audience?
Rebecca Parsons: As I said, we don't talk to the vendors. I have been approached over the years: what does it take to get on the radar? And my reply is always the same. If it's used on one of our projects and somebody proposes it, it can get on the radar. That's the only way. I did have somebody, in fact, be so bold as to say, "How much money do I have to pay you to get on the technology radar?" And I said, "There is no amount of money that you could pay to get on the technology radar." The importance of it is, again, that we are grounding this in our experience, and the importance of that can't be overstated. We want to be able to tell people what is going to happen if you use this thing in these situations, and by grounding it only in our experiences, there's a purity to that.
There isn't the marketing speak. There isn't the "okay, I've gone through all of the checkboxes, now put me on the radar." I was actually talking to an alumnus of Thoughtworks who works for a product company now, and the president of the company said, "You used to work for Thoughtworks; get us on the radar." And the response was the same: you get on the radar by being used on one of the projects. So figure out how to get the product working with one of our clients, or maybe have it brought in at one of our clients, so our people have experience working with it.
That's such a central part of what the radar represents for our readers and why it's so valuable: they know this isn't some vendor in the background whispering in our ears about how blindingly fast or blazingly fast their product is. That's one of the phrases that we hear a lot. Grounding in actual experience is essential to getting something on the radar, and you simply can't do that by involving the vendors, because they are going to want to give you a four-page explanation for why you might've run into that particular problem. And it's like, "I don't want that. We need to tell people that this is going to be the problem."
Erik Dörnenburg: Something else, at least to my understanding of what makes the Technology Radar different, is that we don't have to compare a whole class of tools at one point in time. To stay with the example we used before, we're not publishing a comparison of all the JavaScript frameworks at once. If something changes over time, we are at liberty a year later to say, "Here's a new framework," describe why it is an improvement over the existing ones, and list just that one. That gives us a kind of coverage that reports which have to choose a point in time to compare a whole class of related technologies can't achieve. The other freedom we get is that we have so many consultants working on engagements with our clients that they will often find really obscure things that you would normally never come across.
We talked at length about how the more commercial companies might want to get on the radar, but there are also these useful little nuggets, really small tools that you almost have to discover by accident. We can feature those on the radar because one of the thousands of consultants at Thoughtworks has discovered one and proposed it. I guess we'll talk later about how we make the radar, but somebody just has to find it, and we can feature it. That is, I guess, something that wouldn't generally work that way with the larger reports. So we don't have the comprehensiveness of comparing everything at the same time, but we get other kinds of comprehensiveness, if you know what I mean.
Abi Noda: I'd love to shift into explaining to the listeners how this report is actually created and published twice per year. If I'm understanding correctly, first there's an unstructured nomination process where ideas are shared by different teams and different folks across Thoughtworks. Then it moves into slightly more formal rounds of nomination, where people in live meetings pitch or justify the reasoning for nominating something, and then there's a process of finalizing the list. Can you share more about how that second step, the finalization of the list, works? I imagine this is still a step prior to determining the details of where the nominated item should live within the radar.
Camilla Crispim: We do have different formats for collecting the blips, because people engage in different ways. They might join calls, or fill out forms, or ping us privately and say, "Hey, you should consider this or that," and then we do the filtering. Once we get together, that's when you pitch whatever you want on the radar, and we vote for it to be on the radar in that specific place. So there are no separate rounds of, let's say, "should it be on the radar or not," followed by coming back to say this ring or that ring; we vote specifically for a certain quadrant and ring. We can still re-propose a blip for a different ring, and it might get onto the radar that way, but usually it goes blip by blip, and it can always change. Usually, though, we get to the final quadrant and ring while we are talking about it.
Rebecca Parsons: And it's important to talk about the people that are involved in the different phases. There is a group internally we call Doppler for the radar, and that is the group that actually makes the decisions about what ends up on the radar and where, and that actually writes up the blips and all of that. But for the collection process, which happens before that, we don't have members of that group in all of the countries where we have offices. So we try to make sure that some member of that group is involved enough in the blip collection process that they can do that pitch that Camilla mentioned. But we go out across all of Thoughtworks and talk to as many people as we can, and get as many blip proposals with as much information as we can. As Erik pointed out earlier, I think the last radar had something like 367 proposals, roughly that number, and that was after the filtering that the individuals went through. So those were the ones that actually made it to the master list that we were going to go through.
Rebecca Parsons: And then that list goes to the Doppler group to start to cull through and decide: yes, this is important enough; no, this is not. We try for roughly 100 blips at the end, so each radar comes out with about 100 blips. We have all the new entries, we filter those down, and then we look at things that might be moving, say, from Assess to Trial because we've got more experience with them. During that process, as Camilla said, things are located in particular places. Then we go through a round of what we call the final call: okay, we have to get down to this number; we have 130, that's too many; we have 40 in this particular quadrant and ring, and that's too many. So we filter things to try to make sure only the most important blips, the things that we have the most to say about, are actually represented on the radar. And then we do a lifeboat session where, if somebody lost an argument but thinks, "I've got a new argument," they can try once again. Sometimes that works, and sometimes it doesn't.
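For readers who want to picture that funnel, here is a toy sketch of the final-call filtering Rebecca describes: capping each quadrant-and-ring cell and the overall volume. The function name, the numeric caps, and the "advice" score are all assumptions made up for this sketch; the real process is a discussion and vote among people, not an algorithm.

```python
from collections import defaultdict

def final_call(proposals, per_cell_cap=25, overall_target=100):
    """Toy model of the 'final call': keep the blips the group has
    the most to say about, per (quadrant, ring) cell and overall."""
    cells = defaultdict(list)
    for blip in proposals:
        cells[(blip["quadrant"], blip["ring"])].append(blip)

    survivors = []
    for cell_blips in cells.values():
        # "advice" is a hypothetical score for how much there is to say;
        # in reality this is argued out blip by blip in the meeting.
        cell_blips.sort(key=lambda b: b["advice"], reverse=True)
        survivors.extend(cell_blips[:per_cell_cap])

    # Final trim toward the roughly 100-blip target for the volume.
    survivors.sort(key=lambda b: b["advice"], reverse=True)
    return survivors[:overall_target]
```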
Abi Noda: One of the things you’ve described is this process of live meetings where there’s debate and discussion and argument about whether something should be on the TechRadar and where it should live.
I was reading your FAQ on the website this morning, and you describe it in a fun way. You say, “This discussion is always enjoyable. There are lots of opinions and experiences around the room, but there’s also a friendliness and mutual respect that makes the arguments much less grating than these kinds of discussions sometimes become.”
As I read that, I just pictured what these meetings must be like, because at Thoughtworks, of course, you have so many strong-minded and brilliant people with so much experience and diverse perspectives coming together to have these types of discussions. It must be lively, to say the least. So, bring us behind the scenes a little. What are these meetings like?
Erik Dörnenburg: So I think one thing that is almost unique, I would say, to these meetings is you have 20 technologists in the room. In the earliest versions, the meetings were lively to the extent that everybody was just talking over each other, and it wasn’t a good discourse anymore. It was just almost waiting for the other person to stop talking so you could just jump in and make your point. And the key thing that changed all that was when Rebecca started having speaker lists. I guess if you were an observer, it might look a bit weird.
But people raise their hand a little bit, and Rebecca writes down the initials on a card, and you know that we’ll work through the speakers, so there is no rush or anything, which also means you don’t have to wait for the other person to stop talking. You can actually listen to them, which makes your response later on much better because you have actually had the time to actively listen to the other person. And more often than not, it can also lead to the point where you’re saying, my point has been made already. And it really gives the conversation a much, much better spread across the different angles from which you can look at one of the arguments.
Camilla Crispim: I think this also speaks to the diversity of the room. We do have a lot of native English speakers, but we also have a lot of people who don't have English as their first language. Speaking as one of those who are not native English speakers, I think it gives us the opportunity to make a point without feeling rushed or being afraid of being cut off. It's independent of how loud you are, or whether you're an introvert or an extrovert, or differences of that nature. You have a voice, and being in that room is different because of that.
Abi Noda: Rebecca, what's it like to moderate this process? When I think about our company having one meeting about one technology, I feel pretty exhausted afterward. So, what's it like being in the room and facilitating this process?
Rebecca Parsons: Once the meeting is finally over, I enjoy not talking to anybody, because it is exhausting. As you say, we've got people who are very passionate, very articulate, and intelligent, and yet it is respectful. So we don't get into personal attacks or anything like that; I don't have to moderate those. What I have to moderate is making sure that people have a chance to have their say. There are some disadvantages, because sometimes the list gets so long that Erik puts his hand up for a particular point, and we go through seven or eight people on the list first, and by then it might not be that his point has been made, but it isn't particularly relevant anymore. So that is one of the things that you lose.
Now, I have modified the process slightly to allow for really important clarifications: if, say, Camilla made a point and Erik was next and said something specific about it, I will often allow Camilla to jump back in and clarify. But in general, by structuring it that way, we get the breadth of perspectives, and then, at the end of the discussion, we come to a resolution. It's also, by the way, a good way for me to detect rabbit holes, because when I start writing my second line of initials and we're still talking about the same thing, it's like, "Okay. I've got this person, this person, this person, this person, and then we're going to call a vote." Sometimes that works, sometimes it doesn't, but it is a good way of detecting when you're starting to circle the drain a little bit, and it's like, "Okay. We've got to start to put some specific proposals on the table."
Abi Noda: So everyone’s opinions, rationale, and arguments have been heard, and votes have been cast. You mentioned earlier there’s the Doppler team, which is the final decision-maker. So, what happens after this review process? Does the Doppler team or group make the final call and then move on to the publishing step? Describe to listeners what happens after these meetings.
Rebecca Parsons: The first step after the meeting is that we have roughly 100 write-ups to do. They're all very small. One of the dispositions for a blip, by the way, is "too complex to blip." If we can't explain the nuance in three to five sentences, we're not going to put it on the radar, because with 100 of them, we can't have a page on each; nobody would read it. So the target is to have something where we can get our description and advice out succinctly. Those are written by the Doppler group, but we send them out for review to the various mailing lists and chat rooms within Thoughtworks and incorporate comments from the broader company. But the Doppler group has the final say on what's on the radar and what the radar says.
So if somebody brings up a point and the Doppler group does not agree, it'll go out the way the Doppler group decided. There's no way I'm going to try for consensus among all the thousands of technical Thoughtworkers across the globe; that just doesn't happen. However, we're usually not very far off. Then the whole publication process begins with the translations. We publish it in Chinese, Portuguese, and Spanish. We have, from time to time, done a Thai version and an Italian version, and there have been some efforts to do a version in French. So the different translations also happen at this stage.
Erik Dörnenburg: Once the people who are part of the Doppler group have decided what's on the radar, there's an informal step, not really a process; people just start picking the entries that need to be written and do the write-ups. And then, as Rebecca described, we use a Google Doc for this: the write-ups get put into a Google Doc and reviewed by essentially anyone in Thoughtworks who wants to, and they provide comments. What is sometimes fun to see is that the same rabbit holes that we ran into at the Doppler group meetings begin to repeat in the margin comments on the document, and then sometimes the people who wrote an entry see there's interesting feedback and revise the text a little bit, almost to preempt the readers from going down the same rabbit hole.
So we're not necessarily changing the content, but it's often good feedback. In all fairness, it's only a few sentences, and most of the time it doesn't take that long, but I've spent three hours writing some of those sentences and still not gotten them right. And then I'm really grateful that we have all the consultants. Not everybody looks over everything, obviously, but I'm really grateful for the consultants who look over this and say, "Are you sure about this?" Or, maybe more bluntly, "You got this wrong." And I'm like, "Thank you."
So this process gives us a bit of safety, and I think it is, in the end, responsible for what I believe to be the high quality of the radar, because it has so many reviewers, and it is being reviewed by people who actually use the technologies on a day-to-day level. The 20 people in the Doppler group clearly cannot use all the technologies on a daily basis, so things may slip, or we may misunderstand something in the proposal process, and that then gets caught later in the process. I don't know when it started, but it was definitely at least three, four, five years ago that we added this review step before we publish the radar.
Abi Noda: One of the things you shared with me earlier that I was surprised by and found really interesting is that everything you're doing with the TechRadar is actually tracked in a Git repository. Share with listeners how this works and what the value is for your audience, who can perhaps go check out the Git repository.
Erik Dörnenburg: To be honest, there's more technology involved; it's not only Git. We also use Trello, Google Docs, and Google Sheets, so I wish there were one comprehensive suite. That's, I think, the dream of all members of the Doppler group: an end-to-end process tool. But the Git part is clearly where we write the text, and that is exactly what Git is really good for: managing text and seeing the differences. One step we haven't mentioned so far, and Camilla alluded to it in a way, is that not all of us are native English speakers, and even native English speakers are not necessarily great writers. And even though you can detect certain variance when reading the radar, there is a common language style to it. So there's a copy editor.
The copy editors, though, aren't technologists at heart. There are a couple of them by now, and they've gotten quite good, but sometimes when they try to make the non-native English that Camilla and I write more idiomatic, they change the meaning: where we meant two parallel statements, they create a causality. And Git is super good for this, because we can write the text, and somebody else can edit it. The copy editor makes their changes, and then we can look over the diff and see what the copy editor actually changed. We can say it's all good, that they know better, that it looks like better English than what I wrote. But in certain cases, you read it and think, "Actually, this now makes a statement that wasn't intended, or isn't correct," and then you can revert the change. So we're managing the text entries there, and of course it's really good if we revisit something after three years because we have something new to say; it's quite nice to have the old text.
Rebecca Parsons: What Erik mentioned about the copy editor happens in translation as well, because we need a very specific set of translators. You can't just send this out to a translation agency or to automatic translation, even though that's getting better; again, there are nuances that just don't come through properly. So there's actually quite a bit of work that goes into making sure that the translated text makes sense, and that sometimes even affects what we call a blip.
There was one blip that English speakers were so excited about many years ago: the security sandwich, which we put on Hold. You think about security at the beginning, then you ignore it, and then you think about security at the end. And somebody from our Brazil office said, "I have no idea how I'm going to translate that into Portuguese." It just loses the connotation that made so much sense. So we're certainly more cognizant of that now than we were in the early days of the translations. It is very specific and highly technical language, and therefore we have to be very careful not only with copy editing but also with translation.
Abi Noda: Awesome. Okay. One of the things that you all have shared with me in prior conversations is that the process for putting together the TechRadar is constantly changing. So, I'd love for you to share with listeners some ways in which the process has recently changed, or is currently changing, that you think are interesting.
Erik Dörnenburg: With the last volume of the radar, we got so many blip proposals for one specific area that we didn't treat the proposals individually anymore. Normally, we go through them as we described: we talk about them, is it a valid proposal or is it not, and then later, if we have too many in a specific category, we go through it again and take out the ones that are the least strong or where we have the least advice. It feels, though, that in the tech industry the hypes are getting bigger, and maybe it's just something of the moment and it'll stop, but Thoughtworks is also getting bigger, so we're getting more proposals. The Web3 wave wasn't that strong for us, because we had actually listed a lot of technologies around blockchain, the development environments, and so on earlier; the technology ran ahead of the hype that we saw in a lot of the media. But already we did see quite a bit.
But this time around, last Spring and now especially with the volume we just wrote, the wave of tools around Gen AI was incredible. I think we had 60 or 70 proposals for technologies in that space, and it broke our process to a certain extent. So we actually ended up using yet another tool: we used Mural to lay out all the proposals and then started shuffling them around, seeing which ones we could group, and asking, are we saying too much about this area? Even though there were maybe ten things we could say something about, it might be a niche area. This was particularly the case for tools for making LLMs run on different hardware.
There were interesting entries in their own right, but it really broke the process we had before. And because I think the experience was quite good, that is something we could repeat in the future: if we face this again, with so many interrelated proposals, we could say, let's summarize five of them in one blip around, say, a technique for a specific thing. Maybe we won't see another hype wave this strong for another three or four years; we don't know. But at least now we have another tool to deal with such things.
Abi Noda: Camilla, Rebecca, Erik, this has been a really fascinating conversation. I've really enjoyed learning more about the history and the process around the TechRadar. Thanks so much for your time today and for sharing this with listeners.