346: Occasional Biscuits

The Bike Shed - A podcast by thoughtbot - Tuesdays


Natural disaster movies, anyone? It's what Steph's been into, and Chris has THOUGHTS on the drilling in Armageddon. Additionally, a chat around RuboCop RSpec rules happens, and they answer a listener's question, "how do you get acquainted with a new code base?"

This episode is brought to you by BuildPulse. Start your 14-day free trial of BuildPulse today.

Movies mentioned: Greenland, Geostorm, San Andreas, Armageddon

This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.

Become a Sponsor of The Bike Shed!

Transcript:

AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time. Flaky tests slow everyone down, break your flow, and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix all of them today. So how do you know where to start? BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month! And you can do the same because BuildPulse integrates with the tools you're already using. It supports all of the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more. So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed.

CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. So I've been watching more movies lately. Evenings aren't always great; I don't always feel good being around 33 weeks pregnant now. Evenings I can be just kind of exhausted from the day, and I just need to chill and prop my feet up and all that good stuff. And I've been really drawn to natural disaster, like, end-of-the-world-type movies, and I'm not sure what that says about me. But it's my truth; it's where I'm at. [chuckles] I watched Greenland recently, which I really enjoyed. I feel like they ended it well. I won't share any spoilers, but I feel like they ended it well. And they didn't take an easy shortcut out that I kind of thought that they might do, so that one was enjoyable. Geostorm, I watched that one just last night. San Andreas, I feel like that's one that I also watched recently. So yeah, that's what's new in my world, you know, your typical natural disaster end-of-the-world flicks. That's my new evening hobby.
CHRIS: I feel like I haven't heard of any of the three that you just listed, which is wild to me because this is a category that I find enthralling. STEPH: Well, definitely start with Greenland. I feel like that one was the better of the three that I just mentioned. I don't know Geostorm or San Andreas which one you would prefer there. I feel like they're probably on par with each other in terms of like you're there for entertainment. We're not there to judge and be hypercritical of a storyline. You're there purely for the visual effects and for the ride. CHRIS: Gotcha. Interesting. So quick question then, since this seems like the category you're interested in, Armageddon or Deep Impact? STEPH: Ooh, I'm going to have to walk through the differences because I always get those mixed up. Armageddon is where they take Bruce Willis up to an asteroid, and they have to drill and drop a nuke, right? CHRIS: They sure do. STEPH: [laughs] And then what's Deep Impact about? I guess the fact that I know Armageddon better means I'm favoring that one. I can't place what...how does Deep Impact go? CHRIS: Deep Impact is just there's an asteroid coming, and it's the story and what the people do. So it's got less...it doesn't have the same pop. I believe Armageddon was a Michael Bay movie. And so it's got that Michael Bay special bit of something on it. But the interesting thing is they came out the same year; I want to say. It's one of those like Burger King and McDonald's being right next door to each other. It's like, what are you doing there? Why are you...like, asteroid devastation movies two of you at the same time, really? But yeah, Armageddon is the correct answer. Deep Impact is like a fine movie, but Armageddon is like, all right, we're going to have a movie about asteroids. Let's really go for it. Blow it out. Why not? STEPH: Yeah, I'm with you. Armageddon definitely sticks out in my memory, so I'd vote that one. Also, for your other question that you didn't ask, but you kind of implicitly asked, I'm going to go McDonald's because Burger King fries are trash, and also, McDonald's has better ice cream cones. CHRIS: Okay, so McDonald's fries. Oh no, I was thinking Wendy's, get a frosty from there, and then you make that combination because the frostys are great. STEPH: Oh yeah, that's a good combo. CHRIS: And you need the french fries to go with it, but then it's a third option that I'm introducing. Also, this wasn't a question, but I want to loop back briefly to Armageddon because it's an important piece of cinema. There's a really great...like it's DVD commentary, and it's Ben Affleck talking with Michael Bay about, "Hey, so in the movie, the premise is that the only way to possibly get this done is to train a bunch of oil drillers to be astronauts. Did we consider it all just having some astronauts learn to do oil drilling?" And Michael Bay's response is not safe for radio is how I would describe it. But it's very humorous hearing Ben Affleck describe Michael Bay responding to that. STEPH: I think they addressed that in the movie, though. They mentioned like, we're going to train them, but they're like, no, drilling is such an art and a science. There's no way. We don't have time to teach these astronauts how to drill. So instead, it's easier to teach them to be astronauts. CHRIS: Right. That is what they say in the movie. STEPH: [laughs] Okay. CHRIS: But just spending a minute teasing that one apart is like, being an astronaut is easy. You just sit in the spaceship, and it goes, boom. 
[laughs] It's like, actually, there's a little bit more to being an astronaut. Yes, drilling is very subtle science and art fusion. But the idea that being an astronaut [laughs] is just like, just push the go-to-space button, then you go to space. STEPH: The training montage is definitely better if we get to watch people learn how to be astronauts than if we watch people learn how to drill. [laughs] So that might have also played a role. CHRIS: No question, it is the correct cinematic choice. But whether or not it's the true answer...say we were actually faced with this problem, I don't know that this is exactly how it would play out. STEPH: I think we should A/B test it. We'll have one group train to be drill experts and one group train to be astronauts, and we'll send them both up. CHRIS: This is smart. That's the way you got to do it. The one other thing that I'm going to go...you know what really grinds my gears? In the movie Armageddon, they have this robotic vehicle thing, the armadillo, I believe it's called. I know more than I thought I would remember about this movie. [chuckles] Anyway, continuing on, the armadillo, the vehicle that they use to do the drilling, has the drill arm on it that extends out and drills down into the asteroid. And it has gears on the end of it. It has three gears specifically. And the first gear is intermeshed with the second gear, which is intermeshed with the third gear, which is intermeshed with the first gear, so imagine which direction the first gear is turning, then imagine the second gear turning, then imagine the third gear turning. They can't. It's a physically impossible object. One tries to turn clockwise, and the other one is trying to go counterclockwise, and they're intermeshed. So the whole thing would just seize up. It just doesn't work. I've looked at it a bunch of times, and I want to just be wrong about this. I want to be like, I don't know what's going on. But I think the gears on the drilling machine just fundamentally, at a very simple mechanical level, cannot work. And again, if you're going to do it, really go for it, Michael Bay. I kind of like that, and I really hate it at the same time. STEPH: I have never noticed this. I'm intrigued. You know what? Maybe Armageddon will be the movie of choice tonight. [chuckles] Maybe that's what I'm going to watch. And I'm going to wait for the armadillo to come out so I can evaluate the gears. And I'm highly amused that the thing that grinds your gears is the gears on the armadillo. CHRIS: Yeah. I was a young child at the time, and I remember I actually went to Disney World, and I saw they had the prop vehicle there. And I just kind of looked up at it, and I was like, no, that's not how gears work. I may have been naive and wrong as a child, and now I've just anchored this memory deep within me. In a similar way, I had a moment while traveling, actually, that reminded me of something that I said on a recent podcast episode where I was talking about names and pronunciation. And I was like, yeah, sometimes people ask me how to pronounce my name, and I can't imagine any variation. That was the thing I was just wrong about because 'Toomay' is a perfectly reasonable pronunciation of my name that I didn't even think of... I was just so anchored to the one truth that I know in the world that my name is Toomey, and that's the only possible way anyone could pronounce it. Nope, totally wrong. So maybe the gears in Armageddon actually work really, really well, and maybe I'm just wrong.
I'm willing to be wrong on the internet, which I believe is the name of the first episode that we recorded with you formally as a co-host. [chuckles] So yeah. STEPH: Yeah, that sounds true. So you're going to change the intro? It's now going to be like, and I'm Chris 'Toomay'. CHRIS: I might change it each time I come up with a new subtle pronunciation. We'll see. So far, I've got two that I know of. I can't imagine a third, but I was wrong about one. So maybe I'm wrong about two. STEPH: It would be fun to see who pays attention. As someone who deeply values pronouncing someone's name correctly, oh my goodness, that would stress me out to hear someone keep pronouncing their name differently. Or I would be like, okay, they're having fun, and they don't mind how it gets pronounced. I can't remember if we've talked about this on air, but early on, I pronounced my last name differently for one of the first episodes that we recorded. So it's 'Vicceri,' but it could also be 'Viccari'. And I've defaulted at times to saying 'Viccari' because people can spell that. It seems more natural. They understand it's V-I-C-C-A-R-I. But if I say 'Vicceri', then people want to add two Rs, or they want a Y. I don't know why; it just seems to make a difference. And so then I was like, nope, I said it wrong. I need to say it right. It's 'Vicceri' even if it's more challenging for people. And I think Chad Pytel had just walked in at that moment when I was telling you that I had said my name differently. And he's like, "You can't do that." And I'm like, "Well, I did it. It's already out there in the world." [laughs] But also, I'm one of those people that's like, 'Viccari', 'Vicceri', I will accept either. On a slightly different topic and something that's going on in my world, there was a small win today with a client team that I really appreciated, where someone brought up the conversation around the RuboCop RSpec rules and how RuboCop was fussing at them because they had too many lines in their test example. And so they're like, I feel like I'm competing, or I'm working against RuboCop. RuboCop wants me to shorten my test example lines, but I'm not sure what else to do about it. And someone's like, "Well, you could extract more into before blocks and to lets and to helpers or things like that to then shorten the test." They're like, "But that does also work against readability of the test if you do that." So then there was a nice, short conversation around, well, then we really need more flexibility. We shouldn't let the RuboCop metrics drive us in this particular decision when we really want to optimize for readability. And so then it was a discussion of, okay, well, how much flexibility do we add to it? And I was like, "Well, what if we just got rid of it? Because I don't think there's an ideal length for how long your test should be. And I'd rather empower test authors to use all the space that they need to show their test setup and even lean into duplication before they extract things because this codebase has far more DRY tests than it does duplication concerns. So I'd rather lean into the duplication at this point." And the others that happened to be in that conversation were like, "Yep, that sounds good." So then that person issued a PR that removed the check for that particular "how long are the examples?" metric. And it was lovely. It was just a nice, quick win and a wonderful discussion that someone had brought up. CHRIS: Ooh, I like that.
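For context, removing that check typically comes down to a one-line configuration change. A minimal sketch of what that .rubocop.yml edit might look like, assuming the rubocop-rspec extension is loaded via require (cop names and config layout vary by RuboCop version and project):

```yaml
# .rubocop.yml (sketch) -- stop enforcing a maximum example length
# so test authors can use the space they need for readable setup.
require:
  - rubocop-rspec

RSpec/ExampleLength:
  Enabled: false
```

Raising the cop's Max value instead of disabling it outright is the softer version of the same trade-off.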
That sounds like a great conversation that hit on why do we have this? What are the trade-offs? Let's actually remove it. And it’s also nice that you got to that place. I've seen a lot of folks have a lot of opinions in the past in this space. And opinions can be tricky to work around, and just deeply, deeply entrenched opinions is the thing that I find interesting. And I think I'm increasingly in the space of those sort of, thou shalt not type linter rules are not ideal in my mind. I want true correctness checks that really tell some truth about the codebase. Like, we still don't have RuboCop on our project at Sagewell. I think that's true. Yeah, that's true. We have ESLint, but it's very minimal, what we have configured. And they more are in the what we deem to be true correctness checks, although that is a little bit of a blurry line there. But I really liked that idea. We turn on formatters. They just do the thing. We're not allowed to discuss the formatting, with the exception of that time that everybody snuck in and switched my 80-line length to a 120-line length, but I don't care. I'm obviously not still bitter about it. [chuckles] And then we've got a very minimal linting layer on top of that. But like TypeScript, I care deeply, and I think I've talked in previous episodes where I'm like, dial up the strictness to 14 because TypeScript tends to tell me more truths I find, even though I have to jump through some hoops to be like TypeScript, I know that this is fine, but I can't prove it. And TypeScript makes me prove it, which I appreciate about it. I also really liked the way you referred to RSpec's feedback to you was that RSpec was fussing at you. That was great. I like that. I'm going to internalize that. Whenever a linter or type system or anything like that when they tell me no, I'm going to be like, stop fussing, nope, nope. [chuckles] STEPH: I don't remember saying that, but I'm going to trust you that that's what I said. That's just my true southern self coming through on the mic, fussing, and then go get a biscuit, and it'll just be a delightful day. CHRIS: So if I give RuboCop a biscuit, it will stop fussing at me, potentially? STEPH: No, the biscuit is just for you. You get fussed at; you go get a biscuit. It makes you feel better, and then you deal with the fussing. CHRIS: Sold. STEPH: Fussing and cussing, [laughs] that's most of my work life lately, fussing and cussing. [laughs] CHRIS: And occasional biscuits, I hope. STEPH: And occasional biscuits. You got it. But that's what's new in my world. What's going on in your world? CHRIS: Let's see. In my world, it's a short week so far. So recording on Wednesday, Monday was a holiday. And I was out all last week, which very much enjoyed my vacation. It was lovely. Went over to Europe, hung out there for a bit, some time in Paris, some time in Amsterdam, precious little time on a computer, which is very rare for me. So it was very enjoyable. But yeah, back now trying to just get back into the swing of things. Thankfully, this turned out to be a really great time to step away from the work for a little while because we're still in this calm before the storm but in a good way is how I would describe it. We have a major facet of the Sagewell platform that we are in the planning modes for right now. But we need to get a couple of different considerations, pick a partner vendor, et cetera, that sort of thing. So right now, we're not really in a position to break ground on what we know will be a very large body of work. 
We're also not taking on anything else too big. We're using this time to shore up a lot of different things. As an example, one of the fun things that we've done in this period of time is we have a lot of webhooks in the app, like a lot of webhooks coming into the app, just due to the fact that we're an integration of a lot of services under the hood. And we have a pattern for how we interact with and process them, so we actually persist the webhook data when they come in. And then we have a background job that processes them, and we've built our pattern to make sure we're not losing anything, with the ability to verify against our local version and the remote version, a bunch of different things. Because it turns out webhooks are critical to how our app works. And so that's something that we really want to take very seriously and build out how we work with that. I think we have eight different webhook integrations right now; maybe it's more. It's a lot. And with those, we've implemented the same pattern now eight times, I want to say. And in squinting at it from a distance, we're like, it is indeed identically the same pattern in all eight cases, or with the tiniest little variation in one of them. And so we've now accepted, like, okay, that's true. So for the next one of them that we introduced, we opted to do it in a generic way. So we introduced the abstraction with the next iteration of this thing. And now we're in a position...we're very happy with what we ended up with there. It's like the best of all of the other versions of it. And now, the plan will be to slowly migrate each of the existing ones to be no longer a unique special version of webhook processing but to use the generic webhook processing pattern that we have in the app. So that's nice. I feel good about how long we waited as well because it's like, we have webhooks. Let's introduce the webhook framework to rule them all within our app. It's like, no, wait until you see; check and make sure they are, in fact, the same and not just incidental duplication. STEPH: I appreciate that so much. That's awesome. That sounds like a wonderful use of that in-between state that you're in, where you still get to make progress but also introduce some refactoring and a new concept. And I also appreciate how long you waited because that's one of those areas where I've just learned, like, just wait. It's not going to hurt you. Just embrace the duplication and then make sure it's the right thing. Because even if you have to go in and update it in a couple of places, okay, sure, that feels a little tedious, but it feels very safe too. If it doesn't feel safe...I could talk myself back and forth on this one. If it doesn't feel safe, that's a different discussion. But if you're going through and you have to update something in a couple of different places, that's quick. And sure, you had to repeat yourself a little bit, but that's fine. Versus if you have two or three of something and you're like, oh, I immediately must extract, that's probably going to cause more pain than it's worth at this point. CHRIS: Yeah, exactly, exactly that. And we did get to that place where we were starting to feel a tiny bit of pain. We had a surprising bit of behavior that, when we looked at it, we were like, oh, that's interesting; because of how we implemented the webhook pattern, this is happening. And so then we went to fix it, but we were like, oh, it would actually be really nice to have this fixed across everything.
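As a rough illustration of that "persist first, process in a background job" shape, here's a minimal Rails-flavored sketch; the class, column, and handler names are placeholders, not Sagewell's actual code:

```ruby
# Sketch of a generic inbound-webhook pattern: persist the raw payload
# immediately, then process it asynchronously. All names are illustrative.
class WebhooksController < ApplicationController
  skip_before_action :verify_authenticity_token

  def create
    # Persisting first means nothing is lost if processing fails; the stored
    # record can also be compared against the remote copy later.
    webhook = InboundWebhook.create!(
      source: params[:source],      # e.g. "stripe", "plaid"
      payload: request.raw_post,
      status: "pending"
    )
    ProcessInboundWebhookJob.perform_later(webhook.id)
    head :ok
  end
end

class ProcessInboundWebhookJob < ApplicationJob
  def perform(webhook_id)
    webhook = InboundWebhook.find(webhook_id)
    handler_for(webhook.source).call(webhook)
    webhook.update!(status: "processed")
  end

  private

  # Each integration plugs in its own handler; the surrounding plumbing
  # (persistence, status tracking, retries) stays identical for all of them.
  def handler_for(source)
    {
      "stripe" => Webhooks::StripeHandler,
      "plaid"  => Webhooks::PlaidHandler,
    }.fetch(source)
  end
end
```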
We've had conversations about other refinements, enhancements, et cetera; that we could do in this space. That, again, would be really nice to be able to do holistically across all of the different webhook integration things that we have. And so it feels like we waited the right amount of time. But then we also started to...we're trying to be very responsive to the pressure that the system is pushing back on us. As an aside, the crispy Brussels snack hour and the crispy Brussels work lunch continue to be utterly fantastic ways in which we work. For anyone that is unfamiliar or hasn't listened to episodes where I rambled about those nonsense phrases that I just said, they're basically just structured time where the engineering team at Sagewell looks at and discusses higher-level architecture, refactoring, developer experience, those sort of things that don't really belong on the core product board. So we have a separate place to organize them, to gather them. And then also, we have a session where we vote on them, decide which ones feel important to take on but try and make sure we're being intentional about how much of that work we're taking on relative to how much of core product work and try and keep sort of a good ratio in between the two. And thus far, that's been really fantastic and continues to be, I think, really effective. And also the sort of thing that just keeps the developer team really happy. So it's like, I'm happy to work in this system because we know we have a way to change it and improve it where there's pain. STEPH: I like the idea of this being a game show where it's like refactor island, and everybody gets together and gets to vote which refactor stays or gets booted off the island. I'm also going to go back and qualify something I said a moment ago, where if something feels safe in terms of duplication, where it starts to feel unsafe is if there's like an area that you forgot to update because you didn't realize it's duplicated in several areas and then that causes you pain. Then that's one of those areas where I'll start to say, "Okay, let's rethink the duplication and look to dry this up." CHRIS: Yep, indeed. It's definitely like a correction early on in my career and overcorrection back and trying to find that happy medium place. But as an aside, just throwing this out there, so webhooks are an interesting space. I wish it were a more commoditized offering of platforms. Every vendor that we're integrating with that does webhooks does it slightly differently. It's like, "Oh, do you folks have retries?" They're like, "No." It's like, oh, what do you mean no? I would love it if you had retries because, I don't know, we might have some reason to not receive one of them. And there's polling, and there are lots of different variations. But the one thing that I'm surprised by is that webhook signing I don't feel like people take it serious enough. It is a case where it's not a huge security vulnerability in your app. But I was reading someone who is a security analyst at one point. And they were describing sort of, I've done tons of in-the-code audits of security practices, and here are the things that I see. And so it's the normal like OWASP Top 10 Cross-Site Request Forgery, and SQL injection, and all that kind of stuff. But one of the other ones he highlighted is so often he finds webhooks that are not verified in any way. So it's just like anyone can post data into the system. And if you post it in the right shape, the system's going to do some stuff. 
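The fix for that hole is the signature verification Chris goes on to describe: recompute a checksum of the raw payload and compare it to the signature header the provider sends. A minimal sketch, assuming an HMAC-SHA256 scheme and a made-up header name (real providers each have their own variant):

```ruby
require "openssl"
require "active_support/security_utils"

# Sketch of HMAC-based webhook verification. The header name and hex
# encoding are assumptions; check the provider's docs for the real scheme.
class WebhookSignature
  def initialize(secret)
    @secret = secret
  end

  def valid?(raw_payload, signature_header)
    expected = OpenSSL::HMAC.hexdigest("SHA256", @secret, raw_payload)

    # Constant-time comparison avoids leaking information through timing.
    ActiveSupport::SecurityUtils.secure_compare(expected, signature_header.to_s)
  end
end

# Usage in a controller action (sketch):
#   signature = request.headers["X-Webhook-Signature"]
#   verifier  = WebhookSignature.new(ENV.fetch("WEBHOOK_SIGNING_SECRET"))
#   head :unauthorized and return unless verifier.valid?(request.raw_post, signature)
```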
And there's no way for the external system to enforce that you properly validate and verify a webhook coming in, verify that payload. It's an extra thing where you do the checksum math and whatnot and take the signature header. I've seen somewhere they just don't provide it. And it's like, what do you mean you don't provide it? You must provide it, please. So it's either have an API key so that we have some way to verify that you are who you say you are or add a signature, and then we'll calculate it. And it's a little bit of a dance, and everybody does it different, but whatever. But the cases where they just don't have it, I'm like, I'm sorry, what now? You're going to say whom? But yeah, then it's our job to definitely implement that. So this is just a notice out there to anyone that's listening. If you got a bunch of webhook handling code in your app, maybe spot-check that you're actually verifying the payloads because it's possible that you're not. And that's a weird, very open hole in the side of your application. STEPH: That's a really great point. I have not worked with webhooks recently. And in the past, I can't recall if that's something that I've really looked at closely. So I'm glad you shared that. CHRIS: It's such an easy thing to skip. Like, it's one of those things that there's no way to enforce it. And so, I'd be interested in a survey that can't be done because this is all proprietary data. But what percentage of webhook integrations are unverified? Is it 50%? Is it 10%? Is it 100%? It's definitely not 100. But it's somewhere in there that I find interesting. It's not a terribly exploitable vulnerability because you have to have deep knowledge of the system. In order to take advantage of it, you need to know what endpoint to hit to, what shape of data to send because otherwise, you're probably just going to cause an error or get a bunch of 404s. But like, it's, I don't know, it's discoverable. And yeah, it's an interesting one. So I will hop off my webhook soapbox now, but that's a thought. MIDROLL AD: Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers, that can actually help you cut your debugging time in half. So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake’s debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake’s lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? 
You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today! CHRIS: But now that I'm off my soapbox, I believe we have a topic that was suggested. Do you want to provide a little bit of context here, Steph? STEPH: Yeah, I'd love to. So this came up when I was having a conversation with another thoughtboter. And given that we change projects fairly frequently, on the Boost team, we typically change projects around every six months. They asked a really thoughtful question that was "How do you get acquainted with a new codebase? So given that you're changing projects so often, what are some of the tips and tricks for ways that you've learned to then quickly get up to speed with a new codebase?" Because, frankly, that is one of the thoughtbot superpowers is that we are really good at onboarding each other and then also getting up to speed with a new team, and their processes, and their codebase. So I have a couple of ideas, and then I'd love to hear some of your thoughts as well. So I'll dive in with a couple. So the first one, this one's frankly my favorite. Like day one, if there's a team where I'm joining and they have someone that can walk me through the application from the users' perspective, maybe it's someone that's in sales, or maybe it's someone on the product team, maybe it's a recording that they've already done for other people, but that's my first and favorite way to get to know an application. I really want to know what are users experience as they're going through this app? That will help me focus on the more critical areas of the application based on usage. So if that's available, that's fabulous. I'm also going to tailor a lot of this more to like a Rails app since that's typically the type of project that I'm onboarding to. So the other types of questions that I like to find answers to are just like, what's my top-level structure? Like to look through the app and see how are things organized? Chris, you've mentioned in a previous episode where you have your client structure that then highlights all the third-party clients that you're working with. Are we using engines in the app? Is there anything that seems a bit more unique to that application that I'm going to want to brush up on or look into? What's the test coverage like? Do they have something that's already highlighting how much test coverage they have? If not, is there something that then I can run locally that will then show me that test coverage? I also really like to look at the routes file. That's one of my other favorite places because that also is very similar to getting an overview of the product. I get to see more from the user perspective. What are the common resources that people are going to, and what are the domain topics that I'm working with in this new application? I've got a couple more, but I'm going to pause there and see how you get acquainted with a new app. CHRIS: Well, unsurprisingly, I agree with all of those. We're still searching for that dare to disagree beyond Pop-Tarts and IPAs situation. To reiterate or to emphasize some of the points you made, the sales demo thing? I absolutely love that one because, yes, absolutely. What's the most customer-centric point of view that I can have? Can I then login to a staging version of the site so I can poke around and hopefully not break anything or move real money or anything like that? 
But understanding why is this thing, not in code, but in actual practical, observable, intractable software? Beyond that, your point about the routes, absolutely, that's one of my go-to's, although the routes there often is so much in the routes, and it's like some of those may actually be unused. So a corollary to the routes where available if there's an APM tool like Scout, or New Relic, or something like that, taking a look at that and seeing what are the heavily trafficked endpoints within this app? I like to think about it as the entry points into this codebase. So the routes file enumerates all of them, but some of them matter, and some of them don't. And so, an APM tool can actually tell you which are the ones that are seeing a ton of traffic. That's a really interesting question for me. Similarly, if we're on Heroku, I might look is there a scheduler? And if so, what are the tasks that are running in the background? That's another entry point into the app. And so I like to think about it from that idea of entry points. If it's not on Heroku, and then there's some other system, like, I've used Cronic. I think it's Cronic, Whenever the Cron thing. Whenever, that's what it is, the Whenever gem that allows you to implement that, but it's in a file within the codebase, which as an aside, I really love that that's committed and expressive in the code. Then that's another interesting one to see. If it's more exotic than that, I may have to chase it down or ask someone, but I'll try and find what are all of the entry points and which are the ones that matter the most? I can drill down from there and see, okay, what code then supports these entry points into the application? I want to give an answer that also includes something like, oh, I do fancy static analysis in the codebase, and I do a churn versus complexity graph, and I start to...but I never do that, if we're being honest. The thing that I do is after that initial cursory scan of the landscape, I try and work on something that is relatively through the layers of the app, so not like, oh, I'll fix the text in a button. But like, give me something weird and ideally, let me pair with someone and then try and move through the layers of the app. So okay, here's our UI. We're rendering in this way. The controllers are integrated in this way, et cetera. This is our database. Try and get through all the layers if possible to try and get as holistic of a view of how the application works. The other thing that I think is really interesting about what you just said is you're like, I'm going to give some answers that are somewhat specific to a Rails app. And that totally makes sense to me because I know how to answer this in the context of a Rails app because those organizational patterns are so useful that I can hop into different Rails apps. And I've certainly seen ones that I'm like, this is odd and unfamiliar to me, but most of them are so much more discoverable because of that consistency. Whereas I have worked on a number of React apps, and every single one I come into, I'm like, okay, wait, what are we doing? How are we doing state management? What's the routing like? Are we server-side rendering, are we not? And it is a thing that...I see that community really moving in the direction of finding the meta frameworks that stitch the pieces together and provide more organizational structure and answer more of the questions out of the box. 
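For anyone who hasn't used it, the Whenever schedule Chris mentions lives in config/schedule.rb and reads roughly like this; the job and task names below are placeholders:

```ruby
# config/schedule.rb -- the Whenever DSL (sketch; job/task names are made up).
# Running `whenever --update-crontab` turns this into crontab entries, so the
# app's scheduled entry points stay committed and reviewable in the codebase.

every 1.day, at: "4:30 am" do
  runner "DailyDigestJob.perform_later"
end

every :hour do
  rake "webhooks:reconcile"
end
```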
But it continues to be something that I absolutely love about Rails is that Rails answers so many of the questions for me. New people joining the team are like, oh, it's a Rails app, cool. I know how to Rails, and we get to run with that. And so that's more of a pitch for Rails than an answer to the question, but it is a thing that I felt in answering this question. [laughs] But yeah, those are some thoughts. But interested, it sounds like you had some more as well. I would love to hear what else was in your mind when you were thinking about this. STEPH: I do. And I want to highlight you said some really wonderful things. One that really stuck out to me that I had not considered is using Scout APM to look at heavily-trafficked endpoints. I have that on my list in regards as something that I want to know what's my error tracking, observability. Like, if I break something or if you give me a bug ticket to work on, what am I going to use? How am I going to understand what's going wrong? But I hadn't thought of it in terms of seeing which endpoints are heavily used. So I really liked that one. I also liked how you highlighted that you wish you'd do something fancy around doing a churn versus complexity kind of graph because I thought of that too. I was like, oh, that would be such a nice answer. But the truth is I also don't do that. I think it's all those things. I think it would be fun to make it easy. So I do that with new applications. But I agree; I typically more just dive in like, hey, give me a ticket. Let me go from there. I might do some simple command-line checking. So, for example, if I want to look through app models, let's find out which model is the largest. I may look for that to see do we have a God object or something like that? So I may look there. I just want to know how long are some of these files? But I also don't use a particular tool for that churn versus complexity. CHRIS: I think you hit the nail on the head with like, I wish that were easier or more in our toolset. But here on The Bike Shed, we tell the truth. And that is aspirational code flexing that we do not yet have. But I agree, that would be a really nice way to explore exactly what you're describing of, like, who are the God models? I'll definitely do that check, but not some of the more subtle and sophisticated show me the change over time of all these...like nah, that's not what I'm doing, much as I would like to be able to answer that way. STEPH: But it also feels like one of those areas like, it would be nice, but I would be intrigued to see how much I use that. That might be a nice anecdote to have. But I find the diving into the codebase to be more fruitful because I guess it depends on what I'm really looking at. Am I looking to see how complicated of a codebase this is? Because then I need to give more of a high-level review to someone to say how long I think it's going to take for me to work on a particular feature or before I'm joining a team, like, who do I think are good teammates that would then enjoy working on this application? That feels like a very different question to me versus the I'm already part of the team. I'm here. We're going to have complexity and churn. So I can just learn some of that over time. I don't have to know that upfront. Although it may be nice to just know at a high level, say like, okay, if I pick up a ticket, and then I look at that churn and complexity, to be like, okay, my ticket falls right smack-dab in the middle of that. So it's going to be a fun first week. 
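That quick "which model is the largest?" check doesn't need any tooling; a throwaway Ruby one-off along these lines does the job (a sketch, not a script either host actually uses):

```ruby
# Crude God-object detector: list the ten longest files under app/models.
# Line count is a rough proxy, but it's a fast first look at a new codebase.
Dir.glob("app/models/**/*.rb")
   .map { |path| [File.foreach(path).count, path] }
   .sort
   .last(10)
   .reverse
   .each { |lines, path| puts format("%5d  %s", lines, path) }
```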
That could be a fun fact. But otherwise, I'm not sure. I mean, yeah, I'd be intrigued to see how much it helps me. One other place that I do browse is I go to the gem file. I'm just always curious, what do people have in their tool bag? I want to see are there any gems that have been pulled in that are helping the team process some deprecated behavior? So something that's been pulled out of Rails but then pulled into a separate gem. So then that way, they don't have to upgrade just yet, or they can upgrade but then still keep some of that existing old deprecated behavior. That kind of stuff is interesting to me. And also, you called it earlier pairing. That's my other favorite way. I want to hear how people talk about the codebase, how they navigate. What are they frustrated by? What brings them joy? All of that is really helpful too. I think that covers all the ways that I immediately will go to when getting acquainted with a new codebase. CHRIS: I think that covers most of what I have in mind, although the question is framed in an interesting way that I think really speaks to the consultant mindset. How do I get acquainted with a new codebase? But if you take the question and flip it around sort of 180 degrees, I think the question can be reframed as how does an organization help people onboard into a codebase? And so everything we just described are like, here's what I do, here's how I would go about it, and pairing starts to get to collaboration. I think we've talked in a number of episodes about our thoughts on onboarding and being intentional with that, pairing people up. A lot of things we described it's like, it's ideal actually if the organization is pushing this. And you and I both worked as consultants for long enough that we're really in the mindset of like, all right, let's assume I'm just showing up. There's no one else there. They give me a laptop and no documentation and no other humans I'm allowed to talk to. How do I figure this out and get the next feature out to production? And ideally, it's something slightly better than that that we experience, but we're ready for whatever it is. Versus, most people are working within the context of an organization for a longer period of time. And most organizations should be thinking about it from the perspective of how do I help the new hires come into this codebase and become effective as quickly as possible? And so I think a lot of what we said can just be flipped around and said from the other way, like, pair them up, put them on a feature early, give them a walkthrough of the codebase, give them a sales-centric demo. Yeah, I feel equally about those things when said from the other side, but I do want to emphasize that this shouldn't be you're out there in the middle of the jungle with only a machete, and you got to figure out this codebase. Ideally, the organization is actually like, no, no, we'll help you. It's ours, so we know it. We can help you find the weird stuff. STEPH: That's a really nice distinction, though, because you're right; I hadn't really thought about this. I was thinking about this from more of the perspective of you're out in the jungle with a machete, minus we did mention pairing in there [laughs] and maybe a demo. I was approaching it more from you're isolated or more solo and then getting accustomed to the codebase versus if you have more people to lean on. 
But then that also makes me think of all the other processes that I didn't mention that I would include in that onboarding that you're speaking of, like, how does this team work in terms of where do I push my code? What hooks are going to run? And then what do I wait for? How many people need to review my code? There are all those process-y questions that I think would ideally be included in the onboarding. But that has happened before, I mean, where we've joined projects, and it's been like, okay, good luck. Let us know if you need anything. And so then you do need those machete skills to then start hacking away. [laughs] CHRIS: We've been burned before. STEPH: They come in handy. [laughs] So when you are in that situation, and there's a comet that's coming to destroy Earth, and there's a Rails application that is preventing this big doomsday, the question is, do you take astronauts and train them to be Rails experts, or do you take Rails developers and train them to be astronauts? I think that's the big question. CHRIS: What would Michael Bay do? STEPH: On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at [email protected] via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Sponsored By:

Airbrake: Deploy fearlessly and fix bugs faster with Airbrake Error & Performance Monitoring. Airbrake notifiers are available for all major programming languages and frameworks, and install in minutes, with an open-source SDK-based install and near-zero technical debt. Spend less time tracking down bugs and more time developing. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.

BuildPulse: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time. Flaky tests slow everyone down, break your flow, and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix them all today. So how do you know where to start? BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month! And you can do the same because BuildPulse integrates with the tools you're already using. It supports all the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more. So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed.

Support The Bike Shed
