QA Friday 2016-May-06

Take Up Code - A podcast by Take Up Code: build your own computer games, apps, and robotics with podcasts and live classes

Do it in place. What does that mean? This is a common expression for programmers, and I'll explain what it means and what the alternatives are. If the size of the data you're working with is large compared to the memory available, you may not be able to perform some operations on that data unless you can do them in place. When programmers talk about doing something in place, this means using the memory the data already occupies, plus maybe a small amount extra. That's it. Let's say you need to rearrange some characters in a word, maybe into a random order or maybe just backwards. Instead of allocating more memory for a copy of the word that you build from the original with the characters rearranged, an in-place algorithm swaps two characters at a time right in the original word until you're done. This has the advantage of not requiring any extra memory beyond a reasonable and fixed amount. But it also has the downside that once you begin, the original word is no longer available unless you reverse the work done. Listen to the full episode for more, or read the full transcript below.

Transcript

This is a common expression for programmers and I'll explain what it means and what the alternatives are. First of all, just think of what it means to run in place. Your legs and arms are moving, but you don't move around. When you have a bunch of people running in place in a small room, they don't interfere with each other. Imagine how many people would fall down if they were all running around the room instead. That brings us to one of the main reasons why you might need to do something in place in your code.

If the size of the data you're working with is large compared to the memory available, then you just may not be able to perform some operations on the data unless you can do this in place. When programmers talk about doing something in place, this means that you need to use the memory that some data already occupies and maybe a small amount extra. That's it.

Let me give you an example. Let's say that you need to rearrange the letters in a word so they're in a random order. This would be good for a word guessing game. If you're building this game for a modern computer, then it's not really going to matter how you rearrange the letters as long as your technique is correct. You don't need to worry about whether your algorithm is an in-place or an out-of-place algorithm. An algorithm is just the steps you take to accomplish something. It's the rules you follow no matter what the word happens to be.

But maybe you're writing this program to run on a small microcontroller with very limited memory. Even then, a single word at a time isn't going to make much of a difference. I'm just using the example of randomly rearranging the letters in a word because it's a simple example to explain, even though it's not the best example to show the need for an in-place algorithm. So to fix that, let's momentarily consider something where an in-place algorithm really could make all the difference. Instead of scrambling the letters in a word, maybe you need to scramble the pieces of a large puzzle. It's common to find cardboard puzzles in stores with a thousand pieces, and I'm sure there are much more challenging puzzles available. Even a puzzle with a million pieces, though, would fit in a modern computer's memory many thousands of times over with no problem, although it would start to stress a small embedded control board.
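To make that letter scrambling concrete, here is a minimal sketch of an in-place Fisher-Yates shuffle. The episode doesn't give code, so the language (C++), the function name shuffleInPlace, and the use of the standard <random> facilities are all this example's own choices:

#include <iostream>
#include <random>
#include <string>
#include <utility>

// Shuffle a word's characters in place: one swap at a time, right in
// the original string, with no second copy of the word allocated.
void shuffleInPlace(std::string& word, std::mt19937& rng)
{
    for (std::size_t i = word.size(); i > 1; --i)
    {
        // Pick a random spot among the not-yet-placed characters.
        std::uniform_int_distribution<std::size_t> pick(0, i - 1);
        std::swap(word[i - 1], word[pick(rng)]);
    }
}

int main()
{
    std::mt19937 rng{std::random_device{}()};
    std::string word = "podcast";
    shuffleInPlace(word, rng);
    std::cout << word << '\n'; // e.g. "dcaspot"
}

The only extra memory used is a loop index, a distribution, and the random number engine, and that stays fixed no matter how long the word gets.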
So for what kinds of problems would this really be an issue? For that, think about a server computer that regularly handles requests from millions of users. Now you can see that it doesn't take a very large problem to cause difficulty when that problem is being solved in slightly different variations for many users at the same time.
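The description above also mentions reversing a word by swapping two characters at a time, along with the tradeoff that comes with it: no extra copy is needed, but the original ordering is destroyed. Here is a sketch contrasting that in-place approach with the out-of-place alternative; the function names reverseInPlace and reversedCopy are this example's own, not from the episode:

#include <iostream>
#include <string>
#include <utility>

// In place: swap pairs of characters, working inward from both ends.
// Needs only a fixed amount of extra memory (two indexes), but the
// original ordering is gone once the swapping starts.
void reverseInPlace(std::string& word)
{
    std::size_t left = 0;
    std::size_t right = word.size();
    while (right > left + 1)
    {
        --right;
        std::swap(word[left], word[right]);
        ++left;
    }
}

// Out of place: build a reversed copy and leave the original intact.
// Costs extra memory proportional to the size of the data.
std::string reversedCopy(const std::string& word)
{
    return std::string(word.rbegin(), word.rend());
}

int main()
{
    std::string word = "stressed";
    std::cout << reversedCopy(word) << '\n'; // "desserts"; original kept
    reverseInPlace(word);
    std::cout << word << '\n';               // "desserts"; original gone
}

On a single word the difference is trivial, but multiplied across millions of simultaneous requests, the per-request copy is exactly the kind of memory pressure the episode describes.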
