98: Multithreading. The Great Divide.

Take Up Code - A podcast by Take Up Code: build your own computer games, apps, and robotics with podcasts and live classes


How do you assign work to threads? This episode explains several ways you can think about this and when to use them. You'll learn about the following four ways to divide the work to be done by your threads:

- When you have small work items arriving at different times, assign each task to a thread as needed. You'll probably want to use a thread pool so that you can avoid the extra overhead of creating and destroying threads for each small task.
- When you have a bunch of small tasks that are all ready to be processed, just divide them into smaller groups and assign each group to a thread.
- When it's not clear how to divide a task, or when you have several tasks that could take different amounts of work to complete, consider a queue: a thread takes a piece of work from the queue and either completes it or divides it into smaller tasks that it puts back into the queue.
- When you have large work items arriving at different times, consider splitting the work itself into fixed stages and assigning incoming tasks to a queue for the first stage. A thread takes a task out of the first-stage queue, performs just the part related to that stage, and then puts the task into another queue for the second stage.

Listen to the full episode, or read the full transcript below.

Transcript

You're all eager to use multiple threads, but for what purpose? Assuming you have enough work for multiple threads, how do you divide the work between them? You have some choices that will depend a lot on your specific circumstances. Here's a description of various scenarios and how you might approach dividing the work among threads in each one. This will be a short episode focused on four different scenarios, with suggestions for each about how to manage your threads.

Let's say that you have a line of messengers waiting to deliver internal memos and collect other memos for delivery. Maybe this is the mail room for a large company located in a high-rise office building. Every so often, you send one of the messengers to a specific floor to collect any memos that need to be sent to another employee. The messenger comes back with a handful of papers. You then start handing one memo at a time to a waiting messenger.

You can use this approach with threads too. If you have a series of small tasks that keep arriving, then hand each task to a waiting thread. Usually, you'll want to use a thread pool for this instead of creating new threads each time and destroying them when done with each task. A thread pool acts like that line of messengers: when a thread is done with its task, it goes back into the pool of waiting threads. (A sketch of such a pool follows below.)

Other times, you might know up front exactly how many tasks you need to perform. Let's say you have a thousand tasks. This is like the previous example, only instead of the tasks arriving bit by bit, you have all of them to work with at once. This scenario allows you to plan how many threads to use based on the available number of cores. If you know that you can efficiently run four threads at the same time, then give the first 250 tasks to the first thread, the second 250 to the second thread, and so on. This divides all thousand tasks between the four threads. You may not need a thread pool in this case, since you're taking a more active role in deciding how many threads to use and giving each thread a major piece of work to perform. (A partitioning sketch also follows below.)
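The episode itself doesn't include code, but here's a minimal sketch of that first idea in C++. The ThreadPool name and its post function are just illustrative; a production pool would also need exception handling and a way to return results.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A minimal fixed-size thread pool: workers wait for small tasks
// to arrive and go back to waiting when each task is done.
class ThreadPool {
public:
    explicit ThreadPool(std::size_t count) {
        for (std::size_t i = 0; i < count; ++i) {
            workers_.emplace_back([this] {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock(mutex_);
                        ready_.wait(lock, [this] {
                            return stopping_ || !tasks_.empty();
                        });
                        if (stopping_ && tasks_.empty()) {
                            return;          // no more work will arrive
                        }
                        task = std::move(tasks_.front());
                        tasks_.pop();
                    }
                    task();                  // run the task outside the lock
                }
            });
        }
    }

    // Hand a small task to a waiting thread.
    void post(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        ready_.notify_one();
    }

    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopping_ = true;
        }
        ready_.notify_all();
        for (auto& worker : workers_) {
            worker.join();
        }
    }

private:
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mutex_;
    std::condition_variable ready_;
    bool stopping_ = false;
};
```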
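And here's a sketch of the second scenario: partitioning a known batch of tasks into even groups, one group per thread. The Task type and process function are placeholders for whatever your real work items look like.

```cpp
#include <cstddef>
#include <thread>
#include <vector>

struct Task { int data = 0; };   // stand-in for a real work item

void process(Task& task) {       // stand-in for the real work
    task.data *= 2;
}

// Split the tasks into one contiguous group per thread.
void runInGroups(std::vector<Task>& tasks) {
    // Ask how many threads the hardware can run efficiently;
    // hardware_concurrency() may return 0, so fall back to 4.
    unsigned count = std::thread::hardware_concurrency();
    if (count == 0) count = 4;

    std::vector<std::thread> threads;
    std::size_t group = tasks.size() / count;

    for (unsigned i = 0; i < count; ++i) {
        std::size_t begin = i * group;
        // The last thread also takes any remainder.
        std::size_t end = (i + 1 == count) ? tasks.size() : begin + group;
        threads.emplace_back([&tasks, begin, end] {
            for (std::size_t j = begin; j < end; ++j) {
                process(tasks[j]);
            }
        });
    }
    for (auto& thread : threads) {
        thread.join();
    }
}
```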
I'll explain two more scenarios right after this message from our sponsor.

(Message from Sponsor)

The next scenario is a bit like the second, where you have all the items available. The difference is that in this case, the tasks aren't in easily identifiable pieces. In other words, you may not have 1000 separate tasks that you can just split into even groups. Actually, you may want to follow this suggestion even if you do have separate tasks.
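The suggestion being described here, a queue where a worker either completes an item or splits it into smaller pieces and puts them back, might look something like the sketch below. The WorkItem type, the complete function, and the size cutoff are all illustrative, and the pending count is one simple way to let workers know when everything is finished.

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>

// A piece of work covering a range of elements. Small ranges are
// completed directly; large ones are split and pushed back.
struct WorkItem {
    std::size_t begin;
    std::size_t end;
};

constexpr std::size_t kSmallEnough = 1000;  // illustrative cutoff

void complete(const WorkItem& item) { /* do the real work here */ }

class SplittingQueue {
public:
    // Add one piece of work; pending_ counts queued + in-progress items.
    void push(WorkItem item) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(item);
            ++pending_;
        }
        ready_.notify_one();
    }

    // Worker loop: run this on several threads.
    void work() {
        for (;;) {
            WorkItem item;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                ready_.wait(lock, [this] {
                    return pending_ == 0 || !tasks_.empty();
                });
                if (pending_ == 0) return;   // all work is finished
                item = tasks_.front();
                tasks_.pop();
            }
            if (item.end - item.begin > kSmallEnough) {
                // Too big to finish in one go: divide and requeue.
                std::size_t mid = item.begin + (item.end - item.begin) / 2;
                push({item.begin, mid});
                push({mid, item.end});
            } else {
                complete(item);              // small enough to just do
            }
            finishOne();
        }
    }

private:
    void finishOne() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (--pending_ == 0) {
            ready_.notify_all();             // wake workers so they can exit
        }
    }

    std::queue<WorkItem> tasks_;
    std::mutex mutex_;
    std::condition_variable ready_;
    std::size_t pending_ = 0;
};
```

To use it, you might push one item covering the full range, then call work() from several threads; each thread keeps splitting oversized items until everything left is small enough to complete directly.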
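The fourth approach from the summary, fixed stages connected by queues, could be sketched like this. The Document type and the parse and render stage names are invented for illustration, and a real pipeline would also need a shutdown signal, which is omitted here to keep the loops simple.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// A simple blocking queue connecting one pipeline stage to the next.
template <typename T>
class StageQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            items_.push(std::move(value));
        }
        ready_.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [this] { return !items_.empty(); });
        T value = std::move(items_.front());
        items_.pop();
        return value;
    }

private:
    std::queue<T> items_;
    std::mutex mutex_;
    std::condition_variable ready_;
};

struct Document { /* fields for a large work item */ };

StageQueue<Document> parseQueue;    // stage one input
StageQueue<Document> renderQueue;   // stage two input

void parseStage() {                 // runs on its own thread
    for (;;) {
        Document doc = parseQueue.pop();
        // ... do only the parsing part of the work ...
        renderQueue.push(std::move(doc));  // hand off to stage two
    }
}

void renderStage() {                // runs on its own thread
    for (;;) {
        Document doc = renderQueue.pop();
        // ... do only the rendering part of the work ...
    }
}
```

Each large work item flows through the stages in order, so several items can be in flight at once, with each thread doing only its own stage's share of the work.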
