In an interactive application, whenever you must perform an operation that can take more than a few seconds, assign it to a worker thread with lower than normal priority.
This keeps the main thread free to handle new user commands, and the worker thread's lower priority keeps the user interface responsive.
Strictly speaking, this guideline improves not the speed of the application but its responsiveness; users, however, perceive responsiveness as speed.
Multiple worker threads
In a multicore system, if you can split a CPU-intensive operation across several threads, use as many worker threads as there are processor cores.
In this way, every core can run one worker thread. If there are more worker threads than cores, the result is heavy thread-switching, which reduces execution speed. The main thread does not affect performance, as it is mostly idle.
This recipe does not hold for I/O-bound tasks; scheduling threads that are all waiting for the same disk only causes overhead. But one thread can compute while another is reading from disk, so two threads can perform useful work in some I/O-bound programs. Similarly, two threads can sometimes make better use of a full-duplex network link than one can.
Use of multi-threaded libraries
If you are developing a single-threaded application, don't use libraries designed only for multi-threaded applications.
The techniques used to make a library thread-safe may have memory and time overheads. If you don't use threads, avoid paying their costs.
Creation of multi-threaded libraries
If you are developing a library, handle its use by multi-threaded applications correctly, but also optimize for cases where it is used by single-threaded applications.
The techniques used to make a library thread-safe may have memory and time overheads. If the users of your library don't use threads, don't force them to pay the costs of thread safety.
Use mutual exclusion primitives only when several threads access the same data concurrently and at least one of the accesses is a write.
Mutual exclusion primitives have an overhead.
If you are sure that, in a given period, no thread is writing to a memory area, there is no need to synchronize read accesses to that area.