Thread Pools Configuration - Storm Streaming Server

Storm Streaming Server was designed as a scalable, multithreaded server application. Its thread pool sizes can either be scaled automatically by the server, as shown in the example below:

<ThreadPools mode="automatic"/>

... or the thread pools can be adjusted manually:

<ThreadPools mode="manual">
    <ReaderThreads>10</ReaderThreads>
    <WorkerThreads>30</WorkerThreads>
    <TransportThreads>10</TransportThreads>
    <WriterThreads>50</WriterThreads>
</ThreadPools>

Field Explanation

ThreadPools:mode
    Thread pool sizes can be adjusted according to system capabilities with automatic mode, or specified by hand with manual mode.

ReaderThreads
    Number of threads responsible for decoding incoming packets across all virtual hosts. The value should be equal to the number of physical processor cores.

WorkerThreads
    Number of threads responsible for tasks such as recording video files or performing checks. The value should be equal to ¼ of the number of physical processor cores.

TransportThreads
    Number of threads responsible for transporting data packets within the server. The value should be equal to the maximum number of broadcasts for this instance times 1.5.

WriterThreads
    Number of threads responsible for encoding outgoing packets to end clients. The value should be equal to twice the number of physical processor cores.

Table 1. Field explanation
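
As an illustration of these sizing rules, here is a hypothetical manual configuration for a machine with 8 physical cores that is expected to handle at most 20 concurrent broadcasts (the hardware figures are assumptions chosen only for this example): ReaderThreads = 8 (one per core), WorkerThreads = 2 (¼ of the cores), TransportThreads = 30 (20 broadcasts × 1.5), WriterThreads = 16 (cores × 2).

<ThreadPools mode="manual">
    <!-- Illustrative values for 8 physical cores and up to 20 concurrent broadcasts -->
    <ReaderThreads>8</ReaderThreads>
    <WorkerThreads>2</WorkerThreads>
    <TransportThreads>30</TransportThreads>
    <WriterThreads>16</WriterThreads>
</ThreadPools>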

Memory Requirements for Thread Pools

Please keep in mind that a higher number of threads within the pools will require more memory for their allocation. You might need to increase the total maximum memory allocation pool for the Java Virtual Machine (it is safe to assume that each thread consumes up to 10 MB).
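
For example, the manual configuration shown earlier defines 100 threads in total, which under the 10 MB-per-thread assumption corresponds to roughly 1 GB of additional memory that should be available to the Java Virtual Machine (a rough upper-bound estimate, not an exact requirement):

10 + 30 + 10 + 50 = 100 threads
100 threads × 10 MB ≈ 1 000 MB (about 1 GB) of additional memory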

You can check how to increase the Java VM memory allocation in our Native Installation guide, or in the Docker Installation guide for Docker-based setups.

Support Needed?

Create a free ticket and our support team will provide you with the necessary assistance.