Multi-threading is a mechanism I love to hate these days, because most of the time there is a performance war happening inside the server. Earlier, I was happy to have implemented more parallel-execution solutions, and of course they scaled very well. But the flip side on the server end is that the application container fights for memory and I/O. So threads have to be kept under a certain limit: allocating heap memory and reserving stack space for every thread is a big overhead.
Another important caveat is continuous polling for requests. Let me explain my own use case. I have a daemon thread that listens on a queue for new messages at regular intervals. Each message contains the URL of a file on a remote server. For every message, the parent thread spawns a new child thread, which downloads the file, extracts its content, and then indexes it. The number of threads spawned is controllable from a configuration property, and the work is mostly I/O operations inside the server. Such a polling loop is a CPU hog: it can pin the CPU at 100% for the entire duration of the program that processes the messages. And though the application is multi-threaded, each core still does only one thing at a time. Hyper-threading is a dirty lie.
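As a rough illustration of the consumer described above, here is a minimal sketch, with hypothetical names. The real system used OS threads inside a server container; this models the same configurable cap with asynchronous handlers in JavaScript:

```javascript
// Hypothetical sketch of the queue consumer: a dispatcher hands each message
// to a handler, but never allows more than MAX_WORKERS handlers in flight,
// mirroring the configurable thread cap mentioned above.
const MAX_WORKERS = 4;      // in the real system this comes from configuration
let inFlight = 0;
const waiting = [];         // messages parked until a worker slot frees up

function handleMessage(msg, done) {
  inFlight++;
  // stand-in for the real work: download msg's file, extract content, index it
  setImmediate(() => {
    inFlight--;
    done(msg);
    drain();                // a slot freed up; pull the next parked message
  });
}

function dispatch(msg, done) {
  if (inFlight < MAX_WORKERS) handleMessage(msg, done);
  else waiting.push([msg, done]);
}

function drain() {
  while (waiting.length > 0 && inFlight < MAX_WORKERS) {
    const [msg, done] = waiting.shift();
    handleMessage(msg, done);
  }
}
```

With a cap like this, a burst of messages no longer translates into an unbounded number of workers; the excess simply waits in the queue until a slot is free.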
Event-driven approach
What is event-driven? Simply put, an event is a signal to the program that something has happened. For a real-world scenario, let's take a school admissions program. A famous school has opened admissions for children, and people are trying to get seats for their kids.
There is a big queue in front of the admissions office. The procedure is to fill in a form and pay the fee to get a seat. Assume there is one administrative officer (one thread) handling the admission formalities. Each person blocks him or her from serving anyone else until the form is filled in. The only real way to scale a thread-based system is to add more officers. This, however, has financial implications, in that you have to pay more people, and physical implications, in that you have to make room for the additional officer windows. In an event-based system, the administrative officer gives you the form and tells you to come back once you have completed it. You step out of the queue, complete the form, and then rejoin it. Meanwhile, the officer serves others in the same manner. So it's faster and more available.
Node.js with event-driven callbacks
Asynchronous, non-blocking I/O is one of the main advantages of Node.js. Why is this approach so fast? Node.js runs only one thread; whenever an I/O call happens, it is performed asynchronously, and Node.js gets a notification from the operating system (using epoll on Linux) when it completes. So Node.js never waits for I/O calls to finish, since it works on an event-callback mechanism; while the I/O is in flight, it serves other requests. It allocates only a small amount of heap memory per event, and it does not hold many thread stacks in memory even at high concurrency levels. It scales very well without those overheads.
It is a pleasure to use a lightweight yet very powerful framework like Node.js and avoid deadlocks, race conditions, heavy context-switching headaches, and big memory-consumption issues.