Web architectures are an important asset for large-scale web applications such as social networks or e-commerce sites. Because these applications must handle huge numbers of users concurrently, scalability is one of the most important properties of these architectures.
The latency of a request can be reduced by parallelizing its operations, overlapping CPU-bound work with I/O-bound waits; the latency of independent, CPU-bound operations, however, can only be decreased by running more threads on more cores.
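As a minimal sketch of the I/O-bound case: when operations mostly wait (on a database, a remote service), threads can overlap those waits, so the total latency approaches that of a single operation rather than their sum. The `fetch` function and the 0.1 s delay below are illustrative placeholders, not part of any real system.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(item):
    # Simulated I/O-bound operation (e.g. a database or network call);
    # the thread sleeps instead of burning CPU, so other threads can run.
    time.sleep(0.1)
    return item * 2

items = list(range(8))

start = time.perf_counter()
sequential = [fetch(i) for i in items]       # waits add up: ~8 x 0.1 s
seq_time = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(fetch, items))  # waits overlap: ~0.1 s
par_time = time.perf_counter() - start

assert sequential == parallel
assert par_time < seq_time
```

Note that this only helps while the operations are waiting; for independent, CPU-bound work the same speedup requires threads that actually run on separate cores.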
When more threads are added to handle heavy I/O parallelism, we face roughly the same problem seen previously for web servers.
The thread-based approach basically associates each incoming connection with a separate thread (resp. process). In this way, synchronous blocking I/O is the natural way of dealing with I/O. It is a common approach that is well supported by many programming languages.
It also leads to a straightforward programming model, because all tasks necessary for request handling can be coded sequentially. It provides a simple mental abstraction by isolating requests and hiding concurrency. Real concurrency is achieved by employing multiple threads or processes at the same time.
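The thread-per-connection model described above can be sketched as follows. This is a toy echo server, not a full web server; the function names (`handle`, `accept_loop`, `serve`) are illustrative. Each handler is written as plain sequential code using blocking socket calls, and concurrency comes solely from running one thread per connection.

```python
import socket
import threading

def handle(conn, addr):
    # Each connection gets its own thread, so blocking reads and writes
    # stall only this one request, not the whole server.
    with conn:
        while True:
            data = conn.recv(1024)   # synchronous, blocking I/O
            if not data:             # client closed the connection
                break
            conn.sendall(data)       # sequential request handling (here: echo)

def serve(host="127.0.0.1", port=0):
    # port=0 lets the OS pick a free port.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    return srv

def accept_loop(srv):
    # Accept loop: spawn one thread per incoming connection.
    while True:
        conn, addr = srv.accept()
        threading.Thread(target=handle, args=(conn, addr), daemon=True).start()
```

The appeal is that `handle` contains no concurrency logic at all; isolation between requests is provided by the thread boundary, exactly the "simple mental abstraction" described above.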