A forum for questions and answers about network programming on Linux and all other Unix-like systems


#1 2002-07-27 01:13 AM

From: Colombia
Registered: 2002-06-12
Posts: 353

Re: 6.10 - To fork or not to fork?

From: Chris Briggs

I'm relatively new to socket programming; most of my socket
coding knowledge stems from working with TCP/IP socket
wrappers of extreme abstraction (i.e., Delphi apps, etc.).

I'm in the process of writing a server for a middle-tier client/server
suite. I'm writing it using BSD-style sockets, under FreeBSD.
When handling incoming client connections, is it best to fork()
a process for each client? Or would this result in too much
overhead? I'm trying to write a system that acts almost identically
to ircd, though it won't be used for chatting, etc. I have noticed
how daemons such as ftpd and Apache fork processes for each client (or group of clients).
I also noticed that in my skeletal, barebones server, each forked
process takes up ~300K. I can only imagine what would happen if
I had 3,000 clients connected. I like the way ircd is written,
in that you are able to load balance between servers while still
allowing all servers/clients to "see each other". Any suggestions
on whether forking is what I need? Thank you in advance.

-Chris Briggs

From: senthil

fork() will create a separate process for each of the clients.
This will eat a lot of memory. The best thing for this type of architecture is to use
threads instead. Refer to pthreads for more info.

From: Garen Parham

I think it's important to note that probably the biggest reason ircd uses I/O multiplexing via select() or poll() is that its file descriptors need to interact with one another.

The traditional model of blocking on a listening socket and then immediately fork()ing a process for each new client is not always bad, either. The memory required for each new process is overhead, but it is fairly constant. It does become a problem when you have an extraordinary number of processes for the kernel to service, and when that number approaches the memory limits of the host machine, which is likely why programs such as Apache typically close a process after about 15 seconds of inactivity. To handle a whole lot of clients (like 3,000), you'll probably want to use I/O multiplexing.

From: Min

The ideal way to handle this issue depends on your system and your server. You could also count the incoming connections, and if the count goes above 3,000, fork() a child process to handle the first 3,000 clients.
