UNIX Socket FAQ

A forum for questions and answers about network programming on Linux and all other Unix-like systems


#1 2007-06-13 04:50 PM

jfriesne
Administrator
From: California
Registered: 2005-07-06
Posts: 348
Website

Re: TCP sockets as inter-thread signalling mechanism

Hi all,

For quite a while, I've been using TCP sockets as my inter-thread signalling mechanism.  That is to say, in my multithreaded server apps:

- Every thread runs an event loop around select()
- When a thread wants to get another thread to do something, it places a Message object into the target thread's Queue, then sends a byte via TCP to the target thread
- The target thread receives the TCP byte, which causes select() to return.  The target thread then checks its Message Queue and acts on the Message(s) it finds there
- Access to the Queues is serialized by mutexes as appropriate
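A minimal sketch of that wake-up pattern (illustrative only; Message, ThreadMailbox, and SendMessageToThread() are invented names here, and error handling is omitted):

#include <pthread.h>
#include <sys/socket.h>
#include <deque>

struct Message { int what; };

struct ThreadMailbox {
   pthread_mutex_t lock;        // serializes access to (queue); init with pthread_mutex_init()
   std::deque<Message> queue;   // pending messages for the target thread
   int wakeupFd;                // write end; the target thread select()s on the read end
};

static void SendMessageToThread(ThreadMailbox * mb, const Message & msg)
{
   pthread_mutex_lock(&mb->lock);
   mb->queue.push_back(msg);
   pthread_mutex_unlock(&mb->lock);

   const char junk = 'W';
   (void) send(mb->wakeupFd, &junk, 1, 0);   // makes the target thread's select() return
}

When select() flags the read end as ready, the target thread recv()s (and discards) the wake-up byte(s), then drains its queue under the mutex.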

This works fine, and it has the following qualities that I like:

- It integrates with select(), which all my threads are blocking on anyway (since they also have networking tasks to do)
- It works more or less the same on all modern operating systems (i.e. there's almost no porting required across Windows, Mac, Unix, BeOS, etc)
- It's reasonably simple and works reliably

So my questions are:

- Is this the best way to handle this problem?  Is there another way to do it that would be better?
- Currently I use TCP sockets for the inter-thread signalling... would there be any performance advantages or portability pitfalls to using UDP or AF_INET sockets or some other type of socket instead?

Also, a somewhat unrelated question:  Yesterday I had the bright idea of using a TCP socket to indicate a thread's readiness to accept more data... the idea was that I would "fill up" the socket's buffers with garbage data, and whenever the thread wanted more data, it would read a byte from its end of the socket, causing the other thread's end of the socket to select() as ready-for-write.

The problem with this was that the buffers don't seem to stay full:  I wrote a little test program to verify the technique, which simply created a non-blocking socket-pair, filled up the socket with data until no more could be written, and then sat in a spinloop calling select() on the socket to see what happened.  What I found was that after a second or so, the socket selected as ready-for-write again, even though the connection had been "full" and no bytes had ever been read from the other end... as if the O/S was occasionally resizing the TCP buffers larger to accommodate some more bytes.

Since I saw this behavior under both Linux and MacOS/X, I assume that it's a feature and not a bug... but can someone explain what is going on there?  I thought it was a bit puzzling.

Thanks,
Jeremy

Offline

#2 2007-06-13 08:37 PM

RobSeace
Administrator
From: Boston, MA
Registered: 2002-06-12
Posts: 3,839
Website

Re: TCP sockets as inter-thread signalling mechanism

Offline

#3 2007-06-13 08:43 PM

RobSeace
Administrator
From: Boston, MA
Registered: 2002-06-12
Posts: 3,839
Website

Re: TCP sockets as inter-thread signalling mechanism

Oh, and as for the strange writability thing, I'm not sure...  It sounds rather odd to
me, but with non-blocking sockets, it COULD indeed be growing buffers...  Ie: the
kernel starts you out with some smaller buffer not near your real SO_SNDBUF size,
and only grows out to that as needed...  With a blocking socket, your writes would
probably block while the resize happens, but with non-blocking, it seems perfectly
reasonable it'd temporarily fail while the resize happens in the background...  But,
in either case, once you hit your SO_SNDBUF size, it shouldn't grow it any more
beyond that...  So, if you want to test things, set SO_SNDBUF fairly low (but, beware
the OS might impose some minimum necessary size, so you probably can't do
anything stupidly low)...
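A small sketch of checking what the kernel actually granted (for what it's worth, Linux documents that it doubles the value passed to SO_SNDBUF, and clamps it to a minimum):

#include <stdio.h>
#include <sys/socket.h>

/* Request a small send buffer, then read back what the kernel really granted */
static void ShowGrantedSendBufferSize(int sock, int requestedBytes)
{
   (void) setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &requestedBytes, sizeof(requestedBytes));

   int grantedBytes = 0;
   socklen_t len = sizeof(grantedBytes);
   if (getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &grantedBytes, &len) == 0)
      printf("Requested a %d-byte send buffer, kernel granted %d bytes\n", requestedBytes, grantedBytes);
}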

Offline

#4 2007-06-13 10:32 PM

jfriesne
Administrator
From: California
Registered: 2005-07-06
Posts: 348
Website

Re: TCP sockets as inter-thread signalling mechanism

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <sys/types.h>
#include <stdlib.h>
#include <netdb.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <net/if.h>
#include <sys/ioctl.h>

enum {
   B_ERROR = -1,
   B_NO_ERROR = 0
};

typedef int status_t;
typedef unsigned short uint16;
typedef unsigned long uint32;

static void FillSocket(int s)
{
   printf("\nFillSocket:  Writing as much junk data as possible to non-blocking socket #%i\n", s);

   static const int BUF_SIZE = 512;
   char junk[BUF_SIZE]; memset(junk, 0, BUF_SIZE);
   while(1)
   {
      int x = send(s, junk, BUF_SIZE, 0);  // fill up the socket buffers to start
      printf("FillSocket:  Wrote %i/%i bytes to socket\n", x, BUF_SIZE);
      if (x < BUF_SIZE) break;
   }
}

static int CreateAcceptingSocket(uint16 port, int maxbacklog, uint16 * optRetPort, uint32 optFrom)
{
   struct sockaddr_in saSocket;
   memset(&saSocket, 0, sizeof(saSocket));
   saSocket.sin_family      = AF_INET;
   saSocket.sin_addr.s_addr = htonl(optFrom ? optFrom : INADDR_ANY);
   saSocket.sin_port        = htons(port);

   int acceptSocket;
   if ((acceptSocket = socket(AF_INET, SOCK_STREAM, 0)) >= 0)
   {
#ifndef WIN32
      // (Not necessary under windows -- it has the behaviour we want by default)
      const int trueValue = 1;
      (void) setsockopt(acceptSocket, SOL_SOCKET, SO_REUSEADDR, &trueValue, sizeof(trueValue));
#endif

      if ((bind(acceptSocket, (struct sockaddr *) &saSocket, sizeof(saSocket)) == 0)&&(listen(acceptSocket, maxbacklog) == 0)) 
      {
         if (optRetPort == NULL) optRetPort = &port;
         socklen_t len = sizeof(saSocket);
         *optRetPort = (uint16) ((getsockname(acceptSocket, (struct sockaddr *)&saSocket, &len) == 0) ? ntohs(saSocket.sin_port) : 0);

         printf("Created socket #%i for accepting a TCP connection on port %u.\n", acceptSocket, optRetPort?*optRetPort:Port);
         return acceptSocket;
      }
      close(acceptSocket);
   }
   return -1;
}


static status_t SetSocketBlockingEnabled(int sock, bool blocking)
{
   printf("Setting socket #%i to %s mode\n", sock, blocking?"blocking":"non-blocking");

   if (sock < 0) return B_ERROR;

#ifdef WIN32
   unsigned long mode = blocking ? 0 : 1;
   return (ioctlsocket(sock, FIONBIO, &mode) == 0) ? B_NO_ERROR : B_ERROR;
#else
   int flags = fcntl(sock, F_GETFL, 0);
   if (flags < 0) return B_ERROR;
   flags = blocking ? (flags&~O_NONBLOCK) : (flags|O_NONBLOCK);
   return (fcntl(sock, F_SETFL, flags) == 0) ? B_NO_ERROR : B_ERROR;
#endif
}

static status_t SetSocketNaglesAlgorithmEnabled(int sock, bool enabled)
{
   printf("%s Nagle's algorithm for socket #%i\n", enabled?"Enabling":"Disabling", sock);

   if (sock < 0) return B_ERROR;

   int enableNoDelay = enabled ? 0 : 1;
   return (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &enableNoDelay, sizeof(enableNoDelay)) >= 0) ? B_NO_ERROR : B_ERROR;
}

static int Accept(int sock)
{
   printf("Accepting a connection on socket #%i\n", sock);

   struct sockaddr_in saSocket;
   socklen_t nLen = sizeof(saSocket);
   return (sock >= 0) ? accept(sock, (struct sockaddr *)&saSocket, &nLen) : -1;
}

static int Connect(uint32 hostIP, uint16 port)
{
   int s = socket(AF_INET, SOCK_STREAM, 0);
   printf("Creating socket #%i to connect to port #%u\n", s, port);

   if (s >= 0)
   {
      int ret = -1;
      struct sockaddr_in saAddr;
      memset(&saAddr, 0, sizeof(saAddr));
      saAddr.sin_family      = AF_INET;
      saAddr.sin_port        = htons(port);
      saAddr.sin_addr.s_addr = htonl(hostIP);
      ret = connect(s, (struct sockaddr *) &saAddr, sizeof(saAddr));
      if (ret == 0) return s;
               else close(s);
   }
   return -1;
}

/* similar to socketpair(), but uses AF_INET sockets */
static status_t CreateConnectedSocketPair(int & socket1, int & socket2, bool blocking, bool useNagles)
{
   printf("CreateConnectedSocketPair() begins...\n");

   const uint32 localhostIP = ((((uint32)127)<<24)|((uint32)1));  // i.e. 127.0.0.1
   uint16 port;

   socket1 = CreateAcceptingSocket(0, 1, &port, localhostIP);
   if (socket1 >= 0)
   {
      socket2 = Connect(localhostIP, port);
      if (socket2 >= 0)
      {
         int newfd = Accept(socket1);
         if (newfd >= 0)
         {
            close(socket1);
            socket1 = newfd;
            if ((SetSocketBlockingEnabled(socket1, blocking) == B_NO_ERROR)&&(SetSocketBlockingEnabled(socket2, blocking) == B_NO_ERROR))
            {
               (void) SetSocketNaglesAlgorithmEnabled(socket1, useNagles);
               (void) SetSocketNaglesAlgorithmEnabled(socket2, useNagles);
               printf("CreateConnectedSocketPair() succeeded.  Socket1=%i Socket2=%i\n", socket1, socket2);
               return B_NO_ERROR;
            }
         }
         close(socket2);
      }
      close(socket1);
   }
   socket1 = socket2 = -1;
   return B_ERROR;
}

status_t SetSocketSendBufferSize(int sock, uint32 sendBufferSizeBytes)
{
   printf("Setting send buffer size to %lu for socket #%i\n", sendBufferSizeBytes, sock);
   if (sock < 0) return B_ERROR;

   int iSize = (int) sendBufferSizeBytes;
   return (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, (char *)&iSize, sizeof(iSize)) >= 0) ? B_NO_ERROR : B_ERROR;
}

status_t SetSocketReceiveBufferSize(int sock, uint32 receiveBufferSizeBytes)
{
   printf("Setting receive buffer size to %lu for socket #%i\n", receiveBufferSizeBytes, sock);
   if (sock < 0) return B_ERROR;

   int iSize = (int) receiveBufferSizeBytes;
   return (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, (char *)&iSize, sizeof(iSize)) >= 0) ? B_NO_ERROR : B_ERROR;
}

int main(int, char **)
{
   int socks[] = {-1, -1};
   if (CreateConnectedSocketPair(socks[0], socks[1], false, false) != B_NO_ERROR)
   {
      perror("Couldn't create signalling socket pair!\n");
      exit(10);
   }

   SetSocketSendBufferSize(socks[0], 1);
   SetSocketReceiveBufferSize(socks[1], 1);

   // First, write to the socket until it can't hold any more
   FillSocket(socks[0]);

   int loopCount = 0;
   while(1)
   {
      fd_set writeSet; FD_ZERO(&writeSet); FD_SET(socks[0], &writeSet);
      struct timeval poll = {0,0};
      int selRet = select(socks[0]+1, NULL, &writeSet, NULL, &poll);
      if (selRet >= 0)
      {
         bool isReadyForWrite = FD_ISSET(socks[0], &writeSet);
         printf("#%i: select() returned socket is %s\n", ++loopCount, isReadyForWrite?"READY FOR WRITE!?!?!? ERROR!?!?!?" : "not ready for write (expected)");
         if (isReadyForWrite) FillSocket(socks[0]);  // just to see how much more we can write to it!
         sleep(1);
      }
      else
      {
         perror("select failed!");
         exit(10);
      }
   }
   return 0;
}

Offline

#5 2007-06-14 01:16 PM

RobSeace
Administrator
From: Boston, MA
Registered: 2002-06-12
Posts: 3,839
Website

Re: TCP sockets as inter-thread signalling mechanism

Offline

#6 2007-06-14 05:39 PM

jfriesne
Administrator
From: California
Registered: 2005-07-06
Posts: 348
Website

Re: TCP sockets as inter-thread signalling mechanism

Offline

#7 2007-06-14 07:25 PM

RobSeace
Administrator
From: Boston, MA
Registered: 2002-06-12
Posts: 3,839
Website

Re: TCP sockets as inter-thread signalling mechanism

Well, realize you're actually dealing with TWO buffers here: the send buffer on the
sending side and the receive buffer on the receiving side...  BOTH of these will have
to be full, before your send()'s will permanently fail...  So, maybe the OS is giving
you a real 1-byte send buffer, but refuses a mere 1-byte receive buffer...  So, you'd
be able to only send 1 byte at a time, but that would almost immediately get sent
into the receive buffer of the other end of the connection...  And, you'd then be able
to write another byte, which would then do the same thing...  And, so on, until the
receive buffer on the other end filled up...

Or, it could be that it's merely pretending to give you 1-byte buffers, but really
storing more than that... *shrug*  The point is, have you even tried it with larger
sizes?  I would suspect it should behave a lot more like you'd expect with more
reasonable sizes...  (You still have to take into account BOTH buffers needing to
be filled, though...)
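A sketch of one way to see both buffers in play (illustrative, not the original test program): count how many bytes a non-blocking send() accepts before it starts failing, and compare the total against SO_SNDBUF plus the peer's SO_RCVBUF:

#include <sys/socket.h>

/* Count how many bytes fit "in flight" on a connected non-blocking
   socket before send() fails (EWOULDBLOCK/EAGAIN).  On a loopback
   connection this tends toward the sender's SO_SNDBUF *plus* the
   receiver's SO_RCVBUF, not the send buffer alone. */
static long CountWritableBytes(int sock)
{
   char junk[512] = {0};
   long total = 0;
   while(1)
   {
      int n = send(sock, junk, sizeof(junk), 0);
      if (n <= 0) break;   // both buffers full (or error)
      total += n;
   }
   return total;
}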

Offline

#8 2007-06-14 10:22 PM

jfriesne
Administrator
From: California
Registered: 2005-07-06
Posts: 348
Website

Re: TCP sockets as inter-thread signalling mechanism

Offline

#9 2008-05-15 09:02 PM

Kolenka
Member
Registered: 2008-05-15
Posts: 9

Re: TCP sockets as inter-thread signalling mechanism

jfriesne, one thing you need to remember is that since your code snippet is using TCP sockets, the send/receive buffers don't really mean much of anything.

The send buffer empties as data is written onto the network. It isn't a guarantee that the other side has read anything or even received the packet yet, just a guarantee that it has been sent.

This sort of signaling is best for Fire and Forget type signals. Let the receiving end decide what to do with piled up signals, and the sender should send signals on events and not care about the buffers. As long as the receiving end is clearing out the read buffer of all data from the signal socket when it updates, you should be good to go.
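A sketch of that drain step (minimal; assumes the signal socket is non-blocking):

#include <sys/socket.h>

/* Swallow however many wake-up bytes have piled up on the signal
   socket; one pass like this coalesces any number of pending signals.
   Afterwards, handle every message waiting in the queue. */
static void DrainSignalSocket(int signalFd)
{
   char junk[256];
   while(recv(signalFd, junk, sizeof(junk), 0) > 0) {/* keep reading until EWOULDBLOCK */}
}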

Offline

#10 2008-05-16 01:06 PM

RobSeace
Administrator
From: Boston, MA
Registered: 2002-06-12
Posts: 3,839
Website

Re: TCP sockets as inter-thread signalling mechanism

Offline

#11 2008-05-16 05:48 PM

Kolenka
Member
Registered: 2008-05-15
Posts: 9

Re: TCP sockets as inter-thread signalling mechanism

Good point, I think I spend too much time in the UDP world where you aren't guaranteed arrival. :)

I still believe that the send buffer emptying doesn't guarantee that the receiving application has read data out of its buffer though, just that the ACK has arrived.

Offline

#12 2008-05-16 05:52 PM

i3839
Oddministrator
From: Amsterdam
Registered: 2003-06-07
Posts: 2,239

Re: TCP sockets as inter-thread signalling mechanism

But even if it did read it, you still don't know if the application has processed it yet,
so that line of thinking doesn't really work. So after sending data you can consider
it sent. If you want to know for sure that it made it and all that, then the peer needs
to send some sort of ack, no matter if you use UDP or TCP.
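A minimal sketch of such an application-level ack (the names and the one-byte reply convention are invented here):

#include <sys/socket.h>

/* Blocking request/ack round trip: only the peer's explicit one-byte
   reply proves the request was processed; TCP's own ACKs prove nothing
   beyond kernel-level receipt. */
static int SendAndWaitForAck(int sock, const char * msg, int msgLen)
{
   if (send(sock, msg, msgLen, 0) != msgLen) return -1;

   char ack;
   return (recv(sock, &ack, 1, MSG_WAITALL) == 1) ? 0 : -1;
}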

Offline

#13 2008-05-16 06:45 PM

mlampkin
Administrator
From: Sol 3
Registered: 2002-06-12
Posts: 911
Website

Re: TCP sockets as inter-thread signalling mechanism

Hmmm...

Old message but what the heck... since others are commenting so will I... herd mentality and all of that stuff...

First observation is that when sending / receiving from a local address... especially when it is the local host addr e.g. 127.0.0.1 ... a lot of network implementations will short circuit the operation...

That is - the data will not actually go all the way down to the interface / driver level... and a lot of the operations required for network transmission are avoided... this is done because in such a situation things like a checksum, retransmission check and so on are ( usually ) meaningless... if such errors were to occur in this situation it would mean the machine itself is failing - as in - the memory / network card(s) / storage is going bad...  and e.g. no number of 'retransmissions' will correct or help the situation...

Point being that those concerns are really off the table with such a setup... as long as you are certain it will ONLY be using local communication...

The second thought is...

While an entertaining idea... why not simply use mutexes and condition variables for the thread 'signaling' ?  These constructs are available on all *NIX / Linux, Windows and Apple OSes... along with the majority of embedded RTOSes...

Using that setup - the functionality could be written up with a lot fewer lines of code... less ( system ) overhead... would give explicit control over things such as data buffer allocation... and would have fewer dependencies on the runtime configuration of the system...

Of course I may be missing something here... e.g. a final design that was meant to provide thread 'communication' on the local machine AND distributed communication... but even then - I would be inclined to still use the condvar setup for local coordination... and only drop to pure networking when / if necessary...
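A minimal sketch of the mutex + condition variable setup described above (the Message type and function names are invented for illustration):

#include <pthread.h>
#include <deque>

struct Message { int what; };

struct Mailbox {
   pthread_mutex_t lock;
   pthread_cond_t  avail;
   std::deque<Message> queue;
   Mailbox() { pthread_mutex_init(&lock, NULL); pthread_cond_init(&avail, NULL); }
};

/* Replaces "append to queue + send a wake-up byte" */
static void PostMessage(Mailbox & mb, const Message & msg)
{
   pthread_mutex_lock(&mb.lock);
   mb.queue.push_back(msg);
   pthread_cond_signal(&mb.avail);
   pthread_mutex_unlock(&mb.lock);
}

/* The worker blocks here instead of in select() */
static Message WaitForMessage(Mailbox & mb)
{
   pthread_mutex_lock(&mb.lock);
   while(mb.queue.empty()) pthread_cond_wait(&mb.avail, &mb.lock);
   Message msg = mb.queue.front();
   mb.queue.pop_front();
   pthread_mutex_unlock(&mb.lock);
   return msg;
}

Note the trade-off: a thread blocked in WaitForMessage() can't simultaneously wait on network sockets, which is what the select()-based wake-up buys.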


Michael


"The only difference between me and a madman is that I'm not mad."

Salvador Dali (1904-1989)

Offline

#14 2008-05-16 07:58 PM

jfriesne
Administrator
From: California
Registered: 2005-07-06
Posts: 348
Website

Re: TCP sockets as inter-thread signalling mechanism

Offline

#15 2008-05-16 08:43 PM

RobSeace
Administrator
From: Boston, MA
Registered: 2002-06-12
Posts: 3,839
Website

Re: TCP sockets as inter-thread signalling mechanism

Offline

#16 2008-05-16 09:36 PM

mlampkin
Administrator
From: Sol 3
Registered: 2002-06-12
Posts: 911
Website

Re: TCP sockets as inter-thread signalling mechanism


"The only difference between me and a madman is that I'm not mad."

Salvador Dali (1904-1989)

Offline

#17 2008-05-16 11:22 PM

i3839
Oddministrator
From: Amsterdam
Registered: 2003-06-07
Posts: 2,239

Re: TCP sockets as inter-thread signalling mechanism

That approach adds communication and context-switch overhead to all network IO
operations, so it's probably not a good idea if that is a performance-sensitive part,
or when inter-thread communication is relatively rare.

Offline

#18 2008-05-16 11:34 PM

mlampkin
Administrator
From: Sol 3
Registered: 2002-06-12
Posts: 911
Website

Re: TCP sockets as inter-thread signalling mechanism

Mine ( ? ) - I am not certain what you mean... but can provide details if desired...

We all know I like typing... lol


Michael


"The only difference between me and a madman is that I'm not mad."

Salvador Dali (1904-1989)

Offline

#19 2008-05-16 11:47 PM

jfriesne
Administrator
From: California
Registered: 2005-07-06
Posts: 348
Website

Re: TCP sockets as inter-thread signalling mechanism

I see two problems with the do-everything-in-its-own-thread design:

1) All the I/O bottlenecks through that single (main) I/O thread, which (I believe) would be less efficient than having each client's TCP stream feeding directly to its own thread.

and worse,

2) Doing everything in separate threads means you have to worry about locking your data structures, race conditions, deadlocks, and that whole bag of worms.  Not really worth it unless you absolutely need multiple threads for whatever reason.

Offline

#20 2008-05-17 12:42 AM

mlampkin
Administrator
From: Sol 3
Registered: 2002-06-12
Posts: 911
Website

Re: TCP sockets as inter-thread signalling mechanism

Well... for item 1...

Let's first look at ( if I understand properly ) the first design... considering just for laughs a 1024 socket limit on select... and not considering accepting incoming connections i.e. not a server design... and not creating outgoing connections... except for inter-thread communication...

Under full load we would be limited to 512 threads... not because of thread limitations but because we would need each thread to have a socket associated with it to receive 'signals' ... and of course we would need a socket for each of those receivers by which we could send the signal...

Now lets look at select...

You call it with 1 to 3 sets of handles and the max descriptor... assuming you are only sending 1 set of handles e.g. read or write - then under full load the internal library / sub-system will have to iterate thru / check on avg 1024 / 2 handles per call... which translates to ( all threads reading / writing to each other ) 512 * 1024 / 2 = 256 K checks... quite a bit of processing overhead...

This isn't even considering internal locking that occurs at the lower level to make certain the true state of the sockets / handles are obtained...

But wait - it gets even worse...

The reason is that again - internally - the system has to remember each handle(s) and the thread it is associated with... each time one goes active ( data received / sent ) it will have to look up that association... then check with the scheduler... if the thread is ok to be activated i.e. hasn't used up too many time slices it is activated... if not the scheduler has to track that there is a deferral AND maintain the IO action and thread activation association....

But wait - it gets EVEN WORSE... lol

Imagine just two threads...

A gets data indicated via select and gets activated by the scheduler... immediately after B gets data... A runs for a few cycles ( CPU bound ) so the activation of B is deferred... but then A calls read which is an IO bound op - which means an indication it can be switched out by the scheduler... so the scheduler puts A to sleep and wakes up B... B is in CPU bound mode for a few cycles but then it calls read also... same thing - scheduler sees it can be switched out... so then it checks thread A - sees there is really data ready so B is put to sleep and A is re-activated... A returns from the read and processes the data ( CPU bound ) and then calls select again - becoming IO bound... scheduler then puts A in the queue and wakes up B...

That's actually a lot of work... and just imagine 512 threads reading and writing and bouncing between CPU bound and IO bound states... and the thrashing that ( may ) ensue...

There are many other issues too...


To compare the above with what I was suggesting...

A single master thread doing the selects and playing coordinator... that means 2 * 1024 = 2048 checks ( max ) and associated burned cycles under full load...  a lot less than the 256K value above...

A single thread that is IO bound and triggered / having to twist the scheduler... and the rest CPU bound ( processing messages to / from the message queue )... which means they are very simple to handle when it comes to activation etc. ... as they have discrete time slices...

Also for all the CPU bound threads that are blocked on condvars... they eat no CPU time... the scheduler actually will keep them in their own queue as currently inactive... and when they are signaled on the condvar - since they aren't going into an IO op - there is much less of a chance of a context switch occurring except in the circumstances where the scheduler sees the thread has run out of time in its slice...

Now you may be thinking that as the condvars are signaled the scheduler will try to wake up all the threads that have data ready and a lot of context switching will occur... perhaps - but that depends on the scheduling algorithm the system uses... and if you want the simple solution to avoid this situation ( no matter what )... just have all the condvars share a mutex - the scheduler will then order things for you - as it will see that if one thread has already been signaled and holds the mutex - the next one must be kept in the queue and will NOT be woken up / cause a context change...

For item 2...

Yes you have to handle locking etc. the data structures yourself...  and that does add complexity... but on the other hand YOU control it... as opposed to relying on lower level behavior... possible strange runtime configs ( and there are a lot of configurable options for networking at both the system and app level )...

Not to mention the possibility of ( several orders of magnitude ) better efficiency...


Oops - another addition... in ref to the do-everything-in-its-own-thread design... there is nothing preventing e.g. a server from having each worker thread handle multiple sockets - further increasing efficiency... as mentioned - in my earlier method I was using that as the example only for the sake of simplicity...

Anyway... the solution is that instead of associating a condvar per socket... you instead associate a condvar per worker thread... then simply associate each worker with multiple sockets in your hash map... no need for the equivalent of a WaitForMultipleObjects ( a la Windows style )... as you would be effectively doing the same exact thing...



Michael


"The only difference between me and a madman is that I'm not mad."

Salvador Dali (1904-1989)

Offline

#21 2008-05-17 12:52 AM

mlampkin
Administrator
From: Sol 3
Registered: 2002-06-12
Posts: 911
Website

Re: TCP sockets as inter-thread signalling mechanism

Oh - something I forgot that may be part of the source of confusion...

On a single cpu / core system...

You would set the priority of your ( single ) IO bound thread to close to the highest level... and the ( n ) CPU bound to close to the lowest level... that will assure that you get a good response time when reading / writing data with the IO thread...

On a multiple cpu / core system...

Where x is the number of cores... you could leave all threads at the same priority... and instead would have ( x - 1 ) mutexes associated with the condvars... and would coordinate / round-robin / however you prefer which one was used each time a wait was performed... by doing this you would make certain that there was always 1 core free for the IO bound thread...
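A hedged sketch of the single-CPU variant (boosting the I/O thread's priority; real-time policies like SCHED_FIFO usually require elevated privileges, and the valid priority range is OS-specific):

#include <pthread.h>
#include <sched.h>

/* Give the I/O thread a higher scheduling priority than the workers.
   Query the legal range rather than hard-coding a value. */
static int BoostIoThreadPriority(pthread_t ioThread)
{
   struct sched_param sp;
   sp.sched_priority = sched_get_priority_max(SCHED_FIFO) - 1;
   return pthread_setschedparam(ioThread, SCHED_FIFO, &sp);
}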


Michael


"The only difference between me and a madman is that I'm not mad."

Salvador Dali (1904-1989)

Offline

#22 2008-05-17 02:23 AM

jfriesne
Administrator
From: California
Registered: 2005-07-06
Posts: 348
Website

Re: TCP sockets as inter-thread signalling mechanism

The main lesson I see here is that 512 threads is not an efficient way to go.  ;^)  Better to either go single-threaded, or (if you really need to) use a fixed number of worker threads in a thread pool.  The number of worker threads probably shouldn't vary too much from the number of CPUs in the system.
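On POSIX systems, sysconf() reports the online CPU count, which is a reasonable starting point for sizing such a pool (a sketch; _SC_NPROCESSORS_ONLN is widely supported but not strictly standard everywhere):

#include <unistd.h>

/* Size a worker pool from the number of online CPUs */
static int GetWorkerThreadCount()
{
   long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
   return (ncpus > 0) ? (int) ncpus : 1;   // fall back to 1 if the count is unknown
}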

Offline

#23 2008-05-17 02:45 AM

mlampkin
Administrator
From: Sol 3
Registered: 2002-06-12
Posts: 911
Website

Re: TCP sockets as inter-thread signalling mechanism

Yeah...

The 1 thread per connection set-up is typically bad on many levels... despite what many people are taught... its primary objective is simplicity... but the cons far outweigh that single positive benefit...

As for myself... and this may seem strange... when I do a design similar to what I have given I usually do 1 thread for IO... and ( 2 to 4 ) * ( number of cores - 1 ) CPU workers... with each 'cluster' of ( 2 to 4 ) worker threads sharing a mutex and condition variable...

That may sound odd - as one might expect the simpler 1 IO and n - 1 worker threads... but the thing is I realize there are other processes on the system... and I ( normally ) want to give my servers an edge in case something outside it tries to hog the CPU... so doing the 2 to 4 bit helps to skew things into my favor - especially for 1st level scheduler only systems...


Michael


"The only difference between me and a madman is that I'm not mad."

Salvador Dali (1904-1989)

Offline

#24 2008-05-19 09:50 AM

Kolenka
Member
Registered: 2008-05-15
Posts: 9

Re: TCP sockets as inter-thread signalling mechanism

The head-scratcher for me on this topic (and why I still use an AF_UNIX socket pair to control my I/O thread in a couple of instances) is how to figure out which connections are ready for writing without relying on a select() timeout to let me poll my connections. Instead, I use the socket pair to send simple commands to the I/O thread, telling it that a socket has been opened and should be watched, or that a socket wishes to start writing. Once the thread takes over, it is a pretty simple matter to let it decide when to remove the socket from the write fd_set, or from the opened-sockets fd_set.

If there is a better way, I'd love to hear it, but at this point, 2 fds is a small price to pay (IMO) to avoid the latency and polling required to know which sockets are trying to write.
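A sketch of that kind of command channel (the one-byte opcodes and the fixed-size command record are invented conventions here, not Kolenka's actual protocol):

#include <sys/socket.h>

struct IoCommand { char op; int fd; };   // e.g. 'O' = start watching fd, 'W' = fd wants to write

/* fds[0] stays with the controlling thread; fds[1] goes to the I/O
   thread, which keeps it in its select() read set. */
static int MakeCommandChannel(int fds[2])
{
   return socketpair(AF_UNIX, SOCK_STREAM, 0, fds);
}

static void TellIoThread(int cmdSock, char op, int fd)
{
   struct IoCommand cmd = { op, fd };
   (void) send(cmdSock, &cmd, sizeof(cmd), 0);   // wakes the I/O thread's select()
}

/* When the command socket selects as readable, the I/O thread recv()s
   IoCommand records and updates its fd_sets accordingly. */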

Offline

#25 2008-05-19 10:50 AM

i3839
Oddministrator
From: Amsterdam
Registered: 2003-06-07
Posts: 2,239

Re: TCP sockets as inter-thread signalling mechanism

If using Linux's epoll you can just add sockets to it from one thread while another
one waits on them. If you make all peer state reachable via epoll_data_t then it all
happens transparently. Something similar can probably be done with BSD's kqueue.
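For instance (a sketch; PeerState is an invented type standing in for the per-connection state):

#include <sys/epoll.h>

struct PeerState { int fd; /* ... per-connection state ... */ };

/* Any thread may add a socket to the epoll set, even while another
   thread is blocked in epoll_wait() on the same epoll fd. */
static void WatchPeer(int epfd, PeerState * peer)
{
   struct epoll_event ev;
   ev.events   = EPOLLIN;
   ev.data.ptr = peer;   // the peer's state travels with the event
   (void) epoll_ctl(epfd, EPOLL_CTL_ADD, peer->fd, &ev);
}

static void EventLoop(int epfd)
{
   struct epoll_event events[64];
   while(1)
   {
      int n = epoll_wait(epfd, events, 64, -1);   // block indefinitely; no timeout needed
      for (int i = 0; i < n; i++)
      {
         PeerState * peer = (PeerState *) events[i].data.ptr;
         (void) peer;   // ... service peer->fd here ...
      }
   }
}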

Or use poll() and modify events in struct pollfd from another thread. But then
you have the timeout problem.

Offline
