UNIX Socket FAQ

A forum for questions and answers about network programming on Linux and all other Unix-like systems


#1 2003-02-06 12:46 AM

prasiths
Member
Registered: 2003-02-06
Posts: 4

Re: cpu time

Hi

I am doing some testing on a TCP transmission process.  I have seen that the stime of the CPU is very high, but the putime and pstime of the process are very low.

I am wondering what mechanism in the TCP transmission process makes the stime so high.

PS. I am doing tests on the performance of a TCP transmission process over different scenarios.

Thanks in advance
--Phongsak


#2 2003-02-07 07:55 AM

mlampkin
Administrator
From: Sol 3
Registered: 2002-06-12
Posts: 911

Re: cpu time

Ok... I am translating that as: the process time at the app level (putime) and the process time at the system level (pstime) are both very low, but the system time in general is spiking up...

Still not really enough to go on, but here are a couple of things I have noticed myself on some operating systems using different network programming models... hopefully one of these will relate to your question...

Some systems do a copy-on-write / copy-on-modification for threads (and even for forked processes!)... which means that when you create the thread, socket and so forth, very little time is accounted against the user app...  e.g. you spawn a proc / thread and it returns quickly... you then start doing reads / writes to an app buffer, which causes the copy-on-write to kick in and memory etc. to actually be allocated for the proc / thread...  the time performing that op may be (and usually is) accounted against the system in general...  under light load it shouldn't be noticeable, but if you are pushing the limits of your system memory and segments are being swapped in and out on slow media (disk etc.) you may see huge system usage peaks...

Quite a few systems supply AIO these days, but some implement it as a thread (in the kernel) per connection model... a prime example of this is Linux (though there are others)...  this means that under high load, a lot of (scheduling) thrash occurs between those threads, and the time used gets accounted against the system in general (remembering they are at the kernel level)...  the time to create the connection thread may be getting allotted against either the app/user or the system... depends on the impl...

Actually, the scheduling thrash applies to the non-AIO stuff also...  lots of child processes being forked or threads being spawned (assuming the system's thread model is kernel-level and not app / user level) can also cause surges in accounting against system time (not applied to the app)...

If your program does something like forking / creating procs / threads, writes a lot of data out and then exits the second the send "completes", the system must STILL send the data, though the time used is not accounted against the app...  actually, even in a single-proc application, the time spent by the system sending data which has been buffered goes against the system... not against the app that caused the buffering (if such occurred) by calling send(...)... note that the time spent buffering the data still goes against the app... just not the time the system spends actually sending it down the wire... this means it's possible to have an app sending a lot of data and showing a small amount of process time used... yet see the system time increase (quickly)... especially if your app happens to be sending just enough data to keep the buffers close to peak capacity without overfilling them (and causing a block to occur)...

Blah... there are many more possibilities...

So the point is that there really is nothing in the TCP transmit process proper that should be causing this...  but there are a LOT of scenarios which can cause the type of thing you are seeing...  the point of most of the above items is that the app does have the time spent calling send( ) accounted against it... but the system (normally) buffers that data, and the time spent sending the (now buffered) data down the line goes against the system, not the app...

Michael


"The only difference between me and a madman is that I'm not mad."

Salvador Dali (1904-1989)


#3 2003-02-10 08:19 PM

prasiths
Member
Registered: 2003-02-06
Posts: 4

Re: cpu time


#4 2003-02-12 09:38 PM

mlampkin
Administrator
From: Sol 3
Registered: 2002-06-12
Posts: 911

Re: cpu time

Unless you have a hardware accelerator on board, your assumptions should be right on the mark...  since all the encryption / decryption when using OpenSSL and the other common libraries occurs within user space...  if you are using a hardware accelerator, you may see the process system time and user time flip (depending on how the hardware accel drivers are integrated into the system)...

Btw... I'm assuming that the encrypt function you are calling is actually of the format <algorithm name>_encrypt(...) ?  There isn't a plain encrypt(...) call within OpenSSL, is there, since it would conflict with the standard function of the same name?

Michael


"The only difference between me and a madman is that I'm not mad."

Salvador Dali (1904-1989)


#5 2003-02-14 05:00 AM

prasiths
Member
Registered: 2003-02-06
Posts: 4

Re: cpu time

Hi

Yes, you are right. I just gave a generic name for the encryption function. There is no encrypt() in OpenSSL.  Anyway, thanks for your comments.

cheers,



Powered by FluxBB