out-of-order data…

$cat /proc/sys/net/ipv4/tcp_reordering
3
$
tcp_reordering - INTEGER
Maximal reordering of packets in a TCP stream.
Default: 3
proj-rep/kernel_code/tcp_ipv4.c
  1921:         tp->reordering = sysctl_tcp_reordering;
  1922:
nms.csail.mit.edu/~kandula/data/tcp-mult.tgz - Unknown - C

firestarter-0.9.0/src/netfilter-script.c
   271:         fprintf (script, "# Set TCP Re-Ordering value in kernel to '5'\n");
   272:         fprintf (script, "if [ -e /proc/sys/net/ipv4/tcp_reordering ]; then\n"
   273:         "  echo 5 > /proc/sys/net/ipv4/tcp_reordering\nfi\n\n");
   274:
packetstormsecurity.nl/.../firewall/firestarter/firestarter-0.9.0.tar.gz - GPL - C

cifs-1.13/fs/cifs/file.s
 64934: .LC2778:
 64935:         .string "NET_TCP_REORDERING"
 64936: .LC873:
de.samba.org/samba/ftp/cifs-cvs/cifs-1.13-2.6-bad.tar.gz - Unknown - Assembly


• Load splitting: To balance the load among multiple
paths, different packets of the same stream take different
routes, leading to different delays and causing reordering.
Problems caused by reordering are handled at different
levels of the TCP/IP suite. TCP allows adjustment of the
‘dupthresh’ parameter, i.e., the number of duplicate ACKs
allowed before classifying a subsequent unacknowledged
packet as lost [3]. This parameter (called tcp_reordering in
the Linux implementation) allows reordering to occur to a
certain extent without affecting throughput. At the application
level, out-of-sequence packets are buffered until they can be
played back in sequence. An increase in out-of-order delivery,
however, consumes more resources and also affects end-to-end
performance. Consequently, certain techniques attempt to
reduce reordering at intermediate nodes, i.e., at the IP level.

A Comparative Analysis of Packet Reordering Metrics*
Nischal M. Piratla (1, 2), Anura P. Jayasumana (1), Abhijit Bare (1)
(1) Computer Networking Research Laboratory, Colorado State University, Fort Collins, CO 80523, USA
(2) Deutsche Telekom Laboratories, Ernst-Reuter-Platz 7, D-10587 Berlin, Germany
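The dupthresh tuning described above maps onto a one-line sysctl change. A minimal sketch, assuming root privileges; the value 5 is illustrative (it matches the firestarter snippet above), not a recommendation:

```shell
# Raise the duplicate-ACK threshold at runtime (requires root);
# 5 is an illustrative value for paths that reorder more than the
# default dupthresh of 3 tolerates.
sysctl -w net.ipv4.tcp_reordering=5

# Make the change persistent across reboots:
echo "net.ipv4.tcp_reordering = 5" >> /etc/sysctl.conf
```

Reverting to the default is the same command with the value 3.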


About RTO retransmission and tcp_orphan_retries

when RTO retransmissions remain unacknowledged…

$cat /proc/sys/net/ipv4/tcp_orphan_retries
0
$


tcp_orphan_retries - INTEGER
This value influences the timeout of a locally closed TCP connection,
        when RTO retransmissions remain unacknowledged. See
 tcp_retries2 for more details.


The default value is 7

If your machine is a loaded WEB server, you should think about
 lowering this value, such sockets may consume significant resources.
 Cf. tcp_max_orphans.


source :
Linux Kernel Documentation . 2.6.32
net/ipv4/tcp_timer.c - 39 identical
    99: static int tcp_orphan_retries(struct sock *sk, int alive)
   100: {
   101:         int retries = sysctl_tcp_orphan_retries; /* May be zero. */
   157:                         retry_until = tcp_orphan_retries(sk, alive);
   158:
android.git.kernel.org/kernel/msm.git - GPL - C - More from msm.git »


shaper.queues
   176:   echo "Set number of orphant retries to 5"
   177:   echo 5 > /proc/sys/net/ipv4/tcp_orphan_retries
   178:
www.chronox.de/tc+filter/shaper-0.2.tar.bz2 - Unknown - Shell - More from shaper-0.2.tar.bz2 »

usr/share/man/man7/tcp.7
   282: .TP
   283: .B tcp_orphan_retries
   284: The maximum number of attempts made to probe the other
www2.cddc.vt.edu/linux/distributions/7linux/7v6/7base/7v6a11.tar.bz2 - Unknown - Troff -


3.3.15. tcp_orphan_retries
The tcp_orphan_retries variable tells the TCP/IP stack how many times
 to retry to kill connections on the other side before killing it on our
 own side. If your machine runs as a highly loaded http server it may
 be worth thinking about lowering this value. http sockets will consume
 large amounts of resources if not checked.


This variable takes an integer value. The default value for this variable
 is 7, which would approximately correspond to 50 seconds through 16
 minutes depending on the Retransmission Timeout (RTO). For a
 complete explanation of the RTO, read the "3.7. Data Communication"
 section in RFC 793 - Transmission Control Protocol.


source :
Ipsysctl tutorial 1.0.4
Oskar Andreasson
blueflux@koffein.net
Copyright © 2002 by Oskar Andreasson
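The "50 seconds through 16 minutes" range above comes from exponential RTO backoff. A rough sketch of the arithmetic, assuming the backoff doubles on each retry and is capped at 120 s (TCP_RTO_MAX in the 2.6 sources); the 3 s initial RTO below is an illustrative figure, not a measured one:

```shell
# Approximate how long tcp_orphan_retries=N keeps an orphaned connection
# alive: sum N backoff intervals, doubling each time, capped at 120 s.
# All constants here are illustrative assumptions.
orphan_timeout() {
    awk -v rto="$1" -v n="$2" 'BEGIN {
        total = 0
        for (i = 0; i < n; i++) {
            total += (rto < 120) ? rto : 120  # cap each interval at 120 s
            rto *= 2                          # exponential backoff
        }
        printf "%.0f\n", total
    }'
}

orphan_timeout 3 7    # initial RTO 3 s, 7 retries -> 309 s, about 5 minutes
```

The real kernel derives the initial RTO from the measured RTT, which is why the documented range is so wide.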


characteristics about the last connection…

$cat /proc/sys/net/ipv4/tcp_no_metrics_save
0
$

tcp_no_metrics_save

Normally, TCP will remember some characteristics about the last
 connection in the flow cache. If tcp_no_metrics_save is set, then it
 doesn't. Useful for benchmarks or other tests.
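For a benchmark run, the setting is typically flipped on and any already-cached metrics discarded. A sketch assuming root and the iproute2 `ip tcp_metrics` subcommand (present only in newer iproute2 releases):

```shell
# Stop the kernel from saving per-route TCP metrics between connections,
# so each benchmark run starts from the same slow-start state.
sysctl -w net.ipv4.tcp_no_metrics_save=1

# Discard metrics cached by earlier connections (iproute2; assumed present).
ip tcp_metrics flush
```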

net/ipv4/sysctl_net_ipv4.c - 24 identical

   452:   {
   453:           .procname       = "tcp_no_metrics_save",
   454:           .data           = &sysctl_tcp_nometrics_save,

github.com/github/linux-2.6.git - GPL - C - More from linux-2.6.git »


dccpmon-1.0.0/get_sys_info.pl - 2 identical

   171: print LOG "/proc/sys/net/ipv4/tcp_no_metrics_save\n";
  172: $cmd = "/proc/sys/net/ipv4/tcp_no_metrics_save";
   173: print "cmd=".$cmd."\n";

www.hep.man.ac.uk/u/rich/Tools_Software/dccpmon/dccpmon-1.0.0.tar - Unknown - Perl

test-cases/chunked-size-mem-advise.pl

     8: #  #!/bin/bash
     9: #  sysctl -w net.ipv4.tcp_no_metrics_save=1;
    10: #  while true

netsend.berlios.de/test-cases/chunked-size-mem-advise.pl - Unknown - Perl

"A study of large flow interactions in high-speed shared networks
with Grid5000 and GtrcNET-10 instruments".

a selection follows--


As we are using a Linux timer (HZ) of 250 and a 1500-byte packet
 size, the max bandwidth would be about 375 MBps, which is large
 enough to use 1 Gbps NICs. The tcp_no_metrics_save variable
 specifies that the kernel isn't supposed to remember the TCP
 parameters corresponding to a network route and so ensures the
 independence of each successive experiment.

A study of large flow interactions in high-speed
shared networks with Grid5000 and GtrcNET-10
instruments
Romaric Guillier,
Ludovic Hablot,
Yuetsu Kodama,
Tomohiro Kudoh,
Fumihiro Okazaki,
Pascale Primet,
Sébastien Soudan,
Ryousei Takano
November 2006

controls tcp packetization-layer ..

$cat /proc/sys/net/ipv4/tcp_mtu_probing
0
$
tcp_mtu_probing - INTEGER

Controls TCP Packetization-Layer Path MTU Discovery.
Takes three values:

0 - Disabled
1 - Disabled by default, enabled when an ICMP black hole detected
2 - Always enabled, use initial MSS of tcp_base_mss.

source :
Linux kernel Documentation. 2.6.32

   147:                         /* Black hole detection */
   148:                         tcp_mtu_probing(icsk, sk);
   149:
android.git.kernel.org/kernel/msm.git - GPL - C - More from msm.git »

boot/current/System.map
 22728: c044a390 B sysctl_tcp_workaround_signed_windows
 22729: c044a394 B sysctl_tcp_mtu_probing
 22730: c044a398 B sysctl_tcp_orphan_retries
download.asn.pl/.../lintrack-2.0/pkg/linux-2.6.17-8-big.pkg.tar.gz - Unknown


Path MTU discovery (PMTUD) is a technique in computer networking for
 determining the maximum transmission unit (MTU) size on the network
 path between two Internet Protocol (IP) hosts, usually with the goal of
 avoiding IP fragmentation.


--- Wikipedia The Free Encyclopedia
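A sketch of enabling the black-hole-detection mode described by the documentation excerpt above; tcp_base_mss is a real companion sysctl, but the value 1024 is only an illustrative starting MSS:

```shell
# 1 = packetization-layer PMTUD stays off until an ICMP black hole
#     is suspected, then probing kicks in.
sysctl -w net.ipv4.tcp_mtu_probing=1

# MSS the probing search starts from when it does kick in (illustrative).
sysctl -w net.ipv4.tcp_base_mss=1024
```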

tcp moderate …receive buffer auto-tuning

$cat /proc/sys/net/ipv4/tcp_moderate_rcvbuf
1
$
tcp_moderate_rcvbuf - BOOLEAN

If set, TCP performs receive buffer auto-tuning, attempting to
automatically size the buffer (no greater than tcp_rmem[2]) to
match the size required by the path for full throughput.  Enabled by
default.

source :
Linux kernel Documentation   kernel ( 2.6.32 )

trunk/coverage/common/ServerSpecs.pm - 1 identical
   278:  247                                                      tcp_max_syn_backlog               1024
   279: 248                                                      tcp_moderate_rcvbuf               1
   280: 249                                                         tcp_reordering                    3
maatkit.googlecode.com/svn - Unknown - Perl - More from svn »

kernel/sysctl_check.c - 57 identical
   371:   /* NET_TCP_DEFAULT_WIN_SCALE unused */
   372:   { NET_TCP_MODERATE_RCVBUF,              "tcp_moderate_rcvbuf" },
   373:   { NET_TCP_TSO_WIN_DIVISOR,              "tcp_tso_win_divisor" },
android.git.kernel.org/kernel/common.git - GPL - C


xscribe/logs/expxscribe.sh
    24: #set TCP parameters
    25: sysctl -w net.ipv4.tcp_moderate_rcvbuf="0"
    26: sysctl -w net.ipv4.tcp_bic="0"
bruno1.iit.cnr.it/xscribe_exp/xscribe_exp.tgz - Unknown - Shell
A. Receive Buffer Dynamics
Receive buffer dynamics include statistics pertaining to the
receive buffer while its owning process is active. It encompasses
two primary statistics: buffer fluctuation and actual buffer size.
Actual buffer size is the amount of data present on the queue
while the process is active. We call this the actual buffer size
because it can vary quite largely from what is set in /proc.
Buffer fluctuation, Figure 3(a), reveals the amount of processing
that goes into the receive buffer while a task is active. It is a
measurement of the receive queue before and after the receiving
process's time slice. When a process becomes active, we note the
size of its receive queue. We subtract this value from the size of
the buffer at process deactivation, thereby measuring the
fluctuation of the buffer while the process was active. Formally:

(1/n) Σ_{i=1}^{n} (y_i − x_i)    (1)

where n is the number of timeslices, and x_i and y_i are
snapshots of the buffer size before and after, respectively, a
given timeslice i. A positive value means data was added to the
queue during a timeslice, while a negative value corresponds
to data being removed.

source :
Effect of Receive Buffer Size: An OS-based Perspective
Jerome White and David X. Wei
California Institute of Technology
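Equation (1) above is a plain mean of per-timeslice differences. A small sketch, fed with hypothetical queue-size samples (one "x_i y_i" pair per line):

```shell
# Mean receive-buffer fluctuation: average of (y_i - x_i) over n timeslices.
# Column 1 = queue size at process activation, column 2 = at deactivation.
fluctuation() {
    awk '{ total += $2 - $1; n++ } END { printf "%.1f\n", total / n }'
}

# Hypothetical samples: differences 500, -300, 800 -> mean 333.3
printf '1000 1500\n1500 1200\n1200 2000\n' | fluctuation
```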


tcp mem vector of 3 integers

$cat /proc/sys/net/ipv4/tcp_mem
40704	54272	81408
$
tcp_mem

 vector of 3 INTEGERs: min, pressure, max
min: below this number of pages TCP is not bothered about its memory appetite.

pressure: when amount of memory allocated by TCP exceeds this
number of pages, TCP moderates its memory consumption and
enters memory pressure mode, which is exited when memory
consumption falls under "min".

max: number of pages allowed for queueing by all TCP sockets.
Defaults are calculated at boot time from amount of available
memory.
net/ipv4/tcp_input.c - 11 identical

   308:           (int)tp->rcv_ssthresh < tcp_space(sk) &&
   309:           !tcp_memory_pressure) {
   310:                   int incr;

   391:           !tcp_memory_pressure &&
   392:           atomic_read(&tcp_memory_allocated) < sysctl_tcp_mem[0])

    33:   … > /proc/sys/net/ipv4/tcp_mem

luotao1130.googlecode.com/svn - Unknown - Shell

"IV CONFIGURATION"

a selection follows ...

The ideal TCP window size is approximately equal to
the bandwidth-delay product. For some of the
experiments, the window size required is higher than the
default allowed by Linux. To be able to use the proper
window sizes required for the experiments, the TCP
buffers must be tuned. The five Linux configuration
values tcp_mem, tcp_rmem, tcp_wmem, wmem_max
and rmem_max must be modified to accommodate the
higher window sizes. This is done by executing the
following commands on both the sender and the
receiver.
echo "48128 48640 49152" > /proc/sys/net/ipv4/tcp_mem
echo "4096 33554432 33554432" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 33554432 33554432" > /proc/sys/net/ipv4/tcp_wmem
echo "33554432" > /proc/sys/net/core/wmem_max
echo "33554432" > /proc/sys/net/core/rmem_max

source :

Implementing a Testbed for the Evaluation of
FAST TCP in DOCSIS-based
Access Networks
David Kennedy and Irena Atov
Centre for Advanced Internet Architectures. Technical Report 060119A
Swinburne University of Technology
Melbourne, Australia
{dkennedy,iatov}@swin.edu.au
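The window sizes in the excerpt follow from the bandwidth-delay product the authors mention. A sketch of that arithmetic; the 1 Gbps and 100 ms figures are hypothetical examples, not values from the testbed:

```shell
# Bandwidth-delay product in bytes: (link rate in bit/s / 8) * RTT in seconds.
bdp_bytes() {
    awk -v mbps="$1" -v rtt_ms="$2" \
        'BEGIN { printf "%d\n", mbps * 1000000 / 8 * rtt_ms / 1000 }'
}

bdp_bytes 1000 100   # 1 Gbps path, 100 ms RTT -> 12500000 bytes, about 12 MB
```

A tcp_rmem/tcp_wmem maximum below this figure caps throughput regardless of the congestion-control algorithm.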

timewait sockets held…tw_bucket

$cat /proc/sys/net/ipv4/tcp_max_tw_buckets
180000
$
tcp_max_tw_buckets

Maximal number of timewait sockets held by system simultaneously.
If this number is exceeded time-wait socket is immediately destroyed
and warning is printed. This limit exists only to prevent simple DoS
attacks, you _must_ not lower the limit artificially, but rather increase
 it (probably, after increasing installed memory), if network conditions
 require more than default value.

source :

Linux kernel Documentation .


cifs-1.13/fs/cifs/file.s

 65032: .LC2776:
 65033:         .string "NET_TCP_MAX_TW_BUCKETS"
 65034: .LC4101:
de.samba.org/samba/ftp/cifs-cvs/cifs-1.13-2.6-bad.tar.gz - Unknown - Assembly - More from cifs-1.13-2.6-bad.tar.gz »
"Firewall performance measurement"

--- a selection follows.

Size of available TCP port range:
When connecting to the same server on the same port, there are
64,512 non-privileged ports available on the client side as source
 ports. According to RFC793[5], a port cannot be reused until the
 TCP_TIME_WAIT state expires. The recommended timeout value in
 the RFC is 4 minutes, which would mean 268 new requests per
 second at the maximum. In the Linux kernel the timeout value of
the TCP_TIME_WAIT state is around 1 minute, which means a
maximum of 1075 new requests per second.

source :
Netfilter Performance Testing
József Kadlecsik
KFKI RMKI
kadlec@sunserv.kfki.hu
György Pásztor
SZTE EK
pasztor@linux.gyakg.u-szeged.hu
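The excerpt's 268 and 1075 figures are simply the source-port pool divided by the TIME_WAIT duration. A sketch of that arithmetic (integer result, as in the paper):

```shell
# Maximum sustainable new-connection rate to one server:port when each
# source port is unusable for tw_secs after close.
max_conn_rate() {
    awk -v ports="$1" -v tw="$2" 'BEGIN { printf "%d\n", ports / tw }'
}

max_conn_rate 64512 240   # RFC 793's 4-minute TIME_WAIT
max_conn_rate 64512 60    # Linux's roughly 1-minute TIME_WAIT
```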

grep .. -m .. stop reading a file…

-m
Stop reading a file after NUM matching lines.
$grep -m 1  wc  wcwidth.c
#include 
$grep -m 2  wc  wcwidth.c
#include 
wchar_t cr;
$grep -m 3  wc  wcwidth.c
#include 
wchar_t cr;
value=wcwidth(cr);
$grep -m 4  wc  wcwidth.c
#include 
wchar_t cr;
value=wcwidth(cr);
$grep -m 0  wc  wcwidth.c
$
scripts/makelst - 338 identical

    19: t1=`$3 --syms $1 | grep .text | grep -m1 " F "`
    20: if [ -n "$t1" ]; then

android.git.kernel.org/kernel/common.git - GPL - Shell
Searching for a pattern in a text file is a very common operation
 in many applications ranging from text editors and databases to
 applications in molecular biology. In many instances the pattern
does not appear in the text exactly. Errors in the text or in the
 query can result from misspelling or from experimental errors
 (e.g., when the text is a DNA sequence). The use of such
 approximate pattern matching has been limited until now to
 specific applications. Most text editors and searching programs
 do not support searching with errors because of the complexity
involved in implementing it. In this paper we describe a
new tool, called agrep, for approximate pattern matching. Agrep
 is based on a new efficient and flexible algorithm for approximate
 string matching. Agrep is also competitive with other tools for
 exact string matching; it includes many options that make
 searching more powerful and convenient.

source :

AGREP — A FAST APPROXIMATE PATTERN-MATCHING TOOL
(Preliminary version)
Sun Wu and Udi Manber1
Department of Computer Science
University of Arizona
Tucson, AZ 85721
(sw | udi)@cs.arizona.edu

remembered connection requests .. syn .. backlog

$cat /proc/sys/net/ipv4/tcp_max_syn_backlog
1024
$
Maximal number of remembered connection requests, which still did not
receive an acknowledgement from connecting client. The default value
 is 1024 for systems with more than 128 MB of memory, and 128 for low
 memory machines. If the server suffers from overload, try increasing
 this number.
source :
http://www.linuxinsight.com/proc_sys_net_ipv4_tcp_max_syn_backlog.html

Script.January/HPUX11/GEN003600 - 11 identical
    12: # 6.10.06 JM created initial check.
    13: # 9.20.06 JMazz chnage net.ipv4.tcp_max_syn_backlog test condition
    14: #         from -ne to -lt and the test value from 0 to 1280.
   158:   # "9.20.06:Code mod."
   159:   if [ `sysctl -a |grep "net.ipv4.tcp_max_syn_backlog" | awk -F"=" '{print $2}'` -lt 1280 ]
   160:   then
iase.disa.mil/stigs/SRR/UNIX_51-15January07.tar.bz2 - Unknown - Shell

Testing Response to a SYN Flood Attack
-- a selection follows

Also after the system is started, the value 0 must be
written in the /proc file system into the pseudo file
/proc/sys/net/ipv4/tcp_max_syn_backlog. This specifies
the maximum number of pending connection requests
allowed. When it equals 0, only one connection request
may be pending.

source :
Verifying TCP Implementation
Fang Fang
University of New Brunswick
P.O. Box 4400
(506) 453-4566
q2a6z@unb.ca
John M. DeDourek
University of New Brunswick
P.O. Box 4400
(506) 453-4566
dedourek@unb.ca
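Raising the backlog, as the linuxinsight excerpt suggests, is a sysctl change. A sketch with illustrative values; note the completed-connection (accept) queue is additionally bounded by somaxconn and by the backlog the application passes to listen():

```shell
# Allow more half-open (SYN_RECV) connections to be remembered
# (2048 is illustrative, not a tuned recommendation).
sysctl -w net.ipv4.tcp_max_syn_backlog=2048

# The accept queue of completed connections has its own cap.
sysctl -w net.core.somaxconn=1024
```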