
{"id":1679,"date":"2010-06-16T00:37:57","date_gmt":"2010-06-15T19:07:57","guid":{"rendered":"http:\/\/www.jeffrin.in\/?p=1679"},"modified":"2010-06-16T00:37:57","modified_gmt":"2010-06-15T19:07:57","slug":"tcp-latency-throughput","status":"publish","type":"post","link":"https:\/\/www.trueangle.org\/index.php\/2010\/06\/16\/tcp-latency-throughput\/","title":{"rendered":"tcp latency&#8230;.throughput.."},"content":{"rendered":"<pre>\n$cat \/proc\/sys\/net\/ipv4\/tcp_low_latency\n0\n$\n\n<pre>\ntcp_low_latency - BOOLEAN\nIf set, the TCP stack makes decisions that prefer lower\nlatency as opposed to higher throughput.  By default, this\noption is not set meaning that higher throughput is preferred.\nAn example of an application where this default should be\nchanged would be a Beowulf compute cluster.\nDefault: 0\nsource : linux kernel source Documentation.\n<\/pre>\n<pre>\n\/*\n * CONFIG_LATENCYTOP enables a kernel latency tracking infrastructure that is\n * used by the \"latencytop\" userspace tool. The latency that is tracked is not\n * the 'traditional' interrupt latency (which is primarily caused by something\n * else consuming CPU), but instead, it is the latency an application encounters\n * because the kernel sleeps on its behalf for various reasons.\n *\n * This code tracks 2 levels of statistics:\n * 1) System level latency\n * 2) Per process latency\n *\n * The latency is stored in fixed sized data structures in an accumulated form;\n * if the \"same\" latency cause is hit twice, this will be tracked as one entry\n * in the data structure. Both the count, total accumulated latency and maximum\n * latency are tracked in this data structure. When the fixed size structure is\n * full, no new causes are tracked until the buffer is flushed by writing to\n * the \/proc file; the userspace tool does this on a regular basis.\n* A latency cause is identified by a stringified backtrace at the point that\n * the scheduler gets invoked. 
The userland tool will use this string to\n * identify the cause of the latency in human readable form.\n *\n * The information is exported via \/proc\/latency_stats and \/proc\/&lt;pid&gt;\/latency.\n * These files look like this:\n *\n * Latency Top version : v0.1\n * 70 59433 4897 i915_irq_wait drm_ioctl vfs_ioctl do_vfs_ioctl sys_ioctl\n * |    |    |    |\n * |    |    |    +----&gt; the stringified backtrace\n * |    |    +---------&gt; The maximum latency for this entry in microseconds\n * |    +--------------&gt; The accumulated latency for this entry (microseconds)\n * +-------------------&gt; The number of times this entry is hit\n *\n * (note: the average latency is the accumulated latency divided by the number\n * of times)\n *\/\n\nsource: Linux kernel source 2.6.32, kernel\/latencytop.c\n<\/pre>\n<pre>\nThe Hop Protocol\nThe Hop protocol operates over an unreliable datagram\nservice such as UDP\/IP. The core goal of the Hop protocol\nis to provide the lowest latency and highest throughput possible\nwhen transferring packets across wide-area networks.\n\nThe key elements of the Hop protocol are:\nNon-Blocking: packets are forwarded despite the loss\nof packets ordered earlier.\n\nLazy-Selective-Retransmits: nacks are sent for specific\nlost packets after a short delay to avoid requesting data\nwhich was not lost but merely arrived out of order or\nis sequenced after lost data.\n\nRate-based flow control: a rate based flow regulator\nprovides explicit support for high delay-bandwidth\nnetworks. 
In addition, the rate based regulator can utilize\nbandwidth reservation services if such exist in the\nphysical network.\n\nsource:\nA Low Latency, Loss Tolerant Architecture and Protocol for Wide Area Group\nCommunication\nYair Amir, Claudiu Danilov, Jonathan Stanton\nDepartment of Computer Science\nJohns Hopkins University\n3400 North Charles St.\nBaltimore, MD 21218 USA\n{yairamir, claudiu, jonathan}@cs.jhu.edu\n<\/pre>\n[audio:http:\/\/www.freeinfosociety.com\/media\/sounds\/118.mp3]\n","protected":false},"excerpt":{"rendered":"<p>$cat \/proc\/sys\/net\/ipv4\/tcp_low_latency 0 $ tcp_low_latency &#8211; BOOLEAN If set, the TCP stack makes decisions that prefer lower latency as opposed to higher throughput. By default, this option is not set meaning that higher throughput is preferred. An example of an application where this default should be changed would be a Beowulf compute cluster. Default: 0 &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.trueangle.org\/index.php\/2010\/06\/16\/tcp-latency-throughput\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;tcp 
latency&#8230;.throughput..&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[466,548,1009,1035,1548],"_links":{"self":[{"href":"https:\/\/www.trueangle.org\/index.php\/wp-json\/wp\/v2\/posts\/1679"}],"collection":[{"href":"https:\/\/www.trueangle.org\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.trueangle.org\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.trueangle.org\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.trueangle.org\/index.php\/wp-json\/wp\/v2\/comments?post=1679"}],"version-history":[{"count":0,"href":"https:\/\/www.trueangle.org\/index.php\/wp-json\/wp\/v2\/posts\/1679\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.trueangle.org\/index.php\/wp-json\/wp\/v2\/media?parent=1679"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.trueangle.org\/index.php\/wp-json\/wp\/v2\/categories?post=1679"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.trueangle.org\/index.php\/wp-json\/wp\/v2\/tags?post=1679"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}