
kernel tuning

#!/bin/bash
# STM kernel tuning V0.1

##### File Descriptors
# Set a minimum of one million file descriptors unless resources are seriously constrained.  
# See also the Stingray configuration setting 'maxfds'
echo 2097152 > /proc/sys/fs/file-max
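
# A quick sanity check (not part of the original script): /proc/sys/fs/file-nr
# reports the number of allocated, free and maximum file handles, so you can
# see how close the system is running to the limit:
# cat /proc/sys/fs/file-nr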

##### Ephemeral port range
# Each TCP and UDP connection from Stingray to a back-end server consumes an ephemeral port,
# and that port is retained for the 'fin_timeout' period once the connection is closed.
# If back-end connections are frequently created and closed, it's possible to exhaust the
# supply of ephemeral ports. Increase the port range to the maximum (as below) and reduce
# the fin_timeout to 30 seconds if necessary:
echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range
echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
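
# To gauge ephemeral port pressure (an illustrative check, not from the
# original script), count sockets lingering in TIME_WAIT; each one holds a
# port until fin_timeout expires:
# ss -tan state time-wait | wc -l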

##### SYN Cookies
# SYN cookies should be enabled on a production system.  The Linux kernel will process
# connections normally until the backlog grows, at which point it will use SYN cookies
# rather than storing local state.  SYN cookies are an effective protection against SYN
# floods, one of the most common DoS attacks against a server.

# If you are seeking a stable test configuration as a basis for other tuning, you should
# disable SYN cookies. Increase the size of net/ipv4/tcp_max_syn_backlog if you encounter
# dropped connection attempts (see the example below).

# Production Setting
#echo 1 > /proc/sys/net/ipv4/tcp_syncookies

# Load Testing Setting
echo 0 > /proc/sys/net/ipv4/tcp_syncookies
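
# Illustrative example of raising the SYN backlog as suggested above; 8192 is
# an assumed value, size it to your expected connection rate.  The kernel logs
# "possible SYN flooding" when cookies activate, so dmesg is a useful cross-check:
# echo 8192 > /proc/sys/net/ipv4/tcp_max_syn_backlog
# dmesg | grep -i "syn flood"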

##### Request backlog
# The request backlog contains TCP connections that are established (the 3-way handshake 
# is complete) but have not been accepted by the listening socket (Stingray).  See also
# the Stingray tunable 'listen_queue_size'.  Restart the Stingray software after changing
# this value.
# If the listen queue fills up because the Stingray does not accept connections sufficiently
# quickly, the kernel will quietly ignore additional connection attempts.  Clients will then
# back off (they assume packet loss has occurred) before retrying the connection.
echo 1024 > /proc/sys/net/core/somaxconn
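
# To confirm whether the listen queue is actually overflowing (an optional
# check, not part of the original script), inspect the kernel's counters:
# netstat -s | grep -i "listen"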


##### @@ Advanced kernel and operating system tuning @@ #####

##### Packet queues
# In 10 GbE environments, you should consider increasing the size of the input queue:
# echo 5000 > /proc/sys/net/core/netdev_max_backlog
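
# To see whether the input queue is already dropping packets (an illustrative
# check), inspect /proc/net/softnet_stat; the second hex column on each CPU's
# line counts drops:
# cat /proc/net/softnet_stat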

##### TCP TIME_WAIT tuning
# TCP connections reside in the TIME_WAIT state in the kernel once they are closed. 
# TIME_WAIT allows the server to time-out connections it has closed in a clean fashion. 
# If you see the error "TCP: time wait bucket table overflow", consider increasing the
# size of the table used to store TIME_WAIT connections:
echo 7200000 > /proc/sys/net/ipv4/tcp_max_tw_buckets
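
# The current TIME_WAIT population appears as the "tw" figure on the TCP line
# of /proc/net/sockstat (a quick check, not part of the original script):
# cat /proc/net/sockstat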

##### TCP slow start and window sizes
# In earlier Linux kernels (pre-2.6.39), the initial TCP window size was very small.
# The impact of a small initial window size is that peers communicating over a high-latency
# network will take a long time (several seconds or more) to scale the window to utilize
# the full bandwidth available – often the connection will complete (albeit slowly)
# before an efficient window size has been negotiated.
# The 2.6.39 kernel increases the default initial window size from 2 to 10.  
# If necessary, you can tune it manually:

# ip route change default via 192.168.1.1 dev eth0 proto static initcwnd 10
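
# To verify the change took effect (an optional check; the gateway and
# interface above are examples, substitute your own), the default route
# should now list the initcwnd value:
# ip route show default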

# If a TCP connection stalls, even briefly, the kernel may reduce the TCP window size 
# significantly in an attempt to respond to congestion.  Many commentators have suggested
# that this behavior is not necessary, and this "slow start" behavior should be disabled:
echo 0 > /proc/sys/net/ipv4/tcp_slow_start_after_idle

##### TCP options for Spirent load generators
# If you are using older Spirent test kit, you may need to set the following tunables 
# to work around optimizations in their TCP stack:
# echo 0 > /proc/sys/net/ipv4/tcp_timestamps
# echo 0 > /proc/sys/net/ipv4/tcp_window_scaling
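
##### Persisting these settings
# Values written via echo do not survive a reboot.  A common follow-up (not
# part of the original script) is to add equivalent keys to /etc/sysctl.conf
# (or a file under /etc/sysctl.d/), for example:
# net.ipv4.tcp_fin_timeout = 30
# net.core.somaxconn = 1024
# ...and then apply them with:
# sysctl -p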