Tickrate

From Whisper's Wiki

Definitions

tickrate
From Valve: During each tick, the server processes incoming user commands, runs a physical simulation step, checks the game rules, and updates all object states. After simulating a tick, the server decides if any client needs a world update and takes a snapshot of the current world state if necessary. A higher tickrate increases the simulation precision, but also requires more CPU power and available bandwidth on both server and client.

When a client connects to a server, the client's Source Engine matches the tickrate of the SRCDS (Source Dedicated Server) it connected to.

  • Server tickrate 100 = Client tickrate 100
  • Server tickrate 66 = Client tickrate 66
  • Server tickrate 33 = Client tickrate 33

THE ONLY PLACE YOU CAN CHANGE THE TICKRATE IS VIA THE COMMAND STARTUP LINE.
NO, THERE IS NOWHERE ELSE, GOT IT?

SRCDS
Source Dedicated Server. The program you should be running and trying to optimise if you are here.
FPS
Frames per Second.
Client FPS
The number of times per second the game checks for input, whether from the keyboard/mouse or incoming game packets; basically any I/O operation.
Server FPS
Because there are no keyboard or mouse I/Os occurring, this only deals with how often the server checks for game packets.
fps_max
Sets an upper limit on the frames per second the server runs at. Default=300
sv_maxrate
The maximum amount of data in Bytes per Second the server will send to the client; conversely, the maximum amount of data in Bytes per Second the client can request from the server. sv_maxrate overrides the client's rate setting if sv_maxrate is less than the client's rate setting. Default=0 Maximum=30000
sv_minrate
The minimum amount of data in Bytes per Second the server will send to the client; conversely, the minimum amount of data in Bytes per Second the client can request from the server. sv_minrate overrides the client's rate setting if sv_minrate is greater than the client's rate setting. Default=0
sv_maxupdaterate
The maximum number of updates per second the server will send to the client; conversely, the maximum number of updates per second the client can request from the server. sv_maxupdaterate overrides the client's cl_updaterate setting if sv_maxupdaterate is less than the client's cl_updaterate setting. Default=60
sv_minupdaterate
The minimum number of updates per second the server will send to the client; conversely, the minimum number of updates per second the client can request from the server. sv_minupdaterate overrides the client's cl_updaterate setting if sv_minupdaterate is greater than the client's cl_updaterate setting. Default=0
rate
The maximum number of Bytes per Second the client will request from the server. rate overrides the server's sv_maxrate setting if rate is less than the server's sv_maxrate setting. Default=(depends upon the client's STEAM Internet Connection Setting) Maximum=30000
cl_updaterate
The maximum number of updates per second the client will request from the server. cl_updaterate overrides the server's sv_maxupdaterate setting if cl_updaterate is less than the server's sv_maxupdaterate setting. Default=20
cl_cmdrate
The maximum number of updates per second the client will send to the server. Default=30 Minimum=10 Maximum=100

NB: sv_maxupdaterate and cl_updaterate cannot cause more data to be sent to the client than the sv_maxrate and rate settings allow, or than the server's, or more likely the client's, actual available bandwidth allows. Choke occurs when either:

The server's sv_maxupdaterate causes the amount of bandwidth required to exceed the bandwidth allocated per client by sv_maxrate, or the total amount of bandwidth the server has access to,

Or

The client's cl_updaterate causes the amount of bandwidth required by the client to exceed the client's rate setting, or the total amount of bandwidth the client has access to.
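These min/max interactions boil down to simple clamping. A rough Python sketch of the rules described above (an illustration only, not Valve's actual code; the function names are made up for this example):

```python
def effective_rate(client_rate, sv_minrate, sv_maxrate):
    """Clamp the client's requested rate (Bytes/s) between the
    server's sv_minrate and sv_maxrate. A value of 0 means 'no limit'."""
    rate = client_rate
    if sv_maxrate > 0:
        rate = min(rate, sv_maxrate)
    if sv_minrate > 0:
        rate = max(rate, sv_minrate)
    return rate

def effective_updaterate(cl_updaterate, sv_minupdaterate, sv_maxupdaterate, tickrate):
    """Updates per second are clamped by the server's limits and can
    never exceed the tickrate."""
    upd = min(cl_updaterate, sv_maxupdaterate, tickrate)
    return max(upd, sv_minupdaterate)

# A client requesting rate 30000 on an sv_maxrate 20000 server is capped:
print(effective_rate(30000, 0, 20000))        # 20000
# A client requesting cl_updaterate 101 on a 66 tick server gets 66:
print(effective_updaterate(101, 0, 100, 66))  # 66
```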

More Info here: Valve's Source Multiplayer Networking Explanation

Instructions

  1. tickrate is set by adding -tickrate 100 (for a tickrate 100 server) to the command line startup parameters. Tickrate cannot be changed on the fly via console, HLSW or rcon; it can only be changed on the command line, and the server must be restarted for the change to take effect.
  2. If you want your tickrate changes to have any noticeable benefit you must change a few other server variables as well as change the Windows Kernel Timer Resolution (pingboosting)
  3. To change the Windows Kernel Timer Resolution (pingboost a server), all you need to do is run Windows Media Player. It does not need a file open; it just has to be running in the background. If you do not do this, your server's fps will be limited to around 64 frames per second.
    You can also use a little app that somebody wrote, which you can find here: srcdsfpsboost.zip
    The file now includes the source code so you can compile it yourself if you wish and I can confirm that the version is safe. When compiling you must link this to libwinmm.a.
    Thank you needforspeed for the source code
    Thanks to trog for compiling it.
  4. The server's fps can be seen by issuing the stats command in console, via HLSW, or via RCON STATS if you are logged in via rcon on the server.
  5. The server fps is regulated by the fps_max command (default is 300), which ends up producing around 256 fps in RCON STATS. Don't ask me why, I've asked Valve. The next step up is to run your fps_max at 600 so you then get 512 fps in RCON STATS. It can be set somewhere between 512 and 600 via console or HLSW, and permanently by adding the command line parameter +fps_max 600. If you set it at 511 or lower, you will see that your fps according to RCON STATS will still sit at 256 fps even after a map change, so you must set your fps_max higher than what you actually want to achieve the desired effect. fps_max 512 produces strange results on pingboosted Windows-based SRCDS, whereas fps_max behaviour on Linux SRCDS is affected by many variables, such as kernel version, kernel timer and hardware, so the rules for fps_max that apply to Windows SRCDS have minimal relevance to Linux SRCDS. DO NOT take my word for it, test it yourself! The reason for running a high server fps is to ensure that when the server does run a tickrate calculation, it is using the most up-to-date information available.
  6. So you have a high tickrate, your server is pingboosted and is running at high fps, none of this is of any use to your clients (the players) if you do not change your servers rates, specifically the sv_maxrate and sv_maxupdaterate variables.
  7. sv_maxrate (default is 0, maximum = 30,000). I personally have found that the sv_maxrate 0 setting is detrimental to server performance (a purely subjective opinion, but there you go; feel free to ignore it until your clients start complaining about stupid lag and player-warping issues that don't correlate to any actual network or CPU usage, or over-usage as the case may be). Set your sv_maxrate to 20000, or if you have player numbers in excess of 20, use sv_maxrate 30000.
  8. sv_maxupdaterate (default 60) must be changed to start using all this server-generated data more effectively and get it out to your players who want to run 101/101/20000/10000 cl_cmdrate/cl_updaterate/rate/cl_rate settings (yes, I know cl_rate is defunct, but some people can't be told, so I humour them and leave it in). You need to set your sv_maxupdaterate equal to the tickrate; you only have to do this if you run a tickrate higher than 50. E.g. for tickrate 66 run sv_maxupdaterate 66 or even 100; for tickrate 100 run sv_maxupdaterate 100. If you do not do this, your clients will NEVER see the full benefit of your tickrate changes, and even then, because of server load, the clients will not see the full sv_maxupdaterate or tickrate reflected in a net_graph 3. (See below for more information about net_graph 3)
  9. Do not run a tickrate higher than 100; Valve have admitted that there will be issues if you push the tickrate too high. In fact, as of 09 January 2006, players will have problems on 100 tickrate servers, with doors that won't open and with getting stuck in spots that would not trap them on 66 tickrate servers, e.g. crouched hard up against boxes on angles.
  10. Make sure you have the bandwidth and CPU to cope with SRCDS running with these settings. If you don't have at least a 10Mbps full duplex link, you probably do not have the bandwidth to see the full benefit of following the above instructions. This is aimed at people with servers in dedicated data centres with appropriately high speed Internet connections. Most home users will not have the necessary bandwidth or hardware to take full advantage of ALL of these settings. You may, though, be able to improve your end users' overall experience just by pingboosting your server and increasing the tickrate and fps_max, whilst leaving the sv_maxrate and sv_maxupdaterate settings low.
  11. 24, 32 & 40 player servers should be run with a tickrate of no more than 66 and sv_maxupdaterate of 100. I've tried higher but you get strange issues for the clients if you do. Well you can if you want, but you need A LOT of CPU dedicated to a single SRCDS process.
  12. The fps_max setting of 600 does not appear to hit the CPU as hard as the other settings I have mentioned here. Your mileage may vary, but try reducing it back to the default of 300 if your clients get strange lag issues and you have already tried reducing the other server variables mentioned here, i.e. change this one last! NB: Your server's fps will not exceed the kernel timer resolution, which varies depending upon which Operating System is being run and how it is set up.
  13. For competition servers, or any server at the 18 player or less mark, then you should be able to use a tickrate of 100 and an sv_maxupdaterate of 100 successfully without any issues, so long as you have the bandwidth and CPU to cope!
  14. For changes to take effect, the settings must be changed in the server.cfg file (except tickrate & fps_max, which should be command line variables) and the server restarted; or, if done via RCON, a map change must be done.
  15. This information is for Windows Source Dedicated Server only, do not ask me about LINUX, I cannot help you.
  16. This was written on the basis that your SRCDS is a default SRCDS Installation with no Mods/Plugins or Non-Standard anything else, such as sounds, skins, maps etc on the server. Using them (Mods/Plugins) will increase CPU utilisation and thus limit the final result. Obviously you need to monitor these for your particular situation.
  17. Finally, you need to actually play on your server for several hours with all SRCDS processes full to see if there are any issues that do not show up by normal performance monitoring tools, to ensure everything is running ok. Subjective in game experience can deviate significantly from Objective Server Statistics, thus you will not know there is a problem unless you are on the server playing at the time it happens.
  18. Sample SRCDS Command Startup Line:
    C:\srcds\srcds.exe -console -game cstrike -tickrate 66 +fps_max 600 +maxplayers 18 -port 27015 +exec server.cfg +map de_dust2

Linux Kernel Timer Instructions

Thanks to triphammer in the STEAM Linux SRCDS Forum

You need to do a custom (re)compile of the Linux kernel in order to change the kernel interruptibility / timer.

 

Since Kernel 2.6.14 you change the HZ with "make menuconfig", just go to: "Processor type and features" > "Timer frequency (XXXXX HZ)". The default HZ for 2.4 Kernels is 100. You can also change the HZ via the "USER_HZ" variable located in: include/asm-<arch>/param.h.

param.h:

#define USER_HZ 100 /* .. some user interfaces are in "ticks" */


More along the lines of your question, you can also set the kernel timer frequency by changing the HZ variable in the same file:

#define HZ 1000 /* Internal kernel timer frequency */

Also:

+ config HZ
+ int "Frequency of the Timer Interrupt (1000 or 100)"
+ range 100 1000
+ default 1000
+ help
+ Allows the configuration of the timer frequency. It is customary
+ to have the timer interrupt run at 1000 HZ but 100 HZ may be more
+ beneficial for servers and NUMA systems that do not need to have
+ a fast response for user interaction and that may experience bus
+ contention and cacheline bounces as a result of timer interrupts.
+ Note that the timer interrupt occurs on each processor in an SMP
+ environment leading to NR_CPUS * HZ number of timer interrupts
+ per second.
+
endmenu

For the server fps to cater for your high tickrate under Linux, you can recompile your 2.4 kernel with its kernel timer resolution changed, but the easiest and probably best course of action is to use the 2.6 kernel and change the "USER_HZ" variable (I would suggest starting at 500 and seeing what happens before experimenting with other numbers), which will enable higher server fps on your Linux server.


Instructions for compiling the Linux kernel:


Client Settings

Clients must have their STEAM Internet Connection Settings setup correctly for their Internet connection. See here for explanation on how to do this.

The client's rate should = the server's sv_maxrate

The client's cl_updaterate should = the server's sv_maxupdaterate, which equals the server's tickrate

Thus, for a server with sv_maxrate 20000, tickrate 100 and sv_maxupdaterate 100, the clients should run the following settings:

  • rate 20000
  • cl_updaterate 100
  • cl_cmdrate 100
  • cl_interpolate 1
  • cl_interp 0.1
  • cl_smooth 0

These settings will provide the best client experience so long as your server & network can cope with running with a high tickrate and the rates required to take advantage of them.

NB: If your server settings differ from the example just mentioned, your client settings will have to change accordingly. This is just an example; do not think that these rates are optimum for all server settings, because they are not.

Summary

< 20 Player servers
-tickrate 100
sv_maxrate 30000
sv_maxupdaterate 100
fps_max 600

> 20 Player servers
-tickrate 66
sv_maxrate 20000
sv_maxupdaterate 66
fps_max 600

Make sure you have the CPU and bandwidth to cope

What you need to look out for is high CPU usage on the server, and/or choke on clients that did not get it before you made changes to your server's tickrate and associated settings, and/or fps constantly running well below the kernel timer and/or below the tickrate. Or otherwise, to put it bluntly, just blatantly obvious crap lag on the server.

i.e. If you run at 66 tickrate with 50% CPU and 100 tickrate at 90% CPU, then it's obvious that 66 tickrate is what you are going to have to run your server at.

If you set your kernel timer to 500Hz or thereabouts, and fps_max at 600, but your server is only constantly getting 150-200 fps, then it's obvious you need to change the kernel timer and/or the fps_max to a lower setting.

Server Bandwidth Calculation for Dummies

sv_maxrate and rate are the two variables that decide the maximum amount of bandwidth each player will use. Both are measured in Bytes per Second, so an sv_maxrate of 20000 = 20,000 Bytes per Second, and a rate of 15000 = 15,000 Bytes per Second.

Network speeds are by convention quoted in bits per second, whether Kilobits (Kb) 1,000 bits, Megabits (Mb) 1,000,000 bits or Gigabits (Gb) 1,000,000,000 bits.

The other convention is b is for bit, B is for Byte, it is important not to confuse the two.

8 bits = 1 Byte

To calculate the amount of upload bandwidth your server must have, you multiply your sv_maxrate by the number of players on the server. Thus an sv_maxrate of 20000 with 20 players will require at least 20 * 20,000 = 400,000 Bytes per Second of bandwidth. I say at least, because your theoretical maximum upload speed is just that, theoretical, and you will find that most connections will not sustain their theoretical maximums for long periods of time, yet that is exactly how game servers must operate to provide a positive end user experience.

Now going back to our example, we have calculated that you are going to require 400,000 Bytes per Second of Bandwidth to serve 20 players. We now need to convert this to normal Networking conventions, so we can compare apples with apples. To do this, the calculation for this example is as follows:

400,000 Bytes * 8 bits / 1,000 = 3,200 KiloBits/Second (3,200Kbps) or 400,000 Bytes * 8 bits / 1,000,000 = 3.2 Megabits/Second (3.2Mbps)

The point of this calculation is that whatever Bytes per Second a particular SRCDS setup requires, you need to convert that into a bit speed by multiplying the total amount of Bytes generated per second by 8 (8 bits = 1 Byte), and then convert that into either kilobits or megabits by dividing by 1,000 for kilo or 1,000,000 for mega. This gives you a value in Kilobits per Second (Kbps) or Megabits per Second (Mbps), whichever is easier to read, so you are able to make a correct comparison with your connection speed.

Please realise that your X Mbps connection may be rated very close to what the server requires, but it is nearly always necessary to leave an overhead of between 10%-25% to make sure the server can always cope, since many connections are not able to constantly run at their peak theoretical speeds. So an sv_maxrate 20000 server with 20 players is probably going to require a 4Mbps upstream connection to adequately cope with the load.

Final Calculation looks like this: sv_maxrate * {player number} * 8 / 1,000 = Maximum Upstream Speed in Kbps your server requires.

This calculation will work for multiple SRCDS processes on the one physical server.
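The forward calculation is easy to script. A minimal Python sketch of the formula above (the function name is mine, purely for illustration):

```python
def required_upload_kbps(sv_maxrate, players):
    """sv_maxrate (Bytes/s per player) * players * 8 bits / 1,000
    = total upstream requirement in kilobits per second."""
    return sv_maxrate * players * 8 / 1000

# The worked example from the text: sv_maxrate 20000 with 20 players.
print(required_upload_kbps(20000, 20))  # 3200.0 Kbps, i.e. 3.2 Mbps
# Remember to add the 10%-25% overhead on top of this figure.
```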

If you want to turn this calculation around, and wish to calculate the maximum theoretical sv_maxrate your server can run for a given upload speed (in kbps) and player number, the calculation is as follows:

upload bandwidth in kilobits per second / 8 * 1000 / player number = the theoretical maximum sv_maxrate you can run your server at.

This Calculation only works for a single SRCDS process on a single physical server.
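The reverse calculation, again as an illustrative Python sketch (the function name is mine; the 30,000 cap is the SRCDS maximum sv_maxrate):

```python
def max_sv_maxrate(upload_kbps, players):
    """upload bandwidth in kbps / 8 * 1,000 / players, capped at the
    SRCDS maximum sv_maxrate of 30,000 Bytes per Second."""
    return min(upload_kbps * 1000 / 8 / players, 30000)

# A 1,024 kbps uplink serving 16 players:
print(max_sv_maxrate(1024, 16))  # 8000.0, matching the table below
```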

Hopefully you can now work out just how much upstream bandwidth your server requires for any given sv_maxrate and player number, or the reverse for a given upstream bandwidth.

Below are tables with the calculation values already worked out for you.

Maximum theoretical required upload bandwidth in kilobits per second for a given player number & sv_maxrate
sv_maxrate --> 3,000 5,000 7,500 10,000 12,000 15,000 17,500 20,000 25,000 30,000
Total Players 6 144 Kbps 240 Kbps 360 Kbps 480 Kbps 576 Kbps 720 Kbps 840 Kbps 960 Kbps 1,200 Kbps 1,440 Kbps
Total Players 8 192 Kbps 320 Kbps 480 Kbps 640 Kbps 768 Kbps 960 Kbps 1,120 Kbps 1,280 Kbps 1,600 Kbps 1,920 Kbps
Total Players 10 240 Kbps 400 Kbps 600 Kbps 800 Kbps 960 Kbps 1,200 Kbps 1,400 Kbps 1,600 Kbps 2,000 Kbps 2,400 Kbps
Total Players 12 288 Kbps 480 Kbps 720 Kbps 960 Kbps 1,152 Kbps 1,440 Kbps 1,680 Kbps 1,920 Kbps 2,400 Kbps 2,880 Kbps
Total Players 14 336 Kbps 560 Kbps 840 Kbps 1,120 Kbps 1,344 Kbps 1,680 Kbps 1,960 Kbps 2,240 Kbps 2,800 Kbps 3,360 Kbps
Total Players 16 384 Kbps 640 Kbps 960 Kbps 1,280 Kbps 1,536 Kbps 1,920 Kbps 2,240 Kbps 2,560 Kbps 3,200 Kbps 3,840 Kbps
Total Players 18 432 Kbps 720 Kbps 1,080 Kbps 1,440 Kbps 1,728 Kbps 2,160 Kbps 2,520 Kbps 2,880 Kbps 3,600 Kbps 4,320 Kbps
Total Players 20 480 Kbps 800 Kbps 1,200 Kbps 1,600 Kbps 1,920 Kbps 2,400 Kbps 2,800 Kbps 3,200 Kbps 4,000 Kbps 4,800 Kbps
Total Players 22 528 Kbps 880 Kbps 1,320 Kbps 1,760 Kbps 2,112 Kbps 2,640 Kbps 3,080 Kbps 3,520 Kbps 4,400 Kbps 5,280 Kbps
Total Players 24 576 Kbps 960 Kbps 1,440 Kbps 1,920 Kbps 2,304 Kbps 2,880 Kbps 3,360 Kbps 3,840 Kbps 4,800 Kbps 5,760 Kbps
Total Players 28 672 Kbps 1,120 Kbps 1,680 Kbps 2,240 Kbps 2,688 Kbps 3,360 Kbps 3,920 Kbps 4,480 Kbps 5,600 Kbps 6,720 Kbps
Total Players 32 768 Kbps 1,280 Kbps 1,920 Kbps 2,560 Kbps 3,072 Kbps 3,840 Kbps 4,480 Kbps 5,120 Kbps 6,400 Kbps 7,680 Kbps
Total Players 36 864 Kbps 1,440 Kbps 2,160 Kbps 2,880 Kbps 3,456 Kbps 4,320 Kbps 5,040 Kbps 5,760 Kbps 7,200 Kbps 8,640 Kbps
Total Players 40 960 Kbps 1,600 Kbps 2,400 Kbps 3,200 Kbps 3,840 Kbps 4,800 Kbps 5,600 Kbps 6,400 Kbps 8,000 Kbps 9,600 Kbps
Total Players 44 1,056 Kbps 1,760 Kbps 2,640 Kbps 3,520 Kbps 4,224 Kbps 5,280 Kbps 6,160 Kbps 7,040 Kbps 8,800 Kbps 10,560 Kbps
Total Players 48 1,152 Kbps 1,920 Kbps 2,880 Kbps 3,840 Kbps 4,608 Kbps 5,760 Kbps 6,720 Kbps 7,680 Kbps 9,600 Kbps 11,520 Kbps
Total Players 56 1,344 Kbps 2,240 Kbps 3,360 Kbps 4,480 Kbps 5,376 Kbps 6,720 Kbps 7,840 Kbps 8,960 Kbps 11,200 Kbps 13,440 Kbps
Total Players 64 1,536 Kbps 2,560 Kbps 3,840 Kbps 5,120 Kbps 6,144 Kbps 7,680 Kbps 8,960 Kbps 10,240 Kbps 12,800 Kbps 15,360 Kbps
Total Players 72 1,728 Kbps 2,880 Kbps 4,320 Kbps 5,760 Kbps 6,912 Kbps 8,640 Kbps 10,080 Kbps 11,520 Kbps 14,400 Kbps 17,280 Kbps
Total Players 80 1,920 Kbps 3,200 Kbps 4,800 Kbps 6,400 Kbps 7,680 Kbps 9,600 Kbps 11,200 Kbps 12,800 Kbps 16,000 Kbps 19,200 Kbps
Total Players 88 2,112 Kbps 3,520 Kbps 5,280 Kbps 7,040 Kbps 8,448 Kbps 10,560 Kbps 12,320 Kbps 14,080 Kbps 17,600 Kbps 21,120 Kbps
Total Players 96 2,304 Kbps 3,840 Kbps 5,760 Kbps 7,680 Kbps 9,216 Kbps 11,520 Kbps 13,440 Kbps 15,360 Kbps 19,200 Kbps 23,040 Kbps
Total Players 100 2,400 Kbps 4,000 Kbps 6,000 Kbps 8,000 Kbps 9,600 Kbps 12,000 Kbps 14,000 Kbps 16,000 Kbps 20,000 Kbps 24,000 Kbps
Total Players 104 2,496 Kbps 4,160 Kbps 6,240 Kbps 8,320 Kbps 9,984 Kbps 12,480 Kbps 14,560 Kbps 16,640 Kbps 20,800 Kbps 24,960 Kbps
Total Players 110 2,640 Kbps 4,400 Kbps 6,600 Kbps 8,800 Kbps 10,560 Kbps 13,200 Kbps 15,400 Kbps 17,600 Kbps 22,000 Kbps 26,400 Kbps
Total Players 112 2,688 Kbps 4,480 Kbps 6,720 Kbps 8,960 Kbps 10,752 Kbps 13,440 Kbps 15,680 Kbps 17,920 Kbps 22,400 Kbps 26,880 Kbps
Total Players 128 3,072 Kbps 5,120 Kbps 7,680 Kbps 10,240 Kbps 12,288 Kbps 15,360 Kbps 17,920 Kbps 20,480 Kbps 25,600 Kbps 30,720 Kbps
Calculation for theoretical total kilobits per second speed = (sv_maxrate * player number * 8 / 1,000)

 

Maximum theoretical required upload bandwidth in megabits per second for a given player number & sv_maxrate
sv_maxrate --> 3,000 5,000 7,500 10,000 12,000 15,000 17,500 20,000 25,000 30,000
Total Players 6 0.144 Mbps 0.240 Mbps 0.360 Mbps 0.480 Mbps 0.576 Mbps 0.720 Mbps 0.840 Mbps 0.960 Mbps 1.200 Mbps 1.440 Mbps
Total Players 8 0.192 Mbps 0.320 Mbps 0.480 Mbps 0.640 Mbps 0.768 Mbps 0.960 Mbps 1.120 Mbps 1.280 Mbps 1.600 Mbps 1.920 Mbps
Total Players 10 0.240 Mbps 0.40 Mbps 0.60 Mbps 0.80 Mbps 0.96 Mbps 1.20 Mbps 1.40 Mbps 1.60 Mbps 2.00 Mbps 2.40 Mbps
Total Players 12 0.288 Mbps 0.48 Mbps 0.72 Mbps 0.96 Mbps 1.15 Mbps 1.44 Mbps 1.68 Mbps 1.92 Mbps 2.40 Mbps 2.88 Mbps
Total Players 14 0.336 Mbps 0.56 Mbps 0.84 Mbps 1.12 Mbps 1.34 Mbps 1.68 Mbps 1.96 Mbps 2.24 Mbps 2.80 Mbps 3.36 Mbps
Total Players 16 0.384 Mbps 0.64 Mbps 0.96 Mbps 1.28 Mbps 1.54 Mbps 1.92 Mbps 2.24 Mbps 2.56 Mbps 3.20 Mbps 3.84 Mbps
Total Players 18 0.432 Mbps 0.72 Mbps 1.08 Mbps 1.44 Mbps 1.73 Mbps 2.16 Mbps 2.52 Mbps 2.88 Mbps 3.60 Mbps 4.32 Mbps
Total Players 20 0.480 Mbps 0.80 Mbps 1.20 Mbps 1.60 Mbps 1.92 Mbps 2.40 Mbps 2.80 Mbps 3.20 Mbps 4.00 Mbps 4.80 Mbps
Total Players 22 0.528 Mbps 0.88 Mbps 1.32 Mbps 1.76 Mbps 2.11 Mbps 2.64 Mbps 3.08 Mbps 3.52 Mbps 4.40 Mbps 5.28 Mbps
Total Players 24 0.576 Mbps 0.96 Mbps 1.44 Mbps 1.92 Mbps 2.30 Mbps 2.88 Mbps 3.36 Mbps 3.84 Mbps 4.80 Mbps 5.76 Mbps
Total Players 28 0.672 Mbps 1.12 Mbps 1.68 Mbps 2.24 Mbps 2.69 Mbps 3.36 Mbps 3.92 Mbps 4.48 Mbps 5.60 Mbps 6.72 Mbps
Total Players 32 0.768 Mbps 1.28 Mbps 1.92 Mbps 2.56 Mbps 3.07 Mbps 3.84 Mbps 4.48 Mbps 5.12 Mbps 6.40 Mbps 7.68 Mbps
Total Players 36 0.864 Mbps 1.44 Mbps 2.16 Mbps 2.88 Mbps 3.46 Mbps 4.32 Mbps 5.04 Mbps 5.76 Mbps 7.20 Mbps 8.64 Mbps
Total Players 40 0.960 Mbps 1.60 Mbps 2.40 Mbps 3.20 Mbps 3.84 Mbps 4.80 Mbps 5.60 Mbps 6.40 Mbps 8.00 Mbps 9.60 Mbps
Total Players 44 1.056 Mbps 1.76 Mbps 2.64 Mbps 3.52 Mbps 4.22 Mbps 5.28 Mbps 6.16 Mbps 7.04 Mbps 8.80 Mbps 10.56 Mbps
Total Players 48 1.152 Mbps 1.92 Mbps 2.88 Mbps 3.84 Mbps 4.61 Mbps 5.76 Mbps 6.72 Mbps 7.68 Mbps 9.60 Mbps 11.52 Mbps
Total Players 56 1.344 Mbps 2.24 Mbps 3.36 Mbps 4.48 Mbps 5.38 Mbps 6.72 Mbps 7.84 Mbps 8.96 Mbps 11.20 Mbps 13.44 Mbps
Total Players 64 1.536 Mbps 2.56 Mbps 3.84 Mbps 5.12 Mbps 6.14 Mbps 7.68 Mbps 8.96 Mbps 10.24 Mbps 12.80 Mbps 15.36 Mbps
Total Players 72 1.728 Mbps 2.88 Mbps 4.32 Mbps 5.76 Mbps 6.91 Mbps 8.64 Mbps 10.08 Mbps 11.52 Mbps 14.40 Mbps 17.28 Mbps
Total Players 80 1.920 Mbps 3.20 Mbps 4.80 Mbps 6.40 Mbps 7.68 Mbps 9.60 Mbps 11.20 Mbps 12.80 Mbps 16.00 Mbps 19.20 Mbps
Total Players 88 2.112 Mbps 3.52 Mbps 5.28 Mbps 7.04 Mbps 8.45 Mbps 10.56 Mbps 12.32 Mbps 14.08 Mbps 17.60 Mbps 21.12 Mbps
Total Players 96 2.304 Mbps 3.84 Mbps 5.76 Mbps 7.68 Mbps 9.22 Mbps 11.52 Mbps 13.44 Mbps 15.36 Mbps 19.20 Mbps 23.04 Mbps
Total Players 100 2.400 Mbps 4.00 Mbps 6.00 Mbps 8.00 Mbps 9.60 Mbps 12.00 Mbps 14.00 Mbps 16.00 Mbps 20.00 Mbps 24.00 Mbps
Total Players 104 2.496 Mbps 4.16 Mbps 6.24 Mbps 8.32 Mbps 9.98 Mbps 12.48 Mbps 14.56 Mbps 16.64 Mbps 20.80 Mbps 24.96 Mbps
Total Players 110 2.640 Mbps 4.40 Mbps 6.60 Mbps 8.80 Mbps 10.56 Mbps 13.20 Mbps 15.40 Mbps 17.60 Mbps 22.00 Mbps 26.40 Mbps
Total Players 112 2.688 Mbps 4.48 Mbps 6.72 Mbps 8.96 Mbps 10.75 Mbps 13.44 Mbps 15.68 Mbps 17.92 Mbps 22.40 Mbps 26.88 Mbps
Total Players 128 3.072 Mbps 5.12 Mbps 7.68 Mbps 10.24 Mbps 12.29 Mbps 15.36 Mbps 17.92 Mbps 20.48 Mbps 25.60 Mbps 30.72 Mbps
Calculation for theoretical total megabits per second speed = (sv_maxrate * player number * 8 / 1,000,000)

 

Maximum theoretical sv_maxrate value you can run for a given upload speed
Upload Bandwidth --> 128 kbps 256 kbps 384 kbps 512 kbps 768 kbps 1,024 kbps 1,544 kbps 2,048 kbps 5,000 kbps 10,000 kbps
Total Players 4 4,000 8,000 12,000 16,000 24,000 30,000 30,000 30,000 30,000 30,000
Total Players 6 2,667 5,333 8,000 10,667 16,000 21,333 30,000 30,000 30,000 30,000
Total Players 8 2,000 4,000 6,000 8,000 12,000 16,000 24,125 30,000 30,000 30,000
Total Players 10 1,600 3,200 4,800 6,400 9,600 12,800 19,300 25,600 30,000 30,000
Total Players 12 1,333 2,667 4,000 5,333 8,000 10,667 16,083 21,333 30,000 30,000
Total Players 14 1,143 2,286 3,429 4,571 6,857 9,143 13,786 18,286 30,000 30,000
Total Players 16 1,000 2,000 3,000 4,000 6,000 8,000 12,063 16,000 30,000 30,000
Total Players 18 889 1,778 2,667 3,556 5,333 7,111 10,722 14,222 30,000 30,000
Total Players 20 800 1,600 2,400 3,200 4,800 6,400 9,650 12,800 30,000 30,000
Total Players 22 727 1,455 2,182 2,909 4,364 5,818 8,773 11,636 28,409 30,000
Total Players 24 667 1,333 2,000 2,667 4,000 5,333 8,042 10,667 26,042 30,000
Total Players 26 615 1,231 1,846 2,462 3,692 4,923 7,423 9,846 24,038 30,000
Total Players 28 571 1,143 1,714 2,286 3,429 4,571 6,893 9,143 22,321 30,000
Total Players 30 533 1,067 1,600 2,133 3,200 4,267 6,433 8,533 20,833 30,000
Total Players 32 500 1,000 1,500 2,000 3,000 4,000 6,031 8,000 19,531 30,000
Calculation for maximum theoretical sv_maxrate = (upload bandwidth in kilobit per second / 8 * 1000 / player number)


b is for bit

B is for Byte

There are 8 bits to a Byte


Networking speeds are always measured in bits and quoted as multiples of 1,000 for Kilo, 1,000,000 for Mega and 1,000,000,000 for Giga.

Do not argue, that's just the way it is!

The really technical reason why data speeds are in multiples of 1,000, or more to the point, do not generally correlate to binary maths measurements, is that data is not sent down the wire; only a signal that represents data is sent down the wire. That signal is measured in Hertz and has nothing to do with binary maths, even though the signal represents binary data. All you so-called networking experts should know this, and if you don't, well, you are not much of a networking expert, are you?

Can you tell that I am sick of this argument yet? :)

30,000 is the maximum sv_maxrate for SRCDS. That is why all the calculations above max out at 30,000.

 

Maximum theoretical updates per second a server must deal with for a given player number
Total players --> 10 Players 12 Players 14 Players 16 Players 18 Players 20 Players 24 Players 28 Players 32 Players 40 Players
33 Updates/Second 330 396 462 528 594 660 792 924 1,056 1,320
50 Updates/Second 500 600 700 800 900 1,000 1,200 1,400 1,600 2,000
66 Updates/Second 660 792 924 1,056 1,188 1,320 1,584 1,848 2,112 2,640
100 Updates/Second 1,000 1,200 1,400 1,600 1,800 2,000 2,400 2,800 3,200 4,000
Calculation for maximum theoretical updates per second = (updates per second * player number)
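As a quick Python sanity check of the table (the function name is illustrative):

```python
def total_updates_per_second(updaterate, players):
    """Maximum theoretical updates per second the server must produce:
    updates per second per player multiplied by the player count."""
    return updaterate * players

print(total_updates_per_second(66, 20))   # 1320, matching the table
print(total_updates_per_second(100, 40))  # 4000
```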


This is why it is generally better to run at a higher fps_max so long as your server can cope.

I would suspect that 2,000 updates per second is the most an SRCDS process is ever realistically going to have to deal with, due to per-player, tickrate and total player number considerations.

It is important to note that in SRCDS, fps = I/O per second.

So the goal of raising your fps_max is to ensure the server's frames per second comfortably exceed the updates per second it must send.

It is also important when designing a gaming network that all your network devices (routers & switches) can sustain the packets/frames per second these setups can generate. That is to say, your network might be fine with 1 SRCDS running on 1 box, BUT run 10 boxes with 6 SRCDS each, all with high rates, and then you may realise you have a problem!

Hardware Spec Example

We run 6 x 16 Player SRCDS on DUAL Xeon 3.0GHz (or better) Servers with 3GB of RAM with bonded dual 1Gb Switched and Load Balanced Network Connections into 1Gbps or 10Gbps BackBones with a 66 tickrate. So when I say good hardware with good Network Connectivity, this is the benchmark I am basing my opinions on.

Suffice it to say we could probably run 4 x 12 player 100 tickrate servers, BUT although the difference between 33 tickrate and 66 tickrate is the difference between night and day, the difference between 66 tickrate and 100 tickrate is negligible on the Internet once all issues are taken into consideration. Also, some maps and player numbers caused intermittent issues at 100 tickrate that a GSP does not really want to have to worry about, especially when you can have 6 x 16 player servers that run excellently at 66 tickrate! :)

Choke

THE MAIN CAUSE OF BAD CHOKE IS A CLIENTS STEAM INTERNET CONNECTION SPEED BEING SET INCORRECTLY

Please ensure that all clients STEAM Internet Connection Speeds are setup correctly. See BAD CHOKE SOLUTION

Choke is quite simply the server wanting to send an update to the client, but cannot.

  • If the server cannot sustain the tickrate, you get choke (you may not actually get choke, but the server will lag very badly)
  • If the server cannot sustain the fps the tickrate requires, you get choke
  • If the server cannot sustain the fps the sv_minupdaterate requires, you get choke
  • If the server cannot sustain the sv_minupdaterate, you get choke
  • If the server connection cannot sustain the bandwidth required to support the updaterate, you get choke
  • If the server connection cannot sustain the bandwidth required to support the sv_minrate, you get choke
  • If the bandwidth demanded by the sv_maxupdaterate exceeds the sv_maxrate, you get choke
  • If the client's connection cannot sustain the bandwidth required to support the cl_updaterate, you get choke
  • If the client's bandwidth demanded by the cl_updaterate exceeds the rate, you get choke

Notes regarding Netgraph Updates per second measurements you need to be aware of:

You won't get higher updates than:

a) The server's tickrate

b) The server's sv_maxupdaterate

c) As fast as your server fps allows (limited by fps_max, hardware and the Kernel Timer Resolution)

d) As fast as your server's sv_maxrate allows

e) As fast as the client/server connection allows

f) As fast as the client's rate allows

g) As fast as the client's cl_updaterate allows

h) As fast as the client's fps allows (limited by fps_max, hardware, and Refresh Rate)

i) Client FPS controls how fast the client can send updates to the server; this is the OUT on net_graph 3

NB: Other than tickrate, choke is caused by any of the above list of things not being large enough. This usually occurs because the sv_maxupdaterate or cl_updaterate is higher than the sv_maxrate or rate allows, since the server will not send more data per second than sv_maxrate or rate allows.
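
As a rough sanity check of that most common cause, here is a hypothetical Python sketch. The function names and the assumption of a fixed average packet size are mine (the 154-byte figure is just the sample value from the net_graph screenshot further below), and protocol overhead is ignored:

```python
def bytes_per_second(updates_per_second, packet_size_bytes):
    """Approximate bandwidth an update stream needs, ignoring UDP/IP overhead."""
    return updates_per_second * packet_size_bytes

def likely_choke(cl_updaterate, rate, sv_maxrate, avg_packet_bytes=154):
    """Flag the usual choke cause: the updaterate demands more bytes/sec
    than the client's rate or the server's sv_maxrate will allow.
    avg_packet_bytes is an assumed average, not a fixed engine value."""
    effective_rate = min(rate, sv_maxrate) if sv_maxrate > 0 else rate
    return bytes_per_second(cl_updaterate, avg_packet_bytes) > effective_rate

# cl_updaterate 100 at ~154 bytes/update needs ~15400 B/s,
# so a rate of 9999 is too small and choke is likely:
print(likely_choke(100, 9999, 30000))   # True
print(likely_choke(100, 20000, 30000))  # False
```

This is only a model of the rule stated above, not how SRCDS measures anything internally; real packet sizes vary with map and player count.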

Fixing Clients Choke

Besides making sure clients follow BAD CHOKE SOLUTION and set their STEAM Internet Connection Settings correctly, the only 2 variables that are really going to help a client's choke problems are RATE and CL_UPDATERATE.

  1. If in doubt about your STEAM Internet Connection Setting, set it 1 higher than what you have.
  2. If you are getting choke and the throughput on the net_graph 3 (see below) is lower than what you expect, then raise your RATE.
  3. If you still get choke, then set CL_UPDATERATE to the server's tickrate and lower CL_UPDATERATE in steps of 5 until choke disappears or is at least minimised. eg. Start at 100 and try 95, 90, 85, 80 etc.
  4. The blame may not always be with you, the client! Try another server, or another server on another Game Service Provider.
  5. Finally, don't buggerise around trying to fix choke problems if you have loss problems. You are just wasting your time and everybody else's if you ask them to help fix your choke problems while you have LOSS!!! Loss is a network problem. (See point 6. of net_graph 3)
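
The step-down procedure in step 3 can be sketched as a one-liner (the function name and the floor of 20 are my own choices, not official limits):

```python
def updaterate_candidates(tickrate, step=5, floor=20):
    """Candidate cl_updaterate values: start at the server tickrate and
    walk down in steps of 5 until choke disappears. floor is an arbitrary
    sanity cutoff of mine."""
    return list(range(tickrate, floor - 1, -step))

print(updaterate_candidates(100))  # counts down from 100 to 20 in steps of 5
```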

Net_graph 3 Explanation

Image:net_graph3_explanation.jpg

1. fps is how many frames per second the client is rendering. This is limited by the client's fps_max setting, or by the monitor's vertical refresh rate if Vertical-Sync is enabled.

2. ping is:

a) netgraph ping is the round trip time for game packets, NOT including any tickrate or updaterate induced calculation delays
b) Scoreboard latency (ping) is one-way trip latency (I have to find out in which direction)
c) rcon status command ping: nobody really knows what this means yet, but I am aiming to find out.

IN is what is being received by you the client, FROM the server.
OUT is what is being sent by you the client, TO the server.

The IN & OUT both have 3 components, starting from left to right:

3. The size of the game packet in bytes being sent or received (not sure if this includes UDP segment + IP packet overhead)

4. The average amount of KiloBytes per second being sent or received, of game data + UDP segment + IP packet overhead

5. The average number of updates being sent or received per second

If you multiply 3. by 5. and then divide by 1000 you will get a close approximation of the value of 4., subject to rounding errors because 4. and 5. are only averages. So using the numbers we see above, for IN we get (154*102.4/1000)=15.7696, with the value shown in the picture above for net_graph 3 being 15.16. Meh, close enough. :)
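
That arithmetic is simple enough to sketch directly, using the sample IN values from the screenshot (the function name is mine):

```python
def approx_kb_per_second(packet_size_bytes, updates_per_second):
    """net_graph 3 field 4 ~= field 3 * field 5 / 1000,
    ignoring UDP/IP overhead and averaging error."""
    return packet_size_bytes * updates_per_second / 1000

# Sample IN values: 154 bytes/packet at 102.4 updates/second
print(approx_kb_per_second(154, 102.4))  # ~15.77 KB/s, vs the 15.16 shown in net_graph
```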


The amount of IN Updates received by the client per second (controlled by cl_updaterate) will in most cases equal the server's tickrate, but will NEVER exceed:

  • The client's cl_updaterate
  • The server's sv_maxupdaterate
  • The server/client's tickrate, which are always the same, as the client will always use the same tickrate as the server it connects to

Whichever is the smallest of those 3 numbers will determine the number you see for updates per second RECEIVED by the client.

If the client's AND/OR server's bandwidth is not sufficient, or is limited by the client's rate or the server's sv_maxrate, then the client will NOT see the IN updates received per second equalling the server's advertised tickrate. This is one example of when the client will see choke.

If the server does not have enough CPU to sustain the server's fps above the server's tickrate, the client will NOT see the IN updates received per second equalling the server's advertised tickrate. This is another example of when the client will see choke.


The amount of OUT Updates sent by your computer per second (controlled by cl_cmdrate) will NEVER exceed:

  • The client's cl_cmdrate
  • The server/client's tickrate
  • The client's frames per second

Whichever is the smallest of those 3 numbers will determine the number you see for updates per second SENT by the client.
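
The two "whichever is smallest" rules above boil down to a pair of min() expressions; a sketch (variable and function names are mine):

```python
def max_in_updates(cl_updaterate, sv_maxupdaterate, tickrate):
    """Updates/sec RECEIVED by the client never exceeds the smallest of these."""
    return min(cl_updaterate, sv_maxupdaterate, tickrate)

def max_out_updates(cl_cmdrate, tickrate, client_fps):
    """Updates/sec SENT by the client never exceeds the smallest of these."""
    return min(cl_cmdrate, tickrate, client_fps)

print(max_in_updates(100, 60, 100))   # 60 -- the default sv_maxupdaterate caps a 100-tick server
print(max_out_updates(100, 100, 49))  # 49 -- a low client fps caps cl_cmdrate
```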

It may look like the OUT updates sent by your computer per second exceed the fps, but in reality they do not. The net_graph 3 readings are not always perfectly in sync, and there are rounding errors in the calculations, because the two per-second counts 4. and 5. shown in net_graph 3 are only averages. There is also the error in net_graph 3 where the average updates received per second magically seem to exceed the server's sv_maxupdaterate, the server's tickrate, and the client's cl_updaterate, which were all set to 100 at the time the screenshot above was taken, despite what is shown in the picture.


6. Loss is packets lost due to network problems, either with your computer's connection to your ISP, your ISP itself, the ISP that is hosting the server, or anywhere in between. If you have loss then you will probably have choke. Do not bother trying to solve choke problems if you have loss problems. Resolving loss problems is done by following standard network troubleshooting procedures. Get a friend to help you, call your ISP, or ask in the Game Server Providers Forum for help. Helping you with network problems is outside the purview of this document, and people who know what they are doing get paid 3 or 4 figure dollar amounts an hour to solve them.

7. Choke is quite simply the server wanting to send you data but being unable to. The reasons for this, though, are not always simple to understand, diagnose or fix. See the Choke explanation above.

8. You bring up your net_graph by typing net_graph 3 into console. You may find it helpful to centre the net_graph using the net_graphpos 2 command, and to raise it a little so it does not overlay your HUD using the net_graphheight 100 command in console. The right net_graphheight value depends on your screen resolution, so you will need to adjust it accordingly, with net_graphheight 100 working well for 1024x768. Increasing the value of net_graphheight raises the net_graph and decreasing it lowers the net_graph.

Here is a little script you can put into your autoexec.cfg for cycling through the various net_graphs that will work in all Valve games:

//netgraph script
alias graph "graph1"
alias graph1 "net_graphpos 2; net_graphheight 100; net_graph 1; alias graph graph2"
alias graph2 "net_graphpos 2; net_graphheight 100; net_graph 2; alias graph graph3"
alias graph3 "net_graphpos 2; net_graphheight 100; net_graph 3; alias graph graph4"
alias graph4 "net_graphpos 1; net_graph 0; alias graph graph1"
bind "r" "graph"

Obviously adjust net_graphheight, and change bind "r" to whatever keyboard key you want to use to cycle through the different net_graphs, to suit your own personal preferences.


Important Information for both Players and Server Administrators

The Server will not send more data and/or updates than the Client is set up to receive, unless the client violates the server minimums, in which case the server's sv_minrate and sv_minupdaterate will be imposed on the client.

The Client cannot make the Server send more data and/or updates than the Server is set up to send.
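
That clamping can be sketched as follows. This is a minimal model of the two statements above, not engine code; the function name is mine, and I am assuming a sv_maxrate of 0 means "no cap" (its default value per the definitions at the top):

```python
def effective_rate(client_rate, sv_minrate, sv_maxrate):
    """Server enforces its minimum, and its maximum when sv_maxrate > 0."""
    r = max(client_rate, sv_minrate)   # client may not go below sv_minrate
    if sv_maxrate > 0:
        r = min(r, sv_maxrate)         # nor above sv_maxrate when one is set
    return r

print(effective_rate(9999, 0, 30000))    # 9999  -- within limits, client rate stands
print(effective_rate(50000, 0, 30000))   # 30000 -- capped by sv_maxrate
print(effective_rate(2500, 5000, 30000)) # 5000  -- raised to sv_minrate
```

The same shape applies to cl_updaterate against sv_minupdaterate/sv_maxupdaterate.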

You should, after reading the entire article above, now know what it is that controls what the Server & Client can and cannot send & receive, how often, and why.

If you do not, you either did not read what I have written, or some part of my explanation was not clear to you. Suffice it to say all the information you need is in here, even if you do not realise it.

For those of you who are still struggling to comprehend the above, try the Noobies Guide to Netgraph & Ping.

Why don't my clients get 100 Updates a Second?

Assuming you have set the tickrate correctly to 100 (in this example) and the server is in fact running at 100 tickrate, the FIVE main causes of clients not receiving 100 Updates per Second are as follows:

  1. If you have a Windows SRCDS and your kernel timer resolution is not increased (ping boosted), you won't see 100 Updates per Second. Most likely it will be stuck at around 64, as that is how many fps SRCDS will run at. This happens a lot, even to me, because the person who updates the Windows installation and reboots the box forgets to make sure srcdsfpsboost.exe is running.
  2. If you have a Linux SRCDS on a default Linux OS installation, you probably will not see 100 Updates per Second. Most likely it will be stuck at around 50, as that is how many fps SRCDS will run at.
  3. If you do not change the sv_maxupdaterate (Default = 60) you will obviously not see 100 Updates per Second.
  4. If you do not have enough CPU for the number of players you are running, and the SRCDS fps keeps falling below the 100 mark, you will not see 100 Updates per Second.
  5. If the clients' cl_updaterate is not set to 100, then obviously they will not see 100 Updates per Second.

There are more reasons than this, but these are the main causes of not seeing as many updates as you might expect.
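
To see which of those caps is actually the binding one, here is a hypothetical diagnostic sketch. It is a simplified model of causes 1 to 5 above (names and structure are mine; bandwidth limits are deliberately left out):

```python
def limiting_factor(tickrate, server_fps, sv_maxupdaterate, cl_updaterate):
    """Return the cap that keeps observed updates/sec below the tickrate.
    A simplified model of the list above; real servers have more causes."""
    caps = {
        "server fps (kernel timer / CPU)": server_fps,
        "sv_maxupdaterate": sv_maxupdaterate,
        "cl_updaterate": cl_updaterate,
        "tickrate": tickrate,
    }
    name = min(caps, key=caps.get)  # smallest cap wins
    return name, caps[name]

# Default sv_maxupdaterate of 60 on a 100-tick server:
print(limiting_factor(100, 250, 60, 100))  # ('sv_maxupdaterate', 60)
```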

This whole guide, if you have read and understood it all, seeks to address how to send and receive as many updates as you want your server to.

Finally, never discount that it is in fact a client side issue with the client computer that is connecting to your server, unless of course there has been a recent Valve SRCDS update, and everything has suddenly inexplicably gone to hell.

For Client Side issues please refer to Fixes for FPS Problems with Counter-Strike and most other games

Solving the mystery of cl_interp_ratio

The client side setting cl_interp (it is not supposed to exist any longer) has been replaced by the client side setting cl_interp_ratio.

cl_interp_ratio simply causes the interpolation delay to be calculated off the client's cl_updaterate (the number of updates the client receives per second; this will not exceed the server's sv_maxupdaterate or the server's tickrate, whichever is smaller).

The outcome for this is as follows:

cl_interp_ratio 1.0 cl_updaterate 30 interpolation = 0.033
cl_interp_ratio 1.0 cl_updaterate 35 interpolation = 0.029
cl_interp_ratio 1.0 cl_updaterate 40 interpolation = 0.025
cl_interp_ratio 1.0 cl_updaterate 50 interpolation = 0.020
cl_interp_ratio 1.0 cl_updaterate 60 interpolation = 0.017
cl_interp_ratio 1.0 cl_updaterate 66 interpolation = 0.015
cl_interp_ratio 1.0 cl_updaterate 75 interpolation = 0.013
cl_interp_ratio 1.0 cl_updaterate 80 interpolation = 0.013
cl_interp_ratio 1.0 cl_updaterate 100 interpolation = 0.010

 

cl_interp_ratio 1.5 cl_updaterate 30 interpolation = 0.050
cl_interp_ratio 1.5 cl_updaterate 35 interpolation = 0.043
cl_interp_ratio 1.5 cl_updaterate 40 interpolation = 0.038
cl_interp_ratio 1.5 cl_updaterate 50 interpolation = 0.030
cl_interp_ratio 1.5 cl_updaterate 60 interpolation = 0.025
cl_interp_ratio 1.5 cl_updaterate 66 interpolation = 0.023
cl_interp_ratio 1.5 cl_updaterate 75 interpolation = 0.020
cl_interp_ratio 1.5 cl_updaterate 80 interpolation = 0.019
cl_interp_ratio 1.5 cl_updaterate 100 interpolation = 0.015

 

cl_interp_ratio 2.0 cl_updaterate 30 interpolation = 0.067
cl_interp_ratio 2.0 cl_updaterate 35 interpolation = 0.057
cl_interp_ratio 2.0 cl_updaterate 40 interpolation = 0.050
cl_interp_ratio 2.0 cl_updaterate 50 interpolation = 0.040
cl_interp_ratio 2.0 cl_updaterate 60 interpolation = 0.033
cl_interp_ratio 2.0 cl_updaterate 66 interpolation = 0.030
cl_interp_ratio 2.0 cl_updaterate 75 interpolation = 0.027
cl_interp_ratio 2.0 cl_updaterate 80 interpolation = 0.025
cl_interp_ratio 2.0 cl_updaterate 100 interpolation = 0.020
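
All three tables come from the same one-line formula, interpolation = cl_interp_ratio / cl_updaterate, rounded to 3 decimal places. A sketch that reproduces them (the half-up rounding via decimal is my choice, to match how the table values were evidently rounded):

```python
from decimal import Decimal, ROUND_HALF_UP

def interpolation(cl_interp_ratio, cl_updaterate):
    """Interpolation delay in seconds = ratio / updaterate,
    rounded half-up to 3 places as in the tables above."""
    d = Decimal(str(cl_interp_ratio)) / Decimal(cl_updaterate)
    return float(d.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP))

# Reproduce each row of the three tables above:
for ratio in (1.0, 1.5, 2.0):
    print([interpolation(ratio, u) for u in (30, 35, 40, 50, 60, 66, 75, 80, 100)])
```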


Clients can use higher cl_interp_ratio values to accommodate packet loss & choke.

There are 2 server side variables that limit how large a client's cl_interp_ratio can be. These are:

"sv_client_min_interp_ratio" = "1" replicated - This can be used to limit the value of cl_interp_ratio for connected clients (only while they are connected). -1 = let clients set cl_interp_ratio to anything; any other value = the minimum value for cl_interp_ratio.

"sv_client_max_interp_ratio" = "2" replicated - This can be used to limit the value of cl_interp_ratio for connected clients (only while they are connected). If sv_client_min_interp_ratio is -1, then this cvar has no effect.

We will be using the defaults for now

The bottom line is this: interp is calculated off your updaterate. Change your updaterate and your interp will change, it is as simple as that!

P.S. This is only the theoretical interpretation of what is supposed to happen. Please do not attempt to blame the author if your reality does not agree with the theoretical explanation.

P.P.S. This new information (18January2006) invalidates small parts of the other sections of this tickrate guide, please take this into consideration. Thank You

Useful Links

In conclusion

I hope this helps clear up some of the mystery of setting up a server with high tickrate, don't worry if it does not, I am still asking questions myself.

Special Thanks to Alfred Reynolds and Martin Otten for their input that helped me put this guide together.

If you are still having problems with SRCDS then please refer to my Troubleshooting_valve_HLDS-SRCDS Guide

Cheers

Whisper

Whisper's Very Basic Competitive Counter-Strike War Strategy Guide

PostScript: Questions Remaining

  • The actual fps a server generates does not equal the fps_max number, and certain intermediate values cause no change to the reported server fps. eg. fps_max 300 produces approximately 250fps, fps_max 400 still produces 250fps, whereas fps_max 600 will raise the reported server fps to 500.
  • There are 3 measures of ping: Scoreboard, net_graph, and status/rcon status, and none of them remotely agree with each other. What does each measure, and what is the relationship between them?
    • This is almost answered.
  • The net_graph 3 reported 'IN' & 'OUT', when calculated as a combined total, does not appear to exceed the sv_maxrate or rate settings, even though the definition of both settings is meant to only control how much data the server can send the client in Bytes per second.
  • The sort order of 'rcon status' screen does not appear to have any order whatsoever.
    • Apparently it is based on player position on the server, which means I now know as much as before I asked the question.
    • The full answer is: Because that is the easiest way to iterate players from the code :) They are sorted in entity order, which doesn't match player id or connected time.
  • Need a precise explanation for all causes of choke and how to resolve each cause.
  • An indication of how much data is actually generated per player for a given tickrate and player count, assuming the clients and server have enough bandwidth and the server has enough CPU capacity.
  • Hardware to tickrate benchmarks to provide people an indicator on how many players they can run for a given tickrate and available bandwidth?
  • Explanations for all causes of Loss according to the clients net_graph 3
  • Effect of -pingboost in command line
    • you mean - effect for srcds servers ? for hlds: 1 = standard, 2=more cpu (better frames/pings), 3=get as much cpu as you can (not recommended when running more than one server on the computer)
      • Is pingboost still used for SRCDS? I thought it was, but I don't deal with Linux SRCDS day in day out so I wouldn't know for 100% sure, but as far as I knew it does still exist for SRCDS.
  • Kernel Timer Explanations for Linux Servers (We are slowly getting there, thanks to the contributors so far)
  • More Answers for Linux Server Admins, and make the article less Windows Centric