How to Test a Web Server’s Page Load Times in Linux

The speed and responsiveness of a website play a crucial role in user experience and search engine rankings. Webmasters and administrators often need to ensure that their web servers are optimized for the best performance.

One of the key metrics to monitor is the server’s response time and overall speed.

This tutorial will guide you through the process of testing a web server’s page load times on a Linux system. By the end of this guide, you’ll be able to measure both the server’s response time and the overall speed of your website.


Prerequisites

  • A Linux-based system or server.
  • Command-line access (terminal or SSH).
  • Basic knowledge of Linux commands.

Let’s get started.

Step 1: Install the Tools

Before you can test page load times, you need the right tools installed. This tutorial uses curl, siege, and ab (part of the apache2-utils package). On Debian/Ubuntu systems, install them with:

sudo apt update
sudo apt install curl siege apache2-utils

On RHEL-based distributions, the equivalent packages are available through dnf (ab is provided by httpd-tools).

Step 2: Measure Server Response Time with CURL

To measure the server’s response time, you can use the curl command with the -o /dev/null option to discard the output and the -w option to specify what metrics you want to display.

The curl command is a versatile tool used for transferring data with URLs. It supports a wide range of protocols, including HTTP, HTTPS, FTP, and many more. In the context of testing a web server’s response time, we use curl to make a request to a given website and measure the time it takes to receive a response.

curl -o /dev/null -s -w "Total: %{time_total}s\n" [YOUR_WEBSITE_URL]
  • -o /dev/null: This option tells `curl` to redirect the output (i.e., the webpage content) to `/dev/null`, which is a special file in Unix-like systems that discards all data written to it. In essence, we’re telling `curl` that we’re not interested in the actual content of the webpage; we just want to measure the time it takes to fetch it.
  • -s: Stands for “silent mode.” When using this option, `curl` won’t show progress or error messages. It ensures that the command’s output remains clean, displaying only the information we’re interested in.
  • -w “Total: %{time_total}s\n”: This is where the magic happens. The `-w` option allows us to define a custom output format. In this case, we’re instructing `curl` to display the total time (`%{time_total}`) it took to complete the request. The `s\n` at the end simply appends the unit (seconds) and a newline to make the output more readable.
  • [YOUR_WEBSITE_URL]: This is a placeholder for the actual URL of the website you want to test. You’ll replace this with the web address of the site you’re interested in.

This command will display the total time taken to fetch the webpage, which includes the server’s response time.

Suppose you want to measure the response time of `https://example.com` (a placeholder for your own site). You would replace [YOUR_WEBSITE_URL] with the actual URL:

curl -o /dev/null -s -w "Total: %{time_total}s\n" https://example.com

Upon executing this command, you might see an output like:

Total: 0.423s

This indicates that it took 0.423 seconds (or 423 milliseconds) for the server to fully respond to the request, which includes the server’s processing time and the data transfer time.
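Beyond `%{time_total}`, curl exposes several other `-w` timing variables that break the request down into phases, which helps separate slow DNS or TLS setup from a slow server. A small sketch wrapped in a function (the function name and URL are our own placeholders):

```shell
# timing_breakdown URL — break a request into phases: DNS lookup,
# TCP connect, TLS handshake, time to first byte (TTFB), and total.
# All values are reported by curl in seconds.
timing_breakdown() {
  curl -o /dev/null -s -w "\
DNS lookup:    %{time_namelookup}s
TCP connect:   %{time_connect}s
TLS handshake: %{time_appconnect}s
TTFB:          %{time_starttransfer}s
Total:         %{time_total}s
" "$1"
}

# Example (placeholder URL):
# timing_breakdown "https://example.com"
```

If TTFB is high but the gap between TTFB and the total is small, the bottleneck is typically server-side processing rather than data transfer.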

Step 3: Measure Server Response Time with SIEGE

Siege is a popular load testing and benchmarking tool used to stress-test web servers and web applications. By simulating multiple users accessing a website concurrently, siege helps webmasters and developers understand how their site performs under load, making it invaluable for performance tuning and capacity planning.

siege -c 10 -r 5 [YOUR_WEBSITE_URL]
  • -c 10: The `-c` option specifies the number of concurrent users or “clients” that `siege` will simulate. In this case, `10` means that `siege` will mimic the behavior of 10 users accessing the website simultaneously.
  • -r 5: The `-r` option determines the number of requests each simulated user will make. With a value of `5`, this means every one of the 10 users will make 5 requests to the website, resulting in a total of 50 requests.
  • [YOUR_WEBSITE_URL]: This is a placeholder for the actual URL of the website you want to test. You’ll replace this with the web address of the site you’re interested in.

Suppose you want to load test `https://example.com` (again, a placeholder for your own site). You would replace [YOUR_WEBSITE_URL] with the actual URL:

siege -c 10 -r 5 https://example.com

After running the command, siege will start simulating the 10 users making their 5 requests each. Once the test concludes, siege will display a summary of the results. This summary typically includes:

  • Transactions: Total number of server hits (in this case, 50 if every request succeeds).
  • Availability: Percentage of socket connections successfully made.
  • Elapsed time: Total time the test took.
  • Data transferred: Total amount of data transferred during the test.
  • Response time: Average time taken for the server to respond.
  • Transaction rate: Average number of transactions per second.
  • Concurrency: Average number of simultaneous connections.
  • Successful transactions: Number of requests that were successful.
  • Failed transactions: Number of requests that failed.
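If siege is not available on your system, a rough approximation of its averaged response-time figure can be built from a small shell loop around curl. This sketch samples sequentially rather than concurrently, so it understates what happens under real load; the function name is our own:

```shell
# avg_load_time URL N — take N sequential samples with curl and print
# the average total time. awk handles the floating-point arithmetic.
avg_load_time() {
  url=$1 n=$2 total=0 i=0
  while [ "$i" -lt "$n" ]; do
    t=$(curl -o /dev/null -s -w "%{time_total}" "$url") || return 1
    total=$(awk -v a="$total" -v b="$t" 'BEGIN { printf "%.6f", a + b }')
    i=$((i + 1))
  done
  awk -v s="$total" -v n="$n" \
    'BEGIN { printf "Average over %d requests: %.3fs\n", n, s / n }'
}

# Example (placeholder URL):
# avg_load_time "https://example.com" 5
```

Averaging several samples smooths out one-off network jitter, which is why a single curl measurement can be misleading.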

Step 4: Measure Server Response Time with APACHEBENCH

Apachebench, often abbreviated as ab, is a benchmarking tool provided by the Apache HTTP server project. It’s designed to test the performance of your Apache HTTP server, but it can also be used to benchmark any HTTP server. apachebench helps in gauging the performance of a web server by simulating multiple users sending requests to a target server.

ab -n 50 -c 10 [YOUR_WEBSITE_URL]
  • -n 50: The `-n` option specifies the total number of requests that `apachebench` will perform during the test. In this example, `apachebench` will make a total of 50 requests to the target URL.
  • -c 10: The `-c` option determines the number of multiple requests to perform at a time, essentially simulating the number of concurrent users. With a value of `10`, this means `apachebench` will send 10 requests concurrently until it reaches the total of 50 requests.
  • [YOUR_WEBSITE_URL]: This is a placeholder for the actual URL of the website you want to test. You’ll replace this with the web address of the site you’re interested in.

Suppose you want to load test `https://example.com/` (a placeholder for your own site; note that ab requires the URL to include a path, so keep the trailing slash). You would replace [YOUR_WEBSITE_URL] with the actual URL:

ab -n 50 -c 10 https://example.com/

After executing the command, apachebench will start sending requests to the specified URL. Once the test is complete, apachebench will provide a detailed report that includes:

  • Server Software: The software used by the target server.
  • Server Hostname: The domain name of the target server.
  • Server Port: The port number used by the target server.
  • Document Path: The specific path or endpoint tested.
  • Document Length: The size of the response from the server.
  • Concurrency Level: The number of multiple requests performed at a time.
  • Time taken for tests: The total time taken to complete all requests.
  • Complete requests: The total number of successful requests.
  • Failed requests: The number of requests that failed.
  • Total transferred: Total amount of data transferred during the test.
  • Requests per second: The average number of requests per second.
  • Time per request: The average time taken per request.
  • Transfer rate: The rate of data transfer during the test.
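If you redirect the report to a file (for example `ab -n 50 -c 10 https://example.com/ > ab_report.txt`), the headline figures can be pulled back out with grep for comparison across runs. A small sketch, assuming the standard ab report layout (the function name and filename are our own):

```shell
# ab_summary FILE — print the headline metrics from a saved ab report.
ab_summary() {
  grep -E "^(Requests per second|Time per request|Failed requests|Transfer rate):" "$1"
}

# Example (placeholder URL and filename):
# ab -n 50 -c 10 https://example.com/ > ab_report.txt
# ab_summary ab_report.txt
```

Saving one report before and one after a configuration change makes it easy to diff just these lines.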

Step 5: Consider External Tools

While the command-line offers a direct and efficient way to quickly gauge server performance, it often provides a limited scope of the overall user experience and website optimization. For a holistic view of your website’s performance, several external web-based tools can offer in-depth insights and actionable recommendations.

Here are three of the most renowned tools in this domain:

1. Google PageSpeed Insights

Google PageSpeed Insights is a free tool from Google that analyzes the content of a web page and then generates suggestions to make that page faster. It provides performance scores for both mobile and desktop devices.

Key Features:

  • Performance Metrics: Measures metrics like First Contentful Paint (FCP), Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), Time to First Byte (TTFB), and Interaction to Next Paint (INP).
  • Optimization Recommendations: Offers actionable recommendations such as image optimization, minification of CSS/JS, and server response times.
  • Core Web Vitals: Highlights Google’s Core Web Vitals, the set of metrics covering loading speed, responsiveness, and visual stability.

2. WebPageTest

WebPageTest is an open-source project that is primarily used for measuring and analyzing the performance of web pages. It allows users to run tests from multiple locations around the globe using real browsers at real consumer connection speeds.

Key Features:

  • Multiple Test Locations: Choose from various global test locations to understand performance from different geographical areas.
  • Advanced Visualizations: Provides waterfall charts, connection view, and more to visualize what’s happening behind the scenes.
  • Performance Breakdown: Offers insights into content type, domain, and other requests to pinpoint performance bottlenecks.

3. Pingdom

Pingdom is a premium tool that offers website monitoring and performance insights. It’s known for its user-friendly interface and detailed reports.

Key Features:

  • Uptime Monitoring: Monitors website uptime and notifies users of any downtime.
  • Page Load Breakdown: Analyzes which components of a webpage (like images, scripts, or CSS) take the longest to load.
  • Performance Grades: Provides grades based on best practices in website performance, helping users understand areas of improvement.

While command-line tools offer a quick snapshot of server performance, leveraging the power of comprehensive web-based tools can provide a 360-degree view of your website’s performance. By understanding various metrics, from server response times to user-centric performance metrics, webmasters can make informed decisions to optimize their sites, ensuring a seamless user experience and better search engine rankings.

Commands Mentioned

  • curl – Used to fetch web content and measure server response time.
  • siege – A load testing and benchmarking tool to analyze overall page load time.
  • ab -n 50 -c 10 [YOUR_WEBSITE_URL] – This command instructs `apachebench` to send a total of 50 requests to the specified website URL, with 10 requests being sent concurrently. It’s used to benchmark the performance of an HTTP server by simulating multiple users sending requests simultaneously.


Frequently Asked Questions

  1. What is server response time?

    Server response time, often referred to as Time to First Byte (TTFB), is the time it takes for a server to send the first byte of the response to a user’s browser after receiving a request. It’s an essential metric as it indicates the server’s efficiency and the initial delay before the page starts loading.

  2. Why is it important to test page load times?

    Testing page load times is crucial for user experience and SEO. Slow-loading pages can deter users, leading to higher bounce rates. Additionally, search engines like Google consider page speed as a ranking factor, meaning faster sites are likely to rank higher in search results.

  3. How often should I test my server’s performance?

    Regularly testing your server’s performance is recommended, especially after making significant changes to your website or server configuration. Monthly checks or after major updates can help ensure consistent performance and user experience.

  4. Are there other factors that can affect page load times?

    Yes, several factors can influence page load times, including server location, web hosting type (like dedicated server, VPS server, cloud hosting, or shared hosting), website optimization, the size and type of content, and more.

  5. Can CDN improve my page load times?

    Yes, a Content Delivery Network (CDN) can significantly improve page load times. CDNs distribute your website’s content across multiple servers worldwide, ensuring that users access the site from a server closest to them, reducing latency and speeding up access.


Conclusion

Testing a web server’s page load times is essential for ensuring optimal user experience and improving search engine rankings. By using tools like curl, siege, and ab on a Linux system, webmasters can easily measure server response times and overall website speed.

  • The curl command provides a straightforward way to measure a website’s response time from the command line. By understanding and utilizing its various options, webmasters and administrators can gain valuable insights into server performance and identify potential areas for optimization.
  • The siege tool offers a powerful way to simulate real-world traffic to a website, helping administrators and developers gauge performance under various load conditions. By understanding the command’s options and interpreting the results, one can make informed decisions about server optimization, infrastructure scaling, and overall website performance improvements.
  • Apachebench is a powerful tool for assessing the performance of web servers. By simulating real-world traffic scenarios, it provides valuable insights into how a server responds under different load conditions. This information can be instrumental in optimizing server configurations, making infrastructure decisions, and ensuring a smooth user experience.

Regular monitoring and optimization can lead to better performance, satisfied users, and higher search engine rankings. Always remember to consider the type of web server software and hosting type you’re using, as these can play a significant role in your website’s performance. For those who are keen on understanding different web servers, you can explore more about web servers, including popular ones like Apache, Nginx, and LiteSpeed.
