How to Set Up Locust for Stress Testing on Linux (Ubuntu and CentOS)

Ensuring that your web server can handle high traffic is crucial. Whether you’re running a blog, an e-commerce site, or a web application, it’s essential to understand how your server reacts under heavy load.

One of the best tools for this purpose is Locust, a scalable user load testing tool written in Python. Locust allows you to simulate millions of simultaneous users and observe how your system behaves under such conditions.

This tutorial will guide you through the process of setting up Locust on two popular Linux distributions: Ubuntu and CentOS. By the end of this guide, you’ll be able to perform stress tests and gain insights into your server’s performance.

Let’s get started.

Step 1. Installing Python

Locust is written in Python, so you’ll need to have Python installed.

On Ubuntu:

sudo apt update
sudo apt install python3 python3-pip

On CentOS:

sudo yum update
sudo yum install python3 python3-pip

Step 2. Installing Locust

With Python installed, you can now install Locust using pip:

pip3 install locust

Step 3. Creating a Locust Test File

Locust operates using a Python script that defines the simulated behavior of users during a test. This script is typically named locustfile.py and serves as the blueprint for your load testing scenarios.

3.1 Setting Up the File

To begin, you’ll want to create the file. If you’re working directly on the server or using a terminal, you can use the nano editor for this:

nano locustfile.py

This command will open the nano text editor with a new or existing file named locustfile.py.

3.2 Understanding the Basic Structure

Before diving into the code, it’s essential to understand the basic structure of a Locust test file:

  • Imports: At the beginning of the file, you’ll import necessary modules and classes from the Locust library.
  • User Class: This class defines the behavior of the simulated users. It inherits from one of the Locust user classes, like HttpUser for web testing.
  • Tasks: Within the user class, you’ll define one or more tasks. These tasks represent the actions that the simulated users will perform.

3.3 Writing the Test Script

In the provided example, the behavior of users visiting a website’s homepage is simulated:

from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(5, 15)

    @task
    def index(self):
        self.client.get("/")
Here’s a breakdown of the script:

  • Imports: The necessary classes and functions are imported from the Locust library.
  • WebsiteUser Class: This class represents a user visiting a website. It inherits from HttpUser, indicating that it’s designed for web testing.
  • wait_time: This variable determines the wait time between tasks. In this case, users will wait for a random duration between 5 to 15 seconds before executing the next task.
  • index Task: This is a simple task where the user sends a GET request to the root path (“/”) of the website, simulating a visit to the homepage.

3.4 Customizing Your Test

The provided example is a basic scenario. Depending on your needs, you can add more tasks, simulate different user behaviors, make POST requests, handle login scenarios, and more. The flexibility of Locust allows you to create complex and realistic load testing scenarios tailored to your application’s specific requirements.


3.5 Saving and Exiting

After writing your test script in nano, press CTRL + O to save the file, followed by Enter. Then, press CTRL + X to exit the editor.

With your locustfile.py ready, you can proceed to run the Locust test, set the number of users, and analyze the results. Remember, the effectiveness of your load test largely depends on the accuracy and realism of your test scenarios, so always ensure your script reflects the actual user behaviors you expect on your application.

Step 4. Running the Test

Once you’ve crafted your locustfile.py to simulate the desired user behavior, the next step is to run the test using Locust. This step is crucial, as it provides insight into how your system behaves under simulated load.

4.1 Initiating the Test

To initiate the test, use the following command:

locust -f locustfile.py

  • locust: This is the command to run Locust.
  • -f locustfile.py: The -f flag specifies the test file to use. In this case, we’re pointing to our previously created locustfile.py.

4.2 Accessing the Locust Web Interface

Upon running the command, Locust will start its built-in web interface. This interface is intuitive and provides a user-friendly way to configure and monitor your tests.

To access the web interface:

  • Open a web browser.
  • Navigate to http://your_server_ip:8089.
  • Replace your_server_ip with the IP address of the server where you’re running Locust.

4.3 Configuring the Test Parameters

Within the Locust web interface:

  • Number of total users to simulate: This field allows you to specify the total number of virtual users that will be simulated during the test. Depending on your server’s capacity and the nature of your test, you can start with a small number and gradually increase it to see how your system behaves under varying loads.
  • Spawn rate: This determines how many users will be added per second until the total number of users is reached. For instance, if you set the spawn rate to 10 and the total number of users to 100, it will take 10 seconds to reach the full load.
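The ramp-up arithmetic described above is easy to sketch (the helper function below is a hypothetical illustration, not part of Locust):

```python
import math

def ramp_up_seconds(total_users: int, spawn_rate: float) -> int:
    """Seconds needed to spawn all users at the given rate."""
    return math.ceil(total_users / spawn_rate)

# 100 users at a spawn rate of 10 users/second -> full load in 10 seconds
print(ramp_up_seconds(100, 10))  # 10
```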

Click on the Start swarming button to initiate the test.

4.4 Monitoring the Test in Real-Time

Once the test starts, the Locust web interface will display real-time statistics, including:

  • Requests per second (RPS): This shows the number of requests being made to your server every second.
  • Response times: This provides insights into how quickly your server responds to requests. It includes metrics like average, median, and max response times.
  • Number of failures: If any of the requests fail, they will be logged here, allowing you to identify potential issues.

4.5 Stopping the Test

You can stop the test anytime by clicking the Stop button on the web interface. It’s advisable to monitor the test and stop it if you notice any severe performance issues or if the server becomes unresponsive.


Step 5. Analyzing the Results

After initiating your stress test with Locust, the real work begins: analyzing the results. Proper analysis will not only give you insights into the current performance of your system but also guide future optimizations and infrastructure decisions.

5.1 Accessing the Web Interface

The Locust web interface is your primary tool for monitoring and analysis. By navigating to http://your_server_ip:8089, you can view a dashboard displaying real-time statistics about the ongoing test.

5.2 Key Metrics to Monitor

Within the web interface, several critical metrics will provide insights into your server’s performance:

  • Total Requests: This shows the cumulative number of requests made since the start of the test. It gives you a sense of the total load placed on your server.
  • Requests per Second: This metric indicates the rate at which requests are being made. A sudden drop in RPS might indicate a bottleneck or performance issue.
  • Response Time: This is broken down into several statistics:
    • Median Response Time: The middle value in the list of all response times. It provides a good general indicator of performance.
    • Average Response Time: The mean of all recorded response times.
    • Min/Max Response Time: The fastest and slowest response times recorded. Large discrepancies between these values can indicate inconsistent server performance.
  • Number of Failures: This metric logs any failed requests. A high failure rate can indicate server issues, application errors, or other problems that need addressing.
  • Users: Displays the current number of simulated users. This helps correlate user load with performance metrics.
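To make these statistics concrete, here is a small pure-Python sketch of how median, average, and min/max are computed from a list of response times; the sample values in milliseconds are invented for illustration:

```python
import statistics

# Hypothetical response times (ms) collected during a test run
response_times = [120, 95, 110, 480, 105, 98, 2300, 101]

print("median:", statistics.median(response_times))          # robust to outliers
print("average:", round(statistics.mean(response_times), 1)) # pulled up by the 2300 ms outlier
print("min/max:", min(response_times), max(response_times))
```

Note how the single slow request drags the average far above the median; this is why the median is often the better general indicator, while a large min/max gap signals inconsistent performance.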

5.3 Interpreting the Data

While raw metrics are valuable, understanding what they mean in the context of your application is crucial:

  • Consistent High Response Times: If your server consistently returns high response times, even under low load, there might be inherent performance issues or misconfigurations that need addressing.
  • Spikes in Response Times: Sudden spikes can indicate specific operations or requests that are resource-intensive and might need optimization.
  • Failed Requests: Investigate the cause of any failed requests. It could be due to server errors, application bugs, or issues with the test configuration.

5.4 Using the Charts

Locust’s web interface also provides graphical representations of the test data. These charts can help visualize:

  • Response time distribution: Shows how response times are spread out, helping identify outliers or consistent performance issues.
  • Number of active users over time: Helps correlate user load with other metrics.
  • Requests per second over time: Visualizes the load on your server throughout the test.

For more in-depth analysis or to share results with your team, you can export the test data. Locust allows you to download the data in CSV format, which can be imported into data analysis tools or spreadsheet applications for further examination.
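As a sketch of such post-processing (the two-row sample below is invented, and the column names mirror the shape of Locust's stats CSV export but should be checked against what your Locust version actually writes):

```python
import csv
import io

# Invented sample in the shape of a Locust stats CSV export
sample = """Name,Request Count,Failure Count,Average Response Time
/,1500,3,112.4
/about,500,0,98.7
"""

total_requests = 0
total_failures = 0
for row in csv.DictReader(io.StringIO(sample)):
    total_requests += int(row["Request Count"])
    total_failures += int(row["Failure Count"])

print(f"requests={total_requests} failures={total_failures}")
print(f"failure rate: {total_failures / total_requests:.2%}")  # 0.15%
```

In practice you would open the exported file (e.g. via `open(...)`) instead of the inline string; the aggregation logic stays the same.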

Analyzing the results of a stress test is a critical step in the performance optimization process. By understanding how your system behaves under load, you can make informed decisions to ensure a smooth and responsive user experience, even during peak traffic periods. For example:

  • Identify bottlenecks and optimize accordingly.
  • Consider scaling your infrastructure if necessary.
  • Address any application-specific issues or bugs that were uncovered.
  • Plan for future tests, adjusting parameters based on what you’ve learned.

Commands Mentioned

  • sudo apt update – Updates the package list on Ubuntu.
  • sudo apt install python3 python3-pip – Installs Python 3 and pip on Ubuntu.
  • sudo yum update – Updates installed packages and repository metadata on CentOS.
  • sudo yum install python3 python3-pip – Installs Python 3 and pip on CentOS.
  • pip3 install locust – Installs Locust using pip.
  • nano locustfile.py – Opens the locustfile.py test file in the nano editor.
  • locust -f locustfile.py – Starts Locust with the specified test file.


FAQ

  1. What is Locust used for?

    Locust is an open-source load testing tool used to simulate multiple users to stress test and understand the performance of web applications and servers under heavy traffic conditions.

  2. Why is stress testing important?

    Stress testing helps identify the limits of a system, ensuring that it can handle peak traffic and revealing potential bottlenecks or weaknesses that need optimization. This ensures a smooth user experience during high traffic periods.

  3. Can I use Locust on other operating systems?

    Yes, Locust is platform-independent and can be used on various operating systems like Windows, macOS, and different Linux distributions, provided Python is installed.

  4. How many users can Locust simulate?

    Locust can simulate millions of simultaneous users, making it suitable for testing large-scale web applications and systems. The actual number depends on the hardware and network limitations of the machine running Locust.

  5. Is Locust free to use?

    Yes, Locust is open-source and free to use. It’s licensed under the MIT license, allowing developers and businesses to use, modify, and distribute it without incurring any costs.


Conclusion

Stress testing is an essential aspect of ensuring that your web applications and servers can handle real-world traffic scenarios. Locust provides a powerful and flexible solution for this purpose. By setting up Locust on your Linux server, be it Ubuntu or CentOS, you equip yourself with a tool that can simulate millions of users, providing invaluable insights into how your system behaves under pressure.

Remember, the key to a successful stress test is not just identifying the maximum load your server can handle but understanding how it behaves as it approaches that limit. This knowledge allows you to make informed decisions about optimizations, scaling, and infrastructure investments.

By continuously monitoring and testing your systems, you ensure a seamless experience for your users, even during peak traffic times.

For a deeper understanding of web servers, you might want to explore popular web server software, or dig into specific servers like Apache, Nginx, and LiteSpeed. If you’re considering different hosting options, check out our articles on dedicated servers, VPS servers, cloud hosting, and shared hosting.
