Introduction
Humphrey is a very fast, robust and flexible HTTP/1.1 web server. It provides an executable web server, similar to Nginx, a Rust crate for building your own web applications, first-party WebSocket support, and a simple authentication system. In this guide, you'll get a strong understanding of how to use and build upon all of these components.
The executable web server component of the project is often referred to as "Humphrey Server", and you can learn how to install, configure and run it here. It also supports plugins, which provide virtually limitless extensibility of the server; creating your own plugin is also covered in this guide.
The underlying Rust crate is often referred to as "Humphrey Core", and provides a framework for building web applications, with the ability to act as both a client and a server. You can learn how to set up and build your own web application using Humphrey Core here.
The WebSocket functionality is provided by a separate crate, often referred to as "Humphrey WebSocket", which integrates with the core crate for ease of development. You can learn how to use Humphrey WebSocket in your own application here.
Humphrey also provides a simple JSON library called "Humphrey JSON". It allows for the manipulation of JSON data in a variety of ways. You can learn how to use Humphrey JSON here.
The simple authentication system is also provided by a separate crate, often referred to as "Humphrey Auth", which extends the core crate with authentication-related features. You can learn how to use Humphrey Auth in your own application here.
Quick Reference
- Setting up Humphrey Server
- A basic web application using Humphrey Core
- Using WebSocket with Humphrey Core
- Using Humphrey as a Client
- Using PHP with Humphrey Server
- Creating a Humphrey Server plugin
- Using Humphrey JSON
Latest Versions
This book is up-to-date with the following crate versions.
Crate | Version |
---|---|
Humphrey Core | 0.7.0 |
Humphrey Server | 0.6.0 |
Humphrey WebSocket | 0.4.0 |
Humphrey JSON | 0.2.1 |
Humphrey Auth | 0.1.3 |
Humphrey Core
Humphrey Core is a high-performance web server crate which allows you to develop web applications in Rust. With no dependencies by default, it compiles quickly, produces small binaries, and is highly resource-efficient.
This section of the guide will cover the following topics:
- Creating and running a basic Humphrey Core web application
- Handling state between requests
- Integrating static and dynamic content
- Serving applications over HTTPS
- Monitoring and logging internal events
- Using the Tokio async runtime with Humphrey
- Using Humphrey Core as a client
It's recommended that you have basic familiarity with Rust before reading this section, as only Humphrey-specific concepts are explained, and knowledge of the Rust language is required to understand many of them.
Getting Started
This chapter will walk you through the steps to get started with Humphrey Core. It assumes that you already have Rust and Cargo installed, but installation steps for these can be found in the Rust book.
Creating a New Project
A Humphrey Core web application is a Rust binary crate, so to begin we'll need to create a new project using Cargo.
$ cargo new my-app
Next, we need to add the humphrey
crate as a dependency of our project, which can be done by editing the Cargo.toml
file within the project directory as follows. Please note that it is not good practice to use the *
version number for a real project, as a new version of Humphrey with breaking changes could cause your application to stop working properly. Instead, you should check the latest Humphrey version on crates.io and use that version number.
[dependencies]
humphrey = "*"
With that, our project will compile and run, but at the moment it doesn't do anything.
Creating a Humphrey App
We can now initialise a new Humphrey application in the main.rs
file of the src
directory.
use humphrey::http::{Response, StatusCode};
use humphrey::App;
fn main() {
let app: App = App::new().with_stateless_route("/", |_| {
Response::new(StatusCode::OK, "Hello, Humphrey!")
});
app.run("0.0.0.0:80").unwrap();
}
If we now run cargo run
, our application will successfully compile and start the server, which you can access at http://localhost. You should see the text "Hello, Humphrey!" in your browser. If so, congratulations - you've successfully created a Humphrey web application! Let's go into a little more detail about what this code actually does.
First, we create a new App
instance, which is the core of every Humphrey application. We need to specify the type App
as well, since the app is generic over a state type, which we'll cover in the Using State chapter. This shouldn't be necessary since the default state type is the Rust unit type ()
, but it must be done due to current technical limitations of Rust (see rust-lang/rust issue #36887).
We then call with_stateless_route
on the App
instance, passing in the path of the route and a closure that will be called when the route is matched. The closure takes one argument, the request, which we ignore with the _
. It returns a Response
object with the success status code (200), containing the text "Hello, Humphrey!".
Finally, we call run
on the App
instance, passing in the address and port to listen on. This will start the server and block the main thread until the server is shut down.
Adding Multiple Routes
At the moment, our app only shows a message for the root path, but we can add more routes by calling with_stateless_route
again with different handlers. In most cases, these would not be passed in as closures, but rather as functions that return a Response
object. Let's add another route called /api/time
that shows the current time.
We'll start by creating a handler function and adding it to the app by calling with_stateless_route
again. We'll also move the root handler to a function to improve readability, which means we need to import the Request type for the handler signatures.
use humphrey::http::{Request, Response, StatusCode};
use humphrey::App;
fn main() {
let app: App = App::new()
.with_stateless_route("/", root_handler)
.with_stateless_route("/api/time", time_handler);
app.run("0.0.0.0:80").unwrap();
}
fn root_handler(_: Request) -> Response {
Response::new(StatusCode::OK, "Hello, Humphrey!")
}
fn time_handler(_: Request) -> Response {
// todo: get the current time
}
This code won't compile, as the time_handler
function does not yet return a Response
object. Let's use the built-in DateTime
type from humphrey::http::date
to get the current time in the HTTP date format, which looks something like Thu, 01 Jan 1970 00:00:00 GMT.
use humphrey::http::date::DateTime;
// --snip--
fn time_handler(_: Request) -> Response {
let time = DateTime::now();
let time_http = time.to_string();
Response::new(StatusCode::OK, time_http)
}
If we now run cargo run
again, and go to http://localhost/api/time in the browser, we should see the current time in the format described earlier.
Wildcard Routes
Humphrey supports the *
wildcard character in route paths, so one handler can handle many paths. Let's add another route called /api/greeting/*
which will greet the user with the name they provide. Again, we'll need to create another route handler function and add it to the app:
// --snip--
fn main() {
let app: App = App::new()
.with_stateless_route("/", root_handler)
.with_stateless_route("/api/time", time_handler);
.with_stateless_route("/api/greeting/*", greeting_handler);
app.run("0.0.0.0:80").unwrap();
}
// --snip--
fn greeting_handler(request: Request) -> Response {
// todo: greet the user
}
In our newly-created greeting handler, we want to extract the name from the path and return a response depending on the name provided. We can do that with the Rust standard library's strip_prefix
function for strings. You'll notice that we haven't ignored the request argument as we did previously, and this is so we can access the path of the request.
// --snip--
fn greeting_handler(request: Request) -> Response {
let name = request.uri.strip_prefix("/api/greeting/").unwrap();
let greeting = format!("Hello, {}!", name);
Response::new(StatusCode::OK, greeting)
}
If we now visit http://localhost/api/greeting/Humphrey in the browser, we should see the text "Hello, Humphrey!". You can replace the name Humphrey with your own name or any other name you want, and you should see the greeting change accordingly.
Conclusion
As you can see, Humphrey provides an intuitive and easy-to-use API to create web applications. Next, let's look at the Using State chapter, which will cover how to safely share state between routes and requests.
Using State
This chapter covers the basics of sharing state between requests to a Humphrey web application. In this chapter, we will demonstrate how to use the App
's State
type parameter to share state between requests by building a simple application which displays a button and how many times it has been clicked.
Basic knowledge of JavaScript is useful to fully understand this chapter.
Creating a Stateful App
Once you've created an empty Rust project with the Humphrey dependency installed, as described in the previous chapter, you'll need to define a struct to hold the state of your application.
use humphrey::handlers::serve_dir;
use humphrey::http::{Request, Response, StatusCode};
use humphrey::App;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
#[derive(Default)]
struct AppState {
button_presses: AtomicUsize,
}
fn main() {}
You'll notice that we derive the trait Default
on our state struct. This is not required, but it means we don't need to explicitly define the initial state of the application in our main
function, as it will be set to zero button presses.
We can now create our App
instance in the main function with three routes, one API endpoint to get the current number of button presses, one which increments this number by one, and a catch-all route at the bottom which serves the static
directory if none of the other endpoints are matched. You'll see that we use the serve_dir
built-in handler with the with_path_aware_route
method, which you can read about further in the next section. We also use the with_route
method instead of with_stateless_route
, since we want access to the app's state.
// --snip--
fn main() {
let app: App<AppState> = App::new()
.with_route("/api/getPresses", get_presses)
.with_route("/api/incrementPresses", increment_presses)
.with_path_aware_route("/*", serve_dir("./static"));
app.run("0.0.0.0:80").unwrap();
}
Defining the API Endpoints
We now need to create the two API endpoints which get and increment the button presses. If you are familiar with Rust, you'll know that the AtomicUsize
type makes it very easy to share and increment a number between threads.
Our /api/getPresses
endpoint just needs to load the value of the button_presses
field from the state and return it as the response body, as follows. We use Ordering::SeqCst
to ensure that the operations are sequentially consistent, which means that a subsequent call to the API will never return a value lower than the one returned by the previous call.
// --snip--
fn get_presses(_: Request, state: Arc<AppState>) -> Response {
let presses = state.button_presses.load(Ordering::SeqCst);
Response::new(StatusCode::OK, presses.to_string())
}
Creating the /api/incrementPresses
endpoint is similar, but we need to increment the value of the button_presses
field instead of returning it. This is done as follows.
// --snip--
fn increment_presses(_: Request, state: Arc<AppState>) -> Response {
state.button_presses.fetch_add(1, Ordering::SeqCst);
Response::new(StatusCode::OK, b"OK")
}
Creating a Simple Front-End
We now need to create a basic HTML page with a button and some text to interface with our Humphrey application. We can do this with some simple HTML and JavaScript as follows.
index.html
<html>
<head>
<title>Humphrey Stateful Tutorial</title>
</head>
<body>
<h1>Button has been pressed <span id="presses">x</span> times</h1>
<button onclick="incrementPresses()">Press Me</button>
<script src="index.js"></script>
</body>
</html>
index.js
function updatePresses() {
fetch("/api/getPresses").then(res => res.text())
.then(text => {
document.querySelector("#presses").innerHTML = text;
});
}
function incrementPresses() {
fetch("/api/incrementPresses").then(updatePresses);
}
window.onload = updatePresses;
This code simply fetches the current number of button presses from the API and updates the page accordingly. It also shows a button which increments the number of button presses by one.
Running our App
When we run cargo run
in the terminal and visit http://localhost in the browser, we'll see the text "Button has been pressed 0 times" and a button which increments the number of button presses by one. If you press the button, you'll see the number increase. You can refresh the page or visit from a different device, and the number will be consistent.
Conclusion
In this chapter, we've learnt how to create a stateful application with Humphrey. In the next chapter, Serving Static Content, we'll discuss the ways Humphrey provides to serve static content.
Static Content
This chapter covers how to serve static content with Humphrey. Serving static content is a vital part of many web applications, and Humphrey provides a simple way to do this.
The handlers
module provides a number of useful built-in functions to handle requests for static content.
Serving a File
The serve_file
handler is the simplest way to serve a single file.
use humphrey::handlers::serve_file;
use humphrey::App;
fn main() {
let app: App<()> = App::new()
.with_route("/foo", serve_file("./bar.html"));
app.run("0.0.0.0:80").unwrap();
}
Serving a Directory
The serve_dir
handler allows you to serve a directory of files. The path you specify should be relative to the current directory.
This handler must be applied using the with_path_aware_route
method, since the path is used to determine how to locate the requested content. For example, a request to /static/foo.html
with a handler that looks for /static/*
should find the file at /static/foo.html
, not /static/static/foo.html
.
use humphrey::handlers::serve_dir;
use humphrey::App;
fn main() {
let app: App<()> = App::new()
.with_path_aware_route("/static/*", serve_dir("./static"));
app.run("0.0.0.0:80").unwrap();
}
Redirecting Requests
The redirect
handler allows you to redirect requests to a different path, whether it be on the same domain or a different domain.
use humphrey::handlers::redirect;
use humphrey::App;
fn main() {
let app: App<()> = App::new()
.with_route("/foo", redirect("https://www.example.com"));
app.run("0.0.0.0:80").unwrap();
}
Conclusion
In this section, we've learnt how to use Humphrey's built-in handlers to serve static content from a Humphrey web application. In the next section, we'll explore how to use HTTPS (TLS) with Humphrey using the rustls
crate.
Using HTTPS
This chapter explains how we can access our Humphrey application using HTTPS, improving the security of the application and allowing our client-side code access to more advanced browser features.
Note: While Humphrey's core features are dependency-free, using TLS requires the rustls
crate to ensure that the cryptography used is secure.
Enabling the TLS Feature
To use HTTPS with Humphrey, you must first enable the tls
feature in your Cargo.toml
file:
[dependencies]
humphrey = { version = "*", features = ["tls"] }
Setting up the TLS Certificate
TLS requires a certificate and a private key, which must be supplied to the Humphrey app. In production, these would be generated by a certificate authority like Let's Encrypt, but when developing an HTTPS application, it's often easier to use a self-signed certificate.
The mkcert
command-line tool can be used to generate a trusted certificate for local development.
Installing mkcert
mkcert
can be installed as follows (or downloaded from the aforementioned link):
Windows:
$ choco install mkcert
MacOS:
$ brew install mkcert
Linux:
$ sudo apt install libnss3-tools
$ brew install mkcert
Generating the Certificate
Once installed, the mkcert
certificate authority must be trusted by the operating system, which can be done by running the following command.
$ mkcert -install
Finally, to generate a certificate for local development, run this command, which will create two files, localhost.pem
and localhost-key.pem
.
$ mkcert localhost
Using the Certificate with Humphrey
When creating your Humphrey application, the certificate and key must be provided to the App
using the with_cert
method, which is only available when the tls
feature is enabled. A very simple example using TLS is shown below. Notice that we use run_tls
instead of run
to start the application.
use humphrey::http::{Request, Response, StatusCode};
use humphrey::App;
use std::error::Error;
fn main() -> Result<(), Box<dyn Error>> {
let app: App<()> = App::new()
.with_stateless_route("/", home)
.with_cert("path/to/localhost.pem", "path/to/localhost-key.pem");
app.run_tls("0.0.0.0:443")?;
Ok(())
}
fn home(_: Request) -> Response {
Response::new(
StatusCode::OK,
"<html><body><h1>This is served over HTTPS!</h1></body></html>",
)
}
Forcing HTTPS
By default, when you call app.run_tls("0.0.0.0:443")
, the application will only accept connections on the HTTPS port (443). Typically, web applications will automatically redirect requests from the HTTP port (80) to the HTTPS endpoint. To enable this in Humphrey, you can use the with_forced_https
method on the App
struct, as follows:
// --snip--
let app: App<()> = App::new()
.with_stateless_route("/", home)
.with_cert("path/to/localhost.pem", "path/to/localhost-key.pem")
.with_forced_https(true);
// --snip--
This starts a background thread which simply redirects HTTP requests to the corresponding HTTPS URL.
Conclusion
In this section, we've covered how to use the TLS feature of Humphrey, and how to use it to serve HTTPS applications. Next, we'll learn how to monitor internal events in the application.
Monitoring Events
In this chapter, we'll discuss how to monitor internal events in the application. It can often be useful to log events such as requests and errors, which can be useful for debugging and for general performance analysis. To learn about this, we're going to build a simple logger which logs all events in a "Hello, world!" application to both the console, and specific events to a file.
Setting up our Application
We'll start with an extremely simple application which simply responds with "Hello, world!" to every request. After creating a new crate and adding Humphrey as a dependency, as outlined in Getting Started, add the following code to the main file.
use humphrey::http::{Response, StatusCode};
use humphrey::App;
fn main() {
let app: App =
App::new().with_stateless_route("/*", |_| Response::new(StatusCode::OK, "Hello, world!"));
app.run("0.0.0.0:80").unwrap();
}
This will just return "Hello, world!" to every request.
Logging to the Console
Monitoring events in Humphrey is done over a channel of Event
s. An Event
is a simple struct which contains the event's type, as well as an optional address of the client and an optional string with additional information. We'll cover this in more detail later, but for now we just need to know that it implements the Display
trait so that we can print it.
Monitoring is configured using the MonitorConfig
struct, and events can be subscribed to using its with_subscription_to
function. You can subscribe to a single event or an event level (such as warning), since both types implement the ToEventMask
trait.
Let's create a channel and supply it to the application using the with_monitor
method, as well as subscribing to all events using the debug event level. We'll also create a thread to listen on the channel and print all events to the console.
use humphrey::http::{Response, StatusCode};
use humphrey::monitor::event::EventLevel;
use humphrey::monitor::MonitorConfig;
use humphrey::App;
use std::sync::mpsc::channel;
use std::thread::spawn;
fn main() {
let (tx, rx) = channel();
let app: App = App::new()
.with_monitor(MonitorConfig::new(tx).with_subscription_to(EventLevel::Debug))
.with_stateless_route("/*", |_| Response::new(StatusCode::OK, "Hello, world!"));
spawn(move || {
for e in rx {
println!("{}", e);
}
});
app.run("0.0.0.0:80").unwrap();
}
If we run this application and visit it in the browser, you'll see a lot of debug output in the console. You have successfully logged some internal events!
Filtering Events
Events should always be filtered at the MonitorConfig
level if possible to reduce the traffic on the channel. However, if, for example, you want to print everything to the console but only write warnings and errors to a file, you can filter events using event masks.
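Filtering at the MonitorConfig level just means subscribing to a narrower event level when creating the monitor. For example, a minimal variation on the earlier setup (assuming the warning level covers both warnings and errors) would be:
// --snip--
let app: App = App::new()
    .with_monitor(MonitorConfig::new(tx).with_subscription_to(EventLevel::Warning))
    .with_stateless_route("/*", |_| Response::new(StatusCode::OK, "Hello, world!"));
// --snip--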
An event mask is simply a u32
which is a bit mask of the events you want to subscribe to. As an example, EventLevel::Debug
is simply 0xFFFFFFFF
, which means that all events are subscribed to. The individual event EventType::RequestServedSuccess
is 0x00000040
. You probably don't need to know the bit mask values, but they are useful for understanding how event filtering works.
Let's move our event listening thread to a new function, and temporarily filter out all events except warnings and errors.
use std::sync::mpsc::{channel, Receiver};
use humphrey::monitor::event::{Event, EventLevel};
// --snip--
spawn(move || monitor_thread(rx));
// --snip--
fn monitor_thread(rx: Receiver<Event>) {
for e in rx {
println!("{}", e);
}
}
Now that our thread is in a new function, we can add the following code to filter out all events except warnings and errors.
fn monitor_thread(rx: Receiver<Event>) {
for e in rx {
if e.kind as u32 & EventLevel::Warning as u32 != 0 {
println!("{}", e);
}
}
}
If you run the program now, you'll probably see no output in the console, as none of the events being received are warnings or errors. If you use a tool like Netcat to send an invalid request to the application, you'll see an error message.
Writing Events to a File
Let's add a little bit more code to the monitor thread to write all events to a file.
use std::fs::File;
use std::io::Write;
// --snip--
fn monitor_thread(rx: Receiver<Event>) {
let mut file = File::create("monitor.log").unwrap();
for e in rx {
if e.kind as u32 & EventLevel::Warning as u32 != 0 {
file.write_all(format!("{}\n", e).as_bytes()).unwrap();
}
println!("{}", e);
}
}
If we run the code again, we'll see that all of our events are again logged to the console, but warnings and errors are additionally logged to the file. Since our program isn't producing any errors at the moment, let's intentionally cause some by getting a thread to panic!
Monitoring Thread Panics
Threads should never panic. However, in any application there's always a chance of a bug causing a thread to panic, and this should not cause the whole program to stop working. Humphrey automatically detects thread panics in the background and quietly restarts the affected thread, while letting Rust log the panic to the console in the typical way.
If your monitor is subscribed to the EventType::ThreadPanic
event, whether directly or through any event level, Humphrey will take over the panic logging from Rust, and will send the panic to the monitor channel instead of printing it to the standard error stream. It's important that this is logged in some way, as you should never miss a panic!
Let's cause a panic on the route "/panic" by adding another simple handler.
// --snip--
let app: App = App::new()
.with_monitor(MonitorConfig::new(tx).with_subscription_to(EventLevel::Debug))
.with_stateless_route("/panic", |_| panic!("this is a panic"))
.with_stateless_route("/*", |_| Response::new(StatusCode::OK, "Hello, world!"));
// --snip--
If you visit the panic route in your browser now, you won't get a response from the server because the thread has panicked, but you'll see the panic logged to both the console and the file, along with a console message indicating that the thread was restarted.
Conclusion
In conclusion, Humphrey provides a flexible way for logging internal events. Next, we'll look at how to use Humphrey with the Tokio async runtime.
Tokio
This chapter covers how to use Humphrey with the Tokio async runtime. Currently, only Humphrey Core supports integration with Tokio.
Enabling Tokio
To enable Tokio support, enable the tokio
feature of the humphrey
crate in your Cargo.toml
file. You'll also need Tokio as a direct dependency of your project.
[dependencies]
humphrey = { version = "0.7", features = ["tokio"] }
tokio = { version = "1", features = ["full"] }
Using Tokio
With the Tokio feature enabled, everything you would expect to be asynchronous is now asynchronous. That's it!
Using as a Client
Humphrey Core also provides client functionality, which allows dependent programs to send HTTP requests. It optionally supports TLS with the tls
feature, the setup for which was discussed in the Using HTTPS section. This section assumes the TLS feature is enabled.
Sending a Simple Request
A simple request can be sent by creating a Client
object, creating and sending a GET request, then parsing the response from the body into a string. This basic example shows how to use the Ipify API to get your public IP address.
use humphrey::Client;
use std::error::Error;
fn main() -> Result<(), Box<dyn Error>> {
let mut client = Client::new();
let response = client.get("https://api.ipify.org")?.send()?;
println!("IP address: {}", response.text().ok_or("Invalid text")?);
Ok(())
}
Adding Headers and Following Redirects
Headers can be added to the request by using the with_header
method on the ClientRequest
struct. For this example, we'll use the User-Agent
header to identify the client. Redirects can be followed by using with_redirects
and specifying to follow redirects.
use humphrey::http::headers::HeaderType;
use humphrey::Client;
use std::error::Error;
fn main() -> Result<(), Box<dyn Error>> {
let mut client = Client::new();
let response = client
.get("https://api.ipify.org")?
.with_redirects(true)
.with_header(HeaderType::UserAgent, "HumphreyExample/1.0")
.send()?;
println!("IP address: {}", response.text().ok_or("Invalid text")?);
Ok(())
}
Using HTTPS
You'll notice that the previous examples have requested the HTTPS endpoint for the API. If we were to run these examples without the TLS feature enabled, an error would be encountered. Furthermore, creating the Client
object with TLS enabled is an expensive operation since certificates must be loaded from the operating system, so it is advisable to create one client per application instead of one per request.
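As a brief illustration of this advice, the following sketch (using only the calls shown above; the second URL is just a placeholder) creates a single Client and reuses it for several requests:
use humphrey::Client;
use std::error::Error;
fn main() -> Result<(), Box<dyn Error>> {
    // Create the (expensive) TLS-enabled client once...
    let mut client = Client::new();
    // ...then reuse it for every request the application makes.
    for url in ["https://api.ipify.org", "https://www.example.com"] {
        let response = client.get(url)?.send()?;
        println!("{}: {}", url, response.text().ok_or("Invalid text")?);
    }
    Ok(())
}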
Conclusion
In conclusion, Humphrey provides a powerful way to make requests as well as to serve them. If you want to learn more about Humphrey, consider exploring the API reference or reading the WebSocket guide.
Humphrey Server
Humphrey is a very fast, robust and flexible HTTP/1.1 web server, with support for static and dynamic content through its plugin system. It has no dependencies when only using default features, and is easily extensible with a configuration file and dynamically-loaded plugins.
This section of the guide will cover the following topics:
- Installing and running Humphrey Server
- Exploring the configuration file
- Using PHP with Humphrey Server
- Serving content over HTTPS
- Creating your own plugin
This section requires no prior knowledge, with the exception of the plugin creation section, which requires knowledge of the Rust programming language.
Getting Started
Installation
You can find the latest binaries at the Releases page on GitHub. If you want to use plugins, ensure you download the version which supports them. It is also advisable to add the executable to your PATH
, so you can run humphrey
from anywhere. This is automatically done if you install via cargo
, as outlined below.
Building from Source
To download and build the server, run the following command:
$ cargo install humphrey_server
If you are going to use plugins with the server, including the PHP plugin, you'll need to compile in plugin support, which can be done with the argument --features plugins
. If you want to serve content over HTTPS, you'll need to compile in TLS support, which can be done with the argument --features tls
. The following command automatically does both:
$ cargo install humphrey_server --all-features
Running the Server
Once Humphrey Server is installed, you can simply run humphrey
anywhere to serve the content of the current working directory. It has only one optional argument, which is the path to its configuration file, and this defaults to humphrey.conf
.
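For example, to serve the current directory with the default configuration, or with a specific configuration file (the path here is just a placeholder):
$ humphrey
$ humphrey /path/to/humphrey.conf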
You'll see a warning that no configuration file was found. In the next section, Configuration, we'll learn how to use Humphrey's advanced configuration format to configure the server.
Configuration
Humphrey's configuration format is similar to that of Nginx. Comments begin with a #
and are ignored by the parser. Separate configuration files can be included with the include
directive, like in Nginx.
Locating the Configuration
Humphrey looks in three places for the configuration, before falling back to the default. If no configuration file can be found, the server will log a warning and start using the default configuration.
- The path specified as a command-line argument, for example when running humphrey /path/to/config.conf.
- The humphrey.conf file in the current directory.
- The path specified in the HUMPHREY_CONF environment variable.
It is important to note that if a file is found at any of these locations but is invalid, the server will log an error and exit instead of continuing to the next location.
Example
An example configuration file with all the supported directives specified is shown below.
server {
address "0.0.0.0" # Address to host the server on
port 443 # Port to host the server on
threads 32 # Number of threads to use for the server
timeout 5 # Timeout for requests, highly recommended to avoid deadlocking the thread pool
plugins { # Plugin configuration (only supported with the `plugins` feature)
include "php.conf" # Include PHP configuration (see next page)
}
tls { # TLS configuration (only supported with the `tls` feature)
cert_file "cert.pem" # Path to the TLS certificate
key_file "key.pem" # Path to the TLS key
force true # Whether to force HTTPS on all requests
}
blacklist {
file "conf/blacklist.txt" # Text file containing blacklisted addresses, one per line
mode "block" # Method of enforcing the blacklist, "block" or "forbidden" (which returns 403 Forbidden)
}
log {
level "info" # Log level, from most logging to least logging: "debug", "info", "warn", "error"
console true # Whether to log to the console
file "humphrey.log" # Filename to log to
}
cache {
size 128M # Size limit of the cache
time 60 # Max time to cache files for, in seconds
}
host "127.0.0.1" { # Configuration for connecting through the host 127.0.0.1
route /* {
redirect "http://localhost/" # Redirect to localhost
}
}
route /ws {
websocket "localhost:1234" # Address to connect to for WebSocket connections
}
route /proxy/* {
proxy "127.0.0.1:8000,127.0.0.1:8080" # Comma-separated proxy targets
load_balancer_mode "round-robin" # Load balancing mode, either "round-robin" or "random"
}
route /static/*, /images/* {
directory "/var/static" # Serve content from this directory to both paths
}
route /logo.png {
file "/var/static/logo_256x256.png" # Serve this file to this route
}
route /home {
redirect "/" # Redirect this route with 302 Moved Permanently
}
route /* {
directory "/var/www" # Serve content from this directory
}
}
Using PHP
Humphrey supports PHP over the FastCGI protocol, provided that it was compiled with the plugins
feature enabled and the PHP plugin is installed. You'll also need PHP-CGI or PHP-FPM installed and running to allow Humphrey to connect to the PHP interpreter.
Configuration
In the previous configuration example, we included a file called php.conf
into the configuration. You'll need to create this file with the following contents:
php {
library "path/to/php.dll" # Path to the compiled library
address "127.0.0.1" # Address of the interpreter
port 9000 # Port of the interpreter
threads 8 # Threads to use (see below)
}
Multi-Threading
The PHP plugin supports multi-threading to improve performance, but this requires some tweaks to the PHP FastCGI server configuration. PHP is by default single-threaded, so you'll need to increase the PHP threads to match the number you specify in your php.conf
file.
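For example, with PHP-CGI, one way to start the interpreter with eight workers to match the threads 8 directive above is shown below (a sketch; the exact command depends on how PHP is installed on your system):
$ PHP_FCGI_CHILDREN=8 php-cgi -b 127.0.0.1:9000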
Using HTTPS
Humphrey supports serving content over HTTPS, provided that the tls
feature is compiled in and a certificate is provided.
Setting up the TLS Certificate
TLS requires a certificate and a private key, which must be supplied to the server. In production, these would be generated by a certificate authority like Let's Encrypt, but when developing locally, it's often easier to use a self-signed certificate.
The mkcert
command-line tool can be used to generate a trusted certificate for local development.
Installing mkcert
mkcert
can be installed as follows (or downloaded from the aforementioned link):
Windows:
$ choco install mkcert
MacOS:
$ brew install mkcert
Linux:
$ sudo apt install libnss3-tools
$ brew install mkcert
Generating the Certificate
Once installed, the mkcert
certificate authority must be trusted by the operating system, which can be done by running the following command.
$ mkcert -install
Finally, to generate a certificate for local development, run this command, which will create two files, localhost.pem
and localhost-key.pem
.
$ mkcert localhost
Using the Certificate with Humphrey
To provide the certificate to the server, you'll need to include the TLS configuration section in your configuration file as follows:
tls {
cert_file "path/to/cert.pem" # Path to the TLS certificate
key_file "path/to/key.pem" # Path to the TLS key
force true # Whether to force HTTPS on all requests
}
Hot Reload
Humphrey supports hot reload through a first-party plugin, provided that the server was compiled with the plugins
feature enabled and the plugin is installed.
The Hot Reload plugin is able to automatically reload webpages when the source code changes. It is not recommended for use in production, but is useful for development. It should also be noted that, when using a front-end framework such as React, the framework's built-in HMR (hot module reloading) functionality should be used instead of this plugin.
HTML pages are reloaded by requesting the updated page through a fetch
call, then writing this to the page. This avoids the need for the page to be reloaded manually. CSS and JavaScript are reloaded by requesting the updated data, then replacing the old script or stylesheet. Images are reloaded in the same way. Other resources are currently unable to be dynamically reloaded.
When JavaScript is reloaded, the updated script will be executed upon load in the same context as the old script. This means that any const
declarations may cause errors, but this is unavoidable as without executing the new script, none of the changes can be used. For this reason, the Hot Reload plugin is more suitable for design changes than for functionality changes.
Warning: Hot Reload disables caching so that changes are immediately visible.
Configuration
In the plugins section of the configuration file, add the following:
hot-reload {
library "path/to/hot-reload.dll" # Path to the compiled library
ws_route "/ws" # Route to the WebSocket endpoint
}
Specifying the WebSocket route is optional. If not specified, the default is /__hot-reload-ws
in order to avoid conflicts with other configured WebSocket endpoints.
Creating a Plugin
One of Humphrey's main strengths is its extensibility through its plugin system. In this section, you'll learn how to write a basic plugin using Rust and load it into the server.
This section requires knowledge of the Rust programming language.
Setting Up the Project
To begin, you'll need to create a new Rust library with the following command:
$ cargo new my_plugin --lib
Then, in the Cargo.toml
file, you'll need to specify the humphrey
and humphrey_server
dependencies, as well as the plugins
feature of the latter. You must also specify the crate type as cdylib
so it can be dynamically linked into the server. The file should look like this:
[package]
name = "my_plugin"
version = "0.1.0"
edition = "2021"
[dependencies]
humphrey = "*"
humphrey_server = { version = "*", features = ["plugins"] }
[lib]
crate-type = ["cdylib", "rlib"]
Initialising the Plugin
Every Humphrey plugin is a crate which defines a type which implements the Plugin
trait. The type must be declared with the declare_plugin!
macro. In your lib.rs
file, add the following code:
use humphrey_server::declare_plugin;
use humphrey_server::plugins::plugin::Plugin;
#[derive(Default, Debug)]
pub struct MyPlugin;
impl Plugin for MyPlugin {
fn name(&self) -> &'static str {
"My Plugin"
}
}
declare_plugin!(MyPlugin, MyPlugin::default);
The only required method for the trait to be implemented is name
, which returns the name of the plugin. The declaration macro takes in the type of the plugin, and a constructor to initialise the plugin, which we've automatically generated by deriving the Default
trait.
Intercepting Requests
The on_request
method of the plugin trait is passed every request, along with the app's state and the configuration of the route which matched it. It returns an Option<Response>
, which is None
if the plugin doesn't want to handle the request, or Some(response)
if it does.
Let's add some code which will intercept all requests to the /example
route, and return a response with a body of "Hello, world!".
// --snip--
use humphrey::http::headers::HeaderType;
use humphrey::http::{Request, Response, StatusCode};
use humphrey_server::config::RouteConfig;
use humphrey_server::AppState;
use std::sync::Arc;
impl Plugin for MyPlugin {
// --snip--
fn on_request(
&self,
request: &mut Request,
state: Arc<AppState>,
_: &RouteConfig,
) -> Option<Response> {
state.logger.info(&format!(
"Example plugin read a request from {}",
request.address
));
// If the requested resource is "/example" then override the response
if &request.uri == "/example" {
state.logger.info("Example plugin overrode a response");
return Some(
Response::empty(StatusCode::OK)
.with_bytes("Hello, world!")
.with_header(HeaderType::ContentType, "text/plain"),
);
}
None
}
}
This code simply logs that the plugin intercepted each request, and if the URI is equal to /example
, it overrides the response with the message "Hello, world!".
Intercepting Responses
Humphrey plugins can also intercept responses, and can modify them before they are sent to the client. The on_response
method takes in a mutable reference to the response, which can be modified if necessary. It also takes in the app's state.
Now, we're going to add some code which adds the X-Example-Plugin
header to every response with a value of true
.
impl Plugin for MyPlugin {
// --snip--
fn on_response(&self, response: &mut Response, state: Arc<AppState>) {
// Insert a header to the response
response.headers.add("X-Example-Plugin", "true");
state
.logger
.info("Example plugin added the X-Example-Plugin header to a response");
}
}
Conclusion
As you can see, Humphrey's plugin system allows for complex additions to be made to the Humphrey server. If you want to see a more in-depth example of a plugin, check out the source code for the PHP plugin here.
Humphrey WebSocket
Humphrey WebSocket is a crate which extends Humphrey Core with WebSocket support by hooking into the latter's WebsocketHandler
trait. It handles the WebSocket handshake and framing protocol and provides a simple and flexible API for sending and receiving messages. Using Humphrey's generic Stream
type, it supports drop-in TLS. It also has no dependencies in accordance with Humphrey's goals of being dependency-free.
Humphrey WebSocket provides two ways to architect a WebSocket application: synchronous and asynchronous.
Synchronous WebSocket applications call user-specified handler functions when a client connects, and the handler function is expected to manage the connection until it closes. This is good for applications where you expect the connection to be short-lived and/or exchange a lot of data.
Asynchronous WebSocket applications call user-specified handler functions when a client connects, sends a message, or disconnects, and the handler function is only expected to manage the specific event that triggered it. This is more convenient for long-lived connections, as well as when broadcasting data to all connected clients is required, and vastly increases the number of concurrent WebSocket connections that can be handled.
This section of the guide will cover both approaches. We'll create the same example application in each of the two ways, so you can compare them by reading this section chronologically.
It's recommended that you have basic familiarity with Rust and the Humphrey Core crate before reading this section, as only Humphrey WebSocket-specific concepts are covered.
Synchronous WebSocket
Synchronous WebSocket applications call a user-specified handler for each client that connects, and the handler manages the connection until it closes. This means that the handler treats the connection like a regular stream, reading and writing data from it. While this is simpler and quicker, it also limits the number of simultaneous connections to the thread pool size of the underlying Humphrey application, since each connection is handled by a single thread.
This subsection will cover how to create a basic synchronous WebSocket server, as well as how to broadcast messages to all connected clients using an external crate. You'll see in the next subsection how to do these things asynchronously.
Getting Started
This chapter will walk you through the steps to get started with Humphrey WebSocket synchronously.
Adding WebSocket Support to a Humphrey Project
To add WebSocket support to an existing project, you just need to add the humphrey_ws
dependency to your Cargo.toml
file. It is important to ensure that the version is compatible with that of the core crate, so you should ideally find the latest version of each and pin your versions accordingly.
[dependencies]
humphrey = "*"
humphrey_ws = "*"
If you want to create a new project with WebSocket support, first follow the instructions in the Humphrey Core section to create a new Humphrey project, then add the humphrey_ws
dependency.
Setting up a WebSocket Handler
To add a WebSocket route to your Humphrey app, use the with_websocket_route
method on the App
struct, providing the path to match and the handler, just like you would with any other route. However, you should wrap the handler function in this crate's websocket_handler
function, which will allow it to handle the WebSocket handshake behind-the-scenes.
Let's create a new App
struct and add a route to match any path.
use humphrey::App;
use humphrey_ws::stream::WebsocketStream;
use humphrey_ws::websocket_handler;
use std::sync::Arc;
fn main() {
let app: App = App::new()
.with_websocket_route("/*", websocket_handler(my_handler));
app.run("0.0.0.0:80").unwrap();
}
fn my_handler(mut stream: WebsocketStream, _: Arc<()>) {
println!("Connection from {:?}", stream.inner().peer_addr().unwrap());
// TODO: Implement handler
}
If you run this code, the app will start, but all WebSocket connections will be immediately closed after printing their addresses since the handler function immediately returns and thus the stream is dropped. This can be a useful feature of the WebsocketStream
type, since the client is automatically sent a "close" frame when it is dropped.
Testing our WebSocket Handler (optional)
In production, it is likely that our application would only ever be accessed from a browser. However, during development, it can be useful to connect to the server from a terminal with a tool like netcat for debugging. We'll use websocat
for this, a simple Rust CLI for connecting to WebSocket servers from the terminal. It can be installed with cargo install websocat
.
Let's connect to our server.
$ websocat ws://127.0.0.1/
The connection will not immediately close, but it will be closed if you attempt to send a message. The running server will however print a message to the console to indicate that the connection was successful.
Receiving Messages
Messages can be received from the client in three ways. Firstly, you can use the recv
method on the stream to block until a message is received or an error is encountered. Secondly, you can use recv_nonblocking
to check if a message is available without blocking, which will be discussed next. Finally, you can make use of the stream's implementation of the Read
trait, which allows you to use the stream with Rust's built-in functions. For this example, we'll use the first method.
Let's change our code so it continually listens for messages, and prints them to the console.
// --snip--
fn my_handler(mut stream: WebsocketStream, _: Arc<()>) {
let address = stream.inner().peer_addr().unwrap();
println!("{:?}: <connected>", address);
while let Ok(message) = stream.recv() {
println!("{:?}: {}", address, message.text().unwrap().trim());
}
println!("{:?}: <disconnected>", address);
}
Now, we loop while we are successfully receiving messages, and print each one to the console. The message.text()
function converts each message to a string, which will return an error if the message is not valid UTF-8. However, we don't need to worry about this since we are only sending text messages.
If we connect to the server again using websocat
, we can test our code.
$ websocat ws://127.0.0.1/
hello world
this is working
We should see the following output in the console:
127.0.0.1:12345: <connected>
127.0.0.1:12345: hello world
127.0.0.1:12345: this is working
127.0.0.1:12345: <disconnected>
Sending Messages
Messages can be sent to the client in either of two ways. We can either use the send
method on the stream to send a message, or we can use the stream's implementation of the Write
trait. Since we used the corresponding recv
method earlier, we'll use the former.
Now we're going to modify our code so that it echoes back each message to the client after printing it to the console, as well as sending an initial "Hello, world!" message when each client first connects.
// --snip--
use humphrey_ws::message::Message;
// --snip--
fn my_handler(mut stream: WebsocketStream, _: Arc<()>) {
let address = stream.inner().peer_addr().unwrap();
println!("{:?}: <connected>", address);
stream.send(Message::new("Hello, world!")).unwrap();
while let Ok(message) = stream.recv() {
println!("{:?}: {}", address, message.text().unwrap().trim());
stream.send(message).unwrap();
}
println!("{:?}: <disconnected>", address);
}
When the client first connects, we use the Message::new
constructor to create a new message and then send it to the client with stream.send
. The message will automatically be marked as a text message since the payload is valid UTF-8. If we were to send it as a binary message, we would use Message::new_binary
, or supply a non-UTF-8 payload to the regular constructor.
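For instance, a binary message could be constructed and sent like this (a sketch, assuming Message::new_binary accepts a byte payload in the same way as the regular constructor):
// --snip--
stream.send(Message::new_binary(vec![0x01, 0x02, 0x03])).unwrap();
// --snip--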
You can now use websocat
again to test your code.
Conclusion
In this chapter, we've learnt about sending and receiving WebSocket messages within a Humphrey application. Next, let's look at the Broadcasting Messages chapter, which covers how to use non-blocking reads to create a simple broadcast server.
Broadcasting Messages
It's common in a WebSocket application to broadcast messages to many clients at once, so in this chapter we'll learn how to do this using Humphrey WebSocket. We will have to use an external dependency bus
to provide a single-producer, multiple-consumer channel to send messages to the client handler threads to then be sent on to each client.
The example we build in this chapter will simply echo messages back to the client like we did before, but with the addition that any messages typed into the server console will be broadcast to all connected clients.
Initialising the Project
As before, we need a new Humphrey application, along with the following dependencies:
[dependencies]
humphrey = "*"
humphrey_ws = "*"
bus = { git = "https://github.com/agausmann/bus", branch = "read_handle/lock" }
You'll notice that the bus
dependency is specified with a GitHub address. This is because we need to be able to add readers to the bus from different threads, and this functionality is not yet merged into the main crate, so we need to use Adam Gausmann's fork.
Let's copy the code we used at the start of the last chapter to create a new WebSocket-enabled application:
use humphrey::App;
use humphrey_ws::stream::WebsocketStream;
use humphrey_ws::websocket_handler;
use std::sync::Arc;
fn main() {
let app: App = App::new()
.with_websocket_route("/*", websocket_handler(my_handler));
app.run("0.0.0.0:80").unwrap();
}
fn my_handler(mut stream: WebsocketStream, _: Arc<()>) {
// TODO: Implement handler
}
Initialising the Bus
This time, we need to share some state between the handlers: the bus. We'll define the state type as simply a mutex around a read handle to the bus. This will only need to be locked very briefly when each client first connects in order to add a reader to the bus. We also need to create the bus and a read handle to it. Let's make these changes:
// --snip--
use std::sync::{Arc, Mutex};
use bus::{Bus, BusReadHandle};
type AppState = Mutex<BusReadHandle<String>>;
fn main() {
let bus: Bus<String> = Bus::new(16);
let read_handle = bus.read_handle();
let app: App<AppState> = App::new_with_config(32, Mutex::new(read_handle))
.with_websocket_route("/*", websocket_handler(my_handler));
app.run("0.0.0.0:80").unwrap();
}
fn my_handler(mut stream: WebsocketStream, read_handle: Arc<AppState>) {
// TODO: Implement handler
}
You'll see that we also changed App::new
to App::new_with_config
to specify the initial state value. This is because we need to pass the read handle to the app, so it can share it with the handlers. We also have to specify the number of threads to use as part of this more flexible constructor.
Non-Blocking Reads
Next, we effectively need to read messages from the stream and the bus at the same time. We can't block on both at once, so we use non-blocking reads: we attempt to read from the stream without blocking, then do the same with the bus.
The recv_nonblocking
function of the stream returns a Restion
, which is an enum merging the core Result
and Option
types, giving it variants Ok(value)
, Err(error)
and None
. The None
variant indicates that the read was successful, but there was nothing to read.
Let's implement this in the code:
// --snip--
use std::thread::sleep;
use std::time::Duration;
// --snip--
fn my_handler(mut stream: WebsocketStream, read_handle: Arc<AppState>) {
let mut rx = { read_handle.lock().unwrap().add_rx() };
loop {
match stream.recv_nonblocking() {
Restion::Ok(message) => stream.send(message).unwrap(),
Restion::Err(_) => break,
Restion::None => (),
}
if let Ok(channel_message) = rx.try_recv() {
stream.send(Message::new(channel_message)).unwrap();
}
sleep(Duration::from_millis(64));
}
}
We first temporarily lock the mutex to create a new bus reader, then continuously attempt to read from the stream and the bus. If the read from the stream was successful, we echo back the message. If an error occurred, we close the connection, and if no message was read, we do nothing. Then, we do the same with the bus, and if the read was successful, we send the broadcasted message to the client. Finally, we sleep for a short time to avoid busy-waiting.
If you run the server now and test it with websocat
, it will behave exactly like the server we built in the previous chapter.
Broadcasting User Input
Now that our handlers are set up, we just need to give them something to broadcast. For this, we can simply read the standard input and send it line by line to the bus. This will have to take place on a separate thread, since the Humphrey application blocks the main thread indefinitely.
This can be simply implemented as follows:
// --snip--
use std::io::BufRead;
use std::thread::{sleep, spawn};
// --snip--
fn main() {
let bus: Bus<String> = Bus::new(16);
let read_handle = bus.read_handle();
spawn(move || main_thread(bus));
// --snip --
}
fn main_thread(mut bus: Bus<String>) {
let stdin = std::io::stdin();
let handle = stdin.lock();
for line in handle.lines().flatten() {
bus.broadcast(line);
}
}
Testing the Server
Let's open up three terminal windows, and run the server on one of them. In the other two, connect to the server with websocat
as we did before with websocat ws://127.0.0.1/
. If you send messages to the server in either of the client terminals, you'll see that they are individually echoed back to that client. However, if you type a message in the server terminal, you'll see it broadcast to both connected clients. It works!
Full Example
The full source code for this example should look like this:
use humphrey::App;
use humphrey_ws::message::Message;
use humphrey_ws::restion::Restion;
use humphrey_ws::stream::WebsocketStream;
use humphrey_ws::websocket_handler;
use bus::{Bus, BusReadHandle};
use std::io::BufRead;
use std::sync::{Arc, Mutex};
use std::thread::{sleep, spawn};
use std::time::Duration;
type AppState = Mutex<BusReadHandle<String>>;
fn main() {
let bus: Bus<String> = Bus::new(16);
let read_handle = bus.read_handle();
spawn(move || main_thread(bus));
let app: App<AppState> = App::new_with_config(32, Mutex::new(read_handle))
.with_websocket_route("/*", websocket_handler(my_handler));
app.run("0.0.0.0:80").unwrap();
}
fn main_thread(mut bus: Bus<String>) {
let stdin = std::io::stdin();
let handle = stdin.lock();
for line in handle.lines().flatten() {
bus.broadcast(line);
}
}
fn my_handler(mut stream: WebsocketStream, read_handle: Arc<AppState>) {
let mut rx = { read_handle.lock().unwrap().add_rx() };
loop {
match stream.recv_nonblocking() {
Restion::Ok(message) => stream.send(message).unwrap(),
Restion::Err(_) => break,
Restion::None => (),
}
if let Ok(channel_message) = rx.try_recv() {
stream.send(Message::new(channel_message)).unwrap();
}
sleep(Duration::from_millis(64));
}
}
Conclusion
Humphrey WebSocket provides powerful WebSocket support for Humphrey applications. When paired with other crates, like the bus
crate here, it can be used for even more complex tasks with minimal code. We'll now take a look at how to do this asynchronously.
Asynchronous WebSocket
For applications which serve many clients at once, synchronous approaches can be a bottleneck. Humphrey WebSocket's second option for building a WebSocket application is asynchronously, which entails using event handlers for specific events (connection, disconnection and messages).
This subsection of the guide will cover how to create a basic asynchronous WebSocket server, as well as how to broadcast messages to all connected clients. We'll also compare this approach to the previous one as we build the same example application.
Getting Started
This chapter will walk you through building a basic asynchronous WebSocket server with Humphrey WebSocket.
Creating a New Project
For this example, we'll be building a WebSocket-only application, so we won't link it to a Humphrey Core application. If you want to use asynchronous WebSocket alongside an existing Core application, or if you want to use Humphrey's TLS integration, read the Using with an Existing Humphrey App section.
Let's create a new project with cargo new async_ws
and then add Humphrey WebSocket as a dependency in the Cargo.toml
file. Make sure you replace the "*" version with the latest version of the crate.
[dependencies]
humphrey_ws = "*"
Setting up the Application
Creating a new Humphrey WebSocket app looks very similar to creating a new Humphrey app. The AsyncWebsocketApp
struct, like Humphrey's App
, has one type parameter for the app's state, and is configured with a builder method. Unless otherwise specified, the app will manage its own Humphrey Core application behind-the-scenes, and will automatically respond to WebSocket requests to any route.
Let's set up the app with all the handlers we need.
use humphrey_ws::async_app::{AsyncStream, AsyncWebsocketApp};
use humphrey_ws::message::Message;
use std::sync::Arc;
fn main() {
let websocket_app: AsyncWebsocketApp<()> = AsyncWebsocketApp::new()
.with_connect_handler(connect_handler)
.with_disconnect_handler(disconnect_handler)
.with_message_handler(message_handler);
websocket_app.run();
}
fn connect_handler(stream: AsyncStream, _: Arc<()>) {
// TODO
}
fn disconnect_handler(stream: AsyncStream, _: Arc<()>) {
// TODO
}
fn message_handler(stream: AsyncStream, message: Message, _: Arc<()>) {
// TODO
}
This code will compile, run, and accept WebSocket connections, but it won't do anything with them yet. The connect and disconnect handlers are event handlers, and they are passed an `AsyncStream` and the app's state, which we ignore. The message handler is, you guessed it, a message handler, and it is passed an `AsyncStream`, the `Message` which triggered it, and the app's state, which again we ignore.
But what is an `AsyncStream`?
What is an `AsyncStream`?
The `AsyncStream` struct is how Humphrey WebSocket represents an internal client connection, without giving the handler access to the underlying stream. It provides all the functionality required to send and broadcast messages, as well as the address of the client. It communicates with the actual client through a channel, which the main thread reads from in order to forward messages to their corresponding clients. You don't need to think about any of this, however, since the asynchronous app's runtime handles all of the details for you.
If, for whatever reason, you need access to the raw underlying stream, you'll need to use the synchronous WebSocket architecture described in the previous subsection.
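To make this more concrete, here is a minimal sketch of a connect handler using the `AsyncStream` methods described above; the handler name and message text are purely illustrative.
// --snip--
fn example_connect_handler(stream: AsyncStream, _: Arc<()>) {
    // `peer_addr` gives the address of the client behind this stream.
    println!("New connection from {}", stream.peer_addr());
    // `send` queues a message for this client only...
    stream.send(Message::new("Hello!"));
    // ...while `broadcast` queues a message for every connected client.
    stream.broadcast(Message::new("Someone new has joined!"));
}
// --snip--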
Implementing the Handlers
Let's now implement the handlers we defined earlier. For connections, we're going to send a welcome message, and for messages, we're going to respond with an acknowledgement. We're also going to print each event to the console.
// --snip--
fn connect_handler(stream: AsyncStream, _: Arc<()>) {
println!("{}: Client connected", stream.peer_addr());
stream.send(Message::new("Hello new client!"));
}
fn disconnect_handler(stream: AsyncStream, _: Arc<()>) {
println!("{}: Client disconnected", stream.peer_addr());
}
fn message_handler(stream: AsyncStream, message: Message, _: Arc<()>) {
println!(
"{}: Message received: {}",
stream.peer_addr(),
message.text().unwrap().trim()
);
stream.send(Message::new("Message received!"));
}
You'll see we use the `Message` type to represent messages, which we discussed when we were talking about the synchronous WebSocket architecture. This is simply an abstraction over WebSocket frames and messages.
If we run this code now and connect to it with `websocat` (which we learnt about here), it should run as expected.
Client:
william@pc:~$ websocat ws://127.0.0.1
Hello new client!
Example message
Message received!
^C
Server:
127.0.0.1:50189: Client connected
127.0.0.1:50189: Message received: Example message
127.0.0.1:50189: Client disconnected
Detecting Unexpected Disconnections
Generally, when a client disconnects, they gracefully close the connection by sending a "close" frame to the server. However, if the client disconnects suddenly, such as in the case of a loss of network connectivity, not only will the WebSocket connection not be closed, but the underlying TCP stream won't be either. This means that the disconnect handler will not be called, which could cause issues in your application.
Fortunately, Humphrey WebSocket provides a way around this by way of heartbeats. A heartbeat consists of a ping and a pong: the former is sent from the server to the client, and the latter from the client back to the server. An asynchronous WebSocket application can be configured to send a ping every `interval` seconds, to which the client will automatically respond with a pong. If no pongs are received within `timeout` seconds, the connection will be closed and the disconnect handler correctly called.
To do this in our example, we'll simply need to make a small change when we first create our app. We'll be using 5 seconds for our ping interval and 10 seconds for our timeout.
use humphrey_ws::async_app::{AsyncStream, AsyncWebsocketApp};
use humphrey_ws::message::Message;
use humphrey_ws::ping::Heartbeat;
use std::sync::Arc;
use std::time::Duration;
fn main() {
let websocket_app: AsyncWebsocketApp<()> = AsyncWebsocketApp::new()
.with_heartbeat(Heartbeat::new(Duration::from_secs(5), Duration::from_secs(10)))
.with_connect_handler(connect_handler)
.with_disconnect_handler(disconnect_handler)
.with_message_handler(message_handler);
websocket_app.run();
}
// --snip--
Conclusion
In this chapter, we've learnt about sending and receiving WebSocket messages asynchronously. Next, we'll learn how to broadcast messages to all connected clients, and compare this to how we did it synchronously.
Broadcasting Messages
Many WebSocket applications broadcast messages to many clients at once, so in this chapter we'll learn how to do this asynchronously. Previously, we had to use an external dependency, `bus`, but using the asynchronous approach, this is no longer necessary.
The example we build in this chapter will simply echo messages back to the client, as well as broadcasting any messages typed into the server console to all connected clients. We'll also broadcast a message whenever a client connects.
Initialising the Project
As before, we need a new Humphrey WebSocket application. We don't need to handle the disconnection event, so we won't add a handler for it.
use humphrey_ws::async_app::{AsyncStream, AsyncWebsocketApp};
use humphrey_ws::message::Message;
use std::sync::Arc;
fn main() {
let websocket_app: AsyncWebsocketApp<()> = AsyncWebsocketApp::new()
.with_connect_handler(connect_handler)
.with_message_handler(message_handler);
websocket_app.run();
}
fn connect_handler(stream: AsyncStream, _: Arc<()>) {
// TODO
}
fn message_handler(stream: AsyncStream, message: Message, _: Arc<()>) {
stream.send(message);
}
Broadcasting Messages from Event Handlers
Our connection handler needs to broadcast a message to all connected clients when a new client connects. This message will also be sent to the new client. The `AsyncStream` provides functionality for this, but as we'll see later, this is not the only way to broadcast messages.
Let's add this to our connection handler.
// --snip--
fn connect_handler(stream: AsyncStream, _: Arc<()>) {
let message = Message::new(format!("Welcome, {}!", stream.peer_addr()));
stream.broadcast(message);
}
// --snip--
It's as simple as that! If you test this with `websocat` and connect from a few terminals, you'll see that each message is correctly echoed back to the client that sent it, and new connections are announced to everyone.
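For example, with two clients connected from the same machine, a session might look something like the following (addresses and messages are purely illustrative).
First client:
william@pc:~$ websocat ws://127.0.0.1
Welcome, 127.0.0.1:50212!
Welcome, 127.0.0.1:50213!
Hello everyone
Hello everyone
Second client:
william@pc:~$ websocat ws://127.0.0.1
Welcome, 127.0.0.1:50213!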
Sending Messages without an Event
Broadcasts can also be triggered without an event. This is useful for sending messages to all connected clients from a separate thread, or for responding to non-WebSocket events. In this example, we'll broadcast the standard input to all connected clients.
To do this, we'll use an `AsyncSender`, which allows us to send messages and broadcasts without waiting for an event. Let's get a new async sender from the app, and send it to a separate thread for handling user input.
// --snip--
use humphrey_ws::async_app::AsyncSender;
use std::thread::spawn;
fn main() {
let websocket_app: AsyncWebsocketApp<()> = AsyncWebsocketApp::new()
.with_connect_handler(connect_handler)
.with_message_handler(message_handler);
let sender = websocket_app.sender();
spawn(move || user_input(sender));
websocket_app.run();
}
fn user_input(sender: AsyncSender) {
// TODO
}
// --snip--
You can create as many senders as you want from the app, but they can only be created from the main thread and must be created before the application is run.
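For instance, a sketch of creating two senders, each used by its own background thread, might look like this (the `ticker` function is hypothetical).
// --snip--
fn main() {
    let websocket_app: AsyncWebsocketApp<()> = AsyncWebsocketApp::new()
        .with_connect_handler(connect_handler)
        .with_message_handler(message_handler);
    // Both senders must be created here, on the main thread, before `run` is called.
    let input_sender = websocket_app.sender();
    let ticker_sender = websocket_app.sender();
    spawn(move || user_input(input_sender));
    spawn(move || ticker(ticker_sender));
    websocket_app.run();
}
// --snip--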
Using the Sender
Now that we have a sender, we can use it to send messages to all connected clients. Let's use the same code from our synchronous example, but slightly modify it to work with a sender instead of the bus.
// --snip--
use std::io::BufRead;
// --snip--
fn user_input(sender: AsyncSender) {
let stdin = std::io::stdin();
let handle = stdin.lock();
for line in handle.lines().flatten() {
sender.broadcast(Message::new(line));
}
}
// --snip--
If we run this code now, every line we type in the server console will be broadcast to all connected clients.
Full Example
The full source code for this example should look like this.
use humphrey_ws::async_app::{AsyncStream, AsyncWebsocketApp, AsyncSender};
use humphrey_ws::message::Message;
use std::io::BufRead;
use std::sync::Arc;
use std::thread::spawn;
fn main() {
let websocket_app: AsyncWebsocketApp<()> = AsyncWebsocketApp::new()
.with_connect_handler(connect_handler)
.with_message_handler(message_handler);
let sender = websocket_app.sender();
spawn(move || user_input(sender));
websocket_app.run();
}
fn user_input(sender: AsyncSender) {
let stdin = std::io::stdin();
let handle = stdin.lock();
for line in handle.lines().flatten() {
sender.broadcast(Message::new(line));
}
}
fn connect_handler(stream: AsyncStream, _: Arc<()>) {
let message = Message::new(format!("Welcome, {}!", stream.peer_addr()));
stream.broadcast(message);
}
fn message_handler(stream: AsyncStream, message: Message, _: Arc<()>) {
stream.send(message);
}
Conclusion
In this chapter, we've learnt how to broadcast messages asynchronously. It's a lot easier than the synchronous approach, and also more flexible. In the next chapter, we'll learn how to integrate an asynchronous WebSocket application with an existing Humphrey application.
Using with an Existing Humphrey App
An asynchronous WebSocket application can be linked to a Humphrey application in one of two ways: internally or externally. So far, we've only dealt with internal linking, where the WebSocket application manages its own Humphrey application. However, in many cases it might be more convenient to use a WebSocket application as part of a larger Humphrey Core application, and this is required if you want to use TLS.
For this chapter only, we'll start by looking at the entire code for this example, then learn how it works.
The Code
use humphrey::http::{Response, StatusCode};
use humphrey::App;
use humphrey_ws::async_app::{AsyncStream, AsyncWebsocketApp};
use humphrey_ws::handler::async_websocket_handler;
use humphrey_ws::message::Message;
use std::sync::Arc;
use std::thread::spawn;
fn main() {
let websocket_app: AsyncWebsocketApp<()> =
AsyncWebsocketApp::new_unlinked().with_message_handler(message_handler);
let humphrey_app: App<()> = App::new()
.with_stateless_route("/", |_| Response::new(StatusCode::OK, "Hello world!"))
.with_websocket_route(
"/ws",
async_websocket_handler(websocket_app.connect_hook().unwrap()),
);
spawn(move || humphrey_app.run("0.0.0.0:80").unwrap());
websocket_app.run();
}
fn message_handler(stream: AsyncStream, message: Message, _: Arc<()>) {
stream.send(message);
}
Creating a New, Unlinked AsyncWebsocketApp
A WebSocket application is considered to be unlinked if it doesn't have a link to an existing Humphrey application. We can create a new, unlinked WebSocket application by calling `AsyncWebsocketApp::new_unlinked()`. This requires the user to link the application to a Humphrey application rather than allowing the WebSocket application to manage it internally.
Running an unlinked WebSocket application will not throw an error, but it will not be able to receive messages.
What is a Connect Hook?
We link a WebSocket application to a Humphrey application using a connect hook, which is effectively the sending end of a channel that passes new WebSocket connections from the Humphrey application to the WebSocket application. Humphrey Core processes the incoming "upgrade" HTTP request, Humphrey WebSocket completes the handshake, and then your WebSocket application takes it from there.
Asynchronous WebSocket Routes
To define a WebSocket route on the Humphrey application as an entry point to your WebSocket application, we use the `async_websocket_handler` function, which provides a convenient way of performing the handshake and then passing the connection to your asynchronous application.
This function takes a connect hook as its argument, and returns the handler function for the route.
Running Both Apps
Since the WebSocket application does not manage its own Humphrey application, we need to run both apps in separate threads. It doesn't matter which runs first or which runs on the main thread, but as soon as the Humphrey application is started, new WebSocket connections can accumulate in the connect hook, which could cause a performance issue if the WebSocket application is not started straight away.
In our code, we run the Humphrey application first on a new thread, and then the WebSocket application on the main thread.
Conclusion
In this chapter, we've seen how to create an unlinked WebSocket application and manually link it to a Humphrey application. If you want to learn more about Humphrey WebSocket, consider taking a look at the API reference.
Humphrey JSON
Humphrey JSON is a simple JSON library for Rust, and provides a number of features for working with JSON data. In accordance with Humphrey's principles, it has no dependencies.
This section of the guide will cover the following topics:
- Parsing, manipulating, creating and serializing untyped JSON values
- Working with strongly-typed Rust data structures using the derive macros and the `json_map!` macro
The Humphrey JSON crate is very similar in concept and API to `serde_json`, so familiarity with the latter is very helpful. Much of Serde's documentation applies here as well.
Untyped JSON Values
Parsing Untyped JSON
JSON can be parsed into a `Value`, which can represent any JSON value. This can be done with either `humphrey_json::from_str` or `Value::parse`. Let's look at a simple example, which we'll use throughout the rest of this chapter.
use humphrey_json::Value;
fn main() {
let data = r#"
{
"name": "John Doe",
"age": 43,
"phones": [
"+44 1234567",
"+44 2345678"
]
}"#;
let value: Value = humphrey_json::from_str(data).unwrap();
println!("{:?}", value);
}
If you run this code, you'll see the internal representation of the parsed JSON value. The `Value` type must be specified, since the `from_str` function can return any type which implements `FromJson`, which we'll discuss later.
You can also use the `Value::parse` function like this:
let value = Value::parse(data).unwrap();
Now, we'll look at how to manipulate the JSON value.
Manipulating JSON Values
Using the data from the previous example, we'll see how to access different fields of it.
You can index into the JSON value using the `get` and `get_mut` methods, which return `Option<&Value>` and `Option<&mut Value>` respectively. Alternatively, you can use Rust's indexing syntax (`value[index]`), where the index is a number or a string; this returns `Value::Null` if the value does not exist, and creates one if you attempt to set it.
You can extract the inner value of a JSON value using `as_bool`, `as_number`, `as_str`, `as_array` and `as_object`, all of which return options.
let name = value["name"].as_str();
let age = value.get("age").and_then(|v| v.as_number()); // `get` returns an Option, so chain with `and_then`
let phone_1 = value["phones"].get(0);
let second_phone = &value["phones"][1]; // borrow to avoid moving the value out of the array
value["name"] = json!("Humphrey");
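Putting these operations together, a small self-contained sketch might look like this; it assumes the `json!` macro can be imported from the crate root.
use humphrey_json::{json, Value};

fn main() {
    let data = r#"{ "name": "John Doe", "age": 43 }"#;
    let mut value: Value = humphrey_json::from_str(data).unwrap();

    // Read fields using the accessor methods discussed above.
    let name = value["name"].as_str();
    let age = value["age"].as_number();
    println!("{:?} is {:?} years old", name, age);

    // Update a field using indexing and the json! macro, then serialize it back to a string.
    value["name"] = json!("Humphrey");
    println!("{}", value.serialize());
}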
Creating Untyped JSON
To create an untyped JSON value, you can use the `json!` macro. This allows you to use JSON-like syntax within Rust. The earlier example could be created in this way as follows:
let value = json!({
"name": "John Doe",
"age": 43,
"phones": [
"+44 1234567",
"+44 2345678"
]
});
You can even include variables and expressions of various types inside the macro, and they will be converted to their JSON representations automatically, as follows:
let value = json!({
"name": username,
"age": (age_last_year + 1),
"phones": [
home_phone,
work_phone
]
});
Serializing Untyped JSON
To serialize a `Value` into its string representation, you can use either the `serialize` method or the `humphrey_json::to_string` function. The latter has the benefit that any type which can be converted to a `Value` can be used, as you'll see in the next section.
let string = value.serialize();
You can also format the JSON with indentation and newlines using the `serialize_pretty` method, which takes the indent size as an argument.
let string = value.serialize_pretty(4);
Conclusion
In this section we've looked at the tools available for working with untyped JSON values using Humphrey JSON. Next, we'll look at how to manipulate these values using Rust data structures.
Strongly-Typed Data Structures
Humphrey JSON provides a powerful way of using Rust data structures to work with JSON data. Mappings between JSON and Rust types can be automatically generated using the `FromJson` and `IntoJson` derive macros, as well as configured more explicitly using the `json_map!` macro.
Deriving `FromJson` and `IntoJson`
The derive macros can only be used when the `derive` feature is enabled, which it is by default. The `FromJson` and `IntoJson` traits can be derived for a type as follows.
use humphrey_json::prelude::*;
#[derive(FromJson, IntoJson)]
struct User {
name: String,
location: String,
}
The macros also support tuple structs and basic enums, but do not yet support enums with variants that have fields. Every type contained within the struct must already implement the traits that are being implemented on the struct.
#[derive(FromJson, IntoJson)]
struct TupleStruct(String, u8);
#[derive(FromJson, IntoJson)]
enum MyEnum {
Yes,
No,
Maybe,
}
Finally, the macros also provide a `rename` attribute, which can be used to rename the fields of a struct or the variants of an enum in the JSON data.
#[derive(FromJson, IntoJson)]
struct RenamedFields {
#[rename = "dateOfBirth"]
date_of_birth: String,
}
#[derive(FromJson, IntoJson)]
enum RenamedVariants {
#[rename = "y"]
Yes,
#[rename = "n"]
No,
#[rename = "?"]
Maybe,
}
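As a rough illustration of the effect of `rename`, serializing a renamed struct produces JSON that uses the renamed keys; the value below is made up.
// --snip--
let record = RenamedFields {
    date_of_birth: "1989-01-01".to_string(),
};

// The output uses the renamed key, e.g. {"dateOfBirth":"1989-01-01"}
let json_string = humphrey_json::to_string(&record).unwrap();
// --snip--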
The `json_map!` Macro
The `json_map!` macro is used as follows. The fields on the left represent the fields of the struct, and there must be an entry for each field in the struct. The strings on the right represent the names of the fields in the JSON data. It automatically generates a `FromJson` and `IntoJson` implementation for the struct.
Unlike the derive macros, this macro allows you to specify exactly what names you want to use for each field, instead of just using the struct's field names. On the downside, however, you cannot use the `json_map!` macro on enums.
use humphrey_json::prelude::*;
#[derive(Debug, PartialEq, Eq)] // not required, but needed for the assert_eq! later in this section
struct User {
name: String,
location: String,
}
json_map! {
User,
name => "name",
location => "country"
}
Parsing into a Struct
To parse a JSON string into a struct, you can simply use the `humphrey_json::from_str` function. For example, given the following JSON data, you can parse it into a `User` struct as follows:
{
"name": "Humphrey",
"country": "United Kingdom"
}
let user: User = humphrey_json::from_str(json_string).unwrap();
assert_eq!(user, User {
name: "Humphrey".to_string(),
location: "United Kingdom".to_string()
});
This also works for more complex structs, provided that all nested types implement `FromJson`.
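For example, a sketch of a nested structure where every type derives the traits (the struct and field names here are made up):
use humphrey_json::prelude::*;

#[derive(FromJson, IntoJson)]
struct Address {
    city: String,
    country: String,
}

#[derive(FromJson, IntoJson)]
struct Profile {
    name: String,
    address: Address, // works because Address also implements FromJson and IntoJson
}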
Serializing into JSON
Instances of any struct which implements `IntoJson` can be serialized into JSON, as follows:
let json_string = humphrey_json::to_string(&user).unwrap();
To format the JSON with newlines and to customize the indentation, you can use the `to_string_pretty` function, which takes the number of spaces to indent as an argument.
let json_string = humphrey_json::to_string_pretty(&user).unwrap();
Conclusion
In conclusion, the derive macros, the `json_map!` macro and their associated functions are a powerful way of working with typed JSON data. To find out more about Humphrey JSON, consider looking at the API reference.
Humphrey Auth
Humphrey Auth is a simple authentication crate for Humphrey applications. It provides a simple and database-agnostic way to authenticate users and handle sessions and tokens. It does depend upon the `argon2`, `uuid` and `rand_core` crates to ensure that it is secure.
Humphrey Auth needs to be integrated into a full-stack Humphrey application with endpoints for all the authentication-related methods, such as signing in and out. Therefore, this guide does not provide step-by-step instructions on how to use it.
It is easiest to learn how to use Humphrey Auth from the full example. Alongside this, it may be useful to refer to the API reference for more information.
Note for Contributors
If you would like to add a step-by-step guide for Humphrey Auth, please open an issue. Your help would be greatly appreciated!