Proxy and Cache Configuration
EdgePeak's strength lies in its caching capabilities and its performance when serving content.
In this role, EdgePeak serves clients with content available on an upstream server, acting as a reverse proxy. It ships with sensible defaults so that it works out of the box: connection-related headers (e.g., Connection, Transfer-Encoding, ...) are filtered properly, so you don't have to handle them yourself. The cache key is also managed automatically by default; for example, ranges are included in the cache key when range requests are forwarded to the origin, as are conditional request headers when those are forwarded. You can therefore focus on your own use cases and workflow, such as adding or erasing security- and authentication-related headers.
EdgePeak can also modify headers and bodies as needed.
Defining an upstream group
An upstream group is a set of servers that all serve identical content and share similar properties regarding error handling, retries, timeouts, and so on. The upstream group has a name that is (i) used to reference the group when proxying content in a handler and (ii) reported in metrics and logs.
As with vHosts, you can take an alias (a reference) to an upstream group to make the configuration easier to write.
config.upstreams["origins"] = {
.endpoints = {"http://127.0.0.1:8100"}
};
auto& u1 = config.upstreams["origins"];
u1.load_balancing = balancing::rendez_vous;
u1.max_persistent_idle_sessions = 56;
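A group will typically list several endpoints serving the same content. The sketch below is a variation on the example above, assuming that .endpoints accepts several servers; the group name "origins-ha" and the addresses are placeholders, and only fields already shown above are used.

// Hypothetical multi-endpoint group: both servers are assumed to serve identical content.
config.upstreams["origins-ha"] = {
    .endpoints = {"http://10.0.0.1:8100", "http://10.0.0.2:8100"}
};
config.upstreams["origins-ha"].load_balancing = balancing::rendez_vous; // Spread requests across the endpoints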
Once an upstream group is defined, the request can be forwarded to the upstream. The response will be cached according to the standard policy (RFC 9111) that the upstream server indicates in its Cache-Control header.
config.vhost["vh1"] << route(GET | HEAD, "/bpk-tv/.*", [](
client_request req, client_reply rpy) -> future<>{
req.filter_args("minrate", "maxrate", "b"); // Filter query args to keep only those
req.remove_header("Cookie"); // Do not forward cookie to upstream server
co_await rpy.set_proxy(req, "origins");
});
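Building on the example above, the sketch below illustrates the header adjustments mentioned earlier: erasing a client authentication header and adding an internal one before proxying. remove_header and set_proxy appear in the previous example; the "/secure/.*" route, the header values, and add_header are hypothetical and used here only for illustration.

config.vhost["vh1"] << route(GET | HEAD, "/secure/.*",
    [](client_request req, client_reply rpy) -> future<> {
        req.remove_header("Authorization");              // Erase the client credentials
        req.add_header("X-Internal-Auth", "token123");   // Hypothetical helper: add an internal header
        co_await rpy.set_proxy(req, "origins");          // Forward to the upstream group defined above
    });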