Commits


one FastCGI connection per client

FastCGI is designed to multiplex requests over a single connection, so ideally the server could open just one connection per worker to the FastCGI application and be done with it. However, that kind of multiplexing makes the code on the gmid side harder to follow and easier to break or leak. OpenBSD's httpd seems to open one connection per client, so why can't we do the same? One connection per request is still way better (lighter) than using CGI, and we avoid all the pitfalls of multiplexing (keeping track of "live" ids, shutting down properly, etc.)
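For context, this is the fixed eight-byte record header from the FastCGI specification (spec material, not gmid code): multiplexing works by tagging every record on a shared connection with the id of the request it belongs to, and that per-id bookkeeping is exactly what one connection per client avoids.

```c
/* FCGI_Header as defined by the FastCGI specification.  A multiplexing
 * server has to track which requestId values are still "live" on each
 * shared connection; with one connection per client there is only ever
 * one outstanding request, so none of that bookkeeping is needed. */
typedef struct {
	unsigned char version;
	unsigned char type;
	unsigned char requestIdB1;
	unsigned char requestIdB0;
	unsigned char contentLengthB1;
	unsigned char contentLengthB0;
	unsigned char paddingLength;
	unsigned char reserved;
} FCGI_Header;
```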


plug a memory leak

c->req is set in client_read but never deallocated.


fmt


libevent2 fix: unfreeze the client evbuffer

libevent2 has this concept of "frozenness" of a buffer: it's a way to avoid accidentally writing or removing data at the wrong "edge" of the buffer. The client_tls_{read,write} functions need to add/drain data from the opposite edge, hence the need for the unfreeze call. This is the minimum change needed to work on libevent2 too. Another way would be to define evbuffer_{un,}freeze as NOPs on libevent 1, but that's ugly IMHO.
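A standalone illustration of the freeze/unfreeze behaviour, using only libevent2's public evbuffer API (this is not gmid's code):

```c
/* libevent2 refuses to modify a frozen edge of an evbuffer; the fix is
 * to unfreeze the edge we legitimately need to touch. */
#include <event2/buffer.h>
#include <stdio.h>

int
main(void)
{
	struct evbuffer *buf = evbuffer_new();

	evbuffer_freeze(buf, 0);		/* freeze the end of the buffer */
	if (evbuffer_add(buf, "hi", 2) == -1)
		puts("add failed: the end is frozen");

	evbuffer_unfreeze(buf, 0);		/* what the fix does */
	if (evbuffer_add(buf, "hi", 2) == 0)
		puts("add ok after unfreeze");

	evbuffer_free(buf);
	return 0;
}
```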


new I/O handling on top of bufferevents

This is a big change in how gmid handles I/O. Initially we used a hand-written loop over poll(2), which then evolved into something powered by libevent's basic API. This meant there were a lot of small "asynchronous" functions that each did one step, eventually scheduling their own re-execution, calling each other in a chain. The new implementation revolves completely around libevent's bufferevents. It's clearer, as everything is implemented around the client_read and client_write functions. There is still room for improvement, like adding timeouts for one, but it's solid enough to be committed as is and then improved further.
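A rough sketch of the shape this describes; the callback and function names are assumptions rather than gmid's actual code, but bufferevent_new and bufferevent_enable are the real libevent 1 API:

```c
/* All per-client I/O hangs off one bufferevent: libevent invokes
 * client_read when data arrives and client_write when the output
 * buffer drains, instead of a chain of small one-step callbacks. */
#include <event.h>

static void
client_read(struct bufferevent *bev, void *d)
{
	/* parse the request from the input buffer, start the reply */
}

static void
client_write(struct bufferevent *bev, void *d)
{
	/* output drained: queue more of the response, or finish */
}

static void
client_error(struct bufferevent *bev, short error, void *d)
{
	/* EOF, timeout or error: tear the client down */
}

static void
client_accept(int fd, void *data)
{
	struct bufferevent *bev;

	bev = bufferevent_new(fd, client_read, client_write,
	    client_error, data);
	bufferevent_enable(bev, EV_READ | EV_WRITE);
}
```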


fastcgi completely asynchronous

This changes the fastcgi implementation from blocking I/O to an async implementation on top of libevent's bufferevents. It should improve the responsiveness of gmid, especially when using remote fastcgi applications.
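For what "asynchronous" implies on the backend side, here is a hedged sketch (assumed names and structure, not gmid's code) of connecting to a local FastCGI socket without blocking; the returned fd would then be wrapped in a bufferevent just like a client connection:

```c
/* Non-blocking connect to a FastCGI socket: connect(2) may return
 * EINPROGRESS, and completion is observed through the event loop
 * instead of blocking the server process. */
#include <sys/socket.h>
#include <sys/un.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

static int
fcgi_connect(const char *path)
{
	struct sockaddr_un sun;
	int fd;

	if ((fd = socket(AF_UNIX, SOCK_STREAM | SOCK_NONBLOCK, 0)) == -1)
		return -1;

	memset(&sun, 0, sizeof(sun));
	sun.sun_family = AF_UNIX;
	strlcpy(sun.sun_path, path, sizeof(sun.sun_path));

	if (connect(fd, (struct sockaddr *)&sun, sizeof(sun)) == -1 &&
	    errno != EINPROGRESS) {
		close(fd);
		return -1;
	}
	return fd;
}
```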


initialize mbufhead


fix possible out-of-bound access

While computing the parent directory, an out-of-bound access can occur, which usually means the server process dies. In particular, it can be triggered by requesting a non-existent file in the root of a virtual host when the path matches the `cgi` pattern. Thanks cage for helping with the debugging!
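The commit doesn't include the code, but here is a purely illustrative sketch of the general bug class it describes (not gmid's actual code):

```c
/* Illustrative only: computing a "parent directory" by scanning
 * backwards for the last '/' can walk past the start of the buffer
 * when the path contains no slash at all (a file in the vhost root).
 * The "len > 0" guard is the kind of bound check that was missing. */
#include <string.h>

static void
parent_dir(char *path)
{
	size_t len = strlen(path);

	while (len > 0 && path[len - 1] != '/')
		len--;
	path[len] = '\0';
}
```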


style


drop unnecessary bzero

The whole struct client is already memset'd to 0 in do_accept. handle_handshake doesn't touch the request or iri buffer in the code path that leads to handle_open_conn (it does so only in the error path).


make the handling of missing SNI more explicit

Missing SNI (i.e. servname == NULL) is already handled correctly: puny_decode refuses to work on a NULL servname, c->domain remains the empty string, and everything flows as expected towards the error at the end. However, it's better to bail out early and make the handling of the missing-SNI case more explicit.
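A hedged sketch of the early bail-out shape (tls_conn_servername is the real libtls call; c->ctx, the err label and the surrounding flow are assumptions, not gmid's actual code):

```c
/* Sketch only: reject the missing-SNI case explicitly instead of
 * relying on puny_decode failing on a NULL servname further down. */
const char *servname;

if ((servname = tls_conn_servername(c->ctx)) == NULL) {
	/* no SNI: answer with the same error we'd reach anyway */
	goto err;
}
```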


relax openat rule: follow symlinks

O_NOFOLLOW acts only on *the last component*, so open("/foo/bar/baz", O_NOFOLLOW) fails only when baz itself is a symlink. Checking every path component is not viable.

gh issue #5 related (sort of)
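A standalone illustration of the O_NOFOLLOW semantics (hypothetical paths, not gmid code):

```c
/* Demonstrates that O_NOFOLLOW only rejects a symlink in the *last*
 * component: given dir/link -> somedir, the first open fails with
 * ELOOP while the second succeeds (assuming somedir/file exists). */
#include <fcntl.h>

int
main(void)
{
	int a, b;

	a = open("dir/link", O_RDONLY | O_NOFOLLOW);		/* fails: ELOOP */
	b = open("dir/link/file", O_RDONLY | O_NOFOLLOW);	/* ok: link is not the last component */

	return (a == -1 && b != -1) ? 0 : 1;
}
```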


style(9)-ify


gracefully shut down fastcgi backends

We need to delete the events associated with the backends, otherwise the server process won't ever quit. Here, we add a pending counter to every backend and shut down immediately the ones that aren't handling any client; otherwise we try to close them as soon as possible, i.e. when the connection to their last connected client is closed.
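A hedged sketch of the pending-counter idea (struct layout and function names are assumptions, not gmid's actual code):

```c
/* Every client using a backend bumps the counter; a shutdown request
 * only closes the backend once the counter drops back to zero. */
struct fcgi {
	int	fd;
	int	pending;	/* clients currently being served */
	int	want_close;	/* shutdown was requested */
};

static void
fcgi_close(struct fcgi *f)
{
	/* delete the libevent events, close f->fd, mark the slot free */
}

static void
fcgi_shutdown(struct fcgi *f)
{
	if (f->pending == 0)
		fcgi_close(f);		/* idle: shut down immediately */
	else
		f->want_close = 1;	/* defer until the last client is done */
}

static void
fcgi_release(struct fcgi *f)
{
	/* called whenever a client using this backend goes away */
	if (--f->pending == 0 && f->want_close)
		fcgi_close(f);
}
```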


strncpy -> strlcpy

Quoting strncpy(3):

    strncpy() only NUL terminates the destination string when the length
    of the source string is less than the length parameter.

strlcpy is more intuitive. This is another warning that gcc 8 found and clang didn't.
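A standalone illustration of the difference (not gmid code; note that strlcpy is a BSD extension, available in <string.h> on OpenBSD):

```c
/* With a source string that doesn't fit, strncpy leaves dst without a
 * terminating NUL, while strlcpy truncates but always terminates. */
#include <stdio.h>
#include <string.h>

int
main(void)
{
	char a[4], b[4];

	strncpy(a, "abcdef", sizeof(a));	/* a = 'a','b','c','d'  -- no NUL! */
	strlcpy(b, "abcdef", sizeof(b));	/* b = "abc" -- truncated, terminated */

	printf("%s\n", b);	/* printing a here would read past the buffer */
	return 0;
}
```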