3.9 library(http/http_dyn_workers): Dynamically schedule HTTP workers.
Most code doesn't need to use this library directly; instead use library(http/http_server), which combines this library with the typical HTTP libraries that most servers need.
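As an illustrative starting point, a server built on library(http/http_server) gets dynamic worker scheduling without further configuration. The sketch below is not from the manual; the path, port and reply are examples only.

    % Minimal sketch: library(http/http_server) loads this library, so the
    % worker pool of this server is scheduled dynamically.
    :- use_module(library(http/http_server)).

    :- http_handler(root(hello), say_hello, []).

    say_hello(_Request) :-
        format('Content-type: text/plain~n~n'),
        format('Hello world~n').

    server :-
        http_server([port(8080)]).

After calling `?- server.`, the example handler is reachable at http://localhost:8080/hello.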
This module defines hooks into the HTTP framework to dynamically schedule worker threads. Dynamic scheduling relieves us from finding a good value for the size of the HTTP worker pool.
The decision to add a worker follows these rules:
- If the load average caused by the worker threads exceeds http:max_load, no worker is added.
- Wait for some time, depending on how close we are to the http:max_workers limit.
- If the worker is still needed, add it.
The policy depends on three settings; the sketch after this list shows how to adjust them.

- http:max_workers
  The maximum number of workers that will be created. Default is 100.
- http:worker_idle_limit
  The number of seconds a dynamic worker waits for a new job. If no job arrives in time it terminates. Default is 10 seconds.
- http:max_load
  Maximum load average created by the HTTP server, i.e., the amount of CPU time consumed per second. Default is 10.
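These settings are managed through library(settings), so they can be adjusted programmatically once the library that declares them is loaded. A minimal sketch; the chosen values are arbitrary examples, not recommendations.

    % Sketch: tuning the dynamic worker policy.  The setting names come from
    % the list above; the values are examples only.
    :- use_module(library(http/http_server)).   % loads http_dyn_workers, which defines the settings
    :- use_module(library(settings)).

    :- set_setting(http:max_workers, 50).        % never create more than 50 workers
    :- set_setting(http:worker_idle_limit, 30).  % idle dynamic workers exit after 30 seconds
    :- set_setting(http:max_load, 5).            % stop adding workers at load average 5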
- [multifile] http:schedule_workers(+Dict)
  Called if there is no immediately free worker to handle the incoming request. The request is forwarded to the thread __http_scheduler, as the hook is called in time-critical code.
3.9.1 Providing Server-Sent Events (SSE)
Server-Sent Events (SSE) allow setting up a simple event stream from the server to the client. They serve roles similar to long polling and web sockets, enabling the server to notify its clients of some event. Long polling uses a normal (usually GET) HTTP request that blocks for a long time on the server. The server finishes the request when it wants to notify the client, or after some time (e.g., a minute) to avoid a timeout on the client or some proxy. After receiving an event or a timeout, the client repeats the request. Web sockets upgrade the socket used for a normal HTTP request into a bi-directional open communication channel that exchanges encapsulated messages in both directions. Server-Sent Events open a normal HTTP channel over which the server can send simple text messages using a format similar to the HTTP header: a sequence of Name: Value lines followed by two newlines. Unlike long polling, the request does not complete after a message.
Following the MDN documentation, an SSE request can be served using the simple example below, which generates an event counting the minutes. Note the handler declaration that processes the request on a new thread and disables the timeout for this location. Also note that this implementation uses a thread per client, a design that limits scalability.
    :- use_module(library(http/http_server)).   % provides http_handler/3 and the server

    % spawn([]): handle the request on a new thread;
    % time_limit(infinite): disable the timeout for this location.
    :- http_handler(root(events), events,
                    [ spawn([]),
                      time_limit(infinite)
                    ]).

    events(_Request) :-
        format('X-Accel-Buffering: no\r\n\c
                Content-Type: text/event-stream\r\n\c
                Cache-Control: no-cache\r\n\r\n'),
        between(1, infinite, Min),
        format('event: minute~n'),
        format('data: {"minute": ~d}~n~n', [Min]),
        flush_output,
        sleep(60),
        fail.
Of course, rather than using sleep/1 to decide when to fire the next event, this thread typically has to wait for events in the application. This can be achieved using thread_wait/2 or message queues.
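One possible shape for that is sketched below: each SSE connection registers its own message queue and blocks on it with thread_get_message/3, while an application predicate posts events to all registered queues. The predicates sse_client/1 and sse_broadcast/1 are illustrative names, not part of the HTTP libraries; handler registration and the extra headers are as in the example above.

    % Hedged sketch: one message queue per connected SSE client.  The event
    % loop waits up to 60 seconds for an application event; if none arrives
    % it sends an SSE comment line as a keep-alive.
    :- dynamic sse_client/1.

    events(_Request) :-
        format('Content-Type: text/event-stream\r\n\r\n'),
        message_queue_create(Queue),
        setup_call_cleanup(
            assertz(sse_client(Queue)),
            event_loop(Queue),
            (   retract(sse_client(Queue)),
                message_queue_destroy(Queue)
            )).

    event_loop(Queue) :-
        repeat,
          (   thread_get_message(Queue, event(Name, Data), [timeout(60)])
          ->  format('event: ~w~ndata: ~w~n~n', [Name, Data])
          ;   format(': keep-alive~n~n')   % SSE comment, keeps proxies from closing the stream
          ),
          flush_output,
          fail.

    %!  sse_broadcast(+Event) is det.
    %
    %   Post Event, e.g. event(minute, 42), to every connected client.
    sse_broadcast(Event) :-
        forall(sse_client(Queue), thread_send_message(Queue, Event)).

Because every client owns its own queue, sse_broadcast/1 reaches all of them; a single shared queue would deliver each event to only one waiting thread. When a client disconnects, writing to the closed stream raises an exception and setup_call_cleanup/3 unregisters and destroys the queue.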