In a cluster deployment, the web server runs on only one node and serves as the load balancer. Requests to the service URL are directed to the web server. By default, the web server dispatches requests round-robin to the nodes in the cluster. However, a different load-balancing policy can be specified during web server configuration.
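Because SAS Web Server is based on Apache HTTP Server, the dispatch behavior can be pictured with a mod_proxy_balancer configuration. The following is a hypothetical sketch, not the configuration that SAS generates; the member host names and ports are placeholders.

<Proxy balancer://sas_cluster>
    # Each BalancerMember is one cluster node (placeholder URLs).
    BalancerMember http://node1.example.com:8080
    BalancerMember http://node2.example.com:8080
    # byrequests distributes requests round-robin by request count.
    ProxySet lbmethod=byrequests
</Proxy>
ProxyPass /SASMicroAnalyticService balancer://sas_cluster/SASMicroAnalyticService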
The SAS metadata server for each middle-tier node is specified during deployment. The middle-tier nodes can reference the same metadata server that the middle tier references. When that is the case, user management data and application properties that are set on the middle tier apply automatically to the middle-tier nodes. If the middle tier and the middle-tier nodes reference different metadata servers, any changes to user or application management data must be made in both metadata servers.
In contrast with the middle tier, the Instructions.html file for a middle-tier node includes neither a web service URL nor a section on validation steps for the web service. The web server directs requests to middle-tier nodes based on the load-balancing policy that is specified in its configuration.
To have the same node serve a group of requests, include the same route information in the HTTP request for each request in the group. Sticky sessions are enabled for the cluster by default. When a service request is made, the header section of the HTTP response includes a Set-Cookie header, such as the following:
Set-Cookie: c74b1b873e98ef08505dee685863e7b2_Cluster13=EC5213E970F06558E63F145001F64CEC.c74b1b873e98ef08505dee685863e7b2_SASServer13_1; Path=/SASMicroAnalyticService/; HttpOnly
The first item is a variable=value construct. The variable is the name of the session cookie. The value contains the session ID and the route, separated by a period.
To use the same node to serve a group of requests, extract the route information from the response to the first request of the group. For the second request through the last request, set the Cookie header with the sessionID and route value, similar to the following example:
EC5213E970F06558E63F145001F64CEC.c74b1b873e98ef08505dee685863e7b2_SASServer13_1
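The following Python sketch illustrates the pattern with the requests library. The service URL is a placeholder and the endpoint path is an assumption for illustration; the cookie itself is captured and replayed exactly as the web server issues it.

import requests

# Placeholder URL; substitute the web server URL for your deployment.
url = "http://webserver.example.com/SASMicroAnalyticService/rest/modules"

# First request of the group: the response carries the Set-Cookie header
# with the sessionID.route value for the node that served the request.
first = requests.get(url)

# requests parses Set-Cookie headers into a cookie jar. Replaying the jar
# on later requests sends the same sessionID.route back, so the web server
# routes every request in the group to the same node.
sticky = first.cookies
second = requests.get(url, cookies=sticky)
third = requests.get(url, cookies=sticky)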
Using the same node to serve a group of requests can be useful because it avoids errors that are caused by a delay in replicating content from one cluster node to another.
For example, suppose the cluster consists of two nodes, Node 1 and Node 2, and you want to deploy two modules, A and B, where B depends on A. Suppose A is a very large module that takes more than 20 seconds to compile. If A is deployed on Node 1, it must be replicated to Node 2 and then compiled on Node 2 before it is available there. If B is deployed to Node 2 before A is ready there, an error occurs. To avoid this type of error, set the cookie to tell the web server to use Node 1 to deploy B.
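Continuing the earlier sketch, deploying both modules on the same node might look like the following. The endpoint path and request bodies are assumptions for illustration; consult the REST documentation for your release for the actual request format.

import requests

# Placeholder URL; substitute the web server URL for your deployment.
url = "http://webserver.example.com/SASMicroAnalyticService/rest/modules"

# Deploy module A; the response cookie records which node compiled it.
resp_a = requests.post(url, json={"id": "A", "source": "..."})
sticky = resp_a.cookies

# Deploy module B with the same cookie so that the web server sends the
# request to the node where A has already been compiled.
resp_b = requests.post(url, json={"id": "B", "source": "..."}, cookies=sticky)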
Clustering relies on GemFire, a third-party distributed data management platform. GemFire persists data to files that are stored in the SAS/config/LevN/Web/WebAppServer/SASServer13_X/logs directory. The filenames contain the masgemfire substring. Those files should not be changed. Also, make sure that sufficient disk space is allocated to the SAS/config/LevN/Web/WebAppServer/SASServer13_X/logs directory so that the cache files can grow.
CAUTION:
These files should not be truncated or deleted, regardless of their size.
Sometimes the file
size might appear to be zero bytes. GemFire also uses the word BACKUP
in some of the filenames. Deleting or truncating these files deletes
the modules repository.
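To monitor how much disk space these files consume without touching them, a read-only report such as the following Python sketch can help. The path is the one shown above; LevN and SASServer13_X vary by deployment.

from pathlib import Path

# Adjust LevN and SASServer13_X to match your deployment.
logs = Path("SAS/config/Lev1/Web/WebAppServer/SASServer13_1/logs")

# Report each GemFire persistence file and its size. This reads only
# file metadata; the files themselves must never be truncated or deleted.
for f in sorted(logs.glob("*masgemfire*")):
    print(f"{f.name}: {f.stat().st_size} bytes")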
In a typical deployment, a middle-tier node uses the middle tier's GemFire locator. A locator is used in the peer-to-peer cache to discover other processes. If the whole cluster must be restarted, submit the commands to start the middle tier and the middle-tier nodes immediately one after another. The order does not matter.
Note: The GemFire locator must be started cleanly before the other nodes are started. The other nodes should then be started in a staggered manner to reduce the load on the GemFire locator. In addition, it is important to periodically back up the GemFire persistence storage for production systems.
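One hypothetical way to take such a backup is to archive the persistence directory, for example while the cluster is stopped. In the following Python sketch, the source path and archive name are placeholders; an actual backup strategy should follow your site's operational standards.

import shutil
import time

# Adjust LevN and SASServer13_X to match your deployment.
src = "SAS/config/Lev1/Web/WebAppServer/SASServer13_1/logs"

# Create a timestamped .tar.gz archive of the GemFire persistence files.
stamp = time.strftime("%Y%m%d-%H%M%S")
shutil.make_archive(f"gemfire-backup-{stamp}", "gztar", root_dir=src)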