package loadbalancer
This package implements client side load balancing algorithms.
As an end-user, see the Balancers API to create instances which can be used to configure a Finagle client with various load balancing strategies.
As an implementor, each algorithm gets its own subdirectory and is exposed via the Balancers object. Several convenient traits are provided which factor out common behavior and can be mixed in (i.e. Balancer, DistributorT, NodeT, and Updating).
Type Members
- final class BalancerRegistry extends AnyRef
A registry of load balancers currently in use.
This class is thread-safe.
- See also
BalancerRegistry$.get()
TwitterServer's "/admin/balancers.json" admin endpoint.
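A minimal sketch of inspecting the registry at runtime; the allMetadata accessor and the printed output are assumptions based on the data surfaced by the admin endpoint:
import com.twitter.finagle.loadbalancer.BalancerRegistry
// Grab the process-global registry and dump what it knows about each
// balancer. allMetadata is assumed here; the same data backs
// TwitterServer's "/admin/balancers.json" endpoint.
val registry = BalancerRegistry.get()
registry.allMetadata.foreach { metadata =>
  println(metadata)
}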
- trait EndpointFactory[Req, Rep] extends ServiceFactory[Req, Rep]
A specialized ServiceFactory which admits that it backs a concrete endpoint. The extra information and functionality provided here is used by Finagle's load balancers.
- abstract class LoadBalancerFactory extends AnyRef
A thin interface around a Balancer's constructor that allows Finagle to pass in context from the stack to the balancers at construction time.
- See also
Balancers for a collection of available balancers.
The user guide for more details.
- final class Metadata extends AnyRef
Information about a load balancer.
This class is thread-safe. While the class itself is immutable, it proxies data from a Balancer which may be mutable.
- See also
TwitterServer's "/admin/balancers.json" admin endpoint.
- sealed abstract class PanicMode extends AnyRef
Value Members
- def defaultAddressOrdering: Ordering[Address]
Returns the default process global Address ordering as set via defaultAddressOrdering. If no value is set, Address.HashOrdering is used with the assumption that hosts resolved via Finagle provide the load balancer with resolved InetAddresses. If a separate resolution process is used, outside of Finagle, the default ordering should be overridden.
- def defaultAddressOrdering(order: Ordering[Address]): Unit
Set the default Address ordering for the entire process (outside of clients which override it).
- See also
LoadBalancerFactory.AddressOrdering for more info.
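As a minimal sketch of overriding the process-wide default (the ordering below, by the address's string form, is purely illustrative):
import com.twitter.finagle.Address
import com.twitter.finagle.loadbalancer.defaultAddressOrdering
// Illustrative ordering only: sort addresses by their string representation.
val byToString: Ordering[Address] = Ordering.by(_.toString)
// Applies to every client in the process, except clients that override
// the ordering themselves.
defaultAddressOrdering(byToString)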
- def defaultBalancerFactory: LoadBalancerFactory
Returns the default process global LoadBalancerFactory as set via defaultBalancerFactory.
- def defaultBalancerFactory(factory: LoadBalancerFactory): Unit
Set the default LoadBalancerFactory for the entire process (outside of clients which override it).
- See also
LoadBalancerFactory.Param for more info.
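A minimal sketch of replacing the process-wide default; Balancers.p2c() is used here only as an example of a factory returned by Balancers:
import com.twitter.finagle.loadbalancer.{Balancers, defaultBalancerFactory}
// Make power-of-two-choices the default for every client in the process.
// Clients configured with withLoadBalancer(...) still take precedence.
defaultBalancerFactory(Balancers.p2c())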
- object BalancerRegistry
- object Balancers
Constructor methods for various load balancers. The methods take balancer-specific parameters and return a LoadBalancerFactory that allows you to easily inject a balancer into the Finagle client stack via the withLoadBalancer method.
Example: configuring a client with a load balancer
$Protocol.client
  .withLoadBalancer(Balancers.aperture())
  .newClient(...)
- See also
The user guide for more details.
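The same pattern with a concrete protocol object; the Http client, the p2c choice, and the destination below are assumptions for illustration:
import com.twitter.finagle.Http
import com.twitter.finagle.loadbalancer.Balancers
// Build an HTTP client whose load balancer is power-of-two-choices.
// "example.com:80" is a placeholder destination.
val client = Http.client
  .withLoadBalancer(Balancers.p2c())
  .newService("example.com:80")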
- object FlagBalancerFactory extends LoadBalancerFactory
A LoadBalancerFactory proxy which instantiates the underlying factory based on flags (see flags.scala for applicable flags).
- object LoadBalancerFactory
Exposes a Stack.Module which composes load balancing into the respective Stack. This is mixed in by default into Finagle's com.twitter.finagle.client.StackClient. The only necessary configuration is a LoadBalancerFactory.Dest which represents a changing collection of addresses that is load balanced over.
- object PanicMode
Panic mode is when the LB gives up trying to find a healthy node. The LB sends the request to the last pick even if the node is unhealthy. For a given request, panic mode is enabled when the percent of nodes that are unhealthy exceeds the panic threshold. This percent is approximate. For pick-two-based load balancers (P2C* and Aperture*), interpret this to mean that 1% or less of requests will panic when the threshold is reached. When the percent of unhealthy nodes exceeds the threshold, the number of requests that panic increases exponentially. For round robin, this panic threshold percent does not apply because it is not a pick-two-based algorithm. Panic mode is disabled for the heap LB.
Please note that this doesn't mean that 1% of requests will fail, since Finagle clients have additional layers of requeues above the load balancer.
- object defaultBalancer extends GlobalFlag[String]
A GlobalFlag that changes the default balancer for every client in the process. Valid choices are 'heap', 'choice', 'aperture', and 'random_aperture'.
- Note
'random_aperture' should only be used in unusual situations such as testing instances and requires extra configuration. See the aperture documentation for more information. To configure the load balancer at per-client granularity instead, use the withLoadBalancer method like so:
val balancer = Balancers.aperture(...)
$Protocol.client.withLoadBalancer(balancer)
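A sketch of two ways the flag might be set; the fully qualified flag name is assumed from the usual GlobalFlag naming convention:
import com.twitter.finagle.loadbalancer.defaultBalancer
// In production the flag is normally passed on the command line, e.g.
//   -com.twitter.finagle.loadbalancer.defaultBalancer=aperture
// In tests, GlobalFlag.let scopes an override to a block:
defaultBalancer.let("aperture") {
  // clients built inside this block pick up the aperture default
}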
- object perHostStats extends GlobalFlag[Boolean]
A GlobalFlag which allows per-host (or endpoint) stats to be toggled. Note that these are off by default because they tend to be expensive, especially when the size of the destination cluster is large. However, they can be quite useful for debugging.