Application and Server Lifecycle

Finatra establishes an ordered lifecycle when creating a c.t.app.App or a c.t.server.TwitterServer and provides methods which can be implemented to run or start core logic.

This is done for several reasons:

  • To ensure that flag parsing and module installation to build the object graph happen in the correct order, such that the injector is properly configured before a user attempts to access flags.
  • To ensure that object promotion and garbage collection are properly handled before the server accepts traffic.
  • To expose any external interface before reporting the server is “healthy”. Otherwise the server may report itself healthy before binding to a port, which may fail. Depending on how monitoring is configured (typically by polling the HTTP Admin Interface /health endpoint on some frequency), it could be some interval before the server is recognized as unhealthy when in fact it did not start properly because it could not bind to a port.

Thus you do not have access to the app or server main method. Instead, any logic should live in an override of an @Lifecycle-annotated method or in the app or server callbacks, c.t.inject.app.App#run or c.t.inject.server.TwitterServer#start, respectively.
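For example, a minimal sketch of an injectable TwitterServer that places its logic in the #start callback might look like the following (the class names are illustrative, and the callback's exact visibility is a sketch, not a definitive signature):

import com.twitter.inject.server.TwitterServer

object MyServerMain extends MyServer

class MyServer extends TwitterServer {

  // Called by the framework once the injector has been created and framework
  // start-up has otherwise completed; put any core logic here instead of in a
  // main method.
  override protected def start(): Unit = {
    // Any logic that would otherwise live in main goes here, e.g. kicking off
    // a background task. Instances from the object graph are available at this
    // point via `injector.instance[...]`.
  }
}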

Caution

If you override an @Lifecycle-annotated method you MUST first call super() in the method to ensure that framework lifecycle events happen accordingly.

With all lifecycle methods, it is important not to perform any blocking operations as doing so will prevent the server from starting.
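As a sketch of the Caution above, assuming postWarmup is one of the @Lifecycle-annotated methods available on the trait you extend (the exact set of methods depends on that trait), an override should call super first and avoid blocking:

import com.twitter.inject.server.TwitterServer

class MyLifecycleAwareServer extends TwitterServer {

  // Call super first so the framework's own lifecycle handling still runs,
  // then keep any added logic short and non-blocking.
  override protected def postWarmup(): Unit = {
    super.postWarmup()
    // lightweight, non-blocking additions only
  }
}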

Startup

At a high-level, the start-up lifecycle of a Finatra server looks like:

[Diagram: Finatra server start-up lifecycle (FinatraLifecycle.png)]

Shutdown

Upon graceful shutdown, all registered onExit {...} blocks are executed (see c.t.app.App#exits).

This includes closing the TwitterServer HTTP Admin Interface, firing the TwitterModuleLifecycle#singletonShutdown on all installed modules, and for extensions of the HttpServer or ThriftServer traits closing any external interfaces.

Important

Note that the order of execution for all registered onExit {...} blocks is not guaranteed. Thus it is up to implementors to enforce any desired ordering.

For example, suppose you have code which reads from a queue (via a “listener”), transforms the data, and then publishes (via a “publisher”) to another queue. When the main application is exiting, you most likely want to close the “listener” first to ensure that you transform and publish all available data before closing the “publisher”.

Assuming that the close() method of both returns a Future[Unit], e.g. like c.t.util.Closable, a way of doing this could be:

onExit {
   // Close the "listener" first so no new data is read from the queue...
   Await.result(listener.close())
   // ...then close the "publisher" once the remaining data has been published.
   Await.result(publisher.close())
}

In the example code above, we simply await the close of the “listener” first and then the “publisher”, thus ensuring that the “listener” closes before the “publisher”.

This is instead of registering separate onExit {...} blocks for each:

onExit {
   Await.result(listener.close())
}

onExit {
   Await.result(publisher.close())
}

In this example, the “publisher” could possibly be closed before the “listener”, meaning that the server would still be reading data but be unable to publish it.

Modules

Modules also provide hooks into the lifecycle, allowing instances provided to the object graph to be plugged into the overall application or server lifecycle. See the Module Lifecycle section for more information.
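For example, a sketch of a module that hooks into the lifecycle via the TwitterModuleLifecycle callbacks might look like the following (the module name and the bodies of the callbacks are illustrative; treat the exact signatures as a sketch):

import com.twitter.inject.{Injector, TwitterModule}

object MyClientModule extends TwitterModule {

  // Invoked after the injector has been created.
  override def singletonStartup(injector: Injector): Unit = {
    // e.g., eagerly resolve a singleton so misconfiguration fails at startup
  }

  // Invoked during graceful shutdown of the application or server.
  override def singletonShutdown(injector: Injector): Unit = {
    // e.g., close a client that this module provides to the object graph
  }
}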

More Information

As noted in the diagram in the Startup section, the lifecycle can be non-trivial, especially in the case of a TwitterServer. For more information on how to create an injectable c.t.app.App or a c.t.server.TwitterServer, see the Creating an injectable App and Creating an injectable TwitterServer sections.