Packages

package io


Type Members

  1. abstract class AbstractByteWriter extends ByteWriter

    An abstract implementation that implements the floating point methods in terms of integer methods.

  2. abstract class AbstractWriter[-A] extends Writer[A]

    Abstract Writer class for Java compatibility.

  3. abstract class Buf extends AnyRef

    Buf represents a fixed, immutable byte buffer with efficient positional access. Buffers may be sliced and concatenated, and thus be used to implement bytestreams.

    See also

    com.twitter.io.Buf.ByteArray for an Array[Byte] backed implementation.

    com.twitter.io.Buf.ByteBuffer for a nio.ByteBuffer backed implementation.

    com.twitter.io.Buf.apply for creating a Buf from other Bufs

    com.twitter.io.Buf.Empty for an empty Buf.
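    The slice-and-concatenate model above can be illustrated with a small self-contained sketch. Note this is a toy stand-in built on Vector[Byte], not the real com.twitter.io.Buf implementation; the method names mirror Buf's slice and concat for illustration only.

```scala
// Toy illustration of the Buf model: a fixed, immutable byte buffer that
// supports positional slicing and concatenation. This is a simplified
// stand-in, not the com.twitter.io.Buf API.
final case class ToyBuf(bytes: Vector[Byte]) {
  def length: Int = bytes.length
  // Returns the sub-range [from, until), analogous to Buf#slice.
  def slice(from: Int, until: Int): ToyBuf = ToyBuf(bytes.slice(from, until))
  // Joins two buffers, analogous to Buf#concat.
  def concat(other: ToyBuf): ToyBuf = ToyBuf(bytes ++ other.bytes)
}

object ToyBuf {
  def utf8(s: String): ToyBuf = ToyBuf(s.getBytes("UTF-8").toVector)
}

val hello = ToyBuf.utf8("hello, ")
val world = ToyBuf.utf8("world")
val joined = hello.concat(world)
val tail = new String(joined.slice(7, 12).bytes.toArray, "UTF-8")
```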

  4. trait BufByteWriter extends ByteWriter

    A ByteWriter that results in an owned Buf

  5. class BufInputStream extends InputStream
  6. final class Bufs extends AnyRef

    A Java adaptation of the com.twitter.io.Buf companion object.

  7. trait ByteReader extends AutoCloseable

    A ByteReader provides a stateful API to extract bytes from an underlying buffer, which in most cases is a Buf. This conveniently allows codec implementations to decode frames, specifically when they need to decode and interpret the bytes as a numeric value.

    Note

    Unless otherwise stated, ByteReader implementations are not thread safe.
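    The stateful extraction pattern can be sketched with a self-contained toy reader over an Array[Byte]. The names readByte and readIntBE mirror the style of the real trait, but this is an illustrative stand-in, not the ByteReader API.

```scala
// Simplified sketch of the ByteReader idea: stateful, positional extraction
// of numeric values from an underlying byte array. Illustrative only.
final class ToyByteReader(bytes: Array[Byte]) {
  private[this] var pos = 0
  def remaining: Int = bytes.length - pos
  // Read one byte and advance the cursor (this is what makes it stateful).
  def readByte(): Byte = { val b = bytes(pos); pos += 1; b }
  // Interpret the next four bytes as a big-endian Int.
  def readIntBE(): Int = {
    var acc = 0
    var i = 0
    while (i < 4) { acc = (acc << 8) | (readByte() & 0xff); i += 1 }
    acc
  }
}

val r = new ToyByteReader(Array[Byte](0, 0, 1, 1))
val decoded = r.readIntBE() // 0x00000101
```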

  8. trait ByteWriter extends AnyRef

    ByteWriters allow efficient encoding to a buffer. Concatenating Bufs together is a common pattern for codecs (especially on the encode path), but using the concat method on Buf results in a suboptimal representation. In many cases, a ByteWriter not only allows for a more optimal representation (i.e., sequentially accessible output regions), but also allows writes to avoid allocations. What this means in practice is that builders are stateful. Assume that builder implementations are "not" thread safe unless otherwise noted.
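    The builder pattern described above can be sketched with a toy writer that appends encoded values into one growing region instead of concatenating many small buffers. This is an illustrative stand-in built on java.io.ByteArrayOutputStream, not the ByteWriter API.

```scala
import java.io.ByteArrayOutputStream

// Simplified sketch of the ByteWriter idea: a stateful builder that encodes
// values sequentially into a single growing output region. Illustrative only.
final class ToyByteWriter {
  private[this] val out = new ByteArrayOutputStream()
  def writeByte(b: Int): this.type = { out.write(b & 0xff); this }
  // Write a big-endian Int as four bytes (pairs with a big-endian reader).
  def writeIntBE(i: Int): this.type = {
    writeByte(i >>> 24); writeByte(i >>> 16); writeByte(i >>> 8); writeByte(i)
  }
  // Hand over the accumulated bytes; mirrors "resulting in an owned buffer".
  def owned(): Array[Byte] = out.toByteArray
}

val w = new ToyByteWriter
w.writeIntBE(257)
val encoded = w.owned()
```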

  9. class InputStreamReader extends Reader[Buf] with Closable with CloseAwaitably

    Provides the Reader API for an InputStream.

    The given InputStream will be closed when Reader.read reaches the EOF or a call to discard() or close().
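    The close-on-EOF behavior can be sketched with a self-contained, synchronous stand-in: pull fixed-size chunks from an InputStream and close it when the end of the stream is observed. This is illustrative only, not the asynchronous Reader[Buf] API.

```scala
import java.io.{ByteArrayInputStream, InputStream}

// Sketch of the InputStreamReader idea: expose an InputStream as a pulled
// sequence of chunks, closing the underlying stream once EOF is reached.
def readChunk(in: InputStream, chunkSize: Int): Option[Array[Byte]] = {
  val buf = new Array[Byte](chunkSize)
  val n = in.read(buf)
  if (n == -1) { in.close(); None } // EOF observed: release the stream
  else Some(buf.take(n))
}

val in = new ByteArrayInputStream("abcdef".getBytes("UTF-8"))
val first = readChunk(in, 4).map(new String(_, "UTF-8"))
val second = readChunk(in, 4).map(new String(_, "UTF-8"))
val third = readChunk(in, 4)
```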

  10. final class Pipe[A] extends Reader[A] with Writer[A]

    A synchronous in-memory pipe that connects a Reader and a Writer in the sense that the reader's input is the writer's output.

    A pipe is structured as a smash of both interfaces, a Reader and a Writer, such that it can be passed directly to a consumer or a producer.

    def consumer(r: Reader[Buf]): Future[Unit] = ???
    def producer(w: Writer[Buf]): Future[Unit] = ???
    
    val p = new Pipe[Buf]
    
    consumer(p)
    producer(p)

    Reads and writes on the pipe are matched one to one and only one outstanding read or write is permitted in the current implementation (multiple pending writes or reads resolve into IllegalStateException while leaving the pipe healthy). That is, the write (its returned Future) is resolved when the read consumes the written data.

    Here is, for example, a very typical write-loop that writes into a pipe-backed Writer:

    def writeLoop(w: Writer[Buf], data: List[Buf]): Future[Unit] = data match {
      case h :: t => w.write(h).before(writeLoop(w, t))
      case Nil => w.close()
    }

    Reading from a pipe-backed Reader is no different from working with any other reader:

    def readLoop(r: Reader[Buf], process: Buf => Future[Unit]): Future[Unit] = r.read().flatMap {
      case Some(chunk) => process(chunk).before(readLoop(r, process))
      case None => Future.Done
    }

    Thread Safety

    It is safe to call read, write, fail, discard, and close concurrently. The individual calls are synchronized on the given Pipe.

    Closing or Failing Pipes

    Besides expecting a write or a read, a pipe can be closed or failed. A writer can both close and fail the pipe, while a reader can only fail the pipe via discard.

    The following behavior is expected with regards to reading from or writing into a closed or a failed pipe:

    • Writing into a closed pipe yields IllegalStateException
    • Reading from a closed pipe yields EOF (Future.None)
    • Reading from or writing into a failed pipe yields a failure it was failed with

    It's also worth discussing how pipes are closed. As closure is always initiated by the producer (writer), there is machinery that allows it to be notified when said closure is observed by the consumer (reader).

    The following rules should help in reasoning about closure signals in pipes:

    • Closing a pipe with a pending read resolves said read into EOF and returns a Future.Unit
    • Closing a pipe with a pending write fails said write with IllegalStateException and returns a future that will be satisfied when a consumer observes the closure (EOF) via read
    • Closing an idle pipe returns a future that will be satisfied when a consumer observes the closure (EOF) via read
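    The one-to-one matching rule (a write resolves only when a read consumes it) can be modeled with a toy pipe. This sketch uses scala.concurrent as a stand-in for com.twitter.util, reads synchronously, and models only the matching and single-pending-write rules, so it is far simpler than the real Pipe.

```scala
import scala.concurrent.{Future, Promise}

// Toy model of one-to-one read/write matching: the write's Future is
// resolved only when a read consumes the written element.
final class ToyPipe[A] {
  private[this] var pending: Option[(A, Promise[Unit])] = None

  def write(a: A): Future[Unit] = synchronized {
    if (pending.isDefined)
      // Mirrors: multiple pending writes resolve into IllegalStateException.
      Future.failed(new IllegalStateException("only one pending write permitted"))
    else {
      val p = Promise[Unit]()
      pending = Some((a, p))
      p.future // resolved when a reader consumes `a`
    }
  }

  def read(): Option[A] = synchronized {
    val result = pending.map { case (a, p) => p.success(()); a }
    pending = None
    result
  }
}

val p = new ToyPipe[String]
val written = p.write("chunk")
val beforeRead = written.isCompleted  // no read yet: write is outstanding
val chunk = p.read()
val afterRead = written.isCompleted   // read consumed the data: write resolves
```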

  11. trait Reader[+A] extends AnyRef

    A reader exposes a pull-based API to model a potentially infinite stream of arbitrary elements.

    Given the pull-based API, the consumer is responsible for driving the computation. A very typical code pattern to consume a reader is to use a read-loop:

    def consume[A](r: Reader[A])(process: A => Future[Unit]): Future[Unit] =
      r.read().flatMap {
        case Some(a) => process(a).before(consume(r)(process))
        case None => Future.Done // reached the end of the stream; no need to discard
      }

    One way to reason about the read-loop idiom is to view it as a subscription to a publisher (reader) in Pub/Sub terminology. Perhaps the major difference between readers and traditional publishers is that readers only allow one subscriber (the read-loop). It's generally safest to assume the reader is fully consumed (the stream is exhausted) once its read-loop is run.

    Error Handling

    Given the read-loop above, its returned Future could be used to observe both successful and unsuccessful outcomes.

    consume(stream)(processor).respond {
      case Return(()) => println("Consumed an entire stream successfully.")
      case Throw(e) => println(s"Encountered an error while consuming a stream: $e")
    }

    Note

    Once failed, a stream cannot be restarted: all future reads will resolve into a failure. There is no need to discard an already-failed stream.

    Resource Safety

    One of the important implications of readers, and streams in general, is that they are prone to resource leaks unless fully consumed or discarded (failed). Specifically, readers backed by network connections MUST be discarded unless already consumed (EOF observed) to prevent connection leaks.


    The read-loop above, for example, exhausts the stream (observes EOF) hence does not have to discard it (stream).

    Back Pressure

    The pattern above leverages Future recursion to exert back-pressure by allowing only one outstanding read. It's usually a good idea to structure consumers this way (i.e., the next read isn't issued until the previous read finishes). This ensures finer-grained back-pressure in network systems, allowing consumers to artificially slow down producers rather than relying solely on transport and IO buffering.


    Whether or not multiple outstanding reads are allowed on a Reader type is undefined behaviour, but this could be specified in a refinement.

    Cancellations

    If a consumer is no longer interested in the stream, it can discard it. Note a discarded reader (or stream) can not be restarted.

    def consumeN[A](r: Reader[A], n: Int)(process: A => Future[Unit]): Future[Unit] =
      if (n == 0) Future(r.discard())
      else r.read().flatMap {
        case Some(a) => process(a).before(consumeN(r, n - 1)(process))
        case None => Future.Done // reached the end of the stream; no need to discard
      }
  12. class ReaderDiscardedException extends Exception

    Indicates that a given stream was discarded by the Reader's consumer.

  13. final class Readers extends AnyRef

    Java APIs for Reader.

    See also

    com.twitter.io.Reader

  14. trait TempFolder extends AnyRef

    Test mixin that creates a temporary thread-local folder for a block of code to execute in.

    Test mixin that creates a temporary thread-local folder for a block of code to execute in. The folder is recursively deleted after the test.

    Note that multiple uses of TempFolder cannot be nested, because the temporary directory is effectively a thread-local global.
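    The create-run-delete pattern TempFolder provides can be sketched with java.nio.file. This stand-in omits the thread-local bookkeeping the real mixin uses (which is what makes it non-nestable); the name withTempFolder is illustrative.

```scala
import java.nio.file.{Files => NioFiles, Path}
import java.util.Comparator

// Sketch of the TempFolder pattern: create a temporary directory, run a
// block against it, then recursively delete it afterwards.
def withTempFolder[T](f: Path => T): T = {
  val dir = NioFiles.createTempDirectory("temp-folder-sketch")
  try f(dir)
  finally {
    // Depth-first (reverse) order so files go before their parent directories.
    val walk = NioFiles.walk(dir)
    try walk.sorted(Comparator.reverseOrder[Path]()).forEach(p => NioFiles.delete(p))
    finally walk.close()
  }
}

var captured: Path = null
val existedInside = withTempFolder { dir =>
  captured = dir
  NioFiles.exists(NioFiles.createFile(dir.resolve("scratch.txt")))
}
```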

  15. trait Writer[-A] extends Closable

    A writer represents a sink for a stream of arbitrary elements.

    A writer represents a sink for a stream of arbitrary elements. While Reader provides an API to consume streams, Writer provides an API to produce streams.

    Similar to Readers, a very typical way to work with Writers is to model a stream producer as a write-loop:

    def produce[A](w: Writer[A])(generate: () => Option[A]): Future[Unit] =
      generate() match {
        case Some(a) => w.write(a).before(produce(w)(generate))
        case None => w.close() // signal EOF and quit producing
      }

    Closures and Failures

    Writers are Closable and the producer MUST close the stream when it finishes or fail the stream when it encounters a failure and can no longer continue. Streams backed by network connections are particularly prone to resource leaks when they aren't cleaned up properly.

    Observing stream failures in the producer can be done via the Future returned from a write-loop:

    produce(writer)(generator).respond {
      case Return(()) => println("Produced a stream successfully.")
      case Throw(e) => println(s"Could not produce a stream because of a failure: $e")
    }

    Note

    Encountering a stream failure would terminate the write-loop given the Future recursion semantics.


    Once failed or closed, a stream cannot be restarted: all future writes will resolve into a failure.


    Closing an already failed stream does not have an effect.

    Back Pressure

    By analogy with read-loops (see the Reader API), write-loops leverage Future recursion to exert back-pressure: the next write isn't issued until the previous write finishes. This ensures finer-grained back-pressure in network systems, allowing producers to adjust their flow rate based on the consumer's speed rather than on IO buffering.


    Whether or not multiple pending writes are allowed on a Writer type is undefined behaviour, but this could be specified in a refinement.

  16. final class Writers extends AnyRef

    Java APIs for Writer.

    See also

    com.twitter.io.Writer

Value Members

  1. object Buf

    Buf wrapper-types (like Buf.ByteArray and Buf.ByteBuffer) provide Shared and Owned APIs, each of which comes with construction & extraction utilities.

    The Owned APIs may provide direct access to a Buf's underlying implementation, so mutating that data structure violates a Buf's immutability constraint. Users must take care to handle the data immutably.

    The Shared variants, on the other hand, ensure that the Buf shares no state with the caller (at the cost of additional allocation).

    Note: There are Java-friendly APIs for this object at com.twitter.io.Bufs.

  2. object BufByteWriter
  3. object BufReader
  4. object ByteReader
  5. object ByteWriter
  6. object Charsets

    A set of java.nio.charset.Charset utilities.

  7. object Files

    Utilities for working with java.io.File.

  8. object InputStreamReader
  9. object Pipe
  10. object Reader
  11. object StreamIO
  12. object TempDirectory
  13. object TempFile
  14. object Writer

    See also

    Writers for Java friendly APIs.
