More Nil Channel Exploits
A channel variable assigned nil blocks all read/write requests, causing a select statement to effectively ignore the case clause that references it. This behavior implements a form of dynamic (runtime) channel decoupling. By applying this mechanism, nil assigned channels within a goroutine can:
- Communicate feedback to other goroutines sharing the same channel. These other goroutines can respond to this feedback in order to realize dynamically scalable solutions.
- Permit a goroutine to dynamically favor processing messages from certain channels.
Dynamically Scalable Solutions
A goroutine can unilaterally disconnect (decouple) itself from one or more of its input channels by simply assigning nil to the ones it wishes to ignore. How could this be useful? Before answering this question, let me reveal the reason I enjoy programming in golang: feedback.
Feedback is extensively employed by natural systems in their regulation, as it represents the signaling necessary to maintain homeostasis (stability). More concretely, the human body constantly adjusts its metabolism in response to feedback generated by the differential between the body’s internal temperature and the external environment, in order to maintain a consistent body temperature (98.6°F) that facilitates efficient conversion of nutrients into compounds needed by our bodies. Without feedback and an ability to dynamically adapt behavior in response to it, I doubt our world would have developed complex organisms, let alone their diversity. Due to the significance I assign to feedback, I’m drawn to mechanisms, such as golang channels, that facilitate its detection and manipulation.
In golang, synchronous channels and full asynchronous (buffered) ones represent conduits that can conduct feedback, potentially in both directions, enabling, for example, a downstream goroutine to communicate its ability to consume an upstream goroutine’s outputs. Specifically, a blocked channel write in an upstream goroutine conveys feedback indicating that the consuming downstream goroutine cannot keep pace with the upstream one’s production rate. In this situation, the design can either “naturally” halt the upstream goroutine by waiting for the blocked write to clear, or employ this feedback to horizontally scale, starting another downstream goroutine to assist the one already running. The second choice seems more suitable for adaptive, reliable systems, as the decision to horizontally scale reflects the dynamics of the situation, triggered by feedback, instead of depending on a prescriptive, statically allocated worker pool of downstream goroutines. So let’s explore a more concrete example.
Suppose a design crafted two logical goroutines, such that the upstream one produces connection handles while the downstream one stores these handles to release them later in a connection’s life cycle. Since the application expects many connections and a decision has been made to reduce operational costs through the elimination of servers, the design must remain responsive as it nears complete exhaustion of the server’s resources. Because virtual memory thrashing is a well known and expected behavior of OSes as memory consumption approaches full utilization, it’s imperative that every data structure limit itself to only a few virtual memory pages. Therefore, the logical downstream goroutine that stores the connection handles cannot rely on running a lone physical goroutine that allocates a single large array, as swapping in/out large chunks of consecutive memory blocks to operate on only a few of their elements promotes thrashing. Instead, it must implement a means of storing and distributing the connection handles across many potentially discontinuous virtual memory pages. One possible solution would involve running a number of physical goroutines, each responsible for a subset of connections, such that the maximum array size is less than some small number of virtual memory pages.
To implement this solution and encapsulate the decision within the downstream goroutine, this goroutine could assign nil to the input channel variable that delivers connection handles from the upstream goroutine, once it has stored the designated number of connection handles intended to defer thrashing. This assignment removes the downstream goroutine from the channel fed by the upstream one, and this separation is communicated as feedback, conducted through the channel and realized as a blocked write operation in the upstream goroutine. At this juncture, any goroutine monitoring the channel, whether the upstream, downstream, or perhaps an intermediate goroutine, could detect the blocked channel and automatically spawn a replacement downstream goroutine. By considering the feedback introduced by the nil channel mechanism and responding to it, the application continues operating by naturally dividing the connection requests among the necessary number of goroutine instances, obsolescing solutions based on static assumptions that may require more time to consider and encode.
Dynamically Favor Messages
Perhaps there’s a requirement to read messages delivered by channels in an order different from how the messages were written to them, or from the order chosen by the default semantics of golang’s select statement. In other words, there’s a desire to apply a bias when reading from a goroutine’s input channels, favoring the messages delivered by certain ones. To achieve the necessary input message ordering, one encodes a bias function by:
- Saving the values of all channel variables to be affected by the bias function before applying it. These copies enable the bias function to restore the channel variables to their original values.
- Applying nil to selectively “turn off” (bias) one or more channels, thereby ignoring their messages.
- Restoring selected previously nilled channels to once again favor their input.
The words favor and bias were employed above to differentiate this technique from the notion of message priority. Although priority implies an ordering, the discrete nature of bias/favor processing, which requires the golang select statement to return control to the bias function in order to influence the outcome of the next execution of the select statement, fails to provide a continuous comparison of the significance of each delivered unconsumed message against all other known unconsumed messages delivered at the same time. If desired, a more faithful form of message priority can be simulated by affecting the probability function applied by select (see Continuous Priority below). Let me arc this tangent back to the bias function discussion.
The example output and code below contrast the ordering produced by golang’s native pseudo random selection method with a bias function encoded using nil channels. Through the use of the modulus operator, the bias function imposes an ordering on channel input, requiring a message be read from one channel before reading two messages from a second one. It first nils the second channel, causing select to block until the first channel receives a message or closes. Once one of these conditions occurs, the bias function intervenes to restore the second channel variable to its original state while nilling the first. This permits the consumption of the next message from the second channel while the first one remains blocked. Since the bias function executes for each consumed message, it repeats the same operations required to bias the first channel. After consuming the second message from the second channel, the bias function alternates to begin repeating the complete pattern.
The example below has been specifically encoded to demonstrate the bias function. For example, it relies on buffered channels loaded with the correct number of messages to satisfy consumption from both channels even when the select probability function randomly chooses all messages from either the first or second channel (1 in ~2 million). Therefore, when implementing a similar bias function, be thoughtful of this and other subtle behaviors:
- If the value of at least one of the below goroutine’s channel arguments is nil, its select statement will block forever either immediately or after processing a single message.
- Selective consumption can transmit feedback to the upstream goroutines, potentially causing their execution to “synchronize” when producing their outputs according to the message frequency needed to satisfy the ordering encoded by the bias function of the downstream goroutine. This lockstep behavior seems to be an example of a coupled oscillator. Also, this behavior isn’t limited to selective consumption; it also arises from a downstream goroutine’s sluggish consumption compared to the production rate of its connected upstream peers. A whole other post to consider.
- The complexity of implementing the bias function quickly grows with the addition of three or more channels.
When viewing the output below, the leftmost column displays the ordering of messages produced by golang’s native pseudo random selection method for messages that simultaneously appear at both channels. Notice the output doesn’t reflect a rigid ordering of type 1 messages followed by type 2, as each decision independently relies on a pseudo random coin toss. Also, as with a completely random toss, there are occasional runs of flips that are all heads or all tails. However, over a larger set of message consumption events, the total counts of both message types should approach the same value.
In contrast to the leftmost column, the ordering of the rightmost message type values reflects the stringent frequency encoded by the bias function.
The complete example program is available on the Go Playground.
Continuous Priority
A method of implementing a more continuous form of selective message processing involves changing select’s default probability distribution for each case clause to reflect a desired probability value. As mentioned above, golang randomly chooses a single case clause from the set of case clauses defined by select whose channel expressions evaluate to a “ready” (not blocked) channel. To illustrate, consider a situation involving two unique channels, each encoded as a separate case with both simultaneously offering a message. Given these conditions, the total selection outcome population is two. Golang chooses the resultant case clause by executing a probability function that assigns equal probability to all outcomes. Therefore, given a population of two outcomes, the probability of selecting each outcome is one in two: a coin toss. To reinforce this understanding, suppose the select statement consisted of three case clauses each referring to three unique channels and all three channels were ready at the same time. The outcome population would be three, resulting in a selection probability of one in three for each case (channel). However, if only two of this example’s three case clauses were ready, the outcome population would be two, improving the odds of selection to one in two for these outcomes. Given this understanding, one can manipulate the probability assigned to a select statement’s outcomes (case clauses).
In populations where each outcome can be independently mapped to another concept, for example, each case clause can be mapped to a channel expression, one can superimpose instances of this other concept (channel) on existing outcomes (case clauses), enabling the probability function applied to the outcomes to also relate to the superimposed concept. For example, suppose three case clauses independently map to three unique channels, with each channel in the ready state. In this situation, the probability of selecting a case clause, an outcome, is one in three. Since channels are uniquely mapped to case clauses, the outcome probability for instances of the superimposed channel concept is also one in three. However, what if the mapping between the case clause and channel instances isn’t unique?
Constructing a series of case clauses such that several of them refer to the same channel increases the probability of this channel’s selection when compared to other case clauses that each refer to a unique channel. Duplicating a superimposed outcome is analogous to loading a six sided die with three faces assigned the same number of pips. Although the probability that the die lands on any given side remains one in six, the actual pip outcome has dramatically changed. The probability of rolling the duplicated pip pattern has increased from one in six to three in six, it’s no longer possible to roll the pip patterns discarded to accommodate the repeated one, and although each remaining unique pattern retains its individual probability of one in six, each is three times less likely to appear than the duplicated pip faces.
Therefore, to increase a channel’s selection probability, repeat its case clause until its probability suggests the desired message output frequency. For example, if the nil bias function presented above did not require stringent ordering, it could be expressed as a probability function that prioritized type 2 messages two out of three times and type 1 messages one out of three times. In other words, type 2 messages should appear twice as often as type 1 messages. These desired probabilities can be more simply and directly encoded by repeating the case clause which consumes type 2 messages instead of relying on nil channels and the modulus operator required to compose a distinct bias function.
So what are the limitations of this technique?
- The solution approaches the desired outcome frequency only as the number of events increases. Event sets of limited size won’t usually reflect the desired pattern. For example, Run 1 above, given its limited event set size, fortunately reflects the desired two to one ratio of type 2 messages to type 1. However, a second execution, labeled Run 2, demonstrates four times as many type 2 messages as type 1, a selection frequency twice the desired one for type 2 messages.
- Increasing the number of involved channels beyond two generates combinatorial complexity when wishing to define distinct probability functions for outcome pairings ((3 choose 2) = 3) in addition to the set of all outcomes (all case clauses, (3 choose 3) = 1), for a total of (3 choose 3) + (3 choose 2) = 4 distinct functions. Expressing a distinct probability function for every outcome pairing is beyond the ability of a single static select statement. A single select can only apply a single probability function over its entire list of case clauses.
- There’s a performance penalty in situations involving disjoint message delivery. If only a single channel is deemed ready when evaluating select, and this channel is referred to by more than one case clause, golang still executes the probability function even though selecting any of the replicated case clauses will result in the same outcome. A golang benchmark indicates this penalty doubles the runtime cost when compared to the baseline. However, the baseline cost is negligibly small, and doubling this figure remains so. To demonstrate how negligible: applying the probability function to each message when processing 2×10¹⁴ (two hundred trillion) messages consumed an additional ~0.4 seconds of total execution time when compared to the baseline total of ~0.4 seconds.
- As previously mentioned when discussing potential pitfalls of the nil bias function, selective channel processing can induce feedback that may affect the execution frequency of upstream goroutines.
After reading Francesc Campoy’s Why are there nil channels in Go?, I wrote a response to his insightful article suggesting nil could support specifying optional input/output channels for goroutines, but then realized the additional techniques outlined above, whose extended explanation coaxed this post.
I do not claim to be the originator of the strategies explored by this post. Since web searches will reveal any number of creators, it’s impossible to attribute them to a single one.