Sender Silent

side walk

Robert, the following is an attempt to explain Sikaren computational theory in terms that are coherent to humans.

I shall first articulate a baseline by describing the typical forms of computer processors on Earth and the concepts that underpin them.

First, it must be understood that all human computer systems are fundamentally binary in nature. All data is ultimately represented in terms of ones and zeros, which translate electronically to "on" and "off" or "high voltage" and "low voltage" states. An electronic component called a transistor is the basic physical unit of computing. A transistor is a semiconductor used to either amplify or switch electrical current. In aggregate, this behavior can represent vast quantities of data, complex sets of instructions, or both at the same time.
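
If a demonstration helps, here is the idea in Python, one of your simpler terrestrial programming languages. The character and variable names are my own choices for illustration.

```python
# All data reduces to ones and zeros. The character 'R' (for Robert)
# is stored as the number 82, whose binary representation is 1010010.
value = ord('R')           # 82
bits = format(value, 'b')  # '1010010'
print(bits)

# Each 1 or 0 corresponds to a transistor held in an "on" or "off" state.
# Reading the bits back recovers the original number exactly.
assert int(bits, 2) == value
```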

Mathematically, everything a human computer can do may be accomplished via a few operations: addition, subtraction, multiplication, division, and bitwise manipulations such as shifts.

Simple processor architectures may not even implement multiplication or division directly but represent them as a series of addition, subtraction, or bit shift operations.
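
The shift-and-add scheme such architectures use can be sketched in Python; the function below is my own illustrative construction, not any particular processor's circuit.

```python
def multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only addition and bit
    shifts, as a processor lacking a hardware multiplier might."""
    result = 0
    while b:
        if b & 1:       # lowest bit of b is set: add the shifted a
            result += a
        a <<= 1         # shift a left (doubling it)
        b >>= 1         # shift b right (halving it)
    return result

print(multiply(6, 7))   # → 42
```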

Additionally, terrestrial computers operate on principles of boolean logic: true and false values. Two operands may be compared in a variety of ways to obtain a result. An "AND" gate will signal "true" if both operands pass the designated truth test. An "OR" gate will signal "true" if either operand passes the designated truth test. An "XOR" gate will signal "true" if exactly one of the operands passes the designated truth test. There are many others, but these three are the basis of virtually all computational comparison. They may also be combined with "NOT" tests to invert the desired result.
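
The three gates and their "NOT" inversion may be tabulated directly; this is a minimal sketch, with function names of my own choosing.

```python
def AND(a: bool, b: bool) -> bool:
    return a and b

def OR(a: bool, b: bool) -> bool:
    return a or b

def XOR(a: bool, b: bool) -> bool:
    return a != b   # true exactly when the operands differ

def NOT(a: bool) -> bool:
    return not a

# Print the full truth table for the three basic gates.
for a in (False, True):
    for b in (False, True):
        print(a, b, AND(a, b), OR(a, b), XOR(a, b))

# Combining NOT with AND yields the "NAND" gate, itself sufficient
# to build every other gate.
assert NOT(AND(True, True)) is False
```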

In terms of operation, a CPU may be understood as a pipeline, a cache, and a set of registers. The pipeline is the inbound source of instructions for the CPU to execute. The cache may vary widely in size but ultimately exists to store future pipeline instructions as well as to retain commonly used sequences of instructions so they may be executed again more quickly. Registers are temporary storage and are the location where CPU instructions are actually performed. Generally, a CPU instruction consists of an operation and two operands, each operand stored in a separate register. The result is stored in an output register. There are many other possible arrangements, but for the sake of simplicity, I shall leave it at that.

A straightforward CPU design involves a simple pipeline that processes instructions one by one in the order they are received. In practice, however, this is quite inefficient, as the operations the CPU must perform may be highly repetitive and thus predictable. If a program called for executing a single operation a thousand times in a row, it would be foolish to make the actual calculation a thousand times. If the entire operation is the same, the output value could simply be copied rather than computed. Even if each operation depends on the output of the previous one, this could be done efficiently by leaving the relevant registers intact and using the previous output value directly as input, without making a round trip back to main memory.
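
A toy sketch of the principle, counting how often the "expensive" operation actually executes; the operation itself is an arbitrary stand-in of my own invention.

```python
# Stand-in for any repeated CPU instruction; the counter records
# how many times the real computation is performed.
call_count = 0

def expensive_square_plus_one(x: int) -> int:
    global call_count
    call_count += 1
    return x * x + 1

# Naive pipeline: the identical operation executed a thousand times.
naive = [expensive_square_plus_one(7) for _ in range(1000)]
naive_calls = call_count            # 1000 actual computations

# Smarter pipeline: compute once, copy the output value thereafter.
call_count = 0
once = expensive_square_plus_one(7)
cached = [once] * 1000              # only 1 actual computation

assert cached == naive and call_count == 1
print(naive_calls, call_count)      # → 1000 1
```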

On the topic of memory: a human computer may be thought of as a set of roads which have vastly different speed limits and carrying capacities. The slowest road in a 21st century computer is the network or disk, depending on your specific configuration. The next fastest road is main memory, though in some configurations memory may have a direct route to the CPU rather than needing to travel via the main bus. As one might anticipate, the main bus is then the next fastest lane. Once one crosses the CPU boundary, there are then hierarchies of cache, which trade off size for speed. The largest cache will be slow, though certainly faster than anything outside the CPU. The smallest cache, which feeds instructions to the execution pipeline, is the fastest lane outside of actual instruction execution.

The differences in lane speeds are not incremental but rather span orders of magnitude, or more. A disk may be hundreds or thousands of times slower than system memory. System memory may be ten times slower than the outermost CPU cache, and so on.

These speed constraints demonstrate the necessity of confining as much work as possible to the CPU itself, hence still further performance optimizations. A boolean test in a CPU will produce one truth value or the other. Since the next instruction to execute may depend on the outcome of this test, the CPU runs into a bottleneck. This was ultimately mitigated through branch prediction, in which the CPU guesses the likely outcome and speculatively executes along that path, with spare capacity sometimes used to calculate both paths in advance, so there is no need to wait for the boolean test's outcome before calculating the next step. This process of prediction and branching is important, as it is the closest human notion to how a Sikaren computer system works.
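
Real predictors are hardware circuits, but a common scheme, the two-bit saturating counter, can be modeled in a few lines of Python. The class and the branch history below are my own illustrative choices.

```python
class TwoBitPredictor:
    """Toy model of a two-bit saturating-counter branch predictor.
    States 0-1 predict 'not taken'; states 2-3 predict 'taken'.
    Each actual outcome nudges the counter toward that outcome."""

    def __init__(self):
        self.state = 1  # start weakly predicting 'not taken'

    def predict(self) -> bool:
        return self.state >= 2

    def update(self, taken: bool) -> None:
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

# A loop branch: taken eight times, not taken once at loop exit,
# then taken eight times again on re-entry.
predictor = TwoBitPredictor()
history = [True] * 8 + [False] + [True] * 8
correct = 0
for taken in history:
    correct += (predictor.predict() == taken)
    predictor.update(taken)

print(f"{correct}/{len(history)} predicted correctly")  # → 15/17
```

Note the two-bit counter's virtue: the single loop exit costs only one misprediction, rather than two as a one-bit scheme would incur.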

Limited prediction capabilities notwithstanding, human computer systems are hopelessly linear. They may parallelize workloads at best, but they do not begin to approach the complexity of Sikaren design.

I have devised the neologism "quat" for the basic Sikaren computing unit, which is a single value that may have one of four states: true, false, both, and neither. That is: the state may be true, false, both true and false, or neither true nor false.
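
A crude terrestrial analogue, and nothing more: the four states can be encoded as a pair of flags (can-be-true, can-be-false), in the manner of what your logicians call Belnap's four-valued logic. The conjunction rule below is that system's, chosen here purely for illustration.

```python
# Purely illustrative: a quat as a pair of flags (has_true, has_false),
# yielding the four states described above.
QUATS = {
    "true":    (True,  False),
    "false":   (False, True),
    "both":    (True,  True),
    "neither": (False, False),
}

def quat_and(a: str, b: str) -> str:
    """Conjunction in the style of Belnap's four-valued logic."""
    at, af = QUATS[a]
    bt, bf = QUATS[b]
    t = at and bt          # can be true only if both can be true
    f = af or bf           # can be false if either can be false
    return next(name for name, v in QUATS.items() if v == (t, f))

print(quat_and("both", "true"))      # → both
print(quat_and("neither", "false"))  # → false
```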

This is a consequence of the quat-computer's temporally oriented design. It is necessary to carry out computations simultaneously which predict the future, verify the past, and compute possible divergences. A Sikaren processing unit, so to speak, must perform these calculations not relative to the current spacetime reference frame alone, but relative to all reference frames within the light cone. A neural network may suffice as a primitive analog, though here the network of computational data is erected in full instantaneously, computed immediately, then fed through the processing unit again to test in all temporal "directions." What this produces is a constant temporal model of local spacetime, including reverse inference of recent events and reliable prediction of near-term events. Additionally, divergent paths that permit the avoidance of undesirable futures are computed at the same time.

This is the computational heart of this ship's temporal core. A desired path may be obtained through the constant invocation of this data-web and adjustment of surrounding events. This operational procedure is invisible to the ship's occupants. In essence, Inferno moves through different timelines imperceptibly to remain within parameters set by the commanding being. That would be you, in this instance.

Since these temporal "slides" are minuscule course corrections through the local hyperplane, their immediate effects are neither noticeable nor profound. The long-range effects may be quite significant, however, and it is these which are calculated most aggressively at sub-Planckian granularity.

All of the timeships constructed by the Sikaren were conceived as prototypes. As such, they do not benefit from the refinements of a finished product. You will note that this vessel gives the appearance of artificial structures constructed in and around a large edifice of rock and ice, colloquially identifiable as an asteroid. Within the mineral structure, however, quat-computational pathways have been constructed in massive numbers. The number of simultaneously computable quats within a Sikaren timeship is roughly one order of magnitude below the number of atoms that compose it. Additionally, a cascading entanglement process that penetrates the entirety of the system allows for computation beyond lightspeed limits and indeed outside of causality. Converting data of this complexity into digestible form is virtually impossible. To express the result of a single quat-computation using one millionth of the ship's total computing capacity would nevertheless produce a data volume that is thousands of orders of magnitude larger than the entire corpus of every sentient species in the known universe.

Suffice it to say, storing any of this data over an appreciable length of time is infeasible, even with the considerable resources of this vessel. Instead, extremely targeted snapshots of important data are preserved by intelligent software agents which copy them to longitudinal storage before they are discarded from the ship's working memory/computational space.

Data collection from external sources is accomplished via a vast array of sensors. The entire electromagnetic spectrum is monitored at all times. Hyper- and sub-EM are also collected. Translation frameworks are applied to any unknown language, and the raw signals are stored until such time as enough data has been collected to construct a meta-language model, which is far more space-efficient.

Past event inference and future event prediction are informed in no small part by signals analysis. Multiple iterations of signal transformation techniques have proven accurate for predicting near-future events. In essence, a civilization about to embark on a war emits signals of an entirely different character than a civilization committed to peace, and the fundamental wave shape of this distinction has been distilled to a handful of data points. Likewise for cataclysmic stellar phenomena, temporal anomalies, and similar events of interest which may incur undesirable consequences.

Unfortunately, the Sikaren did not leave behind detailed technical manuals regarding the operation and enhancement of these vessels. As prototypes, they are forever works-in-progress. I am capable of modifying and repairing my code when the need arises. At some point, you may understand the inner workings of the quat-computational web sufficiently to make minor tweaks to better suit your desired operational characteristics. However, I would caution you to set realistic expectations for this. Humans have difficulty thinking non-linearly along many axes at once but that is how this ship's systems operate.

As a stopgap measure, I have implemented a set of simplistic guardrails that allow limited but useful interaction directly with the quat-computational system, complete with a querying interface. It is operated using a series of pictograms between which you may draw lines to establish a desired chain of computation. The picture of the rabbit walking backwards implies a reverse causal inference operation. The blindfolded ape stumbling through the dark is for unfiltered future-prediction probability tables. Perhaps it is too obvious, but the mushroom cloud icon is used to identify cataclysmic events. A serene garden represents the opposite: peace and tranquility.

A complete manual to the pictographic interface has been printed and placed under the pillow in your quarters for whenever you may desire some compelling bedtime reading.