Every computer professional is well acquainted with the authoritarian style of the ubiquitous bureaucracy. As soon as somebody gets an infinitesimal crumb of power, there is a phase shift in the head, and thereafter whatever happens in the Universe tends to serve the purpose of satisfying that same dominant need. Such a mini-, maxi-, or mega-boss demands implicit obedience on the part of the presumed subordinates, so that any whim triggers an immediate fit of servility. No delay, of course, by no means! Right here and this very second! The favorite buzzword of any boss at all is "It's urgent!" That is, I want it now; and who knows whether I'll still want it a couple of moments later?

No need to stress that it is not exclusively the IT domain that is concerned. The disease is characteristic of a class-based economy as such. Fancy a world where everybody is engaged in productive activity for the common cause; would there be any sense in elbowing one's way to public recognition of one's personal priorities? Today, on the contrary, no good worker is beyond the risk of being sacked and blacklisted for too much defiance, and one must always consider the possible injury to others' hyper-conceit. Hence all kinds of vulgar stopgaps and rush jobs: extreme programming, patchwork administration, infrastructural inflation, bandwidth trickery, etc. Once in a while, the desperate struggle for survival will bring about a novel piece of technology; this circumstance makes the chiefs burst with pride and proclaim their ultimate utility and importance. Indeed, why should we care for the health and wealth of the plebs?

The architecture of computer systems is mainly an imitation of society's class structure. That said, while the first primitive computers imposed too many technological limitations and constraints, this transitory imperfection was a kind of safe spot for computer engineers, programmers and system administrators, since it was still possible to justify any particular solution by objective demands, which did not entirely curb the overblown appetites, but rather redirected them elsewhere, saving the IT departments a few headaches. No such luck today. Modern computer systems admit no real bounds, so that any request is in principle satisfiable, given an appropriate level of funding. Besides, the rapid expansion of the scope of targeted education leads to an ever growing army of "labor reserve" promptly offering a wide choice of cheap brainies, which a talented computer professional would hardly like to join. One needs to grow shifty, accommodating. And more resourceful, when it comes to adequate handling of influential blockheads. Which also implies certain architectural innovation.

As we know, the standard paradigm of a Turing machine refers to the processing of some resources (data) into something else, following some built-in functionality. It does not matter whether this operation is sequential or parallel, localized or distributed, discrete or analog, determinate or stochastic. In the end, we are to obtain a palpable thing, pass it to the customer and demand due compensation, in the tradable signs of value. Eventually, a part of the operation's result may be converted back to data. Yet another portion may be used to modify the operation modes (including both software and hardware). The rest is to be dismissed as non-productive consumption, pure loss (albeit the customer would treat it as profit).

Since no machine can be omnipotent (which is assumed by the very act of opposing the machine to its environment), the flow of data processing requests is to be somehow controlled. Basically, data supply must correspond to the processing facilities. This is what utterly beats a savage capitalist, who is firmly convinced that the boss is always right, while the slaves are made to obey. To alleviate the tension, there are buffer devices, which are known in everyday life as executives, while the computer industry prefers to speak of queues. The variety of possible implementations is truly immense. Still, there are two primary architectural solutions: FIFO (first in, first out) and LIFO (last in, first out). That is, either we process the requests in the order of their registration, or we entirely concentrate on what is on the agenda now (the topmost element of the stack). The boss will certainly praise the second alternative, admitting no other options. The chief's order is above all. Any command is to be immediately carried out, and dump all the rest. This is what army regulations say, although the commander is to be informed of orders previously received from a higher-ranking officer. The excessive use of a stack is a dangerous practice: without regular disposal of the garbage, a computer system is bound to run into stack overflow and hang, which often requires drastic measures like a power reset. In the market economy the same effect is known as a crisis, and it may need an equally dramatic fixup, with seas of blood and mountains of dead bones.

The FIFO queue is much safer when it comes to crisis prevention. We just pay no heed to the requests that exceed the capacity of the queue, occupying ourselves with the obligations already assumed. It's up to the environment to regulate its level of pretense; by the time there is some room in the queue, many requests will already have been dismissed as obsolete. This is absolutely not what those in power would appreciate: they don't like any objections, however objective. On the other hand, with all the disdain for bureaucratic caprice, we must admit that accidents do happen, and urgent action may be vitally important. Now, what could be done?
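The drop-on-overflow policy just described can be sketched in a few lines of Python; the names (`BoundedFifo`, `submit`, `next_task`) are hypothetical, chosen only for illustration:

```python
from collections import deque

class BoundedFifo:
    """A fixed-capacity FIFO: requests beyond capacity are simply dropped.

    A minimal sketch of the drop-on-overflow policy; a real system would
    also expire obsolete requests over time.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self._queue = deque()
        self.dropped = 0

    def submit(self, request):
        if len(self._queue) >= self.capacity:
            self.dropped += 1   # the environment must lower its pretense
            return False        # request rejected, not queued
        self._queue.append(request)
        return True

    def next_task(self):
        # Obligations already assumed are served strictly in arrival order.
        return self._queue.popleft() if self._queue else None
```

Here the only regulation is the hard capacity limit; everything admitted is served in registration order, and everything else is the environment's problem.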

Note that a parallel architecture cannot essentially improve the operative potential of a Turing machine. Yes, the overall production rate will be higher; still, the concurrent agents service the same queue, which implies significant overhead for thread synchronization, access control, conflict resolution, end-product aggregation, etc. That is, an isolated task can be solved faster, while the system remains sticky as a whole.

An obvious idea is to pass to a multiple-queue architecture. For example, the requests that might hinder the performance of the stack will go into a special FIFO queue. Certain technical issues remain to be handled, but we get much more air overall. A yet safer choice might involve a few hierarchically ordered FIFO queues, with the incoming requests first ranked by their priority: one queue accumulates dormant (pending) requests, another queue supports regular (scheduled) operations, yet another queue routes urgent requests, while critical events (vital tasks) are dispatched by the queue of system interrupts. The agent (regardless of the architecture) takes the next task from this hierarchical queue according to some general policy. The details may depend on the kind of system and the current state of affairs, but any choice allows us to get rid of any overbearing pressure: indeed, the standard answer to any request is to put it in an appropriate queue; if the boss wants higher priority, no problem, we just put it in a higher-priority queue, and so on. The delicate point is that high-priority tasks are not necessarily fulfilled before regular, or even pending, requests! This circumstance utterly evades bureaucratic minds.
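Such a four-level hierarchy might look as follows in Python; the level names and the `escalate` helper are assumptions made for illustration, not a fixed scheme:

```python
from collections import deque

# Four hierarchically ordered FIFO queues, from lowest to highest priority.
PRIORITIES = ("pending", "regular", "urgent", "interrupt")

class HierarchicalQueue:
    """Incoming requests are ranked on arrival and routed to one of
    several FIFOs; the dequeuing policy is left to the agent."""

    def __init__(self):
        self.queues = {p: deque() for p in PRIORITIES}

    def submit(self, request, priority="regular"):
        # The standard answer to any request: put it in the right queue.
        self.queues[priority].append(request)

    def escalate(self, request, new_priority):
        # The boss wants higher priority? No problem: move the request
        # to a higher-priority queue and carry on.
        for q in self.queues.values():
            if request in q:
                q.remove(request)
                break
        self.queues[new_priority].append(request)
```

Note that `submit` and `escalate` say nothing about when a task actually runs: ranking a request is not the same as serving it first.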

For example, take the simplest logic of hierarchical queue processing, the multiplexing of FIFO flows. In this technology, priority tasks receive more time slots (the units of operation), or a wider portion of the overall bandwidth. Still, the necessity of regular operation is beyond any doubt, and any sequence of urgent tasks will be interspersed with low-priority intrusions, within an appropriate quota. In this way, the system remains robust at every level, and local troubles will not block the functioning of the other parts. Suppose that urgent tasks are assigned a three times higher weight than regular operations; this means that at least every fourth operation is reserved for low-priority tasks. Now, if the queue of urgent tasks contains 10 entries, at least 7 of them will be processed after the first regular request. In other words, a hierarchical agent is only predisposed to carry out urgent tasks first, but this won't necessarily happen in the general case.
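The 3:1 quota can be sketched as a weighted interleaving of two FIFO flows; this is a simplification (real multiplexers work on live queues, not fixed lists), but it reproduces the arithmetic above:

```python
from collections import deque

def multiplex(urgent, regular, weight=3):
    """Weighted interleaving of two FIFO flows: at most `weight` urgent
    tasks run in a row, then one regular task gets its reserved slot
    (if any regular task is waiting)."""
    urgent, regular = deque(urgent), deque(regular)
    order, burst = [], 0
    while urgent or regular:
        if urgent and (burst < weight or not regular):
            order.append(urgent.popleft())
            burst += 1
        else:
            order.append(regular.popleft())
            burst = 0   # the quota is spent; urgent flow starts a new burst
    return order
```

With 10 urgent entries and a 3:1 weight, the first regular task runs in the fourth slot, so 7 of the 10 urgent entries are indeed processed only after it.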

A hierarchical queue effectively corresponds to introducing two more intermediaries between the customer and the doer: a request evaluator and an operative dispatcher. An interested interlocutor will readily find the correspondence in all spheres of production. Still, in real life, the inhomogeneity of economic flows plays an important part. Thus, we may be overcrowded with customers today, while a completely blank period is expected just a month later; the importance of any activity is quite sensitive to its outcome and to changes in the wider context. So, the performance facilities will either be fully loaded or underused. Some queues are overflowing, some others stay empty. Efficient multiplexing requires flexible algorithms with dynamic repartitioning. Moreover, the working agent can rarely be treated as elementary, which adds the influence of its inner states on the order of performance. Finally, the tasks are often interdependent, so that the feasibility of one work is influenced by the status of another. For instance, a bank has received a payment request, but then the laws of the country change so that the transfer between the accounts involved is no longer legally possible. The earlier queued task will then be processed differently (operation denial instead of success). Still, in a hierarchical queue, the implementation of a new rule may take some time, so that the formally illegal request could eventually avoid blocking. Such tricks are well known in the world's political and economic history; they have been widely exploited in belletristic opuses, especially of a detective flavor.
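The bank example boils down to checking the rules at processing time rather than at enqueue time; a toy Python sketch, where the `is_legal` predicate stands in for the current regulations and is purely hypothetical:

```python
from collections import deque

def process_payments(requests, is_legal):
    """Apply the rules when a request's turn comes, not when it is queued:
    a request accepted while legal may still be denied if the law has
    changed by the time it reaches the head of the queue."""
    queue = deque(requests)
    outcomes = []
    while queue:
        req = queue.popleft()
        if is_legal(req):
            outcomes.append((req, "success"))
        else:
            outcomes.append((req, "denied"))  # denial instead of success
    return outcomes
```

The loophole mentioned above appears when `is_legal` is updated with a delay: any request processed before the update still slips through under the old rules.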

Yet another important consideration: every operation needs sufficient resources; as long as the necessary prerequisites are missing, the task is bound to remain in the queue, or be moved to a different queue, with all kinds of intermediate algorithmic realizations.

That is, in general, the acts of both placing a request in a queue and selecting the next activity to launch do not rigorously follow a pre-determined queuing scheme; there is perpetual adaptation to the operating environment. In particular, the order of processing requests in a queue may largely vary. Such an architecture could be referred to as CICO (conditional in, conditional out). A mass occurrence of this type of economic behavior might be called CICOnomy (or psychonomy, if you wish). It does not imply any change in the market basis of capitalism; nevertheless, each relatively self-contained economic unit becomes much more controllable and robust. Obviously, a global economy needs an equally global CICOnomy. In general, the hierarchy of the queue must correspond to the scope of the economy: thus, the expansion of humanity into outer space will some day demand balancing a number of planetary priorities.
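A CICO queue, then, couples a FIFO with two predicates: an admission test at the entrance and a feasibility test at the exit. A minimal Python sketch, with the predicate names chosen as assumptions for illustration:

```python
from collections import deque

class CicoQueue:
    """CICO (conditional in, conditional out): a request is admitted only
    if it passes an admission test, and dequeued only when it is currently
    feasible, so the processing order may diverge from arrival order."""

    def __init__(self, admit, ready):
        self.admit = admit    # conditional in: may we accept this at all?
        self.ready = ready    # conditional out: is it feasible right now?
        self._tasks = deque()

    def submit(self, task):
        if not self.admit(task):
            return False      # rejected at the door
        self._tasks.append(task)
        return True

    def next_task(self):
        # Scan once through the queue for the first currently feasible
        # task; infeasible tasks are requeued to wait for their prerequisites.
        for _ in range(len(self._tasks)):
            task = self._tasks.popleft()
            if self.ready(task):
                return task
            self._tasks.append(task)
        return None
```

Both predicates may consult the environment, so the same queue behaves differently as circumstances change; that is the whole point of the "conditional" in CICO.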

A side remark: in capitalist society, technologies are never meant to improve anything; their only purpose is to support the system of social (economic) inequality. In that context, a full-fledged CICOnomy is never possible; all we can observe is a variety of inconsistent, partial implementations.

For ordinary programmers, system administrators or IT managers, CICO queues are useful to resist stress and move from extreme reactivity towards healthier forms of cooperation. Today, in any company, an employee of a "service" department is, in fact, a servant of many masters. A minimal formalization of this multiple subordination will effectively bring the subordinate to the top of the hierarchy, so that the others will need to accept his/her operation modes thus imposed as an objective necessity. This effect is especially important for those working in separate service bodies legally independent of the customer (though, of course, the economic interdependence will always remain). For example, when a system administrator is to maintain the databases of a hundred companies, he/she is free to impose certain restrictions upon the users and the modes of usage; when necessary, such a lockdown can be reflected in the legal papers. That is, we do not agree to just any job at all; once we have taken the responsibility, we run it as smoothly as we can, within our technological means, and there is no sense in any outer pressure.

The CICOnomy of distributed processes is of particular interest. On one hand, each agent is a relatively integral CICO system; the operator will pick some conveniently positioned requests from the environment and launch the processing activity, depending on resource availability. On the other hand, any individual environment is a part of the common environment of the whole process, with its own hierarchy of data and priorities. This means that both the formulations of the problems and the schemes of their resolution are conditioned by the individual agent facilities. In the simplest (but very common) case, a global request is preliminarily structured by the global queue, with a broadcast inquiry sent to the participant agents; if there is no available agent for some portion of the work, the request must be rejected. Further, when a task is already in the queue, there is a variety of interrelation styles between individual agents. The "classical" paradigm assumes a competitive approach: the agent is to monopolize the resources and production facilities for the task once seized, so that the others will have to wait for some conclusion. In particular, there may be collective agents ("workgroups") acting in the same manner. There is an alternative ("quantum") paradigm: several agents try to solve the problem in their own ways, and a number of solutions are available in the end. The customer does not much care about the origin of the final product. Still, just as in quantum physics, some modes may be less desirable ("forbidden transitions"), so that complex many-stage schemes are actually preferred.
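The broadcast-inquiry admission described above, where the whole request is rejected unless every portion of the work finds an available agent, can be sketched as follows; the agent names, skill sets and first-fit choice are all hypothetical simplifications:

```python
def broadcast_dispatch(portions, agents):
    """Global CICO admission for a distributed process.

    `agents` maps agent names to the sets of work portions they can
    handle.  Each portion is broadcast to all agents; if any portion
    finds no free, capable agent, the whole request is rejected.  The
    "classical" competitive paradigm: an agent, once chosen, is
    monopolized for its portion.
    """
    assignment = []
    busy = set()
    for portion in portions:
        candidates = [name for name, skills in agents.items()
                      if portion in skills and name not in busy]
        if not candidates:
            return None                # no available agent: reject the request
        agent = candidates[0]          # first-fit, for simplicity
        assignment.append((portion, agent))
        busy.add(agent)                # monopolized until some conclusion
    return assignment
```

The "quantum" paradigm would instead hand the same portion to several candidates at once and keep whichever solution arrives; that variant is left out of this sketch.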

Various hybrid architectures could be developed in practice, with parallel exploration at the prototyping stage and a definite choice for the end-product technology, selecting one of the possible routes (some mystically inclined physicists speak about "collapsing" wave functions). In fact, there is nothing supernatural here, since any hierarchical structure can be folded and then unfolded in a different way, depending on what is "energetically preferable" in the current circumstances. Of course, we do not mean any abstract optimization: few people have enough leisure to pursue an ideal; a tolerable solution is often quite enough, as the drive for perfection is not an economic category. As long as something is going on, however imperfect, uncomfortable, and even annoying, people are not ready to risk a sure deal for the theoretically achievable. In this way, CICOnomy, in one respect, serves to maintain staffing consistency, while in another respect, it creates premises for a revision of the economic "vertical", so that the personnel is no longer considered as subordinate to the boss; rather, an enterprise appears to be an effect of the collective activity of interdependent free personalities.

[Computers] [Science] [Unism]