The Impact of Molecular Engineering on Spacecraft Information Systems

Immortal Data Inc

Dale M. Amon

Queens University Belfast Computer Science Department, Belfast.

The application of Molecular Engineering (ME) to spacecraft information systems will require a revolution in software engineering techniques to deal with levels of system complexity so large they must be considered different in kind from present systems. This will be exacerbated by the blurring or erasure of the line between hardware and software. Agoric systems are examined as a possible means of stabilizing and controlling a vast, continuously changing field of information entities. Genius materials will complete the revolution, already begun by smart materials, in the ability of information systems to interact with the environment. Very capable Virtual Reality (VR) systems become possible, and VR is suggested as a palliative for the man/machine interface bandwidth problem.


At a macroscopic level any manned spacecraft has similarities to a living entity. It has a brain consisting of crew members; it supports that brain by supplying nutrients and oxygen and removing wastes; it has a nervous system consisting of wire and optic fibre; it senses and responds to its environment, sometimes at a reflex level; it is mobile; and it excretes wastes. In a ship with active sensing, analysis and effecting systems distributed through the very materials of the vessel, and in which the crew is tightly coupled to the ship through VR, the spacecraft is a single interconnected information system and the biological model is a useful paradigm for design.

The rate of advance in computer science is such that attempts to predict future capabilities are problematic at best. Computer science literature is replete with vast over- and under-expectations. One need only look as far as the early work in speech [1], machine translation and vision for the former. Many of us need only look at the powerful workstations on our desks to exemplify the latter.

How then does one approach an already difficult job of prediction when the additional wild card of ME is included? Our only choice is to presume trends will continue. The increase in computing power per monetary unit has been exponential for ninety years and seems likely to remain so for the foreseeable future [2]. In this context ME is just another step on our march towards the very small.

We will assume mature, robust technologies develop in a number of areas of computer science. Key among these are machine perception, speech understanding, real time animated 3D graphics, real time expert decision making and pattern recognition. For the more advanced possibilities we require the existence of a black box molecular assembler and disassembler technology [3].

Important advances in ME have occurred in the last few years: notably protein folding by design and the use of Scanning Tunneling Microscopes (STM) [4] [5] and lasers [6] to manipulate atoms and groups of atoms. Single atomic bonds have been selected and broken.

A great deal can be surmised about future systems without detailed knowledge of the technology used to implement them, the architecture of the spacecraft, the form of propulsion or even the specific mission. Figure 1 is an information systems view of spacecraft systems. The interfaces define the possible flows of matter and energy between subsystems.


It matters little at our level of analysis whether the computing, storage and communication structures are biological [7], based on Josephson junctions or other quantum mechanical devices [8] [9] [10] [11] [12] [13], novel materials like diamond [14], optical principles [15] [16] [17] or even mechanical. ME will make processors smaller but the basic nature of a computing element does not change. Anything which is computable can be computed by a Turing machine; likewise any machine architecture can be simulated on any other (there are authors who plead a special case for wetware [18]). Information flow and information processing are independent of the underlying hardware except for performance measures.

No architecture can serve all purposes. Single Instruction Single Data (SISD) processors are useful for simple tasks of all sorts. Single and Multiple Instruction Multiple Data (SIMD and MIMD) parallel processors are appropriate for simulation and modeling of large decomposable systems. A 64K element SIMD machine (Thinking Machines Connection Machine) already exists, 1024K ones are under consideration [19], and future machines may have millions or billions of computing elements [20]. It is not a foregone conclusion that any of the current architectures are scalable. It is also not yet apparent how to go about programming such machines to use the available power efficiently. Data flow machines are good for text data base searches and optic/holographic techniques for image data searches [21] [22]. Neural nets work well when pattern discrimination learning [23] or complex stimulus response learning is required.
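The SIMD idea above can be illustrated with a minimal sketch (not any real Connection Machine interface; all names here are invented): one instruction is broadcast to every processing element, each of which applies it in lockstep to its own local datum.

```python
def simd_step(instruction, local_data):
    """Broadcast a single instruction to every processing element;
    each applies it to its own local value in lockstep."""
    return [instruction(x) for x in local_data]

# Eight processing elements, each holding one local value.
elements = [1, 2, 3, 4, 5, 6, 7, 8]

# One broadcast instruction: square the local value.
result = simd_step(lambda x: x * x, elements)
```

A real machine would run the elements in parallel hardware; the list comprehension merely stands in for that lockstep execution.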

If assembler and disassembler technology allow construction and destruction of processors on demand, the line between process and processor vanishes. It is an open question what level of such dynamic reconfiguration of inter-element connections and of the capabilities and numbers of the processing elements themselves will be feasible or controllable.


Systems design  begins with a definition of precisely what is to be done.  We suggest the following range of requirements for advanced manned spacecraft:

Life support: Keep the crew alive and well, mentally and physically. This includes temperature, humidity, pressure, gas concentrations; waste management; food and water supplies; medical records and support; entertainment, provision of an interesting and changing ship’s environment using visual, auditory and olfactory cues; monitoring of trace chemicals, radiation levels and other environmental hazards.

Science support: Collect, process, archive and analyze sensor data; design and implement experiments; literature searches;  modeling, theorizing, visualizing and correlation.

Engineering support: Control, monitor, plan and project consumables use; monitor and control the energy and motive source(s); monitor aspects of the external environment that interact with ship’s systems; communications links; engineering data on science and life support systems; predict, detect, locate, bypass and repair faults; develop, override, upgrade and add subsystems.

Command support: Management data systems including crew records, manpower requirements and allocation, consumables budgeting; control and monitoring of ship attitude, relative velocities, acceleration and location; course planning, execution and monitoring [24]; generation of scenarios for crew and ship training exercises.

Defense support:  Threat analysis including meteors, large energy or particle flows, proximity of other vessels; weapons inventory, selection, targeting and tracking; internal warnings and evasive maneuvers. Detection of hostile programs and computer viruses. Enforcement of information security and privacy. Internal security.

These capabilities may be reduced to a small number of architectural requirements based on the primary information flows and processing needs.

Crew Interface: With the crew “sipping from a firehose” of information flow potentially beyond the 2 GB/day of the currently planned Earth Observation System (EOS) [25], the human/machine interface bandwidth will need to be drastically increased.

Sensor Effector Net (SEN): The ship must respond to its environment quickly and efficiently. Much of this should be homeostatic and not normally brought to the attention of the crew. Information flows are primarily vertical: data flows upward and control downward. Feedback and reflex loops should operate at the lowest level in the system at which the appropriate data and control is available.

Archive and Retrieval: Massive quantities of data must be stored and readily retrievable by the crew or subsystems. The data should be distributed and held redundantly. Mission critical data may be stored with even higher redundancy.

Modeling and Simulation: The ability to use data resources, particularly for a science mission and the crew interface, requires powerful means of analysis, synthesis and presentation. In many cases a loss of data is not critical and faults may be handled by reconfiguration and restart. Time and mission critical operations may require shadowing, as in the current space shuttle [26][27].
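The shadowing mentioned above can be sketched as follows (a toy illustration, not the shuttle's actual scheme; the function names are invented): the same mission-critical computation runs on two independently coded paths, and any disagreement is treated as a fault that triggers reconfiguration and restart.

```python
def shadowed(primary, shadow, inputs):
    """Run a mission-critical computation on two independent paths;
    a disagreement flags a fault for reconfiguration and restart."""
    p, s = primary(inputs), shadow(inputs)
    if p != s:
        raise RuntimeError("shadow disagreement: reconfigure and restart")
    return p

# Two independently coded paths for the same computation.
total = shadowed(sum, lambda xs: xs[0] + xs[1] + xs[2], [1, 2, 3])
```

For non-critical work, by contrast, the failed computation would simply be restarted rather than shadowed continuously.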


If the human capacity for integrating the global patterns of the environment and responding to threats and opportunities is to be used effectively, a crew must be tightly coupled to the ship’s information systems. Ultimately it is the crew which must define policies, set goals and plan strategies. To do so they must have better means of examining data than those extant. New man/machine interface devices [28] [29] [30] and research in VR systems are beginning to break this logjam. Even with VR there must be innovation in presentation methods and in extraction of key features [31] [32].

An integrated VR system will engage as many human sensory modalities as possible. Today’s systems use audio, visual and elementary tactile inputs; future ones will include sophisticated tactile feedback and perhaps olfactory and kinesthetic ones as well. The vast array of chemical traces required for control of olfactory input almost certainly will require sophisticated ME. Motion, muscular, tactile, pressure, texture and temperature sensations may also be supplied.

The primary outputs from the crew are voice and gesture. Recent advances in speech understanding [33] [34] have been significant, and some research in gestural [35] [36] [37] interfaces has been occurring. Highly interactive graphical interfaces [38] [39] [40] have been a hot research topic for many years.

A secondary, subconscious output channel is carried by body language and such indicators of stress and psychological state as galvanic skin response, vital signs, pheromones and sweat.

There are three classes of VR:

Internal VR: A direct brain interface is a total immersion VR indistinguishable from reality and not limited by it. Recent work with silicon to neuron interfaces [41] [42] [43] shows such an interface is possible in a very limited sense. ME is probably necessary for the basic research preliminary to any suggestion of feasibility. We will leave this possibility in the realm of pure conjecture.

Boundary VR: The data suit [44] or an ME version derived from Drexler’s space suit musings [45] controls reality at the skin. Some elements already exist [29] [46]. Kinesthetic sensations are limited to what can be simulated with pressure at appropriate locations on the body; the use of tricks as in flight simulators; and the actual freedom of motion of the crew member at the time. Walk-around molecules are already being suggested as a research tool [32].

External VR: A “value-added” reality. Crew activities are monitored and the ship responds via conventional output devices, holographic projections (computer generation of color holographic film is already possible [47]), and conceivably assembler/disassembler mediated reconfiguration of the ship. The VR is limited to what can be added to what is already there. A great deal of the work to date has been done as an artistic endeavor [46] [48], with some in virtual control panels [49] [50] [35] and Computer Aided Engineering and Design (CAE/CAD) [51].

The more sensory modalities brought into play, the more information a human can process at one time [32]. Relying on one sensory modality can cause an overload, or a failure to attend to important cues. In systems from aircraft to nuclear power plants this is overcome by using color, flashing lights, audio frequency, timbre, etc. VR can combine sensory inputs in unusual ways [52] to indicate warning, data uncertainty, etc. Any sensory input data, raw or analyzed or combination thereof can be mapped onto any sense modality if it might be helpful in the extraction of patterns from that data.
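The mapping of arbitrary data onto an arbitrary sense modality can be sketched very simply (a hypothetical illustration; the function name, frequency range and clamping policy are all invented): here a scalar sensor reading is rendered as an audio pitch, so that, say, a rising radiation level is literally heard.

```python
def map_to_pitch(value, lo, hi, pitch_lo=220.0, pitch_hi=880.0):
    """Map a sensor reading in [lo, hi] onto an audio pitch in Hz,
    clamping out-of-range readings to the ends of the scale."""
    frac = (value - lo) / (hi - lo)
    frac = min(1.0, max(0.0, frac))          # clamp to the scale
    return pitch_lo + frac * (pitch_hi - pitch_lo)
```

The same shape of function could just as well drive color, timbre or tactile intensity; the point is that the mapping, not the modality, carries the information.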

The ship information systems will contain numerous expert systems. Some will be a part of subsystem operations and will rarely interact directly with the crew. Others will be “consultants” with expert knowledge in given fields. Given the rapid advances in 3D animation, audio generation and speech understanding, personifications of these programs in the VR may be one of the most effective means of interacting with them [53]. That simulated personalities need not be terribly sophisticated to be useful is exemplified by the Eliza program [54].

A sufficiently good VR system allows teleoperation indistinguishable from being there. This flexibility allows control (or control assistance) of the spacecraft or exploratory drones from any point close enough to satisfy response delay criteria.

For example, a landing crew could retain full control from the ground. Predictive VR could allow control from longer ranges in some cases.
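Predictive control over longer ranges amounts to dead reckoning: the display extrapolates the last telemetered state forward by the signal delay so the operator acts on a predicted "now" rather than the past. A minimal sketch (function name and the constant-velocity assumption are invented for illustration):

```python
def predicted_state(position, velocity, delay_s):
    """Extrapolate a drone's last telemetered position forward by the
    signal delay, assuming constant velocity over that interval."""
    return position + velocity * delay_s

# Telemetry is 3 s old; show the operator where the drone is now.
shown = predicted_state(position=100.0, velocity=2.0, delay_s=3.0)
```

A real predictor would model acceleration and uncertainty as well; the residual error between prediction and later telemetry bounds how long a delay such a scheme can mask.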


It is apparent the full merger of sensor, effector and computing element technologies is rapidly approaching [55]. Integrated circuit size sensors and mechanical devices have already been built [56]. Within a few years processing and conversion between analog and digital domains will be integral with sensing and effecting elements.

ME opens up a wide range of possible sensor and effector technologies. Work on smart or adaptive materials is underway, and they are thought to be in use on advanced military aircraft [57] along with many other ’black’ technologies [58]. Self healing materials are already under discussion [57]. Optimistically, the entire fabric of the vessel will be a ’genius’ material woven with micro-sensors and micro-effectors. ME opens the possibilities of materials that can grow subsystems, heal damage and maintain homeostasis like living systems.

It will be possible to monitor and react to an unheard of variety of signals.

Small size allows us to interleave elements that are optimal for different tasks [59]. The entire volume and surface of the vessel is available. For short wavelength or high amplitude phenomena, volumetric sensor arrays can deliver both spatial and temporal variations. For low amplitude or low frequency phenomena, the volume or surface can be used as a single antenna. The large air shower complex at Dugway Proving Ground is representative of this distributed, sensing-in-depth approach. Sensors are distributed on the surface and 3 meters underground. Surface units have individual processors and communicate with their neighbors [60].
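The neighbor-communication idea can be sketched in miniature (a toy model, not the Dugway system: one dimension, and a simple averaging rule invented for illustration): each sensor unit replaces its reading with a local consensus formed only from its immediate neighbors, damping single-sensor noise with no central processor involved.

```python
def neighbor_consensus(grid):
    """Each sensor node averages its reading with its immediate
    neighbours on a 1-D line of units, forming a local consensus
    without any central processor."""
    n = len(grid)
    out = []
    for i, v in enumerate(grid):
        neighbours = [grid[j] for j in (i - 1, i + 1) if 0 <= j < n]
        out.append((v + sum(neighbours)) / (1 + len(neighbours)))
    return out
```

Iterating such a rule diffuses information across the whole array, which is what lets the ensemble act as one large distributed antenna.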

Fields: Monitor or generate static and dynamic electric and magnetic fields across a wide spectrum of frequency, amplitude, phase, and direction; acoustic waves across a wide spectrum of frequency, amplitude, phase, and direction; and stress fields in materials. Monitor gravitational field properties [61].

Matter: Monitor microenvironments inside and outside the spacecraft. This includes the direction, mass and energy of particles, dust and meteors; pressures and temperatures; detection and release of trace chemicals; incoming and outgoing projectiles; monitor and control life support systems.

Position: Control and continuous monitoring of acceleration and attitude.

If assembler and disassembler technology is available, SENs may reconfigure their physical structure based on current priorities. A ship need not waste volume or surface on a table or a microwave antenna when it is not required [62].

In Figure 2 we show a general high level model of an SEN [63]. Recognizers are processing nodes whose primary job is the extraction of information from one or more sensor data streams. The data streams may be any combination of raw sensor data or processed data from inferior (lower level) Recognizers. The Recognizer passes the abstracted and processed data stream upwards to its superior Recognizer and laterally to a motor Ganglia on its peer level. Each level in the tree is a higher level of abstraction in both the sensor and effector worlds.


Ganglia are controllers of the effector or ’motor’ systems. The higher up the effector hierarchy the Ganglia is, the more complex its ’motor’ output. It receives abstract commands from superior motor Ganglia and generates a series of less abstract commands to inferior Ganglia [64]. The peer relations between Recognizers and Ganglia allow reflex and feedback control [65] to occur at the lowest possible level in the system at which sufficient information and motor control are accessible.

There are times when normal filtering of data must be overridden, and for this reason data flow is bidirectional. A Recognizer will supply data on demand. In some cases Recognizers will declare data to be “interesting” and pass it directly to a node further up the tree [66]. Motor Ganglia report exceptional conditions.
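The Recognizer/Ganglia relationships described above can be sketched in a few lines (a toy model of the Figure 2 structure; the class shapes, the `max` feature extraction and the reflex threshold are all invented for illustration). A Recognizer abstracts sensor samples, passes the abstraction laterally to its peer Ganglia for reflex action, and upward to its superior; a Ganglia fans abstract commands down to its inferiors.

```python
class Recognizer:
    """Extracts an abstraction from sensor streams, passing it upward
    to its superior and laterally to its peer Ganglia for reflexes."""
    def __init__(self, superior=None, peer=None):
        self.superior, self.peer = superior, peer

    def observe(self, samples):
        abstraction = max(samples)            # toy feature extraction
        if self.peer:
            self.peer.reflex(abstraction)     # lateral reflex path
        if self.superior:
            self.superior.observe([abstraction])
        return abstraction


class Ganglia:
    """Decomposes abstract motor commands into less abstract ones for
    inferior Ganglia; reacts reflexively to peer Recognizer data."""
    def __init__(self, inferiors=()):
        self.inferiors = list(inferiors)
        self.log = []                         # commands this node issued

    def command(self, order):
        self.log.append(order)
        for g in self.inferiors:              # fan the order downward
            g.command(("sub", order))

    def reflex(self, abstraction):
        if abstraction > 1.0:                 # toy reflex threshold
            self.command(("damp", abstraction))


leaf = Ganglia()
trunk = Ganglia(inferiors=[leaf])
eye = Recognizer(peer=trunk)
eye.observe([0.2, 2.0])                       # exceeds reflex threshold
```

Note that the reflex fires entirely at the peer level: nothing need reach the top of the tree unless a Recognizer judges the data "interesting".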

Distributed hierarchical processor networks resembling this have been built and work quite well in large scale automation systems [67]. Since behaviors are generated at a low level and actions occur in parallel, there is a great deal of similarity between the process control approach, the recently defined subsumption architecture [68] in robotics and vector force fields of neuron activation in the control of chordate limb movement [69]. The primary difference is scale. Where existing process systems consist of a few hundred or a few thousand macroscopic sensing and effecting elements with a tree depth of three or four, the envisioned ME spacecraft systems may contain many millions of microscopic elements and much greater tree depths.

The uppermost Ganglia is the highest reflex and control point and is the interface with the crew. Like the lower brain [70] it is a filter which decides what sensory data must be attended to. It also responds to crew commands (voice or gestural commands, etc.) [71]. In a fully robotic craft it is the final decision maker.

Due to its distributed nature, this architecture is robust in the face of data loss and node faults.  Retries and reconfiguration are satisfactory means of handling fault tolerance. Even if a node or set of nodes become isolated, they will still be able to carry out much of their normal duties.

Learned reflex capabilities will make each spacecraft unique. The longer it is in space, the better adapted it becomes. Whether by learning from experience, training or specific programming, this adaptation represents valuable knowledge. The first exploratory vessel on a given mission [72] could short-circuit its successor’s learning curve with a download.

Idle systems might be placed in a ’dreaming’ state in which they learn to respond to imaginary scenarios, both likely and fanciful. Replacement of the environment by an ME based simulator will create an accurate ship and crew training ability.


The archive combines a digital library with what are currently the flight and scientific data recorders. The importance of the library portion will vary with the mission: a deep space exploratory mission will obviously have different requirements than a commercial Low Earth Orbit (LEO) shuttle. A client/server architecture with multiple database and supercomputing units seems appropriate. These units should be placed in the central core of the ship for maximum radiation shielding.

ME makes it possible to store national libraries’ worth of information in a small space. If data exists in digital form anywhere, duplicating it costs little. History shows that if storage is available it will be filled, so it is likely that a significant portion of human knowledge will be on every spacecraft.

The day of the computer archive as a simple numeric data repository is long gone. Workstations are already capable of displaying high resolution, high quality color images and stereo CD quality sound. The Joint Photographic Experts Group (JPEG) standard [73] has tamed the storage requirements for these [74]. Capacity for 3D and stored video is starting to appear, the latter enabled by the Moving Pictures Experts Group (MPEG) compression standard [75]. Entire archives of photographic data and major reference works [76] [77] are becoming available on line or CD. Large businesses are leading the way to the all digital world [78] [79]. It is likely only a few decades will pass before all written, audio and image archives are available in digital form.

Storage is not an end in itself. Vannevar Bush’s Memex paper [80] contains basic requirements relevant to this day. Sophisticated means of connecting data items, such as hypertext [81] [82] and object oriented databases (OOD), and of navigating in the data space [83] will be needed.

True hypertext client-servers such as Xanadu [84] are commercially available. But even hypertext is not enough when a search for specific information or patterns over a large domain is required. ME vastly increases the computing power available, but brute force cannot solve all problems. What if one wants to ask the system to search all planetary image data to date for structures matching a particular verbal criterion? This is not a simple problem.

A spacecraft will not carry a crew large enough to have domain experts in all areas of human knowledge, so hypertext is also insufficient in this case. Domain expert systems (library assistants) in a VR could assist with the data navigation task and enforce information privacy and security requirements.

One must be able to display, analyze and model systems. These are some of the most compute intensive tasks in science: ones whose practitioners continuously cry out for more and more powerful supercomputer technology [85]. ME puts massively parallel supercomputing into tiny packages and makes it available as a shipboard facility. But if there are not significant advances in parallel languages and algorithms this power will be wasted.

Onboard supercomputing will allow the spacecraft to model its own interactions with the environment, whether for aerobraking or predictive modeling of engine and power plant parameters.

Close coupling of modeling, expert systems and hypertext data retrieval in HyperIntelligence [86] and Knowledge Management systems [87] is needed if all these resources are to be fully utilized.


Complexity is perhaps the most difficult issue in computer science and one which the capabilities of ME simply exacerbate. Complexity is the limiting factor on software systems from user friendly spreadsheets to banking systems to SDI.

Object Oriented Programming Systems (OOPS) help control complexity in the engineering of individual processes and allow reuse and replacement of objects within those processes. To a great extent they solve the complexity problem at the single process level.

Client-Server supplies a formalism for systems design at the level of cooperating processes. But what happens when the number of client and server processes rises to millions (or billions and billions) and they are continuously in flux (whether on computer or human time scales), with new services, changing system requirements, upgrades and bug fixes appearing at random intervals?

The NeXTstep interface [88] [89] is one of the first to attack this problem. When a new server process is loaded, it declares its willingness to provide a service to all comers. It thereafter automatically appears in the Services menu of all ’App’ client processes. Client and server need have no a priori knowledge of each other. A standard communication mechanism guarantees they can exchange data if they share any standard types. This is a step forward, but it evades the decision making issue by leaving it in the hands of the user. There is no indication of how processes might understand the utility of new services and make trade-offs among service providers with varying performance, capability and resource utilization.
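The registration-and-discovery pattern can be sketched as follows (an invented illustration of the idea, not the actual NeXTstep API): servers declare services keyed by the data types they accept, and any client sharing a type discovers them with no prior knowledge of each other.

```python
class ServiceRegistry:
    """Servers declare services keyed by the data types they accept;
    clients discover them by shared type, with no a priori knowledge."""
    def __init__(self):
        self.services = {}                 # data type -> [(name, fn)]

    def register(self, name, accepts, fn):
        self.services.setdefault(accepts, []).append((name, fn))

    def menu(self, data_type):
        """What a client's Services menu would list for this type."""
        return [name for name, _ in self.services.get(data_type, [])]

    def invoke(self, name, data_type, payload):
        for n, fn in self.services.get(data_type, []):
            if n == name:
                return fn(payload)
        raise LookupError(name)


registry = ServiceRegistry()
registry.register("Uppercase", "text", str.upper)
registry.register("Reverse", "text", lambda s: s[::-1])
```

Note that the menu grows automatically as servers load; the unsolved part, as the text observes, is choosing among competing entries without a human in the loop.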

Agoric systems research [90] [91] [92] may be another part of the way forward, at least in so far as identifying, evaluating and optimizing the cost and price of services. If research brings about mature systems, large programming systems will be artificial ecologies or markets. This seems an excellent match to the capabilities of ME, particularly if assembler and disassembler technology erases the dividing line between process and processor.

Cooperative hinting algorithms [93] have recently been explored for problems with large search spaces. Processes share a blackboard where hints are left as to where to look or not to look.
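A minimal sketch of the blackboard idea (all names and verdicts invented for illustration): searching processes consult the shared board and skip regions a peer has already ruled out.

```python
class Blackboard:
    """Shared hint board for a cooperative search: processes post
    which regions of the problem space look promising or dead."""
    def __init__(self):
        self.hints = {}                    # region -> "promising"/"dead"

    def post(self, region, verdict):
        self.hints[region] = verdict

    def worth_searching(self, region):
        # Unhinted regions stay searchable by default.
        return self.hints.get(region) != "dead"


board = Blackboard()
board.post("sector-7", "dead")             # a peer ruled this region out
board.post("sector-2", "promising")
regions = ["sector-2", "sector-7", "sector-9"]
to_search = [r for r in regions if board.worth_searching(r)]
```

The hints are advisory, not binding: a process that distrusts a "dead" verdict can still search there, which keeps one faulty hinter from blinding the whole collective.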

Research seems not to have addressed the formalism by which a client ’decides’ to use a class of service it was not originally coded to use. In many cases this is not important: a process that requires data conversion of a particular sort need only find the best performance bargain that fulfills its requirements. Anything more probably implies the client is (or is front ended by) some form of expert system.

In a market driven system each process will attempt to provide the best service (in quality and delivery time) possible at any given time and place. To do so it must minimize input costs, maximize input quality and delivery speed, and maximize output price. This mini-max problem should lead to behaviors comparable to those of economic entities. The following are some of the major factors in decision making:

Transaction costs: The mix of closely coupled or “merged” processes versus outside suppliers is controlled by the cost of decision making and contracting.

Communication costs: The physical location of a process should minimize the sum of communications costs with clients and other servers. Bandwidth on a communication link is a limited resource and its usage must be optimized.

Processor costs: Speed and reliability must be optimized. For some applications a massively parallel processor in the radiation safe core of the spacecraft is more desirable real estate than an SISD processor on the hull. There are optimizations regarding whether a process should build or rent a processor; add to an existing one; share a processor with other processes; or even compile itself into hardware.

Storage costs: Access speed and reliability must be optimized. There are higher costs associated with safe storage in the core versus local RAM with a higher risk of damage from ionizing radiation. Local storage may be enlarged by assemblers, or a process may move to a facility with larger or closer mass storage.

Energy costs: All processes utilize energy. If there are differences in quality or cost of energy that vary from point to point, that will be part of the location optimization equation. Energy is a limited resource and use must be optimized such that the highest value to the crew and mission goals is gained.

Matter costs: Since a ship has limited stores of matter, that matter must be used to provide the highest value to the crew and mission goals. This is not only an issue of consumables: there may be a limited number of atoms of critical isotopes. If the price of a critical material rises, some processes may profit by substituting lower for higher valued resources.
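The cost factors above can be folded into a single decision rule; a minimal sketch (supplier names, cost categories and weights are all invented for illustration) in which a process simply picks the supplier minimizing its total weighted cost:

```python
def total_cost(offer, weights):
    """Weighted sum of the cost factors a process must trade off."""
    return sum(weights[k] * offer[k] for k in weights)

def choose_supplier(offers, weights):
    """Pick the supplier whose offer minimizes total weighted cost."""
    return min(offers, key=lambda name: total_cost(offers[name], weights))

offers = {
    "core-array": {"comm": 5.0, "cpu": 1.0, "energy": 2.0},  # far but fast
    "hull-sisd":  {"comm": 1.0, "cpu": 4.0, "energy": 1.0},  # near but slow
}
weights = {"comm": 1.0, "cpu": 2.0, "energy": 1.0}
best = choose_supplier(offers, weights)
```

In a real agoric system the weights themselves would be prices set by supply and demand rather than constants, which is what couples each process's local decision to the ship-wide market.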

Many decisions have hysteresis because there is a cost associated with a change of strategy. A process cannot move to a new processor with every slight change in client/server relationships and volume of business. One can imagine a highly successful new server starting in a shared processor, moving to a private processor and finally to a special purpose processor (and vice versa for the competition). A step in either direction is delayed in time because the gain in efficiency must be sufficient to overcome the cost of the changeover. This could allow systems to freeze into suboptimal configurations: hysteresis turns optimization into hill climbing, and the distance from a local maximum to the global maximum may not be crossable.

It is possible that the equivalent of business cycles caused by changing crew requirements will keep the system in a sufficient state of flux that resources will move quickly to their highest value usage. This could be modeled by simulated annealing with a time dependent minimum energy state. The system could be jostled into new configurations by biasing the simulated temperature close to the freezing point and randomly changing it [94].
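The jostling idea can be sketched as a toy annealer (everything here is invented for illustration: the integer configuration space, the cost surface with one local and one deeper minimum, and the jittered near-freezing temperature schedule). Whether a frozen configuration actually escapes depends on the schedule and the barrier heights.

```python
import math, random

def anneal(cost, start, temp_schedule, steps, rng):
    """Toy annealing on an integer configuration space: uphill moves
    are accepted with probability exp(-delta/temp), so a temperature
    jittered near the freezing point can jostle a frozen state loose."""
    state = best = start
    for t in range(steps):
        temp = max(temp_schedule(t), 1e-9)
        candidate = state + rng.choice([-1, 1])
        delta = cost(candidate) - cost(state)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            state = candidate
            if cost(state) < cost(best):
                best = state
    return best

# An invented cost surface: a local minimum at 2, a deeper one at 10.
def cost(x):
    return min((x - 2) ** 2, (x - 10) ** 2 - 5)

# Temperature biased near a nominal freezing point and randomly jiggled.
def schedule(t):
    return 0.5 + random.Random(t).random() * 2.0

final = anneal(cost, start=2, temp_schedule=schedule, steps=2000,
               rng=random.Random(0))
```

A time-dependent `cost` function, re-shaped as mission priorities shift, would play the role of the changing minimum energy state in the text.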

Systems become particularly interesting when we allow assembly and disassembly. Goals and requirements are not constant with time, so we will have cycles of boom and bust based on the mission plan. A switch from interplanetary to planetary science observations will cause a massive turnover in processes and resource utilization. Processes that cannot adapt to the new ’economic’ environment will die off, and new processes will be instantiated and thrive [95]. Mass and energy utilization will shift with priorities.

This is a system in which a free market exists solely to supply the needs of an aristocracy. If the crew is the ultimate landlord to which rents accrue and is the ultimate consumer of all services, an agoric system will optimize around crew goals. Rents must be returned to the crew so the money supply remains stable and the crew remains the primary financial input. It would not do at all to have the Chess Server corner the financial system and optimize the ship to defeat the Milky Way Galaxy chess grandmaster. Monetary flow is like a potential field over the network of processors. The crew’s control of the money supply biases the system to supply the needs of the crew rather than optimize around a random local field fluctuation.

Adding a new server is much like releasing a fish in a pond: either it finds a niche and competes successfully or it dies. Where it ends up and whether it reproduces are not wholly predictable in advance. This may be difficult for those schooled in current engineering dogma to accept. It can only be said that all the truly complex systems in nature work on these chaotic principles.


We have discussed a wide range of technological possibilities for future spacecraft information systems. At the conservative end of the prediction spectrum the technological revolution is mostly invisible. Capabilities are great, but an astronaut or cosmonaut would find things recognizable. Intelligent systems give the vessel autonomy and reliability far beyond that of the present. Smart materials and subsystems heal themselves to a great extent and give very specific alerts when there is trouble. The interior of the ship is comfortable but pragmatic. The crew sits at control panels which are recognizable as such, although they are viewing panels controlled by voice and gesture rather than the familiar physical buttons, knobs and toggles.

The optimistic prediction is an entirely different matter. The coming together of all the technologies discussed in this paper gives us a spacecraft which is more living organism than vessel. The crew controls every facet of operations through VR and can couple so tightly into the vessel systems that they effectively become the ship. Physical controls are unnecessary, but the ship will obligingly grow them on demand should they be required. The interior is dedicated to the comfort of the crew and is filled with ever changing art, music and color. The ship is a jinn and the crew are its masters.


Thanks for discussions and comments are due to Professor R. H. Perrott, Pat Crookes and John Flanagan of Queens University. Also many thanks to Dr. K. Eric Drexler and Christine Peterson for stimulating discussions over many years, without which this paper would never have been written.


  1. A. Newell, J. Barnett, J. Forgie, C. Green, D. Klatt, J.C.R. Licklider, J. Munson, R. Reddy, W. Woods, 'Speech Understanding Systems: Final Report of a Study Group', Computer Science Department, Carnegie-Mellon University, Pittsburgh, May 1971, p. 1.2. Figure shows specifications for a speech understanding system planned for 1976 that are barely within the current state of the art.
  2. Hans Moravec, 'Mind Children', Harvard University Press, Cambridge, 1988, p. 64.
  3. K. Eric Drexler, 'Engines of Creation', Anchor Press, Doubleday, New York, 1986, p. 14, 19.
  4. L.J. Whitman, Joseph A. Stroscio, R.A. Dragoset and R.J. Celotta, 'Manipulation of Adsorbed Atoms and Creation of New Structures on Room-Temperature Surfaces with a Scanning Tunneling Microscope', Science 251, 1206-1210, (1991).
  5. Joseph A. Stroscio and D.M. Eigler, 'Atomic and Molecular Manipulation with the Scanning Tunneling Microscope', Science 254, 1326-1335, (1991).
  6. Robert Pool, 'Making Atoms Jump Through Hoops', Science 248, 1076-1078, (1990).
  7. Michael Conrad, 'On Design Principles for a Molecular Computer', CACM 28, 464-480, (1985).
  8. Arthur L. Robinson, 'Bell Labs Generates Squeezed Light', Science 230, 927-929, (1985).
  9. Yoshihisa Yamamoto, Susumu Machida and Wayne H. Richardson, 'Photon Number Squeezed States in Semiconductor Lasers', Science 255, 1219-1224, (1992).
  10. Robert Pool, 'A Small, Small, Very Small Diode', Science 246, 1251, (1989).
  11. In-Whan Lyo and Phaedon Avouris, 'Negative Differential Resistance on the Atomic Scale: Implications for Atomic Scale Devices', Science 245, 1369-1371, (1989).
  12. Mani Sundaram, Scott A. Chalmers, Peter F. Hopkins and Arthur C. Gossard, 'New Quantum Structures', Science 254, 1326-1335, (1991).
  13. Richard A. Webb and Yoseph Imry, 'Quantum Interference and the Aharonov-Bohm Effect', Scientific American 260, 56-62, (April 1989).
  14. Arthur L. Robinson, 'Is Diamond the New Wonder Material?', Science 234, 1074-1076, (1986).
  15. Dimitri A. Parthenopoulos and Peter M. Rentzepis, 'Three-Dimensional Optical Storage Memory', Science 245, 843-845, (1989).
  16. David H. Freedman, 'Drawing a Bead on Superdense Storage', Science 255, 1213-1214, (1992).
  17. Ivan Amato, 'Designing Crystals That Say No to Photons', Science 255, 1512, (1992).
  18. Hubert L. Dreyfus, 'What Computers Can't Do', Harper Colophon, New York, Revised Edition 1979.
  19. Peter J. Denning and Walter F. Tichy, 'Highly Parallel Computation', Science 250, 1217-1222, (1990).
  20. We differentiate a large network of processors from a parallel processor with many elements. The former cooperates on many related but independent tasks, whereas the latter attacks a single problem by partitioning it among many processors.
  21. 'Optical Interferometric Parallel Data Processor', NASA Tech Briefs 11, 37, (January 1987).
  22. Yaser S. Abu-Mostafa and Demetri Psaltis, 'Optical Neural Computers', Scientific American 256, 88-95, (March 1987).
  23. Philip D. Wasserman, 'Neural Computing, Theory and Practice', Van Nostrand Reinhold, New York, 1989.
  24. Alan B. Chambers and David C. Nagel, 'Pilots of the Future: Human or Computer?', CACM 28, 1187-1199, (1985).
  25. Eliot Marshall, 'Accountants Fret Over EOS Data', Science 255, 1206, (1992).
  26. Gene D. Carlow, 'Architecture of the Space Shuttle Primary Avionics Software System', CACM 27, 926-936, (1984).
  27. John R. Garman, 'The Shuttle Orbiter Primary Avionics Software System', NASA-S-83-02134, 1983.
  28. Paul McAvinney, 'US Patent No. 4,746,770: Method and Apparatus for Isolating and Manipulating Graphic Objects on Computer Video Monitor', US Patent Office, May 24, 1988.
  29. James D. Foley, 'Interfaces for Advanced Computing', Scientific American 257, 127-135, (October 1987).
  30. 'Controlling Computers With a Wave of the Hand', NASA Tech Briefs 13, 18-19, (August 1989).
  31. Robert Pool, 'Mathematicians Join the Computer Revolution', Science 256, 52-53, (1992).
  32. Robert Pool, 'The Third Branch of Science Debuts', Science 256, 44-47, (1992).
  33. Kai-Fu Lee, 'Large-Vocabulary Speaker-Independent Continuous Speech Recognition: The Sphinx System', Carnegie-Mellon University School of Computer Science, CMU-CS-88-148, April 1988.
  34. Kai-Fu Lee and Sanjoy Mahajan, 'Corrective and Reinforcement Learning for Speaker-Independent Continuous Speech Recognition', Carnegie-Mellon University School of Computer Science, CMU-CS-89-100, January 1989.
  35. Roger Dannenberg and Dale Amon, 'A Gesture Based User Interface Prototyping System', in Proceedings of the Second Annual ACM SIGGRAPH Symposium on User Interface Software and Technology, ACM, 1989, p. 127-132.
  36. Dean Harris Rubine, 'The Automatic Recognition of Gestures', thesis, Carnegie-Mellon University School of Computer Science, CMU-CS-91-202, December 1991.
  37. Dean Rubine, 'Specifying Gestures by Example', submitted to SIGGRAPH 91, Carnegie-Mellon University School of Computer Science, 8 January 1991.
  38. Brad A. Myers, Dario A. Giuse, Roger B. Dannenberg, Brad Vander Zanden, David S. Kosbie, Edward Pervin, Andrew Mickish and Philippe Marchal, 'Garnet: Comprehensive Support for Graphical, Highly Interactive User Interfaces', IEEE Computer, 71-85, (November 1990).
  39. Luca Cardelli, 'Building User Interfaces by Direct Manipulation', DEC Systems Research Center, Palo Alto, (1987).
  40. Michael Santori, 'An Instrument That Isn't Really', IEEE Spectrum, 36-38, (August 1990).
  41. Peter Fromherz, Andreas Offenhausser, Thomas Vetter and Jurgen Weis, 'A Neuron-Silicon Junction: A Retzius Cell of the Leech on an Insulated-Gate Field Effect Transistor', Science 252, 1290-1293, (1991).
  42. Sarah Williams, 'Tapping Into Nerve Conversations', Science 248, 555, (1990).
  43. Ivan Amato, 'Engineers Open a Dialogue with Neurons', Science 253, 34, (1991).
  44. Steve Ditlea, 'Data Suit', OMNI, 22, (September 1988).
  45. K. Eric Drexler, 'Engines of Creation', Anchor Press, Doubleday, New York, 1986, p. 90-92.
  46. Myron W. Krueger, 'Artificial Reality II', Addison Wesley, 1991.
  47. Russell Ruthen, 'Holochrome', Scientific American 259, 26-27, (November 1988).
  48. Dean Rubine and Paul McAvinney, 'Programmable Finger-tracking Instrument Controllers', Computer Music Journal 14, 26-41, (MIT 1990).
  49. 'Sensor Frame Graphic Manipulator', SBIR Phase I Final Report to NASA, Contract Number NAS 9-17741, (July 1987).
  50. 'The Sensor Frame Graphic Manipulator', SBIR Phase II Final Report to NASA, (May 1990).
  51. Emanuel Sachs, 'Coming Soon to a CAD Lab Near You', Byte, 238-239, (July 1990).
  52. "Something about this situation smells…" could be a real input. Scent could be used to indicate the overall situation, as in rosy or rotten.
  53. Hans Moravec, 'Mind Children', Harvard University Press, Cambridge, 1988, p. 96-99.
  54. Joseph Weizenbaum, 'ELIZA', CACM 9, 36-45, (1966).
  55. Philip H. Abelson, 'Sensors, Computers, and Actuators', Science 249, 9, (1990).
  56. K.D. Wise and K. Najafi, 'Microfabrication Techniques for Integrated Sensors and Microsystems', Science 254, 1326-1335, (1991).
  57. Ivan Amato, 'Animating the Material World', Science 255, 284-286, (1992).
  58. William B. Scott, 'Black World Engineers, Scientists Encourage Using Highly Classified Technology for Civil Applications', Aviation Week and Space Technology 136, 66-67, (March 9, 1992).
  59. Even with nanoscale sensors, it may be less than trivial to interleave high-sensitivity sensors (for photons varying from microwave to gamma) in ways that do not interfere with each other. The ability to position atoms does not imply total understanding of all things physical.
  60. Thomas Gaisser, 'Gamma Rays and Neutrinos as Clues to the Origin of High Energy Cosmic Rays', Science 247, 1049-1056, (1990). See page 1053.
  61. Advances in physics might allow similar monitoring of gravity waves. More will be known on this topic if research plans go ahead on the next generation of Earth-based gravity wave sensors.
  62. The decision to assemble, disassemble or leave "as is" will be an economic one based on a trade-off of "rents" for "real estate" and energy "costs".
  63. Inputs and outputs in this diagram may represent spacecraft internal sensing and effecting as well as the external ones depicted in figure 1. Note that the crew itself can be the object of this input and output.
  64. Under some circumstances (fault testing, fault bypass, novel circumstances, testing new control and reflex loops) a higher-level ganglion may need to directly command a lower-level one.
  65. Reflex actions can be effected by either open or closed loops. Pulling your hand out of a fire is an example of an open loop reflex; ducking a fast-moving dark object is a closed loop reflex. We use reflex to mean a primarily protective response to a sudden external stimulus. A feedback control loop acts upon monitored process variable(s) to keep them within bounds specified by local or globally specified limits.
  66. In an Agoric system we might have a market in data that meets some 'interestingness' criteria. Placing a high value on novelty guides the system to watch for it.
  67. Unpublished commercial work by the author, 1981. A real-time process control operating system with local control loops, failsafe and local autonomy at any level, and the ability for faults to be passed upwards and demand reads of data to be passed downwards was implemented on a highly distributed tree-structured hierarchy of single-board microprocessors. The system is, to the best of the author's knowledge, still in use at multi-building commercial and governmental sites.
  68. M. Mitchell Waldrop, 'Fast, Cheap, and Out of Control', Science 248, 959-961, (1990).
  69. Emilio Bizzi, Ferdinando A. Mussa-Ivaldi and Simon Giszter, 'Computations Underlying the Execution of Movement: A Biological Perspective', Science 253, 287-291, (1991).
  70. This parallel should not be taken too literally, since mammalian sensory systems, particularly the visual cortex, are wired more directly into upper parts of the brain. Brain interconnections are very much an evolutionary design with new systems added on top of the old. It does not make sense to pattern a designed system too closely upon an obviously workable but ad hoc prototype.
  71. It may be tricky to decide when a crew input is a command input to the top level and when it is a low-level sensor input to the systems responsible for keeping the crew alive. There is an overlap between the crew-as-controllers and crew-as-environment views of crew bio-data. Beware that reflex knowledge may be domain dependent: a mosquito has a different reflex to another mosquito than a frog has to a mosquito.
  72. Gregory K. Wallace, 'The JPEG Still Picture Compression Standard', CACM 34, 31-44, (1991).
  73. Greg Cockcroft and Leo Hourvitz, 'NeXTstep: Putting JPEG to Multiple Uses', CACM 34, 45 & 116, (1991).
  74. Didier Le Gall, 'MPEG: A Video Compression Standard for Multimedia Applications', CACM 34, 47-58, (1991).
  75. Darrell R. Raymond and Frank William Tompa, 'Hypertext and the New Oxford English Dictionary', CACM 31, 871-879, (1988).
  76. Robert Pool, 'Bringing the Computer Revolution Down to a Personal Level', Science 256, 55-62, (1992).
  77. Jack Shandle, 'Lights, Camera, Compute!', Electronics, 72-73, (July 1987).
  78. Samuel Weber, 'Imaging', Electronics, 61-64, (July 1989).
  79. Vannevar Bush, 'As We May Think', Atlantic Monthly, July 1945. Reprinted in 'CD-ROM: The New Papyrus', p. 3-20.
  80. Jeff Conklin, 'Hypertext: An Introduction and Survey', IEEE Computer, 17-41, (September 1987).
  81. John B. Smith and Stephen F. Weiss, 'An Overview of Hypertext', CACM 31, 816-819, (1988).
  82. Jakob Nielsen, 'The Art of Navigating Through Hypertext', CACM 33, 296-310, (1990).
  83. 'Xanadu/Server System Overview, Draft Revision 1.0 B4', Xanadu Operating Company, (June 1990).
  84. Robert Pool, 'Massively Parallel Machines Usher In Next Level of Computing Power', Science 256, 50-51, (1992).
  85. David A. Carlson and Sudha Ram, 'HyperIntelligence: The Next Frontier', CACM 33, 311-321, (1990).
  86. Donald M. Akscyn, Donald L. McCracken and Elise A. Yoder, 'KMS: A Distributed Hypermedia System for Managing Knowledge in Organizations', CACM 31, 820-835, (1988).
  87. NeXTstep is a trademark of NeXT Computer, Inc.
  88. 'The NeXTstep Advantage', NeXT Computer Inc. #N6033, 1991, p. 48-51.
  89. Mark S. Miller and K. Eric Drexler, 'Markets and Computation: Agoric Open Systems', in The Ecology of Computation, ed. Bernardo Huberman, Elsevier Science Publishers, North Holland, 1988.
  90. K. Eric Drexler and Mark S. Miller, 'Incentive Engineering for Computational Resource Management', in The Ecology of Computation, ed. Bernardo Huberman, Elsevier Science Publishers, North Holland, 1988.
  91. Mark S. Miller and K. Eric Drexler, 'Comparative Ecology: A Computational Perspective', in The Ecology of Computation, ed. Bernardo Huberman, Elsevier Science Publishers, North Holland, 1988.
  92. Scott H. Clearwater, Bernardo A. Huberman and Tad Hogg, 'Cooperative Solution of Constraint Satisfaction Problems', Science 254, 1181-1183, (1991).
  93. A random factor would simulate the effect of war, civil strife and revolution in human affairs, although in a rather less destructive manner.
  94. Process death does not mean a program or its information is lost. Useful data and resources of the particular process instantiation will be 'sold off' and the process terminated, but an archive will always contain the server executable so that a 'venture capital' process can generate a 'startup' when conditions are again favorable.

