Science plus more to please
Irreducible Mind is the title of a book that was first published in 2007
The authors are Edward F. Kelly, Emily Williams Kelly, Adam Crabtree, Alan Gauld, Michael Grosso and Bruce Greyson
The book’s contents remain defining and important contributions to psychology to this day
The purpose of this blog is not so much to talk about the book and its contents as to look more closely at an extended review of the book by Ulrich Mohrhoff. Mohrhoff’s review discusses the implications of Irreducible Mind in relation to what he considers to be the metaphysical nexus between our minds and brains. Mohrhoff introduces sub-quantum ontological physics into his review as he talks about the mind/brain relationship.
In future posts on my website I will be referring not only to the Irreducible Mind book but more especially to Mohrhoff’s words. I see both of these items as pertinent not only to my physics Awareness model but also to my Dual Consciousness [Implicit and Explicit] model.
You will find Mohrhoff’s review paper here.
You will also find another document of reviews relating to the perceived quality of the Irreducible Mind book.
If you have not heard of the book Irreducible Mind before, I feel strongly that you will appreciate my introducing you to both the book and Mohrhoff’s ideas.
Is ether theory still valid for incorporation within contemporary physics or not?
I believe that it is. My reasons for saying this are based upon what Einstein said about ether in a public lecture at the University of Leiden, in the Netherlands, in 1920. In his later years Einstein continued to believe that ether was pertinent to both his Special Relativity and General Relativity models, but for different reasons in each case. In his Special Relativity theory he described an immobile ether, while in his General Relativity theory he determined an ether to be necessary to accommodate gravitational waves.
I have copied and posted Einstein’s 1920 lecture below, and I have highlighted within the text where he talks about his belief that ether was an important factor in his thinking in relation to both his relativity theories. Einstein continued to believe in ether theory until the closing days of his life, though not necessarily in relation to his original relativity ideas. As an extension of these words, keep in mind that both of Einstein’s relativity theories are at the core of modern physics models and theories. Contemporary physics has elected to dismiss ether theory because it is seen as unnecessary. However, in 1924 Einstein further affirmed his belief in the need for a universal ether. In this later document he said, quote: “…every theory of local action assumes continuous fields, and thus also an existence of an aether.” The full essay relating to the 1924 lecture is in this pdf file.
Here is Einstein’s 1920 lecture:
Albert Einstein gave an address on 5 May 1920 at the University of Leiden. He chose as his topic Ether and the Theory of Relativity. He lectured in German but we present an English translation below. The lecture was published by Methuen & Co. Ltd, London, in 1922.
Ether and the Theory of Relativity by Albert Einstein
How does it come about that alongside of the idea of ponderable matter, which is derived by abstraction from everyday life, the physicists set the idea of the existence of another kind of matter, the ether? The explanation is probably to be sought in those phenomena which have given rise to the theory of action at a distance, and in the properties of light which have led to the undulatory theory. Let us devote a little while to the consideration of these two subjects.
Outside of physics we know nothing of action at a distance. When we try to connect cause and effect in the experiences which natural objects afford us, it seems at first as if there were no other mutual actions than those of immediate contact, e.g. the communication of motion by impact, push and pull, heating or inducing combustion by means of a flame, etc. It is true that even in everyday experience weight, which is in a sense action at a distance, plays a very important part. But since in daily experience the weight of bodies meets us as something constant, something not linked to any cause which is variable in time or place, we do not in everyday life speculate as to the cause of gravity, and therefore do not become conscious of its character as action at a distance. It was Newton’s theory of gravitation that first assigned a cause for gravity by interpreting it as action at a distance, proceeding from masses. Newton’s theory is probably the greatest stride ever made in the effort towards the causal nexus of natural phenomena. And yet this theory evoked a lively sense of discomfort among Newton’s contemporaries, because it seemed to be in conflict with the principle springing from the rest of experience, that there can be reciprocal action only through contact, and not through immediate action at a distance.
It is only with reluctance that man’s desire for knowledge endures a dualism of this kind. How was unity to be preserved in his comprehension of the forces of nature? Either by trying to look upon contact forces as being themselves distant forces which admittedly are observable only at a very small distance and this was the road which Newton’s followers, who were entirely under the spell of his doctrine, mostly preferred to take; or by assuming that the Newtonian action at a distance is only apparently immediate action at a distance, but in truth is conveyed by a medium permeating space, whether by movements or by elastic deformation of this medium. Thus the endeavour toward a unified view of the nature of forces leads to the hypothesis of an ether. This hypothesis, to be sure, did not at first bring with it any advance in the theory of gravitation or in physics generally, so that it became customary to treat Newton’s law of force as an axiom not further reducible. But the ether hypothesis was bound always to play some part in physical science, even if at first only a latent part.
When in the first half of the nineteenth century the far-reaching similarity was revealed which subsists between the properties of light and those of elastic waves in ponderable bodies, the ether hypothesis found fresh support. It appeared beyond question that light must be interpreted as a vibratory process in an elastic, inert medium filling up universal space. It also seemed to be a necessary consequence of the fact that light is capable of polarisation that this medium, the ether, must be of the nature of a solid body, because transverse waves are not possible in a fluid, but only in a solid. Thus the physicists were bound to arrive at the theory of the “quasi-rigid” luminiferous ether, the parts of which can carry out no movements relatively to one another except the small movements of deformation which correspond to light-waves.
This theory – also called the theory of the stationary luminiferous ether – moreover found a strong support in an experiment which is also of fundamental importance in the special theory of relativity, the experiment of Fizeau, from which one was obliged to infer that the luminiferous ether does not take part in the movements of bodies. The phenomenon of aberration also favoured the theory of the quasi-rigid ether.
The development of the theory of electricity along the path opened up by Maxwell and Lorentz gave the development of our ideas concerning the ether quite a peculiar and unexpected turn. For Maxwell himself the ether indeed still had properties which were purely mechanical, although of a much more complicated kind than the mechanical properties of tangible solid bodies. But neither Maxwell nor his followers succeeded in elaborating a mechanical model for the ether which might furnish a satisfactory mechanical interpretation of Maxwell’s laws of the electro-magnetic field. The laws were clear and simple, the mechanical interpretations clumsy and contradictory. Almost imperceptibly the theoretical physicists adapted themselves to a situation which, from the standpoint of their mechanical programme, was very depressing. They were particularly influenced by the electro-dynamical investigations of Heinrich Hertz. For whereas they previously had required of a conclusive theory that it should content itself with the fundamental concepts which belong exclusively to mechanics (e.g. densities, velocities, deformations, stresses) they gradually accustomed themselves to admitting electric and magnetic force as fundamental concepts side by side with those of mechanics, without requiring a mechanical interpretation for them. Thus the purely mechanical view of nature was gradually abandoned. But this change led to a fundamental dualism which in the long-run was insupportable. A way of escape was now sought in the reverse direction, by reducing the principles of mechanics to those of electricity, and this especially as confidence in the strict validity of the equations of Newton’s mechanics was shaken by the experiments with β-rays and rapid cathode rays.
This dualism still confronts us in unextenuated form in the theory of Hertz, where matter appears not only as the bearer of velocities, kinetic energy, and mechanical pressures, but also as the bearer of electromagnetic fields. Since such fields also occur in vacuo – i.e. in free ether – the ether also appears as bearer of electromagnetic fields. The ether appears indistinguishable in its functions from ordinary matter. Within matter it takes part in the motion of matter and in empty space it has everywhere a velocity; so that the ether has a definitely assigned velocity throughout the whole of space. There is no fundamental difference between Hertz’s ether and ponderable matter (which in part subsists in the ether).
The Hertz theory suffered not only from the defect of ascribing to matter and ether, on the one hand mechanical states, and on the other hand electrical states, which do not stand in any conceivable relation to each other; it was also at variance with the result of Fizeau’s important experiment on the velocity of the propagation of light in moving fluids, and with other established experimental results.
Such was the state of things when H A Lorentz entered upon the scene. He brought theory into harmony with experience by means of a wonderful simplification of theoretical principles. He achieved this, the most important advance in the theory of electricity since Maxwell, by taking from ether its mechanical, and from matter its electromagnetic qualities. As in empty space, so too in the interior of material bodies, the ether, and not matter viewed atomistically, was exclusively the seat of electromagnetic fields. According to Lorentz the elementary particles of matter alone are capable of carrying out movements; their electromagnetic activity is entirely confined to the carrying of electric charges. Thus Lorentz succeeded in reducing all electromagnetic happenings to Maxwell’s equations for free space.
As to the mechanical nature of the Lorentzian ether, it may be said of it, in a somewhat playful spirit, that immobility is the only mechanical property of which it has not been deprived by H A Lorentz. It may be added that the whole change in the conception of the ether which the special theory of relativity brought about, consisted in taking away from the ether its last mechanical quality, namely, its immobility. How this is to be understood will forthwith be expounded.
The space-time theory and the kinematics of the special theory of relativity were modelled on the Maxwell-Lorentz theory of the electromagnetic field. This theory therefore satisfies the conditions of the special theory of relativity, but when viewed from the latter it acquires a novel aspect. For if K be a system of coordinates relatively to which the Lorentzian ether is at rest, the Maxwell-Lorentz equations are valid primarily with reference to K. But by the special theory of relativity the same equations without any change of meaning also hold in relation to any new system of co-ordinates K’ which is moving in uniform translation relatively to K. Now comes the anxious question:- Why must I in the theory distinguish the K system above all K’ systems, which are physically equivalent to it in all respects, by assuming that the ether is at rest relatively to the K system? For the theoretician such an asymmetry in the theoretical structure, with no corresponding asymmetry in the system of experience, is intolerable. If we assume the ether to be at rest relatively to K, but in motion relatively to K’, the physical equivalence of K and K’ seems to me from the logical standpoint, not indeed downright incorrect, but nevertheless unacceptable.
The next position which it was possible to take up in face of this state of things appeared to be the following. The ether does not exist at all. The electromagnetic fields are not states of a medium, and are not bound down to any bearer, but they are independent realities which are not reducible to anything else, exactly like the atoms of ponderable matter. This conception suggests itself the more readily as, according to Lorentz’s theory, electromagnetic radiation, like ponderable matter, brings impulse and energy with it, and as, according to the special theory of relativity, both matter and radiation are but special forms of distributed energy, ponderable mass losing its isolation and appearing as a special form of energy.
More careful reflection teaches us however, that the special theory of relativity does not compel us to deny ether. We may assume the existence of an ether; only we must give up ascribing a definite state of motion to it, i.e. we must by abstraction take from it the last mechanical characteristic which Lorentz had still left it. We shall see later that this point of view, the conceivability of which I shall at once endeavour to make more intelligible by a somewhat halting comparison, is justified by the results of the general theory of relativity.
Think of waves on the surface of water. Here we can describe two entirely different things. Either we may observe how the undulatory surface forming the boundary between water and air alters in the course of time; or else – with the help of small floats, for instance – we can observe how the position of the separate particles of water alters in the course of time. If the existence of such floats for tracking the motion of the particles of a fluid were a fundamental impossibility in physics – if, in fact nothing else whatever were observable than the shape of the space occupied by the water as it varies in time, we should have no ground for the assumption that water consists of movable particles. But all the same we could characterise it as a medium.
We have something like this in the electromagnetic field. For we may picture the field to ourselves as consisting of lines of force. If we wish to interpret these lines of force to ourselves as something material in the ordinary sense, we are tempted to interpret the dynamic processes as motions of these lines of force, such that each separate line of force is tracked through the course of time. It is well known, however, that this way of regarding the electromagnetic field leads to contradictions.
Generalising we must say this:- There may be supposed to be extended physical objects to which the idea of motion cannot be applied. They may not be thought of as consisting of particles which allow themselves to be separately tracked through time. In Minkowski’s idiom this is expressed as follows:- Not every extended conformation in the four-dimensional world can be regarded as composed of world-threads. The special theory of relativity forbids us to assume the ether to consist of particles observable through time, but the hypothesis of ether in itself is not in conflict with the special theory of relativity. Only we must be on our guard against ascribing a state of motion to the ether.
Certainly, from the standpoint of the special theory of relativity, the ether hypothesis appears at first to be an empty hypothesis. In the equations of the electromagnetic field there occur, in addition to the densities of the electric charge, only the intensities of the field. The career of electromagnetic processes in vacuo appears to be completely determined by these equations, uninfluenced by other physical quantities. The electromagnetic fields appear as ultimate, irreducible realities, and at first it seems superfluous to postulate a homogeneous, isotropic ether-medium, and to envisage electromagnetic fields as states of this medium. But on the other hand there is a weighty argument to be adduced in favour of the ether hypothesis. To deny the ether is ultimately to assume that empty space has no physical qualities whatever. The fundamental facts of mechanics do not harmonize with this view. For the mechanical behaviour of a corporeal system hovering freely in empty space depends not only on relative positions (distances) and relative velocities, but also on its state of rotation, which physically may be taken as a characteristic not appertaining to the system in itself. In order to be able to look upon the rotation of the system, at least formally, as something real, Newton objectivises space. Since he classes his absolute space together with real things, for him rotation relative to an absolute space is also something real. Newton might no less well have called his absolute space “Ether”; what is essential is merely that besides observable objects, another thing, which is not perceptible, must be looked upon as real, to enable acceleration or rotation to be looked upon as something real.
It is true that Mach tried to avoid having to accept as real something which is not observable by endeavouring to substitute in mechanics a mean acceleration with reference to the totality of the masses in the universe in place of an acceleration with reference to absolute space. But inertial resistance opposed to relative acceleration of distant masses presupposes action at a distance; and as the modern physicist does not believe that he may accept this action at a distance, he comes back once more, if he follows Mach, to the ether, which has to serve as medium for the effects of inertia. But this conception of the ether to which we are led by Mach’s way of thinking differs essentially from the ether as conceived by Newton, by Fresnel, and by Lorentz. Mach’s ether not only conditions the behaviour of inert masses, but is also conditioned in its state by them.
Mach’s idea finds its full development in the ether of the general theory of relativity. According to this theory the metrical qualities of the continuum of space-time differ in the environment of different points of space-time, and are partly conditioned by the matter existing outside of the territory under consideration. This space-time variability of the reciprocal relations of the standards of space and time, or, perhaps, the recognition of the fact that “empty space” in its physical relation is neither homogeneous nor isotropic, compelling us to describe its state by ten functions (the gravitation potentials gμν), has, I think, finally disposed of the view that space is physically empty. But therewith the conception of the ether has again acquired an intelligible content although this content differs widely from that of the ether of the mechanical undulatory theory of light. The ether of the general theory of relativity is a medium which is itself devoid of all mechanical and kinematical qualities, but helps to determine mechanical (and electromagnetic) events.
What is fundamentally new in the ether of the general theory of relativity as opposed to the ether of Lorentz consists in this, that the state of the former is at every place determined by connections with the matter and the state of the ether in neighbouring places, which are amenable to law in the form of differential equations; whereas the state of the Lorentzian ether in the absence of electromagnetic fields is conditioned by nothing outside itself, and is everywhere the same. The ether of the general theory of relativity is transmuted conceptually into the ether of Lorentz if we substitute constants for the functions of space which describe the former, disregarding the causes which condition its state. Thus we may also say, I think, that the ether of the general theory of relativity is the outcome of the Lorentzian ether, through relativation.
As to the part which the new ether is to play in the physics of the future we are not yet clear. We know that it determines the metrical relations in the space-time continuum, e.g. the configurative possibilities of solid bodies as well as the gravitational fields; but we do not know whether it has an essential share in the structure of the electrical elementary particles constituting matter. Nor do we know whether it is only in the proximity of ponderable masses that its structure differs essentially from that of the Lorentzian ether; whether the geometry of spaces of cosmic extent is approximately Euclidean. But we can assert by reason of the relativistic equations of gravitation that there must be a departure from Euclidean relations, with spaces of cosmic order of magnitude, if there exists a positive mean density, no matter how small, of the matter in the universe.
In this case the universe must of necessity be spatially unbounded and of finite magnitude, its magnitude being determined by the value of that mean density.
If we consider the gravitational field and the electromagnetic field from the standpoint of the ether hypothesis, we find a remarkable difference between the two. There can be no space nor any part of space without gravitational potentials; for these confer upon space its metrical qualities, without which it cannot be imagined at all. The existence of the gravitational field is inseparably bound up with the existence of space. On the other hand a part of space may very well be imagined without an electromagnetic field; thus in contrast with the gravitational field, the electromagnetic field seems to be only secondarily linked to the ether, the formal nature of the electromagnetic field being as yet in no way determined by that of gravitational ether. From the present state of theory it looks as if the electromagnetic field, as opposed to the gravitational field, rests upon an entirely new formal motif, as though nature might just as well have endowed the gravitational ether with fields of quite another type, for example, with fields of a scalar potential, instead of fields of the electromagnetic type.
Since according to our present conceptions the elementary particles of matter are also, in their essence, nothing else than condensations of the electromagnetic field, our present view of the universe presents two realities which are completely separated from each other conceptually, although connected causally, namely, gravitational ether and electromagnetic field, or – as they might also be called – space and matter.
Of course it would be a great advance if we could succeed in comprehending the gravitational field and the electromagnetic field together as one unified conformation. Then for the first time the epoch of theoretical physics founded by Faraday and Maxwell would reach a satisfactory conclusion. The contrast between ether and matter would fade away, and, through the general theory of relativity, the whole of physics would become a complete system of thought, like geometry, kinematics, and the theory of gravitation. An exceedingly ingenious attempt in this direction has been made by the mathematician H Weyl; but I do not believe that his theory will hold its ground in relation to reality. Further, in contemplating the immediate future of theoretical physics we ought not unconditionally to reject the possibility that the facts comprised in the quantum theory may set bounds to the field theory beyond which it cannot pass.
Recapitulating, we may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether. According to the general theory of relativity space without ether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this ether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it.
From Einstein’s words, more particularly those I have emboldened, I hope that my readers will understand why I feel that ether theory remains a valid hypothesis in physics.
Is the only way to head off a catastrophe with software to change software code and how we make it?
It seems that this must be the case, and sooner rather than later. If you are familiar with computer software technology, I am sure you will understand how serious this coding problem is. I am not computer literate, so you must evaluate the contents of the article below as you see fit. I have emboldened the text that I feel is most pertinent for you to take notice of. As far as I am aware, this urgent story has not yet been discussed in the Australian media.
I present to my readers the following article from The Atlantic magazine.
I quote the article as follows:
“James Somers, Sep 26, 2017
There were six hours during the night of April 10, 2014, when the entire population of Washington State had no 911 service. People who called for help got a busy signal. One Seattle woman dialed 911 at least 37 times while a stranger was trying to break into her house. When he finally crawled into her living room through a window, she picked up a kitchen knife. The man fled.
The 911 outage, at the time the largest ever reported, was traced to software running on a server in Englewood, Colorado. Operated by a systems provider named Intrado, the server kept a running counter of how many calls it had routed to 911 dispatchers around the country. Intrado programmers had set a threshold for how high the counter could go. They picked a number in the millions.
Shortly before midnight on April 10, the counter exceeded that number, resulting in chaos. Because the counter was used to generate a unique identifier for each call, new calls were rejected. And because the programmers hadn’t anticipated the problem, they hadn’t created alarms to call attention to it. Nobody knew what was happening. Dispatch centers in Washington, California, Florida, the Carolinas, and Minnesota, serving 11 million Americans, struggled to make sense of reports that callers were getting busy signals. It took until morning to realize that Intrado’s software in Englewood was responsible, and that the fix was to change a single number.
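To make the failure pattern described above concrete, here is a minimal hypothetical sketch in Python. It is my own illustration of the general bug class, not Intrado’s actual code; the threshold value and function names are invented for the example. A hard-coded cap on a call counter causes new calls to be silently rejected, and because no alarm was ever written, nothing flags the problem.

```python
# Hypothetical sketch of the failure pattern: a counter used to
# generate unique call identifiers hits a hard-coded cap, and every
# call after that is silently rejected. Not Intrado's actual code.

MAX_CALLS = 40_000_000  # an arbitrary cap "in the millions"

call_counter = 0

def route_call(caller_number):
    """Assign a unique identifier to an incoming call and route it."""
    global call_counter
    if call_counter >= MAX_CALLS:
        # The counter can no longer produce unique identifiers, so the
        # call is rejected -- and no alarm exists to report it.
        return None
    call_counter += 1
    return f"call-{call_counter}"  # unique ID used for dispatch

# Below the threshold, every call succeeds...
assert route_call("555-0100") == "call-1"

# ...but once the counter crosses the cap, calls fail silently.
call_counter = MAX_CALLS
assert route_call("555-0199") is None
```

Note that, just as in the article, "the fix" in this sketch would be to change a single number, while the deeper problem is the missing alarm.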
Not long ago, emergency calls were handled locally. Outages were small and easily diagnosed and fixed. The rise of cellphones and the promise of new capabilities—what if you could text 911? or send videos to the dispatcher?—drove the development of a more complex system that relied on the internet. For the first time, there could be such a thing as a national 911 outage. There have now been four in as many years.
It’s been said that software is “eating the world.” More and more, critical systems that were once controlled mechanically, or by people, are coming to depend on code. This was perhaps never clearer than in the summer of 2015, when on a single day, United Airlines grounded its fleet because of a problem with its departure-management system; trading was suspended on the New York Stock Exchange after an upgrade; the front page of The Wall Street Journal’s website crashed; and Seattle’s 911 system went down again, this time because a different router failed. The simultaneous failure of so many software systems smelled at first of a coordinated cyberattack. Almost more frightening was the realization, late in the day, that it was just a coincidence.
“When we had electromechanical systems, we used to be able to test them exhaustively,” says Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years. She became known for her report on the Therac-25, a radiation-therapy machine that killed six patients because of a software error. “We used to be able to think through all the things it could do, all the states it could get into.” The electromechanical interlockings that controlled train movements at railroad crossings, for instance, only had so many configurations; a few sheets of paper could describe the whole system, and you could run physical trains against each configuration to see how it would behave. Once you’d built and tested it, you knew exactly what you were dealing with.
Software is different. Just by editing the text in a file somewhere, the same hunk of silicon can become an autopilot or an inventory-control system. This flexibility is software’s miracle, and its curse. Because it can be changed cheaply, software is constantly changed; and because it’s unmoored from anything physical—a program that is a thousand times more complex than another takes up the same actual space—it tends to grow without bound. “The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”
Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing. Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”
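The FCC report’s phrase about "a point in the application logic that was not designed to perform any automated corrective actions" can be sketched as follows. This is a hypothetical Python illustration of the general pattern, with invented names, not the real system’s code: a healthy backup router exists, but the code path that detects the fault was never written to switch over to it.

```python
# Hypothetical sketch: a backup router exists, but only one code path
# performs an automated corrective action. Not the real system's code.

class Router:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

def route(primary, backup, auto_failover):
    """Return the router that will carry 911 traffic."""
    if primary.healthy:
        return primary
    if auto_failover:
        # Automated corrective action: switching here would have
        # restored service almost immediately.
        return backup
    # The point in the application logic with no corrective action:
    # the fault is visible, but nothing switches to the backup.
    raise RuntimeError("primary down; waiting on manual intervention")

primary = Router("englewood", healthy=False)
backup = Router("backup", healthy=True)

# With automated failover designed in, the backup takes over at once.
assert route(primary, backup, auto_failover=True).name == "backup"
```

Calling `route(primary, backup, auto_failover=False)` instead raises the error and leaves callers with busy signals until a human intervenes, which is the situation the article describes.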
This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”
The attempts now underway to change how we make software all seem to start with the same premise: Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.
Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code. When you press your foot down on your car’s accelerator, for instance, you’re no longer controlling anything directly; there’s no mechanical link from the pedal to the throttle. Instead, you’re issuing a command to a piece of software that decides how much air to give the engine. The car is a computer you can sit inside of. The steering wheel and pedals might as well be keyboard keys.
Like everything else, the car has been computerized to enable new features. When a program is in charge of the throttle and brakes, it can slow you down when you’re too close to another car, or precisely control the fuel injection to help you save on gas. When it controls the steering, it can keep you in your lane as you start to drift, or guide you into a parking space. You couldn’t build these features without code. If you tried, a car might weigh 40,000 pounds, an immovable mass of clockwork.
Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code. But just because we can’t see the complexity doesn’t mean that it has gone away.
The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning. As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.
What made programming so difficult was that it required you to think like a computer. The strangeness of it was in some sense more vivid in the early days of computing, when code took the form of literal ones and zeros. Anyone looking over a programmer’s shoulder as they pored over line after line like “100001010011” and “000010011110” would have seen just how alienated the programmer was from the actual problems they were trying to solve; it would have been impossible to tell whether they were trying to calculate artillery trajectories or simulate a game of tic-tac-toe. The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.
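The gulf between raw bits and the problem being solved can be made concrete with a toy example. The 8-bit encoding below is invented purely for illustration (it is not a real instruction set): even once decoded, the instructions say nothing about whether the program computes artillery trajectories or plays tic-tac-toe.

```python
# A toy illustration (invented encoding, not any real ISA) of what early
# programmers stared at: 2-bit opcode followed by two 3-bit register numbers.

def decode(word: str) -> str:
    """Turn one 8-bit word of our made-up machine code into mnemonics."""
    opcodes = {"00": "LOAD", "01": "STORE", "10": "ADD", "11": "JUMP"}
    op = opcodes[word[:2]]
    r1, r2 = int(word[2:5], 2), int(word[5:8], 2)
    return f"{op} r{r1}, r{r2}"

program = ["00000001", "00010010", "10000001"]
print([decode(w) for w in program])
# ['LOAD r0, r1', 'LOAD r2, r2', 'ADD r0, r1'] -- and still no hint of
# what problem, if any, these three instructions are meant to solve.
```

Even the decoded form only describes the machine's steps, not the programmer's intent; a higher-level language narrows that gap without closing it.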
“The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work. “Software engineers like to provide all kinds of tools and stuff for coding errors,” she says, referring to IDEs. “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”
In September 2007, Jean Bookout was driving on the highway with her best friend in a Toyota Camry when the accelerator seemed to get stuck. When she took her foot off the pedal, the car didn’t slow down. She tried the brakes but they seemed to have lost their power. As she swerved toward an off-ramp going 50 miles per hour, she pulled the emergency brake. The car left a skid mark 150 feet long before running into an embankment by the side of the road. The passenger was killed. Bookout woke up in a hospital a month later.
The incident was one of many in a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible. The National Highway Traffic Safety Administration enlisted software experts from NASA to perform an intensive review of Toyota’s code. After nearly 10 months, the NASA team hadn’t found evidence that software was the cause—but said they couldn’t prove it wasn’t.
It was during litigation of the Bookout accident that someone finally found a convincing connection. Michael Barr, an expert witness for the plaintiff, had a team of software experts spend 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, what’s already there; eventually the code becomes impossible to follow, let alone to test exhaustively for flaws.
Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it. “You have software watching the software,” Barr testified. “If the software malfunctions and the same program or same app that is crashed is supposed to save the day, it can’t save the day because it is not working.”
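How a single flipped bit can defeat a software fail-safe is easy to sketch. The code below is a deliberately simplified illustration (it is not Toyota's code, and the flag layout is invented): a fault is recorded as one bit in memory, so one corrupted bit erases the record and lets the throttle stay open.

```python
def throttle_command(pedal_position: int, fault_flags: int) -> int:
    """Toy throttle logic (illustrative only, not Toyota's actual code).
    Bit 0 of fault_flags means 'sensor disagreement detected'."""
    SENSOR_FAULT = 0b0001
    if fault_flags & SENSOR_FAULT:
        return 0  # fail-safe: close the throttle
    return pedal_position

flags = 0b0001                          # the fault was correctly recorded
print(throttle_command(80, flags))      # 0 -- fail-safe engages

corrupted = flags ^ 0b0001              # a single bit flip clears the record
print(throttle_command(80, corrupted))  # 80 -- throttle stays wide open
```

The fail-safe is only as reliable as the one bit it watches, which is why "software watching the software" offers no protection when both share the same corrupted state.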
Barr’s testimony made the case for the plaintiff, resulting in $3 million in damages for Bookout and her friend’s family. According to The New York Times, it was the first of many similar cases against Toyota to bring to trial problems with the electronic throttle-control system, and the first time Toyota was found responsible by a jury for an accident involving unintended acceleration. The parties decided to settle the case before punitive damages could be awarded. In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.
There will be more bad days for software. It’s important that we get better at making it, because if we don’t, and as software becomes more sophisticated and connected—as it takes control of more critical functions—those days could get worse.
The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little. There is a small but growing chorus that worries the status quo is unsustainable. “Even very good programmers are struggling to make sense of the systems that they are working with,” says Chris Granger, a software developer who worked as a lead at Microsoft on Visual Studio, an IDE that costs $1,199 a year and is used by nearly a third of all professional programmers. He told me that while he was at Microsoft, he arranged an end-to-end study of Visual Studio, the only one that had ever been done. For a month and a half, he watched behind a one-way mirror as people wrote code. “How do they use tools? How do they think?” he said. “How do they sit at the computer, do they touch the mouse, do they not touch the mouse? All these things that we have dogma around that we haven’t actually tested empirically.”
The findings surprised him. “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on — so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.
Computers had doubled in power every 18 months for the last 40 years. Why hadn’t programming changed?
The fact that the two of them, Chris Granger and John Resig of Khan Academy, were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.
Bret Victor does not like to write code. “It sounds weird,” he says. “When I want to make a thing, especially when I want to create something in software, there’s this initial layer of disgust that I have to push through, where I’m not manipulating the thing that I want to make, I’m writing a bunch of text into a text editor. There’s a pretty strong conviction that that’s the wrong way of doing things.”
Victor has the mien of David Foster Wallace, with a lightning intelligence that lingers beneath a patina of aw-shucks shyness. He is 40 years old, with traces of gray and a thin, undeliberate beard. His voice is gentle, mournful almost, but he wants to share what’s in his head, and when he gets on a roll he’ll seem to skip syllables, as though outrunning his own vocal machinery.
Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering, and then went on, after grad school at the University of California, Berkeley, to work at a company that develops music synthesizers. It was a problem perfectly matched to his dual personality: He could spend as much time thinking about the way a performer makes music with a keyboard—the way it becomes an extension of their hands—as he could thinking about the mathematics of digital signal processing.
By the time he gave the talk that made his name, the one that Resig and Granger saw in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.
“Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.” That code now takes the form of letters on a screen in a language like C or Java (derivatives of Fortran and ALGOL), instead of a stack of cards with holes in it, doesn’t make it any less dead, any less indirect.
There is an analogy to word processing. It used to be that all you could see in a program for writing documents was the text itself, and to change the layout or font or margins, you had to write special “control codes,” or commands that would tell the computer that, for instance, “this part of the text should be in italics.” The trouble was that you couldn’t see the effect of those codes until you printed the document. It was hard to predict what you were going to get. You had to imagine how the codes were going to be interpreted by the computer—that is, you had to play computer in your head.
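The "play computer in your head" problem with control codes can be sketched with a toy renderer. The markup below is invented for illustration (it resembles old typesetting commands like troff's, but is not any real format): the writer sees only the left column, and must mentally run the interpreter to know what will print.

```python
def render(source: str) -> str:
    """Interpret an invented control code: '.IT' toggles italics on and
    off (shown here as surrounding slashes, standing in for the printer's
    italic font)."""
    out, italic = [], False
    for token in source.split():
        if token == ".IT":
            italic = not italic
        else:
            out.append(f"/{token}/" if italic else token)
    return " ".join(out)

src = "This part is .IT really .IT important"
print(render(src))  # This part is /really/ important
```

Forget one `.IT` and the rest of the document silently comes out italicized; nothing on the screen warns you until the page is printed.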
Then WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.” When you marked a passage as being in italics, the letters tilted right there on the screen. If you wanted to change the margin, you could drag a ruler at the top of the screen—and see the effect of that change. The document thereby came to feel like something real, something you could poke and prod at. Just by looking you could tell if you’d done something wrong. Control of a sophisticated system—the document’s layout and formatting engine—was made accessible to anyone who could click around on a page.
Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling. And it was the proper job of programmers to ensure that someday they wouldn’t have to.
There was precedent enough to suggest that this wasn’t a crazy idea. Photoshop, for instance, puts powerful image-processing algorithms in the hands of people who might not even know what an algorithm is. It’s a complicated piece of software, but complicated in the way a good synth is complicated, with knobs and buttons and sliders that the user learns to play like an instrument. Squarespace, a company that is perhaps best known for advertising aggressively on podcasts, makes a tool that lets users build websites by pointing and clicking, instead of by writing code in HTML and CSS. It is powerful enough to do work that once would have been done by a professional web designer.
But those were just a handful of examples. The overwhelming reality was that when someone wanted to do something interesting with a computer, they had to write code. Victor, who is something of an idealist, saw this not so much as an opportunity but as a moral failing of programmers at large. His talk was a call to arms.
At the heart of it was a series of demos that tried to show just how primitive the available tools were for various problems—circuit design, computer animation, debugging algorithms—and what better ones might look like. His demos were virtuosic. The one that captured everyone’s imagination was, ironically enough, the one that on its face was the most trivial. It showed a split screen with a game that looked like Mario on one side and the code that controlled it on the other. As Victor changed the code, things in the game world changed: He decreased one number, the strength of gravity, and the Mario character floated; he increased another, the player’s speed, and Mario raced across the screen.
Suppose you wanted to design a level where Mario, jumping and bouncing off of a turtle, would just make it into a small passageway. Game programmers were used to solving this kind of problem in two stages: First, you stared at your code—the code controlling how high Mario jumped, how fast he ran, how bouncy the turtle’s back was—and made some changes to it in your text editor, using your imagination to predict what effect they’d have. Then, you’d replay the game to see what actually happened.
Victor wanted something more immediate. “If you have a process in time,” he said, referring to Mario’s path through the level, “and you want to see changes immediately, you have to map time to space.” He hit a button that showed not just where Mario was right now, but where he would be at every moment in the future: a curve of shadow Marios stretching off into the far distance. What’s more, this projected path was reactive: When Victor changed the game’s parameters, now controlled by a quick drag of the mouse, the path’s shape changed. It was like having a god’s-eye view of the game. The whole problem had been reduced to playing with different parameters, as if adjusting levels on a stereo receiver, until you got Mario to thread the needle. With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
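The "map time to space" trick can be sketched in a few lines. This is not Victor's demo code; the physics constants and function names are invented for illustration. The point is that the character's entire future path is just a function of the game's parameters, so changing a parameter can redraw the whole arc at once.

```python
def jump_path(gravity: float, jump_velocity: float, steps: int = 8):
    """Project the character's full jump ahead of time: one (x, y) point
    per future frame, so the whole trajectory can be drawn at once."""
    x, y, vy = 0.0, 0.0, jump_velocity
    path = []
    for _ in range(steps):
        path.append((round(x, 1), round(y, 1)))
        x += 1.0          # constant horizontal speed
        vy -= gravity     # gravity pulls the jump back down
        y = max(0.0, y + vy)
    return path

print(jump_path(gravity=1.0, jump_velocity=3.0))
print(jump_path(gravity=0.5, jump_velocity=3.0))  # weaker gravity: a higher, floatier arc
```

Hook `gravity` up to a slider instead of a number in a text file, recompute `jump_path` on every drag, and you have the curve of shadow Marios: the parameter space becomes something you explore by feel.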
When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”
When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns … [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.
Chris Granger, who had worked at Microsoft on Visual Studio, was likewise inspired. Within days of seeing a video of Victor’s talk, in January of 2012, he built a prototype of a new programming environment. Its key capability was that it would give you instant feedback on your program’s behavior. You’d see what your system was doing right next to the code that controlled it. It was like taking off a blindfold. Granger called the project “Light Table.”
In April of 2012, he sought funding for Light Table on Kickstarter. In programming circles, it was a sensation. Within a month, the project raised more than $200,000. The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.
But seeing the impact that his talk ended up having, Bret Victor was disillusioned. “A lot of those things seemed like misinterpretations of what I was saying,” he said later. He knew something was wrong when people began to invite him to conferences to talk about programming tools. “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.
In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface. Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.
Of course, to do that, you’d have to get programmers themselves on board. In a recent essay, Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.” Exciting work of this sort, in particular a class of tools for “model-based design,” was already underway, he wrote, and had been for years, but most programmers knew nothing about it.
“If you really look hard at all the industrial goods that you’ve got out there, that you’re using, that companies are using, the only non-industrial stuff that you have inside this is the code.” Eric Bantégnie is the founder of Esterel Technologies (now owned by ANSYS), a French company that makes tools for building safety-critical software. Like Victor, Bantégnie doesn’t think engineers should develop large systems by typing millions of lines of code into an IDE. “Nobody would build a car by hand,” he says. “Code is still, in many places, handicraft. When you’re crafting manually 10,000 lines of code, that’s okay. But you have systems that have 30 million lines of code, like an Airbus, or 100 million lines of code, like your Tesla or high-end cars—that’s becoming very, very complicated.”
Bantégnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules. If you were making the control system for an elevator, for instance, one rule might be that when the door is open, and someone presses the button for the lobby, you should close the door and start moving the car. In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
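The elevator model described above can be sketched as data plus a generic interpreter. The state and event names are taken from the text; the rest is an invented, minimal stand-in for what a model-based design tool generates, not any vendor's actual output.

```python
# The "model": every legal transition, declared as data rather than
# buried in hand-written control flow.
MODEL = {
    # (current state, event) -> next state
    ("door_open",   "press_lobby"): "door_closed",
    ("door_closed", "start"):       "moving",
    ("moving",      "arrive"):      "door_closed",
    ("door_closed", "open"):        "door_open",
}

def step(state: str, event: str) -> str:
    """Follow the model; an undefined (state, event) pair changes nothing."""
    return MODEL.get((state, event), state)

state = "door_open"
for event in ["press_lobby", "start"]:
    state = step(state, event)
print(state)  # moving

# The model makes illegal paths visible: there is no edge from
# "door_open" to "moving", so the car can never move with the door open.
print(step("door_open", "start"))  # door_open -- no transition exists
```

Because the rules live in one table rather than scattered through the code, "just by looking" is literal: every possible behavior is a row you can read.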
It’s not quite Photoshop. The beauty of Photoshop, of course, is that the picture you’re manipulating on the screen is the final product. In model-based design, by contrast, the picture on your screen is more like a blueprint. Still, making software this way is qualitatively different than traditional programming. In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.
“Typically the main problem with software coding—and I’m a coder myself,” Bantégnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself. Too much is lost going from one to the other. The idea behind model-based design is to close the gap. The very same model is used both by system designers to express what they want and by the computer to automatically generate code.
Of course, for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to. “We have benefited from fortunately 20 years of initial background work,” Bantégnie says.
Esterel Technologies, which was acquired by ANSYS in 2012, grew out of research begun in the 1980s by the French nuclear and aerospace industries, who worried that as safety-critical code ballooned in complexity, it was getting harder and harder to keep it free of bugs. “I started in 1988,” says Emmanuel Ledinot, the Head of Scientific Studies for Dassault Aviation, a French manufacturer of fighter jets and business aircraft. “At the time, I was working on military avionics systems. And the people in charge of integrating the systems, and debugging them, had noticed that the number of bugs was increasing.” The 80s had seen a surge in the number of onboard computers on planes. Instead of a single flight computer, there were now dozens, each responsible for highly specialized tasks related to control, navigation, and communications. Coordinating these systems to fly the plane as data poured in from sensors and as pilots entered commands required a symphony of perfectly timed reactions. “The handling of these hundreds of and even thousands of possible events in the right order, at the right time,” Ledinot says, “was diagnosed as the main cause of the bug inflation.”
Ledinot decided that writing such convoluted code by hand was no longer sustainable. It was too hard to understand what it was doing, and almost impossible to verify that it would work correctly. He went looking for something new. “You must understand that to change tools is extremely expensive in a process like this,” he said in a talk. “You don’t take this type of decision unless your back is against the wall.”
He began collaborating with Gerard Berry, a computer scientist at INRIA, the French computing-research center, on a tool called Esterel—a portmanteau of the French for “real-time.” The idea behind Esterel was that while traditional programming languages might be good for describing simple procedures that happened in a predetermined order—like a recipe—if you tried to use them in systems where lots of events could happen at nearly any time, in nearly any order—like in the cockpit of a plane—you inevitably got a mess. And a mess in control software was dangerous. In a paper, Berry went as far as to predict that “low-level programming techniques will not remain acceptable for large safety-critical programs, since they make behavior understanding and analysis almost impracticable.”
Esterel was designed to make the computer handle this complexity for you. That was the promise of the model-based approach: Instead of writing normal programming code, you created a model of the system’s behavior—in this case, a model focused on how individual events should be handled, how to prioritize events, which events depended on which others, and so on. The model becomes the detailed blueprint that the computer would use to do the actual programming.
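The flavor of this approach can be suggested with a toy sketch. The API below is invented for illustration and is nothing like Esterel's actual language; it only shows the shift in mindset: instead of hand-coding the order in which events get serviced, you declare priorities in a model and let a generic scheduler enforce them.

```python
# The "model": declared priorities for cockpit events (names invented).
PRIORITY = {"engine_fire": 0, "stall_warning": 1, "altitude_update": 2}

def service(pending_events):
    """Handle whatever has arrived, always in declared-priority order,
    no matter what order the events came in."""
    return sorted(pending_events, key=PRIORITY.__getitem__)

# Events arrive in an arbitrary order; the model fixes the handling order.
print(service(["altitude_update", "engine_fire", "stall_warning"]))
# ['engine_fire', 'stall_warning', 'altitude_update']
```

With hand-written event handlers, that ordering guarantee has to be re-established, by hand, at every point where events are checked; in the model, it is stated once.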
Ledinot and Berry worked for nearly 10 years to get Esterel to the point where it could be used in production. “It was in 2002 that we had the first operational software-modeling environment with automatic code generation,” Ledinot told me, “and the first embedded module in Rafale, the combat aircraft.” Today, the ANSYS SCADE product family (for “safety-critical application development environment”) is used to generate code by companies in the aerospace and defense industries, in nuclear power plants, transit systems, heavy industry, and medical devices. “My initial dream was to have SCADE-generated code in every plane in the world,” Bantégnie, the founder of Esterel Technologies, says, “and we’re not very far off from that objective.” Nearly all safety-critical code on the Airbus A380, including the system controlling the plane’s flight surfaces, was generated with ANSYS SCADE products.
Part of the draw for customers, especially in aviation, is that while it is possible to build highly reliable software by hand, it can be a Herculean effort. Ravi Shivappa, the VP of group software engineering at Meggitt PLC, an ANSYS customer which builds components for airplanes, like pneumatic fire detectors for engines, explains that traditional projects begin with a massive requirements document in English, which specifies everything the software should do. (A requirement might be something like, “When the pressure in this section rises above a threshold, open the safety valve, unless the manual-override switch is turned on.”) The problem with describing the requirements this way is that when you implement them in code, you have to painstakingly check that each one is satisfied. And when the customer changes the requirements, the code has to be changed, too, and tested extensively to make sure that nothing else was broken in the process.
The cost is compounded by exacting regulatory standards. The FAA is fanatical about software safety. The agency mandates that every requirement for a piece of safety-critical software be traceable to the lines of code that implement it, and vice versa. So every time a line of code changes, it must be retraced to the corresponding requirement in the design document, and you must be able to demonstrate that the code actually satisfies the requirement. The idea is that if something goes wrong, you’re able to figure out why; the practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.
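The bookkeeping Shivappa describes can be sketched as a simple bidirectional check. The data structures and names below are invented for illustration (real traceability tools are far more elaborate): every requirement must map to code, and every annotated piece of code must map back to a requirement.

```python
# Toy traceability records (invented): requirement IDs, and code locations
# annotated with the requirement they implement.
requirements = {"REQ-12": "open safety valve above pressure threshold"}
code_annotations = {"valve.c:88": "REQ-12", "valve.c:91": "REQ-12"}

def trace_gaps(reqs, annotations):
    """Return (requirements with no implementing code,
               code locations citing no known requirement)."""
    implemented = set(annotations.values())
    orphan_reqs = set(reqs) - implemented
    orphan_code = {loc for loc, r in annotations.items() if r not in reqs}
    return orphan_reqs, orphan_code

print(trace_gaps(requirements, code_annotations))  # (set(), set()) -- fully traced
```

Every code change means re-running a check like this and re-justifying the links, which is why the documentation, not the code, dominates the schedule.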
As Bantégnie explains, the beauty of having a computer turn your requirements into code, rather than a human, is that you can be sure—in fact you can mathematically prove—that the generated code actually satisfies those requirements. Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”
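Why a requirement on a model can be proved rather than merely tested is worth sketching. The checker below is illustrative only (it is not ANSYS SCADE's machinery), but it shows the core idea: a finite model has finitely many transitions, so a requirement can be checked against every single one.

```python
# An elevator-style model (invented names): every legal transition.
TRANSITIONS = {
    ("door_open",   "close"): "door_closed",
    ("door_closed", "start"): "moving",
    ("moving",      "stop"):  "door_closed",
    ("door_closed", "open"):  "door_open",
}

def verify(transitions):
    """Requirement: the car may only start moving from 'door_closed'.
    Because the model is finite, this check is exhaustive -- a proof
    over every transition, not a test of some of them."""
    return all(state == "door_closed"
               for (state, _), nxt in transitions.items() if nxt == "moving")

print(verify(TRANSITIONS))  # True

# Introduce a design error and the checker catches it immediately:
bad = dict(TRANSITIONS)
bad[("door_open", "start")] = "moving"
print(verify(bad))  # False
```

A human reviewing generated C code could never make that claim; a machine reviewing the model it generated the code from can.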
Still, most software, even in the safety-obsessed world of aviation, is made the old-fashioned way, with engineers writing their requirements in prose and programmers coding them up in a programming language like C. As Bret Victor made clear in his essay, model-based design is relatively unusual. “A lot of people in the FAA think code generation is magic, and hence call for greater scrutiny,” Shivappa told me.
Most programmers feel the same way. They like code. At least they understand it. Tools that write your code for you and verify its correctness using the mathematics of “finite-state machines” and “recurrent systems” sound esoteric and hard to use, if not just too good to be true.
It is a pattern that has played itself out before. Whenever programming has taken a step away from the writing of literal ones and zeros, the loudest objections have come from programmers. Margaret Hamilton, a celebrated software engineer on the Apollo missions—in fact the coiner of the phrase “software engineering”—told me that during her first year at the Draper lab at MIT, in 1964, she remembers a meeting where one faction was fighting the other about transitioning away from “some very low machine language,” as close to ones and zeros as you could get, to “assembly language.” “The people at the lowest level were fighting to keep it. And the arguments were so similar: ‘Well how do we know assembly language is going to do it right?’”
“Guys on one side, their faces got red, and they started screaming,” she said. She said she was “amazed how emotional they got.”
Emmanuel Ledinot, of Dassault Aviation, pointed out that when assembly language was itself phased out in favor of the programming languages still popular today, like C, it was the assembly programmers who were skeptical this time. No wonder, he said, that “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”
The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”
Which sounds almost like a joke, but for proponents of the model-based approach, it’s an important point: We already know how to make complex software reliable, but in so many places, we’re choosing not to. Why?
In 2011, Chris Newcombe had been working at Amazon for almost seven years, and had risen to be a principal engineer. He had worked on some of the company’s most critical systems, including the retail-product catalog and the infrastructure that managed every Kindle device in the world. He was a leader on the highly prized Amazon Web Services team, which maintains cloud servers for some of the web’s biggest properties, like Netflix, Pinterest, and Reddit. Before Amazon, he’d helped build the backbone of Steam, the world’s largest online-gaming service. He is one of those engineers whose work quietly keeps the internet running. The products he’d worked on were considered massive successes. But all he could think about was that buried deep in the designs of those systems were disasters waiting to happen.
“Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.
This is why he was so intrigued when, in the appendix of a paper he’d been reading, he came across a strange mixture of math and code—or what looked like code—that described an algorithm in something called “TLA+.” The surprising part was that this description was said to be mathematically precise: An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.
TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy (say, if you were programming an ATM, a constraint might be that you can never withdraw the same money twice from your checking account). TLA+ then exhaustively checks that your logic does, in fact, satisfy those constraints. If not, it will show you exactly how they could be violated.
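TLA+ has its own notation, but the core move can be sketched in ordinary Python (the ATM model, amounts, and function names below are invented for illustration): write down the rules of the system, enumerate every possible sequence of events, and check the constraint in every resulting state.

```python
# Hypothetical sketch of exhaustive checking in the spirit of TLA+:
# a toy ATM model in which a withdrawal succeeds only when funds are
# sufficient. We try every possible sequence of withdrawal attempts
# and record every balance the account can ever reach.
from itertools import product

def reachable_balances(start=100, steps=3, amounts=(30, 80)):
    """Return every balance reachable after `steps` withdrawal attempts."""
    balances = set()
    for sequence in product(amounts, repeat=steps):
        balance = start
        for amount in sequence:
            if balance >= amount:      # the guard: no overdraft allowed
                balance -= amount
            balances.add(balance)
    return balances

# Constraint: the same money is never paid out twice, i.e. the balance
# can never go negative in any reachable state.
violations = sorted(b for b in reachable_balances() if b < 0)
```

If the guard `balance >= amount` were accidentally dropped, the check would immediately surface a negative balance, showing exactly how the constraint can be violated, which is the kind of answer TLA+ gives for real designs.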
The language was invented by Leslie Lamport, a Turing Award–winning computer scientist. With a big white beard and scruffy white hair, and kind eyes behind large glasses, Lamport looks like he might be one of the friendlier professors at the American Hogwarts. Now at Microsoft Research, he is known as one of the pioneers of the theory of “distributed systems,” which describes any computer system made of multiple parts that communicate with each other. Lamport’s work laid the foundation for many of the systems that power the modern web.
For Lamport, a major reason today’s software is so full of bugs is that programmers jump straight into writing code. “Architects draw detailed plans before a brick is laid or a nail is hammered,” he wrote in an article. “But few programmers write even a rough sketch of what their programs will do before they start coding.” Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,” he says. Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.
Newcombe and his colleagues at Amazon would go on to use TLA+ to find subtle, critical bugs in major systems, including bugs in the core algorithms behind S3, regarded as perhaps the most reliable storage engine in the world. It is now used widely at the company. In the tiny universe of people who had ever used TLA+, their success was not so unusual. An intern at Microsoft used TLA+ to catch a bug that could have caused every Xbox in the world to crash after four hours of use. Engineers at the European Space Agency used it to rewrite, with 10 times less code, the operating system of a probe that was the first to ever land softly on a comet. Intel uses it regularly to verify its chips.
But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols. For Lamport, this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”
Lamport sees this failure to think mathematically about what they’re doing as the problem of modern software development in a nutshell: The stakes keep rising, but programmers aren’t stepping up—they haven’t developed the chops required to handle increasingly complex problems. “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”
Newcombe isn’t so sure that it’s the programmer who is to blame. “I’ve heard from Leslie that he thinks programmers are afraid of math. I’ve found that programmers aren’t aware—or don’t believe—that math can help them handle complexity. Complexity is the biggest challenge for programmers.” The real problem in getting people to use TLA+, he said, was convincing them it wouldn’t be a waste of their time. Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.
Most programmers who took computer science in college have briefly encountered formal methods. Usually they’re demonstrated on something trivial, like a program that counts up from zero; the student’s job is to mathematically prove that the program does, in fact, count up from zero.
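For what it is worth, that classroom exercise can be written down in a few lines. Here is a hypothetical Python version (the function and names are invented for illustration) in which the property "the program counts up from zero" is stated as an invariant and checked at every step of the loop.

```python
# Hypothetical sketch of the classroom exercise: a program that counts
# up from zero, with its correctness property asserted on every pass.

def counter(n):
    """Count up from zero to n, checking the loop invariant each time."""
    value = 0
    for i in range(n):
        assert value == i  # invariant: after i increments, value equals i
        value += 1
    return value

result = counter(1000)  # the invariant is checked 1,000 times
```

The assertion only checks the runs you actually execute; a mathematical proof covers every n at once, which is exactly the gap formal methods exist to close.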
“I needed to change people’s perceptions on what formal methods were,” Newcombe told me. Even Lamport himself didn’t seem to fully grasp this point: Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.
For one thing, he said that when he was introducing colleagues at Amazon to TLA+ he would avoid telling them what it stood for, because he was afraid the name made it seem unnecessarily forbidding: “Temporal Logic of Actions” has exactly the kind of highfalutin ring to it that plays well in academia, but puts off most practicing programmers. He tried also not to use the terms “formal,” “verification,” or “proof,” which reminded programmers of tedious classroom exercises. Instead, he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.
He has since left Amazon for Oracle, where he’s been able to convince his new colleagues to give TLA+ a try. For him, using these tools is now a matter of responsibility. “We need to get better at this,” he said.
“I’m self-taught, been coding since I was nine, so my instincts were to start coding. That was my only—that was my way of thinking: You’d sketch something, try something, you’d organically evolve it.” In his view, this is what many programmers today still do. “They google, and they look on Stack Overflow” (a popular website where programmers answer each other’s technical questions) “and they get snippets of code to solve their tactical concern in this little function, and they glue it together, and iterate.”
“And that’s completely fine until you run smack into a real problem.”
In the summer of 2015, a pair of American security researchers, Charlie Miller and Chris Valasek, convinced that car manufacturers weren’t taking software flaws seriously enough, demonstrated that a 2014 Jeep Cherokee could be remotely controlled by hackers. They took advantage of the fact that the car’s entertainment system, which has a cellular connection (so that, for instance, you can start your car with your iPhone), was connected to more central systems, like the one that controls the windshield wipers, steering, acceleration, and brakes (so that, for instance, you can see guidelines on the rearview screen that respond as you turn the wheel). As proof of their attack, which they developed on nights and weekends, they hacked into Miller’s car while a journalist was driving it on the highway, and made it go haywire; the journalist, who knew what was coming, panicked when they cut the engines, forcing him to a slow crawl on a stretch of road with no shoulder to escape to.
Although they didn’t actually create one, they showed that it was possible to write a clever piece of software, a “vehicle worm,” that would use the onboard computer of a hacked Jeep Cherokee to scan for and hack others; had they wanted to, they could have had simultaneous access to a nationwide fleet of vulnerable cars and SUVs. (There were at least five Fiat Chrysler models affected, including the Jeep Cherokee.) One day they could have told them all to, say, suddenly veer left or cut the engines at high speed.
“We need to think about software differently,” Valasek told me. Car companies have long assembled their final product from parts made by hundreds of different suppliers. But where those parts were once purely mechanical, they now, as often as not, come with millions of lines of code. And while some of this code—for adaptive cruise control, for auto braking and lane assist—has indeed made cars safer (“The safety features on my Jeep have already saved me countless times,” says Miller), it has also created a level of complexity that is entirely new. And it has made possible a new kind of failure.
“There are lots of bugs in cars,” Gérard Berry, the French researcher behind Esterel, said in a talk. “It’s not like avionics—in avionics it’s taken very seriously. And it’s admitted that software is different from mechanics.” The automotive industry is perhaps among those that haven’t yet realized they are actually in the software business.
“We don’t in the automaker industry have a regulator for software safety that knows what it’s doing,” says Michael Barr, the software expert who testified in the Toyota case. NHTSA, he says, “has only limited software expertise. They’ve come at this from a mechanical history.” The same regulatory pressures that have made model-based design and code generation attractive to the aviation industry have been slower to come to car manufacturing. Emmanuel Ledinot, of Dassault Aviation, speculates that there might be economic reasons for the difference, too. Automakers simply can’t afford to increase the price of a component by even a few cents, since it is multiplied so many millionfold; the computers embedded in cars therefore have to be slimmed down to the bare minimum, with little room to run code that hasn’t been hand-tuned to be as lean as possible. “Introducing model-based software development was, I think, for the last decade, too costly for them.”
One suspects the incentives are changing. “I think the autonomous car might push them,” Ledinot told me—“ISO 26262 and the autonomous car might slowly push them to adopt this kind of approach on critical parts.” (ISO 26262 is a safety standard for cars published in 2011.) Barr said much the same thing: In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.
“Computing is fundamentally invisible,” Gérard Berry said in his talk. “When your tires are flat, you look at your tires, they are flat. When your software is broken, you look at your software, you see nothing.”
“So that’s a big problem.”
It seems that there are sound reasons to believe this may be the case
I believe that you will find the following words in my blog to be concerning. I have elected to combine two media articles into this blog presentation because I believe that they are mutually complementary. There are also overlaps of information between the two. The first item is a news story derived from the BBC, and the second is derived from the American Thinker news journal.
It is this latter article that I feel you will find most confrontational and disturbing. It talks extensively about water pollution emanating from estrogenic compounds in water supplies, from industry, agriculture and artificial birth control chemicals flowing into the public water supply system. The article focuses heavily on the dangers of birth control chemicals.
I have emboldened text that I feel may most interest you. I acknowledge that this important information has been derived from secondary sources, and furthermore there may be a covert political agenda in the American Thinker article. I will leave it to you to make up your own mind about this matter.
Article 1: [From the BBC]
Sperm count drop ‘could make humans extinct’
By Pallab Ghosh Science correspondent, BBC News
25 July 2017
Humans could become extinct if sperm counts in men continue to fall at current rates, a doctor has warned.
Researchers assessing the results of nearly 200 studies say sperm counts among men from North America, Europe, Australia and New Zealand seem to have halved in less than 40 years.
Some experts are sceptical of the Human Reproduction Update findings.
But lead researcher Dr Hagai Levine said he was “very worried” about what might happen in the future.
The assessment, one of the largest ever undertaken, brings together the results of 185 studies between 1973 and 2011.
Dr Levine, an epidemiologist, told the BBC that if the trend continued humans would become extinct.
Decline rate ‘increasing’
“If we will not change the ways that we are living and the environment and the chemicals that we are exposed to, I am very worried about what will happen in the future,” he said.
“Eventually we may have a problem, and with reproduction in general, and it may be the extinction of the human species.”
Scientists not involved in the study have praised the quality of the research but say that it may be premature to come to such a conclusion.
Dr Levine, from the Hebrew University of Jerusalem, found a 52.4% decline in sperm concentration, and a 59.3% decline in total sperm count in men from North America, Europe, Australia and New Zealand.
The study also indicates the rate of decline among men living in these countries is continuing and possibly even increasing.
In contrast, no significant decline was seen in South America, Asia and Africa, but the researchers point out that far fewer studies have been conducted on these continents. However, Dr Levine is concerned that eventually sperm counts could fall in these places too.
Many previous studies have indicated similar sharp declines in sperm count in developed economies, but skeptics say that a large proportion of them have been flawed.
Some have investigated a relatively small number of men, or included only men who attend fertility clinics and are, in any case, more likely to have low sperm counts.
There is also concern that studies that claim to show a decline in sperm counts are more likely to get published in scientific journals than those that do not.
Another difficulty is that early methods of counting sperm may have overestimated the true count.
Taken together these factors may have created a false view of falling sperm counts.
But the researchers claim to have accounted for some of these deficiencies, leaving some doubters, such as Prof Allan Pacey of Sheffield University, less skeptical.
He said: “I’ve never been particularly convinced by the many studies published so far claiming that human sperm counts have declined in the recent past.”
“However, the study today by Dr Levine and his colleagues deals head-on with many of the deficiencies of previous studies.”
But Prof Pacey believes that although the new study has reduced the possibility of errors it does not entirely remove them. So, he says, the results should be treated with caution.
“The debate has not yet been resolved and there is clearly much work still to be done.
“However, the paper does represent a step forward in the clarity of the data which might ultimately allow us to define better studies to examine this issue.”
There is no clear evidence for the reason for this apparent decrease. But it has been linked with exposure to chemicals used in pesticides and plastics, obesity, smoking, stress, diet, and even watching too much TV.
Dr Levine says that there is an urgent need to find out why sperm counts are decreasing and to find ways of reversing the trend.
“We must take action – for example, better regulation of man-made chemicals – and we must continue our efforts on tackling smoking and obesity.”
Article 2: [From American Thinker]
July 27, 2017
Low sperm counts? Report fails to mention birth control in water supplies
By Monica Showalter
A study has found that male sperm counts have plunged since 1973, citing the evidence found in a large number of studies. Scientists say a continuation of this trend could mean the human race will go extinct.
A team of scientists is sounding the alarm about declining sperm counts among men in the Western world.
As Hagai Levine, the lead author of a recently published study, told the BBC, “If we will not change the ways that we are living and the environment and the chemicals that we are exposed to, I am very worried about what will happen in the future.”
He added, “Eventually we may have a problem, and with reproduction in general, and it may be the extinction of the human species.”
Sperm counts have fallen an average of 1.2 percent each year, and the compounded effect of that has resulted in a more than 50% drop in sperm counts today. CBS News reports that it follows a 1992 study that shows the exact same 50% decline, so nothing has changed in the rate of decline; it remains steady.
Sperm concentration decreased an average 52 percent between 1973 and 2011, while total sperm count declined by 59 percent during that period, researchers concluded after combining data from 185 studies. The research involved nearly 43,000 men in all.
“We found that sperm counts and concentrations have declined significantly and are continuing to decline in men from Western countries,” said senior researcher Shanna Swan.
The one factor the report doesn’t mention, but probably should, is the credible reports of artificial birth control getting into the water supply.
This is not the Catholic Church’s argument against contraception going on here – the Catholic Church opposes artificial contraception because it interferes with the natural male-female relationship in marriage and discourages its use. This is something entirely different: whether one person’s right to “control her own body” entitles her to damage the reproductive system of another person’s body. Ultimately, it is a question of whether a man has a right to control his own body, too. This is deep libertarian territory.
The Competitive Enterprise Institute’s Iain Murray has done significant research on the effects of birth control pills in the water supply, pointing out that the hormones they release into the water supply, which can’t be filtered out, are creating “intersex” characteristics and sterility in the fish supply. Fish exhibit sexual characteristics of both sexes due to estrogen contamination and cannot reproduce. Scientific American has noted that despite claims that the amounts present are small, their presence has harmed wildlife in the water supply. They might be canaries in the coal mine for us.
Writing in 2008, Murray noted:
As I demonstrate in The Really Inconvenient Truths, by any standard typically used by environmentalists, the pill is a pollutant. It does the same thing, just worse, as other chemicals they call pollution. But liberals have gone to extraordinary lengths in order to stop consideration of contraceptive estrogen as a pollutant.
When Bill Clinton’s Environmental Protection Agency launched its program to screen environmental estrogens (a program required under the Food Quality Protection Act), the committee postponed considering impacts from contraceptives. Instead, it has decided to screen and test only “pesticide chemicals, commercial chemicals, and environmental contaminants.” When and if it considers the impacts from oral contraceptives, the Agency says that its consideration will be limited because pharmaceutical regulation is a Food and Drug Administration concern.
As a result, the EPA’s program will focus all energies on the smallest-possible part of endocrine exposure in the environment and the lowest-risk area.
The U.S. Geological Survey has found problems, too.
A recent report from the U.S. Geological Survey (USGS) found that birth-control hormones excreted by women, flushed into waterways and eventually into drinking water, can also impact fish fertility up to three generations after exposure – raising questions about their effects on humans, who are consuming the drugs without even knowing it in each glass of water they drink.
The survey, published in March in the journal Scientific Reports, looked at the impact of the synthetic hormone 17α-ethinylestradiol (EE2), an ingredient of most contraceptive pills, in the water of Japanese medaka fish during the first week of their development.
While the exposed fish and their immediate offspring appeared unaffected, the second generation of fish struggled to fertilize eggs – with a 30% reduction in fertilization rates – and their embryos were less likely to survive. Even the third generation of fish had 20% impaired fertility and survival rates, though they were never directly exposed to the hormone.
The article states that there have been problems in mammals, too.
The Vatican, too, has spoken out about the environmental damage of artificial birth control going unfiltered into the water supply, specifically linking it to male infertility. Agence France-Presse reports:
The contraceptive pill is polluting the environment and is in part responsible for male infertility, a report in the Vatican newspaper L’Osservatore Romano said Saturday.
The pill “has for some years had devastating effects on the environment by releasing tonnes of hormones into nature” through female urine, said Pedro Jose Maria Simon Castellvi, president of the International Federation of Catholic Medical Associations, in the report.
“We have sufficient evidence to state that a non-negligible cause of male infertility in the West is the environmental pollution caused by the pill,” he said, without elaborating further.
“We are faced with a clear anti-environmental effect which demands more explanation on the part of the manufacturers,” added Castellvi.
The blame cannot be laid on individuals who are attempting to do something they believe is responsible and useful and who have no intent to harm others. Nobody here is calling for the pill’s prohibition in a free society, where people of all religions should be free to make their own choices.
There should be reason, however, to look into whether birth control is affecting the water supply and contributing to this species-threatening low sperm count matter. The science does show that compounds excreted by users are impossible to filter from the water supply, and there are credible reports as to this affecting male fertility.
I would add that the span of years coincides with the rise of birth control pills, and it also coincides with the nations that use it.
A pro-contraception trade group, the Association of Reproductive Health Professionals, has admitted in a long editorial that there could be a problem, even as it tries to exculpate its industry, citing other possibilities.
The effect of estrogenic compounds in the water supply from industry, agriculture, and other sources raises concerns about human health and deserves scrutiny.
But all we see blamed in this and other editorials are “pesticide chemicals, commercial chemicals, and environmental contaminants,” as National Review’s article notes.
Seriously, why? Why not investigate everything and, if there is a problem found, find new ways to filter out the pollutants from the water supply? For all the global warmers’ alarmed claims about the threat to the species, here is a real threat, it’s moving fast, and nothing effective is being done about it.
Interviews, documentaries, press reports, feature films and other material relating to the sinking of the Titanic
Depicts the Titanic leaving Belfast, a press ship going out to meet the rescue ship Carpathia, interviews with the survivors, the alleged iceberg itself, crowds of people outside the White Star Line office in New York seeking information about the survivors, Marconi, the inventor of the radio apparatus on the Titanic, and the like. I think that my readers may be interested in the photographs of the surviving crew, which you will find between the 3:20 and 4:10 marks. Note especially how lightheartedly the young men in life jackets were behaving after their rescue.
New documentary sheds further light on the sinking of the Titanic
I think that you will find this 2016 National Geographic documentary to be very informative and interesting
The alleged oldest recorded movie of the Titanic
This movie features Captain Edward J. Smith walking around parts of the deck of the Titanic
Animated real time simulation of the Titanic sinking
I found this video to be disturbingly graphic
The 1958 movie relating to the sinking of the Titanic
This movie is entitled “A Night to Remember” and features the popular English actor Kenneth More who plays the role of Second Officer Charles Herbert Lightoller
How does an unsinkable ship sink on its maiden voyage?
Inside the Titanic. A full length movie about the event
Full length Nazi propaganda film about the Titanic
It is in German without subtitles and features exceptional photography
I think what is most interesting about this 1957 interview is the informal manner of the interviewees and the fact that the Titanic sinking was still fresh in their collective minds. In my opinion it is this that makes this Titanic sinking story far more personalized than those recorded many years later, when the people were much older. The quality of the film is not good. I feel that you should take special interest in the section that talks about the desperate attempts made by the wireless operator on the Titanic, as he frantically tried to get the rescue ship Carpathia to come to the Titanic’s assistance. The radio operator of the Carpathia was one of the interviewees.
The Last Seven Titanic Survivors Tell Their Story
This is a 1997 remaster of the original interviews
The last British survivor of the Titanic sinking interviewed
The last American survivor of the Titanic sinking interviewed
This video has historical background information as well
In 1898 did Morgan Robertson correctly predict the circumstances in which the Titanic sank?
The answer to this question seems to depend on who you ask
“Although the novel was written before the RMS Titanic was even conceptualized, there are some uncanny similarities between both the fictional and real-life versions. Like the Titan, the fictional ship sank in April in the North Atlantic, and there were not enough lifeboats for all the passengers. There are also similarities between the size (800 ft (244 m) long for Titan versus 882 ft 9 in (269 m) long for the Titanic), speed (25 knots for Titan, 22.5 knots for Titanic) and life-saving equipment.”
According to some eyewitnesses, the ship Californian was visually only nine miles away rather than nineteen miles away
One survivor said that she saw a ship standing near the Titanic as it was sinking, and it probably was the Californian
It seems that the eminent physicist David Bohm was profoundly affected by his association with both Albert Einstein and the internationally respected philosopher Jiddu Krishnamurti
I feel that this is interesting. In this short thirteen-minute video presentation Bohm talks about his implicate order theory in physics as it relates to all things, including both the universe and wider reality. You will notice that the Dalai Lama was present at different times during this discussion. I have not included this video in my other blog post entitled “Jiddu Krishnamurti and David Bohm talk about life and philosophy” because I believe that this video is more to the point and easier to understand. Readers should note that Bohm died in 1992.