New science archive

Older science articles that I would normally send to the trash, but which may still be of interest to you

Readers should note that I no longer necessarily identify with some of the content of this older material. My views about science, and my ability to express those views, have changed over time. Also note that most of the hyperlinks are likely no longer functional. However, I still believe that you will find within this blog a considerable amount of scientific material that you have never come across before. Have a great read!

 

1] What is the Planck length?

I refer to it as the Planck line

Throughout my science writings I use the expression Planck line. I have done this because I feel it is an easier concept for lay readers to identify with than its proper physics name, the Planck length. The definition of the Planck length below is a very elementary one; a more formal definition can be found in Wikipedia. What is the Planck length? It can be broadly seen as follows:

Quote:

“Physicists primarily use the Planck length to talk about things that are ridiculously tiny. Specifically; too tiny to matter. By the time you get to (anywhere near) the Planck length it stops making much sense to talk about the difference between two points in any reasonable situation. Basically, because of the uncertainty principle, there’s no (physically relevant) difference between the positions of things separated by small enough distances, and the Planck length certainly qualifies. Nothing fundamentally changes at the Planck scale, and there’s nothing special about the physics there, it’s just that there’s no point trying to deal with things that small. Part of why nobody bothers is that the smallest particle, the electron, is about 10^20 times larger (that’s the difference between a single hair and a large galaxy). Rather than being a specific scale, the Planck scale is just an easy to remember line-in-the-sand (the words “Planck length” are easier to remember than a number).”

http://www.askamathematician.com/2013/05/q-what-is-the-planck-length-what-is-its-relevance/
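For readers who want the number itself, the Planck length is defined from three fundamental constants. The short sketch below is my own illustration (not part of the quoted article): it computes the Planck length from CODATA values and compares it with the classical electron radius, one common notion of “electron size”, which is where a ratio of roughly 10^20 comes from.

```python
import math

# Illustration only: compute the Planck length from CODATA constants and
# compare it with the classical electron radius (one common notion of
# "electron size"), which gives the rough 10^20 ratio quoted above.

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)
electron_radius = 2.8179403262e-15          # classical electron radius, m

print(f"Planck length ~ {planck_length:.3e} m")                   # ~1.616e-35 m
print(f"Size ratio    ~ {electron_radius / planck_length:.1e}")   # ~1.7e+20
```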

The Planck length has a more formal meaning as well. I have copied and edited the following quotation to help you better understand what I mean here:

Importance of Planck

Quote:

“… The idea of a fifth dimension is not new (our fourth dimension including time, plus one other)” “… extra dimensions needn’t be curled up as small as the Planck scale, their effects could be felt by particles at lower energy” “… unification happened when the forces were still weak enough to be handled by conventional mathematical techniques” “… researchers were amazed because unification at such a low energy was supposed to be impossible” “… Fortunately, a fifth dimension comes to the rescue” “… The implications of being able to observe events on the GUT (grand unified theory), string and Planck scales are truly mind boggling. We would for the first time be able to see strings, the ultimate foundation stones of reality. And with the Planck scale lowered, experimental tests of quantum gravity-the long sought unification of Einstein’s theory of gravity with quantum theory-might just be around the corner” “… For physics however, the consequences are huge” “… Suddenly, people are seeing extra dimensions as not just a theoretical theory but as every day things whose consequences we could actually measure” “… If (unification) occurs at lower energy, it would change everything, including our picture of evolution of the Universe from the big bang” “… Even simpler laws of physics would change …” “… the discovery (of a timeless fourth dimension) is simply another vital piece of the cosmic jigsaw”

Source:
New Scientist, issue 2157, page 28

You will notice that this latter Planck article also talks about the possible discovery of other dimensions; I have noted in particular the suggestion of a timeless fourth dimension within this process. I argue that the fourth dimension can be located at the Planck length, and this is why I wrote a major blog entitled “Is the universe floating in a fourth dimension”. (1st August 2017: this blog is now being amended.)

The following SoundCloud presentation features Arthur C Clarke talking about the Planck length. Clarke also introduces another audio extract therein relating to the same subject, spoken by Stephen Hawking.

 

2] The fallacy of the notion of pseudoscience

When is pseudoscience not pseudoscience?

My answer to this question is simple: I believe that if the intent of the believers in a given hypothesis is honorable and peer-accountable, it is not pseudoscience. I believe that decrying pseudoscience without rational debate is not credible science in itself. Because this topic is a contentious one in the science community, I strongly urge my readers to peruse a blog entitled Is the scientific method living up to its own expectations?

By way of example, it is common practice for persons who dare to think about science from a philosophical perspective to be seriously maligned. I point out that metaphysical science, like pseudoscience, is often dismissed as crank science. Within this webpage I quote Albert Einstein as saying that throughout his life metaphysics was a key player in conventional physics; furthermore, if one investigates Wikipedia one will find that metaphysics is a legitimate part of cosmology. The article quoted below sets out to demonstrate that pseudoscience is a very ambiguous label that often unfairly denigrates philosophy of legitimate scientific value. The author gives sound reasons for drawing these conclusions. It seems to me that it is inappropriate to label creative thinkers in the sciences as metaphysical or pseudoscientific cranks. I have also noticed in the literature that mathematics is open to philosophical interpretation, often creatively so. Some contemporary scientists, such as the well-known cosmologist Lawrence Krauss, are also stating that physics should be similarly daring and creative and look beyond its blinkers. I quote Donald E. Simanek on this topic as follows:

What is science? What is Pseudoscience? by Donald E. Simanek

Quote:

“A visitor to my web site asks “What is the definition of pseudoscience?” That’s a fair, but challenging, question. Normally one would expect the practitioners of a discipline to define it, but in this case the practitioners of pseudoscience don’t recognize the validity of the label. The question translates to “How does one distinguish between science and pseudoscience.” Perhaps we should first settle on a definition of science. Even that isn’t an easy task, for it has so many nuances. Whole books have been written on the subject. The scientist might answer “I know pseudoscience when I see it.” But the boundary between science and pseudoscience is murky. Sometimes it’s hard to tell cutting edge scientific speculation from pseudoscience. Let’s recognize two uses of the word ‘science’. First, it is an activity carried out by scientists, with certain raw materials, purpose and methodology. Second, it is the result of this activity: a well-established and well-tested body of facts, laws and models that describe the natural world. Scientists accept that the observations and the results of science must be “objective.” That is they must be repeatable, testable and confirmable by other scientists, even (and especially) sceptical ones. The edifice of law and theory that science builds must be representative of a “shared” perception that can be observed and verified by anyone equipped with good observation skills and appropriate measuring tools. Much of modern science uses language and concepts that go far beyond the directly and immediately observable, but there must always be logical links and experimental operational links between these concepts and things we can observe. As part of the process of crafting scientific models and theories, scientists must brainstorm, innovate and speculate. That’s the creative component of the activity. But they must also maintain a disciplined rigor to ensure that their theories and models fit into a logical and consistent interrelated structure. The final edifice called science allows deduction of predictions about the world, predictions that may be tested against observations and against precise measurements made on nature. Nature is unforgiving of mistakes, and when experiments disagree with the predictions of scientific laws and models, then those laws and models must be modified or scrapped. Scientists’ personal styles, prejudices and even limitations are ever-present realities in the process. But rigorous and sceptical testing of the final result must be sufficiently thorough to weed out any mistakes. It’s fairly easy to distinguish science from pseudoscience on the basis of the final product, the laws and theories. If the results (1) cannot be tested in any way, (2) have been tested and always failed the test, or (3) predict results that are contradictory to well established and well tested science, then we can fairly safely say that we are dealing with pseudoscience. At the level of speculation, it’s not so easy. Consider these two examples.

  1. Is the notion that hypothetical particles (tachyons) may travel faster than light a pseudoscientific idea? Well this speculation was proposed by scientists with perfectly respectable credentials, and other respectable experimenters took time to look for such particles. None have been found. * We no longer expect to find any, but we do not consider the idea to have been “unscientific”.
  2. Is it scientific to hypothesize that one could build a perpetual motion machine that would run forever with power output, but no power input? Most scientists would answer “No.”

* Note: what is stated above is not necessarily correct. “Tachyon fields are an essential tool in QM; nevertheless, negative squared mass fields are commonly referred to as ‘tachyons’, and in fact have come to play an important role in modern physics.” http://en.wikipedia.org/wiki/Tachyon

“What is the essential difference between these two examples? In the first case, the hypothetical tachyons would not violate any known principles of physics. In the second case, a perpetual motion machine would violate the very well-established laws of thermodynamics, and also violate even more basic laws as well, such as Newton’s laws, and conservation of momentum and angular momentum. But are the laws and theories of physics sacred? Of course not; they represent part of the logical structure called “established physics” that is the culmination of our accumulated scientific knowledge. We fully expect that future discoveries and insights will cause us to modify this structure in some ways. This won’t invalidate the whole of physics, for the old laws and theories will continue to work as well as they always did, but the newer structure may have more precision, power, breadth or scope, and may have more appealing conceptual structure. Such continual evolution and modification of physics is gradual and generally changes only a small portion of the vast edifice of physics. Once in a while, a “revolution” of thought occurs causing us to rethink or reformulate a major chunk of physics, but even that doesn’t make the old formulations wrong within their original scope of applicability.” http://www.lhup.edu/~dsimanek/pseudo/scipseud.htm
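As a side note on the tachyon example above (my own addition, not part of Simanek’s article or the Wikipedia entry): the reason a “negative squared mass” gets linked with faster-than-light motion falls straight out of the relativistic energy-momentum relation,

$$E^{2} = p^{2}c^{2} + m^{2}c^{4}, \qquad E = \frac{mc^{2}}{\sqrt{1 - v^{2}/c^{2}}}.$$

If one formally allows $m^{2} < 0$ (an imaginary mass), then $E$ stays real only when $\sqrt{1 - v^{2}/c^{2}}$ is imaginary as well, i.e. only when $v > c$. That is the formal sense in which tachyons would be “faster than light”.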

 

3] A seven point guide to the day to day workings of reality

It is important that you view the contents of this blog in relation to my new blog entitled “The fundamental universe revisited”. This new blog is designed to be the master science reference blog for all the science blog postings on my website.

The following words have been written to help you understand how I look at, interpret and broadly describe reality.

I consider that…

1] All phenomena in reality are either virtual or real. Virtual (metaphysical) means that a phenomenon either has characteristics that have not yet been scientifically discovered, or that it is not materialistically observable or testable. Real phenomena, on the other hand, are phenomena that are scientifically known to exist and are observable, describable and testable.

2] The laws of physics are different throughout whatever the multiverse may be (reality as it may be mathematically described). Furthermore, these laws are forever changing, and this includes the role of nature throughout reality.

3] These same laws of physics exist in both real and virtual phenomena, which continually interact with each other within an all-encompassing atmosphere of clock-timelessness; that is, absolute (Lorentz) time that has no real ending. It can only be considered to have a virtual ending.

4] My words in item 3 suggest that what most people may perceive as reality is virtual and not real.

5] There are numerous common features that permeate reality. They include consciousness, awareness, thought, timelessness, clock-time, energy and waves. All of these features may be either virtual or real, or virtual and real at the same time because of quantum entanglement. Or they may be in a separate concurrent relationship with each other. In layperson’s language, it could be said that reality is an analogical, boundless cauldron of inexplicable activity [information], and as such reality is not, and can never be, materialistically understood or even testable. Reality is simply what it is. It is “something” and this “something” can perhaps be best described as an experience of both real and virtual phenomena.

6] I suggest that real experiences are explicit, and virtual experiences are implicit. This means that you and I have both implicit and explicit characteristics that are entangled with each other, and this entanglement process is the natural essence of all cosmic life. Furthermore I suggest that your life experiences are also virtually entangled with mine. All things are somehow connected to each other.

7] Real and virtual mathematics is the common language of reality and all of its entangled experiences as well.

I suggest that the foregoing words mean that anything whatsoever, regardless of circumstances, location or time is both possible and feasible within my concept of primordial reality.

 

4] Unity science endeavour

Commencement of my unity blog

I have hesitated to commence writing this blog because I have not been sure how to go about it. I know the sorts of things I would like to say and discuss with you. I have most of the material on hand to achieve this objective, but the appropriate methodology still eludes me. My problem relates to the substantial volume of information I have available. I confess that in my mind I already know that I will never complete this retirement project. It will remain a work in progress. I believe that this blog will proceed under its own weight and will reflect whatever circumstances prevail in my life at any given time.

I will make modifications to my presentation as I see fit along the way. I will add new ideas that I had not considered before. I will modify existing ideas. I will remove some ideas and I will add others that you may present to me via emails, or at lectures that I intend to conduct from time to time (in South Australia only). Please be aware, because of my age and other domestic circumstances, this project could cease at short notice.

Notwithstanding my previous words, I believe an appropriate starting point is a lecture that I am currently preparing to deliver to students and other interested parties at a major tertiary institution in South Australia. The topic is:

Is there a virtual matrix pertinent to reality? If so, what are we to make of this?

Abstract

As an intuitive scientist and philosopher, I believe that an imaginative construct can be created which supports my notion that reality can be secularly described and explained. I see reality as being an imaginary virtual state, which I have called “primordial awareness”. I suggest that primordial awareness is like an imaginary backdrop of phenomena that have been, are, or will be. I further suggest that this backdrop is capable of enabling virtual waves that, under certain cosmic conditions, can also become scientifically measurable waves. I believe that primordial awareness is also a state of absolute simultaneity that includes absolute time. Under such conditions nothing is scientifically happening, but from a metaphysical perspective I believe something is. I discuss these things and draw attention to the possibility that we may also be cosmically entwined within this wider process of metaphysical primordial entanglement.

Hence I believe that:

1. It is possible to secularly describe a matrix of reality that may be applicable for the later formation of a holistic (all-inclusive) science model.

2. It is possible to build upon this matrix a description of my concept of reality that makes some sort of sense to both laypersons and science-minded persons alike. However, such a story is heavily influenced by the speculative and descriptive nature of philosophy.

3. Since the earliest days of recorded history, philosophy and science have consistently worked hand in hand, and they remain partners to this day. This happens through intuition and associated descriptive metaphors relating to the (holistic) real-life experience we commonly share, such as consciousness. Orthodox science can cater for such abstract connections via quantum non-locality and entanglement theories. It is in this light that I feel I can demonstrate (via some elements of contemporary scientific thinking and philosophy) that my concept of a matrix of reality may be described and explained. However, I point out that within this construction process you will find that I have been flexible of thought in my interpretation of some elements of the contemporary scientific method. For example, the word “virtual” in physics has a specific scientific meaning, because it relates to quantum particles as well as to any described scientific state or process. So then, I am saying that my concept of such a matrix is a virtual (imaginative) state, but it is not necessarily imaginary in a long-term sense, because it can later be non-locally (metaphysically) described otherwise. Keep in mind that I am talking about an imaginative abstract cosmic environment, where there is no relative time (in such an environment it is known as absolute time) and no rules of materialist (local) physics exist. Thus I have nominated this matrix as having imaginative features that all have the capacity and propensity to ‘do something’. So, when we begin to introduce maths to this process, I suggest that we have something to work with in building a holistic model pertinent to understanding and describing reality and how this reality may work. Remember, this is a theory.

4. In addition to (3) above, I think it is important for my readers to understand that cosmic particles can influence each other without breaching the rules of particle physics — an analogy being perhaps like blowing a spider off a wall without touching it in the process. Furthermore, physicists still do not know the origins of particles in the first place. I have added these words as a section separate from item three because I believe they significantly add to the mysterious nature of the wider cosmos. Anything can happen within such a cosmic environment, and it does. This is why cosmic randomness keeps physicists puzzled and confused. This is what the science of physics is mostly about. Why is the cosmos so crazy? I think my readers should also be aware that the physics community is progressively beginning to realise that at the point of cosmic singularity (the Big Bang) it is likely that the rules of cosmic activity and the gravitational motion of the universe were established. This implies that the inherent rules of the universe were set in a predetermined instant immediately prior to the rapid expansion of the universe, if not prior to the Big Bang itself. You may follow up these words by referring to a story in Quanta Magazine dated February 7th 2017, entitled ‘Experiment Reaffirms Quantum Weirdness’.

5. As I indicated in (3) above, I do not set out to prove anything, and as a scientific philosopher I am not capable of doing so anyway. My words today are simply to encourage you to think about the phenomenon of reality. Perhaps it is possible that I have provided you with enough cues to think about developing your own abstract reality model? Why care whether your model is scientific or not? Why care if someone tells you your ideas are irrelevant pseudoscience? Albert Einstein is quoted as saying words to the effect that his education was the greatest handicap to his scientific life. I think what he meant was that his education hindered his imagination.

6. I am planning to leave what I consider to be the most important part of this blog to the closing stage. This particular part relates to how I believe my concept of a matrix reality and its ‘possibilities to do something’ may impact upon us as we go about our daily lives. This includes the sub-quantum mechanisms (non-local) through which we make decisions and subsequently behave. I believe that this relationship between my concept of a matrix reality, our thought construction processes and subsequent behaviour can be described and understood by quantum psychiatrists and psychologists.

7. In this unity blog you will find that I have drawn heavily upon existing material on my website in order to complement the comprehensiveness of this blog. I feel my readers will appreciate this because it exposes them to much more learning material and commentary than if I had done otherwise.

I trust that this elementary road map relating to my long term objectives with this matrix of reality blog makes some sort of sense to my readers. I hope that you appreciate why I have been compelled to write this blog as if it were driven by the educational discipline of philosophy, rather than materialist (local) physics. I believe it could not have been effectively written in any other way.

As an adjunct to these words I suggest that you visit my blog “Defining and describing holistic-cosmic influences and processes”.

Also see:

http://www.jonathonfreeman.org/what-is-unity-theory/

Note 18th September 2017. I will be returning to move forward with this blog sometime late this year.

 

5] The physics pertinent to my beliefs about our having dual consciousnesses

Random event generators help provide the evidence for my ideas

Some readers will already know that I am a firm believer that we have two quite different consciousnesses. I say that one of these consciousnesses is implicit (metaphysical) and the other is relativistic (scientifically materialistic). You will find my wider ideas on this subject in my blog entitled “The finer aspects of reality – a secular argument”. However, for those of you who have an interest in physics, I have created the SoundCloud audio (available below), which is an extract from a full-length video entitled “The 5th dimension: Mind over matter”. I feel the most relevant section of the video runs from 38 minutes onwards, and it is this section that has been recorded for you. You will notice that the audio mentions that our subconsciousness seems able to subtly influence the movement of materialistic phenomena. I claim that this subconscious ability to influence objects comes from our implicit consciousness; in other words, they are the same thing. I have created another, similar blog concerning the empirical evidence now beginning to emerge from the medical profession in support of the same physics experiment.

 

6] The real and the flippant nature of science

The dangers of confusing the two

I believe that real science is an honest and dedicated pursuit of facts and theories. Such facts and theories should be specifically related to the real environment within which we all live and interact with each other, as well as the wider [holistic] universe.

I see flippant science as being the converse of these words. I see flippant science as being dishonest in both its intent and practice. I see the unnecessarily derogatory statements and behaviour made by flippant scientists against mainstream science and scientists as culturally mischievous.

I see investigative science and practice as having no specific boundaries or limits. Scientific thinking and practices apply to all things, whether to the highest realms of existence [and its meaning] or to the lowest nooks and crannies of existence, such as a single grain of sand on a beach. I also include the various fields of thought that take place prior to our embarking on an act of behaviour in some way.

As an analogy to what I am talking about, consider a five-year-old boy who has accidentally come across the breeding ground of albino ladybird beetles. Can we call such a child a scientist? The child stops to witness the habits and experiences of these ladybirds day after day until such time as he informs his mother of his scientifically unique discovery. I see such a child as a dedicated investigative scientist.

I see no limit to the extent to which we can describe the word ‘scientist’. I see honest scientists as being implicit [holistic] and those that are not inclined this way as being flippant. By the word flippant I mean of an explicit nature and behaviour. This is especially so if they act in a manner that is not conducive to wholesome and constructive scientific practice overall, i.e. holistic reality science.

In summary, I see us all as being scientists. I also see that it is our choice whether we become implicit or explicit scientists. Furthermore, I believe that we should all see our planet and the wider universe as being of both an implicit and an explicit nature. It is only when we seriously acknowledge this intimate [entangled] relationship that I feel we can live and exist at peace with both it and one another – and perhaps this includes reality too.

 

7] Can Einstein’s theory of relativity be written in words of four letters or less?

Yes it can, here’s the proof!

Someone has rather ingeniously taken time out to write the article in the attached PDF file. I think you will find it entertaining, clever and funny.

Albert Einstein’s Theory of Relativity in words of four letters or less

 

8] Arthur C Clarke talks about fractals

I think most scientists would agree that the Mandelbrot set is one of the most important scientific discoveries in history

In my blog titled Why I think we are all order within chaos you will find why I think fractals are the most important feature of both cosmological science and everyday nature as we understand and perceive it. I strongly urge you to view the YouTube video titled The Colors of Infinity that is attached hereto; I think you will find it fascinating. If you have not already perused my Why I think we are all order within chaos blog, I think you should both view the video and peruse that particular site. The information it contains about fractals and, more importantly, the inherent characteristics of the Mandelbrot set itself is enormous. However, if you do not have that luxury, the timeframes 8:10 to 17:36 and 48:41 to 49:45 give a reasonable overview of the video as a whole. This includes the important things I think you should know about Mandelbrot himself, his hypothesis of the Mandelbrot set, as well as the high esteem in which Arthur C Clarke holds Mandelbrot’s ideas.
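For readers who have never seen how little machinery the Mandelbrot set needs, here is a minimal sketch of my own (it is not taken from the video): the entire object comes from repeatedly applying z → z² + c and asking whether the orbit stays bounded.

```python
# Minimal escape-time sketch of the Mandelbrot iteration z -> z**2 + c.
# Points whose orbit stays bounded (here: |z| <= 2 after max_iter steps)
# are treated as belonging to the set; everything else escapes.

def mandelbrot_iterations(c: complex, max_iter: int = 100) -> int:
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # orbit has escaped, so c lies outside the set
            return n
    return max_iter           # treated as inside the set

# Crude ASCII rendering of the region [-2, 1] x [-1.2, 1.2]
if __name__ == "__main__":
    for row in range(24):
        y = 1.2 - row * 0.1
        line = ""
        for col in range(60):
            x = -2.0 + col * 0.05
            line += "#" if mandelbrot_iterations(complex(x, y)) == 100 else " "
        print(line)
```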

There are two other areas in this video with which I feel you should acquaint yourself as well. Both relate to my wider views about physics in relation to my Awareness model. If you look at timeframe 29:50 to 31:08 you will see where the famous scientist Stephen Hawking suggests that the universe probably ends at the Planck length. I have regularly suggested that it is at this same point that my concept of the fourth dimension kicks in, and it is at this juncture that all cosmic phenomena become virtual. In other words, I agree with him.

The second timeframe I think you should be aware of is the timeframe between 46:10 and 48:51. This area relates to Jung patterns and consciousness theory. Jung believed that there are primordial images we all share. Fractal awareness provides new insight into how our minds work. I argue that because all of nature is fractal, our brains and minds consider all information in a fractal manner as well. If this is the case then I think my blog titled Your sense of “I” is different to your physical body is a useful instrument in helping to understand how we connect with nature at all conceivable levels, levels that include how we not only think but also behave. This is consistent with my original barcode hypothesis contained within my 2011 thesis.

 

9] Three important scientific phenomena that physics cannot yet explain

Is non-locality [a metaphysical phenomenon] involved somewhere here?

I feel it is strange that, in so many of the different debates I have read about the metaphysical nature of non-locality, the following three phenomena are included.

Quote:

“Flaws in Current Atomic Theory

We have become so accustomed to the atomic models we have been taught that even our scientists neglect to consider that these are still mere models, which violate both the laws of physics as well as common sense when taken as the literal reality. We are taught that the nucleus mysteriously generates an endless “positive charge force” that pulls on the equally endless “negative charge force” of orbiting electrons. There is no explanation for the source of this apparently endless power output from both nucleus and orbiting electrons, nor is there any theory detailing a power drain from this effort. Further, the closely packed, strongly repelling positively charged protons in the nucleus are said to be kept from flying apart by another mysterious attracting force (Strong Nuclear Force) that for some unexplained reason only appears between protons when they are extremely close to each other in the nucleus. Again, this apparent attracting force in nature is completely unexplained, as is its unending power source. Atomic structure stays together and intact like this for billions of years with no explanation. Further, objects made of atoms also remain together, often under great mechanical stresses and strains. Again, this tremendous ongoing effort of atomic bonds holding together as molecules is completely unexplained. Endless strong nuclear force energy, endless positive charge energy of protons, endless negative charge energy of orbiting electrons, endless atomic bond energy and even endless gravitational energy emerging from atoms .. all at the core of today’s science and all completely unexplained. This is the result of our science legacy from a much simpler time that still remains blindly accepted and completely unexamined by today’s scientists.

Electricity

As mentioned in relation to atomic theory, electric charge is a complete mystery in today’s science. Benjamin Franklin invented this concept as a useful model of observations, but never truly explained it. Two statically charged objects suspended from strings at a distance from each other will pull toward each other and remain angled toward each other against gravity indefinitely, as long as no external influence in the environment around them intervenes. There is no known power source supporting this endless effort, yet it is simply accepted as normal by today’s scientists and educators. The new subatomic principle replaces this flawed concept with the proper understanding of electric charge observations, resolving the mystery of electric charge and electricity in general via the same basic principle that runs throughout the book.

Magnetism

Magnetism is another mysterious and completely unexplained phenomenon in today’s science. A block of wood will not cling to a refrigerator, yet a permanent magnet will. What is the difference? Magnetic energy, of course. So where is the power source for this energy that allows a heavy magnet to cling endlessly against gravity, and even hold other heavy objects as well? You won’t find any answers in today’s science — only the same flawed “Work Equation” explanation attempt offered for gravity, mentioned in Chapter 1. The mystery of magnetism is solved here, again via the same new subatomic principle.”

Quoted from:

http://www.thefinaltheory.com/booksummary.html

 

10] New physics is not new at all

Has an inconvenient truth existed in physics for the last two hundred years?

I have chosen to be provocative with my introduction because, in my opinion, there is a great man in history who has arguably contributed as much to the evolution of physics as have Isaac Newton and Albert Einstein. Furthermore, this outstanding figure was not afraid to tear away from materialist physics either. In other words, he believed in metaphysical physics. In many ways I think he lived and worked mostly in the sub-quantum world, and for this reason alone I feel he is a great leader within physics in his own right. The person I am talking about is the Swedish-born theologian, philosopher and scientist Emanuel Swedenborg.

If you have never heard of Emanuel Swedenborg, I urge you to take time out to read what the science writer Michael Talbot wrote about him. The article is attached. As many of my readers will probably know, I am very interested in holographic scientific theory and other theories that seem to me to be related. In particular I feel attracted to the Bohm and Pribram holographic brain theories as well as, more recently, Cahill’s Process Physics model, which is based upon his own neural network theory.

Talbot draws attention to what he feels are Swedenborg’s uncanny insights about reality, insights that are explainable in terms of the holographic paradigm. Talbot also draws attention to Swedenborg’s belief in a deeper level of reality, which he described as that of ‘…angels of the third heaven’. In other words, Swedenborg seems to be saying that there is no distinction between what is symbolic and what is real. I argue that symbolic reality is sub-quantum reality (like consciousness), whereas standard-model physics refers to such phenomena as irrelevant metaphysics. These are the reasons why I feel you will find the contents of Talbot’s article quite compelling. It may also help you understand a little more about my own theory about awareness and thought.

Talbot on Swedenborg.pdf

 

11] It’s Not Cold Fusion… But It’s Something

An experiment that earned Stanley Pons and Martin Fleischmann widespread ridicule in 1989 wasn’t necessarily bogus, and here are the reasons why. The topic is Low-Energy Nuclear Reactions (LENR)

Commentary on the above article, and a quotation

Quote:

“Hidden in the confusion are many scientific reports, some of them published in respectable peer-reviewed journals, showing a wide variety of experimental evidence, including transmutations of elements. Reports also show that LENRs can produce local surface temperatures of 4,000-5,000 K and boil metals (palladium, nickel and tungsten) in small numbers of scattered microscopic sites on the surfaces of laboratory devices.”

It seems that this scientific phenomenon may be more real than originally thought. One reason for this is the following quotation from a United States government document entitled “Report of the committee on armed services. House of Representatives on H.R. 4909 together with additional views”. An extract from this document is as follows.

Quote:

“Low Energy Nuclear Reactions (LENR) Briefing

The committee is aware of recent positive developments in developing low-energy nuclear reactions (LENR), which produce ultra-clean, low-cost renewable energy that have strong national security implications. For example, according to the Defense Intelligence Agency (DIA), if LENR works it will be a “disruptive technology that could revolutionize energy production and storage.” The committee is also aware of the Defense Advanced Research Project Agency’s (DARPA) findings that other countries including China and India are moving forward with LENR programs of their own and that Japan has actually created its own investment fund to promote such technology. DIA has also assessed that Japan and Italy are leaders in the field and that Russia, China, Israel, and India are now devoting significant resources to LENR development. To better understand the national security implications of these developments, the committee directs the Secretary of Defense to provide a briefing on the military utility of recent U.S. industrial base LENR advancements to the House Committee on Armed Services by September 22, 2016. This briefing should examine the current state of research in the United States, how that compares to work being done internationally, and an assessment of the type of military applications where this technology could potentially be useful.”

 

12] Why I think David Bohm is a hero of science

Bohm cared to get to the bottom of all things, including reality

It is no secret that I feel David Bohm is amongst the deepest thinking and cleverest scientists of all time. Bohm dared to think about and explain what many other scientists of his generation thought was ludicrous, and was prepared to embrace the most profound ideas of Eastern philosophy into his scientific theories as well.

If you take time to look more closely at the life and times of David Bohm in the attachments, I think you will see the man to be scientifically very insightful and gifted, and a person with a deep sense of personal and social morality as well. He said that each individual is in total contact with all other things (phenomena), including us with each other (his Implicate Order model). Furthermore, he believed that if mankind took the time to recognize this connectedness, the problems of the world would sort themselves out.

I think if you can understand where Bohm is coming from with his views about science and life in general, you will understand my instinctual views as well, together with the reasons why. I have incorporated two secondary works about the life, times and beliefs of David Bohm. I feel you will find them reasonably straightforward to read and his words gripping and challenging, even if you do not agree with them.

Bohm for website.pdf

 

13] The importance of Gödel in mathematical and physical science

Gödel’s mathematical theorem has had a profound effect on both these disciplines of science

The following quotation was primarily written for scientists, and as such you may find it somewhat difficult to understand. I will do my best to explain Gödel’s theorem via an analogy [you will find similar words in the primary text]. If I were to walk up to you in the street and say “I am lying”, the statement can never be settled: if it is true then I am indeed lying, which makes it false, and if it is false then I am not lying, which makes it true [in philosophy this is known as the “liar’s paradox”]. Gödel showed that every sufficiently rich symbolic system contains a comparable statement, one that in effect says “there is no proof of this statement”, and so no such system (if it is consistent) can prove every truth it can express. In summary, this means it is impossible to have complete purity of knowledge: every proof rests on prior statements and assumptions, and this chain of dependence never ends. I hope that these words help.
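For readers comfortable with a little notation, the heart of the construction described in the quotation below can be compressed into a single line (my own gloss, not part of the quoted text). Writing $\mathrm{Prov}_T(x)$ for “x is provable in the formal system T”, Gödel builds a sentence $G$ with the property

$$G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner).$$

If $T$ is consistent it cannot prove $G$, for proving $G$ would amount to proving a sentence that asserts its own unprovability; and precisely because $G$ is unprovable, what $G$ asserts is true. A consistent system of this kind therefore always contains true but unprovable sentences.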

Quote:

“Godel

“The proof begins with Godel defining a simple symbolic system. He has the concept of a variables, the concept of a statement, and the format of a proof as a series of statements, reducing the formula that is being proven back to a postulate by legal manipulations. Godel only need define a system complex enough to do arithmetic for his proof to hold.

Godel then points out that the following statement is a part of the system: a statement P which states “there is no proof of P”. If P is true, there is no proof of it. If P is false, there is a proof that P is true, which is a contradiction. Therefore it cannot be determined within the system whether P is true.

As I see it, this is essentially the “Liar’s Paradox” generalized for all symbolic systems. For those of you unfamiliar with that phrase, I mean the standard “riddle” of a man walking up to you and saying “I am lying”. The same paradox emerges. This is exactly what we should expect, since language itself is a symbolic system.

Godel’s proof is designed to emphasize that the statement P is *necessarily* a part of the system, not something arbitrary that someone dreamed up. Godel actually numbers all possible proofs and statements in the system by listing them lexigraphically. After showing the existence of that first “Godel” statement, Godel goes on to prove that there are an infinite number of Godel statements in the system, and that even if these were enumerated very carefully and added to the postulates of the system, more Godel statements would arise. This goes on infinitely, showing that there is no way to get around Godel-format statements: all symbolic systems will contain them.

Your typical frustrated mathematician will now try to say something about Godel statements being irrelevant and not really a part of mathematics, since they don’t directly have to do with numbers… justification that might as well turn the mathematician into an engineer. If we are pushing for some kind of “purity of knowledge”, Godel’s proof is absolutely pertinent”

Quoted from:

http://www.rbsp.info/rbs/JOB/git.html

 

14] Is there such a thing as sub-quantum phenomena?

It is important that you view the contents of this blog in relation to my new blog entitled “The fundamental universe revisited”. This new blog is designed to be the master science reference blog for all the science blog postings on my website.

I believe there is and it can be scientifically demonstrated as such

I describe sub-quantum phenomena as activities occurring in the universe at such a low level that they are not observable or measurable by conventional physics research methodologies. As such, physics has no serious interest in them, although they are a topic of fascination for most cosmologists. I describe sub-quantum phenomena as phenomena occurring below the Planck level, which is the point at which my concept of a fourth dimension kicks in. If I am correct in this hypothesis, then I need to describe the sorts of sub-quantum activities that may come together, and to attempt to demonstrate how they may all interconnect, in order to render the whole into an argument of some believable substance (meaning). To help me achieve this objective I created my Awareness model of (reality) physics and, in the process, coined the phrase ‘fine quantum entangled’.

This phrase means that I believe all phenomena whatsoever are somehow linked to one another, whether below or above the Planck line, and that the means of building a story about this interconnection is fine quantum entanglement. Unfortunately, conventional physics theorists refer to phenomena occurring at a sub-quantum level as metaphysical, and regard any discussion about them as mere pseudoscience and therefore scientifically meaningless. Many reputable scientists do not see this to be the case and, as a philosophical scientist, neither do I. I have built the whole of my reality model around sub-quantum phenomena. I support my position by submitting to my readers the following abstract from Groessing’s 2013 paper. Groessing is a respected scientist. Unfortunately, most laypersons will probably find the abstract a bit hard going (so do I). However, I feel it is important simply for you to know that such material exists in the first place, and furthermore that it is credible. I have included not only the quote but also a URL leading to Groessing’s complete paper:

https://archive.org/details/arxiv-1304.3719.

Abstract:

Emergence of Quantum Mechanics from a Sub-Quantum Statistical Mechanics

Gerhard Groessing

(Submitted on 12 Apr 2013)

A research program within the scope of theories on “Emergent Quantum Mechanics” is presented, which has gained some momentum in recent years. Via the modeling of a quantum system as a non-equilibrium steady-state maintained by a permanent throughput of energy from the zero-point vacuum, the quantum is considered as an emergent system. We implement a specific “bouncer-walker” model in the context of an assumed sub-quantum statistical physics, in analogy to the results of experiments by Couder’s group on a classical wave-particle duality. We can thus give an explanation of various quantum mechanical features and results on the basis of a “21st century classical physics”, such as the appearance of Planck’s constant, the Schrödinger equation, etc. An essential result is given by the proof that averaged particle trajectories’ behaviors correspond to a specific type of anomalous diffusion termed “ballistic” diffusion on a sub-quantum level. It is further demonstrated both analytically and with the aid of computer simulations that our model provides explanations for various quantum effects such as double-slit or n-slit interference. We show the averaged trajectories emerging from our model to be identical to Bohmian trajectories, albeit without the need to invoke complex wave functions or any other quantum mechanical tool. Finally, the model provides new insights into the origins of entanglement, and, in particular, into the phenomenon of a “systemic” nonlocality.

http://arxiv.org/abs/1304.3719
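The abstract’s term “ballistic diffusion” has a simple operational meaning that the toy sketch below illustrates (this is my own illustration, not Groessing’s bouncer-walker model): for ordinary diffusion the mean squared displacement grows like t, while for ballistic motion it grows like t².

```python
import random

# Illustrative sketch only: it contrasts the two scaling laws referred to
# in the abstract above.
#   normal diffusion:    mean squared displacement ~ t
#   ballistic diffusion: mean squared displacement ~ t**2

def msd_random_walk(steps: int, walkers: int = 2000) -> float:
    """Average squared displacement of many independent +/-1 random walks."""
    total = 0.0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            x += random.choice((-1, 1))
        total += x * x
    return total / walkers            # grows roughly like `steps`

def msd_ballistic(steps: int) -> float:
    """Constant-velocity motion: x = v*t with v = 1, so MSD = t**2."""
    return float(steps * steps)

for t in (10, 100, 1000):
    print(t, round(msd_random_walk(t), 1), msd_ballistic(t))
```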

Note: readers should understand that non-local means, in its simplest interpretation, that something in the world of science is happening but no one seems to understand how or why it is happening. An example of this that you may identify with is consciousness.

 

15] Quantum mechanics made easy

A light hearted approach to understanding this mysterious and complex area of physics

I recently stumbled across this presentation and I feel that you will find it just as interesting as I have. I especially like the humorous manner in which the item has been written. The storyline is as if it were being spoken by Einstein to a student in a spacecraft touring the universe. I have written another scientific blog relating to professional mysticism. I have done this to demonstrate to my readers that many influential scientists seem to notionally support my belief that awareness [not consciousness] plays a critical role in both cosmological science and our everyday lives. If you have the opportunity to read this blog, I feel it may change your mind as to how you see phenomena like consciousness and intuition. These are entangled in all of our lives via quantum mechanics, through the medium of an awareness that I describe as primordial. An example of this is my blog relating to us having dual consciousnesses.

Article:

Simple Quantum Mechanics

Url source:

http://journeybystarlight.blogspot.com/2007/06/quantum-mechanics-for-cat-lovers-newton.html?gclid=CMP51Lao_JoCFRYiagodFH66eQ

 

16] The parallel nature of the Bohm-Hiley and Awareness models of physics

I believe both models have great similarity to each other. Keep in mind that this blog was originally posted in 2014. As of April 2017 a few of my ideas have marginally changed since then.

The Bohm-Hiley model of physics relates to the holographic nature of the universe and beyond. This model is more commonly known as the Implicate Order model. The model was originally developed by David Bohm. It was after Bohm’s death in 1992 that Hiley fine-tuned the mathematics of Bohm’s ideas to demonstrate its compatibility with relativistic physics. Hiley brought forward highly abstract Grassmann algebra to bring this about.
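For readers wondering what a Grassmann algebra is (my gloss, not Murrell’s or Hiley’s wording): it is an algebra built from generators $\theta_i$ that anticommute instead of commute,

$$\theta_i \theta_j = -\,\theta_j \theta_i, \qquad \theta_i^{2} = 0,$$

so every generator squares to zero. Algebras of this anticommuting kind, together with the closely related Clifford algebras, are the “highly abstract” machinery referred to above.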

Within the attached PowerPoint file you will find that I have selected sixteen different extracts from an article about the life, times and scientific beliefs of David Bohm written by Beatrix Murrell. The article is titled “The Cosmic Plenum: Bohm’s Gnosis”. I have commented on each of these extracts with respect to how similar the characteristics and processes of the Hiley-Bohm and Awareness models of physics are.

The major difference between the two models is that the Hiley-Bohm model of physics has been mathematically constructed to be compatible with the relativity model of physics, whereas the Awareness model is validated by a described experiment. The Bohm-Hiley model also embraces all phenomena whatsoever in a single reality-frame; the Process Physics model does this as well. This includes paranormal phenomena. The Process, Hiley-Bohm and Awareness models of physics are all seeded in a realm of physics that is about as deep as any physics model could ever reach. All three models describe the origins of reality from an abstract corner of nothing.

Here is a PowerPoint presentation relating to this blog:

Comparisons between Bohm’s Implicate Order model and the Awareness model

 

17] What is the quantum wave function in physics?

I feel that this quality video seems to answer this question quite well

https://vimeo.com/231532519

 

18] Visualization of Quantum Physics (Quantum Mechanics)

This video visually demonstrates some basic quantum physics concepts using the simple case of a free particle.

All the simulations here are based on real equations and laws.

https://www.youtube.com/watch?v=p7bzE1E5PMY
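For reference, the “real equations” a free-particle simulation of this kind is presumably built on are the time-dependent Schrödinger equation with zero potential (this is my own note, not a claim taken from the video):

$$i\hbar\,\frac{\partial \psi(x,t)}{\partial t} \;=\; -\,\frac{\hbar^{2}}{2m}\,\frac{\partial^{2} \psi(x,t)}{\partial x^{2}}.$$

Its Gaussian wave-packet solutions spread out as time passes, which is the characteristic behaviour that visualizations of a free particle usually show.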

 

19] Quantum Mechanics and Every Day Life

Quantum mechanics and everyday life, a Stanford Encyclopedia of Philosophy article

First published Wed Nov 29, 2000; substantive revision Tue Sep 1, 2009

Quote:

“Quantum mechanics is, at least at first glance and at least in part, a mathematical machine for predicting the behaviors of microscopic particles — or, at least, of the measuring instruments we use to explore those behaviors — and in that capacity, it is spectacularly successful: in terms of power and precision, head and shoulders above any theory we have ever had. Mathematically, the theory is well understood; we know what its parts are, how they are put together, and why, in the mechanical sense (i.e., in a sense that can be answered by describing the internal grinding of gear against gear), the whole thing performs the way it does, how the information that gets fed in at one end is converted into what comes out the other. The question of what kind of a world it describes, however, is controversial; there is very little agreement, among physicists and among philosophers, about what the world is like according to quantum mechanics. Minimally interpreted, the theory describes a set of facts about the way the microscopic world impinges on the macroscopic one, how it affects our measuring instruments, described in everyday language or the language of classical mechanics. Disagreement centers on the question of what a microscopic world, which affects our apparatuses in the prescribed manner, is, or even could be, like intrinsically; or how those apparatuses could themselves be built out of microscopic parts of the sort the theory describes.[1]

That is what an interpretation of the theory would provide: a proper account of what the world is like according to quantum mechanics, intrinsically and from the bottom up. The problems with giving an interpretation (not just a comforting, homey sort of interpretation, i.e., not just an interpretation according to which the world isn’t too different from the familiar world of common sense, but any interpretation at all) are dealt with in other sections of this encyclopedia. Here, we are concerned only with the mathematical heart of the theory, the theory in its capacity as a mathematical machine, and — whatever is true of the rest of it — this part of the theory makes exquisitely good sense.”

http://plato.stanford.edu/entries/qm/
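To make the phrase “mathematical machine for predicting behaviors” a little more concrete, here is a deliberately tiny illustration of my own (it is not from the Stanford article): a state vector plus the Born rule yields the outcome probabilities the theory predicts.

```python
import numpy as np

# Minimal illustration of quantum mechanics as a "prediction machine":
# given a state vector, the Born rule yields the probabilities of
# measurement outcomes.

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

psi = (ket0 + ket1) / np.sqrt(2)      # equal superposition of two outcomes

p0 = abs(np.dot(ket0, psi)) ** 2      # Born rule: P(outcome) = |<outcome|psi>|^2
p1 = abs(np.dot(ket1, psi)) ** 2
print(p0, p1)                         # 0.5 0.5 -- the theory's prediction
```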

 

20] A comparison between The Process and Hiley-Bohm models of physics

A discussion and comparison about what I see as the most important features relevant to both.

It was around this time I was attempting to formalise my thinking about my Primordial model of sub-quantum physics.

I was assisted by my colleague (MFP) in bringing this short work together. What follows is a quoted record of an email dialogue between MFP and me in March 2014. Apart from where I have emboldened and italicized certain text, the message remains exactly as when I downloaded it from my computer at the time. The opening paragraph of the quote is my specific question to my colleague, and his response follows immediately thereafter:

Quote:

“… I think I have understood what you have sent fairly well but where I am confused is in the area of Cahill wave aether in relationship to Bohm background implicate energy hypothesis. As a layperson I feel they are of the same nature and this is where I need your guidance. The extract is a section of a much wider secondary argument and I have highlighted in red the respective areas I have difficulty bringing together”

MFP’s quoted response follows below.

”… Thanks for your questions. I have tried to clarify as follows.

Referring to quantum theory, Bohm’s basic assumption is that “elementary particles are actually systems of extremely complicated internal structure….,

I believe that assumption is true, and if Cahill is correct his theory also implies that it is true.

….acting essentially as amplifiers of *information* contained in a quantum wave.”

I am not sure what “amplifiers” means. However, in Cahill’s theory, particles consist of quantum wave packets that contain information.

As a consequence, he has evolved a new and controversial theory of the universe–a new model of reality that Bohm calls the “Implicate Order.”

I think Cahill’s neural network model that we have talked about shares some features with Bohm’s “Implicate Order” as mentioned below.

The theory of the Implicate Order contains an ultraholistic cosmic view; it connects everything with everything else….

In Cahill’s neural network model everything is also connected to everything else.

In principle, any individual element could reveal “detailed information about every other element in the universe.

This is not true of Cahill’s model. This is because although in Cahill’s model each element is connected to every other element, the information in his model lies in the relationships between the elements and not in the elements themselves. Eg if you had an array of dots you could create a picture by drawing lines of various types between the dots. If you represented non-existence by joining dots with invisible ink and physical structures by using visible ink, then every dot could be connected to every other so as to represent an undivided universe, but individual dots would not reveal information about other dots.

” The central underlying theme of Bohm’s theory is the “unbroken wholeness of the totality of existence as an undivided flowing movement without borders.”

That is true of Cahill’s model too.

….. Bohm notes that the hologram clearly reveals how a “total content–in principle extending over the whole of space and time–is enfolded in the movement of waves (electromagnetic and other kinds) in any given region.” The hologram illustrates how “information about the entire holographed scene is enfolded into every part of the film.” It resembles the Implicate Order in the sense that every point on the film is “completely determined by the overall configuration of the interference patterns.” Even a tiny chunk of the holographic film will reveal the unfolded form of an entire three-dimensional object.

That is true of holograms, but not of neural networks.
So although Cahill’s neural network model is completely interconnected, a small piece of it will not reveal the whole. In fact a small piece of it would not even function, because the whole has to be involved for anything to function.

Proceeding from his holographic analogy, Bohm proposes a new order–the Implicate Order where “everything is enfolded into everything.” This is in contrast to the explicate order where things are unfolded.

This is a poetic idea because it implies that if you hold a piece of the universe in your own hands, then the rest of the universe is “enfolded” into it, so you are holding the whole universe in your own hands.

However, with Cahill’s model, if you are holding a piece of the universe in your hands, the rest of the universe is not enfolded. Nevertheless since everything is interconnected, if you are holding a piece of the universe, then you are also holding the rest, but with that rest lying outside rather than in your hands.

Bohm believes that *the Implicate Order has to be extended into a multidimensional reality;* in other words, the holomovement endlessly enfolds and unfolds into infinite dimensionality. Within this milieu there are independent sub-totalities (such as physical elements and human entities) with relative autonomy. The layers of the Implicate Order can go deeper and deeper to the ultimately unknown. It is this “unknown and undescribable totality” that Bohm calls the holomovement. The holomovement is the “fundamental ground of all matter.”

This is similar to Cahill’s model if you accept that the iterations of his neural network correspond to the holomovement.

….Bohm suggests that instead of thinking of particles as the fundamental reality, the focus should be on discrete particle-like quanta in a continuous field.

I think Cahill’s model suggests this too.

More complex and subtle, this second category applies to a “superfield” or *information* that guides and organizes the original quantum field.

Something similar may be true of Cahill’s model too because the quantum fields emerge from the patterns of information produced by his neural network.

Bohm considers it to be similar to a computer which supplies the information that arranges the various forms–in the first category.

This seems similar to Cahill’s model but Cahill models the information as coming from a neural network rather than a computer.

Bohm’s theory of the Implicate Order stresses that the cosmos is in a state of process.

Cahill’s theory of Process Physics with its neural network model stresses this too.

Bohm’s cosmos is a “feedback” universe that continuously recycles forward into a greater mode of being and consciousness.

Cahill’s neural network model includes feedback and continuously iterates forward. This would seem similar to recycling forward.

At the very depths of the ground of all existence Bohm believes that there exists a special energy. For Bohm it is the plenum; it is an “immense background of energy.” The energy of this ground is likened to one whole and unbroken movement by Bohm. He calls this the “holomovement.” It is the holomovement that carries the Implicate Order.

In Cahill’s theory, the iterations of the neural network can be considered equivalent to the holomovement. As the neural network continuously produces patterns of information that correspond to the generation of expanding space and matter, it could be considered a source of unlimited background energy.

a “movement in which new wholes are emerging.”

I think this corresponds to Cahill’s claim that the iterations of the neural network unceasingly produce new patterns of information, which correspond to new structures of space and matter.
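
As a purely illustrative sketch of what such an iteration might look like (this is my own simplification for lay readers, not a faithful copy of Cahill’s published mathematics), one can picture a matrix of relational strengths that is repeatedly relaxed and then refreshed with noise, so that new patterns of relationships keep emerging without any external input:

```python
import numpy as np

# A toy sketch only (my own simplification, not Cahill's actual Process
# Physics iterator): a matrix of relational "strengths" between nodes is
# repeatedly relaxed and refreshed with noise, so new patterns of
# relationships keep emerging without any external input.

rng = np.random.default_rng(0)
n = 8                                    # number of nodes
B = rng.normal(scale=0.1, size=(n, n))   # initial relational strengths
B = B - B.T                              # keep the relations antisymmetric

alpha = 0.05
for step in range(100):
    noise = rng.normal(scale=0.01, size=(n, n))
    noise = noise - noise.T
    # Relax the existing pattern slightly and inject fresh noise. The
    # inverse term is meant to echo the self-referential coupling that
    # Cahill describes; the details here are mine, chosen for simplicity.
    B = B - alpha * (B + np.linalg.pinv(B)) + noise

print(np.round(B, 2))                    # the current pattern of relations
```

The point of the sketch is simply that the iteration never settles, so “new wholes” keep appearing.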

Bohm also declares that the “implicate order has to be extended into a multidimensional reality.” He proceeds: “In principle this reality is one unbroken whole, including the entire universe with all its fields and particles. Thus we have to say that the holomovement enfolds and unfolds in a multidimensional order, the dimensionality of which is effectively infinite. Thus the principle of relative autonomy of sub-totalities–is now seen to extend to the multi-dimensional order of reality.”

The dimensionality of Cahill’s neural network is effectively infinite. However the patterns of information that it produces tend to be mainly three dimensional, which provides an explanation of why we perceive ourselves as living in a three dimensional universe.

Hope this helps.

 

21] Is space-time infinite dimensional?

It is important that you view the contents of this blog in relationship to my new blog entitled: “The fundamental universe revisited“. This new blog is designed to be the master science referential blog for all my science blog postings in my website.

Cantorian Topology and Geometry postulates that it is, and I support this general position as well.

I have selected various extracts from the ideas and works of El Naschie. I have chosen El Naschie’s work not only because he is respected in his specialized world of physics, but also because his views seem to parallel my own regarding both the existence of a fourth dimension and its inherent fractal-like properties (my opinion). El Naschie also believes that the 3D dimension is infinite, and within this medium I think he is saying that there is an inherent duality in the system between phenomena belonging to an abstract continuum (like my primordial fourth dimension) and phenomena that are materially discrete, such as 3D space-time particle activity. In my Awareness model I describe this same duality as being a concurrent one between both levels of cosmic phenomena. It is for this reason that I briefly introduce you to Cantorian Topology and Geometry. I feel that El Naschie’s cosmological ideas are generally supportive of my own, and as such this blog helps support the validity of the cosmic ideas I express to my readers via the medium of my Awareness model of physics.

Cantorian Topology and Geometry

 

22] A boy and his atom

A fascinating story about the world’s smallest movie

This movie was made by IBM, frame by frame, by photographing atoms at a magnification of over 100 million times. In doing so, IBM was testing the limits of digital memory storage (by moving individual atoms) as well as the limits of film making.

The World’s Smallest Movie

The making-of video for the above movie can be seen here

 

23] Two respected scientists talk about life and nothing

It is likely most of my readers have heard about the views of Lawrence Krauss regarding cosmological “nothing”

I introduce you to what I consider to be a very important scientific and philosophical video featuring Lawrence Krauss and Richard Dawkins. The video is a general discussion between these two great scientific and philosophical minds about not only reality but also what may constitute cosmological “nothing”. The program is now around six years old, but I feel that its contents remain relevant today.

Because of what I consider to be the importance of the information contained within the conversation between these two scientists, I have also prepared a pdf file of the same conversation. I feel that by doing this I am giving Mums, Dads, and Kids a better understanding of both what these two men are talking about and what it means in terms of everyday life. If you find yourself enjoying this video presentation, I also strongly suggest that you view my blog entitled “Do some people think that science is a belief system?“. The themes of both videos are much the same, i.e. all three presenters are down-to-earth presenters of information relating to everyday life, the meaning of life and wider reality.

The video

The pdf file

 

24] Developing the cosmology of a continuous (steady) state universe

A debate about a steady state universe, presented in an introductory form by Richard L. Amoroso

I believe our 3D universe exists in a concurrent relationship with a separate field fourth dimension. I have introduced readers to the cosmological ideas of Richard Amoroso because his Noetic physics seems to have characteristics similar to those I have outlined in my Awareness model. I draw reader attention to some of these similarities. I have underlined certain text to assist you to better follow my interpretation of Amoroso’s cosmological physics ideas.

Because of numerous unresolved problems in contemporary cosmological physics, Amoroso hypothesizes that it is time to think about a new standard model of cosmology, a cosmological theory he has titled the Continuous State Universe (CSU). The author believes that only an extended dualistic theory model, one that introduces an additional causal order (such as my fourth dimension), will help resolve the dilemma. Those with a background in physics can peruse the essence of this duality (of dimensions) debate at the top of page two. Amoroso points out that although the notion of Newtonian absolute space (which I believe in) has been discarded by contemporary physics, one appears to already exist (see the bottom of page two and the top of page three).

Amoroso believes his new CSU theory “represents the ground of all existence” and “resides beyond the observable Hubble universe”, i.e. the determinable outer reaches of the 3D universe. Amoroso proceeds to point out that “Einstein’s theories of relativity can be simplistically represented as a ‘virtual reality’” by interpreting CSU – AS (his physics model) as a fundamental background space of the relative space fields referred to by Einstein. In my opinion this means that Amoroso is stating that his CSU – AS model is much akin to my Awareness model of physics and its notion of a cosmological backdrop of primordial awareness. The author goes on to add that “Space with boundary conditions or energy is fundamental to all forms of matter”. I suggest that the boundary conditions Amoroso is talking about are akin to the arbitrary boundary I frequently talk about between the sub-quantum and quantum levels, that is, the Planck level.

In order for you to better appreciate the significance of the cosmic comparisons I am making, I suggest you proceed to the lowest section of page three and read Einstein’s quote. The quote commences with the words “the victory over…” and concludes with the words “…space without a field”. You will note that Einstein refers to his new field theory as being dependent upon space-time parameters, which is exactly my position in my Awareness model.

From this quotation onwards Amoroso supports his hypothesis with an extended mathematical argument, and he concludes his paper by saying “Scientific theory, whether popular or unpopular at any point in history, must ultimately be based on description of natural law, not creative fantasies of scientists imaginations. Only by adequate determination of natural law can successfully model reality…” It is for these reasons that I feel it is very important to introduce my readers to the cosmological ideas of Amoroso. His ideas complement the already existing parallels between my Awareness model of physics and the Process and Hiley-Bohm models. The Awareness model is supported by the SMUT particle experiment.

new steady state theory part 1

new steady state theory part 2

new steady state theory part 3

 

25] What is Process Philosophy?

Process Philosophy:

Quote from Stanford University article cited below:

“The philosophy of process is a venture in metaphysics, the general theory of reality. Its concern is with what exists in the world and with the terms of reference in which this reality is to be understood and explained. The task of metaphysics is, after all, to provide a cogent and plausible account of the nature of reality at the broadest, most synoptic and comprehensive level. And it is to this mission of enabling us to characterize, describe, clarify and explain the most general features of the real that process philosophy addresses itself in its own characteristic way. The guiding idea of its approach is that natural existence consists in and is best understood in terms of processes rather than things — of modes of change rather than fixed stabilities. For processists, change of every sort — physical, organic, psychological — is the pervasive and predominant feature of the real.

Process philosophy diametrically opposes the view — as old as Parmenides and Zeno and the Atomists of Pre-Socratic Greece — that denies processes or downgrades them in the order of being or of understanding by subordinating them to substantial things. By contrast, process philosophy pivots on the thesis that the processual nature of existence is a fundamental fact with which any adequate metaphysic must come to terms.

Process philosophy puts processes at the forefront of philosophical and specifically of ontological concern. Process should here be construed in pretty much the usual way — as a sequentially structured sequence of successive stages or phases. Three factors accordingly come to the fore:

  1. That a process is a complex — a unity of distinct stages or phases. A process is always a matter of now this, now that.
  2. That this complex has a certain temporal coherence and unity, and that processes accordingly have an ineliminably temporal dimension.
  3. That a process has a structure, a formal generic format in virtue of which every concrete process is equipped with a shape or format.”

http://plato.stanford.edu/entries/process-philosophy/

[First published Tue Apr 2, 2002; substantive revision Wed Jan 9, 2008]

 

26] Albert Einstein’s ideas about Simultaneity in his Theory of Relativity

I believe that this animated video is self-explanatory

Simultaneity is explained herein. Readers should note that this blog complements my blog entitled “The question of NOW and absolute simultaneity“, which remains a work in progress.

 

27] The emerging crisis in physics. Will physics soon need to take a new course of direction?

The magazine Scientific American seems to think this may be the case. You may also find my two blogs relating to this topic interesting as well

These two blogs are entitled “The questionable nature of the Standard model of physics” and “Is the scientific method living up to it’s own expectations?“.

The problem for the Standard model of physics is that although it correctly describes the attributes of sub-atomic particles, it does not show how these remarkable particles come to have such attributes. This is why the question of the existence of super-symmetry is so important to physicists allying themselves with the Standard model of physics, whereas the alternative models (such as those of Bohm and Cahill) do not need it. The same position applies to my Awareness model, in which I describe reality as being emergent from a never-ending continuum of blobs of information and knowledge. Furthermore, these blobs are self-generating, without any external force (energy) needed for them to continue doing so. The Bohm Implicate Order (holographic) model works along similar lines, as does the Cahill Process Physics model, and I feel all of these can be seen as somewhat parallel to each other. I am particularly interested in seeing the Bohm/Cahill type models come forward as credible alternatives to the Standard model because this would tend to substantiate my own views regarding the existence of a common awareness in all phenomena (not consciousness). I mean by this that my position is likely to have some degree of validity. Because these alternative models seem to rely on some type of memory (albeit short-lived) to explain the perpetually expansionary nature of their models, I think this is where the Awareness model may have a helpful feature to contribute to the debate, because it has inherent memory embodied within it at every stage.

I have extracted certain phrases from the Scientific American magazine dated May 2014 so that you may see why I feel some of my words make better sense. The sections I have copied for this blog are directly related to many of the comments I have just made. They relate to the huge urgency for physicists to finally determine the phenomenon of super-symmetry, and the subsequent need for some type of alternative model to show how it is that physics can be so weird at times. Most importantly of all, entrepreneurial physicists are already “rethinking of basic phenomena that underlies the fabric of the universe”. It seems to me such physicists already feel they have been defending a lost cause. The front cover of the Scientific American magazine is attached as well.

Excerpts from Scientific American may 2014.pdf

 

28] A guide to describing non-locality without employing mathematics

This step by step approach to understanding non-locality in physics may be useful for some of my readers

https://vimeo.com/231509523

 

29] Pilot wave theory explained

Contemporary physics seems to be more seriously considering pilot wave theory as a part of its quantum modelling. This video may assist you to better understand what the theory is about.

The video

Other supporting information:

Link 1

Link 2

 

30] Unusual and challenging E8 maths theory

It is important that you view the contents of this blog in relationship to my new blog entitled: “The fundamental universe revisited“. This new blog is designed to be the master science referential blog for all my science blog postings in my website.

These mathematical equations predict that many more sub-atomic particles and atomic forces are yet to be discovered

See video here

 

31] A profile of Professor Basil Hiley

Professor Hiley is arguably one of the finest scientists in contemporary times

He received the Majorana Prize for “Best person in physics” in 2012.

You will see where I have emboldened certain text within the following quotation:

I have a great deal of respect for Professor Hiley and I have quoted him in a number of my writings and blogs.

Quote:

“Basil J. Hiley, is a British quantum physicist and professor emeritus of the University of London. He received the Majorana Prize “Best person in physics” in 2012. Wikipedia

Born: 1935, Myanmar (Burma)

Long-time co-worker of David Bohm, Hiley is known for his work with Bohm on implicate orders and for his work on algebraic descriptions of quantum physics in terms of underlying symplectic and orthogonal Clifford algebras.[1] Hiley co-authored the book The Undivided Universe with David Bohm, which is considered the main reference for Bohm’s interpretation of quantum theory.

The work of Bohm and Hiley has been characterized as primarily addressing the question “whether we can have an adequate conception of the reality of a quantum system, be this causal or be it stochastic or be it of any other nature” and meeting the scientific challenge of providing a mathematical description of quantum systems that matches the idea of an implicate order.[2]

Hiley worked with David Bohm for many years on fundamental problems of theoretical physics.[10] Initially Bohm’s model of 1952 did not feature in their discussions; this changed when Hiley asked himself whether the “Einstein-Schrödinger equation”, as Wheeler called it, might be found by studying the full implications of that model.[7] They worked together closely for three decades. Together they wrote many publications, including the book The Undivided Universe: An Ontological Interpretation of Quantum Theory, published 1993, which is now considered the major reference for Bohm’s interpretation of quantum theory.[11]

In 1995, Basil Hiley was appointed to the chair in physics at Birkbeck College at the University of London.[12] He was awarded the 2012 Majorana Prize in the category The Best Person in Physics for the algebraic approach to quantum mechanics and furthermore in recognition of ″his paramount importance as natural philosopher, his critical and open minded attitude towards the role of science in contemporary culture”.[13][14]

Implicate orders, pre-space and algebraic structures (quote below)

“Much of Bohm and Hiley’s work in the 1970s and 1980s has expanded on the notion of implicate, explicate and generative orders proposed by Bohm.[51][52] This concept is described in the books Wholeness and the Implicate Order[53] by Bohm and Science, Order, and Creativity by Bohm and F. David Peat.[54] The theoretical framework underlying this approach has been developed by the Birkbeck group over the last decades. In 2013 the research group at Birkbeck summarized their over-all approach as follows:[55]

“It is now quite clear that if gravity is to be quantised successfully, a radical change in our understanding of spacetime will be needed. We begin from a more fundamental level by taking the notion of process as our starting point. [which I agree with] Rather than beginning with a spacetime continuum, we introduce a structure process which, in some suitable limit, approximates to the continuum. We are exploring the possibility of describing this process by some form of non-commutative algebra, an idea that fits into the general ideas of the implicate order. In such a structure, the non-locality of quantum theory can be understood as a specific feature of this more general a-local background and that locality, and indeed time, will emerge as a special feature of this deeper a-local structure.” [I suggest that my Noetic Scientist concept of primordial awareness may be a candidate in this area.]….https://en.wikipedia.org/wiki/Basil_Hiley

 

32] Did you know that cosmologically parts of us have been everywhere?

This idea underpins my belief of an invisible link between all things, regardless of time, location or circumstance

(Since I wrote this blog in 2013, I have learned that there is a cosmic phenomenon called entanglement that supports my beliefs in this area.)

 Journey of an Atom

Quote:

“We are all star children. Every atom in our bodies was once inside the fiery core of a star that exploded billions of years before our solar system was formed. At the same time, each of us is connected to all other life on this planet in ways we rarely imagine. Simple estimates suggest that each time we take a breath, we could be inhaling atoms exhaled by most other human beings who ever lived. We are not only connected to the stars, but to the full breadth of human history”.

Quote:

“… this story is not about all atoms. Because atoms, like people and dogs, and even cockroaches, have individual histories”.

“…this story is a story about one particular atom in particular, an atom of oxygen, locked in a drop of water, on a planet whose surface is largely covered by water but whose evolution is for the moment dominated by intelligent beings who lived on land. It could, at the present moment, be located in a glass of water you drink as you read this book. It could have been in a drop of sweat dropping from Michael Jordan’s nose as he leapt for a basketball in the final game of his career, or in a large wave that is about to strike land after travelling 4000 miles through the Pacific Ocean. No matter. Our story begins before water itself existed, and ends well after the planet on which the water is found is no more, the myriad human tragedies of the eons perhaps long forgotten. It is a story rich in drama, and poetry, with moments of fortune and remarkable serendipity, and more than a few of tragedy”

Source:

“Atom: An Odyssey from the Big Bang to Life on Earth… and Beyond”

Author: Lawrence M. Krauss. Publisher: Little Brown and Company 2001. ISBN 0 316 64877 0

To locate my ideas relating to the unusual science that I feel is applicable to this blog click here.

The lightning, the train, the movement and the act of observation

The Sagnac effect and its relation to Einstein’s lightning, train and observation analogy. A review.

Introduction

I introduce you to the Sagnac effect in physics. The 1913 Sagnac experiment, together with a long succession of confirming experiments, demonstrates the validity of Newtonian-type ether theory. The successful Sagnac experiment seems to be rarely discussed in mainstream relativity physics literature. Neither is the succession of similar experiments that offer evidence that the ether is a valid inertial reference frame for helping to understand and describe universal reality physics.

As a concept scientist, I offer today a theory that may help you to better understand the value of the ether inertial frame as a universal point of reference. It is likely that I am introducing you to new ideas. I do not seek to prove anything with this blog; I am not capable of doing so anyway. I offer my readers a minimal number of references. For this reason you should consider this blog as being a succession of ideas for you to consider.

If you have a deep interest in physics, it is likely you are aware of the alleged null result of the 1887 Michelson and Morley experiment, which sought to establish whether Newtonian-type ether theory was a valid hypothesis or not. If you look carefully through the literature, you will find that the Michelson and Morley experiment was never a null result; professional physicists at the time described it as an incomplete result. In the 1920s, Dayton Miller’s ether experiment, plus others around that time [including Ives and Stilwell], demonstrated that the original 1887 Michelson and Morley experiment had greater merit than first thought.

In 1913 the French scientist Georges Sagnac demonstrated the veracity of ether theory in the wider physical debate. Einstein knew about the successful Sagnac experiment as well. At odd times around that same period, Einstein admitted the necessity of having a physical inertial frame in order to make both of his models “work”. You will find a reference to Einstein’s apparent change of heart regarding ether in this PDF file. I have emboldened text that I feel may most interest you. It is important that you know that there are various ether theories. The common factor in all of these theories is that ether space is without time; ether space is absolute space. Einstein’s special and general relativity theories can be seen as concurrent theories with respect to the absolute ether inertial frame of reference. By this I mean the universal frame of reference. You will also see where I discuss how simultaneity is possible in ether theory. If you are keenly interested in the history and evolution of the ether debate in the pre-1950 period, I strongly urge you to read a paper written by Lloyd S. Swenson entitled “The Michelson-Morley-Miller experiments before and after 1905”.

The main text

The Sagnac effect is important because it addresses the issue of the isotropy of light velocity [with respect to the observer] in all possible inertial frames. If the isotropic speed of light is not found to be constant, then this presents problems for both of Einstein’s relativity theories. This is because he would have needed to modify his special relativity and general relativity models in order to make them compatible with a motionless ether. He did not do this because he felt that his theories made ether unnecessary. The physicists Lorentz and Poincare showed that such a change would have been relatively straightforward.

The Sagnac effect resolves any such dilemma by not postulating the constancy of the speed of light, but instead assuming the existence of a preferred inertial frame [the ether] in which simultaneity holds. The ether is called preferred because it is where the first synchronisation of clocks is made, and it is the frame relative to which light moves at its characteristic speed. This video* demonstrates the Sagnac experiment quite well. A kit for home use to study the Sagnac effect is available as well. For professional scientists I offer this additional link for you to consider.
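
For readers who would like a number to hang on to, the standard textbook expression for the Sagnac time difference between the two counter-rotating beams is 4AΩ/c², where A is the area enclosed by the light path and Ω is the rotation rate. The short calculation below, using illustrative figures of my own choosing, shows just how small the effect is and why an interferometer is needed to detect it:

```python
import math

# Standard textbook Sagnac formula: delta_t = 4 * A * Omega / c**2
# (A = area enclosed by the light path, Omega = rotation rate).
# The numbers below are illustrative values of my own choosing.

c = 2.998e8                      # speed of light, m/s
area = 1.0                       # enclosed area of the ring, m^2
omega = 2 * math.pi / 60.0       # one full rotation per minute, rad/s

delta_t = 4 * area * omega / c**2
wavelength = 633e-9              # helium-neon laser light, m
fringe_shift = c * delta_t / wavelength

print(f"time difference : {delta_t:.3e} s")
print(f"fringe shift    : {fringe_shift:.3e} of a fringe")
```

Even for a one square metre loop, the time difference is only a few billionths of a billionth of a second, which is why the effect is detected as a shift in interference fringes rather than measured directly with clocks.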

*I apologise to my readers that at the end of this video it contains material of a religious nature. However, I have incorporated the video because I feel that it demonstrates the Sagnac effect quite well.

I suggest that the speed of light is meaningless if space does not contain existing co-ordinates that are mobile or fixed, i.e., without time. The laws of nature tell us that such hidden co-ordinates exist and therefore geometry and algebra may be able to predict both what these hidden co-ordinates mean, and then predict their relationship to the holistic universe. The existence of a preferred inertial reference frame would seem to be a sound way of helping to understand such hidden co-ordinates. The eminent 19th century physicist Poincare agreed that such co-ordinates must exist. They already exist as a mathematical concept.

Einstein’s theories demonstrate that a preferred inertial frame is unnecessary. However, they rely on the eyes of observers, and these observers are subjective. The Sagnac experiment, which has been replicated, also tells us that objective co-ordinates must exist in real space, because such experiments can be explained by them. These are experiments that tell scientists that there is such a thing as absolute space, and thus that ether theory is a valid theory. Relativity theory seems to make no attempt to provide physical meaning for its mathematical construction [because it does not need to], whereas an ether inertial frame does; Einstein argued that it need not do so.

The Sagnac effect physically demonstrates this point quite clearly. As I indicated in my introduction, there are other theories that have been tested that provide additional validity to the Sagnac theory. These include the Michelson, Gale and Pearson experiment in 1925, and perhaps, more importantly, the Ives and Stilwell Brussels canal experiment in 1925. The long term Dayton Miller experiment has also provided much highly useful data that complements all of these tests and experiments. Miller’s testing ranged from the late 1920s well into the 1930s. I would like to introduce you to the Kennedy and Thorndike experiment at this time as well. From my secondary reading it seems that not many physicists have heard about this experiment before.

Einstein’s relativity theories appear to loosely deny the existence of such a universal ether and its alleged hidden co-ordinates. However, at different times Einstein did clearly state that an ether inertial frame of reference exists, but he never widely expressed this point of view to the media of his time. [He mostly seems to have stated this idea at private lectures.] An example of this is a lecture that Einstein delivered in 1924, which I have discussed above and which you can find in this PDF file. I have emboldened the sections that I feel are most relevant to my readers.

Let me summarise as follows:

1] There is an absolute space inertial frame of reference.

2] There are hidden co-ordinates within the inertial ether frame.

3] The effects of these hidden co-ordinates, together with what meaning they may have, are testable and demonstrable by experiments such as the 1913 Sagnac experiment and others like it as I have discussed.

4] Einstein seems to have been ambivalent as to whether there existed an ether inertial frame of reference, because he thought it was not necessarily relevant to his relativity models. At the same time, however, Einstein seems to have said that ether theory might be desirable for inclusion within both of his relativity models.

 

Many contemporary physicists continue to believe that ether theory is unnecessary in their attempts to create a theory of everything. I will shortly describe how, in my opinion, absolute ether theory might help to explain the lightning strike, moving train and observer analogy that Einstein asked his peers to consider. Before doing this, I will add additional information that I believe will help you fully understand what I am talking about in my limited description of differing events relating to Einstein’s moving train analogy.

1] Light should be seen as a disturbance in the ether medium travelling at constant speed with respect to that medium of the inertial reference frame [allowing for the fact that light changes speed because of what it may be travelling through, such as air, a vacuum or a transparent object like a diamond]. This is not the same as what an observer sees in a chosen frame, as in the case of special relativity theory.

2] The velocity of light in the ether medium is the distance travelled divided by the time it takes to travel that distance. Distances in the ether medium are measured with material rods, and times with mechanical clocks, both of which are affected by motion: material rods shrink with motion through the ether [a relativistic effect that depends on the speed involved] and clocks slow down too, so the measurements themselves are altered (the standard expressions for these effects, using the train figures from later in this blog, are sketched in the code after this list).

3] As clocks slow when they move within the ether inertial frame, their time differs from that of clocks at rest in the ether frame. All reference frames chosen by observers in Einstein’s relativity modelling should be considered as lying within the universal ether frame. In other words, where special relativity theory says that the two return-journey events on all platforms are the same, it is not correct.

4] Light travelling between two points should be perceived and treated as a single event. In a light-and-mirror experiment, the return trip of light to a common point is an event, and the point where the light separates for the return journey is an event unto itself. Such a ‘space’ exists between all events, as in the mirror experiment, and this space should be seen as being without absolute time. This is because all events occur within the hidden co-ordinates of a single inertial ether, not just a single reference frame chosen by an observer, as is the case in special relativity theory. This is why, from a special relativity frame of reference, the two journeys are unequal: the return trip from point B to point A is slower.

5] The measurement of a light signal between two points (say mirrors) necessitates two clocks, A and B. Clock A is the clock at the point of sending and clock B is the clock at the point of receiving. Both clocks must be set at the same run rate. The contraction of clock B when it is moved from point A to point B also needs adjusting, because the movement between reference points A and B is a separate reference frame unto itself with respect to the Earth. When clocks dilate they contract with respect to the universal inertial ether, not with respect to clock time as is commonly believed by relativity theory scientists.

6] The isotropic radiation effect of moving light also manipulates matter on an atomic scale. This would also occur in relation to the wider universal inertial frame.
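
For scale, the sketch below plugs the train figures used later in this blog (an eighty-metre train moving at one hundred kilometres per hour) into the standard relativistic expressions for rod contraction and clock slowing mentioned in points 2] and 3]. These are the conventional special relativity formulas, not my ether-based treatment; I include them only to show how tiny the effects are at everyday speeds:

```python
import math

# Standard special relativity expressions for the rod contraction and
# clock slowing referred to in points 2] and 3] above (not the ether
# treatment argued for in this blog). The train figures used later in
# the blog are plugged in purely for scale.

c = 2.998e8                     # speed of light, m/s
v = 100 / 3.6                   # 100 km/h expressed in m/s
gamma = 1 / math.sqrt(1 - (v / c) ** 2)

rest_length = 80.0                                  # train length at rest, metres
length_contraction = rest_length - rest_length / gamma
clock_slowing = 1 - 1 / gamma                       # fractional slowing of a moving clock

print(f"gamma              : {gamma:.15f}")
print(f"length contraction : {length_contraction:.3e} m")
print(f"clock slowing      : {clock_slowing:.3e} (fraction)")
```

At train speeds the contraction is a tiny fraction of the width of an atom, which is why none of these effects are noticeable in everyday life.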

 

The Einstein dual lightning strike analogy

Within Einstein’s analogy there are several different events. These include two prospective lightning strikes upon the train and a separate event relating to the formation of plasma caused by the two strikes. In turn, these two strikes are also relevant to the moving train with respect to the rails upon which it is moving, and to the observer sitting on the embankment, who in turn is located opposite the centre of the moving train as Einstein’s analogy dictates. This is in addition to the wider events in the chosen frame of reference. Remember that these events are also taking place with respect to the wider universal inertial reference frame of the ether, and that there is a relativity clock-time delay between each event that needs to be considered, including the mechanics of clocks. Also, all moving objects, including the train itself, shrink as a result of such movement.

Place one clock at the front of the train, then walk down to the other end of the train and place another clock at the rear. The clock at the front of the train records the lightning strike in local [relative] time, but the clock at the rear of the train, through the time dilation caused by the act of walking it to the rear, indicates a slower time. Furthermore, clocks fastened externally to the train at each end would record the same degree of dilation if an observer were to walk the length of the roof of the train.

Let us say that the train is moving forward at one hundred kilometres per hour and the length of the train is eighty metres. This means that the events are occurring within one reference frame as chosen by an observer who is observing all events adjacent and related to the embankment as well as adjacent to the moving train. The girl on the train would be observing the events of the dual lightning strikes only with respect to the events within the train itself. This is by means of the respective isotropic effects of the two lightning strikes, which would have been [relatively] instantaneous as per Einstein’s analogy. The isotropic effects of both ether lightning strikes would reach the girl in the centre of the train at c [i.e. the speed of light]. The girl’s observation of the observer on the embankment would also relate to the same two instantaneous isotropic lights. Keep in mind that the same events are occurring with respect to the moving train as well as the wider ether inertial frame.
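
For completeness, standard special relativity bookkeeping (not the ether treatment I argue for in this blog) puts a definite number on how far out of step the two strikes would be for the girl on the train, using the figures just quoted:

```python
import math

# Standard Lorentz transformation bookkeeping (not the ether treatment
# argued for in this blog): two strikes that are simultaneous for the
# embankment observer and separated by the length of the train are out
# of step for the girl on the train by gamma * v * L / c**2.

c = 2.998e8            # speed of light, m/s
v = 100 / 3.6          # train speed, 100 km/h in m/s
L = 80.0               # separation of the two strikes, metres

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
offset = gamma * v * L / c**2

print(f"simultaneity offset on the train: {offset:.3e} s")
# Roughly 2.5e-14 seconds, far too small to notice at train speeds,
# which is why the analogy is usually told with near-light speeds.
```
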

From these combined events it can be seen that it would be impossible to observe that the two lightning strikes were simultaneous. The clock-time dilation effects between both events would prohibit this. The same applies to the girl on the train for similar relative clock-time reasons. It is not only the clock-time dilation effect that would prohibit this, but also the contraction in length of the train itself, as it was moving rather than being at rest relative to an observer. However, simultaneity is allowed in the ether inertial frame, and I will discuss why I feel that this is the case.

There is a delay between all events, no matter how large or how small these events may be. This delay is represented by the absoluteness of the universal ether. This delay [which I will call NOW] is not measurable by clocks; it simply “IS”. It is representative of the wider influence and effect of nature, a nature that has its own already existing hidden co-ordinates within which Einstein’s lightning strike analogy applies.

An observer positioned within the motionless Earth gravity field would observe the commonality effect of the special relativity reference frame chosen by the observer on the embankment. The observer on the embankment cannot do this, because he is included within his own choice of reference frame, which in turn is relative to the wider inertial ether frame. It is the ether frame that must then hold and be treated as the dominant frame, with respect to the observer in the motionless Earth gravity field, who in turn is relative to the universal inertial frame of the ether. These words draw attention to the commonality of all that “IS” with respect to the universal inertial frame of the ether.

I argue that at the relative time the two lightning strikes hit the train, and from the universal ether inertial frame point of reference (absolute time) as well as the special relativity perspective, there would have been an analogous NOW. From a special relativity perspective this is unknowable because it is unmeasurable and not observable; however, for an observer in the first gravity field it would be knowable. What must be considered is that the two lightning strikes hitting the train were two separate events.

I further suggest that this NOW that I have introduced you to has no past and no future. This means that until the indeterminable NOW ‘period’ moves into the past, the special relativity reference is “frozen” in relativity clock time. This is before rods commence shrinking and clocks, as discussed above, run slower in absolute ether time. An observer within the absolute ether of motionless Earth gravitation would observe the NOW. From this NOW he would also observe the position of the train, not only relative to an observer of the two ether-inertial-frame lightning strikes at each end of the train, but also relative to the girl in the middle of the train. [Light and its isotropic effects are related to the ether, as demonstrated by the Sagnac experiment.] By this I mean that the observer in the motionless gravitational frame of ether reference would then mathematically know exactly where the centre of the train was during the timeless, absolute state of NOW that I have been discussing.

If the train is eighty metres in length, then the centre of the train would notionally be at rest, with each half of the train resting equally about the point of the absolute NOW. This NOW relative to the train is also relative to the parallel railway lines upon which the train is travelling. The implication of this is that once the relative point of NOW with respect to the railway track has passed [the train continues to move forward in relation to this point], it is then possible to employ this railway line reference point to mathematically determine who saw the lightning strikes happen at the same time and who did not, i.e., the observer and the girl, and furthermore who saw the lightning strikes as separated with respect to each other. This determination draws upon the known length of the train, the known speed of the train and the point of reference on the railway tracks with respect to the embankment, from the point of view of the observer on the embankment. The observer, however, would not know that he was frozen into the “NOW” that is observed by the other observer in the motionless gravitation of the ether in the inertial ether frame. The slowing of clocks on the train, the contraction of the moving train with respect to the ether frame observer, the time dilation of clock time and the shrinkage of the train itself play no role in this ether frame model of event-related conditions. They only would if Einstein’s special relativity analogy were applied to the observer on the embankment in his chosen reference frame.

I believe that my review of the Sagnac experiment demonstrates that there are two light beams that emanate from a single light source and then come back together to form a single effect of light. The splitting of light in this way highlights the difficulty that emanated from the Michelson and Morley experiment, which allegedly had a null result. That alleged null result is incorrect, as was demonstrated by Dayton Miller and numerous other highly respected physicists around the same time.

©

How Mandelbrot’s fractals changed the world

Did you know that the whole universe is fractal?

I think that this article written by Jack Challoner will stir your imagination about the magical nature of fractals.

Challoner writes for the BBC online news magazine

Quote:

“In 1975, a new word came into use, when a maverick mathematician made an important discovery. So what are fractals? And why are they important?

During the 1980s, people became familiar with fractals through those weird, colourful patterns made by computers.

But few realise how the idea of fractals has revolutionised our understanding of the world, and how many fractal-based systems we depend upon.

On 14 October 2010, the genius who coined the word – Polish-born mathematician Benoit Mandelbrot – died, aged 85, from cancer.

Unfortunately, there is no definition of fractals that is both simple and accurate. Like so many things in modern science and mathematics, discussions of “fractal geometry” can quickly go over the heads of the non-mathematically-minded. This is a real shame, because there is profound beauty and power in the idea of fractals.

The best way to get a feeling for what fractals are is to consider some examples. Clouds, mountains, coastlines, cauliflowers and ferns are all natural fractals. These shapes have something in common – something intuitive, accessible and aesthetic.

They are all complicated and irregular: the sort of shape that mathematicians used to shy away from in favour of regular ones, like spheres, which they could tame with equations.

Mandelbrot famously wrote: “Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.”

The chaos and irregularity of the world – Mandelbrot referred to it as “roughness” – is something to be celebrated. It would be a shame if clouds really were spheres, and mountains cones.

Look closely at a fractal, and you will find that the complexity is still present at a smaller scale. A small cloud is strikingly similar to the whole thing. A pine tree is composed of branches that are composed of branches – which in turn are composed of branches.

A tiny sand dune or a puddle in a mountain track have the same shapes as a huge sand dune and a lake in a mountain gully. This “self-similarity” at different scales is a defining characteristic of fractals.

The fractal mathematics Mandelbrot pioneered, together with the related field of chaos theory, lifts the veil on the hidden beauty of the world. It inspired scientists in many disciplines – including cosmology, medicine, engineering and genetics – and artists and musicians, too.

The whole universe is fractal, and so there is something joyfully quintessential about Mandelbrot’s insights.

Fractal mathematics has many practical uses, too – for example, in producing stunning and realistic computer graphics, in computer file compression systems, in the architecture of the networks that make up the internet and even in diagnosing some diseases.

Fractal geometry can also provide a way to understand complexity in “systems” as well as just in shapes. The timing and sizes of earthquakes and the variation in a person’s heartbeat and the prevalence of diseases are just three cases in which fractal geometry can describe the unpredictable.

Another is in the financial markets, where Mandelbrot first gained insight into the mathematics of complexity while working as a researcher for IBM during the 1960s.

Mandelbrot tried using fractal mathematics to describe the market – in terms of profits and losses traders made over time, and found it worked well.

In 2005, Mandelbrot turned again to the mathematics of the financial market, warning in his book The (Mis)Behaviour of Markets against the huge risks being taken by traders – who, he claimed, tend to act as if the market is inherently predictable, and immune to large swings.

Fractal mathematics cannot be used to predict the big events in chaotic systems – but it can tell us that such events will happen.

As such, it reminds us that the world is complex – and delightfully unpredictable.”

More of Jack Challoner’s writings can be found at Explaining Science

A biography of Benoit Mandelbrot can be found here
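
The “weird, colourful patterns made by computers” that Challoner mentions come from an astonishingly short rule: take a complex number, square it, add a constant, and repeat, colouring each point by how quickly the result escapes. The minimal text-mode sketch below applies exactly that rule:

```python
# Minimal sketch of the rule behind Mandelbrot's famous set: iterate
# z -> z*z + c and see how quickly |z| escapes past 2. Points that never
# escape (within the iteration limit) belong to the set.

def escape_time(c, max_iter=50):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n          # escaped after n steps
    return max_iter           # treated as "inside" the set

# Crude character plot of the region -2 <= Re(c) <= 1, -1 <= Im(c) <= 1.
for row in range(21):
    im = 1 - row * 0.1
    line = ""
    for col in range(61):
        re = -2 + col * 0.05
        line += "#" if escape_time(complex(re, im)) == 50 else "."
    print(line)
```

Zooming into the edge of the resulting shape and re-running the same rule reveals ever more structure, which is the “self-similarity at different scales” described in the article.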

Mainstream science is dominantly event-orientated

Is this why mainstream science seems to have some of the difficulties it does?

Event oriented thinking sees the world as a complex succession of events rather than as a system as a whole. An event is behavior that happened or will happen. Event oriented thinking assumes that each event has a cause and that changing the cause will correspondingly change the event. The rest of the system that produced the event need not be considered.

whereas

Structural thinking sees the world as a complex structure composed of nodes, relationships, and interacting feedback loops. Once the structure is modeled, simulated and understood the fundamental behavior of the system becomes plainly obvious, making the system’s response to solution efforts predictable.

The central tenet of structural thinking is that the behavior of a complex system cannot be correctly understood without thoughtful construction of a model of the key structure of the system, and computer simulation of that model.

Ideas quoted from subsections of: http://www.thwink.org/sustain/glossary/
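
To show what “modelling the structure and simulating it” can look like in practice, here is a deliberately tiny example of my own: a single stock governed by one reinforcing loop and one balancing loop. The S-shaped behaviour that appears comes from the loop structure itself, not from any individual event:

```python
# Tiny example of structural (systems) thinking: a single stock governed
# by a reinforcing loop (growth) and a balancing loop (a carrying-capacity
# limit). The S-shaped behaviour comes from the loop structure itself,
# not from any individual event. This example is my own illustration.

growth_rate = 0.3       # reinforcing loop strength
capacity = 1000.0       # balancing loop: the limit the system feeds back against
stock = 10.0            # initial level

for step in range(30):
    inflow = growth_rate * stock * (1 - stock / capacity)
    stock += inflow
    if step % 5 == 0:
        print(f"step {step:2d}: stock = {stock:7.1f}")
```

An event-oriented thinker might ask what “caused” the slowdown near the end of the run; a structural thinker sees that the slowdown was built into the feedback loops from the start.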

Albert Einstein and the great ether debate

It is important that you view the contents of this blog in relationship to my new blog entitled: “The fundamental universe revisited“. This new blog is designed to be the master science referential blog for all my science blog postings in my website.

Is ether theory still valid for incorporation within contemporary physics or not?

I believe that it is. My reasons for saying this are based upon what Einstein said about ether in a public lecture delivered at the University of Leiden in 1920. In his later years Einstein continued to believe that ether was pertinent to both his Special Relativity and General Relativity models, but for different reasons. Einstein made a distinction between an immobile ether in his Special Relativity theory, while in his General Relativity hypothesis he determined ether to be necessary to accommodate gravitational waves.

I have copied and posted Einstein’s 1920 lecture below, and I have highlighted within the text where he talks about his belief that ether was an important factor in his thinking in relation to both of his relativity theories. Einstein continued to believe in ether theory until the closing days of his life, but not necessarily in relation to his original relativity ideas. As an extension of these words, keep in mind that both of Einstein’s relativity theories are at the core of modern physics models and theories. Contemporary physics has elected to dismiss ether theory because it is seen as unnecessary.

Here is Einstein’s 1920 lecture:

Quote:

Einstein: Ether and Relativity

Albert Einstein gave an address on 5 May 1920 at the University of Leiden. He chose as his topic Ether and the Theory of Relativity. He lectured in German but we present an English translation below. The lecture was published by Methuen & Co. Ltd, London, in 1922.

Ether and the Theory of Relativity by Albert Einstein

How does it come about that alongside of the idea of ponderable matter, which is derived by abstraction from everyday life, the physicists set the idea of the existence of another kind of matter, the ether? The explanation is probably to be sought in those phenomena which have given rise to the theory of action at a distance, and in the properties of light which have led to the undulatory theory. Let us devote a little while to the consideration of these two subjects.

Outside of physics we know nothing of action at a distance. When we try to connect cause and effect in the experiences which natural objects afford us, it seems at first as if there were no other mutual actions than those of immediate contact, e.g. the communication of motion by impact, push and pull, heating or inducing combustion by means of a flame, etc. It is true that even in everyday experience weight, which is in a sense action at a distance, plays a very important part. But since in daily experience the weight of bodies meets us as something constant, something not linked to any cause which is variable in time or place, we do not in everyday life speculate as to the cause of gravity, and therefore do not become conscious of its character as action at a distance. It was Newton’s theory of gravitation that first assigned a cause for gravity by interpreting it as action at a distance, proceeding from masses. Newton’s theory is probably the greatest stride ever made in the effort towards the causal nexus of natural phenomena. And yet this theory evoked a lively sense of discomfort among Newton’s contemporaries, because it seemed to be in conflict with the principle springing from the rest of experience, that there can be reciprocal action only through contact, and not through immediate action at a distance.

It is only with reluctance that man’s desire for knowledge endures a dualism of this kind. How was unity to be preserved in his comprehension of the forces of nature? Either by trying to look upon contact forces as being themselves distant forces which admittedly are observable only at a very small distance and this was the road which Newton’s followers, who were entirely under the spell of his doctrine, mostly preferred to take; or by assuming that the Newtonian action at a distance is only apparently immediate action at a distance, but in truth is conveyed by a medium permeating space, whether by movements or by elastic deformation of this medium. Thus the endeavour toward a unified view of the nature of forces leads to the hypothesis of an ether. This hypothesis, to be sure, did not at first bring with it any advance in the theory of gravitation or in physics generally, so that it became customary to treat Newton’s law of force as an axiom not further reducible. But the ether hypothesis was bound always to play some part in physical science, even if at first only a latent part.

When in the first half of the nineteenth century the far-reaching similarity was revealed which subsists between the properties of light and those of elastic waves in ponderable bodies, the ether hypothesis found fresh support. It appeared beyond question that light must be interpreted as a vibratory process in an elastic, inert medium filling up universal space. It also seemed to be a necessary consequence of the fact that light is capable of polarisation that this medium, the ether, must be of the nature of a solid body, because transverse waves are not possible in a fluid, but only in a solid. Thus the physicists were bound to arrive at the theory of the “quasi-rigid” luminiferous ether, the parts of which can carry out no movements relatively to one another except the small movements of deformation which correspond to light-waves.

This theory – also called the theory of the stationary luminiferous ether – moreover found a strong support in an experiment which is also of fundamental importance in the special theory of relativity, the experiment of Fizeau, from which one was obliged to infer that the luminiferous ether does not take part in the movements of bodies. The phenomenon of aberration also favoured the theory of the quasi-rigid ether.

The development of the theory of electricity along the path opened up by Maxwell and Lorentz gave the development of our ideas concerning the ether quite a peculiar and unexpected turn. For Maxwell himself the ether indeed still had properties which were purely mechanical, although of a much more complicated kind than the mechanical properties of tangible solid bodies. But neither Maxwell nor his followers succeeded in elaborating a mechanical model for the ether which might furnish a satisfactory mechanical interpretation of Maxwell’s laws of the electro-magnetic field. The laws were clear and simple, the mechanical interpretations clumsy and contradictory. Almost imperceptibly the theoretical physicists adapted themselves to a situation which, from the standpoint of their mechanical programme, was very depressing. They were particularly influenced by the electro-dynamical investigations of Heinrich Hertz. For whereas they previously had required of a conclusive theory that it should content itself with the fundamental concepts which belong exclusively to mechanics (e.g. densities, velocities, deformations, stresses) they gradually accustomed themselves to admitting electric and magnetic force as fundamental concepts side by side with those of mechanics, without requiring a mechanical interpretation for them. Thus the purely mechanical view of nature was gradually abandoned. But this change led to a fundamental dualism which in the long-run was insupportable. A way of escape was now sought in the reverse direction, by reducing the principles of mechanics to those of electricity, and this especially as confidence in the strict validity of the equations of Newton’s mechanics was shaken by the experiments with β-rays and rapid cathode rays.

This dualism still confronts us in unextenuated form in the theory of Hertz, where matter appears not only as the bearer of velocities, kinetic energy, and mechanical pressures, but also as the bearer of electromagnetic fields. Since such fields also occur in vacuo – i.e. in free ether-the ether also appears as bearer of electromagnetic fields. The ether appears indistinguishable in its functions from ordinary matter. Within matter it takes part in the motion of matter and in empty space it has everywhere a velocity; so that the ether has a definitely assigned velocity throughout the whole of space. There is no fundamental difference between Hertz’s ether and ponderable matter (which in part subsists in the ether).

The Hertz theory suffered not only from the defect of ascribing to matter and ether, on the one hand mechanical states, and on the other hand electrical states, which do not stand in any conceivable relation to each other; it was also at variance with the result of Fizeau’s important experiment on the velocity of the propagation of light in moving fluids, and with other established experimental results.

Such was the state of things when H A Lorentz entered upon the scene. He brought theory into harmony with experience by means of a wonderful simplification of theoretical principles. He achieved this, the most important advance in the theory of electricity since Maxwell, by taking from ether its mechanical, and from matter its electromagnetic qualities. As in empty space, so too in the interior of material bodies, the ether, and not matter viewed atomistically, was exclusively the seat of electromagnetic fields. According to Lorentz the elementary particles of matter alone are capable of carrying out movements; their electromagnetic activity is entirely confined to the carrying of electric charges. Thus Lorentz succeeded in reducing all electromagnetic happenings to Maxwell’s equations for free space.

As to the mechanical nature of the Lorentzian ether, it may be said of it, in a somewhat playful spirit, that immobility is the only mechanical property of which it has not been deprived by H A Lorentz. It may be added that the whole change in the conception of the ether which the special theory of relativity brought about, consisted in taking away from the ether its last mechanical quality, namely, its immobility. How this is to be understood will forthwith be expounded.

The space-time theory and the kinematics of the special theory of relativity were modelled on the Maxwell-Lorentz theory of the electromagnetic field. This theory therefore satisfies the conditions of the special theory of relativity, but when viewed from the latter it acquires a novel aspect. For if K be a system of coordinates relatively to which the Lorentzian ether is at rest, the Maxwell-Lorentz equations are valid primarily with reference to K. But by the special theory of relativity the same equations without any change of meaning also hold in relation to any new system of co-ordinates K’ which is moving in uniform translation relatively to K. Now comes the anxious question:- Why must I in the theory distinguish the K system above all K’ systems, which are physically equivalent to it in all respects, by assuming that the ether is at rest relatively to the K system? For the theoretician such an asymmetry in the theoretical structure, with no corresponding asymmetry in the system of experience, is intolerable. If we assume the ether to be at rest relatively to K, but in motion relatively to K’, the physical equivalence of K and K’ seems to me from the logical standpoint, not indeed downright incorrect, but nevertheless unacceptable.

The next position which it was possible to take up in face of this state of things appeared to be the following. The ether does not exist at all. The electromagnetic fields are not states of a medium, and are not bound down to any bearer, but they are independent realities which are not reducible to anything else, exactly like the atoms of ponderable matter. This conception suggests itself the more readily as, according to Lorentz’s theory, electromagnetic radiation, like ponderable matter, brings impulse and energy with it, and as, according to the special theory of relativity, both matter and radiation are but special forms of distributed energy, ponderable mass losing its isolation and appearing as a special form of energy.

More careful reflection teaches us however, that the special theory of relativity does not compel us to deny ether. We may assume the existence of an ether; only we must give up ascribing a definite state of motion to it, i.e. we must by abstraction take from it the last mechanical characteristic which Lorentz had still left it. We shall see later that this point of view, the conceivability of which I shall at once endeavour to make more intelligible by a somewhat halting comparison, is justified by the results of the general theory of relativity.

Think of waves on the surface of water. Here we can describe two entirely different things. Either we may observe how the undulatory surface forming the boundary between water and air alters in the course of time; or else-with the help of small floats, for instance – we can observe how the position of the separate particles of water alters in the course of time. If the existence of such floats for tracking the motion of the particles of a fluid were a fundamental impossibility in physics – if, in fact nothing else whatever were observable than the shape of the space occupied by the water as it varies in time, we should have no ground for the assumption that water consists of movable particles. But all the same we could characterise it as a medium.

We have something like this in the electromagnetic field. For we may picture the field to ourselves as consisting of lines of force. If we wish to interpret these lines of force to ourselves as something material in the ordinary sense, we are tempted to interpret the dynamic processes as motions of these lines of force, such that each separate line of force is tracked through the course of time. It is well known, however, that this way of regarding the electromagnetic field leads to contradictions.

Generalising we must say this:- There may be supposed to be extended physical objects to which the idea of motion cannot be applied. They may not be thought of as consisting of particles which allow themselves to be separately tracked through time. In Minkowski’s idiom this is expressed as follows:- Not every extended conformation in the four-dimensional world can be regarded as composed of world-threads. The special theory of relativity forbids us to assume the ether to consist of particles observable through time, but the hypothesis of ether in itself is not in conflict with the special theory of relativity. Only we must be on our guard against ascribing a state of motion to the ether.

Certainly, from the standpoint of the special theory of relativity, the ether hypothesis appears at first to be an empty hypothesis. In the equations of the electromagnetic field there occur, in addition to the densities of the electric charge, only the intensities of the field. The career of electromagnetic processes in vacuo appears to be completely determined by these equations, uninfluenced by other physical quantities. The electromagnetic fields appear as ultimate, irreducible realities, and at first it seems superfluous to postulate a homogeneous, isotropic ether-medium, and to envisage electromagnetic fields as states of this medium. But on the other hand there is a weighty argument to be adduced in favour of the ether hypothesis. To deny the ether is ultimately to assume that empty space has no physical qualities whatever. The fundamental facts of mechanics do not harmonize with this view. For the mechanical behaviour of a corporeal system hovering freely in empty space depends not only on relative positions (distances) and relative velocities, but also on its state of rotation, which physically may be taken as a characteristic not appertaining to the system in itself. In order to be able to look upon the rotation of the system, at least formally, as something real, Newton objectivises space. Since he classes his absolute space together with real things, for him rotation relative to an absolute space is also something real. Newton might no less well have called his absolute space “Ether”; what is essential is merely that besides observable objects, another thing, which is not perceptible, must be looked upon as real, to enable acceleration or rotation to be looked upon as something real.

It is true that Mach tried to avoid having to accept as real something which is not observable by endeavouring to substitute in mechanics a mean acceleration with reference to the totality of the masses in the universe in place of an acceleration with reference to absolute space. But inertial resistance opposed to relative acceleration of distant masses presupposes action at a distance; and as the modern physicist does not believe that he may accept this action at a distance, he comes back once more, if he follows Mach, to the ether, which has to serve as medium for the effects of inertia. But this conception of the ether to which we are led by Mach’s way of thinking differs essentially from the ether as conceived by Newton, by Fresnel, and by Lorentz. Mach’s ether not only conditions the behaviour of inert masses, but is also conditioned in its state by them.

Mach’s idea finds its full development in the ether of the general theory of relativity. According to this theory the metrical qualities of the continuum of space-time differ in the environment of different points of space-time, and are partly conditioned by the matter existing outside of the territory under consideration. This space-time variability of the reciprocal relations of the standards of space and time, or, perhaps, the recognition of the fact that “empty space” in its physical relation is neither homogeneous nor isotropic, compelling us to describe its state by ten functions (the gravitation potentials gμν), has, I think, finally disposed of the view that space is physically empty. But therewith the conception of the ether has again acquired an intelligible content although this content differs widely from that of the ether of the mechanical undulatory theory of light. The ether of the general theory of relativity is a medium which is itself devoid of all mechanical and kinematical qualities, but helps to determine mechanical (and electromagnetic) events.

What is fundamentally new in the ether of the general theory of relativity as opposed to the ether of Lorentz consists in this, that the state of the former is at every place determined by connections with the matter and the state of the ether in neighbouring places, which are amenable to law in the form of differential equations; whereas the state of the Lorentzian ether in the absence of electromagnetic fields is conditioned by nothing outside itself, and is everywhere the same. The ether of the general theory of relativity is transmuted conceptually into the ether of Lorentz if we substitute constants for the functions of space which describe the former, disregarding the causes which condition its state. Thus we may also say, I think, that the ether of the general theory of relativity is the outcome of the Lorentzian ether, through relativation.

As to the part which the new ether is to play in the physics of the future we are not yet clear. We know that it determines the metrical relations in the space-time continuum, e.g. the configurative possibilities of solid bodies as well as the gravitational fields; but we do not know whether it has an essential share in the structure of the electrical elementary particles constituting matter. Nor do we know whether it is only in the proximity of ponderable masses that its structure differs essentially from that of the Lorentzian ether; whether the geometry of spaces of cosmic extent is approximately Euclidean. But we can assert by reason of the relativistic equations of gravitation that there must be a departure from Euclidean relations, with spaces of cosmic order of magnitude, if there exists a positive mean density, no matter how small, of the matter in the universe.

In this case the universe must of necessity be spatially unbounded and of finite magnitude, its magnitude being determined by the value of that mean density.

If we consider the gravitational field and the electromagnetic field from the standpoint of the ether hypothesis, we find a remarkable difference between the two. There can be no space nor any part of space without gravitational potentials; for these confer upon space its metrical qualities, without which it cannot be imagined at all. The existence of the gravitational field is inseparably bound up with the existence of space. On the other hand a part of space may very well be imagined without an electromagnetic field; thus in contrast with the gravitational field, the electromagnetic field seems to be only secondarily linked to the ether, the formal nature of the electromagnetic field being as yet in no way determined by that of gravitational ether. From the present state of theory it looks as if the electromagnetic field, as opposed to the gravitational field, rests upon an entirely new formal motif, as though nature might just as well have endowed the gravitational ether with fields of quite another type, for example, with fields of a scalar potential, instead of fields of the electromagnetic type.

Since according to our present conceptions the elementary particles of matter are also, in their essence, nothing else than condensations of the electromagnetic field, our present view of the universe presents two realities which are completely separated from each other conceptually, although connected causally, namely, gravitational ether and electromagnetic field, or – as they might also be called – space and matter.

Of course it would be a great advance if we could succeed in comprehending the gravitational field and the electromagnetic field together as one unified conformation. Then for the first time the epoch of theoretical physics founded by Faraday and Maxwell would reach a satisfactory conclusion. The contrast between ether and matter would fade away, and, through the general theory of relativity, the whole of physics would become a complete system of thought, like geometry, kinematics, and the theory of gravitation. An exceedingly ingenious attempt in this direction has been made by the mathematician H Weyl; but I do not believe that his theory will hold its ground in relation to reality. Further, in contemplating the immediate future of theoretical physics we ought not unconditionally to reject the possibility that the facts comprised in the quantum theory may set bounds to the field theory beyond which it cannot pass.

Recapitulating, we may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether. According to the general theory of relativity space without ether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this ether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it.”

From Einstein’s words, particularly those I have emboldened, I hope my readers will understand why I feel that ether theory remains a valid hypothesis in physics.
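
As an aside for readers who paused at the “ten functions (the gravitation potentials gμν)” mentioned in the essay: the counting comes from the symmetric metric of the general theory of relativity. The line below is a standard textbook statement, not part of Einstein’s text:

\[
ds^{2} = \sum_{\mu,\nu=0}^{3} g_{\mu\nu}\, dx^{\mu}\, dx^{\nu},
\qquad g_{\mu\nu} = g_{\nu\mu}.
\]

Because the 4 × 4 array g_{\mu\nu} is symmetric, it has 4 diagonal entries and 6 independent off-diagonal entries, giving the ten independent functions of position that Einstein calls the gravitation potentials.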

It seems that a software apocalypse may soon be upon us

Is the only way to head off a software catastrophe to change the code we write and how we make it?

It seems that this must be the case, and sooner rather than later. If you are familiar with computer software technology, I am sure you will understand how serious this coding problem is. I am not computer literate, so you must evaluate the contents of the article below as you see fit. I have emboldened the text that I feel is most pertinent for you to notice. As far as I am aware, this urgent story has not yet been discussed in the Australian media.

I present to my readers the following article derived from The Atlantic news journal

I quote the article as follows:

“James Somers, Sep 26, 2017

There were six hours during the night of April 10, 2014, when the entire population of Washington State had no 911 service. People who called for help got a busy signal. One Seattle woman dialed 911 at least 37 times while a stranger was trying to break into her house. When he finally crawled into her living room through a window, she picked up a kitchen knife. The man fled.

The 911 outage, at the time the largest ever reported, was traced to software running on a server in Englewood, Colorado. Operated by a systems provider named Intrado, the server kept a running counter of how many calls it had routed to 911 dispatchers around the country. Intrado programmers had set a threshold for how high the counter could go. They picked a number in the millions.

Shortly before midnight on April 10, the counter exceeded that number, resulting in chaos. Because the counter was used to generate a unique identifier for each call, new calls were rejected. And because the programmers hadn’t anticipated the problem, they hadn’t created alarms to call attention to it. Nobody knew what was happening. Dispatch centers in Washington, California, Florida, the Carolinas, and Minnesota, serving 11 million Americans, struggled to make sense of reports that callers were getting busy signals. It took until morning to realize that Intrado’s software in Englewood was responsible, and that the fix was to change a single number.
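
To make the mechanism of that failure concrete, here is a minimal sketch in Python of the kind of logic being described: a running counter used to stamp each call with a unique identifier, an arbitrary ceiling on that counter, and no alarm when the ceiling is reached. All names and numbers are hypothetical illustrations, not Intrado’s actual code.

# Hypothetical illustration of the failure mode described above:
# a call-routing counter with a hard-coded ceiling and no alarm.

CALL_ID_LIMIT = 40_000_000   # an arbitrary threshold "in the millions" (illustrative)
call_counter = 0

def dispatch(call):
    # Stand-in for the real routing logic.
    return "routed call %d" % call["id"]

def route_call(call):
    """Assign a unique identifier to a call and hand it on to a dispatcher."""
    global call_counter
    if call_counter >= CALL_ID_LIMIT:
        # No alarm is raised here, so the failure is silent:
        # every new call is simply rejected.
        return None
    call_counter += 1
    call["id"] = call_counter
    return dispatch(call)

The point of the sketch is not the arithmetic but the missing branch: once the counter passes the ceiling, every caller is turned away, nothing calls attention to it, and the eventual fix is to change a single number.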

Not long ago, emergency calls were handled locally. Outages were small and easily diagnosed and fixed. The rise of cellphones and the promise of new capabilities—what if you could text 911? or send videos to the dispatcher?—drove the development of a more complex system that relied on the internet. For the first time, there could be such a thing as a national 911 outage. There have now been four in as many years.

It’s been said that software is “eating the world.” More and more, critical systems that were once controlled mechanically, or by people, are coming to depend on code. This was perhaps never clearer than in the summer of 2015, when on a single day, United Airlines grounded its fleet because of a problem with its departure-management system; trading was suspended on the New York Stock Exchange after an upgrade; the front page of The Wall Street Journal’s website crashed; and Seattle’s 911 system went down again, this time because a different router failed. The simultaneous failure of so many software systems smelled at first of a coordinated cyberattack. Almost more frightening was the realization, late in the day, that it was just a coincidence.

“When we had electromechanical systems, we used to be able to test them exhaustively,” says Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years. She became known for her report on the Therac-25, a radiation-therapy machine that killed six patients because of a software error. “We used to be able to think through all the things it could do, all the states it could get into.” The electromechanical interlockings that controlled train movements at railroad crossings, for instance, only had so many configurations; a few sheets of paper could describe the whole system, and you could run physical trains against each configuration to see how it would behave. Once you’d built and tested it, you knew exactly what you were dealing with.

Software is different. Just by editing the text in a file somewhere, the same hunk of silicon can become an autopilot or an inventory-control system. This flexibility is software’s miracle, and its curse. Because it can be changed cheaply, software is constantly changed; and because it’s unmoored from anything physical—a program that is a thousand times more complex than another takes up the same actual space—it tends to grow without bound. “The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”

Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing. Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”

This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”

The attempts now underway to change how we make software all seem to start with the same premise: Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.

Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code. When you press your foot down on your car’s accelerator, for instance, you’re no longer controlling anything directly; there’s no mechanical link from the pedal to the throttle. Instead, you’re issuing a command to a piece of software that decides how much air to give the engine. The car is a computer you can sit inside of. The steering wheel and pedals might as well be keyboard keys.

Like everything else, the car has been computerized to enable new features. When a program is in charge of the throttle and brakes, it can slow you down when you’re too close to another car, or precisely control the fuel injection to help you save on gas. When it controls the steering, it can keep you in your lane as you start to drift, or guide you into a parking space. You couldn’t build these features without code. If you tried, a car might weigh 40,000 pounds, an immovable mass of clockwork.

Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code. But just because we can’t see the complexity doesn’t mean that it has gone away.

The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning. As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.

What made programming so difficult was that it required you to think like a computer. The strangeness of it was in some sense more vivid in the early days of computing, when code took the form of literal ones and zeros. Anyone looking over a programmer’s shoulder as they pored over line after line like “100001010011” and “000010011110” would have seen just how alienated the programmer was from the actual problems they were trying to solve; it would have been impossible to tell whether they were trying to calculate artillery trajectories or simulate a game of tic-tac-toe. The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.

“The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work. “Software engineers like to provide all kinds of tools and stuff for coding errors,” she says, referring to IDEs. “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”

In September 2007, Jean Bookout was driving on the highway with her best friend in a Toyota Camry when the accelerator seemed to get stuck. When she took her foot off the pedal, the car didn’t slow down. She tried the brakes but they seemed to have lost their power. As she swerved toward an off-ramp going 50 miles per hour, she pulled the emergency brake. The car left a skid mark 150 feet long before running into an embankment by the side of the road. The passenger was killed. Bookout woke up in a hospital a month later.

The incident was one of many in a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible. The National Highway Traffic Safety Administration enlisted software experts from NASA to perform an intensive review of Toyota’s code. After nearly 10 months, the NASA team hadn’t found evidence that software was the cause—but said they couldn’t prove it wasn’t.

It was during litigation of the Bookout accident that someone finally found a convincing connection. Michael Barr, an expert witness for the plaintiff, had a team of software experts spend 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, what’s already there; eventually the code becomes impossible to follow, let alone to test exhaustively for flaws.

Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it. “You have software watching the software,” Barr testified. “If the software malfunctions and the same program or same app that is crashed is supposed to save the day, it can’t save the day because it is not working.”
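
A bit flip is easy to show in miniature. The few lines of Python below are purely illustrative (they are not Toyota’s code): they show how inverting a single bit of an integer that happens to encode a throttle command turns a small request into an enormous one.

# Illustrative only: one inverted memory bit in a stored throttle command.

def flip_bit(value, bit):
    """Return value with one bit inverted (a 1 becoming a 0, or vice versa)."""
    return value ^ (1 << bit)

throttle_command = 12                        # e.g. a request for 12% throttle
corrupted = flip_bit(throttle_command, 7)    # a single high-order bit flips
print(throttle_command, corrupted)           # prints: 12 140, far beyond full throttle

# A check written inside the same program offers no protection if that
# program itself is what has failed: "software watching the software".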

Barr’s testimony made the case for the plaintiff, resulting in $3 million in damages for Bookout and her friend’s family. According to The New York Times, it was the first of many similar cases against Toyota to bring to trial problems with the electronic throttle-control system, and the first time Toyota was found responsible by a jury for an accident involving unintended acceleration. The parties decided to settle the case before punitive damages could be awarded. In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.

There will be more bad days for software. It’s important that we get better at making it, because if we don’t, and as software becomes more sophisticated and connected—as it takes control of more critical functions—those days could get worse.

The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little. There is a small but growing chorus that worries the status quo is unsustainable. “Even very good programmers are struggling to make sense of the systems that they are working with,” says Chris Granger, a software developer who worked as a lead at Microsoft on Visual Studio, an IDE that costs $1,199 a year and is used by nearly a third of all professional programmers. He told me that while he was at Microsoft, he arranged an end-to-end study of Visual Studio, the only one that had ever been done. For a month and a half, he watched behind a one-way mirror as people wrote code. “How do they use tools? How do they think?” he said. “How do they sit at the computer, do they touch the mouse, do they not touch the mouse? All these things that we have dogma around that we haven’t actually tested empirically.”

The findings surprised him. “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on — so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.

John Resig had been noticing the same thing among his students. Resig is a celebrated programmer of JavaScript—software he wrote powers over half of all websites—and a tech lead at the online-education site Khan Academy. In early 2012, he had been struggling with the site’s computer-science curriculum. Why was it so hard to learn to program? The essential problem seemed to be that code was so abstract. Writing software was not like making a bridge out of popsicle sticks, where you could see the sticks and touch the glue. To “make” a program, you typed words. When you wanted to change the behavior of the program, be it a game, or a website, or a simulation of physics, what you actually changed was text. So the students who did well—in fact the only ones who survived at all—were those who could step through that text one instruction at a time in their head, thinking the way a computer would, trying to keep track of every intermediate calculation. Resig, like Granger, started to wonder if it had to be that way. Computers had doubled in power every 18 months for the last 40 years. Why hadn’t programming changed?

The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.

Bret Victor does not like to write code. “It sounds weird,” he says. “When I want to make a thing, especially when I want to create something in software, there’s this initial layer of disgust that I have to push through, where I’m not manipulating the thing that I want to make, I’m writing a bunch of text into a text editor.”

“There’s a pretty strong conviction that that’s the wrong way of doing things.”

Victor has the mien of David Foster Wallace, with a lightning intelligence that lingers beneath a patina of aw-shucks shyness. He is 40 years old, with traces of gray and a thin, undeliberate beard. His voice is gentle, mournful almost, but he wants to share what’s in his head, and when he gets on a roll he’ll seem to skip syllables, as though outrunning his own vocal machinery.

Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering, and then went on, after grad school at the University of California, Berkeley, to work at a company that develops music synthesizers. It was a problem perfectly matched to his dual personality: He could spend as much time thinking about the way a performer makes music with a keyboard—the way it becomes an extension of their hands—as he could thinking about the mathematics of digital signal processing.

By the time he gave the talk that made his name, the one that Resig and Granger saw in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.

“Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.” That code now takes the form of letters on a screen in a language like C or Java (derivatives of Fortran and ALGOL), instead of a stack of cards with holes in it, doesn’t make it any less dead, any less indirect.

There is an analogy to word processing. It used to be that all you could see in a program for writing documents was the text itself, and to change the layout or font or margins, you had to write special “control codes,” or commands that would tell the computer that, for instance, “this part of the text should be in italics.” The trouble was that you couldn’t see the effect of those codes until you printed the document. It was hard to predict what you were going to get. You had to imagine how the codes were going to be interpreted by the computer—that is, you had to play computer in your head.

Then WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.” When you marked a passage as being in italics, the letters tilted right there on the screen. If you wanted to change the margin, you could drag a ruler at the top of the screen—and see the effect of that change. The document thereby came to feel like something real, something you could poke and prod at. Just by looking you could tell if you’d done something wrong. Control of a sophisticated system—the document’s layout and formatting engine—was made accessible to anyone who could click around on a page.

Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling. And it was the proper job of programmers to ensure that someday they wouldn’t have to.

There was precedent enough to suggest that this wasn’t a crazy idea. Photoshop, for instance, puts powerful image-processing algorithms in the hands of people who might not even know what an algorithm is. It’s a complicated piece of software, but complicated in the way a good synth is complicated, with knobs and buttons and sliders that the user learns to play like an instrument. Squarespace, a company that is perhaps best known for advertising aggressively on podcasts, makes a tool that lets users build websites by pointing and clicking, instead of by writing code in HTML and CSS. It is powerful enough to do work that once would have been done by a professional web designer.

But those were just a handful of examples. The overwhelming reality was that when someone wanted to do something interesting with a computer, they had to write code. Victor, who is something of an idealist, saw this not so much as an opportunity but as a moral failing of programmers at large. His talk was a call to arms.

At the heart of it was a series of demos that tried to show just how primitive the available tools were for various problems—circuit design, computer animation, debugging algorithms—and what better ones might look like. His demos were virtuosic. The one that captured everyone’s imagination was, ironically enough, the one that on its face was the most trivial. It showed a split screen with a game that looked like Mario on one side and the code that controlled it on the other. As Victor changed the code, things in the game world changed: He decreased one number, the strength of gravity, and the Mario character floated; he increased another, the player’s speed, and Mario raced across the screen.

Suppose you wanted to design a level where Mario, jumping and bouncing off of a turtle, would just make it into a small passageway. Game programmers were used to solving this kind of problem in two stages: First, you stared at your code—the code controlling how high Mario jumped, how fast he ran, how bouncy the turtle’s back was—and made some changes to it in your text editor, using your imagination to predict what effect they’d have. Then, you’d replay the game to see what actually happened.

[Image: Shadow Marios move on the left half of a screen as a mouse drags sliders on the right half.]

Victor wanted something more immediate. “If you have a process in time,” he said, referring to Mario’s path through the level, “and you want to see changes immediately, you have to map time to space.” He hit a button that showed not just where Mario was right now, but where he would be at every moment in the future: a curve of shadow Marios stretching off into the far distance. What’s more, this projected path was reactive: When Victor changed the game’s parameters, now controlled by a quick drag of the mouse, the path’s shape changed. It was like having a god’s-eye view of the game. The whole problem had been reduced to playing with different parameters, as if adjusting levels on a stereo receiver, until you got Mario to thread the needle. With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
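
The idea of “mapping time to space” can be sketched in a few lines: instead of running the simulation and watching one moment at a time, compute the character’s position at every future time step under the current parameters and show the whole path at once, recomputing it whenever a parameter changes. The Python below is only an illustration of that idea, not Victor’s demo.

# Sketch of "mapping time to space": project an entire jump trajectory for
# the current parameters, so that changing a parameter redraws the whole path.

def projected_path(gravity, run_speed, jump_speed, steps=60, dt=0.1):
    """Return the (x, y) position of a simple jumping character
    at every future time step, given the current game parameters."""
    x, y = 0.0, 0.0
    vy = jump_speed
    path = []
    for _ in range(steps):
        x += run_speed * dt
        vy -= gravity * dt
        y = max(0.0, y + vy * dt)    # the ground stops the fall
        path.append((round(x, 2), round(y, 2)))
    return path

# Dragging a slider amounts to calling this again with a new parameter:
print(projected_path(gravity=9.8, run_speed=3.0, jump_speed=6.0)[:5])
print(projected_path(gravity=2.0, run_speed=3.0, jump_speed=6.0)[:5])   # weaker gravity, floatier arc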

When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”

When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns … [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.

Chris Granger, who had worked at Microsoft on Visual Studio, was likewise inspired. Within days of seeing a video of Victor’s talk, in January of 2012, he built a prototype of a new programming environment. Its key capability was that it would give you instant feedback on your program’s behavior. You’d see what your system was doing right next to the code that controlled it. It was like taking off a blindfold. Granger called the project “Light Table.”

In April of 2012, he sought funding for Light Table on Kickstarter. In programming circles, it was a sensation. Within a month, the project raised more than $200,000. The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.

But seeing the impact that his talk ended up having, Bret Victor was disillusioned. “A lot of those things seemed like misinterpretations of what I was saying,” he said later. He knew something was wrong when people began to invite him to conferences to talk about programming tools. “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.

In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface. Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.

Of course, to do that, you’d have to get programmers themselves on board. In a recent essay, Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.” Exciting work of this sort, in particular a class of tools for “model-based design,” was already underway, he wrote, and had been for years, but most programmers knew nothing about it.

“If you really look hard at all the industrial goods that you’ve got out there, that you’re using, that companies are using, the only non-industrial stuff that you have inside this is the code.” Eric Bantégnie is the founder of Esterel Technologies (now owned by ANSYS), a French company that makes tools for building safety-critical software. Like Victor, Bantégnie doesn’t think engineers should develop large systems by typing millions of lines of code into an IDE. “Nobody would build a car by hand,” he says. “Code is still, in many places, handicraft. When you’re crafting manually 10,000 lines of code, that’s okay. But you have systems that have 30 million lines of code, like an Airbus, or 100 million lines of code, like your Tesla or high-end cars—that’s becoming very, very complicated.”

Bantégnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules. If you were making the control system for an elevator, for instance, one rule might be that when the door is open, and someone presses the button for the lobby, you should close the door and start moving the car. In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
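
The elevator rule above can be written down as a small state machine. The Python below is a hand-written rendering of that kind of diagram, offered only to show how explicit the allowed transitions become; in a real model-based tool the diagram itself is the artifact and the code is generated from it. The state and event names are invented for the example.

# A hand-written rendering of the elevator diagram described above.
# States and events are explicit; anything not listed simply cannot happen.

TRANSITIONS = {
    # (current state, event):        next state
    ("door_open",   "lobby_button"): "door_closed",
    ("door_closed", "door_shut"):    "moving",
    ("moving",      "arrived"):      "door_closed",
    ("door_closed", "open_button"):  "door_open",
}

def step(state, event):
    """Apply one event; an event with no listed transition leaves the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Reading the table is enough to see that the only way to get the elevator
# moving is to close the door first, and the only way to open the door is to stop.
state = "door_open"
for event in ["lobby_button", "door_shut", "arrived", "open_button"]:
    state = step(state, event)
    print(event, "->", state)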

It’s not quite Photoshop. The beauty of Photoshop, of course, is that the picture you’re manipulating on the screen is the final product. In model-based design, by contrast, the picture on your screen is more like a blueprint. Still, making software this way is qualitatively different than traditional programming. In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.

“Typically the main problem with software coding—and I’m a coder myself,” Bantégnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”

On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself. Too much is lost going from one to the other. The idea behind model-based design is to close the gap. The very same model is used both by system designers to express what they want and by the computer to automatically generate code.

Of course, for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to. “We have benefited from fortunately 20 years of initial background work,” Bantégnie says.

Esterel Technologies, which was acquired by ANSYS in 2012, grew out of research begun in the 1980s by the French nuclear and aerospace industries, who worried that as safety-critical code ballooned in complexity, it was getting harder and harder to keep it free of bugs. “I started in 1988,” says Emmanuel Ledinot, the Head of Scientific Studies for Dassault Aviation, a French manufacturer of fighter jets and business aircraft. “At the time, I was working on military avionics systems. And the people in charge of integrating the systems, and debugging them, had noticed that the number of bugs was increasing.” The 80s had seen a surge in the number of onboard computers on planes. Instead of a single flight computer, there were now dozens, each responsible for highly specialized tasks related to control, navigation, and communications. Coordinating these systems to fly the plane as data poured in from sensors and as pilots entered commands required a symphony of perfectly timed reactions. “The handling of these hundreds of and even thousands of possible events in the right order, at the right time,” Ledinot says, “was diagnosed as the main cause of the bug inflation.”

Ledinot decided that writing such convoluted code by hand was no longer sustainable. It was too hard to understand what it was doing, and almost impossible to verify that it would work correctly. He went looking for something new. “You must understand that to change tools is extremely expensive in a process like this,” he said in a talk. “You don’t take this type of decision unless your back is against the wall.”

He began collaborating with Gerard Berry, a computer scientist at INRIA, the French computing-research center, on a tool called Esterel—a portmanteau of the French for “real-time.” The idea behind Esterel was that while traditional programming languages might be good for describing simple procedures that happened in a predetermined order—like a recipe—if you tried to use them in systems where lots of events could happen at nearly any time, in nearly any order—like in the cockpit of a plane—you inevitably got a mess. And a mess in control software was dangerous. In a paper, Berry went as far as to predict that “low-level programming techniques will not remain acceptable for large safety-critical programs, since they make behavior understanding and analysis almost impracticable.”

Esterel was designed to make the computer handle this complexity for you. That was the promise of the model-based approach: Instead of writing normal programming code, you created a model of the system’s behavior—in this case, a model focused on how individual events should be handled, how to prioritize events, which events depended on which others, and so on. The model becomes the detailed blueprint that the computer would use to do the actual programming.

Ledinot and Berry worked for nearly 10 years to get Esterel to the point where it could be used in production. “It was in 2002 that we had the first operational software-modeling environment with automatic code generation,” Ledinot told me, “and the first embedded module in Rafale, the combat aircraft.” Today, the ANSYS SCADE product family (for “safety-critical application development environment”) is used to generate code by companies in the aerospace and defense industries, in nuclear power plants, transit systems, heavy industry, and medical devices. “My initial dream was to have SCADE-generated code in every plane in the world,” Bantégnie, the founder of Esterel Technologies, says, “and we’re not very far off from that objective.” Nearly all safety-critical code on the Airbus A380, including the system controlling the plane’s flight surfaces, was generated with ANSYS SCADE products.

Part of the draw for customers, especially in aviation, is that while it is possible to build highly reliable software by hand, it can be a Herculean effort. Ravi Shivappa, the VP of group software engineering at Meggitt PLC, an ANSYS customer which builds components for airplanes, like pneumatic fire detectors for engines, explains that traditional projects begin with a massive requirements document in English, which specifies everything the software should do. (A requirement might be something like, “When the pressure in this section rises above a threshold, open the safety valve, unless the manual-override switch is turned on.”) The problem with describing the requirements this way is that when you implement them in code, you have to painstakingly check that each one is satisfied. And when the customer changes the requirements, the code has to be changed, too, and tested extensively to make sure that nothing else was broken in the process.
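
For readers who want to see what “implementing a requirement” looks like in miniature, here is a hypothetical rendering of the pressure-valve sentence quoted above as a few lines of Python. The requirement identifier, names and threshold are all invented; the point is only that every such English sentence must end up as code like this, and that each line must remain traceable back to its sentence when either one changes.

# REQ-FD-042 (hypothetical identifier): "When the pressure in this section
# rises above a threshold, open the safety valve, unless the manual-override
# switch is turned on."

PRESSURE_THRESHOLD_KPA = 850.0    # illustrative value taken from the requirement

def update_safety_valve(pressure_kpa, manual_override_on, valve):
    """Traceable to REQ-FD-042: open the valve on over-pressure
    unless the manual-override switch is on."""
    if pressure_kpa > PRESSURE_THRESHOLD_KPA and not manual_override_on:
        valve.open()

# When the requirement changes (a new threshold, a new exception), this code
# must change with it and the whole chain must be checked again, which is the
# labour-intensive tracing described above.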

The cost is compounded by exacting regulatory standards. The FAA is fanatical about software safety. The agency mandates that every requirement for a piece of safety-critical software be traceable to the lines of code that implement it, and vice versa. So every time a line of code changes, it must be retraced to the corresponding requirement in the design document, and you must be able to demonstrate that the code actually satisfies the requirement. The idea is that if something goes wrong, you’re able to figure out why; the practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.

As Bantégnie explains, the beauty of having a computer turn your requirements into code, rather than a human, is that you can be sure—in fact you can mathematically prove—that the generated code actually satisfies those requirements. Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”

Still, most software, even in the safety-obsessed world of aviation, is made the old-fashioned way, with engineers writing their requirements in prose and programmers coding them up in a programming language like C. As Bret Victor made clear in his essay, model-based design is relatively unusual. “A lot of people in the FAA think code generation is magic, and hence call for greater scrutiny,” Shivappa told me.

Most programmers feel the same way. They like code. At least they understand it. Tools that write your code for you and verify its correctness using the mathematics of “finite-state machines” and “recurrent systems” sound esoteric and hard to use, if not just too good to be true.

It is a pattern that has played itself out before. Whenever programming has taken a step away from the writing of literal ones and zeros, the loudest objections have come from programmers. Margaret Hamilton, a celebrated software engineer on the Apollo missions—in fact the coiner of the phrase “software engineering”—told me that during her first year at the Draper lab at MIT, in 1964, she remembers a meeting where one faction was fighting the other about transitioning away from “some very low machine language,” as close to ones and zeros as you could get, to “assembly language.” “The people at the lowest level were fighting to keep it. And the arguments were so similar: ‘Well how do we know assembly language is going to do it right?’”

“Guys on one side, their faces got red, and they started screaming,” she said. She said she was “amazed how emotional they got.”

Emmanuel Ledinot, of Dassault Aviation, pointed out that when assembly language was itself phased out in favor of the programming languages still popular today, like C, it was the assembly programmers who were skeptical this time. No wonder, he said, that “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”

The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”

Which sounds almost like a joke, but for proponents of the model-based approach, it’s an important point: We already know how to make complex software reliable, but in so many places, we’re choosing not to. Why?

In 2011, Chris Newcombe had been working at Amazon for almost seven years, and had risen to be a principal engineer. He had worked on some of the company’s most critical systems, including the retail-product catalog and the infrastructure that managed every Kindle device in the world. He was a leader on the highly prized Amazon Web Services team, which maintains cloud servers for some of the web’s biggest properties, like Netflix, Pinterest, and Reddit. Before Amazon, he’d helped build the backbone of Steam, the world’s largest online-gaming service. He is one of those engineers whose work quietly keeps the internet running. The products he’d worked on were considered massive successes. But all he could think about was that buried deep in the designs of those systems were disasters waiting to happen.

“Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”
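
To put rough, purely illustrative numbers on that remark: suppose some “extremely rare” combination of events has a one-in-a-billion chance of occurring on any given request. At a million requests per second the expected waiting time for it is

\[
\frac{1}{10^{6}\ \text{requests/s} \times 10^{-9}\ \text{per request}} = 10^{3}\ \text{s} \approx 17\ \text{minutes},
\]

so a failure that intuition files under “will never happen” can be expected many times a day. The specific figures are assumptions chosen only to show the effect of scale; they are not taken from Newcombe’s paper.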

Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.

This is why he was so intrigued when, in the appendix of a paper he’d been reading, he came across a strange mixture of math and code—or what looked like code—that described an algorithm in something called “TLA+.” The surprising part was that this description was said to be mathematically precise: An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.

TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy (say, if you were programming an ATM, a constraint might be that you can never withdraw the same money twice from your checking account). TLA+ then exhaustively checks that your logic does, in fact, satisfy those constraints. If not, it will show you exactly how they could be violated.
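
To make this concrete for programmers who have never seen a specification language, here is a minimal sketch, written in ordinary Python rather than TLA+ syntax, of the kind of exhaustive state-space check described above. The toy ATM model, its single invariant (“the balance never goes negative,” a simplified stand-in for the double-withdrawal constraint), and the helper names are illustrative assumptions of mine, not TLA+ itself and not anything from Amazon’s specifications.

# A minimal sketch, assuming a toy ATM with one account and two pending
# withdrawal requests. We enumerate every reachable state (each request may be
# approved or rejected) and check an invariant in every one of them, in the
# spirit of the exhaustive checking the article describes.

BALANCE = 100            # opening balance of the toy checking account
WITHDRAWALS = [60, 60]   # two requests that together exceed the balance

def next_states(state):
    """Yield every state reachable in one step: approve or reject the next pending request."""
    balance, pending, approved = state
    if not pending:
        return
    request, rest = pending[0], pending[1:]
    yield (balance - request, rest, approved + (request,))  # ATM approves the request
    yield (balance, rest, approved)                          # ATM rejects the request

def invariant(state):
    """The constraint we never want violated: the account never goes negative."""
    balance, _, _ = state
    return balance >= 0

def check():
    """Exhaustively explore every reachable state and report any violation."""
    start = (BALANCE, tuple(WITHDRAWALS), ())
    frontier, seen = [start], {start}
    while frontier:
        state = frontier.pop()
        if not invariant(state):
            print("Invariant violated in state:", state)
            return False
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    print("All", len(seen), "reachable states satisfy the invariant.")
    return True

if __name__ == "__main__":
    check()   # finds the state in which both 60-dollar withdrawals were approved

The point of the sketch is the exhaustiveness: a happy-path test might only ever try one withdrawal at a time, whereas enumerating every combination immediately turns up the design flaw in which both requests are approved.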

The language was invented by Leslie Lamport, a Turing Award–winning computer scientist. With a big white beard and scruffy white hair, and kind eyes behind large glasses, Lamport looks like he might be one of the friendlier professors at the American Hogwarts. Now at Microsoft Research, he is known as one of the pioneers of the theory of “distributed systems,” which describes any computer system made of multiple parts that communicate with each other. Lamport’s work laid the foundation for many of the systems that power the modern web.

For Lamport, a major reason today’s software is so full of bugs is that programmers jump straight into writing code. “Architects draw detailed plans before a brick is laid or a nail is hammered,” he wrote in an article. “But few programmers write even a rough sketch of what their programs will do before they start coding.” Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,” he says. Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.

Newcombe and his colleagues at Amazon would go on to use TLA+ to find subtle, critical bugs in major systems, including bugs in the core algorithms behind S3, regarded as perhaps the most reliable storage engine in the world. It is now used widely at the company. In the tiny universe of people who had ever used TLA+, their success was not so unusual. An intern at Microsoft used TLA+ to catch a bug that could have caused every Xbox in the world to crash after four hours of use. Engineers at the European Space Agency used it to rewrite, with 10 times less code, the operating system of a probe that was the first to ever land softly on a comet. Intel uses it regularly to verify its chips.

But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols. For Lamport, this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”

Lamport sees this failure to think mathematically about what they’re doing as the problem of modern software development in a nutshell: The stakes keep rising, but programmers aren’t stepping up—they haven’t developed the chops required to handle increasingly complex problems. “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”

Newcombe isn’t so sure that it’s the programmer who is to blame. “I’ve heard from Leslie that he thinks programmers are afraid of math. I’ve found that programmers aren’t aware—or don’t believe—that math can help them handle complexity. Complexity is the biggest challenge for programmers.” The real problem in getting people to use TLA+, he said, was convincing them it wouldn’t be a waste of their time. Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.

Most programmers who took computer science in college have briefly encountered formal methods. Usually they’re demonstrated on something trivial, like a program that counts up from zero; the student’s job is to mathematically prove that the program does, in fact, count up from zero.
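
For readers who never took such a course, the exercise might look something like the following sketch, written here in Python with the proof obligations reduced to runtime assertions. The function name and the exact invariant are my own illustrative choices, not taken from any particular textbook.

def count_up(n):
    """Return the list [0, 1, ..., n-1], counting up from zero."""
    out = []
    i = 0
    while i < n:
        # Loop invariant a student would be asked to prove:
        # out == [0, 1, ..., i-1], i.e. the program really has counted up from
        # zero so far. Here it is merely checked at runtime; a formal proof
        # would show it holds on entry and is preserved by every iteration.
        assert out == list(range(i))
        out.append(i)
        i += 1
    assert out == list(range(n))  # postcondition: we counted up from zero to n-1
    return out

print(count_up(5))  # prints [0, 1, 2, 3, 4]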

“I needed to change people’s perceptions on what formal methods were,” Newcombe told me. Even Lamport himself didn’t seem to fully grasp this point: Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.

For one thing, he said that when he was introducing colleagues at Amazon to TLA+ he would avoid telling them what it stood for, because he was afraid the name made it seem unnecessarily forbidding: “Temporal Logic of Actions” has exactly the kind of highfalutin ring to it that plays well in academia, but puts off most practicing programmers. He tried also not to use the terms “formal,” “verification,” or “proof,” which reminded programmers of tedious classroom exercises. Instead, he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.

He has since left Amazon for Oracle, where he’s been able to convince his new colleagues to give TLA+ a try. For him, using these tools is now a matter of responsibility. “We need to get better at this,” he said.

“I’m self-taught, been coding since I was nine, so my instincts were to start coding. That was my only—that was my way of thinking: You’d sketch something, try something, you’d organically evolve it.” In his view, this is what many programmers today still do. “They google, and they look on Stack Overflow” (a popular website where programmers answer each other’s technical questions) “and they get snippets of code to solve their tactical concern in this little function, and they glue it together, and iterate.”

“And that’s completely fine until you run smack into a real problem.”

In the summer of 2015, a pair of American security researchers, Charlie Miller and Chris Valasek, convinced that car manufacturers weren’t taking software flaws seriously enough, demonstrated that a 2014 Jeep Cherokee could be remotely controlled by hackers. They took advantage of the fact that the car’s entertainment system, which has a cellular connection (so that, for instance, you can start your car with your iPhone), was connected to more central systems, like the one that controls the windshield wipers, steering, acceleration, and brakes (so that, for instance, you can see guidelines on the rearview screen that respond as you turn the wheel). As proof of their attack, which they developed on nights and weekends, they hacked into Miller’s car while a journalist was driving it on the highway, and made it go haywire; the journalist, who knew what was coming, panicked when they cut the engines, forcing him to a slow crawl on a stretch of road with no shoulder to escape to.

Although they didn’t actually create one, they showed that it was possible to write a clever piece of software, a “vehicle worm,” that would use the onboard computer of a hacked Jeep Cherokee to scan for and hack others; had they wanted to, they could have had simultaneous access to a nationwide fleet of vulnerable cars and SUVs. (There were at least five Fiat Chrysler models affected, including the Jeep Cherokee.) One day they could have told them all to, say, suddenly veer left or cut the engines at high speed.

“We need to think about software differently,” Valasek told me. Car companies have long assembled their final product from parts made by hundreds of different suppliers. But where those parts were once purely mechanical, they now, as often as not, come with millions of lines of code. And while some of this code—for adaptive cruise control, for auto braking and lane assist—has indeed made cars safer (“The safety features on my Jeep have already saved me countless times,” says Miller), it has also created a level of complexity that is entirely new. And it has made possible a new kind of failure.

“There are lots of bugs in cars,” Gérard Berry, the French researcher behind Esterel, said in a talk. “It’s not like avionics—in avionics it’s taken very seriously. And it’s admitted that software is different from mechanics.” The automotive industry is perhaps among those that haven’t yet realized they are actually in the software business.

“We don’t in the automaker industry have a regulator for software safety that knows what it’s doing,” says Michael Barr, the software expert who testified in the Toyota case. NHTSA, he says, “has only limited software expertise. They’ve come at this from a mechanical history.” The same regulatory pressures that have made model-based design and code generation attractive to the aviation industry have been slower to come to car manufacturing. Emmanuel Ledinot, of Dassault Aviation, speculates that there might be economic reasons for the difference, too. Automakers simply can’t afford to increase the price of a component by even a few cents, since it is multiplied so many millionfold; the computers embedded in cars therefore have to be slimmed down to the bare minimum, with little room to run code that hasn’t been hand-tuned to be as lean as possible. “Introducing model-based software development was, I think, for the last decade, too costly for them.”

One suspects the incentives are changing. “I think the autonomous car might push them,” Ledinot told me—“ISO 26262 and the autonomous car might slowly push them to adopt this kind of approach on critical parts.” (ISO 26262 is a safety standard for cars published in 2011.) Barr said much the same thing: In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.

“Computing is fundamentally invisible,” Gérard Berry said in his talk. “When your tires are flat, you look at your tires, they are flat. When your software is broken, you look at your software, you see nothing.”

“So that’s a big problem.”

The Future of Fundamental Physics

It is important that you view the contents of this blog in relation to my new blog entitled “The fundamental universe revisited”. This new blog is designed to be the master science reference blog for all of my science blog postings on my website.

Here I share with my readers the ideas about physics of the distinguished theoretical physicist Nima Arkani-Hamed.

The principal text can be found in this PDF file. I have extracted the quotes below from this paper in order to provide you with a guide to the principal themes Nima Arkani-Hamed is promoting. The text speaks for itself.

Quotes:

“Abstract:

Fundamental physics began the twentieth century with the twin revolutions of relativity and quantum mechanics, and much of the second half of the century was devoted to the construction of a theoretical structure unifying these radical ideas. But this foundation has also led us to a number of paradoxes in our understanding of nature. Attempts to make sense of quantum mechanics and gravity at the smallest distance scales lead inexorably to the conclusion that space-time is an approximate notion that must emerge from more primitive building blocks. Furthermore, violent short-distance quantum fluctuations in the vacuum seem to make the existence of a macroscopic world wildly implausible, and yet we live comfortably in a huge universe. What, if anything, tames these fluctuations? Why is there a macroscopic universe? These are two of the central theoretical challenges of fundamental physics in the twenty-first century. In this essay, I describe the circle of ideas surrounding these questions, as well as some of the theoretical and experimental fronts on which they are being attacked…”

“…But there is a deeper reason to suspect that something much more interesting and subtle than “atoms of space-time” is at play. The problems with space-time are not only localised to small distances; in a precise sense, “inside” regions of space-time cannot appear in any fundamental description of physics at all…”

“…The fact that quantum mechanics makes it impossible to determine precisely the position and velocity of a baseball is also irrelevant to a baseball player. However, it is of fundamental importance to physics that we cannot speak precisely of position and momentum, but only position or momentum…”
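
[The point being made in the quote above is the familiar Heisenberg uncertainty principle. For readers who would like to see it written out, its standard textbook form is

\[
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
\]

where \(\Delta x\) and \(\Delta p\) are the uncertainties in position and momentum and \(\hbar\) is the reduced Planck constant; squeezing either uncertainty towards zero forces the other to grow without bound. This formula is my own addition for readers and is not part of the quoted paper.]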

“…This simple observation has huge implications. As discussed above, precise observables require a separation of the world into a) an infinitely large measuring apparatus and b) the system being studied…”

“…It should be clear that we have arrived at a bifurcatory moment in the history of fundamental physics, a moment that has enormous implications for the future of the subject. With many theoretical speculations pointing in radically different directions, it is now up to experiment to render its verdict!…”

“…Today, however, we confront even deeper mysteries, such as coming to grips with emergent time and the application of quantum mechanics to the entire universe. These challenges call for a bigger shift in perspective. Is there any hope for taking such large steps without direct input from experiment?…” [Emergent time relates to the concept of ‘NOW’ and its associated simultaneity].

“…Why should it be possible to talk about Newton’s laws in such a different way, which seems to hide their most essential feature of deterministic evolution in time? [Which I agree with] We now know the deep answer to this question is that the world is quantum-mechanical…”

“…There must be a new way of thinking about quantum field theories, in which space-time locality is not the star of the show and these remarkable hidden structures are made manifest. Finding this reformulation might be analogous to discovering the least-action formulation of classical physics; by removing space-time from its primary place in our description of standard physics, we may be in a better position to make the leap to the next theory, where space-time finally ceases to exist…”

Also see my blog entitled “The emerging crisis in physics. Will physics soon need to take a new course of direction?”