Addendum to my welcome page

I see this blog as being an important statement for both my existing readers and my readers of the future

Welcome again to my website. I have decided to simplify the opening words of my website, and I feel that the following words need to be said in the context of my wider welcome message. You will find that over a five-year period I have written several hundred blogs reflecting my ideas about a wide range of subjects that I have felt the need to talk about at different times throughout this period. Most of these older blogs remain on my website for posterity, for the benefit of later generations of my family. This is so that they may better appreciate not only the types of topics I like to talk about most, but also how my views about certain topics have changed as I have grown older. I believe that one of my blogs relating to gender expression in the community is a good example of this. It relates to the international gender equality debate. I see this blog as an example of how not only my own views about the subject have changed since I wrote the document in 2014, but also those of much of the international community. In this sense my website, in its present form, can be seen as an instrument for recording my views about different topics over different time frames. This includes the contents of any associated web links that may be attached.

A few things about my website

My website has grown under its own weight since it was created in 2013. Because of my attempt to retain most of my original blogs for posterity, my website has at different points become disjointed in its presentation. At this stage I intend to do nothing more about this, but I probably will in due course. However, I will soon complete blogs that were left incomplete at the outset, and I will also re-establish appropriate hyperlinks that have either been inadvertently removed or have lapsed over time. I apologise to my readers for this inconvenience. Hopefully, a recently established and improved category index will help you to navigate my blog posts more effectively.

My website is my retirement hobby. Notwithstanding my words above, I do my best to keep my blog creation techniques and administration as professionally presented as I can. I am a man in my mid-seventies who does not enjoy good health, so having outside help with my website work is especially important to me. I am also not naturally comfortable with computers, which does not help either. Fortunately I have three great friends who help me along the way; without their assistance this website would not exist. My two sons help me as well. The part of my retirement hobby I like most is conducting online research and bringing together new ideas and unusual posts. I do my best to approach all blog topics conscientiously and fair-mindedly, and I do not fear writing about any issue that comes to mind in order to share my ideas with you. I am a person who is strongly focused upon social justice and environmental protection issues. I have posted a number of blogs in which I argue that the international environment has been severely compromised, and continues to be, by nation states, corporations and individuals who seem to have no sense of a duty of environmental care. This greatly distresses me.

My employment history and a few other things about me

With respect to my wide-ranging work history, there are four principal areas of employment that have dominated most of my working life and that may interest you. These are my managerial roles within the retail industry for fifteen years from the early 1970s; my creation of a new building maintenance business that I managed and operated for five years; a new corporate business I created that acted on behalf of prospective new home buyers, helping them to develop floor plans and then to locate reliable builders to build their new homes, principally in country areas; and, in my later years, tertiary training as a professional mental health analyst and therapist. (I completed this study, as well as undertaking additional post-graduate studies to further enhance my specialised area of mental health expertise.) I was also accepted to complete a law degree at Adelaide University in the late 1990s.

My new profession in mental health practice was cut short because of invasive neurosurgery, through which I completely lost the sight in one eye. My remaining eye is potentially compromised and at risk of failing as well. (The medical conditions relating to this threatened blindness remain, and this is one reason why I sometimes need assistance with the creation of text for my website. It also gives me an impetus to develop my website with a greater sense of urgency than might otherwise be the case.)

My relationship with reality science

I love studying and writing about reality science. By reality science I mean not only the ideas and beliefs of contemporary mainstream physicists, but also the works and ideas of trained, widely known and respected ‘fringe’ physicists. It is my opinion that there is room in science for both temporal and serious atemporal ontological research. These days I see myself as a philosopher of science, and these are two of the reasons why I want to talk to you about scientific matters today. I am prepared to look at reality science from a wide perspective and then attempt to make some sort of sense of it all. Hence, over time I have developed my own theoretical scientific model that I feel is representative of science-reality as I now conceive it to be. I have entitled it the Awareness Model of theoretical physics (or the Awareness Model).

Introduction to my science-related ideas, and blogs relating to them

I am not a physicist. However, on my website I talk about wide-ranging science-related concepts that I have incorporated into my Awareness Model. At times I incorporate into it my conceptual views of reality science, my own interpretation of mainstream physics language, and information associated with both. I do this because it makes it easier for me to describe my own use of language within reality science, so that it makes better descriptive sense to me and is more likely to resonate with lay readers as well. I feel sure my avid science readers would have a good idea of what my line of reasoning is anyway. My words relate to what I consider to be the incomplete and misleading information that often flows from the physics community with regard to the accuracy of its statements about the contemporary status of international physics. Respected physicists have written in scientific journals and essays that international physics is in a state of confusion and crisis at the present time. I think that this is something you should know about.

I believe that mainstream physics, as a single unit of scientific discovery and investigation, is not fulfilling what I consider to be its duty of care to either itself or the wider population. This is one of the reasons why I make the following statement: physical science, as a single unit of discovery and testing, does not incorporate predictive mathematical information (originating from Quantum Mechanics) into its wider modelling process. I acknowledge that this would be a difficult exercise to achieve; however, I believe it is the principle that counts in this instance. I also think that the media should play a role in the wider scientific education of the community. It should be asserting that the metaphysical aspect is an important part of science, and a tool for understanding wider reality science. The Standard Model of physics itself accepts that metaphysical information from the predictive mathematics of Quantum Mechanics has been found to be demonstrably real in laboratories. Furthermore, I see that mysterious metaphysical phenomena probably play a significant part in all of our lives as well. This means that we can assume that metaphysical phenomena are both real and valid science, and not the voodoo science that many critics claim them to be. This is the kind of mystery that Albert Einstein could not believe existed, because it could not be explained within his special relativity model. These words are significant with respect to my newly amended Awareness concept model.

My complaint against the scientific community

Quantum entanglement is a theory that has been proved by replicable and reliable experiments. A particle on one side of the universe can influence a particle on the other side of the universe instantly by altering its spin. This means that somehow every part of the universe is connected to every other part. We, too, are part of this inter-connection, and it is conceivable that we can indirectly influence each other at a distance as well. If this is the case, you will probably see why I feel that there needs to be some sort of external review of how professional science, as a single-unit discipline of knowledge and investigative endeavour, can reasonably continue to avoid it. I hope that, over time, metaphysical knowledge in science will become the norm in the media, and in teaching institutions as well. Entanglement no longer remains merely a theory in science: it was finally confirmed by loophole-free experiments as recently as 2015–2016, and the underlying idea was established as early as the 1960s through Bell's theorem and the experiments that followed.
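For readers who would like to see what these Bell-type experiments actually test, the standard CHSH form of Bell's inequality can be written as follows. This is conventional textbook physics, not part of my Awareness model; E(a,b) denotes the measured correlation between results at detector settings a and b, and a, a′, b, b′ are the settings chosen by the two experimenters.

\lvert E(a,b) - E(a,b') + E(a',b) + E(a',b') \rvert \le 2 \quad \text{(any local hidden-variable theory)}

S_{\mathrm{QM}} = 2\sqrt{2} \approx 2.83 \quad \text{(quantum prediction for entangled particle pairs)}

The experiments measure these correlations on pairs of entangled particles; the measured values exceed 2 and approach 2\sqrt{2}, which is why physicists accept that entanglement is real.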

A statement from me about this complaint

It is because of this type of widespread incomplete and misleading information in professional science that I do not feel uncomfortable in redefining scientific language, and its associated interpretation and meaning, within my written work, as I have just stated. I see big-picture reality science as not needing to be too fussy about terminology in order to describe its wider mission. Wider-picture reality embraces all that ‘is’, and this includes metaphysical reality as well. My Awareness Model seeks to demonstrate that the mysterious metaphysical world is the sub-quantum world, where I research and describe unusual phenomena such as consciousness, intuition, the possibility that there may be ghosts, and what happens when people have near-death experiences (NDEs). There are many other instances like these as well, including morphogenetic field theory, and my strong belief that we have two different forms of consciousness. From these latter words, you will probably see why I find such deep-level science so fascinating. It also thrills me that perhaps I may be able to enthuse you as well through the information I am sharing with you today.

I will now share with you what I consider to be the most important points to be aware of in reference to my Awareness model. In due course I will remove the following words and rewrite them in a new, more descriptive blog. Today it is important to me that you are made aware of its contents, so that you may get a better grasp of the new blogs that I have recently posted. Keep this in mind, because it is only recently that I introduced Aquinas’ ideas to my work (described below) and because of this, my use of the phrase Awareness model remains sketchy in these new blogs.

The history of my story

It was as recently as late October 2017 that I discovered from my readings that the thirteenth-century theologian and philosopher Thomas Aquinas wrote about his views of reality. He described reality as irreducible and reducible quantities of all that ‘is’, in relation to there being a Deity, a heavenly afterlife, and all that relates to our earthly existence. I do not claim to have read this text at length, and what follows is not a religious story. It was not long after I read Aquinas's theological ideas that I began to realise that they may also be relevant to us, in helping us to better understand the fundamental and multi-layered quantity [conditions, influences and effects] of our holistic universe. Although I had already commenced writing about this new material, I felt it was necessary to attempt to include divisibility and indivisibility influences in my informational Awareness Model as well. I felt that these two new words completed the describable nexus of four tools that I needed if I were to realistically describe my concept of reality and, more importantly, to describe how such a reality may be the dominant condition within our universe and how it may concurrently work within it. The four tools, again, are irreducible, reducible, indivisible and divisible conditions, influences and effects, all manifested within the same concurrent relationship. This is in all reference frames.

I began to write additional notes, and I then attempted to consider what the inherent difficulties might be if I were to continue pursuing Aquinas's concept of the indivisible and divisible, pertinent to reality, within my own model. My problem was how I would go about employing this new concept to explain what the inherent nature of reality might be, and how this same reality condition (energy) may become the same energy type and condition that exists in our universe. Furthermore, I then had to work through the issues relating to how the universe may work in relation to this reality energy, which I had by then entitled Primordial Awareness. It was early in December 2017 that I completed the first draft of the paper. It remains a draft at this time, and I may decide to rewrite it one day. At present my primary objective is to assemble my ideas into a more coherent, readable and understandable form. It is extracts from this draft, including most sections of its introduction, that have been included in my blog entitled “The fundamental universe revisited”.

What do all my words mean in an everyday sense?

You may consider that my revised approach to writing about reality science could possibly incorporate every conceivable reality-related condition, influence and effect possible within any reference frame. It is only the condition of the observer that may otherwise change the nature of the observation, such as if the observers were visitors from outer space. Consider that if there is only one universal, irreducible and describable energy (with indivisible and divisible units of energy influences concurrent with it), and it is from this informational energy that all other energy types, influences and effects flow, then all these sub-energy types are all that is (‘is’). By this I mean both reality itself as well as universal conditions relating to our universe, but in different ratios, averages and densities to each other, rather than in the single quantity of reality energy that I talked about above. In other words, I mean that holistic reality energy (my Awareness Model concept) exists in two separate reference frames. One is within itself, and the other is within the quantity, conditions and effects of our universe, from which sub-units of energy influences emerge.

I point out in my extended paper that it is my opinion that Einstein’s special and general relativity theories are not wholly incorrect in relation to the wider universal quantities and conditions that I have just discussed. However, his Special Relativity model would need a describable continuum inertia in order to support it, along the lines I have just outlined with my Awareness Model (and other models of a similar type). Furthermore, Einstein’s General Relativity model would need to have its existing ether continuum changed to this same continuum inertia as the Special Relativity model that I just talked about. Whilst I have ideas as to how this may occur, I am not capable of expressing them in a professional scientific manner. I am more interested in bigger modelling from a philosopher’s point of view.

My words today, as well as those within my blog “The fundamental universe revisited”, do not negate the relevance and meaning of all of my existing science-related blogs on my website. It just means that the information within them should be conceptually interpreted in a different and more inclusive manner, of the type I have been talking about today. Over time I will add text to these existing blogs stating that the contents of the blog “The fundamental universe revisited” are the primary reference frame for all of my scientific beliefs. These beliefs extend to our cultural environment, our living conditions, and how and why we make the types of decisions that we do and subsequently behave in the manner that we do.

May you enjoy the contents of my website

The fundamental universe revisited

An exacting approach to the scientific understanding of cosmological reality

Abstract

I have identified what appears to be the natural continuum force of the universe. This force is irreducible and experiential across all frames, with the exception of those relating to electricity and magnetism. I have entitled this force “primordial gravitation”. Primordial gravitation carries the condition and influence of ether, including the effects of ether. The concurrent relationship between the primordial gravitational force and ether influences the conditions for the creation of particles. This means primordial energy also informationally influences the conditions for energy transfer into matter, as well as the associated units of fields. These fields can be explained and demonstrated.

Discussion

Recently I completed writing a major science work relating to new ideas that I have created with a view to describing cosmological reality more efficiently, accurately and decisively at every possible level. I have selected different sections and topics from this work to share with you, in order to give you a reasonable idea of my current line of thinking with physics-related material. I have not changed many of my basic science-related ideas over time, and I have not done so at the present time either. What I have done is create a new scientific descriptive methodology that allows me to more effectively describe and conceptually justify my wide-ranging ideas relating not only to the universe but also to the human condition. I feel far more confident in being able to introduce you to my wider-ranging science and philosophical ideas via this new medium of information sharing that I have created. However, my new presentational methodology is more abstract than what you may be accustomed to. Unfortunately, this is the downside of my new approach to understanding the wider world around us and what it may mean in all of our lives.

My original paper will not be posted online. The ideas contained in this blog today should be seen as superseding other science-related material that I have posted; otherwise you may become confused and disappointed. As I pointed out above, generally speaking my fundamental ideas about reality have not changed. If you do elect to compare some of my original ideas with the content of this blog, you may find them to be complementary to each other, but approached from a different direction. My primary paper is entitled ‘A fundamental description of the universe and its associated workings’, subtitled ‘The indivisible continuum’.

From this point onward I have incorporated text from my original work into this blog.

Introduction

I believe that most contemporary physics theories are missing a critical factor as they attempt to develop a universal theory of everything. I suggest that many mainstream physicists have overlooked what I believe are the two dominating characteristics of the universe. It is my opinion that the universe (the absolute quantity of cosmological reality as we understand it to be) has two separate divisible (DIVIS) and indivisible (INDIV) energy-type quantities, as well as associated sub-units of energy influences relating to these quantities. I believe that these two fundamental quantities physically and metaphysically mean something with respect to our being able to better understand the holistic universe around us, and also assist us to better understand ourselves. INDIV and DIVIS could be irreducible and reducible too. My hypothesis is loosely built upon the ideas of the thirteenth-century philosopher and theologian Thomas Aquinas, but my blog today is not a religious story.

Chart one below demonstrates what I mean by these words:

Reality is in two distinguishable parts:

Key:

1: Symbolizes all units of information and influences in universal reality as being indivisible (INDIV).

2: Symbolizes all units of information and influences in universal reality as being divisible (DIVIS).

3: INDIV and DIVIS meet together to complete the three dimensional experiential reality within which we live our lives.

The quote immediately following the note below will provide you with a guide to my wider ideas relating to reality physics, as well as clues as to how best to approach the different segments I have copied and pasted into this blog. My principal intent today is to introduce you to what I believe is an important new tool for scientists to employ in approaching their own scientific research methodologies more effectively and efficiently. I apologise that, because I have copied and pasted from my other work, this material is not only incomplete in respect of that work but also does not follow a natural continuum, as I mentioned earlier.

Note: You will find throughout this presentation that my science-related ideas and statements are not necessarily always in accordance with mainstream science meaning and usage. This is because I am a philosopher of science and not a physicist. I apologise for any inconvenience this may cause to my readers.

 

Quote:

• There is a single quantity of information (INDIV) that dominates cosmological reality which includes all life forms as we understand them to be. I also term this single quantity of information as primordial awareness. I refer to this title in many of my existing science blogs.

• There is an additional quantity of information that exists in a concurrent relationship with INDIV and this quantity is DIVIS. DIVIS cannot exist without INDIV.

• Cosmological reality can be explained by this concurrent relationship between INDIV and DIVIS.

• The INDIV and DIVIS quantities can be incorporated within any inertia continuum (foundation) provided their respective conditions and influences (forces) are interpreted, segregated and scientifically applied ethically and correctly in relationship to the stated conditions of the model itself.

• The concurrent relationship between INDIV and DIVIS is not only applicable to our universe but also seems to be pertinent to all other universes and dimensions as well.

Chart two below demonstrates what I mean by these words:

Ourselves and our behaviour in relationship to primordial awareness (all that is)

Key:

1: Primordial awareness (wider INDIV reality).

2: The Big Bang.

3: Primordial universe (a quantity).

4: INDIV quantity relating to the wider universe.

5: DIVIS quantity relating to the wider universe.

6: The combined INDIV and DIVIS quantities of the universe, from which inherent conditions and influences emerge to create change or otherwise do something of every conceivable nature. These possibilities to do something are units of possibilities, as well as any resultant condition or influence resulting therefrom within the universe. These words mean that all units of activity within the universe are connected to each other through different fields of quantum conditions and influences (entanglement).

7: Ourselves as units, including sub-units, to do something, such as our body organs and our INDIV units to think. Furthermore, DIVIS consciousness does something that causes subsequent behaviour. DIVIS consciousness can be contemplated, and as such it is divisible.

A summary of what I believe that the INDIV and DIVIS theory achieves

1. It is a story that appears to fundamentally and reliably bring together all the quantities, units, forces and influences that can conceivably exist within the universe. The story could conceivably also include the conditions and influences that existed before the Big Bang as well.

2. It describes not only how the universe works but also why it works in the manner that it does.

3. It demonstrates that there are two primary types of scientific causation.

4. It defines and describes a continuum that our universe could conceivably be built upon, and where the inherent energy of the universal system might internally emanate from.

5. It demonstrates how and why the universe is a random system.

6. It explains and describes the relationship between the smallest and the most trivial phenomena and the largest in the universe as well. This includes the method and means by which these quantities and units are interconnected, as well as their associated influences and effects.

7. It intimately describes the human condition at every conceivable level, including the manner in which we make decisions and subsequently behave.

8. It explains and describes why there is such a deep and irreconcilable division which persists to this day between the metaphysical scientific predictions of Quantum Mechanics and Einstein’s Special Relativity model.

9. It is an instrument of instruction that I believe most people could at least partly identify with and believe, in order to form their own views about cosmological reality and its relationship with the human condition, and perhaps wider culture as well.

10. It provides a wide and diverse range of philosophical and scientific information that can define the wider universe and provide the informational base upon which inestimable numbers of shorter investigative stories can be written.

11. It seems to provide sound grounds for both scientists and philosophers to develop new ideas and theories regarding the nature and origins of the universe.

What my theory does not achieve

From a contemporary mainstream scientific perspective it proves nothing! It is a concept document. This does not mean that my ideas are wrong, or that phenomena of the type I have introduced herein do not exist or are unworthy of reader consideration. I see my work-in-progress endeavour today as a stand-alone fundamental reality theory that stands on its own merits until such time as others can demonstrate a more descriptive and compelling theory. Also see my notes at the end of this blog with respect to this subject.

1. General description

As I stated earlier, I believe that cosmological reality exists in two parts. I have described these two parts as being quantities. I see these dual quantities as being divisible (DIVIS) and indivisible (INDIV), which not only mean something but also influence units of cosmological information within themselves and each other. Cosmological information is all information and influences that may be related to DIVIS and INDIV. I believe that it is this informational relationship that provides a comprehensive and meaningful understanding of the universe, and insight into some of the great related scientific mysteries. The types of ideas that you will find include…

A. INDIV may be considered a cosmological constant because it never decays, and so it is a constant, everlasting cosmological effect. It provides energy for all phenomena in the universe, and its associated effect is ether. We can only ever talk about INDIV as the universal continuum because it is also the force of nature. We can perceive DIVIS as the temporal continuum. We can only talk about INDIV and DIVIS together as information. Some is knowable and some is not, but this does not mean that it is not there.

B. Causation is not restricted to DIVIS units. It applies to INDIV units as well. The concurrent relationship between both is the life force of the universe as we can best perceive it to be.

C. Cosmological reality (the absolute quantity in physics) is about our understanding the relationship between INDIV and DIVIS influences on each other as well as themselves. This relationship relates to energy types, densities, averages and ratios with each other in every conceivable manner in an absolute time continuum.

At this stage, I feel that there are questions that you should consider

Do you think:

I. That the universe can be separated into two clearly distinguishable quantities? What may I have missed by creating this hypothesis?

II. That the unit system which I have conceived, associated with these two quantities, is representative of universal phenomena at every conceivable level relative to both the INDIV and DIVIS quantities?

III. That I have appropriately nominated the INDIV quantity as being the dominant quantity because I incorporated important phenomena within it such as nature, gravity, the speed of light, intuition and the like into this quantity?

IV. That it makes a significant difference to the validity of my wider hypothesis if I have erred along the way by incorrectly assigning units to either the INDIV or the DIVIS quantity? Can the law of averages apply?

You will find a list of references pertinent to the contents of this blog at the conclusion of this presentation.

2. The relationship between quantities, units, influences and energy, including a description of the energy forces at play at the time of the Big Bang

I believe that the metaphorical connecting nodes of this universal relationship between DIVIS and INDIV and their associated units are timeless NOWs, though this is not always necessarily so. I discuss these NOWs in section three. The INDIV quantity may also be seen as a metaphysical quantity. I see the DIVIS and INDIV quantities, together with their units of associated energy types, as being in a dynamically fluid concurrent relationship that is mostly unpredictable and random. This unpredictable and random behaviour within the ether effect of primordial gravity could be of the kind demonstrated in physics circles today. I argue that DIVIS and INDIV influences contain both knowable and unknowable information, and so their combined influences of energy cannot be scientifically tested by observation or experiment. However, those that are dominantly DIVIS would need to allow for at least some degree of INDIV hidden entanglement (some physicists say hidden variables), or similar indeterminable universal interference. An example of universal interference is the cosmological influence affecting the contraction of moving rods in length contraction and time dilation theory.
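For context, the conventional mainstream formulas behind the rod-contraction and time-dilation effects I am referring to (standard relativity, not part of my own model) are:

\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad \Delta t = \gamma \, \Delta t_{0}, \qquad L = \frac{L_{0}}{\gamma}

Here v is the speed of the moving rod or clock, c is the speed of light, \Delta t_{0} and L_{0} are the time interval and length measured at rest, and \gamma grows without limit as v approaches c.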

This means that DIVIS and INDIV are representative of units of energy relating both to themselves individually and to each other. They mean something worthy of future understanding and investigation. This ‘something’ can also mean communication across the width of the universe in an instant. This force’s behaviour is similar to that in entanglement theory in particle physics, where it has been demonstrated by experiment that the spin of one particle can influence the spin of another particle on the other side of the universe instantly. In other words, my INDIV and DIVIS theory means that it is possible for anything to happen across all units of the universe, which includes our lives. This idea is similar to the phenomenon of wave-function collapse as a result of observation in particle physics theory. I feel that both phenomena are related.

This inter-relationship between DIVIS and INDIV can be likened to a neural network, and the points where DIVIS and INDIV meet or cross each other can be seen as nodes that have specific informational meaning. Knowable and divisible information emanating from DIVIS includes time, patterns of light and associated densities of light, motion, consciousness (because it can be rationalised and contemplated) and the splitting of an apple into two separate halves. Unknowable but describable INDIV information includes units relating to the origins of electricity, the origins of magnetism, the speed of light (one way in a vacuum), intuition (awareness), thought, gravity and the ether in General Relativity theory. These units are to be included with the all-important units of nature and life as we understand and experience them to be. This combined information of INDIV and DIVIS, with its associated meaning and influences, can then be described as units of cosmological activity and process along the lines I talked about above. I believe that if DIVIS and INDIV and other related units of influence within the universe did not exist in a concurrent but mutually dependent relationship and ratio with each other, the universe as we understand it to be would not make sense. Furthermore, it would probably not exist at all.

I say that INDIV primordial gravity wave motion relating to this network (the analogical neural network) at some point, probably before the Big Bang, began to progressively generate DIVIS patterns of unit time relating to both DIVIS motion and the overall universal unit network which is our universe. This is the universe containing both INDIV and DIVIS units. I suggest it is from this absolute quantity (our universe) that this DIVIS unit of time became knowable and understandable, and a unit to assist us, through our curiosity, to progressively contextualise and plan all manner of human endeavour. Clocks that we devised then became the instruments to help us to recognise and explain our existence around cosmological DIVIS time, by way of units of time in relationship to the movement of objects. Furthermore, I suggest it is from this position that the divisible DIVIS time that we are familiar with today (space/time) progressively became the time/motion reference frame that most physicists have adopted in their theories today. This is the reference frame that scientists tell us is the appropriate manner in which to contextualise everyday reality, and because of this we have become accustomed to planning and living our lives accordingly. (At the close of the nineteenth century, Lorentz's absolute-time ether theory was generally seen to be the appropriate continuum inertia frame. Object movement in timeless space was then recorded differently.)

As I have indicated, I also believe that these same INDIV and DIVIS influences supported the creation of an inertia (the continuum foundation of the universe itself) from which the development of the universe took place, after which it developed and evolved in the manner that it has and continues to do. It also seems clear to me that both INDIV and DIVIS influences created the conditions for the Big Bang to occur. This is both before the Big Bang (in INDIV-virtual form) and after it (in real form). All the forces involved collectively created the conditions of energy types, averages, densities and ratios between both DIVIS and INDIV conditions for such a Big Bang explosion to occur at a particular timeless NOW. The word NOW is important. Remember we are talking about a period in reality space where clock units of time did not exist. Conditions relating to the explosion were simultaneous, and as such the DIVIS explosion related only to its own explosion reference frame, which included the forces existing prior to the explosion itself, as I mentioned above.

The creation of preon, quark and gluon particles took place around this period too. These three particle energies are the foundational forces that created DIVIS three-dimensional universal matter. At the indivisible point of explosion, a separate and diverse range of electromagnetic field forces complemented these three energy forces of particles. When combined, these forces contributed to the extent of the massive violence of the explosion by way of their respective DIVIS and INDIV unit energies. The initiating energy for the Big Bang was INDIV energy derived from the forces, quantities and influences already existing prior to the Big Bang in a virtual DIVIS form, in respect of a four-dimensional (absolute time) random energy continuum. Keep in mind that the respective ratios, densities and averages of these forces played a critical role in this Big Bang event as well. This is why such virtual-DIVIS energy prior to the Big Bang continues to manifest itself today as primordial noise (the cosmic background radiation). It is a four-dimensional absolute continuum where nothing is relative except unto itself.

3. Nature and where we fit into this bigger picture

Nature embraces both DIVIS and INDIV units that are common for us all to experience, sense and see. As I pointed out earlier, I see primordial gravity as being an informational INDIV unit of influence, and so you may assume such energy exists relating to influences between both DIVIS and INDIV. I will later explain the concurrent relationship of primordial gravity with undetectable ether pixel energy. The push and pull gravity waves of major objects are also INDIV. This is because they travel at c; my concept of primordial gravity waves is that they travel beyond c. I believe that we are born with both DIVIS and INDIV features and influences, and this means that we are no different from the other influences represented by animals, fish and birds. By this I mean influences that originated from nature (INDIV) in the first place. I argue that the only difference between our own twin influences of DIVIS and INDIV and those of other species is in respect of unit energy types, ratios, averages and densities (degrees of influence) between all of us. I also suggest from these words that as we age and become more widely experienced, the twin influences of DIVIS and INDIV in our lives change in relation to ratios, averages and densities as well, at every conceivable level.

I see the human brain and consciousness as being divisible DIVIS units which can sometimes be measured and tested. An example of this is the Global Consciousness Project's activities relating to global atmospheric disturbances (waves) emanating from events such as the collapse of the twin towers in New York and the mass bombings that periodically take place somewhere in the world. I suggest that it is these types of both positive and negative influences that impact upon our mind, consciousness and brain relationship globally. These negative and positive influences are the same INDIV and DIVIS influences that cause us to think and act in the manner that we do, however subtle and influential they may be. It should be remembered that Sheldrake's morphogenetic field theory demonstrates the ability of plants and other organisms to influence each other; this ability is a natural part of INDIV nature, and so is INDIV intuition as well.

Chart three below demonstrates what I mean by these words:

Our relationship with DIVIS and INDIV:

This illustration also demonstrates how we could not exist without the INDIV influence in our lives.

Key:

1: INDIV represents the dominant quantity of informational units of reality.

2: DIVIS is in a concurrent but junior partnership with INDIV informational units of reality.

3: DIVIS and INDIV come together to represent how we live in an experience that combines DIVIS and INDIV units of influences of reality.

4: Ourselves experiencing DIVIS and INDIV in reality.

4. NOWs and how they impact upon our everyday lives as well as the wider cosmological universe

I see both DIVIS and INDIV units of information as being an instantaneous succession of NOWs. NOWs are also units of INDIV influence. Because these NOWs are not related to clock time (they are related to INDIV absolute time), they are a collective representation of all that has ever been and will be with regard to DIVIS and INDIV units of influence within the universe. The universe can also be considered as being a NOW, because ‘something’ brought about its existence in the first place and we can never know for certain whether this ‘something’ was a DIVIS influence, an INDIV influence or a combination of both, as I have suggested above. These universal units of DIVIS and INDIV can then be seen as a collective representation of all that has historically been, is, or will ever be into the future, via a timeless succession of here-and-now “NOWs”. These include historical [virtual] NOWs before the Big Bang.

When I talk about future NOWs I mean NOWs that are representative of life units of influences that we are either hopeful about, neutral about, or fear completely. That is, future events may be positive and joyful family activities like Christmas time, or events like sitting by a lake fishing and relaxing, thinking about nothing in particular. NOWs also include negative events, such as the imminent death of someone you care for. NOWs are therefore also a representation of their own history, as well as of all conceivable possibilities-to-do-something NOWs. These latter possibilities are NOWs which we may never experience or seek to exploit in respect of any given day or time NOW. These words are akin to Quantum Mechanics theory, which says that every conceivable possibility is on the table and can happen when it is observed. I suggest that this also applies to life experiences. This means that we can change our minds at random by no other means than a whim or a short-term distraction whilst we are engaged in some sort of activity or another.

I believe that the naturally present DIVIS and INDIV influences, together with their subsequent effects relating to this infinite continuum of NOWs in the universe, also imply that the universe is not only aware (INDIV) of itself but also conscious (DIVIS) of itself. This is at every conceivable level as well. This idea then implies that the universe not only has its own mind (as seemingly represented in physics by entanglement theory) but can also influence all manner of DIVIS and INDIV influences within itself too. This set of influences may explain the random nature of the universe and how it manifests itself to observers in the manner that it does. I also believe that the universe has its own separate INDIV and DIVIS memories, which could mean that when we die our INDIV selves (souls if you like) can reconnect with the INDIV souls of deceased persons. Because INDIV is also indivisible information, this idea seems to have merit for me. Both souls return to nature after experiencing their DIVIS (organic) earthly experience.

I also suggest that we live in both DIVIS time and INDIV time simultaneously, and that the metaphysical connection node between both times for us is a NOW unit too. As I have said earlier, DIVIS units exist in this same type of relationship. This means that all past, present and future are always connected to this inter-time node. So then, we move step by step with each other metaphorically on either side of the connection node (but in reality simultaneously) at all times, as though we are part of a timeless movie. Our DIVIS selves cannot of course know about or understand this, but our intuitive INDIV selves can. I see this inability as being no more than an inter-dimensional one. By this I mean between separate DIVIS and INDIV conditions and influences respectively, which could be between a third and a fourth dimension. This ability can manifest itself in unusual and strange ways such as clairvoyance, alleged astral travel, out-of-body experiences, near-death experiences, deep meditation and prayer. Other simple ways are knowing someone is looking at you when you are not looking at them, and knowing when someone has died before you have been told about it.

5. The role of observers within the DIVIS and INDIV model

I feel that observers cannot remove themselves from the act of observation and experiment because they are inseparably a part of the frame of reference they are observing, which is itself a NOW. Observers are not only a unit of NOWs, but they are also influenced by DIVIS’s and INDIV’s NOWs at any given event they are observing. The events that they are observing include not only the participating individuals in any given event as NOWs, but also these same individuals with their inherent and diverse personal NOWs as well. By this I mean their own INDIV and DIVIS NOWs relating to their life history thus far. This NOW influence relationship at any given event, including laboratory experiments, also includes the wider universal NOW. I am suggesting that observers are merely an elementary line in the wider evolving analogical barcode of all possibilities to do something in the universe that I described a little earlier. Observers are therefore an irremovable part of the DIVIS and INDIV universal cosmic whole, and so their act of observation could be prejudicial to the authenticity of the event itself.

This seems to be in accord with the fact that quantum wave collapse occurs when physicists observe an experiment, such as one that determines whether a cat is dead or alive. This is the physicist Schrödinger’s analogy, relating to his thought experiment involving a cat in a sealed box. Schrödinger devised this thought experiment in order to expose the flawed interpretation of quantum superposition in physics: at what point can it be scientifically determined whether the cat in the sealed box is dead or not? As I discussed above, from my physics DIVIS and INDIV perspective the analogical cat is always dead and alive at the same time, but of course it does not know it. Nor do the observing physicists, because they are all located in the same timeless NOW as the cat. Furthermore, in my INDIV and DIVIS hypothesis they are all part of the universal continuum of NOWs simultaneously in relation to clock-unit measurement of time. These words imply that as individuals we are cosmologically INDIVs and DIVISs at the same time but, like the cat, we do not know it.
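For readers who would like the conventional notation, the superposition at the centre of Schrödinger's thought experiment is usually written in textbooks along the following lines. This is a standard simplification from mainstream quantum mechanics, not part of my own model:

|\psi\rangle = \frac{1}{\sqrt{2}}\bigl(|\text{atom not decayed}\rangle\,|\text{cat alive}\rangle + |\text{atom decayed}\rangle\,|\text{cat dead}\rangle\bigr)

Until an observation is made, standard quantum mechanics assigns a probability of one half to each outcome; opening the box is said to “collapse” this state into one branch or the other.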

6. Why the indivisible quantity is the dominant quantity within the universe

I believe that the universe relates to an INDIV inertia continuum that is, from a relativity theory perspective, an unknowable and therefore metaphysical INDIV quantity. Relativity theory rejects metaphysical phenomena as part of its universal modelling. However, with INDIV, ether-type models such as the Lorentz electron model describe the continuum upon which the universe evolved in the first place, and how it seems to work in the manner that it does. I support ether-type theories, and my Awareness Model of reality physics is loosely a representation of this. You will see where I discuss my theory in section six of this blog.

I believe that DIVIS quantities cannot exist without a pre-existing INDIV quantity. This INDIV quantity needs to influence itself and its associated DIVIS units. DIVIS units cannot influence INDIV units. It is for this reason that at the outset of this presentation I stated that the universe has two quantities, which include their associated units. For this reason the primary indivisible quantity, manifesting itself via INDIV units of influence, is the dominant influence in the universe. This is accentuated by the five points that I have outlined in the introduction.

7. Why the universe is nothing

A story about the Big Bang and what happened from there

In the first instance I will talk to you in general terms. This may help you to better contextualise some of the more complex physics ideas that I will share with you a little later herein. I believe that it was, incomprehensibly to most people, an indivisible (INDIV) four-dimensional Primordial Awareness condition that influenced the creation of the universe in indivisible absolute INDIV time, as I suggested a little earlier. Primordial Awareness is an abstract, ether-like state that I have conceived, and I believe it influences all things in all places in INDIV holistic reality-time circumstances. This includes other universes and dimensions as well. I believe that phenomena are created, maintained and destroyed by differing energy types, densities, averages and ratios. This same diverse energy relationship is also pertinent to quantum, sub-quantum metaphysical and unknowable cosmological information that is yet to be discovered. Unknowable and INDIV cosmological information conditions and influences include intuition, thoughts, out-of-body experiences, how gravity came into existence, and what the origins of electricity and magnetism are.

Gluon particles are an important unit of INDIV particle information as well; you will soon see why this is the case. Gluons have no internal structure, yet at the same time they hold the universe together. A more advanced science description of gluons can be found here. With regard to unknowable and indivisible information, such information can be considered to be imaginary and/or virtual, such as the existence of preon particles. Preon particles are particles that some mathematical models say exist, but they have not been detected yet. Preons are alleged to be one of the essential sub-atomic particles related to the creation of quarks. This same mathematical mystery applies to tachyon and dybbuk particles, which can allegedly travel beyond the speed of light and which also have not been detected yet. There are many others as well. Divisible DIVIS cosmological information relates to phenomena whose energy-type influences can be reduced in some way, and would include such simple things as cutting an apple in half. What I am saying above is that if you have knowledge of sub-quantum metaphysical physics, these types of mysteries are not unusual at all. Metaphysical entanglement in particle physics is another such phenomenon. Often these are referred to as non-local conditions and influences.

I suggest that it is from this diverse range of cosmological information that the energy force type for the creation of the universe arose. Only energy of some type existed at the immediate beginning of the Big Bang explosion; I refer to this energy as virtual energy. I have cited intuition and thoughts as unknowable information above because I believe that the wider cosmological reality from which all universes and dimensions emerged is information that is not only aware of itself but is also aware of all conditions and influences within itself. By this I mean within my concept of Primordial Awareness absolute time. My idea of primordial awareness energy is of an INDIV energy-type condition that can create INDIV effect conditions, such as was needed to influence the Big Bang explosion. In respect of INDIV energy, all conceivable possibilities to do something are open. DIVIS effect relates to phenomena that are not INDIV, such as the cutting of the apple. This means that there are two different universal conditions that are capable of engendering conditions of influences of some kind throughout the universe.

I believe that the explosion of the Big Bang created not only quarks and gluons but also the absolute-time conditions of primordial gravity. Primordial informational gravity is weak, so it is very difficult to detect, sometimes not at all. I say that the primordial gravity, quark and gluon influences created the informational universal condition of a metaphorical carpet of ether across an ever-expanding universe. In other words, they created the effects of ether. I believe that all “things” in the universe have a particle nature, and this includes gluons, which have no internal structure. As I stated above, the evolution of the universe needs gluons. There are eight different energy types related to gluons and six relating to quarks. Gluons and quarks are absolutely inseparable, and this is why gluons are often referred to as universal cosmic glue. Furthermore, gluons can exchange forces and influences within themselves as well. When one combines the inherent energy of primordial gravitation with these fourteen combined conditions of gluon and quark units of energy, I suggest that this collective energy condition is the inherent energy condition of the universe. This collective energy manifests itself upon every aspect of the universe in both DIVIS and INDIV ways, as I discussed earlier, and it influences itself in a DIVIS and INDIV manner as well. Particles can move between these conditions at will. This relationship is an entangled one, inasmuch as it is a random relationship of units of energy of these combined fifteen different fields of energy (primordial gravity, gluons and quarks). These fields of energy are in an ever-changing density, average and ratio with each other, and this is why the universe is unpredictably random, as physicists know it to be. You and I are beings that are similarly affected by this dynamic primordial gravity ether condition. You would realise this if you stopped to consider your own DIVIS and INDIV units of energy types and influences.

Also see additional information that I have provided at the end of this section relating to the above paragraph.

In respect of these words, I see this ether effect as being both the defining force in nature and the inertia continuum of the universe. I see informational primordial gravity as being the influence that created a second condition of push and pull gravity relating to larger objects such as planets and similar large objects in space. Electrical and magnetic forces also separately emanated from the Big Bang explosion to create a single electromagnetic field and associated radiation that travels at c, concurrently with primordial gravity travelling faster than it, beyond c. Over an indeterminable period, the Big Bang electromagnetic field influenced the creation of immeasurable numbers of sub-fields, which included those relating to the creation of light and the two types of gravity that I have cited. I have discussed how my concept of primordial gravity is an INDIV density-wave condition that not only has a concurrent relationship with diverse numbers of electromagnetic fields but also influences the density, ratio and averages of the light mass emanating from these fields, in relation to different universal conditions at any given time and location across the universe. Keep these words in mind in respect of what follows.

You will see from my description of this diverse range of energy influences that primordial gravity, quarks, gluons and electromagnetic fields are additionally representative of the strong force in the universe, as described by my concept of a universal and immobile ether (the causal effect of primordial gravity). Metaphysical (structuralist INDIV) preons and INDIV gluons, together with quarks and electromagnetic fields, are the combinations of energy that make up protons, neutrons and electrons in different averages, densities and ratios with each other. These diverse conditions of energy can be seen as being like the footprint of all DIVIS conditions and influences across the universe. Because these DIVIS conditions and influences (units of energy) are derivative of the INDIV condition of primordial-gravity-influenced ether, as discussed above, these combined conditions and influences describe the universal whole of the universe at every conceivable level. From these words, I postulate that my concept of an ether field that incorporates primordial gravity detects the presence of individual planets and then guides push and pull gravity into the appropriate range and types of fields that maintain the universal stability of motion between objects, preventing undue universal chaos but not the universe's inherent wave randomness in respect of wide-ranging interacting fields of information.

I believe that the other allied important matter relating to these words about this guiding ether field is that it can be seen as the same guiding field that David Bohm postulated within his Holomovement pilot-wave theory of particle physics. I also claim that this ether gravity field detects the gravity forces between two or more masses of objects, including planets hundreds of light years away from each other. The reason this can occur is that my concept of primordial gravity, which is in a concurrent relationship with the ether, moves in absolute time. This implies that the relationship between the masses of all objects, at any absolute distance and at any distance in relation to unit clock time, can be mathematically calculated in respect of their differing densities, ratios and averages.
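For reference, the conventional way mainstream physics calculates the gravitational force between two masses, whatever the distance between them, is Newton's law of universal gravitation. I mention it only as the standard point of comparison for the ether-based calculation I am describing above:

F = \frac{G\,m_{1}m_{2}}{r^{2}}, \qquad G \approx 6.674 \times 10^{-11}\ \mathrm{N\,m^{2}\,kg^{-2}}

Here m_{1} and m_{2} are the two masses and r is the distance between their centres.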

I have discussed the powerful role that gluons play in holding the building blocks of matter together, and in turn this demonstrates how they work as well. This is especially so as they hold neutrons and protons together in a stable relationship, in respect of their glue-like relationship with quarks. This means that gluons are also conjunctional INDIV stabilisers of my primordial gravity ether system, which I see as being critical for the stability and consistency of the universe generally, but not for its associated diverse range of electromagnetic fields, which are an energy force unto themselves. As I suggested earlier, I believe that gluons are the analogical lifeblood of the universe, and if they did not exist with the energy and condition types that they do, the universe would not exist and nor would we within it. This is with the exception of electric and electromagnetic fields. Particles such as quarks, which are one of the foundational influences in the creation of matter, would not exist either. The additional mystery about gluons is that although they are now accepted in physics as elementary particles, their microscopic structure remains unknown; furthermore, a gluon is considered to be indivisible, yet it can change influences within itself, as I discussed earlier. The other interesting thing is that there is no mathematics to support gluons from basic physical law, but I have read that some of their properties can be calculated.

The influences and conditions of the diverse range of energy types that I have discussed above demonstrate what I consider to be the dynamic and experiential conditions of the universe as a whole, and this includes us as well. This universe is what I term a primordial awareness quantity. The mysterious INDIV and DIVIS forces at play before the Big Bang existed in a separate and dynamic state of Primordial Awareness and time quantity. I suggest that this means there is a continuum of primordial awareness across all reference frames of wider reality as well. I believe that Euler's identity in mathematics demonstrates why such a notion may make sense. This is because it embraces the imaginary unit i, the square root of minus one (-1), as it appears in this equation:

e^(iπ) + 1 = 0

This equation symbolically represents all possibilities whatsoever within the frame of reference of reality (whatever this may mean) at any given time, place or circumstance.
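
For readers who would like to check the identity numerically, the following small sketch uses Python's complex-number support to confirm that e^(iπ) + 1 comes out as zero, up to rounding error. It is simply an arithmetic check of the equation above, nothing more.

```python
# Numerical check of Euler's identity: e^(i*pi) + 1 = 0
import cmath

value = cmath.exp(1j * cmath.pi) + 1
print(value)               # (0+1.2246467991473532e-16j): zero up to rounding error
print(abs(value) < 1e-12)  # True
```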

You will see from my words that gluons are the lifeblood of the universe. In closing this section, I remind you again that without gluons nothing relating to matter and its associated influences could exist in the universe; it is hard to imagine that electromagnetic fields could somehow exist on their own. The microscopic structure of gluons remains unknown, and there is no mathematical physics that derives gluons from physical law. To my mind this means that gluons do not physically exist, and as such they are permanently INDIV (metaphysical) units of energy. However, this INDIV energy holds the universe together. This means that without gluons, for all intents and purposes, the universe would be nothing. It further means that the INDIV nothingness of gluons is at least in part responsible for the creation of the universe, as well as being almost fully responsible for its maintenance and its seemingly infinite cohesion and associated stability. Another way of saying this is that a physics influence of nothing created a universe of the same nothing, and as such the universe can be physically described as being INDIV nothing.

Reference to item seven above.

The following quotation is an extract from my blog Reality with a matrix, which was posted in July 2017. I would write these words differently today, but nevertheless I believe that reading them will help my readers better understand the debate that I am presenting above. I also strongly urge you to read the links associated with that blog, because my ideas are supported by credible physics theories.

Quote:

“Within my blog I regularly talk about both entanglement and non-local phenomena. Non-locality in science has metaphysical characteristics as well, but physicists are prepared to accept this within their theoretical physical models. I have found that there is widespread disagreement within the scientific community as to what entanglement and non-locality really mean. In order to help authenticate this blog, I will clarify my position on these topics for you. You will find this discussion in the reference section of this blog.

Imagine ether to be a seamless parameter free three-dimensional tray of pure clear jelly. [Albeit it being an invisible and difficult to detect field of immobile gas. By this I mean that it is like non-local phenomena that from a layperson’s perspective may be seen as being imaginary]. The jelly is compressible and under certain conditions waves may occur. These waves then progressively move upwards to the surface of the jelly and in the process create their own independent energy along the way.

The jelly must not break the motion of material phenomena passing through it. The upper levels of the jelly are less dense than the lower levels of the jelly. This variation of density in the jelly creates pressure forces throughout the jelly that include the creation of velocity. It is these collective forces that not only permeate the holistic tray of jelly but also influence it as well.

These combined micro and macro phenomena [field forces] at a distance create non-uniform contact behaviour [interaction] with each other. This is as though it is a neural network entangled within an analogical frame of reference of primordial self-awareness or ether. It is the compressible nature of the ether itself, in a static state, that causes this non-unified random behaviour. It is also because of this non-unified behaviour that the jelly [ether] has a preferred frame of reference. This means that the ether frame is one that collectively and randomly embraces time, motion, velocity, energy and particle size. It is from this non-unified contact behaviour that elementary particles emerge. Elementary electricity and magnetism also emerge. This also means that the holistic [macro] nature of space itself is also in a state of average at any given time and place as well. [Is in some sort of uniformity]. This means it is always fluctuating but not necessarily for the same reasons. Furthermore this means that the ether space has both predictable and non-predictable random characteristics.

These ideas seem consistent with the randomness theory associated with both Einstein’s special relativity theory and Lorentz’s 1904 ether/electron theory. These models are mathematically similar to each other, but they vary in terms of the question of the movement of objects respectively therein. In other words, I am suggesting that my words in this blog today may provide scientists with a few additional clues as to how both these physics models may be finally reconciled. By this I mean via a structurally different ether theory that may never have been considered before. [Albeit being an elementary one].

Furthermore, these differing characteristics of the ether, apart from creating the waves that create gravity, are also separate fields in their own right. Which means gravity itself is a field as well. This in turn, in a macro sense, attracts other wave sources of gravity to it. These words mean that gravity is not only part of the wider space average that I have previously talked about, but also embraces the combined space average of all the other characteristics of the ether frame of reference as a whole. This in turn makes it a new frame of reference in its own right too. By these words I mean this gravitation frame of reference can then conversely become the necessary frame of reference upon which to create alternative material relativity models of physics. These may also include the three models described on a separate hand sheet or information sharing device.

I will now further explain and clarify these words. Within my concept of an ether there exist imaginary [non-local] particles that become identifiable particles on the analogical surface of the ether in the form of surface waves which then interact with each other in relation to the movement embraced within the immobile ether itself. The compressed phenomena associated with the ether causes contractions and expansion of the ether, so that the ether is in a type of perpetual contraction and expansion from which the concept of ether local time originates.

It is the instantaneous momentum of the gravity, together with its inherent particles in a diverse field type and size format that creates identifiable and measurable energy for space itself, and for the associated creation of local matter with both mass and no mass. The gravitational effect then becomes, as I stated earlier, a new state of reference in itself, a state where physical relativity models can then be considered.”

 

My closing words:-

From the information that I have provided for you today, I believe that because of the all-inclusive nature of my INDIV and DIVIS hypothesis, scientists of the future need only decide what respective unit types of energy are involved within universal cosmic reality. From that point, physicists can then decide in what ratios, densities and averages these energy types need to be melded into a single quantity of information. I think that this will then provide scientists with the necessary basis for a comprehensive theory of everything.

I believe that I have described, and perhaps demonstrated, the fundamental structure of the universe and how it may work. It is also possible that I have provided the elementary information that describes the essential nature and associated effects of wider reality as well, both through my description of NOW and through the associated Euler's identity that goes with it. By these words I mean the everyday conditions and influences that commonly apply to universal human life within the wider quantum universal experience, together with its associated INDIV and DIVIS effects, conditions and influences.

I pay special tribute to the philosopher and theologian Thomas Aquinas (1225-1274). If I had not accidentally discovered a small section of Aquinas's writings and ideas, I would never have attempted to write a document as comprehensive as this one and to open myself to ridicule in doing so. Thank you, Thomas Aquinas!

Note: This work is a highly original way of describing and explaining universal reality, especially because it explains in great detail how we fit into this reality as part of the wider interconnectedness of all things at every conceivable level in the universe.

It is for these reasons that I request your support with my copyright interest in this project. Because this blog is a concept document, and because of the nature of my theory [with the exception of the DIVIS feature, which is not testable], I pass it back to my unbelievers to disprove my argument that, in the final analysis, reality is an INDIV/DIVIS universal reality. Furthermore, the universal inertia continuum is indivisible (INDIV).

Existing blog references pertinent to the above

1] Is our universe being tugged from some external source?

This physics article claims that there appears to be movement of hundreds of galaxy clusters in the universe at about 2,000,000 miles per hour. Furthermore, these movements seem to have mysterious origins that may lie beyond our cosmic horizon, or perhaps in another universe. In other words, the article suggests that our space-time universe is being tugged by some sort of mysterious dark force, which may be linked to my idea of there being both INDIV and DIVIS forces working in conjunction with each other, as described in my primary blog today.

2] Were the rules of motion in our 3D universe predetermined?

I believe that they are, in respect of the INDIV and DIVIS forces that have evolved in absolute time rather than at the Big Bang. As I have suggested in my primary blog today, I believe these forces are derivative of the relationship between my concept of a primordial gravity field and its associated ether fields.

3] Why is there no precise dividing line between microscopic and macroscopic phenomena?

This blog features quotations by the eminent physicist Antony Valentini. Valentini argues that there is no precise dividing line between microscopic and macroscopic phenomena. I believe that INDIV [indivisible] forces relate to all things except those relating to electricity and magnetism. Therefore it is fair to say that Valentini's ideas are similar to my own in this area.

4] Is there such a thing as sub-quantum phenomena?

As I believe that all phenomena share the common field link of INDIV [indivisible] fields, there are sound reasons to believe that metaphysical sub-quantum forces exist. The physicist Groessing wrote a thesis in 2013 that appears to support my INDIV/DIVIS hypothesis. This blog also looks at particle movement at a sub-quantum level, as well as the diverse quantum effects relating thereto. Furthermore, the author talks about systemic non-locality, which seems to be a condition similar to my INDIV quantity hypothesis.

5] How David Bohm focused his (Gnostic) insight into the quantum world?

David Bohm developed a Holomovement theory of physics. Bohm's thesis includes what he refers to as the implicate order. His implicate order is a condition of influences and effects of a similar type to my INDIV hypothesis. Bohm says that elementary particles in the cosmos are amplifiers of information and that this information has no borders. These words also echo my INDIV/DIVIS hypothesis.

6] Review of Non-local correlations between Electromagnetically Isolated Neural Networks

This very important article is profound in the sense that it shows that our consciousness is likely seated outside of us, and that this has been demonstrated by reputable physics experiments. Furthermore, it suggests that metaphysical phenomena such as ghosts are feasible. In my work I refer to this alleged consciousness outside of us as our own INDIV [indivisible] intuitive consciousness. I argue that our everyday consciousness is DIVIS [divisible] consciousness, that is, a consciousness that we can internally consider and rationalise, whereas with INDIV consciousness we cannot. It is a timeless intuition effect.

7] Morphogenic field theory, the great mystery in physics

This blog features a video presentation by Rupert Sheldrake about why he feels that indivisible fields exist even though they cannot be directly observed. Sheldrake believes that these fields exist in the universe and that they provide the conditions for plant, animal and other life-form species to somehow communicate with each other in a non-observable manner.

8] A seven point guide to the day to day workings of reality

This blog will be rewritten in the near future. I continue to support the contents of this blog but from within the frame of reference of my primary blog today.

9] Is space-time infinite dimensional?

This blog features the ideas of the physicist El Naschie. El Naschie believes in the cosmological condition of there being an abstract fourth dimension, and our space-time universe seems to support particle activity within this concurrent relationship with space-time. El Naschie's ideas therefore seem to support my DIVIS [divisible]/INDIV [indivisible] hypothesis, inasmuch as there exist two reference frames that jointly create the conditions for particles to form.

10] Defining and describing holistic cosmic influences and processes

This blog will be rewritten in the near future. I continue to support the contents of this blog but from within the frame of reference of my primary blog today.

11] Did you know that there are at least 18 different interpretations of Quantum Mechanics?

This is an important blog because it points to what is probably the most serious fundamental problem with mainstream physics today. Particle physics theory is taught as a theory set apart from Einstein's two relativity theories. I think that this is incorrect, because all facets of physics should be conducting research within a common frame of reference such as my INDIV [indivisible]/DIVIS [divisible] model. In doing so, physicists are trying to set aside metaphysical conditions in the universe in order to complete their theories of everything.

In contrast to these words, quantum mechanics makes strange metaphysical predictions in physics which are akin to my notion of there being a single INDIV force throughout the universe. Physicists have developed at least 18 different predictive models in order to bridge the gap between the mysterious metaphysical forces that quantum mechanics predicts and the known, understood materialist objects and influences surrounding them, including at the cosmological scale.

In other words, this is another reason why physicists can be seen to be delving into a bottomless hole in their search for a theory of everything. They continue to attempt to ignore the universal INDIV quantity in the universe and its associated force fields.

12] I care to talk about entanglement

Entanglement [with its associated non-locality] is a field of metaphysical forces that is recognised in physics but cannot be explained. I have prepared this extensive blog about entanglement and provide a well-informed opinion about it. Entanglement is an INDIV [indivisible] force which is related to my concept of there being a primordial gravity. The effects of this gravity can carry INDIV influences across the universe in an instant, and this includes reversed particle spin. Also see the blog reference entitled "What travels at 10,000 times the speed of light?" below.

13] What travels at 10,000 times the speed of light?

It is commonly believed that nothing can travel faster than light, which moves at about 300,000 km/sec. However, physicists have calculated that any influence passing between entangled particles in the universe would have to travel at least 10,000 times the speed of light, and some scientists believe it could be up to as much as 144,500 times the speed of light. Furthermore, there are other physicists who believe that entanglement may occur in an external time frame of influence, where it would be instantaneous. I say that this latter position is the case with my INDIV [indivisible]/DIVIS [divisible] theory. Also see the blog reference "I care to talk about entanglement" above, and the small arithmetic check that follows this reference list.

14] I would like to introduce you to the pure beauty of fractals

Fractal patterns exist throughout all things across the universe, and this includes our body organs. Within this blog you will find a beautiful example of how the hidden influence of fractals is immersed within cosmological reality. Here is an example demonstrating how fractals manifest themselves within our body organs as well.

15] Can science create a visible quantum object?

This blog describes a breakthrough experiment that demonstrates that an object can be in two places at the same time. I suggest that this is what may be occurring within my DIVIS/INDIV theory.

16] Do some people think that science is a belief system?

The respected biologist Rupert Sheldrake made a speech about the necessity for physicists to combine both metaphysical and physical information in their science models. Sheldrake has made several prominent speeches about his views in this area, and as a result some professional organisations have banned him from lecturing in their institutions. What Sheldrake is really saying, in my view, is that he supports the concept behind my INDIV [indivisible]/DIVIS [divisible] theory.

17] The Future of Fundamental Physics

The respected physicist Nima Arkani-Hamed talks about why he feels that scientists should fundamentally change their thinking about physics. If you read this article you will see that it covers the same ground that I have been discussing today in relation to my INDIV [indivisible]/DIVIS [divisible] model.

18] Seven examples of implicit information

The seven examples of implicit information [INDIV information] support my idea of a wider cosmological DIVIS/INDIV frame of reference.

19] Unusual and challenging E8 maths theory

The presenter of this video demonstrates that there are many more subatomic particles and atomic forces yet to be discovered in the universe. As you will find in my blog today, these words are consistent with what I have been claiming from the outset throughout my blogs.

20] Albert Einstein and the great ether debate

There is a great debate within the physics community as to whether traditional Lorentz ether theory is a valid theory or not, particularly in light of the difficulties within the physics community at this time. I believe that ether theory makes a great deal of scientific sense and I have written about it accordingly. There is a little-known lecture that Einstein delivered at the University of Leiden in 1920 that strongly supports ether theory. I suggest that Einstein's words should dominate this debate today. My INDIV [indivisible]/DIVIS [divisible] theory embraces ether theory.

21] Comparison of three models of reality physics

The Process, Holomovement and Awareness models of physics are all built upon the condition of there being an INDIV [indivisible] frame of reference. This comparison chart demonstrates how this may be the case.
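
As promised under item 13 above, here is a small arithmetic check of the entanglement speed figures quoted there. The bounds themselves come from the experiments discussed in the linked blog; this sketch only performs the multiplication.

```python
# Multiplying out the entanglement speed bounds quoted in item 13.
c_km_per_s = 300_000  # approximate speed of light in km/s

lower_bound = 10_000 * c_km_per_s       # "at least 10,000 times the speed of light"
higher_estimate = 144_500 * c_km_per_s  # the higher estimate mentioned

print(f"lower bound: {lower_bound:,} km/s")          # 3,000,000,000 km/s
print(f"higher estimate: {higher_estimate:,} km/s")  # 43,350,000,000 km/s
```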

Important aspects of the 1962 Cuban Missile Crisis revisited. Is this story relevant today?

I think that it is

I have found two videos that seem to provide a comprehensive insight into what happened in October 1962 during the Cuban Missile Crisis in the Caribbean. I think that the Cuban Missile Crisis was a defining point in international political and military history. During the 13-day period of the crisis the world unequivocally faced the possibility of nuclear annihilation. It is for this reason, and more particularly in relation to the North Korean military stand-off today, that I feel these two video documentaries made in the 1990s are highly pertinent now.

I think that the most relevant point of the Cuban Missile Crisis is that nuclear annihilation could have occurred simply through incomplete or delayed transfer of information between the United States, the Soviet Union and their respective allies. Both the United States and the Soviet Union were merely a hair trigger away from mutual destruction.

Perhaps one of the most important factors in this stand-off is that the respective American and Soviet field commanders each had autonomous rights to fire weapons without referring back to their superiors if they felt sufficiently threatened by the opposing forces. The Soviets believed that the Americans were planning a full-scale invasion of Cuba [as they were], and the Soviets would have resorted to using tactical nuclear weapons in the field against the Americans if this had occurred. The Soviets had nuclear-armed torpedoes on their submarines as well as short-range tactical nuclear rockets available to their field armies. The Americans did not know about these nuclear-armed torpedoes and tactical rockets.

Any use of these nuclear weapons would most likely have resulted in a full-scale nuclear exchange between America and the Soviet Union. The Cuban Missile Crisis eventually led to the first nuclear arms control treaty between the Soviets and the Americans.

The reason the two videos have been incorporated into this blog is that the video entitled "The Cuban Missile Crisis – What the World Didn't Know" gives a fairly comprehensive overview of the dynamics of the crisis itself, while the video entitled "The Cuban Missile Crisis – The Man Who Saved the World" provides great insight into the hair-trigger nature of the conflict, as I discussed above.

This Wikipedia article provides a much deeper insight into the crisis than my words can.

Emergent reality and indivisible information

I believe that if we are to ever fully understand reality then we must also incorporate unknowable [indivisible] information

My regular readers know that I believe all phenomena, including thought construction, are both implicit and explicit. By implicit I mean information that we all know is real [such as consciousness] but which at the same time cannot be tested. This is the reason why I have classified metaphysical phenomena such as consciousness as indivisible information. We can describe indivisible information like consciousness, but physics generally cannot incorporate consciousness in its modelling, because consciousness cannot be defined or measured. I think this is a shame, because it means that science models do not seriously incorporate our whole-of-life experiences, which is to say reality.

I recently read an article written by George F. R. Ellis, who talks about this same dilemma in science, and I feel you should be aware of Ellis's ideas as I strongly identify with them. Below you will find the conclusion of Ellis's essay entitled "On the Nature of Emergent Reality". I have emboldened the sections of the conclusion that I feel are most pertinent to my argument and that you might like to know about as well. If you have the opportunity to read more of Ellis's ideas about reality, I think that you will feel richly rewarded.

Quote:

“…Conclusion

Reprise: I have given above a view of emergent complex systems where there are structuring relations, triggering relations as well as environmental influences and internal variables, summarised in Figure 9.

Figure 9: The system and its situation: contextual and triggering influences


Function takes place in the context of a social and physical situation that, together with the values of internal variables, is the current operating environment. Structure is constant on the relevant timescale, enabling the input (triggering events that operate in the given situation – they are varying causal quantities) to have a predictable result. Thus function follows structure. The environment sets the boundary conditions and the internal variables (memory and learnt behaviour patterns) result from past experience. Noise or chance represents the effects of detailed features that we do not know because they are subsumed in the coarse graining leading to higher level descriptions of either the system or the environment. The system structure is determined by developmental processes that use genetic information, read in the context of the system-environment interaction occurring in the organism’s history, to determine its structure. For example, genes develop a brain capacity to learn language that then results in adaptation of the brain to that specific language. The genetic heritage leading to this result comes into being through evolutionary adaptation over very long timescales to the past environment. This language then forms the basis of complex symbolic modelling and associated understanding, taking place in a social context, that guides future actions. Thus human understanding of events and their meanings govern their actions, which then change the situation around them. Symbolic systems are causally effective.

Strong reductionist claims, usually characterised by the phrase `nothing but’ and focusing only on physical existence, simply do not take into account the depth of causation in the real world as indicated above, and the inability of physics on its own to comprehend these interactions and effects.  These claims represent a typical fundamentalist position, claiming a partial truth (based on some subset of causation) to be the whole truth and ignoring the overall rich causal matrix while usually focusing on purely physical elements of causation. They do not and cannot be an adequate basis of explanation or understanding in the real world. Consequently they do not represent an adequate basis for making ontological claims.

This paper has outlined a view of emergent reality in which it is clear that non-physical quantities such as information and goals can have physical effect in the world of particles and forces, and hence must be recognised as having a real existence (Ellis 2003). Associated with this there is a richer ontology than simple physicalism, which omits important causal agencies from its vision. That view does not deal adequately with the real world…”

The original Ellis document online

I have also attached a PDF document to this blog for your added convenience.

Irreducible mind theory and the falsity of reductive interpretations of the mind and body relationship

Irreducible Mind is the title of a book that was first published in 2007

The authors are: Edward F. Kelly, Emily Williams Kelly, Adam Crabtree, Alan Gauld, Michael Grosso and Bruce Greyson

The book's contents remain defining and important in psychoanalysis to this day

The purpose of this blog is not so much to talk about the book and its contents but to look more closely at an extended review of the book by Ulrich Mohrhoff. Mohrhoff's review discusses the implications of Irreducible Mind in relation to what he considers to be the metaphysical nexus between our minds and brains. Mohrhoff introduces sub-quantum ontological physics into his review as he talks about the mind/brain relationship.

In future on my website I will be referring not only to the Irreducible Mind book but more especially to Mohrhoff's words. I see both of these items as pertinent not only to my physics Awareness model but also to my Dual Consciousness [Implicit and Explicit] model as well.

You will find Mohrhoff’s review paper here.

You will also find another document of reviews relating to the perceived quality of the Irreducible Mind book.

If you have not heard about the book Irreducible Mind before, I feel strongly that you will appreciate my introducing you to both the book and Mohrhoff's ideas.

Albert Einstein and the great ether debate

It is important that you view the contents of this blog in relation to my new blog entitled “The fundamental universe revisited”. That new blog is designed to be the master reference blog for all my science postings on this website.

Is ether theory still valid for incorporation within contemporary physics or not?

I believe that it is. My reasons for saying this are based upon what Einstein said about ether in a public lecture at the University of Leiden in 1920. In his later years Einstein continued to believe that ether was pertinent to both his Special Relativity and General Relativity models, but for different reasons in each case. Einstein made a distinction between the immobile ether of his Special Relativity theory and the ether of his General Relativity hypothesis, which he considered necessary to accommodate gravitational waves.

I have copied and posted Einstein's 1920 lecture below, and I have highlighted within the text where he talks about his belief that ether was an important factor in his thinking in relation to both of his relativity theories. Einstein continued to believe in ether theory until the closing days of his life, though not necessarily in relation to his original relativity ideas. As an extension of these words, keep in mind that both of Einstein's relativity theories remain at the core of modern physics models and theories. Contemporary physics has elected to dismiss ether theory because it is seen as unnecessary.

Here is Einstein’s 1920 lecture:

Quote:

Einstein: Ether and Relativity

Albert Einstein gave an address on 5 May 1920 at the University of Leiden. He chose as his topic Ether and the Theory of Relativity. He lectured in German but we present an English translation below. The lecture was published by Methuen & Co. Ltd, London, in 1922.

Ether and the Theory of Relativity by Albert Einstein

How does it come about that alongside of the idea of ponderable matter, which is derived by abstraction from everyday life, the physicists set the idea of the existence of another kind of matter, the ether? The explanation is probably to be sought in those phenomena which have given rise to the theory of action at a distance, and in the properties of light which have led to the undulatory theory. Let us devote a little while to the consideration of these two subjects.

Outside of physics we know nothing of action at a distance. When we try to connect cause and effect in the experiences which natural objects afford us, it seems at first as if there were no other mutual actions than those of immediate contact, e.g. the communication of motion by impact, push and pull, heating or inducing combustion by means of a flame, etc. It is true that even in everyday experience weight, which is in a sense action at a distance, plays a very important part. But since in daily experience the weight of bodies meets us as something constant, something not linked to any cause which is variable in time or place, we do not in everyday life speculate as to the cause of gravity, and therefore do not become conscious of its character as action at a distance. It was Newton’s theory of gravitation that first assigned a cause for gravity by interpreting it as action at a distance, proceeding from masses. Newton’s theory is probably the greatest stride ever made in the effort towards the causal nexus of natural phenomena. And yet this theory evoked a lively sense of discomfort among Newton’s contemporaries, because it seemed to be in conflict with the principle springing from the rest of experience, that there can be reciprocal action only through contact, and not through immediate action at a distance.

It is only with reluctance that man’s desire for knowledge endures a dualism of this kind. How was unity to be preserved in his comprehension of the forces of nature? Either by trying to look upon contact forces as being themselves distant forces which admittedly are observable only at a very small distance and this was the road which Newton’s followers, who were entirely under the spell of his doctrine, mostly preferred to take; or by assuming that the Newtonian action at a distance is only apparently immediate action at a distance, but in truth is conveyed by a medium permeating space, whether by movements or by elastic deformation of this medium. Thus the endeavour toward a unified view of the nature of forces leads to the hypothesis of an ether. This hypothesis, to be sure, did not at first bring with it any advance in the theory of gravitation or in physics generally, so that it became customary to treat Newton’s law of force as an axiom not further reducible. But the ether hypothesis was bound always to play some part in physical science, even if at first only a latent part.

When in the first half of the nineteenth century the far-reaching similarity was revealed which subsists between the properties of light and those of elastic waves in ponderable bodies, the ether hypothesis found fresh support. It appeared beyond question that light must be interpreted as a vibratory process in an elastic, inert medium filling up universal space. It also seemed to be a necessary consequence of the fact that light is capable of polarisation that this medium, the ether, must be of the nature of a solid body, because transverse waves are not possible in a fluid, but only in a solid. Thus the physicists were bound to arrive at the theory of the “quasi-rigid” luminiferous ether, the parts of which can carry out no movements relatively to one another except the small movements of deformation which correspond to light-waves.

This theory – also called the theory of the stationary luminiferous ether – moreover found a strong support in an experiment which is also of fundamental importance in the special theory of relativity, the experiment of Fizeau, from which one was obliged to infer that the luminiferous ether does not take part in the movements of bodies. The phenomenon of aberration also favoured the theory of the quasi-rigid ether.

The development of the theory of electricity along the path opened up by Maxwell and Lorentz gave the development of our ideas concerning the ether quite a peculiar and unexpected turn. For Maxwell himself the ether indeed still had properties which were purely mechanical, although of a much more complicated kind than the mechanical properties of tangible solid bodies. But neither Maxwell nor his followers succeeded in elaborating a mechanical model for the ether which might furnish a satisfactory mechanical interpretation of Maxwell’s laws of the electro-magnetic field. The laws were clear and simple, the mechanical interpretations clumsy and contradictory. Almost imperceptibly the theoretical physicists adapted themselves to a situation which, from the standpoint of their mechanical programme, was very depressing. They were particularly influenced by the electro-dynamical investigations of Heinrich Hertz. For whereas they previously had required of a conclusive theory that it should content itself with the fundamental concepts which belong exclusively to mechanics (e.g. densities, velocities, deformations, stresses) they gradually accustomed themselves to admitting electric and magnetic force as fundamental concepts side by side with those of mechanics, without requiring a mechanical interpretation for them. Thus the purely mechanical view of nature was gradually abandoned. But this change led to a fundamental dualism which in the long-run was insupportable. A way of escape was now sought in the reverse direction, by reducing the principles of mechanics to those of electricity, and this especially as confidence in the strict validity of the equations of Newton’s mechanics was shaken by the experiments with β-rays and rapid cathode rays.

This dualism still confronts us in unextenuated form in the theory of Hertz, where matter appears not only as the bearer of velocities, kinetic energy, and mechanical pressures, but also as the bearer of electromagnetic fields. Since such fields also occur in vacuo – i.e. in free ether-the ether also appears as bearer of electromagnetic fields. The ether appears indistinguishable in its functions from ordinary matter. Within matter it takes part in the motion of matter and in empty space it has everywhere a velocity; so that the ether has a definitely assigned velocity throughout the whole of space. There is no fundamental difference between Hertz’s ether and ponderable matter (which in part subsists in the ether).

The Hertz theory suffered not only from the defect of ascribing to matter and ether, on the one hand mechanical states, and on the other hand electrical states, which do not stand in any conceivable relation to each other; it was also at variance with the result of Fizeau’s important experiment on the velocity of the propagation of light in moving fluids, and with other established experimental results.

Such was the state of things when H A Lorentz entered upon the scene. He brought theory into harmony with experience by means of a wonderful simplification of theoretical principles. He achieved this, the most important advance in the theory of electricity since Maxwell, by taking from ether its mechanical, and from matter its electromagnetic qualities. As in empty space, so too in the interior of material bodies, the ether, and not matter viewed atomistically, was exclusively the seat of electromagnetic fields. According to Lorentz the elementary particles of matter alone are capable of carrying out movements; their electromagnetic activity is entirely confined to the carrying of electric charges. Thus Lorentz succeeded in reducing all electromagnetic happenings to Maxwell’s equations for free space.

As to the mechanical nature of the Lorentzian ether, it may be said of it, in a somewhat playful spirit, that immobility is the only mechanical property of which it has not been deprived by H A Lorentz. It may be added that the whole change in the conception of the ether which the special theory of relativity brought about, consisted in taking away from the ether its last mechanical quality, namely, its immobility. How this is to be understood will forthwith be expounded.

The space-time theory and the kinematics of the special theory of relativity were modelled on the Maxwell-Lorentz theory of the electromagnetic field. This theory therefore satisfies the conditions of the special theory of relativity, but when viewed from the latter it acquires a novel aspect. For if K be a system of coordinates relatively to which the Lorentzian ether is at rest, the Maxwell-Lorentz equations are valid primarily with reference to K. But by the special theory of relativity the same equations without any change of meaning also hold in relation to any new system of co-ordinates K’ which is moving in uniform translation relatively to K. Now comes the anxious question:- Why must I in the theory distinguish the K system above all K’ systems, which are physically equivalent to it in all respects, by assuming that the ether is at rest relatively to the K system? For the theoretician such an asymmetry in the theoretical structure, with no corresponding asymmetry in the system of experience, is intolerable. If we assume the ether to be at rest relatively to K, but in motion relatively to K’, the physical equivalence of K and K’ seems to me from the logical standpoint, not indeed downright incorrect, but nevertheless unacceptable.

The next position which it was possible to take up in face of this state of things appeared to be the following. The ether does not exist at all. The electromagnetic fields are not states of a medium, and are not bound down to any bearer, but they are independent realities which are not reducible to anything else, exactly like the atoms of ponderable matter. This conception suggests itself the more readily as, according to Lorentz’s theory, electromagnetic radiation, like ponderable matter, brings impulse and energy with it, and as, according to the special theory of relativity, both matter and radiation are but special forms of distributed energy, ponderable mass losing its isolation and appearing as a special form of energy.

More careful reflection teaches us however, that the special theory of relativity does not compel us to deny ether. We may assume the existence of an ether; only we must give up ascribing a definite state of motion to it, i.e. we must by abstraction take from it the last mechanical characteristic which Lorentz had still left it. We shall see later that this point of view, the conceivability of which I shall at once endeavour to make more intelligible by a somewhat halting comparison, is justified by the results of the general theory of relativity.

Think of waves on the surface of water. Here we can describe two entirely different things. Either we may observe how the undulatory surface forming the boundary between water and air alters in the course of time; or else-with the help of small floats, for instance – we can observe how the position of the separate particles of water alters in the course of time. If the existence of such floats for tracking the motion of the particles of a fluid were a fundamental impossibility in physics – if, in fact nothing else whatever were observable than the shape of the space occupied by the water as it varies in time, we should have no ground for the assumption that water consists of movable particles. But all the same we could characterise it as a medium.

We have something like this in the electromagnetic field. For we may picture the field to ourselves as consisting of lines of force. If we wish to interpret these lines of force to ourselves as something material in the ordinary sense, we are tempted to interpret the dynamic processes as motions of these lines of force, such that each separate line of force is tracked through the course of time. It is well known, however, that this way of regarding the electromagnetic field leads to contradictions.

Generalising we must say this:- There may be supposed to be extended physical objects to which the idea of motion cannot be applied. They may not be thought of as consisting of particles which allow themselves to be separately tracked through time. In Minkowski’s idiom this is expressed as follows:- Not every extended conformation in the four-dimensional world can be regarded as composed of world-threads. The special theory of relativity forbids us to assume the ether to consist of particles observable through time, but the hypothesis of ether in itself is not in conflict with the special theory of relativity. Only we must be on our guard against ascribing a state of motion to the ether.

Certainly, from the standpoint of the special theory of relativity, the ether hypothesis appears at first to be an empty hypothesis. In the equations of the electromagnetic field there occur, in addition to the densities of the electric charge, only the intensities of the field. The career of electromagnetic processes in vacuo appears to be completely determined by these equations, uninfluenced by other physical quantities. The electromagnetic fields appear as ultimate, irreducible realities, and at first it seems superfluous to postulate a homogeneous, isotropic ether-medium, and to envisage electromagnetic fields as states of this medium. But on the other hand there is a weighty argument to be adduced in favour of the ether hypothesis. To deny the ether is ultimately to assume that empty space has no physical qualities whatever. The fundamental facts of mechanics do not harmonize with this view. For the mechanical behaviour of a corporeal system hovering freely in empty space depends not only on relative positions (distances) and relative velocities, but also on its state of rotation, which physically may be taken as a characteristic not appertaining to the system in itself. In order to be able to look upon the rotation of the system, at least formally, as something real, Newton objectivises space. Since he classes his absolute space together with real things, for him rotation relative to an absolute space is also something real. Newton might no less well have called his absolute space “Ether”; what is essential is merely that besides observable objects, another thing, which is not perceptible, must be looked upon as real, to enable acceleration or rotation to be looked upon as something real.

It is true that Mach tried to avoid having to accept as real something which is not observable by endeavouring to substitute in mechanics a mean acceleration with reference to the totality of the masses in the universe in place of an acceleration with reference to absolute space. But inertial resistance opposed to relative acceleration of distant masses presupposes action at a distance; and as the modern physicist does not believe that he may accept this action at a distance, he comes back once more, if he follows Mach, to the ether, which has to serve as medium for the effects of inertia. But this conception of the ether to which we are led by Mach’s way of thinking differs essentially from the ether as conceived by Newton, by Fresnel, and by Lorentz. Mach’s ether not only conditions the behaviour of inert masses, but is also conditioned in its state by them.

Mach’s idea finds its full development in the ether of the general theory of relativity. According to this theory the metrical qualities of the continuum of space-time differ in the environment of different points of space-time, and are partly conditioned by the matter existing outside of the territory under consideration. This space-time variability of the reciprocal relations of the standards of space and time, or, perhaps, the recognition of the fact that “empty space” in its physical relation is neither homogeneous nor isotropic, compelling us to describe its state by ten functions (the gravitation potentials gμν), has, I think, finally disposed of the view that space is physically empty. But therewith the conception of the ether has again acquired an intelligible content although this content differs widely from that of the ether of the mechanical undulatory theory of light. The ether of the general theory of relativity is a medium which is itself devoid of all mechanical and kinematical qualities, but helps to determine mechanical (and electromagnetic) events.

What is fundamentally new in the ether of the general theory of relativity as opposed to the ether of Lorentz consists in this, that the state of the former is at every place determined by connections with the matter and the state of the ether in neighbouring places, which are amenable to law in the form of differential equations; whereas the state of the Lorentzian ether in the absence of electromagnetic fields is conditioned by nothing outside itself, and is everywhere the same. The ether of the general theory of relativity is transmuted conceptually into the ether of Lorentz if we substitute constants for the functions of space which describe the former, disregarding the causes which condition its state. Thus we may also say, I think, that the ether of the general theory of relativity is the outcome of the Lorentzian ether, through relativation.

As to the part which the new ether is to play in the physics of the future we are not yet clear. We know that it determines the metrical relations in the space-time continuum, e.g. the configurative possibilities of solid bodies as well as the gravitational fields; but we do not know whether it has an essential share in the structure of the electrical elementary particles constituting matter. Nor do we know whether it is only in the proximity of ponderable masses that its structure differs essentially from that of the Lorentzian ether; whether the geometry of spaces of cosmic extent is approximately Euclidean. But we can assert by reason of the relativistic equations of gravitation that there must be a departure from Euclidean relations, with spaces of cosmic order of magnitude, if there exists a positive mean density, no matter how small, of the matter in the universe.

In this case the universe must of necessity be spatially unbounded and of finite magnitude, its magnitude being determined by the value of that mean density.

If we consider the gravitational field and the electromagnetic field from the standpoint of the ether hypothesis, we find a remarkable difference between the two. There can be no space nor any part of space without gravitational potentials; for these confer upon space its metrical qualities, without which it cannot be imagined at all. The existence of the gravitational field is inseparably bound up with the existence of space. On the other hand a part of space may very well be imagined without an electromagnetic field; thus in contrast with the gravitational field, the electromagnetic field seems to be only secondarily linked to the ether, the formal nature of the electromagnetic field being as yet in no way determined by that of gravitational ether. From the present state of theory it looks as if the electromagnetic field, as opposed to the gravitational field, rests upon an entirely new formal motif, as though nature might just as well have endowed the gravitational ether with fields of quite another type, for example, with fields of a scalar potential, instead of fields of the electromagnetic type.

Since according to our present conceptions the elementary particles of matter are also, in their essence, nothing else than condensations of the electromagnetic field, our present view of the universe presents two realities which are completely separated from each other conceptually, although connected causally, namely, gravitational ether and electromagnetic field, or – as they might also be called – space and matter.

Of course it would be a great advance if we could succeed in comprehending the gravitational field and the electromagnetic field together as one unified conformation. Then for the first time the epoch of theoretical physics founded by Faraday and Maxwell would reach a satisfactory conclusion. The contrast between ether and matter would fade away, and, through the general theory of relativity, the whole of physics would become a complete system of thought, like geometry, kinematics, and the theory of gravitation. An exceedingly ingenious attempt in this direction has been made by the mathematician H Weyl; but I do not believe that his theory will hold its ground in relation to reality. Further, in contemplating the immediate future of theoretical physics we ought not unconditionally to reject the possibility that the facts comprised in the quantum theory may set bounds to the field theory beyond which it cannot pass.

Recapitulating, we may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether. According to the general theory of relativity space without ether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this ether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it.”

From Einstein’s words, more particularly his words that I have emboldened, I hope that my readers may understand why I feel that ether theory in physics remains a valid hypothesis.

It seems that a coming software apocalypse may soon be upon us

Is the only way to head off a catastrophe with software to change software code and how we make it?

It seems that this must be the case, and sooner rather than later. If you are familiar with computer software technology I am sure you will understand how serious this coding problem is. I am not computer friendly, so you must evaluate the contents of the article below as you see fit. I have emboldened the text that I feel is most pertinent for you to take notice of. As far as I am aware, this urgent story has not yet been discussed in the Australian media.

I present to my readers the following article from The Atlantic magazine

I quote the article as follows:

“James Somers, Sep 26, 2017

There were six hours during the night of April 10, 2014, when the entire population of Washington State had no 911 service. People who called for help got a busy signal. One Seattle woman dialed 911 at least 37 times while a stranger was trying to break into her house. When he finally crawled into her living room through a window, she picked up a kitchen knife. The man fled.

The 911 outage, at the time the largest ever reported, was traced to software running on a server in Englewood, Colorado. Operated by a systems provider named Intrado, the server kept a running counter of how many calls it had routed to 911 dispatchers around the country. Intrado programmers had set a threshold for how high the counter could go. They picked a number in the millions.

Shortly before midnight on April 10, the counter exceeded that number, resulting in chaos. Because the counter was used to generate a unique identifier for each call, new calls were rejected. And because the programmers hadn’t anticipated the problem, they hadn’t created alarms to call attention to it. Nobody knew what was happening. Dispatch centers in Washington, California, Florida, the Carolinas, and Minnesota, serving 11 million Americans, struggled to make sense of reports that callers were getting busy signals. It took until morning to realize that Intrado’s software in Englewood was responsible, and that the fix was to change a single number.
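
[To make the kind of flaw described above more concrete, here is a small hypothetical sketch of my own, in the Python language; it is not Intrado's actual code. It shows a call router whose identifier counter has a hard-coded ceiling and no alarm, so that calls are silently rejected once the ceiling is reached.]

```python
# Hypothetical illustration only (not Intrado's real system): a router
# that assigns each call an identifier from a running counter with a
# fixed ceiling, and no alarm when that ceiling is exceeded.

MAX_CALL_ID = 40_000_000  # "a number in the millions", chosen arbitrarily here

class CallRouter:
    def __init__(self):
        self.counter = 0

    def route(self, call):
        if self.counter >= MAX_CALL_ID:
            # No alarm is raised and no failover occurs: the call is
            # silently rejected, which callers hear as a busy signal.
            return None
        self.counter += 1
        return self.counter  # the unique identifier used to dispatch the call
```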

Not long ago, emergency calls were handled locally. Outages were small and easily diagnosed and fixed. The rise of cellphones and the promise of new capabilities—what if you could text 911? or send videos to the dispatcher?—drove the development of a more complex system that relied on the internet. For the first time, there could be such a thing as a national 911 outage. There have now been four in as many years.

It’s been said that software is “eating the world.” More and more, critical systems that were once controlled mechanically, or by people, are coming to depend on code. This was perhaps never clearer than in the summer of 2015, when on a single day, United Airlines grounded its fleet because of a problem with its departure-management system; trading was suspended on the New York Stock Exchange after an upgrade; the front page of The Wall Street Journal’s website crashed; and Seattle’s 911 system went down again, this time because a different router failed. The simultaneous failure of so many software systems smelled at first of a coordinated cyberattack. Almost more frightening was the realization, late in the day, that it was just a coincidence.

“When we had electromechanical systems, we used to be able to test them exhaustively,” says Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years. She became known for her report on the Therac-25, a radiation-therapy machine that killed six patients because of a software error. “We used to be able to think through all the things it could do, all the states it could get into.” The electromechanical interlockings that controlled train movements at railroad crossings, for instance, only had so many configurations; a few sheets of paper could describe the whole system, and you could run physical trains against each configuration to see how it would behave. Once you’d built and tested it, you knew exactly what you were dealing with.
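
[As an aside of my own, the point about exhaustive testing can be illustrated with a toy example, again hypothetical and not drawn from Leveson's work: a tiny two-signal interlocking has so few configurations that every one of them can simply be enumerated and checked against the safety rule.]

```python
# Toy illustration of exhaustive testing: enumerate every configuration
# of a two-signal interlocking and check that both signals are never
# green at the same time.
from itertools import product

STATES = ["red", "green"]

def safe(signal_a, signal_b):
    return not (signal_a == "green" and signal_b == "green")

for a, b in product(STATES, STATES):
    print(a, b, "safe" if safe(a, b) else "UNSAFE")
```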

Software is different. Just by editing the text in a file somewhere, the same hunk of silicon can become an autopilot or an inventory-control system. This flexibility is software’s miracle, and its curse. Because it can be changed cheaply, software is constantly changed; and because it’s unmoored from anything physical—a program that is a thousand times more complex than another takes up the same actual space—it tends to grow without bound. “The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”

Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing. Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”

This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”

The attempts now underway to change how we make software all seem to start with the same premise: Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.

Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code. When you press your foot down on your car’s accelerator, for instance, you’re no longer controlling anything directly; there’s no mechanical link from the pedal to the throttle. Instead, you’re issuing a command to a piece of software that decides how much air to give the engine. The car is a computer you can sit inside of. The steering wheel and pedals might as well be keyboard keys.

Like everything else, the car has been computerized to enable new features. When a program is in charge of the throttle and brakes, it can slow you down when you’re too close to another car, or precisely control the fuel injection to help you save on gas. When it controls the steering, it can keep you in your lane as you start to drift, or guide you into a parking space. You couldn’t build these features without code. If you tried, a car might weigh 40,000 pounds, an immovable mass of clockwork.

Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code. But just because we can’t see the complexity doesn’t mean that it has gone away.

The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning. As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.

What made programming so difficult was that it required you to think like a computer. The strangeness of it was in some sense more vivid in the early days of computing, when code took the form of literal ones and zeros. Anyone looking over a programmer’s shoulder as they pored over line after line like “100001010011” and “000010011110” would have seen just how alienated the programmer was from the actual problems they were trying to solve; it would have been impossible to tell whether they were trying to calculate artillery trajectories or simulate a game of tic-tac-toe. The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.

“The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work. “Software engineers like to provide all kinds of tools and stuff for coding errors,” she says, referring to IDEs. “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”

In September 2007, Jean Bookout was driving on the highway with her best friend in a Toyota Camry when the accelerator seemed to get stuck. When she took her foot off the pedal, the car didn’t slow down. She tried the brakes but they seemed to have lost their power. As she swerved toward an off-ramp going 50 miles per hour, she pulled the emergency brake. The car left a skid mark 150 feet long before running into an embankment by the side of the road. The passenger was killed. Bookout woke up in a hospital a month later.

The incident was one of many in a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible. The National Highway Traffic Safety Administration enlisted software experts from NASA to perform an intensive review of Toyota’s code. After nearly 10 months, the NASA team hadn’t found evidence that software was the cause—but said they couldn’t prove it wasn’t.

It was during litigation of the Bookout accident that someone finally found a convincing connection. Michael Barr, an expert witness for the plaintiff, had a team of software experts spend 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, what’s already there; eventually the code becomes impossible to follow, let alone to test exhaustively for flaws.

Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it. “You have software watching the software,” Barr testified. “If the software malfunctions and the same program or same app that is crashed is supposed to save the day, it can’t save the day because it is not working.”
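
A single flipped bit is easy to demonstrate in isolation. The toy Python lines below, purely illustrative and nothing to do with Toyota's code, flip one bit of a stored throttle value and show how far the number moves.

    # Illustrative only: what one flipped bit can do to a stored value.
    target_throttle = 12                    # say, 12 percent open; binary 0000 1100

    flipped = target_throttle ^ (1 << 6)    # a memory fault flips a single bit (bit 6)

    print(target_throttle, "->", flipped)   # 12 -> 76: a very different command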

Barr’s testimony made the case for the plaintiff, resulting in $3 million in damages for Bookout and her friend’s family. According to The New York Times, it was the first of many similar cases against Toyota to bring to trial problems with the electronic throttle-control system, and the first time Toyota was found responsible by a jury for an accident involving unintended acceleration. The parties decided to settle the case before punitive damages could be awarded. In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.

There will be more bad days for software. It’s important that we get better at making it, because if we don’t, and as software becomes more sophisticated and connected—as it takes control of more critical functions—those days could get worse.

The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little. There is a small but growing chorus that worries the status quo is unsustainable. “Even very good programmers are struggling to make sense of the systems that they are working with,” says Chris Granger, a software developer who worked as a lead at Microsoft on Visual Studio, an IDE that costs $1,199 a year and is used by nearly a third of all professional programmers. He told me that while he was at Microsoft, he arranged an end-to-end study of Visual Studio, the only one that had ever been done. For a month and a half, he watched behind a one-way mirror as people wrote code. “How do they use tools? How do they think?” he said. “How do they sit at the computer, do they touch the mouse, do they not touch the mouse? All these things that we have dogma around that we haven’t actually tested empirically.”

The findings surprised him. “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on — so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.

John Resig had been noticing the same thing among his students. Resig is a celebrated programmer of JavaScript—software he wrote powers over half of all websites—and a tech lead at the online-education site Khan Academy. In early 2012, he had been struggling with the site’s computer-science curriculum. Why was it so hard to learn to program? The essential problem seemed to be that code was so abstract. Writing software was not like making a bridge out of popsicle sticks, where you could see the sticks and touch the glue. To “make” a program, you typed words. When you wanted to change the behavior of the program, be it a game, or a website, or a simulation of physics, what you actually changed was text. So the students who did well—in fact the only ones who survived at all—were those who could step through that text one instruction at a time in their head, thinking the way a computer would, trying to keep track of every intermediate calculation. Resig, like Granger, started to wonder if it had to be that way. Computers had doubled in power every 18 months for the last 40 years. Why hadn’t programming changed?

The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.

Bret Victor does not like to write code. “It sounds weird,” he says. “When I want to make a thing, especially when I want to create something in software, there’s this initial layer of disgust that I have to push through, where I’m not manipulating the thing that I want to make, I’m writing a bunch of text into a text editor.”

“There’s a pretty strong conviction that that’s the wrong way of doing things.”

Victor has the mien of David Foster Wallace, with a lightning intelligence that lingers beneath a patina of aw-shucks shyness. He is 40 years old, with traces of gray and a thin, undeliberate beard. His voice is gentle, mournful almost, but he wants to share what’s in his head, and when he gets on a roll he’ll seem to skip syllables, as though outrunning his own vocal machinery.

Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering, and then went on, after grad school at the University of California, Berkeley, to work at a company that develops music synthesizers. It was a problem perfectly matched to his dual personality: He could spend as much time thinking about the way a performer makes music with a keyboard—the way it becomes an extension of their hands—as he could thinking about the mathematics of digital signal processing.

By the time he gave the talk that made his name, the one that Resig and Granger saw in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.

“Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.” That code now takes the form of letters on a screen in a language like C or Java (derivatives of Fortran and ALGOL), instead of a stack of cards with holes in it, doesn’t make it any less dead, any less indirect.

There is an analogy to word processing. It used to be that all you could see in a program for writing documents was the text itself, and to change the layout or font or margins, you had to write special “control codes,” or commands that would tell the computer that, for instance, “this part of the text should be in italics.” The trouble was that you couldn’t see the effect of those codes until you printed the document. It was hard to predict what you were going to get. You had to imagine how the codes were going to be interpreted by the computer—that is, you had to play computer in your head.

Then WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.” When you marked a passage as being in italics, the letters tilted right there on the screen. If you wanted to change the margin, you could drag a ruler at the top of the screen—and see the effect of that change. The document thereby came to feel like something real, something you could poke and prod at. Just by looking you could tell if you’d done something wrong. Control of a sophisticated system—the document’s layout and formatting engine—was made accessible to anyone who could click around on a page.

Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling. And it was the proper job of programmers to ensure that someday they wouldn’t have to.

There was precedent enough to suggest that this wasn’t a crazy idea. Photoshop, for instance, puts powerful image-processing algorithms in the hands of people who might not even know what an algorithm is. It’s a complicated piece of software, but complicated in the way a good synth is complicated, with knobs and buttons and sliders that the user learns to play like an instrument. Squarespace, a company that is perhaps best known for advertising aggressively on podcasts, makes a tool that lets users build websites by pointing and clicking, instead of by writing code in HTML and CSS. It is powerful enough to do work that once would have been done by a professional web designer.

But those were just a handful of examples. The overwhelming reality was that when someone wanted to do something interesting with a computer, they had to write code. Victor, who is something of an idealist, saw this not so much as an opportunity but as a moral failing of programmers at large. His talk was a call to arms.

At the heart of it was a series of demos that tried to show just how primitive the available tools were for various problems—circuit design, computer animation, debugging algorithms—and what better ones might look like. His demos were virtuosic. The one that captured everyone’s imagination was, ironically enough, the one that on its face was the most trivial. It showed a split screen with a game that looked like Mario on one side and the code that controlled it on the other. As Victor changed the code, things in the game world changed: He decreased one number, the strength of gravity, and the Mario character floated; he increased another, the player’s speed, and Mario raced across the screen.

Suppose you wanted to design a level where Mario, jumping and bouncing off of a turtle, would just make it into a small passageway. Game programmers were used to solving this kind of problem in two stages: First, you stared at your code—the code controlling how high Mario jumped, how fast he ran, how bouncy the turtle’s back was—and made some changes to it in your text editor, using your imagination to predict what effect they’d have. Then, you’d replay the game to see what actually happened.

[Image: shadow Marios move on the left half of a screen as a mouse drags sliders on the right half.]

Victor wanted something more immediate. “If you have a process in time,” he said, referring to Mario’s path through the level, “and you want to see changes immediately, you have to map time to space.” He hit a button that showed not just where Mario was right now, but where he would be at every moment in the future: a curve of shadow Marios stretching off into the far distance. What’s more, this projected path was reactive: When Victor changed the game’s parameters, now controlled by a quick drag of the mouse, the path’s shape changed. It was like having a god’s-eye view of the game. The whole problem had been reduced to playing with different parameters, as if adjusting levels on a stereo receiver, until you got Mario to thread the needle. With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
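
The idea behind the demo, recomputing the entire future path every time a parameter changes, can be sketched in a few lines of Python. The physics and the parameter names here are invented; Victor's actual demo was far richer.

    # Hypothetical sketch of "mapping time to space": recompute the whole future
    # path whenever a parameter changes, instead of replaying the game to find out.
    def trajectory(gravity, run_speed, jump_speed, steps=100):
        x, y, vy = 0.0, 0.0, jump_speed
        path = []
        for _ in range(steps):
            x += run_speed
            y = max(0.0, y + vy)
            vy -= gravity
            path.append((x, y))
        return path

    # Dragging a slider just calls trajectory() again with the new value; every
    # "shadow Mario" along the path is redrawn at once.
    floaty = trajectory(gravity=0.2, run_speed=1.0, jump_speed=3.0)
    heavy = trajectory(gravity=1.5, run_speed=1.0, jump_speed=3.0)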

When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”

When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns … [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.

Chris Granger, who had worked at Microsoft on Visual Studio, was likewise inspired. Within days of seeing a video of Victor’s talk, in January of 2012, he built a prototype of a new programming environment. Its key capability was that it would give you instant feedback on your program’s behavior. You’d see what your system was doing right next to the code that controlled it. It was like taking off a blindfold. Granger called the project “Light Table.”

In April of 2012, he sought funding for Light Table on Kickstarter. In programming circles, it was a sensation. Within a month, the project raised more than $200,000. The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.

But seeing the impact that his talk ended up having, Bret Victor was disillusioned. “A lot of those things seemed like misinterpretations of what I was saying,” he said later. He knew something was wrong when people began to invite him to conferences to talk about programming tools. “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.

In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went one further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface. Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.

Of course, to do that, you’d have to get programmers themselves on board. In a recent essay, Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.” Exciting work of this sort, in particular a class of tools for “model-based design,” was already underway, he wrote, and had been for years, but most programmers knew nothing about it.

“If you really look hard at all the industrial goods that you’ve got out there, that you’re using, that companies are using, the only non-industrial stuff that you have inside this is the code.” Eric Bantégnie is the founder of Esterel Technologies (now owned by ANSYS), a French company that makes tools for building safety-critical software. Like Victor, Bantégnie doesn’t think engineers should develop large systems by typing millions of lines of code into an IDE. “Nobody would build a car by hand,” he says. “Code is still, in many places, handicraft. When you’re crafting manually 10,000 lines of code, that’s okay. But you have systems that have 30 million lines of code, like an Airbus, or 100 million lines of code, like your Tesla or high-end cars—that’s becoming very, very complicated.”

Bantégnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules. If you were making the control system for an elevator, for instance, one rule might be that when the door is open, and someone presses the button for the lobby, you should close the door and start moving the car. In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
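
A rough impression of that elevator model, written out as an explicit transition table in Python rather than drawn in a graphical tool, might look like the sketch below. The states come from the example above; the encoding itself is only illustrative.

    # A hand-rolled impression of the elevator model: the states and the only
    # transitions allowed between them. In a model-based tool this table is a
    # diagram, and the production code is generated from it.
    TRANSITIONS = {
        ("door_open",   "lobby_button"): "door_closed",
        ("door_closed", "start"):        "moving",
        ("moving",      "arrive"):       "door_closed",
        ("door_closed", "open_button"):  "door_open",
    }

    def step(state, event):
        # Any event not listed for the current state is ignored, so the car
        # can never start moving while the door is open.
        return TRANSITIONS.get((state, event), state)

    state = "door_open"
    for event in ["lobby_button", "start", "arrive", "open_button"]:
        state = step(state, event)
        print(event, "->", state)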

It’s not quite Photoshop. The beauty of Photoshop, of course, is that the picture you’re manipulating on the screen is the final product. In model-based design, by contrast, the picture on your screen is more like a blueprint. Still, making software this way is qualitatively different than traditional programming. In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.

“Typically the main problem with software coding—and I’m a coder myself,” Bantégnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”

On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself. Too much is lost going from one to the other. The idea behind model-based design is to close the gap. The very same model is used both by system designers to express what they want and by the computer to automatically generate code.

Of course, for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to. “We have benefited from fortunately 20 years of initial background work,” Bantégnie says.

Esterel Technologies, which was acquired by ANSYS in 2012, grew out of research begun in the 1980s by the French nuclear and aerospace industries, who worried that as safety-critical code ballooned in complexity, it was getting harder and harder to keep it free of bugs. “I started in 1988,” says Emmanuel Ledinot, the Head of Scientific Studies for Dassault Aviation, a French manufacturer of fighter jets and business aircraft. “At the time, I was working on military avionics systems. And the people in charge of integrating the systems, and debugging them, had noticed that the number of bugs was increasing.” The 80s had seen a surge in the number of onboard computers on planes. Instead of a single flight computer, there were now dozens, each responsible for highly specialized tasks related to control, navigation, and communications. Coordinating these systems to fly the plane as data poured in from sensors and as pilots entered commands required a symphony of perfectly timed reactions. “The handling of these hundreds of and even thousands of possible events in the right order, at the right time,” Ledinot says, “was diagnosed as the main cause of the bug inflation.”

Ledinot decided that writing such convoluted code by hand was no longer sustainable. It was too hard to understand what it was doing, and almost impossible to verify that it would work correctly. He went looking for something new. “You must understand that to change tools is extremely expensive in a process like this,” he said in a talk. “You don’t take this type of decision unless your back is against the wall.”

He began collaborating with Gerard Berry, a computer scientist at INRIA, the French computing-research center, on a tool called Esterel—a portmanteau of the French for “real-time.” The idea behind Esterel was that while traditional programming languages might be good for describing simple procedures that happened in a predetermined order—like a recipe—if you tried to use them in systems where lots of events could happen at nearly any time, in nearly any order—like in the cockpit of a plane—you inevitably got a mess. And a mess in control software was dangerous. In a paper, Berry went as far as to predict that “low-level programming techniques will not remain acceptable for large safety-critical programs, since they make behavior understanding and analysis almost impracticable.”

Esterel was designed to make the computer handle this complexity for you. That was the promise of the model-based approach: Instead of writing normal programming code, you created a model of the system’s behavior—in this case, a model focused on how individual events should be handled, how to prioritize events, which events depended on which others, and so on. The model becomes the detailed blueprint that the computer would use to do the actual programming.

Ledinot and Berry worked for nearly 10 years to get Esterel to the point where it could be used in production. “It was in 2002 that we had the first operational software-modeling environment with automatic code generation,” Ledinot told me, “and the first embedded module in Rafale, the combat aircraft.” Today, the ANSYS SCADE product family (for “safety-critical application development environment”) is used to generate code by companies in the aerospace and defense industries, in nuclear power plants, transit systems, heavy industry, and medical devices. “My initial dream was to have SCADE-generated code in every plane in the world,” Bantégnie, the founder of Esterel Technologies, says, “and we’re not very far off from that objective.” Nearly all safety-critical code on the Airbus A380, including the system controlling the plane’s flight surfaces, was generated with ANSYS SCADE products.

Part of the draw for customers, especially in aviation, is that while it is possible to build highly reliable software by hand, it can be a Herculean effort. Ravi Shivappa, the VP of group software engineering at Meggitt PLC, an ANSYS customer which builds components for airplanes, like pneumatic fire detectors for engines, explains that traditional projects begin with a massive requirements document in English, which specifies everything the software should do. (A requirement might be something like, “When the pressure in this section rises above a threshold, open the safety valve, unless the manual-override switch is turned on.”) The problem with describing the requirements this way is that when you implement them in code, you have to painstakingly check that each one is satisfied. And when the customer changes the requirements, the code has to be changed, too, and tested extensively to make sure that nothing else was broken in the process.

The cost is compounded by exacting regulatory standards. The FAA is fanatical about software safety. The agency mandates that every requirement for a piece of safety-critical software be traceable to the lines of code that implement it, and vice versa. So every time a line of code changes, it must be retraced to the corresponding requirement in the design document, and you must be able to demonstrate that the code actually satisfies the requirement. The idea is that if something goes wrong, you’re able to figure out why; the practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.

As Bantégnie explains, the beauty of having a computer turn your requirements into code, rather than a human, is that you can be sure—in fact you can mathematically prove—that the generated code actually satisfies those requirements. Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”

Still, most software, even in the safety-obsessed world of aviation, is made the old-fashioned way, with engineers writing their requirements in prose and programmers coding them up in a programming language like C. As Bret Victor made clear in his essay, model-based design is relatively unusual. “A lot of people in the FAA think code generation is magic, and hence call for greater scrutiny,” Shivappa told me.

Most programmers feel the same way. They like code. At least they understand it. Tools that write your code for you and verify its correctness using the mathematics of “finite-state machines” and “recurrent systems” sound esoteric and hard to use, if not just too good to be true.

It is a pattern that has played itself out before. Whenever programming has taken a step away from the writing of literal ones and zeros, the loudest objections have come from programmers. Margaret Hamilton, a celebrated software engineer on the Apollo missions—in fact the coiner of the phrase “software engineering”—told me that during her first year at the Draper lab at MIT, in 1964, she remembers a meeting where one faction was fighting the other about transitioning away from “some very low machine language,” as close to ones and zeros as you could get, to “assembly language.” “The people at the lowest level were fighting to keep it. And the arguments were so similar: ‘Well how do we know assembly language is going to do it right?’”

“Guys on one side, their faces got red, and they started screaming,” she said. She said she was “amazed how emotional they got.”

Emmanuel Ledinot, of Dassault Aviation, pointed out that when assembly language was itself phased out in favor of the programming languages still popular today, like C, it was the assembly programmers who were skeptical this time. No wonder, he said, that “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”

The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”

Which sounds almost like a joke, but for proponents of the model-based approach, it’s an important point: We already know how to make complex software reliable, but in so many places, we’re choosing not to. Why?

In 2011, Chris Newcombe had been working at Amazon for almost seven years, and had risen to be a principal engineer. He had worked on some of the company’s most critical systems, including the retail-product catalog and the infrastructure that managed every Kindle device in the world. He was a leader on the highly prized Amazon Web Services team, which maintains cloud servers for some of the web’s biggest properties, like Netflix, Pinterest, and Reddit. Before Amazon, he’d helped build the backbone of Steam, the world’s largest online-gaming service. He is one of those engineers whose work quietly keeps the internet running. The products he’d worked on were considered massive successes. But all he could think about was that buried deep in the designs of those systems were disasters waiting to happen.

“Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”

Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.

This is why he was so intrigued when, in the appendix of a paper he’d been reading, he came across a strange mixture of math and code—or what looked like code—that described an algorithm in something called “TLA+.” The surprising part was that this description was said to be mathematically precise: An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.

TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy (say, if you were programming an ATM, a constraint might be that you can never withdraw the same money twice from your checking account). TLA+ then exhaustively checks that your logic does, in fact, satisfy those constraints. If not, it will show you exactly how they could be violated.
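
TLA+ has its own mathematical notation, but the spirit of exhaustive checking can be suggested with a toy Python sketch: enumerate every reachable state of a tiny ATM model and test the chosen constraint in each one. Everything here is invented for illustration and is far cruder than a real TLA+ specification.

    # Toy illustration of exhaustive checking (not TLA+ itself): visit every
    # reachable state of a tiny ATM model and test a constraint in each one.
    def next_states(balance, withdrawn):
        for amount in (20, 50):            # the customer may try to withdraw 20 or 50
            if balance >= amount:
                yield (balance - amount, withdrawn + amount)

    def check(initial_balance=100):
        seen, frontier = set(), [(initial_balance, 0)]
        while frontier:
            state = frontier.pop()
            if state in seen:
                continue
            seen.add(state)
            balance, withdrawn = state
            # The constraint: you can never withdraw more than you started with.
            assert withdrawn <= initial_balance, f"constraint violated in state {state}"
            frontier.extend(next_states(balance, withdrawn))
        print(f"checked {len(seen)} reachable states; the constraint held in every one")

    check()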

The language was invented by Leslie Lamport, a Turing Award–winning computer scientist. With a big white beard and scruffy white hair, and kind eyes behind large glasses, Lamport looks like he might be one of the friendlier professors at the American Hogwarts. Now at Microsoft Research, he is known as one of the pioneers of the theory of “distributed systems,” which describes any computer system made of multiple parts that communicate with each other. Lamport’s work laid the foundation for many of the systems that power the modern web.

For Lamport, a major reason today’s software is so full of bugs is that programmers jump straight into writing code. “Architects draw detailed plans before a brick is laid or a nail is hammered,” he wrote in an article. “But few programmers write even a rough sketch of what their programs will do before they start coding.” Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,” he says. Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.

Newcombe and his colleagues at Amazon would go on to use TLA+ to find subtle, critical bugs in major systems, including bugs in the core algorithms behind S3, regarded as perhaps the most reliable storage engine in the world. It is now used widely at the company. In the tiny universe of people who had ever used TLA+, their success was not so unusual. An intern at Microsoft used TLA+ to catch a bug that could have caused every Xbox in the world to crash after four hours of use. Engineers at the European Space Agency used it to rewrite, with 10 times less code, the operating system of a probe that was the first to ever land softly on a comet. Intel uses it regularly to verify its chips.

But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols. For Lamport, this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”

Lamport sees this failure to think mathematically about what they’re doing as the problem of modern software development in a nutshell: The stakes keep rising, but programmers aren’t stepping up—they haven’t developed the chops required to handle increasingly complex problems. “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”

Newcombe isn’t so sure that it’s the programmer who is to blame. “I’ve heard from Leslie that he thinks programmers are afraid of math. I’ve found that programmers aren’t aware—or don’t believe—that math can help them handle complexity. Complexity is the biggest challenge for programmers.” The real problem in getting people to use TLA+, he said, was convincing them it wouldn’t be a waste of their time. Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.

Most programmers who took computer science in college have briefly encountered formal methods. Usually they’re demonstrated on something trivial, like a program that counts up from zero; the student’s job is to mathematically prove that the program does, in fact, count up from zero.

“I needed to change people’s perceptions on what formal methods were,” Newcombe told me. Even Lamport himself didn’t seem to fully grasp this point: Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.

For one thing, he said that when he was introducing colleagues at Amazon to TLA+ he would avoid telling them what it stood for, because he was afraid the name made it seem unnecessarily forbidding: “Temporal Logic of Actions” has exactly the kind of highfalutin ring to it that plays well in academia, but puts off most practicing programmers. He tried also not to use the terms “formal,” “verification,” or “proof,” which reminded programmers of tedious classroom exercises. Instead, he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.

He has since left Amazon for Oracle, where he’s been able to convince his new colleagues to give TLA+ a try. For him, using these tools is now a matter of responsibility. “We need to get better at this,” he said.

“I’m self-taught, been coding since I was nine, so my instincts were to start coding. That was my only—that was my way of thinking: You’d sketch something, try something, you’d organically evolve it.” In his view, this is what many programmers today still do. “They google, and they look on Stack Overflow” (a popular website where programmers answer each other’s technical questions) “and they get snippets of code to solve their tactical concern in this little function, and they glue it together, and iterate.”

“And that’s completely fine until you run smack into a real problem.”

In the summer of 2015, a pair of American security researchers, Charlie Miller and Chris Valasek, convinced that car manufacturers weren’t taking software flaws seriously enough, demonstrated that a 2014 Jeep Cherokee could be remotely controlled by hackers. They took advantage of the fact that the car’s entertainment system, which has a cellular connection (so that, for instance, you can start your car with your iPhone), was connected to more central systems, like the one that controls the windshield wipers, steering, acceleration, and brakes (so that, for instance, you can see guidelines on the rearview screen that respond as you turn the wheel). As proof of their attack, which they developed on nights and weekends, they hacked into Miller’s car while a journalist was driving it on the highway, and made it go haywire; the journalist, who knew what was coming, panicked when they cut the engines, forcing him to a slow crawl on a stretch of road with no shoulder to escape to.

Although they didn’t actually create one, they showed that it was possible to write a clever piece of software, a “vehicle worm,” that would use the onboard computer of a hacked Jeep Cherokee to scan for and hack others; had they wanted to, they could have had simultaneous access to a nationwide fleet of vulnerable cars and SUVs. (There were at least five Fiat Chrysler models affected, including the Jeep Cherokee.) One day they could have told them all to, say, suddenly veer left or cut the engines at high speed.

“We need to think about software differently,” Valasek told me. Car companies have long assembled their final product from parts made by hundreds of different suppliers. But where those parts were once purely mechanical, they now, as often as not, come with millions of lines of code. And while some of this code—for adaptive cruise control, for auto braking and lane assist—has indeed made cars safer (“The safety features on my Jeep have already saved me countless times,” says Miller), it has also created a level of complexity that is entirely new. And it has made possible a new kind of failure.

“There are lots of bugs in cars,” Gerard Berry, the French researcher behind Esterel, said in a talk. “It’s not like avionics—in avionics it’s taken very seriously. And it’s admitted that software is different from mechanics.” The automotive industry is perhaps among those that haven’t yet realized they are actually in the software business.

“We don’t in the automaker industry have a regulator for software safety that knows what it’s doing,” says Michael Barr, the software expert who testified in the Toyota case. NHTSA, he says, “has only limited software expertise. They’ve come at this from a mechanical history.” The same regulatory pressures that have made model-based design and code generation attractive to the aviation industry have been slower to come to car manufacturing. Emmanuel Ledinot, of Dassault Aviation, speculates that there might be economic reasons for the difference, too. Automakers simply can’t afford to increase the price of a component by even a few cents, since it is multiplied so many millionfold; the computers embedded in cars therefore have to be slimmed down to the bare minimum, with little room to run code that hasn’t been hand-tuned to be as lean as possible. “Introducing model-based software development was, I think, for the last decade, too costly for them.”

One suspects the incentives are changing. “I think the autonomous car might push them,” Ledinot told me—“ISO 26262 and the autonomous car might slowly push them to adopt this kind of approach on critical parts.” (ISO 26262 is a safety standard for cars published in 2011.) Barr said much the same thing: In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.

“Computing is fundamentally invisible,” Gerard Berry said in his talk. “When your tires are flat, you look at your tires, they are flat. When your software is broken, you look at your software, you see nothing.”

“So that’s a big problem.”

Is it true that humans could become extinct if sperm counts in men continue to fall?

It seems that there are sound reasons to believe this may be the case

I believe that you will find the following words in my blog to be cause for concern. I have elected to combine two media articles into this blog presentation because I believe they complement each other, although there is some overlap of information between them. The first item is a news story derived from the BBC and the second is derived from the American Thinker news journal.

It is this latter article that I feel you will find to be the most confrontational and disturbing. It talks extensively about water pollution caused by estrogenic compounds entering public water supplies from industry, agriculture and artificial birth control chemicals. The article focuses heavily on the dangers of birth control chemicals.

I have emboldened the text that I feel may most interest you. I acknowledge that this important information has been derived from secondary sources, and furthermore that there may be a covert political agenda behind the American Thinker article. I will leave it to you to make up your own mind about this matter.

Article 1: [From the BBC]

Sperm count drop ‘could make humans extinct’
By Pallab Ghosh Science correspondent, BBC News

25 July 2017

Humans could become extinct if sperm counts in men continue to fall at current rates, a doctor has warned.

Researchers assessing the results of nearly 200 studies say sperm counts among men from North America, Europe, Australia, and New Zealand, seem to have halved in less than 40 years.

Some experts are sceptical of the Human Reproduction Update findings.

But lead researcher Dr Hagai Levine said he was “very worried” about what might happen in the future.

The assessment, one of the largest ever undertaken, brings together the results of 185 studies between 1973 and 2011.

Dr Levine, an epidemiologist, told the BBC that if the trend continued humans would become extinct.

Decline rate ‘increasing’

“If we will not change the ways that we are living and the environment and the chemicals that we are exposed to, I am very worried about what will happen in the future,” he said.

“Eventually we may have a problem, and with reproduction in general, and it may be the extinction of the human species.”

Scientists not involved in the study have praised the quality of the research but say that it may be premature to come to such a conclusion.

Dr Levine, from the Hebrew University of Jerusalem, found a 52.4% decline in sperm concentration, and a 59.3% decline in total sperm count in men from North America, Europe, Australia and New Zealand.

The study also indicates the rate of decline among men living in these countries is continuing and possibly even increasing.

In contrast, no significant decline was seen in South America, Asia and Africa, but the researchers point out that far fewer studies have been conducted on these continents. However, Dr Levine is concerned that eventually sperm counts could fall in these places too.

Many previous studies have indicated similar sharp declines in sperm count in developed economies, but skeptics say that a large proportion of them have been flawed.

Some have investigated a relatively small number of men, or included only men who attend fertility clinics and are, in any case, more likely to have low sperm counts.

There is also concern that studies that claim to show a decline in sperm counts are more likely to get published in scientific journals than those that do not.

Another difficulty is that early methods of counting sperm may have overestimated the true count.

Taken together these factors may have created a false view of falling sperm counts.

But the researchers claim to have accounted for some of these deficiencies, leaving some doubters, such as Prof Allan Pacey of Sheffield University, less skeptical.

He said: “I’ve never been particularly convinced by the many studies published so far claiming that human sperm counts have declined in the recent past.”

“However, the study today by Dr Levine and his colleagues deals head-on with many of the deficiencies of previous studies.”

But Prof Pacey believes that although the new study has reduced the possibility of errors it does not entirely remove them. So, he says, the results should be treated with caution.

“The debate has not yet been resolved and there is clearly much work still to be done.

“However, the paper does represent a step forward in the clarity of the data which might ultimately allow us to define better studies to examine this issue.”

There is no clear evidence for the reason for this apparent decrease. But it has been linked with exposure to chemicals used in pesticides and plastics, obesity, smoking, stress, diet, and even watching too much TV.

Dr Levine says that there is an urgent need to find out why sperm counts are decreasing and to find ways of reversing the trend.

“We must take action – for example, better regulation of man-made chemicals – and we must continue our efforts on tackling smoking and obesity.”

Sperm count drop ‘could make humans extinct’

Article 2: [From American Thinker]

July 27, 2017
Low sperm counts? Report fails to mention birth control in water supplies
By Monica Showalter

A study drawing on the evidence of a large number of earlier studies has found that male sperm counts have plunged since 1973. Scientists say a continuation of this trend could mean the human race will go extinct.

A team of scientists is sounding the alarm about declining sperm counts among men in the Western world.

As Hagai Levine, the lead author of a recently published study, told the BBC, “If we will not change the ways that we are living and the environment and the chemicals that we are exposed to, I am very worried about what will happen in the future.”

He added, “Eventually we may have a problem, and with reproduction in general, and it may be the extinction of the human species.”

Sperm counts have fallen by an average of 1.2 percent each year, and the compounded effect of that has been a drop of more than 50% in sperm counts today. CBS News reports that this follows a 1992 study showing the same 50% decline, so the rate of decline appears to have remained steady.

Sperm concentration decreased an average 52 percent between 1973 and 2011, while total sperm count declined by 59 percent during that period, researchers concluded after combining data from 185 studies. The research involved nearly 43,000 men in all.
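
As a rough check on how such figures compound (my own illustrative arithmetic, not part of the article), a steady annual decline of r sustained over n years leaves a remaining fraction of (1 − r)^n:

\[
\text{remaining fraction} = (1 - r)^{n}, \qquad (1 - 0.019)^{38} \approx 0.48
\]

On that reading, an annual decline of roughly 1.9 percent maintained over the 38 years from 1973 to 2011 would correspond to the reported fall of about 52 percent in sperm concentration.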

“We found that sperm counts and concentrations have declined significantly and are continuing to decline in men from Western countries,” said senior researcher Shanna Swan.

The one factor the report doesn’t mention, but probably should, is the credible reports that artificial birth control is getting into the water supply.

This is not the Catholic Church’s argument against contraception. The Church opposes artificial contraception, and discourages its use, because it believes it interferes with the natural male-female relationship in marriage. This is something entirely different: whether one person’s right to “control her own body” entitles her to damage the reproductive system of another person’s body. Ultimately, it is a question of whether a man has a right to control his own body, too. This is deep libertarian territory.

The Competitive Enterprise Institute’s Iain Murray has done significant research on the effects of birth control pills in the water supply, pointing out that the hormones they release into waterways, which can’t be filtered out, are creating “intersex” characteristics and sterility in fish populations. Fish exposed to the estrogen contamination exhibit sexual characteristics of both sexes and cannot reproduce. Scientific American has noted that, despite claims that the amounts present are small, their presence has harmed wildlife in the waterways. These fish might be canaries in the coal mine for us.

Writing in 2008, Murray noted:

As I demonstrate in The Really Inconvenient Truths, by any standard typically used by environmentalists, the pill is a pollutant. It does the same thing, just worse, as other chemicals they call pollution. But liberals have gone to extraordinary lengths in order to stop consideration of contraceptive estrogen as a pollutant.

When Bill Clinton’s Environmental Protection Agency launched its program to screen environmental estrogens (a program required under the Food Quality Protection Act), the committee postponed considering impacts from contraceptives. Instead, it has decided to screen and test only “pesticide chemicals, commercial chemicals, and environmental contaminants.” When and if it considers the impacts from oral contraceptives, the Agency says that its consideration will be limited because pharmaceutical regulation is a Food and Drug Administration concern.

As a result, the EPA’s program will focus all energies on the smallest-possible part of endocrine exposure in the environment and the lowest-risk area.

The U.S. Geological Survey has found problems, too.

A recent report from the U.S. Geological Survey (USGS) found that birth-control hormones excreted by women, flushed into waterways and eventually into drinking water, can also impact fish fertility up to three generations after exposure, raising questions about their effects on humans, who are consuming the drugs without even knowing it in each glass of water they drink.

The survey, published in March in the journal Scientific Reports, looked at the impact of the synthetic hormone 17α-ethinylestradiol (EE2), an ingredient of most contraceptive pills, in the water of Japanese medaka fish during the first week of their development.

While the exposed fish and their immediate offspring appeared unaffected, the second generation of fish struggled to fertilize eggs, with a 30% reduction in fertilization rates, and their embryos were less likely to survive. Even the third generation of fish showed fertility and survival rates impaired by about 20%, though they were never directly exposed to the hormone.

The article states that there have been problems in mammals, too.

The Vatican, too, has spoken out about the environmental damage of artificial birth control going unfiltered into the water supply, specifically linking it to male infertility. Agence France-Presse reports:

The contraceptive pill is polluting the environment and is in part responsible for male infertility, a report in the Vatican newspaper L’Osservatore Romano said Saturday.

The pill “has for some years had devastating effects on the environment by releasing tonnes of hormones into nature” through female urine, said Pedro Jose Maria Simon Castellvi, president of the International Federation of Catholic Medical Associations, in the report.

“We have sufficient evidence to state that a non-negligible cause of male infertility in the West is the environmental pollution caused by the pill,” he said, without elaborating further.

“We are faced with a clear anti-environmental effect which demands more explanation on the part of the manufacturers,” added Castellvi.

The blame cannot be laid on individuals who are attempting to do something they believe is responsible and useful and who have no intent to harm others.  Nobody here is calling for the pill’s prohibition in a free society, where people of all religions should be free to make their own choices.

There is reason, however, to look into whether birth control is affecting the water supply and contributing to this species-threatening decline in sperm counts. The science does show that compounds excreted by users are impossible to filter from the water supply, and there are credible reports that this is affecting male fertility.

I would add that the span of years covered by the study coincides with the rise of birth control pills, and that the decline is concentrated in the nations that use them.

A pro-contraception trade group, the Association of Reproductive Health Professionals, has admitted in a long editorial that there could be a problem, even as it tries to exculpate its industry, citing other possibilities.

The effect of estrogenic compounds in the water supply from industry, agriculture, and other sources raises concerns about human health and deserves scrutiny.

But all we see blamed in this and other editorials are “pesticide chemicals, commercial chemicals, and environmental contaminants,” as National Review’s article notes.

Seriously, why? Why not investigate everything and, if a problem is found, find new ways to filter the pollutants out of the water supply? For all the global warmers’ alarmed claims about the threat to the species, here is a real threat that is moving fast, and nothing effective is being done about it.
