The Magnitude of Morgellons

The Magnitude of Morgellons

Clifford E Carnicom
Dec 06 2016

Note: Carnicom Institute is not offering any medical advice or diagnosis with the presentation of this information. CI is acting solely as an independent research entity that is providing the results of extended observation and analysis of unusual biological conditions that are evident.  Each individual must work with their own health professional to establish any appropriate course of action and any health related comments in this paper are solely for informational purposes.

It must now be accepted by the global community that the “Morgellons” condition exists as a verifiable pathological condition. An objective online extensive health survey conducted by Carnicom Institute during this past year involving approximately 1000 participants, with significant international representation, substantiates this claim.  The survey clearly reveals and establishes that the health effects from the Morgellons condition are commensurate and on par with the global influences of such widespread conditions as Lyme’s disease and Chronic Fatigue Syndrome.   The demographics of the survey demonstrate a reasonably broad and representative segment of the population.  The symptoms are unique, real, physical, complex, and verifiable. Any measures or campaign to portray the situation as other than above are disingenuous and they are not confronting of the facts or extensive evidence on record.   It will be to the benefit of society when such realities are accepted in good order, and the measures taken to reduce or eliminate the unnecessary suffering that is in place.

A high-level summary of primary symptoms tabulated within the survey is available as follows:

Morgellons Research Project : Phase I
Primary Symptom Survey Results

A more detailed presentation of the survey results is available as follows:

Morgellons Research Project : Phase I
Symptom Survey Results 

The summary presentations above represent only a segment of information collected under the auspices of the survey.  Those health  practitioners and researchers interested in furthering the understanding and education of the Morgellons conditions are invited to apply to participate in the Carnicom Institute Community Health Professional Network (CHPN) available at the following location:

Carnicom Institute Morgellons Research Project : Phase II

Community Health Professional Network (CHPN)

Morgellons Research Project : Primary Symptom Survey Results

Morgellons Research Project : Phase I

Primary Symptom Survey Results

Clifford E Carnicom
Nov 05 2016

Note: Carnicom Institute is not offering any medical advice or diagnosis with the presentation of this information. CI is acting solely as an independent research entity that is providing the results of extended observation and analysis of unusual biological conditions that are evident.  Each individual must work with their own health professional to establish any appropriate course of action and any health related comments in this paper are solely for informational purposes.


To access the survey results in their entirety, please visit the the following page:


To apply for or visit Phase II of the Carnicom Institute Morgellons Research Project, please visit the following page:


The following list comprises the top 20th percentile of symptoms that have been compiled in Phase I of the Carnicom Institute Morgellons Research Project survey that has recently completed.  The online survey operated on this site for approximately one year and includes the results of approximately 1000 individuals.  Both short and long version survey results were collected.  The information below is a high level summary and it represents only a small portion of the data that is available via the Institute.


(Top 20th Percentile):

1. Materials or substances emerging from skin

2. Open and/or slow healing lesions

3. Rashes or other skin conditions

4. Itchy scalp

5. Change in the quality of vision (e.g., blurry or fatigued)

6. Unusual & chronic ringing in the ears

7. Unusual dental conditions

8. FATIGUE (6 overlapping sections of survey)

9. Shortness of breath, persistent or excess mucus or sputum

10. Stiffness in joints

11. Constipation, bloating, unusual weight gain

12. Anxiety, nervousness, irritability

13. Headaches, dry eyes & mouth

14. Forget events

15. Reliance on external memory aids (calendar, notes)

16. Loss of train of thought or flow of thread of conversations

17. Difficulty diagnosing, identifiying or explaining the illness

18. Skin problems

19. Associated conditions (diagnosed or examined) :

     a. Lyme’s Disease

     b. Chronic Fatigue

     c. Herpes

A Response to the University of California and the Carnegie Institute

A Response to the University of California and the Carnegie Institute

Clifford E Carnicom
Aug 22 2016

Preliminary Note:  A journalist of professional standing recently contacted Carnicom Institute requesting comments with respect to a recently published paper by the University of California.  The paper claims to issue an authoritative edict as a denial of geoengineering activities that are now actively practiced and that are detrimental to the global environment.  The following comments were provided to that journalist and they are made available to the public as follows:


The body of scientific work on geoengineering and bioengineering issues by Carnicom Institute spans close to twenty years.  The library of work, approximately 350 original research papers, encompasses a variety of scientific disciplines.  The methods and results, essentially with no exception, are reproducible and adhere to scientific protocols.  This evidence (not survey) based work is available for your review at:

(by category):


In addition, documentary summaries are available at:

(2005 full-version):

(2011 abbreviated version):

The remainder of this response will be necessarily brief: we can pursue further discussion later, should you choose.

Specifically, in reference to the UC “peer-review study” and the presentation on the UC website, I will make the following comments at this time:

  1. The paper in no way represents honest scientific work. This is a shining example of modern “science with an agenda” as opposed to truthful scientific pursuit. The paper is characterized more accurately as an exercise in social engineering versus fulfilling the requirements of the scientific method.
  1. The emphasis upon the act of “debunking,” in itself, is a prelude to a biased investigation. The term implies a strong association with an attempt to disprove, discredit, and refute claims without fulfilling the obligations to conduct the actual research that is required to answer a question or to solve a problem.
  1. The creation of the acronym “SLAP” at the onset is an obvious ruse and manipulative ploy to steer public perception toward ridicule. The term has not existed in the history of the issues and it was created specifically for the purpose above. It is an example of the many clever and subtle machinations to affect public psychology under the purported guise of professional presentation and credentials. It is a cheap ruse.
  1. It is understood that most individuals will never read the actual paper at the “core” of the study. I hope that you may choose to devote some time to this effort, as well as gain some familiarity with the body of Carnicom Institute research listed above. The UC paper can, of course, be dissected to infinity; however, I will make a few individual references to exemplify pertinent topics for discussion.

Let us begin with what appears to be the motive for the study; it speaks more strongly of the desire to influence public behavior than it does to seek observational and evidence-based data to substantiate the scientific method.

“Meanwhile, a growing number of studies have shown that quantifying and communicating the scientific consensus on contested issues such as vaccine safety and climate change can help lower public misperceptions and uncertainty(Myers et al 2015, vander Linden et al 2015, van der Linden et al 2015).

Here, therefore, we report the results of an expert survey in which we asked experts on atmospheric chemistry and atmospheric deposition to scientifically evaluate the claims of SLAP theorists.”

The first assumption implicit within this statement is that for some “unknown” reason, the public is in a state of “misperception” and “uncertainty.” Why would such an assumption need to exist for the scientific method to proceed?  This type of bias is a discredit to the acumen of the public.  Even casual research will reveal that the concern by the public regarding the geoengineering issue is now elevated to a global level.  By what right and upon what basis must we start our endeavor by assuming that this global population is ill-informed?

Notice the phrase “Here, therefore, we report the results of an expert survey….”  This phrase continues the mis-advised logic from above and it states the true motive for the project.  It is to “correct” the misguided ways of the global public in their “growing public distrust of elites and social institutions.”

The project is flawed from the beginning. It does not embody or represent the scientific method; it is not based upon direct observation, direct collection of evidence, the testing of hypothesis, and the fair and honest assessment of bonafide data to reach accurate and truthful conclusions.  None of the work or research in the paper is original.  This so-called “peer-review study” is an orchestrated and manipulative social engineering project; it is not science.

  1. If you continue to examine the processes adopted within the survey (an incomplete approach, at best, to a phenomenon of global proportions), you will see the frequent repetition of the words “thought” and “likely” (NOT observation, NOT evidence) by the claimed experts. No participant offers any objective data or pursuit of resolution to eliminate this ambiguous response. A more fair and thorough response to many of the questions posed would be: What steps are being taken to acquire the data to eliminate the ambiguity? What data do I need? Who is responsible for providing the data? How is the data audited? The peer-review process itself is now flawed and it does not assimilate independently (i.e. “citizen science”) acquired data, contributions, and reviews into science as it is now claimed to exist.
  1. We have an additional curiosity taking place. It will be noticed on multiple occasions that unexplainable data results were apparent to the participants. Subsequently, a generally uniform response of rejection was avowed. The thought process of rejection is not adequately explained and the dismissal is substituted with an ambiguous call for “more data.”

Where is the cry and demand for the data? Not a trailing and vague ending to the most critical questions at hand, but real data, impartial data, independent data, accountable data.  The lack of accountability on this global environmental issue is preposterous.

  1. There are, with no doubt, weaknesses and flaws that exist in the quality and standards of control for citizen collected samples. More importantly, we should be asking the question as to why citizens are in such a position to begin with. Maybe it is because of the inadequacy of the regulatory agencies to fulfill their own responsibilities for environment protection.
  1. There are many technical issues that can also be discussed within this paper. These issues are subject to serious evaluation and debate in comparison to how they have been cited as authoritative references. One example of this includes the elaborate discussion of a mixed “contrail-cirrus” mathematical model. The very basis of the model itself is open to contentious discussion. This and other topics can be discussed further by those with interest.
  1. For now, let me end this brief examination with attention to a closing phrase of the paper.

“We therefore offer the first peer-reviewed expert response on SLAP data.” … “The evidence as evaluated here does not point to a ….”

What a perfectly loaded and crafted phrase.  It is everything that the social engineers need to achieve their goals of manipulating and affecting public perception. Sarcasm aside, it is even more impressive because it is the “first.” This statement is a masterful conclusion of an incomplete and questionable process that avoids the hard-hitting realities and confrontations that come forth from TRUE science. Finally, I would claim that this paper does not present evidence; it present a series of ambiguous and incomplete responses to the reasonable demands from an alert and aware global population that is truly and genuinely concerned about our environment.

This is only a partial response to a purported accredited and authoritative study.  My hope is that readers will pursue honesty and thoroughness in these affairs and that they will be guided by their moral conscience toward truth.


Additional Notes:

1. Having attended the University of California at the onset of my higher education pursuits approximately 45 years ago, I must say that I am embarrassed and sorry for the state of education as it now exists in this country.  What was once considered to be an honor and privilege of attendance must now be accepted with a level of disgrace to the nobler goals that were once served.  I encourage each member of that institution, student, faculty and administrator, to reclaim the powers and benefits that come forth from comprehensive investigation and critical thinking to reach honest conclusions and assessments of the state of our world.

2. As of this date, the journalist referred to has not acknowledged receipt of the comments above.  This statement will be revised as circumstances warrant.

The Demise of Rainwater

The Demise of Rainwater

Clifford E Carnicom

A Paper to be Developed During
the Summer of 2016
(Last Edit Jun 20 2016)

The single most important chemical species in clouds and precipitation is the .. pH value.

Paul Crutzen, Nobel Prize Winner in Chemistry, 1995

Atmosphere, Climate and Change, Thomas Graedel & Paul J. Crutzen

Scientific American Library, 1997


Photo : Carnicom Institute

An analysis of five rainfall samples collected over a period of six months and spanning three states in the western United States has been completed.  There are five conclusions that are forthcoming:

1. The rainfall samples studied portray a smorgasbord of contamination. The contaminants appear to be both complex and numerous in nature.

2. There does not appear to be effective or comprehensive monitoring or regulation of the state of air quality, and consequently, rainfall quality in the United States at this time.

3. The results of the current analysis, utilizing more capable equipment and methods, are highly consistent with those that originated from this researcher close to two decades ago.

4. All reasonable requests or demands by the citizenry for the investigation and addressing of this state of affairs over this same time period have been refused or denied.

5. The level of contamination that exists poses both a risk and a threat to health, agriculture, biology, and the welfare of the planet.


Let us now proceed with some of the details.

We can begin with the pH, i.e., the acid or alkaline nature of rainfall.  Biochemical reactions take place (or, for that matter, do not take place..) at a specific temperature and pH.  If the system or environment for that reaction is disturbed with respect to the acidity and temperature, then the reaction itself is interfered with.  If the conditions depart far enough from what is required, the reaction may simply not even take place at all.  Such is the risk of interference to the acid-base nature of rainfall, upon which all life on this planet depends.


To be continued.





UV Detector & Lab Equipment Used for Summary View Data


Rainfall Analysis_16

Electrochemical Signature of Rainwater Tests for Trace Metals
as Determined by Differential Normal Pulse Voltammetry

The following metallic elements have been determined to exist, or to be strong candidates to exist, within a series of five rainwater samples that have been tested for trace metals.  The samples span three states across the country and six months of time.  The method applied is that of Differential Normal Pulse Voltammetry.  The level of detection for the method is on the order of parts per million (PPM).  This list considerably extends the scope of consideration for the future investigation and detection of metallic elements within rainwater.  The findings in the upper portion of the table are highly consistent with those under reporting by various laboratories across the country; those in the lower half serve to prompt further investigations into additional elements that are highly related in their properties within the periodic table.  An examination of the physical properties of these elements, in detail, will likely provide additional insight into the applications of use for these same elements.  It can be noticed that the majority of elements within the list act as reducing agents.

Element Measured Mean Redox Voltage
(Absolute Value)
Actual Redox Voltage
(Absolute Value)
Titanium (Ti) 1.63, 1.32, 1.24 1.63, 1.31, 1.23
Aluminum (Al) 1.67 1.66
Barium (Ba) 2.90 2.90
Strontium (Sr) 2.90 2.89
Magnesium (Mg) 2.66, 2.35 2.68, 2.37
Gallium (Ga) .52, .65 .56, .65
Scandium (Sc) 2.56, 2.09 2.60, 2.08
Zirconium (Zr) 1.45 1.43
Standard Error of Measurement 0.013 V; n = 15
(No information regarding concentration or concentration ranking is provided here)


Additional Inorganic Analyses:

Qualitative (Color Reagents) Test Results for Combined Rainfall Sample
A Value of 1 Indicates a Positive Test Result
Concentration of RainwaterSample ~15x
(No information regarding concentration or concentration ranking is provided here.)
(Chromium, Cyanide & Iron appear to be at minimal trace levels)


Qualitative Positive Test Examples:
Phosphates, Nitrates, Ammonia, Silica




Tests to Determine the Boiling Point
for the Concentrate Rainfall Sample Using an Oil Bath
(Contamination is Evident)





An Organic Extraction Process

(Results subsequently to be examined by Infrared Spectroscopy)


Infrared Spectrum of Rainfall Organic Extraction :

Water Soluble & Insoluble Components

(see previous photo)

(solvent influences removed)


Gas Chromatography (TCD) Applied to Organic Extracts

(tailing from varying polarities)




Biologicals Extracted from Rainfall Concentrate Samples


Additional Note:

I wish to thank Mr. John Whyte for his dedication and effort to organize and produce an environmental conference in Los Angeles, California during the summer of 2012. Mr. Whyte, in support of the speakers at the conference, provided the means for some of the environmental test equipment used in this report. I also wish to thank the general public for their assistance during this last year in the acquisition of important scientific instrumentation by Carnicom Institute. This report is made possible only by that generosity.

Clifford E Carnicom

Jun 18, 2016

To be continued.

A Clash of Evidence


A Clash of Evidence:
The Realities of Solar Radiation Management (SRM)

by Clifford E Carnicom
Apr 06 2016
Edit May 14 2016
Edit Jul 05 2016
(A Partial Editorial)

There are many environmental activists who assume a certain cause and relationship between active geoengineering programs and those projects that fall under the term of “Solar Radiation Management (SRM). This paper will reiterate the basic fallacy of that assumption, and it will direct the reader towards a more comprehensive inquiry of the true nature of the forces and agendas that are likely to be involved.

For those that do not wish to engage in the full length of this article,  the Solar Radiation Management principle is one of interfering with solar heat transfer to the earth.  There are various schemes for accomplishing this which will be discussed later; the most modest of the choices requires the introduction of certain types of particulates into the middle of the stratosphere (from about 7 to 30 miles above sea level).

The essential problem here is that geoengineering  activity as it is currently practiced (and for that matter, bioengineering as well), is operational in the troposphere (from ground level to an average of about 7 miles above sea level), and not the mid-stratosphere.  There is a world of difference between the two, but for that discussion you will have to muse yourself further into this paper.


Image source :


Before going further, however, it will be beneficial to provide a brief historical context for the issues and the language involved.  There is a track record of controversy and confusion, information and misinformation, official responses and denials, organization and disorganization, research and speculation, and authorities and personalities that now span close to two decades. Unfortunately, the progress of society coming to terms and truthfulness with the deliberate modification of the atmosphere, and ultimately the planet itself, has been slow.

So first, a little history of language and personalities.  The journalistic rise of the geoengineering issue began, to my best recollection, in the last few weeks of the year 1998.  A certain Canadian journalist came to prominence quite rapidly on a nationally syndicated radio show, with coined language and defined agendas to let the world know that something very different and important was to affect the world.  It is fair to say that I have never been at ease with either the language or the a priori “agenda” that was introduced, as they always seemed to be supported with substantial fanfare and attention, but without any basic science to support claims being made.  The issue was, essentially, outlined and served to the public without proper investigation and discussion.

It is worthwhile to investigate that history a bit, as it represents a good portion of why we are where we are today.  Most of us may not be aware that generational forces are now at play in our understanding of the geoengineering issue. The language introduced at that time was the use of the term “chemtrail”,  a term that never did have a formal, accurate, or scientific definition then, and it still does not today.  That deficiency alone has been enough to interfere with the proper investigation of environmental pollution and contaminants, and it remains moderately successful to this day.  Whether such language of derision and denial, but of popular appeal, was a product of personal creativity or design of influence I may never be able to state with certainty; I do, however, have my opinions on the matter and I see no benefits from the choice.  My separation and disdain for populist and ill-defined terminology that is used in vain to seek legal standing is known, and I shall not be party to perpetuate this dubious origin. Only those words that will stand up in a court of law have merit here, and you are the one that will need to make your case.

The second great coup of the early journalistic ‘work’ was to define, in the eyes of the public, the very reason for the existence of geoengineering programs before any science was in place to justify the claim.  Again, it was all far, far “too easy” for one of my persuasion.  Check your internet history books, but you will find that a global and covert operation of unprecedented scale was, by use of a curious combination of implication and certainty,  for the purpose of “reducing global warming.

History will show that there has been an incredible level of success in strategy and influence upon public perception with these implants.  They are, however, in reality travesties and injustices to the public cause.

What the public was ‘given’, therefore, was an unsubstantiated agenda, ill-defined language of popular attraction, and a host of ready-made and supported ‘detractors’ that raised a commotion, provided distraction and dispute;  all of these set the stage to successfully avoid journalistic integrity, scientific investigation, and accountability by public representatives.  The obstacles were all provided at little cost, but at great expense to the needs and interests of the public.

This strategy of framing public perception and discussion under the guise of potential benefit was generally effective for more than a decade.  Hard hitting journalism never did take place, thorough investigations were not launched, scientific work was not supported, and public officials were not held accountable.

The problem that developed was that the claim of ‘cooling the planet’  by using aircraft to disperse aerosols did not fit the facts of observation.  They did not fit them then, and they do not fit them now.  It has taken some time for this truth to become evident; I presented my first paper on this topic (Drought Inducement, Apr. 2002) in the early part of the last decade.  This work was followed by additional papers (Global Warming and Aerosols, Jan. 2004, A Global Warming Model, Apr. 2007, and A Geoengineering and Climate Change Model, Jan. 2015) during the course of the successive decades. The tenets of that investigative work are also confirmed on a broader level with documents issued by, for example, the International Panel on Climate Change (IPCC) (International Panel on Climate Change (IPCC) 1999, 17) and NASA (“Clouds & Radiation Fact Sheet : Feature Articles” 2016) on the net heating effects from “thin, high clouds.”

High, thin “clouds”, including those that originate from an introduced aerosol base, do not cool the planet; they heat it up.

The next piece of the puzzle that we must fit into the picture is Edward Teller, and specifically the paper by him entitled, “Global Warming and Ice Ages: Prospects for Physics-Based Modulation of Global Climate Change.”  This paper, authored in part by the developer of the hydrogen bomb, is often cited by activists themselves as one of the holy grails that proves that geoengineering operations are in place, and that they are indeed “cooling the planet” and “combating global warming” (albeit covertly, for some unknown reason). There are some important portions of the paper that have not been paid attention to;  this omission inappropriately supports a culture of popular belief that lacks scientific foundation.

Edward Teller does indeed propose various schemes for cooling the earth’s temperature, including the introduction of aerosols or particulates into the atmosphere. The issue, however, is WHERE in the atmosphere he proposes to do this, and the answer to this question is very relevant to the cause and purpose of this paper. It is even more revealing to point out the additional options that are both proposed and preferred by Edward Teller in his paper, as they help to place his atmospheric aerosol proposal into a better perspective.

Let us spend a brief time with the proposals of Edward Teller, as they are outlined in the paper cited above.  Please note that even within the introductory notes that Teller uses the phrase of introducing “scatterers” (i.e., light and heat) “into space from the vicinity of the earth”; this should give some indication of what the thrust of the thinking process is.  Teller proposes to introduce the scatterers into three different locations to artificially cool the earth (Teller 1997, 7):

1. Into the middle of the stratosphere (NOT the troposphere). The stratosphere is in the upper atmosphere, and the troposphere is the lower atmosphere. This important difference will be discussed in more detail a little later in this paper.

2. In orbit, in SPACE, approximately 4000 miles above the earth.

3. Deep in SPACE, approximately 400,000 miles from the center of the earth.

An obvious pattern of diverting the heat to locations distant from the earth should be apparent to us; it is one that has not been disclosed sufficiently within the current discussions taking place with respect to both geoengineering and climate control.

The reason the materials are proposed to be so distant from the earth is two-fold:

1. Most of the materials considered will absorb heat.

2. It is desired to have the captured heat radiate into space; not into the earth and its lower atmosphere.

The principles of the approach should not be difficult to grasp here, but they most certainly have been misrepresented in most discussions that are taking place with respect to current and active geoengineering (and bioengineering) operations.

If you hold a parasol over your head on a hot sunny day, it might keep you cooler. The air around you will still absorb that heat, however. The color and material of the umbrella is going to be another factor (i.e, albedo, specific heat, etc.) that you will want to consider. If you want to cool the planet, you are going to have to move the umbrella a lot further away – into space, for example. This is the essence of the Teller paper, and it is important to understand this proposal before certain terms of “solar radiation management” with respect to current geoengineering practices are bandied about. WHERE the material is injected into the atmosphere makes a big difference on the net heat effect, and this topic has largely been ignored within the popular circles of discussion on geoengineering. This discussion should lead one to think much more deeply about what the definition of geoengineering actually is, and how that definition compares to the realities of the projects and operations AS THEY ARE CURRENTLY AND ACTIVELY PRACTICED. Climate modification strategies, or more appropriately, environmental control strategies, are only one part of a much bigger picture.

The Teller paper has gained a lot of mileage in the geoengineering circles, and it is my opinion that much of this mileage is without merit and in ignorance. I must credit the Canadian journalist again for the majority of that progress, as the seed was planted very early in the game with a great deal of supposed ‘alternative media’ support. The Teller paper never explained the physics or consequences of introducing massive amounts of specific aerosol types into the lower atmosphere. The reason for this is simple; the paper was never intended to explain it because this act is not a viable way to cool down the earth. The Teller paper was inappropriately supported and attached to the observation of and media coverage of geoengineering (and bioengineering) operations as they are currently in place and operational.

Now let’s discuss some of the differences between the troposphere and the stratosphere in more detail. The distinction between what is real and hypothetical will never take place until we put at least some effort in that direction.

The troposphere is where weather is made. The troposphere is where airplanes generally fly. The troposphere is where the air is more dense and it is where pollution has a more immediate impact upon us. It is the where the majority of the earth’s atmosphere is, and consequently it is where we can breath and live. The troposphere has a profound and immediate impact upon our very existence on this planet. Roughly ¾ of the mass of the entire atmosphere is contained within the troposphere, the average height is about seven miles (a trip to the grocery store), and it is a veritable delicate eggshell of life for this planet. The troposphere is delicate and crucial to all life on this planet, and disturbance or pollution within it threatens our very existence. It cannot sustain serious damage without immediate consequence.

The stratosphere is where the air is very thin, centering closer to an average height of 20 miles above the earth. Airplanes cannot and do not fly in the mid-stratosphere regularly, as there is not enough air to support them; only specialized or high performance aircraft will rarely be able to visit this transitional zone to space. Geoengineering (and bioengineering) operations, in a practical aviation sense with current technology, cannot be practiced there. Teller makes clear that the preferred target for his ideas is generally in space, where the heat can feasibly be diverted or managed AWAY from the earth.

Readers may also wish to review an interview from several years past on this and related subjects; it is available via Freedom For All TV, which is based in Canada (“Freedom Free For All TV: Clifford Carnicom Interview – YouTube” 2016).

It is now that we can understand a portion of the dilemma before us. If we accept that aviation is a primary tool actively being used to artificially modify the atmosphere, then we know that this is occurring within the troposphere, and not the mid-stratosphere. But we also know, at least as based upon Teller’s models, that mid-stratospheric operations would be required to effect any type of practical mitigation of global climate warming. Teller also lets us know that long-term climate control by aircraft is hardly a preferred method, as it demands specialized high-performance aircraft and continual renewal to maintain its effectiveness. What is known, therefore, is that geoengineering (and bioengineering) operations AS THEY ARE NOW PRACTICED IN THE LOWER ATMOSPHERE, i.e., the troposphere, are not directed and motivated primarily toward climate control, including the purported mitigation of “global warming”.

The forces behind the implementation of active and current geoengineering operations have always understood this, and it has never been a logical motive for the current operations. This is the case regardless of the popular conceptions that have been circulated for far too long without contest.

It is certainly past time for the citizens of the world to understand this as well, including the many well-intentioned environmental individuals and organizations that serve this same citizenry.

The language may have changed some over the recent decades, but the confusion and obfuscation remain as strong as ever. It is past time to play the cards straight and to force each of us to confront the truths of the matter.

We must also pay some attention to the language now in vogue and how it changes. The terms ‘chemtrails’ and ‘global warming’ were foisted upon us in earlier days; ‘aerosols’ and ‘particulates’ were always favored from my position, but those terms do not exactly have popular Twitter appeal. They do, however, remain valid and accurate as far as the substance of the matter.

We have transitioned now to the more socially acceptable terms of climate change, geoengineering, and “solar radiation management”. Unfortunately, the confusion behind the terms remains as dysfunctional as ever. We can be assured that the definition of geoengineering (and bioengineering) as I understand it is not at all in agreement with many popularly held notions of that same term. Environmental modification and control is simply one small slice of a much bigger pie, as far as I am concerned. I will reiterate my scope of consideration for the term near the end of this paper.

We should, however, at least seek out the definition of the popular term (used by many environmental activists as well) “Solar Radiation Management”. This term refers to the management of climate through a modification of the earth’s heat balance; only one option of which involves the introduction of particulate matter into the stratosphere (NOT the troposphere).

Specifically, from the Royal Society:

“Solar Radiation Management (SRM) [are] techniques which reflect a small percentage of the sun’s light and heat back into space.”

Again, I will make the case here that the term cannot and does not apply to current and active geoengineering (and bioengineering) operations as they are currently practiced in the lower atmosphere (troposphere). The stratosphere is not the troposphere, and the troposphere is not the stratosphere. The physics of each layer within the atmosphere are completely different from one another and they cannot, in general, be “used” for the same purposes. You cannot talk about them or treat them as though there is no difference of importance.

You cannot rely on methods and definitions that have physical principles, meaning, and application within a certain domain (i.e., the stratosphere) and then apply those same methods and principles to a different domain (i.e., the troposphere).

To further assume that the practitioners of active geoengineering (and bioengineering) operations are active within the mid-stratosphere when they are not (as determined by direct observation) further undermines the case for protest of the actual modification of the lower atmosphere (i.e., the troposphere) that is taking place. Talk about misrepresentation and obfuscation of a global environmental and health issue; there is plenty of fodder to work with here.

To claim further that the motives of the geoengineering practitioners are beneficial and well-intended (i.e., “solar radiation management” and the curtailment of “global warming”) but that the operations are now known to actually cause harm because of a net heating effect is equally misguided. The operations as they are practiced are not an experiment of beneficent intent; the developers understand the physics and the applications quite well (within their sphere of interest). Rest assured that the web of deployment is not centered on, or confined to, the principles of “Solar Radiation Management”.

Current operations directly impact the lower atmosphere (troposphere) in which we all live and breathe; this assertion is now supported directly by field measurements. The particulate counts are real and observable, and the measurements have been made. Those measurements are not a matter solely of “climate control” consideration; they are of immediate impact and detriment to your health and well-being. Gravity works: the materials ultimately do reach ground level, and they are measurable in direct correspondence to activity levels. You may wish to think a little closer to home, in some respects, and become active on that front.

Incidentally, attention should probably be called to a particular segment of a particular interview from several years past; my recollection is that a Mr. George Knapp from the Coast to Coast network moderated the affair. It is another part of the social history, “alternative” media, and social impressionability that precedes us. You may or may not choose to investigate the affair as I report it here.

It was not made clear beforehand that multiple parties would be present on the interview, and fair representation of the sides of an issue can always be a topic of debate. What remains of interest to me is a particular response evoked from a particular Canadian journalist on the panel when I introduced the subject of “biological operations” (e.g., bioengineering) into the discussion. I think it is fair to say that I must have struck a nerve in the flow or agenda of the conversation. After the claim that biological operations are indeed an active component of the aerosol operations as they are now practiced, the response from this “Canadian journalist” was:

“There is not! There is not! I repeat there is not any evidence of biological operations available!” (to my best recollection). The response was immediate, emphatic and unqualified.

The show’s host then immediately switched to a commercial break after this statement was made. You may judge for yourself what dynamics transpired at that moment, but the forceful response certainly struck me as out of balance within a purported discussion of important environmental issues.

In the time made available, I refuted the unsubstantiated claim then. I refute it now as well.

I am only one researcher, and I hardly make claim to knowing all shades of an operation that I am not party to. Over the years, however, a “list of applications” has been developed which remains internally consistent with all known and observed data. The list has not changed in any significant fashion for more than a decade. I will continue to voice the claim that no discussion of geoengineering (or bioengineering) is of adequate scope unless it delves into each of the following domains:

1. Environmental modification and control (of broader scope than global temperature issues).
2. Military applications
3. Electromagnetic operations
4. Biological operations (including bioengineering)
5. Geophysical considerations
6. Surveillance System Development (LIDAR applications)
7. Exotic technology system monitoring

The prime-time audience may not be ready for the realities and implications of the various aspects itemized above, but they are ultimately deserving of them.

There are parties that continue to promulgate the thesis that Solar Radiation Management, i.e., the attempted mitigation of “global warming” via stratospheric modification, is at the crux of active geoengineering operations. There frequently remains the implication that the motives for operation are of good intention, even if the observations of consequence contradict that claim. Edward Teller’s paper is frequently cited as the basis for the implementation of theoretical concepts into actual operation, regardless of the physics or details involved. There are seldom, if ever, references to the differences between the impact of operations in the troposphere (lower atmosphere) and those in the stratosphere (upper atmosphere). There frequently is the assumption that the agendas of operation are known and defined by popular perceptions. For close to two decades the evidence has not supported these claims, and misrepresentation remains in place.

I would encourage that each of us seek common ground and understanding of the forces and applications that are likely operative within the spheres of active and practiced geoengineering (and bioengineering) operations. There is some value in review and observation of the social history and assumptions that accompany our evolution in the pursuit of truth. It is also wise to force good science and reason continuously into our deliberations and debates, and to admit our mistakes so that we may rise above them. If information, analyses and representations are inconsistent we must each be willing to confront those positions. I believe that the phrase has already been coined for us – “The Truth is Out There”, and it is the job of each one of us to help find it.


Clifford E Carnicom
April 06, 2016
Edit May 14, 2016
Edit July 05, 2016


Additional Notes:

Readers may also wish to become familiar with a model document that proposes an international ban on geoengineering (and bioengineering) practices; please refer to the following for additional information (“Global Ban on Geoengineering – Stop Global Geoengineering” 2016).

Appreciation is extended to Harold Saive for a note of clarification within this paper.


“Clouds & Radiation Fact Sheet : Feature Articles.” 2016. Accessed March 24.

“Freedom Free For All TV: Clifford Carnicom Interview – YouTube.” 2016. Accessed April 6.

“Geoengineering the Climate: Science, Governance and Uncertainty | Royal Society.” 2016. Accessed March 29.

“Global Ban on Geoengineering – Stop Global Geoengineering.” 2016. Accessed April 6.

“Image: The Stratosphere – Overview | UCAR Center for Science Education.” 2016. Accessed March 29.

Intergovernmental Panel on Climate Change (IPCC). 1999. “Aviation and the Global Atmosphere.”

Teller, Edward. 1997. “Global Warming and Ice Ages: Prospects for Physics-Based Modulation of Global Climate Change.”

“The Stratosphere – Overview | UCAR Center for Science Education.” 2016. Accessed April 6.

“The Troposphere – Overview | UCAR Center for Science Education.” 2016. Accessed April 6.

Pollution, Concentration and Mortality


by Clifford E Carnicom
Mar 19 2016

A preliminary analytical model has been developed to estimate the impact of increased concentrations of atmospheric fine particulate pollution (PM 2.5) upon mortality rates. The model is a synthesis between an analysis of measured pollution levels (PM 2.5) and published increased mortality estimates. The model is based, in part, upon previous investigations as published in the paper “The Obscuration of Health Hazards: An Analysis of EPA Air Quality Standards“, Mar 2016.

Models for both concentration levels and visibility have now been developed; for a related model in terms of visibility, please see the paper entitled Pollution, Visibility and Mortality, Mar 2016.

Preliminary Concentration – Exposure – Mortality Model

A substantial database of direct field measurements of atmospheric fine particulate matter in the southwestern United States during the winter of 2015-2016 has been acquired. The measurements reveal clear relationships between the quality of the air, PM 2.5 concentration levels, visibility of the surrounding territory, and the existence or absence of airborne aerosol operations.

The field data shows that repeated instances of the PM 2.5 count in the range of 30-60 ug/m3 are not unusual in combination with active atmospheric aerosol operations; visibility and health impacts are obvious under these conditions. Under good air quality conditions, the PM 2.5 count is consistently less than 10 (often below 5) ug/m3.

Additional studies based upon this acquired data may be conducted in the future. Numerous published studies make known relationships between small increases in PM 2.5 pollution and increased mortality.

Measured PM 2.5 Count, 44 ug/m3.

As an example of use of this model, if the PM 2.5 count is 44 ug/m3 as shown in the above example, and if the number of days of exposure of this level is approximately 50, then the estimated increase in annual mortality is approximately 17%. This is an extreme increase in mortality, but under observed conditions in various locales it is not beyond the range of consideration.  It is thought that reasonably conservative approaches have been adopted within the modeling process.
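The arithmetic behind this example can be sketched in a few lines, assuming the linear-log model form published in the companion paper “The Obscuration of Health Hazards” (1.65 + 0.007 × days + 0.48 × ln(days) percent per 10 ug/m^3, scaled linearly by concentration); this is a sketch of that model, not an official tool:

```python
import math

def mortality_increase_pct(pm25, days):
    """Estimated % increase in annual mortality for a PM 2.5 concentration
    (ug/m^3) sustained over a given number of exposure days, using the
    linear-log form (per 10 ug/m^3) scaled linearly by concentration."""
    per_10 = 1.65 + 0.007 * days + 0.48 * math.log(days)
    return (pm25 / 10.0) * per_10

# The example above: a count of 44 ug/m^3 over roughly 50 days of exposure.
print(round(mortality_increase_pct(44, 50)))   # → 17 (percent)
```

The computed value matches the approximately 17% increase quoted above.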

The field data that has been collected and this model further highlight the serious deficiencies in the Air Quality Index (AQI) in current use by the U.S. Environmental Protection Agency (EPA). In light of the current understanding of the health impacts of small changes in PM 2.5 counts (e.g., 10 ug/m3), a scale that gives equal prominence to values as high as 500 ug/m3 (catastrophic conditions) is an incredible disservice to the public. Please see the earlier referenced papers for a more thorough discussion of the schism between public health needs and the reporting systems that are in place.

This researcher advocates the availability of direct and real-time fine particulate matter concentration levels (PM 2.5) to the public; this information should be as readily available as current weather data is.  Cost and technology are no longer major barriers to this goal.


Active Aerosol Operation
City of Rocks, Southern N.M.

Demonstration of the Impact of Aerosol Banks Upon Visibility.
Concentration Levels and Subsequent Visibility Changes
Directly Impact Mortality.

As an incidental note, it may be recalled from earlier work that there is a strong conceptual basis for the development and application of surveillance systems that are dependent upon atmospheric aerosol concentrations. This application is only one of many that have been proposed over a period of many years, and readers may refer to additional details on this subject within the research library. Documentaries produced by this researcher (Aerosol Crimes, Cloud Cover) during the last decade also elaborate on those analyses. The principles of LIDAR apply here.

Current field observations continue to reinforce this hypothesis. Observation in the southwest U.S. indicates that two locale types appear to be preferred targets for application: large urban areas and the border region between the U.S. and Mexico. These locations, considered jointly, suggest that both people and the monitoring or tracking of those same people within an area may be a technical and strategic priority of the project. A citizen-based, systematic, and sustained nationwide monitoring system of PM 2.5 concentrations over a sufficient time period can clarify this issue further.

The recent papers on the subject of air quality are intended to raise the awareness and involvement of the public with respect to environmental and health conditions. There are very real relationships between how far you can see, the concentration levels of particulates in the atmosphere, and ultimately our mortality. It is our responsibility as stewards, as well as in our own best interest, to not deliberately and wantonly contaminate the planet.

Clifford E Carnicom
Mar 19, 2016

Pollution, Visibility and Mortality

Clifford E Carnicom
Mar 12 2016

A preliminary empirical model has been developed to estimate the impact of diminished visibility and fine particulate pollution upon mortality rates.  The model is a synthesis between an analysis of measured pollution levels (PM 2.5), observed visibility levels, and published increased mortality estimates.  The model is based, in part, upon previous investigations as published in the paper “The Obscuration of Health Hazards: An Analysis of EPA Air Quality Standards“, Mar 2016.



Preliminary Visibility – Exposure – Mortality Model

Air pollution has many consequences.  One of the simplest of these consequences to understand is that of mortality and the degradation of health.  It would be prudent for each of us to be aware of the sources of pollution in the atmosphere and their subsequent effects upon our well-being.  Measurement, monitoring, and auditing of airborne pollution are within range of the general public, and the participation of citizens in these actions is increasingly imperative.  The role of public service agencies to act on behalf of public health needs and interests has not been fulfilled, and we must all understand and react to the consequences of that neglect.

This particular model places the emphasis upon what can be directly observed with no special means, and that is the visibility of the surrounding sky.  Visibility levels are a direct reflection of the particulate matter that is in the atmosphere, and relations between what can be seen (or not seen, for that matter) and the concentration of pollution in the atmosphere can be established.  The relationships are observable, verifiable and are well known for their impacts upon human health, including that of mortality.

All models are idealized representations of reality.  Regardless of variations in the modeling process, it can be confidently asserted that there are direct physical relationships between particulate matter in the atmosphere, the state of visibility, and your health.   There are, of course, many other relationships of supreme importance, but the objective of this article is a simple one.  It is: to look, to be aware of your surroundings, to think, to act, and to participate. The luxury of, and damage from, perpetual ignorance cannot be dismissed or excused.

The call for awareness is a fairly simple one here.  I encourage you to become engaged;  if for nothing else than the sake of your own health.  When this has been achieved, you are in a position of strength to help others and to improve our world.  This generation has no right or privilege to deny the depths of nature to those that will follow us.



Models are one thing, real life is another.  It is time to assume your place.


Clifford E Carnicom
Mar 12, 2016


The Obscuration of Health Hazards:
An Analysis of EPA Air Quality Standards

Clifford E Carnicom
Mar 12 2016

A discrepancy between measured and observed air quality in comparison to that reported by the U.S. Environmental Protection Agency under poor conditions in real time has prompted an inquiry into the air quality standards in use by that same agency. This analysis, from the perspective of this researcher, raises important questions about the methods and reliability of the data that the public has access to, and that is used to make decisions and judgements about the surrounding air quality and its impact upon human health. The logic and rationale inherent within these same standards are now also open to further examination. The issues are important as they have a direct influence upon the perception by the public of the state of health of the environment and atmosphere. The purpose of this paper is to raise honest questions about the strategies and rationales that have been adopted and codified into our environmental regulatory systems, and to seek active participation by the public in the evaluation process.  Weaknesses in the current air quality standards will be discussed, and alternatives to the current system will be proposed.

Particulate Matter (PM) has an important effect upon human health.  Currently, there are two standards for measuring the particulate matter in the atmosphere, PM 10 and PM 2.5.  PM 10 consists of material less than 10 microns in size and is often composed of dust and smoke particles, for example.  PM 2.5 consists of materials less than 2.5 microns in size and is generally invisible to the human eye until it accumulates in sufficient quantity.  PM 2.5 material is considered to be a much greater risk to human health as it penetrates deeper into the lungs and the respiratory system.  This paper is concerned solely with PM 2.5 pollution.

As an introduction to the inquiry, curiosity can certainly be called to attention with the following statement by the EPA in 2012, as taken from a document (U.S. Environmental Protection Agency 2012, 1) that outlines certain changes made relatively recently to air quality standards:

“EPA has issued a number of rules that will make significant strides toward reducing fine particle pollution (PM 2.5). These rules will help the vast majority of U.S. counties meet the revised PM 2.5 standard without taking additional action to reduce emissions.”

Knowing and studying the “rule changes” in detail may serve to clarify this statement, but on the surface it certainly conveys the impression of a scenario whereby a teacher changes the mood in the classroom by letting the students know that more of them will be passing the next test.  Even better, they won’t need to study any harder and they will still get the same result.

In contrast, the World Health Organization (WHO) is a little more direct (World Health Organization 2013, 10) about the severity and impact of fine particle pollution (PM 2.5):

“There is no evidence of a safe level of exposure or a threshold below which no adverse health effects occur. The exposure is ubiquitous and involuntary, increasing the significance of this determinant of health.”

We can, therefore, see that there are already significant differences in the interpretation of the impact of fine particle pollution (especially from an international perspective), and that the U.S. EPA is not exactly setting a progressive example toward improvement.

Another topic of introductory importance is that of the AQI, or “Air Quality Index” that has been adopted by the EPA (“Air Quality Index – Wikipedia, the Free Encyclopedia” 2016).  This index is of the “idiot light” or traffic light style, where green means all is fine, yellow is to exercise caution, and red means that we have a problem.  The index, therefore, has the following appearance:

There are other countries that use a similar type of index and color-coded scheme.  China, for example, uses the following scale (“Air Quality Index – Wikipedia, the Free Encyclopedia” 2016):


As we continue to examine these scale variations, it will also be of interest to note that China is known to have some of the most polluted air in the world, especially over many of the urban areas.

Not all countries, jurisdictions, or entities, however, use the idiot-light approach that employs an arbitrary scaling method removed from the actual PM 2.5 pollution concentrations, such as those shown for the United States and China above.  For example, the United Kingdom uses a scale (“Air Quality Index – Wikipedia, the Free Encyclopedia” 2016) that depends upon actual PM 2.5 concentrations, as is shown below:

Notice that the PM 2.5 concentration for the U.K. index is directly accessible and that the scaling of the index is dramatically different from that of the U.S. or China.  In the case of the AQI used by the U.S. and China (and other countries as well), a transformed scale runs from 0 to 300-500, with concentration levels that are generally more obscure and ambiguous within the index.  In the case of the U.K. index, the scale reports a specific PM 2.5 concentration level directly, with a maximum (i.e., ~70 ug/m^3) far below that incorporated into the AQI index (i.e., 300-500 ug/m^3).

We can be assured that if a reading of 500 ug/m^3 is ever before us, we have a much bigger problem on our hands than discussions of air quality.  The EPA AQI is heavily biased toward extreme concentration levels that are seldom likely to occur in practical affairs; the U.K. index gives much greater weight to the lower concentration levels that are known to directly impact health, as reflected by the WHO statement above.

Major differences in the scaling of the indices, as well as their associated health effects, are therefore hidden within the various color schemes that have been adopted by various countries or jurisdictions.  Color has an immediate impact upon perception and communication; the reality is that most people will seldom, if ever, explore the basis of such a system as long as the message is “green” under most circumstances that they are presented with.  The fact that one system acknowledges serious health effects at a concentration level of  50 – 70 ug/m^3 and that another does not do so until the concentration level is on the order of 150 – 300 ug/m^3 is certainly lost to the common citizen, especially when the scalings and color schemes chosen obscure the real risks that are present at low concentrations.
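The compression at issue can be illustrated by converting a measured concentration onto the U.S. AQI scale. The sketch below uses the EPA’s piecewise-linear interpolation method; the breakpoint values shown are what I understand to be the post-2012 PM 2.5 breakpoints, and should be treated as an assumption to be checked against the current EPA tables:

```python
# Sketch of the U.S. EPA AQI transform for PM 2.5 (24-hour average).
# Breakpoints as (C_lo, C_hi, I_lo, I_hi); assumed post-2012 values.
PM25_BREAKPOINTS = [
    (0.0,   12.0,    0,  50),   # Good (green)
    (12.1,  35.4,   51, 100),   # Moderate (yellow)
    (35.5,  55.4,  101, 150),   # Unhealthy for Sensitive Groups (orange)
    (55.5,  150.4, 151, 200),   # Unhealthy (red)
    (150.5, 250.4, 201, 300),   # Very Unhealthy (purple)
    (250.5, 350.4, 301, 400),   # Hazardous (maroon)
    (350.5, 500.4, 401, 500),   # Hazardous (maroon)
]

def pm25_to_aqi(conc):
    """Piecewise-linear interpolation of a PM 2.5 concentration
    (ug/m^3) onto the 0-500 AQI scale."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if conc <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
    return 500  # readings beyond 500.4 ug/m^3 simply peg the scale

# A field reading of 44 ug/m^3, well past the ~25 ug/m^3 maximum implied
# by the WHO figures, lands barely a quarter of the way up the scale.
print(pm25_to_aqi(44))   # → 122
```

Under this transform a reading that concentration-based indices treat as near their maximum appears as a mid-scale value, which is precisely the obscuring effect described above.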

The EPA AQI system appears to have its roots in history as opposed to simplicity and directness in describing the pollution levels of the atmosphere, especially as it relates to the real-time known health effects of even short-term exposure to lower-concentration PM 2.5 levels.  The following statement (“Air Quality Index | World Public Library” 2016) acknowledges weaknesses in the AQI since its introduction in 1968, yet the methods have nevertheless been perpetuated for more than 45 years.

“While the methodology was designed to be robust, the practical application for all metropolitan areas proved to be inconsistent due to the paucity of ambient air quality monitoring data, lack of agreement on weighting factors, and non-uniformity of air quality standards across geographical and political boundaries. Despite these issues, the publication of lists ranking metropolitan areas achieved the public policy objectives and led to the future development of improved indices and their routine application.”

The system of color coding to extreme and rarified levels, with its averaged and biased scale, versus one that directly reports PM 2.5 concentration levels in real time, is an artifact divorced from current observed measurements and from the knowledge of the impact of fine particulates upon human health.

The reporting of PM 2.5 concentrations directly along with a more realistic assessment of impact upon human health is hardly unique to the U.K. index system. With little more than casual research, at least three other independent systems of measurement have been identified that mirror the U.K. maximum scaling levels along with the commensurate PM 2.5 counts. These include the World Health Organization, a European environmental monitoring agency, and a professional metering company index scale (World Health Organization 2013, 10) (“Air Quality Now – About US – Indices Definition” 2016) (“HHTP21 Air Quality Meter, User Manual, Omega Engineering” 2016, 10).

As another example to gain perspective between extremes and maximum “safe” levels of PM 2.5 concentrations, we can recall an event that occurred in Beijing, China in January 2013, as reported by the New York Times that same month (Wong 2013).  During this extreme situation, the U.S. Embassy monitoring equipment registered a PM 2.5 reading of 755, and the story certainly made news as the levels blew out any scale imaginable, including those that set maximums at 500.

A subsequent statement within the article that references the World Health Organization standards may be the lasting impression we should carry forward from the horrendous event, where it is stated that:

“The World Health Organization has standards that judge a score above 500 to be more than 20 times the level of particulate matter in the air deemed safe.”

Notwithstanding the fact that the WHO also states there is no evidence of any truly “safe” level of particulate matter in the atmosphere, we can nevertheless infer from this statement that a maximum “safe” level for the PM 2.5 count, as assessed by the WHO, is approximately 25 ug/m^3.  This alone should convince us that we must pay close attention to the lower levels of pollution that enter the atmosphere, and that public perception should not be distorted by scales and color schemes that usually register with the public only when they number into the hundreds.

Let us gain a further understanding of how low concentration levels and small changes affect human health and, shall I daresay, mortality. The case that low PM 2.5 concentrations are seriously detrimental to human health is strong and easy to make.  Casual research on the subject will uncover a host of research papers that quantify increased mortality rates in direct relationship to small changes in PM 2.5 concentrations, usually expressed as a change in mortality per 10 ug/m^3.  Such papers operate not in the arena of scores to hundreds of micrograms per cubic meter, but on the order of TEN micrograms per cubic meter.  This work underscores the need to update air quality standards, methods, and reporting to the public based upon current health knowledge, instead of continuing a system of artifacts based upon decades-old postulations.

These papers will refer to both daily mortality levels as well as long-term mortality based upon these “small” increases in PM 2.5 concentrations.  The numbers are significant from a public health perspective.  As a representative article, consider the following recent paper published in Environmental Health Perspectives in June of 2015, under the auspices of the National Institute of Environmental Health Sciences (Shi et al. 2015):




with its conclusions and supporting results (excerpted as images in the original posting).
Let us therefore assume a more conservative increase of 2% in mortality for a short-term exposure (i.e., 2 days) per TEN (not 12, not 100, not 500 per AQI scaling) micrograms per cubic meter.  Let us assume a mortality increase of 7% for long-term exposure (i.e., 365 days).

Let us put these results into further perspective.  A sensible question to ask is, given a certain level of fine particulate pollution introduced into the air for a certain number of days within the year, how many people would die as a consequence of this change in our environment?  We must understand that the physical nature of the particulates is being ignored here (e.g., toxicity, solubility, etc.) other than that of the size being less than 2.5 microns.

The data results suggest a logarithmic form of influence, i.e. a relatively large effect for short term exposures, and a subsequently more gradual impact for long term exposure.  A linear model is the simplest approach, but it also is likely to be too modest in modeling the mortality impact. For the purpose of this inquiry, a combined linear-log approach will be taken as a reasonably conservative approach.

The model developed, therefore, is of the form:

Mortality % Increase (per 10 ug/m^3) = 1.65 + 0.007 * (days) + 0.48 * ln(days)
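As a sketch, the combined linear-log model above can be expressed directly in Python (the coefficients are those stated in the text; the functional form is the author's own construction, not a standard epidemiological model):

```python
import math

def mortality_pct_increase(days: float) -> float:
    """Estimated % mortality increase per 10 ug/m^3 of added PM 2.5,
    per the combined linear-log model stated in the text."""
    return 1.65 + 0.007 * days + 0.48 * math.log(days)

# Example: a 2-day short-term exposure
print(round(mortality_pct_increase(2), 1))  # → 2.0
```

Note that evaluating the model at 2 days and 365 days recovers the assumed anchor values of roughly 2% (short term) and 7% (long term) stated above.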

The next step is to choose the activity level and time period for which we wish to model the mortality increase.  Although any scenario within the data range could be chosen, a reasonably conservative approach will also be adopted here.  The scenario chosen will be to introduce 30 ug/m^3 of fine particulate matter into the air for 10% of the days within a year.

The model therefore estimates a 3.6% increase in mortality for 10 ug/m^3 of introduced PM 2.5 materials (36.5 days).  For 30 ug/m^3, we will therefore have a 10.9% increase in mortality.  As we can see, the numbers quickly become significant, even with relatively low or modest increases in PM 2.5 pollution.

Next we transform this percentage into real numbers. The Centers for Disease Control and Prevention (CDC) reports that 2,596,993 people died from all causes combined during 2013 (“FastStats” 2016).  Applying the 10.9% increase to this number results in 283,072 additional projected deaths per year.
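Reusing the model defined in the text, the 36.5-day / 30 ug/m^3 scenario works out as follows (a sketch; the threefold linear scaling from 10 to 30 ug/m^3 follows the text's assumption):

```python
import math

def mortality_pct_increase(days):
    # Linear-log model from the text (% increase per 10 ug/m^3)
    return 1.65 + 0.007 * days + 0.48 * math.log(days)

CDC_DEATHS_2013 = 2_596_993  # all-cause deaths, CDC FastStats

pct_per_10ug = mortality_pct_increase(36.5)   # 10% of the days in a year
pct_at_30ug = 3 * pct_per_10ug                # linear scaling to 30 ug/m^3
extra_deaths = pct_at_30ug / 100 * CDC_DEATHS_2013

print(round(pct_per_10ug, 1))   # → 3.6
print(round(pct_at_30ug, 1))    # → 10.9
print(round(extra_deaths))      # within rounding of the 283,072 cited
```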

Continuing to place this number into perspective: it exceeds the combined number of deaths that resulted from stroke, Alzheimer’s disease, and influenza and pneumonia (i.e., the 5th, 6th, and 8th leading causes of death) during that same year.  The number is also much higher than the death toll from Chronic Obstructive Pulmonary Disease (COPD), which is now, curiously, the third leading cause of death.

We should now understand that PM 2.5 pollution levels are a very real concern with respect to public health, even at relatively modest levels.  Some individuals might argue that such a scenario could never occur, as the EPA has lowered the annual PM 2.5 standard to 12 ug/m^3.  The enforcement and sensitivity of that measurement standard is another discussion that will be reserved for a later date.  Suffice it to say that the scenario chosen here is not unrealistic, and that it is in the public’s interest to engage in this discussion and examination.



The next issue of interest is a comparison between different air quality scales in some detail.  In particular, the “weighting,” or influence, of lower vs. higher concentration levels will be examined.  This topic is important because it affects the public’s interpretation of the state of air quality, and it is essential that the impacts upon human health are represented equitably and forthrightly.

The explanation of this topic will be considerably more detailed and complex than the former issues of “color coding” and mortality potentials, but it is no less important.  The results are at the heart of the perception of the quality of the air by the public and its subsequent impact upon human health.

To compare the different scales of air quality that have been developed, we must first equate them.  For example, if one scale ranges from 1 to 6, and another from 0 to 10, we must “map,” or transform, them such that the scales are of equivalent range.  Another need in the evaluation of any scale is to look at the distribution of concentration levels within that same scale, and to compare this on an equal footing as well.  Let us get started with an important comparison between the EPA AQI and alternative scales that deserve equal consideration in the representation of air quality.

Here is the structure of the EPA AQI in more detail (U.S. Environmental Protection Agency 2012, 4):


AQI Index                        AQI Arbitrary Numeric   AQI Rank   PM 2.5 (ug/m^3), 24-hr avg.
Good                             0-50                    1          0-12
Moderate                         51-100                  2          12.1-35.4
Unhealthy for Sensitive Groups   101-150                 3          35.5-55.4
Unhealthy                        151-200                 4          55.5-150.4
Very Unhealthy                   201-300                 5          150.5-250.4
Hazardous                        301-500                 6          250.5-500
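For reference, the EPA converts a measured PM 2.5 concentration into an AQI value by piecewise-linear interpolation between the breakpoints tabulated above. A minimal Python sketch (the breakpoints are those in the table; the truncation of the input to one decimal place is an assumption about rounding practice):

```python
# EPA PM 2.5 breakpoints: (C_low, C_high, I_low, I_high)
BREAKPOINTS = [
    (0.0,    12.0,    0,  50),   # Good
    (12.1,   35.4,   51, 100),   # Moderate
    (35.5,   55.4,  101, 150),   # Unhealthy for Sensitive Groups
    (55.5,  150.4,  151, 200),   # Unhealthy
    (150.5, 250.4,  201, 300),   # Very Unhealthy
    (250.5, 500.0,  301, 500),   # Hazardous
]

def pm25_to_aqi(conc: float) -> int:
    """Piecewise-linear AQI interpolation for a PM 2.5 concentration."""
    c = int(conc * 10) / 10.0  # truncate to 0.1 ug/m^3 (assumed convention)
    for c_lo, c_hi, i_lo, i_hi in BREAKPOINTS:
        if c_lo <= c <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo)
    raise ValueError("concentration out of AQI range")

print(pm25_to_aqi(12.0))   # → 50 (top of 'Good')
print(pm25_to_aqi(35.4))   # → 100 (top of 'Moderate')
print(pm25_to_aqi(57.0))   # → 152 (low end of the wide 'Unhealthy' band)
```

Note how the wide upper bands compress the index response: a jump from 35.4 to 57 ug/m^3 moves the AQI only from 100 to 152.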


Now let us become familiar with three alternative scaling and health assessment scales that are readily available and that acknowledge the impact of lower PM 2.5 concentrations to human health:


United Kingdom Index   U.K. Nomenclature   PM 2.5 (ug/m^3), 24-hr avg.
1                      Low                 0-11
2                      Low                 12-23
3                      Low                 24-35
4                      Moderate            36-41
5                      Moderate            42-47
6                      Moderate            48-53
7                      High                54-58
8                      High                59-64
9                      High                65-70
10                     Very High           >= 71


Now for a second alternative air quality scale, this being from Air Quality Now, a European monitoring entity:


Air Quality Now EU Rank   Nomenclature   PM 2.5 (ug/m^3), 1 hr   PM 2.5 (ug/m^3), 24 hrs
1                         Very Low       0-15                    0-10
2                         Low            15-30                   10-20
3                         Medium         30-55                   20-30
4                         High           55-110                  30-60
5                         Very High      >110                    >60


And lastly, the scale from a professional air quality meter manufacturer:


Professional Meter Index   Nomenclature   PM 2.5 (ug/m^3), real-time concentration
0                          Very Good      0-7
1                          Good           8-12
2                          Moderate       13-20
3                          Moderate       21-31
4                          Moderate       32-46
5                          Poor           47-51
6                          Poor           52-71
7                          Poor           72-79
8                          Poor           80-89
9                          Very Poor      >= 90


We can see that the only true common denominator among all the scaling systems is the PM 2.5 concentration.  Even with the acceptance of that reference, there remains the issue of “averaging” a value versus acquiring maximum or real-time values.  Setting aside the issue of time weighting as a separate discussion, the most practical means to equate the scaling systems is to do what was mentioned earlier:  First, equate the scales to a common index range (in this case, the EPA AQI rank range of 1 to 6 will be adopted).  Second, inspect the PM 2.5 concentrations from the standpoint of distribution, i.e., evaluate the indices as a function of PM 2.5 concentration.  The results of this comparison follow below, accepting the midpoint of each PM 2.5 concentration band as the reference point:

PM 2.5 (ug/m^3)   EPA AQI   UK    EU (1 hr)   Meter
1-10              1         1     1           1
10-20             2         1.6   1           2.1
20-30             2         2.1   2.2         2.7
30-40             2         2.1   3.5         3.2
40-50             3         3.2   3.5         3.2
50-60             3         4.3   3.5         4.3
60-80             4         5.4   4.8         4.9
80-100            4         6     4.8         6
100-150           4         6     6           6
150-200           4         6     6           6
200-250           5         6     6           6
250-300           5         6     6           6
300-400           6         6     6           6
400-500           6         6     6           6
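The range-mapping step described above can be sketched in a few lines (the linear transform is my own restatement of the text's procedure, illustrated with the UK scale):

```python
def remap_index(value, old_min, old_max, new_min=1.0, new_max=6.0):
    """Linearly map an index from its native range onto a common range."""
    return new_min + (value - old_min) * (new_max - new_min) / (old_max - old_min)

# UK bands run 1-10; map them onto the EPA AQI rank range of 1-6.
print(round(remap_index(2, 1, 10), 1))   # → 1.6 (cf. the 10-20 ug/m^3 row)
print(round(remap_index(10, 1, 10), 1))  # → 6.0 (top of scale)
```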


This table reveals the essence of the problem: the skew of the EPA AQI toward high concentrations, which diminishes awareness of the health impacts of lower concentrations, can be seen directly within the tabulation.

This same conclusion will be demonstrated graphically at a later point.

Now that all air quality scales are referenced to a common standard (i.e., the PM 2.5 concentration), the general nature of each series can be examined via regression analysis.  A logistic function is the favored functional form in this case, and the results of that analysis are as follows:

EPA Index (1-6) = 5.57 / (1 + 2.30 * exp(-0.016 * PM 2.5))
Mean Square Error = 0.27

Mean (UK – EU – Meter) Index (1-6) = 6.03 / (1 + 5.65 * exp(-0.046 * PM 2.5))
Mean Square Error = 0.01
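As a check, the two fitted curves can be evaluated and integrated numerically; a sketch using simple trapezoidal integration (the integration scheme is an assumption; the coefficients are those reported above):

```python
import math

def epa_index(pm):
    # Logistic fit to the EPA AQI series (coefficients from the text)
    return 5.57 / (1 + 2.30 * math.exp(-0.016 * pm))

def mean_alt_index(pm):
    # Logistic fit to the mean UK/EU/meter series
    return 6.03 / (1 + 5.65 * math.exp(-0.046 * pm))

def band_integral(f, lo, hi, n=1000):
    """Trapezoidal integration of an index curve over a PM 2.5 band."""
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return s * h

# Reproduce the first row of the integration table below (band 1-10 ug/m^3):
print(round(band_integral(epa_index, 1, 10), 1))       # → 16.1
print(round(band_integral(mean_alt_index, 1, 10), 1))  # → 10.1
```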

The quantity now of value in evaluating the weighting applied to various concentration levels is the integral of the logistic regression curves as a function of bandwidth.  The result of the integration process (Int.) applied to the above regressions is as follows:

PM 2.5 Band   EPA AQI (Int.)     Mean Index (Int.)   % Relative Over(+)/Under(-)weight of
              [Index * PM 2.5]   [Index * PM 2.5]    Band Contribution, EPA AQI vs. Mean
                                                     Alternative Index (Endpoint Bias Removed)
1-10          16.1               10.1                +42%
10-20         19.8               15.8                +27%
20-30         21.9               21.6                +8%
30-40         24.1               28.3                -10%
40-50         26.3               35.2                -27%
50-60         28.5               41.5                -39%
60-80         63.6               98.0                -47%
80-100        72.1               110.4               -46%
100-150       211.7              295.0               -32%
150-200       243.7              300.8               -16%
200-250       261.7              301.4               -8%
250-300       270.7              301.5               -4%
300-400       551.8              603.0               -2%
400-500       555.9              603.0               0%


A graph of a regression curve fit to the % Relative Overweight/Underweight data in the final column of the table above follows (band interval midpoints selected; standard error = 4.1%).


EPA Underweight Function – Feb 09 2016


And thus we are led to another interpretation regarding the demerits of the EPA AQI.  The EPA AQI scaling system unjustifiably under-weights the harmful effects of the PM 2.5 concentrations that are most likely to occur in real-world, real-time, daily circumstances.  The scale over-weights the impacts of extremely low concentrations that have little to no impact upon human health.  And lastly, when PM 2.5 concentrations are at catastrophic levels and the viability of life itself is threatened, all monitoring sources, including the EPA, agree that we have a serious situation.  One must seriously question the public service value of such a distorted and disproportionate representation of this important monitor of human health, the PM 2.5 concentration.



Let us proceed to an additional serious flaw in the EPA air quality standards, and this is the issue of averaging the data. It will be noticed that the current EPA standard for PM 2.5 air quality is 12 ug/m^3, averaged over a 24-hour period. On the surface, this value appears to be reasonably sound, cautious, and protective of human health. A significant problem occurs, however, when we understand that the value is averaged over a period of time, and is not reflective of real-time dynamic conditions that involve “short-term” exposures.

To begin to understand the nature of the problem, let us present two different scenarios:

Scenario One:

In the first scenario, the PM 2.5 count in the environment is perfectly even and smooth, let us say at 10 ug/m^3. This is comfortably within the EPA air quality standard “maximum” for a 24-hour period, and all appears well and good.

Scenario Two:

In this scenario, the PM 2.5 count is 6 ug/m^3 for 23 hours out of the 24-hour day. For one hour per day, however, the PM 2.5 count rises to 100 ug/m^3, and then settles back to 6 ug/m^3 in the following hour.

Instinctively, most of us will realize that the second scenario poses a significant health risk, as we understand that maximum values may be as important as (or even more important than) an average value. One could equate this to a dosage of radiation, for example, where a short-term exposure could produce a lethal result, while an average value over a sufficiently long time period might persuade us that everything is fine.

And this, therefore, poses the problem that is before us.

In the first scenario, the weighted average PM 2.5 count over a 24 hour period is 10 ug/ m^3.

In the second scenario, the weighted average PM 2.5 count over a 24 hour period is 10 ug/m^3.

Both scenario averages are within the current EPA air quality maximum pollution standards.
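The arithmetic of the two scenarios is easily verified; a sketch using the hour-by-hour values stated above (the one-hour-spike scenario averages 238/24, effectively the same 10 ug/m^3):

```python
# Scenario one: a constant 10 ug/m^3 for all 24 hours.
scenario_one = [10.0] * 24

# Scenario two: 6 ug/m^3 for 23 hours, plus a one-hour spike to 100 ug/m^3.
scenario_two = [6.0] * 23 + [100.0]

avg_one = sum(scenario_one) / 24
avg_two = sum(scenario_two) / 24

print(avg_one)             # → 10.0
print(round(avg_two, 1))   # → 9.9, effectively the same average
```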

Clearly, this method has the potential for disguising significant threats to human health if “short-term” exposures occur on any regular basis. Observation and measurement will show that they do.

Now that we have seen some of the weaknesses of the averaging methods, let us look at an additional scenario based upon more realistic data, but that continues to show a measurable influence upon human health. The scenario selected has a basis in recent and independently monitored PM 2.5 data.

The situation in this case is as follows:

This model scenario will postulate that the following conditions occur for approximately 10% of the days in a year. For that period, let us assume that for 13.5 hours of the day the PM 2.5 count is essentially nil at 2 ug/m^3. For the remaining 10.5 hours of the day during that same 10% of the year, let us assume the average PM 2.5 count is 20 ug/m^3. The range of the PM 2.5 count during the 10.5-hour period is from 2 to 60 ug/m^3, but the average of 20 ug/m^3 (representing a significant increase) will be the value used for the analysis. For the remainder of the year, very clean air will be assumed at a level of 2 ug/m^3 for all hours of the day.

A more extended discussion of the nature of this data is anticipated at a later date, but suffice it to say that the energy of sunlight is the primary driver for the difference in the PM 2.5 levels throughout the day.

The next step in the problem is to determine the number of full days that correspond to the concentration level of 20 ug/m^3, and also to provide for the fact that the elevated levels will be presumed to exist for only 10% of the year.  The value that results is:

0.10 * (365 days) * (10.5 hrs / 24 hrs) ≈ 16 full days at the 20 ug/m^3 concentration level.

As a reference point, we can now estimate the increase in mortality that will result for an arbitrary 10 ug/m^3 (based upon the relationship derived earlier):

Mortality % Increase (per 10 ug/m^3) = 1.65 + 0.007 * (16) + 0.48 * ln(16)

Mortality % Increase (per 10 ug/m^3) = 3.1%

The increase in this case is 18 ug/m^3 (20 ug/m^3 – 2 ug/m^3), however, and the mortality increase to be expected is therefore:

Mortality % Increase (per 18ug/m^3 increase) = 1.8 * 3.1% = 5.6%.
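The full chain of arithmetic for this scenario can be sketched in a few lines (the model and scaling factors are those stated in the text):

```python
import math

def mortality_pct_increase(days):
    # Linear-log model from the text (% increase per 10 ug/m^3)
    return 1.65 + 0.007 * days + 0.48 * math.log(days)

# 10% of the year, with 10.5 elevated hours per day -> equivalent full days
equiv_days = 0.10 * 365 * (10.5 / 24)
pct_per_10ug = mortality_pct_increase(equiv_days)
pct_for_18ug = (18 / 10) * pct_per_10ug  # increase of 20 - 2 = 18 ug/m^3

print(round(equiv_days))        # → 16
print(round(pct_per_10ug, 1))   # → 3.1
print(round(pct_for_18ug, 1))   # → 5.6
```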

Once again, to place this number into perspective, we translate this percentage into projected deaths (based upon CDC data for 2013):

0.056 * (2,596,993) = 145,431 projected additional deaths.

This value is essentially equivalent (again, curiously) to the third leading cause of death, namely Chronic Obstructive Pulmonary Disease (COPD), with a reported death toll for 2013 of 149,205.

It is understood that a variety of factors ultimately determine mortality rates; however, this value may help to put the significance of “lower” or “short-term” exposures to PM 2.5 pollution into perspective.

It should also be recalled that the averaging of PM 2.5 data over a 24 hour period can significantly mask the influences of such “short-term” exposures.

A remaining issue of concern with respect to AQI deficiencies is its accuracy in reflecting real world conditions in a real-time sense. The weakness in averaging data has already been discussed to some extent, but the issue in this case is of a more practical nature. Independent monitoring of PM 2.5 data over a reasonably broad geographic area has produced direct visible and measurable conflicts in the reported state of air quality by the EPA.

After close to twenty years of public research and investigation, there is no rational denial that the citizenry is subject to intensive aerosol operations on a regular and frequent basis. These operations are conducted without the consent of that same public. The resulting contamination and pollution of the atmosphere is harmful to human health.  The objective here is to simply document the changes in air quality that result from such a typical operation, and the corresponding public reporting of air quality by the EPA for that same time and location.

Multiple occasions of this activity are certainly open to further examination, but a representative case will be presented here in order to disclose the concern.



Typical Conditions for Non-Operational Day.
Sonoran National Monument – Stanfield AZ


Aerosol Operation – Early Hours
Jan 19 2016 – Sonoran National Monument – Stanfield AZ


Aerosol Operation – Mid-Day Hours
Jan 19 2016 – Sonoran National Monument – Stanfield AZ



EPA Website Report at Location and Time of Aerosol Operation.
Jan 19 2016 – Sonoran National Monument – Stanfield AZ
Air Quality Index : Good
Forecast Air Quality Index : Good
Health Message : None

Current Conditions : Not Available
(“AirNow” 2016)


The PM 2.5 measurements that correlate with the above photographs are as follows:

With respect to the non-operational day photograph, clean air can and does exist at times in this country, especially in the more remote portions of the southwestern U.S. under investigation.  It is quite typical to have PM 2.5 counts from 2 to 5 ug/m^3, which fall under the category of very good air quality by any index used.  Low PM 2.5 counts are especially prone to occur after periods of heavier rain, as the materials are purged from the atmosphere.  The El Niño pattern has been especially influential in this regard during the earlier portion of this winter season.  Visibility conditions of the air are a direct reflection of the PM 2.5 count.

On the day of the aerosol operation, the PM 2.5 counts were not low and visibility down to ground level was highly diminished.  The range of values throughout the day was from 2 to 57 ug/m^3, with the low values occurring before sunrise and after sundown.  The highest value of 57 occurred during mid-afternoon.  A PM 2.5 value of 57 ug/m^3 is considered poor air quality by many alternative and contemporary air quality standards, and the prior discussions on mortality rates for “lower” concentrations should be consulted above.  This high value has no counterpart, thus far, on non-aerosol-operational days.  From a common-sense point of view, the conditions recorded by both photograph and measurement were indeed unhealthy.  Visibility was diminished from a typical 70+ miles in the region to approximately 30 miles during the operational period.  Please refer to the earlier papers (Visibility Standards Changed, March 2001, and Mortality vs. Visibility, June 2004; also additional papers) for additional discussions related to these topics.

The U.S. Environmental Protection Agency reports no concerns, no immediate impact, nor any potential impact to health or the environment during the aerosol operation at the nearest reporting location.



This paper has reviewed several factors that affect the interpretation of the Air Quality Index (AQI) as it has been developed and is used by the U.S. Environmental Protection Agency (EPA). In the process, several shortcomings have been identified:

1. The use of a color scheme strongly affects the perception of the index by the public. The colors used in the AQI are not consistent with what is now known about the impact of fine particulate matter (PM 2.5) to human health. The World Health Organization (WHO) acknowledges that there are NO known safe levels of fine particulate matter, and the literature also acknowledges the serious impact of low concentration levels of PM 2.5, including increased mortality.

2. The scaling range adopted by the AQI is much too large to adequately reveal the impact of the lower concentration levels of PM 2.5 to human health. A range of 500 ug/m^3 attached to the scale when mortality studies acknowledge significant impact at a level of 10 ug/m^3 is out of step with current needs by the public.

3. The underweighting of the lower PM 2.5 concentration levels relative to more contemporary scales that adequately emphasize lower level health impacts obscures health impacts which deserve more prominent exposure.

4. The AQI numeric scale is divorced from actual PM 2.5 concentration levels. The arbitrary scaling has no direct relationship to existing and actual concentrations of mass to volume ratios. The actual conditions of pollution are therefore hidden by an arbitrary construct that obscures the impact of pollution to human health.

5. The AQI is a historic development that has been maintained in various incarnations and modifications since its origin more than 45 years ago. The method of presentation and computation is obtuse and appears to exist as a legacy to the past rather than directly portraying pollution health risks.

6. The averaging of pollution data over a time period that filters out short term exposures of high magnitude is unnecessary and it hinders the awareness of the actual conditions of exposure to the public.

7. Presentation of air quality information through the authorized portal appears to present potential conflicts between reported information and actual field condition observation, data and measurement.


In the opinion of this researcher the AQI, as it exists, should be revamped or discarded. Allowing for catastrophic pollution in the development of the scale is commendable, but not if it interferes with the presentation of useful and valuable information to the public on a practical and daily basis.

There is a partial analogy here with the scales used to report earthquakes and other natural events, as they are of an exponential nature and they provide for extreme events when they occur. It is now known, however, that very low levels of fine particulate matter are very harmful to human health. Any scaling chosen to represent the state of pollution in the atmosphere must correspondingly emphasize and reveal this fact. This is what matters on a daily basis in the practical affairs of living; the extreme events are known to occur but they should not receive equal (or even greater) emphasis in a daily pollution reporting standard. It is primarily a question of communicating to the public directly in real-time with actual data, versus the adherence to decades old legacies and methods that do not accurately portray modern pollution and its sources.

It seems to me that a solution to the problem is fairly straightforward; the issue is whether or not such a transformation can be made on a national level and whether or not it has strong public support. Many other scaling systems have already made the switch to emphasize the impact of lower concentration levels upon human health; following suit would seem admirable based upon the actual needs of society.

It is a fairly simple matter to reconstruct the scale for an air quality index. THE SIMPLEST SOLUTION IS TO REPORT THE CONCENTRATION LEVELS DIRECTLY, IN REAL-TIME MODE. If the PM 2.5 pollution level at a particular location is, for example, 20 ug/m^3, then report it as such. This is not hard to do, and technology fully supports this direct change and access to data.

We do not average our rain when it rains, we do not average our sunlight when we report how clear the sky is, we do not average the cloud cover, and we do not average how far we can see. The environmental conditions exist as they are, and they should be reported as such. There is no need to manipulate or “transform” the data, as is being done now. A linear scale can also be matched fairly well to the majority of daily life needs, and the extreme ranges can be accommodated without any severe distortion of the system.

The relationship between visibility and PM 2.5 counts will be quickly and readily assimilated by the public when the actual data is simply available in real time, as it needs to be and should be. Of course, greater public awareness of the actual conditions of pollution may also lead to a stronger investigation of their source and nature; this may or may not be as welcome in our modern society. I hope that it will be, as the health of our generation, succeeding generations, and of the planet itself is dependent upon our willingness to confront the truths of our own existence.

Clifford E Carnicom
Mar 12, 2016

Born Clifford Bruce Stewart
Jan 19, 1953



“AirNow.” 2016. Accessed March 13.

“Air Quality Index | World Public Library.” 2016. Accessed March 13.

“Air Quality Index – Wikipedia, the Free Encyclopedia.” 2016. Accessed March 13.

“Air Quality Now – About US – Indices Definition.” 2016a. Accessed March 13.
———. 2016b. Accessed March 13.

“FastStats.” 2016. Accessed March 13.

“HHTP21 Air Quality Meter, User Manual, Omega Engineering.” 2016.

Shi, Liuhua, Antonella Zanobetti, Itai Kloog, Brent A. Coull, Petros Koutrakis, Steven J. Melly, and Joel D. Schwartz. 2015. “Low-Concentration PM2.5 and Mortality: Estimating Acute and Chronic Effects in a Population-Based Study.” Environmental Health Perspectives 124 (1). doi:10.1289/ehp.1409111.

U.S. Environmental Protection Agency. 2012. “Revised Air Quality Standards for Particle Pollution and Updates to the Air Quality Index (AQI).”

Wong, Edward. 2013. “Beijing Air Pollution Off the Charts.” The New York Times, January 12.

World Health Organization. 2013. “Health Effects of Particulate Matter, Policy Implications for Countries in Eastern Europe, Caucasus and Central Asia.”

Tertiary Rainwater Analysis : Questions of Toxicity


 Clifford E Carnicom
Nov 08 2015


This paper presents evidence of a chemical signature that exists within an analyzed rain sample that is characteristic of known toxins and pesticides. The method of analysis used is that of mid-infrared spectroscopy. Specifically, certain functional groups involving sulfur, nitrogen, phosphorus, oxygen, and halogens have been identified in the analysis. It is recommended that the investigation be duplicated by independent researchers to determine if an environmental hazard does exist. If these results are verified to be positive, the source of the contaminants is to be identified and eliminated from the environment.

Infrared Spectrum of Concentrated Rain Water Sample
(Aqueous Influence Removed)

The original rainwater sample volume for this analysis was approximately 3.25 liters.  The sample was evaporated under mild heat to approximately 0.5% of the original volume, or about 15 milliliters.  The sample has previously been shown to contain aluminum, biological components, and a residue that appears to be an insoluble metallic or organometallic complex.  The target of this particular study is that of soluble organics.

The organic infrared signal within the solution is weak and difficult to detect with the means available; it is further complicated by being present in aqueous solution.  The aqueous influence was minimized by making an evaporated film layer on a KCl cell; the transmission mode was used. The signal is identifiable and repeatable under numerous passes in comparison to the reference background.

The primary conclusion from the infrared analysis is that a core group of elements exists within the solution; these appear to include carbon, hydrogen, nitrogen, sulfur, phosphorus, oxygen, and a halogen.  The organic footprint appears to be weak but detectable, and dominated by the above heteroatoms.

As further evidence for the basis of this report, qualitative tests for an amine (nitrogen and hydrogen), sulfates and phosphates (sulfur, oxygen and phosphorus) have each produced a positive test result.  A qualitative test for a halogen in the concentrated rainwater sample has also produced a positive result; the most likely candidate at this point is the chloride ion.  All elements present have therefore been proven to exist at detectable levels by two independent methods.

This grouping of elements is distinctive; they essentially comprise the core elements of many important, powerful, and highly toxic pesticides.   For example, three sources directly state the importance of the group above as the very basis of most pesticides:


“In pesticides, the most common elements are carbon, hydrogen, oxygen, nitrogen, phosphorus, sulfur and chlorine”.

Pesticide Residues in Food and Drinking Water : Human Exposure and Risks, Dennis Hamilton, 2004.


“We can further reduce the list by considering those used most frequently in pesticides: carbon, hydrogen, oxygen, nitrogen, phosphorus, chlorine, and sulfur”.

Fundamentals of Pesticides, A Self-Instruction Guide, George Ware, 1982.


“Heteratoms like fluorine, chlorine, bromine, nitrogen, sulfur and phosphorus, which are important elements in pesticide residue analysis, are of major interest”.

Analysis of Pesticides in Ground and Surface Water II : Latest Developments, Edited by H.J. Stan, 1995.


It is also true that phosphate diesters are at the core of DNA structure and that many genetic engineering procedures involve the splitting of the phosphate diester complex.

The information provided above is sufficient to justify and invoke further investigation into the matter.  The sample size, although it was derived from an extensive storm over several days in the northwest U.S., is nevertheless limited and quite finite after reduction of the sample volume.  The residual insoluble components (apparently metallic in nature) are also limited in amount, and more material will be required for further analysis.  The signal is weak and difficult to isolate from the background reference; concentration-level estimates for elements or compounds (other than that of aluminum, which has been assessed earlier) are another endeavor entirely.  Systematic, wide-area, and long-term testing will be required to validate or refute the results.  All caveats above aside, it would seem that the duty to address even the prospect of the existence of such toxins in the general rainfall befalls each of us.  It would seem wise that this process begins without delay.

There are a few additional comments on this finding that need be mentioned.

The first of these is the issue of local and regional vs. a national and international scope of consideration.  It is understood that pesticides or compounds similar in nature are a fact of our environment, and that considerable awareness and effort is in place to mitigate their damage over decades of use.  Organic farming and genetically engineered crops are two very divergent approaches to reconciliation with the impact of environmental harm, and they are shaping our society and food supply in the most important ways manageable.  Given that the pesticide industry exists, regardless of our varying opinions of merit or harm, I think that it is fair to say that we generally presume that pesticides are under some form of local control.  Our general understanding is that pesticides are applied at ground or close to ground level and are intended to be applied to a specific location or, at most, a region within a defined time interval.

The prospect, even, I daresay, the hint, of pesticide or pesticide-like compounds in rainfall is more than daunting.  It seems immediately necessary to ask what scale of operation would support such toxins finding their way into the expanses of the atmosphere and rainfall.  For the sake of the general welfare, I think we should all actively wish and seek to disprove the findings within this report.  I will not hesitate to amend this report if honest, fair, and accurate testing bears out negative reports over an adequate time period, and my motive never includes sensationalizing an issue.  This is one test, one time, one place, with limited means and support in the process.  I cannot disprove the results at this time, and I have an obligation to report on that which seems to be the case, uncomfortable as it might be.  It is not the first time that I have been in this situation, and judging from the changes in the health of the planet that have taken place, it is unlikely to be the last.  The sooner that the state of truth is reached, the better we shall all be for it in any sense that is real.

The second comment relates to the decline of the bee population.  Bees are an indicator species, the canary in the mine, as it were.  The bees and the amphibians have both been ringing their alarm for some time now, and we best not remain passive about finding the reasons for decline.  A minimum of 1/3 of our agricultural economy, and that means food, is dependent upon the bee population for its very existence.  This is no trifling matter, and we all need to get up to speed quickly on the importance of this issue, myself included.

Suffice it to say that compounds of this nature, i.e., historical pesticides like the organophosphates and the purportedly safer and more recent alternatives (e.g., the neonicotinoids), have a very close relationship to the ongoing and often ambiguous studies regarding bee Colony Collapse Disorder (CCD).  From my perspective, it would seem prudent to eliminate the findings of this report as a contributing cause to the problem as promptly as possible.  If that cannot be done so readily, then we may have a bigger problem on our hands than is imagined.

One of the interesting side notes is that the elements and groups identified as candidates for investigation actually seem to overlap between the neonicotinoids and the organophosphates.  This includes the nitrogen groups that characterize the neonicotinoids and the phosphate esters that characterize the organophosphates.  If such a combination were at hand, this would seem especially troublesome, as both forms remain mired in controversy, let alone any combination thereof.

The third and final comment relates to the toxicity of these compound types in general.  It is not just an issue about bees or salamanders.  These particular compounds have a history and effects that are not difficult for us to research, and we should become aware of their impacts upon the planet quickly enough.  Many of us already are.  The fact is that the organophosphates have their origins as nerve gas agents in the pre-World War II era, and in theory their use has been reduced, but hardly eliminated.  Residential use is apparently no longer permissible in the United States, but commercial usage still is.  This raises questions about what real effect any such “restrictive” legislation has had.

The neonicotinoids are promoted as a generally safer alternative to the organophosphates, but they are hardly without controversy as well.  They, too, have strong associations with CCD in the ongoing research.  They are also neuro-active insecticides.

It would seem to me that we all have a job to do in getting up to speed on the source, distribution and levels of exposure to insecticide and insecticide-related compounds.  A greater awareness of toxins in our environment, in general, also seems in order.  If our general environment has been affected to a degree that has avoided confrontation thus far, then we need to face the music as quickly as possible.  I trust that we understand the benefits of both rationality and aggressiveness when serious issues face us, and this may be another such time.  I hope that I will be able to dismiss this report in due time; at this time, I cannot.


Clifford E Carnicom
Nov 05, 2015

Born Clifford Bruce Stewart
Jan 19, 1953


Additional Notes:

The preliminary functional group assignments being made to the absorption peaks at this time are as follows (cm-1):

~3322 : Amine, alkynes (R2NH considered)
~2921 : CH2 (methylene)
~2854 : CH2 (methylene)
~1739 : Ester (RCOOR, 6-ring considered)
~1447 : Sulfate (S=O considered)
~1149 : Phosphate (phosphate ester, organophosphate considered)
~677  : Alkenes, alkynes, amine, alkyl halide

The assignments will be revised or refined as circumstances and sample collections permit, however, as a group they appear to provide a distinctive organic signature.  A structural model may be developed at a future date.
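As a rough illustration of how such a tabulation might be applied to future sample collections, the following is a minimal sketch in Python that compares observed absorption peaks against the reference list above.  The tolerance value and the matching logic are illustrative assumptions on my part, not part of the original analysis.

```python
# Illustrative sketch: match observed IR absorption peaks (cm^-1) against the
# candidate functional-group assignments tabulated above.  The tolerance of
# 15 cm^-1 is an assumed value for demonstration only.

ASSIGNMENTS = {
    3322: "Amine, alkynes (R2NH considered)",
    2921: "CH2 (methylene)",
    2854: "CH2 (methylene)",
    1739: "Ester (RCOOR, 6-ring considered)",
    1447: "Sulfate (S=O considered)",
    1149: "Phosphate (phosphate ester, organophosphate considered)",
    1072: "Phosphine, amine, ester, thiocarbonyl",
    677:  "Alkenes, alkynes, amine, alkyl halide",
}

def match_peaks(observed, tolerance=15):
    """Return {observed_peak: candidate assignment} for each observed peak
    that falls within `tolerance` cm^-1 of a tabulated reference peak."""
    matches = {}
    for peak in observed:
        for ref, label in ASSIGNMENTS.items():
            if abs(peak - ref) <= tolerance:
                matches[peak] = label
                break
    return matches

# Example: two peaks near tabulated values match; an unlisted peak does not.
print(match_peaks([2925, 1145, 2000]))
```

A comparison of this kind only flags candidate groups; confirming any assignment would still require the kind of repeated sampling and independent testing called for above.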

Some chemical compounds that may share similar properties with the material under consideration here include, for example (not all elements are included in any listed compound; listed for reference comparison purposes only):

p-chlorophenyl (3-phenoxypropyl)carbamate
N-(1-naphthylsulfonyl)-L-phenylalanyl chloride
2,2,2-trichloroethyl 2-(2-benzothiazolyl)dithio-alpha-isopropenyl-4-oxo-3-phenylacetamido-1-azetidineacetate
cytidine monophosphate

per:
SDBSWeb : (National Institute of Advanced Industrial Science and Technology, Nov 06 2015)