Carpinteria Crystal

Clifford E Carnicom
Sep 25 2016

An environmental crystal sample sent to Carnicom Institute by a concerned citizen has been analyzed to determine its nature.  The ground sample was received three years ago and has been held in custody since that time.  Circumstances are now more favorable for establishing the identity or nature of inorganic compounds, and the opportunity to do so in this case has been exercised.  The sample originates from the Santa Barbara-Carpinteria region of the country.  The sample is well documented, clean, and was collected and transported in a careful fashion.

One of the reasons for the interest in the sample is a repetition of events.  The citizen reports that similar-appearing materials have occurred within the same coastal housing district on multiple occasions over a period of many years.  In addition, the findings of this study may have relevance to a paper presented earlier on this site.  The interest in devoting time to sample analysis is directly related to the frequency and pattern of appearance.

There are also several crystal samples, collected or received over the years, that have not been given proper attention due to insufficient resources and means for investigation.  The majority of these cases, to my recollection, resulted from air filtration systems.  These deficiencies have likely delayed our understanding of various forms of pollution that likely surround us, and this will remain the case until full and sufficient resources are devoted to these types of problems.  It is the opinion of this researcher that the regulating environmental protection agencies have an obligation to this end and that this obligation has not been well served.

This particular sample has the following appearance:


Environmental Crystal Sample Material Received in 2013


The purpose of this paper is not to debate the origin or delivery method of the sample; the information available is insufficient to answer those questions in full detail.  It can be stated in fairness that the observer witnessed heavy aerosol operations over the region in the early hours of the day the sample was collected.  The density and activity level of the operations were stated to be high.

The purpose of this paper IS to call attention to what may be a repeating type of material that has potentially important environmental consequences, particularly if they are found to exist in aerosol or particulate form within the general atmosphere.  The sample type is also fully consistent with many of the analyses and postulates that have developed within the research over the years.  The specifics of that discussion will follow within this paper.

The sample has been evaluated using multiple approaches.  These include, but are not limited to:

  1. Electrochemistry techniques, specifically differential normal pulse voltammetry
  2. Solubility analyses
  3. Melting point determination
  4. Density estimates
  5. Microscopic crystal analysis
  6. Qualitative reagent tests
  7. Conductivity measurements
  8. Index of refraction measurements

The results of these analyses indicate that the dominant component of the material is potassium chloride, a metallic salt.  There are indications that the sample contains more than one component, but any further investigation will have to take place at a later time.  Every physical and chemical form has implications, applications and consequences, especially if it occurs in a manner foreign or unexplained to the environment.  The material shown above is no exception to those concerns.  It may be the case that the appearance of this material in an unexplained manner and location is of no consequence; prudence, however, suggests that we are obligated to seek out that which has no accountable explanation.  This premise is at the very heart of any forensic investigation, and environmental science and pollution control are subject to that very same demand.



A brief bit of historical perspective on this topic may be helpful.  A search on this site on the subject of crystals will bring up a minimum of eight additional relevant papers; there are likely to be more.  These papers range from 2001 to the present, so from this standpoint alone there is a repeating issue involved here.

A search on this site for historical presentation on potassium issues produces at least three papers on the subject.  There is reason to consider, therefore, that potassium (and related) chemical compounds may be worthy of examination with respect to geoengineering as well as biological issues.

Within this combined set of roughly a dozen papers on these subjects, two will be mentioned further at this time.

The first is that of another sample, also of a crystalline nature, received in 2003 from the same specific region of the country.  The title of that short report is “Additional Crystal Under Examination” (Jun 2003).  There are three points of interest in comparing that report with the current one:

1. Two generally similar and unaccountable sample forms appeared in similar locations over a ten-year period, and public interest in identifying the nature of the material has remained over this same prolonged period.

2. The 2003 report is reasonably brief, with a limited microscopic examination offered.  The topic was mentioned more as an anomaly and a curiosity, as there was no basis at the time for an in-depth study of the materials; in addition, resources to do so were non-existent.

3. The first sample was noted to lack water solubility.  The importance of this observation is that the samples, although visually similar, have importantly different chemical properties.  The conclusion is that multiple material types are expected to be subject to investigation over the course of time.

The second is a laboratory report received in 2005.  The title of that paper is “Calcium and Potassium” (Mar 2005).  The importance and relevance of this paper can be understood from its opening paragraph:

A laboratory analysis of a rainwater sample from a rural location in the midwestern U.S. has been received.  This lab report reveals extremely high levels of potassium and calcium within the sample. Comparative studies have been done and they show that the calcium concentration is a minimum of 5 times greater, and that the potassium level is a minimum of 15 times greater, than that which has been reported in the polluted skies of Los Angeles, California.

It will also be noticed that several health and environmental concerns with respect to aerosolized potassium salts are enumerated in that latter paper.  Attention should also be paid to the intriguing discussion of electromagnetic effects and impacts that must be considered with the chemistry of potassium and related ions.

Potassium chloride has common uses as well, such as a fertilizer or as a water treatment compound; there is, however, no cause given to think that it is being used in such fashions at this location and setting at this time.



Let us now return to the current moment.  The relevance and direction of those papers have been borne out over time, and the urgency of responsibility upon us is as imposing as ever.  We do not have the luxury of another 20 years to reach conclusions on such an obvious state of affairs.

There are at least three immediate applications or consequences of the existence of aerosolized potassium chloride upon the atmosphere that should be mentioned.

1. Heat Impacts

2. Moisture Impacts

3. Electromagnetic Impacts

With respect to heat impact, potassium chloride is highly soluble in water.  When it dissolves, it absorbs heat from the water, and the magnitude is significant.  Potassium chloride has been used commercially in cold packs for this same reason; it is also readily available and relatively inexpensive.  It therefore can potentially be used to influence atmospheric thermodynamics, and this is one of many leads of investigation to pursue.
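The magnitude of this cooling effect can be sketched with a back-of-envelope calculation using the handbook enthalpy of solution for potassium chloride; the 30 g / 100 g example below is illustrative only, not a measurement from this study:

```python
# Back-of-envelope estimate of the cooling produced when KCl dissolves
# in water (the principle behind commercial KCl cold packs).
# Values are standard handbook figures, not measurements from this study.

H_SOLN_KCL = 17.2e3   # J/mol, enthalpy of solution of KCl (endothermic)
M_KCL = 74.55         # g/mol, molar mass of KCl
C_WATER = 4.184       # J/(g*K), specific heat of liquid water

def temperature_drop(grams_kcl: float, grams_water: float) -> float:
    """Ideal temperature drop (K) when KCl dissolves in water,
    assuming all heat is drawn from the water and none from outside."""
    heat_absorbed = (grams_kcl / M_KCL) * H_SOLN_KCL    # J
    return heat_absorbed / (grams_water * C_WATER)      # K

# Example: 30 g of KCl dissolved in 100 g of water
print(round(temperature_drop(30.0, 100.0), 1))  # ~16.5 K drop
```

A drop of this order is why the compound is practical in cold packs; in the dilute conditions of the open atmosphere the effect per unit of air would be far smaller, but the sign of the effect is the same.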

On the flip side of the equation, potassium chloride in the solid state has a rather low specific heat, especially relative to that of both air and water.  This means that, depending upon the state of the surrounding atmosphere, it can also possess the capability to heat the atmosphere rather than cool it.

Furthermore, potassium in its elemental metallic form also has a lower specific heat than air, and once again this may allow for a net heating impact upon the atmosphere, depending on physical state, location and interaction with other elements or compounds.
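A simple comparison of handbook specific heats illustrates the point of the two preceding paragraphs; the values below are standard literature figures, and the ranking, rather than the exact numbers, is what matters:

```python
# Handbook specific heats (J/(g*K)) used to compare how readily each
# substance warms for a given heat input; lower values warm faster.
SPECIFIC_HEAT = {
    "water (liquid)":     4.184,
    "air (dry, ~300 K)":  1.005,
    "potassium metal":    0.757,
    "potassium chloride": 0.69,
}

# Rank from most readily warmed to least: 1 J of heat raises 1 g
# of the substance by (1 / c) kelvin.
for substance, c in sorted(SPECIFIC_HEAT.items(), key=lambda kv: kv[1]):
    print(f"{substance:20s} c = {c:5.3f}   dT per J per g = {1/c:.2f} K")
```

Gram for gram, the salt and the metal both warm several times more readily than water, which is the basis of the net-heating argument above.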

The point of this discussion is that metallic salts of any kind DO have an impact upon the heating dynamics of the atmosphere, and that this process can be both complicated and variable.  You cannot place anything into the atmosphere without having an effect in some fashion, and it is a mistake to oversimplify and overgeneralize as to what those changes will be.  The location of placement of aerosols is another matter also, as has been discussed extensively on this site.

We are, therefore, not permitted to remain ignorant of the impacts that foreign and contaminating materials have upon the environment; heat dynamics are only one of many aspects that we are forced to confront when the atmosphere is altered in ANY significant fashion.

There are, of course, many other environmental consequences from the addition of ionizable metallic salts into the environment.  These include plant life and agriculture, for example.  Readers may also wish to become familiar with a discussion regarding soil impacts as presented within the paper “The Salts of Our Soils” (May 2005).

As far as moisture is concerned, heat and moisture are obviously very closely related subjects.  One of the trademarks of the salt genre is the absorption of moisture.  Some salts attract moisture so strongly that they are deliquescent, meaning that they can draw enough moisture from the ambient atmosphere to dissolve themselves.  The observation of this phenomenon is quite remarkable; one can start with a solid and watch it change to an eventual liquid form.  Calcium chloride and strontium chloride are both good examples of this class of materials.
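The threshold for this behavior is the deliquescence relative humidity (DRH) of the salt.  The sketch below uses approximate room-temperature DRH values from the literature; the figures are typical published values, not measurements from this study:

```python
# Approximate deliquescence relative humidities (DRH, %) at room
# temperature, from the literature; a salt exposed to ambient RH above
# its DRH will draw enough water vapor to dissolve into a droplet.
DRH = {
    "calcium chloride":   31,   # strongly deliquescent
    "magnesium chloride": 33,
    "sodium chloride":    75,
    "potassium chloride": 84,
}

def deliquescent_at(rh_percent: float) -> list[str]:
    """Salts expected to liquefy at the given ambient relative humidity."""
    return [salt for salt, drh in sorted(DRH.items(), key=lambda kv: kv[1])
            if rh_percent > drh]

print(deliquescent_at(50))  # -> ['calcium chloride', 'magnesium chloride']
```

Calcium chloride, with its low DRH, liquefies in ordinary room air, matching the behavior described above; potassium chloride, by contrast, deliquesces only in quite humid air.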

Locking moisture up in this fashion will most certainly increase the heat in the atmosphere; water is one of the greatest cooling compounds that exists on the planet.  It is impossible to separate heat and moisture impacts when dealing with aerosolized metallic salts; it is certain that there will be an impact upon the atmosphere,  environment and health.  It is difficult to predict a favorable outcome here.

Lastly, there may still be some who ridicule the notion of electromagnetic impacts of ionized metallic salts upon the atmosphere and the environment.  I think such an approach would ultimately be foolhardy.  This tenet was brought forth early in the research of this organization, and the premise remains as strong as when it originated.  For those who care to repeat the enterprise, there are measurements to support the hypothesis, and they only continue to accumulate.

For those who seek conventional sources, one need look no further than a document that traces back to the 1990s, entitled “Modeling of Positively Charged Aerosols in the Polar Summer Mesopause Region” (Rapp, Earth Planets Space, 1999).  A very specific reference to the ability of potassium, in combination with ultraviolet light, to increase the electron density of the atmosphere will be found there.  There are other elements that share this remarkable physical property, and they have been discussed on this site for many years now.  Reading the patents of Bernard Eastlund may also be insightful.  The ability of moisture to ionize many metallic salts must also be included within the examinations that are required to take place.

It is difficult to ignore and discount the fundamental heat, moisture, and electromagnetic impacts upon the planet when metallic salts are artificially introduced into the atmosphere.  It would not be wise to do so.  The case for investigation, accountability and redress is now strong, and each of us can make the choice as to how to best proceed.  It seems to be a simple matter to want to protect and ensure the welfare of our gifted home, as our existence depends upon it.  Clarity and unity of purpose would seem to be an end goal here; I hope that each of us will seek it.

Regardless of the origin of this particular sample (which is unlikely ever to be known exactly), this report points to the requirement of identifying repetitive and unknown contaminants in the environment.  The responsibility for this process does not fall either primarily or exclusively upon the citizens; this population has neither the resources nor the means to perform or satisfy the requirements of identification, evaluation and assessment.  Entrusted agencies that exist specifically for protection of the welfare of the common environment (e.g., air, water, soil), and that are funded by these same citizens, ARE required to do so.  In this vein, I will once again repeat the closing statement from above:

Clarity and unity of purpose would seem to be an end goal here; I hope that each of us will seek it.


Clifford E Carnicom

Sep 25 2016


Supplemental Discussion:

Approximately a dozen methods of investigation have been used to reach the conclusions of this report.  These will now be described to a modest level of detail to assist in portraying the complexities of analyzing unknown environmental samples.  This description will further the argument that the citizenry is not realistically expected to assume this burden and cost; contamination and pollution are at the heart of existence for publicly funded environmental protection agencies and entities.  It is recommended that the public seek the level of accountability that is required to reduce and eliminate persistent and harmful pollution and the contamination of our common environment.

1. Voltammetry:

The methods of differential normal pulse voltammetry have been applied to the sample.  These methods are quite useful in the detection of inorganics, especially metals at trace concentrations.  The results of the analysis are shown below:


Differential Normal Pulse Voltammetry Analysis of Crystal Sample

The analysis indicates a minimum of two chemical species to consider.  The first of these is a suspected Group I or Group II element (-2.87 V).  The most probable candidates to consider are calcium, strontium, barium and potassium.  The other is the chloride ion (+0.63 V and +1.23 V).

At this point of the investigation, our strongest prospect is therefore an ionic metallic salt in crystalline form, most likely involving a subset of Group I or II of the periodic table.  The most likely candidate will, furthermore, be a chloride form of the salt.
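Why a single peak near -2.87 V leaves several candidates open can be illustrated with a nearest-match lookup against textbook standard reduction potentials.  Real voltammetric peak positions shift with the electrode, supporting electrolyte and reference, so this is a conceptual sketch only:

```python
# Standard reduction potentials (V vs SHE) for the candidate cations;
# textbook values. Peak positions in a real voltammogram shift with
# electrode, electrolyte and reference, so this nearest-match lookup
# only illustrates why one peak near -2.87 V leaves several candidates.
E_STANDARD = {
    "Li+/Li":  -3.04,
    "K+/K":    -2.93,
    "Cs+/Cs":  -2.92,
    "Ba2+/Ba": -2.91,
    "Sr2+/Sr": -2.90,
    "Ca2+/Ca": -2.87,
}

def candidates_near(peak_v: float, tolerance_v: float = 0.10) -> list[str]:
    """Couples whose standard potential lies within tolerance of the peak."""
    return [couple for couple, e in E_STANDARD.items()
            if abs(e - peak_v) <= tolerance_v]

print(candidates_near(-2.87))
```

The Group I/II couples cluster within a few hundredths of a volt of one another, which is why the voltammetric result alone cannot finish the identification and the solubility and melting-point work that follows is needed.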

2. We can then proceed to solubility tests.  Four candidates from above will now be considered, along with two additional candidates resulting from the chloride prospects:

calcium chloride
strontium chloride
barium chloride
potassium chloride

lithium chloride
cesium chloride

With respect to the first set of four, the solubility tests applied (i.e., water, methanol, acetone, sodium bicarbonate, acid, base) eliminate all but potassium chloride for further examination.

This reduces the primary set of consideration to that of:

potassium chloride
lithium chloride
cesium chloride

We now attempt to confirm the existence of the chloride ion in a redundant fashion.  A qualitative chemical test (HCl, AgNO3) is then applied to the sample in aqueous solution.  The existence of the chloride ion is confirmed.  The set of three candidates remains in place.

The next method applied to the sample is the determination of the melting point of the presumed ionic crystal form.  Ionic metallic salts generally have high melting points, and this presents some difficulties with the use of conventional equipment and means.

The methods of calorimetry were adapted to solve this particular problem.  The methods were also applied to a control sample of potassium chloride, as well as two additional control compounds.  The control and calibration trials produced results within the range of expected error (< ~5%).

The melting point of the crystal form was determined experimentally by the above methods as approximately 780 deg. C.  The melting point of potassium chloride is 770 deg. C.  This result is well within the range of expected experimental error (approximately 1.3%).  During the process, it was noticed that an additional minority compound exists within the sample, as a small portion of the sample melts at a much lower point (est. 300-400 deg. C.).  The minority compound would require separation and identification in a further analysis.

The melting points of lithium chloride and cesium chloride are 605 deg. C. and 645 deg. C., respectively, and they are thus eliminated from further consideration.
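The elimination step above can be sketched numerically with the literature melting points of the three remaining candidates:

```python
# Literature melting points (deg C) of the remaining chloride candidates,
# compared with the calorimetric estimate for the crystal sample.
MELTING_POINT = {
    "potassium chloride": 770,
    "cesium chloride":    645,
    "lithium chloride":   605,
}

measured = 780  # deg C, experimental estimate for the sample

for salt, mp in MELTING_POINT.items():
    discrepancy_pct = abs(measured - mp) / mp * 100
    print(f"{salt:20s} literature {mp} C, discrepancy {discrepancy_pct:.1f}%")
```

Only potassium chloride falls within any plausible experimental error of the measured value; the other two differ by roughly 20-30%.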

These results narrow the list of candidates specifically to that of potassium chloride.

An additional controlled test of conductivity of the salt in solution was applied.   The result of that test indicates agreement in conductivity with a known concentration solution of potassium chloride.  The error in that case was also well within the expected range of experimental error (0.6%).

In addition, further tests involving density determination, index of refraction, visual and microscopic crystal analysis further substantiate the identification of the crystal as being primarily that of potassium chloride.

The Demise of Rainwater

Clifford E Carnicom

A Paper to be Developed During
the Summer of 2016
(Last Edit Jun 20 2016)

The single most important chemical species in clouds and precipitation is the ... pH value.

Paul Crutzen, Nobel Prize Winner in Chemistry, 1995

Atmosphere, Climate and Change, Thomas Graedel & Paul J. Crutzen

Scientific American Library, 1997


Photo : Carnicom Institute

An analysis of five rainfall samples collected over a period of six months and spanning three states in the western United States has been completed.  There are five conclusions that are forthcoming:

1. The rainfall samples studied portray a smorgasbord of contamination. The contaminants appear to be both complex and numerous in nature.

2. There does not appear to be effective or comprehensive monitoring or regulation of the state of air quality, and consequently, rainfall quality in the United States at this time.

3. The results of the current analysis, utilizing more capable equipment and methods, are highly consistent with those that originated from this researcher close to two decades ago.

4. All reasonable requests or demands by the citizenry for the investigation and addressing of this state of affairs over this same time period have been refused or denied.

5. The level of contamination that exists poses both a risk and a threat to health, agriculture, biology, and the welfare of the planet.


Let us now proceed with some of the details.

We can begin with the pH, i.e., the acid or alkaline nature of rainfall.  Biochemical reactions take place (or, for that matter, fail to take place) at a specific temperature and pH.  If the system or environment for a reaction is disturbed with respect to acidity or temperature, then the reaction itself is interfered with.  If the conditions depart far enough from what is required, the reaction may simply not take place at all.  Such is the risk of interference with the acid-base nature of rainfall, upon which all life on this planet depends.
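As a point of reference for the sensitivity involved, even perfectly clean rainwater is mildly acidic: equilibrium with atmospheric carbon dioxide alone brings it to a pH near 5.6.  The sketch below uses standard handbook constants (Henry's law constant for CO2 and the first dissociation constant of carbonic acid):

```python
# Why clean rainwater is already mildly acidic: equilibrium with
# atmospheric CO2 alone gives a pH near 5.6. Handbook constants.
import math

K_H   = 3.3e-2   # mol/(L*atm), Henry's law constant for CO2 in water
P_CO2 = 4.0e-4   # atm, approximate atmospheric CO2 partial pressure
KA1   = 4.45e-7  # first dissociation constant of carbonic acid

co2_aq = K_H * P_CO2              # dissolved CO2 (mol/L)
h_plus = math.sqrt(KA1 * co2_aq)  # [H+] from H2CO3* <=> H+ + HCO3-
print(round(-math.log10(h_plus), 2))  # ~5.62
```

Measured rainfall pH values departing substantially from this baseline therefore indicate acidic or alkaline contributions beyond dissolved CO2.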


To be continued.





UV Detector & Lab Equipment Used for Summary View Data


Electrochemical Signature of Rainwater Tests for Trace Metals
as Determined by Differential Normal Pulse Voltammetry

The following metallic elements have been determined to exist, or to be strong candidates to exist, within a series of five rainwater samples that have been tested for trace metals.  The samples span three states across the country and six months of time.  The method applied is that of Differential Normal Pulse Voltammetry.  The level of detection for the method is on the order of parts per million (PPM).  This list considerably extends the scope of consideration for the future investigation and detection of metallic elements within rainwater.  The findings in the upper portion of the table are highly consistent with those reported by various laboratories across the country; those in the lower half serve to prompt further investigation into additional elements that are closely related in their properties within the periodic table.  A detailed examination of the physical properties of these elements will likely provide additional insight into the applications of use for these same elements.  It can be noticed that the majority of elements within the list act as reducing agents.

Element             Measured Mean Redox Voltage   Actual Redox Voltage
                    (Absolute Value)              (Absolute Value)
Titanium (Ti)       1.63, 1.32, 1.24              1.63, 1.31, 1.23
Aluminum (Al)       1.67                          1.66
Barium (Ba)         2.90                          2.90
Strontium (Sr)      2.90                          2.89
Magnesium (Mg)      2.66, 2.35                    2.68, 2.37
Gallium (Ga)        0.52, 0.65                    0.56, 0.65
Scandium (Sc)       2.56, 2.09                    2.60, 2.08
Zirconium (Zr)      1.45                          1.43

Standard Error of Measurement: 0.013 V; n = 15
(No information regarding concentration or concentration ranking is provided here.)


Additional Inorganic Analyses:

Qualitative (Color Reagents) Test Results for Combined Rainfall Sample
A Value of 1 Indicates a Positive Test Result
Concentration of Rainwater Sample ~15x
(No information regarding concentration or concentration ranking is provided here.)
(Chromium, Cyanide & Iron appear to be at minimal trace levels)


Qualitative Positive Test Examples:
Phosphates, Nitrates, Ammonia, Silica




Tests to Determine the Boiling Point
for the Concentrated Rainfall Sample Using an Oil Bath
(Contamination is Evident)





An Organic Extraction Process

(Results subsequently to be examined by Infrared Spectroscopy)


Infrared Spectrum of Rainfall Organic Extraction :

Water Soluble & Insoluble Components

(see previous photo)

(solvent influences removed)


Gas Chromatography (TCD) Applied to Organic Extracts

(tailing from varying polarities)




Biologicals Extracted from Rainfall Concentrate Samples


Additional Note:

I wish to thank Mr. John Whyte for his dedication and effort to organize and produce an environmental conference in Los Angeles, California during the summer of 2012. Mr. Whyte, in support of the speakers at the conference, provided the means for some of the environmental test equipment used in this report. I also wish to thank the general public for their assistance during this last year in the acquisition of important scientific instrumentation by Carnicom Institute. This report is made possible only by that generosity.

Clifford E Carnicom

Jun 18, 2016

To be continued.

Pollution, Concentration and Mortality

by Clifford E Carnicom
Mar 19 2016

A preliminary analytical model has been developed to estimate the impact of increased concentrations of atmospheric fine particulate pollution (PM 2.5) upon mortality rates. The model is a synthesis between an analysis of measured pollution levels (PM 2.5) and published increased mortality estimates. The model is based, in part, upon previous investigations as published in the paper “The Obscuration of Health Hazards : An Analysis of EPA Air Quality Standards“, Mar 2016.

Models for both concentration levels and visibility have now been developed; for a related model in terms of visibility, please see the paper entitled Pollution, Visibility and Mortality, Mar 2016.

Preliminary Concentration-Exposure-Mortality Model

A substantial database of direct field measurements of atmospheric fine particulate matter in the southwestern United States during the winter of 2015-2016 has been acquired. The measurements reveal clear relationships between the quality of air, PM 2.5 concentration levels, visibility of the surrounding territory, and the existence or absence of airborne aerosol operations.

The field data show that repeated instances of PM 2.5 counts in the range of 30-60 ug/m3 are not unusual in combination with active atmospheric aerosol operations; visibility and health impacts are obvious under these conditions. The PM 2.5 count will invariably be less than 10 (or even 5) ug/m3 under good quality air conditions.

Additional studies based upon this acquired data may be conducted in the future. Numerous published studies make known relationships between small increases in PM 2.5 pollution and increased mortality.

Measured PM 2.5 Count, 44 ug/m3.

As an example of use of this model, if the PM 2.5 count is 44 ug/m3 as shown in the above example, and if the number of days of exposure of this level is approximately 50, then the estimated increase in annual mortality is approximately 17%. This is an extreme increase in mortality, but under observed conditions in various locales it is not beyond the range of consideration.  It is thought that reasonably conservative approaches have been adopted within the modeling process.
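The structure of such a model can be sketched as a linear dose-response scaled by the fraction of the year exposed.  The slope used below is a placeholder of the order reported in published short-term PM 2.5 studies, NOT the coefficient of the model developed here (which is not reproduced in this paper), so the numbers it returns differ from the 17% example above:

```python
# A generic linear sketch of a concentration-exposure-mortality model.
# The slope is a placeholder (~1% per 10 ug/m3, the order reported in
# published short-term PM2.5 studies), not this paper's calibration.
BASELINE = 10.0      # ug/m3, reference "good air" PM2.5 level
PCT_PER_10UG = 1.0   # hypothetical: % mortality increase per 10 ug/m3

def mortality_increase_pct(pm25: float, exposure_days: float) -> float:
    """Estimated % increase in annual mortality for a given PM2.5 level
    sustained over the given number of days of the year."""
    excess = max(pm25 - BASELINE, 0.0)       # ug/m3 above baseline
    annual_fraction = exposure_days / 365.0  # fraction of year exposed
    return (excess / 10.0) * PCT_PER_10UG * annual_fraction

# The field example above: 44 ug/m3 over 50 days.
print(round(mortality_increase_pct(44.0, 50), 2))  # placeholder slope gives ~0.47
```

Swapping in a long-term exposure coefficient, or the model's own calibration, changes only the constants, not the structure of the estimate.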

The field data that have been collected, and this model, further highlight the serious deficiencies in the Air Quality Index (AQI) currently in use by the U.S. Environmental Protection Agency (EPA). In light of the current understanding of the health impacts of small changes in PM 2.5 counts (e.g., 10 ug/m3), a scale that gives equal prominence to values as high as 500 ug/m3 (catastrophic conditions) is an incredible disservice to the public. Please see the earlier referenced papers for a more thorough discussion of the schism between public health needs and the reporting systems that are in place.
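For reference, the EPA mapping from a 24-hour PM 2.5 concentration to the AQI is a piecewise-linear table; the breakpoints below are those published with the 2012 standard (since revised downward again in 2024).  The sketch shows how a reading of 44 ug/m3, clearly degraded air, lands only in the low 100s on the 0-500 index:

```python
# EPA's piecewise-linear AQI mapping for 24-hour PM2.5, using the
# breakpoints published with the 2012 standard (revised again in 2024).
PM25_BREAKPOINTS = [
    # (C_low, C_high, AQI_low, AQI_high)
    (0.0,    12.0,    0,  50),
    (12.1,   35.4,   51, 100),
    (35.5,   55.4,  101, 150),
    (55.5,  150.4,  151, 200),
    (150.5, 250.4,  201, 300),
    (250.5, 350.4,  301, 400),
    (350.5, 500.4,  401, 500),
]

def pm25_to_aqi(conc: float) -> int:
    """24-hour PM2.5 concentration (ug/m3) to AQI, linear within bands."""
    c = round(conc, 1)
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= c <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo)
    raise ValueError("concentration beyond AQI scale")

print(pm25_to_aqi(44))  # the field reading discussed above -> 122
```

The compression is the point at issue: a fourfold increase in fine-particle load over good air moves the index by barely a category.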

This researcher advocates the availability of direct and real-time fine particulate matter concentration levels (PM 2.5) to the public; this information should be as readily available as current weather data is.  Cost and technology are no longer major barriers to this goal.


Active Aerosol Operation
City of Rocks, Southern N.M.

Demonstration of the Impact of Aerosol Banks Upon Visibility.
Concentration Levels and Subsequent Visibility Changes
Directly Impact Mortality.

As an incidental note, it may be recalled from earlier work that there is a strong conceptual basis for the development and application of surveillance systems that are dependent upon atmospheric aerosol concentrations. This application is only one of many that have been proposed over a period of many years, and readers may refer to additional details on this subject within the research library. Documentaries produced by this researcher (Aerosol Crimes, Cloud Cover) during the last decade also elaborate on those analyses. The principles of LIDAR apply here.

Current field observations continue to reinforce this hypothesis. Observations in the southwest U.S. indicate that two locale types appear to be preferred targets for application: large urban areas and the border region between the U.S. and Mexico. These locations, considered jointly, suggest that both people and the monitoring or tracking of those same people within an area may be a technical and strategic priority of the project. A citizen-based, systematic and sustained nationwide monitoring system of PM 2.5 concentrations over a sufficient time period could clarify this issue further.

The recent papers on the subject of air quality are intended to raise the awareness and involvement of the public with respect to environmental and health conditions. There are very real relationships between how far you can see, the concentration levels of particulates in the atmosphere, and ultimately our mortality. It is our responsibility as stewards, as well as in our own best interest, to not deliberately and wantonly contaminate the planet.

Clifford E Carnicom
Mar 19, 2016

Pollution, Visibility and Mortality

Clifford E Carnicom
Mar 12 2016

A preliminary empirical model has been developed to estimate the impact of diminished visibility and fine particulate pollution upon mortality rates.  The model is a synthesis between an analysis of measured pollution levels (PM 2.5), observed visibility levels and published increased mortality estimates.  The model is based, in part, upon previous investigations as published in the paper “The Obscuration of Health Hazards : An Analysis of EPA Air Quality Standards“, Mar 2016.



Preliminary Visibility-Exposure-Mortality Model

Air pollution has many consequences.  One of the simplest of these consequences to understand is mortality and the degradation of health.  It would be prudent for each of us to be aware of the sources of pollution in the atmosphere and their subsequent effects upon our well-being.  Measurement, monitoring and auditing of airborne pollution are within reach of the general public, and citizen participation in these actions is increasingly imperative.  The role of public service agencies acting on behalf of public health needs and interests has not been fulfilled, and we must all understand and react to the consequences of that neglect.

This particular model places the emphasis upon what can be directly observed with no special means: the visibility of the surrounding sky.  Visibility levels are a direct reflection of the particulate matter in the atmosphere, and relations between what can be seen (or not seen, for that matter) and the concentration of pollution in the atmosphere can be established.  The relationships are observable, verifiable, and well known for their impacts upon human health, including mortality.
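The visibility-concentration link described here is conventionally expressed through Koschmieder's relation, which converts an atmospheric extinction coefficient into a visual range.  The mass extinction efficiency used below is an assumed typical literature value for ambient fine aerosol, so the resulting distances are order-of-magnitude estimates only:

```python
# Koschmieder's relation links visual range to atmospheric extinction;
# combining it with an assumed mass extinction efficiency for fine
# particles gives a rough PM2.5-to-visibility sketch. The efficiency
# is a typical literature figure, not a measurement from this study.
KOSCHMIEDER = 3.912     # dimensionless, 2% contrast threshold
MASS_EXT_EFF = 3.0      # m^2/g, assumed for ambient fine aerosol
RAYLEIGH = 1.2e-5       # 1/m, clean-air (Rayleigh) extinction

def visual_range_km(pm25_ug_m3: float) -> float:
    """Approximate visual range (km) for a given PM2.5 concentration."""
    b_ext = RAYLEIGH + MASS_EXT_EFF * pm25_ug_m3 * 1e-6  # 1/m
    return KOSCHMIEDER / b_ext / 1000.0

for pm in (5, 15, 44):
    print(f"PM2.5 {pm:3d} ug/m3  ->  ~{visual_range_km(pm):.0f} km")
```

The inverse relationship is the key point: roughly doubling the fine-particle load roughly halves how far one can see.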

All models are idealized representations of reality.  Regardless of variations in the modeling process, it can be confidently asserted that there are direct physical relationships between particulate matter in the atmosphere, the state of visibility, and your health.   There are, of course, many other relationships of supreme importance, but the objective of this article is a simple one.  It is: to look, to be aware of your surroundings, to think, to act, and to participate. The luxury and damage of perpetual ignorance cannot be dismissed or excused.

The call for awareness is a fairly simple one here.  I encourage you to become engaged, if for nothing else than the sake of your own health.  When this has been achieved, you are in a position of strength to help others and to improve our world.  This generation has no right or privilege to deny the depths of nature to those that will follow us.



Models are one thing, real life is another.  It is time to assume your place.


Clifford E Carnicom
Mar 12, 2016

The Obscuration of Health Hazards:
An Analysis of EPA Air Quality Standards

Clifford E Carnicom
Mar 12 2016

A discrepancy between measured and observed air quality in comparison to that reported by the U.S. Environmental Protection Agency under poor conditions in real time has prompted an inquiry into the air quality standards in use by that same agency. This analysis, from the perspective of this researcher, raises important questions about the methods and reliability of the data that the public has access to, and that is used to make decisions and judgements about the surrounding air quality and its impact upon human health. The logic and rationale inherent within these same standards are now also open to further examination. The issues are important as they have a direct influence upon the perception by the public of the state of health of the environment and atmosphere. The purpose of this paper is to raise honest questions about the strategies and rationales that have been adopted and codified into our environmental regulatory systems, and to seek active participation by the public in the evaluation process.  Weaknesses in the current air quality standards will be discussed, and alternatives to the current system will be proposed.

Particulate Matter (PM) has an important effect upon human health.  Currently, there are two standards for measuring the particulate matter in the atmosphere, PM 10 and PM 2.5.  PM 10 consists of material less than 10 microns in size and is often composed of dust and smoke particles, for example.  PM 2.5 consists of materials less than 2.5 microns in size and is generally invisible to the human eye until it accumulates in sufficient quantity.  PM 2.5 material is considered to be a much greater risk to human health as it penetrates deeper into the lungs and the respiratory system.  This paper is concerned solely with PM 2.5 pollution.

As an introduction to the inquiry, curiosity can certainly be called to attention with the following statement by the EPA in 2012, as taken from a document (U.S. Environmental Protection Agency 2012,1) that outlines certain changes made relatively recently to air quality standards:

“EPA has issued a number of rules that will make significant strides toward reducing fine particle pollution (PM 2.5). These rules will help the vast majority of U.S. counties meet the revised PM 2.5 standard without taking additional action to reduce emissions.”

Knowing and studying the “rule changes” in detail may serve to clarify this statement, but on the surface it certainly conveys the impression of a scenario whereby a teacher changes the mood in the classroom by letting the students know that more of them will be passing the next test.  Even better, they won’t need to study any harder and they will still get the same result.

In contrast, the World Health Organization (WHO) is a little more direct (World Health Organization 2013, 10) about the severity and impact of fine particle pollution (PM 2.5):

“There is no evidence of a safe level of exposure or a threshold below which no adverse health effects occur. The exposure is ubiquitous and involuntary, increasing the significance of this determinant of health.”

We can, therefore, see that there are already significant differences in the interpretation of the impact of fine particle pollution (especially from an international perspective), and that the U.S. EPA is not exactly setting a progressive example toward improvement.

Another topic of introductory importance is that of the AQI, or “Air Quality Index” that has been adopted by the EPA (“Air Quality Index – Wikipedia, the Free Encyclopedia” 2016).  This index is of the “idiot light” or traffic light style, where green means all is fine, yellow is to exercise caution, and red means that we have a problem.  The index, therefore, has the following appearance:

There are other countries that use a similar type of index and color-coded scheme.  China, for example, uses the following scale (“Air Quality Index – Wikipedia, the Free Encyclopedia” 2016):


As we continue to examine these scale variations, it will also be of interest to note that China is known to have some of the most polluted air in the world, especially over many of the urban areas.

Not all countries, jurisdictions or entities, however, use the “idiot light” approach that employs an arbitrary scaling method removed from showing the actual PM 2.5 pollution concentrations, such as those shown from the United States and China above.  For example, the United Kingdom uses a scale (“Air Quality Index – Wikipedia, the Free Encyclopedia” 2016) that is dependent upon actual PM 2.5 concentrations, as is shown below:

Notice that the PM 2.5 concentration for the U.K. index is directly accessible and that the scaling for the index is dramatically different from that for the U.S. or China.  In the case of the AQI used by the U.S. and China (and other countries as well), a transformed scale runs from 0 to 300-500, with concentration levels that are generally more obscure and ambiguous within the index.  In the case of the U.K. index, the scale reports a specific PM 2.5 concentration level directly, with a maximum (i.e., ~70 ug/m^3) that is far below that incorporated into the AQI index (i.e., 300 – 500 ug/m^3).

We can be assured that if a reading of 500 ug/m^3 is ever before us, we have a much bigger problem on our hands than discussions of air quality.  The EPA AQI is heavily biased toward extreme concentration levels that are seldom likely to occur in practical affairs; the U.K. index gives much greater weight to the lower concentration levels that are known to directly impact health, as reflected by the WHO statement above.

Major differences in the scaling of the indices, as well as their associated health effects, are therefore hidden within the various color schemes that have been adopted by various countries or jurisdictions.  Color has an immediate impact upon perception and communication; the reality is that most people will seldom, if ever, explore the basis of such a system as long as the message is “green” under most circumstances that they are presented with.  The fact that one system acknowledges serious health effects at a concentration level of  50 – 70 ug/m^3 and that another does not do so until the concentration level is on the order of 150 – 300 ug/m^3 is certainly lost to the common citizen, especially when the scalings and color schemes chosen obscure the real risks that are present at low concentrations.

The EPA AQI system appears to have its roots in history rather than in simplicity and directness in describing the pollution levels of the atmosphere, especially as it relates to the real-time known health effects of even short-term exposure to lower-concentration PM 2.5 levels.  The following statement (“Air Quality Index | World Public Library” 2016) acknowledges weaknesses in the AQI since its introduction in 1968, yet the methods have nevertheless been perpetuated for more than 45 years.

“While the methodology was designed to be robust, the practical application for all metropolitan areas proved to be inconsistent due to the paucity of ambient air quality monitoring data, lack of agreement on weighting factors, and non-uniformity of air quality standards across geographical and political boundaries. Despite these issues, the publication of lists ranking metropolitan areas achieved the public policy objectives and led to the future development of improved indices and their routine application.”

The system of color coding to extreme and rarified levels with the use of an averaged and biased scale versus one that directly reports the PM 2.5 concentration levels in real time is an artifact that is divorced from current observed measurements and the knowledge of the impact of fine particulates upon human health.

The reporting of PM 2.5 concentrations directly along with a more realistic assessment of impact upon human health is hardly unique to the U.K. index system. With little more than casual research, at least three other independent systems of measurement have been identified that mirror the U.K. maximum scaling levels along with the commensurate PM 2.5 counts. These include the World Health Organization, a European environmental monitoring agency, and a professional metering company index scale (World Health Organization 2013, 10) (“Air Quality Now – About US – Indices Definition” 2016) (“HHTP21 Air Quality Meter, User Manual, Omega Engineering” 2016, 10).

As another example to gain perspective between extremes and maximum “safe” levels of PM 2.5 concentrations, we can recall an event that occurred in Beijing, China during November 2010, and that was reported by the New York Times in January of 2013 (Wong 2013).  During this extreme situation, the U.S. Embassy monitoring equipment registered a PM 2.5 reading of 755, and the story certainly made news as the levels blew out any scale imaginable, including those that set maximums at 500.

An after statement within the article that references the World Health Organization standards may be the lasting impression that we should carry forward from the horrendous event, where it is stated that:

“The World Health Organization has standards that judge a score above 500 to be more than 20 times the level of particulate matter in the air deemed safe.”

Notwithstanding the fact that WHO also states that there is no evidence of any truly “safe” level of particulate matter in the atmosphere, we can nevertheless back out of this statement that a maximum “safe” level for the PM 2.5 count, as assessed by WHO, is approximately 25 ug/m^3 (i.e., 500 / 20).  This statement alone should convince us that we must pay close attention to the lower levels of pollution that enter the atmosphere, and that public perception should not be distorted by scales and color schemes that usually only register concern when readings number into the hundreds.

Let us gain a further understanding of how low concentration levels and small changes affect human health and, dare I say, mortality.  The case for low PM 2.5 concentrations being seriously detrimental to human health is strong and easy to make.  Casual research on the subject will uncover a host of research papers that quantify increased mortality rates in direct relationship to small changes in PM 2.5 concentrations, usually expressing a change in mortality per 10 ug/m^3.  Such papers are not operating in the arena of scores to hundreds of micrograms per cubic meter, but on the order of TEN micrograms per cubic meter.  This work underscores the need to update air quality standards, methods, and reporting to the public based upon current health knowledge, instead of continuing a system of artifacts based upon decades-old postulations.

These papers refer to both daily mortality levels as well as long-term mortality based upon these “small” increases in PM 2.5 concentrations.  The numbers are significant from a public health perspective.  As a representative article, consider the following paper published in Environmental Health Perspectives in June of 2015, under the auspices of the National Institute of Environmental Health Sciences (Shi et al. 2015):




with the following conclusions:




as based upon the following results:




Let us therefore assume a more conservative increase of 2% mortality for a short-term exposure (i.e., 2 days) per TEN (not 12, not 100, not 500 per AQI scaling) micrograms per cubic meter.  Let us assume a mortality increase of 7% for long-term exposure (i.e., 365 days).

Let us put these results into further perspective.  A sensible question to ask is, given a certain level of fine particulate pollution introduced into the air for a certain number of days within the year, how many people would die as a consequence of this change in our environment?  We must understand that the physical nature of the particulates is being ignored here (e.g., toxicity, solubility, etc.) other than that of the size being less than 2.5 microns.

The data results suggest a logarithmic form of influence, i.e., a relatively large effect for short-term exposures and a subsequently more gradual impact for long-term exposure.  A linear model is the simplest approach, but it is also likely to be too modest in modeling the mortality impact.  For the purpose of this inquiry, a combined linear-log approach will be taken as reasonably conservative.

The model developed, therefore, is of the form:

Mortality % Increase (per 10 ug/m^3) = 1.65 + 0.007 (days) + 0.48 ln(days)

The next step is to choose the activity level and time period for which we wish to model the mortality increase.  Although any scenario within the data range could be chosen, a reasonably conservative approach will also be adopted here.  The scenario chosen will be to introduce 30 ug/m^3 of fine particulate matter into the air for 10% of the days within a year.

The model therefore estimates a 3.6% increase in mortality for 10 ug/m^3 of introduced PM 2.5 materials (36.5 days).  For 30 ug/m^3, we will therefore have a 10.9% increase in mortality.  As we can see, the numbers can quickly become significant, even with relatively low or modest PM 2.5 increases in pollution.
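As a check on the arithmetic, the model and scenario above can be sketched in a few lines of code (an illustrative sketch only; scaling the per-10 ug/m^3 figure linearly up to 30 ug/m^3 follows the text's own 3x step, and the function name is mine):

```python
import math

def mortality_pct_increase(days, conc_increase_ug=10.0):
    """Combined linear-log model from the text: percent mortality
    increase per 10 ug/m^3 of added PM 2.5 over a given number of
    exposure days.  Scaling linearly with the concentration increase
    is an assumption here, matching the text's own 3x step."""
    base = 1.65 + 0.007 * days + 0.48 * math.log(days)
    return base * (conc_increase_ug / 10.0)

# Scenario from the text: PM 2.5 introduced for 10% of the days in a year.
days = 0.10 * 365                              # 36.5 exposure days
per_10 = mortality_pct_increase(days)          # ~3.6% per 10 ug/m^3
per_30 = mortality_pct_increase(days, 30.0)    # ~10.9% for 30 ug/m^3
```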

Next we transform this percentage into real numbers.  For the year of 2013, the Centers for Disease Control (CDC) reports that 2,596,993 people died from all causes combined (“FastStats” 2016).  A 10.9% increase applied to this number results in 283,072 additional projected deaths per year.

Continuing to place this number into perspective, it exceeds the number of deaths that result from stroke, Alzheimer’s, and influenza and pneumonia combined (i.e., the 5th, 6th, and 8th leading causes of death) during that same year.  The number is also much higher than the death toll from Chronic Obstructive Pulmonary Disease (COPD), which is now curiously the third leading cause of death.

We should now understand that PM 2.5 pollution levels are a very real concern with respect to public health, even at relatively modest levels.  Some individuals might argue that such a scenario could never occur, as the EPA has lowered the annual PM 2.5 standard to 12 ug/m^3.  The enforcement and sensitivity of that measurement standard is another discussion that will be reserved for a later date.  Suffice it to say that the scenario chosen is not unrealistic for consideration, and that it is in the public’s interest to engage in this discussion and examination.



The next issue of interest to discuss is that of a comparison between different air quality scales in some detail.  In particular, the “weighting”, or influence, of lower concentration levels vs. higher concentration levels will be examined.  This topic is important because it affects the interpretation by the public of the state of air quality, and it is essential that the impacts upon human health are represented equitably and with forthrightness.

The explanation of this topic will be considerably more detailed and complex than the former issues of “color coding” and mortality potentials, but it is no less important.  The results are at the heart of the perception of the quality of the air by the public and its subsequent impact upon human health.

To compare the different scales of air quality that have been developed, we must first equate them.  For example, if one scale ranges from 1 to 6, and another from 0 to 10, we must “map”, or transform, them such that the scales are of equivalent range.  Another need in the evaluation of any scale is to look at the distribution of concentration levels within that same scale, and to compare this on an equal footing as well.  Let us get started with an important comparison between the EPA AQI and alternative scales that deserve equal consideration in the representation of air quality.

Here is the structure of the EPA AQI in more detail (U.S. Environmental Protection Agency 2012, 4).


AQI Index                        AQI Arbitrary Numeric   AQI Rank   PM 2.5 (ug/m^3) 24 hr avg.
Good                             0-50                    1          0-12
Moderate                         51-100                  2          12.1-35.4
Unhealthy for Sensitive Groups   101-150                 3          35.5-55.4
Unhealthy                        151-200                 4          55.5-150.4
Very Unhealthy                   201-300                 5          150.5-250.4
Hazardous                        301-500                 6          250.5-500


Now let us become familiar with three alternative scaling and health assessment scales that are readily available and that acknowledge the impact of lower PM 2.5 concentrations to human health:


United Kingdom Index U.K. Nomenclature PM 2.5 ug/m3 24 hr avg.
1 Low 0-11
2 Low 12-23
3 Low 24-35
4 Moderate 36-41
5 Moderate 42-47
6 Moderate 48-53
7 High 54-58
8 High 59-64
9 High 65-70
10 Very High >=71


Now for a second alternative air quality scale, this being from Air Quality Now, a European monitoring entity:


Air Quality Now EU Rank Nomenclature PM 2.5  Hr PM 2.5 24 Hrs.
1 Very Low 0-15 0-10
2 Low 15-30 10-20
3 Medium 30-55 20-30
4 High 55-110 30-60
5 Very High >110 >60


And lastly, the scale from a professional air quality meter manufacturer:


Professional Meter Index Nomenclature PM 2.5 ug/m^3 Real Time Concentration
0 Very Good 0-7
1 Good 8-12
2 Moderate 13-20
3 Moderate 21-31
4 Moderate 32-46
5 Poor 47-50
6 Poor 52-71
7 Poor 72-79
8 Poor 80-89
9 Very Poor >90


We can see that the only true common denominator between all scaling systems is the PM 2.5 concentration.  Even with the acceptance of that reference, there remains the issue of “averaging” a value, or acquiring maximum or real time values.  Setting aside the issue of time weighting as a separate discussion, the most practical means to equate the scaling system is to do what is mentioned earlier:  First, equate the scales to a common index range (in this case, the EPA AQI range of 1 to 6 will be adopted).  Second, inspect the PM 2.5 concentrations from the standpoint of distribution, i.e., evaluate these indices as a function of PM 2.5 concentrations.  The results of this comparison follow below, accepting the midpoint of each PM 2.5 concentration band as the reference point:

PM 2.5 (ug/m^3) EPA AQI UK EU (1hr) Meter
1-10 1 1 1 1
10-20 2 1.6 1 2.1
20-30 2 2.1 2.2 2.7
30-40 2 2.1 3.5 3.2
40-50 3 3.2 3.5 3.2
50-60 3 4.3 3.5 4.3
60-80 4 5.4 4.8 4.9
80-100 4 6 4.8 6
100-150 4 6 6 6
150-200 4 6 6 6
200-250 5 6 6 6
250-300 5 6 6 6
300-400 6 6 6 6
400-500 6 6 6 6


This table reveals the essence of the problem; the skew of the EPA AQI index toward high concentrations that diminishes awareness of the health impacts from lower concentrations can be seen within the tabulation. 
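The two-step mapping used to build this comparison can be sketched as follows (a minimal illustration, assuming a simple linear rescale onto the common 1-6 range and using only an abbreviated portion of the U.K. band table above; the function names are illustrative):

```python
def rescale(value, old_min, old_max, new_min=1.0, new_max=6.0):
    """Linearly map an index value onto a common range (here 1-6)."""
    return new_min + (value - old_min) * (new_max - new_min) / (old_max - old_min)

# Abbreviated U.K. bands from the table above: (low, high, index 1-10).
UK_BANDS = [(0, 11, 1), (12, 23, 2), (24, 35, 3), (36, 41, 4)]

def uk_index_on_common_scale(pm25):
    """Look up the U.K. index for a PM 2.5 concentration, then
    rescale it from its native 1-10 range onto the common 1-6 range."""
    for low, high, idx in UK_BANDS:
        if low <= pm25 <= high:
            return rescale(idx, 1, 10)
    raise ValueError("concentration outside the abbreviated table")

# Band midpoints as in the comparison table: 5.5 and 15 ug/m^3.
low_band = uk_index_on_common_scale(5.5)    # 1.0 (table row 1-10)
next_band = uk_index_on_common_scale(15)    # ~1.6 (table row 10-20)
```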

This same conclusion will be demonstrated graphically at a later point.

Now that all air quality scales are referenced to a common standard (i.e., the PM 2.5 concentration), the general nature of each series can be examined via a regression analysis.  It will be found that a logistic function is the favored functional form in this case, and the results of that analysis are as follows:

EPA Index (1-6) = 5.57 / (1 + 2.30 * exp(-0.016 * PM 2.5))
Mean Square Error = 0.27

Mean (UK – EU – Meter) Index (1-6) = 6.03 / (1 + 5.65 * exp(-0.046 * PM 2.5))
Mean Square Error = 0.01
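The two fitted curves can be evaluated directly for comparison (a brief sketch reproducing the regression equations above; the 40 ug/m^3 test point is an arbitrary choice for illustration):

```python
import math

def epa_index(pm25):
    """Logistic fit to the EPA AQI on the common 1-6 scale."""
    return 5.57 / (1 + 2.30 * math.exp(-0.016 * pm25))

def mean_alt_index(pm25):
    """Logistic fit to the mean of the UK / EU / meter scales."""
    return 6.03 / (1 + 5.65 * math.exp(-0.046 * pm25))

# At a realistic 40 ug/m^3, the alternative scales already read
# noticeably higher (more hazardous) than the EPA fit does.
epa_40 = epa_index(40)        # ~2.5
alt_40 = mean_alt_index(40)   # ~3.2
```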

To evaluate the weighting distribution applied to the various concentration levels, the logistic regression curves are integrated as a function of bandwidth.  The result of the integration process (Int.) applied to the above regressions is as follows:

PM 2.5 Band   EPA AQI (Int.) [Index * PM 2.5]   Mean Index (Int.) [Index * PM 2.5]   % Relative Overweight or Underweight of PM 2.5 Band Contribution Between EPA AQI and Mean Alternative Index (Endpoint Bias Removed)
1-10 16.1 10.1 +42%
10-20 19.8 15.8 +27%
20-30 21.9 21.6 +8%
30-40 24.1 28.3 -10%
40-50 26.3 35.2 -27%
50-60 28.5 41.5 -39%
60-80 63.6 98.0 -47%
80-100 72.1 110.4 -46%
100-150 211.7 295.0 -32%
150-200 243.7 300.8 -16%
200-250 261.7 301.4 -8%
250-300 270.7 301.5 -4%
300-400 551.8 603.0 -2%
400-500 555.9 603.0 0%
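The band integration reported above can be reproduced numerically (a sketch using a simple trapezoid rule on the two logistic fits; the first band is checked against the tabulated values, and the helper names are mine):

```python
import math

def epa_fit(x):
    """Logistic fit to the EPA AQI (from the regression above)."""
    return 5.57 / (1 + 2.30 * math.exp(-0.016 * x))

def alt_fit(x):
    """Logistic fit to the mean UK / EU / meter index."""
    return 6.03 / (1 + 5.65 * math.exp(-0.046 * x))

def band_integral(f, lo, hi, n=1000):
    """Trapezoid-rule integral of an index curve over a PM 2.5 band,
    giving the [Index * PM 2.5] weight used in the table above."""
    h = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return total * h

epa_1_10 = band_integral(epa_fit, 1, 10)   # ~16.1, matching the table
alt_1_10 = band_integral(alt_fit, 1, 10)   # ~10.1, matching the table
```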


A graph of a regression curve to the % Relative Overweight/Underweight data in the final column of the table above is as follows (band interval midpoints selected; standard error = 4.1%).


EPA Underweight Function Feb 09 2016 - 01


And, thus, we are led to another interpretation regarding the demerits of the EPA AQI.  The EPA AQI scaling system unjustifiably under-weights the harmful effects of PM 2.5 concentrations that are most likely to occur in real-world, real-time, daily circumstances.  The scale over-weights the impacts of extremely low concentrations that have little to no impact upon human health.  And lastly, when PM 2.5 concentrations are at catastrophic levels and the viability of life itself is threatened, all monitoring sources, including the EPA, agree that we have a serious situation.  One must seriously question the public service value of such a distorted and disproportionate representation of this important monitor of human health, the PM 2.5 concentration.



Let us proceed to an additional serious flaw in the EPA air quality standards: the issue of averaging the data.  It will be noticed that the current EPA standard for PM 2.5 air quality is 12 ug/m^3, as averaged over a 24 hour period.  On the surface, this value appears to be reasonably sound, cautious, and protective of human health.  A significant problem, however, occurs when we understand that the value is averaged over a period of time and is not reflective of real-time, dynamic conditions that involve “short-term” exposures.

To begin to understand the nature of the problem, let us present two different scenarios:

Scenario One:

In the first scenario, the PM 2.5 count in the environment is perfectly even and smooth, let us say at 10 ug/m^3. This is comfortably within the EPA air quality standard “maximum” per a 24 hour period, and all appears well and good.

Scenario Two:

In this scenario, the PM 2.5 count is 6 ug/m^3 for 23 hours out of 24 hours a day. For one hour per day, however, the PM 2.5 count rises to 100 ug/m^3, and then settles down back to 6 ug/m^3 in the following hour.

Instinctively, most of us will realize that the second scenario poses a significant health risk, as we understand that maximum values may be as important as (or even more important than) an average value.  One could equate this to a dosage of radiation, for example, where a short-term exposure could produce a lethal result, but an average value over a sufficiently long time period might persuade us that everything is fine.

And this, therefore, poses the problem that is before us.

In the first scenario, the weighted average PM 2.5 count over a 24 hour period is 10 ug/ m^3.

In the second scenario, the weighted average PM 2.5 count over a 24 hour period is 10 ug/m^3.

Both scenario averages are within the current EPA air quality maximum pollution standards.
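The two scenarios can be verified directly (a minimal sketch; note that the hourly profile of Scenario Two averages to approximately 9.9 ug/m^3, which the text rounds to 10):

```python
# Hour-by-hour PM 2.5 profiles (ug/m^3) for the two scenarios above.
scenario_one = [10.0] * 24              # flat 10 ug/m^3 all day
scenario_two = [6.0] * 23 + [100.0]     # one-hour spike to 100 ug/m^3

avg_one = sum(scenario_one) / 24        # 10.0 - within the standard
avg_two = sum(scenario_two) / 24        # ~9.9 - also "within the standard"
peak_two = max(scenario_two)            # 100.0 - the exposure the average hides
```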

Clearly, this method has the potential for disguising significant threats to human health if “short-term” exposures occur on any regular basis. Observation and measurement will show that they do.

Now that we have seen some of the weaknesses of the averaging methods, let us look at an additional scenario based upon more realistic data, but that continues to show a measurable influence upon human health. The scenario selected has a basis in recent and independently monitored PM 2.5 data.

The situation in this case is as follows:

This model scenario will postulate that the following conditions occur for approximately 10% of the days in a year.  For that period, let us assume that for 13.5 hours of the day the PM 2.5 count is essentially nil at 2 ug/m^3.  For the remaining 10.5 hours of the day during that same 10% of the year, let us assume the average PM 2.5 count is 20 ug/m^3.  The range of the PM 2.5 count during the 10.5 hour period is from 2 to 60 ug/m^3, but the average of 20 ug/m^3 (representing a significant increase) will be the value required for the analysis.  For the remainder of the year, very clean air will be assumed at a level of 2 ug/m^3 for all hours of the day.

A more extended discussion of the nature of this data is anticipated at a later date, but suffice it to say that the energy of sunlight is the primary driver for the difference in the PM 2.5 levels throughout the day.

The next step in the problem is to determine the number of full days that correspond to the concentration level of 20 ug/m^3, and also to provide for the fact that the elevated levels will be presumed to exist for only 10% of the year.  The value that results is:

0.10 * (365 days) * (10.5 hrs / 24 hrs) = 16 full days of 20 ug/m^3 concentration level.

As a reference point, we can now estimate the increase in mortality that will result for an arbitrary 10 ug/m^3 (based upon the relationship derived earlier):

Mortality % Increase (per 10 ug/m^3) = 1.65 + 0.007 (16) + 0.48 ln(16)

Mortality % Increase (per 10 ug/m^3) = 3.1%

The increase in this case is 18 ug/m^3 (20 ug/m^3 – 2 ug/m^3), however, and the mortality increase to be expected is therefore:

Mortality % Increase (per 18ug/m^3 increase) = 1.8 * 3.1% = 5.6%.

Once again, to place this number into perspective, we translate this percentage into projected deaths (as based upon CDC data, 2013):

0.056 * (2,596,993) = 145,431 projected additional deaths.
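The chain of arithmetic for this scenario can be reproduced step by step (a sketch following the model derived earlier; variable names are mine):

```python
import math

# Second model scenario, reproduced step by step from the text.
exposure_days = 0.10 * 365 * (10.5 / 24)    # ~16 full days at 20 ug/m^3

# Mortality model derived earlier (percent increase per 10 ug/m^3):
pct_per_10 = 1.65 + 0.007 * exposure_days + 0.48 * math.log(exposure_days)

# The actual increase is 18 ug/m^3 (20 - 2), hence the 1.8 factor:
pct_for_18 = 1.8 * pct_per_10               # ~5.6%

deaths_2013 = 2_596_993                     # CDC all-cause deaths, 2013
projected = 0.056 * deaths_2013             # ~145,431 additional deaths
```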

This value is essentially equivalent (again, curiously) to the third leading cause of death, namely Chronic Obstructive Pulmonary Disease (COPD), with 149,205 reported deaths for 2013.

It is understood that a variety of factors ultimately contribute to mortality rates; however, this value may help put the significance of “lower” or “short-term” exposures to PM 2.5 pollution into perspective.

It should also be recalled that the averaging of PM 2.5 data over a 24 hour period can significantly mask the influences of such “short-term” exposures.

A remaining issue of concern with respect to AQI deficiencies is its accuracy in reflecting real world conditions in a real-time sense. The weakness in averaging data has already been discussed to some extent, but the issue in this case is of a more practical nature. Independent monitoring of PM 2.5 data over a reasonably broad geographic area has produced direct visible and measurable conflicts in the reported state of air quality by the EPA.

After close to twenty years of public research and investigation, there is no rational denial that the citizenry is subject to intensive aerosol operations on a regular and frequent basis. These operations are conducted without the consent of that same public. The resulting contamination and pollution of the atmosphere is harmful to human health.  The objective here is to simply document the changes in air quality that result from such a typical operation, and the corresponding public reporting of air quality by the EPA for that same time and location.

Multiple occasions of this activity are certainly open to further examination, but a representative case will be presented here in order to disclose the concern.



Typical Conditions for Non- Operational Day.
Sonoran National Monument – Stanfield AZ


Aerosol Operation – Early Hours
Jan 19 2016 – Sonoran National Monument – Stanfield AZ


Aerosol Operation – Mid-Day Hours
Jan 19 2016 – Sonoran National Monument – Stanfield AZ



EPA Website Report at Location and Time of Aerosol Operation.
Jan 19 2016 – Sonoran National Monument – Stanfield AZ
Air Quality Index : Good
Forecast Air Quality Index : Good
Health Message : None

Current Conditions : Not Available
(“AirNow” 2016)


The PM 2.5 measurements that correlate with the above photographs are as follows:

With respect to the non-operational day photograph, clean air can and does exist at times in this country, especially in the more remote portions of the southwestern U.S. under investigation.  It is quite typical to have PM 2.5 counts from 2 to 5 ug/m^3, which fall under the category of very good air quality by any index used.  Low PM 2.5 counts are especially prone to occur after periods of heavier rain, as the materials are purged from the atmosphere.  The El Niño pattern has been especially influential in this regard during the earlier portion of this winter season.  Visibility conditions of the air are a direct reflection of the PM 2.5 count.

On the day of the aerosol operation, the PM 2.5 counts were not low and the visibility down to ground level was highly diminished.  The range of values throughout the day was from 2 to 57 ug/m^3, with the low values occurring before sunrise and after sundown.  The highest value of 57 occurred during mid-afternoon.  A PM 2.5 value of 57 ug/m^3 is considered poor air quality by many alternative and contemporary air quality standards, and the prior discussions on mortality rates for “lower” concentrations should be consulted above.  This high value has no corollary, thus far, on non-aerosol-operational days.  From a common sense point of view, the conditions recorded by both photograph and measurement were indeed unhealthy.  Visibility was diminished from a typical 70+ miles in the region to approximately 30 miles during the operational period.  Please refer to the earlier papers (Visibility Standards Changed, March 2001 and Mortality vs. Visibility, June 2004; also additional papers) for additional discussions related to these topics.

The U.S. Environmental Protection Agency reports no concerns, no immediate impact, nor any potential impact to health or the environment during the aerosol operation at the nearest reporting location.



This paper has reviewed several factors that affect the interpretation of the Air Quality Index (AQI) as it has been developed and is used by the U.S. Environmental Protection Agency (EPA). In the process, several shortcomings have been identified:

1. The use of a color scheme strongly affects the perception of the index by the public. The colors used in the AQI are not consistent with what is now known about the impact of fine particulate matter (PM 2.5) to human health. The World Health Organization (WHO) acknowledges that there are NO known safe levels of fine particulate matter, and the literature also acknowledges the serious impact of low concentration levels of PM 2.5, including increased mortality.

2. The scaling range adopted by the AQI is much too large to adequately reveal the impact of the lower concentration levels of PM 2.5 to human health. A range of 500 ug/m^3 attached to the scale when mortality studies acknowledge significant impact at a level of 10 ug/m^3 is out of step with current needs by the public.

3. The underweighting of the lower PM 2.5 concentration levels relative to more contemporary scales that adequately emphasize lower level health impacts obscures health impacts which deserve more prominent exposure.

4. The AQI numeric scale is divorced from actual PM 2.5 concentration levels. The arbitrary scaling has no direct relationship to existing and actual concentrations of mass to volume ratios. The actual conditions of pollution are therefore hidden by an arbitrary construct that obscures the impact of pollution to human health.

5. The AQI is a historic development that has been maintained in various incarnations and modifications since its origin more than 45 years ago. The method of presentation and computation is obtuse and appears to exist as a legacy to the past rather than directly portraying pollution health risks.

6. The averaging of pollution data over a time period that filters out short term exposures of high magnitude is unnecessary and it hinders the awareness of the actual conditions of exposure to the public.

7. Presentation of air quality information through the authorized portal appears to present potential conflicts between reported information and actual field condition observation, data and measurement.


In the opinion of this researcher the AQI, as it exists, should be revamped or discarded. Allowing for catastrophic pollution in the development of the scale is commendable, but not if it interferes with the presentation of useful and valuable information to the public on a practical and daily basis.

There is a partial analogy here with the scales used to report earthquakes and other natural events, as they are of an exponential nature and they provide for extreme events when they occur. It is now known, however, that very low levels of fine particulate matter are very harmful to human health. Any scaling chosen to represent the state of pollution in the atmosphere must correspondingly emphasize and reveal this fact. This is what matters on a daily basis in the practical affairs of living; the extreme events are known to occur but they should not receive equal (or even greater) emphasis in a daily pollution reporting standard. It is primarily a question of communicating to the public directly in real-time with actual data, versus the adherence to decades old legacies and methods that do not accurately portray modern pollution and its sources.

It seems to me that a solution to the problem is fairly straightforward; the issue is whether or not such a transformation can be made on a national level and whether or not it has strong public support.  Many other scaling systems have already made the switch to emphasize the impact of lower level concentrations upon human health; this would seem to be admirable based upon the actual needs of society.

It is a fairly simple matter to reconstruct the scale for an air quality index. THE SIMPLEST SOLUTION IS TO REPORT THE CONCENTRATION LEVELS DIRECTLY, IN REAL TIME MODE. For example, if the PM 2.5 pollution level at a particular location is 20 ug/m^3, then report it as such. This is not hard to do, and technology fully supports this direct change and access to data. We do not average our rain when it rains, we do not average our sunlight when we report how clear the sky is, we do not average the cloud cover, and we do not average how far we can see. The environmental conditions exist as they are, and they should be reported as such. There is no need to manipulate or “transform” the data, as is being done now. A linear scale can also be matched fairly well to the majority of daily life needs, and the extreme ranges can be accommodated without any severe distortion of the system. The relationship between visibility and PM 2.5 counts will be quickly and readily assimilated by the public when the actual data is simply available in real-time mode, as it needs to be and should be. Of course, greater public awareness of the actual conditions of pollution may also lead to a stronger investigation of their source and nature; this may or may not be as welcome in our modern society. I hope that it will be, as the health of our generation, succeeding generations, and of the planet itself is dependent upon our willingness to confront the truths of our own existence.
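The contrast drawn above can be made concrete. The sketch below applies the EPA's piecewise-linear AQI interpolation for 24-hour PM 2.5, using the breakpoints from the 2012 revision, and compares it with the direct concentration report argued for here:

```python
# Sketch: the EPA piecewise-linear AQI transform for 24-hour PM 2.5,
# contrasted with simply reporting the measured concentration directly.
# Breakpoints are from the 2012 EPA AQI revision (concentrations in ug/m^3).

PM25_BREAKPOINTS = [
    # (C_lo, C_hi, I_lo, I_hi)
    (0.0,    12.0,    0,  50),
    (12.1,   35.4,   51, 100),
    (35.5,   55.4,  101, 150),
    (55.5,  150.4,  151, 200),
    (150.5, 250.4,  201, 300),
    (250.5, 350.4,  301, 400),
    (350.5, 500.4,  401, 500),
]

def pm25_to_aqi(conc: float) -> int:
    """Apply the EPA interpolation I = (I_hi-I_lo)/(C_hi-C_lo)*(C-C_lo) + I_lo."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
    raise ValueError("concentration outside the AQI scale")

# The example from the text: a measured level of 20 ug/m^3.
print(pm25_to_aqi(20.0))   # the transformed index value
print(20.0)                # the direct real-time report argued for above
```

Under the transform, the 20 ug/m^3 example becomes an abstract index value of 68, while the direct report simply passes the measurement through unchanged.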

Clifford E Carnicom
Mar 12, 2016

Born Clifford Bruce Stewart
Jan 19, 1953



“AirNow.” 2016. Accessed March 13.

“Air Quality Index | World Public Library.” 2016. Accessed March 13.

“Air Quality Index – Wikipedia, the Free Encyclopedia.” 2016. Accessed March 13.

“Air Quality Now – About US – Indices Definition.” 2016a. Accessed March 13.
———. 2016b. Accessed March 13.

“FastStats.” 2016. Accessed March 13.

“HHTP21 Air Quality Meter, User Manual, Omega Engineering.” 2016.

Shi, Liuhua, Antonella Zanobetti, Itai Kloog, Brent A. Coull, Petros Koutrakis, Steven J. Melly, and Joel D. Schwartz. 2015. “Low-Concentration PM2.5 and Mortality: Estimating Acute and Chronic Effects in a Population-Based Study.” Environmental Health Perspectives 124 (1). doi:10.1289/ehp.1409111.

U.S. Environmental Protection Agency. 2012. “Revised Air Quality Standards for Particle Pollution and Updates to the Air Quality Index (AQI).”

Wong, Edward. 2013. “Beijing Air Pollution Off the Charts.” The New York Times, January 12.

World Health Organization. 2013. “Health Effects of Particulate Matter, Policy Implications for Countries in Eastern Europe, Caucasus and Central Asia.”

Tertiary Rainwater Analysis : Questions of Toxicity


 Clifford E Carnicom
Nov 08 2015


This paper presents evidence of a chemical signature that exists within an analyzed rain sample that is characteristic of known toxins and pesticides. The method of analysis used is that of mid-infrared spectroscopy. Specifically, certain functional groups involving sulfur, nitrogen, phosphorus, oxygen, and halogens have been identified in the analysis. It is recommended that the investigation be duplicated by independent researchers to determine if an environmental hazard does exist. If these results are verified to be positive, the source of the contaminants is to be identified and eliminated from the environment.

Infrared Spectrum of Concentrated Rain Water Sample
(Aqueous Influence Removed)

The original rainwater sample volume for this analysis is approximately 3.25 liters.  The sample was evaporated under mild heat to approximately 0.5% of the original volume, or about 15 milliliters.  The sample has previously been shown to contain aluminum, biological components, and a residue that appears to be an insoluble metallic or organometallic complex.  The target of this particular study is that of soluble organics.
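As a cross-check on the figures above, the volume reduction and the corresponding enrichment factor for non-volatile solutes can be computed directly (a minimal sketch; the volumes are those stated in the text):

```python
# Arithmetic of the evaporation step described above: ~3.25 liters of
# rainwater reduced under mild heat to ~15 milliliters of concentrate.

original_ml = 3250.0   # collected rainwater volume, mL
final_ml = 15.0        # concentrated volume, mL

fraction = final_ml / original_ml    # remaining fraction of the volume
enrichment = original_ml / final_ml  # multiplier for non-volatile solutes

print(f"{fraction:.2%}")     # ~0.46%, i.e. roughly the 0.5% stated
print(f"{enrichment:.0f}x")  # solutes roughly 217x more concentrated
```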

The organic infrared signal within the solution is weak and difficult to detect with the means available; it is further complicated by being present in aqueous solution.  The aqueous influence was minimized by making an evaporated film layer on a KCl cell; the transmission mode was used. The signal is identifiable and repeatable under numerous passes in comparison to the reference background.

The primary conclusion from the infrared analysis is that a core group of elements exists within the solution; these appear to include carbon, hydrogen, nitrogen, sulfur, phosphorus, oxygen and a halogen.  The organic footprint appears to be weak but detectable, and dominated by the above heteroatoms.

As further evidence for the basis of this report, qualitative tests for an amine (nitrogen and hydrogen), sulfates and phosphates (sulfur, oxygen and phosphorus) have each produced a positive test result.  A qualitative test for a halogen in the concentrated rainwater sample has also produced a positive result; the most likely candidate at this point is the chloride ion.  All elements present have therefore been proven to exist at detectable levels by two independent methods.

This grouping of elements is distinctive; they essentially comprise the core elements of many important, powerful and highly toxic pesticides.   For example, three sources directly state the importance of the group above as the very base of most pesticides:


“In pesticides, the most common elements are carbon, hydrogen, oxygen, nitrogen, phosphorus, sulfur and chlorine”.

Pesticide Residues in Food and Drinking Water : Human Exposure and Risks, Dennis Hamilton, 2004.


“We can further reduce the list by considering those used most frequently in pesticides: carbon, hydrogen, oxygen, nitrogen, phosphorus, chlorine, and sulfur”.

Fundamentals of Pesticides, A Self-Instruction Guide, George Ware, 1982.


“Heteroatoms like fluorine, chlorine, bromine, nitrogen, sulfur and phosphorus, which are important elements in pesticide residue analysis, are of major interest”.

Analysis of Pesticides in Ground and Surface Water II : Latest Developments, Edited by H.J. Stan, 1995.


It is also true that phosphate diesters are at the core of DNA structure and that many genetic engineering procedures involve the splitting of the phosphate diester complex.

The information provided above is sufficient to justify and invoke further investigation into the matter.  The sample size, although it was derived from an extensive storm over several days in the northwest U.S., is nevertheless limited and quite finite after reduction of the sample volume.  The residual insoluble components (apparently metallic in nature) are also limited in amount, and more material will be required for further analysis.  The signal is weak and difficult to isolate from the background reference; concentration level estimates for elements or compounds (other than that of aluminum, which has been assessed earlier) are another entire endeavor.  Systematic, wide-area, and long term testing will be required to validate or refute the results.  All caveats above aside, it would seem that the duty to address even the prospect of the existence of such toxins in the general rainfall falls to each of us.  It would seem wise that this process begin without delay.

There are a few additional comments on this finding that need to be mentioned.

The first of these is the issue of a local and regional vs. a national and international scope of consideration.  It is understood that pesticides or compounds similar in nature are a fact of our environment, and that considerable awareness and effort is in place to mitigate their damage over decades of use.  Organic farming and genetically engineered crops are two very divergent approaches to reconciliation with the impact of environmental harm, and they are shaping our society and food supply in the most important ways imaginable.  Given that the pesticide industry exists, regardless of our varying opinions of merit or harm, I think that it is fair to say that we generally presume that pesticides are under some form of local control.  Our general understanding is that pesticides are applied at or close to ground level and are intended to be applied to a specific location or, at most, a region within a defined time interval.

The prospect, I daresay even the hint, of pesticide or pesticide-like compounds in rainfall is more than daunting.  It seems immediately necessary to consider what scale of operation would allow such toxins to find their way into the expanses of the atmosphere and rainfall.  For the sake of the general welfare, I think we should all actively wish and seek to disprove the findings within this report.  I will not hesitate to amend this report if honest, fair and accurate testing bears out negative results over an adequate time period, and my motive never includes sensationalizing an issue.  This is one test, one time, one place, with limited means and support in the process.  I cannot disprove the results at this time and I have an obligation to report on that which seems to be the case, uncomfortable as it might be.  It is not the first time that I have been in this situation, and judging from the changes in the health of the planet that have taken place, it is unlikely to be the last.  The sooner that the state of truth is reached, the better we shall all be for it in any sense that is real.

The second comment relates to the decline of the bee population.  Bees are an indicator species, the canary in the mine, as it were.  The bees and the amphibians have both been ringing their alarm for some time now, and we had best not remain passive about finding the reasons for the decline.  A minimum of 1/3 of our agricultural economy, and that means food, is dependent upon the bee population for its very existence.  This is no trifling matter, and we all need to get up to speed quickly on the importance of this issue, myself included.

Suffice it to say that compounds of this nature, i.e., historical pesticides like the organophosphates and the purportedly safer and more recent alternatives (e.g., the neonicotinoids), have a very close relationship to the ongoing and often ambiguous studies regarding bee Colony Collapse Disorder (CCD).  From my perspective, it would seem prudent to eliminate the findings of this report as a contributing cause to the problem as promptly as possible.  If that cannot be done so readily, then we may have a bigger problem on our hands than is imagined.

One of the interesting side notes is that the elements and groups identified as candidates for investigation actually seem to overlap between the neonicotinoids and the organophosphates.  This includes the nitrogen groups that characterize the neonicotinoids and the phosphate esters that characterize the organophosphates.  If such a combination were at hand, this would seem especially troublesome, as both forms remain mired in controversy, let alone any combination thereof.

The third and final comment relates to the toxicity of these compound types in general.  It is not just an issue about bees or salamanders.  These particular compounds have a history and effects that are not difficult for us to research, and we should become aware of their impacts upon the planet quickly enough.  Many of us already are.  The fact is that organophosphates have their origins as nerve gas agents in the pre-World War II era, and in theory their use has been reduced but hardly eliminated.  Residential use is apparently no longer permissible in the United States, but commercial usage still is.  This raises questions about what real effect any such “restrictive” legislation has had.

The neonicotinoids are promoted as a generally safer alternative to the organophosphates, but they are hardly without controversy as well.  They too have strong associations with CCD in the research that is ongoing.  They also are neuro-active insecticides.

It would seem to me that we all have a job to do in getting up to speed on the source, distribution and levels of exposures to insecticide and insecticide related compounds.  A greater awareness of toxins in our environment, in general, also seems in order.  If our general environment has been affected to a degree that has avoided confrontation  thus far, then we need to face the music as quickly as possible.  I trust that we understand the benefits of both rationality and aggressiveness when serious issues face us, and this may be another such time.  I hope that I will be able to dismiss this report in due time; at this time, I cannot.


Clifford E Carnicom
Nov 05, 2015

Born Clifford Bruce Stewart
Jan 19, 1953


Additional Notes:

The preliminary functional group assignments being made to the absorption peaks at this time are as follows (cm-1):

~ 3322 : Amine, Alkynes (R2NH considered)
~ 2921 : CH2 (methylene)
~ 2854 : CH2 (methylene)
~ 1739 : Ester (RCOOR, 6-ring considered)
~ 1447 : Sulfate (S=O considered)
~ 1149 : Phosphate (Phosphate ester, organophosphate considered)
~ 1072 : Phosphine, amine, ester, thiocarbonyl
~ 677  : Alkenes, alkynes, amine, alkyl halide

The assignments will be revised or refined as circumstances and sample collections permit, however, as a group they appear to provide a distinctive organic signature.  A structural model may be developed at a future date.
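For reference, the preliminary assignments above can be organized as a simple lookup table; a sketch in Python, where the matching tolerance of ± 15 cm-1 is an illustrative assumption and not part of the original analysis:

```python
# Sketch: match measured IR absorption peaks (cm^-1) against the
# preliminary assignments tabulated above. The +/- tolerance is an
# illustrative assumption, not part of the original analysis.

ASSIGNMENTS = {
    3322: "amine, alkynes (R2NH considered)",
    2921: "CH2 (methylene)",
    2854: "CH2 (methylene)",
    1739: "ester (RCOOR, 6-ring considered)",
    1447: "sulfate (S=O considered)",
    1149: "phosphate (phosphate ester, organophosphate considered)",
    1072: "phosphine, amine, ester, thiocarbonyl",
    677:  "alkenes, alkynes, amine, alkyl halide",
}

def assign_peak(wavenumber: float, tolerance: float = 15.0):
    """Return the assignment whose reference peak lies within tolerance, else None."""
    for ref, group in ASSIGNMENTS.items():
        if abs(wavenumber - ref) <= tolerance:
            return group
    return None

print(assign_peak(1150))   # falls within tolerance of the 1149 phosphate peak
print(assign_peak(2000))   # no candidate within tolerance -> None
```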

Some chemical compounds which may share some similar properties to that under consideration here include, for example, (not all elements included in any listed compound; only for reference comparison purposes):

p-chlorophenyl (3-phenoxypropyl)carbamate
N-(1-naphthylsulfonyl)-L-phenylalanyl chloride
2,2,2-trichloroethyl 2-(2-benzothiazolyl)dithio-alpha-isopropenyl-4-oxo-3-phenylacetamido-1-azetidineacetate
cytidine monophosphate

per :
SDBSWeb : (National Institute of Advanced Industrial Science and Technology, Nov 06 2015)

Secondary Rainwater Analysis : Organics & Inorganics


Clifford E Carnicom
Nov 04 2015


A second rainwater sample has been evaluated. On this occasion, both organic and inorganic attributes of the sample have been examined.  Although the sample investigated is of much larger volume, the results demonstrate an essentially equivalent level of aluminum to that defined within the earlier report, i.e., approximately 2 PPM.  This magnitude exceeds the US Environmental Protection Agency recommended standards for aluminum in drinking water by roughly a factor of 10.

In addition, various organic attributes of the sample are introduced within this report.


 Concentrated Rain Sample under Study in this Report
Distilled Water Reference on Left, Concentrated Rainfall to Right

Residual Solid Materials from the Rainwater Sample of this Study

The volume of the sample collected is approximately 6.5 liters over a three-day heavy storm period, collected in clean containers that were exposed to open sky.  The sample was concentrated by evaporation under modest heat to approximately 6% of the original volume.  It is apparent from visual inspection and by visible light spectrometry that the concentrated rainfall sample is not transparent and that it does contain materials to some degree.


Visible light spectrum of the concentrated rainfall sample.  The increase in absorption in the lower ranges of visible light corresponds to the yellow and yellow-green colors that are observed with the sample.

The pH of the concentrated sample is recorded at 8.5; this value is surprisingly alkaline and indicates the presence of substantial hydroxide ions in solution.  The pH of the solution prior to concentration measures at 7.5; this also must be registered as unusually alkaline under the circumstances.

The pH of ‘natural’ rain water, and its relationship to the expected value of 5.7 that results from the presence of carbonic acid in the atmosphere (carbon dioxide and water), has been discussed in earlier papers.  The departure of natural rainwater from the theoretical neutrality of 7.0 is one aspect of the pH studies that I conducted in conjunction with numerous citizens across the nation some years ago, and those reports remain available.  The current finding is remarkably alkaline and, by itself, is indicative of a fundamental acid-base change in the chemistry of the atmosphere.

From those early reports, it may be wise to recall the words of Paul Crutzen, Nobel Prize winner for Chemistry (Atmosphere, Climate and Change, 1995), who stated that the most important chemical attribute of precipitation is indeed the pH value.  It behooves us, as a species, to act rather quickly on any reasonable claim of a significant change in fundamental atmospheric chemistry.  It must be acknowledged that these same claims now prevail over decades of time, and that any dismissal of them as an aberration of no consequence is unjustified.

The sample has been examined again for the existence of trace metals using the method of differential cyclic chronopotentiometry, as described in the earlier report.  The results are essentially identical to those of the earlier report, and once again the signature of a soluble form of aluminum is detected.  The sample in this case, however, is of much larger volume, was collected over a longer duration, and was more highly concentrated than that in the preliminary report.

The concentration level was again determined, and the analysis indicates a level of soluble aluminum within the rainwater sample at 2.0 PPM.  This compares quite closely with the earlier sample result of approximately 2.4 PPM.  This determination once again takes into account the concentration process that has been applied to the sample for testing sensitivity purposes.
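The volume correction mentioned above can be sketched as follows. Note that the concentrate reading used here is a hypothetical value chosen for illustration; the text reports only the final corrected result of approximately 2.0 PPM:

```python
# Sketch of the volume correction implied above: a reading taken on the
# evaporated concentrate is scaled back to the original rainwater volume.
# The instrument reading below is hypothetical; the text reports only the
# final corrected value of ~2.0 PPM.

def original_concentration(measured_ppm, original_volume_l, concentrate_volume_l):
    """Scale a concentrate measurement back to the as-collected sample."""
    return measured_ppm * (concentrate_volume_l / original_volume_l)

# ~6.5 L reduced to ~6% of volume (~0.39 L); a hypothetical reading of
# ~33 PPM in the concentrate corresponds to ~2 PPM in the rain as collected.
print(round(original_concentration(33.3, 6.5, 0.39), 1))
```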

Two facts bear repeating here:

First, this value exceeds the US Environmental Protection Agency (EPA) standards for drinking water by roughly a factor of 10, again using the most conservative approach that can be taken.

Second, the previously referenced U.S. Geological Survey statement from the year of 1967 is valuable both in relation to evaluating the EPA standards as well as assessing the expectations of aluminum concentrations in natural waters:


There is now a necessity to include an additional aspect of the rainfall analysis that has made its presence known more clearly.  This is the case of biologicals.  It is a fact that, in addition to the repeated detection of a trace metal at questionable levels, certain organic constituents are coming to the fore.  The test results are repeatable at this point and these organics will eventually require an equal accounting for their existence.  I will not enter into an extended discussion of their potential significance at this time, as the first and necessary step is to place on the table that which must be confronted.  My introductory suggestion at this point is to become aware of a previous paper on this site, entitled “A New Biology”, to gain some familiarity with the scope of the issue.  It is fair to say that along with changes of chemistry on this planet, we must also confront certain changes in biology that are in place.  The history of this planet, the cosmos, life and our own species is dynamic, and intelligence itself is partially expressed in the ability to adapt to changing circumstances.  We are in the process, whether we like it or not, of learning if and how quickly we can adapt to changes that have taken place and are taking place, induced or otherwise.  We may also choose whether to participate in the process (hopefully for the betterment of the world, as opposed to its detriment), or whether we shall remain ignorant in an effort to ensconce ourselves in a purported comfort zone.

The methods of examination to be presented here are twofold: that of microscopy and that of infrared spectroscopy.  Here are some images that relate to the fact of the matter; they are repeated in both samples that have been examined:


Low Power (~200x) of Biological Filaments Contained in
Residual Materials from Concentrated Rainwater Samples
(The colors of the filaments are a unique characteristic (commonly red and blue) and they exist as an aid to identification with low power microscopy)


High Power (~5000x) of Biological Filaments Contained in
Residual Materials from Concentrated Rainwater Samples

These images will not be elaborated on in detail at this time, as it may require a period of time to examine the information that has come forth here.  They most certainly indicate a biological nature that shares a common origin with many of the research topics that have evolved on this site over the years.  It may be worthwhile to begin by becoming familiar with the ‘environmental filament’ issue that is so thoroughly examined on this site.  Since it seems clear that we are indeed dealing with an ‘environmental contaminant’ of sorts, the history of communication with the U.S. Environmental Protection Agency may also be worthy of review.

It would also seem to be the case that a significant portion of the residual material is inorganic as well, existing in an insoluble metallic form.  The insoluble residual material may be composed in part of an organometallic complex, based upon historical findings.

Regardless of the source or impact of these materials, it does seem fair to state that an accounting for their existence in the atmosphere and rainfall is deserved.  Each of us may wish to play a part in seeking the answers to the issues and questions before us all.  I wish for this to happen, as I suspect many of us know that it is the right thing to do.


Clifford E Carnicom
November 01, 2015.

Born Clifford Bruce Stewart
January 19, 1953.

Preliminary Rainwater Analysis : Aluminum Concentration


Clifford E Carnicom
Nov 02 2015


A method and means to identify the species and concentration of several different trace metals in ionic form has been established.  The method employed is that of differential cyclic chronopotentiometry, which is a subset of the science of voltammetry.  This brief paper presents a preliminary examination of a rainwater sample for the existence of trace metals.  The sample under examination shows the existence of aluminum in a soluble form.  An estimate of the concentration level of the aluminum has been made; this level exceeds that of the recommended standards for drinking water.  The results indicate that public concerns about the toxicity levels of certain trace metals in the general environment are warranted, and that a more thorough evaluation of the state of atmospheric quality by the responsible agencies is required.

Rainwater Sample of this Study Collected under “Clean” Conditions
Note that Visible Pollution is also Evident

The determination of trace metals can be an expensive and sophisticated proposition.  One of the more modern methods of detection at trace levels involves the use of Inductively Coupled Plasma (ICP); such means and skill sets are not practiced by the public under normal circumstances.  The determination of inorganic compounds at trace levels has always presented a serious challenge to this Institute, and in the past all such efforts have been relegated to that which can be gleaned primarily from qualitative testing methods.  One interesting alternative, with a long history and of increasing importance, is the science of voltammetry.  Many are familiar with the fact that elements and compounds have unique electromagnetic spectra, such as those employed in the disciplines of spectroscopy including, for example, infrared spectrometry and atomic absorption.  It is valuable to know that many of these same elements also have an ‘electrochemical signature’, and that they behave in unique and identifiable ways when exposed to variations in voltage and current.  It is from this fact that voltammetry was born, and its origin dates back to the days of Michael Faraday.  The basic principle of voltammetry is to examine the relationships of oxidation and reduction within a medium or a reaction; there are numerous variations upon the specifics of this theme.  Voltammetry equipment is dramatically more modest in cost than ICP and mass spectrometry, and yet it can still produce usable results that are, on many occasions, commensurate with the more advanced equipment and technology.  Such equipment, in its most basic form, is now employed at the Institute and it is yielding promising results in the important domain of inorganic analysis, such as metals and halogens.

The study here refers only to an inorganic analysis that has been made; at a later date a presentation on biological aspects of the rainwater sample will occur as time and circumstances permit.

The rain sample was collected on Oct 30 2015 with new and clean containers with a clear path to the sky above.  The sample was then evaporated to 33% of the original volume for the purpose of increasing the concentration level sufficient for testing purposes.  The sample was compared to a control volume of distilled water.

The potentiostat used in the voltammetry work is a CV-27 model from Bioanalytical Systems. The unit has passed all test procedures as described in the manual. The output from the potentiostat is coupled to a Pico 2000 series digital oscilloscope, whereby both voltage input and output can be displayed as a function of time. The basic mode of operation for the testing process is therefore one of chronopotentiometry.

A series of calibration tests were made with a variety of trace metals, including calcium, magnesium, sodium, potassium, iron and aluminum.

The goals of the investigations include both the ability to identify the species as well as concentration; both goals have been achieved with the above elements in an ionic state in sufficient concentration, i.e., on the order of a few parts per million (PPM).  The work will extend to other species and combinations thereof in the future.

The particular variation of chronopotentiometry that has been utilized is that of cyclic chronopotentiometry, i.e., the alternating sweep between positive and negative voltages in an effort to identify the peak potential that characterizes the redox reaction of the particular element.

In addition, it has been found that the derivative of the chronopotentiogram is a key and critical factor in the determination of the species.  A careful analysis of the derivative of the cyclic chronopotentiogram can be used to advantage to identify the peak potential of the element.

When this point is identified and collated with the identifying element, concentration levels can also be established if a set of known standards is available. Concentration determinations on the order of a few parts per million have been achieved on multiple occasions.

Further careful evaluation of the derivative of the cyclic chronopotentiogram in combination with variable voltage sweeps can be used to identify separate components within a mixture of ionic species; this has been accomplished with a combination of three elements in ionic form in aqueous media to date.
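The two numerical steps described in the preceding paragraphs, locating the peak potential from the derivative of the chronopotentiogram and converting an instrument response to concentration against known standards, can be sketched as follows. All data here are synthetic illustrations, not measurements from the study:

```python
import numpy as np

# Sketch of the two numerical steps described above, on synthetic data:
# (1) differentiate a chronopotentiogram E(t) to locate the peak potential,
# (2) convert an instrument response to concentration via a linear fit to
#     known standards. All values are illustrative, not measured data.

t = np.linspace(0.0, 10.0, 1001)      # time, s
e = 0.8 * np.tanh(t - 5.0)            # idealized noise-free potential trace, V

de_dt = np.gradient(e, t)             # derivative of the chronopotentiogram
peak_potential = e[np.argmax(de_dt)]  # potential where dE/dt is maximal

# Linear calibration from known standards (synthetic responses).
std_conc = np.array([0.5, 1.0, 2.0, 4.0])     # PPM
std_resp = np.array([0.11, 0.21, 0.40, 0.82])
slope, intercept = np.polyfit(std_conc, std_resp, 1)

def concentration(response):
    """Invert the calibration line to estimate concentration in PPM."""
    return (response - intercept) / slope

print(peak_potential)                  # 0.0 V for this idealized trace
print(round(concentration(0.51), 2))   # a response of 0.51 maps to ~2.5 PPM
```

With measured data, the trace would normally be smoothed before differentiation; the noise-free curve here keeps the sketch short.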

The current work, under these preliminary conditions and examinations, leads to an assessment of a concentration level estimate of aluminum (+3, ionic state) within the rain sample at approximately 2.5 PPM.  A conservative approach in all manners of examination has been adopted in the preparation of this estimate, and the condensing of the sample is accounted for.

The Environmental Protection Agency in 2012 lists the secondary regulations for aluminum in drinking water as being within the range of 0.05 to 0.2 mg/L.  This corresponds to a range of 0.05 to 0.2 PPM for this same standard.  It is an interesting observation within the same report that Secondary Drinking Water Regulations exist as non-enforceable federal guidelines. The wisdom of that classification process can be determined by the reader.

Continuing with the most conservative approach possible, one is led to the assessment that this particular rain sample from a rural location in northern Idaho exceeds the EPA drinking water standard and health advisory by roughly a factor of 12.

The following reference statement from the United States Geological Survey (Bulletin 1827-A, 1967) may be of interest in the evaluation of importance that is to take place:


It is a point of interest that many individuals have ascribed the detection of aluminum within the atmosphere over a period of many years to my name.  Such was never the case.  My earlier work did indeed establish the precept that ionizable metallic salts are at the core of the atmospheric pollution that we now live under, but the testing of aluminum, specifically, was not a part of that process.  The chemistry of aluminum is quite different from that of the alkaline earth metals, and the documentation of its existence by others has always raised intriguing questions of physics.  Prior to this current work, most of the inorganic analyses that I have made have been restricted to qualitative tests.  No means of testing aluminum at trace levels existed at the Institute prior to this occasion.  Hopefully, this situation is now mildly improved with the current voltammetric studies.  This paper adds itself to a long list of documented actions by the citizenry in giving aluminum the consideration to which it is, and has been, entitled.

As a starting point, we might wish to consider the role that aluminum may play within a geoengineered environment, and it may be worthwhile to look at the exothermic energetics of nano-particulates of aluminum under exposure to moisture.  It raises some tantalizing prospects for additional capabilities of an induced or artificial plasma state.

It is also observed that visible pollutants in rainwater may be most pronounced with the advent of a storm.  This is logical, and it has certainly been observed in the cases of excessive fires in this region.  Time will tell if it is the circumstance of other samples.  It remains to be seen how the gradation of pollutants varies with respect to the duration of the rainfall.  Nevertheless, this study does exist as a valid data point, and the merit of consideration is not weakened by any progression of dilution.  The concentration gradient with respect to storm length for invisible pollutants, such as those in ionic form, remains a topic of equal interest for the future.

There is, of course, considerable debate on the issue of the sources of contamination within our water supplies on this planet.  I will not engage in that debate in this paper, as the purpose here is simply to provide another data point of reference that may be of service in helping to establish the accountability that is required.  There are arguments by some who wish to frame a state of ‘normalcy’ for us, regardless of the level of contamination with which we as a species now infest ourselves.  Regardless of the various machinations that may be in vogue, we may all ask the question of where standards evolve from, and whether or not we knowingly wish to deny the legacy of health knowledge that has been acquired over decades, if not centuries.  We should also be called upon to use our united common sense and intuition, pray coupled with the best scientific information available, to act as stewards for our future, and to be worthy of such a title.

Clifford E Carnicom
November 01, 2015.

Born Clifford Bruce Stewart
January 19, 1953.

CDB Lipids : An Introductory Analysis


Clifford E Carnicom
Mar 12 2015
Edited May 29 2016

Note: I am not offering any medical advice or diagnosis with the presentation of this information. I am acting solely as an independent researcher providing the results of extended observation and analysis of unusual biological conditions that are evident.  Each individual must work with their own health professional to establish any appropriate course of action and any health related comments in this paper are solely for informational purposes and they are from my own perspective. 

An introductory qualitative and quantitative analysis of certain lipids that have been extracted from the cross-domain bacteria (CDB), as they are designated on an interim level by this researcher, has been made.  Lipids are a primary biological molecule within any living organism, and future studies of this component will be of the greatest importance.

Several major characteristics have been identified using modest means and methods, and the results bring to the forefront additional unusual properties of the organism under study with respect to the so-called “Morgellons” condition.  There are, potentially, several important health implications that arise from this recent work; these health factors are in complete accord with the historical record of discovery and examination that is available on this site.  This paper will be relatively brief in coverage but it will,  hopefully,  serve to reiterate certain themes and directions of research that remain to be confronted by society and that are deserving of appropriate support and resources.

The primary characteristics or factors that have been identified in the course of this study are:

1.  The lipids from the CDB appear to be highly non-polar in nature.

2.  The lipids have a relatively high index of refraction.

3. The lipids appear to be composed, in the main, from long chain poly-unsaturated fatty acids.

4. The lipids appear to support combustion (i.e., oxidation) with ease.

5. The lipids appear to react readily with the halogens, such as iodine.

6. The visible light spectrum of the lipid – iodine reaction is unique and it serves as an additional means of identification.  Peak absorbance of the reaction is at  approximately 498 nanometers.

7.  A significant portion of the extracted lipids is expected to originate from the membranes of the CDB.

8. Endotoxins within the CDB are suspected to exist and this subject remains a serious prospect for research in the future.

These characteristics will now be discussed in greater detail to formulate a general but composite assessment of the lipid character, as well as a reference to certain health impacts that are necessary to consider.

Variable Solubility of the Lipids as it Relates to Polarity

Polarity is a defining property of a molecular structure, and it is a measure of the distribution of charges within a molecule.  Non-polar molecules are generally symmetric in their nature with a tendency toward an equal and symmetric distribution of charges.  Polar molecules, in contrast, are usually of an asymmetric nature with the charges on the molecule unevenly distributed.  Information on polarity, therefore, provides some generalized nature as to the form or nature of the molecule or substance under study.

In this photo, the lipids are mixed with a mildly polar solvent in the tube to the left; a clear separation remains after settling.  In contrast, the lipids dissolve much more readily in the highly non-polar solvent in the tube to the right.

The significance of this result is as follows:

Fatty acids are a dominant component of many lipids.  They consist of a carboxyl group that is attached to a hydrocarbon chain.  The length of this chain varies with the particular fatty acid involved.  The carboxyl group is polar in nature, and therefore the charge distribution on that particular functional group is asymmetric.  The carboxyl group is also acidic in nature, and this is the origin of the name ‘fatty acid’ that is attached to this common lipid structure.

The hydrocarbon chain that is attached to the carboxyl group is generally of a non-polar nature, and it serves to counteract the polar effect of the carboxyl group.  Therefore, the more non-polar the lipid is, the more likely it is that the hydrocarbon chain is of relatively greater length.  A very long (non-polar) hydrocarbon chain will tend to dominate the character of the molecule and ultimately make the molecule less polar.

This relationship between the polarity of the lipid and the length of the attached hydrocarbon chain provides our first useful interpretation as to the structure of the lipid molecule.  Some lipids are more or less polar than others; a highly non-polar lipid is indicative of lengthy hydrocarbon chains within the fatty acid.  The longer the fatty acid is, the more complex the lipid structure, or its interactions with other molecules, is likely to be.  The structure of any molecule is of the highest importance, as one of the dogmas of biology is that structure determines function.  We are after both structure and function, and usually in that same order.

A couple of examples of short vs. long chain fatty acids follows; it can be seen that the differences in form and structure can be substantial:

Short Chain Fatty Acids

The specific conclusion in this case is that we are more likely to be dealing with a lipid form that contains more extensive hydrocarbon chains.

The next topic of interest concerns the index of refraction.  The index of refraction is a measure of the ability of a substance to bend a light wave that passes through it, and equivalently a measure of the speed of light through that same material.  It is an important defining physical property of a substance, and its measurement can be made with relative ease and modest cost.  Tables of the index of refraction for a wide variety of substances, including lipids and oils, are readily available for comparison purposes.

The index of refraction for the lipids under examination measures at 1.487, taken as the average between two different samples.  The instrument has been calibrated with numerous comparison oil samples and is performing accurately and reliably.  The estimated error of the measurement is ±0.001.

The measurement of 1.487 is a relatively high index of refraction, especially as far as oils are concerned.  This higher measurement also leads to interpretations of significance as we shall soon discover.

There is a relationship between the index of refraction and the degree of saturation within a fatty acid or lipid.  The saturation level (i.e., saturated vs. unsaturated) of a lipid is also a very important characteristic, as it expresses itself in terms of the bond types within the molecule; this is an additional aspect of structure that we have declared as our pursuit.

Let us begin with the definitions for saturated vs. unsaturated fats.  A saturated fat is one in which a full complement of attached hydrogen atoms exists.  A saturated fat contains only single bonds between the carbon atoms.  An unsaturated fat, in contrast, has double (or higher) bonds between the carbon atoms, and there will be fewer hydrogen atoms attached as a result.  Let us present a couple of images to clarify the difference between saturated and unsaturated fats.

An example of a saturated vs. an unsaturated fat

In addition, a distinction should be made between mono-unsaturated fats and poly-unsaturated fats.  In essence, a mono-unsaturated fat has a single double carbon bond within the hydrocarbon chain and a poly-unsaturated fat has more than one double carbon bond within the chain.  The image below shows this difference.
The top image shows another example of a saturated fat.
The lower two images show the distinction between monounsaturated and polyunsaturated fats.
Notice the number of double carbon bonds present in the latter examples.
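The classification described above can be expressed as a brief sketch.  The rule and the example fatty acids below are standard textbook values offered only for illustration; they are not measurements from this study.

```python
# Illustrative sketch: classifying a fatty acid by its count of
# carbon-carbon double bonds, following the definitions above.

def classify_fatty_acid(double_bonds: int) -> str:
    """Return the saturation class implied by the number of C=C double bonds."""
    if double_bonds == 0:
        return "saturated"
    if double_bonds == 1:
        return "mono-unsaturated"
    return "poly-unsaturated"

# (name, carbon chain length, C=C double bonds) -- common textbook examples
examples = [
    ("stearic acid", 18, 0),
    ("oleic acid", 18, 1),
    ("linoleic acid", 18, 2),
]

for name, carbons, db in examples:
    print(f"{name} (C{carbons}): {classify_fatty_acid(db)}")
```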

As information is gained, let us never lose sight of the end goal: the more that can be understood about the structure of a biological molecule, the closer we are to learning about the behavior, interaction and function of that molecular structure.  This information is a prerequisite for the design of effective mitigation strategies.  While much of this pursuit remains in our future, we can nevertheless report the modest levels of progress as they occur, albeit under restricted conditions.

Now that we understand the variations of saturation within fats and oils (lipids), let us return to something that can be measured to give us information about the state of saturation within a lipid.  One such measurement is the index of refraction, as has been referred to above.

It will be found in the literature that there is a relationship between the degree of saturation in a fat and the ‘iodine number’.  The iodine number is a measure of the level of absorption of iodine by fats, and this number can be used in turn to infer the degree of saturation of that same lipid or fat.  The method is commonly used in the food industry to determine the quality of fats.  The degree of fat saturation is a variable of high interest within the food industry, as it affects the spoilage rate, and this in turn affects the economics of the industry.  There are many important reasons to understand the qualitative characteristics of lipids beyond our immediate interest in the ‘Morgellons’ issue.

Determination of the iodine number is a more demanding laboratory method, requiring additional time, protocols and reagents in comparison to the alternative methods that have been developed within this study.

There is, however, a more accessible method to fulfill our immediate need, which is to get some sense of the likely saturation level of this particular lipid.  It will be found, with study, that a relationship can also be established between the index of refraction of an oil and the iodine number of that same oil.  An increase in the iodine number is indicative of a higher unsaturation level, and in parallel it will be found that a higher index of refraction is strongly correlated with a higher iodine number.  We are able, therefore, to make an equally viable interpretation of the saturation (and unsaturation) level with the use of the index of refraction as our primary predictor variable.  Ultimately, a higher iodine number estimate will indicate a higher level of unsaturation within the lipid.  Such a relationship has been researched and established as presented below.

Linear regression of iodine value against index of refraction

Several different oil types have been investigated, and the correlation between the index of refraction and the iodine number is reasonably strong (r = 0.92, n = 13).  The accuracy of the refractometer in use has been included as a part of the study.  The result of this work is that a viable method now exists to estimate the level of relative saturation from a direct measurement of the index of refraction of the lipid under study.

The application of the linear regression model to the measured index of refraction (1.487) yields an estimate for the iodine value as 218.  This magnitude for the estimated iodine value is extremely high and it is significant in its own right.
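The regression step described above can be sketched as follows.  The reference pairs used here are approximate literature-style values chosen purely for illustration; they are not the original 13-point data set of this study, so the numerical output will differ from the paper's estimate.

```python
# Sketch of the method: fit iodine value against refractive index for a
# set of reference oils, then apply the fit to the measured value 1.487.
import numpy as np

# Hypothetical reference oils (refractive index, iodine value)
refractive_index = np.array([1.454, 1.458, 1.461, 1.465, 1.467, 1.470, 1.476, 1.480])
iodine_value     = np.array([10.0,  53.0,  85.0,  100.0, 110.0, 130.0, 175.0, 190.0])

# Least-squares line and correlation coefficient
slope, intercept = np.polyfit(refractive_index, iodine_value, 1)
r = np.corrcoef(refractive_index, iodine_value)[0, 1]

# Apply the model to the measured index of refraction
estimate = slope * 1.487 + intercept
print(f"r = {r:.2f}; estimated iodine value at n = 1.487: {estimate:.0f}")
```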

The conclusion to be reached from this iodine value is meaningful.  This stage of the study indicates that the character of the lipid is more likely to be that of a highly poly-unsaturated lipid.  This result corroborates the first interpretation of a relatively lengthy fatty acid chain within the lipid structure; the two interpretations are mutually supportive.  This means that the lipid hydrocarbon chains are more likely to be lengthy, with several double carbon bonds along the chain.  This, in turn, will affect the structure, as double bonds cause a bend to take place in the hydrocarbon chain; several double bonds would only enhance that feature further.

In addition, double bonds within a hydrocarbon chain have another likely and important result: they are much more likely to undergo chemical reactions.  Two likely candidates for reaction are oxygen and the halogens.  Lipids with a high iodine value are more subject to oxidation and therefore have a greater likelihood of becoming rancid (spoiled).  High iodine level lipids are also more likely to produce free radicals.  Lastly, highly polyunsaturated lipids are more likely to polymerize (i.e., ‘plasticize’).  Each of these impacts offers the prospect of additional harm to the body, and great attention has been given to the effects of oxidation and free radicals in the history of research on this site.

There is a wealth of information that is available on the health risks associated with polyunsaturated fats.  The following citations are a couple of representative examples of the issues involved, the first from a lay standpoint and the second from the Commission of European Communities:

Source : Reports of the Scientific Committee for Foods, Commission of European Communities

Readers may recall the extensive attention that has been given within this site to the role that antioxidants can play in the mitigation of excessive oxidation in the body.  Those discussions, once again, appear to be especially relevant to the amelioration of the harmful influences of polyunsaturated fats.  The impact of halogens on the thyroid and metabolism has also been extensively discussed on this site, and we will return to that topic later in this paper as well.

The issue of oxidation in combination with combustion tests should now be raised.  The tests, at this stage of investigation, indicate that this particular species of lipids may be highly subject to the process of oxidation.  The purity of the sample cannot be quantified at this point, since there may be other compounds present within the lipid samples.  However, all indications are that the character of the lipids is somewhat unusual with respect to oxidation and, for that matter, combustion.

The lipids that have been extracted ignite easily, as is shown in the photograph below on the left side:

Lipid Combustion Tests

In this case, the method involves placing a small amount of the lipids into a watchglass with a small piece of paper acting as a wick.  The lipids burn easily and steadily under these conditions, and the behavior is somewhat akin to lamp oil.  Due to the biological and apparently polyunsaturated nature of the lipids, a comparison might be made with whale oil, which was an important source of fuel in earlier times.  There is no suggestion here that the lipids are chemically identical to whale oil by any means; however, fish oils and whale oil share many interesting properties of the highly polyunsaturated fats.  The photograph on the right shows the wick remaining at the end of combustion; this demonstrates that the oil itself is the primary source of fuel in the combustion.  The last photograph shows a representative example of the failure of any of the other tested lipids or oils to support direct combustion.

Combustion goes hand in hand with oxidation; something that burns oxidizes.  It is of interest that of all the other oils tested under similar conditions (approximately 8 varieties of varying degrees of unsaturation), only the lipids under examination here showed any ease of combustion at the level shown within the photographs.  Along with the highest index of refraction found within the group examined, the dramatic display of combustion of the sample further reinforces the case for a lipid that is highly unsaturated and thus prone to excessive oxidation.  This finding is once again corroborative of the extensive case for excessive oxidation within the body that occurs in association with the ‘Morgellons’ condition; readers may also recall the lengthy discussions on the apparent marked oxidation of iron during the examination of blood samples.  All signs of the accumulated research indicate that excessive oxidation within the body is one of the most likely outcomes expected to be found within any future studies of the ‘Morgellons’ condition.  Preliminary data from early questionnaires submitted by the public also strongly indicate this same result.

There are at least two primary forms of lipids in the body: one for the storage of energy within the cells, and another within the membranes of the cell, where they act to encapsulate and protect the cell.  Saturated fats are more likely to be associated with the storage of energy internal to the cell, and unsaturated fats are more likely to be associated with the membranes of a cell.  Phospholipids are a very important class of lipids that are found within the cell membranes.  The degree of unsaturation within phospholipids varies, with one or both tails having double carbon bonds (the site of oxidation).  An image of a representative phospholipid follows:

Phospholipid within a Cell Membrane

The oxidation of lipids is referred to as lipid peroxidation, and it is especially prone to occur with polyunsaturated lipids, as we appear to have in this case.  Phospholipids (a bi-layer) are a major constituent of cell membranes, and the oxidation of these lipids subsequently causes damage to the cell.  Lipid peroxidation is essentially the theft of electrons from the lipids in the membranes and it occurs as a free radical chain reaction.  The oxidation occurs when there is an excess availability of free radicals, or reactive oxygen species. The point of oxidation will be the location of the double bond, which occurs at the bent location within the unsaturated fatty acid tail, as shown in the picture above.  An illustration of the lipid peroxidation reaction is shown below; notice the site of activity at the carbon double bond:

Source : Colorado State University

It appears to be the case at this point that the CDB contain within them a highly polyunsaturated fat and/or fatty acids, most likely occurring within the membranes of the CDB, and that the CDB may therefore be subject to, or result in, lipid peroxidation in the presence of free radicals.  This process, once started, is a chain reaction and is only terminated in the presence of appropriate antioxidants, such as Vitamin E, glutathione peroxidase, transferrin (binding free iron), and enzymes (such as catalase), in addition to others [see Robbins below].  As shown within earlier culture trials, Vitamin C and NAC (N-acetyl cysteine, acting as a glutathione precursor) may show themselves to be effective antioxidants as well.  The issue of oxidants vs. antioxidants has emerged earlier within the research and this information remains available to review.  Those seeking therapeutic protocols dependent upon oxidizing vs. antioxidant approaches may wish to examine further the fundamental differences that are apparent within the scientific literature.  Each individual must, of course, seek health consultation that is appropriate to their individual needs.

Another, more complete description of lipid peroxidation comes from Robbins Pathologic Basis of Disease, 4th Edition, where the following sequence is described:

“Lipid peroxidation is one well-studied…mechanism of free radical injury.  It is initiated by hydroxyl radicals, which react with unsaturated fatty acids of membrane phospholipids to generate organic acid free radicals, which in turn react quickly with oxygen to form peroxides.  Peroxides themselves then act as free radicals, initiating an autocatalytic chain reaction, resulting in further loss of unsaturated fatty acids and in extensive membrane damage.”

To reiterate the attention that has been given in the research to the oxidation and antioxidant issues in the case of ‘Morgellons’, please recall some of the earlier papers (this paper included) that complement this discussion:

Morgellons : A Discovery and a Proposal – February 2010
Morgellons : Growth Inhibition Confirmed – March 2010
Morgellons : The Extent of the Problem – June 2010
Morgellons : In the Laboratory – May 2011
Morgellons : A Thesis – October 2011
Morgellons : The Breaking of Bonds and Reduction of Iron – November 2012
Amino Acids Verified – November 2012
Morgellons : A Working Hypothesis : Part I – December 2013
Morgellons : A Working Hypothesis : Part II – December 2013
Morgellons : A Working Hypothesis : Part III – December 2013
Growth Inhibition Achieved – January 2014
Biofilm, CDB and Vitamin C – April 2014
CDB : General Characteristics (In Progress) – July 2014
CDB Lipids : An Introductory Analysis – March 2015

Lipid peroxidation is a complex area for study; however, the importance of doing so can be understood from the following statement by Marisa Repetto, from the Institute of Biochemistry and Molecular Medicine, Argentina:

“Currently, lipid peroxidation is considered [as one of] the main molecular mechanisms involved in the oxidative damage to cell structures and in the toxicity process that lead[s] to cell death.”

The complete paper is detailed but insightful,  and it demonstrates the extensive research that is now available on the subject of lipid peroxidation.  The paper in its entirety may be accessed here.

Let us introduce an observed reaction with one of the halogens, in this case, iodine.  The reaction is shown below on the right hand side, in comparison to a negative reaction with vegetable oil on the left.  Similar to the case of combustion above, the CDB lipids under study are the only lipids (of approximately eight in comparison) that have displayed this pronounced reaction with iodine.  It appears to be a unique, important and characteristic reaction.

CDB Lipids

It is understood that iodine reacts with lipids; in fact, this is the very basis of the ‘iodine number’ method and it is used as a measure of the unsaturation level of the lipid.  The higher the iodine level, the higher the level of unsaturation in the lipid.  We have already discussed the relationship between the iodine number and correlation with the index of refraction, and we have very good reason to suspect a very high level of unsaturation within the lipids examined.

What is under discussion here is the formation of a bright red colored iodine complex which, thus far, presents itself only within this particular lipid form, at least in relation to numerous sample types that it has been compared with.  The colored complex reaction formed is, in itself, worthy of continued chemical analysis and investigation.  This reaction has not occurred in like fashion to any other lipid samples examined thus far.  The nature of the complex is not completely understood at this time;  the consideration of an iron-lipid-iodine or transition metal complex, however, is extremely high on the list of possibilities.

What can be concluded from visible light spectroscopy, however, is that the colored complex formed once again assures us that we are dealing with a structure that contains numerous double carbon bonds.  Visible light spectroscopy is highly dependent upon what is termed conjugation; conjugation is a molecular structure that is based upon alternating single and double carbon bonds.  The greater the degree of conjugation, the longer the wavelength of the color that will be absorbed.  An example of a highly conjugated form is as follows:

An example of a conjugated structure within a chromophore
(portion of a molecule that absorbs color).
Source : Wikipedia

Notice the numerous alternating single and double bonds in the above structure.  Chromophores are especially likely to form with compounds that involve the transition metals, such as iron.  The color of the complex lends itself well to visible light spectrometry, and a spectral plot of the CDB complex formation in the visible light range is shown below:

Visible Light Spectrum of the CDB Lipid-Iodine Complex

The peak absorbance occurs at approximately 498 nanometers.  This spectral examination of the lipid-iodine complex is an important identification method to establish the presence or existence of this particular CDB lipid form.
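The peak-finding step amounts to locating the wavelength of maximum absorbance in the recorded spectrum.  A minimal sketch follows; the synthetic spectrum below is a Gaussian centered near 498 nm purely for illustration, not the measured data.

```python
# Sketch: locate the absorbance peak in a visible-light spectrum.
import numpy as np

wavelengths = np.arange(400, 701)  # nm, visible range, 1 nm steps

# Synthetic absorbance curve standing in for the measured spectrum
absorbance = np.exp(-((wavelengths - 498) ** 2) / (2 * 30.0 ** 2))

# The peak is the wavelength at maximum absorbance
peak_nm = wavelengths[np.argmax(absorbance)]
print(f"Peak absorbance at approximately {peak_nm} nm")  # 498 nm here
```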

The identification of an iron-lipid-iodine complex is further substantiated with tests for the detection of iron using 1,10-phenanthroline reagent in combination with the lipids in a mildly polar solution.  These initial tests are weak in color but nevertheless positive for the presence of the Fe+2 ion within the CDB lipids.  This finding is consistent with the paramount conclusion of significant Fe+2 iron use and metabolism by the CDB, as it has been discussed extensively within earlier papers.

The impact of halogens upon the body has been discussed extensively in earlier work and it will not be repeated here.  Readers are referred to the paper entitled Morgellons : A Working Hypothesis (esp. Parts II & III) for the important effects and toxicity potential discussed therein.


The next topic of importance to discuss is that of polymerization.  A polymer is a molecular structure that is composed of many repeating smaller units.  Polymers can be either natural or synthetic, and they usually have a large molecular mass compared to that of the basic structural unit.  Latex and Styrofoam are examples of a natural and a synthetic polymer, respectively.  The architecture and length of the polymer chains strongly affect the physical properties of the polymer, such as elasticity, melting point and solubility, amongst others.  A diagram of various structural forms is shown below:

Source : Wikipedia

The reason that polymerization is relevant here is that unsaturated lipids are prone to polymerization.  The higher the degree of unsaturation, the more likely that polymerization will take place.  This is due to the oxidation at the double carbon bonds that have been brought to attention repeatedly here.  A familiar example of polymerization to many of us is with the use of linseed oil.  Linseed oil is a highly unsaturated lipid that is applied to furniture as a protective coating; this is one of the so-called “drying oils”.  As this type of oil weathers (or oxidizes), it will form a harder and protective coating over the wood surface.  This is an excellent example of the oxidation of a highly unsaturated oil, or lipid, that produces a polymer.  As mentioned, polymers can vary widely in their physical properties, and the plastics are an excellent additional example of synthetic polymers.  Oil paints that artists use are another example of the “drying oils” that share these same characteristics.

The probability of polymerization for the CDB lipid complex appears to be high at this point, as all of the prerequisite characteristics appear to be in place.  The lipid appears to be highly unsaturated and therefore subject to oxidation, as has been detailed above.  This places us on the alert that the CDB lipids may be a candidate to produce polymers which, in general, would be anticipated to cause harm if internal to the body.

With respect to lipid discovery and extraction, we would be remiss if the subject of endotoxins was not again introduced.  Readers may recall that all tests conducted on the CDB to date indicate that they are Gram-negative.  A Gram-negative test is important for bacteria as it indicates at least three characteristics of importance:

1. The cell walls are lipid-rich in comparison to Gram-positive bacteria.
2. The negative test indicates the presence of lipopolysaccharides (LPS) within the cell wall; lipopolysaccharides are essentially synonymous with endotoxins.
3. Pathogenic bacteria are often associated with endotoxins.

Let us visually compare the cell walls of a Gram-positive bacteria vs. a Gram-negative bacteria:

Gram-positive bacteria vs. a Gram-negative bacteria


There are distinctive differences that can be noticed.  Starting from the bottom, we can see that both cells contain phospholipids (the lipid bi-layer presented earlier).  The Gram-negative cell, however, is lipid rich, while the Gram-positive cell has a much lower lipid content.  The lipid content of the Gram-negative cell wall is approximately 20-30%, which is very high compared to the Gram-positive cell wall.  The relatively high volume of lipids that has been extracted from the CDB is supportive of the Gram-negative test result.

In the Gram-negative cell, the peptidoglycan layer is about 5-20% by dry weight of the cell wall; in the Gram-positive cell the peptidoglycan layer is about 50-90% of the cell wall by dry weight.  Peptidoglycan, also known as murein, is a polymer consisting of amino acids and sugars.

Gram-negative bacteria are generally more resistant to antibiotics than Gram-positive bacteria.  In consideration of the cross-domain terminology currently in use, it is of interest to note that the archaea can be either Gram-negative or Gram-positive; the archaea and the eukaryotes remain under equal consideration within these studies.  It is also of interest to know that until relatively recent times the archaea were classified as bacteria, and that the classification systems of biology remain dynamic.

A central difference between the two forms, beyond the relative lipid content and peptidoglycan layer, is the presence of lipopolysaccharides (LPS) on the Gram-negative bacteria.  LPS, or endotoxins, elicit a strong immune response in animals.

Aerosolized endotoxins are known to have a significant effect upon the pulmonary system, and chronic exposures are known to increase the risk of chronic obstructive pulmonary disease (COPD).  COPD is now the third leading cause of death in the United States.  Sub-lethal doses cause fluctuations in body temperature (short term increases and longer term decreases), and changes in the blood, immune and endocrine systems and in metabolism.  They can result in “flu-like” symptoms, cough, headache and respiratory distress.  They are linked to increases in asthma and chronic bronchitis.  There are no regulatory standards for the levels of endotoxins in the environment (source : Natural Resources Defense Council).

Endotoxins are associated with increased weight gain, obesity, gum and dental infections and diabetes. A linkage with Chronic Fatigue Syndrome exists, as well as with atherosclerosis, oxidative stress, chronic conditions, cardiovascular disease and Parkinson’s Disease.  The condition of endotoxins within the blood is referred to as endotoxemia.

There may be a discomforting familiarity with the above symptoms in correlation with the so-called “Morgellons” condition; this familiarity justifies intensive research into the potential linkage between “Morgellons” and endotoxins.

Lastly, let us now review an infrared investigation into the nature of the extracted lipids.

Infrared Spectrum of CDB Lipids

Although a low resolution IR spectrophotometer has been used for this project, a very clear spectrum has been obtained.  The spectrum is dominated by peaks at 2900 cm-1 and 1700 cm-1.  The 2900 cm-1 peak can be attributed to sp3 single carbon-hydrogen bonds.  This functional group is perfectly in accord with the hydrocarbon structure that forms the core of a fatty acid.


In addition, the peak at 1700 cm-1 can be attributed to carbon-oxygen double bonding (the carbonyl group), also in accord with a fatty acid that is subject to oxidation, as extensively described in this report.

A probability model has been developed for the analysis of infrared spectra, subject to the constraints of the technology available to the Institute.  The application of the model to the infrared spectrum above yields the following relative probabilities for the existence of the various functional groups:

Functional Group Relative Probability of Existence
Ketones 90%
Alkanes 70%
Aldehyde 60%
Carboxylic Acid 45%
Phosphonate 45%
Silane 37%
Phosphonic Acid 30%
Ether 30%
Ester 30%
Amide 20%
Phosphine 20%
Sulfate 15%

An analysis of the above probability table will demonstrate that it is highly dominated by the combined presence of carbon-carbon and carbon-oxygen single- and double-bond functional groups.  The study and examination of the high probability functional groups and their potential impacts upon health will continue; the strong appearance of the ketone and aldehyde groups with a double carbon-oxygen bond (the carbonyl group) is also of high interest here, as the aldehydes are very easily subject to oxidation.  The potential presence of impurities within the sample will also need to be examined further, including those that might be introduced by the extraction process.
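For reference, the probability table above can be handled as a simple data structure when screening for the dominant groups.  The 50% cutoff below is an assumed threshold chosen for illustration, not a value taken from the model itself.

```python
# Sketch: the relative-probability table expressed as a dictionary, with a
# simple filter pulling out the dominant functional groups (assumed cutoff 50%).
functional_groups = {
    "Ketones": 90, "Alkanes": 70, "Aldehyde": 60, "Carboxylic Acid": 45,
    "Phosphonate": 45, "Silane": 37, "Phosphonic Acid": 30, "Ether": 30,
    "Ester": 30, "Amide": 20, "Phosphine": 20, "Sulfate": 15,
}

dominant = {group: p for group, p in functional_groups.items() if p >= 50}
print(dominant)  # the ketone, alkane and aldehyde groups dominate
```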

All assessments in this report are highly corroborative of one another and they support the assessment of a highly unsaturated lipid, and all that this entails, as comprising a core structure of the CDB extraction that has taken place.


Additional Note:

Some additional analysis of biomolecules with the use of more capable and advanced infrared spectroscopy instrumentation has been completed as of May 2016.  The structural information identified continues to support the hypothesis that the CDB derive from the bacterial domain and this remains a primary focal point of research as to its origin.  The degree of overlap of genetics, if any, with the remaining archaea or eukaryote domains remains an open topic of research.


Clifford E Carnicom
Mar 12 2015
Edited May 29 2016

born Clifford Bruce Stewart
Jan 19 1953