sci.geo.meteorology (Meteorology): For the discussion of meteorology and related topics.
#1
Another good article that AGW alarmists cannot rationally rebut; instead you get the predictable insults you expect from true believers.

http://www.drroyspencer.com/2009/06/...-court-of-law/
http://www.drroyspencer.com/2009/05/...els-are-wrong/
http://en.wikipedia.org/wiki/Daubert_Standard

June 6, 2009, 08:41:00 | Roy W. Spencer, Ph.D.

The use of complex computerized numerical models for predicting global warming is obviously critical to the IPCC's claims that manmade global warming will be - if it is not already - a serious problem for humanity. Computer modeling used as evidence in a court of law is not new, and it involves special challenges. While I am not a lawyer, nor did I stay in a Holiday Inn Express last night, I have the following thoughts regarding the admissibility of climate models as scientific evidence.

The admissibility of scientific evidence, such as computer models, now relies upon the Daubert standard resulting from legal precedent set by the U.S. Supreme Court in 1993. Applying the Daubert tests impartially, I believe computer models should be deemed inadmissible as evidence when it comes to predictions of global warming and associated climate change. This is not likely to happen, since a judge has considerable discretion in how the Daubert standard is applied, and the U.S. Supreme Court in its April 2, 2007 ruling on carbon dioxide as a pollutant obviously relied very heavily on climate models as evidence. Nevertheless, it is useful to address the Daubert standard since it provides a framework for understanding the strengths and weaknesses of climate models. In the following, I have paraphrased the Wikipedia summary of the 5 cardinal points of Daubert and offered some thoughts on each. My apologies in advance for any misunderstanding on my part of the law and legal precedent.

1. Has the technique been tested in actual field conditions (and not just in a laboratory)?

This is where climate models are particularly weak. Climate modelers increasingly rely on theory, and rely less on actual testing with real observations. I believe there are three kinds of testing in this regard, all against actual observations of the real climate system. In increasing order of importance I would place them as: (1) testing of the physical processes as components of the model; (2) testing of the short-term behavior of the model as a whole; and (3) testing of model predictions of future climate states.

On the last point, the testing of climate models to see if they can predict climate change is nearly impossible. Climate modeling as a discipline has existed for little more than 20 years, and the models are being used to predict 100 or more years into the future. There is no known analog for manmade global warming with which a model can be shown to be a reliable indicator of the future. Sure, they have been tested to see if they reasonably replicate seasonal variations in temperature, clouds, rainfall, etc. (which is one example of the second kind of test). But the models are not being used to predict the seasons. They are instead being used to predict how the seasons will change in the distant future.

If we use forensic fingerprinting as an example, this would be like using a technique that has been demonstrated to reliably identify whether a smudge mark is a fingerprint at all, and then claiming that it can also distinguish between different persons' fingerprints without actually testing that claim. Similarly, climate models have been unable to reliably predict year-to-year variability, let alone long-term warming. Their purported ability to explain 2 to 3 observed events of warming and cooling over the last 50 to 100 years based upon anthropogenic pollution by carbon dioxide and particulates, respectively, is similarly weak in the sense of "testing", because natural sources of climate change (e.g. natural, decadal-time-scale fluctuations in cloud cover) cannot be investigated as potential alternative explanations. This is because sufficiently accurate global measurements of any natural forcing mechanisms, like natural cloud variations, exist for only the last 10 years or so. As far as I can tell, the neglect of clouds as a potential forcing mechanism of climate change is never mentioned by the modelers, or by the IPCC.

Finally, I have seen recent (but as yet unpublished) results that have tested the IPCC models' predictions of various warming rates over short sub-periods of time and found that the small amount of warming we have actually observed over the last 10 or 20 years is inconsistent with at least 95% of the climate models' predictions. This suggests the models are predicting too much warming. I address our own testing of models under point #4, below.

2. Has the technique been subject to peer review and publication?

Climate models have, of course, been published in the peer-reviewed scientific literature. But that does not mean that the models can be replicated by someone who then wants to use the publication to build his own version of the model. The only way that could be done is to publish all of the computer code contained in the model - thousands of lines of code. Each line has the potential to drastically alter the behavior of the model, and the researcher attempting to replicate a study will, in general, have no idea which lines are critical and which lines are not. Peer review always involves a certain amount of faith on the part of the peer reviewer that the researchers publishing their study have performed and reported their research in an unbiased fashion, and that is true in spades for climate models. Nevertheless, this Daubert test would probably still be considered to be met since, strictly speaking, yes, models have been published in the peer-reviewed literature. But there is little question that the publishing of climate models pushes the limits of how well peer review can control the quality of what gets published in the scientific literature.

3. What is the known or potential rate of error? Is it zero, or low enough to be close to zero?

This is closely related to point (1) above, the testing of models. Again, there is no way to know what the error rate of the model is for predicting manmade global warming, because that is a one-of-a-kind event. So, in my opinion, not only is the error rate not close to zero, there is the distinct possibility that the error rate will end up being 100%. In stark contrast, the IPCC has cleverly attached a 90% probability to its statement that global warming over the last 50 years is very likely to be mostly due to anthropogenic pollution. But that 90% represents their level of faith. It is not the result of any kind of statistical testing of the number of successful and unsuccessful predictions by the models.

4. Do standards exist for the control of the technique's operation?

Modelers no doubt have some fairly uniform standards by which they manipulate models, just as weather modelers do. But for what really matters to forecasts of climate change - feedbacks - there are no well-established procedures for controlling feedbacks in the models. As it is, we are not even sure what the feedbacks in the real climate system are, let alone have procedures for making sure the models behave in a similar manner. Yet it is the feedbacks in the models that will determine whether manmade global warming will even be measurable, let alone catastrophic. It is also the feedbacks that determine whether increasing CO2 in the past 50 to 100 years is a sufficiently strong forcing to cause the warming observed over the same period of time. Here I discuss our recent evidence that strongly suggests the feedback tests of models that have been made against observational data have not been sufficient to even distinguish between positive and negative feedback in the climate system, let alone the different levels of positive feedback exhibited by the 21 models tracked by the IPCC. And if feedbacks are indeed negative, then manmade global warming becomes, for all practical purposes, a non-issue.

5. Has the technique been generally accepted within the relevant scientific community?

If the "relevant scientific community" is considered to be just those who run computerized climate models, then the answer is most certainly "yes". If the community is climate researchers in general, the answer would probably be "yes". (After all, non-modelers will tend to trust that the modelers know what they are doing.) But if the community includes all meteorologists (who also depend upon computer models as tools in weather forecasting), the answer might well be "no". I like to point out that climate is just time-averaged meteorology, and if you don't get the meteorology right, how can you expect your climate model to predict anything that is meaningful? Climate modelers claim that meteorologists are not qualified to critique climate model predictions, but I would argue that many climate modelers are not qualified to create climate models that can be expected to realistically predict climate change.

In summary, I would say that a totally impartial judge might well find that climate models are inadmissible as scientific evidence in a court of law under the 5 criteria implicit in the Daubert standard. Of course, it might be difficult to find an impartial judge on the subject of global warming.
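[Editor's note: the feedback argument in point 4 can be made concrete with a toy zero-dimensional energy-balance model, C dT/dt = F - lambda*T, where lambda (W/m^2/K) is the net feedback parameter: equilibrium warming is dT = F/lambda, so a more strongly negative net feedback (larger lambda) means less warming for the same CO2 forcing. This is a minimal illustrative sketch; the parameter values are assumptions for illustration, not numbers from the article or from any actual climate model.]

```python
# Toy zero-dimensional energy-balance model: C dT/dt = F - lam * T.
# Illustrative only; parameter values below are assumptions.

def equilibrium_warming(forcing_wm2, lam):
    """Equilibrium temperature change (K) for a constant forcing (W/m^2)."""
    return forcing_wm2 / lam

def integrate(forcing_wm2, lam, heat_capacity=8.0e8, years=200, dt_years=0.1):
    """Euler-step the model forward; heat_capacity in J/m^2/K (~200 m ocean)."""
    seconds_per_year = 3.156e7
    temp = 0.0
    for _ in range(int(years / dt_years)):
        flux = forcing_wm2 - lam * temp                     # net imbalance, W/m^2
        temp += flux / heat_capacity * (dt_years * seconds_per_year)
    return temp

f2x = 3.7  # canonical forcing for doubled CO2, W/m^2
# Strongly negative net feedback (large lam) vs weakly negative (small lam):
print(equilibrium_warming(f2x, lam=3.3))  # ~1.1 K: warming nearly a non-issue
print(equilibrium_warming(f2x, lam=0.9))  # ~4.1 K: large warming
```

The same forcing thus yields answers differing by a factor of about four depending only on the assumed feedback parameter, which is the sense in which feedbacks dominate the forecast.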
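[Editor's note: the "inconsistent with at least 95% of the models" claim in point 1 is, mechanically, a simple ensemble-consistency check. Below is a minimal sketch with invented trend numbers; these are not the unpublished results mentioned in the post, and a real test would also have to account for internal variability and observational uncertainty.]

```python
# Sketch: is an observed trend consistent with an ensemble of modeled trends?
# All numbers are invented for illustration (deg C per decade).
model_trends = [0.18, 0.22, 0.25, 0.20, 0.30, 0.24, 0.19, 0.27, 0.21, 0.23]
observed_trend = 0.12

# Fraction of ensemble members at or below the observed trend:
frac_below = sum(t <= observed_trend for t in model_trends) / len(model_trends)
print(f"fraction of models at or below observation: {frac_below:.2f}")

# If nearly every modeled trend exceeds the observation, the observation is
# "inconsistent with" that fraction of the models in the sense used above.
frac_above = sum(t > observed_trend for t in model_trends) / len(model_trends)
print(f"observation lies below {frac_above:.0%} of model trends")
```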
#2
On Jun 6, 5:46 pm, "Eric Gisin" wrote:
> [Dr. Spencer's article quoted in full; snipped - see post #1.]

NO! they are subject to Chaos Theory.
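[Editor's note: the one-line reply invokes chaos theory's sensitive dependence on initial conditions. A minimal sketch using the textbook Lorenz-63 system (a standard illustration of the mathematical point, not an actual climate model; parameters are the usual textbook values) shows two nearly identical initial states diverging:]

```python
# Lorenz-63 system with crude Euler stepping: tiny initial differences grow.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)   # perturb one coordinate by a billionth
for _ in range(3000):        # integrate 30 time units
    a, b = lorenz_step(a), lorenz_step(b)

separation = max(abs(p - q) for p, q in zip(a, b))
print(separation)  # the two trajectories have decoupled despite the tiny nudge
```

Whether this dooms *climate* prediction (a statistical, boundary-value problem) the way it dooms long-range *weather* prediction (an initial-value problem) is exactly what the thread is arguing about.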
#3
According to the Wikipedia article given,
it would be up to the anti-environmentalist side to file a Daubert motion to suppress climate model testimony. For whatever reason, so far, the attorneys for the fossil fuel industry have not done so. Fossil fool pseudo-science isn't used by industry legal representatives in court cases. Most of it relies on lies of omission, and therefore would be perjury. Fossil fool spin's purpose is just to confuse the suckers.

http://en.wikipedia.org/wiki/Daubert_Standard

On Jun 6, 9:46 am, "Eric Gisin" wrote:

> [Dr. Spencer's article quoted in full; snipped - see post #1.]
#4
On Sat, 6 Jun 2009 18:26:17 -0700 (PDT), Roger Coppock wrote:

> According to the Wikipedia article given, it would be up to the anti-environmentalist side to file a Daubert motion to suppress climate model testimony.

Who wrote that? There is NO "anti-environmentalist" side; we all care about the environment, all hunters care about wildlife management, and we all care about the planet. Nobody should suppress any relevant testimony; hopefully the jurists will have enough brains to be realistic.

> For whatever reason, so far, the attorneys for the fossil fuel industry have not done so. Fossil fool pseudo-science isn't used by industry legal representatives in court cases. Most of it relies on lies of omission, and therefore would be perjury. Fossil fool spin's purpose is just to confuse the suckers.

Those are the words of a sick man, and the link below accompanying them describes situations where they do not apply.

> http://en.wikipedia.org/wiki/Daubert_Standard

It is not the choice of the judges to decide if an expert witness is giving valid testimony; all the expert witnesses have to be evaluated on experience. Is the top posting because of laziness, or a desperate attempt to get the reader to only read the response?
#5
What A. Fool wrote:

> On Sat, 6 Jun 2009 18:26:17 -0700 (PDT), Roger Coppock wrote:
>> According to the Wikipedia article given, it would be up to the anti-environmentalist side to file a Daubert motion to suppress climate model testimony.
> Who wrote that? There is NO "anti-environmentalist" side,

Don't lie to us, dumbass. We have to hear here every day how environmentalists are socialists, communists, want humans to die, should be shot, etc., etc., etc.