sci.geo.meteorology (Meteorology): For the discussion of meteorology and related topics.

#31
Old December 10th 09, 01:43 PM posted to alt.global-warming,alt.politics.libertarian,sci.geo.meteorology,sci.physics
First recorded activity by Weather-Banter: Jul 2009
Posts: 438
Can Global Warming Predictions be Tested with Observations of the Real Climate System?

On Thu, 10 Dec 2009 08:01:19 -0500, jmfbahciv jmfbahciv@aol wrote:

Bill Ward wrote:
On Wed, 09 Dec 2009 09:13:35 -0500, jmfbahciv wrote:

Bill Ward wrote:
On Tue, 08 Dec 2009 08:41:40 -0500, jmfbahciv wrote:

Bill Ward wrote:
On Mon, 07 Dec 2009 08:38:20 -0500, jmfbahciv wrote:

Bill Ward wrote:
On Sun, 06 Dec 2009 21:43:15 -0800, isw wrote:

In article ,
7 wrote:

Eric Gisin wrote:

Positive cloud feedback is the key to Climate Alarmism, but the
science behind it is questionable. Note how the alarmists cannot
respond to this important issue, other than with insane rants
and conspiracies.

http://www.drroyspencer.com/2009/12/can-global-warming-predictions-be-tested-with-observations-of-the-real-climate-system/
December 6, 2009, 08:19:36 | Roy W. Spencer, Ph.D.

In a little over a week I will be giving an invited paper at the
Fall meeting of the American Geophysical Union (AGU) in San
Francisco, in a special session devoted to feedbacks in the
climate system. If you don't already know, feedbacks are what
will determine whether anthropogenic global warming is strong or
weak, with cloud feedbacks being the most uncertain of all.

In the 12 minutes I have for my presentation, I hope to convince
as many scientists as possible of the futility of previous attempts
to estimate cloud feedbacks in the climate system. And unless we
can measure cloud feedbacks in nature, we cannot test the
feedbacks operating in computerized climate models.

WHAT ARE FEEDBACKS?
Systems with feedback have characteristic time constants,
oscillations and damping characteristics, all of which are self-evident
and measurable. Except if you are an AGW holowarming nut
and fruitcake. You'll just have to make up some more numbers and
bully more publications to get it past peer review.
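The claim above, that a system with feedback has a measurable time constant and equilibrium response, can be illustrated with a toy zero-dimensional energy-balance model. This is a minimal sketch, not Spencer's model or any real climate model; the forcing, feedback parameter, and heat capacity values are illustrative numbers only.

```python
# Toy energy-balance sketch: dT/dt = (F - lam*T)/C, where lam is the net
# feedback parameter (W/m^2/K) and C an effective heat capacity. A smaller
# lam (i.e. more positive feedback) gives both a larger equilibrium
# response F/lam and a longer time constant C/lam. Numbers are illustrative.

def simulate(lam, F=3.7, C=8.0, dt=0.01, years=200):
    """Integrate dT/dt = (F - lam*T)/C with forward Euler; return the T series."""
    T = 0.0
    series = []
    for _ in range(int(years / dt)):
        T += dt * (F - lam * T) / C
        series.append(T)
    return series

for lam in (3.0, 1.0):  # weaker vs. stronger net positive feedback
    T = simulate(lam)
    print(f"lam={lam}: equilibrium ~ {3.7/lam:.2f} K, "
          f"simulated T = {T[-1]:.2f} K, time constant ~ {8.0/lam:.1f} yr")
```

Running it shows the point of the argument: changing only the feedback parameter changes both how much warming you get and how long it takes, which is why the feedback value matters so much and why, in principle, it leaves a measurable signature.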

Climate science needs more transparency.

That's easy:

1. Put all your emails on public ftp servers.

2. Put all the raw climate data in public ftp servers so that it
can be peer reviewed.
I don't have any problem at all with *honest* peer review. What I
do have a BIG problem with is making the data available to people
who are certainly NOT "peers" (in the sense of having little or no
scientific training in any field, let alone a specialization in
anything relating to climatology), who furthermore have a real
anti-warming agenda, and who will, either willfully or ignorantly,
misinterpret the data to suit their purposes, and spread the
resulting disinformation far and wide.

How do you propose to prevent that?
Excellent question.
Yup.

First, I'd write a clear, coherent, complete description and
explanation of the exact mechanism by which CO2 is thought to
increase surface temperatures. I'd aim it at the level of a person
who's had high school physics, but has forgotten much of it. I'd
make the best, most honest case I could, showing and explaining the
evidence both supporting and against the hypothesis.

Then I'd publish the first draft and invite review by anyone who
feels qualified to comment. The second draft would honestly answer
the issues and misunderstandings raised in those comments, again
keeping the language and concepts accessible and convincing to any
interested high school physics graduate.

The process would iterate until a sufficiently understandable,
unambiguous case could be made for AGW to convince most people, or
the hypothesis is clearly falsified.

IOW, cut the condescending, supercilious crap and have an honest,
open debate. Focus on learning how the climate system actually
works rather than trying to advance a political agenda by
frightening gullible people with scare tactics.

And the scientist is no longer doing his/her science. To make data
available requires a maintenance staff before it's written to the
public disk.


Don't you think it might be a good idea to do some data QC before
it's written to disks distributed to anyone? I'd think that's part
of the scientist's job. Why should the public see anything
different from the same disks the research is based on? The more
eyes looking, the earlier discrepancies can be resolved. Science is
supposed to be an open process, not a quasi-religious ceremony.


What discrepancies? We're talking about science data, not a doc that
can be proof-read.
If that's the case, why not just post it? Why try to hide it?
What are you talking about now? I've been trying to discuss the
problems with the suggestion that any science data be put on a public
server with documentation describing it so a non-scientist would
understand the data. Frankly, I think this (documenting it) is
impossible but there are amazing writers in the science biz.


I'm talking about making the data available online to whoever wants to
review it, not keeping it from those who might disagree with the
conclusions the IPCC is promoting. There are no "wrong people" who
shouldn't have access to the data, and there's no need to be sure they
"understand" it in the "correct" way. That's not up to you, me, or
anyone else to decide. It's public property.

It seems a shame for Steve McIntyre to have to do the QC by reverse
engineering secret analytical processes after the fact.


Are you talking about raw data? I don't see how you can QC raw data.
Organize it into files suitable for archiving and searching, then check
for typos and transcription errors.
WTF are you talking about? There can't be typos in raw data, let alone
transcription errors.


I'm talking about unadjusted digital versions of the "raw data",


No, you are not. See below.

not the
original paper forms. I'm assuming there will be keyboarding errors,
wrong dates, etc, which should be checked against the paper originals to
avoid propagating unambiguous errors. Range checks and other automated
methods could be used to flag suspected errors for human intervention. I
am specifically excluding any "corrections" based on opinion or
assumptions such as UHI, etc.
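The range checks and automated flagging described above could be sketched as follows. This is a hypothetical illustration: the record layout (date, station id, temperature in degrees C) and the thresholds are invented for the example, not any real station format.

```python
# Hedged sketch of the QC step described above: flag raw temperature
# records whose values fall outside a plausible physical range, or whose
# dates fail to parse, for later human review against the paper originals.
# No "corrections" are applied; suspect rows are only set aside.
from datetime import datetime

def flag_suspect(records, lo=-90.0, hi=60.0):
    """Partition (date_str, station, temp) rows into (clean, suspect)."""
    clean, suspect = [], []
    for date_str, station, temp in records:
        try:
            datetime.strptime(date_str, "%Y-%m-%d")
            ok_date = True
        except ValueError:
            ok_date = False
        if ok_date and lo <= temp <= hi:
            clean.append((date_str, station, temp))
        else:
            suspect.append((date_str, station, temp))  # check vs. paper form
    return clean, suspect

rows = [("1998-07-02", "HYPO0001", 28.3),
        ("1998-07-03", "HYPO0001", 285.0),   # likely a missing decimal point
        ("1998-13-01", "HYPO0001", 12.1)]    # impossible month: keyboarding error
clean, suspect = flag_suspect(rows)
print(len(clean), len(suspect))  # -> 1 2
```

The design choice matches the text: automated checks only flag rows for human intervention; they never silently adjust values, which keeps the posted data "unadjusted" in the sense argued for here.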


All of this requires code that massages the real data. So you aren't
talking about raw data here either.


Raw data is numbers, not nice wordage in English ASCII.


And I'm talking about adding the labels, dates, locations and other
metadata required to make it usable. By your definition, "raw data"
would be useless.
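Attaching the labels, dates, locations and other metadata described above could be as simple as pairing each bare reading with its station information in a self-describing text format. A minimal sketch, with an invented station id and coordinates:

```python
# Sketch of turning bare instrument readings into usable records by
# attaching station id, coordinates, date and units as a CSV header plus
# columns. The station ("HYPO0001") and its coordinates are hypothetical.
import csv
import io

readings = [28.3, 27.9, 29.1]                      # the "raw" numbers alone
station = {"id": "HYPO0001", "lat": 51.5, "lon": -0.1}
dates = ["2009-12-01", "2009-12-02", "2009-12-03"]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["station_id", "lat", "lon", "date", "temp_c"])  # metadata header
for date, temp in zip(dates, readings):
    writer.writerow([station["id"], station["lat"], station["lon"], date, temp])

print(buf.getvalue().splitlines()[1])  # -> HYPO0001,51.5,-0.1,2009-12-01,28.3
```

Without the header and the joined columns, the numbers 28.3, 27.9, 29.1 are exactly the "useless raw data" the text describes; with them, anyone can tell what was measured, where, and when.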


Then the data has to be massaged by code which has to be written,
tested, debugged, and load tested. This takes manpower, money,
time, and maintenance. By your definition, the bits put on a
public server will not be data but a report of the data.


That should be one of the
deliverables in the data collection contract.

You don't know what you're talking about.


And you're assuming facts not in evidence.


Actually, I'm not assuming anything. I'm talking about
moving bits and presenting them to non-expert readers.
I know a lot about this kind of thing because I did that
kind of work for 25 years.


I'm definitely not talking about a contract.


Then who's paying for it? If it's not taxpayers, then I really don't
care how it's done. If it is from taxes, then there better be an
enforceable contract in place, or we'll be right back where we are now.


Contract law is different in each and every country. Which taxpayers
do you think paid for the gathering of that data? Who pays for
the data the maritime business provides?

/BAH


BS, the data was mostly taken by weather stations
with no market for it beyond the newspapers and
radio and TV stations.

Didn't Jones make it clear? The data won't
be released, period; it is not nice to argue with
superior persons.







#32
Old December 10th 09, 06:22 PM posted to alt.global-warming,alt.politics.libertarian,sci.geo.meteorology,sci.physics
First recorded activity by Weather-Banter: Feb 2009
Posts: 197
Can Global Warming Predictions be Tested with Observations of the Real Climate System?

On Thu, 10 Dec 2009 08:01:19 -0500, jmfbahciv wrote:

snip
I'm talking about unadjusted digital versions of the "raw data",


No, you are not. See below.


You know what I'm talking about, and I don't? That's quite a gift.

not the
original paper forms. I'm assuming there will be keyboarding errors,
wrong dates, etc, which should be checked against the paper originals
to avoid propagating unambiguous errors. Range checks and other
automated methods could be used to flag suspected errors for human
intervention. I am specifically excluding any "corrections" based on
opinion or assumptions such as UHI, etc.


All of this requires code that massages the real data. So you aren't
talking about raw data here either.


It doesn't "require" code, it requires a consistent, transparent
algorithm, whether done by machine or not.

Raw data is numbers not nice wordage in English ASCII.


And I'm talking about adding the labels, dates, locations and other
metadata required to make it usable. By your definition, "raw data"
would be useless.


Then the data has to be massaged by code which has to be written,
tested, debugged, and load tested. This takes manpower, money, time,
and maintenance. By your definition, the bits put on a public server
will not be data but a report of the data.


I think that's your definition. I said, "I'm talking about unadjusted
digital versions of the 'raw data'," and you took issue with it.

By your definition they'd be useless:

And now for the scores: 7 to 3; 2 to 1; and 21 to 7.

There's your "raw data", but it's not all that useful.

If you want to call the verification and formatting "massaging", fine,
but if it's not done, the data is unusable.

That should be one of the
deliverables in the data collection contract.

You don't know what you're talking about.


And you're assuming facts not in evidence.


Actually, I'm not assuming anything. I'm talking about moving bits and
presenting them to non-expert readers. I know a lot about this kind of
thing because I did that kind of work for 25 years.


It's you that's worried about "non-expert" readers, not me. I just want
it accessible in a usable form. You don't need to sugar coat it.

I'm definitely not talking about a contract.


Then who's paying for it? If it's not taxpayers, then I really don't
care how it's done. If it is from taxes, then there better be an
enforceable contract in place, or we'll be right back where we are now.


Contract law is different in each and every country.


So? There are still enforceable contracts. How would you do
international business without them?

Which taxpayers do you think paid for the gathering of that data? Who
pays for the data the maritime business provides?


Don't know, don't care. Are you saying the IPCC is not tax-funded?

Where did our $50B go, then? I think grants are generally in the form of
contracts.

#33
Old December 11th 09, 02:03 PM posted to alt.global-warming,alt.politics.libertarian,sci.geo.meteorology,sci.physics
First recorded activity by Weather-Banter: Feb 2009
Posts: 59
Can Global Warming Predictions be Tested with Observations of the Real Climate System?

Bill Ward wrote:
snip
I'm talking about unadjusted digital versions of the "raw data",

No, you are not. See below.


You know what I'm talking about, and I don't? That's quite a gift.


Yes. I know what you're not talking about. It's clear you have no
idea what processes are involved w.r.t. putting readable bits on
a computer system.


not the
original paper forms. I'm assuming there will be keyboarding errors,
wrong dates, etc, which should be checked against the paper originals
to avoid propagating unambiguous errors. Range checks and other
automated methods could be used to flag suspected errors for human
intervention. I am specifically excluding any "corrections" based on
opinion or assumptions such as UHI, etc.

All of this requires code that massages the real data. So you aren't
talking about raw data here either.


It doesn't "require" code, it requires a consistent, transparent
algorithm, whether done by machine or not.


Which requires code if you're putting it into bits and storing
the results on a system which can be accessed by the rest of the
world's computers.


Raw data is numbers not nice wordage in English ASCII.
And I'm talking about adding the labels, dates, locations and other
metadata required to make it usable. By your definition, "raw data"
would be useless.

Then the data has to be massaged by code which has to be written,
tested, debugged, and load tested. This takes manpower, money, time,
and maintenance. By your definition, the bits put on a public server
will not be data but a report of the data.


I think that's your definition. I said, "I'm talking about unadjusted
digital versions of the 'raw data'," and you took issue with it.


No, I didn't talk about that. You want it prettied up and reformatted so
anybody can read it and understand what it is. That takes code, and
that code massages the raw data.


By your definition they'd be useless:

And now for the scores: 7 to 3; 2 to 1; and 21 to 7.

There's your "raw data", but it's not all that useful.


That is not raw data. You've typed it in and its format
is ASCII.



If you want to call the verification and formatting "massaging", fine,
but if it's not done, the data is unusable.


Exactly. It's unusable to most people except those who run code
to use it as input (which is what scientists do).


That should be one of the
deliverables in the data collection contract.

You don't know what you're talking about.
And you're assuming facts not in evidence.

Actually, I'm not assuming anything. I'm talking about moving bits and
presenting them to non-expert readers. I know a lot about this kind of
thing because I did that kind of work for 25 years.


It's you that's worried about "non-expert" readers, not me. I just want
it accessible in a usable form. You don't need to sugar coat it.


Your kind of usable form requires the raw data to be massaged before
storing it on a public forum.


I'm definitely not talking about a contract.


Then who's paying for it? If it's not taxpayers, then I really don't
care how it's done. If it is from taxes, then there better be an
enforceable contract in place, or we'll be right back where we are now.


Contract law is different in each and every country.


So? There are still enforceable contracts. How would you do
international business without them?


You sign a contract for each country or entity in which you want
to do business.


Which taxpayers do you think paid for the gathering of that data? Who
pays for the data the maritime business provides?


Don't know, don't care. Are you saying the IPCC is not tax-funded?

Where did our $50B go, then? I think grants are generally in the form of
contracts.


You don't even know how things get done.

/BAH

#34
Old December 11th 09, 02:06 PM posted to alt.global-warming,alt.politics.libertarian,sci.geo.meteorology,sci.physics
First recorded activity by Weather-Banter: Feb 2009
Posts: 59
Can Global Warming Predictions be Tested with Observations of the Real Climate System?

I M @ good guy wrote:
On Thu, 10 Dec 2009 08:01:19 -0500, jmfbahciv jmfbahciv@aol wrote:


snip


BS, the data was mostly taken by weather stations
with no further market for it than the newspapers and
radio and TV stations.


Then it's not data "owned" by taxpayers.


Didn't Jones make it clear, the data won't
be released, period, it is not nice to argue with
superior persons.


I've been trying to talk about the problems with making
any kinds of data available for anybody to look at. This
is not a trivial task.

/BAH
#35
Old December 11th 09, 08:25 PM posted to alt.global-warming,alt.politics.libertarian,sci.geo.meteorology,sci.physics
First recorded activity by Weather-Banter: Jul 2009
Posts: 438
Can Global Warming Predictions be Tested with Observations of the Real Climate System?

On Fri, 11 Dec 2009 09:06:36 -0500, jmfbahciv jmfbahciv@aol wrote:

I M @ good guy wrote:
On Thu, 10 Dec 2009 08:01:19 -0500, jmfbahciv jmfbahciv@aol wrote:


snip


BS, the data was mostly taken by weather stations
with no further market for it than the newspapers and
radio and TV stations.


Then it's not data "owned" by taxpayers.


Didn't Jones make it clear, the data won't
be released, period, it is not nice to argue with
superior persons.


I've been trying to talk about the problems with making
any kinds of data available for anybody to look at. This
is not a trivial task.

/BAH


I think at least some of those who requested
the data have a good idea of the format the data
is in.

GISS provides a graph plus makes available
text files of daily or monthly average temperatures;
I have not tried to find out how that data was
obtained or how it was manipulated.

But if the posted files really came from
the CRU computers, I get the impression they
didn't have a competent IT professional or
programmer in the organization.







#36
Old December 11th 09, 11:09 PM posted to alt.global-warming,alt.politics.libertarian,sci.geo.meteorology,sci.physics
First recorded activity by Weather-Banter: Feb 2009
Posts: 197
Can Global Warming Predictions be Tested with Observations of the Real Climate System?

On Fri, 11 Dec 2009 09:03:37 -0500, jmfbahciv wrote:

Bill Ward wrote:
On Thu, 10 Dec 2009 08:01:19 -0500, jmfbahciv wrote:

Bill Ward wrote:
On Wed, 09 Dec 2009 09:13:35 -0500, jmfbahciv wrote:

Bill Ward wrote:
On Tue, 08 Dec 2009 08:41:40 -0500, jmfbahciv wrote:

Bill Ward wrote:
On Mon, 07 Dec 2009 08:38:20 -0500, jmfbahciv wrote:

Bill Ward wrote:
On Sun, 06 Dec 2009 21:43:15 -0800, isw wrote:

In article ,
7 wrote:

Eric Gisin wrote:

Positive cloud feedback is the key to Climate Alarmism, but
the science behind it is questionable. Note how the
alarmists cannot respond to this important issue, other than
with insane rants and conspiracies.

http://www.drroyspencer.com/2009/12/can-global-warming-
predictions-
be-
tested-with-observations-of-the-real-climate-system/
December 6, 2009, 08:19:36 | Roy W. Spencer, Ph. D.

In a little over a week I will be giving an invited paper at
the Fall meeting of the American Geophysical Union (AGU) in
San Francisco, in a special session devoted to feedbacks in
the climate system. If you don't already know, feedbacks are
what will determine whether anthropogenic global warming is
strong or weak, with cloud feedbacks being the most
uncertain of all.

In the 12 minutes I have for my presentation, I hope to
convince as many scientists as possible the futility of
previous attempts to estimate cloud feedbacks in the climate
system. And unless we can measure cloud feedbacks in nature,
we can not test the feedbacks operating in computerized
climate models.

WHAT ARE FEEDBACKS?
Systems with feedback have characteristic time constants,
oscillations and dampening characteristics all of which are
self evident and measurable. Except if you are an AGW
holowarming nut and fruitcake. You'll just have to make up
some more numbers and bully more publications to get it past
peer review.

Climate science needs more transparency.

Thats easy:

1. Put all your emails on public ftp servers.

2. Put all the raw climate data in public ftp servers so that
it can be peer reviewed.
I don't have any problem at all with *honest* peer review.
What I do have a BIG problem with is making the data available
to people who are certainly NOT "peers" (in the sense of
having little or no scientific training in any field, let
alone a specialization in anything relating to climatology),
who furthermore have a real anti-warming agenda, and who will,
either willfully or ignorantly, misinterpret the data to suit
their purposes, and spread the resulting disinformation far
and wide.

How do you propose to prevent that?
Excellent question.
Yup.

First, I'd write a clear, coherent, complete description and
explanation of the exact mechanism by which CO2 is thought to
increase surface temperatures. I'd aim it at the level of a
person who's had high school physics, but has forgotten much of
it. I'd make the best, most honest case I could, showing and
explaining the evidence both supporting and against the
hypothesis.

Then I'd publish the first draft and invite review by anyone
who feels qualified to comment. The second draft would
honestly answer the issues and misunderstandings raised in
those comments, again keeping the language and concepts
accessible and convincing to any interested high school physics
graduate.

The process would iterate until a sufficiently understandable,
unambiguous case could be made for AGW to convince most people,
or the hypothesis is clearly falsified.

IOW, cut the condescending, supercilious crap and have an
honest, open debate. Focus on learning how the climate system
actually works rather than trying to advance a political agenda
by frightening gullible people with scare tactics.

And the scientist is no longer doing his/her science. To make
data available requires a maintenance staff before it's written
to the public disk.
Don't you think it might be a good idea to do some data QC before
it's written to disks distributed to anyone? I'd think that's
part of the scientist's job. Why should the public see anything
different from the same disks the research is based on? The more
eyes looking, the earlier discrepancies can be resolved. Science
is supposed to be an open process, not a quasi-religious
ceremony.
What discrepancies? We're talking about science data, not a doc
that can be proof-read.
If that's the case, why not just post it? Why try to hide it?
What are you talking about now? I've been trying to discuss the
problems with the suggestion that any science data be put on a
public server with documentation describing it so a non-scientist
would understand the data. Frankly, I think this (documenting it)
is impossible but there are amazing writers in the science biz.
I'm talking about making the data available online to whoever wants
to review it, not keeping it from those who might disagree with the
conclusions the IPCC is promoting. There are no "wrong people" who
shouldn't have access to the data, and there's no need to be sure
they "understand" it in the "correct" way. That's not up to you, me,
or anyone else to decide. It's public property.

It seems a shame for Steve McIntyre to have to do the QC by
reverse engineering secret analytical processes after the fact.


Are you talking about raw data? I don't see how you can QC raw
data.
Organize it into files suitable for archiving and searching, then
check for typos and transcription errors.
WTF are you talking about? There can't be typos in raw data, let
alone transcription errors.
I'm talking about unadjusted digital versions of the "raw data",
No, you are not. See below.


You know what I'm talking about, and I don't? That's quite a gift.


Yes. I know what you're not talking about.


Was that a typo, or you actually agreeing with me now?

It's clear you have no idea what processes are involved w.r.t. putting
readable bits on a computer system.


You might be surprised.

not the
original paper forms. I'm assuming there will be keyboarding errors,
wrong dates, etc, which should be checked against the paper
originals to avoid propagating unambiguous errors. Range checks and
other automated methods could be used to flag suspected errors for
human intervention. I am specifically excluding any "corrections"
based on opinion or assumptions such as UHI, etc.
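As a sketch of the range-check idea above: flag anything outside plausible bounds for a human to check against the paper originals, rather than silently "correcting" it. (The record layout and thresholds here are invented for illustration; they're not any archive's actual format.)

```python
# Flag suspect surface-temperature readings for human review.
# Records are (station_id, date, temp_c); bounds are illustrative.
def flag_suspect_readings(records, lo=-90.0, hi=60.0):
    """Return (record, reason) pairs that need a human to check
    against the original paper forms."""
    suspects = []
    for rec in records:
        station, date, temp = rec
        if temp is None:
            suspects.append((rec, "missing value"))
        elif not (lo <= temp <= hi):
            suspects.append((rec, f"out of range [{lo}, {hi}]"))
    return suspects

readings = [("ST01", "1998-07-04", 31.2),
            ("ST01", "1998-07-05", 312.0),  # likely keying error
            ("ST02", "1998-07-04", None)]
for rec, why in flag_suspect_readings(readings):
    print(rec, "->", why)
```

Nothing gets changed automatically; the output is just a worklist for whoever holds the originals.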
All of this requires code that massages the real data. So you aren't
talking about raw data here either.


It doesn't "require" code, it requires a consistent, transparent,
algorithm, whether done by machine or not.


Which requires code if you're putting it into bits and storing the
results on a system which can be accessed by the rest of the world's
computers.


It's easier that way. But the most important thing is to avoid
corrupting the data.
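One cheap way to guard against that kind of corruption, for what it's worth: publish a cryptographic digest alongside each file, so anyone can verify their copy is bit-for-bit identical to the original. A minimal sketch (the function name is mine):

```python
# Compute a SHA-256 digest of a file so downloaders can verify
# their copy matches the published original bit-for-bit.
import hashlib

def sha256_of(path, chunk=65536):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()
```

If the digest a reader computes doesn't match the one published with the archive, the copy was corrupted somewhere along the line.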

Raw data is numbers, not nice wordage in English ASCII.
And I'm talking about adding the labels, dates, locations and other
metadata required to make it usable. By your definition, "raw data"
would be useless.


Then the data has to be massaged by code which has to be written,
tested, debugged, and load tested. This takes manpower, money, time,
and maintenance. By your definition, the bits put on a public server
will not be data but a report of the data.


I think that's your definition. I said,"I'm talking about unadjusted
digital versions of the 'raw data'", and you took issue with it.


No, I didn't talk about that. You want it prettied up and reformatted so
anybody can read it and understand what it is. That takes code, and
that code massages the raw data.


"Adjusting", or "massaging" is different from "reformatting".


By your definition they'd be useless:

And now for the scores: 7 to 3; 2 to 1; and 21 to 7.

There's your "raw data", but it's not all that useful.


That is not raw data. You've typed it in and its format is ASCII.


You seem to be straining at gnats here. When does the mercury position
become "raw data" to you? When does it stop being "raw data"?

If you want to call the verification and formatting "massaging", fine,
but if it's not done, the data is unusable.


Exactly. It's unusable to most people except those who run code to use
it as input (which is what scientists do).


And many others who might be seriously interested.

That should be one of the
deliverables in the data collection contract.

You don't know what you're talking about.


And you're assuming facts not in evidence.


Actually, I'm not assuming anything. I'm talking about moving bits
and presenting them to non-expert readers. I know a lot about this
kind of thing because I did that kind of work for 25 years.


It's you that's worried about "non-expert" readers, not me. I just
want it accessible in a usable form. You don't need to sugar coat it.


Your kind of usable form requires the raw data to be massaged before
storing it on a public forum.


I guess that depends on your definition of "massaging". As long as it
doesn't corrupt the data, I don't care what you call it, but the simpler,
the better.

I'm definitely not talking about a contract.


Then who's paying for it? If it's not taxpayers, then I really don't
care how it's done. If it is from taxes, then there better be an
enforceable contract in place, or we'll be right back where we are
now.


Contract law is different in each and every country.


So? There are still enforceable contracts. How would you do
international business without them?


You sign a contract for each country or entity in which you want to do
business.


Exactly. Why were you trying to make an issue of such an obvious point?

Which taxpayers do you think paid for the gathering of that data? Who
pays for the data the maritime business provides?


Don't know, don't care. Are you saying the IPCC is not tax-funded?

Where did our $50B go, then? I think grants are generally in the form
of contracts.


You don't even know how things get done.


Again, you might be surprised.
  #37
Old December 12th 09, 02:45 PM posted to alt.global-warming,alt.politics.libertarian,sci.geo.meteorology,sci.physics

I M @ good guy wrote:
On Fri, 11 Dec 2009 09:06:36 -0500, jmfbahciv jmfbahciv@aol wrote:

I M @ good guy wrote:
On Thu, 10 Dec 2009 08:01:19 -0500, jmfbahciv jmfbahciv@aol wrote:

snip

BS, the data was mostly taken by weather stations
with no further market for it than the newspapers and
radio and TV stations.

Then it's not data "owned" by taxpayers.

Didn't Jones make it clear, the data won't
be released, period, it is not nice to argue with
superior persons.

I've been trying to talk about the problems with making
any kind of data available for anybody to look at. This
is not a trivial task.

/BAH


I think at least some of those who requested
the data have a good idea of the format the data
is in.


Data can be in any format the gatherer wants it to be.
We're not talking about other scientists reading that data.
We've been talking about non-scientists getting access
to data with descriptions so that a one-year-old can understand
what the data means.

GISS provides a graph plus makes available
text files of daily or monthly average temperatures,
I have not tried to find out how that data was
obtained or how it was manipulated.


Maybe you should try; then you wouldn't state some of the things
you have written.


But if the posted files are, as claimed, from
the CRU computers, I get the impression they
didn't have a competent IT professional or
programmer in the organization.


You still don't know what you're talking about. Data is not
code, and code, usually, is not data.

/BAH
  #38
Old December 12th 09, 02:53 PM posted to alt.global-warming,alt.politics.libertarian,sci.geo.meteorology,sci.physics

Bill Ward wrote:
On Fri, 11 Dec 2009 09:03:37 -0500, jmfbahciv wrote:

snip

It's clear you have no idea what processes are involved w.r.t. putting
readable bits on a computer system.


You might be surprised.


Not really.


not the
original paper forms. I'm assuming there will be keyboarding errors,
wrong dates, etc, which should be checked against the paper
originals to avoid propagating unambiguous errors. Range checks and
other automated methods could be used to flag suspected errors for
human intervention. I am specifically excluding any "corrections"
based on opinion or assumptions such as UHI, etc.
All of this requires code that massages the real data. So you aren't
talking about raw data here either.
It doesn't "require" code, it requires a consistent, transparent,
algorithm, whether done by machine or not.

Which requires code if you're putting it into bits and storing the
results on a system which can be accessed by the rest of the world's
computers.


It's easier that way. But the most important thing is to avoid
corrupting the data.


There are lots of ways it can be corrupted. A lot of them don't even
require a human being.


Raw data is numbers not nice wordage in English ASCII.
And I'm talking about adding the labels, dates, locations and other
metadata required to make it usable. By your definition, "raw data"
would be useless.


Then the data has to be massaged by code which has to be written,
tested, debugged, and load tested. This takes manpower, money, time,
and maintenance. By your definition, the bits put on a public server
will not be data but a report of the data.
I think that's your definition. I said,"I'm talking about unadjusted
digital versions of the 'raw data'", and you took issue with it.

No, I didn't talk about that. You want it prettied up and reformatted so
anybody can read it and understand what it is. That takes code, and
that code massages the raw data.


"Adjusting", or "massaging" is different from "reformatting".


All reformatting is massaging. If you are not doing a bit-for-bit
copy, you are massaging the file.


By your definition they'd be useless:

And now for the scores: 7 to 3; 2 to 1; and 21 to 7.

There's your "raw data", but it's not all that useful.

That is not raw data. You've typed it in and its format is ASCII.


You seem to be straining at gnats here. When does the mercury position
become "raw data" to you? When does it stop being "raw data"?


Raw data is the original collection of facts. Prettying numbers
up to be displayed on a TTY screen or hardcopy paper requires
massaging if those bits are stored on computer gear.


If you want to call the verification and formatting "massaging", fine,
but if it's not done, the data is unusable.

Exactly. It's unusable to most people except those who run code to use
it as input (which is what scientists do).


And many others who might be seriously interested.


This thread has been talking about non-scientists having access to
any data which was collected; further constraints were declared
that the data had to be prettied up and completely described so
that anybody could access the data and know what it meant. One
of you made a further requirement that the scientist do all that
work. Ptui.


That should be one of the
deliverables in the data collection contract.

You don't know what you're talking about.


And you're assuming facts not in evidence.


Actually, I'm not assuming anything. I'm talking about moving bits
and presenting them to non-expert readers. I know a lot about this
kind of thing because I did that kind of work for 25 years.
It's you that's worried about "non-expert" readers, not me. I just
want it accessible in a usable form. You don't need to sugar coat it.

Your kind of usable form requires the raw data to be massaged before
storing it on a public forum.


I guess that depends on your definition of "massaging". As long as it
doesn't corrupt the data, I don't care what you call it, but the simpler,
the better.


You can't tell if the data's been corrupted if it's been reformatted.
You have to have a QA specialist checking.


I'm definitely not talking about a contract.

Then who's paying for it? If it's not taxpayers, then I really don't
care how it's done. If it is from taxes, then there better be an
enforceable contract in place, or we'll be right back where we are
now.

Contract law is different in each and every country.
So? There are still enforceable contracts. How would you do
international business without them?

You sign a contract for each country or entity in which you want to do
business.


Exactly. Why were you trying to make an issue of such an obvious point?


You're the one who started to talk about contracts.


Which taxpayers do you think paid for the gathering of that data? Who
pays for the data the maritime business provides?
Don't know, don't care. Are you saying the IPCC is not tax-funded?

Where did our $50B go, then? I think grants are generally in the form
of contracts.


You don't even know how things get done.


Again, you might be surprised.



Not at all. You have no idea how much work is involved.

/BAH
  #39
Old December 12th 09, 07:37 PM posted to alt.global-warming,alt.politics.libertarian,sci.geo.meteorology,sci.physics

On Sat, 12 Dec 2009 09:53:43 -0500, jmfbahciv wrote:

snip

"Adjusting", or "massaging" is different from "reformatting".


All reformatting is massaging. If you are not doing a bit-for-bit
copy, you are massaging the file.


You use strange definitions, but OK.


By your definition they'd be useless:

And now for the scores: 7 to 3; 2 to 1; and 21 to 7.

There's your "raw data", but it's not all that useful.
That is not raw data. You've typed it in and its format is ASCII.


You seem to be straining at gnats here. When does the mercury position
become "raw data" to you? When does it stop being "raw data"?


Raw data is the original collection of facts. Prettying numbers up to
be displayed on a TTY screen or hardcopy paper requires massaging if
those bits are stored on computer gear.


I didn't see an answer to my question there. At what point does the
value representing the position of the Hg meniscus become "raw data"?

"[O]riginal collection of facts" is a bit ambiguous. Is it when the
observer reads it, when he initially writes it down on a form, when he
keys it into a computer memory, when he saves it to permanent media, when
a hardcopy is printed...? If you're going to make up definitions, you
at least need to be specific and consistent.

I define "raw data" as any copy of the original reading that carries
exactly the same information as the original reading, no matter what
format it's in. If any information has changed, it's no longer raw
data. If the information is the same, but the data has been reformatted,
labeled, columnized, "prettied up", sorted, or any other information
preserving transformation, it's still raw data, since the information is
unchanged.

Do you see any problem with that?
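That "information-preserving" test can even be made mechanical: a reformatting step passes if the original record can be regenerated exactly from its output. A toy sketch (the fixed-width layout is invented for illustration):

```python
# Round-trip check that reformatting loses no information:
# parse a fixed-width record into labeled fields, regenerate the
# original line, and compare byte-for-byte. Layout is invented.
def reformat(line):
    # columns: station (4), date (8), temp in tenths of a degree C (5)
    return {"station": line[0:4], "date": line[4:12], "temp": line[12:17]}

def unformat(rec):
    return rec["station"] + rec["date"] + rec["temp"]

original = "ST0119980704 0312"
rec = reformat(original)
assert unformat(rec) == original  # round-trips, so still "raw data"
```

By this criterion the labeled, columnized version is still raw data; an "adjustment" would fail the round-trip.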

If you want to call the verification and formatting "massaging",
fine, but if it's not done, the data is unusable.


Exactly. It's unusable to most people except those who run code to
use it as input (which is what scientists do).


And many others who might be seriously interested.


This thread has been talking about non-scientists having access to any
data which was collected; further constraints were declared that the
data had to be prettied up and completely described so that anybody
could access the data and know what it meant.


That would be what you were talking about, not me. All I insisted was
that the data be usable, which I think you are calling "prettied up".

That should be a requirement for any data used to support a paper. If
the data is not in usable form, how could the research be done? It looks
like that may be one of the current problems with "climate science". The
data they were using was/is not in usable form, but they didn't let that
stop them.

One of you made a further
requirement that the scientist do all that work. Ptui.


No, that's for grad students. ;-) But somebody has to do it, or the
research is based on invalid assumptions.

That should be one of the
deliverables in the data collection contract.

You don't know what you're talking about.


And you're assuming facts not in evidence.


Actually, I'm not assuming anything. I'm talking about moving bits
and presenting them to non-expert readers. I know a lot about this
kind of thing because I did that kind of work for 25 years.


It's you that's worried about "non-expert" readers, not me. I just
want it accessible in a usable form. You don't need to sugar coat
it.


Your kind of usable form requires the raw data to be massaged before
storing it on a public forum.


I guess that depends on your definition of "massaging". As long as it
doesn't corrupt the data, I don't care what you call it, but the
simpler, the better.


You can't tell if the data's been corrupted if it's been reformatted.
You have to have a QA specialist checking.


Shouldn't that be a routine procedure? Or do you expect to use invalid
data to get valid results?

I'm definitely not talking about a contract.

Then who's paying for it? If it's not taxpayers, then I really
don't care how it's done. If it is from taxes, then there better
be an enforceable contract in place, or we'll be right back where
we are now.

Contract law is different in each and every country.


So? There are still enforceable contracts. How would you do
international business without them?


You sign a contract for each country or entity in which you want to do
business.


Exactly. Why were you trying to make an issue of such an obvious
point?


You're the one who started to talk about contracts.

Which taxpayers do you think paid for the gathering of that data?
Who pays for the data the maritime business provides?


Don't know, don't care. Are you saying the IPCC is not tax-funded?

Where did our $50B go, then? I think grants are generally in the
form of contracts.


You don't even know how things get done.


Again, you might be surprised.


Not at all. You have no idea how much work is involved.


We paid for a lot of work that now appears useless. I'd rather pay for
careful work done in an open, transparent manner. It's cheaper than
having to redo it.

  #40   Report Post  
Old December 12th 09, 07:44 PM posted to alt.global-warming,alt.politics.libertarian,sci.geo.meteorology,sci.physics
external usenet poster
 
First recorded activity by Weather-Banter: Dec 2009
Posts: 18
Default Can Global Warming Predictions be Tested with Observations of the Real Climate System?

On 2009-12-12, Bill Ward wrote:
On Sat, 12 Dec 2009 09:53:43 -0500, jmfbahciv wrote:

Bill Ward wrote:
On Fri, 11 Dec 2009 09:03:37 -0500, jmfbahciv wrote:

Bill Ward wrote:
On Thu, 10 Dec 2009 08:01:19 -0500, jmfbahciv wrote:

Bill Ward wrote:
On Wed, 09 Dec 2009 09:13:35 -0500, jmfbahciv wrote:

On Tue, 08 Dec 2009 08:41:40 -0500, jmfbahciv wrote:

not the
original paper forms. I'm assuming there will be keyboarding
errors, wrong dates, etc, which should be checked against the
paper originals to avoid propagating unambiguous errors. Range
checks and other automated methods could be used to flag suspected
errors for human intervention. I am specifically excluding any
"corrections" based on opinion or assumptions such as UHI, etc.
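The range checks described here could be sketched roughly as follows; the bounds and field layout are made up for illustration, not from any real QA procedure:

```python
# Flag digitized temperature readings that fall outside a plausible range,
# so a human can compare the suspect entries against the paper originals.
# The [-90, 60] C bounds below are illustrative only.

def flag_suspect(readings, low=-90.0, high=60.0):
    """Return (index, value) pairs for readings outside [low, high]."""
    return [(i, v) for i, v in enumerate(readings) if not (low <= v <= high)]

digitized = [12.3, 14.1, 141.0, 13.8, -99.9]  # 141.0 and -99.9 look like keying errors
print(flag_suspect(digitized))
```

The point is that the check only flags entries for human review against the originals; it never alters the stored values.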


All of this requires code that massages the real data. So you
aren't talking about raw data here either.


It doesn't "require" code, it requires a consistent, transparent,
algorithm, whether done by machine or not.


Which requires code if you're putting it into bits and storing the
results on a system which can be accessed by the rest of the world's
computers.

It's easier that way. But the most important thing is to avoid
corrupting the data.


There are lots of ways it can be corrupted. A lot of them don't even
require a human being.

Raw data is numbers, not nice wordage in English ASCII.
And I'm talking about adding the labels, dates, locations and other
metadata required to make it usable. By your definition, "raw
data" would be useless.

Then the data has to be massaged by code which has to be written,
tested, debugged, and load tested. This takes manpower, money,
time, and maintenance. By your definition, the bits put on a public
server will not be data but a report of the data.


I think that's your definition. I said,"I'm talking about unadjusted
digital versions of the 'raw data'", and you took issue with it.


No, I didn't talk about that. You want it prettied up and reformatted so
anybody can read it and understand what it is. That takes code and
massages the raw data.

"Adjusting", or "massaging" is different from "reformatting".


All reformatting is massaging. If you are not doing a bit-for-bit
copy, you are massaging the file.
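Whether a copy really is bit-for-bit can itself be checked by machine; a minimal sketch using Python's standard library (file names invented):

```python
import filecmp
import os
import tempfile

# A copy is "bit-for-bit" only if every byte matches the original.
# filecmp.cmp with shallow=False compares file contents byte by byte.

def is_bit_for_bit(original, copy):
    return filecmp.cmp(original, copy, shallow=False)

# Demonstration with two temporary files:
with tempfile.TemporaryDirectory() as d:
    a, b = os.path.join(d, "raw.dat"), os.path.join(d, "copy.dat")
    data = b"7 3\n2 1\n21 7\n"
    open(a, "wb").write(data)
    open(b, "wb").write(data)
    print(is_bit_for_bit(a, b))   # identical bytes: True
    open(b, "ab").write(b"\n")    # any reformatting changes the bytes
    print(is_bit_for_bit(a, b))   # no longer bit-for-bit: False
```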


You use strange definitions, but OK.


By your definition they'd be useless:

And now for the scores: 7 to 3; 2 to 1; and 21 to 7.

There's your "raw data", but it's not all that useful.
That is not raw data. You've typed it in and its format is ASCII.

You seem to be straining at gnats here. When does the mercury position
become "raw data" to you? When does it stop being "raw data"?


Raw data is the original collection of facts. Prettying numbers up to
be displayed on a TTY screen or hardcopy paper requires massaging if
those bits are stored on computer gear.


I didn't see an answer to my question there. At what point does the
value representing the position of the Hg meniscus become "raw data"?

"[O]riginal collection of facts" is a bit ambiguous. Is it when the
observer reads it, when he initially writes it down on a form, when he
keys it into a computer memory, when he saves it to permanent media, when
a hardcopy is printed...? If you're going to make up definitions, you
at least need to be specific and consistent.

I define "raw data" as any copy of the original reading that carries
exactly the same information as the original reading, no matter what
format it's in. If any information has changed, it's no longer raw
data. If the information is the same, but the data has been reformatted,
labeled, columnized, "prettied up", sorted, or any other information
preserving transformation, it's still raw data, since the information is
unchanged.

Do you see any problem with that?
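One way to make that definition concrete is a round-trip test: reformat the readings, parse them back, and require exact equality with the originals. A sketch, with an invented column layout:

```python
# If a "prettied up" form can be parsed back into exactly the original
# values, the transformation preserved the information, so by the
# definition above the result is still raw data. Layout is hypothetical.

readings = [("1909-07-01", 21.4), ("1909-07-02", 22.1)]

def pretty(rows):
    """Format (date, temp) rows as labeled, columnized text."""
    return "\n".join(f"{date}  {temp:6.1f}" for date, temp in rows)

def unpretty(text):
    """Recover the original (date, temp) pairs from the formatted text."""
    out = []
    for line in text.splitlines():
        date, temp = line.split()
        out.append((date, float(temp)))
    return out

assert unpretty(pretty(readings)) == readings  # round-trip: nothing lost
```

A transformation that fails this round trip (rounding, unit conversion, dropped fields) has changed the information and would count as massaging.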


Yes, because of one thing: the verification that it is unchanged. Which
is why any science class trains you to enter the data directly into whatever
will be the retention mechanism. In olden days, that was a log book. Today,
that is typically some sort of digital form.

If there is no transcription, even digital, then I have no problem with
it. But if it is copied into another form, there is some potential for
error. Which is why you don't destroy the raw data.
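Recording a checksum at the moment of capture gives any later transcription something to be verified against, without going back to the log book each time. A sketch using Python's hashlib (the record format is invented):

```python
import hashlib

# Hash the observation at entry time; any later copy can be re-hashed
# and compared against the stored digest, so transcription errors are
# detectable mechanically.

def checksum(record: str) -> str:
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

original = "1909-07-01 STN042 21.4"
stamp = checksum(original)            # stored alongside the raw data

transcribed = "1909-07-01 STN042 21.4"
assert checksum(transcribed) == stamp  # faithful copy verifies

mistyped = "1909-07-01 STN042 2l.4"   # digit '1' keyed as letter 'l'
assert checksum(mistyped) != stamp     # corruption is detected
```

The digest detects corruption but cannot repair it, which is the argument for retaining the originals as well.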

--
Clothes make the man. Naked people have little or no influence on
society. -- Mark Twain

