uk.sci.weather (UK Weather): For the discussion of daily weather events, chiefly affecting the UK and adjacent parts of Europe, both past and predicted. The discussion is open to all, but contributions on a practical scientific level are encouraged.
#11
"Will" wrote in message ...
OK, I'm conceding that it is highly likely that I will be totally wrong about my indications for a hot and dry June.

On 11/06/11 09:00, Gavino wrote:
Kudos for admitting failure. But the real lesson is that anyone who predicts the weather several weeks ahead is never 'right' or 'wrong', merely 'lucky' or 'unlucky'. If someone wins the football pools, no-one would think to praise him for being a great forecaster. Why should (long-range) weather forecasting be any different?

On Jun 11, 11:42 am, Adam wrote:
Because seasonal forecasting is conducted using physical/dynamical relationships within the atmosphere, not guesswork. It is possible for some seasonal forecasts to show skill when assessed over many forecasts.

On 11/06/11 11:49, Dawlish wrote:
Where, Adam? Gavino's assessment is harsh, and harsher than my view. I feel that SSTs may provide a means of forecasting seasonally in the future. They are already used by the MetO to predict the winter sign of the NAO with reasonable hindsight accuracy, but that does not predict our winter weather pattern well, unfortunately. As I've asked many others, if you feel there is success in using dynamics for long-term (seasonal) forecasting then the research to show that must exist. Simply show it to us, or you are just repeating perceived wisdom - which I feel is not correct, as I have not seen the research evidence. If you can show me it, I'll happily revise my thinking. If it's not there, you and others must really revise yours. :))

On Jun 11, 2:49 pm, Adam wrote:
http://www.tropicalstormrisk.com/doc...004.pdf
http://...
using the criterion that "skill" means doing better than climatology over the long term.

On 11/06/11 18:09, Dawlish wrote:
Sorry Adam, forgot to say. When I "forecast" winters in the UK I only use one variable: temperature (rainfall, snowfall, storminess etc. is impossible, IMO). I've been right about 75%+ of the time since 1990 and I have forecast "warmer than the long-term average" every year. No flannel; I just use two things: climatology, married to a UK warming trend. Show me another forecaster that has that kind of level of accuracy. :)) (notice the smiley) Winter 2011-12 - warmer than average. You heard it here first! Easy, eh? :))

On Jun 11, 10:29 pm, Adam wrote:
That is what is known as a climatology forecast, or a "zero intelligence" forecast. It is a benchmark against which seasonal forecasts are judged; a forecast is skilful if it outperforms the benchmark forecast over a significant number of forecasts. Just to check, are you trying to claim all seasonal forecasts have no skill, or just UK seasonal forecasts?

On 12/06/11 07:15, Dawlish wrote:
I did say "notice the smiley", and I think you missed the inverted commas, Adam. :)) I'm very aware of what that "forecast" shows. No-one, AFAIK, has got anywhere near that benchmark during that time. If they have, I'd love to know, as we would have found a forecasting holy grail. It just illustrates that any forecast you read for this coming winter (or any other season) does not deserve the credence some people like to give them. Judge forecasters on their accuracy, not on their forecasts.

On Jun 12, 5:45 pm, Adam Lea wrote:
Well, there is an easy way to check:

1. Take your "climatology married to a warming trend" as a benchmark forecast (this is equivalent to a trend line of temperature against year, is it not?).
2. Look up the Met Office (or anyone else's) winter forecasts going as far back as possible.
3. Compute the squared error between the forecast anomaly and the observed anomaly for each year.
4. Compute the squared error between the benchmark forecast anomaly and the observed anomaly for each year.
5. Compute the mean squared error for the benchmark and operational forecasts and, from these, calculate the mean squared skill score (MSSS):

MSSS(%) = 100 * (1 - MSE(f)/MSE(cl))

where MSE(f) is the mean squared error of the operational forecast and MSE(cl) is the mean squared error of the benchmark climatology forecast. A positive MSSS indicates the operational forecast is skilful relative to the climatology forecast over the years assessed. The mean squared skill score is recommended by the World Meteorological Organisation for verification of deterministic seasonal forecasts.

Yes, thank you. Try it, Adam. See if you come up with a figure greater than 75% accuracy. If you do, I'd be highly surprised! In fact, I know it will be less. The MetO's hindsight accuracy in predicting the sign of the NAO was only 65% when I could last find it on the site (I think it has gone), and the winter forecast accuracy was less than that. In addition, finding past MetO forecasts is impossible. It's not something they really want to advertise, as the outcome success has been so poor. :))
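As a concrete illustration of the five-step check described above, here is a minimal Python sketch. The anomaly values are invented purely for illustration (not real Met Office forecasts or CET observations), and the `msss` helper and trend-line benchmark are assumptions of the sketch, not anything posted in the thread.

```python
# Sketch of an MSSS check against a "climatology married to a warming trend"
# benchmark. All numbers are invented for illustration.
import numpy as np

def msss(forecast, observed, benchmark):
    """Mean squared skill score (%) of a forecast against a benchmark forecast."""
    mse_f = np.mean((forecast - observed) ** 2)
    mse_cl = np.mean((benchmark - observed) ** 2)
    return 100.0 * (1.0 - mse_f / mse_cl)

# Hypothetical winter-mean temperature anomalies (degC) for five winters.
observed = np.array([0.6, 1.1, -0.4, 0.9, 1.3])
operational = np.array([0.2, 0.8, 0.5, 1.0, 0.7])  # e.g. a seasonal forecast

# Benchmark: a linear trend of temperature against year, i.e. climatology
# plus a warming trend.
years = np.arange(len(observed))
slope, intercept = np.polyfit(years, observed, 1)
benchmark = slope * years + intercept

print(f"MSSS = {msss(operational, observed, benchmark):.1f}%")
```

Run over real winter forecasts and observed anomalies, a positive MSSS would mean the operational forecast beats the climatology-plus-trend benchmark over the years assessed.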
#12
![]() "Adam Lea" wrote in message ... On 12/06/11 07:15, Dawlish wrote: On Jun 11, 10:29 pm, Adam wrote: On 11/06/11 18:09, Dawlish wrote: On Jun 11, 2:49 pm, Adam wrote: On 11/06/11 11:49, Dawlish wrote: On Jun 11, 11:42 am, Adam wrote: On 11/06/11 09:00, Gavino wrote: "Will wrote in message ... OK I'm conceeding that it is highly likely that I will be totally wrong about my indications for a hot and dry June Kudos for admitting failure. But the real lesson is that anyone who predicts the weather several weeks ahead is never 'right' or 'wrong', merely 'lucky' or 'unlucky'. If someone wins the football pools, no-one would think to praise him for being a great forecaster. Why should (long-range) weather forecasting be any different? Because seasonal forecasting is conducted using physical/dynamical relationships within the atmosphere, not guesswork. It is possible for some seaonal forecasts to show skill when assessed over many forecasts. Where Adam. Gavono's assessment is harsh and harsher than my view. I feel that SSTs may provide a means of forecasting seasonally in the future. They are already used by the MetO to predict the winter sign of the NAO with reasonable hindsight accuracy, but that does not predict our winter weather pattern well, unfortunately. As I've asked many others, if you feel there is success in using dynamics for long-term (seasonal) forecastuing then the research to show that must exist. Simply show it to us, or you are just repeating percieved wisdom - which I feel is not correct, as I have not seen the research evidence. if you can show me it, I'll happily revise my thinking. If it's not there; you and others must really revise yours. *)) http://www.tropicalstormrisk.com/doc...004.pdfhttp://... using the criteria that "skill" means doing better than climatology over the long term.- Hide quoted text - - Show quoted text - Sorry Adam, forgot to say. When I "forecast" winters in the UK I only use one variable; temperature. (rainfall, snowfall, storminess etc is impossible, IMO). I've been right about 75%+ of the time since 1990 and I have forecasted "warmer than the long-term average" every year. No flannel; I just use two things. Climatology, married to a UK warming trend. Show me another forecaster that has that kind of level of accuracy. *)) (notice the smiley) Winter 2011-12 - warmer than average. You heard it here first! Easy eh? *)) That is what is known as a climatology forecast, or a "zero intelligence" forecast. It is a benchmark against which seasonal forecasts are judged, a forecast is skilful if it outperforms the benchmark forecast over a significant number of forecasts. Just to check, are you trying to claim all seasonal forecasts have no skill or just UK seasonal forecasts?- Hide quoted text - - Show quoted text - I did say "notice the smiley" and I think you missed the inverted commas Adam. *)) I'm very aware of what that "forecast" shows. No- one, AFAIK has got anywhere near that benchmark during that time. If they have, I'd love to know, as we would have found a forecasting holy grail. It just illustrates that any forecast you read for this coming winter (or any other season) does not deserve the credence some people like to give them. Judge forecasters on their accuracy; not on their forecasts. Well there is an easy way to check: 1. Take your "climatology married to a warming trend" as a benchmark forecast (this is equivalent to a trend line of temperature against year, is it not?). 2. 
Look up the Met Office (or anyone else's) winter forecasts going as far back as possible. 3. Compute the squared error between the forecast anomaly and observed anomaly for each year. 4. Compute the squared error between the benchmark forecast anomaly and the observed anomaly for each year. 5. Compute the mean squared error for benchmark and operational forecasts and from these, calculate the mean squared skill score (MSSS): MSSS(%) = 100*sqrt(1-(MSE(f)/MSE(cl))) where MSE(f) is the mean squared error of the operational forecast, and MSE(cl) is the mean squared error of the benchmark climatology forecast. A positive MSSS indicates the operation forecast is skilful over the climatology forecast over the years assessed. The mean squared skill score is recommended by the World Meteorological Organisation for verification of deterministic seasonal forecasts. I'm not one for blowing my own trumpet but I bet Dawlish in winter 2009/10 that every month (DJF) would have a below average CET ( how common is that nowadays?). I got it on the nose! Of course, I was "lucky" wasn't I even though I have never ever made such a forecast in my life before. Anomaly correlation is also a good measure of skill. Last winter I was wrong (but not totally), but I'll give it a go again this year as it is fun! Will -- |
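The anomaly correlation Will mentions can be sketched just as briefly: it is simply the Pearson correlation between forecast and observed anomalies over a set of winters. The numbers below are made up for illustration.

```python
# Toy anomaly correlation between forecast and observed winter anomalies.
import numpy as np

observed = np.array([0.6, 1.1, -0.4, 0.9, 1.3])
forecast = np.array([0.2, 0.8, 0.5, 1.0, 0.7])

acc = np.corrcoef(forecast, observed)[0, 1]
print(f"Anomaly correlation = {acc:.2f}")  # 1.0 = perfect, 0 = no skill
```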
#13
On 12/06/11 19:13, Dawlish wrote:
Yes, thank you. Try it, Adam. See if you come up with a figure greater than 75% accuracy. If you do, I'd be highly surprised! In fact, I know it will be less. The MetO's hindsight accuracy in predicting the sign of the NAO was only 65% when I could last find it on the site (I think it has gone), and the winter forecast accuracy was less than that. In addition, finding past MetO forecasts is impossible. It's not something they really want to advertise, as the outcome success has been so poor. :))

Actually, having looked at the Met Office site, they don't do deterministic winter forecasts, only probabilistic ones. In this case the ranked probability skill score (RPSS) is the appropriate score to use. I might look into it as an exercise if I can get hold of the Met Office's past forecasts; the only one I could find on there was the 2008/9 winter forecast.
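For the probabilistic (tercile: below/near/above average) forecasts referred to above, a rough sketch of the ranked probability skill score against a uniform climatology might look like this. The forecast probabilities and observed outcomes are invented, and the `rps` helper is an assumption of the sketch, not Met Office code.

```python
# Toy ranked probability skill score (RPSS) for tercile forecasts,
# scored against a 1/3-1/3-1/3 climatology. Illustrative numbers only.
import numpy as np

def rps(prob, obs_category, n_cat=3):
    """Ranked probability score for one forecast.

    prob: forecast probabilities per category (should sum to 1).
    obs_category: index of the observed category (0=below, 1=near, 2=above).
    """
    cum_f = np.cumsum(prob)
    obs = np.zeros(n_cat)
    obs[obs_category] = 1.0
    cum_o = np.cumsum(obs)
    return np.sum((cum_f - cum_o) ** 2)

# Three hypothetical winters: forecast tercile probabilities and outcomes.
forecasts = [np.array([0.2, 0.3, 0.5]),
             np.array([0.5, 0.3, 0.2]),
             np.array([0.3, 0.4, 0.3])]
outcomes = [2, 0, 1]
climatology = np.array([1/3, 1/3, 1/3])

rps_f = np.mean([rps(p, o) for p, o in zip(forecasts, outcomes)])
rps_cl = np.mean([rps(climatology, o) for o in outcomes])
rpss = 1.0 - rps_f / rps_cl
print(f"RPSS = {rpss:.2f}")  # positive means more skill than climatology
```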
#14
On Jun 12, 10:25 pm, Adam Lea wrote: