7 Comments
David Fox

RESOLUTION

Use the following response from JT:

We have expanded the table on software tools and rewritten the section to allow more comparisons between tools. Model averaging is only available for SSD Toolbox and (shiny)ssdtools. Ultimately the user can select which models to include; however, the purpose of model averaging is to downweight detrimental models. We are unaware of an example where the AICc criterion is insufficient.
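For reference, the downweighting JT describes comes from the standard AICc-based Akaike weights (the usual Burnham–Anderson formulation; the notation below is mine, not lifted from the manuscript):

$$
\mathrm{AICc}_i = -2\log\mathcal{L}_i + 2k_i + \frac{2k_i(k_i + 1)}{n - k_i - 1},
\qquad
\Delta_i = \mathrm{AICc}_i - \min_j \mathrm{AICc}_j,
\qquad
w_i = \frac{e^{-\Delta_i/2}}{\sum_j e^{-\Delta_j/2}},
$$

where $\mathcal{L}_i$ and $k_i$ are the maximised likelihood and parameter count of distribution $i$, and $n$ is the sample size. A badly fitting distribution accumulates a large $\Delta_i$ and so receives a weight near zero in the average.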

Joe Thorley

I’m happy to tackle a section on SSD software.

David Fox

Go for it!

Rebecca Fisher

I am a little confused about why they think there are two packages. Do they think ssdtools and the shiny app are two separate packages? If so, we can just clarify the wording in the MS, and we only need to provide more information for the ssdtools R package. The shiny app is really just an interface.

David Fox

Agreed. Plus I think there are other points of clarification.

  1. both the ssdtools shiny app and SSD Toolbox are flexible with respect to model inclusion/exclusion;
  2. the whole point of AICc is to downweight ‘detrimental’ models! (see the sketch after this list);
  3. with due respect, I wouldn’t be using SSD Master as a yardstick: it was a useful tool, but limited by Excel’s functionality (or lack thereof).
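To make point 2 concrete, here is a minimal R sketch (deliberately generic – it uses MASS::fitdistr rather than the ssdtools API, and the simulated data and candidate set are illustrative only) showing how AICc-based weights push a mismatched distribution towards zero:

library(MASS)

set.seed(1)
x <- rlnorm(30, meanlog = 2, sdlog = 1)  # skewed, toxicity-like concentrations

# Candidate fits: two plausible shapes plus a deliberately poor (normal) one
fits <- list(
  lnorm  = fitdistr(x, "lognormal"),
  gamma  = fitdistr(x, "gamma"),
  normal = fitdistr(x, "normal")
)

n <- length(x)
aicc <- sapply(fits, function(f) {
  k <- length(f$estimate)  # number of fitted parameters
  -2 * f$loglik + 2 * k + 2 * k * (k + 1) / (n - k - 1)
})

delta   <- aicc - min(aicc)
weights <- exp(-delta / 2) / sum(exp(-delta / 2))
round(weights, 3)  # the mismatched normal typically ends up with weight near 0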
Rebecca Fisher

It might be worth adding to Table 1 an indication of the distribution(s) each tool uses. Obviously for ssdtools this would be “various”, but where there are only a couple they probably should be listed (e.g. BurrliOz). I haven’t used SSD Master, but from the text in our paper it looks like you can fit a range of distributions; it does not, however, use model averaging.

I’m not sure what evidence the reviewer bases his statement on that “Reliance on the stated AIC criteria alone for down weighing these models may not be sufficient.” So far, with all the examples I have tried, AICc seems to be doing a pretty good job. As David says in point 2, the whole point is that it downweights these detrimental models. We could reference Schwarz and Tillmanns (2019) as evidence of the stability of the AICc-based model averaging method, or a pers. comm. from one of the Canadians who has used the method in their derivations – have any of you found the AICc to be insufficient in terms of identifying badly fitting models?

My simulation study (too complex to include in this paper) suggests AICc may start to fail to select the “correct” distribution at small sample sizes (a toy version is sketched below), but with real data we do not know the correct distribution in any case – so a model-averaged version is probably safer than arbitrary selection of a single wrong distribution.

As an aside, from what I can gather from their instructions, the contents of tables are not included in the word limit, which means expanding Table 1 with more detail would provide a means of expanding our ‘review’ of the available methods without necessarily blowing out the word limit.
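Rebecca’s small-sample caveat is easy to demonstrate in miniature. The toy check below is my own construction, far simpler than her actual simulation study; it just counts how often the minimum AICc lands on the true distribution:

library(MASS)

# Fit each candidate and return the name of the AICc-best one;
# fits that fail to converge are excluded by assigning them Inf.
pick_best <- function(n, cand = c("lognormal", "gamma", "weibull")) {
  x <- rlnorm(n, meanlog = 2, sdlog = 1)  # truth: lognormal
  aicc <- sapply(cand, function(d) {
    f <- tryCatch(suppressWarnings(fitdistr(x, d)), error = function(e) NULL)
    if (is.null(f)) return(Inf)
    k <- length(f$estimate)
    -2 * f$loglik + 2 * k + 2 * k * (k + 1) / (n - k - 1)
  })
  names(which.min(aicc))
}

set.seed(42)
mean(replicate(200, pick_best(8))  == "lognormal")  # small n: frequently wrong
mean(replicate(200, pick_best(64)) == "lognormal")  # larger n: mostly right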

Joe Thorley

I agree with adding the distributions to Table 1 (#12) and will do.