
Feedback for The Galata Tower!

DeletedUser

Why wasn't your sample size on the order of that amount?

You have to know what the sample size would be to reliably support your position.

You didn't get anywhere near it.



No, JBG used to volunteer for INNO. He quit.

It's not everyone, it's just 5 of 6 people who have posted. Keep your eye on the sixth, I'll get back to them.

Folk don't need to be paid to disagree with posters of your ilk, that disagreement naturally happens on forums. For a while.

If you continue in your current vein you will notice that fewer folk will respond over time.

Folks who are here for serious reasons eventually tire of your kind and start ignoring you, because you will have demonstrated you have nothing of value to contribute.

Your opinions and posts will be dismissed out of hand.

Case in point, you forgot or ignored (remember your remark about everyone up in arms?) that another poster agreed with one of your quips.

That's your future: Even posters of your own ilk will ignore and forget your posts. Eventually you'll get tired of being ignored and wander off.

----------

I admit to the personal fault that I take great enjoyment in telling folk like you exactly what is happening and will happen, not being believed, and then watching it happen.
That's a relief, I was wondering when the hate mail would stop!
 

The Lady Redneck

Well-Known Member
I do not see what all the fuss is about. I do not go in for all the mathematical and statistical stuff some seem to enjoy. I simply play the game. If INNO introduces something I like, I try to adjust my game to take advantage of it. And if it is something I do not like, I enjoy the challenge of finding ways to work round it (without the use of bots, as I see them as an admission of failure).

As for the Galata Tower: it makes no difference at all to my game. This is Forge of Empires, not some build-a-pretty-city game. War and fighting are part of the game, and any army will attack and plunder the land it is passing through. (Ever heard the cliche "All is fair in love and war"?) I do attack, I do plunder, I do snipe. If I have friends or guild mates in the same hood as me, I also share the love and pass info on to them about profitable plunder opportunities and snipes we can share.

I was curious as to how Galata would affect my game. OK, it blocks a few plunders, but very few in comparison with those that succeed. If it blocks me on one train, I know two or three others will succeed, which more than compensates me for that. Collecting your stuff in a timely manner is still the best protection against being plundered, just as it was before Galata. I do enjoy seeing in event history when someone's attempt to plunder me has been blocked, and I am sure others feel the same when they see I have been blocked. And as an aside, the goods I get from that little 3x3 GB are welcome. So I fail to see how it could possibly ruin the game for anyone.
 

qaccy

Well-Known Member
All that arguing above, and I'm still over here wondering how many years it'll be before Inno fixes this thing so the cost and rewards actually match up. Or, as I said I'd also be ok with, explaining that it's intended for an EMA GB with EMA contribution rewards to have the FP costs of an HMA GB.
 

Kranyar the Mysterious

Well-Known Member
All that arguing above, and I'm still over here wondering how many years it'll be before Inno fixes this thing so the cost and rewards actually match up. Or, as I said I'd also be ok with, explaining that it's intended for an EMA GB with EMA contribution rewards to have the FP costs of an HMA GB.
Over on the Beta forum, @lordwasa posted: "It's intended behavior and won't be changed."
 
Ummm...no, your sample size is not meaningful. Even if you saved all your event currency for the last day, that is a drop in the proverbial bucket compared to the thousands and thousands of samples from all the players participating to some degree in the event.

And there are hundreds of thousands of players, so the "sample size" from one player is basically meaningless. Unless you actually don't understand probability at all, which is more likely.

So start a city on every one of the 27 U.S. worlds and get a bigger sample size. Although that would still be 27 out of thousands, so...

Would you be so kind as to share what an acceptable sample size would be, what confidence it would provide, and what method was used to determine the suggestion?
 

Johnny B. Goode

Well-Known Member
Would you be so kind as to share what an acceptable sample size would be, what confidence it would provide, and what method was used to determine the suggestion?
More than what one player could accumulate even if they had a city on every world and played them all to the max.
Since it's unlikely any player (or players) will ever go to the work of obtaining an acceptable sample size, the confidence provided is a moot point. If anyone ever does, ask me again.
If you're talking about me telling them to start 27 cities, that was sarcasm, not a suggestion. I like to be sarcastic when others are obtuse.
 

Algona

Well-Known Member
Would you be so kind as to share what an acceptable sample size would be, what confidence it would provide, and what method was used to determine the suggestion?

Delete your account in a snit because you were treated the way you deserved then create a new account to ask that question?

What bothered you so much that 6 days later you had to come back?

Embarrassment for looking like a clown the first time around? A pathological need to get in the last word? Some distorted notion that you are righteously smiting the oppressors?

Congratulations though on reaching a new level of pitiful.

You are the first person on this forum to have been so upset / angry / spiteful they deleted their forum account and then came back days later to resume an argument.

Question: What is the sample size needed to determine marduino is a new nadir in pathetic posters?

1 post.
 
Delete your account in a snit because you were treated the way you deserved then create a new account to ask that question?

Swing and a miss, big fella. You might want to send him or her an apology.

You're free to have your own standard; my parents raised me to treat others the way I wanted to be treated. I don't recall anything about deserving.

While you're here, maybe you could answer the same questions about your 28,000-sample-size recommendation.

These seem like straightforward questions, so I’m not sure why your post was focused on besmirching a specific player and not on helping the community better understand proper methods.
 

RazorbackPirate

Well-Known Member
I can't speak to @Algona's number, but I can tell you my experience.

When I first started doing RQs, I wanted to know if the 1/14 chances being communicated were accurate. As I completed RQs in my two cities, I kept track of what each Random Reward paid out, by day, in a spreadsheet that then calculated the % payout of each for the week. After 4 weeks, I totaled everything up and had enough samples, a bit over 1,000, to be close enough to the expected payouts in each category to settle it for myself.

Below is what I had after 3 weeks; I had posted it to the forum in a thread that was discussing it at the time. I went one more week, and actual and expected were so close that I was satisfied and ended it.

[Attachment: RQ Totals.png]

The challenge you have with measuring Galata's accuracy vs. published stats is that you not only have to track each potential plunder, but you'd also have to categorize the results by Galata level. Even if 1,000 samples were enough to confirm the stats, you'd have to get 1,000 samples per level.
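
For anyone who wants to see the math behind "close enough," here is a rough sketch in Python, assuming a simple normal approximation (my own illustrative numbers, not anything official), of the wiggle room roughly 1,000 samples leaves around a 1/14 rate:

```python
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% CI around an observed rate p after n samples."""
    return z * sqrt(p * (1 - p) / n)

p = 1 / 14  # advertised chance of a given RQ reward (~7.1%)
for n in (100, 1_000, 10_000):
    print(f"n={n:>6}: {p:.1%} +/- {margin_of_error(p, n):.1%}")

# With ~1,000 samples the observed rate should sit within roughly +/-1.6
# percentage points of the true rate, which is why the tracked totals
# landed close to the expected payouts. Each Galata level would need a
# comparable sample of its own, as noted above.
```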
 
More than what one player could accumulate even if they had a city on every world and played them all to the max.
Since it's unlikely any player (or players) will ever go to the work of obtaining an acceptable sample size, the confidence provided is a moot point. If anyone ever does, ask me again.

I don't think you will find many data scientists who agree with you on this. Sample size is one of the most difficult and debated aspects of running experiments. Generally speaking, it's not a matter of how small the sample size is, but what you can infer from it. Many experiments and surveys simply do not have access to unlimited data, so the decision becomes how much precision is provided by what is there. There are several situations outlined in this thread already that warrant a more thorough explanation of why they are wrong, misleading, or accurate.
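
To make that concrete, here is a minimal sketch, assuming the usual normal approximation, of how a target precision translates into a required number of samples (the 10% rate and the margins are only illustrative):

```python
from math import ceil

def sample_size_for_proportion(p_expected: float, margin: float, z: float = 1.96) -> int:
    """Samples needed so a 95% CI (z=1.96) around an observed rate near
    p_expected is no wider than +/- margin (normal approximation)."""
    return ceil(z ** 2 * p_expected * (1 - p_expected) / margin ** 2)

# To pin down a ~10% advertised rate to within +/-2 percentage points:
print(sample_size_for_proportion(0.10, 0.02))   # ~865 samples
# To within +/-1 percentage point:
print(sample_size_for_proportion(0.10, 0.01))   # ~3,458 samples
```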

Marduino claims a sample size of 25, with an expected chance of 10% or more, and 0 successes. That sample size is actually relevant enough, but the conclusion drawn from it is wrong. What the data shows is that we can say with 95% confidence that the RNG's mean value was between 0% and 13.7%. In other words, there is no proof in this data that the advertised rate is wrong.

Marduino also claims a sample size of 57, with an expected chance of 10% or more, and 2 successes. That sample size is also relevant enough, but again the conclusion is wrong. The data shows that we can say with 95% confidence that the RNG's mean value was between 1% and 12%. Again, there is no proof in this data that the advertised rate is wrong.
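
For anyone who wants to reproduce those intervals, here is a minimal sketch of the exact (Clopper-Pearson) calculation; the endpoints land close to the figures above, with any small differences coming down to the choice of interval method. The same helper applies to the diamond counts discussed below.

```python
from scipy.stats import beta

def clopper_pearson(successes: int, trials: int, conf: float = 0.95):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    alpha = 1 - conf
    lo = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, trials - successes + 1)
    hi = 1.0 if successes == trials else beta.ppf(1 - alpha / 2, successes + 1, trials - successes)
    return lo, hi

for k, n in [(0, 25), (2, 57), (99, 10_000)]:
    lo, hi = clopper_pearson(k, n)
    print(f"{k}/{n}: {lo:.2%} to {hi:.2%}")

# Expected output, roughly:
#   0/25     -> 0% to ~13.7%
#   2/57     -> ~0.4% to ~13% (the ~1%-12% quoted above likely reflects a
#               Wilson-style interval rather than the exact one)
#   99/10000 -> ~0.8% to ~1.2%
```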

More generally, for the Spring Event there would have been two main strategies for pursuing daily prizes: lowest cost of lanterns per % or highest %. The former would have provided about 130 leaps, while the latter closer to 90. The expected chance per leap would be about 7.83% for the first and 11.42% for the second, both unsurprisingly providing an expected value of 10 daily prizes. Based on the two sample-size options from a single city, winning anywhere between 5 and 16 daily prizes would fall within the 95% confidence interval and consequently would not provide any proof the advertised rate is wrong. Under these conditions, fewer than 5 or more than 16 would be problematic and worth raising.
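
As a quick sanity check on that 5-to-16 band, assuming the leap counts and per-leap chances above:

```python
from scipy.stats import binom

# Hypothetical single-city strategies from the paragraph above
strategies = {
    "cheapest lanterns per %": (130, 0.0783),   # ~130 leaps at ~7.83%
    "highest % per attempt":   (90, 0.1142),    # ~90 leaps at ~11.42%
}

for name, (n, p) in strategies.items():
    expected = n * p
    # Probability that a single city's prize count lands in the 5-16 band
    p_in_band = binom.cdf(16, n, p) - binom.cdf(4, n, p)
    print(f"{name}: expect ~{expected:.1f} prizes, "
          f"P(5 <= prizes <= 16) ~ {p_in_band:.1%}")

# Both strategies put roughly 95% of the probability mass on 5-16 prizes,
# which is why a single city's tally inside that band says nothing about
# whether the advertised rates are off.
```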

In Algona's diamond experiment, he states a sample size of 10,000, with an expected chance of 1%, and 99 successes. Again, the sample size is relevant and his conclusion is supported. His data shows that we can say with 95% confidence that the WW diamond RNG's mean value is between .81% and 1.2%. It does not show that the rate is exactly 1%, but it shows that 1% is within the confidence interval and there is no proof the advertised rate is wrong. If his results had held at, say, 5,000 attempts, the 95% confidence interval would be .74% to 1.29%. Halving the sample size again would move the interval to .65% to 1.42%. In none of those cases can we definitively state that the exact % of the RNG is 1%, but there is nothing in the data to suggest it couldn't be.

Pirate's data is much like Algona's in that it doesn't definitively answer what % the RNG is running at, but it shows that the published rates are within the confidence interval based on the sample provided. If you bundle his data together into 1/14, 2/14, and 5/14 events, it shows:
a.) the 95% confidence interval for goods is 30.5% to 36.9%, encompassing the expected value of 35.7%;
b.) the 95% confidence interval for small coin/supply is 25.6% to 31.7%, encompassing the expected value of 28.5%;
c.) the 95% confidence interval for every other outcome is 34.6% to 41.1%, encompassing the expected value of 35.7%.

If either Algona or Pirate had been seeking a more exact value, which I don't believe was the goal, the only thing really open to debate in these two cases is how wide an interval is sufficient to show the precision of that value. That is where the size of the sample comes into the discussion, as more samples would certainly lead to a smaller width, as demonstrated in the examples above using 5,000 or 2,500 samples instead of 10,000.

My last example comes from my own experience in the 2020 Forge Bowl. My sample size was 17 throws, with 6 of them resulting in double rewards, something that was only supposed to occur 3% of the time. To be clear, this is not statistically impossible, but it works out that there is about a 1/155,000 chance of this happening, and the 99% confidence interval for the mean RNG for this sample was 13.7% to 65.1%. When we talk about something with such small chances of occurrence, we have to consider the chance that something was not working as intended, even if only temporarily.
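
A short check of that figure, assuming independent throws at the stated 3% rate:

```python
from scipy.stats import binom

n, p, k = 17, 0.03, 6
p_exactly_6 = binom.pmf(k, n, p)       # exactly 6 double rewards
p_6_or_more = binom.sf(k - 1, n, p)    # 6 or more double rewards
print(f"P(exactly 6) ~ 1 in {1 / p_exactly_6:,.0f}")
print(f"P(6 or more) ~ 1 in {1 / p_6_or_more:,.0f}")

# Both come out on the order of 1 in 150,000, in line with the
# ~1/155,000 figure quoted above.
```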

In every case above, regardless of sample size, there is something to be learned from the data. The answer isn’t necessarily that someone’s sample size isn’t large enough, rather it’s their interpretation of that sample that is off-base, or in the latter cases that the data supports their assessment.
 

RazorbackPirate

Well-Known Member
In every case above, regardless of sample size, there is something to be learned from the data. The answer isn’t necessarily that someone’s sample size isn’t large enough, rather it’s their interpretation of that sample that is off-base, or in the latter cases that the data supports their assessment.
TL/DR

In every case, the sample size was sufficient to show that the RNGs perform within expectations. That Inno runs an honest game. If you think otherwise, collect your data samples and show it. Or don't. Easier to talk about doing it, than doing it, I suppose.
 

Johnny B. Goode

Well-Known Member
I don't think you will find many data scientists who agree with you on this. Sample size is one of the most difficult and debated aspects of running experiments. Generally speaking, it's not a matter of how small the sample size is, but what you can infer from it. Many experiments and surveys simply do not have access to unlimited data, so the decision becomes how much precision is provided by what is there. There are several situations outlined in this thread already that warrant a more thorough explanation of why they are wrong, misleading, or accurate.
Obfuscation.
 
TL/DR

In every case, the sample size was sufficient to show that the RNGs perform within expectations. That Inno runs an honest game. If you think otherwise, collect your data samples and show it. Or don't. Easier to talk about doing it, than doing it, I suppose.

I would phrase it a little differently if you were going to condense it. More like, for the data provided, in all but 1 case, the data supports the published probabilities of Inno Games or the observed probabilities from player sites. Making a statement about them being an honest company would not be in the scope of the data. That’s not to say that they are not honest, but that’s not what this data unequivocally shows.

I assume “you” is meant to be the understood version, as I don’t have any reason to “think otherwise”, nor have I indicated any doubt.
 

RazorbackPirate

Well-Known Member
I would phrase it a little differently if you were going to condense it. More like, for the data provided, in all but 1 case, the data supports the published probabilities of Inno Games or the observed probabilities from player sites. Making a statement about them being an honest company would not be in the scope of the data. That’s not to say that they are not honest, but that’s not what this data unequivocally shows.

I assume “you” is meant to be the understood version, as I don’t have any reason to “think otherwise”, nor have I indicated any doubt.
"You" as in the person I'm responding to, for the last time. Not meant to be a trick statement. I think I'm in agreement with @Algona on this one.
 

Johnny B. Goode

Well-Known Member
Which part is unclear? I would be happy to explain.
None of it is unclear. You're trying to make the point of the discussion unclear by talking about tangential thoughts that are irrelevant. Thus, obfuscation. Because you're wrong about sample size in the context of the point of the discussion and can't bring yourself to admit it.
 