crc1021

Now that all regional competitions are done, I'm curious what others thought about the Crossfire game. I liked the shooting aspect, but I was disappointed with the lack of point diversity. Rather than teams having to figure out a strategy, every team did all the challenges in largely the same way. Also, I don't recall a game where it was possible to complete all the challenges in three minutes (at least in the time I've been involved with BEST).

For next year, I'm hoping for a more complicated game like Pay Dirt or Gatekeeper.

What are your thoughts?

Zach M
Yeah, I feel like it was an easier year for most teams in general. The total points available may have been small, but the bonuses separated the teams quite well, in my opinion.

But yes, I hope there's something like Pay Dirt with the market shift. That was a real strategy game. I like it when things change from seeding to semis to finals. I think 2018 has lots of potential to be like Pay Dirt, because it seems like it is going to be about collecting lots of stuff. I can't wait for bESTology to start! We shall see. 
nicholas.seward
This year was super fun to watch, easy to explain, and easy to build a practice field for, and the bot was fun to drive. I would say this is about as good as it gets. (If I were to add anything, it would be something analogous to the agate or the prairie chickens. For instance, we could have small foam cubes stand in for rubble, with a very small number of points awarded for clearing it away.)

It seems like there were clear divisions between teams. You don't want a competition where everyone is so close in points that assigning a winner becomes arbitrary. From the games that I witnessed, there was always a clear winner. As we all know now, it came down to getting Manny fast. Figuring that out and designing a bot for speed is far from trivial.

Please, oh please, never let us have a competition again where the bot doesn't roll. It is so hard to show off your bot if it has to be mounted on a pole, rail, trolley, etc.

My hat is off to this year's designer.  I can only hope we can get a game this good again.
jgraber
I agree the field was visually appealing, but the lack of planned disassembly made the field center painful to move between Kickoff, Demo, Game, Demo, Game, and Regional.
Throwing projectiles was new as far as I know, and a cool idea; not too many shot out of the field.
The design team consists of former BEST competitors, now mostly engineers in aerospace, and they think they know how rules should be written to be clear;  they also did PayDirt.

Contrasting opinions: 
 
I disagree about the importance of a fast Manny rescue: only a few top teams can even clear the field at all, so a fast Manny is only the distinction among those teams. And of those, reliably picking up Manny the first time is more important than shaving a few seconds off drive speed. What does it mean when a robot can completely clear its field in half the time? Was the game not hard enough? Is it an opportunity for cooperation with other teams?


nicholas.seward
@jgraber: I agree. It all depends on your level. I usually tell a team that at locals, if you can score anything and give yourself a week to practice, then you will probably place. Don't get too caught up in trying to do it all. In fact, many of the bonuses are usually distractions (red herrings). All my students are so over-scheduled that we have to build the bot in a weekend, so all we focused on was getting Manny and the cans. We provisioned for a catapult but didn't add it until the very last minute. I am pretty sure we would still have placed without the catapult. To me it seemed like this competition gave a good range of challenges to engage a wide range of teams.

The only thing I would have liked to see added is some kind of point opportunity for dumb rolling robots. Getting a robot to roll is very hard for some teams, so we need to give them a win so they will want to come back in the future. Side note: I think the kit needs to include two wheel hubs to get teams off the ground. I try to have other rookie teams come over to our shop to go from zero to rolling in an afternoon. Having a robot that moves really gets these kids jazzed. One team we helped just made a robot that went around and pushed prairie chickens, and those kids were ecstatic.
jgraber
BEST game design usually includes small points for dozer robots, just to get off of zero points.
It was possible for Crossfire, but you needed at least a static hook or a magnet to drag Manny backwards.
The game for 2018 is being developed now. I'll voice your concerns to the design team.
Zach M
@jgraber: while you're at it, may I request that you suggest they incorporate projectiles (as in Crossfire) into Current Events 2018?
ralsobrook
First of all, let me say that I think this year's game concept was very well designed aesthetically, and it turned out to be quite fun for competitors and engaging for spectators. (The trailer was absolutely awesome, by the way; much praise to whoever was responsible for that.)

My main issue with Crossfire was the scoring system.

It seemed to me that the scoring system this year loosely followed a curve where the difference in score between any two teams decreases as you ascend the ranks. The following data was collected from the results of South's BEST this year, as posted on their website. (Attached: South's Best.png)
I would argue that a scoring system of this nature is discouraging to teams at both the top and the bottom of the ranks. Those at the bottom are discouraged by the considerable differences between their scores and those ranked just ahead of them, differences which misrepresent the difference in their efforts. Those at the top know how hard they work, and are discouraged by their marginal progress between local and regional competitions. As a mentor for a team that won the game at their regional competition, I can tell you that going from 0 pts to 580 pts took about 30% of our practice time, and going from 580 pts to 588 pts took the other 70%. This type of scoring system also increases the likelihood of a signal failure leading to a loss in rank. With very few points separating the finalists, and those points being a function of time (the Manny rescue), a single signal failure early in the round could cause a team to lose through no fault of their own.
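To make that curve concrete, here is a rough way to look at the rank-to-rank gaps; the score list below is a placeholder I made up for illustration, not the actual South's BEST results:

```python
# Sort the seeding scores and look at the gap between each team and the
# team one rank above it.  These scores are invented placeholders, not
# the real South's BEST data.
scores = sorted([12, 35, 70, 140, 230, 330, 430, 510, 560, 585, 588])  # worst -> best

gaps = [hi - lo for lo, hi in zip(scores, scores[1:])]  # bottom-to-top gaps
for rank_above, gap in enumerate(reversed(gaps), start=1):
    # rank 1 is the top team, so the first line printed is the gap at the very top
    print(f"gap between rank {rank_above} and rank {rank_above + 1}: {gap}")
```

If the pattern I'm describing holds on the real data, the printed gaps grow as you move down the list.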

I believe a good game scoring system does the opposite, with differences in score increasing with rank (as demonstrated by competitions like Bet the Farm and Paydirt). This rewards the teams that go the extra mile, build a superior machine, develop effective strategy, and devote hours upon hours to practice. In addition, the prospect of moving up in the ranks and doing better next year is not so daunting for the teams that don't have the time or resources necessary to build a winning machine.
Zach M
I totally agree with that!
jgraber
I updated my profile to get notices of all topics, here in the off-season at least. 
It looks like I haven't been staying current on a couple of topics, so this post bundles too many replies, but it will catch me up.

- There should not be two non-driving games in a row, but there will be non-driving games, so that the same clawbot doesn't work from year to year to year.
- We have noted that projectiles are fun, but it doesn't seem likely in Current Events.  Maybe 2019.

Nicholas - The goal is some points for dozer type robots.   I plan to have such a demo bot for Current Events. 
 - Providing wheel hubs is actively being planned.  I think we missed 2019 though.   In Dallas, I host a build session at kickoff to get wheels and motor hubs made, but that is still not enough for some teams.   

- Strategy: Yeah, Paydirt market shift was tricky.  One popular idea is to have a pre-published change between Semis/Finals, or Hub/Regionals.  
- Re scoring: congrats on a perfect score with a 12-second rescue; similar to the Texas BEST top scores. Once your fire was out, did you go around putting out the fires in other quadrants? That was a big crowd pleaser and good sportsmanship, it earned praise for those drivers at Texas BEST, and it gave them something to do in the last 60-90 seconds of the match.
- I can see your point about timed scoring's sensitivity to signal failure.

- ralsobrook, can you propose an alternate scoring system for Crossfire that avoids the failings you see, and explain how it avoids them? Maybe similar graphs for Paydirt and Bet the Farm would help.
- Or, for a generic game: how many activities, how much of each activity, and what bonuses for a complete activity or for a diversity group?
Zach M
Hello Mr. Graber, 

Sorry, but I did not understand what you meant by "- There should not be two non-driving games in a row, but there will be non-driving games, so that the same clawbot doesn't work from year to year to year." Since there have not been two non-driving games in the past six years, what did you mean by there not being two in a row? Does that mean that in Current Events the robot will not move on its own?

Too bad about projectiles not being in Current Events... That was a fun twist!

Thanks.
jgraber
The line you quoted was a general comment that applies to all years.
I intend to say nothing specific about any future game, because that would be a spoiler, so don't ask.
ralsobrook
It would seem that I too need to turn on my notifications during the off-season. Sorry for the delay, here we go.

A few things to know about the following graphs:
  • I created these graphs by plotting data I found on the South's BEST website. They seem to be the only region that publishes, and keeps record of, thorough robot rankings.
  • My team competes at Texas BEST (which I think answers one of your questions, jgraber), so I was not present at any of these contests. Thus I cannot speak to any eccentricities or odd circumstances during these contests that might explain outliers in the data.
  • These graphs show cumulative score from the seeding rounds only, as that is the largest sample size and, in my opinion, the best indicator of a robot's capabilities (as distinct from the drivers' skill and strategy, which often define ranking in the semis and finals). This also means that the Paydirt graph does not display the effects of Market Shift, as it didn't happen until the semis.

  
(Attached graphs: Capture3.png, Capture2.png, Capture1.png, South'sBestAll.png)
The three contests preceding Crossfire all demonstrate a point distribution similar to the one I described in my previous post. I wish there were data available for Gatekeeper, because I believe it would have displayed an even more accentuated version of this trend.

Here are my thoughts on a good generic scoring system:

  • There should be enough tasks available that a team's score is limited by their capabilities alone. The maximum score should be difficult, if not impossible, to calculate, and should be virtually unattainable. (Ex. Bet the Farm, Paydirt, Gatekeeper)
  • There should not be a single task that is extensive enough or valuable enough for a team to win by focusing solely upon it.
  • There should be one or more overall goals, composed of multiple tasks, that, when completed, award a significant bonus. (Ex. Gatekeeper's completed CPU, Bladerunner's completed windmills, Bet the Farm's plant and harvest)
  • I don't see a real need for a diversification bonus, as diversification should be rewarding enough on its own. The most likely reason a team would avoid a task is that the time and effort it requires is not proportional to the points awarded for completion. If the point values are properly calibrated, then each task has its own incentive. That being said, I don't think such a bonus does any harm; I just find it superfluous.
  • Time should NEVER have a defining role in a team's score. These kids work too hard to lose a rank because too many people in the building were using the WiFi.
My thoughts on Crossfire specifically:
  • A quick and simple task, like clearing debris from the doorway, would have increased the scoring opportunity for the "dozer" robots.
  • Manny should have been more easily accessible for "dozer" robots. If you had a robot that could roll, a driver who knew which buttons to press, and a spotter who had read the rules, you should have been able to rescue Manny. 
  • Manny rescue should have been mandatory, as in you receive no points without him. Had he been easier to rescue, I believe that would have been a fair stipulation, preservation of human life being the chief priority, and would have provided even greater incentive than the time bonus. 
  • I believe the bonus for extinguishing all the flames of your color should have been much bigger, and I believe there should have been a bonus for extinguishing all of the flames in your quadrant, regardless of color. After all, what's the point of extinguishing only part of a fire?
I believe that, under these conditions, competition would have been more rewarding for teams on both ends of the spectrum. I certainly don't wish to criticize those who came up with the scoring system for Crossfire. Hindsight is always twenty-twenty, and I think it's important to point out flaws when we finally do see them, endeavoring continuously to improve the experience of all those who are fortunate enough to be involved in a BEST competition.

P.S. I sincerely apologize to any who suffer undue anguish in trying to follow my ridiculous sentence structure.
jgraber
Lovely graph.  
Can you attach a spreadsheet with the data?
Perhaps your point would stand out better if the graphs were normalized for scale and slope at the 20/80% points on the Y axis, and to 0-100% on the X axis.
Then we would be better able to see the difference in shapes at the top and bottom.
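Roughly what I have in mind, sketched with made-up score lists (not any region's actual results):

```python
# Normalize each game's seeding curve so the shapes can be compared:
# X becomes rank as a percentage (0-100%), and Y is rescaled linearly so the
# scores at the 20th and 80th percentile ranks land on 0.2 and 0.8.
import numpy as np

def normalize_curve(scores):
    s = np.sort(np.asarray(scores, dtype=float))   # ascending: worst -> best
    x = np.linspace(0.0, 100.0, len(s))            # rank as a percentage of the field
    y20, y80 = np.percentile(s, [20, 80])          # the two anchor points
    y = 0.2 + 0.6 * (s - y20) / (y80 - y20)        # linear rescale through the anchors
    return x, y

# Two invented score lists, just to show that the anchors line up after rescaling:
game_a = [5, 40, 80, 150, 260, 400, 520, 570, 585, 588]
game_b = [5, 10, 20, 40, 80, 160, 300, 500, 800, 1200]
for name, data in (("game A", game_a), ("game B", game_b)):
    x, y = normalize_curve(data)
    print(name, np.round(y, 2))
```

Plotted on the same axes, anything above 0.8 or below 0.2 is where the top-end and bottom-end shapes differ between games.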

Many good points on generic scoring.  I'll highlight this to the game team.
Your generic desired point 3, "sequential tasks," is similar to your unneeded point 4, "diversity."
One issue with sequential tasks is that if you can't do the first one, it's hopeless; so there is a need to judge difficulty carefully, and in the correct order.
ralsobrook
I have attached my spreadsheet below. The modifications you suggested stretch beyond my skill in the field of graphical statistics, though if someone with a more practiced hand wishes to take a crack at it, I certainly invite them to do so.

In reviewing my previous post, I saw that I had switched the labels for Bet the Farm and Bladerunner. I have revised the post to correct that error and include a shot of each graph individually, in addition to overlaying them.

I too felt that my fourth point was a bit redundant. Rereading your post, I now see the distinction you made between a bonus for a "complete activity" (e.g., extinguishing all the flames in Crossfire) and one for a "diversity group" (assembling a windmill in Gatekeeper). Both, I think, are great additions to the game when the tasks are of sufficient difficulty.

Regarding the issue of "sequential tasks", I should have clarified my point to say that the tasks need not necessarily be "sequential". What I meant to suggest was a set of tasks which are interdependent for the completion bonus only. That is to say that each can be completed individually, in any order that the team desires, and has its own point value, separate from the completion bonus. That being said, I'm not against the idea of sequential tasks when they fit the logic of the game, as they did in Gatekeeper.
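A toy sketch of the structure I mean, with invented task names and point values (not taken from any real BEST game):

```python
# Each task scores on its own, in any order; the completion bonus requires all of them.
TASK_POINTS = {"task_a": 20, "task_b": 35, "task_c": 50}   # invented values
COMPLETION_BONUS = 40                                       # awarded only when the whole group is done

def score(completed_tasks):
    base = sum(TASK_POINTS[t] for t in completed_tasks)
    bonus = COMPLETION_BONUS if set(completed_tasks) == set(TASK_POINTS) else 0
    return base + bonus

print(score({"task_b"}))                      # 35: partial credit, no bonus
print(score({"task_a", "task_b", "task_c"}))  # 145: every task plus the bonus
```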