Assessing Game Demand

Radarscope, Nintendo’s first American release in 1981, showed how culture can affect a game’s acceptance and performance. Modeled on the mega-hit Space Invaders, Radarscope was widely popular in Japan yet soundly rejected by American players. When Nintendo decided to retrofit the American cabinets with a new game, Nintendo of America (NOA) was in luck: Shigeru Miyamoto had an original design up his sleeve called Donkey Kong. Understandably skeptical when reviewing the game, NOA staff had no idea what had fallen into their laps. Single-handedly, this game would change Nintendo’s fortunes.

Based on the hit game Space Invaders, Radarscope failed to excite players

My point here is that Nintendo didn’t accurately assess the demand for its games. It overproduced Radarscope and had to convert two-thirds of the 3,000 units manufactured into Donkey Kong. Each conversion meant unboxing the cabinet and removing and replacing all the artwork and EPROMs (erasable programmable read-only memory chips). Not exactly an efficient use of time and resources. And after the success of Donkey Kong, Miyamoto parlayed his success with a derivative design. Perplexingly, Donkey Kong Junior’s popularity and sales defied expectations. Estimating the demand for a new game was far from easy.

In a perfect world, Nintendo would have known precisely how many Radarscope, Donkey Kong, and Donkey Kong Junior units the market would bear, and exactly how many cabinets, electronics, monitors, joysticks, buttons, and sets of artwork to order. With lead times as long as two months, management needed information to estimate and manage risk. Arcade games had a short “shelf life” and a small window of opportunity to perform at their peak on location, and market conditions were dynamic, with a constant influx of new games. This presented an opportunity for me to make a real contribution: data that could inform crucial decisions was sorely needed.
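The planning problem described above can be framed as simple order-quantity arithmetic: parts ordered today arrive months later, so an order must cover forecast demand plus a safety buffer, minus whatever is on hand. This is only a minimal sketch with hypothetical numbers; the source does not describe Nintendo’s actual method.

```python
def parts_to_order(forecast_units, safety_fraction, on_hand):
    """Units of a component (cabinets, monitors, joysticks, ...) to order
    ahead of a long lead time.

    All parameters are hypothetical illustrations, not Nintendo's process:
    forecast_units  -- demand forecast over the lead-time window
    safety_fraction -- buffer against forecast error (e.g. 0.10 = 10%)
    on_hand         -- inventory already in stock
    """
    target = forecast_units * (1 + safety_fraction)
    return max(0, round(target) - on_hand)

# e.g. forecast 3,000 cabinets, 10% buffer, 500 already in stock
print(parts_to_order(3000, 0.10, 500))  # 2800
```

The tension the paragraph describes falls out of the buffer term: too small and you miss a hit’s window; too large and you are converting unsold cabinets by hand, as with Radarscope.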

During development, I provided feedback on working game builds, fighting for better player comprehension and early engagement. In the weeks before production of a new game, test units collected valuable information at a mix of large arcades and small street locations, where we hoped to accurately reveal a game’s potential and make final adjustments. We used this information to forecast production and assist marketing efforts.

Today, companies have “business intelligence” (BI) teams with an overabundance of data. Would more of the right data have helped better predict demand and improve a game’s earning power? Absolutely. The same holds for the study of the user experience: richer data would have helped tune games to player needs and supported management planning and forecasting.


Figure 2: Ever-increasing play times resulted in lower earnings.

Game Analysis

I analyzed collection reports from across the country. Recently launched, Donkey Kong Junior was no match for Donkey Kong in earning power and lacked its “legs” (longevity). As with data today, the reports identified problem areas but didn’t provide solutions.

I manually recorded play times to study learning curves (Figure 2). The study illustrated how ever-increasing play times resulted in fewer player sessions per day; the effect was lower game earnings.
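The arithmetic behind this effect is straightforward: with fixed operating hours and a fixed price per play, longer average sessions mean fewer plays per day, and therefore less money in the cashbox. A minimal sketch with hypothetical numbers (the `utilization` parameter is my own illustrative assumption, not a figure from the article):

```python
def daily_earnings(open_hours, avg_play_minutes, price_per_play, utilization):
    """Estimate one cabinet's daily earnings from average session length.

    utilization: fraction of open time the machine is actually in play
    (hypothetical; real location data varied widely).
    """
    sessions_per_day = (open_hours * 60 * utilization) / avg_play_minutes
    return sessions_per_day * price_per_play

# Hypothetical comparison: as players learn a game and sessions lengthen,
# the same quarter buys more play time and daily revenue falls.
early = daily_earnings(open_hours=12, avg_play_minutes=2.0,
                       price_per_play=0.25, utilization=0.5)  # 45.0
later = daily_earnings(open_hours=12, avg_play_minutes=5.0,
                       price_per_play=0.25, utilization=0.5)  # 18.0
print(early, later)
```

This is the relationship Figure 2 captures: play time climbs as players improve, and earnings decline even though the machine is just as busy.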

Over a period of years, play data was added, little by little, to games’ “bookkeeping functions.” Operators and manufacturers could better tune a game by knowing the average play time, the locations of pinch points, which levels players were reaching, and where sessions were ending.
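The metrics listed above can be sketched as a simple analysis over session records. The data shape and numbers here are hypothetical, assuming each session logs a play time and the level it ended on:

```python
from collections import Counter
from statistics import mean

# Hypothetical session records: (play_time_seconds, level_reached_at_end)
sessions = [(95, 1), (140, 2), (60, 1), (310, 4), (120, 2), (85, 1), (200, 3)]

avg_play_time = mean(t for t, _ in sessions)
end_levels = Counter(level for _, level in sessions)

print(f"average play time: {avg_play_time:.0f}s")

# A level where a disproportionate share of sessions end is a candidate
# "pinch point" -- a difficulty spike worth re-tuning before production.
pinch_point, count = end_levels.most_common(1)[0]
print(f"most sessions ended on level {pinch_point} ({count} of {len(sessions)})")
```

Even this toy version surfaces the questions the bookkeeping data answered: how long players stay on, how deep they get, and where the game loses them.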

Prior to introducing a game, manufacturers’ sales reps would push their distributors for sales commitments. Distributors didn’t always believe collection reports, so manufacturers were forced to test games strategically with key distributors. Those tests proved influential in a game’s sales.

At trade shows, I played and analyzed new games. By playing a game for ten minutes, I could get a good feel for it. Though just a sampling, the first few minutes and early levels were most important to me: if a game was ever going to “hook” me into playing longer, it had to excel in its early stages. Trade show conversation always centered on “Which games do you like?” and “What games are making money?” Distributors and operators appreciated my critical eye and respected my opinions.

But being a source of competitive information was a conflict of interest. I worked for Nintendo, but we didn’t always have the best games. And for distributors, purchasing the best games for their customers made the difference between “selling out” (a good problem) and “being stuck” with games no one wanted.


6 Responses to Methodologies to Analyze Classic Arcade Games

  1. Frank Ballouz says:

    Well done, Mo Mo!!

  2. Jeff Walker says:

    Well done Jerry. I think all of us in the coin-operated amusement marketing side of things had a much harder challenge of selling units of entertainment time versus our counterparts selling package goods. Great article

    • Jerry Momoda says:

      Thank you Jeff! I agree with your challenge comment. We couldn’t B.S. a game was good when it wasn’t. The cashbox told the truth. But coin-op sales people really knew/know how to sell the less than great product. That takes chops. Coin-op has so many great stories to tell. The game is the game, yes? 🙂

  3. Mark West says:

    That’s a well-written article, Jerry, and it opened my eyes to the marketing research done in those days. I should have listened to more of your advice for “Danger Express”!

    • Jerry Momoda says:

      Well thank you Mark! As you know, at Atari Games we used a more process-minded approach to game development. Looking back, somehow incorporating an approach like mine into the concept approval process and at milestones might have been useful. I still remember how awesome an artist you are! Thanks for reading!
