The world of Quality Assurance can be a pretty routine-based job and, to be fair... that’s correct! You get to perform menial tasks in wonderful worlds for 8 hours at a time, and then go home and realize you have difficulty separating job from hobby!
(I have 5 years of experience working for an outsourcing QA company. Full disclosure: I got fired months ago for something that was totally my fault. More importantly, I’m not American, so there are caveats everywhere: my experience won’t be the same as what you’ll find in the good ol’ US of A (mainly, we have some sort of worker protection here), and it won’t be the same as someone who worked directly for a developer or a publisher. Practically every former QA employee has a different story and opinion on the business.)
That’s what I’ll be covering today. As an outsourcer, your job is to ensure client satisfaction at all times, because the client is the one who essentially decides whether your company has work for 10 or 100 people for the next 6 months. At the base level that may be none of your concern, but the policies where I worked were all implemented to meet a high degree of customer satisfaction, and it’s safe to assume most outsourcing companies have similar base policies. The details of how they work will invariably differ, of course.
When the project lands, we’d have to come up with a testing plan: take a good look at the game and make an inventory of all environments, variables, items, characters, interactions, animations, etc. Once that’s done, you’ll generally want to create test cases using either a test case tracker (for absolutely rigid test cases that require metrics to send to the client) or, well, an Excel sheet if it’s just to keep track of what was covered on this build.
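To make the idea concrete, here’s a minimal sketch of what that kind of tracker boils down to, whatever tool holds it. The field names and statuses are my own illustration, not any real tracker’s schema:

```python
from dataclasses import dataclass

# Illustrative test-case record; real trackers and client Excel sheets
# each have their own required fields.
@dataclass
class TestCase:
    case_id: str
    area: str               # environment, menu, character, etc. from the inventory
    steps: str              # what the tester should do
    expected: str           # what should happen
    status: str = "not run" # "pass", "fail", "blocked", or "not run"

def coverage(cases):
    """Fraction of cases actually executed on this build."""
    run = [c for c in cases if c.status != "not run"]
    return len(run) / len(cases) if cases else 0.0

cases = [
    TestCase("TC-001", "Boot", "Launch the game", "Title screen appears", "pass"),
    TestCase("TC-002", "Menu", "Open the options menu", "Menu opens, no corruption", "fail"),
    TestCase("TC-003", "World 1", "Finish the first mission", "Mission ends, save triggers"),
]

print(f"Coverage this build: {coverage(cases):.0%}")
```

Even when the “database” is just an Excel sheet, it’s the same columns: an ID, where to look, what to do, what should happen, and what actually happened on this build.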
A team lead would also need to build their team - sometimes from workers who’d worked with them previously, sometimes from people known to have experience with that type of project. As an example: there was a period where I was assigned to every game we had of a specific genre (for secrecy’s sake, I’ll not list which), and that has also happened to others.
That’s where the work you want to know about happens. The team lead generally receives what the client wants done on the project, assigns those tasks to the testers, and they go into the game and test it out. There are several ways to test, and I’ll list a few here:
Smoke Test: Usually the first thing done to anything you receive. Imagine you go to test and, whoops, the game crashes on boot. You kinda need to flag it to the client so they can send you something that works, or revert to the previous build while they figure out what’s wrong. Sometimes you’ll be asked not to perform this because it will already have been done on the client’s side, but it was always safer for our side to do it anyway, even if only one person did it, just in case - sometimes problems arise purely from the difference in configurations between the developer and outsourcing QA.
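The automated core of a smoke test can be sketched in a few lines: launch the build and confirm it survives its first moments. The binary path and timeout here are hypothetical, and a real smoke test also involves a human eyeballing the title screen and first menus:

```python
import subprocess

def smoke_test(binary_path, seconds=30):
    """Launch the build; pass only if it is still alive after `seconds`."""
    try:
        proc = subprocess.Popen([binary_path])
    except OSError:
        return False  # the build would not even launch
    try:
        proc.wait(timeout=seconds)
        # Exiting on its own this early almost certainly means a crash on boot.
        return False
    except subprocess.TimeoutExpired:
        proc.kill()  # it survived the window; clean up and report a pass
        return True
```

A missing or broken executable fails immediately, which is exactly the kind of “flag it to the client right now” result the smoke test exists to catch.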
Ad-hoc/Destructive: Typically done after everything else has been completed on the build; it’s essentially mining for bugs. The tester attempts to play the game in a destructive manner, logging in the database anything they find that does bad things to the game. This can range from a weird set of options that guarantees a crash to something that somehow causes graphical corruption. I’ve found both terms are accepted, if only because you’ll usually have to explain them anyway - it’s rare that people do this kind of testing.
Structured Testing: Pretty much you give someone a bunch of tasks and they perform them. Works very well to ensure that everything functions as intended. This will usually follow the smoke test outlined above. This usually takes the form of Test Cases, which can come from the client, your own team or be a mixture of both. I’ve seen all 3 applications of structured testing in my time as a QA tester.
Playthroughs: Usually done as part of structured testing; it can happen that the structure is a complete playthrough with a set of parameters, such as a given language, 100%/any%/lowest-possible-%, specific abilities skipped, a given mission order, etc. This usually occurs near the end of the testing cycle, when the game is pretty much ready and everyone involved wants to know if something bad will happen to players playing through the game normally. Can result in legendary bugs.
That depends on the client - usually they’ll have their own database and you have to conform to it, in which case we used an edited version of the company’s bug-writing format to match their needs as much as possible. You essentially write bugs the same way you’d summarize a news article: Who, What, Where, When - but instead of “Why” you answer “How”. Your job is not to answer “Why” unless a developer asks you (which HAS happened, but testers are usually untrained in actual programming, so explaining it is difficult with our limited understanding of it), but rather to show them how to do it so they can reproduce it on their end.
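The who/what/where/when/how format can be sketched as a tiny ticket formatter. The field names and layout here are illustrative - every client’s database has its own mandatory fields:

```python
def format_bug(what, where, when, how, who="Any player"):
    """Render a bug ticket in the who/what/where/when/how style."""
    return "\n".join([
        f"Summary:  {where} - {what}",
        f"Who:      {who}",    # who is affected
        f"What:     {what}",   # the observed result
        f"Where:    {where}",  # level, menu, build, platform
        f"When:     {when}",   # point in play where it occurs
        "How (repro steps):",
        *[f"  {i}. {step}" for i, step in enumerate(how, 1)],
    ])

print(format_bug(
    what="Game crashes to desktop",
    where="World 2 - Options menu",
    when="After loading any mid-mission save",
    how=["Load a mid-mission save in World 2",
         "Open the options menu",
         "Observe crash (5/5 attempts)"],
))
```

The numbered “How” steps are the part the developer actually runs, so they carry most of the ticket’s value; noting a reproduction rate like “5/5 attempts” is a common courtesy.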
This is one of the caveats of outsourcing: Insourced QA have it “easy” (They don’t - the job’s just as hard!) in that if a programmer needs something reproed, he can just grab the tester and ask for a live reproduction. It’s impossible for an outsourcer to do so unless he’s been sent to the client’s office for work. Usually, these bugs get bounced back and forth until a resolution is found. Either better reproduction steps are found, or something else is fixed that somehow fixes the issue.
As far as databases are concerned, Atlassian’s JIRA might be the most popular one that I’ve used (and my favorite), but I’ve seen instances where we had to keep bugs in an Excel sheet and one where the entire “database” was a self-forwarded e-mail. So really, there is no one standard across every company out there.
That we do - usually, the team will go through a process called regression. Someone collects all the bug tickets that were entered and marked as fixed by the developer, and then the team verifies that they’re actually fixed from the end-user standpoint. This process usually occurs on the developer’s end as well, if they have their own QA department. If a fix is verified, the ticket is marked as such; if it’s not, it’s marked as having failed regression and sent back to the client for them to work on it more. The policy where I worked was to then add additional data to the ticket, such as video, screenshots, or anything else deemed appropriate, gathered with metrics tools (which were usually proprietary, so I’ll not expound on them here).
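The regression pass described above can be sketched as a simple loop over tickets. The status strings and dict layout are placeholders of my own; real databases like Jira have their own workflows and field names:

```python
def regression_pass(tickets, verify):
    """Re-test every ticket the developer marked as fixed.

    `verify` stands in for a tester re-running the ticket's repro
    steps; it returns True if the bug no longer occurs.
    """
    for ticket in tickets:
        if ticket["status"] != "fixed by dev":
            continue  # only fixed tickets get regressed
        if verify(ticket):
            ticket["status"] = "verified fixed"
        else:
            ticket["status"] = "failed regression"  # bounced back to the client
    return tickets

tickets = [
    {"id": "BUG-101", "status": "fixed by dev"},
    {"id": "BUG-102", "status": "open"},
    {"id": "BUG-103", "status": "fixed by dev"},
]
# Pretend BUG-103 still reproduces when we re-run its steps.
regression_pass(tickets, verify=lambda t: t["id"] != "BUG-103")
print([t["status"] for t in tickets])
```

Open tickets are skipped entirely; regression only ever touches what the developer claims is fixed, which is why the collection step comes first.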
That’s pretty much the nitty-gritty of the job, in an admittedly large nutshell. I’ll answer any questions I can if you have any. The next one will probably detail other QA prospects - probably localization, because we all know the tale of Ys VIII and it’s a good example of why localization QA is necessary.