Michel Bastos

Software Testing: a Glossary of Critical Terms (2020)

Software testing is an underappreciated and absolutely crucial part of the development process: a bug that would be a simple fix in development can cost millions if it manages to reach production. Even if you’re not a tester, you owe it to yourself and your company to understand what testers are talking about so you can work with them more effectively. Today I’m going to break down some key software testing terminology to help you know what’s going on.

How Much Are You Testing?

Unit Testing

The earliest (and almost certainly the most common) sort of testing, where an application is broken down into its individual components, which are then tested in isolation. What that means depends on the software, but the general idea is that you break it down into the smallest usable units. This makes bug detection fairly straightforward: if something is broken, you know exactly where it is and which part you need to fix.
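As a minimal sketch using Python’s built-in unittest (the cart_total function here is hypothetical, just a stand-in for whatever unit you’re isolating):

```python
import unittest

def cart_total(prices, discount=0.0):
    """Hypothetical unit under test: sum the prices and apply a discount."""
    return round(sum(prices) * (1 - discount), 2)

class CartTotalTests(unittest.TestCase):
    def test_sums_prices(self):
        self.assertEqual(cart_total([10.00, 5.50]), 15.50)

    def test_applies_discount(self):
        self.assertEqual(cart_total([100.00], discount=0.25), 75.00)

    def test_empty_cart_is_zero(self):
        self.assertEqual(cart_total([]), 0)

if __name__ == "__main__":
    unittest.main()
```

If one of these fails, the failure points at this one function and nothing else, which is exactly the appeal.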

Integration Testing

Putting the pieces together and seeing how they work in concert. This doesn’t necessarily mean all the elements get tested together at once: integration testing will often group several units of code to see how a particular system works, then test those groups against each other, and so on until everything is working together. Jumping straight to integrating everything makes it significantly harder to isolate where the bugs are; it’s better to do it in well-chosen stages.
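A rough sketch of the idea, again with hypothetical components: an order service wired up to an inventory component, each of which passed its own unit tests.

```python
import unittest

class InventoryStub:
    """Hypothetical inventory component: tracks and reserves stock."""
    def __init__(self, stock):
        self._stock = stock

    def reserve(self, sku, qty):
        if self._stock.get(sku, 0) < qty:
            raise ValueError(f"not enough stock for {sku}")
        self._stock[sku] -= qty

class OrderService:
    """Hypothetical order component that depends on the inventory."""
    def __init__(self, inventory):
        self._inventory = inventory

    def place_order(self, sku, qty):
        self._inventory.reserve(sku, qty)
        return {"sku": sku, "qty": qty, "status": "confirmed"}

class OrderInventoryIntegrationTests(unittest.TestCase):
    def test_order_reserves_stock(self):
        inventory = InventoryStub({"WIDGET": 3})
        order = OrderService(inventory).place_order("WIDGET", 2)
        self.assertEqual(order["status"], "confirmed")

    def test_order_fails_when_out_of_stock(self):
        inventory = InventoryStub({"WIDGET": 1})
        with self.assertRaises(ValueError):
            OrderService(inventory).place_order("WIDGET", 5)

if __name__ == "__main__":
    unittest.main()
```

When everything is ready to test together, you get: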

Functional Testing

Does the product work as a whole? A functional test will have a certain set of goals it seeks to fulfil, and will usually be run as though the testers were users (i.e. black box). The goals will differ but they tend to coalesce around the idea “does our software do the thing it’s designed to do?”
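In practice a functional test often drives the whole deployed application from the outside. A pytest-style sketch against a hypothetical staging API (the URL and endpoint are made up for illustration):

```python
import requests

BASE_URL = "https://staging.example.com"  # hypothetical staging environment

def test_user_can_place_an_order():
    """Exercise the whole stack through its public API, the way a user would."""
    response = requests.post(
        f"{BASE_URL}/orders",
        json={"sku": "WIDGET", "qty": 1},
        timeout=10,
    )
    assert response.status_code == 201
    assert response.json()["status"] == "confirmed"
```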

Who Is Testing?

Manual testing

A tester does everything by hand. Less common these days than it used to be, but it still happens fairly frequently, especially when something is wrong and automated testing can’t track it down.

Automated Testing

The vast majority of modern testing uses suites like Selenium to run large batteries of tests with an efficiency a human could never hope to match. Automated testing can and does miss things, but it catches a lot, and if you’re dealing with very complex software it’s the only way to get certain tests run at all.
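Since Selenium gets a mention, here’s a rough sketch of what an automated browser test looks like with its Python bindings (the URL, form fields and credentials are all illustrative):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# A minimal automated browser check against a hypothetical login page.
driver = webdriver.Chrome()
try:
    driver.get("https://staging.example.com/login")
    driver.find_element(By.NAME, "email").send_keys("tester@example.com")
    driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    assert "Dashboard" in driver.title, "login did not land on the dashboard"
finally:
    driver.quit()
```

A suite of hundreds of these can run on every commit, which is where automation earns its keep.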

User Testing

When you bring in people from outside the company to test for you. This can be beta testing where a restricted group of public individuals (e.g. people from your mailing list) are given early access to your software with the understanding it may be buggy and that they should report issues, or it might be something like accessibility testing, where a group of people with (say) colour blindness are brought in to see how well they can use the software.

What Can the Testers See?

White Box Testing

When the testers have unrestricted access to the code, and can see what’s going on under the hood. If you have a specific issue you’re trying to diagnose, this is the way to go: you can test certain inputs and see how everything works on the inside to better figure out what’s gone wrong.

Black Box Testing

When the code is concealed from the testers. This lets you better mimic how actual users will interact with your app, and is also useful for removing distractions—less information can make things harder to work out, but it can also clear away static and help you see where the actual problem lies.
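To make the contrast concrete, here’s a sketch using a hypothetical PriceCache class: the black box test only touches the public method, while the white box test peeks at internal state to confirm a specific code path was taken.

```python
import unittest

class PriceCache:
    """Hypothetical component: caches prices fetched from a backend."""
    def __init__(self, backend):
        self._backend = backend
        self._cache = {}  # internal detail

    def get_price(self, sku):
        if sku not in self._cache:
            self._cache[sku] = self._backend(sku)
        return self._cache[sku]

class BlackBoxTests(unittest.TestCase):
    def test_returns_backend_price(self):
        # Only the public behaviour is checked, as a user of the class would.
        cache = PriceCache(lambda sku: 9.99)
        self.assertEqual(cache.get_price("WIDGET"), 9.99)

class WhiteBoxTests(unittest.TestCase):
    def test_second_lookup_skips_the_backend(self):
        # With the internals visible, we can assert the cache branch is taken.
        calls = []
        cache = PriceCache(lambda sku: calls.append(sku) or 9.99)
        cache.get_price("WIDGET")
        cache.get_price("WIDGET")
        self.assertEqual(calls, ["WIDGET"])   # backend hit exactly once
        self.assertIn("WIDGET", cache._cache)  # peeking at private state

if __name__ == "__main__":
    unittest.main()
```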

Why Are We Testing?

Cyclomatic Complexity

The number of independent paths that can be taken through a particular piece of code. High cyclomatic complexity means more moving parts and more that can go wrong, though the extra branching can sometimes buy you useful flexibility. There’s no one correct amount of cyclomatic complexity; you need to scale it to what your app needs to do.
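For instance, this hypothetical shipping function has three independent decisions, so its cyclomatic complexity is 4 (one more than the number of branch points), meaning four distinct paths a test suite would need to cover:

```python
def shipping_cost(weight_kg, express, international):
    """Three if-branches -> cyclomatic complexity of 4 (3 decision points + 1)."""
    cost = 5.0
    if weight_kg > 10:    # decision 1
        cost += 10.0
    if express:           # decision 2
        cost *= 2
    if international:     # decision 3
        cost += 15.0
    return cost
```

Tools like radon can report this number across a Python codebase (radon cc your_module.py), which is handy for spotting functions that have quietly grown out of control.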

Fault Tolerance

How badly can the system break before it, well, breaks? A lot can be going wrong under the hood while the actual application functions fine, and this is good—if 99/100 pieces break down but the user can’t tell the difference, then you’ve built in exceptionally high fault tolerance.
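A small sketch of the idea in code, with a hypothetical recommendation feature that degrades to a canned fallback rather than taking the page down with it:

```python
import logging

logger = logging.getLogger(__name__)

def get_recommendations(user_id, recommender, fallback=("bestsellers", "new arrivals")):
    """If the recommender blows up, log it and serve something sensible anyway."""
    try:
        return recommender(user_id)
    except Exception:
        logger.exception("recommender failed for user %s, serving fallback", user_id)
        return list(fallback)
```

The user sees a slightly blander page, the on-call engineer sees the log entry, and nothing visibly breaks.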

Smoke Testing

A quick, shallow check that the core functionality works at all, usually run right after a build or deployment and before any deeper testing. Does it work? Can users order delivery through the food delivery app? Can they read the newspaper on the newspaper app? Can they play music on the music app? And so on.
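A smoke test can be as simple as confirming the critical pages respond at all. A pytest-style sketch against a hypothetical deployment:

```python
import requests

BASE_URL = "https://staging.example.com"  # hypothetical deployment

# A smoke test stays shallow: just confirm the critical pages respond at all.
CRITICAL_PATHS = ["/", "/login", "/menu", "/checkout"]

def test_critical_pages_respond():
    for path in CRITICAL_PATHS:
        response = requests.get(f"{BASE_URL}{path}", timeout=10)
        assert response.status_code == 200, f"{path} returned {response.status_code}"
```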

Cross-Browser Testing

Does it work when you move it to a different browser? This is an infamous testing problem: something that works fine when the developers test it on Chrome suddenly breaks when somebody runs it in Safari, and your app is unusable by anybody on an Apple device. Personally I like Cross Browser Testing for this sort of thing.
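A basic way to approach it locally is to parametrise the same Selenium check over several browsers (each browser needs its driver installed, Safari’s only exists on macOS, and the URL is illustrative):

```python
import pytest
from selenium import webdriver

BROWSERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
    "safari": webdriver.Safari,  # only available on macOS
}

@pytest.mark.parametrize("name", BROWSERS)
def test_homepage_renders(name):
    driver = BROWSERS[name]()  # launch the browser under test
    try:
        driver.get("https://staging.example.com")  # hypothetical URL
        assert driver.title, f"homepage rendered with no title in {name}"
    finally:
        driver.quit()
```

Hosted services like the one mentioned above do the same thing against browser and device combinations you don’t have on your desk.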

Stress Testing

How many transactions/data points/users/Olympic gymnasts can it handle at once? The more critical a component is, the more you’re going to want to stress test it: malicious parties will try to crash sites and applications by overloading them with requests (a DDoS attack), and sometimes traffic simply surges organically. Either way, it’s important to know that the load-bearing beams are going to hold.
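Dedicated tools like Locust or JMeter are the usual choice, but the core idea fits in a few lines: fire a burst of simultaneous requests at a hypothetical staging endpoint and see how many come back healthy (never point this at production):

```python
import concurrent.futures
import requests

BASE_URL = "https://staging.example.com"  # hypothetical environment, never production
CONCURRENT_USERS = 200

def hit_checkout(_):
    response = requests.get(f"{BASE_URL}/checkout", timeout=30)
    return response.status_code

# Fire a burst of simultaneous requests and count how many succeed.
with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    statuses = list(pool.map(hit_checkout, range(CONCURRENT_USERS)))

success_rate = statuses.count(200) / len(statuses)
print(f"{success_rate:.0%} of {CONCURRENT_USERS} concurrent requests succeeded")
```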

MTBF

Mean time between failures: the average time between fail states, though what counts as a fail state will depend on your piece of software. Ideally this would be ∞, but we don’t live in an ideal world, and knowing the real number lets you plan an efficient repair and maintenance schedule.
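The arithmetic itself is simple: divide total operating time by the number of failures observed. With some made-up numbers:

```python
# MTBF = total operating time / number of failures
operating_hours = 30 * 24   # a month of continuous operation (hypothetical)
failures = 4                # failures logged over that month (hypothetical)

mtbf_hours = operating_hours / failures
print(f"MTBF: {mtbf_hours:.1f} hours")  # -> MTBF: 180.0 hours
```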

Developing your software is only part of the process; testing is equally important. If you have a development team and you need to expand your testing capabilities, we’d recommend the software testing services at CodeClouds.
