

Workday Clients’ 8 Ideas to Help You Win at Testing Workday Updates

After our webinar Ask Me Anything (AMA) About Testing the Workday 30 Update, we asked our friends in the Workday community to join the conversation and share additional tips for successful update testing. Here’s what they had to say.

Workday is software, not sci-fi. Plan your testing.

The key to a successful Workday update is preparation and organisation. “You must plan,” says Coreen Campbell, Director of HR Systems at the International Rescue Committee (IRC). “I know Workday is in the cloud and is so much easier to work with than other ERP platforms I have used. It has completely transformed and simplified the way our teams manage and configure our business processes. However, this is not Star Trek; your system configuration does not magically look after itself. No system does. The software requires maintenance and a methodical approach to assess changes and conduct periodic validation through testing.

“If you don’t have standard operational procedures for testing, test reporting, and fixes,” she continues, “problems will manifest. Before the Workday update preview starts (or before any test event, for that matter) you want to document your test plan, your scripts, your test cycles, and your plan for capturing and fixing errors.”

Pre-test and introduce an ongoing optimisation strategy

Because the test window is a fixed five-week period, you want to minimise disruptions and delays during that time. Cornell University tests in the lead-up to the update to reduce the number of self-induced surprises.

“If you shortchange the planning and prep time, you can end up having to deal with a lot of pack mitigation in the middle of the update testing,” says Harland Harris, HRIS Configuration Specialist at Cornell. “That creates additional stress. So do your best to make sure that as many of your packs and test cases as possible are going to run the way you would expect. Pre-testing can help. For instance, we run a weekly BP test pack, so we know that that test pack is stable and up-to-date. If a test script does ever differ from our configuration design, those weekly tests allow us to catch that quickly and update our pack if required.

“We also have some test packs that are very specific to the update,” he continues, “and we pre-test those because they don’t get as much regular play as some of our other tests. But if you only test your configuration once the preview starts, you’re going to run into many instances of ‘I forgot that we changed that configuration. I need to modify that test pack.’ And that eats up valuable time during the preview window.”

“…this is not Star Trek; your system configuration does not magically look after itself. No system does.”

As your configuration is a constantly changing system, your testing process and test documentation should also be continually evolving to stay in sync with its needs. “Every time we run a test cycle, we try to optimise portions of it to improve,” says David Garver, Assistant Director of Identity & Compliance at Louisiana State University. “This includes the process and the documentation. For example, we try to write our tests well in advance, reuse whatever tests we can (for example our weekly regression tests), identify holes in them as we’re testing, and add detail so they’re in better shape for the next update.”

Help test teams focus, and provide support

During the Workday update preview window, your systems team will most likely call on subject matter experts (SMEs) from various areas of your organisation to help execute your test cycles. A dedicated team focussed solely on testing during the update period would be ideal, but more often SMEs assist with testing in addition to their everyday duties. This divided attention can sometimes make it difficult for you to predict and control testing timelines as well as ensure test quality and consistency.

For example, over the years we’ve heard about instances of manual testers:

  • multitasking their testing during meetings (which can lead to testing mistakes);
  • stopping midway through tests because of interruptions (which can affect test reliability and results);
  • faking test results just to complete their allotment (risking that issues in your configuration go undetected); and
  • forgetting for days to hand off to the next business team once their stage of testing was complete (causing delays in an already busy testing window).



To improve focus during update testing (and also streamline effort so teams have bandwidth to explore Workday’s game-changing enhancements), LSU sets up a dedicated room for testing. “It’s helpful because it removes testers from their day-to-day responsibilities, which are tempting to dip into,” says David. “It eliminates distractions and helps cut down on double duty because the people they’re surrounded by are also testing—helping keep everyone on track, focused and more efficient.”

Your update testing may also go more smoothly if you:

  • standardise ways to support the test team as they’re executing tests; and
  • quality assure the testing as you progress.



This can reduce the amount of rework you need to do. For example, if your SMEs are located remotely, Lawrence Berra, HR Manager at Magellan Health, suggests setting up regular check-in periods. “We had a case where a tester was behind, and when they finally turned in their results, they had tested incorrectly and most parts had to be retested. By introducing check-ins, we’re able to catch problems early and our testers who are more on their own get the support they need.”

“Relying on engineered data that we can control and replicate update to update means we don’t have to start from scratch and rebuild as much…”

Coreen also suggests that you periodically observe your testers. “Manual testing increases risk,” she explains. “Even with experienced testers, you can still run into issues like simple human error, inconsistency and reliability problems. People miss things. Boredom or impatience can affect thoroughness. In addition, subjectivity can lead to inconsistent outcomes. For example, a script might say to hit ENTER, but in the UI the button says SUBMIT. One tester may mark that test as a fail. The next tester may mark it as a pass. To mitigate these risks, I will periodically sit and walk my testers through tests step by step. I’ll also sit with them and watch them work through a test. And with less experienced testers I make a point to check their reports in detail to assess whether the criteria they used to determine if a test case passed or failed are valid.”

Finning has recently taken tester support one step further by screen-recording their manual tests. “We began doing this as part of our phased rollout in South America, because it was proving difficult to impart our test instructions written in English to non-English-speaking test teams,” says Heather Butt Paul, the Global Practice Lead for Workday at Finning. “The new approach proved really successful, and we’ve since realised that it’s also very useful for training new manual testers here at home because it reduces doubt and confusion. They can see screen by screen and field by field what they should be doing, and that helps improve test accuracy and consistency.”

Use synthetic data where suitable

In Workday, real employees and their profiles change all the time. When the majority of your tests use real employee data, maintaining those tests can become a challenge. Before executing them, you need to review the employee data you’re using to ensure it’s still relevant to the scenarios you’re testing. Is Employee A still in the supervisory org? Are they in the same position as last time? Do they work the same type of hours or schedule that you’re testing for?

“At Cornell, we’re trying to get better at using synthetic data to test the large percentage of basic business processes that we run over and over,” says Harland. “Relying on engineered data that we can control and replicate update to update means we don’t have to start from scratch and rebuild as much. It helps reduce instances of ‘Oh gosh, that test is going to fail because John Doe is no longer in that job anymore.’ Then we use real employee data for more specialised testing (for example when we need to validate that a process works as expected for a unique employee) as well as for more critical and complex business processes.

“It’s helped reduce our effort around data maintenance,” he explains, “and it’s also much better from a data protection and compliance standpoint because it vastly reduces the amount of worker information that testers are exposed to.”
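The “engineered data we can control and replicate update to update” idea can be sketched in miniature: seed a random generator so the same synthetic worker records can be rebuilt identically for every update cycle. The field names and helper below are illustrative assumptions for the sketch, not Workday’s actual data model or API.

```python
import random
from dataclasses import dataclass

@dataclass
class SyntheticWorker:
    worker_id: str
    supervisory_org: str
    position: str
    weekly_hours: float

def build_test_workers(count: int, seed: int = 42) -> list[SyntheticWorker]:
    """Generate a reproducible set of synthetic worker records.

    Seeding the generator means the exact same records can be rebuilt
    for every update cycle, so a test pack never depends on a real
    employee who may have since changed org, position or schedule.
    """
    rng = random.Random(seed)
    orgs = ["HR Operations", "Payroll", "Benefits"]
    positions = ["Analyst", "Specialist", "Manager"]
    return [
        SyntheticWorker(
            worker_id=f"TEST-{i:04d}",       # stable IDs, never a real person
            supervisory_org=rng.choice(orgs),
            position=rng.choice(positions),
            weekly_hours=rng.choice([20.0, 37.5, 40.0]),
        )
        for i in range(count)
    ]
```

Because the same seed always yields the same records, a basic business-process test pack can reference these workers by ID across update cycles without the “John Doe is no longer in that job” failures Harland describes.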

For more advice on testing your configuration during the Workday update period, jump over to our webinar video where our testing experts share best practice, including:

  • what makes a good test plan;
  • important but sometimes overlooked tests;
  • our recommended sequence of testing, and more.
