Understanding Impact: A Longitudinal Study for Enterprise IT

Project description

USAA’s IT organization recognized that they lacked a Voice of the Employee program and wanted to become more data-driven through continual employee feedback. Despite owning and supporting numerous enterprise-wide IT products and services, they were collecting only minimal user satisfaction data on a few software tools.

Research objective

The objective of the project was to design a research study to provide actionable employee feedback across a range of IT products and services. The organization desired summative data to inform their decision-making and help with prioritization. They wanted to measure the impact of their efforts and understand if they were improving employee experiences.

The IT organization’s enterprise-wide services included hardware equipment, conference room technology, a mobile Bring-Your-Own-Device (BYOD) program, software request and installation, and multiple forms of technical support. For software, they owned and maintained an Intranet, an employee directory, messaging and conferencing solutions, and productivity tools. Each piece of software supported a range of devices and operating systems.

Research method

Collaborating with six different groups of Product Managers, I crafted an online survey to address each group’s most pressing research interests. The questions aligned with their visions for the products and services. This approach meant the same survey could be repeated year after year.

The 17 rating-scale questions asked about employees’ perceptions: whether the products are useful, findable, and/or easy to use, and whether interacting with each service is a positive experience. The two open-ended questions asked what IT does well and where it can improve.

The survey is actively administered by the Human Resources team (People Insights). They have agreed to identify a cross-section of 5000 employees and conduct the study twice a year. No incentive is offered.

High-level findings

Analyzing the findings, I organized the rating-scale questions into a scorecard with three categories: What IT does well, What IT does OK, and What IT needs to improve.

IT Longitudinal Study Scorecard

What IT does well

  • 95% Ease of use of conferencing technology
  • 92% Easily connect with colleagues
  • 90% Useful communication tools

What IT does OK

  • 77% Appropriateness of hardware
  • 77% Usefulness of productivity tools
  • 72% Awareness of IT help options
  • 70% Findability of mobile app
  • 70% Findability of enterprise content
  • 69% Workforce Experience Commitment
  • 69% Quality of IT support

What IT needs to improve

  • 66% Clarity of IT communications
  • 65% Relevancy of IT communications
  • 62% Quality of installed software services
  • 62% Usefulness of mobile solutions
  • 59% Ability to self-service
  • 59% Awareness of IT offerings
  • 54% Quality of hardware delivery services

IT open-ended findings

Looking across the open-ended responses, I found that the most frequently cited themes for what IT does well were:

  • Provides great support and is helpful (50 comments)
  • Quick and efficient at resolving issues (31 comments)
  • Useful collaboration and productivity tools (29 comments)
  • Responsive IT Service Desk (22 comments)
  • Useful conference solutions and conference rooms (20 comments)

The most frequently cited themes for where IT can improve were:

  • Improve the hardware provided: equipment quality, laptop battery life, and desires for different equipment (45 comments)
  • Streamline hardware request, delivery and return processes (37 comments)
  • Reduce IT support time with faster ticket resolution (34 comments)
  • Address conference room equipment problems (23 comments)
  • Address authentication issues on mobile (22 comments)

Key takeaways and recommendations

Combining the quantitative results with the qualitative statements, four key takeaways and recommendations emerged:

  1. Hardware: opportunity to improve the standard hardware provided, both computers and headsets (top theme in open-ended comments related to hardware)
    • Recommendation: Prioritize hardware concerns and measure impact in future studies
  2. Hardware Request and Delivery: opportunity to improve hardware request and delivery experiences (only 54% agreement about quality of service and the second-highest number of negative comments)
    • Recommendation: Explore current pain points and coordinate across organizations to address problems
  3. Self-Service: opportunity to improve self-service approaches (only 59% agreement with ability to self-serve and comments about difficulties finding information)
    • Recommendation: Follow a user-centered design process in redesigning the IT help portal
  4. Mobile: opportunity to improve authentication approaches (62% agreement for mobile usefulness and numerous negative comments about authentication issues)
    • Recommendation: Prioritize authentication issues and measure impact in future studies

Analysis process

For the rating-scale questions, I performed a frequency distribution analysis on the percentage of agreement for each question. This analysis revealed three natural groups, or “buckets”, of scores. Each question fell into one category: What IT does well, What IT does OK, or What IT needs to improve.
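
To illustrate, the short Python sketch below reproduces this bucketing for a handful of the agreement scores from the scorecard above. The cutoffs (80% and 67%) are illustrative assumptions chosen to approximate the three natural groupings, not an official rubric.

    # Minimal sketch of the rating-scale bucketing. Agreement scores come from
    # the scorecard above; the cutoffs (80 and 67) are illustrative assumptions
    # that approximate the three natural groupings, not an official rubric.
    agreement = {
        "Ease of use of conferencing technology": 95,
        "Easily connect with colleagues": 92,
        "Useful communication tools": 90,
        "Appropriateness of hardware": 77,
        "Awareness of IT help options": 72,
        "Quality of IT support": 69,
        "Clarity of IT communications": 66,
        "Ability to self-service": 59,
        "Quality of hardware delivery services": 54,
    }

    def bucket(score):
        """Assign a question to a scorecard category based on % agreement."""
        if score >= 80:
            return "What IT does well"
        if score >= 67:
            return "What IT does OK"
        return "What IT needs to improve"

    for question, score in sorted(agreement.items(), key=lambda kv: -kv[1]):
        print(f"{bucket(score):26} {score:3d}%  {question}")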

In the future, with more survey executions, trending information will be available for each question. Chi-squared tests can also determine whether there is a statistically significant difference in the results for each question between waves.
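
As a rough sketch of that future comparison, assuming raw agree/disagree counts were available for a single question in two survey waves (the counts below are hypothetical), a chi-squared test of independence could be run with SciPy:

    # Hypothetical agree/disagree counts for one question in two survey waves.
    from scipy.stats import chi2_contingency

    wave_1 = [540, 460]   # [agree, disagree] in wave 1 (hypothetical)
    wave_2 = [610, 390]   # [agree, disagree] in wave 2 (hypothetical)

    chi2, p_value, dof, expected = chi2_contingency([wave_1, wave_2])
    print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("The change in agreement between waves is statistically significant.")
    else:
        print("No statistically significant change detected between waves.")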

For the open-ended questions, each response was manually associated with one or more product/service categories, based on how the IT team organizes their work. Thematic analysis identified recurring themes within each category. I analyzed the two questions (What IT does well and Where can IT improve) separately.
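
The coding itself was done by hand; the sketch below only illustrates how hand-coded theme labels could be tallied into counts like those reported above, using hypothetical example records.

    # Tally hand-coded theme labels per open-ended question. The records are
    # hypothetical; real responses were coded manually before counting.
    from collections import Counter

    coded_responses = [
        {"question": "does_well", "category": "IT Support",
         "themes": ["Great support and helpful"]},
        {"question": "can_improve", "category": "Hardware",
         "themes": ["Equipment quality", "Laptop battery life"]},
        {"question": "can_improve", "category": "Hardware",
         "themes": ["Equipment quality"]},
    ]

    def theme_counts(records, question):
        """Count how often each theme appears for one open-ended question."""
        counts = Counter()
        for record in records:
            if record["question"] == question:
                counts.update(record["themes"])
        return counts.most_common()

    print(theme_counts(coded_responses, "can_improve"))
    # [('Equipment quality', 2), ('Laptop battery life', 1)]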

I also experimented with an AI analysis of the raw responses. I wanted to compare my findings with those of Box AI to see whether we reached the same conclusions. In some cases, the AI results aligned with mine. Our findings differed when there were numerous comments, as the AI tool seemed to struggle to identify the overarching themes.

Looking across the poorest rating scales and the most prominent improvement themes led to the key takeaways and corresponding recommendations.

Share out

To share the findings, I initially presented the detailed results to each of the six original Product Management groups. The study was designed for them, as they were accountable for the products and services covered in the survey. I followed a similar format for each group: reviewing the results of the relevant rating-scale questions and presenting the top themes from the open-ended responses specific to their area. I also shared a link to my detailed analysis so they could review all of the feedback shared by employees.

I also prepared a formal report and organized a website with the findings. The report was presented to the IT leadership team, and the website gave employees who participated in the survey a chance to review the results.

Impact

The IT leadership team was probably the most eager to hear the results of the survey. This was the first time they had an evaluation of employee perceptions specific to their suite of products and services. Across all of IT, they had insights into how they were doing and where they could improve. The results also provided them a benchmark to compare against in the future.

During the formal share out, they were able to assign each key takeaway to a small team. They had a new way to prioritize the work of the IT organization and were both confident and excited about this new direction because it was based on direct employee feedback.