Understanding user behaviour is like predicting the weather – we may think we know what’s going to happen, but there is always an error rate attached to the forecast. Enter chaos theory.
Per Wikipedia, “Chaos theory is a branch of mathematics focusing on the behavior of dynamical systems that are highly sensitive to initial conditions. ‘Chaos’ is an interdisciplinary theory stating that within the apparent randomness of chaotic complex systems, there are underlying patterns, constant feedback loops, repetition, self-similarity, fractals, self-organization, and reliance on programming at the initial point known as sensitive dependence on initial conditions. The butterfly effect describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state.”
When designing, building, and deploying systems, IT generally has an “idea” of what users will do – and we are generally right and wrong at the same time (Schrödinger’s cat, anyone?). No matter how rigorously a system is planned, stressed, and tested, some aspect of what a user will actually do is always missed. We end up as IT meteorologists.
Yet there are many methods by which one can minimize the error rate associated with user behaviour and control the “initial conditions” to ensure “chaos” does not reign. As we know, user feedback is extremely important, and understanding not only user requirements but user expectations demands extreme meticulousness.
First, assumptions about user requirements and expectations cannot be derived entirely from historical knowledge, nor entirely from current knowledge. Requirements and expectations must equally include historical, current, and future knowledge – what did you do in the past, what are you doing now, and what do you hope to do in the future? Assessing user behaviour through this collaborative approach is a psychological method for drawing negative viewpoints out of users in a positive manner, without fear of retribution. Users become more likely to provide honest and constructive feedback, even on items they have come to accept through complacency.
We then control the “initial conditions” through a gradual release of technology, ensuring that champions have their requirements and expectations exceeded through hands-on instruction, over-the-shoulder monitoring, and extensive feedback. We take our time in our release strategy because we do not want to overlook or miss any component of the technology – the smallest items may create an enormous butterfly effect in the time to come.
On occasion, we’ve been bitten by an application we were told was “retired”, only to find it was actually used by one person in the organization who happened to need it “immediately.” As a rule, we now assume that everything is used, factor this into the design, and incorporate it into the build. If it is never used, they were right; if it is used, we were. As we said, we are always equally right and wrong.
We also have to assume the user hasn’t tested everything – or anything. We understand that users have a job to do – daily tasks, routines, their own timelines and deliverables – which can limit how thoroughly they test the platform. It is our responsibility to ensure everything is checked. We have our testing groups live in the platform, carrying out their daily routines over a lengthy period so that, incidentally and over time, they test everything. Then we ask them to test again.
Lastly, we gather feedback, monitor utilization, and ask questions. One part of chaos theory is self-organization, and for us this is fundamentally “shadow IT.” If users consistently have a negative experience and their problems are never resolved, they grow hesitant to provide feedback and instead self-organize, finding their own ways to meet their requirements and expectations. This can harden into an overarching belief that Citrix – or whatever the platform may be – is slow. Address these matters immediately by soliciting feedback, monitoring utilization, and asking questions as soon as micro-patterns are identified. This ensures that if part of the initial build is negative – poor performance, a missing application, or a delta in experience – users will share their experience and the issue will be fixed.
Fundamentally, user behaviour is part IT, part social intelligence, and part psychology. It takes a very experienced engineer or architect to effectively understand what users need and expect from the platform. Our team has engaged with tens of thousands of users across significant timeframes, organizations, and deployments to simplify the analysis of users and their interactions with complex systems. Ensuring the success of the immediate design and deployment, as well as the longer-term evolution of strategy, comes down to thoroughly understanding user requirements and demands and, importantly, controlling potential “butterfly effects” through intense proactive involvement in not only preliminary deployments but ongoing utilization. And that, my friends, is operational excellence – our topic for the next post.