How can you learn from an event and create a better one next time when you know your next event can be completely different from the one you just wrapped up? Event creators didn't have a tool for that: today's venue and speakers could have been great, but next week's event might be online and on a different subject. Because of that, the traditional way of measuring success, setting KPIs that refer to isolated touchpoints, can be useless.
That bothered us. Many companies spend upwards of 20% of their marketing budget on events, and when that budget shrinks, events are usually the first thing to be cut. So how could we help demonstrate the impact an event has on the people participating in it, while helping event organizers learn and improve?
The event industry didn't have a standard for measuring the impact of events, and we wanted to create a single metric to evaluate them. Because of the dynamic nature of events, we needed to look for the common denominator of successful ones. From the start, we drew on Lyyti's shared understanding that the only people able to define what makes an event valuable are the ones participating in it. So we concluded that asking participants directly would be the best way to evaluate their experience at an event.
We hypothesised that the one thing all events have in common is their participants, who commit their time in exchange for the value they derive from the event. To dig deeper and give this hypothesis more nuance, I interviewed clients who do active, hands-on work organizing events (think of job titles like Event Organizer) as well as clients who approach events in a more high-level, strategic way (such as CMOs and Event Directors). This is what I learned:
Have a clear definition of success: For them, the hallmark of a successful event was indeed when participants felt their time was well spent.
Don't have unified metrics. Kind of: Some event organizers defined goals-set/goals-met metrics, but those were not used uniformly within their own companies. All the event organizers I interviewed were dissatisfied with that.
Don't have a sufficient overview of the event chain: Event metrics are complicated because they aim to measure single touchpoints rather than the entirety of the experience. This makes it difficult to develop a framework for creating successful event experiences.
Define the experience but can't measure it: One Event Director couldn't specify their measuring tools because the data was scattered across different pieces of software.
Value subjective feelings: Everyone interviewed in this group found that feelings were fundamental. Some said events are supposed to be emotional experiences, while others mentioned how a positive experience influenced brand image.
Emmi and I decided on a design sprint to kick-start this feature. Together with a Product Manager, a front-end developer, and a back-end engineer, we spent five days moving from refining our problem definition to building an interactive prototype and running usability tests.
While the usability tests showed we clearly had a lot of iterating to do, some comments gave us encouragement: our testers were getting behind the idea of measuring their events, and they were excited to share their scores with their co-workers. We were onto something!
It was evident that to make this feature work, we needed to focus on making it easy for people to share their experiences. While exploring different ways to deliver our survey, we narrowed the options to two leading contenders: e-mail and SMS. To ship quickly and start learning from our users, we opted for e-mail, which was already widely used in our system and wouldn't create a cost barrier.
Knowing that response rates for e-mail surveys ranged between 15% and 30%, we needed to make sure we stayed on the higher end of that range. So we held another ideation session to define precisely how we could achieve that.
In just seven months, we moved from shaping our Experience Value Score concept to launching its beta version. Our Design team led the discovery process with our project manager, and by March, we had a project team of two designers, a project manager, a front-end developer, and a back-end engineer. A few months into development, I became the principal designer for the project, with Emmi in a supporting role to accommodate her Lead responsibilities.
We invited 41 users to participate in our beta launch in September 2020, with analytics in place, supported by heatmaps and continuous conversations with our users.
Lyyti users are known for their engagement with the tool's development, and they provided us with valuable feedback during the beta. Much of it related to issues we had already mapped earlier, and we were thrilled to see that we received virtually no negative feedback about the problem the feature was solving.
After we addressed the most pressing issues and feedback, the Experience Value Score feature moved from beta to production.
One of the most common pieces of feedback we received was a request to share Experience Value Score results with others. Again, we had pinpointed this during the discovery process, and it became the next iteration point for our team. By this point, the team configuration had changed slightly, and I was now the sole designer responsible for the feature.