Capturing the human element in an artificial world
How AI can actually make research more people-centric
by Eric Tayce
“I was shooting bricks all night” is the kind of comment most people can understand coming from a professional basketball player after a losing game. It’s also the kind of comment that led artificial intelligence to claim Golden State Warriors shooting guard Klay Thompson had been on a window-breaking vandalism spree in the San Francisco area earlier this year (he had, in fact, not thrown any real bricks at all). This is an extreme instance, of course, but it’s a good reminder that humans are still a necessary ingredient for conducting insightful research and data analysis about…humans. Even before the recent explosion of generative AI, we saw a concerted effort to inject more humanity into research processes and deliverables. But the promise of flexibility and efficiency through AI is impossible for our industry to ignore.
Initially, this seems to present a paradox: our desire for humanized insights contradicts our inevitable shift toward generative processes. However, looking deeper, we see potential for AI and HI (human intelligence) to not only coexist but to complement one another. As such, we believe AI can help researchers comprehensively elevate value for clients, while expanding our capacity for human understanding like never before.
But before we look at how – and why – we believe that’s possible, let’s further explore the unique challenge of deploying AI in an industry that’s already sensitive about overreliance on technology.
Lived experiences
The industry’s push to humanize research data manifests in the language we often use. Phrases like “human element” and “people-centric” speak to insights that reflect real-life perspectives and lived experiences. This human focus also manifests in the proliferation of streamlined, narrative-style reporting that has replaced data-intense tomes and clinical-sounding slide titles.
Through storytelling, we tackle the dual objectives of informing business strategy while also communicating with organizational stakeholders on an emotionally engaging level. The payoff, we believe, lies in a deeper understanding of the unique motivations and experiences that drive customer behaviors. Finding the “human” in the data is, metaphorically speaking, the Holy Grail in research. Humans are – after all – the subject of everything we study.
Yet, since the introduction of generative large language models (LLMs), the Holy Grail seems to have shifted. As new AI capabilities become available, we move a few steps closer to removing the “human” that we tried so hard to prioritize. Where once we yearned for deeper human understanding, we now find ourselves evaluating the trade-offs made for synthetic data sets and self-optimizing algorithms.
The best of both worlds
But the benefits of “humanizing” research and the benefits of embracing AI don’t have to be mutually exclusive. While it may seem counterintuitive, we’ve found that AI can actually make research more human. Based on our experience, here are a few key examples.
Humanizing surveys
Researchers have long acknowledged the limitations of survey research and its inability to re-create the experience of making real-world decisions. This is an area where artificial intelligence can help. For starters, AI allows us to minimize the unnatural artifice of survey research through conversational experiences via chatbots, even if just for small portions of the survey. For instance, our own experimentation shows that following open-end responses with conversational AI-powered probing leads to an average of 270% more unstructured data being collected from respondents.
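To make the probing idea concrete, here is a minimal sketch of how a follow-up probe might be assembled after an open-end response. All names (`build_probe_prompt`, the prompt wording, the probe budget) are illustrative assumptions, not our production system; a real implementation would send the prompt to an LLM and apply guardrails to the reply.

```python
# Illustrative sketch: generate ONE neutral follow-up probe after an
# open-ended survey answer, within a fixed probe budget.

def build_probe_prompt(question: str, answer: str,
                       max_probes: int, probes_so_far: int):
    """Return a probe prompt for the LLM, or None once the budget is spent."""
    if probes_so_far >= max_probes:
        return None  # stop probing; respect the respondent's time
    return (
        "You are a neutral survey interviewer. Do not praise or judge "
        "the answer.\n"
        f"Original question: {question}\n"
        f"Respondent's answer: {answer}\n"
        "Ask ONE short, open-ended follow-up that asks why, or asks for "
        "a concrete example."
    )

prompt = build_probe_prompt(
    "What do you think of the new package design?",
    "It feels cheap.",
    max_probes=2, probes_so_far=0,
)
```

The probe budget is the key design choice here: it is what keeps conversational probing a "small portion of the survey" rather than an open-ended interrogation.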
Organizations can also use generative LLMs to mimic natural conversation through iterative questioning techniques that can capture a much wider range of consumer perceptions and opinions than traditional approaches. In fact, we’ve found that properly trained chatbots with well-defined guardrails can reliably identify optimal price levels, investigate decision drivers and generally deliver a richer experience for the respondent. The humanizing trend creates the tension; artificial intelligence helps resolve it.
Assessing unstructured data
In addition, AI tools can analyze unstructured data more effectively than traditional methods, parsing out more organic, more human insights. Unstructured data has traditionally held limited business value for organizations, simply because the methods for analyzing it are either computationally too complex or logistically too time-consuming. However, AI’s massive computing power has removed this barrier. Researchers are unlocking new value by using AI-powered algorithms to execute techniques like sentiment analysis and theme detection on unstructured data. These deliver respondent-level indicators that can be used to predict behaviors or to develop targeting strategies.
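The output of that kind of pipeline — respondent-level indicators derived from free text — can be sketched with a toy example. The tiny keyword lexicons below are purely illustrative stand-ins for trained sentiment and theme models; only the shape of the output (one indicator row per comment) is the point.

```python
# Toy illustration of turning an unstructured comment into
# respondent-level indicators: a sentiment score plus detected themes.
# Real pipelines use trained models, not hand-written word lists.

POSITIVE = {"love", "great", "easy", "fast"}
NEGATIVE = {"hate", "slow", "broken", "confusing"}
THEMES = {
    "price": {"price", "cost", "expensive", "cheap"},
    "support": {"support", "help", "agent"},
}

def score_comment(comment: str) -> dict:
    words = set(comment.lower().split())
    sentiment = len(words & POSITIVE) - len(words & NEGATIVE)
    themes = [name for name, keywords in THEMES.items() if words & keywords]
    return {"sentiment": sentiment, "themes": themes}

row = score_comment("I love the product but support was slow")
# row -> {"sentiment": 0, "themes": ["support"]}
```

Indicators like these can be appended to each survey record, which is what makes them usable downstream for prediction and targeting.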
Creating data cohesion
Estimates vary, but most experts agree that between 80% and 90% of all enterprise data is unstructured, making AI’s inherent flexibility a tremendous strength by broadening the range of data sources that can be linked with survey records. Social media posts, customer service chat logs, survey research and call transcriptions are all candidates for deepening our understanding of consumer behaviors. Each data point humanizes the insights by building context. Intelligently integrating these sources enriches the depth and texture of insights by building truly multifaceted perspectives. Ultimately, this helps support strategies that align with the authentic sentiments and preferences of a target audience.
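Mechanically, linking these sources usually comes down to merging records on a shared respondent identifier. The sketch below assumes hypothetical field names and a simple dictionary merge; real integrations involve identity resolution, consent management and messier keys.

```python
# Sketch of linking survey records with other touchpoints by a shared
# respondent ID. All IDs and field names are illustrative.

survey = {"r001": {"nps": 9}, "r002": {"nps": 3}}
chat_logs = {"r002": {"last_issue": "billing"}}
social = {"r001": {"mentions": 2}}

def link_sources(*sources):
    """Merge per-respondent records from several sources into one profile."""
    merged = {}
    for source in sources:
        for respondent_id, fields in source.items():
            merged.setdefault(respondent_id, {}).update(fields)
    return merged

profiles = link_sources(survey, chat_logs, social)
# profiles["r002"] -> {"nps": 3, "last_issue": "billing"}
```

Each additional source fills in another facet of the same respondent's profile, which is the "context-building" described above.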
Mastering reports
The humanizing impact of AI also extends into the domain of reporting, where it’s reshaping the way insights are communicated and understood. Fine-tuned GPTs (generative pre-trained transformers) are infinitely moldable, able to ingest data and adopt engaging personas that mimic real consumers down to the smallest details.
When informed with data from a market segmentation, for example, AI-powered personas have the potential to provide insightful opinions, reference detailed statistics, discuss motivations behind behaviors and even engage in simulated conversations that illuminate varied perspectives and insights rooted in facts. Thus, AI helps transform traditional static deliverables into vibrant, interactive narratives that foster a deeper, self-led understanding of research outcomes.
Ethical and operational challenges
While the integration of AI into marketing research unlocks a host of new opportunities, it also presents significant ethical and operational challenges. First, security and privacy concerns are paramount – LLM-based AI systems typically employ APIs to facilitate communication between remote servers and users, which means sensitive data travels outside company firewalls. And while small language models can be used on local devices, they currently lack the computational power needed to execute the kind of AI-powered work described earlier.
Furthermore, today’s leading AI language models are trained to provide friendly assistance to the user – even casual users notice a helpful, upbeat tone in their interactions with GPTs. Without the right constraints in place, chatbots will reward positive survey responses with affirming interjections like “that’s great” and “I’m glad you think so,” potentially prodding a respondent toward a conclusion they may not have otherwise reached.
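One simple guardrail against this tone problem, beyond prompt instructions, is a post-processing filter that strips affirming interjections before the reply reaches the respondent. The phrase list and function below are an illustrative assumption, not an exhaustive or production-grade filter.

```python
# Illustrative guardrail: remove affirming interjections from a chatbot
# reply so the interviewer stays neutral. A real system would pair this
# with system-prompt constraints, not rely on a phrase list alone.
import re

AFFIRMATION_PATTERNS = [
    r"that's great[.!]?",
    r"i'm glad you think so[.!]?",
    r"wonderful[.!]?",
]

def neutralize(reply: str) -> str:
    cleaned = reply
    for pattern in AFFIRMATION_PATTERNS:
        cleaned = re.sub(pattern, "", cleaned, flags=re.IGNORECASE)
    return cleaned.strip()

neutral = neutralize("That's great! What price would feel fair to you?")
```

Filters like this are a blunt instrument, but they catch the most common praise phrases even when the model ignores its tone instructions.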
It’s also important to remember that AI mistakes and hallucinations are fairly common – but they’re not completely understood (even by LLM builders themselves). This researcher has reviewed survey chat logs in which a chatbot reached an optimal price for a product and then informed the respondent that “a draft contract is in the works!” AI/LLM models are essentially a black box; researchers are wise to maintain close oversight and to provide copious direction for how they execute prompts.
But LLM models become less of a black box when you train them on proprietary data and use carefully engineered prompts designed around specific research needs. An off-the-shelf LLM creates output that reflects the entirety of the data on which it was trained, complete with any errors and biases that may be present. Thus, it’s absolutely essential to train and fine-tune models so they stay grounded in relevant facts while remaining flexible enough to reliably navigate open-ended interactions. To this end, special attention needs to be given to dataset quality; only real human inputs will provide models with the ability to mimic the opinions and behaviors of real humans. As with any programmable system, the outputs from AI are only as good as the inputs.
Monumental change
The future trajectory of AI in research promises monumental change. As AI technologies advance, we anticipate the development of intuitive systems capable of conducting in-depth, conversational surveys. These systems will adapt in real time to the flow of dialogue, much like a human interviewer. Voice capabilities will mimic real interviewers, with the added benefits of perfect diction, an encyclopedic knowledge of the topic and an ability to leap across devices (watch, phone, tablet) to better accommodate respondents’ schedules.
Another exciting possibility lies in AI’s potential to create fully personalized survey experiences. AI will tailor surveys based on a respondent’s personal profile, known behaviors, social network and even the emotional cues picked up throughout the interview. With all the right context in place, an AI model will have complete freedom to accomplish a set of research objectives using any path it chooses, dramatically increasing respondent engagement as well as the richness of data captured.
Further, integrating AI with existing technologies will lead to continuously learning systems. These systems work off “North Star” directives that establish ground rules for how algorithms are refined as new data comes online, allowing them to improve their questions and interactions over time without human intervention. Models with highly specialized profiles will evolve and new data silos will be a source of competitive advantage in the marketplace.
Deepen our understanding
The integration of artificial intelligence into traditional marketing research represents an opportunity to unlock fresh, dynamic perspectives on the human experience and to fuel actionable insights. By enhancing data collection, analysis and reporting, AI offers a pathway to deepen our understanding of consumer behaviors, motivations and preferences.
However, we can only realize this potential through thoughtful implementation that addresses ethical and operational issues. Moreover, we must continue to prioritize high-quality human input as we expand our usage of AI. After all, humans are the subject of our study.
As AI evolves, its role in creating more empathetic, nuanced and dynamic insights will become increasingly important for driving innovation and connecting more deeply with consumers. The journey of integrating AI into marketing research is just beginning and its transformative impact will undoubtedly reshape the industry in exciting, and likely very unexpected, ways.
This article was originally published in Quirk’s magazine.
Eric Tayce is VP, Innovation Solutions at Burke. With over 20 years in the industry, Eric has experience that spans the entire research process from the perspectives of both a supplier and a client. Eric’s expertise covers many business issues, with particular emphasis on brand equity, brand image and positioning, segmentation, and product optimization.