A Growing Reliance on GenAI
The widespread adoption of large language models (LLMs) and GenAI-enabled tools has transformed design and research workflows. However, unlike driving a car, where drivers are trained, licensed, and bound by clear rules, the use of GenAI is subject to no established safety standards or regulatory frameworks. Anyone can use these tools without fully understanding the potential pitfalls.
Key Risks Identified
1. Overreliance on AI Responses
Designers and researchers are accustomed to trusting search engines to return accurate information. GenAI goes further: it summarizes, analyzes, and generates conclusions, which can encourage misplaced confidence. Unlike search results, its outputs are not factual lookups; they are generated interpretations, and that judgment can be flawed.
2. AI’s Inability to Recognize Its Own Errors
LLMs can produce convincing but incorrect information without signaling any uncertainty. In one example, an LLM drafted a complete eulogy, fabricating personal details before the user had provided any information about the deceased. In areas such as persona research, this inability to flag its own errors can lead to inaccuracies that go undetected.
3. Embedded Bias in Training Data
GenAI systems learn from vast but incomplete datasets shaped by cultural, social, and historical biases. Patterns such as gender or racial bias can inadvertently appear in outputs, and when AI-generated material is used to train new systems, those biases can multiply. Detecting subtle biases in outputs like workflows or user journeys can be challenging.
4. Intellectual Property Exposure
Data entered into some GenAI tools may be used to train those systems. Uploading proprietary materials—such as company slide decks or research data—could unintentionally disclose trade secrets. Conversely, outputs might incorporate copyrighted or patented elements from other sources, exposing organizations to legal risks.
5. Illusion of Completeness
AI-generated personas and user journeys can appear comprehensive due to their detail and volume. However, critical elements may be missing, and users unfamiliar with the subject matter may not recognize the gaps. Without manual review, such outputs can lead to flawed decision-making.
Mitigation Strategies
Be Skeptical and Prepared to Recover.
Accept that GenAI tools can and will produce errors. Always plan for verification and correction.
Submit Only Verifiable Tasks.
Give GenAI tasks whose results can be independently fact-checked. For example, tools like Dovetail summarize research data quickly while keeping the original sources accessible for verification.
Treat GenAI as a Subject Matter Expert, Not a Sole Authority.
Use AI outputs as a starting point, then validate through direct research or expert reviews.
Follow Organizational Policies and Confirm Data Handling.
Understand whether a tool retains or trains on submitted data before uploading sensitive materials. If in doubt, seek approval from security teams or managers.
Continue Adding Human Value.
Human insights, facilitation skills, and design creativity remain essential. GenAI can assist, but it cannot replace the strategic and empathetic contributions of experienced practitioners.
Conclusion
As GenAI tools become integral to UX design and research, the challenge is to use them responsibly. Risks such as bias, misinformation, and intellectual property exposure require awareness and active mitigation. By verifying outputs, respecting data policies, and supplementing AI-generated results with human expertise, practitioners can harness these tools effectively while safeguarding their work and their organizations.