After an unmoderated study is completed, the Results screen provides a consolidated view of participant activity and AI-generated insights. It combines transparent performance metrics with fast AI summarization, helping researchers quickly evaluate study performance, identify major themes in participant responses, and drill down into individual data as needed.
To open it, navigate to the Studies section and select a completed unmoderated study. The AI Summary tab opens by default, presenting a high-level summary and performance metrics.
AI Summary
Performance Metrics
At the top of the results screen, the Performance Metrics section shows basic study statistics. Use this section to verify that the study has been fully fielded and that timing falls within expectations:
- Created on—Date the study was launched.
- Platform—The device used by participants (e.g., Desktop).
- Total sessions needed—Target number of completions.
- Completed sessions—Number of finished sessions.
- Average session time—The mean time participants spent on the study.
- Study updated on—Timestamp of the most recent update.
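If you export session data for your own reporting, the metrics above are straightforward to recompute. The sketch below is a minimal example; the record structure and field names are hypothetical, not Fuel Cycle's actual export schema.

```python
from statistics import mean

# Hypothetical exported session records; field names are illustrative only.
sessions = [
    {"participant": "P1", "status": "completed", "duration_sec": 312},
    {"participant": "P2", "status": "completed", "duration_sec": 245},
    {"participant": "P3", "status": "abandoned", "duration_sec": 80},
]

target_sessions = 3  # the study's "Total sessions needed"

# Only finished sessions count toward completions and average time.
completed = [s for s in sessions if s["status"] == "completed"]
completed_count = len(completed)
avg_time = mean(s["duration_sec"] for s in completed)

print(f"Completed sessions: {completed_count}/{target_sessions}")
print(f"Average session time: {avg_time / 60:.1f} min")
```

Abandoned sessions are excluded from the average so that drop-offs do not skew the timing picture.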
AI Study Summary 
The AI Study Summary automatically synthesizes open-ended responses into a structured, readable summary. It identifies key participant insights and includes supporting quotes from actual responses.
Each insight includes:
- A primary takeaway derived from common themes
- Verbatim quotes supporting the insight, pulled directly from participant responses
- A tooltip explaining how the insight was generated
- A Regenerate option for creating a new version of the summary
You can regenerate the summary up to three times. Each version reflects the same data but may vary in language or emphasis.
Ask AI
The Ask AI tab provides researchers with a quick, conversational way to explore open-ended study responses. Rather than manually scanning transcripts, you can type a question and receive immediate, AI-generated answers based on your study’s actual data. Ask AI is especially useful in early-stage analysis, when you're shaping a narrative or seeking direction before deeper manual review.
Use Ask AI when you want to:
- Accelerate exploratory analysis
- Validate patterns spotted in recordings or summaries
- Extract quotes for presentations or reports
- Compare participant sentiment across topics
When you open the Ask AI tab, you’ll be greeted with a welcome panel that introduces the assistant’s functionality and limitations.
Capabilities
- Study awareness—The assistant understands the context of the current study, including participant responses.
- Unlimited queries—You can ask follow-up questions without restriction.
- Visualized output—Some answers may include charts or graphs to help you interpret data patterns.
- Persistent thread—The assistant remembers the flow of your session to maintain continuity.
Known Limitations
- Beta release—The feature is still being refined.
- Occasional errors—Some responses may be inaccurate or misleading.
- Bias risk—Language models may reflect unintended bias in generated content.
How to Use Ask AI
- Type a question related to your study. For example:
  - "What do participants say about trust in influencers?"
  - "Summarize opinions about product usability."
- Review the answer. The AI will generate a response using data from your study, potentially citing quotes or trends.
- Refine or follow up. You can dig deeper with clarifying or comparative questions, continuing the thread naturally.
- Interpret with care. While the AI surfaces themes quickly, it’s important to cross-reference with original participant data when making decisions.
Responses
The Responses tab serves as your live data interface, bridging structured task formats and participant-level feedback. Its charts and tables show how participants answered each question, supporting both quick reviews and detailed analysis. Use this tab to spot trends among participants, check their exact feedback, and understand the context behind the numbers.
When to Use This Tab
- To review individual responses tied to specific tasks
- To compare open- and closed-ended answers in one view
- To spot distribution trends in numeric or choice data
- To extract quotes and clips for stakeholder reporting
Task Results Panel
On the left, the Task Results panel lists all questions or prompts in the order they appeared in the study. Each item includes:
- Prompt number and text (truncated if long)
- Task type, such as Multiple Choice, Open Text, Speak Loud, or Number Rating
Clicking any task loads its corresponding data in the response viewer on the right.
Aggregated Results View
For closed-ended tasks (e.g., multiple choice, single choice, or ratings), the right panel displays:
- Visual charts showing the number and percentage of responses per option
- Response count totals in a compact bar layout
- Live updates based on current filters or criteria
This provides an at-a-glance view of consensus and variation across your sample.
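The per-option counts and percentages shown in the charts can be reproduced with a simple tally when working from an export. This is a minimal sketch; the answer values are made-up examples, not real study data.

```python
from collections import Counter

# Hypothetical multiple-choice answers from a study export.
answers = ["Very easy", "Easy", "Easy", "Neutral", "Easy", "Very easy"]

counts = Counter(answers)  # tally responses per option
total = len(answers)

# Print each option with its count and share, most popular first.
for option, n in counts.most_common():
    print(f"{option}: {n} ({n / total:.0%})")
```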
Individual Response Table
Beneath the chart, individual responses are listed in a table format. Columns include:
- Participant Name
- Response (for the selected prompt)
- Duration of the response
- Date the session was completed
- Action to Play back the session recording (if applicable)
Use this section to cross-reference quotes, validate participant intent, or pull examples for reports.
Filter and Sort Tools
At the top of the page, use the Filter by field or criteria dropdown to narrow results. You can segment by metadata such as completion status or platform to focus your analysis.
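Conceptually, the filter dropdown keeps only the rows whose metadata matches every selected criterion. The sketch below illustrates that idea on hypothetical response rows; the field names are assumptions, not Fuel Cycle's schema.

```python
# Hypothetical response rows with metadata fields (illustrative names).
rows = [
    {"participant": "P1", "platform": "Desktop", "status": "completed"},
    {"participant": "P2", "platform": "Mobile",  "status": "completed"},
    {"participant": "P3", "platform": "Desktop", "status": "partial"},
]

def segment(rows, **criteria):
    """Keep rows whose metadata matches every key=value criterion."""
    return [r for r in rows
            if all(r.get(k) == v for k, v in criteria.items())]

# e.g. focus the analysis on completed Desktop sessions
desktop_completed = segment(rows, platform="Desktop", status="completed")
print([r["participant"] for r in desktop_completed])
```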
Recordings
The Recordings tab provides a complete view of individual participant sessions, giving researchers direct access to the raw, unfiltered feedback behind any study. This view supports deeper analysis, evidence gathering, and validation of AI-generated insights. By enabling fast toggling between sessions, easy playback of recordings, and quick identification of high-value moments, the Recordings tab anchors the researcher's workflow in primary data.
Recording List
A scrollable table of all completed sessions. Each entry typically shows:
- Participant Name
- Completion timestamp
- Session duration
- Session status
- Play Link
- Delete
Click any entry to view the full session details.
Search and Filter
Search open-text fields or filter by metadata such as completion time, device type, or question tags to isolate patterns or review targeted participant groups.