Chapter 6: AI-Generated Data Collection — ThinkNavi User Manual

The Freeform, Competitive Analysis, and Sensory Evaluation modules all generate data using large language models (LLMs). They share the common operations described below (schema setup, data generation, review/editing, and CSV export).

6.1 Column Schema Setup

Before generating data, define the column structure (schema) for the generated data.

Steps:

  1. Review the current column configuration in the “Schema Settings” section
  2. Each module has preset (default) columns

Adding Columns:

  1. Enter a column name in the text input field (e.g., “Target Audience”, “Technical Features”)
  2. Click the “+” button or press Enter to add
  3. Added columns appear in the list

Removing Columns:

  • Click the “×” button next to the column name

Reset to Preset:

  • Click “Reset to Preset” to restore the module’s default column configuration

Freeform Preset Columns

  • name, description, features, category

Competitive Analysis Preset Columns

  • company, product, strengths, weaknesses, market_position

Sensory Evaluation Preset Columns

  • stimulus, visual, tactile, auditory, olfactory, overall_impression
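
The schema operations and presets above can be sketched as a small data structure. This is a hypothetical illustration, not ThinkNavi's actual code; the function names are invented, but the preset column lists match the ones documented in this section:

```python
# Preset (default) columns for each module, as listed above.
PRESET_COLUMNS = {
    "freeform": ["name", "description", "features", "category"],
    "competitive_analysis": ["company", "product", "strengths",
                             "weaknesses", "market_position"],
    "sensory_evaluation": ["stimulus", "visual", "tactile", "auditory",
                           "olfactory", "overall_impression"],
}

def add_column(schema, name):
    """Add a column if it is not already present (mirrors the '+' button)."""
    if name and name not in schema:
        schema.append(name)
    return schema

def remove_column(schema, name):
    """Remove a column (mirrors the '×' button next to a column name)."""
    if name in schema:
        schema.remove(name)
    return schema

def reset_to_preset(module):
    """Return a fresh copy of the module's defaults ('Reset to Preset')."""
    return list(PRESET_COLUMNS[module])

schema = reset_to_preset("freeform")
add_column(schema, "target_audience")
print(schema)  # → ['name', 'description', 'features', 'category', 'target_audience']
```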

6.2 Running Data Generation

Settings:

  • Number of items: Number of data rows to generate (e.g., 10, 20, 30, 50)
  • Category (optional): Specify a data category (e.g., “Smartphones”, “Cafes”)
  • Prompt: Instructions for AI (e.g., “Compare and analyze popular smartphones in the Japanese market”)

Steps:

  1. Select or enter the number of items
  2. Enter a category (optional)
  3. Write instructions for the AI in the prompt
  4. Click the “Generate” button

During Generation:

  • AI streams data generation, adding rows to the table in real-time
  • A progress bar is displayed
  • If you try to leave the page during generation, a “Leave page?” confirmation dialog appears

Credit Cost: Data collection costs 2 credits per run
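
As a hypothetical sketch, the settings above can be thought of as a generation request whose rows stream back one at a time and are appended to the table as they arrive. The payload shape, field names, and `stream_rows` helper are invented for illustration only; ThinkNavi's internal API is not documented here:

```python
# Invented request shape mirroring the settings in this section.
request = {
    "num_items": 20,
    "category": "Smartphones",                      # optional
    "prompt": "Compare and analyze popular smartphones in the Japanese market",
    "columns": ["company", "product", "strengths",
                "weaknesses", "market_position"],
}

def stream_rows(req):
    """Simulate streamed generation: one row at a time (placeholder values)."""
    for i in range(req["num_items"]):
        yield {col: f"value_{i}" for col in req["columns"]}

table = []
for row in stream_rows(request):
    table.append(row)  # the UI appends each row to the table in real time
```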

6.3 Reviewing and Editing Results

After generation completes, results are displayed in the table.

Operations:

  • Exclude Rows: Use the trash icon on each row to exclude unwanted rows (they won’t be saved to CSV)
  • Auto-Coding: Click “Auto-Code” to have AI automatically assign tags/codes to each row

6.4 Saving as CSV

Save generated results as a CSV file.

Steps:

  1. After reviewing/editing the table, click “Save as CSV”
  2. A filename is automatically generated
  3. Once saved, the file appears in the sidebar file list

Storage: CSVs are stored in Cloudflare R2 storage. They’re linked to projects and accessible from any device when logged in.
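
Because rows excluded in step 6.3 are not saved, the exported file contains only the kept rows. Below is a minimal sketch of that behavior using Python's standard `csv` module, with invented sample rows and an assumed `excluded` flag (ThinkNavi performs this in-app; the code is illustrative only):

```python
import csv

columns = ["name", "description", "features", "category"]  # Freeform preset
rows = [
    {"name": "Idea A", "description": "d1", "features": "f1",
     "category": "tech", "excluded": False},
    {"name": "Idea B", "description": "d2", "features": "f2",
     "category": "misc", "excluded": True},   # marked via the trash icon
]

with open("freeform_results.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=columns)
    writer.writeheader()
    for row in rows:
        if not row["excluded"]:  # excluded rows are not written to the CSV
            writer.writerow({c: row[c] for c in columns})
```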

6.5 Freeform — Creative Ideation Tool

Freeform is a module for having AI generate ideas and perspectives freely on a given theme. Unlike Competitive Analysis, which targets existing products/companies, its primary use is collecting diverse perspectives on topics that haven’t taken shape yet.

Use Cases for Freeform

Freeform is ideal for posing “questions” to AI and getting comprehensive lists of perspectives:

Identifying Social Issues:

  • “List potential problems related to SNS usage from psychological, social, legal, and educational perspectives”
  • “What problems might retirees face, from economic, health, social, and purpose-in-life perspectives?”
  • “What factors might affect remote workers’ mental health?”

Brainstorming Ideas:

  • “What approaches could revitalize rural areas, from technology, tourism, education, and industry angles?”
  • “Daily inconveniences for parents and solution ideas”
  • “30 ideas to increase university library usage”

Exploring Research Topics:

  • “Impact of generative AI on education, both positive and negative”
  • “Factors that inhibit the circular economy”
  • “Environmental, behavioral, and psychological factors affecting sleep quality”

Creating Personas/Scenarios:

  • “Potential user personas for a health management app, with age, challenges, motivations, and usage scenarios”
  • “Customer segments to consider when opening a new cafe”

Column Schema Customization Examples

In Freeform, customizing columns to match your theme is important. The defaults (name, description, features, category) work, but theme-specific columns significantly improve output quality.

Example 1: Social Issue Analysis

  • problem, affected_group, severity, root_cause, possible_solution

Example 2: Ideation

  • idea, target_user, expected_effect, difficulty, uniqueness

Example 3: Research Topic Exploration

  • topic, research_question, methodology, expected_finding, relevance

Example 4: Risk Analysis

  • risk, probability, impact, mitigation, stakeholder

Tips for Writing Prompts

Freeform quality depends on the prompt. Keep these in mind:

Specify perspectives: Providing multiple analytical angles like “from X, Y, and Z perspectives” improves comprehensiveness

  • Good: “Problems with SNS from psychological, social, legal, educational, and business perspectives”
  • Bad: “List problems with SNS” (perspectives may be skewed)

Specify level of detail: State explicitly how specific you want the data

  • “Include specific examples and statistics”
  • “In the context of Japan” or “Reflecting the 2020s situation”

Specify output language: Even with English column names, specifying “output in Japanese” in the prompt generates Japanese content

Generation count tips: Setting a higher count (30-50) and removing unwanted rows later is more efficient
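
The tips above (specify perspectives, detail, and output language) can be combined into a simple prompt-building helper. This is a hypothetical sketch for users who want a repeatable template; `build_prompt` is not a ThinkNavi feature:

```python
def build_prompt(topic, perspectives, detail=None, language="Japanese"):
    """Compose a prompt following this section's tips (illustration only)."""
    parts = [f"List problems with {topic} from "
             + ", ".join(perspectives) + " perspectives."]
    if detail:
        parts.append(detail)                 # e.g. examples, region, era
    parts.append(f"Output in {language}.")   # content language, per the tip
    return " ".join(parts)

print(build_prompt("SNS",
                   ["psychological", "social", "legal", "educational"],
                   detail="Include specific examples and statistics."))
```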

Using Freeform Results

Generated data can be used for these subsequent processes:

  1. Build models with ConceptMap-Text: Load the generated CSV into Model & Explore to visualize idea structure
  2. Classify with Coding: Assign theme codes in the Coding module for categorical analysis
  3. Deep dive in Chat: Paste generated ideas into Chat for individual deep-dive discussions
  4. Prioritize with AHP: Use generated ideas as alternatives in Propensity Analysis (AHP) for evaluation

6.6 Competitive Analysis — Product/Service Comparison

The Competitive Analysis module generates structured comparison data for existing products, services, and companies. The difference from Freeform is that “comparison targets exist in reality” and preset columns are optimized for competitive analysis.

Use Cases

  • New product positioning
  • Pre-market entry competitor research
  • Strengths and weaknesses analysis of existing products
  • Industry mapping for investment decisions

Input Examples

Example 1: SaaS Product Comparison

  • Category: “Project Management Tools”
  • Prompt: “Analyze major project management tools available in the Japanese market by features, pricing, target users, strengths, and weaknesses”

Example 2: Restaurant Chain Comparison

  • Category: “Coffee Chains”
  • Prompt: “Compare major coffee chains in Japan by store count, price range, target audience, and differentiation points”

Example 3: Technology Company Comparison

  • Category: “Generative AI Platforms”
  • Prompt: “Compare major generative AI platforms (OpenAI, Anthropic, Google, Meta, etc.) as of 2025 by model performance, pricing, API specifications, and ecosystem”

Column Customization Examples

Default: company, product, strengths, weaknesses, market_position

Price-focused Analysis:

  • company, product, pricing_model, lowest_price, enterprise_price, free_tier, value_proposition

Technology-focused Analysis:

  • company, product, core_technology, api_availability, integration_ecosystem, scalability, security

6.7 Sensory Evaluation — Sensory/Experience Description Data

The Sensory Evaluation module generates descriptive data about the five senses and experiences. It’s designed for structuring sensory characteristics that are hard to quantify, such as food, cosmetics, materials, and spatial design.

Use Cases

  • Food and beverage taste evaluation
  • Cosmetic texture and fragrance analysis
  • Material touch and appearance comparison
  • Ambient space/environment evaluation
  • Sensory UI/UX usability evaluation

Input Examples

Example 1: Food Sensory Evaluation

  • Category: “Premium Chocolate”
  • Prompt: “Describe detailed sensory evaluations of representative premium chocolate brands covering taste, aroma, texture, appearance, and packaging design”

Example 2: Beverage Evaluation

  • Category: “Sake”
  • Prompt: “Describe representative sake brands in terms of aroma, flavor (sweetness/dryness, acidity, umami), mouthfeel, aftertaste, and ideal serving temperature”

Example 3: Space Evaluation

  • Category: “Coworking Spaces”
  • Prompt: “Evaluate urban coworking spaces on lighting, sound environment, temperature, chair comfort, visual openness, and concentration level”

Column Customization Examples

Default: stimulus, visual, tactile, auditory, olfactory, overall_impression

Food-specific:

  • product, appearance, aroma, taste_profile, texture, aftertaste, pairing_suggestion, overall_score

Space-specific:

  • space_name, lighting, acoustics, temperature, ergonomics, visual_openness, concentration_level, social_atmosphere

6.8 Troubleshooting

  • “API key not set” error: The system key is used by default, but this error may appear when credits are insufficient. Check your balance on the dashboard.
  • Generation stops midway: Check your network connection. If the AI response times out, refresh the browser and retry; note that partial data will be lost.
  • Low-quality results: Make the prompt more specific. Reviewing the column schema and using AI-friendly column names also improves quality.
  • “Insufficient credits” error: Check your credit balance on the dashboard, then purchase additional credits or wait for next month’s allocation.
  • Many empty cells in CSV: Output varies by AI model. Re-run generation or try a different model.