
Speech and Language Technology for Dialog-based CALL

Gary Geunbae Lee, POSTECH


Outline
  • 1. Introduction
  • 2. Spoken Dialog Systems
  • 3. DBCALL: Educational Error Handling
  • 4. PESAA: POSTECH English Speaking Assessment and Assistant
  • 5. Field Study

English Tutoring Methods
  • Traditional Approaches
    • <Classroom>, <Textbook>, <Multimedia>
  • CALL Approaches
    • <CMC>, <ICALL>

Socio-Economic Effects
  • Changing the current foreign language education system in public schools
    • From vocabulary- and grammar-centered methodology
    • To speaking ability
  • Significant effect of decreasing private English education fees
    • Private English education fees in Korea reach up to 16 trillion won annually
  • Expected overseas export effect
    • Japan, China, etc.
Interdisciplinary Research
  • Evaluation
    • Cognitive effect
    • Affective effect
  • NLP
    • Dialog management
    • Error detection
    • Corrective feedback
  • SLA
    • Comprehensible input and output
    • Corrective feedback
    • Attitude & motivation

Second Language Acquisition Theory

  • Input Enhancement
    • Comprehensible input
    • Provision of inputs with high frequency
  • Immersion
    • Authentic environment
    • Direct form-meaning mapping
  • Noticing & Attention
    • Output hypothesis test
    • Corrective feedback
    • Affective factors
  • Motivation
    • Goal achievement & rewards
    • Interest
    • Importance of L2
Dialog-Based CALL (DB-CALL)
  • Spoken Dialog System
  • DB-CALL System

<Educational Robot>

<3D Educational Game>

Existing DB-CALL Systems
  • Alelo
    • Tactical language & culture training system
    • Learn Iraqi Arabic by playing a fun video game
    • Dedicated to serving the language and culture learning needs of the military
  • SPELL
    • Learning English in functional situations such as going to a restaurant, expressing (dis)likes, etc.
    • The speech recogniser is programmed to recognise grammatical and some ungrammatical utterances
  • DEAL
    • Learning Dutch in a flea market situation
    • The model can also convey extra-linguistic signs such as lip-synching, frowning, nodding, and eyebrow movements
SDS Applications
  • Home networking
  • Car navigation
  • Tele-service
  • Robot interface
Automatic Speech Recognition (ASR)

[Architecture diagram] Speech signals (e.g., "버스 정류장이 어디에 있나요?", "Where is the bus stop?") pass through feature extraction and decoding to produce a word sequence. Decoding searches a network constructed from three models: an acoustic model (HMM estimation over a speech DB), a pronunciation model (G2P), and a language model (LM estimation over text corpora).
Spoken Language Understanding (SLU)

[Architecture diagram] An information source feeds feature extraction / selection; the resulting features drive three analyzers (dialog act identification, frame-slot extraction, and relation extraction), whose outputs are combined by unification.
Spoken Language Understanding (SLU)
  • Semantic Frame Extraction (~ Information Extraction approach)
    • Dialog act / main action identification ~ classification
    • Frame-slot object extraction ~ named entity recognition
    • Object-attribute attachment ~ relation extraction

Example of a semantic frame structure, for "How to get to DisneyWorld?":
  • Domain: Navigation
  • Dialog Act: WH-question
  • Main Action: Search
  • Object.Location.Destination = DisneyWorld
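A minimal sketch of this pipeline in Python, assuming toy stand-ins for each component: the dialog-act classifier and slot extractor below are rule-based placeholders with a made-up gazetteer, not the trained models used in the actual system.

from dataclasses import dataclass, field

@dataclass
class SemanticFrame:
    domain: str
    dialog_act: str
    main_action: str
    slots: dict = field(default_factory=dict)

def identify_dialog_act(utterance: str) -> str:
    # Stand-in for a statistical classifier (e.g., MaxEnt over word features).
    return "WH-question" if utterance.lower().startswith(("how", "what", "where")) else "statement"

def extract_slots(utterance: str) -> dict:
    # Stand-in for a named-entity recognizer over frame-slot objects.
    gazetteer = {"DisneyWorld": "Object.Location.Destination"}
    return {slot: name for name, slot in gazetteer.items()
            if name in utterance.replace(" ", "")}

def understand(utterance: str) -> SemanticFrame:
    # Domain and main action are fixed here; the real system classifies them too.
    return SemanticFrame(
        domain="Navigation",
        dialog_act=identify_dialog_act(utterance),
        main_action="Search",
        slots=extract_slots(utterance),
    )

print(understand("How to get to DisneyWorld?"))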

Joint Approach
  • Named Entity ↔ Dialog Act

[Jeong and Lee, SLT2006][Jeong and Lee, IEEE TASLP2008]

HDP-HMM for Unsupervised Dialog Acts

Generative story:
  • Draw β ~ GEM(α) and a background LM ω ~ Dir(ω0)
  • For each hidden state k ∈ [1, 2, …]:
    • πk ~ DP(α′, β)
    • ϕk ~ Dir(ϕ0), θk ~ Dir(θ0)
  • For each dialog d: λd ~ Beta(λ0)
  • For each time stamp t: zt ~ Multi(πz(t-1))
    • For each entity e: ei ~ Multi(θzt)
    • For each word w: select the word type xi ~ Bern(λd);
      if xi = 0, wi ~ Multi(ϕzt); else wi ~ Multi(ω) [background LM]
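A toy forward sampler for this generative story, using a truncated stick-breaking approximation to GEM and finite Dirichlet approximations to the DPs; every hyperparameter, vocabulary size, and dialog length below is invented for the sketch.

import numpy as np

rng = np.random.default_rng(0)
K, V, E = 10, 50, 8          # truncation level, word vocab size, entity vocab size
alpha, alpha2 = 1.0, 5.0     # GEM and DP concentration parameters

# beta ~ GEM(alpha), via truncated stick-breaking
sticks = rng.beta(1.0, alpha, K)
beta = sticks * np.concatenate(([1.0], np.cumprod(1 - sticks)[:-1]))
beta /= beta.sum()

omega = rng.dirichlet(np.ones(V))               # background LM
pi = rng.dirichlet(alpha2 * beta + 1e-6, K)     # pi_k ~ DP(alpha', beta), finite approx.
phi = rng.dirichlet(np.ones(V), K)              # per-state word distributions
theta = rng.dirichlet(np.ones(E), K)            # per-state entity distributions

def sample_dialog(T=5, words_per_turn=6, entities_per_turn=1):
    lam = rng.beta(1.0, 1.0)                    # lambda_d ~ Beta(lambda0)
    z, turns = 0, []                            # start from state 0 by convention
    for _ in range(T):
        z = rng.choice(K, p=pi[z])              # z_t ~ Multi(pi_{z_{t-1}})
        ents = rng.choice(E, entities_per_turn, p=theta[z])
        # x_i ~ Bern(lambda_d): background LM with prob. lam, else state LM
        words = [rng.choice(V, p=omega if rng.random() < lam else phi[z])
                 for _ in range(words_per_turn)]
        turns.append((z, ents.tolist(), words))
    return turns

print(sample_dialog())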

CRF with Posterior Regularization for Unsupervised NER
  • Constraints for NER

[Pipeline diagram] An unlabeled corpus is heuristically matched against a dictionary/DB/Web gazetteer to extract labeled features; these features serve as hypotheses and soft constraints for training a CRF model with posterior regularization (PR).

Unlabeled corpus (excerpt):
  Welcome to the New York City Bus Tour Center .
  I want to buy tickets for me and my child .
  What kind of tour would you like to take ?
  We would like to go on a tour during the day .
  We have two daytime tours: the Downtown Tour and the All Around Town Tour .
  Which tour goes to the Statue of Liberty ?

Gazetteer entries (DICT/DB/Web):
  BOARD_TYPE: Hop-on, Hop-off
  PLACE: Times Square, Empire State Building, Chinatown, Site of the World Trade Center, Statue of Liberty, Rockefeller Center, Central Park

Hypothesis annotations from heuristic matching:
  # We would like to go on a tour during the day . # -> null
    0:1.000: We would like to go on a tour during the day .
  # We have two daytime tours # -> the Downtown Tour and the All Around Town Tour .
    0:1.000: We have two daytime tours
  # Which tour goes to the Statue of Liberty ? # -> null
    0:1.000: Which tour goes to the <PLACE>Statue of Liberty</PLACE> ?
  # You can visit the Statue of Liberty on either tour . # -> null
    0:1.000: You can visit the <PLACE>Statue of Liberty</PLACE> on either tour .

Labeled features (token/context posteriors):
  Welcome          O:1.000
  W1=<s>           O:0.997 PLACE-b:0.001 TOURS-b:0.002 GUIDE-b:0.001
  W2=<s>,Welcome   O:1.000
  W3=_             O:0.997 PLACE-b:0.001 TOURS-b:0.002 GUIDE-b:0.001
  W4=_             O:0.997 PLACE-b:0.001 TOURS-b:0.002 GUIDE-b:0.001
  W5=_             O:0.997 PLACE-b:0.001 TOURS-b:0.002 GUIDE-b:0.001
  W6=to            O:1.000
  W7=Welcome,to    O:1.000
  W8=the           O:0.924 PLACE-b:0.005 PLACE-i:0.006 TOURS-b:0.001 TOURS-i:0.064
  W9=Welcome,the   O:1.000
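A minimal sketch of the heuristic matching step: project gazetteer entries onto unlabeled sentences to obtain BIO-style labels, which would then be turned into the labeled features constraining the CRF. The gazetteer entries copy the slide; the function itself is our illustration.

GAZETTEER = {
    "PLACE": ["Statue of Liberty", "Times Square", "Central Park"],
    "BOARD_TYPE": ["Hop-on", "Hop-off"],
}

def heuristic_match(sentence: str) -> list[tuple[str, str]]:
    tokens = sentence.split()
    labels = ["O"] * len(tokens)
    for tag, entries in GAZETTEER.items():
        for entry in entries:
            etoks = entry.split()
            for i in range(len(tokens) - len(etoks) + 1):
                if tokens[i:i + len(etoks)] == etoks:
                    labels[i] = f"{tag}-b"          # begin tag
                    for j in range(i + 1, i + len(etoks)):
                        labels[j] = f"{tag}-i"      # inside tag
    return list(zip(tokens, labels))

for tok, lab in heuristic_match("Which tour goes to the Statue of Liberty ?"):
    print(f"{tok}\t{lab}")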

Vanilla Example-Based DM (EBDM)
  • Example-based approaches

Turn #1 (Domain = Building_Guidance), from the dialog corpus:
  USER: 회의실이 어디지? ("Where is the meeting room?")
    [Dialog Act = WH-QUESTION] [Main Goal = SEARCH-LOC] [ROOM-TYPE = 회의실 (meeting room)]
  SYSTEM: 3층에 교수회의실, 2층에 대회의실, 소회의실이 있습니다. ("The faculty meeting room is on the 3rd floor; the large and small meeting rooms are on the 2nd floor.")
    [System Action = inform(Floor)]

The dialog example is indexed by semantic & discourse features in the dialog state space:
  • Domain = Building_Guidance
  • Dialog Act = WH-QUESTION
  • Main Goal = SEARCH-LOC
  • ROOM-TYPE = 1 (filled), ROOM-NAME = 0 (unfilled)
  • LOC-FLOOR = 0, PER-NAME = 0, PER-TITLE = 0
  • Previous Dialog Act = <s>, Previous Main Goal = <s>, Discourse History Vector = [1,0,0,0,0]
  • Lexico-semantic Pattern = "ROOM_TYPE 이 어디지?" ("Where is ROOM_TYPE?")
  • System Action = inform(Floor)

At run time, the system retrieves the indexed example having the most similar state.

[Lee et al., SPECOM2009]
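A minimal sketch of the retrieval idea, assuming plain feature-overlap similarity; the real system uses richer indexing and, in later slides, a trained ranker.

EXAMPLES = [
    {
        "features": {
            "domain": "Building_Guidance",
            "dialog_act": "WH-QUESTION",
            "main_goal": "SEARCH-LOC",
            "ROOM-TYPE": 1, "ROOM-NAME": 0, "LOC-FLOOR": 0,
        },
        "system_action": "inform(Floor)",
    },
    # ... more indexed dialog examples
]

def select_action(state: dict) -> str:
    # Return the system action of the example whose indexed features
    # overlap most with the current dialog state.
    def overlap(example):
        return sum(state.get(k) == v for k, v in example["features"].items())
    return max(EXAMPLES, key=overlap)["system_action"]

state = {"domain": "Building_Guidance", "dialog_act": "WH-QUESTION",
         "main_goal": "SEARCH-LOC", "ROOM-TYPE": 1, "ROOM-NAME": 0, "LOC-FLOOR": 0}
print(select_action(state))  # -> inform(Floor)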

Error Handling and N-best Support
  • To increase the robustness of EBDM with prior knowledge

1) Error handling: if the system knows what the user will do next (e.g., the focus node LOCATION in the agenda graph, whose next subtasks are ROOM ROLE, OFFICE PHONE NUMBER, and GUIDE/NEXT_TASK), it can generate dynamic help:
  • AgendaHelp
    • S: Next, you can do the subtask 1) asking the room's role, or 2) asking the office phone number, or 3) selecting the desired room for navigation.
  • UtterHelp
    • S: Next, you can say 1) "What is it?", or 2) "What's the phone number of [ROOM_NAME]?", or 3) "Let's go there."

[Lee et al., CSL2010]

Error Handling and N-best Support
  • To increase the robustness of EBDM with prior knowledge

2) N-best support: if the system knows which subtask will be more probable next, it can rescore the N-best hypotheses (h1~hn). In the agenda graph, each hypothesis (h1 through h4) attaches to a candidate subtask (ROOM NAME, FLOOR, LOCATION, OFFICE PHONE NUMBER) and is reweighted by that subtask's prior.
The Framework of Ranking-Based EBDM

[Architecture diagram] The EBDM retrieves candidate dialog examples; a scoring module computes features for each candidate (discourse similarity, relative position, user intention given the system intention, system intention given the user intention, entity constraint, and dialog act), and a RankSVM turns these features into the calculated scores.

[Noh et al., IWSDS2011]

Dialog Simulation
  • User simulation for spoken dialog systems involves four essential problems

[Diagram] Simulated users interact with the spoken dialog system through three simulation layers: user intention simulation, user utterance simulation, and ASR channel simulation.

[Jung et al., CSL 2009]

Dialog Studio Architecture

[Architecture diagram] Dialog Studio components (together with external components) are organized into five steps:
  • Design step: semantic structure, dialog structure, knowledge structure
  • Annotation step: semantic annotator, dialog annotator, knowledge annotator, and a knowledge importer (from files), producing the SLU corpus, dialog corpus, and knowledge source
  • Language synchronization step: dialog utterance pool
  • Training step: ASR trainer, SLU trainer, DM trainer, and knowledge builder, producing the ASR, SLU, dialog, and knowledge models
  • Running step: ASR, SLU, and DM

[Jung et al., SPECOM 2008]

Architecture of WOZ

[Architecture diagram] The human subject speaks into a mic and hears TTS through a speaker while watching the user screen. The subject's speech is relayed to the wizard over the network (RPC); on the wizard screen, the wizard provides text input and controls the user character and the NPCs.

[Lee et al., SLATE2011]
Global Errors
  • Global errors are errors that affect overall sentence organization. They are likely to have a marked effect on comprehension. [1]

Example:
  S: What is the purpose of your trip?
  U: It's ... I ... purpose business [Intention: inform(trip-purpose)]
  S: Sorry, I didn't understand. What did you say?
  S: You can say "I am here on business"
  U: I am here on business

Hybrid Model
  • Robust to learners' errors
    • Hybrid model combining an utterance-based model and a dialog context-based model

[Architecture diagram] The learner's utterance feeds level-specific utterance models (level 1 to level N, each trained on its own level's data), while the dialog state feeds the dialog context model; their combination yields the learner's intention, which is passed to the dialog manager.

Lee, S., Lee, C., Lee, J., Noh, H., & Lee, G. G. (2010). Intention-based Corrective Feedback Generation using Context-aware Model. Proceedings of the International Conference on Computer Supported Education.

Formulating the prediction as probabilistic inference (u: learner's utterance, s: dialog state, I: intention):

  I* = argmax_I P(I | u, s)
     = argmax_I P(u | I, s) P(I | s)   (chain rule / Bayes' rule; P(u | s) is invariant over I)
     ≈ argmax_I P(u | I) P(I | s)      (utterance model × dialog-context model)

  • Utterance model
    • Maximum entropy
    • Features: word, part of speech
  • Dialog-context model
    • Enhanced k-nearest neighbors
    • Features: previous system intention, previous user intention, current system intention, a list of exchanged information, number of database query results
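A sketch of the combined inference, with hard-coded toy distributions standing in for the MaxEnt utterance model and the k-NN dialog-context model.

INTENTIONS = ["inform(trip-purpose)", "request(repeat)", "greeting"]

def utterance_model(utterance: str) -> dict:
    # P(u | I): a trained MaxEnt classifier in the real system; toy values here.
    return {"inform(trip-purpose)": 0.4, "request(repeat)": 0.3, "greeting": 0.3}

def dialog_context_model(state: dict) -> dict:
    # P(I | s): enhanced k-NN over indexed dialog states in the real system.
    return {"inform(trip-purpose)": 0.7, "request(repeat)": 0.2, "greeting": 0.1}

def predict_intention(utterance: str, state: dict) -> str:
    pu = utterance_model(utterance)
    ps = dialog_context_model(state)
    return max(INTENTIONS, key=lambda i: pu[i] * ps[i])

state = {"prev_system_intention": "Ask(trip-purpose)"}
print(predict_intention("It's ... I ... purpose business", state))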
Dialog-Context Model

Segment #2 (Domain = Fruit Store), from the dialog corpus:
  SYSTEM: Namsu, what would you like to buy today? [Intention = Ask(Select_Item)]
  USER: I'd like to buy some oranges [Intention = Inform(Order_Fruit), ITEM_NAME = orange]
  SYSTEM: How many oranges do you need? [Intention = Ask(Order_Quantity)]
  USER: I need three oranges [Intention = Inform(Order_Quantity), NUM = three]

The dialog state is indexed by semantic & discourse features in the dialog state space:
  • Domain = Fruit_Store
  • Previous System Intention = Ask(Select_Item)
  • Previous User Intention = Inform(Order_Fruit)
  • System Intention = Ask(Order_Quantity)
  • Exchanged Information State = [ITEM_NAME = 'orange' (C), ITEM_QUANTITY = 3 (U)]
  • Number of DB query results = 0

Predicted user intention: Inform(Order_Quantity)
Recast Feedback Generation

[Flow diagram] The user's utterance goes through intention recognition; the recognized intention is used to search the example expression DB; the retrieved example expressions are pattern-matched against the utterance. If the matching score exceeds a threshold θ, the matched example is given as recast feedback; otherwise no feedback is provided.
Local Errors
  • Local errors are errors that affect single elements in a sentence. [1]

Example:
  S: What is the purpose of your trip?
  U: I am here at business [ErrorInfo: prep_sub(at/on)]
  S: On business
  U: I am here on business

[1] Ellis, R. (2008). The Study of Second Language Acquisition. 2nd ed. Oxford: OUP.

Local Error Detector Architecture

[Architecture diagram] Correct text passes through grammatical error simulation to produce erroneous text; an n-gram LM is trained on each side, yielding two recognizers (ASR and ASR'). Their merged hypotheses feed a grammaticality checker and then an error-type classifier, informed by error patterns and error frequency, which produces the feedback.

Lee, S., Noh, H., Lee, K., & Lee, G. G. (2011). Grammatical Error Detection for Corrective Feedback Provision in Oral Conversations. Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, San Francisco.

Two-Step Approach
  • Data imbalance problem
    • A single classifier simply produces the majority class
    • Or suffers a high false positive rate
  • Large number of error types
    • Makes the model learning and selection procedure vastly complicated
  • Grammaticality checking by itself can be useful for some applications
    • Categorizing learners' proficiency level
    • Generating implicit corrective feedback such as repetition, elicitation, and recast feedback

Grammatical error detection example ("I am here at business"):
  Token:               I     am    here  at       business
  1) Grammaticality:   0     0     0     1        0
  2) Error type:       None  None  None  PRP_LXC  None
Grammaticality Checker: Model Learning
  • Binary Classification
    • Support Vector Machine
  • Model Selection
    • Radial Basis Kernel
    • Search for C, γ which optimize:
      • Maximize F-scoreSubject to Precision > 0.90, False positive rate < 0.01
    • 5-fold cross-validation
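A sketch of this model selection with scikit-learn on synthetic data (so the constraints may admit no configuration); the grid values are illustrative.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix, f1_score, precision_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Imbalanced synthetic data standing in for the real grammaticality features.
X, y = make_classification(n_samples=500, weights=[0.9], random_state=0)

def cv_metrics(C, gamma, n_splits=5):
    preds, golds = [], []
    for tr, te in StratifiedKFold(n_splits, shuffle=True, random_state=0).split(X, y):
        model = SVC(C=C, gamma=gamma, kernel="rbf").fit(X[tr], y[tr])
        preds.extend(model.predict(X[te]))
        golds.extend(y[te])
    tn, fp, fn, tp = confusion_matrix(golds, preds).ravel()
    return f1_score(golds, preds), precision_score(golds, preds), fp / (fp + tn)

# Maximize F-score subject to precision > 0.90 and false positive rate < 0.01.
best = None
for C in [1, 10, 100]:
    for gamma in [0.01, 0.1, 1]:
        f1, prec, fpr = cv_metrics(C, gamma)
        if prec > 0.90 and fpr < 0.01 and (best is None or f1 > best[0]):
            best = (f1, C, gamma)
print("selected (f1, C, gamma):", best)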
Error Type Classification
  • Error type information is useful for
    • Meta-linguistic feedback
    • Sophisticated learner model
  • Simplest way
    • Choose the error type associated with the top ranked error pattern
    • Two flaws:
      • does not have a principled way to break tied error patterns
      • does not consider the error frequency
  • Weighting according to error frequency
    • Score(e) = TS(e) + α * EF(e)
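A sketch of the weighted scoring: the candidate error types, match scores TS(e), and frequencies EF(e) below are invented to show how α·EF(e) breaks a tie between equally matching patterns.

CANDIDATES = {            # error type -> (TS(e): match score, EF(e): error frequency)
    "PRP_LXC": (0.80, 0.12),
    "ERR_B":   (0.80, 0.03),   # hypothetical second type with a tied match score
}

def score(e, alpha=0.5):
    ts, ef = CANDIDATES[e]
    return ts + alpha * ef     # Score(e) = TS(e) + alpha * EF(e)

print(max(CANDIDATES, key=score))  # -> PRP_LXC (frequency breaks the tie)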
GES: Grammar Error Simulator

[Diagram] Correct sentences and error types feed the grammatical error simulator, which produces incorrect sentences; these are used for LM adaptation in the automatic speech recognizer and for grammatical error detection.
GES Application

<Grammar Quiz Generation>

Markov Logic Network
  • Subject-verb agreement errors
  • Omission errors of prepositions
  • Omission errors of articles

Example: "He want go to movie theater"

Sungjin Lee, Gary Geunbae Lee. (2009). Realistic Grammar Error Simulation using Markov Logic. Proceedings of ACL 2009, Singapore, August 2009.

Sungjin Lee, Jonghoon Lee, Hyungjong Noh, Kyusong Lee, Gary Geunbae Lee. (2011). Grammatical Error Simulation for Computer-Assisted Language Learning. Knowledge-Based Systems.

Grammar Error Simulation
  • Realistic errors
    • Encoding characteristics of learners’ errors using the Markov logic
  • Over-generalization of some rules of the L2
  • Lack of knowledge of some rules of the L2
  • Applying rules and forms of the first language into the L2
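A toy error-injection sketch of the idea above. In the actual work the error characteristics are modeled jointly with Markov logic; here each error type fires independently with a made-up probability.

import random

random.seed(0)
ERROR_RULES = [
    # (description, probability, transformation over the word list)
    ("subject-verb agreement", 0.4, lambda ws: ["want" if w == "wants" else w for w in ws]),
    ("preposition omission",   0.3, lambda ws: [w for w in ws if w != "to"]),
    ("article omission",       0.5, lambda ws: [w for w in ws if w not in ("a", "an", "the")]),
]

def simulate_errors(sentence: str) -> str:
    words = sentence.split()
    for _, p, transform in ERROR_RULES:
        if random.random() < p:
            words = transform(words)
    return " ".join(words)

print(simulate_errors("He wants to go to the movie theater"))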
NICT JLE Corpus
  • Number of interviews: 167
  • Number of sentences of interviewees: 8,316
  • Average length of sentences: 15.59
  • Number of total errors: 15,954

Error tag anatomy: <n_num crr="x">...</n_num> encloses the erroneous part; the tag name encodes the POS (n = noun) and the grammatical system (num = number), and the crr attribute holds the corrected form.

Example: I belong to two baseball <n_num crr="teams">team</n_num>

English Oral Proficiency Assessment: Korean National Test
  • National English Ability Test (NEAT)
  • Tasks
    • Answering short questions (communication)
    • Describing pictures (storytelling)
    • Presentation
      • Describing figures, tables, and graphs
      • Introducing products or events
    • Giving an opinion (discussion)
English Oral Proficiency Assessment: General Common Tasks
  • Giving an opinion / discussion
  • Rubrics
    • Delivery
      • Pronunciation
      • Fluency (Prosody)
    • Language use
      • Grammar
      • Word choice
    • Topic development
      • Organization
      • Discourse
      • Contents
Requirements: Real Environment
  • Existing systems handle read speech
  • NEAT requires spontaneous speech and text-independent input
Training Data Collection
  • SNU pronunciation/prosody corpus
    • Annotation layers: speech waveform, spectrogram / pitch contour, word, PLU, sentence stress
For Public Use
  • Boston University radio news corpus
    • Speech from FM radio news announcers
    • 424 paragraphs (30,821 words)
    • ToBI labels (pitch accent → stress)
    • 0.48 marked stresses per word
    • PLU set: TIMIT phonetic labeling system
Aix-Marsec Database
  • Annotation layers: speech waveform, spectrogram / pitch contour, multi-level annotation
Collecting Grammar Error Data: Picture Description Task
  • From Korean learners of English
  • Storytelling based on pictures
  • 80 students (5 tasks for each student)
Collecting Grammar Error Data: Error Tagsets
  • JLE tagset
    • Consists of 46 tags
    • Systematic tag structure
    • Some ambiguity caused by the POS-specific error tag structure
  • CLC tagset
    • Widely used worldwide; includes 76 tags
    • Systematic & taxonomic tag structure
    • The JLE ambiguity is resolved by the taxonomic tag structure
  • NUCLE tagset
    • 27 error tags
    • Quite arbitrary tag structure
  • UIUC tagset
    • Only for articles and prepositions
PESAA: Pronunciation Feedback

[Architecture diagram]
  • Simulation part: the reading material and an English pronouncing dictionary (EPD) feed the pronouncing simulation, which produces the orthographic pronunciation.
  • Recognition part: the user's speech input goes through ASR with forced alignment against the word-level transcription, producing the actual pronunciation; the actual and orthographic pronunciations are then compared.
  • Error detection & feedback part: the comparison yields error candidates; error detection confirms them, and feedback generation turns the error information into feedback.
Pronunciation Error Simulation: Learning Context Rules using Generalized TBL

[Flow diagram] Initialization: starting from n := 0 and incrementing n each round, the training input is annotated by majority choice over left-right n-gram contexts, producing the nth initial machine annotation and the nth-order initialization rules. Learning: comparing the machine-annotated data against the training reference, the learner iteratively collects transformations, merges them into a list of transformations, selects the best transformation, and applies it to the data.
Pronunciation Error Simulation: Multi-tag Result
  • Example input
    • Let's go shopping
    • # L EH T S # G OW # SH AA P IH NG #
  • Example output (canonical/simulated pairs, with multi-tags)
    • #/# L/L EH/EH T/T S/S #/# G/G OW/OW|AO #/# SH/SH AA/AH|AA P/P IH/EH NG/NG #/#
      • #/# L/L EH/EH T/T S/S #/# G/G OW/AO #/# SH/SH AA/AA P/P IH/EH NG/NG #/#
      • #/# L/L EH/EH T/T S/S #/# G/G OW/OW #/# SH/SH AA/AA P/P IH/EH NG/NG #/#
      • #/# L/L EH/EH T/T S/S #/# G/G OW/AO #/# SH/SH AA/AH P/P IH/EH NG/NG #/#
      • #/# L/L EH/EH T/T S/S #/# G/G OW/OW #/# SH/SH AA/AH P/P IH/EH NG/NG #/#
Pronunciation Error Detection / Feedback

[Flow diagram] For each error candidate, the feedback decision combines the error candidate information with three confidences (error confidence, word-level ASR confidence, and phoneme-level ASR confidence) and the learner's feedback preference, then draws on the feedback DB to produce the feedback.
PESAA: Prosody Feedback
  • Stress, prosodic phrasing, and boundary tone
    • Stress: existence of word/sentence stress for each syllable/word
    • Prosodic phrasing: location of phrase breaks
    • Boundary tone: type of boundary tone for each phrasal boundary

Sentence Stress Feedback: Architecture

[Architecture diagram] Text side: text analysis feeds sentence stress prediction, which combines rule application with a trained model to produce the predicted sentence stress. Speech side: the speech signal goes through speech analysis, aligned against the text, into sentence stress detection, whose trained model produces the detected sentence stress. The difference between the predicted and detected stress drives the feedback.
Sentence Stress Prediction
  • Features used
    • Position info: the number of phonemes in word, the number of syllables in word, …
    • Stress info: word stress, sentence stress (rule-based prediction), …
    • Lexical info: identity of word, identity of vowel
    • Part-of-speech info
Sentence Stress Detection
  • Features used
    • Duration info: duration of vowel, duration of syllable, normalized duration of word according to the number of syllables, …
    • Intensity info: energy of vowel (+delta)
    • F0 info: f0 of vowel (+delta)
    • MFCC info: mfcc of vowel (+delta, +delta-delta)
    • Lexical info: identity of vowel
Sentence Stress Feedback
  • Adopting the output probability
    • Feedback candidates: syllables whose predicted stress has a low or high output probability
    • Feedback compares the predicted stress against the detected stress (stressed vs. not stressed), as sketched below
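A sketch of that comparison at the word level, assuming the detector outputs a stress probability per word; the probabilities and the confidence threshold are made up.

def stress_feedback(words, predicted, detected_prob, conf=0.8):
    """detected_prob[i] = detector's probability that word i is stressed."""
    feedback = []
    for w, pred, p in zip(words, predicted, detected_prob):
        detected = p >= 0.5
        confident = max(p, 1 - p) >= conf
        # Flag only confident disagreements between prediction and detection.
        if confident and detected != pred:
            feedback.append((w, "stress this word" if pred else "do not stress this word"))
    return feedback

words     = ["Let's", "go", "shopping"]
predicted = [False, False, True]        # sentence stress expected on "shopping"
detected  = [0.10, 0.85, 0.15]          # the learner stressed "go" instead
print(stress_feedback(words, predicted, detected))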

PESAA: Grammar Feedback

[Architecture diagram] Two parallel pipelines share one design. Written English: correct sentences and GE (grammar error) patterns feed the written GE simulator; the resulting GE-tagged texts train an SVM with soft constraints, yielding the written GE detector for text input. Spoken English: correct sentences and GE patterns feed the spoken GE simulator; the resulting GE-tagged texts/speech pass through ASR/CN (confusion networks) and train an SVM with soft constraints, yielding the spoken GE detector for speech input. Both detectors produce GE feedback on the user input.
Field Study: Robot-Assisted Language Learning
  • 1. Experimental Design
  • 2. Cognitive Effects
  • 3. Affective Effects

Sungjin Lee, Hyungjong Noh, Jonghoon Lee, Kyusong Lee, Gary Geunbae Lee, Seongdae Sagong, Moonsang Kim. (2011). On the Effectiveness of Robot-Assisted Language Learning. ReCALL Journal, Vol. 23(1), SSCI.

Sungjin Lee, Changgu Kim, Jonghoon Lee, Hyungjong Noh, Kyusong Lee, Gary Geunbae Lee. (2010). Affective Effects of Speech-enabled Robots for Language Learning. Proceedings of the 2010 IEEE Workshop on Spoken Language Technology (SLT 2010), Berkeley, December 2010.

Sungjin Lee, Hyungjong Noh, Jonghoon Lee, Kyusong Lee, Gary Geunbae Lee. (2010). Cognitive Effects of Robot-Assisted Language Learning on Oral Skills. Proceedings of the Interspeech Second Language Studies Workshop, Tokyo, September 2010.

HRI Experimental Design
  • Setting and participants
    • 24 elementary school students
    • Ranging in age from 9 to 13
    • Divided into two groups (beginner, intermediate)
  • Material and treatment
    • 68 lessons
      • 17 lessons for each level and theme
    • Simple to complex tasks
    • 2 hours a week over 8 weeks
HRI Experimental Design
  • 1) PC room
  • 2) Pronunciation training room
  • 3) Fruit and vegetable store
  • 4) Stationery store
Evaluation of Cognitive Effects
  • Data collection and analysis
    • Evaluation method
      • Pre-test / post-test
    • For the listening skills
      • 15 multiple-choice items
      • Cronbach's alpha: pre-test 0.87, post-test 0.66
    • For the speaking skills
      • 10 items in a 1-on-1 interview
      • Cronbach's alpha: pre-test 0.93, post-test 0.99
Experiment Result

<Cognitive effects on oral skills for overall students>

*p < .05

Evaluation of Affective Factors
  • Data collection
    • Questionnaire (4-point scale without a neutral option)
  • Data analysis
    • For satisfaction in using robots
      • Descriptive statistics
    • For interest in learning English, confidence with English, and motivation for learning English
      • Pre-test / post-test

N† = number of questions; R†† = Cronbach's alpha, reported as pre-test (post-test)