Proceedings of the KSPS conference (대한음성학회:학술대회논문집)
The Korean Society Of Phonetic Sciences And Speech Technology
- Semi Annual
Domain
- Linguistics > Linguistics, General
1996.10a
-
This study examines some prosodic features of English spoken by Japanese learners of English, focusing on speech rate, pauses, and intonation when the learners read an English passage. Three Japanese learners of English, all male university students, were asked to read the speech material, an English passage of 110 words, at their normal reading speed. Then a native speaker of English, a male American English teacher, was asked to read the same passage. The Japanese speakers were also asked to read a Japanese passage of 286 letters (Japanese kana) so that their reading of English could be compared with that of Japanese. Their speech was analyzed on a computerized system (Kay Computerized Speech Lab). Waveforms, spectrograms, and F0 contours were displayed on the screen to measure the duration of pauses, phrases and sentences and to observe intonation contours. One finding of the experiment was that the three speakers' speech rates moved in a similar way across their reading of the English passage; their reading of the Japanese passage also showed a similar tendency in the movement of speech rate. Another finding was that the frequency of pauses in the learners' speech was greater than in the speech of the native speaker, but that the ratio of total pause length to total utterance length was about the same in the learners' and the native speaker's speech. A similar tendency was observed in the learners' reading of the Japanese passage, except that they used shorter pauses in mid-sentence position. As to intonation contours, we found that the learners used a narrower pitch range than the native speaker in their reading of the English passage, while they used a wider pitch range when reading the Japanese passage. The learners tended to use falling intonation before pauses, whereas the native speaker used a variety of intonation patterns. These findings are applicable to the teaching of English pronunciation at the passage level in that they can show the learners, Japanese learners here, what their problems are and how they could be solved.
-
This paper deals with the nature and function of intensification in Korean in a wider scope than has previously received proper attention, covering intensification in word-initial as well as word-medial position. Previously unobserved areas of intensification in initial position are given more attention, such as the sound split of polysemous words, e.g. (s'eda), (kyongk'i), by means of intensification, the North Korean application of intensification to (wonsu), and the intensification of borrowed English words. The recent phenomenon of 'gwua' intensification is examined in two groups, young students and people over 65 years old, by means of sociolinguistic analysis. The result shows that this intensification is a form of students' violent power and a mark of extreme solidarity among activist students. Thirty-three university students (16 male, 17 female) were asked to write down the meanings (feelings, when to use, etc.) of words presented in both normal and intensified forms. The results show that intensification adds a meaning of 'emphasis', pushing the emotion to its extreme pole: small to the smallest, exact to perfect exactness, bad to the worst feeling. Four words are being split to express different meanings once intensified. In conclusion, the nature of the so-called saisiot (t) intensification is a voiceless tensed pause, and its functions are the polarization of the original meaning of the word, the sound split of polysemous words, and the attachment of social values through intensification.
-
Much research has been done on the cues differentiating the three Korean stops in word-initial position. This paper focuses on a more neglected area: the acoustic cues differentiating the medial tense and lax unaspirated stops. Eight adult Korean native speakers, four male and four female, pronounced sixteen minimal pairs containing the two series of medial stops with different preceding vowel qualities. The average duration of vowels before lax stops is 31 msec longer than before their tense counterparts (70 msec before lax vs 39 msec before tense). In addition, the average closure duration of tense stops is 135 msec longer than that of lax stops (69 msec for lax vs 204 msec for tense). These durational differences are so large that they may be phonologically determined, not phonetically. Moreover, vowel duration varies with the speaker's sex: female speakers have 5 msec shorter vowel duration before both stop types. Voice quality, tense or lax, is also a cue to these two stop types, as it is in initial position, but the relative duration of the stops appears to be a much more important cue. The duration of the stop changes stop perception, while that of the preceding vowel does not. The consequences of these results for the phonological description of Korean, as well as for the synthesis and automatic recognition of Korean, will be discussed.
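A rough illustration of the durational cue reported above may help: the sketch below (not from the paper; the thresholds are simply the midpoints of the reported mean durations) classifies a medial stop as lax or tense from the preceding-vowel and closure durations, treating closure duration as the stronger cue.

```python
# Illustrative sketch only: decision boundaries are midpoints of the reported
# mean durations (vowel: 70 ms lax vs 39 ms tense; closure: 69 ms lax vs 204 ms tense).

def classify_medial_stop(vowel_ms: float, closure_ms: float) -> str:
    """Return 'lax' or 'tense' from preceding-vowel and closure durations (ms)."""
    vowel_vote = "lax" if vowel_ms > (70 + 39) / 2 else "tense"
    closure_vote = "lax" if closure_ms < (69 + 204) / 2 else "tense"
    # Closure duration is treated as the stronger cue, per the abstract.
    return closure_vote if vowel_vote != closure_vote else vowel_vote

if __name__ == "__main__":
    print(classify_medial_stop(vowel_ms=72, closure_ms=65))   # -> lax
    print(classify_medial_stop(vowel_ms=40, closure_ms=210))  # -> tense
```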
-
The purpose of this paper is to study the deletion of the Arabic glottal stop /'/ in Literary Arabic (LA) and Cairene Arabic (CA). Arabic has a diglossic structure comprising Literary and Colloquial Arabic: the former is the standard, written language, while the latter is the oral language used in the dialects of many areas. Of the various Colloquial Arabic dialects, Cairene Arabic is the most influential and powerful, and hence it was chosen as the subject of this study. The following findings are reported: (1) Deletion of the Arabic glottal stop /'/ is found much more frequently in CA than in LA. (2) Within a word the deletion is found more frequently in medial position than in initial or final position, where /'/ is sometimes converted to the weak consonants /w/ or /y/.
-
In this paper, I would like to explore the possibility that the nature of place assimilation can be captured in terms of the OCP within Optimality Theory (McCarthy & Prince 1993, 1995; Prince & Smolensky 1993). In derivational models, each assimilatory process would be expressed through a different autosegmental rule. What any such model misses, however, is the clear generalization that all of those processes have the effect of avoiding a configuration in which two consonantal place nodes are adjacent across a syllable boundary, as illustrated in (1): (equation omitted) In a derivational model, it is a coincidence that across languages there are changes that have the result of modifying a structure of the form (1a) into another structure that does not have adjacent consonantal place nodes (1b). OT allows us to express this effect through a constraint, given in (2), that forbids adjacent place nodes: (2) OCP(PL): Adjacent place nodes are prohibited. At this point, a question arises as to how consonantal and vocalic place nodes are formally distinguished in the output for the purpose of applying the OCP(PL). Besides, the OCP(PL) would affect complex onsets and codas as well as coda-onset clusters in languages that have them, such as English. To remedy this problem, following McCarthy (1994), I assume that the canonical markedness constraint is a prohibition defined over no more than two segments, α and β: that is, *{α, β} with appropriate conditions imposed on α and β. I propose the OCP(PL) again in the following format: (3) OCP(PL) (table omitted), where α and β are the target and the trigger of place assimilation, respectively. The '*' is a reminder that, in this format, constraints specify negative targets or prohibited configurations; any structure matching the specifications violates the constraint. In correspondence terms, the meaning of the OCP(PL) is this: the constraint is violated if a consonantal place α is immediately followed by a consonantal place β in the surface form. One advantage of this format is that the OCP(PL) would also be invoked in dealing with place assimilation within a complex coda (e.g., sink [siŋk]): we can make the constraint scan the consonantal clusters only, excluding any intervening vowels. Finally, onset clusters typically do not undergo place assimilation. I propose that onsets be protected by a constraint which ensures that the coda, not the onset, loses the place feature.
-
This paper proposes a derivational account of tensing and neutralization of obstruents in Korean within the theory of Government Phonology (GP) (Kaye, Lowenstamm and Vergnaud 1990, henceforth KLV; Park 1996). We begin by outlining the relevant tensing and neutralization data in Korean and point out several problems that need to be addressed in any account of these data. We then set out the central notions of GP, pointing out how adherence to the requirement that government relations remain constant throughout a derivation under the Projection Principle prevents a GP account of tensing and neutralization in Korean, which requires government relations to switch between lexical and phonetic representations. To address this problem, we propose abandoning the Projection Principle, extending lexical representations in GP along the lines of the Markedness Theory approach (Michaels 1989), and adopting the economy principles for derivation of the Minimalist approach (Chomsky 1993; Chomsky & Lasnik 1991). Finally, we summarize the analysis of obstruent phenomena in Korean within GP extended in these ways.
-
So far we have proposed the following constraint ranking for the (over-)application of coda neutralization: (22) License family ≫ UE family ≫ IDENT-IO family ≫ Base-ID. This analysis shows that the surface level alone is enough to analyze the opaque behavior of coda neutralization. The Uniform Exponence constraint is worth further study, since it can also handle Consonant Cluster Simplification and the underapplication of /t/-palatalization in Korean compounds, in which morphemes before a stem are uniformly realized as one surface form, i.e., the output base form (S. Hong, in preparation). (equation omitted)
-
This paper is an acoustic study of the neutralization of short tones in Taiwanese. The results show that the two short tones are completely neutralized in juncture position. Since long tones in Taiwanese show complete neutralization in context position, the bidirectionality of tone alternation in Taiwanese tone sandhi poses a problem for rule-based approaches, while it is consistent with the hypothesis that both juncture and context tones are listed in the lexicon, instead of one being derived from the other. Moreover, in order to account for the difference between Taiwanese tone sandhi and Mandarin tone sandhi (which has been shown acoustically to involve incomplete neutralization), the Naturalness Hypothesis is proposed, which claims that if a neutralization is phonetically unnatural, then it is more likely to be lexicalized and to show complete neutralization.
-
The purpose of this study is to examine the problems that arise when Koreans pronounce English, with a view to communication-oriented English education, and to draw up a teaching plan for training learners to produce more accurate English pronunciation. The paper first reviews the characteristics and problems of English pronunciation education for Koreans, and then sets out concrete goals for pronunciation teaching and methods suited to those goals. The primary goal proposed is pronunciation that native speakers of English can hear and understand; to this end, instruction should cover (1) discrimination and production of English consonants and vowels, (2) recognition and use of correct stress and intonation, and (3) recognition and use of linking and other major pronunciation phenomena, with (2) and (3) taught more intensively than (1). Various communication-oriented teaching methods and learning activities are introduced for teaching each of these areas more effectively. The paper also emphasizes the importance of evaluating what has been taught and presents methods for doing so, and stresses the need for teacher training and for putting the resulting lesson plans to use in order to produce more practical pronunciation teaching plans.
-
Since the 1980s, a number of professionals in the ESL/EFL field have investigated the role of pronunciation in the ESL/EFL curriculum. Applying insights gained from second language acquisition research, these efforts have focused on integrating pronunciation teaching and learning into the communicative curriculum, with a shift towards overall intelligibility as the primary goal. The present study reports on the efficacy of audio-visual aids and a hyper-pronunciation training method in teaching the production of English consonants to Japanese college students. The talk will focus on the implications of the study, and the presenter makes suggestions for teaching pronunciation to Japanese learners.
-
In this paper I examine some concrete examples of the obstacles faced by non-native speakers of Japanese when learning the language, and go on to suggest ways in which these obstacles may be overcome. Nowadays there are numerous Japanese language books available for non-native speakers. However, most of these introductory books focus on topics such as pronunciation, accent and intonation. Notably, they place insufficient emphasis on the prosodic features of Japanese. The Japanese language has been considered by many teachers to be relatively easy compared to other languages, owing to its simple phonetic structure; this may partly explain why the teaching of prosodic features has generally been given insufficient emphasis. To teach Japanese efficiently at university level I have combined an emphasis on the teaching of prosodic features with my experience of television announcing, using television news programmes and contemporary reading materials in class. Using taped material, I describe a case study of the teaching of Japanese articulation.
-
This paper investigates patterns of manner assimilation in Toba Batak, Sanskrit, Ponapean and Korean. Based on cross-linguistic patterns of manner assimilation, I develop the constraint Syllable Contact (SyllCon) as a type of markedness constraint in Correspondence Theory. With the establishment of high-ranking SyllCon, I argue that several patterns of manner assimilation result from the interaction of high-ranking SyllCon with correspondence constraints such as Ident[sonorant].
-
Using eighteen text materials from various genres of present-day Japanese, we collected phonologically reduced forms frequently observed in conversational Japanese and classified them in search of a unified explanation of phonological reduction phenomena. We found 7,516 cases of reduced forms, which we divided into 43 categories according to the types of phonological changes they have undergone. The general tendencies are that deletion and fusion of a phoneme or an entire syllable take place frequently, resulting in a decrease in the number of syllables. Typical examples frequently observed throughout the materials are: /noda/ → /nda/, /teiru/ → /teru/, /dewa/ → /zja/, /tesimau/ → /cjau/. From a morphosyntactic point of view, phonological reduction often occurs at NP and VP morpheme boundaries. The following findings are drawn from phonological observations of the reduction. (1) Vowels are more easily deleted than consonants. (2) Bilabials (/m/, /b/, and /w/) are the most likely candidates for deletion. (3) In a concatenation of vowels, closed vowels are absorbed into open vowels, or two adjacent vowels come to create another vowel, in which case reconstruction of the original sequence is not always predictable. (4) Alveolars are palatalized under the influence of front vowels. (5) Regressive assimilation takes place in a syllable starting with /r/, changing the entire syllable into a phonological choked sound or a syllabic nasal, depending on the voicing of the following phoneme.
-
Previous studies of American English CVC coarticulation (e.g. Sussman 1991, 1993, 1994), with initial consonants representing the labial, alveolar, and velar places, showed a linear relationship fitting the data points formed by plotting onsets of the F2 transition along the y-axis against their corresponding midvowel points along the x-axis. The present study extends the locus equation metric to the following places of articulation: uvular, pharyngeal, laryngeal, and the emphatics. The question of interest is whether locus equations can serve as phonetic descriptors of place of articulation in Arabic. Five male native speakers of Colloquial Egyptian Arabic (CEA) read a list of 204 CVC and CVCC words containing eight different places of articulation and eight vowels. Averages of the formant patterns (F1, F2, F3) at onset, midpoint, and offset were calculated using wide-band spectrograms obtained by means of the Kay spectrograph (model 7029), and plotted as locus equations. A summary of the acoustic properties of the places of articulation of CEA will be presented in the frames bVC and CVb. Strong linear regression relationships were found for every place of articulation.
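For readers unfamiliar with the metric, a locus equation is simply a straight-line regression of F2 at vowel onset on F2 at the vowel midpoint, fitted separately for each place of articulation. The sketch below is a minimal illustration with made-up numbers, not the study's data; the slope, intercept and correlation are the quantities reported per place.

```python
# Minimal locus-equation fit: F2onset = k * F2mid + c for tokens of one place.
import numpy as np

def locus_equation(f2_onset_hz, f2_mid_hz):
    """Return (slope, intercept, r) of the regression F2onset = k*F2mid + c."""
    f2_onset = np.asarray(f2_onset_hz, dtype=float)
    f2_mid = np.asarray(f2_mid_hz, dtype=float)
    slope, intercept = np.polyfit(f2_mid, f2_onset, deg=1)
    r = np.corrcoef(f2_mid, f2_onset)[0, 1]
    return slope, intercept, r

if __name__ == "__main__":
    # Hypothetical CV tokens of one place: (F2 midpoint, F2 onset) in Hz.
    f2_mid = [1100, 1300, 1500, 1700, 1900]
    f2_onset = [1250, 1380, 1530, 1660, 1800]
    print(locus_equation(f2_onset, f2_mid))
```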
-
The neutral tone is one of the most important distinguishing features of Beijing Mandarin, but there are two completely different views of its linguistic function: a special tone (Xu, 1980) versus weak stress (Chao, 1968). In this paper, the acoustic manifestation of the neutral tone is explored to show that it is closely related to weak stress. 122 disyllabic words in which the second syllable carries the neutral tone, including 22 stress pairs, were uttered by a native male speaker of the Beijing dialect and analysed with a Kay Digital Sonagraph 5500-1. The results of the acoustic analysis are as follows: 1) The first two formants of the medial and the syllabic vowel move towards those of a central vowel with a greater magnitude in the syllable with the neutral tone than in a syllable with any of the four normal tones. Also, the vowel ending and the nasal codas /n/ and /ŋ/ in the syllable with the neutral tone tend to be deleted. 2) In syllables with the neutral tone, there is strong carryover coarticulation between the medial and syllabic vowel and the preceding unvoiced consonant. In general, the vowel is moved towards the position of a central vowel with greater magnitude by a coronal consonant than by a labial or velar consonant. 3) In a syllable with the neutral tone, when and only when it precedes a syllable with tone 4, the high vowel following [f], [ts'], [s], [tʂ'], [ʂ], [tɕ'] or [ɕ] tends to be voiceless. 4) The 22 stress pairs show that the duration of the syllable with the neutral tone is on average reduced to 55% of that of a syllable with one of the four normal tones, and the duration of the final in the syllable with the neutral tone is on average reduced to 45% of that of the final in a syllable with the four normal tones (Lin & Yan 1980). 5) The F0 contour of the neutral tone is highly dependent on the preceding normal tone (Lin & Yan 1993). For a number of languages it has been found that the vowel space is reduced as the level of stress placed on the vowel is reduced (Nord 1986). We therefore conclude that the syllable with the neutral tone is related to weak stress (Lin & Yan 1990); the neutral tone is not a special tone, because its realization depends on the preceding normal tone.
-
This paper reports a preliminary investigation of the time course of intersyllabic coarticulation in Standard Chinese. Around 3800 phonetically compact C1V1-C2V2 disyllabic structures are employed to observe the acoustic effects of coarticulation in general, and about 400 disyllabic words are designed as materials to examine: (1) how the articulators move from one syllable to the next; (2) the extent to which the syllables overlap; and (3) in what sense the syllables are produced in parallel, and in what sense in sequence. For convenience of description, we take the offset transition of V1 and the onset transition of C2 as rough representations of the anticipatory and carryover effects respectively, and durational measurements are made accordingly. To evaluate the possible influence of stress contrast and of the constituent difference of the syllables on the behavior of gestural overlap, analyses of variance are conducted as well. Based on this study, some impressions about the general nature of the coarticulation behind intersyllabic gestural overlapping in this language are discussed.
-
This paper is intended as a study of how an utterance is divided into rhythmic units in Standard Korean with respect to its syntactic structure. As data I used 150 sentences which contained similar numbers of words and various syntactic structures. The sentences were read by 7 speakers of the Seoul dialect in a conversational style, each sentence being read twice at normal speed and twice at fast speed, giving a total of 4200 recorded sentences. Listening to them, the author marked the sentences with two kinds of boundaries, strong and weak. To explore the relationship between rhythmic units and syntactic structure I devised a framework of grammatical symbols, each designed to carry both syntactic and morphological information at the same time, and assigned these symbols to the sentences. With the sentences marked with grammatical symbols on the one hand and with rhythmic boundaries on the other, I could show the relationship between rhythmic units and syntactic structure: which syntactic structures are likely to be pronounced as one rhythmic unit, and which fall on rhythmic boundaries.
-
When language A borrows words, it borrows them according to its own phonetic rules. In other words, language B, from which the borrowed words come, has to comply with the phonetic requirements of language A. It may be added that language A only borrows the elements, the types of syllables and the accentuation that already exist in its own phonetic structure and rejects all the rest that are not compatible; it operates exactly like a sieve. That is why borrowed words offer an excellent observation post for noticing how a language reacts in phonetic contact. The Japanese language has borrowed and is borrowing extensively from other languages and cultures, mainly English, in the fields of sports, medicine, industry, commerce, and the natural sciences; relatively few new words are created using ancient Chinese or native material. This presentation will look for the rules of borrowing and try to show that this way of borrowing represents an organized system of its own. Three levels will be studied in particular: the phonemic level, the syllable level and the accentual level. This last point will be specially targeted with the question of syllable tension and relaxation. Such a study of languages in phonetic contact could shed some new light on the phonetic characteristics of the Japanese language and will confirm or weaken some conclusions already demonstrated otherwise. We will be aiming especially at the endings of the borrowed words where, it seems, the Japanese language manifests itself very strongly.
-
Purpose: Some properties of prosodic phrasing and some acoustic and phonological effects of contrastive focus on the tonal pattern of Seoul Korean are explored on the basis of a brief experiment analyzing the fundamental frequency (=F0) contour of the speech of the author. Data Base and Analysis Procedures: The examples were chosen to contain mostly nasal and liquid consonants, since it is difficult to track the formants in stops and fricatives during their consonantal intervals, and stops may yield an unwanted increase in the F0 value due to their burst into the following vowel. All examples were recorded three times and the spectrum of the most stable repetition was generated, from which the F0 contour of each sentence was obtained, peaks with a value higher than 250 Hz being interpreted as a high tone (=H). The result is then discussed within the prosodic hierarchy framework of Selkirk (1986) and compared with the tonal pattern of the Northern Kyungsang dialect of Korean reported in Kenstowicz & Sohn (1996). Prosodic Phrasing: In N.K. Korean, H never appears both on the object and on the verb in a neutral sentence, which indicates that the object and the verb form a single Phonological Phrase (=φ), given that there is only one pitch peak for each φ. However, in Seoul Korean both the object and the verb have an H of their own, indicating that they are not contained in one φ. This violates the Optimality constraint Wrap-XP (=Enclose a lexical head and its arguments in one φ), while N.K. Korean obeys the constraint by grouping a VP in a single φ. This asymmetry can be resolved through a constraint that favors the separate grouping of each lexical category and is ranked higher than Wrap-XP in Seoul Korean but vice versa in N.K. Korean: Align-X^lex (=Align the left edge of a lexical category with that of a φ). (1) nuna-ka manll-ll mEk-nIn-ta ('sister-NOM garlic-ACC eat-PRES-DECL') a. (LLH) (LLH) (HLL) ----Seoul Korean b. (LLH) (LLL LHL) ----N.K. Korean. Focus and Phrasing: Two major effects of contrastive focus on phonological phrasing are found in Seoul Korean: (a) the peak of an Intonational Phrase (=IP) falls on the focused element; and (b) focus has the effect of deleting all the following prosodic structure. A focused element always attracts the peak of the IP, showing an increase of approximately 30 Hz compared with the peak of a non-focused IP. When a subject is focused, no H appears either on the object or on the verb, and a focused object is never followed by a verb with H. The post-focus deletion of prosodic boundaries is forced through the interaction of Stress-Focus (=If F is a focus and DF is its semantic domain, the highest prominence in DF will be within F) and Rightmost-IP (=The peak of an IP projects from the rightmost φ). First, Stress-Focus requires the peak of the IP to fall on the focused element. Then, to avoid violating Rightmost-IP, all the boundaries after the focused element delete, minimizing the number of φ's intervening from the right edge of the IP. (2) (omitted) Conclusion: In general, there seems to be no direct alignment constraint between the syntactically focused element and the edge of a φ determined in phonology; all the alignment effects come from a single requirement that the peak of the IP projects from the rightmost φ, as proposed in Truckenbrodt (1995).
-
This paper is intended as a preliminary study of phonetic and phonological differences between the Polish and Korean languages. An attempt is made to examine the most conspicuous difficulties encountered by Polish learners who begin to speak Korean (and in doing so, I would hope that it might be of help to future learners of both languages). Since the phoneme inventories and general phonetic rules of the two languages are very different, teaching and learning accurate pronunciation is extremely difficult for both Poles and Koreans without any previous phonetic training. In the case of Polish and Korean we can see how strong and persistent the influence of the mother tongue is on the target language. As an example I discuss the basic differences between Polish and Korean consonants. The most important consonantal opposition in Polish is voicing (e.g. 〔b〕 / 〔p〕, 〔g〕 / 〔k〕), while in Korean this opposition is of secondary importance; therefore Korean speakers do not perceive the difference between Polish voiced and voiceless consonants. On the other hand, Polish speakers cannot distinguish the Korean lenis / fortis / aspirated opposition (e.g. ㅂ 〔b〕 / ㅃ 〔p〕 / ㅍ 〔ph〕, ㄱ 〔g〕 / ㄲ 〔k〕 / ㅋ 〔kh〕). The other very important factor is palatalization, which is of vital importance in Polish, and because of this Polish speakers are extremely sensitive to it; in Korean palatalization is not important phonetically and Korean speakers do not distinguish between palatalized and non-palatalized consonants. The transcription used here is based on 'The Principles of the International Phonetic Association and the Korean Phonetic Alphabet' (1981) by Hyun Bok Lee.
-
Syllable structure is an important factor in the pronunciation of a language. In this paper we have tried to show that the characteristics of the syllable vary from one language to another, and that the rules governing the syllable in the mother tongue prevail and are therefore imposed on the foreign language. First, accentuation in Korean depends on syllable structure (whether consonant-vowel (CV) or consonant-vowel-consonant (CVC)) and on vowel length. As a result, although French is oxytonic and the length and quality of its vowels depend on syllable structure, the French spoken by Koreans follows the rules of Korean. Another characteristic is that Korean does not allow sequences of consonants before or after the central vowel, as in "premier" [prəmje] or "autre" [o:tR]; hence the insertion of a superfluous vowel such as [œ], [ʌ] or [ə]. Third, there is a difference in how the speech chain is segmented: in Korean the written break (the space) corresponds roughly to the oral break (the pause), whereas in French the segmentation is made by word groups, within which the words are linked to one another either by liaison or by enchaînement. A clear influence of Korean can thus be observed, the rule there being to pronounce each written unit (the equivalent of the word in French) correctly: the speech chain becomes jerky, with an accent on every word, glottal stops between words, and one of the vowels [œ], [ʌ], [ə] inserted between a word ending in a consonant and a following word beginning with a consonant.
-
This paper discusses the pronunciation problems that arise when Japanese speakers learn Korean. These problems need to be considered separately for the beginner and intermediate levels, and a distinction must also be drawn between problems in the learner's theoretical awareness and problems in the learner's actual pronunciation practice. The paper deals with segmental factors such as vowels, consonants, phonological changes and pronunciations not reflected in the orthography, as well as prosodic factors such as pitch and intonation. A description of the pitch patterns of Seoul Korean is also attempted.
-
This presents preliminary results from work in progress on a paired study of the acquisition of voiceless stops by Spanish speakers learning English and American English speakers learning Spanish. The hypothesis, following Eckman's Markedness Differential Hypothesis, was that the American speakers would have no difficulty suppressing aspiration in Spanish unaspirated stops, while the Spanish speakers would have difficulty acquiring the aspiration necessary for English voiceless stops. The null hypothesis was proved. All subjects were given the same set of disyllabic real words of English and Spanish in carrier phrases. The tokens analyzed in this report are limited to word-initial voiceless stops followed by a low back vowel in stressed syllables. Tokens were randomized and then arranged in a list with the words appearing three separate times. Aspiration was measured from the burst to the onset of voicing (VOT). The first language (L1) tokens and second language (L2) tokens were compared for each speaker and between the two groups of speakers. Results indicate that the Spanish speakers, as a group, were able to reach the accepted target-language VOT of English, but the English speakers were not able to reach the accepted range for Spanish, in spite of statistically significant changes of p < .001 by speakers in both groups of learners. A closer analysis of the speech samples revealed wide variability within the speech of native speakers of English: not only is there a wide range of VOT (120 msec for English labials, for example), but individual speakers showed different patterns. These results are revealing for the demands required of experimental designs and the number of speakers and tokens required for an adequate description of different languages. In addition, a simple report of means will not distinguish the speakers and the respective language-learning situations; measurements must also include the RANGE of acceptability of VOT for phonetic segments. This has immediate consequences for the learning and teaching of foreign languages involving aspirated stops. In addition, the labelling of spoken language in speech technology is shown to be inadequate without a fuller mathematical description.
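The reporting practice argued for in the last sentences can be illustrated in a few lines of code; the numbers below are hypothetical, not the study's measurements, and the point is simply that the mean alone hides the spread that distinguishes the groups.

```python
# Summarize VOT per group with both mean and range, since English speakers
# show a wide spread (e.g. on the order of 120 ms for labials).
import statistics

def summarize_vot(label, vot_ms):
    mean = statistics.mean(vot_ms)
    lo, hi = min(vot_ms), max(vot_ms)
    print(f"{label}: mean {mean:.0f} ms, range {lo:.0f}-{hi:.0f} ms (spread {hi - lo:.0f} ms)")

if __name__ == "__main__":
    # Hypothetical word-initial /p/ tokens in ms, several repetitions per speaker.
    summarize_vot("English L1 speakers", [34, 58, 95, 70, 120, 45, 88, 105])
    summarize_vot("Spanish L1 learners of English", [52, 61, 74, 80, 66, 58, 91, 70])
```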
-
Koreans learning Mandarin Chinese are faced with serious pronunciation errors in vowels, consonants, tones, etc. Most of these errors are found to be due to the transfer of native Korean phonetic habits. The following are some of the most common pronunciation errors encountered by Koreans learning Chinese.
-
Based on the articulatory phonetic (or organic) principle, the Korean alphabet of 28 letters invented by King Sejong in 1443 is not only systematic and scientifically oriented but also easy to learn and use in the everyday life of the Korean people. The International Korean Phonetic Alphabet (IKPA) was devised by the present writer in 1971 by applying the organic principle much more extensively. Accordingly, the IKPA symbols are just as simple and easy to learn and memorize as the Korean alphabet, and at the same time they are much more consistent and logical than the IPA symbols which, having been derived mainly from Roman and Greek letters, are an unsystematic mass of letters except in one respect, i.e., the retroflex symbols. This paper describes the organic principles exploited in devising the International Korean Phonetic Alphabet and assesses its advantages.
-
In Korean, as with Kana and Kanji in Japanese, two kinds of word-writing systems, Hangul (the Korean alphabet) and Hanja (Chinese characters; Kanji in Japanese), have been and still are being used. Hangul is phonetic while Hanja is ideographic: a phonetic alphabet represents the pronunciation of words, whereas in an ideographic system a character represents a concept. Aphasics suffer from language disorders following brain damage. The reading and writing of Hangul and Hanja by two Korean Broca's aphasics were analyzed with two goals. The first was to confirm the functional autonomy of reading and writing systems in the brain that has been argued for by other researchers. The second was to reveal what differences the subjects show in reading and writing Hangul and Hanja. As experimental materials, 50 monosyllabic words were chosen in Hangul and Hanja respectively; the 50 word pairs have the same meaning and are also the most familiar monosyllabic words for a group of normal adults in their fifties and sixties. The errors that the aphasic subjects made on the experimental materials are analyzed and discussed. The analysis confirms that reading and writing systems are located in different parts of the brain. Furthermore, it seems clear that the two writing systems of Hangul and Hanja have their own respective processes.
-
Unlike most of the world's scripts, Hangul did not arise naturally as a writing system; it was created by a deductive method, according to a view of writing that condensed millennia of reflection on speech sounds and a high level of phonetic technique, and therefore it calls for a research approach different from that applied to other scripts. The Hunminjeongeum (훈민정음), the Hongmu jeongun yeokhun (홍무정운역훈) and the Dongguk jeongun (동국정운), compiled during King Sejong's reign, differ in their motives and purposes and thus each embodies a different sound system. For the letter shapes each sound system required, Hangul provided, in addition to its 27 basic letter shapes (16 consonants and 11 vowels), new letter shapes depicting the place and manner of articulation, thereby constituting each writing system. Created on the soil of a Confucian view of language, according to the principles by which all the sounds of the world are generated, Hangul is a writing system capable of transcribing all the sounds of the world. Such a Hangul-based phonetic alphabet can be applied today, without exception, to transcribing foreign languages with different sound systems, and in the flexibility and applicability of its letter shapes it can function as a phonetic alphabet far more scientific and complete than the International Phonetic Alphabet (IPA).
-
The present 'Regulation for Romanization of Korean' is not well observed by most Koreans because it is self-contradictory, inconvenient, awkward and difficult to follow. In this paper, the problematic issues are described in detail and corrective reforms are suggested, with emphasis placed on reasonableness, clarity and convenience in use. By capitalizing the first letter of each syllable in a word, the author demonstrates the possibility of a remarkably fashionable romanization for the Korean language, i.e. HanGeul.
-
The Korean Phonetic Alphabet (KPA), devised by H. B. Lee on the basis of Han-geul, the Korean alphabet, was incorporated into the Hangul Word Processor (HWP) version 1.* to be used on personal computers. With the upgrading of the HWP software from 1.* to more sophisticated versions such as 2.* and 3.*, it became necessary to convert the HWP 1.* KPA into the upgraded versions. This paper traces the history of the computerized KPA software from the initial version for HWP 1.* to the latest one.
-
Current speech technology aims at clearer and more natural synthetic speech. Naturalness can be improved by adequate phrasing of the target sentence, which appears to be strongly related to both syntactic and phonetic aspects simultaneously. The present study aims, on the one hand, to describe the relationship between syntactic structure and prosodic phrasing in dialogue speech and, on the other, to establish a suitable phrasing pattern for the purpose of obtaining more natural synthetic speech. The prosodic phrase here means a prosodic unit which can be clearly identified as having an evident break at its final position in a sentence, from both a perceptual and an acoustical viewpoint. The end of each prosodic phrase is accordingly marked as a major boundary in the sentence.
-
It is widely known that the Japanese alveolar nasal (n) is affected by adjacent vowels in most positions; that is, the variants of the alveolar (n) occur conditionally. The Japanese (n) is palatalized under the influence of the vowel (i) or the palatal (j). In the articulation of (ni), for instance, the tip and sides of the tongue make wide contact with the palate. It is interesting to know how palatalization occurs and varies during production in different contexts. In this presentation, the actual realization of the palatalized alveolar nasal in different contexts is examined and clarified by considering electropalatographic data together with articulatory feeling and auditory impression. As a result, a palatalized (equation omitted) occurs either word-initially or intervocalically. The (equation omitted) in (equation omitted) and (equation omitted) has great palatality. When conditioned by (j), the (equation omitted) in (equation omitted), (equation omitted) and (equation omitted) has full palatality; in each sound the average number of contacted electrodes of the electropalatograph at maximum tongue-palate contact is 63, or 100% of the total. To summarize the experimental data, articulatory feeling and auditory impression, it can be concluded that the (n) followed by or hemmed in by (i) or (j) is a palatalized nasal (equation omitted).
-
In previous works we have reported phonetic similarities between Japanese and Spanish vowels and syllabic sounds (1) (2) (3) (4). In the present communication we explore the relative importance of the duration of the consonantal segment in eliciting the Spanish /l/ - /r/ distinction for native Japanese talkers. Three Argentine and three trained native Japanese talkers recorded /l/ and /r/ combined with /a/ in VCV sequences. Modifications of consonant duration and vowel context with transitions were made by editing natural /ala/ sounds, and mixed VCV stimuli were produced by combining sounds of both languages. Perceptual tests were performed by presenting the speech material to trained and untrained native Japanese listeners. In a first session a discrimination procedure was applied: the items were arranged in pairs and listeners were told to indicate the pair that sounded different. In the following session they were asked to identify and type the letter corresponding to each of the items. Responses are examined in terms of the critical duration of the interval between vowels. Preliminary results indicate that the duration of the intervocalic interval was a relevant cue for the identification of /l/ and /r/. It seems that, to differentiate the two sounds, Japanese listeners required relatively longer interval steps than the Argentine subjects, and there was a tendency to confuse /l/ for /r/ more frequently than vice versa.
-
This paper examines the contribution of vocalic information (after the onset of voicing) to the perception of the Korean alveolar stops: the aspirated /tʰ/, the lenis /t/, and the fortis /t*/. These stops have been analyzed as differing in VOT (Abramson & Lisker, 1964), glottal width or aspiration (Kim, 1970), and F0 and intensity build-up (Han & Weitzman, 1970). Those studies focused on the articulatory and acoustic qualities of the consonants and often assumed that the consonantal portion before the onset of voicing plays the main role in maintaining the three-way distinction; the role of the following vowel was given less attention. In order to investigate the contribution of the following vowels, a perceptual study was conducted using stimuli cross-spliced from three naturally produced syllables: [tʰal] 'mask', [tal] 'moon', and [t*al] 'daughter'. Stimuli were presented to 12 Korean listeners for identification, each subject responding to a total of 486 tokens. The results show that the vowel plays the primary role when the cut occurs at the start of voicing. Even with cuts at 10 ms and 40 ms into voicing, the following vowel still plays a clear role. This suggests that vowels carry important information for distinguishing the three stops.
-
The phonetic mastery of English has been considered next to impossible by many non-native speakers of English, including even some teachers of English. This paper takes issue with this phonetic problem of second language acquisition and proposes that a combination of cognitive and physical approaches can help learners master English pronunciation faster and more easily.
-
This paper is divided into two parts. One is a study of the vowels in Korean singers' lyric songs in terms of Daniel Jones' cardinal vowels; the other is an acoustic study of the vowels in the author's own singing of Korean lyric song. The analysis data are a KBS concert video tape and CSL .NSP files of the author's singing; the informants are famous singers, i.e. 3 sopranos, 1 mezzo, 2 tenors, 1 baritone, and the author. The aim of the analysis is to determine the quality of the 8 Korean vowels (equation omitted) in singing. The vowels are described in terms of close, half-close, half-open and open vowels, rounded and unrounded vowels, and formants. For the former study the monitor screen was paused at the scene to be analyzed; for the latter, spectrograms converted from the CSL .NSP files were analyzed. The results are as follows. The visual and auditory quality of Korean vowels in singing shows three tendencies: the vowels are more rounded than usual Korean vowels, they are centralized towards the centre of the cardinal vowel space, and they are more diverse in quality. The acoustic analysis examined four formants. F1 and F2 show a pattern similar to that of the spoken vowels, with essentially the same F1 values, which suggests that the vocal organs adjust to the singing situation. The bandwidth of F3 is the widest of all, so F3 may be a characteristic of singing. In conclusion, the vowels of Korean lyric song tend to be rounded, centralized towards the centre of the cardinal vowel space, and diverse in quality, with the widest F3 compared with usual Korean vowels.
-
Most Koreans agree that the Korean traditional singing voice has a very peculiar sound compared to the Western singing voice. The goal of this paper is to investigate the acoustic characteristics of the Korean traditional singing style called 'Pansori'. Materials from 3 male and 4 female professional singers were analyzed, and their singing was compared with their own conversation and with other non-singers' conversation. Long-term average spectra indicated that all the singers showed much less spectral tilt than non-singers. This held for the professional singers not only in their singing but also in their conversation, which suggests that it is not the result of a temporary effort but may involve a certain permanent change in their physiological configuration. (To assess this hypothesis, the voice source should be examined directly; in further research, the use of a Rothenberg mask (Rothenberg, 1973) is strongly recommended.) In addition to the long-term average spectra, individual vowel formants will be studied later.
-
In Korea, rhythm is also called jangdan (장단). As the word implies, jangdan is mostly an additive rhythm made up of long and short notes, in contrast to the divisive rhythm of Western music. In Korean music, long and short notes combine to form the jangdan: a 3-beat pattern is 2+1, a 5-beat pattern 3+2, an 8-beat pattern 5+3, a 10-beat pattern 6+4, and a 16-beat pattern 11+5. The rhythm of Chinese music is generally syllabic, with one note per character. There are various verse forms, such as four-character lines and five- or seven-character quatrains, but most are sung in four beats. Korean music therefore has a more complex rhythm than Chinese music, because the Korean language is rhythmically more complex than Chinese.
-
Human beings and chimpanzees are very much alike, and scientists say there is only a 1% difference between them. Contrary to our expectations, the difference lies not in the brain but in the trachea (windpipe): those of human beings are bigger and longer than those of chimpanzees, which means more air is inspired and expired as breath. There are interesting descriptions of breath in the Bible. In Genesis it says God made a man out of soil and breathed life-giving breath into his nostrils, and the man began to live; elsewhere it says life exists between incoming breath and outgoing breath. Thus breath plays a key role in our life. In Hebrew and Greek, breath and spirit are the same words: in Hebrew 'Luahf' and in Greek 'Pneuma'. With breath and the mouth organs human beings produce voice, and through heritage and learning we train our voice to reach the level of language, which conveys our culture. My contention is that we should realize the gift of voice and train it so that it can perform its proper function as a tool for conveying our thought and culture. This is a kind of practice of speech and it may be called speechology. It includes the following practical methods: 1. Try to read aloud. 2. Encourage recitation. 3. Engage in public speaking as much as possible. 4. Learn theories of phonetics, such as pronunciation, accent, intonation, prominence, assimilation and so on.
-
This paper compares the durational aspect of the Daegu dialect with that of standard Korean. In a former study on the rhythm of standard Korean, one of the stated purposes was to compare it with dialects, and this paper is a first attempt to do that. The paper proceeds as follows: after the Introduction, Chapter 2 surveys the former study, Chapter 3 deals with the materials, method and results of the experiment, and Chapter 4 analyzes and interprets the results. In conclusion, the most prominent fact is that the results of the experiment fall short of Daegu speakers' expectations. The Daegu dialect is generally considered a "tone language", and as Daegu speakers are sensitive to pitch, they believe that they say the syllables between the pitch-stressed syllables quickly, whereas standard Korean speakers say those syllables relatively slowly. In this experiment, however, which deals only with duration and ignores pitch, this assumption is shown to be false.
-
An acoustic analysis was performed on 20 normal subjects speaking nonsense syllables composed of the Korean bilabial stops (/p, p*, pʰ/) and a preceding and/or following vowel /a/ (that is, [pa, p*a, pʰa, apa, ap*a, apʰa]), with an ultraminiature pressure sensor in their mouths. The speech materials were phonated twice, once with a moderate voice and once with a loud voice, and the acoustic signal and intraoral pressure were recorded simultaneously on a computer. By these procedures, we measured the intraoral pressure, closure duration and VOT of the Korean bilabial stops, and compared the values according to the intensity of phonation and the position of the target consonant. Intraoral pressure was measured as the peak value of the intraoral pressure wave; closure duration as the interval between the onset of intraoral pressure build-up and the burst marking the release of closure; and voice onset time (VOT) as the interval between the burst and the onset of glottal vibration. The heavily aspirated bilabial stop /pʰ/ showed the highest intraoral pressure, the unaspirated /p*/ the second highest, and the slightly aspirated /p/ the lowest. Syllable-initial bilabial stops showed higher intraoral pressure than word-initial stops, and the values for loudly phonated consonants were higher than for moderately phonated ones. The longest closure duration was that of /p*/ and the shortest that of /p/, and closure duration was longer in word-initial position and in the moderate voice. In VOT, the order from longest to shortest was /pʰ/, /p/, /p*/, and the value was shorter when the consonant was in intervocalic position and when it was phonated with a loud voice.
-
This paper addresses issues of perceptual constancy in speech perception through the use of a spatial metaphor for speech sound identity, as opposed to a more conventional characterisation with multiple interacting acoustic cues. This spatial representation leads to a correlation between phonetic, acoustic and auditory analyses of speech sounds which can serve as the basis for a model of speech perception based on the general auditory characteristics of sounds. The correlations between the phonetic, perceptual and auditory spaces of the set of English voiceless fricatives /f θ s ʃ h/ are investigated. The results show that the perception of fricative segments may be explained in terms of a two-dimensional auditory space in which each segment occupies a region. The dimensions of the space were found to be the frequency of the main spectral peak and the 'peakiness' of the spectrum. These results support the view that the perception of a segment is based on its occupancy of a multi-dimensional parameter space. In this way, final perceptual decisions on segments can be postponed until higher-level constraints can also be met.
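As a rough illustration of the two dimensions named above, the sketch below computes a main-spectral-peak frequency and a simple 'peakiness' value (here operationalized, by assumption, as the peak-to-mean magnitude ratio) for one windowed frication frame; the paper's own definition of peakiness may differ.

```python
# Place one fricative frame in a 2-D (peak frequency, peakiness) space.
import numpy as np

def fricative_dims(frame, sample_rate):
    """Return (peak_frequency_hz, peakiness) for one windowed noise frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    peak_frequency = freqs[np.argmax(spectrum)]
    peakiness = spectrum.max() / spectrum.mean()
    return peak_frequency, peakiness

if __name__ == "__main__":
    sr = 16000
    t = np.arange(1024) / sr
    # Crude [s]-like stand-in: energy concentrated near 6 kHz.
    rng = np.random.default_rng(0)
    frame = np.sin(2 * np.pi * 6000 * t) * rng.normal(1.0, 0.3, t.size)
    print(fricative_dims(frame, sr))
```
-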
In this paper, temporal variables of French oral discourse are analyzed and interpreted. They are divided into two groups, external temporal variables and internal temporal variables. The external variable is determined by the breathing function as the physical condition of the verbal message, and the internal variable is directly associated with the multiple effects of accentuation on the final syllable of the rhythmic group in present-day French. Temporal variables, external and internal, are taken as devices of verbal support that serve to create immediate effects in oral production.
-
The aim of this paper is to investigate the actual state of Koreans' pronunciation of Spanish consonants, with an emphasis on describing the phonetic differences between Korean and Spanish speakers. 40 Spanish words were chosen for the speech sampling, and 10 Korean students majoring in Spanish, from Seoul or Kyunggi Province, and 3 Spanish speakers from Castile, Spain, participated in the interview. The most noticeable phonetic differences of the Korean speakers' pronunciation compared with the Spanish speakers are as follows: 1) The voiced stops are pronounced voiceless or only weakly voiced. 2) The voiced stops are slightly aspirated. 3) The length of voiceless consonants is considerably longer than the length of the preceding vowel. 4) Fricatives and affricates are somewhat fronter and weaker in the degree of friction. 5) There is a strong tendency to geminate the dental lateral /l/, as in 'pelo', and to vocalize the palatal lateral /ʎ/, as in 'calle'. 6) Unlike in Spanish speech, the flap [ɾ] and the trill [r] are pronounced similarly in Korean speech.
-
The 「Lee-Kim Test of Korean Articulation」, consisting of a picture test, a sentence test, a user's manual with notation, and analysis sheets, was published in 1990 to serve as a standard tool for testing and analysing the articulation errors of normal and abnormal speakers. It has been found, however, that the picture and sentence tests using the printed version reveal several limitations, i.e. a) inefficiency in inducing the desired responses from the informants, b) lack of concentration and interest on the part of the informants, c) no consistent way of providing the informant with a clue in case the informant is unfamiliar with the word represented by the picture or the sentence, and d) no reliable means for the speech-language pathologist to analyze and evaluate the informant's speech in relation to the standard pronunciation. A multimedia version of the Lee-Kim Korean Articulation Test, which features pictures and words as well as recorded voice, has been developed with a view to eliminating the limitations mentioned above and enabling the articulation test to be carried out with ease and accuracy.
-
Among scholars of second language (L2) acquisition who have used prosodic considerations in syntactic analyses, pausing and intonation contours have been used to define utterances in the speech of second language learners (e.g., Sato, 1990). In recent research on conversational analysis, it has been found that lexically marked causal clause combining in the discourse of native speakers can be distinguished as "intonational subordination" and "intonational coordination" (Couper-Kuhlen, Elizabeth, forthcoming). This study uses Pienemann's Processability Theory (1995) for an analysis of the speech of native speakers of Japanese (L1) learning English. In order to accurately assess the psycholinguistic stages of syntactic development, it is shown that pitch, loudness, and timing must all be considered together with the syntactic analysis of interlanguage speech production. Twelve Japanese subjects participated in eight fifteen-minute interviews, ninety-six dyads in all. The speech analyzed in this report is limited to the twelve subjects interacting with two different non-native speaker interviewers, for a total of twenty-four dyads. Within each of the interviews, four different tasks are analyzed to determine each subject's stage of acquisition of English. Initially the speech is segmented according to intonation contours and pauses; it is then classified according to specific syntactic units and further analysed for pitch, loudness and timing. Results indicate that the speech must first be classified prosodically and lexically before syntactic analysis begins. This analysis distinguishes three interlanguage lexical categories: discourse markers, coordinators and subordinators, and transfer from Japanese. After these lexical categories have been determined, the psycholinguistic stages of syntactic development can be more accurately assessed.
-
The aim of this presentation is to show the structure and characteristics of the English-Korean machine translator 'Trannie 96'. 'Trannie 96' consists of five main engines and various types of dictionaries. With respect to the engines, the English sentences filtered by the pre-processor are tagged and parsed; after the conversion from English sentence structure to Korean structure, 'Trannie 96' constructs the Korean sentence. As for the dictionaries, each engine has one or more optimized dictionaries. The algorithms employed by this machine are based on linguistic theories, which makes it possible to produce speedy and accurate translations.
-
In this paper, we propose a method that extracts speech features using a hearing model through signal processing techniques. The proposed method includes the following procedure: normalization of the short-time speech block by its maximum value, multi-resolution analysis using the discrete wavelet transform and re-synthesis using the inverse discrete wavelet transform, differentiation after analysis and synthesis, full-wave rectification, and integration. In order to verify the performance of the proposed speech feature in a speech recognition task, Korean digit recognition experiments were carried out using both DTW and VQ-HMM. The results showed that, in the case of DTW, the recognition rates were 99.79% and 90.33% for the speaker-dependent and speaker-independent tasks respectively and, in the case of VQ-HMM, the rates were 96.5% and 81.5% respectively. This indicates that the proposed speech feature has potential as a simple and efficient feature for recognition tasks.
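The feature-extraction procedure listed above maps fairly directly onto code. The sketch below follows that sequence of steps under assumed settings (a 'db4' wavelet, 4 decomposition levels, a 64-sample integration window), none of which are specified in the abstract.

```python
import numpy as np
import pywt

def hearing_model_features(block, wavelet="db4", levels=4, integ_len=64):
    """Envelope-like features per resolution band, following the steps above."""
    block = np.asarray(block, dtype=float)
    block = block / (np.max(np.abs(block)) + 1e-12)        # normalize by maximum value
    coeffs = pywt.wavedec(block, wavelet, level=levels)     # multi-resolution analysis
    bands = []
    for i in range(len(coeffs)):
        # Re-synthesize one resolution band at a time with the inverse DWT.
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        band = pywt.waverec(kept, wavelet)[: len(block)]
        band = np.diff(band, prepend=band[0])                # differentiation
        band = np.abs(band)                                  # full-wave rectification
        band = np.convolve(band, np.ones(integ_len) / integ_len, mode="same")  # integration
        bands.append(band)
    return np.stack(bands)

if __name__ == "__main__":
    sr = 8000
    t = np.arange(sr // 4) / sr
    feats = hearing_model_features(np.sin(2 * np.pi * 440 * t))
    print(feats.shape)   # (levels + 1 bands, block length)
```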
-
In this paper, we propose a VQ algorithm which uses a generating order to quantize the feature vectors of a speech signal. The proposed algorithm inspects which codeword follows the present codeword and adds a new index to the established codebook when mapping the speech signal. We present a variable bit rate for the new codebook and propose an efficient way of compressing the information. In this way, the number of computations and the number of codewords to be searched are reduced considerably. The performance of the proposed VQ algorithm is evaluated by a spectral distortion measure and bit rate: the spectral distortion is reduced by about 0.22 dB, and the bit rate is reduced by more than 0.21 bit/frame.
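The abstract gives only the outline of the algorithm; the sketch below is one possible reading of it, not the authors' method, in which the generating order is a table of codeword-to-codeword transition counts used to search likely successors first and stop early, so fewer codewords are examined per frame. All names and thresholds here are assumptions.

```python
import numpy as np

def quantize_sequence(frames, codebook, transitions, threshold=0.05):
    """Return codeword indices for a sequence of feature frames."""
    indices, prev = [], None
    for frame in frames:
        # Search likely successors of the previous codeword first.
        order = np.arange(len(codebook)) if prev is None else np.argsort(-transitions[prev])
        best, best_dist = None, np.inf
        for idx in order:
            dist = np.sum((frame - codebook[idx]) ** 2)
            if dist < best_dist:
                best, best_dist = int(idx), dist
            if best_dist < threshold:           # early exit: a good-enough successor was found
                break
        if prev is not None:
            transitions[prev, best] += 1         # learn the generating order on the fly
        indices.append(best)
        prev = best
    return indices

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    codebook = rng.normal(size=(16, 12))         # 16 codewords, 12-dimensional features
    transitions = np.zeros((16, 16))
    frames = codebook[[3, 7, 7, 3, 2]] + rng.normal(scale=0.01, size=(5, 12))
    print(quantize_sequence(frames, codebook, transitions))   # -> [3, 7, 7, 3, 2]
```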
-
In this paper, we discussed three cases to see the effects of the characteristics of the Hangul writing system. In applications such as computer Hangul shorthands for ordinary people and pushbuttons with Hangul characters engraved on them, we found that there is much advantage in using Hangul. In the case of Hangul transliteration, we discussed some problems related to the characteristics of the Hangul writing system. Shorthands use 3-set keyboards in England, America, and Korea. We saw how ordinary people can do computer Hangul shorthands, whereas only experts can do computer shorthands in other countries. Specifically, the facts that 1) Hangul characters are grouped into syllables (syllabic blocks) and that 2) there is already a 3-set Hangul keyboard for ordinary people allow ordinary people to do computer Hangul shorthands without taking special training, as is required with English shorthands. This study was done by the author under the codename 'Sejong 89'. In contrast, a 2-set Hangul keyboard, like QWERTY or DVORAK, cannot be used for shorthands. In the case of English pushbuttons, one digit is associated with only one character. However, by engraving only syllable-initial characters on the phone pushbuttons, we can associate one Hangul "syllable" with one digit. Therefore, for a given number of digits, we can associate longer or more meaningful words in Hangul than in English. We discussed the problems of the Hangul transliteration system proposed by South Korea and suggested solutions where available. 1) We are incorrectly using the framework of transcription for transliteration. To solve this problem, the author suggests that a) we include all complex characters in the transliteration table, and that b) we specify syllable-initial and syllable-final characters separately in the table. 2) The proposed system cannot represent independent characters and incomplete syllables. 3) The proposed system cannot distinguish between syllable-initial and syllable-final characters.
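As an illustration of the pushbutton argument, the sketch below counts one keypress per Hangul syllable, recovering the syllable-initial consonant from the Unicode precomposed-syllable layout, and compares it with one keypress per letter in English. The example word and the assumption that each engraved initial corresponds to one digit are illustrative, not the paper's actual key layout.

```python
# Initial consonants in the order used by precomposed Hangul syllables U+AC00..U+D7A3.
CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"

def initial_consonants(word: str) -> str:
    # Each precomposed syllable encodes (initial, vowel, final); 588 = 21 vowels * 28 finals.
    return "".join(CHOSEONG[(ord(ch) - 0xAC00) // 588] for ch in word)

print(initial_consonants("세종"), "-> 2 digits")   # ㅅㅈ : one digit per syllable
print("SEJONG ->", len("SEJONG"), "digits")        # one digit per letter in English
```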
-
This paper aims to exploit inter-/intra-speaker phoneme sub-class variations as criteria for adaptation in a phoneme recognition system based on a novel neural network architecture. Using a subcluster neural network design based on One-Class-in-One-Network (OCON) feed-forward subnets, similar to those proposed by Kung [2] and Jou [1], joined by a common front-end layer, the idea is to adapt only the neurons within the common front-end layer of the network, resulting in an adaptation that is concentrated primarily on the speaker's vocal characteristics. Since the adaptation occurs in an area common to all classes, convergence on a single class will improve the recognition of the remaining classes in the network. Results show that adaptation towards a phoneme in the vowel sub-class for speakers MDAB0 and MWBT0 improves the recognition of the remaining vowel sub-class phonemes from the same speaker.
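A schematic of this architecture and adaptation strategy, written with PyTorch, is sketched below: per-phoneme OCON subnets share a common front-end layer, and speaker adaptation updates only that front-end. The layer sizes, activations, and optimizer are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class OconNet(nn.Module):
    def __init__(self, n_features: int, n_hidden: int, n_classes: int):
        super().__init__()
        # common front-end layer shared by all classes
        self.front_end = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Sigmoid())
        # one small subnet per class (One-Class-in-One-Network)
        self.subnets = nn.ModuleList(
            nn.Sequential(nn.Linear(n_hidden, 8), nn.Sigmoid(), nn.Linear(8, 1))
            for _ in range(n_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.front_end(x)
        return torch.cat([net(h) for net in self.subnets], dim=-1)  # one score per class

net = OconNet(n_features=26, n_hidden=32, n_classes=10)

# Speaker adaptation: freeze every class subnet and update only the shared front-end,
# so convergence on one phoneme can also benefit the remaining classes.
for p in net.subnets.parameters():
    p.requires_grad = False
optimizer = torch.optim.SGD(net.front_end.parameters(), lr=0.01)
```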
-
In this paper, in order to facilitate the construction of a large prosodic database for speech synthesis, the performance of an automatic labeler based on a speech translation system was evaluated on various types of speech data. Experimental results show that for FM radio news sentences, conversational-style sentences, and read-style sentences, more than about 80% of the target phonemes are labeled within a 30 msec error range, while for isolated words the performance is about 60%. Our laboratory is currently using the automatic labeler to build a prosodic database and synthesis units for synthesis; by using the automatic labeler, we could not only obtain consistent labeling results but also reduce the time required for construction.
-
The material of a database for speech recognition should include as many phonetic phenomena as possible. At the same time, such material should be phonetically compact with low redundancy [1, 2]. The phonetic phenomena in continuous speech are the key problem in speech recognition. This paper describes the processing of a set of sentences collected from the database of the 1993 and 1994 "People's Daily" (a Chinese newspaper), which consists of news, politics, economics, arts, sports, etc. In those sentences, both phonetic phenomena and sentence patterns are included. In continuous speech, phonemes always appear in the form of allophones, which results in co-articulatory effects. The task of designing a speech database should be concerned with both intra-syllabic and inter-syllabic allophone structures. In our experiments, there are 404 syllables, 415 inter-syllabic diphones, 3050 merged inter-syllabic triphones and 2161 merged final-initial structures in read speech. Statistics on the database from "People's Daily" give an evaluation of all the possible phonetic structures. In this sentence set, we first consider the phonetic balance among syllables, inter-syllabic diphones, inter-syllabic triphones, and semi-syllables with their junctures. The syllabic balance ensures the intra-syllabic phenomena such as phonemes, initial/final and consonant/vowel; the rest describe the inter-syllabic juncture. The 1560 sentences cover 96% of the syllables without tones (the absent syllables are only used in spoken language), 100% of the inter-syllabic diphones, and 67% of the inter-syllabic triphones (87% of which appear in "People's Daily"). There are roughly 17 kinds of sentence patterns which appear in our sentence set. By taking the transitions between syllables into account, Chinese speech recognition systems have obtained significantly higher recognition rates [3, 4]. The following diagram shows the process of collecting sentences: [People's Daily database] -> [segmentation of sentences] -> [segmentation of word groups] -> [translation of the text into Pinyin] -> [statistics of phonetic phenomena & selection of useful paragraphs] -> [modification of the selected sentences by hand] -> [phonetically compact sentence set].
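A minimal sketch of the selection step in this pipeline is given below: greedily pick the sentence that adds the most unseen phonetic units (syllables, inter-syllabic diphones or triphones) until coverage stops growing. The unit-extraction function is a placeholder; the actual system works on Pinyin-converted word groups from the "People's Daily" text.

```python
def select_sentences(corpus: list[str], units_of) -> list[str]:
    covered: set[str] = set()
    selected: list[str] = []
    remaining = list(corpus)
    while remaining:
        # pick the sentence contributing the most not-yet-covered units
        best = max(remaining, key=lambda s: len(units_of(s) - covered))
        gain = units_of(best) - covered
        if not gain:                      # no sentence adds new phonetic phenomena
            break
        covered |= gain
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: treat character bigrams as the "phonetic units" to be covered.
bigrams = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
print(select_sentences(["abcd", "bcda", "cdab", "xyza"], bigrams))
```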
-
The prosodic features of spoken Chinese play an important role in its naturalness. A list of prosodic labeling symbols that represents all the prosodic features is given in this paper, and a paragraph of 'Prosodic Labeling Text' (PLT) is also attached as an example.
-
This paper attempts to identify the stem-final consonants of p- and t-irregular verbs in Korean within the framework of Government Phonology. Since the advent of Generative Phonology, the phenomenon of irregular verbal conjugation has been of great interest to Korean phonologists in both linear and non-linear approaches. When we examine these analyses, we find that one of the major issues concerns the identities of the stem-final consonants of p- and t-irregular verbs. The proposals concerning this issue vary considerably from one another. In this paper, I put forward a view different from those proposals, namely that the stem-final consonants of p- and t-irregular verbs are tensed p' and t' respectively.
-
The Korean fortis consonant is not included in the consonantal inventory but is a result of phonetic implementation at the phonetic level, P. Within the framework of Cognitive Phonology, a construction of Post-Obstruent Tensification is proposed in such a way that rule ordering is eliminated. This enables us to overcome methodological problems raised in former analyses of fortis under the geminate hypothesis and to give a uniform account of the three categories of fortis consonants. By assuming extrasyllabicity of the verb-stem-final consonant, neutralization of fortis in the coda position is explained by invisibility at the P-level, and, therefore, a modification of the Coda Neutralization rule is called for.
-
This paper is a survey of the characteristics of reduplication in Bahasa Indonesia (BI). BI abounds in reduplicated sound-symbolic expressions, like Japanese and Korean, and as such reduplication is considered one of the significant morphological processes in BI. Despite the huge number of these expressions in BI, scholarship has hitherto neither paid much attention to their non-arbitrary characteristics nor explained their iconicity systematically. This study concerns the need to describe the iconic patterns of reduplication in the grammar of BI. Firstly, tense-iconicity can be shown in verbal reduplicatives. Secondly, idiomatic reduplicatives can be considered as the remnants of diachronic reduplicated sound-symbolic expressions. The iconicity of reduplication in BI must be described in a distinct component of the grammar of BI. As one of the simple-structured languages of the world, BI shows iconic patterns, fundamentally language-specific, in its grammar. At the moment, however, we do not have the formal linguistic tools necessary for describing iconicity. This problem could probably be solved by modifying formal conventions about rules and features.
-
In modern Korean there is a phenomenon of ㅎ metathesis, which appears mainly in two grammatical categories. The first occurs with the noun-initial plosive following 암 (female) and 수 (male), as in 암탉/수탉; the second occurs between the stem-final ㅎ of a predicate and the following ending-initial plosive or affricate, as in 놓고/많지. In the first case the orthography reflects the metathesis as pronounced, but in the second case, following the principle of writing stem and ending separately, the spelling order generally differs from the pronunciation order. The focus of this paper is to examine, through phonetic analysis, this mismatch between pronunciation order and spelling order in the current Hangul orthography, to criticize that spelling practice, and to make a new proposal on the use of the final ㅎ for the improvement of Hangul orthography.
-
Igbo, a New Benue-Congo language, has a vowel harmony system which, like that of Akan, is based on pharynx size or tongue root position. In this study we examine Igbo vowel harmony with particular reference to assimilatory patterns of vowels in different harmony sets. This is to gain some insight into the factors involved in Igbo vowel assimilation, and to establish to what extent reports on Akan vowel assimilation are validated in Igbo. Tokens of the eight phonemic vowels of Standard Igbo were recorded from three native speakers of Igbo. The vowels are acoustically investigated (using the LPC analysis of CSL) in individual lexical items and within carefully designed carrier phrases. The F1 and F2 values of the vowels are obtained, as these formant values are generally useful in establishing the salient characteristics of vowels. Vowels from the harmony sets are juxtaposed in the carrier phrases to ascertain the extent of assimilation. Results of the investigation show that the F1 values are, to a large extent, enough to characterize these vowels. The [-Expanded] vowels have higher F1 values than their [+Expanded] counterparts. Where there is an overlap in F1 values for some vowels, the F1 bandwidth values serve to distinguish between the vowels. The overlap often reported in Akan for /ɪ/ and /e/ on the one hand and /ʊ/ and /o/ on the other is not validated in Igbo. While the F1 values for these pairs of vowels are quite similar for one of our speakers, there is an appreciable difference between the F1 values of these vowels for the other two speakers. There is, however, an overlap for /e/ and /o/ for one of the speakers. Assimilations are generally regressive across word boundaries. It is, however, necessary to point out that the general perceptual impression that one of the vowels completely assimilates to the other is not borne out by our investigation. Most of our F1 and F2 values for the vowels in individual lexical items are altered in assimilations. This then suggests that assimilation involving these vowels is partial rather than complete. The emerging 'allophones' are acoustically similar to the [+Expanded] vowel involved in the assimilation, that is, when vowels from different harmony sets are involved. We conclude that while assimilation of Igbo vowels involves some phonological considerations, phonetic factors appear to be paramount in deciding the final form of the vowels.
-
This presentation explores the perceptual characteristics of the lateral sound /l/ in CV syllables. In initial position we found that /l/ has well-marked formant transitions. Several questions then arise: 1) Are these formant structures dependent on the following vowel? 2) Do the formant transitions give an additional cue for identification? Considering that the French vocalic system presents a greater variety of vowels than Spanish, several experiments were designed to verify to what extent a more extensive range of vocalic timbres contributes to the perception of /l/. Natural emissions of /l/ produced in Argentine Spanish and Canadian French CV syllables were recorded, where V was successively /i, e, a, o, u/ for Spanish and /i, e, ɛ, a, ɑ, o, u, y, ø/ for French. For each item, the segment C was maintained and V was replaced, by cutting and splicing, by each of the remaining vowels without transitions. Results of the identification tests for Spanish show that natural /l/ segments with a low F1 and high formants F3 and F4 can be clearly identified in the /i, e, u/ vowel contexts without transitions. For French subjects, the combination of /l/ with a vowel without transitions yielded correct identifications of its own original vowel context in /e, ɛ, y, ø/. For both languages, in all these combinations, F1 values remained rather steady along the syllable. In the case of /o, u/, the F2 difference very likely led to a variety of perceptions of the original /l/. For example, in /lu/, French subjects reported some identifications of /l/ as a vowel, mainly /y/. Our observations reinforce the importance of F1 as a relevant cue for /l/, and the incidence of the relative distance between the formant frequencies of both components.
-
There are some similar phonological properties shared by different languages. The phenomenon of vowel length is one of them which shows distinctive features. In some languages long vowels serve to differentiate meanings. In that case the phonological contrast it creates is important, and so it has to be incorporated into the phonemic inventory of the language; otherwise there will be misunderstanding. In this paper I will try to explain the Turkish vowel system as well as the Korean one, and then to show how long vowels take their forms in Turkish and Korean.
-
This study aims to identify the sources of difficulty that Korean learners of Japanese have in listening to Japanese by comparing and analyzing the differences between the phonological systems of Korean and Japanese, and to suggest solutions. As for the method, two different types of listening tests were administered twice to 30 first-year students majoring in Japanese language and literature who are beginning learners of Japanese, and the data were analyzed and examined statistically. The analysis, first, identified from the test results the typical error patterns appearing in Korean learners' listening to Japanese and, second, clarified the structural causes of those error patterns from a theoretical point of view by contrastively analyzing the differences between the phonological systems of the two languages. Finally, items that can predict and present in advance the difficulties and problems Korean learners have in listening to Japanese were presented concretely, so that the results of this study can be applied not only to effective Japanese language teaching but also to Korean language teaching.
-
Prosodic characteristics of natural speech, especially intonation, in many cases represent specific feelings of the speaker at the time of the utterance, with relatively large variations of speaking style over the same text. We analyzed a speech corpus recorded with ten Slovene speakers. Interpretation of the observed intonation contours was done for the purpose of modelling the intonation contour in the synthesis process. Based on the results of analyzing the intonation contours, we devised a scheme for modelling the intonation contour for different types of intonation units. The intonation scheme uses a superpositional approach, which defines the intonation contour as the sum of global (intonation unit) and local (accented syllables or syntactic boundaries) components. A near-to-natural intonation contour was obtained by rules, using only the text of the utterance as input.
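A minimal numeric sketch of such a superpositional scheme is given below: the F0 contour is the sum of a global component spanning the intonation unit and local components placed at accented syllables. The declination slope, the Gaussian accent shape, and all parameter values are illustrative assumptions, not the rules derived from the Slovene corpus.

```python
import numpy as np

def f0_contour(duration_s: float, f0_start: float, declination: float,
               accents: list[tuple[float, float, float]], fs: int = 100) -> np.ndarray:
    t = np.arange(0.0, duration_s, 1.0 / fs)
    # global component of the intonation unit: a falling (declining) baseline
    contour = f0_start + declination * t
    # local components: one Gaussian-shaped excursion per accented syllable
    for center, amplitude, width in accents:
        contour += amplitude * np.exp(-0.5 * ((t - center) / width) ** 2)
    return contour

# Two accented syllables at 0.4 s and 1.2 s on a 1.8 s intonation unit:
contour = f0_contour(1.8, 120.0, -10.0, [(0.4, 30.0, 0.08), (1.2, 20.0, 0.08)])
print(contour[:5].round(1))
```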
-
The sentence-final rising tones of interrogatives in the Seoul dialect are contrastive in at least two groups. One is the H-type, which ends in a lower pitch; the other is the HH-type, which ends in a higher one. They also differ in terms of inclinational value, and the latter is a more adequate criterion than the former, which would be available only under ideal conditions.
-
An utterance is normally divided into two or more intonation groups. Each intonation group has its own intonation pattern. The pitch movement of a Spanish utterance is basically determined by a combination of two factors: the position of the stressed syllables and the intonation pattern. The pitch of a syllable can be affected by that of preceding syllables. This is a physiological effect rather than a phonological one.
-
Intensive studies of Kyongsang Korean tone and tone-related processes have been carried out by many scholars, but the intonation of this dialect has never been investigated. In this paper, I discuss the relationship between tone and intonation and describe phrasal tones and nuclear tones in Kyongsang Korean.
-
The theme of the current study is to examine the intonation of Taiwanese (Tw.) by comparing the intonation patterns in the native language (L1), the target language (L2), and the interlanguage (IL). Studies on interlanguage have dealt primarily with segments. Though there were studies which addressed the issues of interlanguage intonation, more often than not they did not offer evidence for their statements, and the hypotheses were mainly based on impression. Therefore, a formal description of interlanguage intonation is necessary for further development in this field. The basic assumption of this study is that native speakers of one language perceive and produce a second language in ways closely related to the patterns of their first language. Several studies on interlanguage prosody have suggested that prosodic structure and rules are more subject to transfer than certain other phonological phenomena, given their abstract structural nature and generality (Vogel 1991). Broselow (1988) also shows that interlanguage may provide evidence for particular analyses of the native language grammar which may not be available from the study of the native language alone. Several research questions are addressed in the current study. A. How does duration vary among native and non-native utterances? The results show that there is a significant difference in duration between the beginning English learners and the native speakers of American English for all eleven English sentences. The mean duration shows that the beginning English learners take almost twice as much time (1.70 sec.) as the Americans (0.97 sec.) to produce the English sentences. The results also show that the American speakers take significantly longer to speak all ten Taiwanese utterances. The mean duration shows that the Americans take almost twice as much time (2.24 sec.) as the adult Taiwanese (1.14 sec.) to produce the Taiwanese sentences. B. Does proficiency level influence the performance of interlanguage intonation? Can native intonation patterns be achieved by a non-native speaker? Wenk (1986) considers proficiency level to be a variable related to the extent of L1 influence. His study showed that beginners do transfer rhythmic features of the L1 and that advanced learners can and do succeed in overcoming mother-tongue influence. The current study shows that proficiency level does play a role in the acquisition of English intonation by Taiwanese speakers. The duration and pitch range of the advanced learners are much closer to those of the native American English speakers than those of the beginners, but even advanced learners still cannot achieve native-like intonation patterns. C. Do Taiwanese speakers have a narrower pitch range in comparison with American English speakers? Ross et al. (1986) suggest that the presence of tone in a language significantly inhibits the unrestricted manipulation of three acoustical measures of prosody which are involved in producing local pitch changes in the fundamental frequency contour during affective signaling. Will the presence of tone in a language inhibit the ability of speakers to modulate intonation? The results do show that Taiwanese speakers have a narrower pitch range in comparison with American English speakers. Both advanced (84 Hz) and beginning learners (58 Hz) of English show a significantly narrower F0 range than that of the Americans (112 Hz), and the difference is greater between the beginning learners' group and the native American English speakers.