Show simple item record

dc.creator  Castillo M.
dc.creator  Rubio F.
dc.creator  Porras D.
dc.creator  Contreras Ortiz, Sonia Helena
dc.creator  Sepúlveda A.
dc.date.accessioned  2020-03-26T16:33:04Z
dc.date.available  2020-03-26T16:33:04Z
dc.date.issued  2019
dc.identifier.citation  2019 22nd Symposium on Image, Signal Processing and Artificial Vision, STSIVA 2019 - Conference Proceedings
dc.identifier.isbn  9781728114910
dc.identifier.uri  https://hdl.handle.net/20.500.12585/9154
dc.description.abstract  This paper presents a new database consisting of concurrent articulatory and acoustic speech data. The articulatory data correspond to ultrasound videos of the vocal tract dynamics, which allow the visualization of the tongue upper contour during the speech production process. Acoustic data is composed of 30 short sentences that were acquired by a directional cardioid microphone. This database includes data from 17 young subjects (8 male and 9 female) from the Santander region in Colombia, who reported not having any speech pathology. © 2019 IEEE.
dc.description.sponsorship  IEEE Colombia Section; IEEE Signal Processing Society Colombia Chapter; Universidad Industrial de Santander
dc.format.medium  Electronic resource
dc.format.mimetype  application/pdf
dc.language.iso  eng
dc.publisher  Institute of Electrical and Electronics Engineers Inc.
dc.rights.uri  http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.source  https://www.scopus.com/inward/record.uri?eid=2-s2.0-85068073792&doi=10.1109%2fSTSIVA.2019.8730224&partnerID=40&md5=f3c96d8ebc49f846b1e99dafa00b746d
dc.source  Scopus 2-s2.0-85068073792
dc.title  A small vocabulary database of ultrasound image sequences of vocal tract dynamics
dcterms.bibliographicCitation  Richmond, K. (2001) Estimating Articulatory Parameters from the Acoustic Speech Signal. PhD thesis, The Centre for Speech Technology Research, Edinburgh University
dcterms.bibliographicCitation  Maeda, S. (1990) Speech Production and Speech Modelling, chapter Compensatory Articulation during Speech: Evidence from the Analysis and Synthesis of Vocal-tract Shapes Using an Articulatory Model, pp. 131-149, Kluwer Academic Publishers
dcterms.bibliographicCitation  X-ray Microbeam Speech Production Database User's Handbook, Version 1.0
dcterms.bibliographicCitation  Xue, Q., Improvement in tracking of articulatory movements with the X-ray microbeam system. Annual International Conference on Engineering in Medicine and Biology Society
dcterms.bibliographicCitation  Munhall, K.G., Vatikiotis-Bateson, E., Tohkura, Y., X-ray film database for speech research (1995) The Journal of the Acoustical Society of America, 98 (2), pp. 1222-1224
dcterms.bibliographicCitation  Sock, R., Hirsch, F., Laprie, Y., Perrier, P., Vaxelaire, B., An X-ray database, tools and procedures for the study of speech production (2011) 9th International Seminar on Speech Production (ISSP 2011), V.L. Gracco, D.J. Ostry, L. Ménard, S.R. Baum, editors, June
dcterms.bibliographicCitation  Wrench, A.A., Hardcastle, W.J., A multichannel articulatory database and its application for automatic speech recognition (2000) 5th Seminar on Speech Production: Models and Data, 1
dcterms.bibliographicCitation  Rudzicz, F., Namasivayam, A., Wolff, T., The TORGO database of acoustic and articulatory speech from speakers with dysarthria (2010) Language Resources and Evaluation, 46 (1), pp. 1-19
dcterms.bibliographicCitation  Narayanan, S., Toutios, A., Ramanarayanan, V., Lammert, A., Kim, J., Lee, S., Nayak, K., Proctor, M., Real-time magnetic resonance imaging and electromagnetic articulography database for speech production research (TC) (2014) The Journal of the Acoustical Society of America, 136 (3), pp. 1307-1311
dcterms.bibliographicCitation  Gábor Csapó, T., Grósz, T., Gosztolya, G., Tóth, L., Markó, A., DNN-based ultrasound-to-speech conversion for a silent speech interface (2017) Proc. Interspeech, pp. 3672-3676, Stockholm, Sweden
dcterms.bibliographicCitation  Qin, C., Carreira-Perpiñán, M.A., Richmond, K., Wrench, A., Renals, S., Predicting tongue shapes from a few landmark locations (2008) Ninth Annual Conference of the International Speech Communication Association
dcterms.bibliographicCitation  Preston, J.L., McAllister Byun, T., Boyce, S.E., Hamilton, S., Tiede, M., Phillips, E., Rivera-Campos, A., Whalen, D.H., Ultrasound images of the tongue: A tutorial for assessment and remediation of speech sound errors (2017) Journal of Visualized Experiments: JoVE, 119
dcterms.bibliographicCitation  Xu, K., Roussel, P., Gábor Csapó, T., Denby, B., Convolutional neural network-based automatic classification of midsagittal tongue gestural targets using B-mode ultrasound images. The Journal of the Acoustical Society of America, 141 (6)
dcterms.bibliographicCitation  Scobbie, J.M., Wrench, A.A., Van Der Linden, M., Head-probe stabilisation in ultrasound tongue imaging using a headset to permit natural head movement (2008) Proceedings of the 8th International Seminar on Speech Production, pp. 373-376
dcterms.bibliographicCitation  The Haskins optically corrected ultrasound system (HOCUS) (2005) Journal of Speech, Language, and Hearing Research, 48 (3), p. 543
dcterms.bibliographicCitation  Jallon, J.F., Berthommier, F., A semi-automatic method for extracting vocal tract movements from X-ray films (2009) Speech Communication, 51 (2), pp. 97-115
dcterms.bibliographicCitation  Fontecave, J., Berthommier, F., Quasi-automatic extraction of tongue movement from a large existing speech cineradiographic database (2009) Evaluation, 2, pp. 8-11
dcterms.bibliographicCitation  Ghosh, P.K., Narayanan, S., A generalized smoothness criterion for acoustic-to-articulatory inversion (2010) The Journal of the Acoustical Society of America, 128, pp. 2162-2172
dcterms.bibliographicCitation  Lofqvist, A., Tongue movement kinematics in long and short Japanese consonants (2007) Journal of the Acoustical Society of America, 122 (1), pp. 512-518
dcterms.bibliographicCitation  Li, M., Kambhamettu, C., Stone, M., Automatic contour tracking in ultrasound images (2005) Clinical Linguistics & Phonetics, 19 (6-7), pp. 545-554
dcterms.bibliographicCitation  Kass, M., Witkin, A., Terzopoulos, D., Snakes: Active contour models (1988) International Journal of Computer Vision, 1 (4), pp. 321-331
dcterms.bibliographicCitation  Xu, K., Csapó, T.G., Roussel, P., Denby, B., A comparative study on the contour tracking algorithms in ultrasound tongue images with automatic re-initialization (2016) The Journal of the Acoustical Society of America, 139 (5), pp. EL154-EL160
dcterms.bibliographicCitation  Yu, Y., Acton, S.T., Speckle reducing anisotropic diffusion (2002) IEEE Transactions on Image Processing, 11 (11), pp. 1260-1270, Nov
dcterms.bibliographicCitation  Lozano-Herrera, C., Gómez-Reyes, J. (2017) Implementación y Análisis de un Método Automático de Detección del Contorno Superior de la Lengua en Secuencias de Imágenes de Ultrasonido, May
dcterms.bibliographicCitation  Cadena-Bonfanti, A., Contreras-Ortiz, S.H., Giraldo-Guzmán, J., Porto-Solano, O., Speckle reduction in echocardiography by temporal compounding and anisotropic diffusion filtering (2014) 10th International Symposium on Medical Information Processing and Analysis, Oct
dcterms.bibliographicCitation  King, S., Frankel, J., Livescu, K., McDermott, E., Richmond, K., Wester, M., Speech production knowledge in automatic speech recognition (2007) The Journal of the Acoustical Society of America, 121 (2), pp. 723-742
dcterms.bibliographicCitation  Ling, Z.-H., Richmond, K., Yamagishi, J., Wang, R.-H., Integrating articulatory features into HMM-based parametric speech synthesis (2009) IEEE Transactions on Audio, Speech, and Language Processing, 17 (6), pp. 1171-1185
dcterms.bibliographicCitation  Li, M., Kim, J., Lammert, A., Kumar Ghosh, P., Ramanarayanan, V., Narayanan, S., Speaker verification based on the fusion of speech acoustics and inverted articulatory signals (2016) Computer Speech & Language, 36, pp. 196-211
dcterms.bibliographicCitation  Wang, L., Qian, X., Han, W., Soong, F.K., Synthesizing photo-real talking head via trajectory-guided sample selection (2010) Eleventh Annual Conference of the International Speech Communication Association
dcterms.bibliographicCitation  Sepúlveda, A., Capobianco Guido, R., Castellanos-Dominguez, G., Estimation of relevant time-frequency features using Kendall coefficient for articulator position inference (2013) Speech Communication, 55 (1), pp. 99-110, Jan
dcterms.bibliographicCitation  Stone, M., A guide to analysing tongue motion from ultrasound images (2005) Clinical Linguistics & Phonetics, 19 (6-7), pp. 455-501, Jan
dcterms.bibliographicCitation  Csapó, T.G., Lulich, S.M., Error analysis of extracted tongue contours from 2D ultrasound images (2015) Proc. Interspeech, pp. 2157-2161, Dresden, Germany
dcterms.bibliographicCitation  Ghosh, P.K., Narayanan, S., Automatic speech recognition using articulatory features from subject-independent acoustic-to-articulatory inversion (2011) The Journal of the Acoustical Society of America, 130 (4), pp. EL251-EL257
datacite.rights  http://purl.org/coar/access_right/c_16ec
oaire.resourceType  http://purl.org/coar/resource_type/c_c94f
oaire.version  http://purl.org/coar/version/c_970fb48d4fbd8a85
dc.source.event  22nd Symposium on Image, Signal Processing and Artificial Vision, STSIVA 2019
dc.type.driver  info:eu-repo/semantics/conferenceObject
dc.type.hasversion  info:eu-repo/semantics/publishedVersion
dc.identifier.doi  10.1109/STSIVA.2019.8730224
dc.subject.keywords  Articulation
dc.subject.keywords  Speech
dc.subject.keywords  Tongue
dc.subject.keywords  Ultrasound
dc.subject.keywords  Data visualization
dc.subject.keywords  Database systems
dc.subject.keywords  Ultrasonics
dc.subject.keywords  Vision
dc.subject.keywords  Acoustic data
dc.subject.keywords  Acoustic speech
dc.subject.keywords  Articulatory data
dc.subject.keywords  Speech pathology
dc.subject.keywords  Speech production
dc.subject.keywords  Ultrasound image sequences
dc.subject.keywords  Ultrasound videos
dc.subject.keywords  Image processing
dc.rights.accessrights  info:eu-repo/semantics/restrictedAccess
dc.rights.cc  Attribution-NonCommercial 4.0 International
dc.identifier.instname  Universidad Tecnológica de Bolívar
dc.identifier.reponame  Repositorio UTB
dc.relation.conferencedate  24 April 2019 through 26 April 2019
dc.type.spa  Conferencia
dc.identifier.orcid  57209530567
dc.identifier.orcid  57209536314
dc.identifier.orcid  57209535982
dc.identifier.orcid  57210822856
dc.identifier.orcid  55340424500


Files in the item

No files associated with this item.



http://creativecommons.org/licenses/by-nc-nd/4.0/

Universidad Tecnológica de Bolívar - 2017. A higher-education institution subject to inspection and oversight by the Ministerio de Educación Nacional. Resolution No. 961 of 26 October 1970, by which the Governorate of Bolívar granted legal personhood to the Universidad Tecnológica de Bolívar.