Formats: parquet. Libraries: Datasets, pandas. License: CC-BY-4.0.

Dataset columns (as shown in the dataset viewer):
source: string (3 distinct values)
word: string (2 to 12 characters)
emotion: string (6 distinct values)
actor_info: string (3 distinct values)
audio_array: sequence of floats (lengths 3.57k to 40.6k)
input_values: sequence of floats (lengths 3.57k to 16k)

Dataset Card for MESD

Dataset Summary

Contains the data from the MESD database, processed for fine-tuning a 'Wav2Vec' model during the hackathon organized by 'Somos NLP'.

Reference example: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/audio_classification.ipynb

We accessed the MESD database to obtain the examples.
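
The processed dataset can be loaded directly with the 🤗 Datasets library. The snippet below is a minimal sketch, assuming the Hub repository id somosnlp-hackathon-2022/MESD:

```python
from datasets import load_dataset

# Load the processed MESD dataset from the Hugging Face Hub.
mesd = load_dataset("somosnlp-hackathon-2022/MESD")

# Printing the DatasetDict shows the available splits and their columns.
print(mesd)
```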

Brief description from the authors of the MESD database: "The Mexican Emotional Speech Database (MESD) provides single-word utterances for the affective prosodies of anger, disgust, fear, happiness, neutral and sadness with Mexican cultural shaping. The MESD was uttered by non-professional adult and child actors: 3 female, 2 male and 6 child voices are available. The words in the emotional and neutral utterances come from two corpora: (corpus A) composed of nouns and adjectives that are repeated across emotional prosodies and voice types (female, male, child), and (corpus B) consisting of words controlled for age of acquisition, frequency of use, familiarity, concreteness, valence, arousal, and discrete-emotion dimensionality ratings.

The audio recordings were made in a professional studio using the following equipment: (1) a Sennheiser e835 microphone with a flat frequency response (100 Hz to 10 kHz), (2) a Focusrite Scarlett 2i4 audio interface connected to the microphone with an XLR cable and to the computer, and (3) the REAPER digital audio workstation (Rapid Environment for Audio Production, Engineering, and Recording). The audio files were stored as 24-bit sequences at a sampling rate of 48,000 Hz. The amplitude of the acoustic waveforms was rescaled between -1 and 1.

Two speaker-embedded naturalness-reduced versions were created from the human emotional expressions for the female voices of corpus B. Specifically, naturalness was progressively reduced from the human voices to level 1 and then to level 2. In particular, duration and mean pitch were edited on stressed syllables to reduce the difference between stressed and unstressed syllables. Over whole utterances, the F2/F1 and F3/F1 ratios were reduced by editing the F2 and F3 frequencies. The intensity of harmonics 1 and 4 was also reduced."

Supported Tasks and Leaderboards

[Needs More Information]

Languages

Spanish

Dataset Structure

Data Instances

[Needs More Information]

Data Fields

Origen: text indicating whether the instance comes from the original MESD dataset or from the 'Speaker-embedded naturalness-reduced female voices' cases, in which the authors synthetically generated new data by transforming some of the original audio instances.

Palabra: text of the word that was read aloud.

Emoción: text of the emotion represented. Values: 'Enojo' (anger), 'Felicidad' (happiness), 'Miedo' (fear), 'Neutral' (neutral), 'Disgusto' (disgust), 'Tristeza' (sadness).

InfoActor: text indicating whether the voice is that of a 'Niño' (child), 'Hombre' (man) or 'Mujer' (woman).

AudioArray: audio array, resampled to 16 kHz.
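
To see how these fields map onto a concrete instance, the following sketch prints one training example; the field names follow the columns shown in the viewer above (word, emotion, actor_info, audio_array), which may differ from the descriptive names used in this section:

```python
from datasets import load_dataset

# Load only the training split.
mesd = load_dataset("somosnlp-hackathon-2022/MESD", split="train")

example = mesd[0]
# Text metadata for the utterance.
print(example["word"], example["emotion"], example["actor_info"])
# The waveform is a sequence of floats resampled to 16 kHz.
print(len(example["audio_array"]))
```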

Data Splits

Train: 891 examples, a mixture of MESD cases and 'Speaker-embedded naturalness-reduced female voices' cases.

Validation: 130 examples, all MESD cases.

Test: 129 examples, all MESD cases.
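
These counts can be checked after loading the dataset; a minimal sketch, again assuming the somosnlp-hackathon-2022/MESD repository id:

```python
from datasets import load_dataset

mesd = load_dataset("somosnlp-hackathon-2022/MESD")

# Expected sizes: train=891, validation=130, test=129.
for split_name, split in mesd.items():
    print(split_name, split.num_rows)
```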

Dataset Creation

Curation Rationale

Merge the three data subsets and process them for the fine-tuning task, in line with the input expected by the Wav2Vec model.
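
The input_values column in the processed data suggests the waveforms were passed through a Wav2Vec2 feature extractor. The sketch below shows one way such values can be produced with the transformers library, assuming the facebook/wav2vec2-base checkpoint (the card does not name the checkpoint actually used):

```python
from transformers import Wav2Vec2FeatureExtractor

# Hypothetical checkpoint; the exact one used for the hackathon is not stated here.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")

def prepare(example):
    # Turn the 16 kHz waveform into the 'input_values' expected by Wav2Vec2.
    inputs = feature_extractor(example["audio_array"], sampling_rate=16_000)
    example["input_values"] = inputs["input_values"][0]
    return example

# Applied with Dataset.map, e.g.: mesd = mesd.map(prepare)
```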

Source Data

Initial Data Collection and Normalization

Access to the raw data: https://data.mendeley.com/datasets/cy34mh68j9/5

Conversion to an audio array and resampling to 16 kHz.
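
Since the original recordings are stored at 48 kHz (see the authors' description above), the resampling step can be reproduced, for example, with librosa; a minimal sketch, where the file path is a placeholder:

```python
import librosa

# Load a raw MESD recording and resample it to 16 kHz on the fly.
# "example.wav" is a placeholder path for one of the downloaded files.
audio_array, sampling_rate = librosa.load("example.wav", sr=16_000)

print(sampling_rate)      # 16000
print(audio_array.shape)  # number of samples at 16 kHz
```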

Who are the source language producers?

Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5

Annotations

Annotation process

[Needs More Information]

Who are the annotators?

[Needs More Information]

Personal and Sensitive Information

[Needs More Information]

Considerations for Using the Data

Social Impact of Dataset

[Needs More Information]

Discussion of Biases

[Needs More Information]

Other Known Limitations

[Needs More Information]

Additional Information

Dataset Curators

[Needs More Information]

Licensing Information

Creative Commons, CC-BY-4.0

Citation Information

Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5