Presentation done at the 36th Pacific Asia Conference on Language, Information and Computation (PACLIC 36)
The presentation took place at the 36th Pacific Asia Conference on Language, Information and Computation (PACLIC 36) on October 20, 2022, in Manila, Philippines. The research aimed to craft a sentence embedding structure tailored to the Facebook Dataset.
The findings highlighted that a combination of fastText word embeddings with a sentence embedding structure built on a Seq2seq model, incorporating GRU and attention layers, emerged as the most effective model. Notably, hyperbolic embeddings fell short of fastText and Word2Vec embeddings, which was attributed to the lack of a suitable parser for Sinhala. Additionally, GloVe embeddings exhibited reduced performance due to the absence of a well-suited pre-trained GloVe model for the Sinhala language.
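The two-tiered idea above can be illustrated with a minimal sketch: a lower tier of word vectors (in practice initialised from pretrained fastText embeddings) feeding an upper-tier GRU encoder with attention pooling that produces a single sentence embedding, followed by a sentiment classifier. This is a simplified, hypothetical PyTorch rendering for illustration only; the class name, dimensions, and attention formulation are assumptions, not the exact architecture from the paper.

```python
import torch
import torch.nn as nn

class TwoTierSentimentModel(nn.Module):
    """Sketch of a two-tiered embedding: word vectors -> sentence vector."""
    def __init__(self, vocab_size=1000, word_dim=300, hidden_dim=128, n_classes=2):
        super().__init__()
        # Lower tier: word embedding table; in practice this would be
        # initialised from pretrained fastText vectors and possibly frozen.
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # Upper tier: bidirectional GRU over the word vectors.
        self.gru = nn.GRU(word_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Additive attention pooling collapses the sequence to one vector.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, n_classes)

    def forward(self, token_ids):
        h, _ = self.gru(self.word_emb(token_ids))      # (B, T, 2H)
        weights = torch.softmax(self.attn(h), dim=1)   # (B, T, 1) attention weights
        sentence = (weights * h).sum(dim=1)            # (B, 2H) sentence embedding
        return self.classifier(sentence)               # (B, n_classes) logits

model = TwoTierSentimentModel()
logits = model(torch.randint(0, 1000, (4, 12)))  # batch of 4 sentences, 12 tokens each
```

The attention layer here is a generic additive pooling; the paper's Seq2seq variant would instead use the attention inside the decoder, but the tiering (word embedding below, learned sentence embedding above) is the same.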
References
2022
Sinhala Sentence Embedding: A Two-Tiered Structure for Low-Resource Languages
Gihan Weeraprameshwara, Vihanga Jayawickrama, Nisansa de Silva, and Yudhanjaya Wijeratne
In Proceedings of the 36th Pacific Asia Conference on Language, Information and Computation, 2022
In the process of numerically modeling natural languages, developing language embeddings is a vital step. However, it is challenging to develop functional embeddings for resource-poor languages such as Sinhala, for which sufficiently large corpora, effective language parsers, and other required resources are difficult to find. In such conditions, the exploitation of existing models to come up with an efficacious embedding methodology to numerically represent text could be quite fruitful. This paper explores the effectiveness of several one-tiered and two-tiered embedding architectures in representing Sinhala text in the sentiment analysis domain. With our findings, the two-tiered embedding architecture, where the lower tier consists of a word embedding and the upper tier consists of a sentence embedding, has been proven to perform better than one-tier word embeddings, achieving a maximum F1 score of 88.04% in contrast to the 83.76% achieved by word embedding models. Furthermore, embeddings in the hyperbolic space are also developed and compared with Euclidean embeddings in terms of performance. A sentiment data set consisting of Facebook posts and associated reactions has been used for this research. To effectively compare the performance of different embedding systems, the same deep neural network structure has been trained on sentiment data with each of the embedding systems used to encode the associated text.
@inproceedings{weeraprameshwara2022sinhala,
  title={Sinhala Sentence Embedding: A Two-Tiered Structure for Low-Resource Languages},
  author={Weeraprameshwara, Gihan and Jayawickrama, Vihanga and de Silva, Nisansa and Wijeratne, Yudhanjaya},
  booktitle={Proceedings of the 36th Pacific Asia Conference on Language, Information and Computation},
  pages={325--336},
  year={2022},
  address={Manila, Philippines},
  publisher={De La Salle University},
}